The world of artificial intelligence (AI) is moving rapidly. At Cochrane, we are keenly aware of concerns within the health research community about keeping pace with - and making the most of - these advancements. In response, Cochrane is stepping forward to outline its strategic engagement with AI.
Our aim is to be strategic about our involvement with these emerging technologies and to foster collaboration across the sector, so that together we can leverage the significant opportunities AI offers to enhance the quality and efficiency of health evidence synthesis. Here we share our plans for AI, how we're getting involved, and how we hope to work with others to make the most of what AI can offer for producing high-quality health evidence.
Cochrane’s Ella Flemyng (Head of Editorial Policy and Research Integrity) and Anna Noel-Storr (Head of Evidence Pipeline and Data Curation) are collaborating with experts to plan a way forward for Cochrane to benefit from AI, looking at four priority areas:
Setting Standards for Responsible AI Use in Evidence Synthesis
We're creating guidance for using AI in evidence synthesis so that its use is ethical and appropriate. We're not going it alone – we're currently working with the Campbell Collaboration and the International Collaboration for Automation in Systematic Reviews (ICASR), and we're now calling on others across the evidence synthesis field to get involved.
We plan to share the first iteration of the guidance before the Global Evidence Summit 2024 and are now collecting the details of those who want to provide feedback. If you are interested, please complete this survey so we can let you know when it's available.
Making Sure AI Tools Measure Up
As part of developing standards, we're also looking at the best way to validate AI tools to ensure they're good enough to use. There is exciting work happening across the community developing AI tools for evidence synthesis, and we want to help tool developers to understand how to meet Cochrane standards and have a clear route for endorsement.
Building a Strong Foundation for Tool Implementation
Instead of trying to build our own AI tools from scratch, we're focusing on strengthening the interoperability of our technical environment. This way, we can deploy endorsed AI technologies that are a good match for our needs, and we hope this approach will speed up the implementation of new AI technology.
Studying How AI Tools Affect Our Work
We encourage those across the community to investigate how using AI tools changes the way we develop, publish, and share our reviews. Cochrane Evidence Synthesis and Methods, our open-access journal, welcomes research from across this area to contribute to the evidence base on how we conduct evidence synthesis and ensure we maintain our high-quality standards.
To sum up, we're watching developments closely and working hard behind the scenes to stay ahead on AI, and we're keen to work with anyone interested in developing ethical AI standards and practices. Rather than just talking about new tools, we want to make meaningful contributions to how AI can be used responsibly in evidence synthesis.
We encourage the wider community and partners to get involved, share ideas with us and help shape how we all use AI. Together, we can make sure that AI helps us improve health evidence synthesis for everyone.