
Why are we still conducting meta-analyses by hand?

“It is necessary, while formulating the problems of which in our further advance we are to find solutions, to call into council the views of those of our predecessors who have declared an opinion on the subject, in order that we may profit by whatever is sound in their suggestions and avoid their errors.”

Aristotle, De anima, Book 1, Chapter 2

Systematic literature reviews and meta-analyses are essential tools for synthesizing existing knowledge and generating new scientific evidence. Their uses in the pharmaceutical industry are varied and will continue to diversify. They are, however, severely limited by the poor scalability of current methodologies, which are extremely time-consuming and prohibitively expensive. At a time when scientific articles are available in digital format and Natural Language Processing algorithms can automate the reading of text, should we not invent the meta-analysis 2.0? Are AI-boosted meta-analyses, faster and cheaper, able to exploit more data, with higher quality and for new purposes, an achievable short-term goal or an unrealistic dream?

Meta-analysis: methods and presentation

A meta-analysis is, at its core, a statistical analysis that combines the results of many studies. When done properly, meta-analysis is the gold standard for generating scientific and clinical evidence, as the aggregation of samples and information provides substantial statistical power. However, the way in which a meta-analysis is carried out can profoundly affect the results obtained.
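To make the statistical core concrete, here is a minimal sketch of the classic pooling computation: an inverse-variance fixed-effect estimate, Cochran's Q and I² heterogeneity statistics, and a DerSimonian-Laird random-effects adjustment. This is a textbook illustration, not the implementation of any specific tool, and the three studies at the end are invented for the example.

```python
import math

def meta_analysis(effects, variances):
    """Pool per-study effect sizes by inverse-variance weighting, with a
    DerSimonian-Laird random-effects adjustment and Cochran's Q / I^2
    heterogeneity statistics."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    random_ = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return {"fixed": fixed, "random": random_, "Q": q, "I2": i2,
            "tau2": tau2, "ci95": (random_ - 1.96 * se, random_ + 1.96 * se)}

# Three invented studies: log odds ratios and their variances
result = meta_analysis([-0.8, 0.1, -0.5], [0.04, 0.09, 0.0625])
```

With heterogeneous inputs like these, tau² is positive and the random-effects estimate diverges from the fixed-effect one, which is precisely why the homogeneity check described below matters.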

Conducting a meta-analysis therefore follows a very precise methodology consisting of several stages:

  • First, a search protocol is established to determine the question the study should answer and the inclusion and exclusion criteria for the articles to be selected. It is also at this stage that the search algorithm is defined and tested.
  • Next, the search is run against article databases using this algorithm, and the results are exported.
  • Articles are selected on the basis of titles and abstracts. The reasons for excluding an article are noted and recorded in the final report of the meta-analysis.
  • The validity of the selected studies is then assessed on the basis of the characteristics of the subjects, the diagnosis, and the treatment.
  • The various biases are controlled for, in order to avoid selection bias, data-extraction bias, conflict-of-interest bias, and funding-source bias.
  • A homogeneity test is performed to ensure that the variable being evaluated is the same in each study. It must also be checked that the data-collection characteristics of the clinical studies are similar.
  • A statistical analysis and a sensitivity analysis are conducted.
  • Finally, the results are presented from a quantitative and/or non-quantitative perspective in a meta-analysis report or publication, and the conclusions are discussed.
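The database-search step above is the most straightforward to make reproducible. As a minimal sketch, the query can be built programmatically against PubMed's public E-utilities `esearch` endpoint; the clinical question (migraine RCTs over a date range) is a placeholder invented for the example, not one from the article.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint for PubMed
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Placeholder search algorithm: publication type, MeSH term, date range
query = "randomized controlled trial[pt] AND migraine[mh] AND 2015:2024[dp]"

params = {"db": "pubmed", "term": query, "retmax": 200, "retmode": "json"}
url = f"{BASE}?{urlencode(params)}"
```

Fetching `url` (for instance with `urllib.request.urlopen`) returns a JSON list of PubMed IDs, which is exactly the kind of export the methodology calls for; keeping the query string under version control also documents the protocol for the final report.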

The systematic literature review (SLR) shares a number of methodological steps with the meta-analysis but, unlike it, has no quantitative dimension: its sole aim is to organize and precisely describe a field of knowledge.

The scalability problem of a powerful tool

The scalability problem is simple to state and will only get worse over time: the volume of data generated by clinical trials and processed in literature reviews is growing exponentially, while the methods used to extract and process these data have evolved little and remain essentially manual. The intellectual limits of humans are what they are, and humans cannot disrupt themselves.

As mentioned in the introduction to this article, meta-analyses are costly in human time. It is estimated that a simple literature review requires a minimum of 1,000 hours of highly qualified human labor, and that 67 weeks elapse between the start of the work and its publication. Meta-analyses are thus tools with a high degree of inertia, and their timescale is currently ill-suited to certain uses, such as strategic decision-making, which sometimes requires data to be available quickly. By contrast, publications report full systematic reviews completed in 2 weeks and 60 working hours using AI-based automation tools.

“Time is money”, as they say. Academics have calculated that, on average, each meta-analysis costs about $141,000, and that the 10 largest pharmaceutical companies each spend about $19 million per year on meta-analyses. While this may seem modest compared to the other costs of generating clinical evidence, it is not insignificant, and it is conceivable that a lower cost would allow more meta-analyses to be conducted. That, in turn, would open the door to meta-analyses of pre-clinical data and potentially reduce the failure rate of clinical trials: currently, 90% of compounds entering clinical trials fail to demonstrate sufficient efficacy and safety to reach the market.

Solving the scalability problem in the methodology of literature reviews and meta-analyses would also make it easier to work with data from pre-clinical trials. These data have a number of specific features that complicate their use in systematic reviews and meta-analyses: the volumes of data are extremely large and evolve particularly quickly, and the designs of pre-clinical studies, as well as the form of their reports and articles, are highly variable, making analysis and quality assessment particularly difficult. Yet systematic reviews and meta-analyses of pre-clinical data have many uses: they can identify gaps in knowledge and guide future research, and they can inform the choice of a study design, a model, an endpoint, or the decision of whether to start a clinical trial at all. Several methodologies for exploiting pre-clinical data have been developed by academic groups, and each relies heavily on automation techniques involving text-mining and artificial intelligence in general.

Another recurring problem with meta-analyses is that they are conducted at a given point in time and can become obsolete very quickly after publication, once new data have been published and new clinical trials completed. So much time and energy spent, only to present conclusions that, in some cases after just a few weeks or months, are inaccurate or partially false. One can imagine that automated meta-analyses would allow their results to be updated in real time.

Finally, automating meta-analyses could contribute to a more uniform assessment of the quality of the clinical studies included in the analyses. Indeed, many publications show that the quality of the selected studies, and the biases that may affect them, are rarely evaluated, and that when they are, the evaluation relies on various scores that take few parameters into account. The Jadad score, for example, considers only 3 methodological characteristics. This is hardly surprising: even when few parameters are involved, collecting this information requires additional data-extraction and processing effort.
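The Jadad scale is simple enough that, once the three methodological characteristics have been extracted from a trial report, the score itself is trivially automatable. A minimal sketch (the function name and input encoding are our own, not a standard API):

```python
def jadad_score(randomized, randomization_method,
                double_blind, blinding_method, withdrawals_described):
    """Jadad scale (0-5) from its three items: randomization, blinding,
    and reporting of withdrawals/dropouts. The *_method arguments take
    "appropriate", "inappropriate" or "not described"; a point is
    deducted when a described method is inappropriate."""
    score = 0
    if randomized:
        score += 1
        if randomization_method == "appropriate":
            score += 1
        elif randomization_method == "inappropriate":
            score -= 1
    if double_blind:
        score += 1
        if blinding_method == "appropriate":
            score += 1
        elif blinding_method == "inappropriate":
            score -= 1
    if withdrawals_described:
        score += 1
    return max(score, 0)

# A well-reported RCT scores the maximum of 5
top = jadad_score(True, "appropriate", True, "appropriate", True)  # -> 5
```

The hard part, of course, is not this arithmetic but extracting the five inputs reliably from free text, which is exactly where the NLP tools discussed below come in.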

Given these scalability problems, what are the existing or possible solutions?

Many tools already developed

The automation of the various stages of meta-analysis is a field of research for many academic groups, and several tools have already been developed. Without taking anything away from these tools, some examples of which are given below, one may ask why they are not more widely used today. Is the market not yet mature enough? Are the tools, whose value propositions are very fragmented, unsuitable for carrying out a complete meta-analysis? Do these tools, developed by research laboratories, have sufficient marketing? Do they have sufficiently user-friendly interfaces?

As mentioned above, most of the tools and prototypes developed focus on a specific task in the meta-analysis methodology. Examples include Abstrackr, which specialises in article screening, ExaCT, which focuses on data extraction, and RobotReviewer, which is designed to automatically assess bias in reports of randomised controlled trials.
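To give a feel for how screening tools of the Abstrackr kind work under the hood, here is a deliberately tiny include/exclude classifier: a multinomial Naive Bayes over abstract tokens, written from scratch. This is our own toy illustration, not Abstrackr's actual method (production tools use far richer features plus active learning), and the training abstracts are invented.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; hyphens and punctuation split words."""
    return re.findall(r"[a-z']+", text.lower())

class AbstractScreener:
    """Minimal multinomial Naive Bayes: label 1 = include, 0 = exclude."""

    def fit(self, abstracts, labels):
        self.counts = {0: Counter(), 1: Counter()}
        self.priors = {0: 0, 1: 0}
        for text, y in zip(abstracts, labels):
            self.counts[y].update(tokenize(text))
            self.priors[y] += 1
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        total = len(labels)
        self.log_prior = {y: math.log(self.priors[y] / total) for y in (0, 1)}
        return self

    def predict(self, abstract):
        scores = {}
        for y in (0, 1):
            # Laplace smoothing over the shared vocabulary
            n = sum(self.counts[y].values()) + len(self.vocab)
            scores[y] = self.log_prior[y] + sum(
                math.log((self.counts[y][w] + 1) / n)
                for w in tokenize(abstract) if w in self.vocab)
        return max(scores, key=scores.get)

# Invented training set: human RCTs are included, animal/in-vitro excluded
screener = AbstractScreener().fit(
    ["randomized controlled trial of drug X in humans",
     "double-blind placebo trial reporting efficacy outcomes",
     "in vitro cell culture assay of compound Y",
     "mouse model pharmacokinetics study"],
    [1, 1, 0, 0])

screener.predict("placebo controlled randomized trial")  # -> 1 (include)
```

Even this caricature conveys the economics: once a few hundred abstracts have been labeled by hand, the remaining thousands can be ranked by the classifier so that reviewers read the most likely inclusions first.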

Conclusion: improvement through automation?

When we consider the burgeoning field of academic research into automated meta-analysis, as well as the various entrepreneurial initiatives in the area (one very young start-up in particular comes to mind), we can only come to the strong conviction that meta-analysis will increasingly become a task delegated to machines, with the human role limited to defining the research protocol, assisted by software that helps make the best possible choices of scope and search algorithms. Beyond the direct savings from automating meta-analyses, many indirect savings can be expected, in particular those made possible by better decisions, such as whether or not to start a clinical trial. All in all, the automation of meta-analyses will contribute to more efficient and faster drug invention.

Resolving Pharma, whose project is to link reflection and action, will invest in the coming months in the concrete development of meta-analysis automation solutions.

Would you like to discuss the subject? Would you like to take part in writing articles for the Newsletter? Would you like to participate in an entrepreneurial project related to PharmaTech?

Contact us! Join our LinkedIn group!

To go further:
  • Marshall, I.J., Wallace, B.C. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev 8, 163 (2019).
  • Clark, J., Glasziou, P., Del Mar, C., Bannach-Brown, A., Stehlik, P., Scott, A.M. A full systematic review was completed in 2 weeks using automation tools: a case study. J Clin Epidemiol. 2020 May;121:81-90. doi: 10.1016/j.jclinepi.2020.01.008. PMID: 32004673.
  • Beller, E., Clark, J., Tsafnat, G. et al. Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev 7, 77 (2018).
  • Gauthier, L. L'élaboration d'une méta-analyse : un processus complexe ! [Developing a meta-analysis: a complex process!] Pharmactuel, Vol. 35, No. 5 (2002).
  • Soliman, N., Rice, A.S.C., Vollert, J. A practical guide to preclinical systematic review and meta-analysis. Pain. 2020 Sep;161(9).
  • Michelson, M., Reuter, K. The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials. Contemporary Clinical Trials Communications, Volume 16, 2019, 100443, ISSN 2451-8654.
  • Berger, V.W., Alperson, S.Y. A general framework for the evaluation of clinical trial quality. Rev Recent Clin Trials. 2009 May;4(2):79-88.
  • A start-up specializing in meta-analysis enhanced by Artificial Intelligence:
  • And finally, the absolute bible of meta-analysis: The Handbook of Research Synthesis and Meta-Analysis, Harris Cooper, Larry V. Hedges and Jeffrey C. Valentine.


By Alexandre Demailly

A pharmacist who graduated from Lille University, France, Alexandre pursued his studies in Medical Economics at Paris-Dauphine University and developed his knowledge of Artificial Intelligence in Health at the University of Paris.
Passionate about health innovation and entrepreneurship, Alexandre is currently involved in two early-stage biotechs in the neurodegenerative diseases field.