Software tools for systematic review literature screening and data extraction: Qualitative user experiences from succinct formal tests


Cathalijn H. C. Leenaars, Frans Stafleu, André Bleich

Abstract

Systematic reviews (SRs) contribute to implementing the 3Rs in preclinical research. With the ever-increasing amount of scientific literature, SRs require an increasing time investment. Using the most efficient review tools is therefore essential. Most available software tools aid the screening process; tools for data extraction and/or multiple review phases are relatively scarce. Using a single platform for all review phases allows references to be transferred automatically from one phase to the next and enables work on multiple phases at the same time. We performed succinct formal tests of four multiphase review tools that are free or relatively affordable: Covidence, Eppi, SRDR+ and SYRF. Our tests comprised full-text screening, sham data extraction, and discrepancy resolution in the context of parts of a systematic review. Screening was performed as per protocol. Sham data extraction comprised free-text, numerical and categorical data. Both reviewers logged their experiences with the platforms throughout. These logs were qualitatively summarized and supplemented with further user experiences. We show the value of all tested tools in the SR process. Which tool is optimal depends on multiple factors, including previous experience with the tool as well as review type, review questions, and the enthusiasm of review team members.


Plain language summary
Systematic reviews (SRs) are reliable summaries of scientific studies that have been done in the past. They can help to improve animal welfare and reduce the use of animals in research. However, because new studies are published all the time, summarizing them reliably takes more and more time. Different software tools can help people do an SR more efficiently. We did a brief study to compare four tools that are free or low-cost: Covidence, Eppi, SRDR+ and SYRF. We tested how they work in different steps of an SR. During testing, two reviewers wrote down all their experiences. We summarize the results in this paper. All four tested tools can help reviewers work efficiently. We advise on which tool can help best in different settings.

Article Details

How to Cite
Leenaars, C. H. C., Stafleu, F. and Bleich, A. (2025) “Software tools for systematic review literature screening and data extraction: Qualitative user experiences from succinct formal tests”, ALTEX - Alternatives to animal experimentation, 42(1), pp. 159–166. doi: 10.14573/altex.2409251.
Section
BenchMarks
