On the usefulness of animals as a model system (part I): Overview of criteria and focus on robustness
Abstract
Banning or reducing the use of animals in laboratory experiments is a frequently discussed societal and scientific issue. Moreover, the usefulness of animals needs to be considered in any decision on whether to permit specific animal studies. This complex issue is often simplified and generalized in the media to the question, “Are animals useful as a model?” To render an often emotional discussion about animal experimentation more rational, it is important to define “usefulness” in a structured and transparent way. To achieve this goal, many sub-questions need to be asked, and the following aspects require clarification: (i) the consistency of animal-derived data (robustness of the model system); (ii) the scientific domain investigated (e.g., toxicology vs disease modelling vs therapy); (iii) the measurement unit for “benefit” (integrating positive and negative aspects); (iv) benchmarking against alternatives; (v) the definition of success criteria (how good is good enough); (vi) the procedure to assess benefit and necessity. This series of articles discusses the overall benchmarking process by specifying these six issues. The goal is to provide guidance on what needs to be clarified in scientific and political discussions. This framework should help in the future to structure available information, to identify and fill information gaps, and to arrive at rational decisions in the various sub-fields of animal use. In part I of the series, we focus on the robustness of animal models, i.e., their capacity to produce the same output/response when faced with the “same” input. Follow-up articles will cover the remaining aspects of usefulness.
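As a purely illustrative aside (not part of the original article), the following minimal Python sketch shows one way the robustness/consistency of a model system could be quantified: the agreement between two hypothetical replicate runs of the same binary-outcome animal test on the same set of chemicals, summarized as observed agreement and Cohen's kappa. All function names and data below are invented for demonstration and make no claim about real study results.

from typing import Sequence

def percent_agreement(run1: Sequence[int], run2: Sequence[int]) -> float:
    """Fraction of inputs for which two replicate runs give the same call."""
    assert len(run1) == len(run2)
    return sum(a == b for a, b in zip(run1, run2)) / len(run1)

def cohens_kappa(run1: Sequence[int], run2: Sequence[int]) -> float:
    """Chance-corrected agreement between two replicate runs of binary calls."""
    n = len(run1)
    p_obs = percent_agreement(run1, run2)
    p1 = sum(run1) / n                       # fraction "positive" in run 1
    p2 = sum(run2) / n                       # fraction "positive" in run 2
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)    # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else 1.0

# Hypothetical replicate calls (1 = positive, 0 = negative) for ten chemicals
first_run  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
second_run = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(f"Observed agreement: {percent_agreement(first_run, second_run):.2f}")
print(f"Cohen's kappa:      {cohens_kappa(first_run, second_run):.2f}")

In this made-up example, the two runs agree on 80% of the chemicals, corresponding to a kappa of 0.60, i.e., agreement clearly above chance but far from perfect reproducibility given the “same” input.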
Article Details
This work is licensed under the Creative Commons Attribution 4.0 International license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is appropriately cited (CC-BY). Copyright on any article in ALTEX is retained by the author(s).