Making big sense from big data in toxicology by read-across

Thomas Hartung


Modern information technologies have made big data available in the safety sciences, i.e., extremely large data sets that can be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches, and some other high-content technologies, leave us with big data – the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensuring that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented, among other tools.
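The core idea of read-across described above – filling a data gap for one substance from the known properties of structurally similar substances – can be sketched in a few lines. The following is a minimal illustrative example, not the REACH-across tool itself: substances, features and labels are hypothetical, fingerprints are simplified to sets of structural features, and similarity is measured with the Tanimoto (Jaccard) coefficient commonly used in cheminformatics.

```python
# Illustrative sketch of similarity-based read-across (hypothetical data,
# not the REACH-across implementation): predict a hazard label for a query
# substance from its most similar neighbours with known test results.

def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two feature sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def read_across(query_fp, dataset, k=3, min_sim=0.3):
    """Fill the data gap for query_fp by similarity-weighted voting
    over the k most similar substances above a similarity cutoff."""
    scored = sorted(
        ((tanimoto(query_fp, fp), label) for fp, label in dataset),
        key=lambda pair: pair[0], reverse=True)
    neighbours = [(s, lab) for s, lab in scored[:k] if s >= min_sim]
    if not neighbours:
        return None  # no sufficiently similar analogue: no prediction
    weight_toxic = sum(s for s, lab in neighbours if lab == "toxic")
    weight_total = sum(s for s, _ in neighbours)
    return "toxic" if weight_toxic / weight_total >= 0.5 else "non-toxic"

# Hypothetical substances encoded as sets of structural features.
dataset = [
    ({"aromatic_ring", "nitro_group"}, "toxic"),
    ({"aromatic_ring", "nitro_group", "chloro"}, "toxic"),
    ({"aliphatic_chain", "hydroxyl"}, "non-toxic"),
    ({"aliphatic_chain", "ester"}, "non-toxic"),
]

query = {"aromatic_ring", "nitro_group", "methyl"}
print(read_across(query, dataset))  # -> toxic (two close nitroaromatic analogues)
```

The `min_sim` cutoff reflects the point made in the abstract that confidence in read-across rests on sufficient similarity: when no analogue passes the threshold, the sketch declines to predict rather than extrapolate from dissimilar substances.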

How to Cite
Hartung, T. (2016) “Making big sense from big data in toxicology by read-across”, ALTEX - Alternatives to animal experimentation, 33(2), pp. 83–93. doi: 10.14573/altex.1603091.
Food for Thought ...
