Integrated Testing Strategy (ITS) – Opportunities to Better Use Existing Data and Guide Future Testing in Toxicology

Summary

The topic of Integrated Testing Strategies (ITS) has attracted considerable attention, and not only because it is supposed to be a central element of REACH, the ambitious European chemical regulation effort. Although what ITS are supposed to do seems unambiguous, i.e. speeding up hazard and risk assessment while reducing testing costs, not much has been said, beyond basic conceptual proposals, about the methodologies that would allow execution of these concepts. Although a pressing concern, the topic of ITS has drawn mostly general reviews, broad concepts, and the expression of a clear need for more research; published research in the field remains scarce. Solutions for ITS design emerge slowly, most likely due to the methodological challenges of the task, and perhaps also to its complexity and the need for multidisciplinary collaboration. Along with the challenge, ITS offer a unique opportunity to contribute to the toxicology of the 21st century by providing frameworks and tools to actually implement 21st century toxicology data in chemical management and decision-making processes. Further, ITS have the potential to contribute significantly to a modernization of the science of risk assessment. Therefore, to advance ITS research we propose a methodical approach to their design and discuss currently available approaches as well as challenges to overcome.
To this end, we define a framework for ITS that will inform toxicological decisions in a systematic, transparent, and consistent way. We review conceptual requirements for ITS developed earlier and present a roadmap to an operational framework that should be probabilistic, hypothesis-driven, and adaptive. Furthermore, we define properties an ITS should have in order to meet the identified requirements and differentiate them from evidence synthesis. Making use of an ITS for skin sensitization, we demonstrate how the proposed ITS concepts can be implemented.


Introduction
Currently we are witnessing a tremendously increased pace of discovery in biology, especially molecular biology, that has increased our knowledge of biological systems' structure and function. New opportunities for chemical management methods are created as more mechanistic insights become available, improving our ability to address human relevance and to reduce use of full animal models. Policy changes, especially in Europe, such as the REACH and Cosmetics directives, encourage increasing reliance on non-animal approaches and pose challenges to the existing frameworks for chemical safety evaluation.

Keywords: Integrated Testing Strategy, 21st century toxicology tools, probabilistic, hypothesis-driven, adaptive framework

"21st Century Validation Strategies for 21st Century Tools"
On July 13-14, 2010, the Johns Hopkins Center for Alternatives to Animal Testing held a workshop at the Johns Hopkins Bloomberg School of Public Health in Baltimore. The two-day workshop, titled "21st Century Validation Strategies for 21st Century Tools," consisted of four original papers, each followed by four invited responses and a discussion period. The papers, published in this issue of ALTEX, addressed topics of interest to regulators, industry, and academic scientists charged with implementing the recommendations of the new approach to toxicological testing outlined in the National Academy of Sciences 2007 report, Toxicity Testing in the 21st Century: A Vision and A Strategy.

ITS – what they are and what they are not
A systematic approach to ITS starts with defining what they are. This is crucial for following our review. Furthermore, a more precise definition of ITS is also needed by the larger community in order to harmonize its use and improve scientific exchange. In narrative terms, ITS can be described as combinations of test batteries covering relevant mechanistic steps, organized in a logical, hypothesis-driven decision scheme, which is required to make efficient use of generated data and to gain a comprehensive information basis for making decisions regarding hazard or risk. We approach ITS from a system analysis perspective and understand them as decision support tools that synthesize information in a cumulative manner and that guide testing in such a way that the information gain in a testing sequence is maximized.
This definition clearly separates ITS from tiered approaches in two ways. First, tiered approaches consider only the information generated in the last step for a decision, as, for example, in the current regulated sequential testing strategy for skin irritation (OECD, 2002) or the recently proposed in vitro testing strategy for eye irritation (Scott et al., 2010). Secondly, in tiered testing strategies the sequence of tests is prescribed, albeit loosely, based on average biological relevance, and is left to expert judgment. In contrast, our definition enables an integrated and systematic approach to guide testing such that the sequence is not necessarily prescribed ahead of time but is tailored to the chemical-specific situation. Depending on the information already available on a specific chemical, the sequence might be adapted and optimized for meeting specific information targets.
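The idea of guiding a test sequence so that information gain is maximized can be made concrete with a small value-of-information calculation. The sketch below is a minimal illustration under simplifying assumptions (a binary hazard hypothesis, hypothetical candidate tests with invented sensitivity/specificity figures); it scores each candidate by the expected reduction in Shannon entropy of the hazard hypothesis:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a binary hypothesis with P(hazard) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def posterior(prior, positive, sens, spec):
    """Bayes update of P(hazard) for one binary test result."""
    lh = sens if positive else 1.0 - sens              # P(result | hazard)
    lnh = (1.0 - spec) if positive else spec           # P(result | no hazard)
    return lh * prior / (lh * prior + lnh * (1.0 - prior))

def expected_information_gain(prior, sens, spec):
    """Expected entropy reduction about the hazard hypothesis from running one test."""
    p_pos = sens * prior + (1.0 - spec) * (1.0 - prior)  # P(test positive)
    h_after = (p_pos * entropy(posterior(prior, True, sens, spec))
               + (1.0 - p_pos) * entropy(posterior(prior, False, sens, spec)))
    return entropy(prior) - h_after

# Hypothetical candidate tests: (sensitivity, specificity)
candidates = {"test_A": (0.95, 0.60), "test_B": (0.75, 0.75)}
best = max(candidates, key=lambda t: expected_information_gain(0.5, *candidates[t]))
```

In this toy setting the chemical-specific prior changes which test is most informative next, which is exactly why the sequence need not be prescribed ahead of time.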

A systematic approach to ITS – defining the conceptual requirements
Building upon earlier papers delineating the conceptual requirements (Hoffmann and Hartung, 2006; OECD, 2008; Jaworska et al., 2010), ITS should be:
a) Transparent and consistent – As a new and complex development, key to ITS, as to any methodology, is the property that they are comprehensible to the maximum extent possible. In addition to ensuring credibility and acceptance, this may ultimately attract the interest needed to gather the momentum required for their development. The only way to achieve this is fundamental transparency. Consistency is of similar importance. While difficult to achieve for weight-of-evidence approaches, a well-defined and transparent ITS can and should, when fed with the same, potentially even conflicting and/or incomplete information, always (re-)produce the same results, irrespective of who, when, where, and how it is applied. In case of inconsistent results, the reasons should be identified and used to further optimize the consistency of the ITS.

As a consequence of these policy changes, a paradigm for data evaluation was proposed: Integrated Testing Strategies (Anon, 2003, 2005, 2007). Integrated Testing Strategies (ITS) are expected to perform better than tiered testing strategies by maximizing use of existing data and gaining a more comprehensive, mechanistic basis for decision making using in silico, in vitro, -omics, and ultimately in vivo data, as well as exposure information (Bradbury et al., 2004; van Leeuwen et al., 2007; Ahlers et al., 2008; Schaafsma et al., 2009). Additionally, especially in the context of REACH, ITS are supposed not only to provide accurate inferences but also to be resource-effective and to reduce animal testing. In essence these new data, while increasingly available, are complex and multifaceted. To fully leverage them, appropriate methodologies for their analysis and interpretation are required. Although a pressing concern, the topic of ITS has drawn mostly general reviews, broad concepts, and the expression of a clear need for more research on ITS (Hengstler et al., 2006; Worth et al., 2007; Benfenati et al., 2010). Published research in the field remains scarce (Gubbels-van Hal et al., 2005; Hoffmann et al., 2008a; Jaworska et al., 2010a). In this regard, the opinion has been expressed that no overarching scheme will be able to handle the diversity of sciences and approaches involved (van Leeuwen et al., 2007). From where we stand this is difficult to assess, but we object to such a general statement before an attempt has even been made to find such a scheme.

As ITS pose unique challenges but also offer a unique opportunity, the most pressing need is to progress beyond the existing conceptual frameworks and develop transparent, structured, consistent, and causal methodological approaches (Hoffmann, 2009; Jaworska et al., 2010a), e.g. as postulated under the concept of an evidence-based toxicology (Hoffmann and Hartung, 2006; Hartung, 2009). Shortcomings of existing ITS were recently analyzed in detail by Jaworska et al. (2010a) and, in short, are as follows. The use of flow charts as the ITS' underlying structure may lead to inconsistent decisions. There is no guidance on how to conduct consistent and transparent inference about the information target, taking into account all relevant evidence and its interdependence. Moreover, there is no guidance, other than purely expert-driven, regarding the choice of subsequent tests that would maximize information gain.
Overall, there are high expectations for ITS and good agreement on what they are supposed to do. However, solutions for ITS design emerge slowly, most likely due to the methodological challenges of the task and perhaps also to its complexity. Along with the challenge, ITS offer a unique opportunity to contribute to the toxicology of the 21st century by providing frameworks and tools to actually implement 21st century toxicology data in the chemical management and decision-making processes. Further, we consider ITS to have the potential to contribute significantly to a modernization of the science of risk assessment. Therefore, to advance ITS research we propose a methodical approach to their design and discuss currently available approaches as well as challenges to overcome. First, a definition of ITS, which serves as a basis for this review, is presented and then contrasted with evidence synthesis. Next, we identify and discuss desired ITS elements required for a systematic – as opposed to a case-by-case – approach to ITS and present a road map leading from conceptual requirements to the operational framework. Elements of information and decision theory are introduced and their application to testing is demonstrated. We conclude with an example to illustrate how our proposals can be deployed in practice.

Moreover, the inference should be hypothesis-driven. The hypothesis-driven workflow proceeds from gathering existing data, through computational analysis, towards quantifying uncertainty in relation to the information target and then, if required, to new experimentation. A cyclic or iterative hypothesis-driven workflow cannot be incorporated into current testing strategy schemes, where inference is fixed in one direction. Hypothesis-driven inference implies that causal relationships are preserved in the testing sequence. This goes beyond Hansson and Rudén (2007), who have outlined a decision-theoretic framework for the development of tiered testing systems but omit the idea of hypothesis-driven inference. In addition, it should be noted that hypothesis-driven inference can be easily formalized in a Bayesian probabilistic approach.
Currently, regulatory toxicological inference is linked to a descriptive weight-of-evidence (WoE) approach, which is a stepwise procedure for integrating data and for assessing the equivalence and adequacy of different types of information. This approach, associated with some fundamental problems (Weed, 2005), aims at optimal integration of information from different sources featuring various aspects of uncertainty (Ahlers et al., 2008). We acknowledge that it is a useful tool for today's chemical hazard and risk assessment. However, we doubt its use regarding ITS, primarily because it lacks a methodological basis for making transparent, consistent inferences, as highlighted by Jaworska et al. (2010a) and Schreider et al. (2010). Schreider et al. (2010) mainly link credibility to transparency, but transparency is also a crucial prerequisite to improving the risk assessment process regarding the methodology used. Finally, ITS have to be assessable in order to allow overall optimization. Besides uncertainty reduction and, of course, predictive performance, relevant optimization parameters are costs, feasibility, and animal welfare, e.g. as incorporated by Hoffmann et al. (2008a).
The last conceptual requirement pertains to efficiency. Optimally efficient methods are adaptive: they optimize the solution to a specific situation and information target. In our case they need to be, e.g., chemical-specific and exposure-driven. As such, this requires departure from the check-box approach.
In summary, the outlined conceptual requirements translate into an operational framework that needs to be probabilistic, even better Bayesian, and adaptive. Such a framework would be transparent and consistent by definition. Furthermore, it allows handling all kinds of uncertainty, and it can be used in a rational way.
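As a minimal illustration of such a probabilistic, adaptive framework, the sketch below applies Bayes' rule sequentially to a binary information target (sensitizer vs. non-sensitizer). The assay names are taken from the skin sensitization context; the sensitivity/specificity figures are purely illustrative assumptions, not validated values:

```python
def bayes_update(prior, positive, sensitivity, specificity):
    """One Bayesian update of P(hazard) given a binary test result."""
    lh = sensitivity if positive else 1.0 - sensitivity        # P(result | hazard)
    lnh = (1.0 - specificity) if positive else specificity     # P(result | no hazard)
    return lh * prior / (lh * prior + lnh * (1.0 - prior))

# Illustrative figures only; real assay performance must come from validation data.
assays = [("DPRA", 0.80, 0.77), ("KeratinoSens", 0.77, 0.79), ("h-CLAT", 0.86, 0.68)]

p = 0.5  # non-informative prior on the sensitizer hypothesis
for name, sens, spec in assays:
    p = bayes_update(p, True, sens, spec)  # suppose each assay came back positive
```

Each result updates the same probability in a transparent, reproducible way, and the updated probability can in turn decide whether further testing is needed at all.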

Elements of ITS
Having defined and described the framework of ITS, we propose to fill it with the following five elements:
1. Information target identification;
2. Systematic exploration of knowledge;
3. Choice of relevant inputs;
4. Methodology for evidence synthesis;
5. Methodology to guide testing.
In particular, transparency and consistency are of utmost importance in the handling of variability and uncertainty. While transparency could be achieved qualitatively, e.g. by appropriate documentation of how variability and uncertainty were considered, consistency in this regard may only be achievable when they are handled quantitatively.
b) Rational – Rationality of ITS is essential to ensure that information is fully exploited and used in an optimized way. Furthermore, generation of new information, usually by testing, needs to be rational in the sense that it is focused on providing the most informative evidence in an efficient way.
c) Hypothesis-driven – ITS should be driven by a hypothesis, which will usually be closely linked to the information target of the ITS, a concept detailed below. In this way the efficiency of an ITS can be ensured, as a hypothesis-driven approach offers the flexibility to adjust the hypothesis whenever new information is obtained or generated.

The operational framework for ITS
Definition of the conceptual requirements for ITS is a prerequisite that guides the possible choices regarding methodological approaches. Since chemical risk assessment is inherently uncertain, due to imperfect understanding of underlying toxicological mechanisms, variations among and between individual species used for testing, and measurement and observational errors, a formal approach is needed to quantify this uncertainty in order to systematically reduce it. In other words, the framework should allow for evidence maximization, i.e. it should guide testing in such a way that the information content can be updated stepwise and the choice of subsequent tests is guided by the highest expected uncertainty reduction. Probabilistic methods provide a formal approach for quantifying uncertainty from heterogeneous input sources, relationships between them, and overall target uncertainty. Further, probabilistic methods are based on fundamental principles of logic and rationality. In rational reasoning every piece of evidence is consistently valued, assessed, and coherently used in combination with other pieces of evidence. While knowledge- and rule-based systems, as manifested in current testing strategy schemes, typically model the expert's way of reasoning, probabilistic systems describe dependencies between pieces of evidence (towards an information target) within the domain of interest. This ensures the objectivity of the knowledge representation. Probabilistic methods allow for consistent reasoning when handling conflicting data, incomplete evidence, and heterogeneous pieces of evidence.

Solutions include correction for imperfection, which may use sensitivity analysis, and various approaches aiming at construction of a reference standard, including latent class analysis, composite reference standard construction, and panel diagnosis (Alonzo and Pepe, 1998, 1999; Knottnerus and Muris, 2003; Zhou et al., 2002; Baughman et al., 2008). Interestingly, solutions are also available for the case where a reference standard is absent, which might be useful for newly emerging toxicological health effects and concerns, e.g. endocrine disruption or the effects of nanomaterials. As comparable problems are encountered, similar solutions have been developed in parallel and applied to the diagnosis of animal diseases (Enøe et al., 2000; McInturff et al., 2004; Georgiadis et al., 2005). Also, in genomics the need for an appropriate reference standard for the evaluation of functional genomic data and methods has been recognized, and an approach based on expert curation, as an equivalent to panel diagnosis, has been proposed (Myers et al., 2006).
Acknowledging that the solutions presented above mainly address binary test outcomes, a potential benefit of these approaches when adapted to toxicology has been postulated (Hoffmann and Hartung, 2005, 2006). The potential use of latent class analysis for the assessment of toxicological in vitro tests has already been highlighted (Hothorn, 2002; Hoffmann et al., 2008b). In this regard it has to be noted that latent class approaches, while mathematically elegant, carry the challenge that they derive the reference from non-observable quantities, which may be difficult to communicate.
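To make the latent class idea tangible, the toy sketch below fits a two-class latent class model to simulated results of three binary tests using expectation-maximization, recovering prevalence and per-test sensitivity/specificity with no gold standard at all. It is a didactic sketch, not a validated implementation; conditional independence of the tests given the latent hazard class is assumed, and all data are simulated:

```python
import random

def em_latent_class(data, n_iter=200):
    """Two-class latent class model for K conditionally independent binary tests.
    Estimates prevalence and per-test sensitivity/specificity WITHOUT a gold standard."""
    k = len(data[0])
    prev, sens, spec = 0.5, [0.7] * k, [0.7] * k  # init above chance to fix label order
    for _ in range(n_iter):
        # E-step: posterior probability that each chemical is truly positive
        post = []
        for row in data:
            lp, ln = prev, 1.0 - prev
            for j, r in enumerate(row):
                lp *= sens[j] if r else 1.0 - sens[j]
                ln *= (1.0 - spec[j]) if r else spec[j]
            post.append(lp / (lp + ln))
        # M-step: re-estimate parameters from the soft class assignments
        total = sum(post)
        for j in range(k):
            sens[j] = sum(p for p, row in zip(post, data) if row[j]) / total
            spec[j] = sum(1.0 - p for p, row in zip(post, data) if not row[j]) / (len(data) - total)
        prev = total / len(data)
    return prev, sens, spec

# Simulated example: 500 chemicals, true prevalence 0.4, three imperfect tests
rng = random.Random(1)
true_sens, true_spec = [0.90, 0.80, 0.85], [0.85, 0.90, 0.80]
data = []
for _ in range(500):
    hazard = rng.random() < 0.4
    data.append(tuple(
        int(rng.random() < (true_sens[j] if hazard else 1.0 - true_spec[j]))
        for j in range(3)))
prev_hat, sens_hat, spec_hat = em_latent_class(data)
```

The communication difficulty mentioned above is visible even here: the "truth" the estimates refer to is the latent class itself, not any observable result.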
As mentioned above, new test methods usually are assessed by comparison to a reference standard. Even today those comparisons remain one-to-one, i.e. comparing one new test with one existing test, largely disregarding the fact that more than one reference method might exist. Therefore, composite reference results, obtainable by integration of existing data in a consistent and transparent manner, are likely to be better suited for assessment purposes (Pepe, 2003; Hoffmann et al., 2008b). Composite reference results for reference chemicals could be generated, with or without the knowledge of expert panelists, by integrating data from agreed reference tests and, potentially, other information sources. As successful implementation of an ITS requires upfront agreement on the assessment procedure, including success criteria, definition of further reference points in addition to the information target may be required. Such an approach would be facilitated by composite reference standards, as they offer the flexibility to define reference results for each reference point.
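At its simplest, a composite reference standard is an agreed aggregation rule over the individual reference-test calls. The hypothetical sketch below shows two such rules; real composite standards would of course be agreed upfront and may weight tests by quality:

```python
def composite_reference(results, rule="majority"):
    """Combine outcomes (0/1) of several agreed reference tests into one composite call.

    'majority': positive if more than half of the reference tests are positive.
    'any_positive': positive if at least one reference test is positive (conservative).
    """
    if rule == "majority":
        return int(2 * sum(results) > len(results))
    if rule == "any_positive":
        return int(any(results))
    raise ValueError(f"unknown rule: {rule}")
```

The choice of rule is itself a risk-management decision: 'any_positive' trades specificity for sensitivity, which may be appropriate when the cost of a missed hazard is high.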

Systematic exploration of knowledge via integrated knowledge management as a basis for ITS
With continuously increasing biological and toxicological understanding, it is evident that ITS must be frequently adapted to the current state of knowledge. But the wealth of information (and the scientific discussion about it) is overwhelming, making complete and consistent capture impossible. This explosion of biological data and the growing number of disparate data sources are exposing researchers to a new challenge: how to acquire, maintain, and share knowledge from large and distributed databases in the context of rapidly evolving research. This challenge has to be faced, as systematic exploration of knowledge is becoming critical to advance the understanding of life processes (Antezana et al., 2009; Landner, 2010).

The earlier presented definition separates ITS from evidence synthesis. Addressing the elements of ITS should make it even clearer that ITS go beyond evidence synthesis, especially as they present an overall, context-specific approach to guide testing. Evidence synthesis methodology will, however, be an essential element of ITS. In fact, the complexity and heterogeneity of the data to be integrated will require more advanced methodologies. Evidence synthesis is further elaborated in the section devoted to this topic, below.

Information target
A well-defined information target has to be formulated in an unambiguous and precise way as a fundamental prerequisite for any testing strategy. It is crucial to know what decision is to be informed. Only in this way can it be ensured that an efficient and transparent ITS can be constructed and optimized, leading to consistent decisions. Obviously, any change of the information target will at least require adaptation of the ITS. While ITS will more frequently be applied to hazard assessment and classification and labelling (C&L), with implicit consideration of exposure, rather than to risk assessment, where consideration of exposure is explicit and leads to a dose-response characterization, our framework is suitable for both.
Currently, for C&L an in vivo animal study result usually is used as the information target. Taking the OECD sequential testing strategy for skin corrosion/irritation (OECD, 2002) as an example, the information target is the dermal corrosion/irritation potential of a chemical, expressed as a category (corrosive, not corrosive, irritating, not irritating) guided by the Draize test. While acknowledging that the established in vivo tests have served their purpose, their predictive properties and relevance for human health are largely unknown. Therefore, treating an in vivo animal test result as the gold standard for human risk assessment is not appropriate. It should, rather, be considered a reference test, which much better reflects its real properties and use. Confusion of the two concepts – gold standard and reference standard – has created considerable problems, especially when assessing the predictive capacity of in vitro tests. It needs to be understood that, as reference tests can be imperfect, a new test may have better predictive characteristics than the reference (Alonzo and Pepe, 1999; Pepe, 2003; Hoffmann et al., 2008b). In such cases, considering the information target as a reference test will allow the evolution of information targets that are more human-relevant and more predictive. Therefore, in our opinion, the gold standard, although frequently invoked, e.g. by Burlinson et al. (2007) or by Andersen and Krewski (2009), is a concept hardly applicable to toxicology, as it conveys wrong associations and thus should be avoided.
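The practical consequence of an imperfect reference can be quantified with elementary probability. In the hypothetical sketch below, a new test that agrees perfectly with the (unobservable) truth is scored against a reference whose own sensitivity and specificity are each 0.9, at a prevalence of 0.5; the flawless new test then appears only 90% "sensitive" and 90% "specific":

```python
def apparent_performance(prev, ref_sens, ref_spec):
    """Apparent sensitivity/specificity of a PERFECT new test when it is
    scored against an imperfect reference instead of the (unknown) truth."""
    # Apparent sensitivity = P(new positive | reference positive)
    p_ref_pos_true = prev * ref_sens                # truly positive, reference agrees
    p_ref_pos_false = (1 - prev) * (1 - ref_spec)   # truly negative, reference wrong
    app_sens = p_ref_pos_true / (p_ref_pos_true + p_ref_pos_false)
    # Apparent specificity = P(new negative | reference negative)
    p_ref_neg_true = (1 - prev) * ref_spec
    p_ref_neg_false = prev * (1 - ref_sens)
    app_spec = p_ref_neg_true / (p_ref_neg_true + p_ref_neg_false)
    return app_sens, app_spec

s, c = apparent_performance(0.5, 0.9, 0.9)  # a perfect test looks only 90% "sensitive"
```

This is exactly why a new test can have better predictive characteristics than the reference while seeming to disagree with it.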
The problems associated with the lack of gold standards are common to other fields as well, especially the related life sciences of human and veterinary medicine. In medical diagnostic test assessment, approaches that have been developed to account for imperfect reference standards have recently been reviewed extensively (Rutjes et al., 2007; Reitsma et al., 2009).
Data can be described as incoherent bits of information without context. Only upon adding context and interpretation do data become information. Information forms the basis for the development of knowledge. Knowledge means the confident understanding of a subject, with the ability to use it for a specific purpose and to recognize when that is appropriate. In toxicology, with continuous advances in technologies and methodological approaches, information is no longer equivalent to knowledge (in fact, the deluge of data and information may create confusion instead of knowledge). Knowledge mapping is one approach that allows the creation of shared knowledge and the leveraging of all existing information, and it can readily be updated. Knowledge mapping produces a transparent record and structure of the information and knowledge available within a field. As it is mainly employed for managing complex organizations, it should also be well suited to managing the complex, diverse, and quickly evolving information in toxicology. Knowledge mapping is considered to be of particular importance for ITS, where different technologies and methodological and evidence synthesis approaches meet to aid chemical management decision making.
Knowledge maps are created by transferring tacit and explicit knowledge into graphical formats that are easily understood and interpreted by the end users. A knowledge map contains information about relevant objects and their associations and relationships. Depending on the information available, the objects of a toxicological knowledge map will include, but not be limited to, genetic information, metabolic processing, influences of protein induction and activity, pathways, molecular targets, cellular targets, organ and whole-body concepts, and reactomes (Vastrik et al., 2007; Matthews et al., 2009). The associations and relationships need to capture, in a mechanistic manner, all known processes by which chemicals perturb the functional equilibrium of the human body. As knowledge is continuously evolving, knowledge mapping must be a continuous effort in order to be useful. Knowledge maps can be represented with different degrees of formalization. Semantic networks, concept maps, and Bayesian networks offer interesting opportunities for knowledge mapping. Approaches that can transform knowledge maps into machine-understandable representations are particularly useful, as they allow the exploration of novel connections and therefore accelerate learning about the domain (Kim et al., 2002). These knowledge representation methods also provide an appropriate representation to facilitate human understanding.
The key concept for knowledge mapping is ontology. Ontologies provide a hierarchically organized vocabulary for representing and communicating knowledge about a topic in the form of terms, i.e. words or compound words in specific contexts, and the relationships between them. Whether simple or complex, ontologies capture domain knowledge in a way that can be processed by a computer. The use of ontologies facilitates standard annotations, improves computational queries, and supports the construction of inference statements from the information at hand. Furthermore, ontologies are pivotal for structuring data in a way that helps users understand the relationships that exist between terms in a (specialized) area of interest, as well as the nomenclature in areas with which they are unfamiliar. The main advantages of ontologies are:
- sharing a common understanding of the information structure;
- enabling reuse of knowledge;
- making domain assumptions explicit;
- separating domain knowledge from operational knowledge;
- analyzing domain knowledge;
- increasing interoperability among various domains of knowledge;
- enhancing the scalability of new knowledge into the existing domain.
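As a minimal illustration of how even a simple ontology supports computational queries, the toy fragment below (hypothetical terms and relations, loosely inspired by the skin sensitization pathway; not an excerpt of any real ontology) stores broader-term links and answers transitive queries over them:

```python
# Parent relations (term -> list of broader terms); a toy toxicology fragment.
ONTOLOGY = {
    "protein haptenation": ["molecular initiating event"],
    "keratinocyte activation": ["cellular response"],
    "dendritic cell activation": ["cellular response"],
    "molecular initiating event": ["skin sensitization pathway"],
    "cellular response": ["skin sensitization pathway"],
    "skin sensitization pathway": ["adverse outcome pathway"],
}

def ancestors(term):
    """All broader terms reachable from `term` (transitive closure)."""
    seen, stack = set(), list(ONTOLOGY.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return seen
```

A query such as `ancestors("protein haptenation")` is a primitive form of the inference statements mentioned above: the computer, not the reader, resolves where a term sits in the domain.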
In biology, ontologies currently are applied in communicating knowledge as well as in database schema definition, query formulation, and annotation, i.e. investigation of current "known" facts or data.However, ontologies also can be employed to facilitate conceptual discovery, often leading to a paradigm shift.
As the use of conceptual annotation grows, we can expect to see a concomitant change in database retrieval strategies: retrieval will become much more precise and complete than is currently possible. It should also allow the exploration of the relationships describing, e.g., functions, processes, and components of retrieved entries, resulting in significantly increased insight garnered from the search results (Gottgtroy et al., 2006). The biomedical community is engaged in several activities to advance new methodologies for leveraging the semantic content of ontologies to improve knowledge discovery in complex and dynamic domains (Gadaleta et al., 2010; Weeber et al., 2005; Spasic et al., 2005). They envisage building a multidimensional ontology that will allow the sharing of knowledge from different experiments undertaken across aligned research communities in order to connect areas of science seemingly unrelated to the area of immediate interest (Caldas et al., 2009). However, more research is still needed to harmonize existing efforts so that a unique, interoperable, universal framework can arise. This will enable future uses of computers for heterogeneous data integration, querying, reasoning, and inference, which in turn will support knowledge discovery. In toxicology, efforts so far have concentrated largely on gene ontologies (EBI, 2010; GO, 2010). For the advancement of ITS, similar efforts are urgently needed.

Strategies to identify relevant inputs to ITS
To advance ITS, knowledge mapping has to be converted to reality. Certainly we will not measure everything that is known but, ideally, only what is needed to make a decision. However, ITS should be consistent with the respective knowledge map. In order to control quality and relevance, potential pieces of an ITS need to be characterized to a certain extent. For this initial step, guidance in identifying promising and/or appropriate tests is required. Initially, the driving aspect should be individual test properties, including biological/mechanistic relevance (if known or assessable), endpoints measured, reproducibility/reliability, applicability domains, and expected contribution to the final aim, i.e. making a decision. Most of these properties are intrinsic and, in a first step, can be addressed independently of others. However, it has to be noted that, as detailed information may not always be readily available, this preparatory work is a screening intended to support the framing and structuring of information to facilitate ITS construction. Of course, the choice ultimately will be driven by many factors, such as testing costs, animal welfare considerations, or simply test complexity/availability.

While these algorithms are quite complex, finding weak signals of high importance, which may arise due to complex feedback mechanisms in biological signalling, is even trickier. For their detection, algorithms and methods enabling realistic dynamic models of information processing in cells, in particular dynamic network approaches, will be needed (Han, 2008). Rao et al. (2002) suggested applying signal processing techniques to study weak but important signals. Hong and Man (2010) give an excellent example of the processing of an important but weak signal in a signalling pathway.
Another way of reducing the effects of noise is to use prior knowledge about the target of interest. For example, in learning from numerical data, Šuc et al. (2004) and Lin and Lee (2010) showed the benefits of making the learning algorithm respect the known qualitative properties of the target. As this prior knowledge, i.e. the qualitative aspects, can be extracted from knowledge mapping, it is very appealing to take advantage of knowledge mapping, as it immediately provides intuitive understanding.
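One concrete way to impose such a qualitative prior is monotonicity: if the target is known not to decrease with, say, dose, noisy measurements can be smoothed with the pool-adjacent-violators algorithm. The sketch below illustrates only that general monotonicity idea; it is not the specific method of Šuc et al. or Lin and Lee:

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares fit constrained to be non-decreasing.
    Encodes the qualitative prior 'response does not decrease with dose'."""
    blocks = [[v, 1.0] for v in y]  # each block holds [mean, weight]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:  # violation: pool the two blocks
            m0, w0 = blocks[i]
            m1, w1 = blocks[i + 1]
            blocks[i] = [(m0 * w0 + m1 * w1) / (w0 + w1), w0 + w1]
            del blocks[i + 1]
            i = max(i - 1, 0)  # the pooled block may now violate backwards
        else:
            i += 1
    fitted = []
    for mean, weight in blocks:
        fitted.extend([mean] * int(weight))
    return fitted
```

For the noisy dose-response readings `[1.0, 3.0, 2.0, 4.0]`, the fit pools the inverted pair into a plateau, discarding the physically implausible dip without touching the consistent points.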

Data handling
It is evident that in pursuing the vision of toxicology in the 21 st century and ItS as one of the relevant tools, a lot of toxicological data will have to be handled sufficiently.Data need to be stored in databases allowing easy access and efficient extraction/combination for data integration (Kavlock et al., 2008;Hardy et al., 2010).
exploring new approaches to regulatory toxicology, such as ItS, will involve a collaborative effort.Many groups will want to and need to contribute.this requires either central data management and storage or local data storage using compatible database structures.Both entail a huge challenge, however.It will be necessary to make researchers aware of the greater scope of the effort.this concerns not only questions related to one or a few substances and/or mechanisms, but also how the data generated can help to define test methods or the substance-specific toxicology and, ultimately, how the information might be used strategically.Only if this is properly understood will researchers be able to dedicate themselves to adequate data management.Otherwise, data will be incomplete, poorly reported, biased, and ultimately of uncertain use for testing strategies.
In addition to the management, the data collection also presents challenges.Prerequisite to any data synthesis, especially to a quantitative one, is to collect data in a systematic way. the importance of a systemic approach -and the consequences if not applied -was demonstrated by Rudén using risk assessments of trichloroethylene as an example (Rudén, 2001).Reasons for incomplete collection might be manifold.Among them is the fact that in toxicology reviews are traditionally narrative, opening the door for subjective, biased, not transparent, and incomplete data collection.It has been proposed that systematic reviews, as defined under evidence-based medicine (Cook et al., 1997), would be a more appropriate approach (Hoffmann and Hartung, 2006;Hartung, 2009).Potential problems in systematic reviews to be aware of include publication bias, i.e. the general tendency to publish studies demonstrat-reproducibility/reliability, applicability domains, and expected contribution to the final aim, i.e. making a decision.Most of these properties are intrinsic and, in a first step, can be addressed independently of others.However, it has to be noted that, as detailed information may not always be readily available, this preparatory work is a screening intended to support the framing and structuring of information to facilitate ItS construction.Of course, the choice ultimately will be driven by many factors, such as testing costs, animal welfare considerations, or simply test complexity/availability.

Methodological considerations
Noise in biological data can be due to various, often unavoidable causes, such as inherent variability of the biological object under study, technological limitations, measurement errors, or human mistakes.ItS require integration of these noisy data and identification of important variables among the measured variables in order, finally, to develop, improve, or adapt a strategy.If relevant, a statistically significant variable most likely will be an important variable.However, biologically important variables providing a weak signal may be crucial as well.Both need to be found and properly placed in an ItS.
Practically, simplification of the hypothesis and variable selection is usually attempted by evaluating a training set aimed at identification of the most informative variables.However, this approach has been criticized recently due to problems with overfitting.Furthermore, power is low if variables are correlated.It diminishes even further when taking the multiplicity of the testing problem into account.When dealing with biological data, in addition to noise, often only a small set of variables carries most of the information of interest.this makes classic algorithms unsuitable and leads to a high number of false positive and, to a lesser degree, false negative findings (Wu, 2009).Several new algorithms to address these problems were developed.Blanchard and Roquain (2009), e.g. proposed a false discovery rate controlling procedure for correlated variables and showed it to be much more powerful than classic procedures that control the traditional family-wise error rate.Zehetmayer and Posch (2010) developed algorithms to assess power and false negatives rate in large scale multiple testing problems.to gain further power and insight, Meinshausen (2008) developed a hierarchical testing procedure approach to address importance, not at the level of individual variables but rather at the level of clusters of highly correlated variables, and suggested that hierarchy can be derived from specific domain knowledge.Furthermore, for high dimensional noisy data probabilistic approaches to variable selection are currently being developed (Jiang and tanner, 2008).All these new algorithms share one common feature -they are adaptive and/or hierarchical.they first evaluate data at a coarse level and then refine hypotheses via multiple iterations.In view of these recent advances, it is not entirely surprising that retrospective analyses of large biological data sets are revealing large number of false positives due to so called "fishing for significance" and need to be interpreted 
with caution (Boulesteix, 2010;Boulesteix and Hothorn, 2010;Boulesteix and Strobl, 2009).
(Q)SARs, or threshold of toxicological Concern.Further examples of evidence synthesis approaches are the systematic reviews and meta-analysis introduced above, which have been proposed as a potentially useful methodology for test validation/assessment under the concept of an evidence-based toxicology (Hoffmann and Hartung, 2006;Hartung, 2009).Among evidence synthesis tools to assess exposure, both external and internal, there is a whole spectrum of models with different degrees of sophistication, from simple steady-state to dynamic models considering physiology and kinetics, such as PBPK models.these models have a physics law base and use a chemical's physical-chemical properties as input.
Due to poorer understanding of the pharmacodynamic effects, ItS are more applicable to hazard assessment, while pure exposure and distribution considerations can be better handled by increasingly more explicit exposure simulation models.However, we implicitly assume that inputs to ItS regarding effects do consider exposure in a concentration-effect manner, ensuring their biological relevance.

Methodology to integrate and guide testing
Once we have relevant pieces of information we need a framework for their integration.Data integration for ItS can be understood as a form of meta-analysis.However, classic metaanalytical techniques are not directly applicable, mainly due to differences in the inputs.Meta-analysis produces an estimate of the average effect seen in trials of a particular treatment (Smith, 1997) by a statistical approach that integrates the results of several independent studies considered to be combina-ing the presence of an effect, and selection bias (Horvath and Pewsner, 2004).the latter, especially, also poses a problem for toxicological reviews because selection criteria are difficult to establish when data are not assessed in a transparent and consistent way.taking the assessment of the reliability of existing data quality/reliability in the ReACH-context as an example, currently available methodology (Klimisch et al., 1997) is prone to selection bias, as it is not transparent and is open to subjectivity.especially in this setting, however, subject reliability assessment can be expected, since such an assessment might be decisive for the use/rejection of existing data and thus is directly related to costs.As data quality is also of considerable importance for ItS, an objective assessment is needed, which allows for proper incorporation of this aspect into ItS construction and assessment.this has recently been recognized, and first attempts toward an objective assessment have been put forward (Schneider et al., 2009;Hulzebos and Gerner, 2010;Jaworska et al., 2010a).Interestingly, similar developments are taking place in ecotoxicity (Hobbs et al., 2005;Breton et al., 2009).

Evidence synthesis
In toxicological hazard and risk assessment there are many evidence synthesis information approaches.evidence synthesis methods range from empirical approaches that are purely datadriven to phenomenological models and to explicit simulation models.they vary with regard to biological insight and degree of realism.Among evidence synthesis approaches used to fill in a hazard data gap are narrative weight of evidence, read-across, to high correlation can perfectly arrive at the same final decision, the problem raises questions regarding the consistency and interpretation of the tree.Recently, Jaworska et al. (2010a) introduced an information-theoretic probabilistic framework for ItS in the form of a Bayesian network (BN).Bayesian networks can be seen as folded decision trees, which allow for a compact representation of the complex decision model and better handling of consistency by finding an optimal solution for the whole network structure.Other advantages become evident in table 1, which presents a comparison of functionalities of BNs and decision trees regarding the identified conceptual requirements and other relevant factors related to their practical implementation.
ble (egger and Smith, 1997).the data inputs are homogenous in nature as the same effect is studied, only in different settings.In addition meta-analysis does not have the capacity to guide testing.Decision trees were expected to be the method of choice, albeit some authors expressed concern about their potential size and complexity (Hartung, 2010).Despite the popularity of decision trees, it was already shown in other scientific areas (Bloemer et al., 2003) that the model structure of decision trees can sometimes be unstable due to variable correlation.this means that when carrying out multiple tests, mostly the same variables enter the decision tree, while the order of entry differs.even if different decision trees that suffer from variable masking due The LLNA potency is the information target of this ITS.The bioavailability, reactivity and dendritic cells nodes are latent (unobservable) variables and represent combined evidence from the observable tests (nodes) connected to them.The latent variables that can be interpreted as major lines of evidence connect to the information target.The nodes are characterized by a probability distribution.The arcs are characterized by conditional probability tables.Arcs are tagged with mutual information values that change upon providing evidence to the network.others.In particular, it can separate chemicals with well known potency, especially on the extremes, from chemicals for which more evidence needs to be generated by providing information target uncertainty distributions.

Conclusions
Future chemical management faces the challenge of dealing with a wealth of multifaceted information, all of which might support associated decision making.to be able to handle the resulting complexity, more structure and reduced heuristics are needed in decision making.ItS hold the promise of addressing this need and have the potential to significantly contribute to a modernization of risk assessment science.Along with the challenges, ItS have a unique opportunity to contribute to the toxicology of the 21 st century by providing frameworks and tools to actually implement 21 st century toxicology data in the chemical management and decision making processes.
ItS development requires a conceptually consistent and transparent framework for data integration and efficient guidance of the testing.In order to make a real impact, more research on ItS operational frameworks is required.Operational frameworks will provide methodologies to identify decisionrelevant information in the potentially unmanageable heap of information that we may be able to generate.It is essential that their structure is adaptive, recognizing that while preserving consistency, ITSs are supposed to flexibly accommodate substance-specific, exposure-related information as well as new mechanistic knowledge.
Because high-throughput datasets may suffer from technical and biological noise or from various technical biases and biological shortcomings, improved statistics are needed for the separation of signal from noise, as well as for better data integration annotating biologically relevant relationships. the logical interpretation of the complex signal propagation leading to an observed effect is not easily comprehensible.therefore, computational modelling can be expected to play a crucial role in predicting the output from the signal input or system perturbation to obtain a more comprehensive, less technically biased and more accurate view of the true effect.
Due to its complexity, progress in ItS research requires a combined expertise from several life science fields, leveraging tools, methodologies, and technologies that were not traditionally used by toxicologists.Building such multidisciplinary teams presents another, organizational, challenge.

References
Ahlers, J., Stock, F. and Werschkun, B. (2008).Integrated testing and intelligent assessment -new challenges under ReACH.Environ.Sci. Pollut. Res. 15, 565-572. Alonzo, t. A. and Pepe, M. S. (1999).Using a combination of reference tests to assess the accuracy of a new diagnostic test.Stat.Med. 18, 2987-3003. Alonzo, t. A. andPepe, M. S. (1998).Assessing the accuracy of the Bayesian network methodology is a formal approach for evidential reasoning that has been proven useful in many different domains, including medical diagnosis and testing and bioinformatics.A BN is a quantitative approach allowing for consistent, transparent, and reproducible inferences and decisions suited to combining information from multiple, heterogeneous sources.It is able to handle noise data with varying degrees of uncertainty.the probabilistic approach allows differentiation of relations that are known with a high level of certainty and those that are more speculative.It resolves conflicting evidence, reasons consistently given different and incomplete data sets.The Bayesian network formulation offers flexibility that can be used to express knowledge on major lines of evidence and on specific evidence on the level of individual tests.this functionality is very important because we expect ItS to have hierarchical structures.BN can also be used for guiding adaptive testing strategies based on dynamic calculations of Mutual Information and Value of Information (Pearl, 1988).A "one step look-ahead hypothesis" approach is used to identify information that has the highest potential to refine hypothesis variables.Reduction in the certainty of the evidence synthesis outcome related to conditional dependence between tests can be demonstrated and taken into account while assessing information gains from multiple assays.Although a BN currently seems an appealing solution to ItS design, it certainly is not the only solution, and others will emerge as use of ItS become more established.

Skin sensitization example
Using above considerations, Jaworska et al. (2010b) developed an ItS for skin sensitization in the form of a Bayesian network.The structure of the developed network reflects the current knowledge mapping about skin sensitization and includes the three key processes of dermal penetration, reaction with proteins, and dendritic cell activation.the framework combines prior biological knowledge with heterogeneous experimental evidence from twelve in silico, in chemico and in vitro tests and generates a probabilistic hypothesis about the skin sensitization potency of a chemical in the local lymph node assay (llNA), which has served as the reference standard (Fig. 1).Inputs to bioavailability have been generated in silico and include molecular weight, the octanol/water partition coefficient (Log Kow), and calculated variables related to penetration based on a dynamic skin model (dose absorbed systemically, free chemical concentration in the skin, and maximum concentration in the epidermis).Inputs characterizing reactivity include data from in chemico tests such as reactivity with lysine, cysteine peptides and ARe-luciferase reactivity.Finally, the evidence related to dendritic cells is based on CD86 expression and Il-8 production of the human lymphoma cell line U937.Jaworska et al. (2010b) demonstrated how to use the BN both as a purely evidence synthesis tool as well as a tool to guide testing strategies.Moreover, BN-based ItS present an approach toward reduction and refinement postulated by Bessems (2009) and
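As a rough illustration of the evidence synthesis step in such a Bayesian network ITS, the sketch below combines two binary test results into a posterior over a binary information target, assuming conditional independence of the tests given the target. All test names and probabilities are invented for illustration; the published network uses many more nodes, multi-state potency classes, and latent variables.

```python
# Minimal sketch of Bayesian evidence synthesis for a binary information
# target ("sensitizer" vs "non-sensitizer"). All probabilities are
# hypothetical illustrations, not values from Jaworska et al. (2010b).

PRIOR = {"sensitizer": 0.5, "non-sensitizer": 0.5}

# P(test result = positive | class), assuming conditional independence of tests
LIKELIHOOD_POS = {
    "peptide_reactivity": {"sensitizer": 0.80, "non-sensitizer": 0.10},
    "CD86_expression":    {"sensitizer": 0.70, "non-sensitizer": 0.20},
}

def posterior(evidence, prior=PRIOR, likelihood=LIKELIHOOD_POS):
    """Update the class distribution given observed test results.

    evidence: dict mapping test name -> True (positive) / False (negative).
    """
    unnorm = {}
    for cls, p in prior.items():
        for test, result in evidence.items():
            p_pos = likelihood[test][cls]
            p *= p_pos if result else (1.0 - p_pos)
        unnorm[cls] = p
    z = sum(unnorm.values())
    return {cls: p / z for cls, p in unnorm.items()}

# Two concordant positive results sharpen the hypothesis considerably
post = posterior({"peptide_reactivity": True, "CD86_expression": True})
```

The posterior spread is exactly the "uncertainty distribution for the information target" mentioned above: a narrow posterior supports a decision, a flat one signals that more evidence should be generated.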

Fig. 1: Bayesian network ITS structure to reason about skin sensitization. The LLNA potency is the information target of this ITS. The bioavailability, reactivity, and dendritic cell nodes are latent (unobservable) variables and represent combined evidence from the observable tests (nodes) connected to them. The latent variables, which can be interpreted as major lines of evidence, connect to the information target. The nodes are characterized by probability distributions. The arcs are characterized by conditional probability tables. Arcs are tagged with mutual information values that change upon providing evidence to the network.
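The mutual information tagging of the arcs, and the one-step look-ahead test selection it supports, can be sketched as follows. The two joint distributions are hypothetical: candidate tests are ranked by their mutual information with the information target, and the most informative test would be run next.

```python
# Sketch of the "one step look-ahead": rank candidate tests by their mutual
# information with the information target. Joint probabilities are hypothetical.
import math

def mutual_information(joint):
    """joint: dict {(target_state, test_state): probability}. Returns MI in bits."""
    pt, ps = {}, {}
    for (t, s), p in joint.items():
        pt[t] = pt.get(t, 0.0) + p   # marginal of the target
        ps[s] = ps.get(s, 0.0) + p   # marginal of the test
    mi = 0.0
    for (t, s), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pt[t] * ps[s]))
    return mi

# Hypothetical joint distributions of the target with two candidate tests
informative   = {("pos", "+"): 0.45, ("pos", "-"): 0.05,
                 ("neg", "+"): 0.05, ("neg", "-"): 0.45}
uninformative = {("pos", "+"): 0.25, ("pos", "-"): 0.25,
                 ("neg", "+"): 0.25, ("neg", "-"): 0.25}

ranked = sorted(
    {"test_A": informative, "test_B": uninformative}.items(),
    key=lambda kv: mutual_information(kv[1]),
    reverse=True,
)
# test_A carries more information about the target and would be run first
```

In a full BN these joints are recomputed after each new piece of evidence, which is why the arc tags in Fig. 1 change as evidence is entered.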