ToxAIcology – The Evolving Role of Artificial Intelligence in Advancing Toxicology and Modernizing Regulatory Science

Toxicology has undergone a transformation from an observational science to a data-rich discipline ripe for artificial intelligence (AI) integration. The exponential growth in computing power, coupled with the accumulation of large toxicological datasets, has created new opportunities to apply techniques like machine learning and especially deep learning to enhance chemical hazard assessment. This article provides an overview of key developments in AI-enabled toxicology, including early expert systems, statistical learning methods like quantitative structure-activity relationships (QSARs), and recent advances with deep neural networks.


Toxicology is the scientific discipline focused on understanding the adverse effects of chemical, physical, and biological agents on living organisms and ecosystems. It seeks to identify and characterize the association between exposures and toxic outcomes, determine mechanisms of action, and enable risk assessment of toxins and toxicants (Leist et al., 2017). This involves generating, integrating, and analyzing data from diverse sources including animal studies, epidemiology, clinical reports, in vitro assays, and various omics approaches. While the enormous economic role (Meigs et al., 2018) of toxicology in ensuring product safety and preventing disease is obvious, the discipline is notorious for its delayed adoption of new approaches. It is probably the only discipline where key approaches have essentially remained the same for over 50 years. The main point of this article is that the field stands to benefit all the more from embracing AI, as a dAIus ex machina1 to solve many of the problems of today's safety sciences.
While toxicology has traditionally relied on small-scale in vivo toxicity studies and observational evidence, the advent of high-throughput screening assays, adverse outcome pathways, systems biology models, sensor technologies, and biomonitoring data has led to an exponential growth in the volume of heterogeneous evidence (Kavlock et al., 2012; Krewski et al., 2020). For instance, the Tox21 initiative2 has screened over 8,000 chemicals in ~85 high-throughput assays. Combined with increasing electronic health record data, chemical databases, the literature corpus, and various omics measurements, toxicology has been transformed from a historically data-poor to an increasingly data-rich scientific discipline (Hartung, 2023a).
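To illustrate how accessible such public screening data have become, the following minimal sketch loads the Tox21 dataset through DeepChem's MoleculeNet loader; the featurizer and splitter choices are illustrative assumptions, not a prescription from this article.

```python
# Minimal sketch: loading the public Tox21 screening data with DeepChem's
# MoleculeNet loader (assumes `pip install deepchem`). Featurization and
# splitting choices here are illustrative, not those of any cited study.
import deepchem as dc

# load_tox21 returns the assay task names, featurized train/valid/test
# splits, and the transformers that were applied to the data
tasks, (train, valid, test), transformers = dc.molnet.load_tox21(
    featurizer="ECFP",   # circular fingerprints as input features
    splitter="random",
)

print(f"{len(tasks)} assay endpoints, e.g. {tasks[:3]}")
print(f"training set: {train.X.shape[0]} compounds x {train.X.shape[1]} features")
```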
This transition creates major opportunities as well as complexities for the integration of AI techniques. The volume, variety, veracity, and velocity of toxicological data pose challenges but also offer fertile ground for applying machine learning and AI approaches. In particular, the ability of AI methods to analyze large volumes of heterogeneous data makes them well suited to handle the complexity of modern toxicology research. AI has the potential to transform areas like predictive toxicology, chemical screening, risk assessment, mechanistic understanding, and knowledge integration (Tang et al., 2018). In medicine, AI can similarly accelerate drug development, enable personalized medicine approaches tailored to genomics, and automate or augment human decision-making in patient care and biomedical research. Large language models, the currently much-hyped form of generative AI such as ChatGPT, hold promise for medicine (Dave et al., 2023).
This article provides an overview of the emerging applications and promise of AI techniques in advancing toxicology research and practice. It summarizes developments reflecting increasing adoption of AI in various facets of toxicity evaluation, prediction, and safety assessment. The article discusses these domains in detail and highlights specific accomplishments and state-of-the-art tools leveraging AI. It concludes with recommendations for addressing current limitations and integrating AI techniques in a responsible manner to accelerate evidence-based toxicology focused on enhancing human and environmental health.
Overall, the rapid maturation of AI paired with the increasing availability of diverse toxicology data streams creates a timely opportunity to blend these two fields. Toxicology is poised for transformation through thoughtful adoption of modern AI. The synergistic integration of these disciplines can pave the way for the next generation of predictive, mechanistic, and data-driven safety science.
As apparent from this introduction, the article takes a positive stance on AI as an enabling technology, like Peter Diamandis (Diamandis and Kotler, 2012), unlike other voices such as Stephen Hawking: "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." I will argue, however, for upholding legal and ethical principles, for humans in the loop, and against autonomous AI.

Digital pathology
Pathology provides the anatomical and physiological evidence that underpins toxicological science. It offers a comprehensive view of the toxic effects of substances, from the cellular level to the whole organism, and is indispensable for risk assessment, regulatory compliance, and the advancement of toxicology as an evidence-based science. Pathology plays a critical role in toxicology by providing the empirical evidence needed to understand the effects of substances on biological systems:
− Identification of target organs: Pathological examinations help identify which organs are most affected by a toxic substance. This is crucial for risk assessment and the development of safety guidelines.
− Mechanistic insights: Pathology can reveal the cellular and molecular mechanisms by which a substance exerts its toxic effects. Understanding the mechanism of action is essential for predicting toxicity and for the development of antidotes or preventive measures.
− Dose-response relationship: Pathological findings can help establish the dose-response relationship of a toxic substance, which is fundamental for determining safe exposure levels.
− Temporal aspects: Pathology can provide insights into the time-course of toxic effects, helping to distinguish between acute and chronic toxicity.
− Biomarker identification: Pathological studies can lead to the identification of biomarkers that can be used for early detection of toxic effects, long before clinical symptoms appear.
− Validation of non-animal models: In the context of alternatives to animal testing, pathology serves as a gold standard against which the results of in vitro and in silico models can often be compared.
− Ethical benefits: Pathological techniques like histopathology can sometimes reduce the number of animals needed for toxicological testing by providing more comprehensive data from each animal.
− Comparative analysis: Comparative pathology allows for the evaluation of toxic effects across different species, aiding in the extrapolation of data from animal models to humans.
− Quality control: Pathological assessments are integral to quality control processes in industries like pharmaceuticals and agrochemicals, ensuring that products are both effective and safe.
− Regulatory compliance: Pathological data are often required for regulatory approval of new drugs, chemicals, and other substances to ensure they meet safety standards.
Pathology has embraced digitalization and AI (Madabhushi and Lee, 2016; Tizhoosh and Pantanowitz, 2018; Niazi et al., 2019; Chang et al., 2019) faster than toxicology. The full integration of "digital pathology" into toxicology presents a transformative opportunity to enhance the accuracy, efficiency, and reproducibility of toxicological assessments. In the field of alternative methods, this becomes particularly relevant with the move to microphysiological systems (MPS) (Marx et al., 2016, 2020; Roth et al., 2019). Here is a summary of the key opportunities:
− High-resolution imaging: Digital pathology allows for the capture of high-resolution images of tissue samples, which can be analyzed in greater detail compared to traditional microscopy. This is particularly useful for identifying subtle morphological changes indicative of toxicity in human or animal samples as well as tissue equivalents of MPS.
− Quantitative analysis: Digital tools enable the quantitative analysis of pathological features, such as cell count, morphology, and staining intensity. This provides a more objective and reproducible measure of toxic effects compared to subjective, human-based evaluations (see the sketch after this list).
− Machine learning and AI integration: Advanced algorithms and machine learning models can be trained to recognize patterns and anomalies in tissue samples, thereby aiding in the early detection of toxicological effects. This is especially beneficial for large-scale screening studies.
− Data management and collaboration: Digital pathology systems facilitate easier data management and sharing. Researchers can access digital slides from anywhere, fostering collaboration and enabling multi-center studies without the need to physically transfer slides.
− Time and cost efficiency: Automated image analysis can significantly reduce the time required for evaluations, thereby accelerating the toxicological assessment process. This is crucial for industries like pharmaceuticals, where time-to-market is a key factor.
− Ethical considerations: Digital pathology can be integrated with in vitro and in silico models, aligning with the 3Rs principle (replacement, reduction, refinement) in toxicology. This holds both for getting more and better information from animal models that are not yet replaceable and for the analysis of in vitro systems, especially MPS, or to feed into computational approaches. This could minimize the use of animals in toxicological studies.
− Personalized toxicology: The data-rich nature of digital pathology can be leveraged for more personalized assessments, considering individual variations in susceptibility to toxic substances. This can be imagined also for the comparison of MPS derived from different donors' iPSC lines.
While the opportunities are vast, challenges such as data security, standardization of digital formats, and the need for specialized training should not be overlooked. While increasingly embraced in clinical pathology, the use of digital pathology in toxicological studies lags behind. In summary, digital pathology has the potential to revolutionize toxicological assessments by providing more accurate, efficient, and ethical methods of evaluation. Its integration and translation into toxicology could significantly advance the field, making it more aligned with the principles of evidence-based science.
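As a toy illustration of the quantitative analysis opportunity listed above, the following sketch counts nuclei and measures staining intensity in a slide image with scikit-image; the file name, stain assumptions, and thresholds are hypothetical placeholders that would need tuning for any real assay.

```python
# Illustrative sketch of quantitative digital pathology: count nuclei and
# measure mean staining intensity with scikit-image. The image file is a
# hypothetical placeholder; thresholds must be tuned per stain and assay.
import numpy as np
from skimage import io, color, filters, measure, morphology

img = io.imread("liver_section.png")       # hypothetical stained section (RGB)
gray = color.rgb2gray(img[..., :3])        # drop alpha channel if present

# Otsu threshold separates dark stained nuclei from the lighter background
mask = gray < filters.threshold_otsu(gray)
mask = morphology.remove_small_objects(mask, min_size=30)  # de-speckle

labels = measure.label(mask)
regions = measure.regionprops(labels, intensity_image=gray)

print(f"nuclei counted: {labels.max()}")
print(f"mean nuclear area: {np.mean([r.area for r in regions]):.1f} px")
print(f"mean staining intensity: {np.mean([r.mean_intensity for r in regions]):.3f}")
```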

Evolution of AI in toxicology
The advent of AI has heralded a transformative era across various scientific domains, and toxicology is no exception. As we increasingly transition toward high-throughput and high-content data (Leist et al., 2008; Hartung and Leist, 2008) and in silico models (Hartung and Hoffmann, 2009), a nuanced understanding of the role of AI in toxicology becomes indispensable for modernizing regulatory science. This chapter aims to provide an overview of the essential components that synergize AI and toxicology, setting the stage for a deeper exploration of each in the subsequent sections.
The trajectory of applying AI for toxicity prediction and mechanistic understanding mirrors advances in data availability, algorithm capabilities, and cross-disciplinary synergies between computer scientists and toxicologists. The integration of AI approaches presents tremendous potential to transform various aspects of toxicology, from predictive modeling to risk assessment and mechanistic elucidation. Of particular promise are machine learning techniques like deep neural networks, which can be trained on existing toxicity datasets to predict hazards associated with new chemical entities (Paul et al., 2021). Such predictive models could reduce reliance on animal testing. Equally suited to mining legacy animal studies and the scientific literature, AI methods enable extracting key facts and relationships from vast corpora of unstructured data (Williams et al., 2021). They facilitate analyzing high-throughput screening and omics data to reveal patterns that underpin toxicity pathways.
Initial efforts in the 1980s-1990s focused on rule-based expert systems like DEREK, METEOR, and OncoLogic, which encoded human expert knowledge for toxicity prediction (Benfenati and Gini, 1997). However, comprehensive knowledge representation via rigid rules proved challenging. Small datasets further constrained predictive performance, though these pioneering efforts provided promising proofs-of-concept on complementing human reasoning with prediction tools. Emphasis soon shifted to statistical and machine learning models driven by accumulating data. Quantitative structure-activity relationships (QSARs) utilized techniques like regression and support vector machines to relate chemical descriptors to toxicity (Cherkasov et al., 2014). Public efforts like the OECD QSAR Toolbox compiled data and algorithms for regulatory QSARs. However, reliance on human-engineered descriptors, small datasets, and simplistic models limited robustness (Hartung and Hoffmann, 2009).
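A minimal sketch of this classical descriptor-based QSAR workflow, assuming RDKit for descriptor calculation and scikit-learn for the support vector machine; the SMILES strings and toxicity labels are toy placeholders, not data from any of the cited studies.

```python
# Sketch of the classical QSAR workflow: compute human-engineered chemical
# descriptors with RDKit and relate them to a toxicity label with a support
# vector machine. SMILES and labels are toy placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCCCCCCCBr"]
labels = [0, 0, 0, 1]  # hypothetical non-toxic/toxic annotations

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    # a handful of human-engineered descriptors, as in early QSAR models
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

X = np.array([featurize(s) for s in smiles])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict(X))  # descriptor-based toxicity calls
```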
Growing recognition in the 2000s of the need for curated public data led to repositories like PubChem, ACToR, ToxRefDB, ChEMBL, and ToxCast aggregating volumes of chemical bioactivity data and high-throughput assay results (Judson et al., 2014). Diverse evidence streams warranted advanced machine learning capable of integrating heterogeneous modalities.
The current era features a rising adoption of deep learning, leveraging neural networks to automatically learn features and patterns from raw data (Mayr et al., 2016; Luechtefeld et al., 2018b). Deep learning on chemical and bioassay data has shown successes for various toxicity predictions and mechanisms. Active research is expanding into multi-modal deep learning, synthetic data generation, and interpretable models. Extensive public datasets available from efforts like Tox21 further AI capabilities. Overall, the progress in applying AI for toxicity evaluation closely reflects growth in data resources and algorithms. While early expert systems were promising proofs-of-concept, the lack of large, curated datasets was a key limitation. The availability now of extensive evidence coupled with modern deep learning offers immense potential to handle the complexities of toxicology and provide robust computational tools to augment human insights.
An emerging priority as AI matures is shifting from purely predictive modeling to uncovering causal relationships and integrating mechanistic insights. Causal analysis methods are gaining traction to infer plausible mechanisms from observational data (Schölkopf et al., 2021). Representing prior domain knowledge via graphical causal models helps derive explanatory networks from multifaceted toxicology data. Efforts are ongoing to integrate adverse outcome pathways and systems biology models with AI to provide biological contextualization. Techniques like graph neural networks can combine chemical, assay, and omics data to refine mechanistic understanding. Parallel advances are taking place in human-centered AI design: developing transparent and trustworthy models tailored to user needs. Initiatives like the DARPA xAI program (Gunning et al., 2021) spearhead work on explainable AI (xAI) to increase model interpretability for human users (Gilpin et al., 2018). Engaging diverse stakeholders, communicating insights accessibly, and participatory design will be pivotal going forward. Overall, causality, mechanistic integration, and human-centric techniques are at the frontier of advancing AI for toxicology beyond just predictive modeling.
Even for AI techniques considered "black boxes", recent progress in xAI (Minh et al., 2022) offers potential to unravel the intricate mechanisms underlying chemical toxicity predictions (Tang et al., 2018; Tetko et al., 2022). No model development process is complete without rigorous evaluation and tuning to ensure optimal performance. Translating theoretical strides into regulatory use facilitates real-world application, like AI-enabled risk assessment. With increasing implementation in sensitive domains, understanding ethical dimensions is critical to eliminate bias and ensure fair predictions. Perhaps the most exciting aspect is the immense potential when AI expertise across regulatory science and industry is shared, including the wealth of data currently behind the firewalls of stakeholders.
As AI continues its rapid evolution, its symbiotic relationship with advanced toxicology has the potential to redefine the paradigms of regulatory science. The subsequent chapters will delve deeper into each of these facets, elucidating the mechanisms, challenges, and promises that lie ahead.

Predictive toxicology
Predicting potential toxicity and adverse effects of chemicals is a crucial application of computational methods in toxicology.
Traditional quantitative structure-activity relationship (QSAR) models have limitations in predictive power, as they rely solely on chemical descriptors and lack the capacity to integrate diverse data types (Hartung and Hoffmann, 2009). The field of computational toxicology has seen rapid growth over the last decade, with the maturation of machine learning and AI tools enabling more robust toxicity prediction. Deep learning, a dominant AI technique, refers to neural network models with multiple layers capable of learning hierarchical representations from raw input data (LeCun et al., 2015). With sufficient training data, deep learning models can effectively capture complex relationships between chemical structure, bioactivity, and toxicity. Various types of neural networks, like convolutional and recurrent networks, have been applied for diverse toxicity prediction tasks.
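As a minimal sketch of such a model, the following trains a small multilayer perceptron on stand-in fingerprint data with PyTorch, producing probabilistic toxicity outputs; the architecture, dimensions, and data are illustrative assumptions, not any published model.

```python
# Minimal sketch of a deep neural network for toxicity prediction: a small
# multilayer perceptron over molecular fingerprints with a sigmoid output
# giving a probability of toxicity. Data are random placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randint(0, 2, (256, 1024)).float()  # stand-in fingerprint bits
y = torch.randint(0, 2, (256, 1)).float()     # stand-in toxic/non-toxic labels

model = nn.Sequential(
    nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 64),   nn.ReLU(),
    nn.Linear(64, 1),     # logit for the toxicity class
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):          # a few illustrative training epochs
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

prob_toxic = torch.sigmoid(model(X[:5])).squeeze(1)
print(prob_toxic)  # probabilistic predictions rather than binary calls
```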
One seminal study demonstrated the power of deep learning for predicting mutagenicity, achieving performance comparable to in vitro mutagenicity assays for over 90% of chemicals (Mayr et al., 2016). Deep learning models integrating chemical structure and bioactivity data have shown high predictive accuracy for rodent carcinogenicity, outperforming traditional QSAR models (Li et al., 2021). Deep neural networks have also shown promise for predicting liver toxicity (Wang et al., 2019) as well as developmental and reproductive toxicity endpoints3 (Luechtefeld et al., in preparation).
A key advantage of deep learning is the capacity to learn both chemical structural features and patterns in assay bioactivity data that are predictive of toxicity (Hartung, 2016). This ability to fuse heterogeneous data allows for more robust toxicity prediction (Luechtefeld et al., 2018a). Deep learning methods are also able to handle large and complex toxicological datasets. Importantly, deep learning models provide probabilistic predictions conveying the confidence of toxicity potential rather than binary classifications.
However, reliance on large training datasets, lack of interpretability, and susceptibility to data biases remain challenges. Ongoing research aims to use xAI techniques to increase model interpretability. Overall, as chemically diverse and multi-modal toxicological datasets grow, deep learning shows immense potential to transform predictive toxicology.

Data analysis
Modern toxicology research involves generating and analyzing large heterogeneous datasets from various sources. These include scientific literature, legacy animal studies, high-throughput screening assays, and diverse omics measurements. Manual curation and analysis of such multifaceted big datasets is infeasible. AI methods offer computational solutions by automating data extraction, normalization, integration, and mining. Natural language processing techniques enable mining of unstructured textual data in published literature and old animal toxicity studies to extract relevant facts, relationships, and experimental findings (Kleinstreuer et al., 2016b). This allows more efficient use of existing evidence. For high-throughput screening data, AI facilitates automating quality control, hit-calling, and data cleaning to streamline analysis (Allen et al., 2014); a toy hit-calling sketch follows at the end of this section.
Integrating diverse omics datasets to derive mechanistic insights is challenging but crucial for toxicologists. AI methods like graph neural networks can integrate transcriptomics, metabolomics, and lipidomics data to enable multi-omics analyses (Reel et al., 2021). Dimensionality reduction techniques allow AI models to integrate disparate datasets and find patterns predictive of toxicity phenotypes.
Causality assessment to infer plausible adverse outcome pathways from observational data is an active AI research area with applicability in toxicology (Hemmerich and Ecker, 2020; Lin and Chou, 2022). Generative adversarial networks (GANs), generative models that create new data instances resembling the training data, show promise for generating synthetic toxicology data where real-world evidence is sparse or unavailable (Chen et al., 2021).
However, issues like batch effects, reproducibility challenges, hidden biases, and the need for multidisciplinary expertise remain caveats for applying AI in toxicological data analysis. Ongoing efforts like Tox21 are helping generate quality curated datasets to realize the promise of AI-driven data analysis in toxicology.
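The sketch below illustrates the automated quality control and hit-calling step mentioned above, using a plate-level Z'-factor and robust z-scores; the plate data, control layout, and thresholds are simulated assumptions.

```python
# Illustrative sketch of automated HTS quality control and hit-calling:
# a Z'-factor from control wells gates plate quality, then hits are called
# by robust z-score. All plate data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(100, 10, size=384)      # one simulated 384-well plate
signal[::48] = rng.normal(20, 5, size=8)    # spiked-in "active" wells
pos_ctrl = rng.normal(20, 3, size=16)       # positive-control wells
neg_ctrl = rng.normal(100, 5, size=16)      # negative-control wells

# Z'-factor: plate passes QC when control separation is adequate (> 0.5)
z_prime = 1 - 3 * (pos_ctrl.std() + neg_ctrl.std()) / abs(pos_ctrl.mean() - neg_ctrl.mean())
print(f"Z'-factor: {z_prime:.2f} ({'pass' if z_prime > 0.5 else 'fail'})")

# Robust z-score per well (median/MAD resists outliers); |z| > 3 => hit
mad = np.median(np.abs(signal - np.median(signal)))
robust_z = (signal - np.median(signal)) / (1.4826 * mad)
hits = np.where(robust_z < -3)[0]
print(f"{hits.size} hit wells: {hits[:10]}")
```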

Challenges and opportunities of AI in toxicology
While AI promises to be transformative for toxicology, certain challenges need to be proactively addressed to responsibly translate AI's potential into impactful applications (Jia et al., 2023). Some key limitations and mitigation strategies include:
− Data bias and quality: AI models are prone to reinforcing biases in training data, which can lead to inaccurate predictions. Ensuring curated, representative, and unbiased datasets for model development is crucial. The old rule of garbage in, garbage out holds. Notably, if humans can sort out the garbage, machines can learn to do this too; the question is only who sorts out the garbage.
− Reproducibility: Variability in AI model development and evaluation protocols hinders reproducibility. Standardized reporting guidelines and rigorous validation are important.
− Interpretability: Complex AI models like deep learning are poorly understood "black boxes", limiting the mechanistic insights that are critical in toxicology. Advances in xAI techniques to derive understanding from models are warranted; xAI is certainly key for regulatory use of these methods.
While AI models like deep neural networks achieve high predictive performance, their inner workings are often opaque, making them hard to interpret. This black-box nature poses challenges for applications in toxicology, where mechanistic transparency and causal explanations are crucial. The field of xAI aims to address these interpretability issues (Linardatos et al., 2020; Angelov et al., 2021; Saranya and Subhashini, 2023). xAI refers to methods for producing more interpretable models while also enabling human-understandable explanations for individual predictions. Strategies like visualizing hidden layer activations, occlusion analysis, and perturbation-based approaches are being applied to unpack AI toxicity models. Local explanation methods can determine the influence of different input chemical features on specific toxicity predictions. Global explanation techniques characterize the entire model behavior through surrogate models or summary visualizations. xAI implementations (Gilpin et al., 2018) are being standardized through open-source libraries like InterpretML and initiatives such as DARPA's xAI program (Gunning et al., 2021). Most xAI approaches remain model-specific and focused on post-hoc explanations. Advances are needed for standardized, human-centered xAI techniques applicable across model types and toxicology use cases. Alignment with adverse outcome pathway frameworks could provide mechanistic and causal insights. Overall, xAI will be key for increasing trust and transparency in AI-based decision support systems for regulatory toxicology.
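As a minimal illustration of the perturbation-based local explanation strategies described above, the following sketch flips individual input bits of one compound's fingerprint and records the shift in predicted toxicity probability; the model and data are random placeholders.

```python
# Sketch of a perturbation-based local explanation: flip each input bit of
# one compound's fingerprint and record how the predicted toxicity
# probability shifts. Model and data are random placeholders in which
# features 3 and 17 drive the label, so they should surface as influential.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64)).astype(float)  # stand-in fingerprints
y = (X[:, 3] * 0.7 + X[:, 17] * 0.3 + rng.normal(0, 0.1, 500) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x = X[0].copy()
base = model.predict_proba([x])[0, 1]     # baseline toxicity probability

influence = np.zeros(x.size)
for j in range(x.size):                   # occlusion-style bit flipping
    x_pert = x.copy()
    x_pert[j] = 1 - x_pert[j]
    influence[j] = model.predict_proba([x_pert])[0, 1] - base

top = np.argsort(np.abs(influence))[::-1][:5]
for j in top:
    print(f"feature {j:2d}: delta p(toxic) = {influence[j]:+.3f}")
```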
However, AI also presents promising opportunities, and several key benefits can arise from thoughtfully integrating AI into modern toxicology practices (Fig. 1):
− Animal replacement: AI predictive models trained on existing data can reduce reliance on animal studies to screen new chemicals for toxicity (Luechtefeld et al., 2018b). By learning from accumulated evidence, deep neural networks can generalize to new chemicals not tested in animals.
− Accelerated assessments: AI can automate tedious tasks like data extraction and integration, allowing toxicologists to focus on high-value complex analyses to accelerate safety evaluations. Advanced techniques like natural language processing facilitate mining vast textual resources.
− Democratized knowledge: AI systems and user-friendly interfaces can enable easy access to prediction tools and curated toxicity databases to aid various end-users from regulators to industry, e.g., the CompTox Chemistry Dashboard (Williams et al., 2017, 2021) or the Integrated Chemical Environment (ICE)4 (Bell et al., 2020). Publicly accessible platforms empower non-specialists to tap into state-of-the-art safety information.
Overall, a responsible adoption of AI in toxicology research and regulation requires proactively addressing interpretability, data, and reproducibility concerns while harnessing the power of AI to transform evidence generation, integration, and prediction.
Looking ahead, AI adoption is expected to accelerate across the entire chemical safety assessment and decision-making workflow (Tang et al., 2018; Kleinstreuer et al., 2016a). Figure 2 summarizes what I like to call ToxAIcology (Hartung, 2023a). Several potentially transformative directions for integrating AI into modern toxicology include (Fig. 3):
− Predictive toxicology models based on deep learning and reinforcement learning that generalize across diverse chemical and biological endpoints by leveraging massive, curated data resources (Luechtefeld and Hartung, 2017; Luechtefeld et al., 2018a,b; Wang et al., 2021).
− Precision toxicology incorporating genetic, microbiome, and exposure information via AI to enable individualized toxicity risk assessment (Wetmore et al., 2014; Singh et al., 2023).
− In silico to in vivo extrapolation platforms utilizing in vitro, in silico, and exposure data integrated via AI to promise human population-based risk forecasts9 (Hartung, 2023b), as attempted in the ONTOX project (Vinken et al., 2021).
− High-throughput robotic toxicity testing guided by sensors and automated experimental design to efficiently screen environmentally relevant chemical mixtures.
− Toxicity data mining algorithms automating literature analysis, study appraisal, legacy data extraction, and multi-modal evidence synthesis to construct robust backends for safety decisions.
− Explainable and trustworthy AI techniques increasing model interpretability and transparency to elucidate mechanisms and meet regulatory needs.
However, thoughtful translation is essential. Tom Chatfield, British author and self-described "tech philosopher", rightly warns: "Forget artificial intelligence - in the brave new world of big data, it's artificial idiocy we should be looking out for." Cross-disciplinary teams should collaborate to ensure interpretable, robust, and human-centered systems. Communication and engagement with regulatory authorities will also be key for eventual adoption of validated AI tools. Mainstreaming AI literacy in toxicology education and training programs is crucial. Overall, AI should be responsibly integrated to promote open and democratized scientific progress protecting human and environmental wellbeing.
In summary, AI has progressed rapidly from early expert systems to modern disruptive techniques like deep learning that hold immense promise for advancing predictive toxicology. However, considerable efforts will be needed to integrate AI responsibly into workflows through human-centered design approaches. Coupling AI with ongoing improvements in primary evidence generation and appraisal is pivotal for it to robustly augment the human sciences. With a collaborative spirit and a focus on ethics and translational rigor, AI can transform toxicology into a more predictive, mechanism-based, and evidence-integrated discipline to effectively promote public health and environmental protection.

Risk assessment
Risk assessment is central to regulatory decision-making in toxicology. It involves integrating diverse evidence across biological levels to determine the probability of adverse effects under specific exposure conditions. The author likes to stress the probability aspect of this process (Maertens et al., 2022).
AI is well suited for data-driven quantitative risk assessment due to its capacity for probabilistic modeling and its ability to capture uncertainties in a systematic manner (Mak and Pichika, 2019; Pérez Santín et al., 2021; Tran et al., 2023). This allows more robust quantitative assessment compared to traditional empirical or deterministic procedures.
Most AI models provide predictive outputs as probabilities or confidence levels rather than binary classifications. This allows capturing and propagating various uncertainties in the risk modeling workflow. AI can account for population variability by incorporating exposure, toxicokinetic, and toxicodynamic data (Zhang et al., 2014). This enables refined risk probabilities and margin-of-exposure calculations (see the sketch at the end of this section). AI can help address challenges in extrapolating dose-response or exposure-response relationships from the high doses typical in toxicology studies to the low doses relevant for risk assessment. Deep learning models are capable of jointly analyzing diverse assay endpoints with in vivo data to derive robust point-of-departure metrics for low-dose extrapolation (Thomas et al., 2013).
Active research areas include aggregating human and animal data, combining in vitro and in vivo evidence (Caloni et al., 2022), and incorporating mechanistic biological knowledge into probabilistic risk models (Maertens et al., 2022). However, issues like model interpretability, uncertainty quantification, and bias management remain challenges. Alignment with adverse outcome pathways and quantitative IVIVE approaches is warranted. Overall, as toxicology progresses from qualitative hazard identification to quantitative risk-based paradigms, adoption of AI for predictive risk modeling is likely to accelerate and strengthen evidence-based safety decision making.
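A minimal sketch of the probabilistic framing discussed in this section: Monte Carlo propagation of point-of-departure uncertainty and exposure variability into a margin-of-exposure distribution. All distributions and numbers are illustrative assumptions, not values from any assessment.

```python
# Sketch of probabilistic risk characterization: propagate uncertainty in a
# point of departure (POD) and variability in human exposure through Monte
# Carlo sampling to obtain a margin-of-exposure (MoE) distribution.
# All distributions and numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# POD uncertainty: lognormal around 10 mg/kg-bw/day (e.g., from dose-response modeling)
pod = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=n)

# Population exposure variability: lognormal around 0.02 mg/kg-bw/day
exposure = rng.lognormal(mean=np.log(0.02), sigma=0.8, size=n)

moe = pod / exposure
print(f"median MoE: {np.median(moe):,.0f}")
print(f"5th percentile MoE: {np.percentile(moe, 5):,.0f}")
# probability that the MoE falls below a conventional 100-fold safety margin
print(f"P(MoE < 100) = {np.mean(moe < 100):.4f}")
```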

Does the AI landscape allow regulation?
AI has permeated nearly every sector of societal functioning, not least the domains of healthcare and toxicology. Its proliferation raises questions about its regulation and integration, striking a balance between unlocking its full potential and ensuring ethical practice. A fascinating illustration of the capabilities of AI can be seen in my analysis of the commentary "Getting a grip on data and Artificial Intelligence" by Jean-Claude Burgelman5 using GPT-4. The AI system was asked to summarize, praise, and criticize the commentary and delivered very robust results. The experiment's results both amazed and alarmed, underscoring the need to approach AI's growth with a nuanced, comprehensive, and updated understanding of its implications. Does AI's growth allow and/or necessitate regulation? I addressed this question in a recent commentary6, "Can you take AI out of the wild and should you?". Our journey starts in the midst of an ongoing "gold rush": the rapid expansion of the AI industry. Growth predictions for the AI industry, showing a potential 13-fold increase within seven years7, signal both a thrilling opportunity and an urgent need for careful oversight of these developments. AI evolves rapidly and is used by dispersed actors for an array of purposes, not all foreseeable. Swift changes and unforeseeable applications make a blanket AI regulatory framework ill-fitted and limited.
Viewing the global AI landscape, AI development is not confined to the West, and the high level of AI development in China8 underlines this fact: Chinese institutions have authored 4.5 times as many papers as American institutions since 2010; China dominates AI journal citations with 28%, compared to 21% for Europe and 17% for the US; and even more strikingly, 52% of AI patents originate from China, with only 7% from the US and 4% from Europe. These numbers should be sobering for discussions of a moratorium on AI developments as well as of international regulations, and they reveal a Western bias in the discussions. An international approach to the regulation of AI advances must embrace cultural and socio-political diversity, taking into consideration the leading role of non-Western countries in AI research and innovation.
The crucial question is whether we are unlocking AI's potential or creating anarchy. Crucially, AI is marked by the potent ability to democratize data and information, helping users decipher and extract meaning from huge data repositories. AI is the answer to information flooding. This ability cannot be discarded lightly, echoing T. S. Eliot's lament: "Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?"
Realizing the future potential of AI in toxicology necessitates thoughtful translation into practice. Key priorities include:
− Promoting multidisciplinary and multi-sector collaboration in developing and applying AI tools. This includes proactive communication and engagement with regulatory authorities to enable eventual adoption of validated AI approaches.
− Mainstreaming AI literacy into toxicology education and training programs.
− Prioritizing responsible and ethical AI development practices focused on robust, transparent, and human-centered design. Responsible AI is critical to ensure AI systems for chemical risk assessment do not recommend actions that could harm people or the environment. Responsible development principles prevent unethical outcomes.
Several broader technology trends can be leveraged:
− Cloud data ecosystems: For toxicology, integrated cloud-based data ecosystems would facilitate seamless sharing of chemical hazard data across organizations, such as between research institutes and regulators, to inform risk assessments. Cloud platforms like AWS and Azure enable scaling computational power for training complex AI models on large toxicity datasets.
− Edge AI: Performing real-time toxicity predictions right at chemical production facilities using edge devices could prevent hazardous exposures. Edge AI models for toxicity could be rapidly updated with new data without needing to re-deploy large cloud models.
− Transparency and explainability: With increasing reliance on AI for regulation, transparency and explainability are essential to maintain trust and accountability. Black-box models will not suffice.
− Data-centric AI: Generative models can create synthetic toxicity data to augment real-world datasets that are limited in scope, providing more robust training data (see the sketch below for a simple stand-in). Better data management and labeling enables building larger, higher-quality datasets to improve toxicity predictions.
− Accelerated investment: Recent advances like large language models reaffirm the vast potential of AI for all areas of life. For regulatory science, increased investments are needed to accelerate development. For sustainable growth, investments should focus on practical implementations that solve real-world problems versus hype-driven activities.
In summary, leveraging these trends can significantly advance the integration of AI into the applied practice of toxicology and risk assessment. But responsible development grounded in solving impactful problems is key to realizing the full benefits. Overall, the convergence of data growth, computing advances, and algorithmic innovations has set the stage for AI to become a transformative catalyst that could advance toxicology towards more predictive, mechanistic, and integrated safety science to effectively promote public health and environmental protection.
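The data-centric trend above envisions generative models synthesizing toxicity data; as a deliberately simpler stand-in for a GAN, the following sketch balances a scarce toxic class by SMOTE interpolation (assuming the imbalanced-learn package; data are random placeholders).

```python
# The "data-centric AI" trend envisions generative models producing synthetic
# toxicity data. As a much simpler stand-in for a GAN, this sketch uses SMOTE
# to synthesize minority-class (toxic) examples by interpolating neighbors.
# Requires `pip install imbalanced-learn`; data are random placeholders.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                   # stand-in assay features
y = np.r_[np.zeros(180, int), np.ones(20, int)]  # scarce toxic class (10%)

X_aug, y_aug = SMOTE(random_state=0).fit_resample(X, y)
print("before:", Counter(y))      # {0: 180, 1: 20}
print("after: ", Counter(y_aug))  # balanced classes via synthetic samples
```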
The big picture - unleashing the potential of AI for healthcare
Healthcare, synonymous with "medicine and public health", pertains to "the maintaining and restoration of health by the treatment and prevention of disease especially by trained and licensed professionals" (Merriam-Webster). In the US, each of the 160 healthcare specialties enjoys unique needs, challenges, and vistas of opportunities. The quality of healthcare can be gauged with parameters such as safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. The contributions of AI must be examined against these goals. The entire healthcare sector, constituting 10-18% of the GDP of most developed countries, is experiencing a paradigm shift facilitated by AI. As AI capacities continue to expand, doubling every three months, it becomes evident that every aspect of healthcare is being remodeled and redefined. The promise is enormous; in the words of Stanford's Fei-Fei Li: "The day healthcare can fully embrace AI is the day we have a revolution in terms of cutting costs and improving care".
The author had the opportunity to contribute to drafting the recent World Economic Forum "Top 10 Emerging Technologies of 2023" flagship report10 on AI-facilitated healthcare (see online event11). AI technologies share several similarities across different AI applications. Notable among them are concepts like big data (literature, records, studies, omics, sensor technologies, high-throughput testing, images), IT infrastructure (structured and unstructured databases, internet, internet of things, cloud, blockchain), and machine learning (natural language processing, predictive machine learning, vision and image analysis, robotics, expert systems, speech, logistics). An example of a recent advancement is transformers, primarily used in natural language processing and computer vision, introduced by Google in 2017. A host of influential trends can be observed. These include a transition to foundational models such as large language models (LLMs) and a movement towards handling less-structured data. Big data must also be considered under different aspects, such as storage, mining, analytics, and visualization. In the context of LLMs, we only know about the training corpus of Google's model. Notably, 9% of the information used to train their LLM was health and science related, requiring open access to information. Not surprisingly, two open-access publishers, PLoS and Frontiers, were the most important sources, with Frontiers being the 15th most important overall source for the entire model12. This nicely illustrates the importance of open-access publishing to the common body of knowledge through AI13.
While overlapping with technological capabilities to some extent, healthcare capabilities extend to several structural and classifying aspects of AI contributions, from analytics of language or vision for diagnostic or prognostic applications to value addition and the creation of competencies. Reasoning includes aspects of evidence retrieval, data quality scoring, evidence integration, xAI, etc. The broadening sweep of robotics in medicine surfaces in various manifestations: from patient interaction, reviewing living conditions, supporting surgeries, and data mining to pharmacy efficiency and hospital room disinfection. This includes medical devices up to brain/machine interfaces. Interaction with patients as well as healthcare providers encapsulates this broad range. Predictive AI can be employed for data-gap filling and to support decision-making.
In healthcare, AI serves different medical goals: Monitoring and prevention of diseases are fundamental cogs in the wheel of public health and various fields of preventive medicine. The extrapolation of trends and patterns into the future bolsters decision-making on public health. The amalgamation of differing diagnostic information combined with prevalence information gained from monitoring informs diagnosis. The advent of personalized medicine sees more precise tailoring of treatment recommendations, fostered by integrative evidence. The clinical development of treatment options similarly leans on AI tools. The arena of prognosis and treatment outcomes provides fertile ground for prediction and assessment, the triage of patients, and treatment recommendations.
Healthcare's highly regulated environment necessitates that health policies play instrumental roles in managing and controlling AI use. A drawback of utilizing AI in the medical field is the risk of ethical and privacy issues. Healthcare AI systems are largely dependent on patient-specific data, which often include confidential medical details. It is crucial to guarantee that this information is acquired, preserved, and employed in a manner that prioritizes security and privacy. Safeguarding the privacy of patients, ensuring the confidentiality of their data, and thwarting unauthorized access to individual health records are key factors that must be addressed. Key features encompass compliance also with ethical principles beyond data privacy, such as patient and consumer protection, public health concerns, and human-in-the-loop requirements. Grounded in this context, the need for quality assurance (QA) and validation gains monumental significance.
Navigating the rapidly growing corpus of scientific and medical knowledge presents a formidable challenge for healthcare professionals. With PubMed alone adding approximately 1 million new articles annually to its existing database of over 35 million biomedical citations, the volume of information has surpassed human capacity for synthesis and interpretation. However, AI technologies excel in managing large-scale datasets efficiently. Advanced language models, for instance, can scan millions of research papers, identify key findings, and discern patterns or relationships that might elude human researchers. Consider the example of bioGPT, unveiled in February, which demonstrates remarkable efficiency in the annotation of scientific papers. Achieving an 84% accuracy rate, bioGPT surpasses the average human accuracy rate of 80% for the same task14 (Luo et al., 2022). More impressively, the system can simultaneously analyze a multitude of papers, integrate the gleaned information, and even propose novel hypotheses and experimental designs based on existing literature. Such capabilities hold the promise of significantly accelerating the rate of scientific inquiry and innovation, ultimately leading to more rapid and cost-effective advancements in medical treatments.
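A minimal sketch of querying BioGPT through the Hugging Face transformers pipeline; the prompt and generation settings are illustrative assumptions and do not reproduce the annotation protocol behind the accuracy figures cited above.

```python
# Minimal sketch of biomedical text generation with BioGPT via the Hugging
# Face transformers pipeline (assumes `pip install transformers` and access
# to the public "microsoft/biogpt" checkpoint). Prompt and settings are
# illustrative only, not the cited annotation benchmark.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")

prompt = "Bisphenol A exposure is associated with"
out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
```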

The democratizing effect of AI promotes access and equity in the healthcare system, owing to its low cost and high accessibility. Impact assessments and technology impact forecasts are important guideposts for policymakers. Education of every stakeholder becomes incredibly important to establish and maintain AI literacy and to adapt to the fast-evolving landscape of this field, requiring a continuous learning mechanism.
The sustainability of healthcare orchestrated by AI, as well as its impact on the sustainability goals of the OECD, warrants thorough examination. Discerning policies and governance systems are needed for these approaches to flourish and to harness these technologies as problem-solving solutions. The long-term consequences of AI for healthcare and its social acceptance must be underpinned by an appropriate governance mechanism. Associated with this are facets of cybersecurity, data security, and possible disruption of critical infrastructure. Another integral question lies at the heart of economics: who offers what to whom at what cost, and what are the associated societal costs and liabilities? Altogether, the transformative impact of AI manifests a key opportunity for the healthcare sector. An in-depth evaluation of options based on the holistic criteria of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity is beneficial. Framing these approaches through established policies and governance is needed to leverage these technologies as viable problem-solving entities.

Conclusions
This article provided an overview of key developments reflecting the rapidly evolving role of AI in advancing predictive toxicology, mechanistic understanding, data analysis, and risk assessment. Early rule-based expert systems have given way to modern disruptive techniques like deep learning that hold immense promise to transform chemical safety evaluation. However, considerable efforts remain to integrate AI responsibly into workflows through participatory approaches focused on transparent, ethical, and human-centered design. Coupling AI with ongoing improvements in primary evidence generation and appraisal methods will be pivotal for it to robustly augment human science and reasoning. With a collaborative spirit and a focus on ethics and translational rigor, AI can help transition toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to effectively promote public health and environmental protection in the modern era of big data and computational capabilities.
The intersection of AI and toxicology presents both exciting prospects and considerable challenges. Charting the best path forward requires a broad view that both acknowledges the potential benefits of AI and addresses ethical and regulatory concerns. The dynamic field of AI, bolstered by advances from all corners of the world, necessitates engagement from all stakeholders in developing comprehensive, flexible, and globally applicable regulatory measures.

Fig. 1: The promise of AI in toxicology. The figure depicts possible advantages of AI. The picture was generated with DALL-E.

Fig. 2: ToxAIcology: the emerging role of AI in toxicology. The figure depicts how AI, building on the growing body of toxicology-relevant data and synergizing with computational power and advances in AI technologies, supports toxicological work. The figure is reproduced from Hartung (2023b). The picture was generated with DALL-E.

Fig. 3: Computer-AIded Toxicology. The figure depicts areas of toxicology that should particularly benefit from AI integration. The figure was created with Simpleslides.co; the icons were generated with DALL-E.
References
Allen, T. E. H., Goodman, J. M., Gutsell, S. et al. (2014).
Cherkasov, A., Muratov, E. N., Fourches, D. et al. (2014). QSAR modeling: Where have you been? Where are you going to? J Med Chem 57, 4977-5010. doi:10.1021/jm4004285
Dave, T., Athaluri, S. A. and Singh, S. (2023). ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell 6, 1169595. doi:10.3389/frai.2023.1169595
Diamandis, P. H. and Kotler, S. (2012). Abundance: The Future Is Better Than You Think. New York, USA: Free Press.
Esteva, A., Robicquet, A., Ramsundar, B. et al. (2019). A guide to deep learning in healthcare. Nat Med 25, 24-29. doi:10.1038/s41591-018-0316-z
Gilpin, L. H., Bau, D., Yuan, B. Z. et al. (2018). Explaining explanations: An overview of interpretability of machine learning. IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80-89. doi:10.1109/DSAA.2018.00018