The 3Rs Collaborative is inviting applications to join the 3Rs Collaborative Artificial Intelligence Initiative—a pioneering effort to accelerate the responsible use of AI across both drug safety evaluation and chemical risk assessment. Whether you're applying AI to toxicology, pharmacokinetics, predictive modeling, or NAMs development, this initiative offers a collaborative platform to amplify your impact.

The group is co-led by Szczepan Baran, Independent Consultant, and Weida Tong, Food and Drug Administration (FDA). Founding member institutions include the National Institutes of Health (NIH), National Institute of Environmental Health Sciences (NIEHS), Health and Environmental Sciences Institute (HESI), Environmental Protection Agency (EPA), Johns Hopkins University, Novartis, AbbVie, BI, Pfizer, Merck, and Charles River Laboratories, uniting regulatory, academic, and pharmaceutical leadership in one place.

The group is now seeking stakeholders with a strong background or interest in AI who are ready to collaborate on advancing the responsible use of AI in safety & risk assessment.

Membership application.

Why join?

  • Scientific ROI: Collaborate on high-impact outputs (case studies, peer-reviewed papers, SOPs) focused on the real-world application of AI to reduce animal testing, improve decision-making, and support new approach methodologies (NAMs).
  • Regulatory alignment: Engage in discussions that shape how AI methods can be validated, benchmarked, and adopted in both regulatory toxicology and nonclinical drug development—via ongoing interaction with agencies like FDA and EPA.
  • Precompetitive collaboration: Join an expert network solving shared challenges in AI adoption—from data quality and transparency to model validation and context-of-use.
  • Cost-effective membership: For just $1000/organization/year (pro-rated for 2025), with discounts for academic, non-profit, and government institutions, your team gains access to a multi-stakeholder think tank driving real progress.

Our 2025 Deliverables

  • A landmark landscape analysis publication detailing opportunities, challenges, and case studies
  • Training resources and SOPs for implementing AI in safety science
  • Strategic engagement with regulators and AI developers to enable broader acceptance
  • Foundation for an AI Knowledge Hub showcasing tools, use cases, and updates

With a focus on using AI in safety & risk assessment, the primary objectives of the 3RsC-AI Initiative are to:

  1. Foster collaboration within a professional network of AI stakeholders
  2. Identify and address barriers to the implementation of AI
  3. Encourage the development and appropriate use of AI