
Regulating the Use of AI in Drug Development: Legal Challenges and Compliance Strategies
I. Introduction
Artificial intelligence (AI) and machine learning (ML) are increasingly becoming integral tools in pharmaceutical research and development. These technologies enable rapid analysis of large-scale biomedical data, the discovery of novel drug candidates, optimization of clinical trials, and personalized treatment strategies. The McKinsey Global Institute (MGI) has estimated that AI could generate $60 to $110 billion a year in economic value for the pharma and medical-product industries, largely because it can boost productivity by accelerating the process of identifying compounds for possible new drugs, speeding their development and approval, and improving the way they are marketed.[1]
Despite these promising developments, the deployment of AI in drug development brings a host of complex legal and regulatory challenges. These challenges include compliance with evolving regulatory frameworks, managing risks related to data privacy and security, ensuring intellectual property protection, and navigating ethical considerations and liability concerns. Given the high stakes involved in developing and approving medical products, regulatory agencies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) are beginning to address these concerns through new guidance documents and regulatory strategies.
II. Applications of AI in Drug Development
The drug development lifecycle remains one of the most resource-intensive and risk-laden processes in the healthcare and life sciences sector. Recent analyses indicate that the median cost of bringing a new drug to market is approximately $708 million, while the mean cost—influenced by a few high-cost outliers—can reach $1.31 billion. These figures underscore the substantial financial burden associated with drug development.[2] AI and ML are being increasingly integrated into this process to address long-standing inefficiencies, from molecular discovery to post-market safety monitoring.
A. Drug Discovery
AI-driven tools can rapidly analyze vast chemical, genomic, and proteomic datasets to identify promising drug candidates. ML models, including deep learning algorithms, can predict molecular behavior, drug-likeness, and target-binding affinities with greater speed and, in some cases, superior accuracy compared to traditional in vitro screening methods.
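By way of a purely illustrative sketch, and not a description of any tool discussed in this article, the following Python snippet shows the general shape of such a model: a random-forest classifier trained on a handful of hypothetical molecular descriptors (molecular weight, logP, hydrogen-bond donors and acceptors) to flag candidates as "drug-like." All descriptor ranges, labels, and thresholds are invented for illustration.

```python
# Illustrative only: a toy "drug-likeness" classifier on hypothetical descriptors.
# Descriptor ranges, labels, and features are invented; real discovery pipelines use
# curated chemical libraries and far richer molecular representations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical descriptor matrix: molecular weight, logP, H-bond donors, H-bond acceptors
X = np.column_stack([
    rng.normal(350, 80, n),    # molecular weight (Da)
    rng.normal(2.5, 1.5, n),   # logP (lipophilicity)
    rng.integers(0, 8, n),     # hydrogen-bond donors
    rng.integers(0, 14, n),    # hydrogen-bond acceptors
])
# Toy label loosely modeled on Lipinski-style heuristics, standing in for assay outcomes
y = ((X[:, 0] < 500) & (X[:, 1] < 5) & (X[:, 2] <= 5) & (X[:, 3] <= 10)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```

In practice such models are trained on experimentally derived labels and validated prospectively; the point of the sketch is only the overall workflow of descriptor featurization, supervised training, and held-out evaluation, which is where the data-quality and validation concerns discussed below enter.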
A notable example is Insilico Medicine, whose AI-designed drug candidate reached human clinical trials within 18 months of initial compound identification—a timeline significantly shorter than the standard preclinical development period.[3] The use of generative AI in molecular design presents novel questions regarding patentability and inventorship, particularly in jurisdictions like the United States, where the U.S. Patent and Trademark Office has repeatedly held that only natural persons can be named as inventors.[4] The European Patent Office (EPO) and the UK Intellectual Property Office (UKIPO) have adopted similar stances, potentially complicating IP strategies for pharma companies using AI-generated compounds.[5]
From a regulatory standpoint, the FDA’s 2023 discussion paper on AI in drug development acknowledged the value of such tools in enhancing molecular innovation, while also highlighting the importance of data transparency, algorithm explainability, and verifiable model performance.[6] This aligns with the growing trend toward “Good Machine Learning Practice” (GMLP) principles, designed to harmonize AI validation standards across jurisdictions.[7]
B. Preclinical Testing
AI also plays a growing role in simulating in vivo biological systems, enabling researchers to predict pharmacokinetics, toxicity, and other safety markers without immediate reliance on animal models. This supports both ethical and efficiency-driven objectives, particularly in toxicology profiling, a domain that remains heavily regulated across the EU and U.S. frameworks.
Recent efforts, including the EMA’s 2023 Reflection Paper on AI, urge developers to ensure robust model performance when AI is applied to preclinical decision-making.[8] This includes expectations of data integrity, traceability, and human oversight. In the United States, such tools must comply with FDA requirements regarding the use of computational models in regulatory submissions, particularly those influencing investigational new drug (IND) applications.[9]
Furthermore, using AI in preclinical settings may raise liability concerns under product safety and tort law if decisions informed by flawed algorithms result in harm during later clinical phases. Legal counsel must therefore advise on risk management strategies, including the implementation of internal AI governance policies and contractual frameworks to apportion liability between developers, contract research organizations, and software providers.
C. Clinical Trial Design and Management
AI is increasingly utilized to optimize various aspects of clinical trial design and management. AI algorithms assist in patient stratification, recruitment, and adherence monitoring. Natural language processing (NLP) tools can analyze clinical trial protocols and outcomes to identify best practices and optimize future trial designs. For instance, IBM Watson Health has been employed in oncology trials to suggest patient eligibility based on clinical data.[10]
Regulatory bodies are acknowledging the integration of AI in clinical trials. In January 2025, the FDA issued a draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products” (the “Draft AI Regulatory Guidance”), providing recommendations on the use of AI to produce information or data intended to support regulatory decision-making regarding safety, effectiveness, or quality for drugs. The Draft AI Regulatory Guidance emphasizes a risk-based credibility assessment framework for establishing and evaluating the credibility of an AI model for a particular context of use.[11]
Similarly, the EMA published a Reflection Paper in October 2024 on the use of AI in the medicinal product lifecycle, highlighting the importance of a risk-based approach for the development, deployment, and performance monitoring of AI/ML tools. The EMA encourages developers to ensure that AI systems used in clinical trials meet Good Clinical Practice (GCP) guidelines and that any AI/ML systems with high regulatory impact or high patient risk are subject to comprehensive assessment during authorization procedures.[12]
D. Pharmacovigilance and Post-Market Surveillance
AI enhances the monitoring of drug safety by automatically detecting adverse drug events (ADEs) from electronic health records, social media, and patient forums. The Draft AI Regulatory Guidance acknowledges the role of AI in handling reports on post-marketing adverse drug experience information, contributing to the safety, efficacy, or quality assessments of drugs.[13]
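As a hedged illustration of the underlying pattern, rather than the method of any system mentioned in this article, the snippet below trains a small text classifier (TF-IDF features with logistic regression) to flag free-text reports that may describe an adverse drug event. The example reports and labels are invented.

```python
# Illustrative only: toy adverse-drug-event (ADE) text classifier.
# Reports and labels are invented; real pharmacovigilance pipelines rely on coded
# safety data, validated NLP components, and far larger corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "patient developed severe rash and itching after starting the drug",
    "no complaints reported at follow-up visit",
    "experienced dizziness and nausea within hours of the first dose",
    "routine refill, tolerating medication well",
    "hospitalized for liver enzyme elevation possibly related to treatment",
    "medication continued, no adverse findings",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = possible ADE mention, 0 = no ADE signal

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

new_report = ["patient reports persistent nausea since dose increase"]
print("Possible ADE:", bool(clf.predict(new_report)[0]))
```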
The EMA has also taken significant steps towards integrating AI into pharmacovigilance processes. In 2024, the EMA published tools and guidelines to incorporate AI into pharmacovigilance, emphasizing the importance of transparency, accessibility, validation, and monitoring of AI systems to ensure patient safety and data integrity.[14]
The growing reliance on AI in these critical areas underscores the necessity for establishing regulatory guardrails to ensure that the technology does not compromise public health, fairness, or legal accountability. Both the FDA and EMA are actively developing frameworks to guide the responsible integration of AI into drug development and monitoring processes.
III. Regulatory Landscape in the United States and Internationally
As the use of AI in drug development expands, regulatory authorities around the world are developing policies to ensure the safety, efficacy, and reliability of AI-enabled tools. These policies aim to foster innovation while addressing risks related to algorithmic transparency, data integrity, and clinical validity.
A. United States: The FDA’s Evolving Framework
The regulatory landscape governing the use of AI in drug development in the United States is actively being shaped by the FDA through a combination of discussion papers and draft guidance documents. These initiatives collectively aim to foster innovation while ensuring the safety, effectiveness, and quality of drug and biological products.
FDA’s “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products: Discussion Paper and Request for Feedback” (May 2023, Revised February 2025) serves as a foundational document for shaping the U.S. regulatory approach and aims to initiate broad dialogue and gather feedback from stakeholders. It is not a formal regulatory policy or guidance but rather a preliminary communication to inform future regulatory clarity.[15]
In addition, the Draft AI Regulatory Guidance outlines the FDA’s preliminary recommendations for the application of AI in generating data and information pertinent to regulatory submissions for drug and biological products within the United States. This draft document underscores the FDA’s evolving regulatory approach to AI technologies in the pharmaceutical sector.[16]
The Draft AI Regulatory Guidance establishes the seven-step risk-based credibility assessment framework as a foundational methodology for evaluating the reliability and trustworthiness of AI models in specific “contexts of use” (COUs).[17] Credibility is defined as the measure of trust in an AI model’s performance for a given COU, substantiated by evidence. The COU is a critical definitional element, delineating the AI model’s precise function and scope in addressing a regulatory question or decision. The document explicitly clarifies that it does not endorse particular AI methodologies but broadly addresses AI models, with a noted emphasis on ML as a prevalent AI subset in drug development. Excluded from the scope are AI applications in drug discovery and operational efficiencies that do not directly bear upon patient safety, product quality, or study integrity.[18]
The FDA acknowledges the transformative potential of AI in expediting drug development and enhancing patient care, citing diverse applications such as the reduction of animal studies, predictive pharmacokinetic modeling, integration of disparate data for disease understanding, and improved analysis of clinical trial endpoints. However, the Draft AI Regulatory Guidance also addresses significant challenges inherent in AI integration, including:[19]
- Data Variability: The potential for bias and unreliability introduced by variations in training data quality, volume, and representativeness;
- Transparency and Interpretability: The inherent difficulty in deciphering the internal workings and conclusive derivations of complex AI models, necessitating enhanced methodological transparency;
- Uncertainty Quantification: Challenges in accurately interpreting, explaining, or quantifying the precision of deployed AI models;
- Model Drift: The susceptibility of model performance to change over time or across different operational environments, underscoring the necessity for ongoing life cycle maintenance (an illustrative drift check follows this list).
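To make the model-drift concern concrete, the sketch below computes a population stability index (PSI), one common heuristic for detecting a shift between the data a model was developed on and the data it encounters after deployment. The metric choice, thresholds, and data are illustrative assumptions; neither the Draft AI Regulatory Guidance nor the cited sources prescribe this particular check.

```python
# Illustrative only: population stability index (PSI) as a simple drift signal.
# The thresholds used here (~0.1 "moderate", ~0.25 "significant") are common rules
# of thumb, not regulatory requirements.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a development-time feature distribution to a deployment distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero and log(0) in sparsely populated bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(1)
training_feature = rng.normal(0.0, 1.0, 5000)    # distribution at model development
production_feature = rng.normal(0.4, 1.2, 5000)  # shifted distribution in deployment

psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.25 else 'within tolerance'}")
```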
Additionally, the FDA’s Digital Health Center of Excellence plays a pivotal role in providing cross-cutting guidance across software-based medical products and supporting sponsors in aligning innovation with compliance. The Center collaborates with various stakeholders to develop policies that provide regulatory predictability and clarity for the use of AI as part of the FDA’s commitment to protect public health and advance innovation.[20]
B. International Perspectives
Internationally, regulatory bodies are shaping distinct yet converging strategies. The EMA adopts a more structured and cautious approach, prioritizing rigorous upfront validation and comprehensive documentation before AI systems are integrated into drug development. The EMA’s “AI in Medicinal Product Lifecycle Reflection Paper” provides considerations for safe and effective AI use,[21] and a significant milestone was reached with its first qualification opinion on AI methodology in March 2025, accepting clinical trial evidence generated by an AI tool for diagnosing inflammatory liver disease.[22] In the United Kingdom, the Medicines and Healthcare products Regulatory Agency (MHRA) employs principles-based regulation, focusing on “Software as a Medical Device” (SaMD) and “AI as a Medical Device” (AIaMD).[23] Notably, the MHRA utilizes an “AI Airlock” regulatory sandbox to foster innovation and identify challenges in AIaMD regulation.[24] The agency also plans to provide guidance on human-centered design and interpretability, ensuring AI models are transparent and testable.[25]
Japan’s Pharmaceuticals and Medical Devices Agency (PMDA) is shifting towards an “incubation function,” aiming to accelerate access to cutting-edge medical technologies. Recognizing the evolving nature of AI algorithms, the PMDA formalized the Post-Approval Change Management Protocol (PACMP) for AI-SaMD in its March 2023 guidance.[26] This protocol enables predefined, risk-mitigated modifications to AI algorithms post-approval, reducing the need for full resubmission and facilitating continuous improvement of AI models.
The PACMP is particularly relevant for adaptive AI systems, which learn and evolve over time. By allowing manufacturers to submit a change management plan at the time of initial approval, the PMDA provides a pathway for efficient updates aligned with pre-approved parameters.[27] This forward-looking approach reflects Japan’s commitment to balancing innovation and regulatory oversight in the age of AI.
China’s National Medical Products Administration (NMPA) has developed a conservative yet evolving regulatory approach to the use of AI in drug development. While embracing the potential of AI technologies, the NMPA places strong emphasis on data quality, algorithm transparency, and risk management throughout the product lifecycle.[28]
In its most recent technical guidelines, the NMPA requires that AI tools—particularly those used in clinical trial design, safety monitoring, and target identification—demonstrate data sufficiency, diversity, and representativeness. Developers must provide detailed documentation on dataset sources, demographic inclusion and exclusion criteria, and justify the generalizability of AI outputs. The agency mandates comprehensive traceability, including version control and the ability to audit AI decisions, to ensure accountability.[29]
Bias mitigation is another regulatory priority. Sponsors must explain how their AI systems avoid discriminatory impacts and maintain clinical relevance to the target population. The NMPA also requires ongoing monitoring of AI systems post-approval, echoing global best practices for adaptive technologies.[30]
Compared to the U.S. FDA, China’s approach is viewed as more conservative but is gradually modernizing. While pilot programs and regulatory “sandboxes” are being considered, the current system emphasizes evidence-based validation and regulatory caution.[31] For companies seeking market entry in China, early engagement with the NMPA and compliance with strict technical requirements are essential for approval of AI-enabled tools in drug development.
Beyond national regulatory agencies, international organizations play a pivotal role in shaping the global framework for AI governance in healthcare and pharmaceutical development. The World Health Organization (WHO) has emphasized that ethical oversight is essential for the responsible deployment of AI in healthcare. Its guidance highlights key principles such as transparency, accountability, inclusiveness, and safety, urging governments and developers to adopt AI systems that are both clinically validated and socially equitable.[32] The WHO also underscores the importance of robust data governance, especially in light of cross-border data use and the sensitivity of health-related information.[33]
In parallel, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is contributing to the global regulatory landscape of AI-driven drug development through its draft M15 guideline on Model-Informed Drug Development (MIDD).[34] This guideline outlines general principles for the strategic use of computational modeling and simulation, including AI and ML models, to integrate clinical and nonclinical data in regulatory submissions. Its aim is to standardize terminology, define model evaluation criteria, and enhance transparency and consistency in how modeling informs drug development decisions.[35] The M15 framework emphasizes early regulatory engagement, multidisciplinary planning, and robust model validation and risk assessment, tailored to the model’s regulatory impact.[36] Importantly, it promotes harmonized global practices by encouraging the submission of predefined Model Analysis Plans (MAPs) and Model Analysis Reports (MARs) and the alignment of model acceptability standards across jurisdictions.[37] Although the guideline does not focus exclusively on AI, it recognizes its growing relevance in pharmacological modeling and provides a foundation for future regulatory convergence on AI-enhanced tools.[38]
Despite these varying approaches, common challenges persist across jurisdictions. These include ensuring high data quality and representativeness to mitigate algorithmic bias, addressing the “black box” problem inherent in many AI models, and managing the dynamic nature of AI through continuous validation and life cycle maintenance.[39] The black box problem refers to the inherent opacity of AI algorithms, where the rationale behind certain conclusions remains unexplained and often incomprehensible, even to their developers.[40] Effective compliance strategies therefore necessitate robust data governance, transparent AI model development, proactive risk mitigation, and strategic engagement with regulatory authorities across all relevant jurisdictions. The future of AI in pharmaceuticals hinges on sustained collaboration among regulators, industry stakeholders, technology developers, and ethical experts, ensuring safe and effective medicines reach patients worldwide.
IV. Legal and Ethical Challenges in the Use of AI in Drug Development
The integration of AI in pharmaceutical development, while innovative, introduces significant legal complexities. These issues span from patient privacy and intellectual property to product liability and algorithmic bias. Legal professionals in life sciences must grapple with how to adapt traditional legal doctrines to a rapidly evolving technological landscape.
A. Data Privacy and Protection
AI systems require access to vast datasets, often involving sensitive personal health information. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) governs the use and disclosure of protected health information (PHI).[41] However, HIPAA was not designed with AI in mind and lacks provisions specific to algorithmic processing or automated decision-making.
In contrast, the EU’s General Data Protection Regulation (GDPR) offers a more robust framework, granting individuals rights to an explanation of, and to object to, automated decisions. Article 22 of the GDPR specifically limits profiling and automated processing that significantly affects individuals, which could encompass AI-powered risk scoring or trial eligibility decisions in drug development.[42]
B. Intellectual Property and AI-Generated Inventions
Determining ownership of inventions created or assisted by AI presents novel challenges. Traditional patent law assumes a human inventor. The USPTO has determined that only natural persons can be named as inventors on a patent application.[43] This interpretation aligns with the statutory definition of “inventor” in 35 U.S.C. § 100(f), which defines an inventor as “the individual, or if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.”[44] As AI systems become more autonomous, legal questions will intensify around whether AI-assisted molecules or formulations are patentable and who holds the rights.
C. Product Liability and Algorithmic Accountability
If an AI-driven process leads to a harmful drug interaction or failed treatment, determining liability is complex. It raises critical questions about who bears responsibility—the pharmaceutical company, the software developer, or the healthcare provider. Courts traditionally rely on tort doctrines such as negligence, strict liability, and product liability to assess fault. However, applying these doctrines to black-box AI systems—where the rationale behind an algorithm’s decision-making may be opaque or incomprehensible even to its creators—poses significant challenges.[45]
One potential path forward involves mandating transparency and auditability as preconditions for regulatory approval. Just as clinical trial data must be publicly disclosed for new drug applications under FDA regulations, AI-based systems could be required to document decision pathways, training data sources, and risk assessments.[46] This would align legal exposure with the degree of control and insight over the system’s design and deployment, allowing regulators and courts to better assess fault and causation.[47] The FDA has taken initial steps toward such oversight in the Draft AI Regulatory Guidance, emphasizing model reproducibility, context of use, and evidentiary standards for AI tools used in drug development.[48] However, full-scale legal harmonization will likely require legislative action or landmark case law to establish precedent for allocating responsibility in cases of AI-induced harm.
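As a hypothetical illustration of what such documentation might look like in practice (the field names and values below are invented and are not drawn from FDA guidance or any statute), a sponsor could maintain a structured record for each model covering its context of use, training-data provenance, known limitations, and validation evidence:

```python
# Illustrative only: a hypothetical structured record for AI model documentation.
# Field names and values are invented; they are not prescribed by FDA guidance.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIModelRecord:
    model_name: str
    version: str
    context_of_use: str                   # the regulatory question the model informs
    training_data_sources: list[str]      # provenance of datasets used for training
    known_limitations: list[str]          # documented gaps, e.g., under-represented groups
    validation_metrics: dict[str, float]  # performance evidence for the stated context of use
    last_reviewed: str

record = AIModelRecord(
    model_name="trial-enrichment-classifier",
    version="2.1.0",
    context_of_use="Prioritize sites for enrollment; not used for eligibility decisions",
    training_data_sources=["De-identified EHR extract, 2019-2023", "Hypothetical registry X"],
    known_limitations=["Limited pediatric data", "US sites only"],
    validation_metrics={"auc": 0.83, "sensitivity": 0.79, "specificity": 0.74},
    last_reviewed="2025-05-01",
)
print(json.dumps(asdict(record), indent=2))
```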
D. Bias and Discrimination
AI systems used in drug development and clinical decision-making may unintentionally perpetuate systemic biases if trained on non-representative datasets. For instance, an AI tool developed using clinical trial data predominantly drawn from white or male patients may fail to produce accurate or equitable predictions for racial minorities, women, or individuals with disabilities. This lack of representativeness can lead to underdiagnosis, misclassification, or exclusion from access to emerging therapies, raising significant health equity concerns.[49]
Such outcomes may trigger legal scrutiny under U.S. civil rights statutes. Title VI of the Civil Rights Act of 1964 prohibits recipients of federal funds—including many clinical trial sponsors and health institutions—from engaging in practices that result in disparate impact based on race, color, or national origin.[50] Similarly, the Americans with Disabilities Act (ADA) requires that individuals with disabilities receive equal access to healthcare technologies and cannot be subjected to discrimination through automated processes.[51] Courts and regulators may find liability if it can be shown that AI tools had a discriminatory effect, even if unintentional, particularly where developers failed to implement bias mitigation strategies.
To reduce legal risk and promote fairness, AI developers and pharmaceutical sponsors are expected to adopt algorithmic auditing, representative data collection, and corrective training mechanisms.[52] The FDA has echoed these principles in the Draft AI Regulatory Guidance on AI-based medical technologies, highlighting the need for data management, demographic performance testing, and life cycle maintenance of AI models.[53] Ensuring that AI systems are tested across diverse populations is not only a best practice—it may soon become a legal imperative.
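The following sketch illustrates one possible form of demographic performance testing: computing sensitivity and specificity separately for each subgroup to surface performance gaps. The data, group labels, and the degraded accuracy for the smallest group are simulated assumptions for illustration only.

```python
# Illustrative only: auditing a model's performance across demographic subgroups.
# Data, group labels, and error rates are simulated for illustration.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 2000
groups = rng.choice(["group_a", "group_b", "group_c"], size=n, p=[0.6, 0.3, 0.1])
y_true = rng.integers(0, 2, size=n)
# Hypothetical predictions that degrade for the under-represented group
error_rate = np.where(groups == "group_c", 0.35, 0.10)
y_pred = np.where(rng.random(n) < error_rate, 1 - y_true, y_true)

for g in ["group_a", "group_b", "group_c"]:
    mask = groups == g
    sens = recall_score(y_true[mask], y_pred[mask])                 # sensitivity
    spec = recall_score(y_true[mask], y_pred[mask], pos_label=0)    # specificity
    print(f"{g}: n={mask.sum():4d}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```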
V. Conclusion
As AI technologies continue to redefine the drug development process, legal professionals play a critical role in ensuring that innovation proceeds within a framework of accountability, safety, and public trust. While AI offers transformative benefits—faster discovery, improved precision, and lower costs—it also introduces complex legal questions that touch every part of the pharmaceutical lifecycle.
To navigate these challenges, companies must adopt a proactive, interdisciplinary approach that blends legal rigor with technological literacy. Building robust governance frameworks, ensuring transparency, engaging regulators, and safeguarding patient rights are not just compliance requirements—they are essential elements of ethical innovation.
The path forward lies in collaboration: between regulators and industry, between lawyers and scientists, and between data and human judgment. By anticipating legal risks and championing responsible AI use, the life sciences sector can unlock the full potential of AI while preserving the values at the heart of healthcare—dignity, fairness, and trust.
[1] Bhavik Shah, Chaitanya Adabala Viswa, Delphine Zurkiya, Eoin Leydon & Joachim Bleys, Generative AI in the Pharmaceutical Industry: Moving from Hype to Reality (McKinsey & Company, Jan. 2024) at 1. https://www.mckinsey.com/industries/life-sciences/our-insights/generative-ai-in-the-pharmaceutical-industry-moving-from-hype-to-reality.
[2] Mulcahy A, Rennane S, Schwam D, Dickerson R, Baker L & Shetty K, Use of Clinical Trial Characteristics to Estimate Costs of New Drug Development, JAMA Netw. Open (Jan. 2, 2025). https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2828689.
[3] Insilico Medicine, ‘First AI-Discovered and AI-Designed Drug Enters Phase I Clinical Trials’ (Insilico, July 2021). https://insilico.com/phase1.
[4] Stephen Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied, 598 U.S. (2023). https://www.wipo.int/wipolex/en/text/590863.
[5] European Patent Office, ‘Legal and Practical Implications of AI Inventorship’ (2023). https://www.epo.org/en/news-events/in-focus/ict/artificial-intelligence.
[6] FDA, Discussion Paper: Artificial Intelligence in Drug Manufacturing (2023). https://www.fda.gov/media/165743.
[7] FDA, ‘Proposed Framework for Good Machine Learning Practice (GMLP)’ (2021) https://www.fda.gov/media/153486/download.
[8] EMA, Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle (EMA/369376/2023, July 2023). https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf.
[9] FDA, Guidance for Industry: Submission of Computational Models in Support of Regulatory Decisions (2023). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/assessing-credibility-computational-modeling-and-simulation-medical-device-submissions.
[10] IBM Newsroom, “Watson for Clinical Trial Matching,” 2021. https://ftpmirror.your.org/pub/misc/ftp.software.ibm.com/common/ssi/ecm/hl/en/hlw03021usen/HLW03021USEN.PDF.
[11] FDA, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, Draft Guidance for Industry and Other Interested Parties (Jan. 2025). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological.
[12] EMA, Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle (Oct. 2024). https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf.
[13] FDA, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, Draft Guidance for Industry and Other Interested Parties (Jan. 2025). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological.
[14] EMA, Artificial Intelligence and EMA: Initiatives for Use in Pharmacovigilance (2024). https://safetydrugs.it/en/artificial-intelligence-and-ema/.
[15] FDA, Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products: Discussion Paper and Request for Feedback (May 2023, Revised Feb. 2025). https://www.fda.gov/media/167973.
[16] FDA, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, Draft Guidance for Industry and Other Interested Parties (Jan. 2025). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological.
[17] Id.
[18] Id.
[19] Id.
[20] FDA, Digital Health Center of Excellence, https://www.fda.gov/medical-devices/digital-health-center-excellence.
[21] EMA, Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle (Oct. 2024). https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf.
[22] EMA, EMA Qualifies First Artificial Intelligence Tool to Diagnose Inflammatory Liver Disease (MASH) in Biopsy Samples (Mar. 20, 2025). https://www.ema.europa.eu/en/news/ema-qualifies-first-artificial-intelligence-tool-diagnose-inflammatory-liver-disease-mash-biopsy-samples.
[23] MHRA, MHRA’s AI Regulatory Strategy Ensures Patient Safety and Industry Innovation into 2030 (Apr. 30, 2024). https://www.gov.uk/government/news/mhras-ai-regulatory-strategy-ensures-patient-safety-and-industry-innovation-into-2030.
[24] MHRA, AI Airlock: The Regulatory Sandbox for AIaMD (May 9, 2024). https://www.gov.uk/government/collections/ai-airlock-the-regulatory-sandbox-for-aiamd.
[25] Bristows, With Great Fanfare, the MHRA Publishes Roadmap for Future Regulation of Software and AI Medical Devices (Oct. 26, 2024). https://www.bristows.com/news/with-great-fanfare-the-mhra-publishes-roadmap-for-future-regulation-of-software-and-ai-medical-devices/.
[26] PMDA (Japan), Handling of Changes in Artificial Intelligence-Based Software as a Medical Device (AI-SaMD) 3–4 (Mar. 30, 2023). https://www.pmda.go.jp/files/000266100.pdf.
[27] PMDA, Overview of Post Approval Change Management Protocol (PACMP) for Medical Devices 11 (Aug. 31, 2020). https://www.pmda.go.jp/files/000245839.pdf.
[28] Yuehua Liu, Wenjin Yu & Tharam Dillon, Regulatory Responses and Approval Status of Artificial Intelligence Medical Devices with a Focus on China, 7 npj Digit. Med. 255 (2024). https://www.nature.com/articles/s41746-024-01254-x.
[29] NMPA, Guideline on Artificial Intelligence Medical Devices (Draft) (2022). https://chinameddevice.com/guideline-on-artificial-intelligence-medical-devices/.
[30] NMPA, Draft Post-Market Surveillance for Medical Device Manufacturers (July 2024). https://chinameddevice.com/post-market-surveillance-for-medical-device-manufacturers/.
[31] You Mao, Xiao Yue, Yao Han, et al., Evaluation and Regulation of Medical Artificial Intelligence Applications in China, Chinese Medical Sciences Journal (Mar. 2025). https://doi.org/10.24920/004473.
[32] WHO, Ethics and Governance of Artificial Intelligence for Health: WHO Guidance (2021), p. 13 https://www.who.int/publications/i/item/9789240029200.
[33] Id.
[34] International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, ICH Harmonised Guideline: General Principles for Model-Informed Drug Development M15 (Draft, Step 2, Nov. 6, 2024).
[35] Id.
[36] Id.
[37] Id.
[38] Id.
[39] WHO, Ethics and Governance of Artificial Intelligence for Health: WHO Guidance (2021), p. 46 https://www.who.int/publications/i/item/9789240029200.
[40] Price WN. Artificial intelligence in healthcare: Applications and legal implications. The SciTech Lawyer. 2017;14(1). University of Michigan Law School Scholarship Repository. https://repository.law.umich.edu/cgi/viewcontent.cgi?article=2932&context=articles.
[41] Health Insurance Portability and Accountability Act of 1996, 42 U.S.C. § 1320d et seq. (2018).
[42] Regulation (EU) 2016/679, 2016 O.J. (L 119) 1 (General Data Protection Regulation).
[43] U.S. Patent & Trademark Office, Petition Decision on Inventorship: Invention Limited to Natural Persons (Apr. 27, 2020), https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf.
[44] 35 U.S.C. § 100(f) (defining “inventor” as “the individual, or if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention”).
[45] Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law, 75–80 (Cambridge Univ. Press 2020); see also Rebecca Crootof, Torts and the Machine, 119 Colum. L. Rev. 1341, 1355–57 (2019). https://doi.org/10.1017/9781108631761.
[46] 21 C.F.R. § 314.50 (2023) (requiring submission of clinical and statistical data in new drug applications to FDA).
[47] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353, 394–98 (2016). https://jolt.law.harvard.edu/articles.
[48] FDA, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, Draft Guidance for Industry and Other Interested Parties (Jan. 2025). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological.
[49] Irene Y. Chen et al., Why Is My Classifier Discriminatory?, 22 Advances in Neural Info. Processing Systems 3539, 3539–44 (2018). https://dl.acm.org/doi/10.5555/3327144.3327272.
[50] Title VI of the Civil Rights Act of 1964, 42 U.S.C. § 2000d (prohibiting discrimination based on race, color, or national origin in federally funded programs).
[51] Americans with Disabilities Act of 1990, 42 U.S.C. §§ 12101–12213; see also 28 C.F.R. § 35.130(b)(7) (requiring reasonable modifications in policies and practices to avoid discrimination).
[52] Sandra Wachter, Brent Mittelstadt & Chris Russell, Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI, 41 Computer L. & Security Rev. 105567 (2021). https://ora.ox.ac.uk/objects.
[53] FDA, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, Draft Guidance for Industry and Other Interested Parties (Jan. 2025). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological.