
Advancing 21st Century Regulatory Science via AI and HI
Introduction
According to the World Economic Forum, we are experiencing the fourth Industrial Revolution. This revolution is characterized by a range of new technologies affecting all disciplines, economies, and industries by fusing the physical, digital, and biological worlds. It even challenges the idea of what it means to be human.[1] Artificial intelligence (AI)[2] will be one of the major catalysts for change with its unimaginable potential. It will revolutionize every area of our lives and has already impacted many functions of regulated industry, academia, and the Food and Drug Administration (FDA).
According to FDA, “Artificial intelligence has emerged as a transformative force.”[3] AI will facilitate what the drug development ecosystem lacks today—coordinated and efficient systems for developing actionable evidence on safety and effectiveness. But is AI of greater or lesser importance than “HI”—human intelligence?
The first question we must ask ourselves is whether that is a binary—“yes or no”—question. It isn’t, and this isn’t a new, strictly modern, or technological debate.
Computing begins at its core as a binary function—zeroes and ones. Therefore, AI is built on a binary foundation. But HI is not. HI is built on much fuzzier, emotional, irrational, and profound propositions. Humans are intuitive; we make guesses and take risks. We take things “on faith.” Machines don’t do that. Machines don’t have faith; they have programs.
To really understand this duality, let’s examine it the way the 12th Century nominalist philosopher, Pierre Abelard,[4] did through his revolutionary concept of sic et non—yes and no.[5]
Twelfth Century Europe was a very black and white world. It was a very binary world. A world of yes or no. In Sic et Non, Abelard asks five key questions[6]:
(1) Must human faith be completed by reason, or not?
(2) Does faith deal only with unseen things, or not?
(3) Is there any knowledge of things unseen, or not?
(4) May one believe only in God alone, or not?
(5) Is God a single unitary being, or not?
Unlike FDA advisory committees, none of these five questions has a binary, yes or no answer, and in the 12th Century, that was a wildly innovative and threatening concept. The 12th Century was a nasty, brutish, and very absolutist place. But this isn’t a theology article, thank God!
When discussing AI and HI, it’s pointless to ask, “which is more powerful, AI or HI?”; that’s a binary 12th Century question. When it comes to AI and HI in 21st Century regulatory science, the three most important questions are:
(1) Which is more powerful in specific circumstances?
(2) Most crucially, how can AI and HI work together?
(3) How will that impact healthcare technology development, regulation, and usage?
This Isn’t Science Fiction. The Future is Now.
We must view AI through the lens of 21st Century interoperability (another extension of sic et non): the idea that different systems used by different groups of people can be used for a common purpose. We usually make sense of the world around us with the help of rules and processes that build up into a system, but the world of “Big Data” is now so huge that we will need AI to keep track of it.[7] In the good old days before digital social networks, experts were defined by the amount of knowledge they held on a specific topic. Now that knowledge is available at the click of a mouse, however, the value of an expert resides in experiences that are unique and often not measurable or published. An expert adds value by helping to address the uncertainty in that data, uncertainty that can influence the eventual therapeutic outcome. Uncertainty is the 21st Century version of sic et non.
As innovation in AI continues to advance, we need to consider how combining human and AI resources can augment, for example, evolving methodologies for clinical trial design, validated methodologies for real-world data, biomarkers, surrogate endpoints, more precise clinical trial recruitment, predictive pharmacovigilance, and the various tools and techniques of regulatory science.
Nobel laureate Joshua Lederberg observed that the failure of regulatory institutions to integrate scientific advances into risk-selection and assessment is the biggest barrier to improved public health.[8]
AI represents the very real opportunity for drug development and review that is faster and more accurate. This does not mean we can replace FDA or the European Medicines Agency with a room full of computers; in fact, it means the reverse. The job of drug developers and regulators will become more difficult because HI must do the hard work of parsing sic et non.
At the beginning of 2019, FDA announced four activities to help the agency and the broader drug development ecosystem advance these AI technologies for the benefit of patients[9]:
(1) Support the seamless integration of digital technologies into clinical trials by developing a framework for how digital systems can be used to enhance the efficient oversight of clinical trials. These technologies present important opportunities to streamline drug trials and improve data site integrity by remotely monitoring data trends, accrual, and integrity over the course of a trial.
(2) Use digital technologies (for example, smart phones and web applications) to bring clinical trials to the patient, rather than always requiring the patient to travel to the investigator. More accessible clinical trials can facilitate participation by more diverse patient populations within diverse community settings where patient care is delivered and, in the process, generate information that is more representative of the real world and may help providers and patients make more-informed treatment decisions.
(3) Explore how reviewers can have more insight into how labeling changes inform provider prescribing decisions and patient outcomes.
(4) Work with the medical product centers to develop an FDA curriculum on machine learning and AI in partnership with external academic partners.
The aim of the FDA’s Strategic Framework is to improve the ability of FDA reviewers and managers to evaluate products that incorporate advanced algorithms and to build FDA’s capacity to develop regulatory science tools that harness these approaches.
But isn’t AI risky in the context of regulation? First, machines are not risk takers; indeed, they have no capacity to make leaps of faith. Second, AI is not about replacing reviewers; it’s about freeing them to do what they do best—to think. This is precisely the kind of human risk-taking that the leaders of advanced regulatory agencies want to see from their staffs and encourage in the industries they regulate.
We must also understand and embrace AI’s abilities to make sense of non-traditional data derived from social media and other sources of spontaneous notification. AI presents the pharmacovigilance community with the opportunity to go beyond drug utilization studies[10] as the only tool and drug recalls as the only solution.
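To make that opportunity more concrete, one common statistical starting point for mining spontaneous reports is disproportionality analysis, such as the proportional reporting ratio (PRR). The sketch below is illustrative only; the drug, event, and report counts are invented and are not drawn from any actual safety database.

```python
# Illustrative only: a minimal disproportionality screen (proportional
# reporting ratio, PRR) over hypothetical spontaneous-report counts.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)], where
    a = reports of the event of interest for the drug of interest,
    b = reports of all other events for the drug of interest,
    c = reports of the event of interest for all other drugs,
    d = reports of all other events for all other drugs."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 40 liver-injury reports out of 1,000 total reports
# for "Drug X," versus 200 out of 50,000 for all other drugs in the database.
prr = proportional_reporting_ratio(a=40, b=960, c=200, d=49_800)
print(f"PRR = {prr:.1f}")  # 10.0; values well above ~2 are commonly flagged for human review
```

A screen like this does not replace HI; it simply queues potential signals for the human judgment that pharmacovigilance ultimately requires.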
Such a scenario becomes even more urgent when considering the current situation relative to biologics. AI combined with HI provides the opportunity for incorporating more data from new sources and enables multi-dimensional regulatory decision-making by putting the data into context faster. Consider the implications of this for ever-more complex biologics, cell and gene therapies, and programs that are brought to market through the accelerated approval and other expedited review pathways.
HI is Also the Patient Voice
In a world increasingly driven by outcomes-reporting and “Big Data,” more patient-level information from individuals is not always synonymous with validated data. But where there’s smoke, there’s fire.
Historically, the role of the patient voice in drug development has been to share the human component of disease.[11] This has largely meant sharing personal, emotional anecdotes. These highly charged stories certainly help to make the drug development process more three-dimensional. But do patient stories result in more meaningful information? Anecdotes have impact, but is it impact of the right kind, of the most powerful nature? No.
The plural of “anecdote” is not “data.” Regulatory actions are and must always be data-based. Patient passion is important, but it must be combined with a more dispassionate, scientific understanding of regulatory paradigms.[12]
The 21st Century patient voice can and must evolve into a tool used to impact regulatory decision-making from both the heart and the head. Legislation now requires FDA to consider patient input as part of its regulatory considerations,[13] and pharmaceutical companies are increasingly understanding its value in clinical development plans. This means the increased use of patient-driven data collection and patient views on risk–benefit analysis.
Consider FDA’s recent approvals of new therapies for Friedreich’s Ataxia, Alzheimer’s Disease, Duchenne Muscular Dystrophy, and Amyotrophic Lateral Sclerosis. These reviews were based on small-n studies.
In the case of the Friedreich’s Ataxia product, the review was supported by 20 years’ worth of natural history data. HI, flawed and emotional but also intuitive and hopeful, allows us to incorporate evidence beyond the randomized clinical trial gold standard. AI gives us the tools to collect and validate these new data sets.
But not everyone is on the same page. These recent approvals were controversial and caused an internal revolt at FDA. For the Alzheimer’s Disease approvals, amyloid plaque was used as a biomarker, validated by humans based on a preponderance of evidence. But are we absolutely certain of that? No.
Yet the HI risk–benefit analysis gave FDA and drug developers a plausible regulatory path forward, along with specifically developed algorithms to facilitate the collection and analysis of post-licensing real-world data. Those data will either confirm or refute the HI-driven hypotheses on both the appropriateness of amyloid plaque as a biomarker for Alzheimer’s Disease and the safety and effectiveness of these treatments. Coming soon: master’s and PhD programs in the development and use of AI.
In the case of one Duchenne Muscular Dystrophy drug, senior FDA officials departed[14] because neither the sponsor nor the agency could precisely define how the treatment worked or how to identify the subpopulation for whom it was effective.
Commenting on her decision to approve this product over the objections of her senior staff, Dr. Janet Woodcock, who was at the time Director of FDA’s Center for Drug Evaluation and Research, said, “It’s possible to reach different conclusions based on the data presented today. . . . Failing to approve a drug that actually works in devastating diseases—these consequences are extreme.”[15] That’s an HI-based decision, and Dr. Woodcock was promoted to Deputy Commissioner.
Data Collection Isn’t the Goal. Using the Data is the Goal.
AI-enabled capabilities will significantly improve public health by removing obstacles to receiving and understanding new and meaningful information that can be used to make more informed decisions faster, and by helping to decrease the overall costs and inefficiencies of the healthcare system. This is as true in drug development as it is in healthcare technology assessment.
Reimbursement decisions are moving from quality-adjusted life year (QALY)-based cost analysis[16] to value-based outcomes measurements. Is that scientific? Are those metrics data-driven? Sic et non. Yes and no. Even when the UK’s National Institute for Health and Care Excellence held the QALY up as the gold standard in its reimbursement decisions,[17] its chairman, Sir Michael Rawlins, admitted that the QALY was “only guessing.”
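For readers unfamiliar with the arithmetic behind a QALY-based cost analysis, the sketch below works through the standard calculation; the therapy, costs, life years, and utility weights are hypothetical numbers chosen for illustration, not figures from any actual appraisal.

```python
# Illustrative only: the standard QALY and incremental cost-effectiveness
# ratio (ICER) arithmetic, with invented numbers.

def qalys(life_years: float, utility: float) -> float:
    """QALYs = life years gained x utility weight (0 = dead, 1 = perfect health)."""
    return life_years * utility

# Hypothetical comparison: a new therapy versus standard of care.
new_qalys = qalys(life_years=5.0, utility=0.8)   # 4.0 QALYs
old_qalys = qalys(life_years=4.0, utility=0.6)   # 2.4 QALYs
new_cost, old_cost = 120_000, 40_000             # hypothetical lifetime costs

icer = (new_cost - old_cost) / (new_qalys - old_qalys)  # cost per QALY gained
print(f"ICER = {icer:,.0f} per QALY")                   # 50,000 per QALY
```

The cost-per-QALY figure produced by arithmetic like this is what bodies such as NICE have historically weighed against a willingness-to-pay threshold, which is exactly why Rawlins’s “only guessing” admission matters: the precision of the calculation rests on utility weights that are, at bottom, estimates.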
Today, more and more healthcare technologies are being measured by value. How do we measure value? How do we define and capture outcomes data? HI and AI. We’re still guessing—but the combination of the two is helping us guess better, based on more data, different data, and more accurate data. That’s the most honest way to defend and define the partnership of AI and HI.
According to Duke University professor Vincent Conitzer, AI “involves picking up on some statistical pattern that can be used to great effect, but it sometimes produces answers that lack common sense.”[18]
Healthcare professionals are supposed to provide the patient with the best possible therapeutic advice based on the common body of knowledge about a specific diagnosis. This approach is improved by AI tools, but according to the World Health Organization, health is not merely the absence of disease or infirmity.[19]
That broader definition of well-being is not easily defined by an algorithm, because it involves relationship dynamics and issues between patient and healthcare provider. The role and value of “expertise” is an ethical domain that can’t be solved by AI alone.
AI will have a huge impact on everything from genetics to genomics. It will help identify patterns in huge data sets of information and medical records and look for mutations and linkages to disease. But for any of this to happen, we must view AI through the lens of 21st Century interoperability—specifically, the teamwork required between AI and HI.
There is a danger that a falling signal-to-noise ratio in ever-larger data sources may lead to analysis paralysis. AI can, rapidly and at low cost, integrate and analyze searchable electronic health records and treatment data, social media, and information coming from wearable health trackers, not only to give healthcare providers real-time, data-driven information about how to design and update treatment programs for the needs and habits of their patients, but also to provide regulators not just with more data, but with better data, in context, faster.
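As a purely hypothetical illustration of that kind of integration (the patients, field names, and review rule below are invented, not drawn from any actual clinical or regulatory system), joining a routine clinical measurement to wearable data can change how a single number reads:

```python
# Illustrative only: joining hypothetical EHR and wearable-tracker extracts
# by patient so a single reading can be interpreted in context.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "systolic_bp": [148, 126, 161],          # latest clinic reading, mmHg
    "on_antihypertensive": [True, False, True],
})
wearables = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "avg_daily_steps": [2300, 9800, 1500],   # 30-day average from a tracker
})

combined = ehr.merge(wearables, on="patient_id")

# A deliberately crude, invented "needs review" flag: elevated blood pressure
# despite treatment, paired with low measured activity.
combined["flag_for_review"] = (
    (combined["systolic_bp"] >= 140)
    & combined["on_antihypertensive"]
    & (combined["avg_daily_steps"] < 5000)
)
print(combined[["patient_id", "flag_for_review"]])
```

The value is not in the toy rule but in the join: a clinic reading that looks routine on its own can read very differently once activity data sit beside it.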
Of course, using AI rashly is dangerous. Speed kills. Rather than improving the global system and the therapeutic outcomes of patients, the improper, injudicious, and ill-informed use of AI will only result in the decreased use of crucial professional experience and expertise. Per Malcolm Gladwell, “The key to good decision making is not knowledge. It is understanding. We are swimming in the former. We are desperately lacking in the latter.”[20]
In the optimal situation, AI frees both physician and pharmacist from many time-consuming tasks, creating better efficiency, and allowing them to spend more time where expertise (nurtured by experience) is needed. It maximizes time, talent, and opportunity.
A good parallel is the growth of freestyle chess, which allows humans unrestricted use of computers during games. The team of human and computer is called a “centaur”—the mythical creature that is half-man, half-horse.[21] Which half of this beast do you want to be?
According to former world chess champion Garry Kasparov, “weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”[22]
HI via FDA Commissioner Dr. Robert Califf
FDA’s embrace of AI starts at the top. According to Commissioner Califf:
“AI has the potential to enable major advances in the development of more effective, less risky medical products and more nutritious food. To give you an idea of its impact, consider that since 1995 the FDA has received over 300 submissions for drugs and biological products with AI components, and more than 700 submissions for AI-enabled devices.
The FDA is also exploring the use of AI technologies to facilitate our internal operations and regulatory processes, which could benefit both agency experts and the public by streamlining workflows and facilitating high quality, novel medical products more quickly reaching the patients who need them.
At its most basic, AI can strengthen our operational systems and bring increased productivity, opportunity, and efficiency to our work, helping us process and analyze complex data faster, including data from medical imaging or digital health technologies, for example. We can free up staff by automating repetitive administrative functions and enable them to focus on more complex meaningful activities to weigh the evidence and arrive at better decisions. Our workforce should also have more time to explain those decisions to the public and learned intermediaries in the biomedical and clinical world.”[23]
Conclusion
There are companies out there today inventing a new generation of computational technologies that can tell doctors what will happen within a cell when the DNA is altered by genetic variation, whether it is natural or therapeutic. Imagine the predictive capabilities for pharmacovigilance. For any of this to work, we must understand and embrace the distinct but mutually supportive abilities of AI and HI.
AI isn’t a healthcare magic bullet because machines are terrible risk-takers and have no capacity to make leaps of faith. Humans, by comparison, are risk-takers because we have a sense of consciousness and intuition that machines don’t possess. In other words, we have a soul. Maybe we should be talking about religion after all?
[1] Klaus Schwab, The Fourth Industrial Revolution: What It Means, How to Respond, World Econ. Forum (Jan. 16, 2016), https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/.
[2] Artificial Intelligence, Wikipedia (May 9, 2024), https://en.wikipedia.org/wiki/Artificial_intelligence.
[3] Artificial Intelligence and Medical Products, U.S. Food & Drug Admin. (Mar. 20, 2024), https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-medical-products.
[4] Peter King & Andrew Arlig, Peter Abelard, Stan. Encyclopedia of Phil. (Aug. 12, 2022), https://plato.stanford.edu/entries/abelard/.
[5] Sic et Non, Wikipedia (Apr. 8, 2024), https://en.wikipedia.org/wiki/Sic_et_Non.
[6] Martin Jenkins, Peter Abelard (1079-1142), Phil. Now (2019), https://philosophynow.org/issues/134/Peter_Abelard_1079-1142.
[7] Przemek Chojecki, How Big is Big Data?, Medium (Jan. 13, 2019), https://towardsdatascience.com/how-big-is-big-data-3fb14d5351ba.
[8] Frank M. Snowden, Emerging and Reemerging Diseases: A Historical Perspective, 225 Immunological Rev. 9 (Sept. 19, 2008), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7165909/.
[9] Digital Health Technologies (DHTs) for Drug Development, U.S. Food & Drug Admin. (Dec. 26, 2023), https://www.fda.gov/science-research/science-and-research-special-topics/digital-health-technologies-dhts-drug-development.
[10] Shalini S., et al., Drug Utilization Studies – An Overview, 3 Int’l J. of Pharm. Scis. & Nanotechnology 803 (May 2010), http://ijpsnonline.com/index.php/ijpsn/article/view/470.
[11] Peter J. Pitts, Towards Meaningful Engagement for the Patient Voice, 12 The Patient 361 (June 5, 2019), https://link.springer.com/article/10.1007/s40271-019-00366-x.
[12] Peter J. Pitts & François Houÿez, Patient Contribution to the Development and Safe Use of Medicines During the Covid-19 Pandemic, 55 Therapeutic Innovation & Regul. Sci. 247 (Oct. 27, 2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7590989/.
[13] U.S. Food & Drug Admin., Plan for Issuance of Patient‐Focused Drug Development Guidance Under 21st Century Cures Act Title III Section 3002 (May 2017), https://www.fda.gov/files/about%20fda/published/Plan-for-Issuance-of-Patient%E2%80%90Focused-Drug-Development-Guidance.pdf.
[14] Ben Adams, FDA’s New Drugs Director Jenkins Retires, Months After Criticizing Regulator, RAPS reports, Fierce Biotech (Dec. 5, 2016), https://www.fiercebiotech.com/biotech/fda-new-drugs-director-jenkins-retires-months-after-criticising-regulator-raps.
[15] Ed Silverman, FDA Panel Votes Against Sarepta’s Drug for Duchenne Muscular Dystrophy, Stat (Apr. 25, 2016), https://www.statnews.com/pharmalot/2016/04/25/fda-panel-sarepta-muscular-dystrophy/.
[16] Quality-Adjusted Life Year, Wikipedia (Mar. 7, 2024), https://en.wikipedia.org/wiki/Quality-adjusted_life_year.
[17] Christopher McCabe, Karl Claxton & Anthony J. Culyer, The NICE Cost-Effectiveness Threshold: What it is and What that Means, 26 PharmacoEconomics 733 (2008), https://pubmed.ncbi.nlm.nih.gov/18767894/.
[18] Vincent Conitzer, Natural Intelligence Still Has Its Advantages, Wall St. J. (Aug. 28, 2018), https://www.wsj.com/articles/natural-intelligence-still-has-its-advantages-1535495256.
[19] Major Themes: Health and Well-Being, World Health Org., https://www.who.int/data/gho/data/major-themes/health-and-well-being (last visited May 10, 2024).
[20] Blink: The Power of Thinking Without Thinking, Wikipedia (Jan. 22, 2024), https://en.wikipedia.org/wiki/Blink:_The_Power_of_Thinking_Without_Thinking.
[21] Freestyle Chess Study Group, Univ. at Buffalo, https://cse.buffalo.edu/~regan/chess/fidelity/FreestyleStudy.html (last visited May 10, 2024).
[22] Andrew McAfee, Did Garry Kasparov Stumble Into a New Business Process Model?, Harvard Bus. Rev. (Feb. 18, 2010), https://hbr.org/2010/02/like-a-lot-of-people.
[23] Harnessing the Potential of Artificial Intelligence, U.S. Food & Drug Admin. (Mar. 15, 2024), https://www.fda.gov/news-events/fda-voices/harnessing-potential-artificial-intelligence.
Update Magazine
Summer 2024