Organs on a Chip! FDA’s Predictive Toxicology Roadmap

by Judy Rein, Editor in Chief, FDLI

Human tissue on a computer chip that fully replicates the structure and function of organs sounds like the stuff of science fiction. Yet industry is already starting to adopt this new technology, and it is quickly making its way into the regulatory armature for assessing the safety and risk of new compounds. These assay systems use cultured cells housed in three-dimensional structures built with computer chip manufacturing techniques, and they can provide highly sensitive and specific toxicology analysis. Predictive toxicology is also gaining ground through machine learning based on data mining and sophisticated systems modeling. The future is here, and momentum appears to be growing to integrate these emerging technologies into the safety and risk assessment toolset for all FDA-regulated products.

The Roadmap

On September 12, 2018, FDA held a public hearing to receive stakeholder feedback on the agency’s proposed Predictive Toxicology Roadmap. The main goal of the Roadmap is to “invigorate and strengthen FDA’s long commitment to promoting the development and use of new technologies to better predict human, animal, and environmental responses to a wide range of substances relevant to FDA’s regulatory mission.”

Released in December 2017, the Roadmap is the product of ongoing efforts by the Toxicology Working Group at FDA (Working Group). Composed of senior-level toxicologists from all the agency’s centers, the Working Group is addressing the need for a comprehensive strategy to evaluate these new methods so that they can be qualified and accepted by regulatory authorities. As the Roadmap notes, “FDA’s six product centers have very different legal authorities for evaluating product safety for toxicity.” Although the legal requirements are not uniform across the centers, there is recognition that a collaborative approach to developing these resources can help identify and overcome impediments to regulatory acceptance across product areas.

FDA Commissioner Dr. Scott Gottlieb opened the hearing by expressing the agency’s support for moving this emerging technology from the realm of research to meaningful applications relevant to all regulated products, often across the life cycle of product development. He alluded to agency work on intestines on a chip, “gut chips” as they are called at FDA, that have potential to be effective in emergency countermeasures, such as assessing the effects of radiation exposure.

Potential Benefits

As Dr. Gottlieb observed, “Results in animals are not always predictive of results in humans.” The conventional use of animals as proxies for humans in toxicity analyses may soon be eclipsed by preclinical testing of human tissue and the predictive heft of in silico computational biology. This is good news for animal welfare organizations, which have joined forces with developers in seeking more accurate ways to engage in preclinical drug testing.

Jan Lichtenberg of InSphero AG, a Swiss company that is developing a liver assay made up of tissue the size of a grain of sand, stated that the technology is able to recreate the function of a liver over an extended period, and that the sensitivity of this test is greater than that of existing animal tests by a factor of two.

The range of technologies under development includes customized cell cultures but also embraces genomics, bioinformatics, and gene editing, among a broad swath of “in silico” methods that provide predictive analysis of toxicity as well as potential drug targets. Artificial intelligence, combined with data aggregation and sorting and high-content data analysis, can lead to much more effective high-throughput screening of drug candidates, creating a preclinical environment with faster, broader, and deeper analysis of compounds.
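To make the idea concrete, the minimal sketch below shows how a machine-learning model might triage a screening library by predicted toxicity risk. It is only an illustration: the descriptors, training data, and model choice are hypothetical and do not represent any specific tool discussed at the hearing.

```python
# Minimal sketch: rank a screening library by predicted toxicity risk.
# All descriptors, labels, and data below are hypothetical stand-ins for
# the curated assay libraries a real high-throughput pipeline would use.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: one row per compound, columns are molecular
# descriptors (e.g., lipophilicity, molecular weight, polar surface area).
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # 1 = toxic

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an untested library and surface the highest-risk compounds first,
# so likely failures are flagged before animal or clinical work begins.
X_library = rng.normal(size=(10, 3))
risk = model.predict_proba(X_library)[:, 1]
for idx in np.argsort(risk)[::-1][:5]:
    print(f"compound_{idx:03d}  predicted toxicity risk = {risk[idx]:.2f}")
```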

Opportunity Meets Hesitancy

According to Bob Young of MilliporeSigma, the unmet needs in drug development that the new methods can address include earlier prediction of failure, drug-drug interactions, data-sharing among developers of failed drug compounds, and improved testing for carcinogenicity. However, as Lichtenberg and others noted, industry has been somewhat reluctant to fully adopt the new technology. Lichtenberg cited research indicating that only 20 percent of the top 50 pharmaceutical companies are using it routinely. Although industry applies the new methods much more widely at the screening stage for therapeutic targets, where no regulatory submissions are at issue, than for toxicity testing, where regulatory uncertainty prevails, some of the same barriers constrain adoption in both settings.

Speakers at the hearing agreed that the core hurdle for routine use of predictive toxicology is the lack of industry standards for validation parameters, such as reference libraries for compounds, end points, and exposure times. Significantly, uncertainty about acceptance/rejection criteria from FDA and other regulators keeps pharmaceutical innovators in a conservative stance, reluctant to experiment with new approaches without adequate assurance of regulatory acceptance.

Additionally, as one audience commenter noted, the lack of industry consensus and established guidelines for sensitivity and specificity hampers chip developers in this emerging space, especially in tissue areas such as neurons, which are less well-characterized than the liver. There is currently a lack of clarity as to whether guidelines can be established that apply to all tissue models, or whether they will necessarily vary by tissue type.

Donna Mendrick, the National Center for Toxicological Research (NCTR) representative in the Working Group, raised the critical question of when to “freeze” the parameters for regulatory acceptance. Given the highly dynamic nature and pace of innovation, when is the optimal time to set standards without risking the creation of impediments to further innovation?1

Chipping Away at the Impediments to Using Predictive Tools

Dr. Kevin Cross of Leadscope, a contractor working with FDA on developing computational tools, noted that the agency is not just in the position of adopting emerging methods; regulatory guidance in this field will also drive technological change. “Guidance does not just spur use, it also spurs development.” He echoed other commenters in emphasizing the value of FDA deploying a “toe in the water” approach to guidance. If the agency can get clarity on some aspects of these applications even before it is ready to define the entirety of the regulatory scheme, this could be a useful catalyst to unleash development in industry.

Dr. Cross called for protocols that (1) standardize in silico tool use and interpretation; (2) reduce the burden on both industry and regulators to provide justification for the use of these methods; and (3) lead to results generated, recorded, communicated, and archived in a uniform, consistent, and reproducible manner for regulatory use.

Elizabeth Baker of the Physicians Committee for Responsible Medicine suggested improving transparency through better communication mechanisms in the early stages of drug development. For example, she indicated that denials of proposed new technologies do not always come with the substantive review that could aid developers in improving their tools. Moreover, she called for more clearly identified interlocutors within the agency with whom sponsors could interact directly in the submission process. Baker also suggested that the annual reports from the Working Group to the Chief Scientist be made public.

Not Just for Drugs

Although much of the attention has been focused on the dramatic emergence of drug screening and toxicity analysis technologies, the relevance of predictive modeling in other product areas is also substantial. For example, at the public hearing, tobacco industry representatives from Altria and Imperial Brands spoke of the role of predictive toxicology methods for tobacco product submissions.

Specifically, Dr. Kimberly Ehman of Altria Client Services observed that regulatory submissions for demonstrating safety in substantial equivalence (SE) reports and premarket tobacco applications (PMTAs) often involve new constituents with no available toxicity data for risk assessment. She argued that FDA’s Center for Tobacco Products (CTP) should adopt the approaches used by the other FDA centers for determining the threshold of toxicological concern (TTC), the exposure level below which a chemical entity is thought to pose very little risk to human health. Gary Phillips of Fontem Ventures expressed the need for the agency to identify which of the many available assays it wants to see in applications.
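To illustrate the TTC concept, the sketch below compares an estimated daily exposure to a constituent against threshold values commonly cited for Cramer structural classes. The constituent, its class assignment, and the exposure figure are hypothetical, and the thresholds are included only as illustrative assumptions, not as values endorsed by CTP or any other center.

```python
# Illustrative TTC check: compare an estimated daily exposure to a
# threshold of toxicological concern. The thresholds are commonly cited
# Cramer class values (micrograms per kg body weight per day); the
# constituent, its class, and the exposure estimate are hypothetical.
TTC_UG_PER_KG_BW_PER_DAY = {
    "Cramer I": 30.0,   # low structural concern
    "Cramer II": 9.0,   # intermediate concern
    "Cramer III": 1.5,  # high structural concern
}

def below_ttc(exposure_ug_per_day: float, body_weight_kg: float, cramer_class: str) -> bool:
    """Return True if the estimated exposure falls below the TTC for the class."""
    threshold_ug_per_day = TTC_UG_PER_KG_BW_PER_DAY[cramer_class] * body_weight_kg
    return exposure_ug_per_day < threshold_ug_per_day

# Hypothetical new constituent with no toxicity data of its own:
# 50 ug/day vs. 1.5 ug/kg/day * 60 kg = 90 ug/day threshold -> below TTC.
print(below_ttc(exposure_ug_per_day=50.0, body_weight_kg=60.0, cramer_class="Cramer III"))
```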

There is precedent at the agency for applying predictive toxicology tools successfully. The Center for Food Safety and Applied Nutrition (CFSAN) has already made use of computational toxicology tools such as quantitative structure-activity relationship (QSAR) analysis for assessment of hazards in food contact substances and food additives.
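As a purely illustrative gloss on the QSAR idea, the sketch below fits a toy linear model relating two molecular descriptors to a toxicity endpoint and uses it to predict the endpoint for an untested compound. The descriptor values and endpoints are hypothetical; regulatory QSAR work relies on curated experimental datasets and validated modeling platforms.

```python
# Toy QSAR sketch: fit a linear model relating molecular descriptors to a
# toxicity endpoint, then predict the endpoint for an untested structure.
# Descriptor values and endpoints are hypothetical illustrations only.
import numpy as np

# Columns: [logP (lipophilicity), molecular weight / 100]; one row per compound.
descriptors = np.array([
    [1.2, 1.8],
    [2.5, 2.3],
    [0.4, 1.1],
    [3.1, 3.0],
    [1.9, 2.0],
])
# Measured endpoint for each compound, e.g., log(1/LC50) from past assays.
activity = np.array([2.1, 3.4, 1.2, 4.0, 2.8])

# Ordinary least squares with an intercept: activity ~ b0 + b1*logP + b2*(MW/100).
X = np.column_stack([np.ones(len(descriptors)), descriptors])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

# Predict the endpoint for a new compound from its descriptors alone.
new_compound = np.array([1.0, 2.0, 2.1])  # [intercept term, logP, MW/100]
print("predicted activity:", float(new_compound @ coef))
```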

The future is here, and it is richly innovative and reaching broadly across FDA. Dr. Gottlieb referenced the phrase “medical device on a chip,” coined in recent research at the Center for Devices and Radiological Health (CDRH) proposing that the organ-on-a-chip approach be applied to testing medical device components.2

This ongoing collaborative effort is complemented by coordination with other U.S. agencies and international entities, such as the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH). FDA’s Toxicology Working Group is part of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), an initiative made up of 16 federal agencies that share data and information on toxicological and safety testing. The overarching goal for industry and regulators alike is to establish confidence in the emerging technology based on the agency’s informational needs.

  1. This problem is broadly bedeviling to regulation of methods emerging from artificial intelligence and machine learning, since by definition they are constantly changing.
  2. Allan Guan et al., Medical Devices on Chips, Nature Biomedical Engineering, March 9, 2017.