With modeling and simulation, data can be leveraged to provide important insights on product safety and effectiveness. Modeling and simulation can offer a better understanding of existing clinical trial data as well as provide supportive data for future trial designs and decisions. Sometimes modeling and simulation can even be used to avoid a clinical trial altogether. Modeling and simulation can be used in many different situations, such as:
- Predicting the optimal dose in adult patients from Phase 1-3 and in the product label
- Designing Phase 1-3 clinical trials
- Assessing the probability that a new compound will have better efficacy or safety compared to the gold standard
- Predicting the dose in patient subgroups (e.g., pediatrics, elderly, renal impairment)
- And more
Here we provide some examples and situations where different types of modeling and simulation can be used to maximize the value of clinical trials that are already planned or completed. In some examples, we highlight how certain clinical trials can potentially be avoided by combining modeling and simulation with existing data. So, the big question is, “how can modeling and simulation be applied to help your development program?”
Using Population PK and Exposure-Response Modeling to Pick the Optimum Dose and Design for Phase 2 and 3
There are many Phase 2 and Phase 3 failures due to lack of efficacy or excess toxicity. Many of these failures may stem from doses that are either too low to produce efficacy or so high that they compromise safety. Using population pharmacokinetic (popPK) and exposure-response modeling to select the doses and/or dose range to be studied in Phase 2/3 can improve the probability of success of the trial, thereby reducing the overall number of trials by eliminating those with a low probability of success.

Often, a target product profile states that DRUG X will have better efficacy than the gold standard. A full program is developed, but the Phase 2/3 trials fail. Before any of these studies are started, popPK and exposure-response models built from gold standard or placebo data can answer whether the target product profile is even achievable, through clinical trial simulation (e.g., developing a model, assuming 100 subjects per arm, simulating 1,000 clinical trials, and analyzing them as if they were “real” clinical trials to estimate the probability of success). These “virtual” clinical trials can be run assuming a parallel design, a placebo run-in, or a randomized withdrawal design with various numbers of subjects, at a much lower cost than actually conducting the study. There are many examples of compounds for which clinical trial simulations showed that the initial target product profile was not feasible, but a different target (e.g., a better safety profile) was (a). Other clinical trial simulations have shown that the initial target was only feasible with a very different clinical trial design (e.g., placebo run-in, randomized withdrawal) (b, c). Using modeling and simulation early in drug development can thereby avoid clinical trials that are unlikely to be successful.
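The “simulate 1,000 trials and analyze them as if they were real” idea can be sketched in a few lines. This is a deliberately simplified illustration, not a real pharmacometric model: the effect sizes, standard deviation, and one-sided superiority test below are all hypothetical assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical assumptions (not from any real program):
# the gold standard improves an endpoint by 10 units (SD 20);
# the target product profile claims the new drug improves it by 14.
N_PER_ARM = 100      # subjects per arm
N_TRIALS = 1000      # number of simulated trials
EFFECT_STD, EFFECT_NEW, SD = 10.0, 14.0, 20.0

successes = 0
for _ in range(N_TRIALS):
    std_arm = rng.normal(EFFECT_STD, SD, N_PER_ARM)
    new_arm = rng.normal(EFFECT_NEW, SD, N_PER_ARM)
    # Analyze each simulated trial as if it were real: two-sample z-test
    diff = new_arm.mean() - std_arm.mean()
    se = np.sqrt(std_arm.var(ddof=1) / N_PER_ARM +
                 new_arm.var(ddof=1) / N_PER_ARM)
    if diff / se > 1.96:  # one-sided superiority at ~2.5%
        successes += 1

print(f"Estimated probability of success: {successes / N_TRIALS:.2f}")
```

Rerunning the same loop with a different number of subjects per arm, or with a randomized withdrawal design, is how the “virtual” trial designs described above are compared before any real study is run.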
Using Population PK Modeling to Avoid a Clinical Pharmacology Study Like Renal Impairment
The kidneys play a critical role in eliminating many (but not all) drugs from the body. Therefore, it is always necessary to consider what changes are expected in patients with renal impairment compared to patients with normal renal function. A standard clinical pharmacology study can be conducted in subjects with varying degrees of renal impairment or with end-stage renal disease, as per the FDA guidance. However, if the drug is not excreted renally, or if the disease being treated includes many patients with renal impairment and renal impairment will not be an exclusion criterion in the Phase 2/3 trials, then there is a different way to get the needed information. With extra planning, proper study design, and popPK modeling approaches, it is possible to avoid the renal impairment clinical pharmacology study altogether and still have reasonable dosing recommendations for the product label. Using pooled, cross-study data collected during Phase 2/3 studies from patients with varying degrees of renal impairment, popPK modeling can be used to understand the impact of renal impairment on drug exposure. Simulations can then be performed to evaluate dosing recommendations for patients with mild, moderate, or severe renal impairment. If an exposure-response model for safety is available, it can also be used in simulations to strengthen dosing recommendations. The key factors for this approach to successfully eliminate the need for a renal impairment clinical pharmacology study are 1) understanding the epidemiology of renal impairment in the disease state, 2) understanding the mechanism of drug elimination and any safety implications, and 3) ensuring open inclusion/exclusion criteria for the Phase 2/3 trials.
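As a rough illustration of the simulation step, the sketch below assumes a hypothetical popPK covariate model in which clearance scales with creatinine clearance (CrCL) as a power function, with log-normal between-subject variability. Every parameter value here is invented for demonstration; in practice the model and covariate effects would be estimated from the pooled Phase 2/3 data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical popPK covariate model parameters:
CL_TYP = 10.0     # typical clearance (L/h) at CrCL 100 mL/min
CRCL_EXP = 0.6    # power exponent on CrCL
OMEGA = 0.3       # SD of log-normal between-subject variability
DOSE = 100.0      # dose (mg)

def simulate_auc(crcl, n=2000):
    """Simulate AUC (mg*h/L) for n virtual subjects at a given CrCL."""
    cl = CL_TYP * (crcl / 100.0) ** CRCL_EXP * np.exp(rng.normal(0, OMEGA, n))
    return DOSE / cl  # AUC = Dose / CL for a linear drug

for label, crcl in [("normal", 100), ("mild", 75),
                    ("moderate", 45), ("severe", 20)]:
    auc = simulate_auc(crcl)
    ratio = np.median(auc) / (DOSE / CL_TYP)
    print(f"{label:9s} CrCL {crcl:3d} mL/min: "
          f"median AUC {np.median(auc):5.1f}, ratio vs normal {ratio:4.2f}")
```

If the simulated exposure ratio in, say, severe impairment exceeds a pre-defined safety margin, a dose reduction can be proposed for the label without a dedicated renal impairment study.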
The same concepts can be used to eliminate the need for a separate elderly study, obesity study, or a particular drug interaction study if: 1) these patient populations or drug interactions are commonly found in Phase 2/3 subjects, 2) the Phase 2/3 inclusion/exclusion criteria are more open, and 3) the safety implications are well understood. It is important to note that this streamlined approach is not for every drug program, so prospective clinical pharmacology planning to determine the feasibility and cost‑effectiveness for your specific program is important.
Conducting Concentration-QT Analysis Instead of a Standalone Thorough QT Study
Another example where a little bit of extra planning can help tremendously is concentration-QT (C-QT) analysis. This type of modeling can help you avoid a future standalone thorough QT (TQT) trial. The FDA requires drug developers to conduct studies to determine the risk of a drug prolonging the QT interval, and typically this assessment is made by conducting a TQT clinical study. TQT studies can cost millions of dollars and take upwards of a year to complete. Using a C-QT analysis can help establish the risk of QT prolongation and potentially avoid the expense of conducting a full TQT study. The biggest advantage of using C-QT analysis, besides saving resources, is that one can predict the QTc effects early in drug development using Phase 1 data and potentially reduce the ECG burden later in drug development. In addition, C-QT analysis allows you 1) to predict QTc effects with doses and formulations that have not been directly evaluated in clinical studies, 2) to predict QTc effects in specific populations, 3) to predict QTc effects with drug-drug interactions, or 4) to clarify ambiguous results based on small numbers of subjects in Phase 1. There are two important documents that govern C-QT analysis: the ICH E14 Questions and Answers (R3) document from December 2015 and the scientific white paper by Christine Garnett et al. Some key points to consider when planning for a C-QT analysis include:
- Serial matched ECGs and concentrations: “rich sampling” instead of “sparse sampling”
- Timing: you need samples around the time of maximum concentration of the parent drug, as well as around the time of maximum concentration of metabolites (if known)
- Robust ECGs: triplicate ECGs (at baseline, at a minimum), centrally read with manual adjudication
- Range of doses: ideally, 2x coverage of the worst-case clinical exposure, including coverage for expected increases in Cmax from food, drug-drug interactions, or organ impairment (but this is not always possible)
- Placebo data: inclusion of placebo data allows a small effect on QT to be ruled out
Single ascending dose and multiple ascending dose studies are great for collecting data suitable for C‑QT analysis. It is important to escalate as high as possible in these studies since some of the above‑mentioned requirements for a successful C-QT analysis, especially worst-case clinical exposure, are not always known prior to starting these studies. While C-QT analysis will never completely remove the need for you to continue to monitor ECGs, it can reduce the level of ECG monitoring in later studies. It is important to remember that if you have collected the right data, C-QT analysis can be used as an alternative to conducting a full TQT study.
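A minimal sketch of the core C-QT calculation is shown below, assuming the commonly used linear relationship between concentration and placebo-corrected, baseline-adjusted QTc change (ddQTc). The simulated data, slope, and Cmax value are all hypothetical, and a real analysis would use a pre-specified linear mixed-effects model per the Garnett white paper; the point here is only the logic of fitting the slope and checking whether a 10 ms effect at the worst-case Cmax can be excluded.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated Phase 1 data (hypothetical): matched concentrations (ng/mL)
# and placebo-corrected, baseline-adjusted QTc changes (ms).
n = 300
conc = rng.uniform(0, 500, n)
true_slope, true_intercept = 0.012, 1.0  # ms per ng/mL (invented)
ddqtc = true_intercept + true_slope * conc + rng.normal(0, 6, n)

# Fit the linear C-QT model by least squares
X = np.column_stack([np.ones(n), conc])
beta, *_ = np.linalg.lstsq(X, ddqtc, rcond=None)
resid = ddqtc - X @ beta
sigma2 = resid @ resid / (n - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)

# Predict mean ddQTc at the worst-case Cmax and its upper 95% CI bound
cmax = 400.0  # hypothetical worst-case clinical exposure
x0 = np.array([1.0, cmax])
pred = x0 @ beta
upper = pred + 1.96 * np.sqrt(x0 @ cov @ x0)
print(f"Predicted ddQTc at Cmax: {pred:.1f} ms (upper bound {upper:.1f} ms)")
print("Effect >10 ms excluded" if upper < 10 else "cannot exclude >10 ms effect")
```

The regulatory decision hinges on that upper confidence bound: if it stays below 10 ms across the relevant exposure range, the C-QT analysis can stand in for a TQT study.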
Modeling for Labeled Dose Not Directly Tested in a Clinical Trial
Modeling and simulation can directly impact the drug label, even as far as labeling dose regimens that were not directly tested in clinical studies. Here’s a case study of a labeled dose coming directly from modeling and simulation using PK and time-to-event modeling. You can model the “time to an event” for any number of situations. Examples of time-to-event endpoints include time to the occurrence of a specific disease or symptom, time to the alleviation of a certain symptom, or time to the occurrence of any other endpoint of interest. The underlying model for time-to-event analysis is a hazard model, which is statistically very different from what you would apply in other types of PK/PD modeling. Here, the hazard function is the instantaneous potential that the event will occur per unit time, given that the event has not occurred in the individual up to that point. Most commonly, you would use a Cox proportional hazards model or a parametric hazard model, which relates the likelihood of the event to time and covariates. In one example using an antiviral, the hazard of symptom alleviation depended on the duration over which the drug concentration could inhibit viral replication. In this case study, time-to-event modeling combined with popPK modeling enabled inclusion of a dose regimen on the label that was not tested in a clinical study, saving time and resources and showing that a good model can sometimes be just as effective as conducting an entirely new study.
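To make the hazard idea concrete, the sketch below simulates time to symptom alleviation under a simple parametric (constant, i.e., exponential) hazard that increases with the fraction of the dosing interval during which concentration stays above the level needed to inhibit viral replication. All numbers are hypothetical and the model is far simpler than the antiviral case study described above; it only illustrates how exposure can drive a time-to-event endpoint.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parametric hazard model parameters:
BASE_HAZARD = 0.10   # events/day with no drug effect
EFFECT = 2.0         # hazard multiplier at full-time target coverage

def simulate_times(frac_above_target, n=5000):
    """Draw times to symptom alleviation (days) for n virtual subjects."""
    hazard = BASE_HAZARD * (1 + EFFECT * frac_above_target)
    # For a constant hazard h, time-to-event is exponential with rate h
    return rng.exponential(1.0 / hazard, n)

for frac in (0.0, 0.5, 1.0):
    t = simulate_times(frac)
    print(f"target coverage {frac:.0%}: "
          f"median time to alleviation {np.median(t):.1f} days")
```

Once the hazard is linked to exposure this way, a popPK model can supply the exposure for an untested regimen, and the combined model predicts its time-to-event outcome, which is the logic behind labeling a regimen without a dedicated trial.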
Using In Vitro In Vivo Correlation Instead of Conducting a Clinical Bioequivalence Study
In vitro in vivo correlation (IVIVC) is a predictive mathematical model that describes the relationship between an in vitro property of a dosage formulation and a relevant in vivo exposure. IVIVC expresses how drug release in a dissolution apparatus translates to the amount of drug that enters the bloodstream following administration. Importantly, IVIVC models can be used to support a biowaiver, which allows Sponsors to waive an in vivo bioavailability and/or bioequivalence (BA/BE) study requirement, particularly for modified release products. In this case, the dissolution test serves as a surrogate for human BE studies when there are manufacturing process or site changes post-approval. IVIVC can also help pick the best modified release formulation for progression. Different levels of IVIVC can be established (Levels A, B, and C, and multiple Level C). A Level A relationship is required to waive BE studies for a modified release product. This is generally a point-to-point relationship between in vitro dissolution and the in vivo input rate. Data from clinical BA studies are required to establish this relationship, typically from cross-over studies in the fasted state utilizing at least 2 different formulations (one fast releasing formulation and one slow releasing formulation). Use of an IVIVC for a biowaiver can reduce the number of BE studies needed during development and post-approval. IVIVC may be sufficient for biowaivers for:
- Manufacturing site changes
- Changes in non-release and release controlling excipients
- Process changes
- Complete removal of or replacement of non-release controlling agents
- Lower strengths that are compositionally proportional or qualitatively the same
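The Level A “point-to-point” idea can be illustrated with a toy calculation: pair the fraction dissolved in vitro at each time point with the fraction absorbed in vivo (e.g., obtained by deconvolution of the plasma profile) and check how close the relationship is to the identity line. The dissolution and absorption fractions below are invented for illustration only.

```python
import numpy as np

# Hypothetical paired observations at matching time points (1-12 h):
frac_dissolved = np.array([0.15, 0.30, 0.55, 0.72, 0.85, 0.97])
frac_absorbed  = np.array([0.12, 0.27, 0.52, 0.70, 0.83, 0.95])

# Fit the point-to-point linear relationship
slope, intercept = np.polyfit(frac_dissolved, frac_absorbed, 1)
pred = intercept + slope * frac_dissolved
ss_res = np.sum((frac_absorbed - pred) ** 2)
ss_tot = np.sum((frac_absorbed - frac_absorbed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.3f}")
```

A slope near 1 with a high R^2 is the kind of evidence that supports a Level A correlation; the validated model can then translate the dissolution profile of a changed formulation into a predicted in vivo input, in place of a new BE study.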
Optimizing Study Design with Clinical Trial Simulation Examples
The goal of a clinical trial simulation is to study the effect of a drug in a virtual patient population to help understand the likely clinical impact of unknown factors such as recruitment issues, dropout rate, or treatment effect. The idea is to increase efficiency across all stages of clinical development by addressing questions such as:
- Dose and scheduling determination: What is the predicted exposure and/or effect at different doses or regimens?
- Long-term study scenarios: What is the range of possible outcomes prior to initiating studies that will be resource and time intensive?
- Optimizing study design and sample size: What is the probability of success with different numbers of subjects or cohorts? How does that change if the dose- or concentration-response relationship looks different?
- Impact of variability: What is the impact of missed samples, subject drop out, or compliance on different factors?
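As a sketch of the last bullet, the example below simulates how dropout erodes the probability of success at a fixed planned sample size. The effect size, variability, and sample size are hypothetical, and real simulations would use the program's own popPK/PD model rather than simple normal draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial assumptions:
EFFECT, SD = 5.0, 15.0   # treatment effect and SD of the endpoint
N_PLANNED = 120          # planned subjects per arm
N_SIMS = 500             # simulated trials per scenario

def prob_success(dropout):
    """Estimate probability of a positive trial at a given dropout rate."""
    wins = 0
    for _ in range(N_SIMS):
        n = rng.binomial(N_PLANNED, 1 - dropout)  # completers per arm
        placebo = rng.normal(0.0, SD, n)
        active = rng.normal(EFFECT, SD, n)
        se = np.sqrt(placebo.var(ddof=1) / n + active.var(ddof=1) / n)
        wins += (active.mean() - placebo.mean()) / se > 1.96
    return wins / N_SIMS

for d in (0.0, 0.2, 0.4):
    print(f"dropout {d:.0%}: probability of success {prob_success(d):.2f}")
```

The same loop, with the dropout rate swapped for a different effect size, sample size, or number of cohorts, answers the other questions in the list above before any subjects are enrolled.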
Recently, the FDA and EMA have both included modeling and simulation among their highest priorities to support efficient drug development and facilitate regulatory decision making. Modeling and simulation can and should be applied throughout all stages of development. There are many opportunities to conduct more efficient trials or avoid additional trials altogether. Allucent has an expert team of pharmacometricians who build models tailored to guide your drug development program’s decisions. Our team can create a custom, model-informed drug development (MIDD) plan that will facilitate the selection of the optimal model for your compound and disease area. Contact us to learn how Allucent’s modeling and simulation team can help you gain insights that move your clinical development program forward.
- a. Leonowens C, Lee J, Therrien JP, Ambery C, Quint D, Price M, Schmith G. Early Clinical Development of GSK2245035B for Dermal Application: Use of Probabilistic Risk Analysis Approaches to Address Uncertainty. American Conference on Pharmacometrics (ACoP5), Las Vegas, October 12-15, 2014. Abstract M-039.
- b. Lovern, Schmith, Dukes, McSorley. PAGE 15 (2006). Abstract 978.
- c. The future is now: Model-based clinical trial design for Alzheimer’s disease.