How AI could help dramatically change the drug discovery process

By Jonathan Brotchie

A couple of weeks ago, at the 2nd Annual Protein Degradation for CNS Summit in Boston, I had the great pleasure of joining a roundtable discussion on the integration of AI in targeted protein degradation (TPD) drug discovery, particularly in pharmacokinetics and pharmacodynamics (PKPD) for neurodegenerative diseases. The conversation helped crystallize my own thinking about how AI will affect what we do at Atuka and, more generally, where we see the most immediate benefits in drug development for Parkinson’s. I think it’s good news for Atuka and for Parkinson’s.

Of course, it’s not just me thinking about AI. Over the past year we’ve seen a batch of new studies and media reports focusing welcome attention on the potential of AI-assisted drug development. As we are all too aware, pharmaceutical development today comes with an enormous expenditure of time and money: an average of ten years and $2 billion (USD) per drug according to The Economist, with approximately one-third of that total associated with the preclinical phase. It’s an investment that carries enormous business risk, given that only one in ten drugs that begin preclinical development even makes it as far as Phase 1, let alone becomes available to patients.

The upside of integrating AI in the drug discovery process is potentially game-changing, though it’s also important to be clear-eyed and not get too caught up in the hype. My sense is that AI could, on average, bring down the time it takes to develop a new drug, maybe shaving a couple of years off the timelines. I suspect the reduction in cost would also be modest, perhaps only a single-digit percentage of total spend for each candidate over the preclinical phase of its development.

The area where I think we will see the greatest impact is the likelihood of success in proceeding to Phase 1 trials: perhaps the current one-in-ten chance could be doubled or tripled. How will AI do that? Largely by providing us with new and more efficacious molecules more quickly, along with new and more druggable targets, and the opportunity to repurpose drugs that have already shown clinical benefit and been de-risked.

New and better molecules for novel, unimagined targets

Where in the drug development process will AI have the most immediate impact? First, there’s the ability it gives researchers to quickly analyze and synthesize massive amounts of preexisting data and identify promising new molecules; we can even fine-tune a potential molecule’s structure to improve its likelihood of success in human trials. With generative AI, as The Economist points out, we “can go a step further, by dreaming up entirely new molecules to test.” What was customarily many months, even years, of trial-and-error experiments can in some cases be reduced to hours of computation. It seems realistic that within five years, AI could predict PKPD properties and half-life for novel molecules. Having just attended the TPD roundtable, we can look at the example of PROTACs. Given the way the models work now, data is probably available to train an LLM on, for instance, roughly 40 years of IND data (indeed, that has probably already happened), and then we could layer on top of that a model trained more specifically on PROTACs.

That said, I am not ready to replace medicinal chemists, given that the ability of LLM-based AI to define methods of synthesis seems to be trailing its ability to find patterns in biology and structure.

Repurposing of de-risked molecules for new targets

We are already at a point where AI trained on biological data can identify targets better than traditional routes can. This potential aligns with the findings of our own paper, “Using artificial intelligence to identify drugs for repurposing to treat l-DOPA-induced dyskinesia”, published last year in the journal Neuropharmacology (Vol. 248, May 2024). Using natural language processing of published abstracts to identify drugs for potential repurposing, the paper demonstrates the value of an in silico approach to identifying candidate molecules for repurposing which, in combination with an in vivo screen, can facilitate clinical development decisions. The paper added to an already growing literature in support of this paradigm-shifting approach to the repurposing pipeline.
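To give a flavor of what a text-mining repurposing screen looks like in miniature, here is a toy sketch. This is not the pipeline from the Neuropharmacology paper; the drug names, abstracts, term list, and scoring scheme are all invented for illustration. It simply ranks candidate drugs by how often their published abstracts mention terms associated with the indication of interest:

```python
from collections import Counter

# Hypothetical indication-related vocabulary (illustrative only).
DYSKINESIA_TERMS = {"dyskinesia", "levodopa", "basal", "ganglia"}

def score_abstract(abstract: str, terms: set[str]) -> int:
    """Count occurrences of indication-related terms in one abstract."""
    words = abstract.lower().split()
    return sum(1 for w in words if w.strip(".,;:()") in terms)

def rank_candidates(corpus: dict[str, list[str]],
                    terms: set[str]) -> list[tuple[str, int]]:
    """Rank drugs by total term hits across their abstracts, best first."""
    scores = Counter()
    for drug, abstracts in corpus.items():
        scores[drug] = sum(score_abstract(a, terms) for a in abstracts)
    return scores.most_common()

# Invented mini-corpus: drug name -> list of abstract texts.
corpus = {
    "drug_a": ["Levodopa induced dyskinesia in the basal ganglia ..."],
    "drug_b": ["An antihypertensive agent with no CNS activity ..."],
}
ranking = rank_candidates(corpus, DYSKINESIA_TERMS)
# drug_a scores highest, so it would be prioritized for in vivo screening.
```

A production screen would of course use far richer NLP (synonym handling, embeddings, negation detection) over millions of abstracts, but the shape is the same: score in silico, then hand the shortlist to an in vivo screen.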

Shortening timelines to preclinical proof-of-principle

Soon, if not already, AI could predict efficacy in cell-based systems with the same accuracy as wet biology, so the question arises whether we will still need to test in vitro. We are perhaps approaching a time when AI can support the design of a new drug all the way to the stage of whole-animal studies before the wet lab is engaged. This is somewhat controversial, but much of the pushback feels reflexive and perhaps not warranted given the power of the emerging AI models. There is a focus on AI possibly making mistakes, but at what point do the models reach a level where they are “wrong” less often than biological experiments (P<0.05)? We are probably already there for protein-level predictions and very close for cell-level predictions. In the near term, what could stop us going straight to animal evaluation, and ultimately straight to human?

Overall, the roundtable discussion in Boston underscored the necessity of balancing AI predictions with experimental validation, particularly in navigating the complexities of animal models and human clinical testing. Concerns have been raised about the potential misuse of AI in generating pathogenic or toxic agents, whether protein or small molecule; obviously, the technology is equally suited to beneficial and malign applications, yet I am optimistic about AI’s ability to enhance research efficiency and inform drug design.

What it means for Atuka

These changes I’ve described could fundamentally alter the way the drug discovery funnel works. In the current scenario, we somewhat blindly take a wide range of potential candidate molecules and evaluate, tweak, and re-synthesize them iteratively in vitro over years, with most ending up on the dead-end shelf. We envision a future where the funnel is wider but shallower: AI could not only identify the best molecule to synthesize in the first place but also, by predicting that molecule’s biology, eliminate the need for most in vitro testing. Instead, the biology part of the funnel starts not in the 96-well plate but in the animal, perhaps confirming predicted PKPD or, as experience develops, testing for efficacy in our models, thereby dramatically reducing the investment needed to get a program to us.

As more candidate drugs enter our studies, I would predict that they will have a greater likelihood of demonstrating proof of principle (PoP), as many of the reasons for PoP failure today will have been designed out by AI. More drugs will succeed in our hands, and thus our impact, measured by our success in finding drugs likely to translate to clinical proof-of-concept, will increase.

A final note. In any conversation around the benefits of AI in drug discovery, we at Atuka feel that a critical distinction must be emphasized, reflected in our deliberate choice of words in describing this process as AI-assisted rather than AI-developed. For all its potential, AI is not about taking over the process and the decision-making from scientists, but rather about putting a very powerful new instrument at our disposal. AI helps Atuka focus our research efforts on precisely the part of the process we work in: animal evaluations of efficacy and target engagement. Unlike other sectors, where AI can be seen as a threat, for us it only expands our opportunities to grow and accelerates our efforts to help make life-altering therapeutics a reality.