
The future of drug discovery, testing with computational models and Artificial Intelligence

By Jana Brajdih Cendak, Medical and Pharmacovigilance Lead/Medical Advisor/EU QPPV, Billev Pharma East

The history of drug discovery dates back to ancient civilizations, where various concoctions of roots, leaves, and other plant products were used to treat diseases. At that time, experiments were carried out on the patients themselves; loss of life was considered an acceptable risk.

Later, in vivo methods were developed using laboratory animals, and later still, in vitro methods such as cell lines appeared; these methods remain to this day the workhorse of drug discovery and development.

With the advent of genomics, there has been rapid growth in the understanding of biological systems, followed by exponential growth in sequencing technology and high-throughput methods. To manage all the data generated by these experimental methods, two disciplines of informatics, namely bioinformatics and chemoinformatics, were adopted. Both can be grouped under the broad title of in silico methods.

Nowadays, the landscape of drug discovery and testing is shaped not only by time and cost optimization, but also by an ever-growing awareness that many animal experiments are unethical or unnecessary and that their applicability should be reviewed, especially in light of new technologies that are becoming household names in the pharmaceutical industry.

Computer-aided drug design (CADD) is a flourishing field with several technologies that allow for easy and fast identification of potential targets. Especially for new therapies, such as biologics, CADD plays a crucial role, allowing high-throughput screening of many molecular structures that could have a desirable pharmacological effect.

CADD approaches include structure-based design (using a known “template” and modifying it) and ligand-based design, which operates on similarity to other, known molecules. There are also approaches that do not rely on known templates but base the structure design on the thermodynamic hypothesis, which states that the native conformation of a protein is determined by the totality of inter-atomic interactions that result in the lowest energetic state of the molecule.
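The ligand-based idea above can be sketched in a few lines: candidate molecules are ranked by their similarity to a known active ligand. This is a minimal illustration using a Tanimoto (Jaccard) coefficient on toy binary fingerprints; real CADD pipelines derive fingerprints from molecular structure (e.g. Morgan/ECFP fingerprints), and the bit vectors and candidate names below are invented for the example.

```python
# Toy ligand-based similarity screen: rank candidates by Tanimoto similarity
# to a known active ligand. Fingerprints are represented as sets of "on" bits.

def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints: a known active ligand and three candidates.
known_ligand = {1, 4, 7, 9, 12, 15}
candidates = {
    "candidate_A": {1, 4, 7, 9, 12, 20},   # shares most bits with the ligand
    "candidate_B": {2, 5, 8, 11},          # little overlap
    "candidate_C": {1, 4, 9, 15, 18, 21},
}

# Rank candidates by similarity to the known active molecule.
ranked = sorted(candidates.items(),
                key=lambda kv: tanimoto(known_ligand, kv[1]),
                reverse=True)
for name, fp in ranked:
    print(f"{name}: {tanimoto(known_ligand, fp):.2f}")
```

In a real screen the same ranking step is applied to libraries of millions of structures, which is what makes the approach attractive for high-throughput work.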

Once the candidate molecules have been identified, in silico methods can perform the initial evaluation. The most recognized are PBPK (physiologically based pharmacokinetic) modelling and toxicity assessment in the form of (Q)SAR modelling, read-across and structural alerts, which are based on structural similarity and adverse outcome pathways (AOPs). In the scope of genotoxicity and mutagenicity testing, even more promising technologies are being developed and validated. Mutational signatures are specific patterns of mutations that have in many cases been correlated with mutations in human cancer cells and thus have high predictive value. The COSMIC database (https://cancer.sanger.ac.uk/signatures/) collects the identified signatures for scientific use and also provides mechanistic insights and proposed acceptance criteria that make a signature valid.
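The structural-alert concept mentioned above can be illustrated with a deliberately simplified screen: a molecule is flagged if its structure contains a fragment associated with a known toxicity alert. Real (Q)SAR tools match SMARTS substructures on parsed molecular graphs; the plain substring matching and the small alert list below are hypothetical simplifications for illustration only.

```python
# Toy "structural alert" screen: flag molecules whose SMILES string contains
# a substring associated with a known alert class. Real expert systems parse
# the molecular graph and use SMARTS pattern matching instead of substrings.

ALERTS = {
    "N(=O)=O": "nitro group",
    "N=N": "azo group",
    "C(=O)Cl": "acyl halide",
}

def screen(smiles: str) -> list[str]:
    """Return the alert descriptions triggered by a SMILES string."""
    return [desc for pattern, desc in ALERTS.items() if pattern in smiles]

print(screen("c1ccccc1N(=O)=O"))  # nitrobenzene-like structure
print(screen("CCO"))              # ethanol: no alerts
```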

Next-generation sequencing is a newer technology that uses computing power to differentiate spontaneous (background) from compound-induced mutations directly on the genome of the treated cell, animal or even human. Once adequately validated, this approach may eventually replace standard genotoxicity and mutagenicity tests thanks to its exquisitely high specificity and sensitivity, ease of use and speed.
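The core of this comparison can be sketched very simply: estimate a per-base mutation rate in treated and untreated samples and report the fold increase over background. The counts and sequenced-base totals below are invented, and real error-corrected sequencing analyses apply formal statistics rather than a bare ratio; this is only a minimal sketch of the idea.

```python
# Minimal sketch of separating background from compound-induced mutations:
# compare per-base mutation rates in treated vs. untreated samples and
# report the fold increase over the spontaneous background.

def mutation_rate(mutations: int, bases_sequenced: int) -> float:
    """Observed mutations per sequenced base."""
    return mutations / bases_sequenced

control_rate = mutation_rate(12, 1_000_000)   # spontaneous background (toy numbers)
treated_rate = mutation_rate(60, 1_000_000)   # after compound exposure (toy numbers)

fold_increase = treated_rate / control_rate
print(f"background rate: {control_rate:.2e} per base")
print(f"treated rate:    {treated_rate:.2e} per base")
print(f"fold increase:   {fold_increase:.1f}x")
```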

The applicability of in silico toxicology in the pharmaceutical industry is nowadays still limited to impurity assessment under the scope of the ICH M7 guideline, while in the scope of industrial chemicals (REACH legislation), expert systems such as (Q)SAR and read-across are commonly considered valid.

In recent years, the European Medicines Agency has started interacting with stakeholders with the intent of facilitating the implementation of the 3R principles (replace, reduce, refine); these concepts were developed to streamline the use of laboratory animals in chemical testing, but drug development protocols still rely on animal studies to evaluate many ADME and toxicity endpoints. The ultimate goal would be to completely phase out the use of animals in chemical testing without compromising safety in human trials.

Artificial Intelligence is increasingly finding its way into drug discovery and assessment. While the development of adverse outcome pathways still needs human input and knowledge, AI technologies may assist with the development of complex AOP networks that ever more precisely describe the metabolic pathways, adaptive responses and adverse outcomes in the human body. Especially in database screening and in evaluating the weight of evidence (calculating and assigning weight scores to individual database entries), machine learning and deep learning approaches are fast gaining importance.
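The weight-of-evidence idea mentioned in parentheses can be sketched as a weighted aggregation over database entries: each finding contributes according to its reliability and relevance, and the sign of the aggregate score drives the overall call. The entries, weights and decision rule below are invented for illustration; production systems learn such scores with machine learning rather than fixing them by hand.

```python
# Sketch of weight-of-evidence aggregation: each database entry is a finding
# (+1 positive, -1 negative) weighted by its reliability and relevance, and
# the sign of the weighted sum determines the overall call.

entries = [
    # (finding, reliability 0..1, relevance 0..1) -- all values hypothetical
    (+1, 0.9, 0.8),   # high-quality study, positive finding
    (-1, 0.6, 0.9),   # relevant but less reliable, negative finding
    (+1, 0.4, 0.5),   # weak supporting evidence
]

score = sum(finding * reliability * relevance
            for finding, reliability, relevance in entries)
call = "positive" if score > 0 else "negative"
print(f"weight-of-evidence score: {score:+.2f} -> {call}")
```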

Similarly, as drug discovery becomes increasingly automated and data-driven, it is becoming common practice to combine CADD with advanced technologies like Artificial Intelligence. Such approaches can cost- and time-efficiently convert biological big data into pharmaceutical value. According to data from Morgan Stanley Research, modest improvements in early-stage drug development success rates enabled by the use of artificial intelligence and machine learning could lead to an additional 50 novel therapies over a 10-year period, which could translate to a more than $50 billion opportunity for the pharmaceutical industry.

So, what do such advances in technology bring to the table for clinical research? Do more molecules mean more clinical trials, and do more trials mean more patients and more resources to be allocated? Despite faster generation and evaluation of data, every step in the process needs a clear rationale, and comprehensive knowledge of human anatomy, physiology and pathology is required to assess the applicability of a candidate molecule for treating human beings. As of now, such a complex assessment still needs the human brain, human experience, and a human sense of ethics and compassion to be successful.

The way I see it, advanced technology will not result in a larger number of trials but will enable a more targeted approach to molecule development and evaluation and, above all, will speed up a process that nowadays still takes decades: the difficult journey from discovery to approval. Artificial intelligence will also speed up data management and assessment in clinical trials and will contribute to a higher quality of generated data. Taken together, this will translate to better patient care, which is (or at least should be) the ultimate goal of every person working in the pharmaceutical industry. At the end of the day, we all were, are and will be patients at some point in our lives…
