The Coming Boom In Drug Discovery
In 1997, a remarkable event made a lot of noise among millions of chess fans worldwide: the then world champion Garry Kasparov lost a match to IBM's supercomputer Deep Blue. The event became a symbolic milestone in the history of computing, marking a shift from perceiving computers as mere "powerful calculators" to something qualitatively different: a phenomenon that might one day surpass human intelligence... or, better, become a powerful adviser in solving the most daunting global problems.
Simulating the human brain
Being a subset of artificial intelligence, machine learning involves algorithms that allow computers to learn autonomously from input data. The fundamental distinction from "usual" software programs, such as Photoshop or, say, Excel, is that in machine learning computers don't have to be explicitly programmed; they can change and improve their algorithms by themselves.
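To make the distinction concrete, here is a minimal sketch in Python (using the scikit-learn library and a made-up toy dataset, unrelated to any system discussed below): instead of hand-coding a rule, we hand the algorithm labelled examples and let it infer the rule on its own.

```python
# A minimal sketch of the "learning from data" idea (toy data, illustrative only).
from sklearn.linear_model import LogisticRegression

# Each row is an example described by two numeric features;
# each label says which class the example belongs to.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # the algorithm infers the decision rule itself

# No rule was hand-coded; for this toy data the expected output is [1].
print(model.predict([[0.15, 0.85]]))
```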
The history of machine learning goes back to the 1950s. The first learning program was created by Arthur Samuel in 1952, and it played the game of checkers. Implemented on an IBM computer, the program improved the more it played, studying winning moves and incorporating them into subsequent rounds.
Five years later, Frank Rosenblatt designed the first neural network for computers, the perceptron, which simulated the thought processes of the human brain. And just a decade later the "nearest neighbor" algorithm was written, a conceptual step towards pattern recognition technology.
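The perceptron idea is simple enough to sketch in a few lines of Python. The following is an illustrative toy implementation (with arbitrary learning rate and epoch settings), not Rosenblatt's original formulation: a single artificial "neuron" that adjusts its weights whenever it misclassifies a training example.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Toy perceptron trainer: nudge the weights after every mistake."""
    w = np.zeros(X.shape[1])  # weights, one per input feature
    b = 0.0                   # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - pred      # 0 if correct, +/-1 if wrong
            w += lr * error * xi       # move the boundary toward the target
            b += lr * error
    return w, b

# Learn the logical AND function from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if np.dot(w, xi) + b > 0 else 0) for xi in X])  # -> [0, 0, 0, 1]
```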
Today, machine learning algorithms enable computers to "see" and distinguish objects and text in images and videos, discover and categorize real-world things, "communicate" with humans, drive cars on autopilot, write and publish sports match reports, and ... help discover new drugs.
The stumbling block of modern drug development
Drug development is a major challenge, and a successful outcome is largely determined by early drug discovery efforts.
A widely used target-based approach to drug discovery starts with biologists identifying a possible mechanism of a disease and suggesting a biological "target", usually a protein involved in the cascade of processes behind the disease. Inhibiting or otherwise impacting such a protein can have a substantial effect on the pathogenesis, so the disease might be suppressed or even cured.
Once the target is proposed, the next big move is to screen hundreds of thousands or even millions of small molecules against it to identify so-called "hits", i.e. molecules with substantial affinity to the target protein. The hits then undergo numerous additional tests and chemical modifications, and some of the compounds eventually make their way to clinical trials.
With this approach, however, it takes on average 12 years and about $2.9 billion to bring a new effective drug to the market. Even today, drug discovery is largely a "trial and error" process, and the "error" part is enormous: only a very few experimental drugs ever reach the medicine cabinet.
Can computers suggest new drug candidates?
In pursuit of decreasing the cost and time of drug discovery, and of predicting the structure-activity relationships of early drug candidates more accurately, scientists have developed specialized mathematical models and computer programs able to conduct "in silico" drug discovery. In this approach, the available structural information about target proteins is used to conduct a virtual screening of numerous chemical structures and to identify hits that better fit the target in terms of interaction energy and other calculated functions.
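Schematically, such a virtual screening loop can be sketched as follows. The `docking_score` function here is a hypothetical placeholder for a real physics-based scoring routine, such as an estimated binding energy produced by a docking engine; everything else is generic ranking logic.

```python
# A schematic sketch of a virtual-screening loop (illustrative only).
from typing import Callable

def virtual_screen(molecules: list[str],
                   docking_score: Callable[[str], float],
                   top_n: int = 10) -> list[tuple[str, float]]:
    """Score every candidate against the target and keep the best hits.

    Lower scores mean stronger predicted binding, mirroring the
    convention that more negative interaction energies are better.
    """
    scored = [(mol, docking_score(mol)) for mol in molecules]
    scored.sort(key=lambda pair: pair[1])  # best (lowest) energy first
    return scored[:top_n]

# Usage with a dummy scorer standing in for a real docking calculation:
hits = virtual_screen(["mol-001", "mol-002", "mol-003"],
                      docking_score=lambda mol: hash(mol) % 100 - 50.0,
                      top_n=2)
print(hits)
```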
Although promising, standard in silico methods are still too limited and inaccurate to substitute for costly and time-consuming real-world experimental screening and trials, because of their explicitly pre-programmed nature and the pre-determined models used for the calculations. Essentially, the models are confined to a certain level of abstraction and cannot improve unless scientists update them manually.
This is where machine learning algorithms and new drug discovery startups come into play.
A new drug discovery era is knocking on the door
Recently, a group of scientists at the University of Toronto created a machine learning algorithm that they hope will revolutionize the way pharmaceutical drugs are discovered.
They founded a health tech startup, Atomwise, offering a solution that can help researchers develop the next generation of drugs, and do it faster and cheaper than ever before.
The algorithm Atomwise developed is similar to the deep learning neural networks used by the artificial intelligence startup DeepMind, recently acquired by Google for $628 million. The algorithm teaches itself complex biochemical principles and the factors that are ultimately most predictive of a drug's effectiveness.
“Our system takes into account not a dozen or two dozen, but thousands of factors at the same time and combines them in complicated and nonlinear ways. It’s like having a virtual super-intelligent brain that can analyze millions of small molecules and potential interactions in days instead of years,” said Alexander Levy, chief operating officer at Atomwise.
A unique feature of this approach is that the company's machine learning algorithm works much like computer image recognition. Levy says their system has devised some unintuitive methods for understanding which small molecules will properly latch onto a biological target.
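As a rough illustration of the idea (emphatically not Atomwise's actual system, whose architecture is not described here), one can sketch a small neural network in Python with PyTorch that maps a numeric "fingerprint" of a protein-ligand pair, an assumed 2048-bit featurization, to a predicted probability of binding, letting the network learn its own feature combinations instead of relying on hand-tuned rules.

```python
import torch
import torch.nn as nn

# A generic sketch: a feed-forward network for binding classification.
model = nn.Sequential(
    nn.Linear(2048, 256),  # 2048-bit fingerprint of the complex (assumed)
    nn.ReLU(),
    nn.Linear(256, 64),    # hidden layers combine factors nonlinearly
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),          # output: probability that the pair binds
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a random stand-in batch (real data would be
# fingerprints of known active/inactive protein-ligand pairs).
x = torch.rand(32, 2048)
y = torch.randint(0, 2, (32, 1)).float()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```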
To date, Atomwise has raised $6 million to advance artificial intelligence for drug discovery and has launched more than a dozen projects to find cures for both common and orphan diseases. The company is collaborating with IBM to find a cure for Ebola and with Dalhousie University in Canada to search for a measles treatment. The startup screened 8.2 million small molecules to find potential cures for multiple sclerosis in a matter of days. Atomwise is also partnering with pharmaceutical giant Merck to explore the frontiers of using artificial intelligence for drug discovery.
Not the only one
Besides Atomwise, a number of promising startups focusing on the application of machine learning and AI for drug discovery have emerged within the last several years.
Palo Alto-based TwoXAR, founded in 2014, recently raised $3.4 million in a seed round led by tech investor Andreessen Horowitz. TwoXAR's solution is the DUMA™ Drug Discovery platform, able to evaluate large public and proprietary datasets to identify and rank high-probability drug-disease matches in minutes rather than years.
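As a purely hypothetical illustration of what ranking drug-disease matches might look like (TwoXAR has not published the internals of DUMA; the evidence streams and scores below are invented), consider combining several independent evidence scores per drug-disease pair into a single ranking:

```python
# A hypothetical illustration, not TwoXAR's actual DUMA algorithm.
def rank_matches(evidence: dict[tuple[str, str], list[float]],
                 top_n: int = 5) -> list[tuple[tuple[str, str], float]]:
    """Average each pair's evidence scores (0..1) and rank the pairs."""
    combined = {pair: sum(scores) / len(scores)
                for pair, scores in evidence.items()}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Made-up scores from, say, gene-expression, protein-network and
# literature-mining evidence streams:
evidence = {
    ("drug-A", "disease-X"): [0.92, 0.80, 0.75],
    ("drug-B", "disease-X"): [0.40, 0.55, 0.30],
    ("drug-C", "disease-Y"): [0.88, 0.90, 0.64],
}
print(rank_matches(evidence))
```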
The company has already tested its technology on more than twenty diseases and is now actively collaborating with academic researchers at the University of Chicago and Michigan State University to further develop the platform. As part of the elite Stanford-backed StartX Med program, TwoXAR is also collaborating with several unnamed biopharmaceutical organizations.
Berg claims very ambitious plans for using AI to revolutionize cancer treatment, and its first promising results were recently reported by BBC News.
Recent news on using supercomputing and AI-based algorithms in drug discovery has also emerged from Spain, China, Great Britain and other regions. A clear rising trend is developing, and with the first FDA-approved example of an "AI-born" drug, an investment stampede will follow.
Now is the time for leading pharmaceutical companies and drug discovery CROs to start reflecting on how they can leverage these new technologies and adjust to the coming boom in drug discovery. Those who adopt earlier and better will dominate the future market.
**************************
Andrii Buvailo holds a Ph.D. in Chemistry and is Head of E-commerce at Enamine Ltd, a leading supplier of building blocks and screening compounds for drug discovery research. Previously, he worked as a project manager and later as a director at YUNASKO, an energy storage startup and a developer and licensor of supercapacitor technology. He was also involved in various research projects in Ukraine, the United States, Germany and Belgium.
He writes occasionally for science and technology blogs, including ChemCommerce.org (a blog about drug discovery trends), SciMax.biz (a technology commercialization blog) and the EnamineStore Blog (chemical products for drug discovery). All statements and opinions expressed in his articles reflect only his personal opinion.
Feel free to connect via Twitter as well.
Comments

Scientist & MBA Student (8 years ago): First and foremost, I need to say that I'm currently a contracting drug discovery scientist at Pfizer and the following comments are my own, not those of Pfizer nor my agency. Computer-assisted drug design and modeling has so much potential, which the entire field is only now beginning to truly comprehend. The developments out of these companies are very exciting and could lead to next-generation treatments for millions of patients globally. However, we've seen time and time again that many development small molecule compounds have significant off-target binding which, ultimately, leads to a litany of side effects and entirely undesired outcomes. Given this computer-aided drug design, how could we incorporate off-target binding affinities into small molecule screening? It's great that we could screen millions of small molecule compounds against one protein, but how about screening one compound against millions of proteins? Based on my limited knowledge in the area of drug modeling, that becomes *much* more difficult. Proteins are made up of chains of amino acids, of which twenty are utilized in humans. Given a random assortment of amino acids, a peptide (basically a very small protein) of ten amino acids would theoretically have 20^10 (just over ten trillion) different possible combinations of amino acids in its sequence. Now, imagine some proteins which contain close to 10,000 amino acids. Given that, how could we screen small molecules against tiny peptides, huge proteins and everything in between to assess undesired reactivity? To wrap that all up, I honestly believe that we may never get drug development exactly right, even with computer screening and modeling; there will likely be some sort of undesired side effects caused by off-target binding, drug product metabolites or who knows what, exactly. What we *can* do is work with the knowledge that we have and the new knowledge that is continuously generated (e.g. new developments in computer-aided drug modeling/screening) to keep trying to make better products for patients. I really think that that's the most we can ask of ourselves at any point in time, now and in the future.
Miguel Angel Rendon Lopez, Electromechanical Engineer (8 years ago): My final thought on those chess games was that human beings could beat computers if they had enough time! On the other hand, it would be nice to have a mate that does all the math work!