Using Artificial Intelligence to Predict the Next Covid Variants
Apriori Bio says it can model mutations to help drugmakers develop better vaccines and treatments.
As pharmaceutical companies struggle to keep up with the rapidly mutating coronavirus, a startup in Cambridge, Mass., says it can help them by using artificial intelligence to predict future variants. Apriori Bio models the ways a virus might change and predicts how it will behave. The company says it’s harnessing that information to design “variant-proof” vaccines and treatments that can fight current and future strains—and provide an early warning to governments, sort of like a hurricane alert, to guide the public-health response.
After honing its technology, called Octavia, for more than two years, the fledgling company is formally launching with $50 million in funding from Flagship Pioneering Inc., the incubator behind Moderna Inc. We spoke with co-founder Lovisa Afzelius, a computational chemist—and Pfizer Inc. veteran—who serves as Apriori’s chief executive officer. Here are edited excerpts from the conversation.
What problem does Apriori aim to solve?
Right now the virus is in the driver’s seat. Infectious diseases have put such a strain on humanity for so many years, and we almost seem to accept that power dynamic. We should be in the driver’s seat. We need to be prepared.
Why is it so difficult to predict how a virus will change?
If you compare the original wild type of SARS-CoV-2 with omicron, there are 30 mutations. If you consider all theoretical combinations of potential variants, you end up with more possibilities than there are atoms in the universe. Many have tried to predict the next variant, but it’s almost impossible. Artificial intelligence can help.
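Her point about the size of the variant space can be checked with a back-of-the-envelope calculation. The sketch below assumes the public figure of roughly 1,273 residues in the SARS-CoV-2 spike protein and the 30 mutations mentioned above; it is simple combinatorics, not Apriori’s method.

```python
# Rough illustration of the combinatorial explosion described above.
from math import comb, log10

SPIKE_LEN = 1273   # residues in the SARS-CoV-2 spike protein
N_MUTATIONS = 30   # roughly the count separating omicron from the wild type

# Choose which 30 positions mutate, then one of 19 substitutions at each.
variants = comb(SPIKE_LEN, N_MUTATIONS) * 19 ** N_MUTATIONS
print(f"~10^{log10(variants):.0f} possible 30-mutation variants")
# prints "~10^99 possible 30-mutation variants"
```

For comparison, the observable universe is estimated to contain about 10^80 atoms, so even this restricted count of 30-mutation spike variants exceeds it by many orders of magnitude.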
What does your system do?
It allows us to look at any sequence and determine its threat profile. That information can be used as an early warning and answer questions like “Should we impose lockdowns?” or “If you’re vaccinated, can this variant escape its protection?” We can also run the engine in reverse to design variant-proof vaccines and antibody therapies. And we can use it to look at people who have, say, cancer or autoimmune diseases or HIV, and understand their current risk level or what therapy they might need.
How exactly does Octavia come up with those insights?
We look at the virus and identify strains from over the years to see how it’s evolving. We isolate a key protein and use that to create libraries of millions of synthetic variants. The system can test how each variant binds to antibodies generated as a response to an infection or a vaccine, or those delivered via drugs. These tests create billions of data points, and we apply machine-learning techniques to build models. Artificial intelligence is an extremely powerful tool to predict how variants would behave in real life and determine which ones pose the greatest threat.
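The general pattern described here, a library of variants, binding measurements against them, and a model trained on the results, can be sketched in a few lines. This is NOT Apriori’s actual model; the toy sequences, the binary “escape” labels, and the escape rule below are all invented for illustration.

```python
# Hypothetical sketch: synthetic variant library -> binding labels -> predictor.
import numpy as np

rng = np.random.default_rng(0)
AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
L = 12                        # toy peptide length standing in for a spike region

def one_hot(seq: str) -> np.ndarray:
    """Encode a peptide as a flat position-by-residue indicator vector."""
    x = np.zeros((L, len(AA)))
    for i, a in enumerate(seq):
        x[i, AA.index(a)] = 1.0
    return x.ravel()

# Simulate a library of synthetic variants with binary "escapes antibody" labels.
seqs = ["".join(rng.choice(list(AA), L)) for _ in range(2000)]
# Toy ground truth: escape depends on a charged residue at position 3.
y = np.array([1.0 if s[3] in "DEKR" else 0.0 for s in seqs])

# Fit a linear model by least squares and score it on the training library.
X = np.stack([one_hot(s) for s in seqs])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
accuracy = np.mean((X @ w > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

In this toy setup the label is an exact linear function of the one-hot features, so the fit is perfect; real escape prediction is far harder, which is why the interview stresses the volume and quality of the experimental data behind the models.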
What is a synthetic variant? And who conducts the tests?
It’s a made-up virus. We have a computational team integrated with an experimental team, and they turn to our systems to test whether it’s even a possibility for these made-up viruses to exist.
How quickly can Octavia evaluate a variant?
Octavia assesses the risk of a new SARS-CoV-2 variant in seconds. The sequence of the new variant is inserted, and the engine instantly returns an assessment of how well it might express and bind to the human receptor, as well as its predicted escape from antibodies. If the sequence is outside of the current model space, we generate more data to expand the model, which could take weeks to months.
Are you concerned that this information could be weaponized or used for a nefarious end?
We’ve been focused on biosecurity from the time we realized we could make biologically meaningful predictions. We have partnered with external experts to help implement, test, and stand up the security we need, even though we’re not working with viral material.
Have you provided any of your discoveries to governments or to your sister company, Moderna?
We have shared insights with many private and public entities in the US and UK, believing this pandemic was a situation in which it was important to do so. From an early-warning perspective, we think a lot of different parties want to understand the risk assessment of emerging variants.
Many companies pitch AI services for drug development. What makes you different?
Biology-informed AI models will never be better than the data they’re built from. We’ve put a lot of effort into thinking about the biology we’re looking to replicate, model, and predict, creating the right experimental data sets and the algorithms best suited to describe that biology. The more data we feed it, the bigger its scope. I don’t see anyone approaching this challenge in the same way.
Is Apriori developing any of its own variant-proof vaccines and therapeutics?
We have a full-fledged suite of preclinical activities. We’re keeping all options open. SARS, flu, and HIV are the areas we’ve been working on.
What’s the most important lesson you’ve learned from developing this platform?
Our experimental capabilities and understanding of machine learning are at a pivotal point, where we can actually start getting ahead of the virus. You couldn’t do that during the Spanish flu. We should seize the opportunity.
Artificial intelligence in criminal justice: invasion or revolution?
Monday 13 December 2021
Introduction
From Homer’s Iliad to 20th century science fiction movies, through Da Vinci’s humanoid robot,[1] artificial intelligence (AI) has been a subject of humankind’s dreams for centuries.
Although the notion of AI started as fantasy, sometimes even dystopian, as in Spielberg’s Minority Report, which depicts a worrying future of advanced technologies in law enforcement, AI is now a reality of daily life and has reshaped how we live. Cars, phones and even healthcare are just some of the sectors AI has penetrated.
Considered a branch of computer science, AI refers to the building of ‘smart’ machines able to perform human tasks by mimicking human attributes, intelligence and reasoning, but without direct human intervention.
Over the last two decades of research,[2] AI has improved to the point at which it can outperform humans at specific tasks. A landmark example is AlphaGo, the first computer program to defeat the world’s greatest player of Go, a Chinese strategy game with roughly 3,000 years of history.[3]
AI has even penetrated the formal functions of the state, from taxation (eg, the UK program Connect) to border security and even public order. Such use by governments is partly a response to how crime itself has evolved: crime has become ‘high-tech’, as criminal groups have exploited technology from its earliest days through to the latest trends of cryptocurrencies and cybercrime.
Concerned about this disparity between criminals and law enforcers, criminal justice systems have had to be equally equipped and prepared to leverage technologies such as AI to improve crime prevention and control. More specifically, AI is used in law enforcement and courts of justice for better, faster results with a reduced margin of error in routine tasks.
Although the use of AI in criminal justice is meant to fulfil fundamental legal principles such as public order and security, it can also create negative externalities by amplifying pre-existing prejudices and errors, and consequently undermine the efficiency of justice and law enforcement.
AI in criminal justice, the perfect tool?
Brief history of AI
The earliest significant study of AI began in the mid-20th century with Alan Turing, a British mathematician and logician known for breaking the German Enigma machines’ encryption during the Second World War. Considered one of the founders of computer science and artificial intelligence, Turing was the first to ask whether machines could use information to imitate humans in problem-solving and decision-making.[4] Turing’s paper and its Turing Test set out the ultimate goal and vision of AI.
A few years later, John McCarthy,[5] a US researcher and professor of mathematics, coined the term ‘artificial intelligence’, which he defined as the ‘science and engineering of making intelligent machines’. There is, however, no universally agreed definition of artificial intelligence, as AI is an interdisciplinary science combining multiple approaches and fields of study, such as sociology, cognitive science and mathematics.
Use of AI
Criminal justice has recently turned to AI to improve its outcomes, cut crime and reduce justice-related delays, as research has shown that AI could become a permanent part of the criminal justice ecosystem through investigative assistance.[6] The use of AI can be readily noticed in both law enforcement and courts of law, as a prevention and prediction tool, but also as a tool for crime-solving and recidivism prediction.
Through video and image analytics, AI is used to improve law enforcement outcomes by reducing time-consuming tasks and human error. AI facial recognition can establish the identity and whereabouts of an individual, considerably improving crowd surveillance results.
AI-driven video analysis also assesses clothing, skeletal structure and body movements in order to detect abnormal or suspicious behaviour in crowds, such as shoplifters or dangerous drivers breaking traffic laws. It also helps with vehicle identification, as AI programs are taught to decipher number plates even with poor resolution or low ambient light. Several governments have already allowed the use of AI in law enforcement, such as the Canadian police.[7]
AI can be very helpful in detecting traffic accidents through closed circuit television (CCTV) surveillance, and online-related crimes including human trafficking, money laundering, fraud and sexual abuse.
By detecting suspicious activities, AI can prevent crimes, and help investigators identify suspects more rapidly, ensuring stronger public safety and increased community confidence in law enforcement and criminal justice in general.
AI also has significant uses in courts of law. From a crime-solving and scientific viewpoint, AI assists forensic laboratories and investigators with DNA testing and analysis by processing low-level or degraded DNA evidence that could not have been used a decade ago. Decades-old cases have been reopened to submit sexual assault and homicide cold-case evidence for perpetrator identification. Such use of AI reduces unsolved crime, which strengthens civilians’ trust in justice.
Another application of AI is predictive justice: the statistical analysis of large amounts of case law data – mainly previously rendered court decisions – in order to predict court outcomes. This can help judges focus their time on cases where their expertise adds the most value. In the long term, it can strengthen the stability of justice worldwide by offering economic players more harmonised court decisions, and therefore better anticipation.
AI can also predict recidivism by analysing hundreds of thousands of criminal justice data points to predict reoffending by absconding offenders. Such an application can be very useful for practitioners in warrant services, increasing the recovery of fines and allowing more optimised resource allocation, which in the long term helps the wheels of justice turn more swiftly.
Transition
Around the world, criminal justice systems use different resources, such as IT, to limit felonies and crimes. Given Schumpeter’s theory of technical progress and creative destruction, it would seem odd for the criminal justice sector not to assess AI’s potential contribution and utility.
However, AI is merely a tool, and since a tool is only as good as its user, it is important to evaluate potential negative externalities of AI uses in criminal justice, to avoid any counterproductive consequences such as bias and errors.
Risks of AI use in criminal justice
AI is not yet a mature technology in many of its applications. Criminal justice, and primarily law enforcement, must therefore consider the use of AI not only in light of fundamental human rights principles such as privacy and non-discrimination, but also in light of the growing belief that AI algorithms are more objective and intelligent than humans, when in fact they can convey human error.
Bias and discrimination
Although AI’s modus operandi excludes direct human intervention, AI is created by humans, and in that regard it leaves a certain room for error. All datasets fed into AI algorithms to generate results are human data, which means they already contain human biases that are then passed on in AI results.
Independent research reports show that the use of AI can lead to certain groups of people being stopped and searched by law enforcement more frequently than others, depriving citizens of the principles of fairness, equality and equity.[8]
For example, AI surveillance of criminal ‘hotspots’ can actually increase geographical discrimination, as those areas are more controlled by police than other areas, which results in higher arrests in such AI-monitored areas.
It is important to underline that the databases used by law enforcement are often operated by private companies, such as Clearview, the world’s largest facial recognition company created for law enforcement use. Although Clearview is contractually bound to governments, this implies a partial transfer of certain sovereign functions of the state to private companies, which could lead to other negative outcomes, such as a poisoned database or hacking, which would infringe the privacy rights of hundreds of thousands of citizens.
Need for regulation
To avoid discrimination and fundamental rights infringement, the use of AI in law enforcement implies a high level of accountability, fairness and transparency.
The European Commission recognised this when, on 21 April 2021, it proposed the Artificial Intelligence Act,[9] to codify the high standards of the EU’s trustworthy AI paradigm, which requires AI to be ‘legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law’.[10]
The EU AI Act mainly introduces a ‘product safety framework’ built around four risk categories (minimal, limited, high and unacceptable). It enforces requirements for market entrance and certification of ‘High-Risk AI Systems’ through a new mandatory CE-marking procedure.
As regards the legality of AI outcomes, this regime also applies to the training, testing and validation datasets used in machine learning, specifying where the use of private companies’ databases is forbidden.
The European Parliament has recently opposed mass surveillance, calling for a ban on private facial recognition databases such as Clearview.[11]
Conclusion
In conclusion, humanity is called on to evolve by integrating new methods born of technical progress and creative destruction. Today’s ultra-connected world implies technological overexposure, but also an evolution of criminal practices.
In this context, an equivalent response seems crucial to face these new technological challenges. AI could be the answer to curbing certain crimes that date back to the dawn of time, such as domestic violence.
For minorities (religious, racial, sexual orientation), however, the current use of AI seems to increase the discrimination they already face.
However, like any immature technology, AI needs time and mistakes to progress. Until then, an international consensus is needed to guarantee fundamental rights and principles, especially that of a fair trial, and to ensure the privacy of citizens around the world through codes of ethics based on transparency and accountability.
The EU draft bill seems to provide a framework defining the use of AI and its powers, in a social context where AI appears to civil society as more intelligent than, and even surpassing, its creator, humankind. Through predictive justice, AI seems in some way to align itself with the work of the judge it imitates. In this regard, it may be worth asking whether AI can also imitate the work of a lawyer to improve time management: could an AI program become a lawyer?
Boundless Robotics – Artificial Intelligence That Helps Home Growers
While Artificial Intelligence (AI) might sound scary, it’s actually an incredible technology that can help people in so many ways. For example, Boundless Robotics (Boston, MA) is using AI to help people grow large-format consumable plants such as robust ghost peppers, the freshest of herbs, and much more, at home. This automated growing machine removes the challenges of growing at home – all the user has to do is add water to a reservoir and the AI-enabled hydroponics system takes care of everything else except the harvest. They’ve named their system “Annaboto”.
The genesis of the idea was born out of necessity. The founder, Carl Palme, has what he describes as “the opposite of a green thumb”; he struggles to keep any plant alive. Carl and his team have used their background in robotics and AI to develop a system that makes it possible for anyone to grow plants at home effortlessly, anywhere in the world.
This new AI-powered technology uses a combination of Time Series Data (e.g. room temperature, humidity, water consumption, etc.) and Vision (a camera that looks at the plant and can assess its health) to help create custom recipes that will help the plant thrive, all while the user sits back and watches the plant grow.
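The fusion of time-series readings and a vision signal can be sketched roughly as follows. This is NOT Annaboto’s actual algorithm: the health-score function is stubbed, and the thresholds and heuristic are invented for illustration.

```python
# Hypothetical sketch: combine sensor time series with a vision-derived
# health score to adjust a watering "recipe".
from dataclasses import dataclass

@dataclass
class Reading:
    temp_c: float        # room temperature, deg C
    humidity: float      # relative humidity, 0..1
    water_ml_day: float  # daily water consumption

def vision_health_score(image=None) -> float:
    """Stand-in for a camera model; returns 0.0 (wilting) to 1.0 (thriving)."""
    return 0.8  # stubbed value for this sketch

def watering_multiplier(readings: list[Reading], image=None) -> float:
    """Multiplier on the current watering schedule (1.0 = no change)."""
    dry_days = sum(r.humidity < 0.4 for r in readings) / len(readings)
    health = vision_health_score(image)
    # Heuristic: a less-healthy plant in a persistently dry room gets more water.
    return round(1.0 + 0.5 * dry_days * (1.0 - health), 2)

week = [Reading(22.0, 0.35, 120.0)] * 7
print(watering_multiplier(week))  # 1.0 + 0.5 * 1.0 * 0.2 = 1.1
```

A production system would replace the stub with a trained image model and learn the adjustment rule from data rather than hand-coding it, but the shape of the loop, sense, score, adjust, is the same.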
According to a USDA report on the allocation of the consumer’s food dollar, only 16.7% goes to the cost of the food itself; the rest covers labor, transportation, packaging, and other costs. Food and drink also account for 20% to 30% of the environmental impact of private consumption (Tukker et al., 2006).
Great Britain, with its short growing season and powerful supermarket chains, imports 95% of its fruit and more than half of its vegetables. Food accounts for 25% of its truck shipments, according to DEFRA, the UK’s environment department.
Consumers want access to different, fresh food year-round, but at what cost?
As the world faces greater supply chain issues and reduced access to fresh food, technologies like those created by Boundless Robotics will be increasingly valuable in feeding our world. We need to grow food closer to the point of consumption, and to succeed we have to leverage tools like AI to make this process scalable and more efficient. Not everyone has the time or expertise to grow high-quality produce at home; products like Annaboto make it possible for anyone, anywhere, to grow consumable plants at home.
AI isn’t scary, it’s just misunderstood. Happy “at Home” Growing!