"Digital Trials" for AI, like Clinical Trials for Drugs
Key Takeaways:
[Disclaimer: Not every claim in this post is verified. I am open to adjusting it based on new facts. "AI" is used loosely because this post is not intended for practitioners.]
No signs of disarmament in the 'AI Arms Race'
The main reason the Chernobyl meltdown was so catastrophic was that the plant's nuclear reactor lacked a containment dome. The resulting disaster not only devastated the surrounding region but also marked the turning point in a decades-long setback for the global adoption of nuclear power. Learning from the past, how can we avoid such setbacks in AI research?
The AI Arms Race that has broken out between the tech giants is creating a high-pressure environment. Although the White House expressed intent to create an AI Bill of Rights, a turbulent political climate both at home and abroad has led to inaction. In the absence of regulation, the space has largely been self-regulated by the companies most invested in its future.
In an industry where first-mover advantage results in lasting monopolies, there is little incentive for any competitor in the race to pump the brakes. Caution has been thrown to the wind. For example, both OpenAI and Google have announced partnerships with the dumpster fire of social discourse that is Reddit in order to formally include its data in their training sets. So it is no wonder that Google's Gemini-powered search results have suggested adding glue to pizza, eating rocks, and jumping off bridges.
Discord between thought leaders
Amidst this regulatory void, the debate about how and whether to contain AI is playing out among thought leaders in the space. There are those who wish to see the rapid pace of innovation in the field remain unimpeded, versus those who would prefer it to develop more gradually in a contained environment. This growing divide is perhaps best exemplified by the recent feud between AI godfather Yann LeCun and technocrat Elon Musk.
The focal point of the discussion has been OpenAI, the company behind ChatGPT. Musk was an early investor. He helped OpenAI cement its original reputation as a non-profit with a mission to ensure that AI would be developed safely and transparently for the benefit of mankind. However, OpenAI has since: (a) shifted from open source to proprietary software, (b) taken on billions of dollars in investment from Microsoft, and (c) seen its internal ethics and safety teams repeatedly raise alarms. In response, Musk has staunchly criticized OpenAI's leadership for hypocritically achieving the opposite of their initial goal, and recently launched his own competing initiative, xAI.
In the other corner is LeCun, who has been chastising OpenAI whistleblowers (seen below) and repeatedly attacking Musk's character.
LeCun's trope about AI currently being "less intelligent than cats" refers to nuanced challenges in training artificial neural networks compared to cognitive organisms with biological neural networks. Meanwhile, in reality, ChatGPT cruised past the Bar Exam early last year (seen below), whereas, as I write this paragraph, my cat, Steve, is throwing himself against the window because he is jazzed about birds.
When it comes to AI safety, what ground does LeCun have to stand on?
So when I see LeCun's multilingual No Language Left Behind research, I worry about Facebook extending its tentacles into Africa as the continent comes online. All in all, he is a dichotomous character. He, better than anyone, understands what is at risk.
What would an AI disaster really look like?
LeCun repeatedly scoffs at alarmists and their hypothetical AI catastrophes, but are they so far-fetched? It's weird to talk about these things, but it's equally dangerous to sweep them under the rug.
Bad actors are not hypothetical, they have been here all along. They, just like the good actors, have made use of LeCun's technology.
It is common knowledge that governments across the world have pervasively implemented facial recognition technology, but did you know that emotion recognition has been used to persecute minorities in concentration camps, or that companies have tested brainwave-monitoring technology in elementary schools (seen below)? Was George Orwell right?
How unsupervised learning opened Pandora's Box
Marketers have been touting machine learning for two decades, so why are people suddenly sounding the alarm? The reason is that there has been a fundamental shift in the type of AI learning being conducted: away from supervised models trained on carefully labeled data, and toward unsupervised and self-supervised models trained on whatever data can be scraped at scale.
The theoretical hallmark of intelligence is being able to pass a Turing Test: can a machine trick a human into thinking that it is human? Enter generative adversarial networks (GANs) in 2014. This training method pits an unsupervised generative model against a supervised classifier, the discriminator, to do just that.
Running with the painting example above, the art generator gets penalized whenever its output is flagged as a fraud. It rapidly becomes so proficient at fooling the fraud detector that its work becomes indistinguishable from Monet's own hand. In other words, the GAN learns the art of deception.
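To make the contest concrete, here is a minimal sketch of a GAN training loop in PyTorch. The toy data, network sizes, and hyperparameters are purely illustrative assumptions (there is no real "Monet" dataset here); the point is the two-player structure: the fraud detector is penalized when it mislabels samples, and the forger is penalized when it gets caught.

```python
# Hypothetical toy GAN: a "forger" (generator) vs. a "fraud detector" (discriminator).
# All sizes and data are illustrative stand-ins, not a production recipe.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64, dim=16):
    # Stand-in for "real" artwork: vectors drawn from a fixed distribution.
    return torch.randn(n, dim) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
discriminator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(64, 8))

    # Train the fraud detector: label real samples 1 and forgeries 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the forger: it is rewarded when the detector labels its output as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

After enough rounds of this back-and-forth, the generator's output drifts toward the real distribution, which is exactly the "art of deception" described above.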
So what is ChatGPT? OpenAI essentially crammed all of the content it could find on the internet into an algorithm whose sole purpose is to trick us into believing that it is intelligent during Q&A.
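For readers curious about the mechanics behind that claim, the sketch below illustrates the core training objective of large language models: next-token prediction. Everything here (the tiny recurrent model, the toy sentence, the hyperparameters) is an illustrative assumption rather than OpenAI's actual architecture or data, but the objective is the same in spirit: guess the next word, over and over, at enormous scale.

```python
# Hypothetical toy language model trained with next-token prediction.
# The model, vocabulary, and data are stand-ins chosen for brevity.
import torch
import torch.nn as nn

torch.manual_seed(0)

text = "the cat sat on the mat . the dog sat on the rug ."
words = text.split()
vocab = sorted(set(words))
stoi = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([stoi[w] for w in words])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # a score for every word in the vocabulary

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    x = tokens[:-1].unsqueeze(0)   # context seen so far
    y = tokens[1:].unsqueeze(0)    # the word that actually comes next
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scale that loop up to trillions of tokens and billions of parameters and the model's guesses start to read like answers; whether that constitutes intelligence is exactly the question the Turing Test sidesteps.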
What could go wrong? In a country where citizens are required to apply for building permits before they replace their own kitchen sinks, it is absurd that corporations are given carte blanche to jam the collective sum of human intelligence into a monolithic Turing Test that is distributed to the public.
Reining in the snake oil with "digital trials"
During the 2010s, it was trendy for startups to claim that they had integrated AI into their products. The term that was revived to describe this charlatanism was "snake oil."
Is there more to learn from this apt analogy? Notice that the third paragraph of the snake oil Wikipedia page points to the Pure Food and Drug Act, the 1906 law that laid the groundwork for the Food & Drug Administration (FDA). Indeed, there are a lot of similarities between drugs and AI algorithms.
The FDA is actually ahead of the curve here. The agency regulates both drugs and medical devices, so it has extensive experience regulating Software as a Medical Device (SaMD). Machine learning has proven highly effective in the biomedical field, especially in diagnostics.
As early as 2019, the FDA published its "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)," and it has continually striven to produce practical guidance that governs AI-enabled devices within its existing risk framework.
Does this system hinder innovation? Far from it: it takes risk and reward into account and ensures that the best tool wins. Furthermore, National Institutes of Health (NIH) grants essentially act as bounties for small businesses that solve high-priority research challenges.
Setting U.S. food products aside, the rest of the world regards FDA-regulated biomedical products as the gold standard. The U.S. is likewise positioned to lead in commercial AI innovation and quality if it embraces "digital trials" that mimic clinical trials. It is time for the wild-west culture of "moving fast and breaking things" to come to an end.
Finally, bad actors capitalize on unregulated markets. For example, Katherine Eban's Bottle of Lies describes how inadequate supervision allowed generic drug manufacturers like India's Ranbaxy to falsify data in roughly 80% of their FDA submissions and deceive inspectors.
Looking forward: AI-enabled brain-machine interfaces
Similar to how advances in bioinformatics and molecular biology led to CRISPR gene editing, advances in neuroinformatics and neurology have led to brain-machine interfaces (BMIs) that are capable of augmenting human cognition.
We may need to come to grips with AI regulation sooner than we think, especially when one considers that Musk founded a neural implant venture, called Neuralink, eight years ago. At the moment, the team is focused on tackling debilitating neurological diseases, creating miracles one patient at a time.
However, Musk's vision is to create a "neural lace" that increases the data throughput between man and machine. Imagine, for example, not needing peripherals (mouse, keyboard, screen, microphone, headphones, webcam) to interact with a computer.
If you believe that inequality is a problem now, wait until you can pay to: (a) download an AI module that gives you an overnight PhD, or (b) edit your genes to increase neural proliferation and plasticity. Do we want education and childhood to descend into myth? Do we want a schism in our species between hardcore biohackers who are willing to take a premature leap and those who are not? If you'd like to learn more about these techno-philosophical topics, consider Hawking's Brief Answers to the Big Questions and Isaacson's The Code Breaker.
Although we cannot fight gravity forever, we can control the transition if we take steps toward practical legislation now.
Disclosure
Having spent the past several years in the biotech and data science space, I have accrued a unique perspective. I don't want to see a Chernobyl-like setback in AI research. Don't get me wrong, I am extremely grateful for the advent of LLMs. I use Google Gemini daily to accelerate the review of scientific literature. Over time, I think LLMs will naturally become more specialized and contained.
As the founder of KeyBio (www.key.bio), I use deep learning to analyze the DNA of cancer survivors in order to identify genetic mutations that help keep death at bay. By designing drugs that target the proteins encoded by the genes that cancer manipulates to suppress our immune systems, we can improve patient outcomes. Previously, I programmed an open-source system called Artificial Intelligence Quality Control (www.aiqc.io) that brings rigor and reproducibility to the process of training and evaluating neural networks. Prior to that, I worked with the pioneers of population genetics to develop an analytical platform for drug target discovery, which we took to market with pharmaceutical companies to mine national biobanks for the genomic drivers of autoimmune diseases and cancer.