Today’s AI is ‘alchemy,’ not science — what that means and why that matters
VentureBeat
VB is obsessed with transformative technology — including exhaustive coverage of AI and the gaming industry.
Welcome to another edition of The AI Beat!
At last month's AI NY event that I co-hosted on a Manhattan rooftop with VentureBeat and Lightning AI, I chatted with Thomas Krendl Gilbert, a machine ethicist whose research has long focused on the intersection of AI, science and politics.
One comment he made stood out to me: he told me that even with the thousands of research papers on deep learning and generative AI, no one really understands the exact mechanisms by which AI model inputs lead to their outputs. How could that be, I wondered?
That led to an in-depth interview with Gilbert last week in which he detailed his thoughts on AI as alchemy, not science — a topic he also delved into for the first episode of a new podcast, The Retort, that he hosts with Nathan Lambert, an AI researcher at Hugging Face.
— Sharon Goldman, senior writer covering AI at VentureBeat
A New York Times article this morning, titled “How to Tell if Your AI Is Conscious,” says that in a new report, “scientists offer a list of measurable qualities that might indicate the presence of some presence in a machine” based on a “brand-new” science of consciousness.
The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called “The Retort,” along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today’s AI as a truly scientific endeavor.
Gilbert maintains that much of today’s AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy — the medieval forerunner of chemistry, which can also be defined as a “seemingly magical process of transformation.”
Like alchemy, AI is rooted in ‘magical’ metaphors
Many critics of deep learning and of large language models, including the people who built them, sometimes refer to AI as a form of alchemy, Gilbert told me on a video call. What they mean by that, he explained, is that it’s not scientific, in the sense that it’s not rigorous or experimental. But he added that he means something more literal when he says that AI is alchemy.
“The people building it actually think that what they’re doing is magical,” he said. “And that’s rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and superintelligence.” The prevailing idea, he explained, is that intelligence itself is scalar — depending only on the amount of data thrown at a model and the computational limits of the model itself.
But, he emphasized, like alchemy, much of today’s AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example; much of today’s closed AI research does not, either.
“It was very secretive, and frankly, that’s how AI works right now,” he said. “It’s largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet — and then building computation and structuring it such that you can distill that web of knowledge that we’ve all been building for decades now, and then seeing what comes out.”
Anthropic and BCG form new alliance to deliver enterprise AI to clients
AI unicorn startup Anthropic, known for taking on OpenAI’s ChatGPT with its Claude 2 large language model (LLM) assistant, is moving to take its offerings to more enterprises.
Today, the Dario Amodei-led company announced it has partnered with Boston Consulting Group (BCG) to provide its clients with “direct access” to Claude 2 and Anthropic’s AI tech.
Databricks raises $500 million with backing from rival Snowflake’s top client
Databricks announced a fresh funding haul of over $500 million today, which values the company at a whopping $43 billion. The round was led by T. Rowe Price Associates, but two new investors are notable in this round: Nvidia and Capital One Ventures (Capital One is the top client of Databricks’ main rival, Snowflake).
Databricks CEO Ali Ghodsi told VentureBeat on a video call that the new funding round is “really about the strategic nature of the partnerships and investors that we brought into this round.”
Adobe publicly launches AI tools Firefly, Generative Fill in Creative Cloud overhaul
Adobe, the software giant behind Photoshop, Illustrator, Premiere Pro and other popular creative tools, announced today it is charting a radical new course in creative software, integrating artificial intelligence (AI) across its Creative Cloud applications, a sign of the company’s faith in its liability protections for enterprises.
Central to the update is the official integration of Adobe Firefly, the company’s new AI engine, directly into Creative Cloud software. Firefly uses generative AI to allow users to create or modify images, graphics, and other media through simple text prompts. For example, a Photoshop user can now add or remove objects from an image by describing the changes in words.
Senate begins private AI meetings, says tech to ‘impact nearly every area of life’
After months of buildup, Senate Majority Leader Chuck Schumer (D-NY) finally opened the U.S. Senate’s inaugural bipartisan AI Insight Forum, in which all 100 senators have the opportunity to get a crash course on a variety of issues related to AI, including copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, transparency and explainability, and elections and democracy.
The closed-door event with lawmakers featured Big Tech CEOs including Tesla’s Elon Musk, Meta’s Mark Zuckerberg, OpenAI’s Sam Altman, Google’s Sundar Pichai, Microsoft’s Satya Nadella and Nvidia’s Jensen Huang, as well as leaders from tech, business, arts and civil rights organizations including the Motion Picture Association, the Writers Guild, the AFL-CIO and the Leadership Conference on Civil & Human Rights.
Data & AI Leader | Board Advisor | DataIQ 100 | AI | Gen AI | Responsible AI | Behavioural Science | Risk Scoring | Insurance | Banking | Healthcare | Wellness | Thought Leader | Keynote Speaker
1 yr — I can’t agree with the statement that “today’s AI is alchemy, not science.” The fact that we don’t exactly understand how AI like large language models work doesn’t mean it is magic. We don’t call astrophysics alchemy even though researchers haven’t figured out how dark matter works, for example. Calling something alchemy is a cheap shot often used by people who are afraid of things they don’t understand and who want to discredit someone’s body of work.
Senior Product Planning Development | CMF Design | Project Marketing
1 yr — Next time you see a data scientist or AI engineer, don’t just see a geek with a laptop. See an alchemist, a wizard of the digital age, a shaper of the future.
Strategy & Insights Leader | Data Storyteller | Integrated Brand Planner | Growth Partner | Freelance Strategy SVP/Fractional CMO
1 yr — Those of us who’ve worked in the data/AI world have gone from illusionists to mad scientists to harbingers of the apocalypse. Tossing “alchemist” into the mix certainly adds a bit of magical flair to the street cred.
Project coach and Automotive SPICE senior consultant
1 yr — AI (more precisely, neural networks, because what else is there?) is a heuristic by its very definition. Think about it for a minute. Most human activities seem to be deterministic, but they are not. They are nearly all heuristics. Moreover, we often cannot explain why we hold the opinions we do. The brain is like a plant: you cannot MAKE it grow, no matter how much you try to force it to. Thinking is not skip-logic. It is an alchemistic process. The worst thing that can happen is to try to force a heuristic into a finite-state machine. Being somewhat erratic or unpredictable has made us successful as a species. Human thinking is “alchemy,” if you will, and it is precisely how it should be: an alchemy of mind. In that sense, the expectation that governments should “regulate” AI is a horrific idea. Whoever came up with this idea has an agenda I already dislike. Why? Can’t say, honestly. I am not a finite-state machine, and I don’t want to be.
Inventor of “IaaS”, angel investor, growth hacker, mentor
1 yr — I’ve been describing it as a “mystical art.”