Neuralink’s AI Alignment Thesis Is Incomplete. We Need to Think More Deeply About Cognitive Autonomy in the Age of AI-Driven BCIs.

Brain Computer Interfaces Won’t Tame AI - But They Could Reshape Humanity

Next week at NVIDIA GTC, we’re unveiling a new program at Synchron that takes AI-integrated BCIs to the next level. Peter Yoo will deliver a keynote on our vision for an intelligence paradigm in which AI learns directly from human cognition. But first, there’s an elephant in the room to address.

There’s a prevailing thesis, largely pushed by Elon Musk and Neuralink, that BCIs are necessary to solve the AI alignment problem—that by increasing the bandwidth between humans and computers, we can keep up with accelerating artificial intelligence.

But this doesn’t make sense.

BCIs can’t solve AI alignment. The problem isn’t bandwidth; it’s behavioral control. AI is on an exponential compute trajectory, while human cognition—no matter how enhanced—remains biologically constrained. Even with BCIs, we would still think at a snail’s pace compared to AGI, let alone ASI. AI safety depends on governance and oversight, not on plugging into our brains. Alignment must be addressed in a paradigm where humans will never fully comprehend every model output or decision. That is the grand challenge of our time, but it is not one that BCIs will fix.

In fact, the risk may be the opposite of what Elon claims: BCIs could worsen AI alignment risks. Algorithms already manipulate human cognition—TikTok, social media feeds, recommendation engines. Integrating them directly into our brains puts us at risk of AI passively shaping our thoughts, desires, and decisions at a level we may not even perceive.

The real promise of BCIs isn’t about controlling AI, it’s about advancing human flourishing. A core human drive, encoded in our DNA, is to improve our condition. For patients with neurological injury, this means restoring function. In the future, it seems inevitable that it will include enhancement. BCIs will enable us to go beyond our physical limitations, to express, connect and create better than ever before. We’ve seen this before – cars, planes, computers – each transformed what it meant to be human.

This has led me to formulate a simple but essential ethical framework for the integration of BCI and AI. I have distilled it into three pillars that capture the most significant addressable risks: autonomy, privacy, and discrimination.

I am interested in feedback from our community (especially my co-chair of the newly formed World Economic Forum Neurotechnology Council, Nita Farahany—though these views are my own). The pillars are Human Flourishing, Cognitive Sovereignty, and Cognitive Pluralism.

1. Human Flourishing

Neurotechnology should be a force for well-being, expanding human potential and improving quality of life. It must empower individuals to enhance their cognitive and physical capabilities while respecting the fundamental right to self-determination. Innovation should prioritize human agency, fulfillment, and long-term societal benefits, ensuring that advancements uplift rather than diminish human dignity. Regulation should enable responsible progress without imposing unnecessary restrictions that limit personal autonomy or access to life-enhancing technologies.

If We Get It Wrong:

· Loss of self-determination – Overly restrictive policies or centralized control could limit individual choice in adopting neurotechnology.

If We Get It Right:

· Restored and enhanced abilities – BCIs help individuals with neurological conditions regain function.

· Expanded human potential – BCIs become a tool for human expression, connection, and productivity, enabling humans to transcend physical limitations.

· Ethical progress – Regulation protects self-determination while enabling responsible innovation, ensuring BCIs serve human well-being rather than centralized control.


2. Cognitive Sovereignty

Individuals must have absolute control over their own cognitive processes, free from external manipulation or coercion. Privacy and security are paramount: users must own and control their brain data, ensuring it is protected from exploitation by corporations, governments, or AI-driven algorithms. BCIs must prevent subconscious or direct co-option and safeguard against covert or overt AI influence in commerce and decision-making. This may require decentralized, user-controlled infrastructure to uphold cognitive autonomy. Above all, BCIs should enhance personal agency, not erode it.

If We Get It Wrong:

· Loss of cognitive privacy – Brain data could be harvested, sold, or manipulated without consent, leading to unprecedented surveillance and behavioral control.

· AI-driven coercion and persuasion – Advanced algorithms could exploit subconscious processes, subtly shaping thoughts, decisions, and emotions for commercial, political, or ideological agendas.

· Centralized control over cognition – If brain data is monopolized by corporations or governments, individuals risk losing autonomy over their own thoughts, with AI acting as gatekeeper of perception and belief.

If We Get It Right:

· Users own their minds – Brain data remains private, user-controlled, and secure, preventing unauthorized access, surveillance, or manipulation.

· BCIs enhance agency – AI integration is assistive, not intrusive, empowering individuals without shaping their decisions or subconscious cognition.

· Decentralized cognitive autonomy – A user-controlled, secure ecosystem ensures that thoughts, choices, and mental experiences remain free from corporate or governmental influence.


3. Cognitive Pluralism

Diverse cognitive models, perspectives, and ways of thinking must be respected and valued. Cognitive diversity, much like neurodiversity, must be protected and upheld, preventing a singular model of intelligence, perception, or cognition from dominating at the expense of others. This includes addressing cultural discrimination between users and non-users of neurotechnology, particularly as enhancements become more widespread. Access to neurotechnologies must be democratized, ensuring that enhancements do not become a tool of exclusion but a means of empowerment for all.


If We Get It Wrong:

· Tiered class system and social inequality – Cognitive enhancements could create an elite class, widening societal divides.

· Discrimination – Those who choose not to enhance, and those seeking to restore lost function, may face social stigma or barriers to economic and political opportunity, leading to unfair disadvantages.

· Cultural coercion – Subtle pressures or mandates may force individuals toward cognitive augmentation, eroding true self-determination.

If We Get It Right:

· Diverse cognitive models thrive – BCIs expand, rather than narrow, cognitive expression, ensuring a rich landscape of thought, creativity, and intelligence.

· Democratized access – Neurotechnology empowers all, preventing a cognitive elite and ensuring enhancements benefit broad populations, not just the privileged few.

· Freedom to opt in or out – Individuals retain true self-determination, able to choose augmentation or non-augmentation without social, economic, or political penalties.


For the first time, technology can directly interact with human cognition—an opportunity that comes with immense responsibility. BCIs will either empower individuals or risk becoming tools of control. By prioritizing human flourishing, cognitive sovereignty, and cognitive pluralism, we can help ensure they enhance autonomy and creativity. There is much work ahead. Please comment.

Tom Oxley March 2025

David Putrino Abbey Sawyer Karen Rommelfanger Rafael Yuste Anna Wexler Vinod Khosla Alexander A. Morgan, MD PhD Brett Wall Geoff Martha Rafael Carbunaru Jonathan Haidt Andrew Huberman Lex Fridman John Krakauer Stephen Rainey Neal Batra Jordan Peterson David Eagleman Reid Hoffman David Niewolny Anthony Costa Kimberly Powell Mostafa Toloui Josh Wolfe Jitka Kolarova Maryam Shanechi Prof Tim Denison FREng

Stephen Rainey

Philosophy. Brains, rationality, language, ethics.

2 weeks ago

Thanks for this post. I quite like the ethical framework you propose, though I’m skeptical about BCI or AI’s capacity to engage with mental content. AI and human cognition are sometimes (often?) compared as if they are the same thing on a scalar comparison. But AI doesn’t use reasons in the operations it undertakes, whereas in human cognition reasons are standard. This is a fundamental difference between ‘mere’ behaviour and action: action is deliberate behaviour stemming from reasons. AI doesn’t do this. It doesn’t reason. It operates on a different, non-rational basis. In terms of flourishing, we should be alive to the idea that while AI might have apparently expansive capacity and speed, it’s not as good as reasoning. Reasoning is better because it can be made explicit, scrutinised, and so on. That means that action—behaviour done for reasons—can be the responsibility of actors, through highlighting the reasons and seeing how they check out. Humans are good, or can get good, at reasoning, and we shouldn’t lose sight of our critical capacity. If ‘AI alignment’ is putting AI in line with edifying human purposes, then taking responsibility—or ascribing it—for AI applications is realising alignment as an ongoing activity.

Nino Marcantonio

An Augmented & Inspirational Defense Technology Innovation Leader

2 weeks ago

Sean Manion, would love to read your thoughts on this


Thanks for the thoughtful conversation starter, Thomas Oxley. Continuing the dialogue with my response here: https://www.dhirubhai.net/pulse/response-tom-oxley-bci-ai-alignment-nita-farahany-tuo3e

Chris P. Costa

Neurotechnology / Hybrid Artificial General Intelligence Strategy / Open innovation / Kansei Engineering + First Principle Thinking / Swarm Business Intelligence / ML + PL: Machine & Planet Learning

2 weeks ago

Yes, it’s behavioral control, but it’s more than that: it’s mind architecture in a hybrid, distributed cognitive network. By significantly increasing the number of electrodes and optimizing signal frequency and resolution, systems like NL can capture and transmit detailed neural data in real time, allowing immediate feedback and precise control over AI behavior too. Different rhythms operate at different cognitive levels, so higher layers of consciousness do not need to process all the signals of lower layers in order to be more agile.
