The Dangerous Misconceptions About AI and Why Ethical Development is Key
Angelique Dawnbringer
Digital Trust | Information & Cybersecurity Advisor
Last night, I overheard a conversation that left me both concerned and frustrated. A man was enthusiastically explaining how tools like Copilot and ChatGPT could generate fully functional code and do almost anything you ask of them. To him, it seemed these tools were magical solutions to development challenges. What he didn’t realize—and what so many people fail to understand—is that the truth is far more complex, and frankly, quite the opposite of the picture he was painting.
The Risk of Leaking Sensitive Data
The reality is that these tools pose significant risks, particularly when used without care. Many users unknowingly leak corporate data, intellectual property, and other sensitive information while using them. Even worse, many don’t even care. This behavior is dangerous, especially when people casually recommend these tools without understanding the security implications. AI models like these don’t come with built-in privacy protections or guarantees. Without strict governance and policies in place, the potential for sensitive data to be exposed, misused, or even embedded into the learning systems of these AI tools is very real. And when that happens, the consequences can be catastrophic for businesses.
Encouraging the blind adoption of AI tools without educating users about these risks is reckless. Many organizations lack the necessary safeguards, which can lead to breaches that damage both reputation and trust. The focus should not be on how easy and powerful these tools are, but rather on how to use them safely and responsibly.
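To make that concrete, here is a minimal sketch of what "using these tools safely" can look like in practice: a pre-submission check that scans a prompt for obvious secrets before anything is sent to an external AI service. The pattern names, regexes, and function names below are illustrative assumptions on my part, not a vetted data-loss-prevention policy.

```python
import re

# A minimal sketch of a pre-submission guardrail: scan text for obvious
# secrets before it is ever sent to an external AI service. The patterns
# and names here are illustrative assumptions, not a complete DLP policy;
# real governance needs far broader coverage and formal review.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "email_address":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "bearer_token":   re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

if __name__ == "__main__":
    prompt = "Debug this: client = S3(key='AKIAABCDEFGHIJKLMNOP')"
    findings = check_prompt(prompt)
    if findings:
        # Block (or redact) instead of sending. The point is that the check
        # runs on your side, before the data ever leaves the organization.
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt passed the basic screen")
```

A real deployment would pair a screen like this with policy, logging, and human review; the sketch only shows where such a control belongs, on your side of the wire.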
The MIT AI Risk Repository is a valuable resource for understanding these dangers. It catalogues a wide range of AI-related risks and provides clear reasoning on how developers and companies should act responsibly when deploying AI technologies. Ignoring such guidance not only undermines ethical standards but also puts businesses and their users at significant risk.
The Debate Around AI Regulations: Why Companies Are Wrong
As if the risks surrounding careless AI usage weren’t enough, we’re now seeing major companies like Ericsson claim that AI development is becoming nearly impossible due to the AI Act and other regulatory frameworks. But this argument misses the mark, and frankly, I call bullshit.
These regulations, like the AI Act, don’t exist to stifle innovation—they are designed to ensure that AI is developed responsibly. They ensure that those building AI systems are informed, manage end-user risks, and are transparent about how their systems function. The fact that some companies are pushing back against these regulations and even considering moving their development to the U.S. or India to bypass them is alarming. Instead of evading laws designed to protect people, why not adapt your approach and demonstrate that AI can be developed ethically?
Yes, compliance with these laws can be tedious. I don’t deny that. But these frameworks exist for a reason. I’ve seen firsthand the kinds of disasters that can happen when we ignore these risks. For example, we’ve witnessed radiation overdoses in software-controlled medical systems (the Therac-25 accidents being the classic case) due to lapses in oversight. When companies prioritize speed and convenience over safety, the results can be deadly.
A Broader View of AI Risks
From a risk perspective, there are far more concerns than just the leakage of corporate data. In fields like healthcare, AI can be used to administer medication or control critical systems, and mistakes in these applications can result in harm or death. Imagine an AI system making an error in drug dosage, or a malfunction in a system used to monitor patients' vital signs. In autonomous driving, lives are literally at stake when decisions are handed over to AI systems. These technologies must operate with an almost flawless degree of reliability, and rushing development without proper scrutiny can lead to disastrous outcomes.
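To illustrate the kind of safeguard that belongs around such systems, here is a minimal sketch of a deterministic safety envelope that an AI-suggested dose must pass before it goes anywhere near a patient. The drug name, limits, and function names are hypothetical illustrations, not clinical values; the point is only that fixed, reviewable code and a human clinician, not the model, get the final say.

```python
# A minimal sketch of a deterministic guardrail around an AI suggestion in a
# safety-critical setting. Drug names and limits are hypothetical, not
# clinical values: the model may propose, but a hard-coded, reviewable
# safety envelope plus human sign-off disposes.
HARD_LIMITS_MG = {"exampledrug": (0.5, 10.0)}  # hypothetical (min, max) per dose

class DosageRejected(Exception):
    """Raised when an AI-suggested dose falls outside the approved envelope."""

def validate_dose(drug: str, dose_mg: float) -> float:
    """Accept an AI-suggested dose only if it sits inside the fixed envelope."""
    try:
        low, high = HARD_LIMITS_MG[drug.lower()]
    except KeyError:
        raise DosageRejected(f"No safety envelope defined for {drug!r}") from None
    if not (low <= dose_mg <= high):
        raise DosageRejected(
            f"{dose_mg} mg is outside the approved range [{low}, {high}] mg"
        )
    return dose_mg  # still subject to human sign-off downstream

if __name__ == "__main__":
    ai_suggestion = 42.0  # imagine this came from a model
    try:
        validate_dose("ExampleDrug", ai_suggestion)
    except DosageRejected as err:
        print(f"Escalating to a clinician: {err}")
```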
This is why stringent regulations are in place: they serve as a safeguard when lives and safety are on the line. The MIT AI Risk Repository highlights many of these dangers, from healthcare to transportation, pointing out the high stakes involved in AI mismanagement. Developing AI responsibly isn’t just about protecting data; it’s about safeguarding people’s lives in critical fields like these.
Lessons from MedTech: The MDR and Startups
The complaints about AI regulations remind me of a similar debate in the MedTech industry surrounding the MDR (Medical Device Regulation). Critics often argue that the MDR is too stringent, that it hampers innovation, and that it ties up startups, preventing them from achieving good results because they can’t handle the bureaucratic burden. But in the MedTech world, we’re talking about people’s lives—so it’s no surprise the regulations are strict. While it’s true that large companies often have the resources to navigate these regulations more easily, the need for rigorous oversight is undeniable when human lives are at stake.
What’s troubling is that we’re hearing similar complaints from companies developing AI, particularly big players like Ericsson. They claim that EU regulations are hindering innovation, painting the EU as the villain for creating barriers to technological advancement. They even suggest moving operations to the U.S. or India, where the regulatory environment is perceived as less strict and less costly to comply with. But in reality, these regulations exist to protect lives and ensure the technology we build doesn’t cause harm. This is no different from MedTech: AI, especially in applications like healthcare, finance, and critical infrastructure, can directly impact lives. If companies can’t or won’t comply with these regulations, the issue may not be the law itself; it’s often their unwillingness to adapt their methods or invest in responsible innovation.
AI Can Be Developed in the EU—It Just Takes Effort
One of the biggest myths being circulated is that AI can’t be developed in regions like the EU because of strict regulations. This simply isn’t true. AI can absolutely be developed here, just like open-source software. The difference is that you must be willing to communicate clearly, prioritize safety, and do the hard work to ensure compliance.
To those arguing that the effort isn’t worth it, I would point to scientific research. In science, everything must be scrutinized, reviewed, and tested to ensure the results are valid and safe. AI development should be no different. We need to hold AI to the same standards if we’re going to avoid causing harm and ensure that this technology benefits society as a whole.
The Responsibility of AI Innovators
The pushback against AI regulations, and the casual attitude many take toward tools like Copilot and ChatGPT, points to a larger issue in the tech industry: a desire for speed and convenience at the expense of safety and ethical considerations. If we truly want AI to reach its potential, we must focus on responsible development, not just fast development. This means educating users about risks, establishing clear governance frameworks, and adhering to the regulatory standards that are in place to protect us all.
In the end, the development of AI is not just about what is technically possible—it’s about ensuring that the technology we create is safe, transparent, and ultimately serves the greater good. We have the tools to develop AI responsibly. Now it’s up to the companies and individuals building this technology to step up and make that happen.
Comments
Agree, there are regulations for a reason, and the responsibility always lies with the user to understand the potential risks, which can sometimes be a struggle to get a grip on.
Information Security Officer at SEB Kort
Very well written, and it calls for consideration. Too many decision makers just read the headlines regarding new technology and then rush in the hype direction, determined to be first, without thinking twice.
Striving to be your Lighthouse on Cybersecurity - enabling you to have security investments that matter
It is always interesting to take in your insights, and I read this with interest. Ethical development is indeed key, and it would be so gratifying to see European leadership pave the way. Not only with regulations, but with full-blown alternatives to everything else. When it comes to single entities/organizations that are competing on a global market, we need to have an understanding of their reasoning. Just keep up your good work and please continue to share your thoughts and insights. They give so much value.
I fully agree with Angelique Dawnbringer and Johan Bocander!!! In any other area, would you want unhinged profit-maximizing firms doing what they want? Regulatory frameworks create stability and (healthy) friction.
People & Product Leader | cloudcloud.dev
Insightful as ever. I don’t quite agree with everything, though. I do think that much of recent EU regulation in tech is stifling innovation. Regulation is generally written by lawyers for lawyers and is quite inaccessible to most engineers. As such, it misses the mark: between the intent of a regulation, which may be sound, and its implementation sit several filters and interpretation biases on the way to the engineers.