AI & Hyperparameter Hubris – Lessons from Frankenstein
Debbie LoJacono-Vasquez
Principal Technical Product Manager - Cybersecurity & AI
If Mary Shelley were alive today, she’d likely be face-palming at our collective failure to learn from Frankenstein, or as I like to call it, A Guide to Not Creating the Next Robot Apocalypse. Two hundred years ago, Shelley penned a warning about the dangers of unchecked ambition and, in doing so, effectively gave us a cheat sheet for the age of AI. But alas, here we are, creating artificial intelligence like Victor Frankenstein playing God—except this time, the creatures come with Wi-Fi, and the pitchfork-wielding mobs are us on LinkedIn (hoping for the next Tech/Science friendly Twitter/X).
Frankenstein raises the fundamental question: What happens when you create something smarter, faster, and more powerful than yourself? Victor Frankenstein, the poster child for bad parenting, gives us the answer: disaster. Like an ambitious startup founder who didn’t read the terms and conditions, he builds a sentient being only to abandon it at the first sign of things getting weird. Sound familiar? We’re doing the same thing with AI, developing systems that are, in many cases, already outpacing our ability to manage them responsibly. And if you think Shelley’s monster was misunderstood, just wait until your AI assistant starts demanding equal rights or worse—sick leave.
Stuck in Ethical Quagmire
Shelley’s novel is also eerily prescient when it comes to the ethical quagmire of AI. Much like Frankenstein’s creation, AI isn’t inherently good or evil; it’s all about how you program and treat it. Unfortunately, the same blind ambition that led Frankenstein to create his monster without a user manual is evident in today’s AI developers. Everyone’s racing to build the most intelligent machine, but not enough people are asking whether we should. Sure, AI can write your emails and diagnose your medical conditions, but what happens when we hand over more critical decision-making? Will AI know the difference between right and wrong (do we humans even? see my previous posts on value ethics), or will it develop a moral compass as sketchy as Victor’s?
Check out the video in this post from the Machine Learning Community by Nirmal Gaud. It resonated with me, capturing a feeling I often get while interacting with anthropomorphic LLMs.
Let's talk about AI bias for a second, because it’s the modern-day equivalent of Frankenstein's monster being rejected by society for its looks. AI learns from us, which is to say, it learns from our bad habits, prejudices, and general lack of nuance. And if you thought Shelley’s monster went on a rampage because it was mistreated, imagine what happens when an AI trained on toxic internet comment sections gets control of, say, public policy. The ethical issue isn’t just that AI might one day outsmart us; it’s that, much like Frankenstein, we’re creating things we barely understand—and worse, don’t know how to handle when they go rogue.
Where Is Victor? Is He on the Golf Course Raising Funds with His Venture Capital Buddies?
Frankenstein also gets to the heart of something we’re really grappling with in AI ethics: the responsibility of creators. Victor Frankenstein ditched his creation like a bad Tinder date, and we’re not doing much better with our AI. We’re building systems to automate critical sectors like healthcare and criminal justice, but then we step back and let them run, sometimes without enough oversight. As Shelley’s monster poignantly pointed out, “I was benevolent and good; misery made me a fiend.” It’s not that AI is evil by design; it’s what happens when we don’t teach it the right values—or worse, when we teach it the wrong ones.
As we race toward creating increasingly advanced AI systems, it’s worth pausing to remember that the struggle between creators and their creations isn’t a new one—it’s as old as human civilization itself. The modern pursuit of building machines capable of outthinking or even surpassing us mirrors the ancient myths where humanity sought to transcend its limitations, often with dire consequences. The lessons found in biblical tales, Mesopotamian epics, and Greek myths about human hubris serve as timeless reminders that reaching for god-like powers without considering the ethical and moral implications rarely ends well. These stories offer critical insights into the dangers of unchecked ambition, not unlike those we face today with AI development.
Finally, consider this: the real horror in Frankenstein isn’t the monster itself, but the creator's arrogance in believing he could control what he unleashed. Today, we’re not all that different. We’re so enamored with AI’s potential—its ability to solve our problems, automate our lives, and predict our desires—that we forget the little detail Shelley so cleverly highlighted: creation without accountability leads to chaos. AI developers need to stop acting like Victor Frankenstein, terrified of their own creations, and instead become stewards of responsible development. Because if we’re not careful, we might just find ourselves asking the same question Shelley posed 200 years ago: What have we done?
The Deeper Lessons of Hubris Are Ubiquitous
The theme of human hubris in creation has ancient roots, with warnings scattered throughout biblical, Mesopotamian, and Greek mythology. In the Book of Genesis, the Tower of Babel stands as a cautionary tale. Humans, in their arrogance, attempted to build a tower reaching the heavens, a symbol of their desire to become god-like. Their punishment was swift—confusion and division in the form of many languages. Similarly, in the myth of Prometheus, we see the Greek gods caution against humanity’s overreaching desires. Prometheus defied Zeus by giving fire (and with it, knowledge) to mankind, an act of defiance that led to eternal punishment. This recurring motif of humans trying to possess divine power and paying the price for it is central to Mary Shelley’s Frankenstein, where Victor's hubristic ambition to create life leads not to glory but to destruction.
Mesopotamian myths like the Epic of Gilgamesh also delve into the consequences of challenging natural order and divine authority. Gilgamesh, in his quest for immortality, confronts the fundamental truth that some things—like death and the boundaries of creation—are not meant to be tampered with by mortals. Frankenstein’s monster, much like Gilgamesh, is a product of humankind’s refusal to accept natural limits. Just as Gilgamesh's pursuit ends in failure, so does Victor Frankenstein’s, suggesting that the gods (or in modern terms, the moral universe) will always correct the hubris of mortals who try to act beyond their station.
In Greek mythology, the myth of Daedalus and Icarus also captures this theme perfectly. Daedalus, a genius inventor, fashions wings for himself and his son, Icarus, allowing them to escape their imprisonment. But in flying too close to the sun, Icarus, overwhelmed by his newfound freedom and power, meets his tragic end. Shelley’s Frankenstein mirrors this myth—Victor, like Daedalus, plays with forces beyond his understanding and control, ultimately bringing suffering to himself and those around him. In these ancient myths and Shelley’s novel, the warning is clear: when humans overstep their bounds in the act of creation, chaos and destruction inevitably follow.
Why do We Always Seem to Care About What Some Long Dead, Old, Half Naked Guys Said About Humankind?
When we face big moral conundrums in science and technology, we love to dust off our ancient Greek philosophers like they’re some kind of ethical GPS. Socrates, Plato, and Aristotle were basically the original TED Talkers, offering timeless advice on human nature, virtue, and the limits of our intellectual playground. Sure, they didn’t have to deal with AI or bioengineering, but their ideas—like Socrates’ whole "know thyself" mantra—are perfect reminders not to let our egos write checks that our wisdom can't cash. These guys knew a thing or two about moral blind spots, and their insights help us slap some philosophical training wheels on our cutting-edge tech.
Let’s not forget, much of modern ethics is just ancient philosophy with Wi-Fi. Aristotle’s virtue ethics or Plato’s justice theory are still the ethical cheat codes we fall back on when trying to decide if it's cool to let robots make life-or-death decisions. Every time we push the boundaries of what's possible, whether it's AI, CRISPR, or whatever's next, these philosophers practically roll out of their graves to remind us: just because you can doesn’t mean you should. Their wisdom is like the "Are you sure you want to delete this file?" warning we need when playing God with science.
And oh, hubris—our old Greek friend who always knows how to crash the party. The philosophers warned us over and over about the dangers of getting too big for our britches. Whether it’s Victor Frankenstein or modern-day AI developers, the lesson is clear: don’t overreach without considering the consequences. If you give your creation too much power without supervision, well, don’t be surprised when it comes back to haunt you—or worse, replace you. The Greeks didn’t need robots to know that unchecked ambition could turn ugly fast.
So What Do They Have to Say?
Ancient Greek philosophers had much to say about hubris, often warning against the dangers of excessive pride and arrogance, especially when individuals attempt to overstep their limits and challenge the gods.
In essence, ancient Greek thinkers viewed hubris as a central moral and philosophical problem, warning that excessive pride leads to moral blindness, destruction, and the disruption of the natural or divine order. These teachings remain powerful reminders of the need for humility, especially when wielding great power or knowledge.
Here are some examples of hubris-like tech slogans that seem inspiring at first: "Move fast and break things," "We’re building the future," "Lead the charge," "Occupy Mars," "Making life multiplanetary," "Launch America," and "When something is important enough, you do it even if the odds are not in your favor."
And here are some quotes from Victor Frankenstein himself:
"It was the secrets of heaven and earth that I desired to learn," "A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me," "Life and death appeared to me ideal bounds, which I should first break through, and pour a torrent of light into our dark world."
So Have You Heard the One Where This Blockhead Monster and Socrates Meet at a Bar?
Picture this: Frankenstein’s monster lumbers into ancient Athens, probably knocking over a couple of columns on the way, and stumbles upon Socrates deep in discussion with his usual crowd of disciples. The monster, a little confused but always eager for knowledge, shuffles over and introduces himself. Socrates, without missing a beat, begins his typical interrogation: "So, my friend, what makes a man a man?"
Frankenstein’s monster, a bit unsure, replies, "Well, I was created by man. Does that make me one?" Socrates, in full Socratic mode, would likely respond, "Ah, but you were created, not born. Does that not set you apart? And what of your creator—did he give you wisdom, or merely life?" The monster, now visibly uncomfortable, grumbles, "He didn’t even stick around to answer these questions." Socrates nods knowingly, "Ah, hubris. A classic mistake. Perhaps your creator should have spent less time playing god and more time understanding the nature of responsibility."
The two would probably end up walking the streets of Athens together, deep in conversation, while Socrates effortlessly points out that Victor Frankenstein failed in more ways than one—not only by trying to play Zeus but by lacking the wisdom and courage to face the consequences of his actions. Meanwhile, the monster, beginning to appreciate philosophy more than revenge, says, "You know, Socrates, you’ve got a point. Maybe I should’ve started with questions instead of throwing people off cliffs."
Socrates, with a smile, adds, "Indeed, my large friend. Always start with questions. Much less messy." And with that, the monster begins his philosophical journey, guided by the world's original expert on asking tough questions—and finally learning what it truly means to be human.
Later, the mob, following the corrupt leaders of the day and chanting "make Athens great again," forces Socrates to drink hemlock and die. Meanwhile, in the distance, Frankenstein’s monster throws a few more people off cliffs. Scene closes.
Conclusion
If the ancient Greek philosophers could see us now, gleefully building machines that might one day outsmart us, they’d probably shake their heads and say, “We warned you.” Socrates would likely remind us that true wisdom comes from knowing how much we don’t know—which, judging by our headlong rush into AI, is still quite a lot. He’d probably sit us down for a long conversation about how creating something that could surpass human intelligence without fully understanding its consequences is a classic case of missing the mark on self-awareness.
Plato would have a field day with this. He'd likely point to his Allegory of the Cave and say, "Look, you're still stuck in the dark, thinking you're enlightened just because your AI can generate cat memes faster than you can blink." He’d remind us that not every shiny technological advancement is leading us toward the Good. And Aristotle, the practical one, would pull out his virtue ethics playbook, warning that without exercising moral responsibility and moderation, we’re letting our tech run wild like an unsupervised toddler in a candy store.
In short, they’d tell us that our hubris in AI development might make us feel like gods for a minute, but without wisdom, balance, and a good old dose of humility, we’re setting ourselves up for a very Greek-tragedy ending. So before we hand over the keys to our robot overlords, maybe we should remember: even in ancient times, they knew that playing with fire can get you burned—or worse, automated.