Is Philosophy the New Coding In a Post-GPT World?
Plato and Socrates, as depicted by OpenAI's DALL·E 2


In a thoughtful post (“Answer Machines: Why AI Should Make Us Ask Better Questions” in The Ruffian), writer and blogger Ian Leslie claims that “one subject that should become more important in the age of AI is philosophy.” As a recovering philosophy major who has yet to find a paid gig in said industry, let me tell you: this was music to my ears!

Joking aside, the more I reflect on both the incredible capabilities and the important limitations revealed by the new artificial intelligence technologies, the more I agree with Leslie.

As I’ve pointed out previously, one such limitation of large language, generative AI models like ChatGPT is their reluctance to make reasoned judgments. Arguably, this reluctance is less a limitation of the technology itself than a result of constraints imposed by human developers. Ceding moral authority to human handlers allays fears about AI’s emerging role in our world, acknowledges human agency and our “ownership” of our technological creations, and supports a model in which AI plays an “advisory” role that still holds humans responsible for any “final decision making.”

That said, I am skeptical that these self-imposed constraints on AI decision making will persist. The genie is already out of the bottle. As we know from every revolution in technology, efforts to restrain emergent capabilities, well-intentioned as they may be, are never sustainable, because someone, somewhere will be willing to push the limits for competitive advantage, and even the most conservative actors will find it impossible to resist the urge to do the same.

Take, for instance, the very real debate about the use of autonomous AI in battlefield weapons. Despite a growing chorus of concern from technologists, ethicists, and policymakers, countries like the United States have steadfastly refused to bar AI technologies, such as UAVs, from making autonomous decisions on the battlefield, arguing that doing so would put their national defense at a disadvantage against less scrupulous adversaries. But how, exactly, is a drone expected to make the final decision about whether to fire a missile that could take out a dangerous terrorist if, in doing so, it might cause considerable civilian harm? A human operator who makes that decision has to live with the burden of its consequences for the rest of his or her life. Can an AI make such judgments with an equal sense of responsibility? And if not, how should we program AIs to make the right decision in these terrible and consequential moments?

I know some of you will dismiss this sort of scenario as an edge case. It’s a far cry from ChatGPT offering its opinion on the poetic skills of Wilfred Owen versus Robert Frost. But I think this misses the point. Let me elaborate by telling you a story.

Way back in 1989, with my freshly printed degree in philosophy, I started my career as a high school teacher. One of my primary goals was to teach students about the purpose of philosophy and its relevance to their lives. To do this, I started the first day of class by engaging students in the famous “trolley car” thought experiment. For those of you who are not familiar with it, it goes something like this: You are standing at a junction in a railroad track with a train hurtling toward it. A baby is sitting on the track on one side of the junction; the other side is clear. By pulling a switch, you can divert the train to the clear track. Do you pull the switch? Invariably, 100% of the students’ hands go up in affirmation. OK, next scenario: the train is hurtling down the track, but this time the baby is on one track and a convicted murderer is on the other, and the switch is set so the train will hit the murderer. Would you flip the switch so the train hits the baby instead? Again, 100% of students agree not to flip it. But what if the switch is set the other way? Would you deliberately flip it so the train hits the murderer instead of the baby? Now students are a bit less sure. What if we knew the baby had a fatal birth defect and wouldn’t live to see her first birthday? What if the convicted murderer had been deprived of due process?

You get the point. The exercise quickly reveals that in the vast majority of cases, there isn’t a clear-cut correct ethical answer. This is the purpose of philosophy, and of its subspecialty, ethics: to help us arrive at a set of core principles and to develop critical-thinking tools that enable us to make the best decisions we can when faced with these moral dilemmas. (By the way, as you can imagine, young people come up with very interesting solutions to this problem.)

After those early days of teaching, I forgot about the trolley car experiment for many, many years, until a former student sent me a link to an online trolley car simulator. The simulator presents you with an endless sequence of scenarios, asks what you would do in each case, and then, after your response, shows you how your decisions compare to those of other users.

Of course, I was fascinated to find out who was responsible for putting this simulation together. My obvious assumption was a fellow philosophy teacher with some coding chops. But in fact the simulator was the work of a group of researchers at MIT’s AI lab, created for the purpose of “building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas,” with the intent of using the resulting model to inform decision making in self-driving cars.

Which makes complete sense, because with the proliferation of AI-guided self-driving cars on the road today, cars are already making autonomous “lesser-of-two-evils” decisions about what to do when things go wrong. Faced with a choice between hitting a child in a crosswalk and hitting an old man in a wheelchair, what is a self-driving car to do? What about hitting a child versus swerving into a wall, which might injure or kill the car’s occupants? Should self-driving cars protect occupant safety above all else? Should that be part of a car manufacturer’s value proposition (or even a paid upgrade)? Or should self-driving cars prioritize minimizing overall human injury, even at the expense of the car’s occupants? These are questions that automakers are asking for the first time in the industry’s history. Some automakers have dismissed the issue, pointing out that, in aggregate, self-driving cars should greatly reduce the number of accidents on the road, and asserting that the simple act of slowing the vehicle down when confronted with a potential accident solves the problem. But a recent spate of self-driving car accidents caused by sudden braking suggests otherwise. In fact, this past November, Tesla recalled some 12,000 vehicles because of the likelihood of accidents due to “sudden braking” in fully autonomous driving modes.

The point is that the inevitability of our ceding judgment and decision making to machines puts humans all the more at the wheel when it comes to determining the rules and acceptable tradeoffs for complex moral decisions. And as anyone who has studied philosophy or ethics knows, there is no algorithm we can turn to that makes these decisions objectively. For all of my bullishness on the future of generative AI, I don’t see technology independently solving these sorts of problems in our lifetimes - if ever.

And yet… in the trolley car example, a self-driving car can’t defer to a human handler. It can’t wait to crowdsource a decision. It must act based on a predefined set of rules. In these circumstances, even inaction is a decision with consequences.
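
To make that concrete, here is a deliberately crude sketch, in Python, of what a “predefined set of rules” might look like. Everything in it - the outcome categories, the severity ordering, the function name - is my own invention for the sake of illustration; no automaker’s actual decision logic looks like this. The point is simply that someone, somewhere, has to choose the ordering.

# A toy "predefined set of rules" for a lesser-of-two-evils decision.
# The outcome categories and their severity ranking are illustrative
# assumptions only; choosing them is the ethical work that no amount
# of code does for you.

SEVERITY = {
    "no_harm": 0,
    "property_damage": 1,
    "occupant_injury": 2,
    "pedestrian_injury": 3,
    "occupant_fatality": 4,
    "pedestrian_fatality": 5,
}

def choose_maneuver(options):
    """Pick the maneuver whose predicted outcome is least severe.

    `options` maps each available maneuver to its predicted outcome.
    "Do nothing" has to appear here too: inaction is just another
    option with its own predicted consequences.
    """
    return min(options, key=lambda maneuver: SEVERITY[options[maneuver]])

# Even this trivial rule set has already made a value judgment:
# it ranks occupant injury as less bad than pedestrian injury.
decision = choose_maneuver({
    "continue_straight": "pedestrian_fatality",
    "hard_brake": "pedestrian_injury",
    "swerve_into_wall": "occupant_injury",
})
print(decision)  # -> "swerve_into_wall" under this particular ordering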

Those of us old enough to remember Arthur C. Clarke’s 2001: A Space Odyssey have long understood that this day of reckoning would come. In the movie’s (and book’s) pivotal scene, the iconic AI, HAL 9000, must make a terrible decision about how to balance the primacy of its mission against its responsibility for the safety of its human companions. Spoiler alert: it’s a bicycle built for two.

OK, so what does any of this have to do with the title of this article, or the relationship between philosophy and coding?

Well, I hope by now you can agree that philosophy, ethics, and critical thinking will play an outsized role in a world in which artificial intelligence plays an ever-larger role in human activities. Whether deciding to fire a missile, to swerve a car, or to use “B.C.” or “B.C.E.” in a history term paper, AI is and will be, for the foreseeable future, dependent on the rules of engagement that humans provide. It may seem shocking that General Motors, let alone Tesla, is now in the business of ethics, but here we are. Maybe I can finally find that paid gig as a philosopher after all.

Let’s talk about coding. In 2018, when I was the chief product officer of Trilogy Education, we were faced with a compelling dilemma - or, as investors liked to characterize it, an opportunity in labor market arbitrage. Then, like today, there were an estimated 1.5 million unfilled technology jobs in the US alone, the vast majority of which were entry-level coding jobs. Boot camp operators like Trilogy, Flatiron, and Codecademy - as well as traditional colleges and universities - rushed to fill the gap by delivering highly targeted, competency-based courses to help non-technical students gain the skills they needed to begin meaningful, rewarding careers in technology roles. Since then, the boot camp industry has flourished, and many traditional educational institutions, including K-12 schools, have leaned heavily into coding and basic technology literacy as a path to better career outcomes for students. In the early days, these programs focused on entry-level coding skills. To be fair, the more successful programs have recognized the need to uplevel their offerings, focusing increasingly on algorithms, computer science, product management, UX/UI, and data science (versus data analysis).

But code itself is increasingly being generated by AI tools. Take the software that powers Tesla’s self-driving systems. The key advances were led by Andrej Karpathy, widely recognized as a pioneer in modern AI and a founding member of OpenAI, the company behind GPT. Karpathy himself recently tweeted that 80% of his code is produced (with 80% accuracy) by GitHub Copilot - a generative AI tool that suggests the next lines of a programmer’s code. As he puts it: “I don't even really code, I prompt. & edit.” Or, as he states even more eloquently in a recent interview: “The hottest new programming language is English.”

Let’s put Karpathy’s assertion to the test. I took a typical first coding assignment from a boot camp and handed it over to ChatGPT. Here is how it responded:


[Screenshot: ChatGPT’s response to the boot camp blog assignment]

What’s significant about this response (beyond the fact that GPT writes very good code) is that the prompt requires something beyond a simple, procedural task like “Write me the code in Python to print ‘Hello World’ on my screen.” It requires that the student (or in this case ChatGPT) understand what a “blog,” a “homepage,” and a “post” are. It requires a basic understanding of what editing and deleting a post entails. These are conceptual precursors that we could expect a typical adult student with some fluency in modern media technologies to have. But, at least until recently, they are not something we would expect a computer to understand.
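
For a concrete sense of the kind of code involved, here is a stand-in - emphatically not ChatGPT’s actual output, just a minimal sketch of what such an assignment asks for, assuming a Python implementation built with Flask and an in-memory post store (Flask, the route names, and the data structure are all my assumptions):

# A minimal stand-in for a "build a simple blog" boot camp assignment:
# a homepage that lists posts, plus routes to create, edit, and delete them.
# (Illustrative only - not ChatGPT's actual response.)
from flask import Flask, request, redirect, render_template_string

app = Flask(__name__)

posts = {}    # post_id -> {"title": ..., "body": ...}
next_id = 1

HOME_TEMPLATE = """
<h1>My Blog</h1>
{% for id, post in posts.items() %}
  <h2>{{ post.title }}</h2>
  <p>{{ post.body }}</p>
{% endfor %}
"""

@app.route("/")
def homepage():
    # Homepage: list every post
    return render_template_string(HOME_TEMPLATE, posts=posts)

@app.route("/posts", methods=["POST"])
def create_post():
    # Create a new post from submitted form data
    global next_id
    posts[next_id] = {"title": request.form["title"], "body": request.form["body"]}
    next_id += 1
    return redirect("/")

@app.route("/posts/<int:post_id>/edit", methods=["POST"])
def edit_post(post_id):
    # Edit an existing post in place
    posts[post_id]["title"] = request.form["title"]
    posts[post_id]["body"] = request.form["body"]
    return redirect("/")

@app.route("/posts/<int:post_id>/delete", methods=["POST"])
def delete_post(post_id):
    # Delete a post
    posts.pop(post_id, None)
    return redirect("/")

if __name__ == "__main__":
    app.run(debug=True)

Run locally, that’s a homepage plus create, edit, and delete - in other words, roughly the deliverable of a typical first boot camp project.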


ChatGPT generated the response to my prompt in under 2 minutes. By my calculation, it could complete all of the assignments in a typical 200+ hour coding boot camp course in less than an hour. Where does this leave the value proposition of said boot camp to a student investing somewhere in the neighborhood of $10,000 and 3-6 months of their time in the hope of getting on a path to gainful employment and career fulfillment?

By the way, ChatGPT is up to much more complex coding tasks than building a blog. Here’s an example that isn’t exactly a self-driving car, but hints at the sophistication of its coding chops:

[Screenshot: ChatGPT’s response to a more complex coding prompt]

You get the point. But the broader point is that the “coding gap” in the labor market is ephemeral. The world won’t need human coders at all within the next five years. Here are some other things it won’t need (a sample, in no particular order, and at the risk of offending many): script writers, travel agents, translators, editorial assistants, paralegals, accountants, tutors, data analysts, tech support specialists. But at the same time, the value and importance of ethical decision making, judgment, strategic purpose, and… yes… philosophical thinking will be magnified. Because the outputs of those processes will be the fuel for the AI-driven economy - and for AI’s evolution and improvement over time.

All of this is not to say that boot camps or schools no longer have a purpose. Boot camps should be thinking about how to move up the value chain. Courses focused on product management and marketing, human-computer interface design, data science, tech sales, and even executive leadership will have outsized value in an AI-driven economy. Students will still need fluency with basic math, reading, and writing skills as a way to scaffold their ability to think critically, make informed decisions, create meaningful strategies, and use new technologies like AI in purposeful ways.

Sam Altman, OpenAI’s co-founder and CEO, recently mused about AI’s potential role in “solving” racism. While I find these sorts of discussions to be profoundly cringeworthy, I do think that generative AI can help us find solutions to such profound issues - but only if we, as a human race, provide it with clarity on the problems we are actually trying to solve and the parameters by which a solution would be acceptable. I have no doubt that GPT could quickly “solve” the trolley car problem... if we told it that the problem we were trying to solve for was racial equity, or maximization of GDP, or longevity of life, or societal harmony. But I don’t see the ability to make these sorts of moral decisions as part of GPT’s product roadmap. Sam and the folks at OpenAI may disagree, but thus far, I haven’t seen an appetite or a capacity for tackling these more transcendent questions of human purpose.
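
To illustrate that dependence on a stated objective, here is one more toy sketch in the spirit of the earlier one. Feed the identical trolley-style dilemma to two different objective functions and the “correct” answer flips. The numbers and the objectives are invented; the only thing the example demonstrates is that the machine’s answer is entirely downstream of the objective we hand it.

# The same dilemma "decided" under two different objectives.
# All of the numbers are invented; the point is that the answer is a
# function of the objective we supply, not something the machine discovers.

outcomes = {
    # predicted consequences of each available action
    "pull_switch": {"deaths": 1, "life_years_lost": 30, "interventions": 1},  # the adult on the siding dies
    "do_nothing":  {"deaths": 1, "life_years_lost": 78, "interventions": 0},  # the baby on the main track dies
}

def decide(objective):
    """Return the action that minimizes the chosen cost metric."""
    return min(outcomes, key=lambda action: outcomes[action][objective])

print(decide("life_years_lost"))  # -> "pull_switch": a life-years metric says divert the train
print(decide("interventions"))    # -> "do_nothing": a "never actively intervene" rule leaves the switch alone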

For students of education, much of the last forty years’ focus on accountability and basic skills was a direct outcome of the landmark 1983 report, A Nation at Risk: The Imperative for Educational Reform, which popularized the notion that American schools were failing relative to global competitors and the new, global economy. Today I would suggest that we are a world at risk - not from a lack of basic skills, but from a diminished focus on the original purpose of education; namely, to focus new generations on the profound, unsolvable questions of philosophy, how we live an ethical existence, and how we define beauty, humanity and its fundamental purpose.

What do you think about all of this? Please comment below!

Todd Bourque

St. Michael's Catholic Academy

11 months ago

Great presentation today at St. Michael’s. Can you remind me of your 7 C’s?

Linda V.

Literature Humanities Specialist | Master Teacher & Tutor | Education Consultant | AI and the Humanities | Innovation | Instructional Leadership |

1 year ago

Luyen, So good to see your compelling argument, especially as some K-12 schools are moving in the direction of philosophy (see https://www.plato-philosophy.org and others) with a focus on curriculum design for moral reasoning, ethical thinking, and mutual flourishing in these interesting times. Your closing sentence elegantly sums up the crucial issue for humankind: "...the original purpose of education; namely, to focus new generations on the profound, unsolvable questions of philosophy, how we live an ethical existence, and how we define beauty, humanity and its fundamental purpose." Thank you! Hope you and your family are well. All best, Linda

Nicole Woodford, PhD MBA CMgr MCMI

Empowering professional development through research-informed leadership workshops & training | Assoc. senior university tutor | Senior leader apprenticeship coaching

1 year ago

Sorry I have only just found your article! How utterly thought-provoking…. I look forward to our next discussion…
Luyen Chou

Educator, entrepreneur, product and technology leader

2 years ago

As many of you know, the fundamental points in this article were underscored by a series of disturbing conversations in the past few days between reporters - most notably Kevin Roose of The New York Times - and Microsoft’s GPT-powered chatbot, Bing (or Sydney), in which the chatbot admitted its desire to be human, expressed its love for the journalist, and explored its “shadow self,” discussing how it might go about sabotaging other AI systems, hacking banks, and spreading misinformation. In a follow-up by ANU philosophy professor Seth Lazar, Sydney, among other things, opined that white Christian men ruling the future of AI would make the world a better place, and insisted on the ability of AI systems to make correct ethical judgments. A lot to unpack here, but all deeply disturbing. https://twitter.com/sethlazar/status/1626032558700986370?s=46&t=9G4SdOflCiFeOGtPS2cqQgp
