Generative AI: An Eternal Golden Debate

Hot takes on generative artificial intelligence — the generic term for the technology that powers ChatGPT and its competitors — tend to be somewhat hyperbolic, full of enthusiastic (or dire) predictions about a bright (or apocalyptic) future. Two recent articles take a step back from the ledge to address real-life uses of generative AI in the here-and-now. At first glance, the articles appear to hold different opinions about the utility and appropriateness of generative AI, but on deeper examination, they are fundamentally alike, and provide useful bookends to the appropriate use of ChatGPT and its ilk.

The first article comes from the New York Times Ethics column, where a reader asks if it’s appropriate to use ChatGPT to draft the type of tedious narratives that are familiar parts of nearly every job: annual reports, budget submissions, and so forth. The answer asserts that because these types of reports are typically not original, creative work, but rather something we would historically have based on a template or last year’s report, it’s appropriate to use ChatGPT as long as you review and edit the output appropriately and can stand by what you submit.

The second article comes from The Atlantic, where noted author Douglas Hofstadter (of Gödel, Escher, Bach: An Eternal Golden Braid fame) describes his own recent experience with generative AI. A “serious and thoughtful reader” of Gödel, Escher, Bach wanted to publish an article from Hofstadter on his inspiration for writing the book. In an effort to be respectful of Hofstadter’s time, the reader asked ChatGPT to generate a draft of the article, which he then sent to Hofstadter to review. According to Hofstadter, the resulting article was not only completely inaccurate, but also completely inconsistent with Hofstadter’s own writing style, and full of “vague generalities.” “Large language models,” writes Hofstadter, “do not think up original ideas; rather, they glibly and slickly rehash words and phrases ‘ingested’ by them in their training phase.”

Hofstadter is profoundly troubled by the implications of a future where we use generative AI as a substitute for human intellect; the Ethicist column is not at all troubled about the use of generative AI to relieve the tedium of bureaucracy. On the surface, the tone of concern of the Atlantic article seems diametrically opposed to the breezy reassurance of the New York Times column, but both are essentially saying the same thing. We can use generative AI with assurance under some circumstances, but should be wary under others.

As with much of modern technology, there are no bright lines when it comes to the ethics of generative AI. There is no checklist to follow that will definitively tell you when you’ve crossed from appropriate to inappropriate use. But there are questions we can ask ourselves. Is this supposed to be an original work, in my own voice? Is it an act of creativity for which I am claiming credit? Is it a work of scholarship? At least right now, a chatbot that regurgitates what it finds on the internet can’t be creative or original. Such efforts are best done by an “authentic and reflective” human voice, according to Hofstadter.

But if the work is something for which one would typically use a template, a previous report, or filler text, generative AI is usually appropriate. Even then, we must still review and edit the draft and be able to stand by what is written. The closer an effort is to the linguistic equivalent of computation, the more likely generative AI can be appropriate – and transformational.


Barbara Braun is a Principal Director in the Corporate Chief Engineer’s Office at The Aerospace Corporation. Braun joined Aerospace in 2000 as a Senior Member of the Technical Staff following several years of active-duty service in the U.S. Air Force. Braun holds a B.S. from the Massachusetts Institute of Technology in Aeronautics and Astronautics, and an M.S. from the University of New Mexico in Mechanical Engineering.

Getting It Right focuses on industry collaboration for mission success by sharing lessons learned, best practices, and engineering advances in response to the nation’s toughest challenges. It is published by the Aerospace Corporate Chief Engineer’s Office and may be reached at [email protected].
