Lies, Damn Lies, and ChatGPT

For years, I've been intensely interested in the World War II era and the postwar American homefront, to the point that many of my novels are set during that timeframe. Back in 1991, a TV series called Homefront began airing on ABC. The show was set in a fictional Ohio town immediately after the end of WW2. (This Homefront is different from the 2012 miniseries of the same name, which was about military wives in the UK.)

I recently found all of the episodes of the 1991 version of Homefront on YouTube, and for the first time in more than 30 years, began watching all 42 episodes from the two seasons that the show aired.

For the most part, the cast remained the same between the two seasons, though a couple of characters from Season 1 were written out of the various storylines near or at the end of the first season (and another first-season character departed four episodes into the second season).

At the same time, I've also been preparing for my upcoming half-semester classes that begin in mid-October, trying to infuse generative AI technology into the curriculum of both undergraduate and graduate-level courses alike. So generative AI in general, and ChatGPT in particular, have been on my mind lately.

Yesterday I was working on the content for one of those classes while also watching an episode of Homefront in the background. On a whim, I typed the following question into ChatGPT, curious to see the answer:

"Who was written out of the 1991 series homefront"

Initially, ChatGPT responded with a generic answer - actually, more of a non-answer - along the lines of "characters come and go from TV shows all the time, so you can consult episode guides and other material to see changeover among cast members." (I didn't save the exact wording, but the response was more or less that.)

Curious whether ChatGPT could be any more precise, I hit the "regenerate" button. It was, resulting in the following (reproduced verbatim):

In the TV series "Homefront," which aired from 1991 to 1993, there were several changes to the cast during its run. One notable departure from the series was the character Mike Sloan, who was portrayed by actor David Newsom.
Mike Sloan was one of the central characters in the early episodes of the show. He was a young Army officer who returned to his hometown of River Run, Ohio, after serving in World War II. His character was romantically involved with Linda Metcalf, one of the Metcalf sisters. However, in the second season of the series, Mike Sloan was written out of the show.
The reasons for writing out Mike Sloan's character are typical of television productions and can include creative decisions, changes in story direction, or actor availability. When Mike Sloan left the series, it allowed for shifts in the show's romantic storylines and character dynamics.
After David Newsom's departure, the series continued with other characters and storylines, exploring the lives and experiences of the residents of River Run during the post-World War II era.

Wrong! On several fronts!

"Mike Sloan" - or more specifically, Mike Sloan Jr. since there is a character in the show named Mike Sloan Sr. (more on him shortly...) - was a character in the show only in spirit. The character was killed off in the 1st episode without ever making an appearance, other than in a photograph at his father's office, and was hardly "one of the central characters in the early episodes of the show." True, Mike Sloan Jr. was "romantically involved with Linda Metcalf" but only as backstory from before the war. In fact, Mike Sloan Jr. married a woman from Italy during the war before his death, and that "war bride" theme was one of early plot lines...but absent the "Mike Sloan Jr." character himself, who never "returned to his hometown."

Strike one!

Also, the "Linda Metcalf" character was not "one of the Metcalf sisters" per ChatGPT; that character had two brothers, but no sisters. A lesser point than the one above, but still wrong.

Strike two!

And the "Mike Sloan Jr." character was not played by an actor named David Newsom. David Newsom was an actor in the series...however, he played a different character named Hank Metcalf (Linda Metcalf's brother) and who was written out of the show after Season 1. So ChatGPT correctly got the name of one of the actors who didn't return in Season 2, but messed up on that actor's character.

Strike three!

And just to add a "strike four" (messing up the baseball metaphor, but still...), I later asked ChatGPT all over again the same question about who was written out of Homefront for the second season, and received a totally different, shorter - but equally incorrect - answer:

In the 1991 TV series "Homefront," the character Mike Sloan, played by actor Ken Jenkins, was written out of the show during the second season. His departure from the series was due to creative decisions made by the show's producers.

Strike four!

Ken Jenkins played Mike Sloan Sr., so ChatGPT got that part correct. However, the "Mike Sloan Sr." character was not written out of Homefront in the second season!

______________

So what's my point here? Simply this: generative AI in general, and ChatGPT in particular, are modern marvels. The narrative prose generated is truly amazing, at least from the limited samples I've seen in my occasional dabbling. For prompts along the lines of "give me a list of 5 things that..." or "what are the tradeoffs between..." I've seen helpful results. In fact, keeping with the theme of the World War II Homefront, I asked ChatGPT the other day "what were the most challenging hardships on the WW2 American Homefront" and thought the answer was a great one.

I don't want to judge an entire generation of technology by the answers to a single question about an obscure TV series from thirty years ago, but I'll put it this way: for now, there is no way I would rely solely on what ChatGPT tells me without fact-checking and cross-referencing for accuracy. The multiple incorrect responses to "who was written out of Homefront," as well as that initial non-answer of "characters come and go, so check episode guides," have the feel of a student who didn't study enough trying to bluff their way through a pop-quiz short-answer question.

I can't look under the covers and see how ChatGPT came up with any of its answers; but whatever was done, something came up short. And if such obvious errors were produced in response to such an innocuous question, I can't help but wonder what errors might show up with a great deal more at stake.

To be fair, ChatGPT (the 3.5 version usable at no cost) does display a disclaimer at the bottom stating that it "may produce inaccurate information about people, places, or facts." So basically: be wary of what you are told, at least for now.

Again, I'm not diminishing what seems to be revolutionary and promising technology at the dawn of this latest era of artificial intelligence. I took post-graduate classes in artificial intelligence at the University of Denver back in the early 1990s (ironically, right about the time Homefront began airing) and at the time, AI was all about programming in LISP and Prolog; forward and backward chaining; and similar earlier-generation techniques and languages. We've come a long way since then. I'm merely reporting my experience with AI-produced results that are provably incorrect; and noting that at least for me, the "old methods" of researching and verification haven't yet gone the way of the slide rule and the abacus.

___________________________

Alan Simon is an author, university instructor, and consultant, specializing in enterprise-scale data management and business intelligence/modern analytics...and also various fiction genres. His latest book is Side Hustles For Dummies (Wiley, 2022).

