
chatGPT and AI at Toronto LinkedIn Meetup

Last night I attended a LinkedIn Meetup in Toronto. The evening was dedicated to chatGPT and AI, both subjects about which I am intensely curious. I took three things away from the discussions:

  1. chatGPT is the greatest thing since greatest things, and a new career, "Prompt Engineer", has appeared.
  2. AI will take over mundane, repetitive tasks in an organization.
  3. Lots of "hoo rah" and "rah rah" about these technologies.

What disappointed me was the lack of discussion regarding the societal and legal implications of these technologies.

Having been around digital technologies since 1984, I have seen a lot of technologies initially regarded as the "greatest thing since the invention of greatest things" appear and then wink out of existence a couple of years later.

All emerging technologies get exposed to Gartner's Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle), and chatGPT has just about reached its Peak of Inflated Expectations. The current thinking about the implications of this technology means chatGPT is on its way down to the Trough of Disillusionment. That is a good thing, because chatGPT is so new and has been subjected to so much hype that there needs to be a point where people ask a simple question: "Why do I need this and what is in it for me?" At that point personal, corporate, legal and legislative standards start to evolve. And they are.

The biggest issue, for me, is regarding chatGPT's output as gospel. It isn't. There was an incident in the U.S. where a lawyer submitted a legal brief to the court citing a number of cases supporting his client's position. They all came from chatGPT and, when the brief was reviewed by the court, it was discovered the cases didn't exist. chatGPT had made them up.

On a personal note, I asked chatGPT a question about code introspection in a prototyping application for a course I was developing for #linkedinlearning. I knew that feature didn't exist and expected chatGPT to tell me so. Not quite. I got a good 500 words on how to do it and which menu items I could use to do some code introspection. In chatGPT terms, the AI "hallucinated". When I posed the same query to Microsoft's Bing Chat, I was told it couldn't be done, and here is why Bing is so important: it annotated its response with sources, something chatGPT does not do.

For AI in general, I completely agree that it will take over mundane tasks in organizations. That's the good news. Efficiency and reduced costs are a good thing. The bad news is that no one is asking a fundamental question: "Where will the displaced workers go?" I really do wish I had an answer for that. What I suspect will happen is that they will find positions that don't currently exist or haven't even been thought of.

As for the "hoo rah" and "rah rah", the creative community is all agog over Generative AI, where images are created from a series of text prompts. Dall-E, Midjourney and Adobe's Firefly are great examples of this. I have to admit it is pretty cool, but some quiet whispering has emerged about the legality of the output. Let me give you an example.

There is a plugin for Figma and Adobe's XD named "This Person Does Not Exist". You click a couple of parameters around gender, hair color and so on, and an image appears. What the plugin does is grab hair from one image, eyes from another, gender from another and face shape from yet another to assemble the result. This is how they all work.

Now let me ask the question being whispered: "Is this legal?"

In more personal terms, if Company X creates an ad and uses Generative AI to create the ad's image, and the person appearing as the spokesman or subject has your eyes, or the background contains part of an image you posted a couple of years back, do you not have a copyright on that photo, and should you be either compensated or asked to sign a release? Tough question, and I suspect the courts will eventually get around to dealing with this one. The corollary? Is the artist creating the image allowed to claim ownership of the image as original work?

As an aside, I recently confronted this. In a book I have just written, the publisher questioned the source of every image used in the examples. One of them was a screenshot of an avatar from a Figma project. I had to explain that the image was generated by "This Person Does Not Exist" and that I had clearly stated this in the manuscript. I suspect, with the rise of Generative AI, publishers are going to be even more rigid about source material to ensure they are not legally on the hook for a lawsuit.

Am I being a Luddite about this subject? Absolutely not. Business and Academia are deadly serious about how to deal with and use these technologies, which means they have reached the Trough of Disillusionment regarding both chatGPT and AI in general. The Creative community needs to take a hard look at Generative AI, especially around "Who owns what?" I suspect Adobe's Firefly is still in beta because they are dealing with that very issue. These are all positive developments, responsibly tamping down the "rah rah" and "hoo rah".

What it really comes down to is you and whether or not you act responsibly with these powerful new technologies.
