Why is AI-generated content a problem?
With the advent of ChatGPT, the internet is buzzing with discussions about AI and the impact of AI-generated texts. A lawyer contact of mine (Tom Braegelmann, thanks for bringing this subject up and for extensively testing how it could ruin the future for lawyers and other professionals) has played with it and generated some funny test cases.
Now MIT has published this article on how to spot machine-generated texts and distinguish them from human authors' works.
I am wondering why that is a problem. At least the laws I am familiar with require identifying the party responsible for publishing a "printed work". And I doubt that ChatGPT can be made that responsible party, or even claim authorship, since both require a person. So no matter what, some human's neck is on the line for a published work. That human will have to make sure that the text expresses what they want to say, and will be held accountable for any errors.
An AI producing a work from your instructions is one level up from a typist of old typing up your notes, and you would not hold the typist responsible for your content.
So can somebody explain to me why this is now suddenly a problem?
Co-Founder and CEO of Sonocrete:
Interesting question, Ernst! From a scientific perspective: I guess a problem might be that ChatGPT uses everything that can be sourced on the web. Whereas we were trained to carefully cite the work of others in our own, ChatGPT doesn't. Maybe it's worth asking ChatGPT to write a text AND list all its sources (this would be useful for all the students).