On Perception vs. Reality & AI-Generated Media
Disclaimer: This article was written with the aid of GPT-3, given the first paragraph as a prompt; it is about 50% AI-generated.
Advancements in computer-generated content, augmented reality, and computer-aided editing, especially now that AI powers these applications, raise the question: How important is the difference between reality and perception? For instance, how much should we care whether a news presenter's image is that of a human or is AI-generated? This may sound like a purely philosophical question; nevertheless, it has real implications in fields such as psychology, politics, economics, and law, where it touches issues of trust, identity, authenticity, and many more.
The problem is that, from a viewer's perspective, it can be hard to tell what is real and what is not; furthermore, even when it's known that something is computer-generated (such as most of this article), it is hard to really care, since the viewer gets the same value either way. I bet that if you interviewed hundreds of people and asked them to distinguish between real and fake photos, many wouldn't be able to tell the difference; and of those who could, most wouldn't care unless they linked the fabrication to some misconduct.
Talented makeup artists can recreate someone else's face using makeup alone. Plastic surgery is another way to change appearances, although it is more expensive and much harder to reverse. The fact is: if people want to look like other people, they can. That said, the power of AI makes such impersonations much easier and far more abundant. For instance, many text-to-speech systems can mimic your voice from just five minutes of audio, and deepfake technology can generate fake videos of real people's faces talking. It doesn't stop there: many apps can alter people's faces to make them look younger or older, or even make a person look like someone else entirely, such as a celebrity. And of course, there is also the problem of fake news.
There's no doubt that AI-generated media can be a force for good: giving voices to those who cannot otherwise communicate, improving the ways we learn, helping us better understand diseases, or just having fun. That said, the room for misuse is expanding as the technology becomes more advanced and more accessible. With the stakes getting higher, we need to have more serious conversations about the implications of such technologies, especially when politics and the fabric of society could be at stake.
Online media that may or may not contain altered content has been associated with low perceived integrity and trustworthiness. Although social media platforms have been called upon to address fake news on their sites, it has reportedly proven harder than we think. Viewers of such media can't, or won't, do their share either: the vast majority of content consumers are not particularly motivated or equipped to verify the information they come across on social media. Cognitive biases (namely authority bias, the halo effect, and confirmation bias) play a key role here: information is perceived as trustworthy when it's shared by an authority or an expert, or when it aligns with the viewer's personal viewpoints. Studies of confirmation bias show that knowing the facts doesn't necessarily make us question fake information when we agree with it, even if it screams "fake" in our faces. Instead of discarding information that contradicts what we already believe, just to avoid cognitive dissonance and maintain cognitive consistency, we should value trustworthiness above all and discard unconfirmed information in order to preserve the truth.
The question is: when there's so much altered or unchecked content out there, what can we trust? It's safe to say that much of the content we encounter every day has no credible source. Until it is proven credible, should we assume it's more likely to be fake? Just as a judge asks a jury to disregard a fallacious statement, we need to disregard unverified content; that's easier said than done, though, especially when there's an abundance of it.
Follow the series for more AI-assisted articles.