Important Artificial Intelligence Case to Be Argued in the U.S. Supreme Court Today
David Meerman Scott
Author of 12 books including NEW RULES OF MARKETING & PR and WSJ bestseller FANOCRACY | marketing & business growth speaker | advisor to emerging companies
Oral arguments begin in the U.S. Supreme Court today in Gonzalez v. Google, an important case about artificial intelligence amplification of content on social networks. The lawsuit argues that social media companies should be legally liable for harmful content that their algorithms promote.
Google argues that Congress has already settled the matter with Section 230, which shields content companies from liability. The relevant sentence in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Basically, Section 230 says that social media companies like Meta (Facebook and Instagram), Alphabet (Google and YouTube), Twitter, and others are not responsible for the content (text, photos, videos, etc.) that their users post and share on the networks.
Section 230 was written in 1996, at the dawn of the Web, as part of the Communications Decency Act. This was well before social networking and AI algorithms.
I think this is a critically important case. I sure do hope the Justices and their staff have been studying AI and its ramifications. Here is a good Washington Post story on the case if you want details.
Content appears in your social feed because of the company’s AI
Here is my take on the debate: the right to free speech does not mean a right to AI algorithmic amplification. I wrote about this in a post back in April.
I strongly support the idea of free speech. Early in my career, I worked for Knight-Ridder, at the time one of the largest newspaper companies in the world. Free speech and freedom of the press are things I’ve been focused on my entire career.
Yes, I agree that social networking companies should not be held responsible for the content that is uploaded to their networks by users. However, once content is posted, I believe social networking companies have an obligation to understand how the content is disseminated by their artificial intelligence algorithms.
When YouTube chooses to show you a video you might like, either by auto-playing it after another video finishes or by including it in a list of recommended videos, that’s not free speech; it’s AI amplification.
When Facebook shows text, video, or photos in your personal newsfeed, that’s not free speech; it’s AI amplification.
Yes, if a user chooses to be friends with another user, subscribes to a video channel, or likes a company or politician, that’s fine. In that case, I’m cool with content from that person or organization being shared with the person who actively chose to engage with them.
However, I am not okay with social media companies hiding behind a blanket law that allows them to push content into feeds that people did not actively choose to see.
If the YouTube or Facebook AI feeds you COVID vaccine misinformation, QAnon conspiracy theories, or lies about who won an election from accounts, people, or organizations you do not follow, that’s not free speech. It’s their AI technology amplifying content you otherwise hadn’t chosen to see.
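To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The function names, the Post structure, and the engagement-based scoring are all hypothetical, not any platform’s actual code: one feed returns only content from accounts a user follows, while the other ranks everything by predicted engagement, which is how content from accounts you never followed can land in front of you.

```python
# Toy model of the distinction this post draws. All names and scoring
# logic are hypothetical assumptions, not any platform's real system.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float  # stand-in for predicted clicks/watch time

def subscription_feed(posts, followed):
    """Content the user actively chose: only accounts they follow."""
    return [p for p in posts if p.author in followed]

def amplified_feed(posts, slots=10):
    """Content an algorithm chose: ranked by predicted engagement,
    regardless of whether the user follows the author."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)[:slots]

posts = [
    Post("channel_you_follow", 0.40),
    Post("stranger_with_viral_clip", 0.95),  # never followed, still surfaces
]
followed = {"channel_you_follow"}

print([p.author for p in subscription_feed(posts, followed)])
# ['channel_you_follow']
print([p.author for p in amplified_feed(posts)])
# ['stranger_with_viral_clip', 'channel_you_follow']
```

In this sketch, only the second feed puts the unfollowed account at the top, and it does so because of a score the platform computed, not because of any choice the user made. That is the editorial act the lawsuit is about.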
I’m eager to hear what the Justices say on this important issue.
Absolutely agree! It's important to remember that just because something can be amplified by AI algorithms, it doesn't necessarily mean that it should be. Looking forward to seeing what the Justices have to say about this. What are your thoughts on this topic?
Retired. Volunteer of the Brewster, MA Finance Committee
2 years ago · I completely agree with your point of view, David Meerman Scott. Internet platforms should not be able to hide behind the First Amendment's free speech protections if their algorithms support or amplify disinformation or attempt to undermine the democratic process in our country. I am all for allowing public debate on these platforms. However, falsely reporting claims of a stolen election, for example, and having them amplified on cable networks breeds mistrust in our democratic institutions. Section 230 should be able to protect against false information.
Founder and COO at CognitivePath | Helping Organizations Create Strategic Advantage with Artificial Intelligence.
2 years ago · Unfortunately, I suspect the Supreme Court is nowhere near informed enough to decide on this issue.
Fierce Veteran Mentor and Advocate | Mental Health Advocate | Non-profit Consultant | Speaker | Medically Retired USAF Veteran | Homeschool Mom
2 years ago · David Meerman Scott, I completely agree with you. Whenever your thoughts are manipulated to be one-sided without you realizing you didn't have all the information, but think you did, that isn't freedom. That is control. Manipulation is a powerful tool but can also be extremely harmful. I think AI is going to do great things, but it needs to be used to the best of its abilities, not for the abilities of those who are capable of doing great harm.
I Can Show YOU How To Use LinkedIn To Share "Your Solutions" And "WHY YOU" | How To Be Seen & Heard | "Curiosity Corner" Newsletter | #LinkedIn LIVE "Let's Talk" | SOCIAL MEDIA ADVOCATE | #COURSECREATOR > #SPEAKER
2 years ago · Fascinating in so many ways, David Meerman Scott. Stu Varney had a segment on this this morning, and Susan Li had this conversation as well. So much to think about, and unfortunately it would seem these companies hide behind the law and use it only when it is convenient. Varney might be right: "Brave New World" is upon us. Great share!