Dealing with AI's Darker Side
I am definitely a cheerleader for AI. I believe it can help society in so many ways. With an ageing population in most countries, we need other ways to boost productivity. I also think AI can have a hugely positive impact on everything from healthcare to producing cheap, carbon-neutral energy. That's not to say I am right on this, just my perspective.
Some, though, have called me out for not addressing the darker side of AI. As someone pushing for greater adoption, I think it's only right that I do so. As with anything I look at, I like to "chunk up the elephant" and put these challenges into different categories. The ones I have are below, and I break out whether they are issues for brands and marketers (my area of expertise) or for wider society.
- Trust. In a world where content can be faked, how will we ever know what's real and what can be trusted? A huge issue for individuals and societies, and also for brands and marketers in terms of protecting their content and utilising the space.
- Legal ownership. If you use generative AI to produce content, do you own it? Can you copyright it? How do you ensure it isn't infringing on someone else's copyright? Note, these questions are rhetorical; we will look at some of the solutions at a high level below.
- Data leakage. Ensuring your data and IP are not being used by others. Vital for brands and publishers.
- Quality concerns. How do we ensure that our social feeds, emails, TV screens and more are not bombarded by poor-quality (for now) AI content?
- Manipulation concerns. How do we ensure that we are not over-manipulated by algorithms? In a world where the next generation gets its news from social feeds more than from traditional sources (not that they were perfect), how do we stop AI being used to manipulate society?
- Automation of jobs and removal of available work for many professions. If AI is producing creative, for example, what happens to all the creatives? How do brands build marketing functions that are the most effective mix of human and AI?
- Apocalyptic concerns. Will AI take over? Annihilate or enslave us?
My first question to my readers today is: what concerns have I missed? Please let me know in the comments.
Now, these are all super valid concerns that we need to deal with as a society, but also as brands, individuals and marketers, depending on what hat we are wearing. I am going to group them into these four categories.
1) Overcoming AI challenges to brands and marketers in terms of data leakage and legal ownership.
2) Threats to our individual self-worth in terms of jobs, employment and more. What do we do if AI does it better?
3) Threats to knowledge: if everything can be faked, how do society and democracy operate?
4) Threats to our existence: being enslaved or annihilated.
Now, these threats are hilariously different in scale. Making sure you can use common gen AI tools to grow your business without being legally liable or subject to negative perception impacts is totally different from a threat to humanity itself.
Over the next weeks I will address each of these areas separately. Before I leave you for today, though, I will calm those worried about number 4. Generative AI is not going to have such impacts. For that we would need an AGI (artificial general intelligence) to act as the Terminator (or pick your sci-fi film of choice), and for that we first need genuine human-level intellect in all areas, which might one day become a superintelligence. Some are predicting this, but nothing we see on the market today is even on the same planet, and the debate over whether it will ever happen is no nearer being settled today than when T2 came out in 1991. The current wave of language-model AI can instead be thought of as the predictive text you have had on your phone for over a decade, on steroids. Self-driving cars are specific AIs. These tools pose threats, but we don't need to worry about the Terminator just yet.
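For readers who want a feel for what "predictive text on steroids" means, here is a minimal, illustrative sketch (not how any commercial model is actually built): a tiny next-word predictor in Python that simply counts which word tends to follow which. The corpus, variable names and function here are made up for illustration; large language models do conceptually the same next-token prediction, just learned from vastly more data with billions of parameters.

```python
from collections import Counter, defaultdict

# Toy training text -- purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently seen follower of `word`, or None if unseen."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" most often in the toy corpus)
print(predict_next("cat"))  # -> "sat" ("sat" and "ate" are tied; the first one seen wins)
```

The point of the toy example is only the shape of the task: given some context, predict what comes next. Scale that up enormously and you get today's generative AI, which is still a long way from the general intelligence the apocalyptic scenarios require.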
Media / Programmatic / MADTech / SaaS / ID / Audience Insights
9 months ago: I actually wrote a post on this last week and I've got to go straight to your last point. There has been a second issuance of a letter warning about the issues of AI (https://righttowarn.ai/), which raises the concern about being able to openly report issues without the fear of retaliation or loss of job security. Musk wrote about this last year; 1,400 signed up to that. Google engineers wrote about it this year, with hundreds signing. Don't get me wrong, what it can do is staggering, and each week here on LI I see more and more about what it can do. But it needs to be governed properly, as these warnings suggest. Otherwise it only takes one smart(er) person changing parameter A from 1 to 2, and that's it - Pandora's box is open. If you've not seen The Lawnmower Man, my last point refers to the very last scene. If you like AI and you've not seen it, you'll get a blast from how dated it is. But the warning is still there...
Entrepreneur & Mentor | Focused on Business Growth & Digital Innovation
10 months ago: Great work Rob, and I look forward to following your posts moving forward! Firstly, data security is a major concern. I believe we need to focus on limiting the exposure of personally identifiable information (PII) and address the potential risks if large language models (LLMs) are hacked. Secondly, the environmental impact of large-scale data processing cannot be ignored. I would be interested to see you collaborate with organizations like Goodloop to get their perspective. Moreover, the invisible impact of AI on the general public is significant. AI could greatly improve surveillance capabilities, which raises privacy concerns. Ensuring everyone has access to these technologies is also crucial to prevent widening socioeconomic gaps. We are privileged to understand the power of these emerging technologies due to our work with big data in digital advertising. However, there is a risk that the public could be blindsided by clunky government legislation, realising too late that they are being manipulated. With general LLM usage at only 2% and awareness at 70% (according to the BBC), the disadvantage of the public not fully understanding the potential impact of these systems is a real concern to me.