The Big Dilemma in AI and Automation
Kim Jong Andersen
CCO & Partner, Wibroe, Duckert & Partners. Chairman/Founder of Danish Digital Award. M List 2022, 2023, 2024, 2025
We're humans and we're imperfect
I think it was Einstein who was quoted as saying, “Everything should be made as simple as possible, but no simpler.” There's only so far you can reduce a problem before the answer becomes shallow and temporary. Which is basically the problem with most problem solving: it's so easy to get caught in the usual trap of addressing a big problem with too small an idea, or to fall into the pitfall of simply asking the wrong question to begin with.
At this moment in time, I think we need to stop and really reflect on what it is that we wish for in the ongoing development of AI and automation (I mention both because they depend so closely on each other in a large-scale build-up where automation is controlled by AI and AI is used for automation). I believe our focus has been too much on the technology itself and too little on the actual impact of its application, not to mention the ethical dimensions.
The current state of affairs is that AI is still very much machine learning by pattern recognition, and that lends itself to all sorts of bias. As humans we're deeply flawed by various kinds of bias and prejudice, and so is our data, both when it is entered into an algorithm and when it is interpreted.
This can lead to devastating results, as we've seen in the American criminal justice system, where AI is partly used to predict an offender's likelihood of committing and/or repeating a crime. What has emerged instead is that historical data often feeds the algorithm with a strong bias against certain population groups.
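The mechanism is simple enough to sketch in a few lines. The following is a deliberately naive, hypothetical illustration with synthetic data (the groups, numbers and "risk model" are all made up): a predictor built purely from historical frequencies has no choice but to hand back whatever bias was baked into the records it was fed.

```python
# Illustrative sketch with synthetic data: a naive "risk score" computed
# from historical frequencies simply reproduces the bias in the records.

# Synthetic historical records: (group, reoffended) pairs.
# Group B is over-represented among recorded re-offenders in this toy
# history (e.g. through over-policing), so it will "look" riskier.
history = (
    [("A", False)] * 80 + [("A", True)] * 20
  + [("B", False)] * 50 + [("B", True)] * 50
)

def historical_risk(group, records):
    """Fraction of past cases in this group that were labelled re-offenders."""
    cases = [reoffended for g, reoffended in records if g == group]
    return sum(cases) / len(cases)

print(historical_risk("A", history))  # 0.2
print(historical_risk("B", history))  # 0.5 - the data's bias, returned as a "prediction"
```

Nothing in the model is malicious; it is just an honest summary of a dishonest history, which is exactly why the output feels like an objective prediction while being nothing of the sort.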
In consequence, strongly prejudiced prejudgements suddenly become a risk, where, e.g., the colour of your skin becomes a predominant factor rather than, say, the more hidden socio-economic factors. Because we're living in a culture obsessed with the virtues of problem solving, we often amplify one bad rule with another bad decision, looking for far too easy solutions to complex problems.
In other words, the world is a messy place because humans are far messier than economic theory would have us believe. Just because we think we're acting rationally by entering data into some AI algorithm and hoping to automate many decisions, we're not necessarily doing something as effective as we could be. More efficient, for sure, but perhaps not in a way that is closer to human nature and aspirations and, as such, more likely to influence our behaviour in a positive way.
How to make non-human technology into a human experience
I've been working in the digital field for a very long time now, and in recent years the focus has been on personalization in customer service and marketing automation. The AI applied here is still relatively unsophisticated and often pretty crude. However, it's quite understandably a big bet for any marketeer worth his or her salt, because personal brand and customer relevance have never been more key to success in many oversaturated market and product segments.
However, as marketing and customer service operations become more automated (but not exactly depopulated, since new technology requires new skills) and all the world's tech giants are using AI to predict users' every next move, it has become clear to me that we need to look beyond the obvious triggers and decision rules that much of the software employed today is quite capable of handling.
Make no mistake, such a focus is indeed a very important first step, but to truly become effective, and also to minimize any bias, brands need to make sure their services and content are rendered with great integrity and respect for privacy. Content should be designed using predictions but never be overly predictable, as you don't want to provoke the opposite response in the audience or cut them off from other options. What we really need to be aware of is whether AI and automation get us into a downward spiral where they are not only reproducing human bias but reinforcing it. If that happens, brands will only be harvesting the very low-hanging fruit, and at the same time they might even be introducing a reinforced bias into the whole customer journey.
Always recognize that we're messy individuals. The big dilemma we're facing right now is that, on the one hand, we want to enjoy the benefits of AI- and automation-enabled scalability in processing individual data; on the other hand, we're a long way from trusting the algorithms to understand or predict irrational psychology well enough to deliver a "humanized" message or experience.
As brands will need to move even more attention, time, money and staff into the brave new world of CX and marketing, I propose that it's also highly relevant now to really ask the bigger and more difficult questions about what actually causes something and what is merely correlation. The first set of questions will probably be more profitable to pursue strategically than just betting on correlations that might already be subject to a certain amount of data and interpretational bias.
When you understand the actual causality, you also know where to put your efforts, rather than just acting on an observation that, for example, 75% of customers who submit a complaint also live in a particular geography. Perhaps, if that particular region has not seen any investment in years, that's where you should start to mend the relationship, not by just sending people more emails or exposing them to more banner ads trying to convey some accommodating message.
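The geography example above can be made concrete with a small, entirely synthetic sketch (the regions, the "underinvested" flag and every number are invented for illustration): a region looks strongly predictive of complaints, but once you condition on a plausible underlying cause, the regional effect vanishes.

```python
# Hypothetical sketch: "region" correlates with complaints, but conditioning
# on a candidate cause (local under-investment) shows region adds nothing.
# All records are synthetic.

def rate(rows, **filters):
    """Share of matching customers who complained."""
    matching = [r for r in rows if all(r[k] == v for k, v in filters.items())]
    return sum(r["complained"] for r in matching) / len(matching)

customers = (
    [{"region": "North", "underinvested": True,  "complained": True}]  * 60
  + [{"region": "North", "underinvested": True,  "complained": False}] * 20
  + [{"region": "North", "underinvested": False, "complained": True}]  * 2
  + [{"region": "North", "underinvested": False, "complained": False}] * 18
  + [{"region": "South", "underinvested": True,  "complained": True}]  * 15
  + [{"region": "South", "underinvested": True,  "complained": False}] * 5
  + [{"region": "South", "underinvested": False, "complained": True}]  * 8
  + [{"region": "South", "underinvested": False, "complained": False}] * 72
)

print(rate(customers, region="North"))                      # 0.62 - region looks predictive
print(rate(customers, region="South"))                      # 0.23
print(rate(customers, region="North", underinvested=True))  # 0.75
print(rate(customers, region="South", underinvested=True))  # 0.75 - same rate: the
# real driver is investment, not geography
```

Within each investment stratum the two regions complain at identical rates, so targeting "the North" with more emails would treat the symptom while the cause sits in the investment column.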
Now, customer and marketing automation is both inevitable and a driving force for higher productivity and an upgraded customer experience through a higher level of personalization. No doubt. But to take it to the next level, I'm of the opinion that while it's all good and fine that AI and automation will take us some of the way, it's not going to take us all the way anytime soon in terms of creating a warm, human experience from a cold piece of technology. And you know what? It really doesn't have to!
I would much prefer that we as humans take responsibility for our own flaws and biases by making sure they're properly taken into account when designing experiences and crafting content - and then, of course, use AI to detect any data patterns that appear to be biased, so we can explore whether the variance requires any special action. It's a great achievement that the big marketing cloud vendors today have provided us with the infrastructure to segment and deliver our commercial intentions at almost the one-to-one level, but the train will some day stop dead in its tracks if the goods aren't emotionally grounded and are designed only for AI business logic.
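The "use AI to detect biased data patterns" idea can be as modest as a disparity check. Here is a minimal, hypothetical sketch (the segment names, outcome data and the 0.15 threshold are all assumptions made for illustration): flag any customer segment whose outcome rate deviates from the overall rate by more than a chosen threshold, so a human can investigate whether the variance is justified.

```python
# Minimal disparity check: flag segments whose rate of a positive outcome
# (e.g. receiving a personalized offer) deviates from the overall rate by
# more than `threshold`. Purely illustrative; segments and data are made up.

def flag_biased_segments(outcomes_by_segment, threshold=0.15):
    """Return {segment: rate} for segments whose positive-outcome rate
    differs from the overall rate by more than `threshold` (absolute)."""
    all_outcomes = [o for outs in outcomes_by_segment.values() for o in outs]
    overall = sum(all_outcomes) / len(all_outcomes)
    flagged = {}
    for segment, outcomes in outcomes_by_segment.items():
        seg_rate = sum(outcomes) / len(outcomes)
        if abs(seg_rate - overall) > threshold:
            flagged[segment] = round(seg_rate, 2)
    return flagged

# 1 = customer received the personalized offer, 0 = they did not
offers = {
    "urban_young":  [1] * 90 + [0] * 10,   # rate 0.90
    "rural_senior": [1] * 30 + [0] * 70,   # rate 0.30
    "suburban":     [1] * 60 + [0] * 40,   # rate 0.60 (matches overall)
}
print(flag_biased_segments(offers))  # {'urban_young': 0.9, 'rural_senior': 0.3}
```

A flagged segment is not proof of bias, only a prompt for the human follow-up question the paragraph argues for: is this variance justified, or is the automation quietly harvesting only the easy fruit?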
There's an amazing Interflora case - "Sig det med AI" ("Say it with AI") - one of the major winners from the 2019 season of Danish Digital Award - that proves you can do both, but it's still very much an exception and not something I see a lot of.
A new narrative is called for
At Berg + Jong Advisory Partners we specialize in developing new narratives that can strengthen a brand's and an organisation's ability to conquer a bigger part of its future by aligning that narrative with all key stakeholders. One of those new narratives could very well be built around expectations of what AI and automation will eventually mean to the organisation and its stakeholders in the coming years - and it's absolutely essential to get it right, because arguably, how a brand demonstrates it can handle those expectations, internally and externally, now and going forward, will very much determine its future value.
AI and automation are turning out to be the most pervasive and significant technological evolution (not revolution, because it's been coming for years and years) of both this and probably also the next decade - and big, sweeping changes like this always entail big fears as well as big hopes. Be bold enough to address the elephants in the room that will appear, or have already been present for a long time, when contemplating the company's trajectory over the next 4-5 years.
Get your narrative right by building the bridge from your past DNA, current initiatives and future transformation so they come together in articulating and specifying (very important: don't be abstract, be very concrete) a revitalised vision that will be perceived as progress, and not as the beginning of some future where the heart & soul of the brand has been lost in translation. Even better, make sure that whatever tech stack you're going to invest in, or have already invested in, is positioned so it actually increases the future value of your business by being not only delivered but perceived consistently among stakeholders.
And finally, realize that to succeed you cannot just broadcast the new narrative once and then consider it done. It needs to become an integrated and repeated part of the overall messaging, on several occasions and across touch points, through a considered and complete communications program running for more than a couple of months.
If you wish to know where to begin and how to do this in order to engage people in this endeavor and improve your brand's and business's future value perception, please feel free to contact Søren Berg or myself regarding an introduction to our unique approach.