Is AI too much of a good thing for media organisations?
Stuart Almond
Global SVP Sales: Technology Consultancy/Change Delivery (Ex Microsoft, Sony, BBC, Broadcast Journalist).
AI is here to stay
We live in a time when new technologies are making what was previously impossible a reality. From virtual assistants on our smartphones and automated transactions in financial services, to applications in the defence space, Artificial Intelligence (AI) is redefining how we live, creating efficient, effective, and creative workflows across all sectors.
To give you some insight, the AI industry itself is expected to grow by 50% each year until 2025, by which point it is set to be worth a staggering $127 billion (Source: World Economic Forum). The Media & Entertainment Industry is reportedly set to see a contribution of $150 billion a year (Source: PwC).
90% of articles will be written by AI - this could be a reality by 2027
Ethical considerations
According to Reuters, almost 75% of media organisations are exploring how AI can support and streamline the creation and distribution of content. Bloomberg, for example, already uses automation to cover certain financial markets stories, freeing up journalistic resource, and speech-to-text and facial recognition are equally realistic prospects. Imagine a world in which 90% of articles are written by AI – this could be a reality by 2027.
As exciting as the adoption of AI is, it's important to get it right, and not to implement it for its own sake. Most important in the era of data protection is the consideration of ethics. While media organisations may want to be at the forefront of AI, we now have a responsibility to implement and integrate it safely and securely.
AI versus fake news
AI will not only create content but filter it, guiding what we consume and transforming news and content production. But as in every good 'robot-takeover' horror movie, we need to consider just how much control we give up. Smart technology may be transforming the world for the better, but like anything it needs to be studied, and guiding principles need to be established, especially in the fight against fake news.
Just as we can use AI to create content, we can use it to learn from data, identify patterns and make decisions with minimal human input. This process, known as Machine Learning (ML), then becomes integral in combating fake news. Over time, computers can become autonomous, running ML algorithms that we have trained and programmed. Essentially, we teach them what's fake and what's not, and the student goes on to become the master (not literally, if we've heeded those scary film warnings!). Without this in-depth training, the lines between news, opinion and lies can become blurred, causing carnage – efficient carnage, mind you.
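That teach-by-example loop can be sketched in a few lines. This is a toy illustration of supervised learning, not a real fake-news detector: the word-count "model", the labels and the example headlines are all invented for demonstration.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in fake vs. real headlines."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a headline by which class its words were seen with more often."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# Invented training data, for illustration only
examples = [
    ("miracle cure doctors hate this trick", "fake"),
    ("you won't believe this shocking secret", "fake"),
    ("central bank raises interest rates", "real"),
    ("quarterly earnings report published today", "real"),
]
model = train(examples)
print(classify(model, "shocking miracle trick revealed"))  # → fake
```

Real systems use far richer features and vastly more data, but the principle is the same: the quality of the "teaching" determines whether the student can tell news from lies.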
Beware of the bubble
Consumers love personalisation – and AI takes this to the next level, simplifying the choices consumers make. Netflix has done a great job of this, providing tailor-made recommendations based on watch history and browsing behaviour, leaving us more time to… chill.
AI can also help media organisations implement personalisation, enhancing the user experience. News UK's The Times and The Sunday Times have already developed a service called 'James', which can tailor editions based on a study of user behaviour over time. While this is great for helping us skip the news we don't want to see, what if it means we're not exposed to news we need to be aware of?
Imagine living in your own little bubble – the 'filter bubble', as it's known – influenced only by the like-minded people and/or trends you choose to follow or align with. Imagine all your individual beliefs being reinforced; sounds great, until you realise everyone else is experiencing the same reinforcement, and some of them have some pretty strange views. It then falls to media organisations to ensure there is a balance between tailored content and coverage of all sides of a story. But at what point will you stop accepting, or wanting to hear, alternative views, if you never have to?
There's no doubt that AI is a pivotal development in history – but as with every development, we need to be aware not just of the positives, but also of the pitfalls. Everyone can benefit from AI as long as it is used in line with clear ethical guidelines and for the right purposes. Media organisations must be ready to abide by the rules, putting ethics front and centre… or champion the morals for a balanced society at the very least!
Find out more about Sony’s Intelligent Media Services here: www.pro.sony/ims
Let me know what you think – Feel free to message me or comment below.