The Media Perspective on AI

Navigating the contrasting viewpoints surrounding artificial intelligence, amidst the excitement and the concerns, and drawing on my perspective from the data center industry, I am launching a new series on AI's impact, starting with these perspectives from the media industry.

Daniel Thomas, Anjli Raval, Murad Ahmed

I joined the Financial Times for a discussion on the influence of technology on business with the FT editorial team: Murad Ahmed, Technology Editor; Anjli Raval, Management Editor; and Daniel Thomas, Global Media Editor. The event brought together media buyers, agencies, and publishers striving to adapt to the evolving industry landscape and changing consumer behaviours.

The Threat of Deepfake Technology

With the rise of AI, deepfake technology has become a growing worry. It creates deceptive content that can damage reputations, prompting brands to protect themselves. The severity of the problem is exemplified by cases like Arup, the UK engineering firm that fell victim to a £20M deepfake scam.

During election seasons, deepfake videos of candidates are becoming widespread. While viewers can often pick up contextual cues in content made with benign intent, they struggle to spot malicious fabrications, allowing false information to spread rapidly. Moreover, AI-generated deepfakes are cheap to produce, making it easy for individuals to run sophisticated misinformation campaigns for as little as $24.

Concerns of Media Leaders

Media leaders like Tim Davie of the BBC and Alex Mahon of Channel 4 have expressed apprehension about the proliferation of misinformation, the polarization of political ideologies, and the erosion of trust in news, and have stressed the importance of upholding reliable news sources.

Generational Mistrust

A Reuters report highlights that young people are disengaging from news consumption due to concerns about mental health and a lack of trust in politicians and major corporations. This skepticism stems from witnessing economic hardship, unresolved global issues, and a diminishing faith in authority figures.

Virality

Platforms such as TikTok and Instagram rely on algorithms to promote viral content, leading to rapid dissemination of misinformation. This phenomenon predominantly affects individuals over 50, who tend to share content without verifying its accuracy.

The Rise of Authentic Content

In a landscape where negativity is less tolerated, establishing trust through authenticity is crucial for media outlets to differentiate themselves amidst shifting consumer preferences.

The media's gold standard of fact-checking remains fundamental to mitigating the risk of audiences feeling deceived.

AI offers numerous advantages, yet it is reshaping our communication landscape. We are approaching an era where every piece of content may be scrutinized for authenticity. We will overcome this challenge as we become more proficient at using the technology and training it to decipher contextual cues.

Trustworthiness of AI Models' Content Sources

The debate in the publishing sector revolves around the content used to train AI models, its reliability, and the financial compensation involved. Access to a valuable, trusted dataset can greatly benefit the models. With the financial details of deals such as the one involving the FT and World News remaining undisclosed, future developments in the media industry will be intriguing to watch, given the current focus on copyright and intellectual property rights. Trusted content is crucial to achieving favourable outcomes, underlining the importance of reliable information for effective AI training. The use of Reddit as an information source has raised doubts, as have delays in updating content in real time, both of which can lead to flawed learning and hallucinations in AI models.

Issues Regarding Racism in ChatGPT

There are concerns about ChatGPT's understanding of racism and the sources it draws information from. If an AI model is trained on racist material, it may generate discriminatory content. Depending on AI for creativity therefore risks amplifying racism. Moreover, using these tools for unmonitored mass advertising and marketing could lead to significant problems in the future.

'Red teams' at the big tech companies building these huge models are essential for testing them. A chemical expert on OpenAI's red team said, "we've got it to make a new nerve agent".

OpenAI versus the Walled Garden Approach of Enterprises

Businesses adopting these models must adapt them using their own internal data to tailor communication for employees and campaigns. For instance, Google's Gemini image-generation system produced historically inaccurate images of Black Nazi soldiers, highlighting the challenge of adapting for a diverse global audience when models are trained on data created primarily by a homogenous group.

While contextual issues may improve with time, those utilizing generative AI in internal systems need to carefully fine-tune their models to avoid such pitfalls.
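To make that kind of adaptation concrete, here is a minimal sketch, assuming the Hugging Face transformers library, of fine-tuning a small open language model on a handful of hypothetical internal documents. The model name, example texts, and hyperparameters are illustrative assumptions, not details from the discussion.

```python
# Minimal fine-tuning sketch: adapt a small causal language model to internal documents.
# Everything here (model choice, texts, hyperparameters) is illustrative only.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "distilgpt2"  # assumption: any small open causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical internal communications used as fine-tuning data.
internal_docs = [
    "Our Q3 priorities are sustainability reporting and customer trust.",
    "All AI-generated campaign assets must be labelled before publication.",
]

encodings = tokenizer(internal_docs, truncation=True, padding=True,
                      max_length=64, return_tensors="pt")
dataset = TensorDataset(encodings["input_ids"], encodings["attention_mask"])
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes is enough for a toy illustration
    for input_ids, attention_mask in loader:
        # For causal LM fine-tuning, the labels are the input tokens themselves.
        # A real setup would also mask padding positions out of the loss.
        outputs = model(input_ids=input_ids, attention_mask=attention_mask,
                        labels=input_ids)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("internal-comms-model")
tokenizer.save_pretrained("internal-comms-model")
```

In practice, the careful work lies less in the training loop than in curating the internal dataset so the model reflects the organisation's voice without inheriting its blind spots.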

The Human Touch

When crafting a speech or a piece of corporate communication, human intervention is invaluable in adjusting for nuances and conveying the message to different audience types, like fund managers or young activists.

AI-Generated Content Trend

The marketing and advertising sectors are transitioning towards labelling AI-generated content. Companies like WPP, known for AI-generated influencers, are incorporating small tags that clarify content as AI-generated. This practice is expected to become more prevalent. Even prominent tech firms acknowledge the issue and are adjusting their policies to mandate disclosure for AI-generated content.
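As a simple illustration of how such a disclosure tag might be attached in practice, here is a minimal, hypothetical sketch; the ContentItem structure and the label text are assumptions for illustration, not any platform's actual schema.

```python
# Hypothetical sketch: attach an "AI-generated" disclosure label to content before publication.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    body: str
    ai_generated: bool = False
    labels: list[str] = field(default_factory=list)

def apply_disclosure(item: ContentItem) -> ContentItem:
    """Add a human-readable disclosure tag when content is AI-generated."""
    if item.ai_generated and "AI-generated" not in item.labels:
        item.labels.append("AI-generated")
    return item

ad = ContentItem(title="Summer campaign", body="...", ai_generated=True)
print(apply_disclosure(ad).labels)  # ['AI-generated']
```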

Murad Ahmed, Lina Tayara

Balancing Consumer Energy Demand with Data Center Expansion

The narrative has shifted: where climate impact was once condemned and fossil fuels detested, oil and gas are now framed as a necessity.

I asked Murad Ahmed, Technology Editor at the Financial Times, about the use of generative AI to create intellectual property such as text and images, given its significant energy consumption, with each prompt reportedly requiring around ten times the electricity of a conventional web search. Are people giving due consideration to energy consumption, its source, and its environmental impact amidst the climate crisis?

Ahmed pointed out that people are not prioritizing energy consumption as they should, similar to how they overlook it when streaming movies. The focus is primarily on major private equity firms acquiring data centers and warehouses in anticipation of future demands, alongside hyperscalers securing additional capacity.

Efforts are underway to ensure these data centers rely on renewable energy sources. However, achieving a fully renewable energy usage policy remains a distant goal. The current trend involves tech companies procuring large warehouses due to the escalating demand for data usage.

AI: A Double-Edged Sword for Energy Consumption

The narrative from investors and developers is that, while AI energy demand is doubling every 100 days, this huge upfront cost will pay off once AI helps solve global warming and energy efficiency. Ahmed envisions various scenarios unfolding:

Short-term: Cynical practices like Microsoft's deal with Occidental, purchasing carbon credits to offset AI-induced energy consumption.

Medium-term: Companies will focus on developing more energy-efficient data centers.

Long-term: DeepMind founder Demis Hassabis believes AI has the capability to address climate change by processing vast amounts of data swiftly. This potential has already shown promise in drug discovery, with cancer researchers optimistic about finding solutions in the next decade through AI.

While concerns persist about AI's energy consumption, its growth is inevitable, with its impact set to expand exponentially.
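As a back-of-the-envelope check on what "doubling every 100 days" implies, the short sketch below compounds that rate over a year; the doubling period is the figure quoted above, and everything else is simple arithmetic.

```python
# What does demand doubling every 100 days imply over a year?
doubling_period_days = 100   # figure quoted above
days_per_year = 365

annual_growth_factor = 2 ** (days_per_year / doubling_period_days)
print(f"Implied growth over one year: roughly {annual_growth_factor:.1f}x")
# Implied growth over one year: roughly 12.6x
```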

#authenticcontent #media #AImodels #deepfake #AIgeneratedcontent #datacenters

Share views and perspectives on our platforms


Alan Morrison

Research Analyst · Tech Business Consultant · Writer · Public Speaker


"This phenomenon predominantly affects individuals over 50, who tend to share content without verifying its accuracy." As an over 50 trained as a researcher, I'd point out that the over 50s have no monopoly on the spreading of misinformation. Would also point out that those who mentored me as a researcher were also well over 50 when doing so. Perhaps you should look up the term observational bias and ponder how much of the content you've been posting or sharing reflects such bias.
