AI trends 2025: What to expect this year
Jan Tissler
CONTENTMEISTER – AI Content Strategy, AI Content Creation, Generative AI Workshops and Trainings, German Content, Translation and Transcreation.
We are in the third year of the generative AI boom. While some are predicting a stall for the industry as a whole, I personally expect 2025 to be a year of interesting, meaningful and useful advances. Generative AI is here to stay.
The advances to come may not always look as exciting as those of the past two years. The hype will probably cool down a bit.
My perspective as I write this: I’ve been writing for a living for 30 years. Generative AI directly affects my profession and therefore my livelihood. I see it as both a threat and a tool.
This dichotomy is always in the back of my mind as I read, think and write about it.
T O O L S
OpenAI’s GPT-5 development faces significant delays and cost issues
OpenAI’s next major project, GPT-5 (code-named Orion), is experiencing substantial setbacks and escalating costs, according to a Wall Street Journal report by Deepa Seetharaman. The project, which has been in development for over 18 months, has encountered multiple challenges during training runs, each costing approximately half a billion dollars in computing expenses alone.
The company’s attempts to create a more advanced successor to GPT-4 have been hampered by data limitations and technical difficulties. Microsoft, OpenAI’s largest investor, had initially expected to see the new model by mid-2024. However, at least two large-scale training runs have failed to meet researchers’ expectations.
Google launches new AI reasoning model Gemini 2.0 Flash Thinking
Google has released a new artificial intelligence model called Gemini 2.0 Flash Thinking Experimental, designed to enhance reasoning capabilities in complex problem-solving tasks. The model, available through Google’s AI Studio platform, is described by the company as being optimized for multimodal understanding, reasoning, and coding across fields including programming, math, and physics.
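For the technically curious: the model can also be tried programmatically through the Gemini API. The following is a minimal sketch, not an official example; it assumes the google-generativeai Python SDK and the experimental model ID "gemini-2.0-flash-thinking-exp", which may change, so check Google AI Studio for the current identifier and to create an API key.

```python
# Minimal sketch (not from the article): calling a Gemini "thinking" model
# through Google's google-generativeai SDK. The model ID below is an
# assumption based on the experimental release -- check Google AI Studio
# for the current identifier and to obtain an API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key created in Google AI Studio

# Experimental reasoning-focused model; the exact ID may change over time.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

response = model.generate_content(
    "A train covers 120 km in 90 minutes. What is its average speed in km/h? "
    "Explain your reasoning step by step."
)
print(response.text)
```

The point here is less the specific prompt than the fact that the reasoning-focused model is reached through the same API surface as the other Gemini models, so existing integrations need little more than a model ID change to experiment with it.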
Google introduces Veo 2 AI video generator
Google DeepMind has announced Veo 2, its latest AI video generation model, positioning it as a direct competitor to OpenAI’s Sora. The new model is currently available through Google Labs’ VideoFX platform on a waitlist basis, with users required to apply through a Google Form for access.
According to Google, Veo 2 can generate videos up to two minutes long in resolutions reaching 4K (4096 x 2160 pixels), though the current implementation in VideoFX limits outputs to 720p resolution and eight-second clips. The company claims the model offers improved understanding of real-world physics, human movement, and cinematographic controls, including the ability to specify genres, lenses, and cinematic effects.
Tools in brief
N E W S
Record investment in generative AI reaches $56 billion
Venture capital investment in generative AI companies hit an unprecedented $56 billion in 2024, marking a 192% increase from the previous year. According to reporting by Kyle Wiggers for TechCrunch, this funding was distributed across 885 deals worldwide. Major contributors to this surge included substantial investments in industry leaders, with Databricks securing $10 billion, xAI raising $6 billion, and OpenAI receiving $6.6 billion in funding rounds. The fourth quarter of 2024 proved particularly significant, with deal values reaching $31.1 billion.
News in brief
B A C K G R O U N D
Companies struggle to regulate workplace AI usage
A Financial Times report reveals that employees are rapidly adopting AI tools like ChatGPT before their employers can establish proper guidelines. Nearly 25% of US workers use generative AI weekly, with usage reaching 50% in the software and financial sectors. By September, fewer than half of organizations had implemented AI usage policies, according to a Littler survey. Major companies like Walmart are developing in-house AI solutions, while others initially banned tools like ChatGPT due to security concerns. Many workers remain hesitant to disclose their AI use, fearing negative consequences. Companies face challenges in creating comprehensive policies due to evolving technology, diverse workforce needs, and uncertain legal frameworks around AI implementation.
Google increasingly flooded with AI slop
This Reddit post begins with user KnightTrain expressing frustration after watching “John Wick 4” and searching for information about a potential “John Wick 5.” The comments reveal widespread frustration with the current state of internet search engines, particularly Google, which many users feel has devolved into a platform filled with low-quality AI-generated content (often called “AI slop”), ads, and misinformation.
6 interesting talks about AI from 38C3
The 38th Chaos Communication Congress (38C3) in Hamburg, Germany, was the latest installment of the annual four-day conference on technology, society and utopia organized by the Chaos Computer Club (CCC) and volunteers. From the long list of talks, I chose six that I found especially relevant for readers of Smart Content Report and for myself. I used AI summaries to better understand the topics and perspectives discussed in these talks beforehand.
Background in brief
G L O S S A R Y
Overfitting
Overfitting is a common problem in AI training where the model learns the training data too precisely, rather than understanding general patterns.
It can be compared to a student who memorizes example problems from a textbook instead of understanding the underlying mathematical principles. When faced with slightly different problems in an actual test, they fail.
Similarly, in AI systems with overfitting, the model performs excellently with training data but fails when confronted with new, unfamiliar data. This is particularly problematic in generative AI, which needs to respond flexibly to new situations. When a model overfits, it essentially becomes too specialized, focusing on noise or irrelevant details in the training data rather than learning meaningful features that would help it generalize to new situations.
To prevent overfitting, developers use various techniques, such as training with a greater variety of data or deliberately "forgetting" overly specific details (in practice, regularization techniques such as dropout or weight decay). The goal is always to achieve a balanced relationship between accuracy and generalization capability.
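To make this concrete, here is a minimal, self-contained sketch (my own illustration, not tied to any specific AI system): fitting polynomials of different complexity to a handful of noisy samples shows how an overly flexible model can memorize its training points yet generalize poorly.

```python
# Minimal overfitting demo: fit a simple and a very flexible polynomial
# to a few noisy samples of a known curve and compare training vs. test error.
import numpy as np

rng = np.random.default_rng(0)

# Small, noisy training set drawn from a simple underlying pattern (a sine wave)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)

# Unseen test data from the same underlying pattern, without noise
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit the polynomial
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-9 polynomial can pass through all ten noisy points (near-zero
# training error) but tends to oscillate between them -- it has memorized the
# noise instead of learning the underlying pattern.
```

In a typical run, the high-degree polynomial reaches near-zero training error but a clearly higher test error than the simpler fit, which is exactly the memorization-versus-generalization problem described above.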