3 things every CTO should know about Artificial Intelligence in 2023
Just before the end of 2022, OpenAI released ChatGPT, a conversational system built on a deep learning model. Within a matter of days, over one million users registered to use its capabilities. That is mind-blowing in and of itself, but to put it in perspective: Netflix was out for more than three years before a million users registered for its services. This suggests that the next generation of software applications will not be engineered; they will be modelled through Artificial Intelligence (AI).
For organisations to stay ahead in this competitive age, CTOs need to have a clear understanding of how AI is impacting the software engineering lifecycle and how to adopt it. In particular, they should be aware of three things:
#1 Pouring money into AI without a proper AI strategy is a recipe for disaster
AI projects can be complex and involve many different factors, such as data collection and preparation, model training and evaluation, and deployment and maintenance. Without a clear strategy, it can be difficult to ensure that all of these factors are properly considered and that the AI solution meets the needs and goals of the organisation.
A well-defined AI strategy helps ensure that each of these factors is properly addressed and that AI investments stay aligned with the organisation's goals, rather than turning into expensive experiments with no path to production.
#2 A new subfield of AI will move out of research labs and into the industry
There are several types of machine learning, each with its own unique characteristics and applications. One type of machine learning is supervised learning, which involves training a model on labeled data, where the input features and corresponding output labels are provided (think of an image of a computer chip as the input features and the label being whether it contains any defects or not).
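To make the idea concrete, here is a minimal sketch of supervised learning: a 1-nearest-neighbour classifier written in plain Python. The feature values and the "ok"/"defect" labels are invented for illustration, echoing the chip-defect example above.

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier.
# Each training example pairs input features with a known output label.

def predict(train, features):
    """Return the label of the training example closest to `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

# Labelled training data: (features, label) pairs
train = [
    ((0.1, 0.2), "ok"),
    ((0.2, 0.1), "ok"),
    ((0.9, 0.8), "defect"),
    ((0.8, 0.9), "defect"),
]

# A new, unlabelled input lands near the "defect" cluster
print(predict(train, (0.85, 0.85)))  # prints "defect"
```

Real systems use far richer models, but the contract is the same: labelled examples in, a function that labels new inputs out.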
Another type of machine learning is semi-supervised learning, which involves training a model on a mixture of labeled and unlabelled data. This is useful when we have a limited amount of labeled data and a large amount of unlabelled data, as the model can still learn from the unlabelled data.
Large language models, such as GPT-3, are a type of generative AI that can generate human-like text and perform a wide range of language tasks. These models are trained primarily through self-supervised learning on vast amounts of text, predicting missing or upcoming words, which allows them to learn the structure of language and generate coherent text.
Finally, there is reinforcement learning, which is a type of machine learning in which an agent learns to interact with its environment in order to maximise a reward signal. This type of learning is often used to train AI agents to perform complex tasks, such as playing video games or controlling robots. The agent learns through trial and error, receiving rewards or punishments based on its actions.
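The trial-and-error loop described above can be sketched in a few lines of tabular Q-learning. The environment here is a toy 5-state corridor invented for illustration: the agent starts at one end and earns a reward only when it reaches the other.

```python
import random

# Minimal sketch of reinforcement learning: tabular Q-learning on a
# 5-state corridor. The agent earns a reward of 1 only at the far end
# and learns, by trial and error, which action to take in each state.
random.seed(0)

n_states, actions = 5, (-1, +1)          # actions: move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: nudge Q towards reward + discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right in every state
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)}
print(policy)
```

Swap the corridor for a trading simulator or a production line and the same loop underlies the industrial examples below.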
Up until now, reinforcement learning has seen most of its successes in robotics and gaming, but we are starting to see new industries, such as finance and manufacturing, adopt this subfield of AI. For example, a trading agent might be trained to increase profits by buying and selling stocks at the right times, while a manufacturing optimisation agent might be trained to reduce production costs by adjusting process parameters.
These models are among the most sophisticated in AI today, and we can expect them to become more and more powerful in the years ahead. According to Mo Gawdat (former Chief Business Officer of GoogleX), AI could become a billion (yes, billion with a 'b') times smarter than any single person on this planet, unlocking an array of new applications along the way.
#3 Data will no longer be the limiting factor
In recent years, the importance of compute in training machine learning (ML) models has been increasing, with some experts even suggesting that compute is becoming more important than data. There are several reasons for this trend.
First, the size of ML models has been increasing dramatically, with some models having billions of parameters. Training such large models requires a significant amount of compute power, as the model must be trained on a large dataset and perform many calculations to update its parameters.
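A quick back-of-envelope calculation shows why. The figures below are illustrative assumptions (GPT-3-scale parameter count, a round number of training tokens), combined with the commonly cited rule of thumb of roughly 6 floating-point operations per parameter per training token:

```python
# Back-of-envelope sketch (illustrative numbers, not a benchmark):
# memory and compute needed to train a large model.

params = 175e9          # parameters, GPT-3 scale
tokens = 300e9          # training tokens (assumed for illustration)

# Storing the weights alone, at 4 bytes (fp32) per parameter:
weight_gb = params * 4 / 1e9

# Rule of thumb: ~6 floating-point operations per parameter per token
train_flops = 6 * params * tokens

print(f"weights: {weight_gb:.0f} GB, training: {train_flops:.1e} FLOPs")
```

Hundreds of gigabytes just for the weights, and on the order of 10^23 operations for a single training run: at that scale, compute, not data collection, dominates the bill.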
Second, the demand for ML models with higher performance has also been increasing, which often requires training on larger and more complex datasets. This can further increase the amount of compute required to train the model.
Third, the field of ML is constantly evolving, with new techniques and approaches being developed all the time. These new techniques often require even more compute power in order to be practical.
Finally, the availability of cloud-based compute resources has made it easier for organisations to access the large amounts of compute needed to train ML models. This has further fuelled the trend towards more compute-intensive ML training.
Overall, the increasing size and complexity of ML models, the demand for higher performance, the evolution of the field, and the availability of cloud-based compute resources have all contributed to the trend of compute becoming increasingly important in ML model training.
The takeaway
AI has had a profound impact on our society in recent years, and it will continue to do so in ever more impactful ways in the years ahead. Staying innovative requires CTOs to understand how AI affects their industry and to come up with a (cost) strategy that integrates software engineering practices with AI. Fortunately, getting started doesn't have to be difficult. Curious how? Let's discuss that in next week's blog post!