The 6 biggest takeaways from AI in 2023
An AI-generated image of Sam Altman with an army of robots. Generated by the author using Stable Diffusion XL.


Greetings, intelligists!

First off, a Happy New Year to all!

We are back from a brief holiday break and ready to bring you the most compelling AI insights of 2024!

2023 was a crazy year. Artificial intelligence went from being an abstract sci-fi topic to a household term. AI is now on the minds of people from all walks of life, in every sector and profession.

Our 6 biggest takeaways from AI in 2023

We could probably have put at least 50 things on this list, but here are, in our opinion, the six most important. Why not five? Call me superstitious, but like plants, I love multiples of three.


ChatGPT: The great disruptor

It might not be the most comprehensive, the most powerful, or even the best, but ChatGPT took the world by storm in 2023.

OpenAI’s chatbot exploded onto the scene, gaining over 100 million users in only a couple of months. Users had their minds blown by a tool that could generate polished text, write code, analyze data, and more, quickly and easily. Its ability to learn from users gave it a significant edge over the competition.

ChatGPT has proven to be a tool that any industry or individual can benefit from. The release of the GPT Builder feature lets anyone design their own chatbot, fed with custom data and powerful generative tools, without writing a single line of code. This makes it adaptable to virtually any use.
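
For readers who do like to tinker, the same basic idea can be approximated in a few lines of code: roughly speaking, a custom GPT boils down to a set of standing instructions (plus optional reference data) layered on top of the base model. Here is a minimal sketch using OpenAI’s Python client; the model name and prompts are illustrative placeholders, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "custom" behaviour is essentially instructions sent along with every request.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "You are a friendly tutor who explains AI news in plain English.",
        },
        {
            "role": "user",
            "content": "Summarize the biggest AI stories of 2023 in three bullet points.",
        },
    ],
)
print(response.choices[0].message.content)
```

In rough terms, the GPT Builder wraps this kind of configuration (plus optional uploaded knowledge and tools) in a chat interface, which is exactly why no coding is required.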

In 2024, ChatGPT will likely continue to be a force in AI, despite increasing competition. With a previously resistant board out of the way, the sky is the limit for the company led by Sam Altman.

AI disrupts education

The future of education? Generated by the author using Stable Diffusion XL.

The era of the internet had already produced a conundrum for educators. For decades, even centuries prior, students had to physically go to libraries to do research. In the span of a few short years, most students had all the information (good and bad) they could ever need at their fingertips. Plagiarism became an educational plague in the digital age, so much so that in recent years educators started using anti-plagiarism software to scan student submissions.

However, the emergence of generative AI meant anyone could use a chatbot to write a paper and pass any plagiarism check. Plagiarism-detection companies quickly added AI detectors to their offerings, raking in handsome profits. There was one issue, though: AI detectors do not work reliably. AI developers warn that it is impossible to reliably detect the use of AI, and it will only get harder as generative AI advances.

The result? Many educators have adapted by changing the kind of work and assignments they give students. This has fundamentally impacted how they teach and how students learn.

Many, such as Berkeley College Professor Jason Gulya, have embraced AI and teach their students how to use it as a tool. Jason has also become a leader in academia, helping other teachers adapt their methods to integrate AI in a balanced way.

There is no doubt that the emergence of AI tools will continue to disrupt how we teach and learn. 2023 was only a preview of the vast changes that are sure to come.


The OpenAI crisis: Altman’s ups and downs

In November, the world witnessed an unprecedented and hugely influential corporate crisis.

The OpenAI board fired CEO Sam Altman at a moment when the company seemed to be on a meteoric rise with no ceiling. Altman was being hailed as nothing less than a blend of Henry Ford, Bill Gates, and Steve Jobs, thanks to the heights to which he had led the maker of ChatGPT.

However, OpenAI was no ordinary company. It was established as a “capped-profit” company, and its board was infused with individuals whose role was to “protect humanity” from AI, even if that meant destroying the company.

The result? Altman’s power-hungry and profit-driven vision for OpenAI was shot down by a board that accused him of not being “consistently candid” (corporate-speak for dishonest). Altman was fired even as the company was riding a massive funding commitment and partnership with Microsoft. A few days, and a great deal of noise, later, he was reinstated.

Ah, but what does this mean for AI? What does it mean for humanity?

The central conflict in this corporate crisis was ideological. On one side stood those who believed in racing to the top, in developing AI at all costs; on the other, those who believed in limiting AI development. The former view AI as a vast economic opportunity; the latter see it as a possible path to humanity’s destruction. Both claim to have humankind’s best interests in mind.

The takeaway is that, in this case, the proponents of unbridled AI development won, and an AI development race that had already started was given the green flag. Regulation looks to be far off, and we are entering a world where AI companies are discarding ethicists in favor of growth-specialist MBAs.


A world of art, or a phony world?

A “fake” Picasso, or an image in the style of Pablo Picasso. Generated by the author using Stable Diffusion XL.


In April of 2023, German photographer Boris Eldagsen won a major prize at the Sony World Photography Awards.

The catch: the image was AI-generated. To his credit, Eldagsen made no attempt to defend his prize. Rather, he announced that his winning image had been created with AI and declined the award.

In doing so, the photographer highlighted several things: the arbitrary nature of prizes, how easy it has become to commit art fraud, and the remarkable advances in generative AI.

Today, it is possible to generate stunning images with AI. Stable Diffusion, Midjourney, and DALL-E are just a few of the tools that let people create images from simple text prompts.
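
To give a sense of how low the barrier has become, here is a minimal sketch of generating an image with Stable Diffusion XL through Hugging Face’s diffusers library; the prompt is just an example, and a reasonably capable CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Download and load the SDXL base model (several GB on first run; a CUDA GPU is assumed).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A plain-language prompt is all it takes to produce an image.
prompt = "a cubist portrait in the style of early 20th-century modernism, oil on canvas"
image = pipe(prompt=prompt, num_inference_steps=30).images[0]
image.save("generated.png")
```

A few years ago an image like this would have required a trained artist; today it is a pip install and a sentence.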

However, the emergence of these platforms also raises many questions. Copyright is one major issue (many of these platforms were trained on copyrighted images). Another is their use to generate deepfakes and other potentially harmful images. In the meantime, graphic designers and photographers are a bit worried about their job security.

Meanwhile, the nature of generative AI raises many questions. Should all AI images carry invisible watermarks? Should there be laws or regulations governing training datasets? Can artists and photographers make copyright claims against companies that use their work to train models?

A Pandora’s box is opening in which many will begin to challenge generative AI, even as, paradoxically, more and more people adopt it in different capacities.


AI upends work in general

The future of work. Generated by the author using DALL-E 3.


The first to go were the writers. Practically overnight, agencies and content publishers found that AI was a cheaper alternative to hiring writers. Many freelancers, and even agencies, saw their business go bone-dry in an incredibly short amount of time. While the quality of AI writing is still nowhere near that of a good writer, it became clear that AI was going to reshape the market.

A few months later, it is also clear that many other jobs are going to be impacted by AI. It may take a little while, but it is only a matter of time. Any job that currently entails tedious, repetitive work that can be done by AI could be in danger in the coming months or years.

Beyond that, a clear takeaway from the “AI revolution” is that even the jobs that won’t disappear due to AI will be impacted. Acquiring AI skills is becoming a superpower for many jobs. AI can help in writing, coding, data work, analyzing, summarizing, and a myriad of other tasks. Those who know how to use AI tools will have massive advantages over those who do not.

This coming year will surely see the spread of AI tools to other jobs and sectors. New jobs will emerge, and many old ones will start requiring workers to use AI. Expect more big changes in the coming year.


The New York Times takes on AI

Perhaps the last big story of 2023 will be the one that echoes loudest in 2024.

In late December, news emerged that the New York Times was suing OpenAI and Microsoft for copyright infringement. The lawsuit stems from findings that ChatGPT can reproduce sections of Times articles verbatim, despite lacking permission to use them.

This came about because ChatGPT’s underlying models used an undisclosed number of the newspaper’s articles as training data. Microsoft is implicated in the lawsuit because it uses OpenAI’s models across a wide range of its products.

Whatever the result, this lawsuit will have massive implications for AI models. Should there be a major decision in favor of the Times, it could mean that businesses and individuals could seek financial damages against AI companies that use their data to train their models. It could also mean that AI companies may see their training data dramatically limited to open-source or free data.

Should OpenAI win, it could be open season on any content published online. However, the lawsuit could also force AI companies to modify their models’ output to avoid copyright infringement.

The lawsuit seemingly came out of nowhere. But it highlights some critical issues. AI models don’t get their knowledge out of thin air. They scrape legitimate websites the world over in an endless quest for the latest and greatest data. At the same time, they use user inputs to train their models.

Keep a close eye on this story, as it may play a huge role in how AI plays out in 2024.


That wraps up our wrap-up and kicks off our first edition for this year.

I hope you not only continue to read our newsletter, but that you spread the word. Forward this email to anyone you might think will be interested. Let’s grow together in 2024!

I shall now return to my previously scheduled commitments, which involve eating ungodly amounts of food and running around after several dogs and kids.

Happy New Year!

Joaquin

intelligist.ai

