Do-it-yourself AI part 1

On Monday, September 4, 2023, upon my return from vacation in the USA, and after putting together a small Wix site to host it (my skills in design, and consequently in building websites for the general public, have always been very limited...), I launched a narrow generative artificial intelligence dedicated to economics and economic policy, immodestly based on all my previous analyses: seven years of articles, reports, notes, books, TV appearances, YouTube videos, and radio interviews, nearly 3.5 million tokens, most of them for French media. It is an artificial intelligence that makes Sébastien Laye (or at least Sébastien the economist, limited to what he has written or been interviewed about in the past, with more or less qualified judgments) available 24 hours a day. Enough, at any rate, to get my answers, and in some cases even my forward-looking analyses, on a range of economic and public policy topics: www.sebastienlayeecobot.com

How did I get to this point, joining the new trend of narrow AIs and prompt engineering in the summer of 2023?

It all began in December 2022 with my discovery of ChatGPT. Like the average person, at that time I had only a theoretical understanding of AI, and while I am normally a fervent advocate of technical progress, I bordered on techno-skepticism about Kurzweil's theses, the Singularity, and the advent of an artificial general intelligence. But for once, we, the general public, could test an AI product with a simple interface, one so rudimentary that it reminded me of the last great paradigm shift: the appearance, in 2000, of the Google search bar on my student computer screen.

I was one of those immediately caught up in the promise of generative AI, whether for text (LLMs, or large language models), audio (ElevenLabs), image (unable to get used to MidJourney's messy Discord interface, I set my sights on Leonardo and Stable Diffusion after a few disappointing weeks on DALL-E), or video (D-ID, Runway). The flowering of start-ups, new products, new versions, and plug-ins became difficult to keep up with, even through dozens of newsletters. Busy writing a book on the economics of the metaverse over the Christmas holidays, I couldn't resist including a chapter on AI with my co-author Emmanuel Moyrand. And once the initial love-at-first-sight stage was over (my first real long prompt on ChatGPT was Homeric, as I asked the bot to write the story of Marcel, my cat, in the style of the Odyssey), came the time for questions about the practical applications of this technology.

Within two months, I was using ChatGPT to summarize my articles and those of others, then PDFs (via ChatPDF), automating my emails with ChatGPT integrated into my Gmail, and systematizing LinkedIn posts managed by the chatbot. All this became routine by spring, when I also met Amaury Delloye, the founder of ADN AI, who awakened me to the potential of AI-generated audio and video (avatars). As far as process automation is concerned, in my professional activity as a company director and investor, I have never been disappointed. But the flirtation with AI began to sour when I started asking ChatGPT questions about economics. On academic matters, the answers were quite satisfactory; on applied policy, they were poor and sometimes just appalling. How to explain the paucity of some answers? ChatGPT draws on a gigantic database, admittedly manually enhanced by OpenAI, but which is essentially the world wide web.
And on our good old network, on the subject of "economics", there are certainly good Wikipedia entries and a few online courses or economics articles, but also a lot of opinion, low-quality content, and social network posts. So, as the old computing adage goes, "garbage in, garbage out": when a model is fed low-quality data, the end result, no matter how good the prompt or the LLM, is bound to be low quality. While I continued to use ChatGPT at the beginning of the summer, my hopes turned to more specialized AIs (known as narrow AIs). The future of these AIs seemed obvious: connecting the APIs of these LLMs (Claude, Bard, ChatGPT) to specialized, qualified data should lead to better results. Already, companies were building their own ChatGPT on their own data!
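In practice, "connecting an LLM's API to specialized data" usually means retrieving the most relevant passages from a private corpus and prepending them to the question before sending the prompt to the model, a pattern often called retrieval-augmented generation. Below is a minimal sketch of that idea; the tiny corpus, the crude word-overlap scoring, and the prompt wording are illustrative placeholders, not the actual pipeline behind the bot.

```python
def score(question: str, passage: str) -> int:
    """Crude relevance score: number of lowercase words shared
    between the question and a corpus passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)


def build_prompt(question: str, corpus: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant passages and build an augmented
    prompt that grounds the model in the private corpus."""
    ranked = sorted(corpus, key=lambda p: score(question, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


# Illustrative stand-in for a real corpus of articles and interviews.
corpus = [
    "Inflation in the eurozone is driven mainly by energy prices.",
    "French housing policy relies heavily on tax incentives.",
    "The metaverse economy will need new payment infrastructure.",
]

prompt = build_prompt("What drives inflation in the eurozone?", corpus)
```

The resulting `prompt` string is what would be sent to an LLM API; in a real system, the keyword overlap would be replaced by embedding-based similarity search over the full document collection.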
