GenAI and LLM: Key Concepts You Need to Know

GenAI and LLMs: new trends and key concepts, with an emphasis on better results, lower costs, and faster implementations, even without neural networks.

It is difficult to follow all the new developments in AI. How can you discriminate between fundamental technology that is here to stay, and hype? How can you make sure that you are not missing important developments? The goal of this article is to provide a short summary, presented as a glossary. I focus on recent, well-established methods and architectures.

I do not cover the different types of deep neural networks, loss functions, or gradient descent methods: in the end, these are the core components of many modern techniques, but they have a long history and are well documented. Instead, I focus on new trends and emerging concepts such as RAG, LangChain, embeddings, diffusion, and so on. Some are quite old (embeddings, for instance) but have gained considerable popularity recently, due to their widespread use in ground-breaking applications such as GPT.

New Trends

The landscape evolves in two opposite directions. On one side, well-established GenAI companies implement neural networks with trillions of parameters, growing ever larger in size, consuming considerable amounts of GPU resources at great expense. People working on these products believe that the easiest fix to current problems is to use the same tools with bigger training sets. After all, it also generates more revenue. And indeed, it can solve some sampling issues and deliver better results. There is some emphasis on faster implementations, but speed, and especially size, are not top priorities. In short, more brute force is the key to optimization.

On the other side, new startups, including my own, focus on specialization. The goal is to extract as much useful data as you can from much smaller, carefully selected training sets, to deliver highly relevant results to specific audiences [..]

Read the full article here.


Vincent Granville

AI/LLM Disruptive Leader | GenAI Tech Lab

8 months ago

Not sure if people are interested in the images accompanying my posts. For the story, this one features the gradient of interpolated geospatial data: essentially, data synthesized over the entire space when measurements are available only at select locations (the dots in the picture below). Thus, the picture below represents the measurements reconstructed over the entire space given known measurements at the dot locations; the image at the top of this post is the gradient of the image below.
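The reconstruction described above can be sketched in a few lines. This is my own illustration, not the author's code: scattered "dot" measurements (here on a synthetic test function, an assumption for demonstration) are interpolated to a dense grid with `scipy.interpolate.griddata`, and the gradient of the reconstructed field is then taken with `numpy.gradient`.

```python
# Sketch: interpolate scattered geospatial measurements to a full grid,
# then compute the gradient of the reconstructed field.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Measurements known only at a few scattered "dot" locations.
points = rng.uniform(0, 1, size=(50, 2))                 # (x, y) of each dot
values = np.sin(3 * points[:, 0]) * np.cos(3 * points[:, 1])  # toy signal

# Dense grid covering the entire space.
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))

# Reconstruct the field everywhere: linear interpolation inside the
# convex hull of the dots, nearest-neighbor fill outside it.
field = griddata(points, values, (gx, gy), method="linear")
fill = griddata(points, values, (gx, gy), method="nearest")
field = np.where(np.isnan(field), fill, field)

# Gradient of the reconstructed field -- the "image at the top".
dy, dx = np.gradient(field)
gradient_magnitude = np.hypot(dx, dy)
```

Any interpolator could stand in for `griddata` (kriging and radial basis functions are common choices for geospatial data); the point is simply reconstruct first, differentiate second.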
