GenAI Learning Journey
Created via ChatGPT 4o

I like how https://www.dhirubhai.net/in/areganti/ creates short reels on GenAI. Here is the link to one: https://www.instagram.com/p/C9L2xR9SuyY/

I intend to use this as a place for my notes, open for public scrutiny and collaboration, so the learning is fast-tracked.

Day 1: What is Generative AI

  • AI = a technology where machines learn patterns from data using AI models and make decisions.
  • Programming = we give explicit instructions, in the form of code, to achieve a task.
  • With AI, the "instructions" are derived automatically from the patterns in the data.
  • GenAI is a subclass of AI.
  • How is GenAI different from other AI? It can create new data based on the patterns it has learned.
  • GenAI is built using neural networks.
  • Neural networks undergo a process called training: they are exposed to large amounts of training data and use the patterns in that data to update their neurons' adjustable values, known as parameters.
  • An LLM is one class of GenAI model: trained on text, producing text. The latest ones can generate images too; I used GPT-4o to generate the cover image.
  • Course end goal: use LLMs, build real-world applications, and understand the challenges.
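The idea that a neural network's "neurons" are, in the end, just numbers (parameters) that training adjusts can be sketched in a few lines of Python. This is a toy illustration under my own assumptions, not any particular framework's implementation:

```python
import math

# A single artificial "neuron": multiply each input by a learned weight,
# add a bias, and pass the sum through an activation function.
# The weights and the bias are the "parameters" that training updates.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation: output in (0, 1)

# Example: two inputs, with parameters chosen by hand for illustration.
output = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(output, 3))  # 0.574
```

Training would repeatedly nudge `weights` and `bias` so the output moves closer to the desired answer; that nudging is what "learning" means here.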

Open Questions:

  • Will GenAI learn from the data it generates? Will it use that same data to update its patterns?
  • What does a neuron in a neural network look like? (Inspired by Shrini Kulkarni, asking the fundamental question.) Is it just code?
  • What are the other classes of GenAI models besides LLMs?

Shrini Kulkarni

Independent Software Engg Consultant | Ex QA Director, OLA | Ex VP JP Morgan | Ex VP Barclays | IIT Madras Alumnus

8 months ago

>>> Will GenAI learn from the data it generates? Will it use the same data to update the pattern?

In a way, GenAI learning from the text ("data") it generates happens as part of the fundamental process of generation. If the prompt is "I want to", the LLM generates the next token by feeding in the prompt and returns "I want to eat" (whether it is "eat", "sleep", "drink", or "run" are all possibilities). Which one is selected and put in the response cannot be determined beforehand (that is why how an LLM behaves is not known in advance). If I play a fill-in-the-blank game: "I want to" -> "I want to eat" -> "I want to eat an apple at" -> "I want to eat an apple at home in" -> ... you can see the response is fed back as the prompt. If I write a big paragraph as a prompt, receive a response, and then ask a follow-up question, all of this content (prompt + response + follow-up question) SHOULD go back to the LLM for the next response generation. I need to check this and confirm, but I think this is how it should work.

Also, the phrase "the LLM learns" is a metaphor; a narrow meaning of "learning" is suggested. An LLM cannot learn at inference time. If I were to roughly translate "learning" into the language of AI/ML, it is the model updating its weights and biases (and other parameters) when it "sees" new data.
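The fill-in-the-blank loop described above, where each generated token is appended to the prompt and fed back in, can be sketched as follows. This is a toy stand-in for a real LLM: the `TOY_MODEL` lookup table and the `generate` helper are my own illustrative assumptions, since a real model scores all candidate next tokens and samples one rather than looking it up:

```python
# Toy autoregressive generation: the "model" sees (prompt + everything
# generated so far) and produces one token at a time. A fixed table
# stands in for the model here.
TOY_MODEL = {
    "I want to": "eat",
    "I want to eat": "an",
    "I want to eat an": "apple",
}

def generate(prompt, max_tokens=5):
    text = prompt
    for _ in range(max_tokens):
        token = TOY_MODEL.get(text)   # a real model predicts; this looks up
        if token is None:
            break
        text = text + " " + token     # the response is fed back as the prompt
    return text

print(generate("I want to"))  # I want to eat an apple
```

This also shows why chat follow-ups work: the whole running text (prompt plus everything generated so far) is the input for the next token.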


More articles by Ajay Balamurugadas
