AI, generally speaking ...
Yogesh Haribhau Kulkarni
AI Advisor (Helping organizations in their AI journeys) | PhD (Geometric Modeling) | Tech Columnist (Marathi)
Artificial Intelligence (AI) covers a wide range of technologies, from rule-based expert systems to the latest advances in Machine and Deep Learning. The aim is to mimic (or even surpass) human intelligence. AI is broadly divided into three stages: Artificial Narrow Intelligence (ANI), which has a narrow range of abilities and is essentially single-tasking; Artificial General Intelligence (AGI), which has multi-task capabilities, as in humans; and Artificial Superintelligence (ASI), whose capabilities exceed those of humans.
Although a bit dated, the 6.S099 course at the Massachusetts Institute of Technology (MIT), taught by Lex Fridman, serves as a good introduction to AGI.
The following is a sketchnote of the first class, giving an overview of the entire course.
A number of renowned AI scientists were invited as guest lecturers in this course, so I recommend watching the full playlist (below).
Yogesh Haribhau Kulkarni (author)
2y · Also published at https://medium.com/@yogeshharibhaukulkarni/ai-generally-speaking-9163948448cb
Yogesh Haribhau Kulkarni (author)
2y · Thanks to Tanmay Vora for comments and suggestions.
Global, Corporate Group Head of AI at L&T Group | CTO, Sr. VP | IITB | Keynote AI Speaker | $27 billion, 3 startups, Entrepreneur | 26 yrs Member of Group Tech Council | 17 yrs in AI | Gen AI | Mob: 9689899815
2y · Yogesh Kulkarni, thanks for sharing this. I see Ben Goertzel and Marcus Hutter missing from the list; Lex Fridman has interviewed both in his video series. Manolis Kellis is also missing!
Design-led transformation
2y · Yogesh Kulkarni, your sketchnote is succinct and yet broad, capturing the essential as well as the emerging. I like the emphasis (the density of words related to human empathy, feeling, and intelligence) in your diagram.

As a designer, my observation is that we end up designing for stereotypes and dominant behaviours, and hence potentially miss being inclusive or adapting to irrational human behaviours, which 'surprise' the learning within a machine. I lean on political theory, which accommodates irrational voter behaviour more scientifically. I am a big fan of Belnap's logic.

Given the growing emphasis in the AI world on data quality, versus modeling, weights, and biases, I think it will help to understand what is good data, bad data, or all that in between. Like a wise man said, a weed is a plant without a known human need. AGI will surely have to navigate these, especially when dealing with older brain regions, irrationality, and such. Thank you for your inspirational note.