AGI ≠ "ALL GOOD INTENTIONS" - WE NEED LOVE! TFG #007
Designed by AI using Microsoft Designer

AGI ≠ "ALL GOOD INTENTIONS" - WE NEED LOVE! TFG #007

You don't need to sing "Kumbaya", nor queue up Jackie DeShannon's "What the World Needs Now (Is Love)" (popularized by Dionne Warwick). But I - as an e/acc optimistic futurist and tech "influencer" (whatever that really means anymore) - am convinced that the next iterations of AI could be even scarier than most fear, or far more glorious and positive for humanity than most have yet reconciled. The balance may lie in LOVE and its components - empathy, kindness, selflessness, desire, loyalty, non-judgement, agape, familial love and many more - as well as in understanding what love is NOT.

IN THIS ISSUE, I will be discussing:

  • Yann LeCun's recent comments on AGI ("there is no such thing as AGI");
  • Jensen Huang's keynote at Computex (the future of NVIDIA may be "one big AI" and "physical AI");
  • AI meets the blockchain (Story) and my upcoming talk about it at Cincy AI Week - the SOLD OUT conference mid-month about all things AI;
  • how LOVE and other models beyond large language models could actually get us to AGI (artificial general intelligence: 'general intelligence' being how animals and people can learn things on their own and complete wide ranges of tasks without specific training on them); and
  • 'building in public' - a byline of this newsletter - and how we're leaning into it at WeJ (and what it could mean for you).

So without further ado... (let's get it)!


Yann LeCun has been one of the leading contributors to the field of AI for decades, having headed up Meta's AI efforts for years. He's a stickler for verbiage, and a vocal decrier that we're "nowhere near AGI" - although that comes from an intellectual and caring place. He actually wants us to achieve AGI, per the subtext of what he was quoted as saying this week ("to get where we want to be", alluding to AGI), but he keeps saying that we will never get there with language models alone, or perhaps with language models at all. HE IS NOT WRONG. He argues that we need a world model, which I completely agree with, and illustrates his point this way: if a human or even an animal needed to build a shelter for survival with the requisite materials but no language to guide them, they'd figure it out on their own, yet no amount of language alone could get them their secure abode (or something like that). Where Mr. LeCun and I still differ conceptually is that I believe massive progress has been made with Large World Models (LWMs) like OpenAI's Sora, Google's Genie and now Veo, and others, as well as, notably, Large Nature Models (LNMs), Large Action Models (LAMs), and more.

Last year Inflection AI stated that, after generations of humans learning the language of computers, it was going to teach computers the language of humans - which it did with Pi (Personal Intelligence), essentially what Google and OpenAI both revealed a week or two back as their big "breakthroughs" (right after Microsoft took two-thirds of Inflection's founding executive team, paying Inflection over half a billion dollars to license its existing tech and poaching two of cofounder Reid Hoffman's fellow Inflection founders). This was a noble undertaking that generated sufficient success, and will now be deployed widely, but it's not enough. We can't just teach computers what our words mean, or even the intonation, implications, cadence, colloquialisms et al. Teaching computers things like physics and the laws of nature (LWM), pattern recognition (LAM) and more is great, and beyond needed, but if computers don't understand LOVE, joy, empathy, kindness, (good) human nature and much more, we're likely @$%&ed. The great news is that we don't have to accept that as a foregone conclusion (stay tuned for the section on "love").


Jensen Huang keeps getting richer and more powerful, and keeps revealing more of the roadmap, vision and technology that has NVIDIA set to become the most valuable company in the world - indeed, in the history of the world. Their new chips, and their understanding of the need for great, fast, affordable and "green" compute, are next level (though they still don't rival Groq at all - I'm excited to meet and speak with them at Cincy AI Week), and he just revealed several more extremely noteworthy concepts: first, that NVIDIA plans to become one giant AI company capable of doing anything and everything, and second, that he believes we are fast approaching a next era of AI in which AI is "physical", or embodied. Yes, we are likely talking about the beginning of billions of humanoid robots (androids) powered by the most current and powerful multimodal AI models available. As we see remarkable advancements in robotic capabilities from the smallest to largest robots, and understand the resources available to NVIDIA, it really makes us slow down and ponder a lot - including the need for AI and computers to understand LOVE in all its forms and fashions.


If you're at all familiar with my content here on LinkedIn, my previous writing at "AI News", my keynotes, etc., you've heard me preach for years about the perceived "need" for AI on the blockchain. Smart contracts tying together all AI generations and the tokens used in anything touched by AI - so that we know "what's real", maintain digital hygiene and chain of custody, and can discern deep fakes - are, in my assessment, mission critical. I believe this is true of IP, of augmented realities, even of our own personal data, which should be sovereign.

This is why I was so amazed to learn of Story Protocol, backed by a16z and Paris Hilton among other luminaries, which is solving for exactly this via L0 and L1 protocols and chain technologies that tie IP to the blockchain in all manner of applications. We are part of their builder program at my startup WeJ, as well as at my "day job" Agora World and with my partners at BlockBook. In a week I will be speaking on the significance of blockchain in AI, and its implications for ethical AI, in Cincinnati at 'Cincy AI Week'. The conference is already sold out, but I believe it will be broadcast, and some breakout events may still have space if you're in the area or want to travel to learn what the next few months of being human in the "AI renaissance" will likely entail.
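To make the chain-of-custody idea concrete, here is a minimal, hypothetical Python sketch of what registering an AI generation could involve: hash the generated bytes, then wrap that hash in a small record that an on-chain IP registry (such as the one Story is building) could anchor. The function and field names here are illustrative only, not Story Protocol's actual API.

import hashlib
import json
import time

def provenance_record(content: bytes, model_name: str, creator: str) -> dict:
    # Hash the generated bytes so anyone holding the content can later
    # verify it matches what was registered on-chain.
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model": model_name,            # which AI model produced the content
        "creator": creator,             # the human or agent claiming authorship
        "timestamp": int(time.time()),  # when the record was created
    }

record = provenance_record(b"...generated image bytes...", "example-image-model", "creator@example.com")
print(json.dumps(record, indent=2))

Once a record like this is anchored on-chain, anyone who receives the content can re-hash it and check the hashes match - that is the "what's real" check in its simplest form.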


This entire issue is dedicated to the quest for infusing AI with love. I recently began helping a dear friend whose ambitious goal is to train an AI model on just that. It's important that we train only on good, clean data (almost entirely human-generated and supervised) rather than just ingesting data from Google, Facebook, etc., so it's a laborious undertaking - but it's a cause we believe is existential and could have wide-reaching implications for us all. I am actively speaking with research labs, engineers, and potential sponsors/funders - if you're interested in getting involved or in seeing the project continue its early momentum, I encourage you to reach out to me directly.
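For a sense of what "training only on good, clean data" means in practice, here is a toy Python filter over a hypothetical dataset: keep only examples that are human-written, human-reviewed, and labeled with the qualities we want the model to learn. The field names and labels are made up for illustration; a real curation pipeline would be far more involved.

WANTED_QUALITIES = {"empathy", "kindness", "selflessness"}

def keep_example(example: dict) -> bool:
    # Keep only human-generated, human-reviewed examples that demonstrate
    # at least one of the qualities we want the model to learn.
    return (
        example.get("source") == "human"
        and example.get("reviewed_by_human", False)
        and bool(WANTED_QUALITIES & set(example.get("labels", [])))
    )

raw_examples = [
    {"text": "Checked in on a neighbor after the storm.", "source": "human",
     "reviewed_by_human": True, "labels": ["kindness"]},
    {"text": "Auto-generated marketing copy.", "source": "synthetic",
     "reviewed_by_human": False, "labels": []},
]

clean_dataset = [ex for ex in raw_examples if keep_example(ex)]
print(f"kept {len(clean_dataset)} of {len(raw_examples)} examples")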

Code and software isn't inherently "good or bad"; it's just programmatic; it's calculus. Like humans, it "doesn't know what it doesn't know" - that's why we should all come together en masse, with love, to teach it "the better way". I honor and respect all faiths, belief systems, teachings of prophets et al, but I love the lyrics and song by Ziggy Marley, eldest son of Bob Marley, that say "Love Is My Religion".


Keeping it moving (thanks for reading through to the end): the full name of this newsletter is "Tech For Good: Building In Public With AI", and at WeJ we have started "eating our own dog food". Last week was the first week we publicly shared a link to our executive team's weekly standup meeting, where we discuss what we're working on, what tools we're using to accomplish various tasks, the initiatives and tech we're building, and much more.

The goal is multifaceted - we'd love for anyone who's keen to learn about what we're doing to be able to do so, but also to empower anyone to build their own dreams and visions into reality, and to make that as easy as we can for them. Non-WeJ attendees are effectively "flies on the wall" - muted, with cameras requested to be off - until the end, when we open the call to Q&A.

This all came about from a startup idea I had but don't have the time or bandwidth for: allowing vetted people to listen in on others' private Zoom calls for a fee (although we're not charging). This concept of building in public is particularly relevant these days, when more and more jobs risk being made redundant by AI, even as we strive to keep it "tech for good". A client recently asked me, "Cory Warfield, what should I be teaching my 8- and 10-year-old children in these changing times?" I thought for a moment, and replied "Entrepreneurship". In a future where we mostly don't "need" to work anymore (next issue I'll discuss UBI and UBC), knowing how to validate ideas, prototype, sell, launch something, monetize ideas, solve problems, etc. - especially ideas generated by human minds rather than forecastable computer "minds" - is likely going to be an extremely valuable skillset and asset. We will be able to have computers make us anything we want - but we need to know what to ask them, and how. Soon they will deploy multiple autonomous agents on our behalf that can prompt themselves, but we will still need to know enough to make them do what we want for us. At least that is my theory for now, although in these rapidly changing times, forecasting what life will be like in the near-term future is still guesswork to a degree.

All we can do is keep making smart guesses, together, with love "at the heart". If this is our north star, I feel pretty great about all things coming down the pike.


Finally, I've had hundreds of comments about this newsletter (virtually all positive - thank you! And thank you for 140K subscriptions in the first two months), but many also compare it to my previous weekly AI newsletter (alias "AI News"), where I did weekly tips, prompts, tools of the week and tutorials. I appreciate the comments, and I HEAR YOU! Moving forward I will include new jailbreaks, next-level prompt engineering, amazing new tools I am eager to share, and more - but for now I recommend going to the first issue of this newsletter, Tech For Good (you'll also find the same in issue #2), where there's a fun and productive exercise linking to all ≈30 issues of the previous publication, with instructions for loading them into your own LLM of choice so you can chat with them about the various Tools of the Week, Tutorials of the Week, Prompts of the Week and more. It's a great way for anyone - novice or expert AI user alike - to access my previous 100+ tools, tips, tricks, prompts, etc. in an interactive way that isn't laborious, time-consuming, or too challenging.
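If you'd rather see the general idea in code, here is one minimal way to do something similar in Python - assuming you've saved the back issues as plain-text files and have an OpenAI API key set in your environment. The folder name, model and question are placeholders, and this isn't the exact exercise from issue #1, just a quick illustration.

from pathlib import Path
from openai import OpenAI

# Concatenate all saved issues into one big context block.
issues = "\n\n".join(p.read_text() for p in sorted(Path("ai_news_issues").glob("*.txt")))

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the newsletter issues provided."},
        {"role": "user", "content": issues + "\n\nList every 'Tool of the Week' mentioned above."},
    ],
)
print(response.choices[0].message.content)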


Feel free to provide feedback - should I include infographics? Videos of myself or others? More prompts/tutorials/tools, or more editorial, as I've been doing in this newer publication compared with my previous one? Please let me know - I want to make this awesome for YOU, but can only make it "so" awesome without your input/feedback. Your support is truly appreciated: THANK YOU.


#techforgood #ai #ainews #litrendingtopics #topvoices #buildinginpublic #agi #gai #tech4good #openai #nvidia #gpu #llm #lwm #yannlecun #metaai #linewsletter

Jen Loving

Such an amazing newsletter Cory Warfield! So exciting to see this critical focus given the profound implications for humanity! THANK YOU!

Alexey Navolokin

FOLLOW ME for breaking tech news & content | helping usher in tech 2.0 | at AMD for a reason w/ purpose | LinkedIn persona


Absolutely agree, Cory! Your insights on merging love with AI are both timely and essential. Looking forward to hearing more at Cincy AI Week.

JP Liang

I write about wisdom + AI


Great insights, Cory... as always. It is my belief that it's no accident that the word for love in Chinese is spelled in pinyin as "AI".

Billy Samoa Saleebey

Founder of Podify | Launching Purpose-Driven Podcasts for Speakers, Authors & Founders | Amplifying Voices, Building Unstoppable Brands | Ex-Tesla


Agreed! Love matters, even in AI. Excited for your talk at Cincy AI Week Cory!

Mike Rubin, MD, PhD, CFA

ROP (Return on Potential) is my favorite acronym although I’m an MD, PhD, MBA, CFA & a bunch of other acronyms people think matters. 4x’ing ROP @ Harvard, MIT, & Stanford & Founder/CEO of a multibillion dollar VC firm.


Cory - this is awesome! I’m sure it will be an extraordinary event and that you will add a tremendous amount of value for all those who attend.
