Are humans smart enough for AI yet? It's NOT you - AI is definitely hard to use.
Cory Warfield
How do I have over 500K followers here (& 100+ recs)?? I speak ‘truth to tech’, share ‘good vibes’, highlight amazing people & companies & have friends in high places. Editor-in-Chief @ Tech For Good. Serial founder/BODs
If you're not incredibly good at using AI already, don't fret - the above cartoon I generated using ChatGPT should fix that. Now that you're an expert, my work here is done.
But on a serious note, if you're one of the 7.99B humans on earth - literally - who isn't great at using AI already, that's perfectly fine, and statistically very normal. I actually just learned this weekend that I'm not even as good at using AI as I thought I was - a humbling revelation for an AI founder with a decade in machine learning, three years of prompt engineering, educating and jailbreaking, now a second-time founder behind an AI newsletter with a six-figure following, and someone who's even used AI to make AI that helps people learn to use AI (see the first-ever issue of the Tech For Good newsletter, #01, here in the archive for that). I even taught a free webinar this morning with Ravit Dotan, PhD on how to build products using AI (and how to do it ethically, and how to monetize it). In all fairness, as a prompt engineer and enthusiastic AI power-user, I do generally have a pretty good handle on how to get AI to do the things I want it to (even when it initially tries to say it can't). But to really get it to deliver valuable, time-saving, productive, next-level results is still really hard.
I'll temper the above by saying that using off-the-shelf LLMs such as ChatGPT, Claude, Gemini etc. to do things like write copy, ideate, put together first drafts of legal docs, or generate images (although, as the cartoon above shows, image generation still leaves a lot to be desired in its current state - the imminent release of ChatGPT Next may change that, and much more) isn't inherently 'hard' or challenging. If you're reading this and aren't familiar with how to get these systems to "work for you", the idea is to train them like you would a new employee or coworker: feel free to ask the model about itself, its capabilities and how to get it to produce the results you want, then give it instructions that are as precise as possible - a persona (e.g. "you're a senior business consultant from a top-3 agency with 20 years of domain expertise in technology and cyber security"), a task (e.g. "review this technical roadmap and present a numbered bullet-point list of possible concerns"), any additional relevant data or context (e.g. "here is a file of the last audit we had done - please emulate this"), and anything else such as tone (professional, witty), desired length/format, and all other pertinent information.
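To make that concrete, here's a minimal sketch of that persona / task / context / tone pattern using the OpenAI Python client. The model name, the audit snippet and the roadmap placeholder are illustrative assumptions, not anything specific to my setup - swap in whatever model, files and details you're actually working with.

```python
# A minimal sketch of the persona / task / context / tone pattern described above,
# using the OpenAI Python client. The model name, the audit snippet and the roadmap
# placeholder are illustrative assumptions - swap in whatever you're actually using.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

persona = (
    "You are a senior business consultant from a top-3 agency with 20 years of "
    "domain expertise in technology and cyber security."
)
task = "Review the technical roadmap below and present a numbered list of possible concerns."
context = "Here is an excerpt from our last audit - please emulate its level of rigor:\n<paste audit excerpt>"
style = "Tone: professional but direct. Format: no more than 10 numbered points."

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model will do
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{task}\n\n{context}\n\n{style}\n\nRoadmap:\n<paste roadmap>"},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the code - it's that the same four ingredients (persona, task, context, tone/format) do most of the heavy lifting whether you type them into a chat window or send them through an API.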
Then be patient, find new ways to ask it to do things when it says it can't (particularly if you know or suspect that it really can, and especially when it's mission-critical to completing the task at hand), fine-tune the results (don't settle for "good enough"), and ultimately, you may want to perfect the output by hand (for example, the above cartoon could easily be cleaned up in Canva and end up actually decent - maybe).
But with all this said, it's still not "that easy". We all see awful social media content, emails etc. drafted by AI daily, and for as much as we hear that AI will take all our jobs (it will), that it's so powerful (it is), that it will be disruptive (it certainly will be), and that it's dangerous (this is not a foregone conclusion), most people are not getting "usable", let alone superior, results from AI beyond what they could have produced themselves, sometimes just as fast and perhaps more easily. But where AI holds major potential, and will create millionaires and billionaires in the coming years, is its ability to write, test, document, debug and deploy code. This will allow anyone - at least in theory - to turn their ideas into software: apps and technology that can solve problems and generate a whole lot of money for those who know how to use it.
Sam Altman recently postulated that soon ten-person startups will reach unicorn status (privately held, billion-dollar-valued companies), followed by one-person startups. I believe this to be true as well. But anyone who's built Custom GPTs, or even Artifacts on Claude, or anyone who's used the no-code platform Bubble, or the recent (and still anomalous) Devin, knows that these really aren't tools anyone can use to build software or solutions that will gain mass adoption and solve real-world problems - not without the user being quite technical or having someone technical accessible to the project. ChatGPT and Claude both write 'decent' code, particularly in Python, JavaScript and HTML, but recently the headlines are that ALL THIS HAS CHANGED. But has it??
"Cursor is a game changer", "Cursor is about to replace all coders", "Nothing has ever existed like Cursor" were oft echoed posts on social media recently, and it's true - Cursor writes really good code. It is good at concepting, at architecting, and at coding; what it's NOT good at is deploying the code, unless you know how to spin up instances, install libraries and dependancies, and really "think like a coder". So if you're non-technical, you're left with a bunch of "great code" and nothing to do with it.
Then days later, a new pervasive message began to dominate social media:
"Cursor is good, but Replit is BETTER", "Replit just flipped the game on its head", "Replit will replace all coders". Sounds familiar, and frankly, I think many in the AI space need to take regularly scheduled departures from the "hype train" to keep us all sane and balanced. But then I went to Replit, and it was quite good. INCREDIBLY GOOD. It reeeeeeeeaaaaaalllllllly seemed like it was going to produce some incredible software, because, like Cursor, it can code, and has a preview window for the code (Claude 3.5 Sonnet with Artifacts has this as well), but this one can deploy. Now THAT is a big deal. This is cool - you can deploy it locally inside of Replit, you can deploy it easily to your own domain. The fact that you can use AI to code, build, test, and deploy locally, etc. is truly a big deal - AND TO MAKE IT EVEN COOLER REPLIT HAS AGENTS TO HELP EVERY STEP OF THE WAY, AND IT INSTALL ALL LIBRARIES ETC. INTUITIVELY FOR THE USER.
Ok, great, Cory Warfield, it sounds awesome - so what's the problem? Well, aside from there being no free tier to play with on Replit - I had just hit my stride when I was told I'd reached my MONTHLY limit (on my FIRST DAY using it!!!) - the problem with Replit, as with Cursor and also with Claude Artifacts, is that THE INTERFACES ARE TERRIBLE. If we want to make software, apps, websites, chatbots etc. that people will actually use, they MUST have great interfaces and usability. After spending days on Cursor and then a day on Replit (lol), I was almost no better off than I would have been trying to build these concepts on Claude 3.5 Sonnet with Artifacts, and that's nowhere close to "AI taking all the programming jobs" or "a one-person startup achieving billion-dollar status".
One other thing to mention: with Claude and even Cursor, you can't really integrate APIs, AI, or the web into the tech you build, but with Replit you can, which is amazing - except that to do so, you have to be quite technical. It turns out that even watching a ton of YouTube videos doesn't make this part very easy (at all). And the saga continues...
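For a sense of what "quite technical" means here, below is a minimal sketch of the API-integration step in Python: reading a key from an environment variable (Replit's Secrets tab exposes secrets this way) and calling an external service. The endpoint, parameters and the WEATHER_API_KEY name are placeholders I made up for illustration, not a real API.

```python
# A minimal sketch of the "integrate an API" step. Assumes you've saved a key named
# WEATHER_API_KEY in Replit's Secrets tab (which exposes it as an environment variable).
# The endpoint, parameters and key name are placeholders, not a real service.
import os
import requests

API_KEY = os.environ["WEATHER_API_KEY"]  # never hard-code keys in your source files

def get_forecast(city: str) -> dict:
    # call the external service with the key attached; error handling kept minimal
    resp = requests.get(
        "https://api.example-weather.com/v1/forecast",  # placeholder endpoint
        params={"city": city, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_forecast("Chicago"))
```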
But then, something even better, but also way worse, came across my desk. Enter: v0. v0 is an AI that builds front-end, not back-end, so whereas Replit can build apps that do cool stuff, tie into APIs, access AI etc. but look fairly terrible, v0 can build beautiful, sleek, sophisticated, intuitive, modern, and lovely UIs (user interfaces) that do NOTHING (think "mock-up", wireframe, prototype).
It was around this time that I discovered someone on YouTube who was clearly much smarter than I am, and he had a beautiful system, with a free template, to connect your v0 to your Cursor to your Replit - meaning you could design cool apps in Cursor, make those apps' interfaces awesome using v0, then easily add APIs etc. using Replit and deploy them via Replit, your own domain, GitHub (a whole other "beast") or VS Code. VOILA! Presto! Bing, bang, boom. Except - IT'S NOT THAT EASY. AT ALL.
Turns out you need to do stuff with Firebase, install software, generate secret keys and put them in 'hard-to-find' places... in summation: IT ISN'T EASY AT ALL. It does seem that as soon as people as ambitious (or crazy) as myself finally figure out how to do all this, it will become obsolete - I already see tools like Replit generating files, adding libraries, etc. in real time, and automating these extra steps seems like the next logical step in this process. But it raises a bigger question:
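For the curious, here's roughly what the "secret keys in hard-to-find places" step tends to look like in Python: keys live in a .env file that never gets committed, and the app loads them at startup. The variable names below (FIREBASE_API_KEY etc.) are illustrative assumptions, not the exact keys Firebase or that YouTube template uses.

```python
# A rough sketch of where those "secret keys" usually end up: a .env file that never
# gets committed, loaded at startup. The variable names (FIREBASE_API_KEY etc.) are
# illustrative assumptions, not the exact keys any particular service issues.
#
#   pip install python-dotenv
#
#   # .env  (add this file to .gitignore!)
#   FIREBASE_API_KEY=your-key-here
#   FIREBASE_PROJECT_ID=your-project-id
#
import os
from dotenv import load_dotenv

load_dotenv()  # copies the values from .env into environment variables

firebase_config = {
    "apiKey": os.getenv("FIREBASE_API_KEY"),
    "projectId": os.getenv("FIREBASE_PROJECT_ID"),
}

if not all(firebase_config.values()):
    raise SystemExit("Missing Firebase settings - check your .env file.")

print("Config loaded - ready to initialize the client.")
```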
How Smart Do We Humans Need To Be To Truly Leverage AI?
Will the next steps toward closing the gap between the general public and AI power-users come from humans understanding how to use the tech better, or must it be technology understanding the humans who use it better? Although we see new models emerging and being teased with superior reasoning and listening capabilities, it seems as though both humans and technology need to be trained more extensively - the only question is whether humans are receptive enough to being trained and having their user behavior challenged/disrupted, or whether it's all on the computers to learn humans better. The funny thing is that for the computers to understand the humans better, it will take some of the really smart humans to keep doing the big-brained teaching, learning, implementing and fine-tuning. I applaud the brilliant minds out there figuring out how to improve all the disparate systems and tools, how to make them all "play nicely" with each other, and how to make them easy enough for the rest of us to use someday.
As for me, I need a quick mental vacation. All this learning, doing, watching, trying, failing, yelling, glaring, and questioning my worthiness as an AI power-user has got me more frazzled than I've been in quite a long time. But after some breath work, prayer and meditation, reading out of a real book (not a screen), and some well-earned (or at least much-needed) sleep, I'll be right back at it. I'm committed to making this Cursor > v0 > Replit dream-stack work for me, because I have so many more ideas I want to build, bring to life, market, scale, sell, and introduce to the world. If you have awesome ideas you want to build, drop them in the comments - maybe you'll find a cofounder here! Or roll up your sleeves, start playing with Replit Agent (but not too much, or you'll get cut off like I was), and see if you can "make this stuff work" for you.
Lastly, in other AI news, the new open-source LLM Reflection (by Hyperbolic?) had been reported to be "Better Than ChatGPT and Claude" but was then exposed as a possible "fraud". I've played with it, and its reasoning does appear to be decent; the potential disruption behind it is fine-tuning prompts inside the model in a way that possibly lets the model get smarter, whereas other existing LLMs are only as smart as what they were trained on and don't really get "smarter". If the rumors of it being a "scam" are inaccurate and this new method of training AI models can help them level up, it could be a big deal. Also, OpenAI has "mostly" confirmed that their new model, ChatGPT Next - née Q* and Project Strawberry, and reportedly reviewed by the US government - will be out this autumn (so, "any week now"). They've also now incorporated SearchGPT - their search engine to rival Google and Perplexity - into the ChatGPT interface for those with access; I have it and haven't been swayed to use it over Perplexity yet, personally. If you have any "tasty nuggets", please share them in the comments below.
I also welcome all feedback, suggestions, submissions and critique.
I appreciate you taking the time to read this - I hope you found it enjoyable, informative, actionable and most importantly worth your time (our true commodity). If you loved it and are not subscribed yet, please feel welcome and free to do so, and if you're already subscribed and love it enough to do so - please tell some friends. We're on pace to close the year at a quarter million subscribers. UNREAL! And it's all because of YOU, so Thank You. TOGETHER WE RISE!!
Your AI Mentor and Guide: You + AI = ??