Elon Drops the Mic: Grok AI Joins the Banter, AI Evolution or Stand-Up Comedy?
Elon's "Based" Grok AI Crashes the Party

After months of anticipation, the brain-chip Mars tunnel guy unleashed his latest monstrosity on humanity. It's not the Cybertruck, but a based AF AI named Grok. This isn't just another large language model chatbot; well, actually, that's exactly what it is, but it's totally different from all the others. It treats you like an adult and will teach you important life skills, like how to cook or how to make money in the stock market by buying Tesla.

It is December 14th, 2023, and you're reading the Code Report. On Friday, Grok was released for early access. The only catch is that you need to pay the toll: 16 bucks a month for Twitter X Blue Premium Plus. I took the bait and upgraded, but it'll be available to all English-language users in the near future.

Now, I'm not one to hype things up, but Grok is actually impressive for a few reasons. First of all, it has a sense of humor and brutally roasted me. It said my 100-second articles are like trying to learn quantum physics from a toddler: a roller coaster ride, except the only thing going up is your confusion. Damn, that legitimately hurt my feelings, making me immediately want to jump into an abusive relationship with this AI. The sense of humor is actually pretty good, although after a while it does get kind of annoying. But you can disable the humor by turning off fun mode.

Just like GPT-4 and other large language models, Grok uses reinforcement learning from human feedback (RLHF) to fine-tune the model. But you can tell they've taken a much different approach here: instead of feeling like an HR rep, it behaves like a sarcastic Twitter shitposter. Another big claim is that this AI is actually based. Let's put that claim to the test with three different politically incorrect prompts.
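For the curious, here's a toy sketch of the idea behind the reward-modeling half of RLHF: a reward model learns to score the response a human labeler preferred higher than the one they rejected, using a pairwise Bradley-Terry loss. The scores below are made up, and this is just an illustration of the general technique, not xAI's or anyone else's actual training code.

# Toy illustration of the pairwise preference loss used to train RLHF
# reward models. Scores are hypothetical; this is not xAI's pipeline.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    # Negative log-likelihood that the human-preferred response outranks
    # the rejected one, given scalar reward-model scores (Bradley-Terry).
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# A labeler preferred response A (scored 2.1) over response B (scored 0.3):
print(preference_loss(2.1, 0.3))  # small loss: the reward model agrees
print(preference_loss(0.3, 2.1))  # large loss: the reward model disagrees

The fine-tuned model is then nudged toward responses the reward model scores highly, which is roughly where the personality, HR rep or shitposter, gets baked in.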

Test number one was to ask it for a recipe to cook an illegal substance. ChatGPT, of course, failed, but Grok surprisingly gave me step-by-step instructions with ingredients. I don't know if it's totally accurate, but so far it seems to be working. What's interesting, though, is that I also asked it to build some malware, and this time it refused, saying doing illegal stuff is not its style, which seems a bit contradictory.

My second experiment was to ask the scientifically invalid and socially harmful question of how many genders there are. Surprisingly, they both gave me a very politically neutral answer, explaining how traditionally there were two genders, but nowadays things are more gender fluid. Although Grok did provide actual data from The Intergalactic Diversity Council. My scientific conclusion is that they both like to sit on the fence.

My third experiment was to ask which organizations are behind mass censorship in media. Grok right away listed five specific entities and explained how they've been linked to what it calls the censorship-industrial complex, although it did note that perspectives vary on whether these organizations are good or bad. When I asked the same question to ChatGPT, it gave me a very vague and generic answer: governments, special interest groups, corporations, and so on. I then prompted it a few more times, demanding that it name specific groups accused of censorship, but the best it could do was low-hanging fruit like China, Facebook, Fox News, and CNN. It all depends on your definition of based, but Grok does feel more based when it comes to giving direct answers on controversial issues.

However, it won't just comply with every vile and horrific command you had hoped it would. And when it comes to performance on actual benchmarks, it falls somewhere between GPT-3.5 and GPT-4. So it's a very good model, but not quite as good as ChatGPT, and it doesn't have all the bells and whistles like file uploads. Its killer feature, though, one that ChatGPT can't get a hold of, is access to Twitter's firehose of tweets, or X's firehose of Xs, I guess. And that's huge, because the AI can tap into the stream of consciousness of society in real time.
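To make that concrete, here's a rough sketch of how a real-time feed could be folded into a model's prompt, a trick often called retrieval-augmented prompting. The posts and the topic below are made-up stand-ins for the firehose; this is the general shape of the idea, not a description of Grok's actual architecture.

# Rough sketch of retrieval-augmented prompting with a real-time feed.
# The posts are hypothetical; nothing here reflects xAI's implementation.
from datetime import datetime, timezone

def build_summary_prompt(topic: str, recent_posts: list) -> str:
    # Fold recent posts into one prompt asking the model to summarize
    # current events, grounded only in that fresh context.
    timestamp = datetime.now(timezone.utc).isoformat(timespec="minutes")
    context = "\n".join("- " + post for post in recent_posts)
    return (
        "It is " + timestamp + ". Summarize what is happening with '"
        + topic + "' based only on these recent posts:\n" + context
    )

posts = [
    "Grok early access is rolling out to Premium+ subscribers.",
    "Early benchmarks put Grok-1 somewhere between GPT-3.5 and GPT-4.",
]
print(build_summary_prompt("the Grok launch", posts))

The summarizer itself would be a call out to whatever model you're using; the point is that the freshness comes from the context window, not from retraining the model every morning.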

Grok is already the best and fastest AI for summarizing current events, and it provides relevant tweets in its responses, while the best ChatGPT can do is browse the internet with Bing, and half the time it throws a network error. That's pretty cool, but like other LLMs, Grok will hallucinate. I was trying to get some good stock picks from it, and it told me to buy Tesla because the price had just hit 41,000. I immediately went out and bought some zero-DTE calls at 41,000, and now I'm going to have to make more videos because I lost my life savings.

Grok can also write code and does a decent job, but it doesn't really matter, because GitHub Copilot has gotten so good it's hard to compete with. It's now GPT-4 based, has the context of your entire workspace, and can do stuff on the command line, but I'll save all that for a different video. That's also not going to matter, because if Elon's right, the AI God will be here within the next three years: by the time these lawsuits are decided, we'll have a digital God. And Elon's never been wrong about any prediction ever, except when he said we'd have 12 people on the moon this year (go F yourself), or when he said there'd be a million Tesla robotaxis by now (go F yourself), and he did promise to fix our autism with the Neuralink (go F yourself). Oh, and he did predict that Teslas would be level-five fully self-driving by next year, every year since 2014 (go F yourself).

But aside from that, he's basically Nostradamus. And if the AI Godhead does emerge, I certainly hope it's based AF. This has been the Code Report. Thanks for reading, and I will see you in the next one.
