Why the GPT Store isn't very good yet

Headlines You Should Know

Why OpenAI’s GPT Store Isn’t Very Good Yet

OpenAI’s GPT store has been open for nearly two weeks — have you found one you like yet? No? You’re not alone. Its current structure isn’t conducive to finding the best GPTs.

The store lists the top 12 GPTs in each of seven categories, including writing. Many of the writing GPTs are meant either to humanize AI writing or to create SEO-focused blog posts. Each list seems to be a popularity contest: there's no indication the GPTs have been vetted at all, beyond how many chats OpenAI users have had with them. The search function has improved since the store first opened, when it showed only a handful of results, but it's still challenging to find quality GPTs.

OpenAI CEO Sam Altman admitted he's "embarrassed" by GPTs in an interview with The New York Times (before the lawsuit). Most GPTs don't have enough structure around them to be markedly different from just using regular ol' GPT-4, and others serve as lead-gen funnels that maybe give one output, then prompt you to visit an external site and create an account. After looking at plenty of GPTs, we found a lot to be desired.

That's not to say there are no good GPTs. One decent writing GPT is Automated Blog Post Writer, from Octane AI. It prompts the user to dig deeper into a topic before it starts writing, cites its sources with links, and asks for examples of other blogs so it can match tone and voice. For research, two GPTs stand out: Consensus and SciSpace. Both search across more than 200 million academic papers to help draft content that's backed up with accurate citations.

Have you found any GPTs you like? Let us know!

Elsewhere …

Tips and Tricks

How role-based prompting gets better results

What's happening: Large language models can put words together nearly instantly, but the results don't always make sense or read as well as we want them to. That matters especially in public relations, where the content we create is meant to carry a certain voice, speak to a specific audience, and tell a particular story.

Muck Rack’s State of AI in PR report tells us 95% of communications pros edit every AI output, and 61% say those edits are extensive. But there’s a way to save some time on the front end.

Why it's happening: Communications pros are almost always pressed for time, and sometimes that leads us to prioritize the efficiency of AI over the quality of what we're producing. If you simply prompt, "Write me a blog post about how public sector organizations should approach cybersecurity," you'll get a very high-level story that may not have much structure or make much sense.

Preparing the AI tool to play a certain role and speak to a certain audience will produce better results. If you instead write, “Imagine you’re the Chief Information Security Officer for a large suburban county, and you want to offer tips to your peers and those who may have even fewer resources than you do,” the AI tool will implement the voice that’s often missing in vague or short prompts.
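If you're working through the API rather than the chat interface, the role can live in a system message. Here's a minimal sketch, assuming OpenAI's Python SDK (v1+); the model name and prompt wording are illustrative, not prescriptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message carries the role and the audience;
# the user message carries the actual writing task.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Imagine you're the Chief Information Security Officer for a "
                "large suburban county, writing for your peers and for those "
                "who may have even fewer resources than you do."
            ),
        },
        {
            "role": "user",
            "content": "Write a blog post offering cybersecurity tips to "
                       "public sector organizations.",
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern works in any chat-style API: keep the role and audience steady in the system message, and vary only the task in the user message.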

Try this: There are a few pillars to keep in mind when prompting AI tools to produce content. None of this will be perfect all the time, but it can help produce a better first draft more quickly. The four pillars are below, with a sketch of how to combine them after the list.

  • Put the chatbot in the shoes of the speaker by giving it a title and a message to share.
  • Tell it who the audience is.
  • Give as much supporting material as possible.
  • Let it know what tone the content should have.
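To make the pillars concrete, here's a sketch of a reusable prompt template. The helper function and its field names are hypothetical, purely for illustration:

```python
def build_prompt(role: str, audience: str, material: str, tone: str, task: str) -> str:
    """Fold the four pillars into a single prompt string (hypothetical helper)."""
    return (
        f"Imagine you're {role}.\n"                       # pillar 1: the speaker
        f"You're writing for {audience}.\n"               # pillar 2: the audience
        f"Supporting material to draw on:\n{material}\n"  # pillar 3: supporting material
        f"Write in a {tone} tone.\n"                      # pillar 4: the tone
        f"Task: {task}"
    )

# Example usage, echoing the CISO scenario above:
prompt = build_prompt(
    role="the Chief Information Security Officer for a large suburban county",
    audience="peers and organizations with even fewer resources than you",
    material="(paste links, notes, or past posts here)",
    tone="practical, plainspoken",
    task="Write a blog post offering cybersecurity tips to public sector organizations.",
)
print(prompt)
```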

Quote of the Week

“Content creation is table stakes now. Yes, we should be using it for 80% of the first draft, but that is only the start of where AI is going to impact our industry over the course of this year, let alone the next two or three.”

— Antony Cousins, Executive Director of AI Strategy at Cision on The Disruption Is Now

How Was This Newsletter?

