Episode 2

When I first asked ChatGPT to write a technology byline about a new industry standard (a byline I had already written for a client), the result was useless. For my second attempt, I gave it more specific instructions: I clarified what the main point of the byline should be and explicitly stated the need to focus on end-user benefits. As before, I provided the app with the same input I had received from my client and instructed it to produce an article based on “the input below.”

This time, the result was on target in terms of messaging, but badly written. The draft was full of overblown phraseology, e.g. [the standard] “heralds a new era… pushing the boundaries… a quantum leap” and so on. It also repeated benefits, in identical language, in more than one place.

With heavy editing, this draft might be client-viewable.

Today’s lesson is that learning how to get acceptable results from ChatGPT means understanding what you need to tell it and what it can figure out by itself. I’m not there yet. Stay tuned.
