Summarization and Prompting
[Header image: "Summarization," generated by DALL·E 2]


I recently came across a preprint from Griffin Adams et al. that covered a new approach called Chain of Density for generating summaries with GPT-4. It improved on the baseline and allowed the output to be tuned to fit human preferences. The approach itself is very understandable: essentially, start with a relatively sparse summary, then iteratively make it denser by folding in more entities without letting it grow longer.
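The iterative loop described above can be sketched in a few lines of Python. This is a paraphrase of the idea, not the preprint's actual prompt wording, and `call_llm` is a hypothetical stand-in for whatever chat-completion call you use:

```python
def chain_of_density(article: str, call_llm, rounds: int = 5) -> list[str]:
    """Return a list of increasingly dense summaries of `article`.

    `call_llm` is any function that takes a prompt string and returns
    the model's text response; the prompt text below paraphrases the
    Chain of Density idea rather than quoting the preprint.
    """
    # Round 1: a deliberately sparse, entity-light starting summary.
    summary = call_llm(
        "Write a short, entity-sparse summary (about 80 words) of the "
        f"following article:\n\n{article}"
    )
    summaries = [summary]

    # Subsequent rounds: add missing entities at constant length,
    # so each rewrite is denser than the last.
    for _ in range(rounds - 1):
        summary = call_llm(
            "Identify 1-3 informative entities from the article that are "
            "missing from the previous summary, then rewrite the summary "
            "to include them WITHOUT increasing its length.\n\n"
            f"Article:\n{article}\n\nPrevious summary:\n{summary}"
        )
        summaries.append(summary)
    return summaries
```

Keeping every intermediate summary is what makes the method tunable: a reader (or a preference model) can simply pick the density round that best matches their taste.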


The approach itself is interesting, but as someone who primarily follows the biology / bioinformatics space, the publication caught my imagination for another reason. I'd personally laughed a bit at the concept of "prompt engineering" of LLMs as a serious pursuit. Of course, learning to prompt well is useful, but in the past this seemed at first like trial and error, and later, extremely specific to the system at hand. Someone capable of creating a good prompt for Midjourney couldn't always take the same one to DALL-E and generate a consistent result.


This preprint, on the other hand, shares the prompt used to create this result, and it's essentially a string of logical processes written in English. I'm sure you could throw it at an earlier LLM and it would fail, but the reasoning is clear enough that I don't think you'd have to worry about taking it to whatever GPT-5 is going to be and having it break. This particular case has me thinking that the future of prompt engineering is going to look a lot less like machine programming, and a lot more like traditional logic.
