Summarizing with Generative AI: four things to consider

Summarization is often called out as one of Generative AI's serious muscles, and for good reason. Its ability to take very large amounts of information and source material and turn it all (in a blink) into a tidy, accessible and consumable summary is quite stunning. In a world where humans are increasingly struggling to cope with firehoses of information and data, GenAI's capacity for summarizing can feel like a godsend.

In my Aibly Executive Coaching sessions, it inevitably comes up early as a strategic human-AI interaction opportunity for professional, leadership and management work.

If you're using ChatGPT, Claude or other Generative AI tools to help you summarize content, here are four factors that might make some difference to the results you are getting.

1. Give your AI a professional role to play when summarizing

One of the most effective yet overlooked techniques is assigning your AI a specific professional role when asking it to generate summaries. This can help shape how the AI approaches and presents the information at hand.

For example, instead of just saying "summarize this document," try:

  • As a research assistant, summarize this paper focusing on methodology and key findings
  • As a business analyst, summarize this market report highlighting the most relevant actionable insights
  • As a magazine reporter, summarize this event in an engaging, news-style format
  • As a teacher explaining this to middle school students, summarize the key subject matter worth paying attention to

As you can probably see, the role you choose is likely to help the AI understand:

  • Particular aspects of the input information to emphasize
  • The language and tone to use
  • How to structure and present the information
  • What potential background knowledge to assume

If possible, match the role for your AI to your intended audience and purpose. If you need technical detail, consider assigning a more technical-style role. If you need broader accessibility, choose a role that specializes more in clear communication.
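
If you ever script your summaries rather than typing them into a chat window, the role can go straight into the model's system message. Below is a minimal sketch using the OpenAI Python SDK; the model name, the placeholder document, and the summarize_as_role helper are my own illustrative assumptions, not features of any particular tool.

```python
# Minimal sketch: role-based summarization via the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in the environment, and the model name
# and document text below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

def summarize_as_role(role: str, document: str) -> str:
    """Ask the model to summarize `document` while playing the given professional role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model should work
        messages=[
            # The professional role goes in the system message...
            {"role": "system", "content": f"You are a {role}."},
            # ...and the summarization request plus the source text go in the user message.
            {
                "role": "user",
                "content": f"Summarize the following document for your usual audience:\n\n{document}",
            },
        ],
    )
    return response.choices[0].message.content

# Same document, two different professional lenses.
report = "..."  # your source material goes here
print(summarize_as_role("business analyst highlighting actionable insights", report))
print(summarize_as_role("teacher explaining this to middle school students", report))
```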

2. Specify between 'extractive' and 'abstractive' summaries

Generative AI can produce two fundamentally different kinds of summaries, and with vague or generic prompting it will simply default to one of them. Knowing what these two kinds are, and which one to ask for specifically, can make a big difference.

Extractive Summaries:

  • Extract key sentences and points directly from the source material
  • Maintain original wording and specific details
  • Can be best for things like legal documents, scientific papers, research reports, or when exact wording is likely to matter more
  • Can be obtained with prompts like: "Please provide an extractive summary with direct quotes from the most important passages"

Abstractive Summaries:

  • Rewrite and synthesize the information in new words and expressions
  • Combine and connect ideas from different parts or texts
  • Can be best for obtaining the general idea, making complex information more accessible, or combining multiple sources
  • Can be obtained with requests like: "Please complete an abstractive summary that synthesizes the main ideas in your own words"

As mentioned, vague requests or instructions here are likely to result in abstractive summaries from Generative AI. For important and critical work, it's a good idea to use both: get an abstractive summary for a quick high-level understanding, then an extractive one to verify specific details.
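
If you are scripting this, the difference between the two is simply the instruction you send. A small sketch along the same lines as the earlier example (again with a placeholder model name and source text) might look like this:

```python
# Sketch: requesting both an extractive and an abstractive summary of the same text,
# so the abstractive one gives the quick overview and the extractive one supports detail checks.
# Assumptions: OPENAI_API_KEY is set; the model name and source text are placeholders.
from openai import OpenAI

client = OpenAI()

EXTRACTIVE_PROMPT = (
    "Please provide an extractive summary with direct quotes "
    "from the most important passages of the text below.\n\n"
)
ABSTRACTIVE_PROMPT = (
    "Please provide an abstractive summary that synthesizes the main ideas "
    "of the text below in your own words.\n\n"
)

def summarize(instruction: str, document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": instruction + document}],
    )
    return response.choices[0].message.content

source_text = "..."  # the document you want summarized
overview = summarize(ABSTRACTIVE_PROMPT, source_text)   # quick high-level understanding
details = summarize(EXTRACTIVE_PROMPT, source_text)     # exact wording for verification
```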

3. Understand risks and limitations of Generative AI summaries

While Generative AI can absolutely be powerful and helpful for summarization, it's important to bear in mind some potential limitations.

Generative AI can get a variety of things wrong when summarizing, including by:

  • Missing crucial context or nuance
  • Oversimplifying complex relationships between ideas
  • Occasionally "hallucinating" or adding incorrect information
  • Misinterpreting technical or specific terminology
  • Missing more subtle meanings or things like irony or sarcasm

Some 'red flags' you might like to watch out for in outputs include:

  • Summaries that seem too neat or simple given the complexity of the topic or the input documents
  • Conclusions that don't quite match what you already know about or from the source
  • Generic-feeling language where you were expecting more specific detail or nuance


Experts have issued a range of warnings and recommendations about over-estimating the accuracy of, or over-relying on, Generative AI summaries.

Northwestern's Professor Kristian Hammond (Director of The Center for Advancing Safety of Machine Intelligence) argues in The AI Summarization Dilemma: When Good Enough Isn’t Enough that while AI summarization can be a valuable tool for managing information overload, we need to evaluate carefully when to use it, based on how critical accuracy is in the specific situation and on whether anyone will actually verify the AI's work (since people often use AI precisely to avoid reading the original content).

A recent New York Intelligencer article from John Hermann, The Future Will Be Brief, reinforces some of the same concerns. Hermann describes how major tech giants are rushing to add AI summarization features to everything from hiking trail guides to text messages, and notes that while these tools can certainly handle large documents well, they frequently mishandle things like personal communications that were already concise and where accuracy can be crucial.

Some Best Practices for Managing AI Summarization Risks:

Always cross-reference critical information with the source material, especially for:

  • Legal or medical content
  • Financial data
  • Technical specifications
  • Contractual terms

Use multiple prompting approaches for important summaries:

  • Request both extractive and abstractive summaries
  • Try different professional role perspectives
  • Compare results across multiple attempts

Implement a verification strategy:

  • Spot-check key facts against the original text (see the small code sketch after this list)
  • Have subject matter experts review AI summaries of technical content
  • Use different Generative AI tools to compare and fact-check each other's responses against the original content
  • Use AI summaries as a starting point, not the final product
  • Be transparent in your use of Generative AI for summaries you are distributing or contributing to, and ask your stakeholders for the same level of transparency
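
One simple, partly automatable spot-check is to confirm that the 'direct quotes' in an extractive summary actually appear in the source text, and to read anything that does not match more carefully. The sketch below is a naive illustration of that idea (it assumes quotes are wrapped in double quotation marks), not a substitute for human review.

```python
# Naive sketch: flag quoted passages in an extractive summary that never appear
# verbatim in the source text, so they can be checked by hand.
import re

def find_unverified_quotes(extractive_summary: str, source_text: str) -> list[str]:
    """Return quoted passages from the summary that are not found verbatim in the source."""
    # Assumption: quotes are wrapped in straight or curly double quotation marks.
    quotes = re.findall(r'["\u201c](.+?)["\u201d]', extractive_summary)
    normalized_source = " ".join(source_text.split()).lower()
    suspect = []
    for quote in quotes:
        normalized_quote = " ".join(quote.split()).lower()
        if normalized_quote not in normalized_source:
            suspect.append(quote)  # possible paraphrase or hallucination; review manually
    return suspect

# Example with placeholder text:
source = "The committee approved the budget. Spending rose by 4% year on year."
summary = 'Key points: "Spending rose by 4% year on year" and "the budget was rejected".'
print(find_unverified_quotes(summary, source))
# -> ['the budget was rejected']  (never appears in the source, so check it by hand)
```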

Carefully consider the stakes:

  • For low-stakes summaries (like blog posts), AI summaries might be sufficient
  • For high-stakes content (legal, medical, financial), use AI as one tool among many
  • When accuracy is crucial, treat AI summaries as a first draft requiring human verification

4. Keep your own summarizing muscle alive and well

While Generative AI can be something of a game-changer for summarization, and even if you can very confidently mitigate its main risks and limitations, I think it's important not to become overly reliant on it. Remember that your unique context, experience, and understanding often allow you to catch nuances and make connections that AI might miss. But more importantly, your human ability to comprehend, analyze, and summarize information is a crucial professional skill that should not be handed off to a machine, and that needs to be maintained and developed.

Based on that, I recommend:

  • Continuing to regularly practice manual/human summarization, especially for important content
  • Using AI as a complement to, not replacement for, your own analysis
  • Constantly developing and iterating your own frameworks for identifying key information
  • Continuing to build expertise in your field to better recognize what matters most
  • Resisting the temptation to delegate summarizing and other critical thinking skills to something like Generative AI at every available opportunity

The rapid evolution of AI summarization tools offers seriously exciting possibilities for managing our growing information overload. But as with any powerful capability, the key is in understanding and using it thoughtfully and appropriately. Assigning professional roles to the AI, balancing extractive and abstractive summary types, appreciating the major limitations and risks, and maintaining our own summarizing and thinking skills are some of the key ways we can continue to learn how to better harness AI's summarizing potential while also safeguarding quality, reliability and human ingenuity.

*


Claude AI and ChatGPT were both used to critique and improve on early drafts of this article. They both wanted me to transform (even) more of it into dot points - but I resisted. They were also used to compare my own summaries of and takeaways from the Hammond and Hermann articles with their own analyses (which, on the whole, were more succinct and accurate in detail than mine but missed some of the potential connection and reinforcement between the two that I had made).

The post image was generated in ChatGPT and worked off some early thinking about 'seeing the forest for the trees for the forest.' I'm wondering to what degree the 'summary' of the forest in the middle is based on extraction or abstraction. Perhaps a mix of both?
