From Overload to Understanding: Tinkering with Notebook LM, Reimagining Biographical Research and Social Science

In a world overwhelmed by data, the challenge isn’t just collecting information—it’s making sense of it and transforming it into actionable insights. Over the past few days, I’ve been experimenting with Google’s new Notebook LM, a tool powered by the Gemini language model, to explore whether it can help manage the relentless flood of information. Unlike a typical chatbot, Notebook LM acts more like a research assistant, synthesizing your uploaded content into tailored insights, concise summaries, and creative connections. It’s still early days for generative artificial intelligence (AI), but I’m excited to share my reflections on how tools like this could reshape how we approach applied research in social science and learning for life.

A Background in Applied Research and Practice Improvement

Over a 25-year career, I’ve navigated various disciplines, methodologies, and institutions. From studying how urban educators make sense of school improvement to examining how social capital shapes education and the workforce, I’ve gathered, analyzed, and disseminated my share of data. My doctoral research focused on building more rigorous and scientific measurement systems.

Throughout my career, I’ve contributed to research at the Federal Reserve Bank of New York, served as a MacArthur Fellow with the ETS Gordon Commission on the Future of Assessment, and collaborated with institutions like Carnegie Mellon, Oxford University, and UCLA’s CRESST. As a parent, special education teacher, principal, and superintendent, I’ve sifted through mountains of data, integrating varied sources to inform decisions and strategies to enhance efficacy and opportunity.

Taking stock, the sheer scope of the information leveraged is disorienting: recordings of thousands of hours of interviews, themes from tens of thousands of meetings and questions, millions of pages of archival documents and publications, and the analysis of tens of millions of data points—ranging from surveys and test scores to observational data and stakeholder feedback. In the last decade, I’ve expanded my focus to include user-centered design research, using approaches like design charrettes, usability studies, A/B testing, and co-design sessions. These methods introduced even more data formats—video, sensor, and clickstream data—all aimed at making hidden patterns visible to support real-time feedback, iterative improvement, and smarter resource allocation.

Yet, even with these advancing tools, sense-making often feels like driving with powerful headlights through a blinding storm, where the glare further impairs visibility. The constraints of time, technical capacity, and human resources can make it challenging to turn data into meaningful insights.

The Weight of the World: Managing Complex Data

Working with diverse data modalities—audio, video, images, text, and handwritten notes—means dealing with a range of logistical and interpretive challenges. Each format presents unique hurdles: the nuances of meaning in audio can be lost in transcription, handwritten annotations may fade or become illegible in digitized scans, and patterns only surface after exhaustive review. Beyond these technical barriers is the constant pressure to maintain integrity and accuracy while piecing together a coherent narrative.

Weaving these disparate threads into a credible and reliable story can become overwhelming—sifting through transcripts, tracking insights, and wrestling with incompatible software. But it’s about more than just managing data; it’s about doing justice to the lives and legacies the data represents. Responsible research must honor the voices and experiences captured in these records, safeguarding privacy and staying true to context. This burden isn’t just logistical—it’s deeply ethical.

Enter Multimodal AI

Twenty-five years ago, transcribing a single interview was a grueling process, with each hour of audio taking hours to convert manually. Over the next two decades, incremental advancements in machine learning slowly improved the speed and accuracy of transcription. But the last few years have been anything but incremental. The rise of multimodal AI represents a transformative leap forward, enabling systems to simultaneously process and integrate diverse data types—text, audio, images, and video. Early models made steady progress within individual modalities, but today’s systems synthesize and express information across these formats, unlocking entirely new capabilities.
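To make that shift concrete, here is a minimal sketch of what automated transcription looks like today, using the open-source Whisper speech-to-text model purely as an illustration (the model choice and file name are my own assumptions; Notebook LM handles this kind of ingestion behind the scenes):

  # Minimal sketch (Python): transcribing one interview with the open-source
  # Whisper model (pip install openai-whisper). The file name is hypothetical.
  import whisper

  model = whisper.load_model("base")  # small, general-purpose speech model
  result = model.transcribe("gordon_interview_1999.mp3")

  print(result["text"])  # the full transcript as plain text

  # Timestamped segments make it easy to trace a quote back to the recording.
  for segment in result["segments"]:
      print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")

What once took a researcher hours per recorded hour now runs unattended on a laptop.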

For instance, in healthcare, AI can now cross-reference patient histories with medical images and genetic data to deliver more accurate diagnoses and personalized treatment recommendations. This ability to connect previously siloed data types opens up new possibilities for AI-system functionality.

Experimenting with Notebook LM

Against this backdrop, I started using the newly released Notebook LM—Google’s experimental AI research tool designed to process and synthesize content from multiple sources like PDFs, videos, Google Docs, and web pages. Powered by the Gemini model, it integrates text, audio, video, image, and web content into a single interactive workspace, generating summaries, answering detailed questions, and even creating podcast-style audio overviews. This capability allows it to transform dense material into digestible insights almost instantly. While it’s still early days, using Notebook LM feels like a glimpse into what’s possible for AI-driven research tools.

Learners for Life

My Learners for Life Substack explores the life, legacy, and wisdom of Professor Edmund W. Gordon, distilling his insights into lessons on decency, service, and purpose. The goal is to honor his contributions while drawing lessons for today’s educators, leaders, and lifelong learners. To support this, I’ve just begun using Notebook LM to organize and analyze our extensive biographical research—including hundreds of hours of audio and video and tens of thousands of historical documents and notes—making sense of a lifetime of transformative contributions.

I’ve also experimented with Notebook LM’s “Deep Dive” audio notes, which act more like research memos than formal publications. These recordings synthesize archival sources, videos, and handwritten materials, offering an interactive way to engage with complex content. The conversational style and seemingly human “AI hosts” risk creating false confidence in the material, blurring the line between grounded insights and speculative interpretations. I’m sharing these experiments to gather feedback on whether this format enhances or complicates the research process.

First Season of Edmund W. Gordon: Reflections

Check out the first season of the Edmund W. Gordon: Reflections podcast series—10 episodes capturing powerful insights. Please listen, share, and send me your reflections!

Malik Boykin’s Reflections on A Future Invitation to Legacy: Mentorship, Vision, and Affirmative Development

https://spotifyanchor-web.app.link/e/b9m40zxaqNb

Edmund W. Gordon Reflects on Alain LeRoy Locke

https://spotifyanchor-web.app.link/e/H8Yrb8cDrNb

Edmund W. Gordon’s Reflections on Howard Thurman

https://spotifyanchor-web.app.link/e/OgFDb6z8rNb

Reflections from Eleanor Armour-Thomas on Educative Assessments in the Service of All Learners

https://spotifyanchor-web.app.link/e/vj7OM6z8rNb

Reflections on Edmund W. Gordon at 100, through the Eyes of his Children

https://spotifyanchor-web.app.link/e/TO73c0EipNb

Reflections on Pedagogical Imagination, written by the adult children of Susan G. and Edmund W. Gordon: Jessica Gordon Nembhard, Christopher W. Gordon, Edmund T. Gordon and Johanna S. Gordon

https://spotifyanchor-web.app.link/e/x5jzp0EipNb

Eleanor Armour-Thomas' Reflections from the Foreword to Pedagogical Imagination

https://spotifyanchor-web.app.link/e/aL4OZEmnpNb

Carol Camp Yeakey’s Reflections on Edmund W. Gordon

https://spotifyanchor-web.app.link/e/JakcMEmnpNb

David Wall Rice Reflects on Edmund W. Gordon

https://spotifyanchor-web.app.link/e/bLspG5z8rNb

Haki R. Madhubuti: A Conceptual Tribute to Edmund W. Gordon

https://spotifyanchor-web.app.link/e/OyEOr6z8rNb

The Promise and Perils of AI in Social Science

At the Study Group, we focus on how AI can drive scientific breakthroughs and accelerate public innovation in delivering education, workforce, and other public services. AI has the potential to enhance agencies’ ability to evaluate and refine policies, leading to more thoughtful, effective programs that better serve society. With responsible and equitable use, AI can address urgent challenges like improving health outcomes and predicting extreme weather while also unlocking new frontiers in scientific discovery—from the mysteries of the universe to the evolution of life itself.

When designed and used responsibly, AI can remove barriers that make scientific research slow and costly, democratizing knowledge and bringing diverse voices to the forefront of enhancing opportunity, learning, and career navigation. By providing rapid solutions—such as identifying promising drug candidates—AI can unlock previously out-of-reach insights. Ultimately, AI has the potential to transform how science is done, empowering researchers to harness its strengths while mitigating its risks.

Recommendations:

  • Establish Responsible and Transparent AI Practices from the Start: Embed principles of trustworthy AI use at every stage of research to proactively manage risks like bias, inaccuracies, and non-replicable results. Building these safeguards in early helps ensure ethical, reliable, and high-quality insights.
  • Encourage Innovative Human-AI Collaboration: Use scientific research as a testing ground for integrating AI into workflows. The focus should be on enhancing human expertise, not replacing it, enabling researchers to achieve high-quality results through responsible AI support rather than maximum automation.

As we continue experimenting with tools like Notebook LM, the question is not simply whether AI can process more data, but whether it can deepen our understanding and transform our approach to research, learning, and the shared pursuit of knowledge. With careful design and responsible use, AI can redefine what’s possible in social science and beyond.


Notes:

#DataToInsights, #ResponsibleAI, #BiographicalResearch, #SocialScience

Tagging colleagues interested in AI, research, and the future of learning: Stefan Bauschard, Alan Coverstone, Ed Dieterle, James L. Moore III, Elizabeth Albro, PhD, Tom Vander Ark, Brad Bernatek, Jonathan Flynn, Lauren Cutuli, Jonathan McIntosh, Alex Iftimie, John Hines, Steve Stein, Vic Vuchic, Amir Ghavi, Erik Burmeister, Jerry Almendarez, Nirmal Patel, Derek Lomas, Cherian Koshy, Chris Baron, Nick Freeman, Kristen Eignor DiCerbo, Jacob Kantor, John Bailey, Kumar Garg

Tagging colleagues interested in biographical research on Professor Edmund W. Gordon: Yoav Bergner, Elena Silva, C. Malik Boykin, Ph.D., Edmund W. Gordon, Arnold F. Fege, E. Wyatt Gordon, Ilya Goldin, Ken Wright, Susan Lyons, Erik Hines, Kristen Huff, Maria Elena Oliveri, Jeremy Roschelle, Sam Abramovich, Jessica Andrews-Todd, Adam Gamoran

Tagging colleagues interested in debate research: Les Lynn, Brian Lain, David Song, Nick Coburn-Palo, Myra Milam, Danielle Leek, PhD, Briana Mezuk, Adrienne Brovero, Jairus Grove, Ravi Rao, Greg Achten, Edward Williams, Lexy Green, Luke Hill, Amy Cram Helwich, Aaron Timmons, Gordon Mitchell, Scott Harris, Gordon Stables, Sue Peterson, Dan Lingel, David Cheshier, Ed Lee III, Joel Rollins, Marie Dzuris, Eric Emerson, Dmitri Seals, James Roland, Wayne Tang, Kevin Kuswa, Dan Shalmon, Jonathan Paul, Michael Janas, Allen Isaac, Michael Klinger, Adam J. Jacobi

Comments:

Serhii Skoromets
AI consultant and advisor | AI business integration expert | Helping companies match AI/ML tech with business requirements

Can AI's deep thinking assist human scholars? Probing possibilities.
Eric Tucker
Leading a team of designers, applied researchers and educators to build the future of learning and assessment.

Read more at the article linked above and at: https://substack.com/@learnersforlife/note/p-149851996
