How Artificial Intelligence Adds Value To The Research Process

Let’s dive into an example of AI making our daily life easier and enhancing our job performance - all while creating a small amount of value each time that adds up to massive total value. The process of performing research and gathering information has evolved drastically over time, driven by advances in both information management and analytics. Of course, large language models and generative AI are the latest innovations to further streamline how we gather information. Below, I’ll first review a common research process, then show how the execution of that process has changed over time, and explain why the full impact of that progress is being underestimated.

The Basic Research Process

Let’s say I need information on anything from a product, to an event, to an analytical method, to anything else. Generally, there are six steps that I must execute to get to a satisfactory result. I must:

1) Define what information I am looking for

2) Identify sources that may have all or part of the information I need

3) Review and read through the identified sources to understand what they contain

4) Extract the relevant pieces of information found during that review

5) Consolidate and summarize all the relevant information extracted from all of the sources

6) Determine specific options for how to act (or not) based on that consolidated summary

Over time, we’ve gone from a world where a human had to do all of those steps to today, where the steps can be either entirely or mostly handled for us through search engines, large language models, and generative AI. Let’s look at the evolution.

The Old Days

We’ll define the old days as anything prior to the early 2000s, when the web and search engines became ubiquitous. For example, early in my career we were pretty much on our own to figure out how to solve a coding problem. I had shelves full of detailed product and language manuals that were my main source of information. I could also ask a few local coworkers or call a product help desk. Outside of that, it was on me to either find what I needed from those limited resources, to figure it out on my own, or to fail. Similarly, if I needed information to support a school paper, I had to go to the library and personally seek out books to examine, limited to whatever that specific library happened to stock.

In the old days, then, I had to do all six steps myself. For a coding problem, that included manually scanning the tables of contents in the manuals, reading the relevant sections, and possibly also scanning the folders of code on my computer to find something I recalled from the past that could pertain to my current need. It was manual, time-consuming, and tedious. Worse, the process accessed a very small base of information.

The Search Engine & Sharing Era

From the early 2000s until recently, research and information gathering changed dramatically. First, a massive trove of documents, articles, and code was uploaded to the web. Then, search engines did their magic to index and tag those documents to make them easy to find. In addition, widespread sharing of knowledge through sites like GitHub and social media platforms enabled us to identify and interact with countless other people who could provide relevant guidance.

This made step 2) very easy. It also expedited steps 3) & 4) whenever someone who had previously faced a question like ours had documented a summary of what they found. In other words, we could find very relevant information from very relevant people very quickly. However, we still had to consolidate and summarize what we found across those conversations, documents, and code samples and decide how to act on the information.

The AI Age

Since 2023, artificial intelligence has taken things even further. Large language models can now take those same documents and code examples found on the web and almost fully execute steps 3), 4), and 5) while providing significant help in executing step 6).

The models still begin, for instance, by interpreting my prompt, matching it against the document repository, and identifying the top 10 documents that appear most relevant to my question. But they don’t stop there. Today’s language models take it further and consolidate and summarize the information in those documents into a nice, concise narrative for me. Google’s new AI Overviews feature is an example of this. What’s more, we can also ask the LLM for suggested actions to take based on that summary. While the suggestions might not be perfect, they are a great starting point. Count step 6) as expedited but not fully automated.
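To make that concrete, here is a minimal sketch of what the consolidate-and-summarize step can look like in code. It assumes the OpenAI Python client and a hypothetical list of source texts already gathered in step 2); the model name, prompt wording, and helper function are illustrative assumptions, not any particular product’s implementation.

```python
# Minimal sketch of automating steps 3) - 5): review sources, extract what is
# relevant, and consolidate it into one summary. All names here are illustrative.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_sources(question: str, docs: list[str]) -> str:
    """Ask an LLM to pull the relevant pieces from each source and summarize them."""
    numbered = "\n\n".join(f"Source {i + 1}:\n{doc}" for i, doc in enumerate(docs))
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract only the information relevant to the user's question "
                    "from the provided sources and consolidate it into a concise summary."
                ),
            },
            {"role": "user", "content": f"Question: {question}\n\n{numbered}"},
        ],
    )
    return response.choices[0].message.content


# Hypothetical usage, once the sources from step 2) are in hand:
# print(summarize_sources("How do I profile a slow SQL query?", [doc1, doc2, doc3]))
```

A production research assistant would also handle step 2) itself (search and retrieval) and cite which source each claim came from, but even this small wrapper captures the review, extract, and summarize loop described above.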

Assessing The Value

Today, after defining what information we need, we can completely automate steps 2) – 5) and partially automate step 6). Better yet, we’re able to perform those steps in just seconds while considering a far larger set of base documents and knowledge than ever before. By getting detailed answers so quickly, we can iterate and ask more questions to get a better result, faster than before.

People tend to focus on the flashy examples of AI being used in some novel or creative way. However, I think that the amount of value that will come from the automation of research is far larger than most people realize. Billions of people will save time on all of their research endeavors. While the value of each instance is small, the total across those efforts represents massive value!
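As a rough illustration of how those small savings compound, consider a back-of-the-envelope calculation. The numbers below are purely hypothetical assumptions chosen for illustration, not figures cited anywhere above.

```python
# Back-of-the-envelope estimate of aggregate time savings.
# Every input is a hypothetical assumption, not a figure from the text above.

users = 1_000_000_000            # people occasionally doing AI-assisted research
questions_per_week = 5           # research questions per person per week
minutes_saved_per_question = 10  # time saved per question vs. fully manual research

hours_saved_per_year = users * questions_per_week * 52 * minutes_saved_per_question / 60
print(f"{hours_saved_per_year:,.0f} hours saved per year")
# -> 43,333,333,333 hours saved per year under these assumptions
```

Even if each of those inputs is off by a wide margin, the aggregate remains enormous, which is exactly the point about many small savings adding up.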

This simple research example is one of AI making our daily life easier and enhancing our job performance – all while repeatedly creating small amounts of value that add up to something very significant. I know that I don’t miss having to execute the entire research process myself. I’m happy to let AI handle this type of task for me. Aren’t you?

Brian Costello, MBA, PMP

Analytics development leader mastering data-driven insights to enable operational improvement and transformation

4 months ago

Thanks. AI can make things so much easier. However, it all falls apart if the inputs to the research process in Step 2 involve inaccurate or skewed sources.

Richard Hackathorn

Wandering in Latent Space

4 months ago

Here is a good example of what Bill is describing... Ilya Shabanov as The Effortless Academic https://effortlessacademic.com/. I have been watching Ilya for several years. He seems to be serious and thorough in his academic research methodology using the latest analytic tools. He is offering a webinar for $40 on Oct 26. The nugget here is his "research maps" based on comprehensive literature searches. Note that the focus is NOT on the search tools (Connected Papers, LitMaps, or ResearchRabbit) but on what you DO after gathering ALL that information. He highlights the importance of the "research questions" currently being pursued as a method for discovering those that are NOT. Here is an earlier substantive article: https://effortlessacademic.com/a-visual-strategy-for-finding-research-gaps/ ...probably the webinar content updated. PS: hint... follow the open Obsidian community and its fluid extensions using HuggingFace-like tools. Also, watch any AI-assisted research literature organizers using OpenAI ChatGPT 4o with the new Canvas UI workspace. There will be amazing progress in this area over the coming few years.

Bill, thanks for this brief summary. I agree that AI (et al) continues the modern era march toward minimizing the time required to accomplish certain tasks. Do you have any concerns/caveats about AI’s use?
