Garbage In, Gold Out!
Generative AI offers new ways to help turn data garbage into business gold

We’ve all heard the expression “garbage in, garbage out” when it comes to data systems. But Generative AI brings a big caveat, and a big new opportunity.

Data remains the single most important factor in the usefulness of AI systems. Algorithms are becoming a commodity, so the biggest differentiator is the quantity, quality, and relevance of the underlying data set. And the better the data, the easier it is to create quality outputs.

But there’s an important distinction between the underlying data and the way it’s actually recorded and stored. Real-world systems see the world through a cracked and smudged lens. Yet even if each point of light is dubious, we can still get an overall impression of what’s going on.

For example, if your IoT sensors are recording random numbers, you obviously can’t get anything useful out of them. But if they’re “just” inaccurate, with the real data hidden behind a veil of noise, the result is still potentially usable with the right statistical techniques. Machine learning algorithms can capture the underlying patterns that (probably) generated the observed, messy data.
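
To make this concrete, here is a toy sketch (my own illustration, not from any real deployment) in which a simple rolling average recovers the pattern hidden behind sensor noise; the signal shape, noise level, and window size are all assumptions:

```python
# Toy illustration: a true signal hidden behind sensor noise can often be
# recovered with simple statistics. A rolling average exposes the trend
# that individual noisy readings obscure.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
true_signal = np.sin(t)                               # the hidden pattern
readings = true_signal + rng.normal(0, 0.5, t.size)   # noisy sensor output

window = 25
smoothed = np.convolve(readings, np.ones(window) / window, mode="same")

print("mean error, raw readings:", np.abs(readings - true_signal).mean())
print("mean error, smoothed:    ", np.abs(smoothed - true_signal).mean())
```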

Now new Generative AI technologies are providing another huge step forward in dealing with imperfect data.

Large language models are very good at dealing with some types of messy data. For example, researchers have shown that models like GPT-4 can decipher even very scrambled sentences:

Researchers in Japan showed that GPT-4 can almost perfectly handle scrambled text

A personal example: my daughter recorded a short section of her economics class (with permission). The quality was awful—the teacher’s voice was almost completely drowned out by the sound of my daughter typing and other background noise. I personally couldn’t really hear what he was saying.

I ran the recording through OpenAI's open-source transcription algorithm Whisper, using the slowest and most sophisticated model available. It did a good job of deciphering many of the spoken words, but there were gaps, a few words that were clearly incorrect, and the result was hard to follow (the teacher had a tendency to digress and circle back).
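
For reference, the transcription step looks roughly like this with the open-source whisper Python package; the file name is a placeholder, and "large" is the slowest, most accurate model size:

```python
# Minimal sketch of the transcription step with OpenAI's open-source
# Whisper package; "large" is the slowest and most accurate model size.
import whisper

model = whisper.load_model("large")
result = model.transcribe("economics_class.m4a")  # placeholder file name
print(result["text"])
```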

I took the transcript and put it into ChatGPT 4, asking it to “take the text and put it into sentences”. As if by magic, out popped a restructured, clear, three-paragraph summary of the economic points the teacher had discussed. It wasn’t what he said, but it was a lot closer to what he meant.
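
I used the ChatGPT web interface, but the same clean-up step can be scripted. A rough sketch with the OpenAI Python SDK (the model name, file name, and prompt wording here are illustrative, not exactly what I ran):

```python
# Rough sketch of the clean-up step via the OpenAI Python SDK (v1.x)
# instead of the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("transcript.txt") as f:  # placeholder file name
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Take the text and put it into sentences:\n\n" + transcript,
    }],
)
print(response.choices[0].message.content)
```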

Large language models are good at figuring out what we meant, and the principle applies to many real-world data problems.

For example, machine learning is already used to extract information from documents such as invoices: the date, amount, supplier ID, etc. But these models require lots of training data and don't generalize very well: if you try to use them on an invoice layout the model hasn't seen before, it may get stumped. By adding generative AI, the system becomes much more effective at dealing with edge cases and novel layouts.
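
As a hypothetical sketch of that fallback (the field names and prompt are my own assumptions, not a production design), an LLM can be asked to return the fields as JSON, with null for anything that genuinely isn't in the document:

```python
# Hypothetical sketch: ask a large language model to pull structured fields
# out of invoice text when a trained extraction model is stumped by a new
# layout. Explicitly allowing null discourages the model from inventing values.
from openai import OpenAI

client = OpenAI()

with open("invoice_ocr.txt") as f:  # placeholder: OCR output of an invoice
    invoice_text = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Extract the invoice date, total amount, and supplier ID from the "
            "text below. Reply as JSON with keys date, amount, supplier_id, "
            "and use null for any field that does not appear.\n\n"
            + invoice_text
        ),
    }],
)
print(response.choices[0].message.content)
```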

There are dangers, because these models are designed to synthesize what "should" or "could" be there, not just analyze what is actually there. In the examples above, the result may include points the economics teacher never made, or a supplier ID even when none appears in the document.

Figuring out how to avoid such "hallucinations" is currently the leading edge of AI research, with approaches that include asking the model to double-check itself, averaging out the results of several instances of the model, or adding an independent verification model as an extra check.
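
One of those approaches, averaging out several instances (often called self-consistency), fits in a few lines: sample the model repeatedly and only trust an answer that a clear majority of runs agrees on. A minimal sketch, assuming the OpenAI SDK for the model call:

```python
# Minimal sketch of "averaging out several instances": sample the model
# several times at non-zero temperature and keep an answer only when a
# clear majority of the runs agrees on it.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask_model(question: str) -> str:
    # temperature > 0 so repeated runs can disagree when the model is unsure
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=1.0,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content.strip()

def self_consistent_answer(question: str, samples: int = 5) -> str | None:
    answers = [ask_model(question) for _ in range(samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count > samples / 2 else None  # None means "don't trust it"
```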

But overall, generative AI is a great new opportunity to open up more data in new ways, to rethink what data sources are available and how they can be used to improve processes, and to turn what looks like data garbage into business gold.

Comments

Being able to 'gloss over' the little errors people make and still understand what they meant is going to be a really useful tool - excited to see where this all goes next.

Eran Adi Cioban

AI Lead & Director @ The MOFET Institute | Digital learning, GenAI

1y

Timo Elliott great use case. I've used this expression many times in relation to GenAI, but you made me question myself. I'm grateful for that.

Clinton Jones

"Features seldom used or undiscovered are just unclaimed technical debt" I engage on Software Engineering and all things #ProductManagement

1y

Was the recording worth it in the end, even after clean-up? Sometimes the content is just…

Jean-François Legault

SAP Data & Analytics Consultant

1y

I agree, there is huge potential to improve data quality, but human validation is still required. Otherwise "garbage in" may/will result in "modified garbage" (or worse, "nice-looking garbage") that is harder to detect and, without due diligence, can lead to poor or poorer decisions. Trust in data and in data model outputs is hard earned.
