OpenAI o1 - Reasoning (and not data) is now the new Oil


I have always been a bit uncomfortable with the phrase 'data is the new oil'.

Today, we can safely put that term to rest.

With the announcement of OpenAI o1, we are seeing the dawn of a new class of reasoning applications.

These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.
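The multi-step workflows mentioned above can be sketched as a simple loop that feeds each step's output into the next step's prompt. This is a minimal illustration only: `call_model` is a hypothetical stub standing in for a real o1 API call, and the step prompts are invented for the example.

```python
# Minimal sketch of a model-driven multi-step workflow.
# `call_model` is a stub; a real implementation would send the
# prompt to a reasoning model (e.g. o1) and return its reply.

def call_model(prompt: str) -> str:
    # Stubbed reply so the sketch runs without an API key.
    return f"[model reply to: {prompt.splitlines()[0]}]"

def run_workflow(task: str, steps: list[str]) -> list[str]:
    """Run each step in order, carrying the previous output forward."""
    context = task
    outputs = []
    for step in steps:
        reply = call_model(f"{step}\n\nContext: {context}")
        outputs.append(reply)
        context = reply  # the next step reasons over this result
    return outputs

outputs = run_workflow(
    "annotate this cell-sequencing batch",
    ["identify cell types", "flag anomalies", "summarise findings"],
)
```

In a real workflow, each step would inspect the model's reply before deciding what to do next; the point here is only the shape of the loop.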

and

The o1 series excels at accurately generating and debugging complex code. To offer a more efficient solution for developers, we’re also releasing OpenAI o1-mini, a faster, cheaper reasoning model that is particularly effective at coding. As a smaller model, o1-mini is 80% cheaper than o1-preview, making it a powerful, cost-effective model for applications that require reasoning but not broad world knowledge.

Folks are already creating a whole set of cool reasoning applications based on OpenAI o1.

The problem with data as oil was that everyone was effectively singing from the same hymn sheet, and people were running out of data to train algorithms.

So, what replaces data?

  1. Intent (i.e. the prompts we ask the LLM), and
  2. Reasoning.

This brings the focus back to innovators, in both large and small companies, who can create complex ideas based on reasoning.

It’s the dawn of a new class of AI applications in which human intelligence is complemented by AI reasoning - and an exciting time to be in AI.

If you want to learn AI with me, see my AI community at erdos research.


Steve Barker

Co-Founder @ THE COLLECTIVE - helping individuals and businesses to achieve the unexpected

2 days ago

Ajit Jaokar As always, a thoughtful piece. My take .. The more the reasoning, the more powerful the models become. That reasoning then needs to be strung together which will require ever bigger processing power. It feels like that's where quantum comes in to really revolutionise the way we do things today. Based on my reading that's still quite some time in the future. Be great to get your insight.

Rodney Beard

International vagabond and vagrant at sprachspiegel.com, Economist and translator - Fisheries Economics Advisor

4 days ago

There seem to be two strategies: i) data, ii) reasoning. This is nothing new. Both approaches continue to evolve: the recent development of RIG models (this week) to augment RAG models suggests further development of a data-oriented approach in AI, and more modern software focused on reasoning has been becoming available (again). There are a number of initiatives in the neurosymbolic space that attempt to combine the two. So it's not really a replacement of data with reasoning; rather, a two-pronged approach appears to be where things might be heading. However, even this argument ignores developments in pure computational domain-based reasoning approaches that aren't traditionally considered part of AI. For example, newer developments in computer algebra and also computational social choice are what I'm thinking of. There have been some attempts to integrate these approaches with LLMs, but really people are just scratching the surface with this. One would expect to see much more work on this in the future.

It is a very interesting time; there are so many unique nuances now available to developers and solution implementors. Fine-tuning models to specific industry use cases is becoming more and more vital in application and service differentiation. Ajit Jaokar have you seen the RAFT solutions from UC Berkeley, which are an interesting combination of RAG and fine-tuning: https://github.com/Azure-Samples/raft-distillation-recipe

Paul Golding

Hands-on R&D Multidisciplinary AI Leader | 30 patents in AI/ML | Enterprise AI | AI Chip Design | Quantum AI

4 days ago

Well, many folks misunderstood that metaphor in its original form, which was largely a geopolitical metaphor. The extraction costs and eventual depletion of “crude” were foreseen. This seems largely true, at least with text data. The weakness of the metaphor is that training data turns out to be ubiquitous whereas oil is owned by whoever extracts it. On any enterprise problem, domain-specific reasoning is crucial and it seems true that those who can master this—for their domain—can gain the real advantages that GenAI has yet to offer via adoption of vanilla co-pilots.
