Runhouse's posts
Most relevant
-
We've observed an alarming line of discussion questioning whether companies should invest in the capabilities to train their own models, with "all ML moving to LLMs" as the alternative. The premise is that if you squint, you could imagine how a sufficiently powerful LLM could answer the type of question a custom ML model predicts today (Will this user like this movie? Is this transaction fraud?). Aside from the age-old trap of "let's hope the model is magic this time," we see a few reasons to be highly suspicious of this excuse to pump the brakes on ML investment:

1) Definitionally, ML is *mobilizing your own data for the improvement of product,* and the large tech companies whose successes solidified the "data is the new oil" mantra are investing in training first-party models as intensely as ever. "Mobilizing first-party data" has a fundamentally different history of industry-redefining value creation compared to "building AI apps." Considering that the business value in Fortune 500 enterprises today from proprietary models (ML) so dramatically dwarfs that of off-the-shelf models (AI), it's profoundly risky to speculatively shift investment and focus 100% from ML to AI at this juncture.

2) The LLM providers, and now the cloud providers too (Gemini, Bedrock, Azure-OpenAI), are highly incentivized to push this message. It would be incredibly convenient if we all stopped investing in training models and instead irreversibly relied on their model-training expertise as a service going forward. We've already seen what happens when a small few massively out-invest others in mobilizing first-party data: the ad ecosystem. Imagine if Facebook had relied on Google for ad technology rather than mobilizing their data directly, or if the ad agencies had matched Google's pace of investment to mobilize theirs. To this day, the largest companies on the planet can only minimally utilize their proprietary user data in ad targeting (through narrow APIs), maximizing their dependence on the proprietary targeting intelligence within their ad platform.

3) Yes, the infra for ML is hard, but it's getting easier. If we all stopped doing large-scale BI because Hadoop was hard, we'd never have reached the OLAP promised land. Runhouse is like Snowflake for ML training, so if you're investing in ML and hitting the infra walls, come give us a try. Read on below.
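To make the contrast concrete: the "Is this transaction fraud?" question above is typically answered by a small supervised model trained on first-party transaction logs. Below is a purely illustrative sketch in plain Python (synthetic data, invented features, not any particular company's stack): a tiny logistic regression that "mobilizes" the data directly rather than prompting a general-purpose LLM.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for a first-party transaction log. Features are
# (normalized amount, foreign-merchant flag); label 1.0 = fraud.
# Feature names, rates, and distributions are invented for the demo.
def make_data(n=500):
    rows = []
    for _ in range(n):
        fraud = random.random() < 0.1
        amount = random.gauss(0.8 if fraud else 0.2, 0.15)
        foreign = 1.0 if random.random() < (0.7 if fraud else 0.1) else 0.0
        rows.append(((amount, foreign), 1.0 if fraud else 0.0))
    return rows

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on logistic loss.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in rows:
            err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def score(w, b, x):
    # Returns a fraud probability in (0, 1).
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

w, b = train(make_data())
print(score(w, b, (0.9, 1.0)))  # large foreign transaction
print(score(w, b, (0.1, 0.0)))  # small domestic transaction
```

A model this small is cheap to retrain daily on fresh logs, and its predictions are driven entirely by your own data, which is the "mobilizing first-party data" point in miniature.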
While LLMs dominate the news cycle, "boring" ML like recommender systems and fraud detection has quietly continued to be the main driver of realized value from applied AI in the enterprise. Pre-trained language models cannot conquer every use case, and deploying (useful) LLM pilots should not distract from building an ML platform that enables the creation of first-party models from your first-party data. https://lnkd.in/eezae4b7 #LLMs #genAI #datascience #ml #mlops
Can LLMs Replace ML Training?
run.house
-
OpenAI's o1-preview: AI that pauses to think. Is this the next step in machine reasoning? OpenAI's latest model, o1-preview, brings significant changes to AI problem-solving.

What has improved:
> Longer reasoning time: the AI takes more time to process, like humans
> Multiple problem-solving strategies: learns from mistakes, tries different approaches
> High-level performance: matches PhD-level expertise in some areas

Notable results:
> 83% success on Math Olympiad qualifiers (GPT-4o scored 13%)
> 89th percentile in Codeforces programming competitions

Technical details:
> More compute used during inference, not just training
> Correct answers become new training data
> Separates reasoning skills from stored knowledge

What's new:
> Traditional LLMs mainly generate text; o1-preview focuses on solving complex problems
> This shifts focus from information recall to actual reasoning

Key points:
> How we use compute matters as much as model size
> Improvements in inference can be as important as training
> Ethical considerations remain crucial as AI capabilities grow

Advancements in AI:
> Compute use: balancing training and inference is important
> Reasoning engines: focusing on problem-solving, not just information storage
> Continuous learning: each correct solution improves the model
> Safety concerns: better reasoning needs stronger safety protocols
> Broad implications: from finance to science, impact is widespread

Progressing towards more advanced AI isn't about one big discovery. It's about steady improvements in how machines process information. To stay relevant in AI research, focus on:
1. Understanding inference-time scaling
2. Exploring various problem-solving methods
3. Developing robust safety frameworks
4. Connecting AI capabilities with specific field expertise
5. Considering real-world applications beyond test scores

#ai #data #openai #robotics #leadership Read more - https://lnkd.in/gHMzcctt
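The "inference-time scaling" idea above can be illustrated without any real model: spend more compute at answer time by sampling several candidate answers and majority-voting over them (often called self-consistency). The toy "solver" below, its 60% accuracy, and the target answer are all invented for the demo; it is a sketch of the compute trade-off, not how o1 actually works internally.

```python
import random
from collections import Counter

random.seed(1)

# A stand-in "model" that returns the correct answer only 60% of the
# time and an arbitrary wrong guess otherwise. All numbers are made up.
def noisy_solver(correct=42, p_correct=0.6):
    if random.random() < p_correct:
        return correct
    return random.randint(0, 100)

# Inference-time scaling: draw k samples, keep the most common answer.
def majority_vote(k):
    samples = [noisy_solver() for _ in range(k)]
    return Counter(samples).most_common(1)[0][0]

def accuracy(answer_fn, trials=300):
    return sum(answer_fn() == 42 for _ in range(trials)) / trials

single = accuracy(lambda: noisy_solver())   # 1 sample per question
voted = accuracy(lambda: majority_vote(9))  # 9x the inference compute
print(f"1 sample:  {single:.2f}")
print(f"9 samples: {voted:.2f}")
```

Because wrong guesses are scattered while correct answers agree, the voted accuracy is far higher than the single-sample accuracy: the same model gets much better simply by thinking longer at inference time.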
Introducing OpenAI o1
openai.com
-
Breaking News: OpenAI Releases the o1 Model: The AI That Thinks Before It Speaks

Imagine an AI that ponders like a philosopher, calculates like a mathematician, and codes like a seasoned developer. That's not science fiction anymore: it's OpenAI's latest marvel, o1-preview.

Meet the Deep Thinker of AI
o1 isn't just another language model; it's a game-changer in the world of artificial intelligence. Unlike its predecessors that rush to respond, o1 takes a moment to "think," leading to mind-blowing improvements in tackling complex challenges. From unraveling scientific mysteries to conquering coding conundrums, o1 is setting new standards.

Math Mastery: o1 solves tough math problems with an astounding 83% success rate. Compare that to GPT-4o's humble 13%, and you'll see why mathematicians are buzzing!

Fort Knox of AI: With a stellar 84/100 score in jailbreak resistance tests, o1 is like the Fort Knox of AI security.

Coder's New Best Friend: Developers, rejoice! The o1-mini model offers a turbo-charged, wallet-friendly option for intricate code generation.

The Strawberry Revolution? Whispers in the AI community suggest that o1's superhuman reasoning skills might be the fruit of the long-awaited Strawberry model. Is this the sweet breakthrough we've all been waiting for?

Get Your Hands on o1
The future is now, and it's accessible! o1 is available for ChatGPT Plus subscribers and API users. But don't worry if you're not in the club yet: OpenAI plans to roll out broader access soon.

What's Next?
Hold onto your hats! OpenAI hints at even more exciting features on the horizon, including web browsing and file upload capabilities. The AI revolution is just getting started! #AI #OpenAI #O1 #FutureOfTech #ArtificialIntelligence
Introducing OpenAI o1
openai.com
-
Should be interesting
Introducing OpenAI o1
openai.com
-
Announcing OpenAI o1: A New AI Model with Enhanced Reasoning

If you haven't heard yet, OpenAI just launched o1-preview, a model designed to spend more time reasoning through problems before responding. Unlike previous models, it can handle complex tasks like coding, math, and science. While the model's capabilities are impressive, it could hold particular value for legal professionals who often deal with intricate analyses and strategic decision-making.

What's Different?
OpenAI o1's reasoning abilities set it apart. In testing, it performed at levels similar to PhD students on challenging tasks. It's also shown impressive performance in coding competitions, landing in the 89th percentile.

Improved Safety
One standout feature is how the model handles safety and compliance. It can reason through safety guidelines and align its responses accordingly, making it a reliable tool for tasks requiring accuracy and confidentiality. For legal professionals, this means you could trust it more to stay within ethical boundaries, even in complex scenarios.

How Legal Teams Can Use It
Today, many legal professionals already use ChatGPT for tasks like summarizing legal documents, generating initial drafts, or brainstorming legal arguments. The current models (like GPT-4) offer features that the o1-preview model doesn't have yet, like browsing the web for real-time information and uploading files. So GPT-4 may still be more capable in the short term for lawyers' needs. That said, OpenAI o1-preview shines when it comes to complex, reasoning-based tasks. Over time, as more features are added (like file uploads and web browsing), it could become an even more powerful tool for legal professionals. This is an early look at the future of AI in law, and it's only going to get better.

Access and What's Next
Starting today, ChatGPT Plus users can try out the o1-preview model. API access is also available, with more features planned in future updates. This is just the beginning, and it's worth keeping an eye on how this new series will evolve, potentially changing the way legal professionals use AI. #LegalTech #AI #OpenAI #LegalInnovation https://lnkd.in/evhED8rp
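For teams that want to experiment through the API, the request shape is the standard Chat Completions call with the o1-preview model name. The sketch below only constructs and inspects a request payload (no network call, no API key required); the legal prompt text is a hypothetical example, not something from the original post.

```python
# Build a Chat Completions request payload for the o1-preview model.
# Note: at launch, o1-preview supported a narrower parameter set than
# GPT-4-class models (e.g. no system message), so we keep this minimal.
payload = {
    "model": "o1-preview",
    "messages": [
        {
            "role": "user",
            "content": (
                "Summarize the key obligations in the following "
                "indemnification clause and flag any asymmetries "
                "between the parties."
            ),
        }
    ],
}

# With the official OpenAI Python client, this payload would be sent as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)
print(payload["model"])
```

Keeping the payload as a plain dict like this makes it easy to review prompts (a real concern for confidential legal material) before anything is sent to an external service.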
Introducing OpenAI o1
openai.com
-
OpenAI just released o1 publicly, and the internet is rife with demos from early adopters who've been test-driving this AI powerhouse for a month.

What it is:
- New reasoning models for solving hard problems, released on 12 Sep.
- Designed to spend more time thinking before responding; can solve complex tasks.

What it is great at:
- Trained to think through problems like humans, excelling in physics, chemistry, biology, math, and coding.
- Performs well on challenging tasks compared to previous models.

Note: o1 doesn't do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are significantly better.

Why it's a new kind of reasoning engine:
- AI now plans and executes solutions with minimal human input.
- The AI does the heavy lifting, presenting us with fully formed solutions. While we can still review and refine these outputs, our role has undeniably changed. We're no longer as deeply involved in shaping the direction of the solution.
- Our role is shifting from active participant to observer/reviewer.
- This isn't necessarily bad, but it's new.
- This is just the first generation; imagine what's next!
- And imagine what robotics companies like Figure can do with this tech.

After a morning spent researching o1 demos, it's clear that using this technology forces us to confront some fundamental questions as AI inches towards true autonomy:
- How do we stay in the loop and ensure meaningful involvement as AI capabilities grow?
- Can we balance AI efficiency with human insight and oversight?
- What new skills must we develop to collaborate effectively with advanced AI?

It's clear that AI is evolving incredibly fast, and we must evolve with it. It's unclear, though, what that really means for us. https://lnkd.in/g-9RTibk
Introducing OpenAI o1
openai.com
-
With rumours of GPT-6 in training (wait, what? we don't even have GPT-5 yet...) needing so much power, it's no wonder companies like Microsoft want to get involved in energy generation. It is also a little bit scary to have someone apparently ahead by this much. We've just seen Claude from Anthropic top the leaderboard for AI performance in real-world human evaluation. Gemini popped up near the top recently. All facing a model OpenAI released a year ago... Meanwhile, they've got an unreleased GPT-5 rumoured for early summer and GPT-6 in training. Whoever 'gets there' first, making an AI system intelligent enough to really improve itself and train new models, probably wins 'it all'. After all the chaos we've seen in AI governance this last year, with board coups and resignations from prominent figures, I wonder if we really want to allow that to happen: a winner-takes-all scenario like this. Is there anything we can even do about it?
-
https://lnkd.in/e9jdZQFX "We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes." The brilliance of this launch! It feels like OpenAI made a deliberate decision to slow things down… releasing a more controlled evolution of AI instead of feeding the beast we've all been experiencing. AI has become this looming figure… speed + power + complexity have started to outpace what we and our industries can keep up with. We're moving so fast that it's becoming harder to absorb, process, and understand the implications of every leap forward. Intentional or not, launching o1 tapped into something amazing… the value of patience and pacing! It feels like a direct response to the fear that AI is spiraling out of our control. The launch shows intentional restraint, moving away from instantaneity and giving people a second to breathe… you get time to think and an opportunity to acquaint yourself with this tech in a way that feels more natural. When tech surges ahead of what we are ready for, it creates huge gaps between our understanding and application. OpenAI o1 acknowledges this tension. It's not just about slowing down progress; it's about recalibrating how we engage with it. The brilliance here is recognizing that acceleration isn't the only answer; sometimes the smart move is to create space and allow culture to catch up. OpenAI just reminded me that power isn't JUST in the speed of AI, but in how strategically and responsibly it's deployed and applied. This is a deeper understanding of the relationship between humanity and technology, an understanding that acknowledges the fear and respects the pace of change, leading with intention. Bravo, OpenAI!
Introducing OpenAI o1
openai.com
-
The OpenAI o1 model, introduced on September 12, 2024, is a more advanced model with stronger reasoning capabilities. It was developed to improve upon previous models and is especially effective at problems of a STEM nature. Using innovative reinforcement learning techniques, it has been trained to tackle problems sequentially through reasoning, a feature reminiscent of human thinking. Some of the characteristics of the o1 model are as follows:

Improved Reasoning: Solving complex problems is one of the strengths of the o1 model; for instance, on the International Mathematics Olympiad qualifying examination, it achieved 83% accuracy compared to just 13% for the preceding GPT-4o model.

Variants: There are two available versions: o1-preview, which concentrates on difficult reasoning-dominated tasks, and a faster, lower-priced o1-mini.

Considerations: In its current state, the model does not support certain features such as web browsing and file uploads, and it responds more slowly because its reasoning is very elaborate.

Availability: OpenAI plans to broaden access to other users in the foreseeable future, although at the beginning use is confined to ChatGPT Plus and Team users. The model is also widely rumored to have grown out of the long-discussed "Strawberry" research effort focused on this kind of step-by-step reasoning.

In conclusion, the o1 model represents a step toward artificial intelligence that reasons in a more human-like way and handles demanding tasks efficiently. Check it out here: https://lnkd.in/djdV2Gac
Introducing OpenAI o1
openai.com