This (from someone at OpenAI) is an important thing to understand: the major AI labs, including OpenAI, are very much focused on racing for the future. It is almost accidental that their early products are making billions of dollars. Their goal is explicitly AGI, not your use case. They are not spending much time commercializing (yet) because most of their effort is going into building more powerful AIs.
So this runs counter to the stance you took the other day around these smaller dedicated models…. https://www.dhirubhai.net/posts/emollick_it-is-becoming-clear-that-agi-is-going-to-activity-7240087867119599617-5AGU?utm_source=share&utm_medium=member_ios
Are they also implying that, at least according to them, AGI might happen sooner rather than later, Ethan Mollick?
Until the data centers cover the earth and consume all known resources. And Elon and Sam will rule over it. And it will be good. The end.
This has been my assertion. There's no way they're letting us have access for only $20/mo without it being about them. User interaction is a very rich source of high-quality training data. There are some off-planet plans for this tech or something. It has nothing to do with your Excel workflows, Cindy.
How do we know that "roon" is from OpenAI? I checked the X profile and the website but could not identify the person behind "roon". Just curious to understand the context of the post on X.
Yes and no. The problem is you will never put AGI into a microwave or use it to help you design a Christmas card. The compute power required for AGI will be too great for normal day-to-day tasks for a few human generations. So not all AI projects are doomed the second some data center with quantum computers hits that level.
I think that might have been the case a while back, but I would argue it's not entirely true now. A good part of their effort goes into commercialization so they can sustain more innovation, because even in a fast cycle of progress, we're talking about years to get there. And then again, what does "there" mean? When does it stop? AGI could also be seen as just the beginning.
I've thought this for a while, but I have a conflicting feeling that LLMs just aren't the right foundation technology for superintelligence. They may play a part, but at their core they're just too simplistic, and building bigger, faster models isn't going to change that. So, if not LLMs, what is the foundational technology going to be, and have we even seen a glimpse of it yet?
This is not news. Back in 2019, Sam Altman told investors that if OpenAI could reach AGI it would "maybe capture the light cone of all future value in the universe." You don't have to believe this is possible, but you should know that many people (though not all) in the AI labs do.