AGI - Powerful AI - Tomato - Tomahto

The phrase Tomato - Tomahto is used to suggest a distinction without a difference, i.e. treating two things as meaningfully different when they are essentially the same.

Recently, Anthropic CEO Dario Amodei proposed the idea of powerful AI instead of the term AGI. In one sense, the two terms are the same (Tomato - Tomahto), but it's worth understanding the idea of powerful AI (which, in my mind, is the same as AGI), partly because it is well explained in his article Machines of Loving Grace. Below are my notes on powerful AI from that article.

Intriguingly, powerful AI is expected to appear within a 5 to 10 year window, and by some estimates as early as 2026. The article focuses on what happens in the 5-10 years after that.

In terms of definition, it is similar to today's LLMs, though it might be based on a different architecture, might involve several interacting models, and might be trained differently, with the following properties:

1) In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

2) It has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.

3) It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on.

4) It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

5) It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

6) It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer;

7) In theory, it could even design robots or equipment for itself to use.

8) The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and

9) The model can absorb information and generate actions at roughly 10x-100x human speed.

10) It may however be limited by the response time of the physical world or of software it interacts with.

11) Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

12) We could summarize this as a “country of geniuses in a datacenter”.
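
To get a rough sense of scale for the "country of geniuses in a datacenter", here is a back-of-envelope sketch in Python. The instance count, speed multiplier, and working-hours figures are illustrative assumptions based on the ranges quoted above, not forecasts.

```python
# Back-of-envelope sketch of the "country of geniuses in a datacenter" idea.
# All numbers are illustrative assumptions based on the ranges quoted above
# (millions of instances, 10x-100x human speed), not forecasts.

instances = 1_000_000        # assumed number of concurrently running model copies
speed_multiplier = 10        # assumed lower bound: each copy works ~10x human speed
hours_per_day = 24           # copies can run around the clock
hours_per_work_year = 2_000  # rough human working hours per year

effective_hours_per_day = instances * speed_multiplier * hours_per_day
work_years_per_day = effective_hours_per_day / hours_per_work_year

print(f"{effective_hours_per_day:,} human-equivalent hours per day")
print(f"~{work_years_per_day:,.0f} human work-years of effort per day")
```

On these assumptions, a single such cluster would produce on the order of a hundred thousand human work-years of effort every day, which is why the physical bottlenecks discussed next matter so much.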

The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.

Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes.

In the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way, to asking “how much does being smarter help with this task, and on what timescale?”, but it seems like the right way to conceptualize a world with very powerful AI.
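
One way to make the idea of marginal returns to intelligence concrete is a toy production function in which output is capped by the scarcest factor. The min() form and all the numbers below are my own illustrative assumptions, not anything from the article; they simply mirror the planes-and-pilots example above.

```python
# Toy illustration of marginal returns to intelligence when a complementary
# factor (e.g., experiment throughput, or planes for pilots) is the bottleneck.
# The min() production function and all numbers are illustrative assumptions.

def research_output(intelligence: float, experiment_throughput: float) -> float:
    """Output is capped by whichever factor is scarcer."""
    return min(intelligence, experiment_throughput)

experiment_throughput = 10.0  # assumed fixed: experiments the physical world allows per month

for intelligence in (5.0, 10.0, 20.0, 100.0):
    marginal = (research_output(intelligence + 1, experiment_throughput)
                - research_output(intelligence, experiment_throughput))
    print(f"intelligence={intelligence:6.1f}  "
          f"output={research_output(intelligence, experiment_throughput):5.1f}  "
          f"marginal return of +1 intelligence={marginal:3.1f}")
```

Once intelligence exceeds the fixed complementary factor, the marginal return of adding more intelligence drops to zero; the list below unpacks which factors play that limiting role.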

The factors that limit or are complementary to intelligence include:

1) Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn. But the world only moves so fast. Cells and animals run at a fixed speed so experiments on them take a certain amount of time which may be irreducible. The same is true of hardware, materials science, anything involving communicating with people, and even our existing software infrastructure. Furthermore, in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.

2) Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.

3) Intrinsic complexity. Some things are inherently unpredictable or chaotic, and even the most powerful AI cannot predict or untangle them substantially better than a human or a computer today. For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case, as compared to today’s humans and computers (see the rough numerical sketch after this list).

4) Constraints from humans. Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.

5) Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable. It’s not possible to travel faster than light. Pudding does not unstir. Chips can only have so many transistors per square centimeter before they become unreliable. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.

6) Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).
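
On the intrinsic complexity point (factor 3 above), here is a rough numerical sketch of why more intelligence buys only a marginally longer forecast in a chaotic system. The Lyapunov-style error-growth model and the numbers are standard textbook assumptions added purely for illustration; they are not from the article.

```python
# Rough sketch: in a chaotic system, a measurement error delta0 grows roughly
# like delta0 * exp(lyapunov * t), so the usable forecast horizon grows only
# logarithmically as the initial error shrinks. All numbers are illustrative.

import math

lyapunov = 1.0    # assumed error-growth rate per time unit
tolerance = 1.0   # error level at which the forecast is no longer useful

def forecast_horizon(initial_error: float) -> float:
    """Time until the initial error grows to the tolerance level."""
    return (1.0 / lyapunov) * math.log(tolerance / initial_error)

for initial_error in (1e-3, 1e-6, 1e-9):
    print(f"initial error {initial_error:.0e} -> horizon ~{forecast_horizon(initial_error):4.1f} time units")
```

In this sketch, each thousand-fold improvement in measurement precision adds only a fixed increment to the forecast horizon, which is why even a very powerful AI gains little on problems like the three-body problem.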

Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.

I find the above discussion the most comprehensive description of powerful AI / AGI. Sam Altman has also painted a similar vision, but at a higher level.

If you want to study with me, please see my course at the #universityofoxford Artificial Intelligence: Generative AI, Cloud and MLOps (online)

For the first time, we are discussing AGI along these lines.

Some final comments from me:

  1. I find this approach rational and pragmatic
  2. It is also extremely disruptive
  3. With a 5 to 15 year horizon, it is not far away
  4. Almost all aspects as we know them will change
  5. Finally, I don't see this as a dystopian view nor a view to be afraid of
  6. All of us will have to adapt
  7. And humanity will evolve for the better assisted by AI (by whatever name we choose to call it!)

Thanh (Hans) Nguyen

Marketing Professor, Educator, Scholar, Life-long Learner - Quinnipiac University

5 days ago

A very interesting and thoughtful perspective! Thanks for sharing!

Dr. PG Madhavan

Digital Twin maker: Causality & Data Science --> TwinARC - the "INSIGHT Digital Twin"!

1 week ago

Excellent summary!

Clyde Johnson

CEO and Founder In2netCISO

1 week ago

I agree; whatever ‘it’ becomes, change is inevitable, meaning we all have to adapt and evolve on multiple levels.
