Why your electricity bill should make you less anxious about the “threat of A.I.”

Two-line summary: Due to the relatively low energy efficiency of current electronic technology, one of the biggest challenges for deploying powerful A.I. systems over the next decades will be the cost of electric energy. The rise of general A.I. will have to wait for fundamental advances in physics and biotechnology that allow us to build computational frameworks with power efficiencies thousands or millions of times better than the current ones.

"Watt" are we talking about?

First, a few elementary facts. Power - as in "electric power" or "mechanical power" - is the amount of energy that a given system spends (or deploys) per unit of time. The Watt (W) and the horsepower (hp) are two commonly used units of power, the Watt being the standard unit of the SI (International System of Units). 1 Watt corresponds to spending (or deploying) 1 Joule of energy (the standard SI unit of energy) per second (the standard SI unit of time).
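
To make the definition concrete, here is a minimal Python sketch of that arithmetic, using the 60 Watt light bulb from the next paragraph as the example (the figures are purely illustrative):

```python
# Power is energy per unit of time: P [W] = E [J] / t [s].
energy_joules = 60.0   # energy a 60 W incandescent bulb converts in one second
time_seconds = 1.0

power_watts = energy_joules / time_seconds
print(f"{power_watts:.0f} W")  # -> 60 W

# Running that bulb for one hour: 60 W * 3600 s = 216,000 J, i.e. 0.06 kWh.
energy_per_hour_joules = power_watts * 3600
print(f"{energy_per_hour_joules:,.0f} J = {energy_per_hour_joules / 3.6e6:.2f} kWh")
```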

We have an intuitive notion of how much 1 Watt is, and we know it is not a lot. The power of a common incandescent light bulb is 60 Watts: the bulb transforms electric energy into light at a rate of 60 Joules per second. This is not strictly true, because the common light bulb is a very inefficient device that transforms most of the energy we feed it not into light but into useless heat (hence why we should be slowly moving to LED illumination). Nevertheless, this helps convey the scale of what 60 Watts is.

Here are a few more numbers to help establish the magnitudes involved here (each example represents a 10x increase over the previous one):

  • 100 Watts is the power consumption of a 50” Plasma TV.
  • 1 kW is the maximum power consumption of a typical microwave oven.
  • 10 kW (kilowatt) is the power needed for a Toyota Prius electric car to maintain a speed of 50 mph (approx. 80 km/h). Interestingly, if we want the Prius to “blast” at 90 mph, its power consumption grows to 50 kW (5x!).
  • 100 kW is the power generated by a mid-size wind turbine (i.e. “only” 20 m diameter) operating with favorable wind conditions.
  • 1 MW (megawatt) is the power needed by around 150 households.

We could go on; there are many of these factoids. So, before reading the rest of this article, have a look at this Wikipedia page.

The Costs of A.I.

As I discussed in a previous article, one of the decisive factors for the current A.I. explosion is the availability of cheap supercomputing power, especially via GPUs. As anyone who has been playing with Artificial Neural Networks (ANNs) can tell you, GPUs are great at speeding up the training and execution times of ANNs (100x faster is not unheard of), but one thing is also very obvious, especially if you often work on a laptop (as I do): a GPU consumes a lot of energy. The actual power depends on the GPU (check here for the case of the NVidia Tesla and its successor models), but let's take the value of 300 Watts as a reference.

Now, 300 Watts does not seem a lot: it is merely the power of 5 or 6 incandescent light bulbs. Current A.I. systems can already accomplish certain important tasks - such as voice recognition and classification of objects in images/video - using these GPUs, and 300 Watts seems to be a small cost to pay for those capabilities (it is just as much power as two or three Plasma TVs).

But here is another perspective: current A.I. is very far from resembling anything close to general intelligence (very very far!). Our A.I. systems can only execute very specific tasks in very controlled environments, and they are already burning on the order of 300 Watts.

The curious fact is that the “all-terrain” human brain consumes about... 20 Watts! Gauss was running on 20 Watts. Einstein was running on 20 Watts. Kasparov runs on 20 Watts. Terry Tao runs on 20 Watts. Chuck Norris is the only exception. Chuck Norris runs on -20 Watts.

Human Brain vs Machine power efficiency comparison – Take 1

Let’s attempt to compare the power efficiency of the human brain with that of current GPU technology. One way of doing it is to calculate the computing power delivered per Watt for both “intelligent” systems. For measuring computing power, we can use FLOPS – FLoating-point Operations Per Second.

For example, Nvidia's P100 GPU (a pretty good GPU) is capable of executing around 10^13 FLOPS. 10^13 FLOPS is actually a lot! For historical comparison, the computing power of the first programmable electronic computer, the ENIAC, launched in 1946, was only 500 FLOPS, that is, about 10 orders of magnitude below (we will come back to the ENIAC later). Dividing the 10^13 FLOPS by the 300 Watts reference power, the Nvidia P100 has a computing efficiency of about 33 GFLOPS/Watt.
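
As a sanity check on these figures, here is a small Python sketch of the FLOPS-per-Watt arithmetic (the 10^13 FLOPS, 300 Watt and 500 FLOPS values are the ones quoted above):

```python
import math

p100_flops = 1e13       # ~10 TFLOPS for the NVIDIA P100
p100_power_w = 300.0    # reference GPU power draw used in this article

eniac_flops = 500.0     # ENIAC (1946)

p100_efficiency = p100_flops / p100_power_w
print(f"P100: ~{p100_efficiency / 1e9:.0f} GFLOPS/Watt")            # -> ~33 GFLOPS/Watt

gap = math.log10(p100_flops / eniac_flops)
print(f"P100 vs ENIAC raw speed: ~{gap:.0f} orders of magnitude")   # -> ~10
```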

Measuring FLOPS in the human brain is tricky, because there is no natural way of mapping the ability to perform intelligent actions – such as understanding language – to FLOPS. Still, there are several estimates of how much processing power, in FLOPS, a human brain has. The most conservative estimates point to 10^13 FLOPS (that is the same as a P100 GPU!), while the most expansive estimates point to 10^25 FLOPS.

If we take the lower estimate, then machines have already surpassed the computational power of humans (as measured in FLOPS), and they end up being only 10-15 times less power efficient, which is not a big deal. However, given what we know about the current state of A.I., machines clearly have not yet matched our computing power: 10^13 FLOPS seems to be a clear underestimation of the computing power of the brain.

If we take the other extreme, we are forced to conclude that current GPUs are still 10^12 times less powerful than human brains. So, if we maintain the same level of power consumption per FLOP, for a GPU to have computing power comparable to the human brain it would have to be fed with 10^12 times more power, that is, 300 x 10^12 Watts, or 300 Tera-Watts.
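
The same arithmetic, sketched in Python for both ends of the estimate range (the 10^13 and 10^25 FLOPS figures are the published extremes cited above; 300 Watts is the GPU reference used throughout):

```python
gpu_flops = 1e13
gpu_power_w = 300.0

# Conservative vs. expansive published estimates of the brain's processing power.
for brain_flops in (1e13, 1e25):
    ratio = brain_flops / gpu_flops          # how many "GPUs worth" of compute that is
    power_needed_w = gpu_power_w * ratio     # assuming constant power per FLOP
    print(f"brain at {brain_flops:.0e} FLOPS -> {ratio:.0e}x the GPU, {power_needed_w:.0e} W")

# -> 1e+00x the GPU (3e+02 W) for the low estimate,
#    1e+12x the GPU (3e+14 W = 300 Tera-Watts) for the high one.
```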

300 Tera-Watts is more than the estimated power involved in a hurricane.

It is impossible to be happy with any of these numbers. To be honest, if people are making estimates that differ by 12 orders of magnitude, then I feel that I can propose my own estimate.

Human Brain vs Machine power efficiency comparison – Take 2

In what follows, I will make many simplifying assumptions. My purpose is not to obtain an accurate estimate of the actual power consumption of an electronic system with computing power equivalent to the human brain, but merely to estimate the order of magnitude of such power consumption. So, take everything as merely indicative.

The first simplifying assumption I am going to make is that the power spent in a GPU is proportional to the number of processing cores. This is a simplification, because there are other parts of the GPU that consume power, and because even the power consumed by the cores depends on many factors, including clock speed. We will ignore all this for now.

So, the question we want to answer becomes: what is the estimated number of cores required for a GPU to have computing power similar to the brain's?

But let’s not think about FLOPS. Let’s just think about the structure of the brain.

The brain has an estimated 10^11 neurons. Each neuron is connected to a number of other neurons. This is very important, because the complexity of the various computational functions that the brain can perform depends on the number of connections between neurons. We know that the brain has several specialized modules, with different degrees of connectivity between the corresponding neurons, but we will simplify again and assume an average connectivity. According to Wikipedia, each neuron is estimated to be connected, on average, to 7,000 other neurons.

Let us now assume that we wanted to store in RAM the intensity of each of these connections. That is, we want to know how much RAM we would need to store the information about whether one neuron is communicating with its neighbors, and what the intensity of the signal being passed is. This information would represent the function being executed by the brain, whatever that function is. We don’t care about the function itself (“understanding speech”, “controlling a sneeze”, etc.), we just care about the parameters of the function.

So, for each of the 10^11 neurons, we would have to maintain a list with 7,000 values, one for each connected neighbor cell. Assuming that we encode the intensity of a connection using just 8 bits, i.e. 1 byte (we would probably need more, but let’s keep it at 8 bits), we would need 7,000 x 10^11 bytes = 700 TB just to maintain the value of each of the connections.

In practice, we would be spending at least 5 additional bytes per connection, because we also need to maintain the address of the target neuron (that is also part of the description of the “brain function”). That is, just for maintaining the structure of the connections (not the actual intensity values) we would require an extra 3.5 Petabytes.

But let's make another simplification here and say we only care about the 700 TB of “connection intensity” data. This is reasonable for estimation purposes because these are the only values that change over time, during the execution of the intelligent functions. (Note, however, that during the training of our “artificial brain”, we would actually need to interact with all of the 3.5 + 0.7 = 4.2 Petabytes of RAM to “learn” which neurons connect to which.)
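
Here is the same memory budget as a short Python sketch (10^11 neurons, 7,000 connections per neuron, 1 byte of intensity and 5 bytes of addressing per connection, all as assumed above):

```python
neurons = 1e11
connections_per_neuron = 7_000
connections = neurons * connections_per_neuron      # ~7e14 connections in total

intensity_bytes = connections * 1                   # 1 byte (8 bits) per connection intensity
address_bytes = connections * 5                     # 5 bytes to address the target neuron

TB = 1e12  # terabyte (decimal, fine for order-of-magnitude purposes)
print(f"intensity data:       {intensity_bytes / TB:,.0f} TB")                            # -> 700 TB
print(f"connection structure: {address_bytes / (1000 * TB):.1f} PB")                      # -> 3.5 PB
print(f"total for training:   {(intensity_bytes + address_bytes) / (1000 * TB):.1f} PB")  # -> 4.2 PB
```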

Now, notice that modern GPUs have something between 4 and 16 cores per MB of RAM. We want as many cores as possible in order to process more data in parallel, but the technology balance for now seems to be having “only” these 4 to 16 cores per MB. So, if we wanted to maintain the same ratio of processing cores to MB, we would have to increase the number of cores in the same proportion as we increase the RAM (we could also change clock speeds or even the structure of the cores themselves, but we are ignoring all that for now). Since current GPUs have a RAM capacity of about 24 GB (the really “big” ones), for processing the 700 TB of neuronal “intensity” information we would need about 700 TB / 24 GB ≈ 30,000 times more cores than the current reference.

This means 30,000 times more power consumption. Therefore, the magic number I propose for the power consumption required by current technology to replicate the computational power of the human brain is 30,000 x 300 Watts: 9 Mega-Watts.
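
Putting the whole scaling argument into numbers, a minimal Python sketch (700 TB of intensity data, 24 GB of RAM and 300 Watts per reference GPU, 20 Watts for the brain, all as assumed above):

```python
intensity_data_bytes = 700e12   # 700 TB of connection-intensity data
gpu_ram_bytes = 24e9            # 24 GB of RAM on a "big" current GPU
gpu_power_w = 300.0             # reference GPU power draw
brain_power_w = 20.0            # power budget of the human brain

scale_factor = intensity_data_bytes / gpu_ram_bytes   # how many times more cores/RAM we need
total_power_w = scale_factor * gpu_power_w            # assuming power scales with core count

print(f"scale factor:    ~{scale_factor:,.0f}x")                              # -> ~29,167x
print(f"estimated power: ~{total_power_w / 1e6:.1f} MW")                      # -> ~8.8 MW (the "9 MW" above)
print(f"vs. the brain:   ~{total_power_w / brain_power_w:,.0f}x more power")  # -> ~437,500x (roughly 450,000x)
```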

It is not as much power as what is involved in a hurricane, but it is more than the peak power of the world's largest wind turbine. It is also about 450,000 times more power than the human brain needs to perform intelligent functions.

A 9MW iPhone?

A 9 Megawatt GPU represents a problem. In fact, it represents many problems. Feeding 9 Megawatts to any machine is a challenge, but feeding that much power to a machine that is supposed to be small is even more challenging. The amount of heat produced by injecting 9 Megawatts into a small board would melt any material. More likely, our “GPU” would vaporize instantly. Even if we made the circuit board hundreds of times larger, it would melt anyway.

But let's say that, by some miracle of physics, we found a way of feeding huge amounts of power to electronic circuits. With the current specs, feeding this imaginary device would cost more than 25,000 USD per day just in electric power. The economic implications of this value are very interesting, but I will leave that discussion for another time.
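
A rough version of that cost estimate in Python. The 0.12 USD/kWh electricity price is my own assumption (a typical commercial rate); the article only states the resulting figure of more than 25,000 USD per day:

```python
power_mw = 9.0                   # estimated power of the brain-equivalent GPU
hours_per_day = 24
price_usd_per_kwh = 0.12         # assumed electricity price, not from the article

energy_kwh_per_day = power_mw * 1_000 * hours_per_day        # 216,000 kWh per day
cost_usd_per_day = energy_kwh_per_day * price_usd_per_kwh

print(f"{energy_kwh_per_day:,.0f} kWh/day -> ~{cost_usd_per_day:,.0f} USD/day")  # -> ~25,920 USD/day
```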

Let’s now suppose that our estimate is 1,000x off and our brain-like GPU ends up consuming “only” 9 kW, that is, as much as an electric car. We would have reduced electricity costs by 1,000x, which is great, but the challenges involved in cooling the device would still be enormous. 9 kW is still 30x more power than current GPUs consume, and it is already difficult to keep current GPUs cool. And let’s not even talk about mobility...

It is more or less safe to say that we will not be capable of building human-like intelligence with current semiconductor technology.

However, it is also true that, 70 years ago, the 500 FLOPS ENIAC machine consumed 147 kW, that is, about 3.4 x 10^-3 FLOPS per Watt, while current GPUs achieve 33 x 10^9 FLOPS/Watt. This means that in merely 70 years, we have improved the energy efficiency of computing machinery by a factor of about 10^13. This is huge!
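
The efficiency comparison, worked out in Python with the figures quoted above:

```python
import math

eniac_flops = 500.0
eniac_power_w = 147_000.0                        # 147 kW

gpu_efficiency = 33e9                            # ~33 GFLOPS/Watt for a current GPU
eniac_efficiency = eniac_flops / eniac_power_w   # ~3.4e-3 FLOPS/Watt

improvement = gpu_efficiency / eniac_efficiency
print(f"ENIAC: {eniac_efficiency:.1e} FLOPS/Watt")
print(f"efficiency improvement in ~70 years: ~10^{math.log10(improvement):.0f}")  # -> ~10^13
```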

Somehow, evolution has figured out a way to build low-consumption general intelligence machines. It took millions of years, but a biological solution is here. Humanity is very ingenious, so we will probably also be able to build such human-like intelligence, using electronics, nano-technologies, biotech, etc...

We need A.I.! Space exploration will require machine intelligence, because space is too big and too harsh for our fragile biology. Just don’t hold your breath while waiting for this new technology to come… unless, of course, you are Chuck Norris.
