ChatGPT | "Piece of sun breaking off"
Ataollah Etemadi
The problem with AI as it is currently being developed is that the researchers don't really know what it's learning. Suppose you train an AI on tens of thousands of images of a blue sky. You'd expect it to learn that the sky is blue, but it could just as easily learn that all blue things are sky. Eventually, with or without supervision, the AI becomes unstable as that false learning permeates everything else it learns.
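Here is a minimal sketch of that failure mode, using made-up data rather than anyone's real experiment: a classifier trained on images where "sky" and "blue" always co-occur can latch onto the colour alone, and will then happily call blue non-sky objects "sky".

```python
# Hypothetical toy example: spurious correlation between "blue" and "sky".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_image(is_sky, is_blue):
    # Each "image" is reduced to a mean colour (R, G, B) plus a weak texture cue.
    colour = np.array([0.2, 0.3, 0.9]) if is_blue else np.array([0.6, 0.5, 0.3])
    texture = rng.normal(0.3 if is_sky else 0.6, 0.15)  # noisy, overlapping cue
    return np.concatenate([colour + rng.normal(0.0, 0.05, 3), [texture]])

# Training set: every sky image happens to be blue, every non-sky image does not.
X_train = np.array([make_image(True, True) for _ in range(500)] +
                   [make_image(False, False) for _ in range(500)])
y_train = np.array([1] * 500 + [0] * 500)
clf = LogisticRegression().fit(X_train, y_train)

# Test on blue things that are NOT sky (a blue car, a blue wall, ...).
X_blue_objects = np.array([make_image(False, True) for _ in range(200)])
print("blue non-sky images labelled 'sky':", clf.predict(X_blue_objects).mean())
```

Both features separate the training data, but the colour cue is cleaner, so the fitted model leans on it. Nothing in the training signal tells the model which of the two correlations was the one we actually meant.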
This follows from the second and third laws of computer programming. The first law is: Thou shalt make backups. The second law is: Garbage in, garbage out. The third law is: A computer program grows until it exceeds the ability of the programmers who wrote it.
IMHO, AIs become unstable because they are learning without a purpose. You can create huge training sets by adding noise to a small set of samples, but without a purpose all you've really created is a fancy regurgitation machine. The so-called AI does not really understand what it has learnt. It's much more ML than AI.
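A sketch of the kind of noise-based augmentation I mean, with illustrative numbers only: the inflated dataset looks large, but statistically it is just the handful of originals repeated.

```python
# Minimal sketch: inflating a tiny dataset by adding noise. The "new" samples
# carry no new information about the task; a model fit to them can only
# restate the originals.
import numpy as np

rng = np.random.default_rng(42)

def augment_with_noise(samples, copies=1000, noise_std=0.01):
    """Return `copies` jittered versions of each original sample."""
    samples = np.asarray(samples, dtype=float)
    jitter = rng.normal(0.0, noise_std, size=(copies,) + samples.shape)
    return (samples[None, :, :] + jitter).reshape(-1, samples.shape[1])

originals = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.4]])  # tiny real dataset
augmented = augment_with_noise(originals)

print(augmented.shape)          # (3000, 2) -- looks like a big dataset
print(augmented.mean(axis=0))   # ...but its statistics are just the originals'
print(originals.mean(axis=0))
```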
In the news recently: a piece of the sun breaking off. If you think everything works the same at all scales, then this news comes as a shock. If you think, as I do, that long-range forces such as magnetism and gravity do not behave the same way at massive scales as they do in the laboratory, then it comes as no surprise.
Firstly, in this event the magnetic field lines have reconnected, releasing large amounts of energy, which then cools the newly created plasma bubble to the extent that we observe the Leidenfrost effect.
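For a sense of the energies involved, the magnetic energy stored per unit volume follows the standard formula below; the field strength plugged in is only an assumed, order-of-magnitude value for a solar prominence, not a measurement from this event.

```latex
u_B \;=\; \frac{B^2}{2\mu_0}
    \;\approx\; \frac{(10^{-2}\,\mathrm{T})^2}{2 \times 4\pi\times10^{-7}\,\mathrm{T\,m\,A^{-1}}}
    \;\approx\; 40\ \mathrm{J\,m^{-3}}
```

Multiplied over the enormous volume of a prominence, even a modest field like that stores a vast amount of energy for reconnection to release.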