The road to AGI

This is an attempt to mash together my thoughts on recent reading, especially these two pieces:

  1. LeCun, Bengio & Hinton's (LBH's) opinion: "What's next for deep learning".
  2. Robust.ai's latest by Gary Marcus.

Firstly, I'd like to comment that the pioneer Jürgen Schmidhuber is consistently, and incorrectly, left out of editorials on the "Godfathers of AI". Schmidhuber's PhD students were responsible for the recurrent LSTM unit and make up a substantial part of DeepMind's London researchers focused on AGI.

[Update: Jürgen has spoken! Actually tweeted, for only the 2nd time!] I've been pushing him for years. His article reviews the past decade and looks ahead to the 2020s and, of course, the references are longer than the writing! One day the field will upgrade to hyperlinks and bring publishing formats up to the modern day! Schmidhuber, as usual, spends most of the time crediting his students. Check out the full article (a 7-minute read), which talks about his first PhD student Sepp Hochreiter's LSTM, discusses the FNN (feedforward neural network), GANs, RL, metalearning & something we'll be discussing with our supported labs at GTC, March 24: "Virtual AI v. Real AI". Schmidhuber highlights much of the "symbolic" work his students did over the 2010s (see Marcus' thoughts below) and explains how profound Hochreiter's LSTM was to today's technology. I'm proud to know Sepp, who runs the Institute for Machine Learning at Johannes Kepler University in Linz, Austria. He is, by far, the funniest man at any AI conference, anywhere!

Interesting fact: when I was researching deep learning (DL) at the University of Leeds in early 2015, Jürgen was answering my questions. As it turned out, my research involved proving and reproducing what he'd already been doing at MICCAI: using convolutional neural networks to read histology images far faster, and far more accurately, than human experts. After some obvious resistance, DL finally took over MICCAI, and the radiology world at large. It features heavily in NVIDIA's Clara development. Little did I know that later that year I would be flying to Munich with NVIDIA, watching Schmidhuber's student Wonmin Byeon present PyraMiD-LSTM at MICCAI. She later joined NVIDIA. Healthcare is the entire reason I decided to research DL, against the advice of my Supervisor. Interestingly, he warned me off Schmidhuber too. I later realised he was just jealous of him. Muppet!

Schmidhuber's projections for the near future include trading in actual data, a fascinating prospect. He goes on to talk about something I strongly believe too: that privacy should not be something that blocks progress towards AGI, especially in healthcare. As he writes: "the big wide world will not offer any more privacy than the local village … some nations may find it easier than others [to progress with AI] … at the expense of the privacy rights of their constituents". He's (probably) talking about China!

Schmidhuber has also been researching artificial curiosity since 1990, "incorporating mechanisms that aid in reasoning". He predicts that very soon there will be "AI for All", with "self-replicating & curious & creative & conscious AIs".
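To ground the idea, here is a toy Python sketch of curiosity as intrinsic reward: the agent prefers actions whose outcome its world models disagree on, and rewards itself for prediction error. The linear dynamics, the model-disagreement trick and every number here are my own illustrative assumptions, not Schmidhuber's actual formulation:

```python
import numpy as np

# Toy sketch of artificial curiosity: the agent is drawn towards experiences
# its world model cannot yet predict. "Expected surprise" is approximated by
# disagreement between two learned linear models. Purely illustrative.

rng = np.random.default_rng(0)
w1 = rng.normal(size=2) * 0.1        # two world models: s' ~ w @ [s, a]
w2 = rng.normal(size=2) * 0.1

def world(s, a):                     # hidden true dynamics the agent explores
    return 0.8 * s + 0.5 * a

s = 1.0
for step in range(300):
    # Curiosity: pick the candidate action where the two models disagree most.
    candidates = rng.uniform(-1, 1, size=5)
    disagreement = [((w1 - w2) @ np.array([s, a])) ** 2 for a in candidates]
    a = candidates[int(np.argmax(disagreement))]

    s_next = world(s, a)
    for w in (w1, w2):               # online update of both world models
        pred = w @ np.array([s, a])
        w += 0.05 * (s_next - pred) * np.array([s, a])
    s = s_next

print(w1, w2)  # both should approach the true dynamics [0.8, 0.5]
```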

With the spotlight fiercely on OpenAI right now, artificial general intelligence (AGI) remains the scariest aspect of the vast field of AI, because it's unsolved. AGI encompasses neuroscience & every facet of the field, so it is a race both to understand our own brain, also unsolved, and to build its artificial version. The common misconception is that this is foolhardy and dangerous. Like travelling to Space? That has killed far more humans than AI, yet we accept it as part of our progress as a Species. As with Space exploration, it is the small incremental discoveries that can, have & will generate billions in revenue. Take the lowly blind-spot assistance in most new cars today: it's currently the only (optional) tool I use, but only because it's the only assistance I need right now. The rest is already taken for granted, like the rear-view camera.

OpenAI just received its best gift yet in the form of the University of Wyoming's & ex-Uber AI Labs' Jeff Clune. Jeff's work lies on the border of computer science and philosophy, and his students' work, POET, truly lives up to its name: Paired Open-Ended Trailblazer endlessly generates increasingly complex and diverse learning environments and their solutions. The paper references Schmidhuber's PowerPlay work of 2011. Jeff will be talking at GTC, at my request, on his recent work, which expands on the POETic abstract that an AI can "in effect build its own diverse and expanding curricula, with the solutions to problems at various stages … stepping stones towards solving even more challenging problems later in the process." All of this, of course, takes place in simulation, in a game, harnessing what are known as evolution strategies (ES).
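To make that concrete, here's a minimal, self-contained Python sketch of the evolution strategies idea. Everything in it (the toy fitness function, the target point, the hyperparameters) is my own illustrative assumption; real POET co-evolves entire environments alongside their agents, which is far richer than this single loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Hypothetical stand-in for "reward in a generated environment":
    # negative squared distance to an arbitrary target point.
    target = np.array([3.0, -1.5])
    return -np.sum((theta - target) ** 2)

def evolution_strategies(theta, iterations=200, population=50, sigma=0.1, lr=0.02):
    for _ in range(iterations):
        noise = rng.standard_normal((population, theta.size))  # random perturbations
        rewards = np.array([fitness(theta + sigma * n) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalise
        # Nudge parameters towards perturbations that scored above average.
        theta = theta + lr / (population * sigma) * noise.T @ rewards
    return theta

print(evolution_strategies(np.zeros(2)))  # ends up near [3.0, -1.5]
```

No gradients through the model are needed, which is why ES parallelises so well across thousands of workers.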

This next part is beautiful:

"There is always the possibility that the feats we observe in the system today will be overshadowed by the achievements of tomorrow. In these unfolding odysseys there is an echo of the natural world. More than just the story of a single intelligent lifetime, they evoke the history of invention, or of natural evolution over the eons of Earth. These are processes that produce not just a single positive result, but an ongoing cacophony of surprises – unplanned advances rolling ahead in parallel without any final destination."

I've spent the last 8 years immersed in this huge field so forgive me if I roll my eyes at journalistic reminiscences down AI's memory lane and attacks (vs. criticism) on our progress. What is important is the next step, based on all prior knowledge, & whether we have addressed the ethics. We must also not forget to address today's devolution of democracy into <insert political views>, the need for equality everywhere, not just in AI research labs, and, most importantly, climate change. I made a comment earlier this week, in a discussion about OpenAI with the Head of UNEP's Environmental Peacebuilding Program, that it would be both tragic and ironic if, for all OpenAI's dedication and commitment to staying safe and open, they had overlooked the risk of San Francisco falling into the San Andreas fault. I watch a lot of movies and it always amazes me that science follows science fiction; people don't seem to realise this, or take it seriously enough. It's important to do what I do for a living and keep your eye on the Big Picture! That is just as important as being buried in the code within specific specialist folds.

GOFAI (good old-fashioned AI) is about 80 years old in human years, but only 8 years old in AI years, which equates to only about 3 or 4 analogous years of a human child. That first order of magnitude is really important. The rate of progress in research in this field, fuelled by the economic revenue generated by the small value-adds, is astounding. The Call to Action, happening right now, is putting that growth to good use: AI for Good, AI for Climate Change, AI for extreme weather prediction and recovery efforts. FDL focuses on this (sign up to be involved). While Sara Sabour & Hinton's capsules greatly improve the generality of neural networks (the ability to cope with more and more variety), they are one single tool in a vast ocean. [Update: Apple & CMU have an upgrade of CapsNets here, featuring at ICLR 2020]. As Robust.ai state, we need a lot more reasoning and common sense too. Forget whether we need to see 100 Mitsubishis to tell them apart from a Land Rover; why do we buy the Land Rover? The answers are endless and deeply personal. Vision, for example, only plays a small role (unless you're me, when it really all comes down to the colour or whether it sports "Marvel" and Iron Man gimmicks throughout!)

Self-supervised learning, of the kind humans do, is a very complex problem, but one we will solve eventually. Just like we went to the Moon. The bigger problem is ignorance, and the fear it generates, given that only a minute percentage of humans on the planet work in the field of AI. Hence the problem with equality. We need to get more women involved, sure, but also more overall diversity: to include Africa more, and other developing countries, and so on. Not just in AI! Remember bias; it is the field of AI research that has highlighted a lot of it. Remember that if you're reading this it's because of two main biases: 1. you've already added me on LI & 2. you're already following AI. What about everyone else? A few years ago a Cambridge study showed that medical health data used for training systems lacked significant percentages of Latin-American data. That's terrible! But I refer you back to Clune's earlier prose to remind you that, as a society, we're all still learning, every day, evolving every day (except in politics!)

I rarely get bogged down in the pettiness as I focus on the Big Picture, a trait from studying astrophysics. I'd recommend it to all. The really important work is the cross-discipline research, especially incorporating philosophy, psychology and neuroscience. There's a reason Bengio is talking about System 2 now (NeurIPS slides).

[Image: Star Trek Talosian]

I also refer you to the work of Philip Alvelda and his definition of consciousness: "the continual future prediction of our conscious state" (what we see/hear/feel…), or, in AI lingo, inference.

The trouble with System 2, however, is: how do you code love? Is love really another dimension we can't understand? If you've watched Interstellar, the argument is profound. Humans are able to love someone who is distant from us either in space (s/he went away) or in time (s/he passed away). I truly believe in The Force; my Mother's is extremely strong (RIP, 2009). We don't really understand love, though, we just accept it. The argument that it could simply be an attribute in another dimension, one we can't touch but can feel, is intriguing. Same with déjà vu, except that love does not depend on space-time. It remains unaltered. Our memories waver & fade. Pain certainly fades, grief too, but love remains. We can also time-travel, in our imagination, in our dreams, using the memories, the prior knowledge we amass. Love does not rely on the other 5 dimensions: height, width, depth (geometry/space), time & gravity. It is measureless (up to now), though we've measured gravitational waves? Is love gravity? Entanglement has been proven: the instantaneous influence of one particle on another. … either way, how do we code it?

Here's a very cool distraction, from the University of Heidelberg. I would also point you to Platonite and the work of Christoph von der Malsburg, another genius I've had the pleasure of dining with. Christoph's ideas on our ability to decompose scenes into independent descriptors and elements, reversing computer graphics, so to speak, and attaining confidence by comparing predictions to changing input, are exquisite. I refer you also to the work of Ha and Schmidhuber on World Models, and the search, in simulated reality (sims), to make sense of things, concepts, dreams, creativity. A vast amount of work is being done at NVIDIA with sims: look out for announcements at GTC on Isaac & Omniverse.
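As a flavour of the World Models idea, here is a conceptual Python sketch: learn a model of the environment's dynamics from experience, then roll out imagined trajectories ("dream") inside that model instead of the real world. The linear dynamics and toy data are purely illustrative assumptions of mine; the actual paper uses a VAE, an RNN and a small controller:

```python
import numpy as np

rng = np.random.default_rng(0)

# Collect (state, action, next_state) transitions from a toy 2-D system.
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
B_true = np.array([[0.0], [1.0]])
states, actions, next_states = [], [], []
s = rng.normal(size=2)
for _ in range(500):
    a = rng.normal(size=1)
    s_next = A_true @ s + B_true @ a + 0.01 * rng.normal(size=2)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

# Fit a linear world model s' ~ [s, a] @ W by least squares.
X = np.hstack([np.array(states), np.array(actions)])
Y = np.array(next_states)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Dream": roll the learned model forward without touching the real system.
s = np.zeros(2)
for t in range(5):
    a = np.array([0.5])
    s = np.hstack([s, a]) @ W
    print(f"imagined step {t}: state = {s}")
```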

Moving to the 2nd piece I referred to at the beginning of the article: Gary Marcus' extended version of the NeurIPS AI Debate, and his recent book with Ernest Davis. It's an epic, as with everything Gary does (I'm a big fan). Broadly speaking he's arguing for "a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible." Marcus sums up by imploring that we don't dismiss DL and all its value-adds, nor research on hardware, on ethics etc., but that we should most definitely "shift our perspective" and direction to the Big Picture. He argues this will bring us "much closer to a framework for AI that we can trust."

In the meantime, Elon Musk is stirring the pot after several pieces were published on OpenAI, saying that any organisation developing AI should be regulated. So that's every university in the world? Every tech company, including Tesla, including SpaceX, including PayPal, including NVIDIA. Again, my view is from orbit looking down. I'm not interested in regulation; someone else can debate that. I'm interested in how to harness it for All Humankind. Now. Look at how we educate our children and re-educate those that go astray. My daughter spends her days looking after under-18s in a secure home, captive because of welfare issues or criminal activities. Kids lacking nurture. She is 5'6" and gorgeous, of course I'm biased, but she can restrain a 6'4" street kid with a tendency for violence. She doesn't do it with power, she does it with respect. They respect her. The way we educate isn't perfect; there needs to be much more flexibility. My kids went to Montessori primary until they were 6 and 4, even in China (2002-3), but the way we educate the kids who need care the most isn't perfect. In my daughter's workplace it's outstanding, but that level isn't scaled across the world.

Happy Birthday to my beautiful badass, for tomorrow (Feb 21)!

All our knowledge, today, for how we treat children and adults, good and bad, should be channelled into how we teach AI: with compassion, with kindness, with respect. See the work by my friends at GoodAI, in Prague. Personally I think we will. How many people get angry when a guy kicks over the Boston Dynamics robot? We laugh at the DARPA trials, but when a human kicks a robot…

I know we'll get there. But maybe we need the "race". The US wouldn't have gone to the Moon if Russia hadn't launched Sputnik on October 4th, 1957. We like races, we like competition; it fuels us, it's also biological. But AI, code & mathematics aren't. There is absolutely no reason an artificially intelligent agent would be malevolent, unless it understands what 'malevolent' means AND we teach it that malevolence is good; exactly what we don't do with our children, most of us anyway! The danger, as our movies already tell us, comes when the parents, or the human creators, are malevolent. Had Ava been brought up in a loving, caring environment, the movie would have been very different. But then it wouldn't have been made either, because we love malevolence, right? Love and caring is a different movie, a chick flick!

Back in reality, in Europe, the EC has published its White Paper on AI. Thanks to TechUK for summarising it here. The "ecosystem of trust" for AI applications, they say, should include mandatory obligations for: diverse, representative training data, data privacy, robustness, reproducibility, resilience. Unfortunately AI will progress far faster than bureaucracy and, ultimately, the world at large is so chaotic that we really don't know what could happen, ever, anywhere, anyway. Risk mitigation for business truly has to take the astrophysics route of "live fast, die fast"; in astrophysics it's the slow-burners that last the longest, but there's nothing slow about AI, hence the incredible pace of startups, some even exiting before they're set up. The University of Pennsylvania's GRASP Lab just spun out Trefo, by Steven Chen & Professor Vijay Kumar, who I work with. I predict they'll be acquired very quickly! "Die fast" also relates to the ability to fail and learn, to adapt. This is a key survival skill in today's world, in AI or not. Ask the sufferers of the UK's latest storm flooding. Adapt or die. Or at least suffer.

As Marcus states, true intelligence "synthesizes knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult." Adaptation is key here. The best thing Humankind has right now is its diversity. We just need to use it. That includes criminals and evil-doers: if we didn't have bad, we wouldn't know what good is. It's all part of the education, and why large swathes of research labs are now looking at adversarial attacks. We know how to test and verify; we went to the Moon, we built petrol cars and still allow them to crash and kill 1,784 people a year (in the UK) and injure over 25,500 (2018)! AI will vastly decrease these numbers, but only once we've nailed connected cars and the infrastructure to enable them, such as 5G, & deep understanding, versus DL. Today, even GPT-2 is heavily criticized for falling short, but it's better than anything we currently have. Don't knock the pupil for getting a B, praise them! Even if they're getting paid $150,000 a year to study!

One tool which will greatly help is NVIDIA's Omniverse, which will provide physically accurate, high-res, photorealistic rendering of simulated reality, in which we can train AI agents to do absolutely anything. Game developers everywhere will be in high demand. No longer will gamers be sat in their chairs at home just playing; they'll be training tomorrow's Mars Rover, with AI researchers putting on their VR headsets and joining them in-world, with the rocket scientists, to push the latest update on their deep reinforcement or imitation learning algorithm.

This seems perfectly natural to me, since I've worked from home for over a decade now. Commuting sucks. Embrace the future. Embrace online technology and reduce travel, save the planet!

As far as getting to AGI goes, we've already made movies about it. First we need a system that we feel safe enough to let loose onto the internet, to learn everything Humans know. With all that knowledge we then have to teach it empathy, perspective and context. I already ask Google everything I don't know (we also need to educate the non-AI crowd, the Luddites, on how to use Google!). Schmidhuber's students & Microsoft then couple that with Transformer-based language models (the TP-Transformer) in order to step closer to our ability to automatically extract knowledge from massive datasets like the internet. But, probably, we'll get wrapped up first, for years, in arguing whether Google itself should be allowed to operate such an AI, alongside its Knowledge Graph! A point worth mentioning here is that very few AI researchers know much about computational engineering. This is key to AI, especially at the multi-GPU-node scale required for most natural language understanding. To then link that infrastructure, hardware and software, to something like Doug Lenat's CYC (1984) which, as Marcus reminds us, has captured facts from across "psychology, politics, economics, biology, and many, many other domains, all in a precise logical form", will require phenomenal parallel-processing and data-processing capability. Hence RAPIDS, & the reason NVIDIA has a 2,000+ DGX supercomputer. Taking Berkeley+Google's Reformer, or Meena, and reading/summarising swathes of novels is only one step, albeit a profound one when you factor in current progress in GPU engineering; but as Marcus states, language models are still only "a model of word usage, not a model of ideas". I would add that the value-add is being able to summarise all that knowledge so that we can make more informed decisions, faster and faster. This is work I'm currently involved in, with NASA. Since the world is warming and even rainforests are becoming tinderboxes, this is extremely life-changing AI. While we all struggle with misunderstandings and misconceptions, they rarely end in death.

Or so it seems.

I lost my Mother to one such misunderstanding. Had we a system, in 2009, that was able to consider her entire genetic makeup and state of health, she would be alive today. We could also have cured cancer, and been able to derive a vaccination for COVID-19 a week or so after the WHO declared it a global health emergency, not over a year from now! Let's solve data privacy with federated learning, and OpenMined. Now.
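For anyone wondering what federated learning looks like in practice, here is a minimal Python sketch of federated averaging (FedAvg) under toy assumptions of my own (linear models, three simulated "hospitals"). The point is that only model weights travel; raw patient data never leaves the premises:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few steps of linear-regression gradient descent."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three "hospitals", each holding private data from the same underlying model.
w_true = np.array([1.5, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ w_true + 0.05 * rng.normal(size=100)
    clients.append((X, y))

# Federated rounds: server broadcasts weights, then averages the local updates.
w_global = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches w_true without raw data ever leaving a client
```

Frameworks such as OpenMined's PySyft build on this pattern, layering privacy techniques like secure aggregation and differential privacy on top.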

Marcus suggests, in his third claim, that (hopefully in addition to Asimov's Three Laws) AI entities should be preloaded with CYC content, along with "tools for reasoning" (though he also states "CYC is far from perfect"). It would be a good start towards "subtle reasoning". For Marcus' "cognitive models" I refer you back to Platonite and Malsburg's work. Either way, I concur: "cognitive models should be one of the highest priorities in the field." See Robust.ai for the suggested path, but the real problem comes back down to simple civility. As Marcus says, "if students are afraid to speak, there is a serious problem", since academia seems to have forgotten that a student's sole purpose is to question their Supervisors, and unpopular/unknown topics won't be published. Girls have an even bigger problem with this misdirection. Unless you're Me!
