AlphaCode, Intelligence and Towards General AI
Playback of AlphaCode's generation process | Image Credits: https://alphacode.deepmind.com/

"AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding." (Blog Post)

This is an almost month-old announcement by DeepMind, which is akin to 2-4 sprints and 1/12 of a generation in the AI research world. I took some time to reflect on it, not only to appreciate the technique/scale/complexity, but also to sleep on its implications for General AI - more specifically, on generative code, automation and symbolic logic.

***

Is this intelligence? | Since Turing, academics and sci-fi writers have wondered for decades whether a machine's manipulation of bits could ever amount to "general human intelligence". Turing's approach was to ask whether a computer program could be trained in such a way that a human evaluator would not be able to distinguish the AI's textual responses from those of a second human. He named it "the imitation game" because the AI attempts to mimic the natural responses of another human.

Since the release of GPT-3 and, most recently, China's Wu Dao 2.0 (which was trained on both English and Chinese corpora), models can do reasonably well at essay writing, answering general-knowledge questions, doing simple math, holding a conversation in context and translating, all almost indistinguishable from an average human. I consider the Turing test milestone to be arguably passed in a growing number of domains.

Is this intelligence? I don't know.

But are these intelligible imitations of a human's text responses? My sense would be yes.

***

Watch the code generation | Transformers are a necessary digression in my thoughts on AlphaCode, because they are the underlying technique that allows AlphaCode to "understand" problem statements in English (the input language) and "translate" them into potential code blocks in various programming languages. It is a beauty to watch the playback of how the code is generated, and to see what the AI is "paying attention" to as the colours light up in the problem description and in the already-generated sequence of the code block.

Watching the code generation is insanely beautiful. Go click on the play button.
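For intuition, here is a minimal sketch of scaled dot-product attention, the core transformer operation behind those highlights. The shapes and toy data are mine for illustration, not AlphaCode's:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention (Vaswani et al., 2017).
    Q, K, V: (seq_len, d) arrays. Returns attended values plus the
    attention weights, which are what the playback visualises when
    tokens in the problem statement "light up"."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # how strongly each query matches each key
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# Toy example: 2 decoder (code) positions attending over 4 problem-text tokens.
rng = np.random.default_rng(0)
K = V = rng.normal(size=(4, 8))   # encoder states over the problem statement
Q = rng.normal(size=(2, 8))       # decoder states for the next code tokens
out, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))              # each row sums to 1: where the model "looks"
```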

After many different code blocks are generated, there is a process of clustering the different code blocks by their behaviour and checking them against example outputs, before prioritizing which viable code blocks to submit for the contest.
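A rough sketch of that filter-and-cluster step might look like the following; run_program, the toy callables and the parameters are hypothetical stand-ins, not DeepMind's implementation:

```python
from collections import defaultdict

def select_candidates(programs, example_tests, probe_inputs, run_program, k=10):
    """Sketch of AlphaCode-style sample selection.
    1) Keep only programs that pass the example tests from the problem.
    2) Cluster survivors by their outputs on extra probe inputs; programs
       with identical behaviour are likely semantically equivalent.
    3) Submit one representative from each of the k largest clusters.
    run_program(program, input) is a hypothetical sandboxed executor."""
    survivors = [p for p in programs
                 if all(run_program(p, i) == o for i, o in example_tests)]
    clusters = defaultdict(list)
    for p in survivors:
        behaviour = tuple(run_program(p, i) for i in probe_inputs)
        clusters[behaviour].append(p)
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:k]]

# Toy demo: "programs" are Python callables, run_program simply calls them.
progs = [lambda x: x + 1, lambda x: x + 1, lambda x: x * 2, lambda x: 2 * x]
tests = [(1, 2)]   # on input 1, the expected output is 2 (all four pass!)
picked = select_candidates(progs, tests, probe_inputs=[0, 3, 5],
                           run_program=lambda p, i: p(i))
print(len(picked), "cluster representatives")   # 2: {x+1} and {x*2, 2*x}
```

Note how the probe inputs separate the two behaviours that the single example test alone could not distinguish.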

Here's a GitHub x OpenAI project called Copilot which demonstrates how these domain-specific AIs can be incrementally helpful and valuable for us (https://copilot.github.com/) in everyday life… and probably a step towards building even better General AI.

***

Towards General AI | From my perspective, the next several visible milestones towards general AI are:

  • Meta Reinforcement Learning | Why? Learnings/patterns could be economically and meaningfully transferred between AI systems trained on different sub-domains, building towards a composite meta-AI (rather than one re-designed and retrained from scratch). | Potential proof: build deep RL models that can play different games, compose them to play on held-out games/genres, and show that the system is stable (see the toy sketch after this list).
  • Self-improving code-generating AI | Why? If one builds an AI good enough to improve on itself, it can endlessly improve on itself, i.e. AI creating better AI ad infinitum. | Potential proof: the generated code (child) is able to reflect on and edit its own code generator (parent) to improve its performance against a particular code-generation benchmark. A sub-proof would be a reduction in the complexity of the parent while achieving the same performance in the child. Input data should be invariant across experiments.
  • Generation of a set of axioms that is self-consistent | Why? To create an AI that is able to generate coherent and reasonable symbolic logic from scratch, which may imply the AI is able to reason about its experiences in general. | Potential proof: given a large corpus of digitalized math formulas and workings, some of which may be erroneous, 1) build an AI that can generate a set of axiomatic functions and mappings that can derive most of the training-set math with the least number of bits, and 2) have the AI prove that everything can be derived from the said generated set of axioms.
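To make the first milestone's "potential proof" concrete at toy scale, here is a deliberately tiny sketch: multi-armed bandits stand in for games, tabular value estimates stand in for deep RL policies, and averaging those estimates stands in for composition. Everything here is illustrative, not how a real meta-RL system would be built:

```python
import random

class Bandit:
    """Toy 'game': a 3-armed bandit with game-specific reward means."""
    def __init__(self, means):
        self.means = means
    def play(self, arm):
        return random.gauss(self.means[arm], 0.1)

def train_policy(game, steps=2000, eps=0.1):
    """Tabular epsilon-greedy value estimates for one game
    (a stand-in for training a deep RL policy)."""
    q, n = [0.0] * 3, [0] * 3
    for _ in range(steps):
        arm = random.randrange(3) if random.random() < eps else q.index(max(q))
        r = game.play(arm)
        n[arm] += 1
        q[arm] += (r - q[arm]) / n[arm]   # incremental mean update
    return q

def compose(policies):
    """Crude meta-policy: average the specialists' value estimates."""
    return [sum(q[a] for q in policies) / len(policies) for a in range(3)]

def evaluate(q, game, steps=1000):
    arm = q.index(max(q))                 # act greedily under the composed values
    return sum(game.play(arm) for _ in range(steps)) / steps

random.seed(0)
train_games = [Bandit([1.0, 0.2, 0.1]), Bandit([0.9, 0.3, 0.0])]
holdout = Bandit([1.1, 0.1, 0.2])        # same genre, unseen parameters
meta = compose([train_policy(g) for g in train_games])
print("held-out return:", round(evaluate(meta, holdout), 2))  # non-trivial return = transfer
```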

***

Refrains on languages, programming and math

As a human, I like to think that our consciousness models and reasons about the world using language and math:

  • Language can be used to compose, define and describe math; similarly, through encoding/decoding and, more recently, AI and ML techniques, math can also be used via programming to bring further formalism and operations to language.
  • Natural language is articulate, expansive, a shared experience, imaginative and empathetic; in its imprecision, it is more general and expressive.
  • Math allows us to more precisely formulate our empirical observations and understanding of the universe on a numerical/algebraic basis; it is a conceptual model of existence that is rational, reasonable, and hopefully self-consistent and complete.
  • Programming is then somewhere in between - it abstracts away the repetitive drudgery of math (logic-gate operations, arithmetic logic, matrix multiplications, etc.) into a set of increasingly complex instructions, building up from simple functions and logic towards the expressiveness of a natural language (a toy illustration follows below).
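As a toy illustration of that ladder of abstraction, here is addition built first from gate-level operations, then from the language's own arithmetic, then as a named function edging towards natural-language expressiveness (a sketch, valid for non-negative integers only):

```python
# Layer 0: addition from logic-gate operations (repeated half-adds on bits).
def gate_add(a, b):
    while b:
        carry = (a & b) << 1   # AND + shift: the carry bits
        a = a ^ b              # XOR: sum without carries
        b = carry
    return a

# Layer 1: the arithmetic the language gives us.
def builtin_add(a, b):
    return a + b

# Layer 2: a named abstraction that reads almost like natural language.
def total_cost(prices):
    return sum(prices)

assert gate_add(19, 23) == builtin_add(19, 23) == 42
print(total_cost([19, 23]))  # 42
```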

Ps. "DoSomething();" - while this line is often used as a placeholder in programming tutorials or pseudo-code, I muse about the day when I could type or say "DoSomething();" and my co-pilot AI would infer my context and intention with reasonably good accuracy; where it is mostly unambiguous it would take the intended action to support me, and where it is ambiguous it would offer me the top 5 intended actions to choose from and/or a prompt to clarify the context… and learn from there on.
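If I squint, that loop might look something like the sketch below; the intent model, the action names and the confidence threshold are all hypothetical:

```python
import random

class ToyIntentModel:
    """Stand-in intent model: scores canned actions against keywords in
    the context. A real co-pilot would use a learned language model."""
    def __init__(self):
        self.actions = {"open_tests": ["test"], "format_file": ["format"],
                        "commit_changes": ["commit", "save"]}
        self.feedback = []

    def rank_intents(self, context):
        scored = [(a, sum(w in context for w in kws) + random.random() * 0.1)
                  for a, kws in self.actions.items()]
        scored.sort(key=lambda x: -x[1])
        total = sum(s for _, s in scored) or 1.0
        return [(a, s / total) for a, s in scored]   # normalise to pseudo-probabilities

    def learn(self, context, chosen):
        self.feedback.append((context, chosen))      # … and learn from there on

def do_something(context, model, threshold=0.9, top_k=5):
    ranked = model.rank_intents(context)
    action, confidence = ranked[0]
    if confidence >= threshold:
        return action                                # mostly unambiguous: just act
    options = [a for a, _ in ranked[:top_k]]
    print("Did you mean one of:", options)           # ambiguous: offer the top choices
    chosen = options[0]                              # stand-in for the user's pick
    model.learn(context, chosen)
    return chosen

print(do_something("please commit and save my work", ToyIntentModel()))
```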
