Is "AI"? = "What We know"? + "Something else"?
AI Understanding

Is "AI" = "What We know" + "Something else"

Testing GPT-3, and learning from the hype-failures of Word2vec and BERT.

GPT-3 is now making waves, and several startups are already trying to offer solutions based on it, ranging from apps that write email for you to tools that write front-end code automatically. Some argue that this might be a first step towards artificial general intelligence; others argue that the examples are cherry-picked. One might wonder: haven't we seen this hype before, first with Word2vec, then with BERT, and now with this?

Haven't we been told, several times, that we have figured out how AI understands language? And haven't we seen, just a few months ago, people saying that self-driving autonomous cars are still a few decades away?

Even so, the GPT-2 code and model were initially withheld on the grounds that they were too dangerous to release to the public. Are we now assuming that all the issues, such as bias and explainability, have been fixed?

It can be argued that training a language model could finally make computers understand how humans speak. In other words, given enough computing power and data, we may stumble upon artificial intelligence somehow.

So, how do we understand this?

Let us start with the method called vectorization. A few years ago, researchers figured out that machine translation would work better if we compared the vector representations of the two languages. Then, in 2013, word2vec was published. It can do the famous analogy: king is to queen as man is to ...? The model can answer: woman. The same can be done for places, countries and so on.
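To make that analogy concrete, here is a minimal sketch of the vector arithmetic, assuming the pretrained "glove-wiki-gigaword-50" vectors available through gensim's downloader (any pretrained word-vector model exposing most_similar() would behave the same way):

```python
# Minimal word-analogy sketch: king - man + woman ≈ queen
import gensim.downloader as api

# Downloads a small pretrained GloVe model (~66 MB) on first use.
vectors = api.load("glove-wiki-gigaword-50")

# "king is to queen as man is to ...": add the 'woman' direction, subtract 'man'.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', <similarity score>)]
```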

Suddenly there was a lot of interest in it. All sorts of applications were looked into: sentence2vec, doc2vec and the like were born, word2vec was applied to genes, and so on. However, it soon dawned on the community that word2vec hadn't delivered on its promised results, and efforts to understand how and why it worked mostly ended in failure. Theories proposed that it had something to do with the distributional hypothesis, and that was that.

It all boils down to how we test an AI model or algorithm. Of course there is a test set on which the model can be evaluated, but when we take a model trained and tested on one particular dataset and use it in large-scale applications, shouldn't we test it the way we test other software?
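As a thought experiment, such software-style testing could look like the sketch below: hand-written behavioural checks that are independent of the dataset used to build the model. The load_model and predict names are hypothetical placeholders, not a real library.

```python
# A sketch of treating a trained model like any other software component:
# behavioural tests written by hand, not drawn from the training/test split.
import pytest  # any test runner works; pytest is just a common choice

from my_project import load_model  # hypothetical helper that returns a deployed model

model = load_model("sentiment-v1")  # hypothetical model handle

@pytest.mark.parametrize("text,expected", [
    ("The service was excellent.", "positive"),
    ("The service was terrible.", "negative"),
])
def test_obvious_cases(text, expected):
    # Minimum functionality: cases a deployed model must never get wrong.
    assert model.predict(text) == expected

def test_typo_invariance():
    # Invariance check: a small typo should not flip the prediction.
    assert model.predict("The service was excellnt.") == model.predict("The service was excellent.")
```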

As the cliché goes, debunking nonsense takes more time than creating it. That is also true in this case, and in this era of fake news. So how do we test an AI model independently of the data that was used to create it?

Second, how do we know the code used all of the data to create the model, and not just the datapoints that are easy to process or easy to memorize?

Third, how do we even know the model actually creates something new, rather than just searching for previous examples in the data and presenting them as answers? Researchers found that on most of the tasks used for validating an AI model, no intelligence is needed, just "something else": https://text-machine-lab.github.io/blog/2020/bert-secrets/
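One rough, admittedly simplistic way to probe that question is to measure how much of a model's output already exists verbatim in its training corpus, for example as long word n-grams. The sketch below is illustrative only; the corpus and generated text are placeholders.

```python
# Rough memorization probe: what fraction of the generated text's long
# n-grams already appear verbatim in the training corpus?
def ngram_overlap(generated: str, corpus: str, n: int = 8) -> float:
    """Fraction of n-word sequences in `generated` that occur verbatim in `corpus`."""
    gen_tokens = generated.lower().split()
    corpus_text = " ".join(corpus.lower().split())
    ngrams = [" ".join(gen_tokens[i:i + n]) for i in range(len(gen_tokens) - n + 1)]
    if not ngrams:
        return 0.0
    copied = sum(1 for g in ngrams if g in corpus_text)
    return copied / len(ngrams)

# A high score suggests the "new" text is largely stitched together from
# memorized training fragments rather than genuinely generated.
print(ngram_overlap("the quick brown fox jumps over the lazy dog today",
                    "the quick brown fox jumps over the lazy dog"))
```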

In these cases, how do we even know that GPT-3 is not another Google search masquerading as an intelligent model? Can we even run that kind of comparison?

In other words, can the things done by GPT-3 or other AI models also be achieved via manual or brute-force methods? Is there a unique value proposition here, or is it just another way of doing the same thing?
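One way to ask that question concretely is to pit the expensive model against a brute-force baseline, say a plain TF-IDF nearest-neighbour retriever built with scikit-learn, on the same task. The toy corpus below is purely illustrative.

```python
# Brute-force baseline: answer a query by retrieving the most similar
# sentence from a corpus, with no learned "intelligence" at all.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Word2vec learns word vectors from co-occurrence statistics.",
    "GPT-3 is a large autoregressive language model.",
    "BERT is trained with a masked language modelling objective.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(query: str) -> str:
    """Return the corpus sentence most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return corpus[scores.argmax()]

print(retrieve("How is BERT trained?"))
```

If a baseline like this answers most of the test questions, the large model is not adding much unique value on that task.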

Finally, again as the cliché goes, we don't know what we don't know. Can GPT-3, or Google search for that matter, surprise us? Can we ask open-ended questions like "show me what I haven't read" or "show me important but obscure, unknown information"?

How do we even dig out information that is unknown to us with the help of our AI tools?

Finally, there is a line of thinking that we don't need to know how something works in order to use it. That is a topic for another day.


By Rajasankar, Founder Naturaltext

Very astute, and I agree that the premise for even a simple AI algorithm is how broad-based it can be. While these codes are constantly evolving, we sometimes seem to forget the business basis of these endeavours. Also, the validity or accuracy of these tools is not tested, and I understand that word2vec is a prime example of that; no other tool is able to verify the results produced. These tools look at solving one of these problems: 1. error elimination, 2. standardization, 3. cost efficiency (reducing manpower), 4. speed and efficiency in handling large data sets. However, as we get deeper into the development of these tools, we seem to forget: what did I want to find out, and more importantly, why?
