How do machines learn what we need?

The hardest part of every business is knowing exactly what your client wants.

As an entrepreneur, I see machines facing the same struggle: genuinely caring about people.

Modern machine learning systems for structured reasoning have made a great deal of progress in defining requirements in plain English.

AI researchers have excelled at natural language processing and even at the automated planning and execution of mutually contradictory goals, broadly used in robotics for motion planning.

The remaining gap is in the goal definitions, with their fuzzy concept of utility. Where do the goal definitions come from?

The fastest way to solve problems is to know exactly what those problems are.

The ability to identify problems people may face in their future is a crucial skill for salespeople and client managers. It requires asking questions, empathizing, listening, and understanding.

Podcast hosts, interviewers, and journalists polish this skill to excellence. What makes it genuine is the interviewer's ability to empathize and to ask questions that show they truly care!

The art is simple: place the client into two different situations and compare a "betterness" metric between them. To get to that point, the ability to imagine these possible situations is crucial for human salespeople.

My passion is to find out how to teach machines to do the same: to simulate clients' most probable experiences, and to get to the point where a value proposition can be made.

Making AI capable of asking questions and engaging in conversations opens the door to the fantastic capability of automatic problem identification!

The ability to conduct a live conversation, to empathize and interview, is the most important contribution the AGI research community and business adopters can make to bring happiness and efficiency to the requirements-gathering process.

I want machines to ask questions, to listen, and to imagine our most probable future experiences, both good and bad, which can then be explored and compared with each other. Psychologists call it "mentalization".

Engineers call it running multiple branches of simulations. Physicists call it many-worlds interpretation.
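That branching-simulation idea can be sketched in a few lines of Python. Everything here is a toy assumption invented for illustration: the two candidate actions, the utility numbers, and the noise model are placeholders, not a real client model. The point is only the shape of the technique: run many imagined branches per action, average a "betterness" score, and pick the action whose imagined futures score higher.

```python
import random

def simulate_outcome(action, rng):
    """One branch: a crude stochastic model of a client's imagined future.

    The base utilities below are made-up assumptions for this sketch.
    """
    base = {"do_nothing": 0.3, "offer_product": 0.6}[action]
    return base + rng.gauss(0, 0.1)  # noisy "betterness" of this branch

def expected_utility(action, branches=1000, seed=0):
    """Run many branches of the simulation and average their utility."""
    rng = random.Random(seed)
    return sum(simulate_outcome(action, rng) for _ in range(branches)) / branches

# Compare the two imagined situations and keep the better one.
best = max(["do_nothing", "offer_product"], key=expected_utility)
print(best)
```

A real system would replace `simulate_outcome` with a learned model of the client, but the comparison step, many branches per option and a single scalar to rank them, stays the same.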

It’s possible to build smarter robots by teaching them to learn exactly what your clients need.
