How do machines learn what we need?
Brian Greenforest
Tags: Chiplets, LLM, CV, SDR, Massively Parallel Distributed Compute, Nonlinearity, Causality, Consciousness
The hardest part of every business is to know exactly what your client wants.
As an entrepreneur, I see machines facing the same struggle: genuinely caring about what people need.
Modern machine learning systems for structured reasoning have made a great deal of progress in capturing requirements expressed in plain English.
AI researchers have excelled at natural language processing and even at automated planning and execution under mutually contradictory goals, techniques broadly used in robotics for motion planning.
The remaining gap is in the goal definitions themselves, with their fuzzy concept of utility. Where do the goal definitions come from?
The fastest way to solve problems is to know exactly what these problems are.
The ability to identify the problems people may face in the future is a crucial skill for salespeople and client managers. It requires asking questions, empathizing, listening, and understanding.
Podcast hosts, interviewers, and journalists polish this skill to excellence. What makes it genuine is the interviewer's ability to empathize and to ask questions that show they truly care!
The art is simple: place the client into two different hypothetical situations and compare them on a "betterness" metric. To get there, a human salesperson needs the ability to imagine those possible situations.
My passion is to find out how to teach machines to do the same: to simulate clients' most probable experiences and get to the point where a value proposition can be made.
Making AI capable of asking questions and engaging in conversations opens the door to the fantastic capability of automatic problem identification!
The ability to conduct a live conversation, to empathize, and to interview is the most important contribution the AGI research community and business adopters can make toward bringing happiness and efficiency to the requirements-gathering process.
I want machines to ask questions, to listen, and to imagine our most probable future experiences, both good and bad, so that those futures can be explored and compared with each other. Psychologists call this "mentalization".
Engineers call it running multiple branches of a simulation. Physicists call it the many-worlds interpretation.
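To make the idea concrete, here is a minimal sketch of that branch-and-compare loop in Python. Everything in it is a placeholder of my own: simulate_future stands in for a learned world model that imagines a client's possible futures, and betterness is a hand-written utility score. The point is only the shape of the loop: simulate several branches, score each, compare.

```python
import random
from dataclasses import dataclass

# A minimal sketch of "branching simulations", assuming a hypothetical
# simulator and a hand-written utility ("betterness") score.
# simulate_future() and betterness() are placeholders, not a real API.

@dataclass
class Scenario:
    description: str   # e.g. "client adopts the tool"
    outcomes: list     # sampled future states for this branch

def simulate_future(description: str, n_samples: int = 100) -> Scenario:
    """Stand-in for a world model: sample possible outcomes of a decision."""
    # Here outcomes are faked as random "hours saved per week"; a real system
    # would roll out a generative model conditioned on the conversation so far.
    base = 10 if "adopts" in description else 2
    outcomes = [random.gauss(base, 3) for _ in range(n_samples)]
    return Scenario(description, outcomes)

def betterness(scenario: Scenario) -> float:
    """Scalar utility: the expected value of the sampled outcomes."""
    return sum(scenario.outcomes) / len(scenario.outcomes)

# Put the client into two different situations and compare them.
branches = [
    simulate_future("client adopts the tool"),
    simulate_future("client keeps the manual process"),
]
best = max(branches, key=betterness)
for branch in branches:
    print(f"{branch.description}: betterness = {betterness(branch):.1f}")
print("Recommend:", best.description)
```

In a real system, I would expect the outcomes to come from rolling out a generative model conditioned on the conversation with the client, and the utility itself to be learned from the client's stated preferences rather than hard-coded.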
It is possible to build smarter robots by teaching them to learn exactly what your clients need.