A matter of context
It’s interesting that, as we talk about using AI more and more, the phrase we use is “human in the loop” rather than “AI in the loop”. That framing might not seem like much, but subtle as it is, I think it points to how we are internalizing this technology: not as a tool, but as a partner. If we thought of it more as a tool, we’d say “AI in the loop” or “AI-assisted human” or something similar.
We think of AI as more of a partner because it seems to be active - to have its own agency. This is true and not true at the same time. LLMs and other models are different from humans in a few ways (at least). One of them is that humans can never quite “quiesce”. We can sleep, we can be anesthetized, we can even be in a coma, but we are never fully “off”. We are active by default. Time always passes subjectively.
Models aren’t active by default; they’re passive. If a model isn’t being asked to generate the next token or otherwise run inference, it won’t do anything. It just sits there, with no “sense of time passing”, because there is no activity in whatever mind it has when it is not being asked to do work. We have to impose that agency from outside, either manually through prompting or via code (recipes or other techniques).
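To make that concrete, here is a minimal sketch of what “imposing agency from outside” looks like in code. The model call here, `fake_model`, is a hypothetical stand-in for real LLM inference (the actual API doesn’t matter for the point); everything that looks like agency lives in the outer loop, not the model.

```python
# A minimal sketch of "AI in the loop": the model is a passive function that
# produces output only when called; the driver loop supplies the agency.
# `fake_model` is a hypothetical stand-in for a real LLM inference call.

def fake_model(prompt: str) -> str:
    """Stand-in for model inference: passive until called, stateless between calls."""
    return prompt + " step"

def drive(goal: str, max_steps: int = 5) -> list[str]:
    """The outer loop is the 'agency': it alone decides when the model runs,
    what context it sees, and when to stop. Between calls, nothing happens."""
    transcript = [goal]
    for _ in range(max_steps):
        transcript.append(fake_model(transcript[-1]))
    return transcript
```

The design point is that all continuity (the transcript) and all initiative (the loop) are external to the model, which is exactly what “we have to impose that agency from outside” means.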
This also means that the mechanisms for self-awareness and self-correction are missing or weak. We can see some of this being put back in with newer models like o1 and DeepSeek, which have an “internal monologue” of chain-of-thought reasoning that seems to give them some stability and self-correction. The result is a better tool, but still a tool. It is, however, beginning to look more like a partner.
Will we get all the way there? We might. I think there are pieces missing still, like more robust continuous memory, better self-awareness and iteration. We may need model ensembles (similar to the brain being an ensemble of cortical stacks) to average out to the right answer more often. It’s hard to tell how deep this problem is, but we can see at least some of the problems.
In the meantime, it’s not clear whether it’s helpful to think of AI as a partner rather than a tool. Some of my coworkers get better results from what can only be described as persuasion and psychology, even emotional appeals. It’s hard to resist the temptation to personify something that seems to respond to emotional input. But I still think we are jumping the gun. For now, until the mechanisms of agency (and alignment) are more robust, it may be more effective to think of this as a tool - AI in the loop, centered on the human, not the machine.
Or maybe I’m just being an old futz!
Business Transformation Executive | Business Advisor & Coach | Strategic Growth | Data Storyteller | IT Strategic Leader | Visionary | Influencer | Innovator | Talent Cultivator | Board Member
Sam, I like your thought that we humans are always on - I do sometimes wonder how to truly quiet my mind. I suspect the robots and AIs we use for constant monitoring (market price trackers, security monitoring, other real-time sensing and assessing AIs) will become hard to distinguish from humans in this dimension. Many of my (non-tech) family and friends ask “is it listening?” about my phone and the bots around the house.
Professor of Global Governance, Australian National University and Founder and CEO, Dragonfly Thinking
On this: “We may need model ensembles (similar to the brain being an ensemble of cortical stacks) to average out to the right answer more often.” Sam Schillace, it is worth reading this book - A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins: https://www.hachettebookgroup.com/titles/jeff-hawkins/a-thousand-brains/9781541675797/?lens=basic-books
PM for extensibility of Copilot for Microsoft 365
A consistently high-ranking item on my project backlog is a dreaming AI, able to raise certain thoughts to higher focus. I’m not sure it will lead to AI being a better partner, but I know it will be interesting!