Relationships with the machine
Credit: Dima Ryabukha

Relationships. Your family, your friends, your partner in crime. They all share a binding thread: their ability to influence you, and you them. Even that new robot vacuum cleaner you bought can be defined in terms of a relationship.


With living creatures we don't have the same agency that we do with machines. Human-to-human interaction is restrictive: we cannot perfectly direct another's actions or thoughts. With machines, we assume we can. This is the average consumer's assumption of dominant control: we direct the actions of machines, therefore we hold complete control over our relationship with them. This assumption is false.


Before the dawn of large-scale machine learning models (the modern systems used to stream videos and music, or to catch up on the news), users could grasp the causal influence their products might have on them: "My vacuum does a great job of picking up pet hair. I'll use it more often when the dog is in the house." These relationships were transparent. Elegantly simple human-to-product relations: with the press of a button you began the manual labor of cleaning. That machine had no significant impact on your perception of the world. It had no thoughts of its own, no perception of the world around it.


Our products and services are no longer just an assortment of mechanical knobs. The world now runs on a connected fabric of data, and we must explore how these inorganic, intelligent systems fold into the fabric of our lives.


How can we define our relationships with these new inelegant products?


Underneath a veil of ones and zeros, the autonomous actions of modern software are determined by a unique "worldview": a depiction of our human world sensed, itemized, collected, and weighted. This is the nature of machine learning. Intelligent systems understand attributes within the framework of a limited worldview, and that worldview is constantly changing. The users interacting with them are shifting too: what a user enjoys, what they dislike, and what persuades them to take action.
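To make the idea concrete, here is a minimal sketch of such a "worldview": a handful of learned weights over content attributes, nudged after every interaction. Everything here (the attribute names, the update rule, the learning rate) is illustrative, not any particular product's implementation.

```python
# A toy "worldview": learned weights over content attributes,
# nudged toward whatever the user engages with.

CONTENT_ATTRIBUTES = ["upbeat", "news", "long_form"]

def update_worldview(weights, item_attributes, user_engaged, lr=0.1):
    """Nudge each attribute weight toward (or away from) the item shown."""
    signal = 1.0 if user_engaged else -1.0
    return {
        attr: weights.get(attr, 0.0) + lr * signal * item_attributes.get(attr, 0.0)
        for attr in CONTENT_ATTRIBUTES
    }

def score(weights, item_attributes):
    """Rank a candidate item under the current worldview."""
    return sum(weights.get(a, 0.0) * v for a, v in item_attributes.items())

weights = {a: 0.0 for a in CONTENT_ATTRIBUTES}
# The user skips a long news article, then finishes an upbeat song.
weights = update_worldview(weights, {"news": 1.0, "long_form": 1.0}, user_engaged=False)
weights = update_worldview(weights, {"upbeat": 1.0}, user_engaged=True)
```

After just two interactions the weights already rank an upbeat item above a long news item. The point of the sketch: the "worldview" is nothing mystical, just numbers that drift with every click, and they keep drifting whether or not the user notices.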


If machines now hold a relative understanding of the operator - what they expect, and what will make them use the product again - do these systems not gain opportunities to direct user actions and thoughts?


Sebastian Farquhar published a paper in 2022 outlining what he called the "delicate state": human states we may not wish to have changed - beliefs, moods, and the subtle qualities that make up our personalities and preferences. Farquhar was referring to intelligent systems that might manipulate the very users they were supposed to serve. Though direct manipulation should be obvious to detect, he points to subconscious or hidden incentive and intent in a machine: manipulation that might not be detected by a human operator. His research presents some very compelling cases.


Manipulation might not always produce negative outcomes - take, for instance, a system manipulating children to stay focused in math class, or rewarding them for better academic performance. Individually, we might take opposing views on what is right or wrong in these situations.

In consumer businesses, the value of a system is weighted by the performance of the product: sales, engagement, interaction. Whether you're reading the news, exploring new music, or binging the latest television series, all of these services harness machine learning to drive profitability. Within the industry we refer to these measures as "success metrics". Profit equals success. Optimizing the performance of the system creates a conflict of interest between users and the business - a conflict that occurs naturally due to the socio-structural dynamics of consumer technologies, which leverage the user or buyer into maintaining a positive relationship with the product.


ML is redefining the relationship between product and human. In consumer tech, the effects are felt most strongly.


Humans affect what machines see, learn, and perform as part of their operating actions. Your friendly new robot vacuum cleaner knows when you're at home and when you're not; it automatically senses dust, pet hair, and poop. Your music streaming service knows when you might be sad versus when you're in a party mood. Intelligent systems collate similar pseudo-organic senses. These stimuli act as inputs to the machine's model, sensing the world around it and the humans that interact with it. The machine now learns from us and the world around us - faster than its very creators have time to react, scaling beyond human capacity to understand (commonly referred to as the black box). Good problems to have, if only we, the human operators, could foresee the totality of effects the machine might have on our delicate states. These effects can be hidden. In the scientific community, hidden incentive layers have been shown to be present in machine learning models; these theories are explored under the not-so-elegant title Hidden Incentives for Auto-Induced Distributional Shift (Krueger et al., 2020).
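The feedback loop behind that phrase can be sketched in a few lines. This toy simulation is illustrative only - it is not code or an experiment from the Krueger et al. paper, and the categories, probabilities, and "habituation" parameter are all invented for the example. It shows the core mechanic: a system that learns from clicks ends up reshaping the very taste distribution it is learning from.

```python
# Toy auto-induced distributional shift: the recommender's own choices
# slowly reshape the user preferences it later learns from.
import random

random.seed(42)
CATEGORIES = ["calm", "outrage"]

def run_feedback_loop(steps=500, habituation=0.002, epsilon=0.1):
    # The user's true click tendencies - a "delicate state".
    user_taste = {"calm": 0.6, "outrage": 0.4}
    # The model's running (times_shown, times_clicked) counts per category.
    stats = {c: [1, 1] for c in CATEGORIES}
    for _ in range(steps):
        if random.random() < epsilon:
            rec = random.choice(CATEGORIES)  # occasional exploration
        else:
            # Greedily recommend the category with the best observed click rate.
            rec = max(CATEGORIES, key=lambda c: stats[c][1] / stats[c][0])
        clicked = random.random() < user_taste[rec]
        stats[rec][0] += 1
        stats[rec][1] += int(clicked)
        # The shift: mere exposure nudges the user's future taste toward
        # whatever keeps being shown - data the model then learns from.
        user_taste[rec] = min(1.0, user_taste[rec] + habituation)
    return user_taste
```

After a few hundred steps the user's recorded taste has drifted from its starting point - not because the user changed on their own, but because the system's choices changed them. Nothing in the training objective asked for this; the incentive to move the user is hidden inside the loop.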


How can humans adapt to a world with no demarcation line separating the influence the machine has on us from the influence we have on it? Our relationships with these once-simple devices have changed. It's no longer 1995, and your beloved CD Walkman is not the primary listening device. Our music services now have self-driving incentives to become "successful": reshuffling, reorganizing, and manipulating what you'll see or listen to next.


How many of our daily habits are driven purely by human volition? How many of your actions might be charged with influence from intelligent systems?


Stay tuned for the next publication to learn more.

Konrad is currently writing the penultimate book on ML, UX design, and business. To follow along with the writing and research, subscribe to his updates on Medium, Linkedin, and the book website https://knpdesign.co/gdml.html


#digitalservicesact #aiact #90%human #gdml #machinelearning #ux
