Everyday Life and Microprediction
Alex Moltzau
EU AI Policy | European AI Office of the European Commission | Visiting Policy Fellow at University of Cambridge
B2C Everyday Life Prediction Tools With Artificial Intelligence
Do we have the right as individuals to use artificial intelligence to attempt to predict behaviour in everyday life? As an example, should you be able to predict the risk of hiring a specific babysitter? In 2018, Predictim advertised a service that promised to vet prospective babysitters by scanning their presence on the web, social media and online criminal databases.
“Predictim provides a new and innovative way to vet people instantly using artificial intelligence and alternative data.”
Predictim never attracted any great investment ($100,000 in a seed round). In late 2018, Twitter and Facebook announced that Predictim had been barred from their platforms. It seems Joel Simonoff and Sal Parsa took no further action to start up a new company. In fact, in the good name or bad name of social media stalking, I now see that Joel Simonoff is a machine learning researcher with NASA.
What is okay and not okay?
However, the startup, with its rise and decline, poses an interesting question about the way we use technology to predict and to act on those predictions. With Predictim there was a clear case of inherent racism in the way it was structured. An AI scan for “respect” and “attitude”, with such a lacklustre understanding of ethics in the sphere of the home, seems an obvious intrusion. What is okay and what is not?
We still seem to think it is somewhat okay for companies to gather predictive information about our behaviour in our homes, as long as they speak with an eloquent voice or give us the answer we are looking for online. The same goes for our Fitbit (health measurements), phone (contextual awareness, location data) and social media (psychographics, preferences).
These predictive tools, embedded in far larger frameworks, can certainly carry bias just as bad as or worse than the babysitter app, but that is perhaps a discussion for another time.
Microprediction
Well, you may have heard about micromanagement: a management style in which a manager closely observes, controls or checks up on the work of subordinates or employees. The term can, however, also be applied to relationships outside a work context.
By saying “microprediction” out loud, I am using it more or less as a discussion point. Prediction is nothing new, and in a way you could say it is part of what makes us unique as humans:
“Symbolic abstract thinking: Very simply, this is our ability to think about objects, principles, and ideas that are not physically present. It gives us the ability for complex language. This is supported by a lowered larynx (which allows for a wider variety of sounds than all other animals) and brain structures for complex language.”
How wonderful, symbolic abstract thinking. Yet it raises the question of whether there is a limit to what we should or should not predict. There may even be questions of how to predict, or more likely laws, as ethical considerations pass into regulation. Indeed, prediction used to govern could be traced far back in various sciences; many mention The Prince by Niccolò Machiavelli, estimated to have been distributed as early as 1513.
Recent examples of rights protection in this context of prediction are the EU General Data Protection Regulation (GDPR) and the FDA's consideration of how to regulate machine learning in healthcare.
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead.
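To make that definition concrete, here is a minimal sketch in Python. The task, data and threshold are invented purely for illustration (it assumes scikit-learn is installed): the point is that no rule like “if hours > 2, then active” is ever written down; the model infers the pattern from examples.

```python
# A minimal sketch of "patterns and inference instead of explicit
# instructions". The toy data are invented; requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Toy examples: hours of activity per day -> "active" (1) or "not" (0).
# Nobody codes the decision rule explicitly.
X = [[0.5], [1.0], [1.5], [2.5], [3.0], [4.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)                       # the pattern is learned from data

print(model.predict([[2.0], [3.5]]))  # e.g. [0 1]: inferred, not coded
```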
Without definitive guidelines, there seems to be only a vague understanding of what is okay to do and what is not. The anthropologist Marilyn Strathern wrote a piece on this dating further back, Future Kinship and the Study of Culture (1995), in which she discusses how our notion of what is artificial has changed over time. The artificial, or artifice, is not set in stone, and what is conceived of as natural changes too.
Why microprediction?
Micromanagement generally carries a negative connotation, mainly because it signals a lack of freedom in the workplace. Microprediction, as a talking point, may need to be discussed further, and I have not defined it because I do not yet understand how it could be used in a good manner. However, to spark further interest, I want to finish my article with this comment from Predictim regarding ethics:
“We take ethics and bias extremely seriously,” Sal Parsa, Predictim’s CEO, tells me warily over the phone. “In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.”
- Brian Merchant, writing in Gizmodo, 6 December 2018
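Removing protected attributes from a training set, as Parsa describes, does not by itself make a model unbiased: other features can act as proxies for them. The sketch below is pure invention (a made-up “neighbourhood” feature standing in for such a proxy, again using scikit-learn), but it shows how a model trained without the protected column can still reproduce a disparity encoded in a correlated feature.

```python
# An invented sketch of proxy bias: the protected attribute is dropped
# from training, yet a correlated feature carries its signal.
from sklearn.linear_model import LogisticRegression

# Each row holds only a neighbourhood code. In this toy data the
# (hidden) protected group correlates perfectly with neighbourhood,
# and the historical labels are skewed against neighbourhood 1.
X = [[0], [0], [0], [0], [1], [1], [1], [1]]  # proxy feature only
y = [1, 1, 1, 0, 0, 0, 0, 1]                  # skewed historical outcomes

model = LogisticRegression().fit(X, y)

# The protected attribute never entered training, but predictions still
# split along the proxy, and therefore along the protected group.
print(model.predict([[0], [1]]))  # likely [1 0]: the disparity survives
```

This is why “we removed race from the training set” is not, on its own, evidence of an unbiased model; auditing outcomes across groups matters more than which columns were deleted.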
Who has the right to use user data, and when does that use become a violation?
This is day 22 of #500daysofAI. I hope you enjoyed it.
For more stories follow me on Medium!