A tale of digital capitalism or how we are letting algorithmic bias shape our world

In October 2017, Palestinian worker Halawin Halawi was arrested for posting a photo of himself next to a bulldozer with a caption that simply read “good morning” in Arabic. Only after questioning him for several hours did the police realise that Facebook’s AI-powered translation service had made a mistake: the Arabic phrases for “good morning” and “attack them” are written very similarly - and the algorithm had chosen the wrong one.

Google Translate, for all its undeniable importance in helping society overcome the language divide and understand cultures other than our own, has also been shown to carry its fair share of bias. When translating from Turkish (a gender-neutral language) into English, it would render the neutral pronoun “o” as “he” or “she” according to the profession or adjective that followed. For example, “o bir doktor” became “he is a doctor”, but “o bir aşçı” translated to “she is a cook”. While “o bekar” went for “he is single”, “o evli” meant “she is married”. The same for “o çalışkan” - “he is hardworking” - and “o tembel” - “she is lazy”.
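
To see how such defaults can arise, here is a minimal sketch - an illustration of the general mechanism, not Google’s actual system - of how a translation model trained on a biased corpus ends up choosing gendered pronouns. The co-occurrence counts are made up; the point is that the model simply picks whichever pronoun appeared most often alongside the profession in its training data.

```python
# A minimal, hypothetical sketch of gender bias in statistical translation:
# when the source pronoun is gender-neutral, the model falls back on corpus
# co-occurrence counts, and the more frequent pronoun wins.

# Fabricated counts standing in for a (biased) training corpus.
corpus_counts = {
    "doctor": {"he": 900, "she": 100},
    "cook":   {"he": 200, "she": 800},
}

def translate_pronoun(profession: str) -> str:
    """Pick the most probable English pronoun for the Turkish 'o'."""
    counts = corpus_counts[profession]
    return max(counts, key=counts.get)  # argmax over corpus frequency

print(translate_pronoun("doctor"))  # -> "he"
print(translate_pronoun("cook"))    # -> "she"
```

The model never “decides” anything about gender; it merely reflects the skew of the text it was trained on - which is precisely how cultural bias gets laundered into seemingly neutral output.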

The role these algorithms play in perpetuating our cultural biases is deeply problematic. In an increasingly mediated society, where technology plays an ever bigger part in how we relate to each other and to the outside world, the prejudice hiding behind the code means that important decisions go unchecked and lack ownership.

The fact that Halawi was arrested before any Arabic-speaking officer had read the original post raises many important questions, as algorithms that conceal hidden biases are already making vital decisions in our everyday lives. Proprietary algorithms can decide who gets granted parole, who gets interviewed for a job and who gets a loan. They also decide what we buy, what content we watch, what books we read and, to an extent, who we date and are friends with.

In her book “Algorithms of Oppression - How Search Engines Reinforce Racism”, Safiya Umoja Noble argues that the power of algorithms in the age of neoliberalism reinforces “oppressive social relationships and enacts new modes of racial profiling”. In her extensive research on keywords around different communities’ identities - such as “Black girls”, “Asian girls” and “Latina girls” - Noble found that, despite the searches never including any sexually suggestive terms, pornography was the primary way these women were represented on the first page of search results. These human and machine errors have serious consequences and show how “racism and sexism are part of the architecture and language of technology”.

The case above suggests that a dominant white (or at least Western-centric) cisgender male point of view is being encoded into the organization of data. For Noble, an algorithm is just an “automated decision tree”, and the choice to prioritise certain content to the detriment of other content is directly related to the relationship between advertisers and digital platforms.

Furthermore, the fact that software such as Google AdWords allows advertisers to bid in real time for consumers’ attention means that the highest bidder will always have greater control over information. And even though AdWords considers factors other than the maximum amount an advertiser is willing to pay for an impression - such as the quality of the ads or the expected impact of ad extensions and other ad formats - it means that those with fewer resources will never “be able to fully control how they’re represented, given the logic and mechanisms of how search engines work.”
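
As a rough sketch of how such a quality-weighted auction ranks bidders - the simple bid-times-quality formula and every number below are illustrative assumptions, not Google’s actual Ad Rank computation - consider:

```python
# An illustrative, simplified ad auction: each advertiser's rank depends on
# its bid AND a quality signal, yet a large enough budget still dominates.
# Formula and numbers are assumptions for illustration only.

advertisers = [
    {"name": "big_brand",  "max_bid": 5.00, "quality": 0.60},
    {"name": "local_shop", "max_bid": 1.20, "quality": 0.90},
    {"name": "nonprofit",  "max_bid": 0.40, "quality": 0.95},
]

def ad_rank(ad: dict) -> float:
    # Bid weighted by quality (the real Ad Rank also folds in expected
    # impact of extensions and other context signals).
    return ad["max_bid"] * ad["quality"]

for ad in sorted(advertisers, key=ad_rank, reverse=True):
    print(f"{ad['name']}: {ad_rank(ad):.2f}")
# big_brand: 3.00 - wins despite the lowest quality score; deeper pockets
# still buy greater control over what users see first.
```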

This is not to say that we interact with technology as passive users: many devices had an original purpose that differs from the use people ended up giving them. According to Judy Wajcman, digital devices are “socio-material practices that coevolve with lives as lived in interaction with technologies” and the Internet is “quintessentially a generative technology” that is creating unprecedented cultural and informational practices. However, and despite how much these devices facilitate user engagement and involvement, “its networked architecture is shaped by powerful commercial interests to steer users down particular pathways”.

Technology is not value-free, and its design shapes our society as much as it reflects it. Our devices are crystallizations of our cultural norms and assumptions, so as active users we must take into account not only the direct impact and consequences of code but also the very powerful ways in which it allows ideas to circulate through the social world.

However, if we are not able to fully control what is being developed, what role do big industry players have in continuously shaping the world we live in? The fact that a human and social context is often missing from algorithmically driven decision-making is a particular concern for marginalised groups, as they are more likely to be represented in erroneous or stereotypical ways. It is therefore incredibly important that we know what is behind the code that is running our lives, or we risk handing total power over our choices to anonymous entities and identities that speak a language we cannot comprehend.

This is true for many, and ever more, aspects of our lives. Whether applied to translation software, financial institutions, search engines, shopping, social media or even self-driving cars, we must have a deep understanding of what an algorithm is and how different algorithms work to fully grasp their social power.

An interesting example is that of unmanned drones and the expansion of their use since the US Air Force put in place a programme called Gorgon Stare, which relies heavily on an intelligence-gathering system that is a “collection of surveillance and data-analysis resources that ‘sees’ unblinkingly 24/7 (...) and that is lethally oblivious to the specificity of the living beings it targets” (Crary, 2013).

For David Beer, the uncertainty about algorithms could lead us to misinterpret “how power might be deployed through such technologies”. Considering algorithms to be the decision-making parts of code, it comes as a surprise that we know so little about something that is constantly shaping how we treat and judge people, how we distribute wealth and opportunities, and so on.

In the argument below, I aim to expose the dramatic role algorithmic bias plays in the paradigm-shifting era our society is currently going through. At such a critical moment in the evolution of machine learning and AI, we hold the power to transform our notion of what it means to be an individual. Augmented reality, enhanced human beings, self-driving cars, robots and the internet of things are no longer exclusive to science fiction - but how much do we actually know about how, and by whom, the code shaping our future is being designed?

Digital capitalism or how we are letting algorithms shape our world

Google Flu Trends was a project launched by Google.org to predict the activity of the influenza virus in more than 25 countries. Using the search queries of the millions of users looking to self-diagnose online, combined with historical data on the prevalence of flu across different regions, Google found a mathematical model able to consistently match the information collected by national and local health agencies, without any human being involved in the process. As a matter of fact, in 2009, at the peak of the H1N1 crisis, Google proved to be a more reliable tool than government statistics. It was the combination of the data Google had accumulated through its users’ searches with a powerful algorithm that identified 45 keywords as being related to flu incidence that enabled this sort of conclusion to be drawn. It is in this sense that the power of Big Data lies in society’s ability to turn information into goods and services of significant value.
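
For a sense of what such a model might look like, here is a minimal sketch in the spirit of the published Google Flu Trends approach (Ginsberg et al., 2009), which regressed the log-odds of influenza-like-illness (ILI) doctor visits on the log-odds of the fraction of flu-related search queries. The data points below are fabricated for illustration.

```python
import numpy as np

# Sketch of a Flu-Trends-style model: fit logit(ILI rate) = b0 + b1 * logit(Q),
# where Q is the fraction of searches matching flu-related keywords.
# All numbers are made up for illustration.

def logit(p):
    return np.log(p / (1 - p))

query_fraction = np.array([0.002, 0.004, 0.008, 0.015, 0.011, 0.005])
ili_visit_rate = np.array([0.010, 0.018, 0.035, 0.060, 0.045, 0.022])

# Ordinary least squares on the logit-transformed series.
X = np.column_stack([np.ones_like(query_fraction), logit(query_fraction)])
b0, b1 = np.linalg.lstsq(X, logit(ili_visit_rate), rcond=None)[0]

# Predict this week's ILI visit rate from this week's query volume.
new_q = 0.010
pred = 1 / (1 + np.exp(-(b0 + b1 * logit(new_q))))
print(f"predicted ILI visit rate: {pred:.3f}")
```

Note that no epidemiologist appears anywhere in this pipeline: the model is a pure statistical association between what people search for and what health agencies later report.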

However, one must consider that this algorithm computed each query’s time series separately by region, by identifying the IP address associated with every search - which understandably created a lot of privacy concerns. At the same time, not all “flu” searches are made by people who have the flu, as many people end up misdiagnosing symptoms of other conditions as flu.

In the Big Data economy, this also means a computer program can speed through hundreds upon hundreds of job applications and select the top few that will be granted an interview. Or that, in the same sense, it allows a bank to decide which candidates are eligible for credit. And while this seems to eliminate human bias and save the time of going through piles of unsuitable candidates, we must not forget that “the math-powered applications powering the data economy were based on choices made by fallible human beings” and that, in fact, “many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly manage our lives” (Cathy O’Neil, 2016).
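
A toy sketch can make O’Neil’s point mechanical rather than abstract. Suppose a screening script “learns” from historical hiring decisions: if those decisions were biased, the script faithfully reproduces the bias - even when no protected attribute appears in the data, because proxies such as postcode smuggle it back in. Everything below is fabricated for the sake of the example.

```python
from collections import Counter

# Fabricated hiring history: (attended_elite_school, postcode_group, was_hired).
# Postcode often correlates with race or class, so it acts as a proxy for
# attributes the model never sees directly.
history = [
    (1, "A", 1), (1, "A", 1), (0, "A", 1), (0, "A", 1),
    (1, "B", 0), (0, "B", 0), (0, "B", 0), (1, "B", 1),
]

# "Learn" the historical hire rate per postcode group (a crude proxy model).
hired, total = Counter(), Counter()
for elite_school, postcode, was_hired in history:
    total[postcode] += 1
    hired[postcode] += was_hired

def screen(postcode: str) -> bool:
    """Invite to interview if the group's historical hire rate exceeds 50%."""
    return hired[postcode] / total[postcode] > 0.5

print(screen("A"))  # True  - past hires make group A look "better"
print(screen("B"))  # False - group B is filtered out before any human looks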

The idea that we are handing over such a great amount of authority to decisions that remain unchecked is very present in Frank Pasquale’s work, for whom it is the association between Big Data and algorithms that gives the former a “purpose and direction”. The algorithm becomes a source of concern because data is operationalised through these algorithmic decisions. And the fact that these mathematical models are opaque to all but mathematicians, engineers and computer scientists places their verdicts above and beyond dispute.

This is what O’Neil calls the “Authority of the Inscrutable” - meaning that we hesitate to question what we do not understand for fear of not having the knowledge or expertise needed to question it. Similar ideas are exposed by Pasquale, who understands that the “values and prerogatives that the encoded rules enact are hidden within black boxes” and that these knowledge gaps have powerful implications: in an interesting parallel with the invisible hand that regulates Adam Smith’s free market, Pasquale explores the thought that this is symptomatic of the fact that no one (including bankers and financiers) understands the workings of modern finance, and that this creates a “knowledge problem” that becomes a barrier to “benevolent government interventions in the economy”.

Nonetheless, this knowledge problem is not an intrinsic characteristic of the market, but rather something corporations choose to perpetuate in order to avoid public awareness and, consequently, regulation. For Pasquale, what we understand about the social world “is not inherent in its nature, but is itself a function of social constructs”, meaning that our ignorance of a certain technology is not due to its inherent difficulty, but rather a characteristic of a system that profits from it. And while the concept of a separate, undefinable entity that regulates the market is a dangerous one, as it raises doubts about ownership and responsibility, the same happens with digital devices, algorithms and code. In this sense, one of the main challenges we as a society have to overcome is the fact that crucial stakeholders - whether big financial institutions, companies that develop machine learning systems or state regulators - have very little interest in understanding and monitoring algorithmic bias.

The same theory is also present in Crary’s texts; he believes we accept many aspects of contemporary social reality as necessary and “akin to facts of nature” out of the conviction that technological change is “quasi-autonomous, driven by some process of auto-poiesis or self-organization”. Yet, on the contrary, this change is far from arbitrary: it is the product of an ever-changing environment, in which the way we relate to information and communication technology will “continue to be estranged and disempowered because of the velocity at which new products emerge and at which arbitrary reconfigurations of the system take place”. Furthermore, Crary considers that there has been a deliberate concealment of the most important techniques invented in the past century, and that these are “various systems for the management and control of human beings”.

And while one may not agree with such a radical view of events, it is undeniable that technology has revolutionised our social sphere. Take, for instance, what we call the “Gig Economy” - a labour market characterised by short-term contracts and where freelance work prevails; there is little doubt that digital devices and algorithms have strikingly empowered this reality. The term “algorithmic management”, for example, coined by researchers at Carnegie Mellon University, describes this new class of work in which “human jobs are assigned, optimized, and evaluated through algorithms and tracked data”. Looking at the impact of data-driven management in companies such as Uber, they set out to explore the way algorithmic management “allows a few human managers in each city to oversee hundreds and thousands of drivers on a global scale”.

Uber’s software allows the company to match independent drivers and their cars with passengers almost instantly; fares change dynamically depending on where passenger demand surges; and drivers’ performance is evaluated both through customer ratings and through their cooperation with the algorithmic assignment. Companies such as Uber or TaskRabbit claim that this not only creates new employment opportunities and better, cheaper services but also enables a transparency and sense of justice that temperamental human bosses would not. The problem is that, as we have seen, it is naive to assume there is no bias in code. For David Beer, detaching these algorithms from the social world they are inserted in is a mistake, as “algorithms are inevitably modelled on visions of the social world, and with outcomes in mind, outcomes influenced by commercial or other interests and agendas”. Moreover, algorithms allow companies to monitor their workers and ensure they are paid only for the time these companies want to pay them for, while keeping people on call 24/7 (Standing, 2014).
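
To make “fares change dynamically” concrete, here is an illustrative surge-pricing rule. Uber’s actual formula is proprietary, so the ratio-based multiplier and its cap below are assumptions for the sake of the example.

```python
# An illustrative surge-pricing rule (the real formula is proprietary; the
# ratio-based multiplier and the 3x cap are assumptions): fares scale with
# the ratio of waiting riders to available drivers in a zone.

def surge_multiplier(riders_waiting: int, drivers_available: int,
                     cap: float = 3.0) -> float:
    if drivers_available == 0:
        return cap  # no supply at all: charge the maximum allowed surge
    ratio = riders_waiting / drivers_available
    return min(max(1.0, ratio), cap)  # never below 1x, never above the cap

base_fare = 12.50
print(base_fare * surge_multiplier(riders_waiting=40, drivers_available=10))
# -> 37.5: with four riders per driver the multiplier hits the 3x cap, and
# the algorithm, not a human manager, has just repriced the ride.
```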



And what consequences does this perpetual availability (and traceability) have for our views of work, family, or even the self? We can assume it normalises the notion of functioning without interruption or limits (especially when directly related to the concept of labour), while pushing us to challenge the limits of efficiency as we try to achieve higher standards in shorter periods. Crary defines the world of 24/7 as a “non-social model of machinic performance” that refuses to acknowledge the human cost of sustaining such efficiency. It is a “time of indifference”, in which the fragility of human life poses a threat to the system.

The fact that the US Defense Department has a research programme in place studying how migrating sparrows can go up to seven days without sleep, in order to create “a sleepless soldier” by discovering “ways to enable people to go without sleep and to function productively and efficiently”, is a symptom of how technological development is based on a neoliberal order. And, as Crary argues, it is true that, over the course of human history, many innovations initially aimed at military use have been brought into a “broader social sphere” - such as GPS and the internet itself - and that both the idea of continuous work and that of continuous consumption have been around for a while. Sleep might just be our last refuge when trying to escape late capitalism, considering that most of the necessities of human life - hunger, thirst, sexual desire and even family and friendship - have already been commodified.

In the Gig Economy, where a considerable percentage of people are freelance, and with the advent of devices that allow us to access our files, email or social media practically anywhere at any point of the day, the borders between work and play become increasingly blurred. While we go to bed with our phones, our phones wake us up in the morning and our computers are never shut down, the on/off dichotomy seems to be fading away fast. Sleep mode, now ubiquitous in practically all our devices, itself contributes to this logic, as the “notion of an apparatus in a state of low-power readiness” reshapes our conception of sleep from an actual state of rest (off) into a “deferred or diminished condition of operationality and access”. Impetuous technological development at the service of capitalism is not compatible with “any inherent structure of differentiation” - whether sleep/awake, work/leisure, private/public or machine/organism.

We must, nonetheless, remember that this contemporary imperative of speed and effectiveness is as technological as it is cultural - and that technology has itself been highly shaped by society. People have commonly chosen to use technology “to achieve higher standards rather than to save time”. For example, instead of using the advent of public transportation to shorten their commute, many people chose to live further away from work, so their commuting time stayed the same. In the same sense, people use digital devices to achieve more, rather than to save time (Wajcman, 2015).

Judy Wajcman’s perspective on whether ubiquitous connectivity has been stealing both time and humanity from us differs vastly from Jonathan Crary’s. For the latter, 24/7 is a time in which the fragility of human life is at its highest, as “it belongs to the aftermath of a common life made into the object of technics”.

Wajcman argues the Internet makes it possible to reshape the control of cultural production, adding a “new framework of radically decentralized individual and cooperative nonmarket production” to the former market-centric production system. Technology is not stealing private time but, instead, extending and reconfiguring the time frames that the dichotomies public/private and work/leisure inhabit and, in this sense, “making possible new kinds of emotional proximity that are less anchored in shared time and geography”. For her, digital natives see omnipresent communication and perpetual availability as fitting seamlessly into their lives.

Crary, on the contrary, argues that this assumption is fundamentally wrong, as this “transitional phase” will never come to an end but rather, as we have seen above, give way to never-ending arbitrary reconfigurations of the system. And this intense rhythm, he claims, precludes “the possibility of becoming familiar with any given arrangement.”


Do we really want to trust Google to make the world a better place?


Throughout history, human beings have always expressed concerns about technological development; whether it was electricity or the assembly line, we tend to resist change.

The printing press brought similar concerns. Later, at the rise of television, people worried it would kill not only radio - which it did not, especially considering the recent surge in podcast popularity (a fairly similar format) - but also quality family time.

Furthermore, long before that, Plato believed the written word would bring “forgetfulness to the souls” and feared this would mean the end of true wisdom.

Georg Simmel, in his “The Metropolis and Mental Life”, argued that the 19th century promoted individualism through “new freedoms, the division of labour and individual achievements that make them indispensable to their line of work.”

By the end of the century, Lord Salisbury considered that the telegraph, an unprecedented innovation in communications at the time, had “assembled all of mankind upon one great plane, where they can see everything that is done and hear everything that is said, and judge of every policy that is pursued at the very moment those events take place”. This new communication system allowed information to move almost instantaneously for the very first time, and its social impact led to a direct recognition of the potential unveiled by technology.

This is to say we are not unique in considering our era one of unprecedented change and acceleration. An article published in 2000 by the MIT Technology Review, “The Cell-Phone Scare”, interestingly reflects on the idea that eventually our anxieties will fade, “to be replaced in our minds and our newspapers by a more up-to-date apprehension” - anxieties that are directly correlated with our fear of the unknown and with the notion of technology as hidden behind a “black box” we fail to fully comprehend.

At the beginning of this text, I explored my concerns about the consequences of human bias in technology. Now that we have seen not only how deeply code and algorithms are impacting our lives, but also how our predecessors were equally worried about the impact of technological innovation on theirs, it is fundamental that we rethink our relationship with technology and reclaim control over cultural production. And while the internet has, as Wajcman poses, unlimited potential to redesign the social sphere by decentralising power, we have seen it can have the exact opposite effect. I would argue this has two main, more or less correlated, causes.

Firstly, we need to understand who is creating the code and algorithms, and who is working for these major technology companies. A 2017 Stack Overflow survey of developers worldwide found that 88.6% of participants were male and 74.4% were white. This connects directly to the idea explored above that these spaces are far from neutral, being populated “by people which shape both their physical form and cultural meaning”. We need to acknowledge the class, gender, racial and age divide that is the norm in tech and ensure that such an unprecedented power to change is held in hands that represent all sectors of society - we must democratise the engineering and software development world by demystifying it.

Secondly, technology has gone hand in hand with a capitalist notion of productivity. As we know, algorithms make sense of past events and look for the fastest, most efficient connections between data. This is true when searching for a restaurant, getting an Uber, trying to find a match on Tinder or, as we have seen, a job or a second mortgage. This raises serious questions about the forms of normativity that are being introduced into our finances, work, family and even love, all in the name of efficiency. Is effectiveness society’s ultimate goal? Do we want to live in a world that always points to the past, or would we rather discover new food, different employees and unthinkable lovers?

Speaking at the London School of Economics (LSE), Genevieve Bell, a sociologist who worked for over 15 years at Intel (to this day one of the biggest chip makers in the world), posed a very interesting question: do we really want to trust Google to make the world a better place?
