Comparing apples with idealised (non-existent) oranges
Alexandra Geelan
Fractional GC and Freelance Lawyer | Supporting female and minority-led businesses and legal teams to get on top of their contracts | Across Australia & the UK
In my work, I’m often exposed to arguments for and against new technologies. An issue I’m seeing consistently in these debates is that critics of new technologies tend to compare an imperfect tech solution with an idealised, perfect and error-free human-led solution.
In a recent legal tech innovation talk I attended, panellists raised a number of concerns about new legal tech solutions, including:
· privacy concerns;
· algorithm bias; and
· human rights issues.
However, in all of these areas, our current system is not kicking goals! I’m sure it’s no surprise to anyone that privacy concerns are widespread among internet users, and not because of new technologies. News regularly emerges that companies are selling people’s personal data and information, or that governments want to access and store private information without adequate privacy protections. The conversation about privacy concerns arising from new technologies needs to include a conversation about how we are currently treating personal data and what levels of conduct we, as a society, are prepared to accept.
Similarly, research is released (seemingly) every other day on subconscious bias and its effects on our society, in areas as diverse as HR and recruitment, VC and start-up funding, education, politics, and crime and incarceration rates. Again, we’re being very slow to make meaningful progress towards addressing (let alone eradicating) subconscious bias and its impact on our society.
I’m not even going to go into human rights issues because a) this article could turn into a thesis and b) I don’t have time to curl up in a ball to comfort myself when I’m done writing. Suffice it to say, our track record is appalling and there are few signs that we’re getting much better.
All of our current and historical societies have deep, systemic issues that have a profound impact on members of our society. The steps we’re taking to address these issues are woefully inadequate and are taking far too long (I mean, 50 years until we close the gender wage gap? C’mon).
How does my pessimistic outlook on society relate to the conversations around tech innovation? In my view, it is not reasonable or helpful to try to evaluate a tech innovation outside of that very real (and deeply depressing!) reality. The current issues that we are trying to find a solution to are integral to the debate around the risks and rewards of new technologies.
The question we should be asking is:
Is the proposed technology innovation likely to be more or less effective than the human-centred alternative?
Take self-driving cars, for example. There has been a lot of debate around whether it is right for a machine to be able to make a judgement about who should die in an accident and act accordingly. While this is an important argument, it can’t be had in isolation. The question of whether self-driving cars are a worthwhile and beneficial innovation in society necessitates looking at the current rates of accidents and motor vehicle deaths and deciding whether self-driving cars provide a better alternative. A comparison with zero accidents is fallacious.
To look at an example that is more within my sphere: there has been a lot of debate about the use of profiling software for offenders in criminal matters. Algorithms look at factors like socio-economic background, income, education, criminal history, family background and medical history and determine whether a person is likely to re-offend. The results returned by these technologies are often biased and inaccurate. However, evidence shows that judges and lay people are also biased and inaccurate when provided with the same information. Both the technology and the humans make the same unfair and unfounded assumptions about people, often to the detriment of disadvantaged members of the community.
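To make that concrete, here is a minimal sketch of the kind of scoring such profiling tools perform. Everything in it is hypothetical: the feature names, weights, and threshold are invented for illustration and are not drawn from any real product. The point is simply that the output is only as fair as the inputs and weights someone chose to put in.

```python
# Hypothetical recidivism risk score -- purely illustrative.
# Feature names, weights, and the threshold are invented; this does
# not reproduce any real profiling product.

FEATURE_WEIGHTS = {
    "prior_offences": 0.8,
    "low_income": 0.5,          # a socio-economic proxy
    "left_school_early": 0.4,
    "unstable_housing": 0.3,
}

def risk_score(person: dict) -> float:
    """Weighted sum of the person's features (1 = applies, 0 = doesn't)."""
    return sum(weight * person.get(feature, 0)
               for feature, weight in FEATURE_WEIGHTS.items())

def likely_to_reoffend(person: dict, threshold: float = 1.0) -> bool:
    """The 'prediction' is just the score crossing an arbitrary line."""
    return risk_score(person) >= threshold

# A person flagged as high risk purely because of socio-economic
# circumstances, not because of anything they have done:
print(likely_to_reoffend({"low_income": 1, "left_school_early": 1,
                          "unstable_housing": 1}))  # True
```

Seen this way, the bias critique is easy to grasp: whoever picks the features and weights bakes their assumptions into the result, much as a judge or lay person does when weighing the same information in their head.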
Similarly, there is a lot of fear and apprehension around the impact of automation on graduate and junior lawyers. But this argument appears to be missing the fact that junior and graduate employment was already in decline due to a range of factors not related to automation, including increasing numbers of law graduates, cost pressures from clients, tough economic conditions and other changes in the way legal services are provided. In the UK, it is common for clients to demand that no trainee solicitors work on their matters, notionally to save costs and improve turnaround times. Yes, automation will further impact the work that graduates and junior lawyers do, but to ignore the changes that were already underway unfairly overestimates the impact of technology.
There are always going to be people who are reluctant to adopt new technologies. But there are also real and important conversations to be had around the ethics, desirability and functions of new technologies and these conversations can’t be taken out of the wider context of the ways in which human beings already accomplish these tasks.
CEO at BenchOn
6y
What a refreshing post! All I kept thinking of was a company offering a car to a person with a horse, and that person turning down the car because it didn't fly! I have seen first-hand that people hate change, the majority in fact, and I think that the state of mind you are talking about (if it's not perfect then it's not worth it) is a nice convenient way for them to keep doing things in their default setting. Thanks for posting!
My mission is to help Entrepreneurs find customers for their amazing services and inspiring solutions
6y
Brilliant as always. "The question of whether self-driving cars are a worthwhile and beneficial innovation in society necessitates looking at the current rates of accidents and motor vehicle deaths and deciding whether self-driving cars provide a better alternative. A comparison with zero accidents is fallacious." I think the major problem here is that the algorithm has to be created to decide. It's not whether an algorithm should decide, but rather what the inputs are. There is some comfort in the unconscious instantaneous reaction that inevitably leads to more injuries.