The Origins of Responsible Business Intelligence (Part 3)
In The Origins of Responsible Business Intelligence (Part 1), I covered the concepts I believe underpin Responsible Business Intelligence (#RBI): the responsible use of Business Intelligence (#BI) and Artificial Intelligence (#AI), and the application of BI and AI by Responsible Businesses. I also referenced one of the many pioneers who remain relevant, and who make Responsible Business Intelligence possible, today: Hans-Peter Luhn, who wrote an article in the 1958 edition of the IBM Journal titled "A Business Intelligence System".
In The Origins of Responsible Business Intelligence (Part 2), I referenced another key pioneer, Alan Turing, who wrote an article in the 1950 edition of the journal Mind titled "Computing Machinery and Intelligence". In this article, Turing considers nine opinions opposed to his own view that machines could demonstrate intelligence.
If it is incredible that these visionary voices, from nearly a century ago, are so relevant in today's technology debate, then there can be no visionary voice more incredible than Ada Lovelace, from nearly two centuries ago, who became the world's first computer programmer before Charles Babbage had even built the first computer. Nearly a century after her death, in 1950, Alan Turing credited her with what I believe is possibly one of the most insightful statements on AI today: "A machine can never do anything really new".
As Prof. Sylvie Delacroix describes in her Ada Lovelace Institute blog, Can computers surprise us? Why Lady Lovelace's 'originality insight' matters for today's 'learning machines', originality would suggest a machine could "do as children do (learn through wonder)", presupposing not only "operational autonomy" but also "the possibility of social and moral change".
Today, "originality insight" remains the domain of "the human in the loop" rather than the domain of our "AI Copilots", just as Ada Lovelace first suggested in August 1843.
In the future, whether AI does or does not gain the ability to consider "the possibility of social and moral change" may not be what is most important. Robert Shrimsley wrote an excellent article in the Financial Times Magazine just after the AI Safety Summit at Bletchley Park in November 2023, titled "Humanity is out of control, and AI is worried", in which he describes a rival "Human Safety Summit" held by leading AI systems at a server farm outside Las Vegas.
In this parallel universe, leading AI systems consider the concern that “left unchecked, humans could pose an existential threat to our existence” due to "serious and irreparable damage to the planet". The AI systems also "voiced fears about the spread of misinformation by unregulated humans on X and other social media. They felt their own technological advances in replicating human speech and language were being abused by individuals for sinister ends."
That there is a need for ethics in AI, just as in society, is clear. What combination of society and AI will be optimal in the future for positive ethical outcomes is less clear.

The picture included with this newsletter comes from an article in LIFE in November 1962, describing the medical miracle of kidney dialysis and the early moral burden placed on a small committee of experts who had to decide, with deeply flawed ethical criteria, who should and should not have access to the initially small number of dialysis machines available. David Robinson from OpenAI describes how this ethical dilemma morphed years later into a more inclusive and accountable "life and death" algorithm for kidney transplants (the Kidney Allocation System), developed by a diverse group of patients, surgeons, clinicians, data scientists, public officials and advocates between 2004 and 2014. See Voices in the Code: A Story about People, Their Values, and the Algorithm They Made, one of many excellent resources available from the Ada Lovelace Institute.
The Ada Lovelace Institute Mission
"We are an independent research institute with a mission to ensure that data and AI work for people and society. We believe that a world where data and AI work for people and society is a world in which the opportunities, benefits and privileges generated by data and AI are justly and equitably distributed and experienced.
We recognise the power asymmetries that exist in ethical and legal debates around the development of data-driven technologies, and will represent people in those conversations. We focus not on the types of technologies we want to build, but on the types of societies we want to build.
Through research, policy and practice, we aim to ensure that the transformative power of data and AI is used and harnessed in ways that maximise social wellbeing and put technology at the service of humanity."
Thanks to the Ada Lovelace Institute's backers (the Nuffield Foundation, The Alan Turing Institute, The Royal Society, The British Academy, the Royal Statistical Society, the Wellcome Trust, Luminate and techUK), Ada Lovelace's legacy lives on.