Ethically Aligned Design: A Personal Side of "From Principles to Practice" (Part One)
John C. Havens
Leading Sustainability Advocate and AI Ethics Expert at IEEE with Global Impact
Ethically Aligned Design, First Edition (EAD1e) launched in March this year (2019). It was written by over four hundred members of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (The IEEE Global Initiative) and edited (with feedback incorporated) by about one thousand additional members. It provides a seminal syllabus on how to prioritize values-driven, human-centric, ethically aligned design and has been in development since 2015.
I strongly recommend you download and read EAD1e, for a number of reasons I'll elucidate here. But along with the content itself, I really wanted to feature the people who supported and created this work. I've had the deep pleasure of helping drive it, and have gained a number of amazing friends in the process.
What follows is a personal history of my experience with EAD. Please note:
- I am writing this article as "John, the extremely blessed guy who gets to be a part of all this." These views are my own and don't necessarily represent the formal positions of The IEEE Global Initiative or IEEE as a whole.
- I have not included photos of all of the phenomenal people involved in this work or mentioned all members of every Committee. If you'd like to see a full list of everyone who worked on EAD, please click here. These people are the subject matter experts, thought leaders, and AMAZING volunteers who created this work. I am proud to count myself among them. All photos included here were taken with the permission of those photographed. (I always ask, "is it cool to take a picture and post this on twitter or whatever?")
- I couldn't think of a third bullet point but people like lists of three things.
Beginning at The Hague
Our work on Ethically Aligned Design actually began in 2015. But our first major event as The IEEE Global Initiative took place at The Hague in August of 2016. One of the members of our Executive Committee, the amazing Virginia Dignum, said we could have a side event as part of The European Conference on Artificial Intelligence (ECAI 2016).
Here's Virginia with her husband, Frank. At the time, Virginia was working at TU Delft with some of the first thought leaders I learned about AI ethics from, including Aimee van Wynsberghe and Jeroen van den Hoven.
Now, Virginia is Chair, Ethical and Social Artificial Intelligence at the University of Umea in Sweden, an Expert Member of the High-Level Expert Group on Artificial Intelligence at the European Commission, a Founding Member of ALLAI, and a member of our Executive Committee.
When you read Ethically Aligned Design, you'll notice that all the Chapters follow a common structure:
- Opening (an abstract describing the general theme of the chapter, e.g., Law)
- A number of short "Issues" focusing on the ethical issues surrounding autonomous and intelligent systems (we no longer use the term "AI"; more on that later).
- A number of "Candidate Recommendations." They were called "Candidate" Recommendations for EADv1 and EADv2, which were released as drafts and as Requests for Input. Now in the First Edition you'll note they're just "Recommendations."
Francesca Rossi, pictured here with our Vice-Chair, Kay Firth-Butterfield, actually deserves a TON of credit: at our meeting at The Hague she noticed we were initially using the term "Concerns" instead of "Issues" in all our Chapters, and pointed out that "Concerns" might sound negative and we should say "Issues."
So we did. Both Francesca and Kay are members of our Executive Committee and are beyond the shadow of a doubt two of the key leaders in the AI Ethics world today. Kay is the Head of Artificial Intelligence and Machine Learning at the World Economic Forum, and Francesca is IBM's AI Ethics Global Leader and basically on EVERY board, panel, or committee happening today. (That's not just flattery - she's on the Board of Directors for The Partnership on AI and an Expert Member of the High-Level Expert Group on Artificial Intelligence at the European Commission, among other things.)
Kay was part of a company that was one of the first to create an Ethics Advisory Panel designed to act as a separate unit from an organization to provide transparency and accountability for what the organization was creating (I wrote about this in an article for Mashable). She also coined the term "Chief Values Officer," which she wrote about in an article for IEEE's The Institute. Here's what she said the CVO role would entail:
The chief values officer would be responsible for educating employees, preparing company policy, and overseeing the development of products that will directly affect employees’ and customers’ agency, identity, and well-being. Such an officer also needs to be able to ensure whistle-blowing anonymity for employees.
Kay was instrumental in creating Ethically Aligned Design. She was the founding Chair of our Law and Economics Committee (which is now the Sustainability Chapter). Her husband, Walt, is a thought leader in children's data and was a key member of our Personal Data and Agency Chapter (he and Marsali Hancock wrote the Children's Data section at the end of the Chapter). Kay is a barrister, and among her other amazing achievements, her work at the World Economic Forum on toys and data is groundbreaking and essential in helping kids stay safe in the algorithmic age. I am lucky to have her and Francesca as friends.
So here are three more titans who came to The Hague: Wendell Wallach, Cyrus Hodes, and Anja Kaspersen.
The photo is a bit blurry, but if you aren't aware of these three people you need to be if you're focused on any aspect of AI Ethics, governance, or autonomous weapons.
I call Wendell Wallach "the godfather," which he is kind enough to tolerate. But it's genuinely no exaggeration to say that all ethical roads lead to Wendell as the author of the pivotal book Moral Machines (and the excellent A Dangerous Master). Like a number of people who have supported our work for The IEEE Global Initiative and EAD, Wendell has been writing about ethics and AI in one form or another for almost two decades through his work at Yale and as a speaker around the world. He is currently focused on creating a global governance program / organization to help ensure the dozens of AI Ethics oriented programs around the world communicate and work with each other. I first met Wendell at the first of a series of events he created for The Hastings Center.
It was at Wendell's first event at The Hastings Center where I met Anja. Anja is a force to be reckoned with, having worked at The World Economic Forum and the International Committee of the Red Cross, and now at The United Nations focused on disarmament. She'd be intimidating if she weren't so supportive. :)
I know Cyrus Hodes from his work with The AI Initiative at The Future Society where I'm an Advisor. The Future Society is doing leading work as a "think do" tank focused on AI governance and policy. Cyrus is also the person that introduced us to contacts at The World Government Summit in Dubai.
I love this picture (which is why I made it so large). It shows (from left to right): Konstantinos Karachalios, Managing Director of The IEEE Standards Association, Sarah Spiekermann, and Richard Mallah.
I'll be talking a lot more about Konstantinos in this article, but suffice it to say he is the reason Ethically Aligned Design exists. I first learned about IEEE when I spoke about my book, Heartificial Intelligence, at SXSW at their invitation. VERY luckily, Eileen Lach from IEEE was in the audience (she's the reason I'm here at IEEE as a consultant) and introduced me to Konstantinos when I presented an early version of an idea that became EAD. Eileen and Konstantinos had already been planning to create work along these lines, and my nudge apparently helped get things going. (Konstantinos is kind enough to call me "the catalyst," but he is the "godfather" in regards to his support at IEEE. So between Wendell and Konstantinos we're covered in the godfather realm.)
Sarah Spiekermann wrote the book Ethical IT Innovation: A Value-Based System Design Approach in 2015, which informed the Methods Chapter of Ethically Aligned Design. That Chapter of EAD holds a special place in my heart because it's the first one that was drafted for Ethically Aligned Design, Version One. The logic of Value Sensitive Design from Batya Friedman, along with Sarah's work, informs a key part of the Methods Chapter in terms of using applied ethics not to gauge morals but to identify end-user values for a system you're trying to build. Identifying values is not the same as judging them, but without trying to deeply understand the cultural mores and norms of a person or community you're building for, you're likely to face the unintended consequences of "moving fast and breaking things." Sarah is also the force behind the creation of the IEEE P7000 Working Group. She and I first submitted the PAR (the document that gets a standards working group going) for IEEE P7000, which inspired the other thirteen working groups currently under development (and, short plug - they're all free to join and we'd love to have you. More info here).
Richard is a powerhouse in terms of the work he does with the Future of Life Institute. I actually credit FLI with developing the first code of ethics for AI of its kind at their seminal conference in Puerto Rico, which led to their open letter on Research Priorities for Robust and Beneficial Artificial Intelligence.
Here are more experts that helped me learn the specifics around these issues.
Aimee van Wynsberghe, whom I mentioned above, is the President and Co-Founder of The Foundation for Responsible Robotics and a member of the Board of ALLAI (plus she's on the European High-Level Expert Group on AI).
Also pictured here is AJung Moon, who was a Senior Advisor to the United Nations Secretary-General's High-level Panel on Digital Cooperation and is the Director of the Open Roboethics Institute (ORI). ORI created a number of brilliant surveys a few years back asking questions like, "should a companion robot be allowed to bring alcohol to an alcoholic?" She also created a number of short videos showing robots interacting with humans in ways that make it very simple to understand how human values are affected in the presence of a device that has humanlike features or actions.
On the right is the amazing Alan Winfield, based in Bristol in the UK. It would take an entire other post to list all of Alan's achievements, but one of the best ways to learn about him is to read his excellent blog. He regularly posts not just about his own work, but about the other AI / ethics programs happening around the world. He was also part of the team at the British Standards Institution that created BS 8611: Robots and robotic devices - Guide to the ethical design and application of robots and robotic systems. As far as I know, the IEEE P7000 suite of Standards Working Groups is the largest in the world focused on directly addressing the ethical issues surrounding autonomous and intelligent systems. But also as far as I know, BSI's Standard is the first. I remember reading the standard on a flight and being impressed by how approachable and readable it was, and how clearly it conveyed the critical need to prioritize values-driven design at the beginning of any manufacturing process for robots and robotic devices.
Ethically Aligned Design, Version One
We released EADv1 in December of 2016. (You can still download it or the Overview for EADv1 here.) AJung Moon is the person who recommended we release it as a Creative Commons document. And Chris Brantley, Managing Director of IEEE-USA, was the person who (along with Gordon Day, Past President of IEEE) suggested we release it as a Request for Input, which, frankly, was brilliant in terms of helping the work progress as it did.
The reason I say this is that EADv1 was written primarily by experts (about one hundred people) from North America and the EU. We had some people from other regions, but when we got feedback (which you can read here), a lot of it commented on the fact that we mainly had Western voices and not much about Eastern or other ethical traditions. This led us to reach out to the people from China, South Korea, and Japan who had provided this feedback and ask them to join. We were delighted when they did, and many of them provided translations of EAD (which you can see here).
It was at this point that we also added new committees beyond those from EADv1, including Policy, Classical Ethics (in response to our need to look beyond Western ethical traditions), Extended Reality, and Wellbeing. Committees began work on EADv2, which would come out in December 2017.
2017 Events and Highlights
In February of 2017, I had the opportunity to go to Bangalore to meet with a number of people from local IEEE Chapters. We also had our first EAD Workshop, where we asked attendees to review the chapters and Issues of Ethically Aligned Design and discuss what was relevant to them and their work. We also asked what cultural aspects of Bangalore or India we may not have considered when drafting EAD. Moving forward, we're hoping to have more members join The Initiative from India, and there is already interest in having it translated into a few Indian languages to provide greater relevance to our members and other expert volunteers from India.
On 11 April 2017, IEEE hosted a dinner debate at the European Parliament in Brussels to discuss how the world's top metric of value (Gross Domestic Product) must move Beyond GDP to holistically measure how intelligent and autonomous systems can hinder or improve human well-being.
Pictured above are Fabrice Murtin from the OECD, who spoke about the OECD's Better Life Index and related work, and Raja, who was on a panel and also described our work with Ethically Aligned Design. Raja is the Chair of The IEEE Global Initiative - in other words, our fearless yet humble leader (despite the fact that he's a world-renowned roboticist, a member of the EU High-Level Expert Group on AI, and a frequent keynote speaker around the world). I consider it an honor that Raja is my friend, and his authoritative yet giving and flexible leadership is a key reason for The Initiative's success.
Our goal at this event (which you can read more about in the report linked below) was to ask, "How would the world look if triple bottom line or wellbeing economics were used as the metrics of value for the creation of artificial intelligence, versus just growth or productivity (as embodied in GDP and our economic norms today)?"
This is still one of the proudest achievements of the work of The Initiative for me. You can read our report, Prioritizing Human Well-being in the Age of Artificial Intelligence from the event or watch the short video featuring speakers like Mady Delvaux of the European Parliament and Vincent C. Muller (pictured below in the video).
Here are some of the speakers from the event as I mentioned:
(Pictured left to right: Fabrice Murtin, Virginia Dignum, Raja Chatila, Mady Delvaux and Vincent C. Muller).
The notion of "well-being" tends to confuse people at first, as they think it may have to do with mood or just physical health. But as the OECD notes in its work, and as metrics and indicators like The Happy Planet Index show, "well-being" actually refers to the subjective and objective measures, alongside GDP, that we can use to gauge societal prosperity and environmental sustainability.
Along those lines, I speak a lot and had the good fortune to do a keynote at the Sustainable Brands Conference in 2017. SB is a leading organization focusing on CSR (Corporate Social Responsibility) and smart sustainability issues. I believe I was the only person speaking about AI at the conference, and I spoke about data as well as Beyond GDP issues.
Here are two titans in the data arena, Doc Searls and Adrian Gropper. Doc is a pioneer in the world of personal data, and if you haven't read his book, The Intention Economy, you MUST. He provides a logic-oriented basis for why tools like blockchain or smart contracts can and need to work in the algorithmic age.
Adrian is the CTO of Patient Privacy Rights and a member of our Personal Data and Individual Agency Committee. He and Deborah Peel (this was her conference, where I was invited to be on a panel) provided a huge amount of help for the PD/Individual Agency Chapter in Ethically Aligned Design, First Edition.
Our Second Meeting in Austin, Texas
When we met at The Hague to work on EADv1, our committees had already completed early drafts of their chapters. There, along with some keynotes and presentations, all Committees worked to discuss and update their sections. We repeated this process in Austin, Texas in June of 2017.
I've included a number of photos from the event below.
Here are two of my favorite Dans. On the left is Daniel Faggella, CEO and Founder of Emerj, "the only market research and company discovery platform focused exclusively on artificial intelligence and machine learning." Dan's site is my go-to place for issues regarding business and AI. It's the top resource of its kind.
On the right is my dear friend Danny Devriendt, Managing Director at IPG/Dynamic. Danny and I used to work together at the PR firm Porter Novelli, where I learned about the corporate arena and shared my expertise on social media. Danny was also a member of our Personal Data Committee and provided a number of key insights on how advertising and marketing experts look at issues of ethics, which frankly isn't too often - at least when you call it "ethics," which in the business world is often synonymous with "compliance." But Danny understands branding and messaging, and the fact that the work in EAD is focused less on debating ethical traditions like utilitarianism than on rethinking design.
This is me and Jia He of ByteDance. Jia is one of our China-based members, on a committee led by the amazing Victoria Wang. We had a number of members from China, Korea, and Japan present at our event in Austin, which was a huge benefit for all our members.
This was where we got to better understand the differences between Eastern and Western ethical traditions, and how those paradigms affect design and other aspects of autonomous and intelligent systems.
This is Arisa Ema from Japan. Arisa is a powerhouse and leads a number of initiatives in and around Tokyo. She helped lead the translation of the executive summary of EADv1 into Japanese (which you can see here) and wrote a comprehensive report about AI in Japan when we released EADv2.
On the left is Danny again, and next to him is Eileen Lach, who I mentioned above. As part of her AMAZING support of our work, Eileen read the entire final drafts of EADv1 and EADv2 to edit from a legal standpoint. She also co-chaired the editing committee for EAD1e with IEEE's 2017 President, Karen Bartleson.
Next to Eileen is Eleonore Pauwels, Research Fellow on AI and Emerging Cyber-technologies, UN University and Director of the AI Lab at Wilson Center. She is a leading mind on AI policy and governance. She was also kind enough to recommend I write a piece of fiction for the Wilson Quarterly which was a great deal of fun.
Along those lines, this is Huw Price, Academic Director of the Leverhulme Centre for the Future of Intelligence, speaking about the Beneficial AI Tokyo conference that took place in October (Ryota Kanai, the Founder and CEO of Araya, was also one of our speakers in Austin and was a primary organizer of the Beneficial AI Tokyo conference).
One thing Huw said at our event in Austin was that we (the AI Ethics community) needed to work harder to imagine positive outcomes for these technologies (versus only focusing on risk), and to imagine those possibilities in the next five to ten years. This really resonated with me, as it's quite easy (for me at least) to focus on the far-term or hyper near-term work we're doing. But the five-to-ten-year scenarios are really hard. This includes things like how to actually design for "the right to an explanation" in the GDPR, or for "accountability," in ways the general public can understand.
This is a picture of Eileen Donahoe, The Executive Director of the Global Digital Policy Incubator at Stanford University’s Center for Democracy, Development and the Rule of Law. She's pictured here with Corinne Cath, PhD Candidate at the Digital Ethics Lab Oxford Internet Institute and Vidushi Marda of ARTICLE 19.
Corinne provided a great deal of insight for our work in regards to human rights. Beyond being a co-chair for the Methods Chapter, she also provided a huge amount of help for our Policy Chapter. And it was really at our conference in Austin that all of us had a lot of discussions about the nature of human rights in relation to ethics. The fact that human rights is internationally accepted law makes it easier to consider ethical issues on top of honoring these key rights. However, because of the members from around the world who were present, we were also focused on being sensitive to how rights and values are considered in light of cultural context. It's not easy work, frankly. But at the end of the day our first General Principle is Human Rights, and a lot of the discussions that led to which General Principles we have, and in which order, happened here.
Me and Kay.
I heart her.
Me talking about wellbeing and quoting Raja from our event at the EU Parliament. This is where a lot of members told me they first understood why wellbeing (an economic focus on the triple bottom line) was important for AI and ethics.
This is Don Wright who was President of The IEEE Standards Association when The IEEE Global Initiative first started. He was and is incredibly supportive of our work.
It is also a testament to The Initiative and EAD offering value to engineers that Don has been so active in attending and being a part of our efforts. He's a great guy and continues to provide insights to me / us on a regular basis.
Here's Christina Colclough, Director of Platform and Agency Workers, Digitalisation and Trade for UNI Global Union. I met Christina at our wellbeing event in the EU, and in one word, she is FIERCE. If you want to get a holistic picture of the future of work regarding AI, talk to Christina about unions.
You won't get the typical answers advising people to get "reskilled," etc. (not that the idea of reskilling isn't useful, but it's often just what people say to avoid digging deeper for more pragmatic and immediate solutions to questions of autonomy and work). She is one of the key thought leaders in the world on the subject of AI and work, and I've had huge fun being on a few panels with her.
Pictured here with Alan Winfield is Jared Bielby, Chair of our Classical Ethics in A/IS Committee. He's also the Chair of The International Center for Information Ethics and one of the key people who helped give EADv2 and EAD1e a focus on non-Western ethical traditions.
The Classical Ethics Chapter features information about Confucian, Shinto, and Ubuntu ethical traditions and speaks to the nature of algorithmic bias, inasmuch as people sometimes don't take paradigmatic levels of cultural bias and background into consideration regarding AI Ethics.
It was at around this time I bought my new Fender Stratocaster.
This has nothing directly to do with AI Ethics, except that if you work in the space you know you have to think about a lot of heavy stuff, like how we can keep people's data safe, how we keep autonomous and intelligent systems from harming people, etc.
I/we also get to think about a lot of amazing and wonderful stuff. But it's often not light of tone. So playing blues on a quality instrument helps with the catharsis.
This is the GLORIOUS Katryna Dow, Founder and CEO of Meeco.me, speaking at the MyData Conference in 2017 about the IEEE P7006 Standards Working Group (she's the Chair). Katryna is a global thought leader regarding sovereign data, where people are able to access and share their data how they choose in a peer-to-peer fashion.
You can / should visit her Meeco site for more specifics (around zero-knowledge proofs and other data-geeky stuff along those lines), but the basic idea is that the infrastructure and technology exist for people to access and share their data and to set their own terms and conditions for said data (we have another working group focused on this that Doc Searls helped inspire). This is also the type of work MyData has inspired, and they are a leading global organization helping to figure out how to make data sharing along these lines possible all around the world.
I have also had the pleasure of being co-chair with Katryna on our Personal Data Committee since the beginning of our work and am honored to call her my dear friend.
I got to bring my family with me to Tallinn, Estonia and Helsinki, Finland (MyData took place in both cities). My family and I went to a puppet show in Helsinki and one of the puppeteers was in a Hungarian band that played after the show. I got to play harmonica with the band and then we all danced together. It was magical.
Here's me asking the audience in Tallinn, Estonia to stand and say the "Beyond GDP Declaration" with me. I think the crazy American frightened them a bit, but many of them hadn't heard the idea of personal worth in terms of data compared to the currency of cash, in regards to economic wellbeing.
In September I participated on a panel at the first Emotion AI summit produced by Affectiva. I'm pictured here with Rana el Kaliouby, CEO & Co-founder of Affectiva and a thought leader in regards to Emotion AI / affective computing. Affectiva is doing a lot of fascinating work with measuring emotions in vehicles.
I should also mention that we added an Affective Computing Committee for EADv2 that covered a lot of the issues we talked about at this event. This includes things like robotic nudging and how to ensure disclosure always takes place, whether with seniors, kids, or anyone else, in their interactions with any tools, devices, or systems that utilize affective technology and AI.
MORE COMING SOON. Thanks.