The next big thing in tech: an Ethics Board
Hey everyone,
Writing this from NJ! I’m back here (instead of Ireland) for a bit, which is partly why this post is delayed—timezones, calendars, a 5-year-old, and more. Not to mention, ya girl got the Fauci Ouchie, and that put me on my ass for a few days.
I’ve been doing a lot of thinking and writing this last month or so: stress testing processes, giving away your lego blocks to your team, what makes a healthy culture, and unethical behavior in organizations.
There’s more coming on all fronts, but let’s talk ethics and why it feels like we’ve reached a point that we need to discuss an ethics board…
Why the US needs a tech ethics board
Big tech’s attitude of “move fast and break things” is finally hitting a wall, as business leaders and consumers realize just how much has been broken in the name of good intentions. Steps are being taken to backtrack on abusive tactics. Facebook is, perhaps surprisingly, declaring that it encourages new regulations (as long as they don’t put American companies at a disadvantage). Apple is making the monumental decision to hand users more control over the sharing of personal data in apps. And in response to tech leaders opposing the move (read: Mark Zuckerberg), Apple CEO Tim Cook made a compelling argument:
“The path of least resistance is rarely the path of wisdom,” Cook said. That’s true. As is the fact that the tech industry can’t be its own judge, jury, and executioner. We wouldn’t allow members of the public to declare themselves medical doctors, pat themselves on the back, and set their own rules. Google tried this with its AI ethics board, which was, ironically, shut down within days after outrage over one board member’s questionable ethics.
Yet tech is a fundamental part of the fabric of society, with a huge potential for disruption. Whenever ethical values are ignored, tech will progress down that road at an alarming rate. We’ve all probably experienced the great harm this can cause, from discriminatory AI to fake news during a presidential election.
This downward spiral is openly permitted because we have no true ethical standard, no overseer, and no enforcer. Sure, we have ethics-related laws that affect tech, but there’s no arm of the government enforcing them. When our representatives bring in tech giants to testify at congressional hearings, the outcome is negligible, especially compared to concrete measures like the EU’s privacy and security law, the General Data Protection Regulation (GDPR).
The consensus in tech is that people don’t want to do harm. The industry just needs a steady hand.
The time is right for a US tech ethics board. Our current president — who has personal experience living with a disability— is leading the charge for inclusiveness, and is likely to take drastic action against big tech’s most damaging practices.
While this proposal for a board is just one of many routes we can take towards better tech ethics, the hope is to get the ball rolling and the conversation moving along.
Who: public but with private influence
A tech ethics board can’t be all bark and no bite — it needs to execute. So, it inevitably has to be a public body, created and protected by law. While there are a few government bodies covering technology and science, we’re far from an institution with real leverage over how technology is used in the private sector.
The US government has several independent bodies, like NASA, the SEC and the FTC. These report to Congress — not the president — and are relatively bipartisan, self-regulating and protected from presidential influence.
If one of these independent bodies were a Tech Ethics Board, it would be able to create federal regulations, and advocate and enforce policies. Like the Federal Trade Commission (FTC), it would have law enforcement powers and educate companies on regulations. And, like NASA, it could have an external council, made up of private sector advisors in tech, academia, innovation and business.
The board members shouldn’t all be nominated by the president, and they should represent a diversity of genders, ages, ethnicities, abilities, and politics. It’s imperative that they have an expert understanding of new and emerging technologies, mobile apps, and social media — areas that feel underrepresented in government advisory boards, not to mention in Congress. Nor should they all be tech buffs; many should have experience working in sectors drastically impacted by tech, including finance, education, and health.
That independence is important. We already have a US Digital Service (USDS) and the White House Office of Science and Technology Policy (OSTP), but these report directly to the president. While they may be mission driven and focused on ethics, these bodies generally have neither carrots nor sticks to regulate ethical behavior in tech. If autonomous, this ethics board would be a trusted, non-partisan bridge between public and private.
What: regulations and education in the tech world
The board should cover the major issues of today and tomorrow, primarily the following:
- Accessibility, meaning products and the latest innovations are built for people of all abilities — from choosing visuals that consider people with speech or limb differences, to curating content that is considerate to people’s mental health.
- Handing users more control over their privacy and protecting people’s data.
- Eliminating bias from tech and the development of new AI technologies to encourage more equal representation in the industry.
- Tackling deceptive and dark patterns to prioritize user well-being.
- Educating businesses on fulfilling these regulations, providing the advice and resources to do so properly. It will also encourage more ethics education in training for tech professionals. This can be done in collaboration with organizations dedicated to these areas.
- Building a solid foundation for ethical behavior: laws and regulations.
Some laws already exist that, if followed, would make tech products better for everyone. Like the Americans with Disabilities Act (ADA), which requires services to be accessible to people with disabilities, from visual impairments to physical limitations.
That doesn’t mean everybody complies, though.
Step one for the board will be lobbying for laws where there are none. Technology always moves faster than the law can adapt, from the gray area around accessibility in mobile apps, to the biased AI that judges use to inform sentencing. Step two would be writing up concrete regulations that tech companies have to follow in order to comply with existing laws.
Compliance is a big word in the tech world. A lack of it exposes you to lawsuits and public backlash, so businesses may have the will to comply but lack the know-how. They should be able to rely on the tech board to coordinate education around legislation.
So, the next big question then is: how would all of this be enforced?
How: enabling rules to be followed
When they hear “ethics board,” some might wrongly imagine an enforcement arm whose sole purpose is to stifle progress in tech. But this is more of a firm hand guiding a transition into a more acceptable, ethical, and equitable system — one that ensures all members of society can access innovation while their rights and existence remain respected. That’s why a tech ethics board should work towards a “strive for this” rather than a “don’t do this” — which will in turn avoid excluding people from the process itself.
Armed with clear-cut regulations, the board’s officials would have oversight powers, including inspecting private businesses to ensure they’re in compliance. They’d run a much tighter ship when it comes to quality assurance, with products fined and/or blocked from going to market if they don’t abide by regulation. Ideally, there would be an agreement with mainstream marketplaces like the App Store (though Apple already has quite a solid QA process) and Google Play to temporarily remove sub-par products from the platform.
But who will bear the responsibility for the products getting put out into the world? As company leaders, we are ultimately the ones making the core decisions, and we have to be held to much higher standards than the rest of the team.
In the absence of an actual license that legally allows you to be an entrepreneur (which in this day and age also means being a cybersecurity director, psychologist, private data manager, etc.), CEOs should at the very least be taking some kind of professional oath. Like doctors’ Hippocratic oath, it wouldn’t be binding, but it would encourage awareness of and pride in the profession’s ethical standards. Breaking that promise to the people you’re serving, however, should mean being reported to the ethics board, and potentially getting hit with fines.
When non-compliance has tough real-world consequences, that effect will quickly trickle down into professional tech education, because demand will surge for employees with solid knowledge of accessibility, compliance, and ethics. Over time, academic and even less formal training courses will weave in these areas of study. By specifying the rules for the desired outcome rather than, for example, demanding a license for all tech employees, we avoid excluding people from entering and diversifying the industry.
Those are the essentials. But the board would have so much more weight if it had enough resources to also encourage stakeholder responsibilities. That is to say, pushing other major players to be ethically accountable, and encouraging a domino effect.
The biggest stakeholder of all is the customer. We know the public is keen to denounce bad practices at companies, and the board could provide a platform other than the cage fight that is social media. One option is an anonymous, private reporting system (hopefully encouraging internal complaints first) that could trigger investigations if enough severe complaints are made against a particular product. This suggestion comes with its shortcomings — like being heavy on the “paper”work — but if we put enough heads together, we can come up with better solutions on this premise.
Somewhere between their vision, their intentions, and their ambition, many founders forget that their first responsibility is to their customers. This is a human problem, and it won’t quietly disappear unless our leaders act assertively. Whether or not people agree on the need for better ethics is almost beside the point: unless we take action soon, our greatest innovations will advance so fast that we’ll lose sight of what needed regulating in the first place.
Let’s hope we haven’t reached that point yet.
What do you think about a tech ethics board? How do you imagine it would or should work?
Stuff around the web
A few features and relevant things in my day to day worth celebrating…
- (TechCrunch Feature) Flawed data is putting people with disabilities at risk
- (Accessibility.com) I spoke on the show Accessibility Matters
- (Stark news) We hit 500,000 total installs of Stark. Unreal.
- (Hiring) We’re hiring for quite a few roles at Stark.
What I’m reading…
Oprah and Dr. Perry explore how we can better understand each other and ourselves by addressing the vulnerability that comes from facing trauma and adversity at a young age. As we know from Dr. Nadine Burke Harris, children who experience adversity tend to have health problems later in life.
These stories and educational materials provide powerful scientific and emotional insights into the behavioral patterns so many of us struggle to understand throughout our lives — and how to overcome them.
The Secret Tesla Motors Master Plan
I’ve been getting in the habit of writing down my thinking on almost everything. Why is something happening? What approach do we take? How does that policy stay elastic as we scale as a company? And last month, I sat down and wrote (albeit not realizing it at first) the Master Plan for Stark.
Having this written down — detailing the issues with the industry, the different approach your company is taking, and the very big vision — sets the course for the company’s direction long term. It runs through the steps the company will take to achieve that goal, the general approach to invoking long-term change, and how it intends to reach the masses. It is ambitious and eyebrow-raising. And I highly recommend every CEO does it…
Worth thinking about…
I’ve been thinking about our general approach to how we build and, as we do, revisiting it every so often — a very Kaizen model of evolving policy and processes. But how are those strategies measured and validated? I’ve come to find the best way to think about this is at the conception of the idea.
When setting the tone for virtually anything in a company—from feature idea to long term vision—ask yourselves: What has to prove true for this to work?
Setting this in place is another high-intentionality mental model that will get your team reverse engineering actions to determine the best outcomes possible within the environment they’re working in.
As always, thanks for reading! And if you have any questions about the topics I’ll be covering, go ahead and AMA by replying to this email or pinging me on Twitter. If I don’t have the answer, we’ll deep dive together.
I appreciate you. And until next time…