Do trust & safety really matter in tech?
Charlie Sell
As COO, I lead our EMEA business, which offers global solutions to our clients' talent and transformation challenges. Our core practices are in life science, engineering, legal, business transformation and technology.
Trust and safety matter in every aspect of people's lives and human interaction, and I think they are two fundamental social elements that play a key role in our world.
If we stop for a moment to appreciate how ubiquitous and pervasive tech has become, and how deeply it is integrated into our lives, it's only natural to expect to interface with it safely and with confidence.
Big tech may not care about trust & safety, but small tech should
A decade ago, issues like trust and safety were barely on the agenda in most tech firms. Now, whether it’s a data breach, regulatory trouble, lack of content moderation or software that was launched prematurely, the almost daily headlines have got me thinking about the role of safety, trust and reputation in tech-first businesses.
It’s a broad topic that covers everything from product design and security to ethics and governance, and it’s a big challenge for many companies. It’s rarely black and white, and the balance that’s needed between responsible decisions and the race for market share is what makes this area so complex.
Trust is a very modern priority, but not for everyone
Why is trust a bigger consideration now than 10 years ago? Is it higher ethical standards? Greater regulatory risk? Pressure from activist investors?
For me, two critical factors are the sheer volume of customer data (and the responsibility that creates) and the level of public exposure a company has, particularly on social media.
Twitter is an obvious place to start. Content moderation has been a thorny topic there, and with a second head of trust and safety leaving the business since Musk took over, it’s an issue that isn’t going away. Musk believes a robust, scalable platform matters more than manual moderation, and relying on huge teams of people to review content does not seem like a sustainable or scalable business model. I understand that view, but I also don't want my children or anyone else in society to be exposed to harmful content, so it is a complex business challenge, and I would hope that technology can provide a solution.
When I spoke to Marco De Bortoli, he commented: “Not only is it harder to scale people than it is to scale tech, it’s also very hard to remove bias, conscious and unconscious. As has been proven in multiple cases and experiments, that bias bleeds through and is unfortunately magnified when it’s fed into AI models.”
Last week it emerged that Microsoft will pay a $20 million fine for illegally collecting children’s data without parental consent. Crypto’s biggest exchanges are being sued by the SEC for alleged illegal activity, and YouTube has angered some by making its rules on election misinformation more permissive.
These may seem like unconnected issues, but I see trust as a thread running through all of these stories. Sometimes these things come down to unforeseen events or innocent mistakes. Sometimes it’s a siloed decision in a vast multinational that no one else was aware of. But often there is a calculated choice to sacrifice safety or trust in order to secure a commercial advantage.
AI is another area with an interesting angle around product safety and trust in technology. One of the so-called Godfathers of AI recently said he now wishes the industry were moving more slowly, and I agree. Personally, I think the race to get AI into the hands of the population has been a vanity exercise. So far there has been no commercial motivation and no real market share to win. The creators will eventually make money by embedding AI into B2B businesses, so releasing these tools to the general public was a PR exercise designed to help their enterprise sales teams when the time is right.
This combination of accessibility and lack of governance means that AI will be abused. There’s no control, and the speed with which this could accelerate is scary.
Moreover, with the capability of today's AI-powered tools it is becoming harder and harder to tell what’s true and what’s not. I’d like to quote techno-sociologist Zeynep Tufekci here: "We cannot outsource our responsibilities to machines, we must hold on ever tighter to human values and human ethics." I think we lost a chance here with how aggressively we went about training the models.
As so often happens, we got so excited about what we could do that we didn’t stop to ask whether we should.
Trust matters to investors though
It’s not just the public that now cares more deeply about the decisions a company makes around the safety and reliability of its product and the credibility of its brand.
Investors care too, but not for the reasons you might think. They’re not looking for technology that’s 100% robust or effective but slow to market. Neither, though, are they looking to put their money into products that are first to market yet insecure or ethically unsound.
A company that grows quickly over 12 months and cuts corners to achieve it may only be attractive to unscrupulous investors looking to make a quick buck.
The investors I speak to have a five-year investment horizon, and because of that, they’re looking for companies that will be here in 10 years having built the right foundations along the way. They understand that to get the exit they want, the person they sell to will also have a five-year exit strategy. If a tech firm caps out after five years and there’s no growth, there’s no buyer further down the line.
What’s this got to do with issues like trust and safety? Many investors now view safety and trust not as an ethical consideration but a commercial one, as careful choices in this area are often indicators of longevity. And because investors like tech companies who are strong on trust, these companies tend to command higher multiples too.
This is something Nick Bostrom has brought up in many of his talks and papers, where he once said: “The risks in developing superintelligence include the risk of failure to give it the super goal of philanthropy. One way this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general.”
In the same vein, though don’t quote me on this, he has said that the problem with AI is that the companies with the capital and potential to really accelerate its development are unfortunately also the companies with possibly the worst moral compass.
How critical is trust for your business?
Microsoft may well be able to afford a $20 million fine, but those that can’t afford to play with fire when it comes to trust are usually B2C companies in competitive sectors where consumers can easily switch if they lose faith. They are much more exposed to ‘trust risk’, and a strong record on things like product safety, reliability and data security is vital to retaining consumer confidence.
The development of new technology is all about trade-offs. Go for gold-standard compliance or strong ethical design and you could end up stifling free speech, innovation or competitiveness. The speed and frequency of releases are a big talking point within the tech world too. It's unrealistic to wait until a product is 100% robust before releasing it, because while you perfect your product, someone else is launching theirs and stealing your market share.
If I put myself in a CTO’s shoes, then I’m in the ‘80% finished’ camp. I believe you can usually make up the other 20% before it’s too late. How you decide what gets finished and what does not depends on the nature of your product and the risk to the consumer.
It’s often a tough call, and I’ve been lucky enough to see it from three different perspectives: as an investor, as a customer and through working with businesses.
As an investor, I am more concerned with how much market share we can gain and less worried about releases failing, provided I have faith that bugs can be fixed. The exception is when the price of failure is simply too high. For a company developing medical software, these decisions might be life and death, and I would back the company that is doubling down on its R&D all day long. I’m not worried about a rival beating us to market if our product is far more robust. If you’re in online retail, however, the desire to ensure every page is perfect and every bug is ironed out may well cede the advantage to a competitor.
Whatever sector you’re in, I think the onus of accountability is now much greater for small and medium-sized tech businesses than it is for Big Tech.
Whether it's the robustness of your product, the moderation of your content or responsible data security, none of these decisions is entirely black and white; it’s a weighing of risk against reward, and a delicate balancing act to keep your customers, regulators and investors happy. No pressure!
The good news is that technology such as AI is levelling the playing field. Startups now have access to much of the same technology as Amazon does, so you can leverage their investment to build your products without having to face some of these difficult choices in the future.
In my opinion, lots of big companies are losing the trust their users and customers put in them; you only have to look at the huge exodus of users from WhatsApp towards alternatives seen as better and safer, like Signal and Telegram.
The only way for these alternatives to have a chance of succeeding is to be better at exactly what their competition failed to provide, and having a head start is always the best place to be.
What’s your view on these issues? Do you have a different view to mine? If you’re in a tech-first business, what difficult decisions have you had to make around trust and safety? Please feel free to share your views in the comments section, and share this newsletter with your network if you’ve found it interesting. Thanks for reading!