The influence of the optics!

Time has passed since my last Reputation Matters newsletter, and a week hasn't gone by without another financial, political, or technology issue ending up in the media.

The problem is that for quite some time, we've been working and living in an environment where optics alone have greatly influenced the trust we give decision-makers, even where factual evidence is limited. As a result, we are hostage to perception!

Look at the issues surrounding Substack, #AI and the financial and funding markets.

Substack

Last week Substack launched Notes, a service designed to compete with Twitter.

The platform has been growing at a good pace, not without growing pains, while capturing the medium and long-form content-creator market.

As part of the launch, Substack CEO Chris Best went on a charm offensive to secure exposure to their new service and, hopefully, draw in users from Twitter.

However, all didn't quite go to plan when, in an interview with The Verge's Editor-in-Chief, Nilay Patel, he was asked about Substack's moderation policy, a valid question given the rise of hate content across many platforms, especially Twitter, since the takeover by Elon Musk.

Best had options, but he dithered in his answers and did not reassure Patel or anyone else thinking of either using the platform or spending advertising money on it.

Chris Best rambled and waffled, talking about freedom of speech and letting users decide what was right: statements that appeared to pass responsibility on to the users, whereby Substack creates the platform and leaves the wider world to do and say what it wants there, so long as it is not illegal.

Best was trying to sell Notes to users and potential advertisers, as well as to the freedom-of-speech lobby, in the hope that they could coexist.

What Best failed to either realise or acknowledge is that big-name advertisers don't want to be seen in an environment where their own reputations can be damaged, which is why moderation is so important.

Equally, and maybe he was aware of this, which is why he was avoiding the issue, is the situation regarding Section 230 of the US Communications Decency Act, which states that an 'interactive computer service' can't be treated as the publisher or speaker of third-party content. This law makes the individual who posts the content responsible for it, passing risk and responsibility away from the company, the 'interactive computer service'.

Needless to say, it is Section 230 that is creating problems for many, and there is a growing appetite to revisit it, though any change faces a battle with the US freedom-of-speech lobby.

All that said, Chris Best could have done much better because, based on the interview with Patel, potential users and advertisers will be thinking twice about the risk of investing in a platform that is vague about its responsibility to take down racist or other hateful content.

Artificial Intelligence

No doubt you've followed the rise and rise of #AI since OpenAI released #ChatGPT, a service that has captured attention and headlines across the mainstream media.

While AI has been around for quite some time, what we have seen in recent months is a story set in three Acts.

[Image: OpenAI and ChatGPT]

Act 1 - the founding

OpenAI was 'co-founded by Ilya Sutskever and Greg Brockman in 2015, originally co-chaired by Sam Altman and Elon Musk, and funded by Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Wojciech Zaremba, Peter Thiel and others.' Keep an eye on Musk as the mover in this story.

However, in 2018 Musk 'resigned his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. Sam Altman claims that Musk believed OpenAI had fallen behind other players like Google, and Musk proposed instead to take over OpenAI himself, which the board rejected. Musk would subsequently leave OpenAI.'

Between then and December 2022, OpenAI continued to grow at pace, creating ChatGPT and DALL-E, services that helped it secure more funding and reported valuations of around $20 billion or more.

Act 2 - Growth and the threat of SkyNet

In January 2023, Microsoft announced a 'new multi-year, multi-billion dollar (reported to be $10 billion) investment in OpenAI.' A huge headline, and one that, along with the release of GPT-3.5, helped it secure over a million sign-ups.

Suddenly, everything on TikTok, Instagram and other sites was AI and how companies needed an AI strategy if they were to survive. It's worth noting that many companies already had these in place.

As Microsoft moved in on AI, so did Alphabet, which released Bard in March 2023. The investment by Microsoft had, of course, forced Alphabet's hand.

The increase in conversation led to adoption at scale, and to a growing server shortage at AWS, Microsoft and Google, with start-ups in the AI space having either to overpay because of the shortage or wait months for servers that could cope with AI's requirements.

ChatGPT was now a mainstream product, and companies had to quickly create internal policies on what staff could and couldn't use it for, as stories emerged of people inputting confidential or proprietary information into AI platforms like OpenAI's ChatGPT without realising that such sensitive data, or even code, could be learnt and possibly shared by the AI they use.

The rule of thumb when creating a policy on using ChatGPT or any other AI is to know the terms and conditions and not share anything sensitive or proprietary.

Guidelines are being rolled out at pace, with Samsung software engineers already having been caught pasting 'proprietary code' into ChatGPT.

Then Musk returned to the fray, releasing an open letter calling for work on AI to be paused and throwing a curveball into the growing AI environment. Musk has always been guarded about AI in the statements he's made, but the letter was also signed by other leaders from the technology sector, including Apple co-founder Steve Wozniak.

[Image: Elon Musk]

The points made were valid, but it was Musk who was seen as a key signatory of a letter specifically asking for a 'six-month pause on AI development', which was odd!

Act 3 - Musk's own AI company

After the letter was released and a conversation had begun about pausing AI development work for six months, Musk suddenly announced that he was planning 'an AI start-up to rival OpenAI', a company he had been part of.

So, the optics. Musk was part of OpenAI, then he left. The company grew and attracted investment and headlines. Musk waded back in, calling for a six-month stop to AI work so the risks of AI to humankind could be assessed. Then he announced a new AI company to compete.

Behind the stories, there are other stories. Stories that are designed to influence, perhaps. And if not, well, the optics, they influence!

And for this newsletter, that is all!
