Some interesting things to read the last weekend of September

Dear Friends,

My favorite story in Yuval Noah Harari’s recent book, Nexus, involves a Romanian computer scientist named Gheorghe Iosifescu. One day in 1976, Iosifescu went into his office and found another man sitting there. Iosifescu introduced himself but the man didn’t respond. Iosifescu got to work and the man just sat quietly looking at the computer screen and taking notes. He was clearly part of the Romanian secret police. The same pattern continued for the next thirteen years. Iosifescu would go into work; the man would be there to observe and scribble things down. It only ended when the Romanian government fell. Iosifescu never even learned the man’s name!

Harari discovered the story in a book called Children of the Night by Paul Kenyon, and he uses it to make a point about privacy and how it has evolved. The Romanian dictatorship needed to have agents—perhaps even ones who had no idea what they were actually observing—both to gather information and to strike fear in the populace. The government could really know all. (Speaking of Cold War agents, I have recommended this film before, but I’m going to recommend it again: The Lives of Others, about a Stasi agent in East Berlin in 1984 who develops a complex relationship to the man he is surveilling. One of the best movies I’ve ever seen.) Now, in our technological age, the challenge for totalitarianism is quite different and in some ways much easier. “By 2024, we are getting close to the point when a ubiquitous computer network can follow the population of entire countries 24 hours a day,” Harari writes.

This argument reminded me of Carissa Véliz’s excellent book, Privacy Is Power, which she excerpted in the Boston Review. She argues that giving up personal privacy means giving up power, and that a lack of privacy “can bring about a system that produces wants in people that work against their interests.” I thought of this particularly while looking at data from Jonathan Haidt and The Harris Poll, who surveyed Gen Z on their use of social media. The numbers are terrifying. (Twenty-three percent of the young people surveyed are on social media for seven or more hours a day!) But the most interesting finding is about regret. Most young people say that they get value from platforms like TikTok, X, and Snapchat. But almost half of them wish these platforms had never been invented. They are self-aware enough to know that they are acting against their self-interests when they mindlessly scroll, but they have too much social pressure and the algorithms are too good at pulling them in. This is a real collective action problem, and I’m glad that schools are finally starting to ban phones. I’m also intrigued by Pinterest’s work to develop an algorithm that tries to optimize for emotional well-being.

Privacy also ties nicely into some of the most interesting questions around frontier AI models: specifically, can they be taught to forget? Models will often snarf up data they shouldn’t have, such as personal and private data. Can that be removed without having to spend millions of dollars retraining the model? It turns out that it can, at least sometimes, through an amazing process called “fine-tuning with random labels,” which basically means “feeding in a bunch of garbage information that covers up the real information you want to hide.” Here’s a paper on the process, and a short video from me explaining how it works.
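The actual recipe in the paper is more involved, but the core move is easy to sketch. Here is a minimal, hypothetical Python illustration (the function name, data layout, and helper are my own, not from the paper): take the examples you want the model to forget, swap their labels for random incorrect ones, and fine-tune on the resulting set so the garbage overwrites the memorized signal.

```python
import random

def build_unlearning_set(dataset, forget_ids, num_classes, seed=0):
    """Build a fine-tuning set for 'unlearning' via random labels.

    dataset: {example_id: (features, label)} with integer class labels.
    forget_ids: ids of the examples the model should forget.
    Examples in forget_ids get a random *incorrect* label; everything
    else keeps its true label so general performance survives.
    """
    rng = random.Random(seed)
    finetune_set = []
    for example_id, (features, label) in dataset.items():
        if example_id in forget_ids:
            # Pick any label except the true one: the model is then
            # fine-tuned toward noise for these examples, covering up
            # whatever it had memorized about them.
            wrong = rng.choice([c for c in range(num_classes) if c != label])
            finetune_set.append((features, wrong))
        else:
            finetune_set.append((features, label))
    return finetune_set
```

After building this set, you would run a short, ordinary fine-tuning pass over it; the point is that this costs a tiny fraction of retraining the whole model from scratch.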

I should add that I’m not entirely against finding hidden things. And I like it when digital sleuths, for example, figure out that one of the world’s best ultrarunners (or perhaps her husband) is allegedly trashing her rivals through Wikipedia edits. It’s a reminder, as explained in a lovely book about philosophy and my favorite sport—The Examined Run by Sabrina Little—that running can be a source for self-absorption as well as self-transcendence.

This section is sponsored by Elastic, the Search AI Company. Elastic enables everyone to find the answers they need in real-time, using all their data, at scale. Its solutions for search, observability, and security are built on the Elastic Search AI platform—the development platform used by thousands of companies, including more than 50% of the Fortune 500. Visit here to learn more.

When humans have imagined the future of AI, we have often given it a voice: HAL refusing to open the pod bay doors for Dave in 2001: A Space Odyssey or Samantha in Her reassuring the bumbling programmer Theodore that he’s going to be okay. One of the more dynamic developments in AI is how good it has become at imitating the pitch, tone, rhythm, and emotional inflections of the human voice. For about nine months now, we’ve used AI to narrate Atlantic stories, and it sounds pretty good! (You can listen to some of them here.)

Until now, the typical experience of AI for most people has been typing questions into chatbots. Pretty soon, we will be able to converse with the AI as we would with a very smart friend (although a friend who has a penchant for certain words and is overly fond of bullet points). We can interrupt the AI, make jokes, and circle back to previous topics. The AI will detect the emotional timbre of our voice, picking up on sarcasm or noticing when we seem confused. One can imagine an empathetic AI tutor who patiently guides a student through a difficult math problem, or an AI kiosk that gives directions to tourists in Times Square in their native languages. (This kiosk will spend a lot of time explaining why there is no train to LaGuardia.) The human voice also holds a lot of information about our health and mental well-being—information that has been hard to extract until now, but that the pattern-recognition abilities of AI can surface. It might notice subtle speech delays in a toddler or gently advise that we should go to the doctor and see about that cough.

The most advanced AI models can clone a human voice with just a few minutes of audio. This obviously creates concerns about deepfakes and scams. My friend, the journalist Evan Ratliff, has been exploring this issue in a podcast called “Shell Game,” in which he sets loose a voice clone of himself. The results are both hilarious and disconcerting. There’s no question that this kind of technology will be used for all sorts of fraud and scams. But you can find many examples of the beneficial power of voice cloning. I was moved by the story of Jennifer Wexton, a US Representative from northern Virginia, who lost her voice due to a rare neurological disorder. In July, Wexton addressed the House floor with a clone of her voice. “When my ads came on TV, I would cringe and change the channel,” she says. “But you truly don’t know what you’ve got til it’s gone, because hearing the new AI of my old voice for the first time was music to my ears. It was the most beautiful thing I had ever heard.”

Cheers, N

Omar Khan

SOLUTIONIST - Blockchain | Cyber Security | GRC | AI | All Things IT

1 month

Trust! It’s all about trust! It’s amazing how AI has brought out our deep need for trust: the same trust we placed in Google when we started searching all those decades ago, the same trust we display here on LinkedIn when we comment on (or worry about) AI. Are we repeating with AI what previous demographic cohorts did with their insecurity about trusting online payments, Google, LinkedIn...

Vanessa Mahoney

AI Portfolio Lead | Storytelling with Data | Bio/EE | PhD Scientist | Fulbright Scholar | National Merit Scholar | Endurance Athlete | Musician

1 month

Interesting... but I think real power is acting in plain sight and standing behind what you did.

Nathalie Heynderickx

Executive Coach for Tech Leaders | MAICD | Facilitator at AIM | Lecturer of Positive Psychology at Monash Business School | Certified in DISC and GENOS Emotional Intelligence | Ex-Accenture | Ex-IBM | Ex-EY

1 month

Zack Scott - #privacy

Bernhard Sulzer, MA

Author/German Instructor/Translator | Law | Business/Marketing | Books. Helping you communicate effectively in German and English. Ideally positioned for English and German translations. Since 1998. +1 419 320 7745

1 month

How about we teach how to forget AI?! On a serious note, there’s still the issue of violating intellectual property laws in most cases; the fact that data is not only recorded and used to train AI but changed and altered without consent; and the bias and errors that are always possible, detectable only by a human mind. There are ways around guardrails, and bad actors are currently learning how to use AI for their objectives. Then there’s the issue of the non-advancement of this technology: it just does what it does faster, with more unchecked data to fall back on, as more and more data is recorded and stored in more and more data centers, using more and more energy and water. The chips contain particles like Teflon does, and discarding them after they are used up is a huge environmental hazard; the risk of this getting into the groundwater and atmosphere is great. And how about opening up Three Mile Island again to produce nuclear energy, with nuclear waste accumulating at the plant—waste that stays radioactive for a very long time—not to mention the risk of another accident, a meltdown of the core, and immediate life-threatening radiation for people across a vast area, because radiation travels with the wind? Can we focus on these things a little more?

Adv. Ravindra Pande

Mentor, Coach, Founder at India Training Services

1 month

Thanks for a great article, Nicholas. It triggers thoughts to analyze such scenarios from a different angle. Have a great weekend!
