To Standard(ize) or Not to Standard(ize)

Next RegInt Episode

For our next episode, we are doing something entirely new: we are heading to Brussels for the IAPP Europe Data Protection Congress, where we will be collecting statements and interviews from participants, to be published after the event.

If you are interested in giving a brief interview, or simply want to meet us, do not hesitate to contact us. We'll be there on 20-21 November!

Last episode: To Standard(ize) or Not to Standard(ize) with Boris Inderbitzin

As always, a big thank you to all those who joined us live. For those who didn't: since there is seldom a way to put pictures (or, in this case, a whole video recording) into words, you can always watch the last episode back across our channels.

Tea's AI News Rant - meeting (the rest of the) tech bros

In this episode, we introduced three other players in the tech world, as the world does not, in fact, revolve around Musk, Zuck and Altman. (I know, weird.)

  1. Jensen Huang a.k.a. tech Taylor Swift (Nvidia), who has been stirring up the tech scene in Thailand and beyond with his chip-producing giant. However, the staggering growth appears to be slowing down somewhat; whether this will prove to be a trend or just a glitch in the otherwise violent growth Nvidia has been experiencing remains to be seen.
  2. Andy Jassy a.k.a. everything but tech Taylor Swift (Amazon), who has finally been seeing growth after taking over from Bezos in the midst of the pandemic, has a rather interesting GenAI strategy: an Alexa overhaul, the launch of Bedrock, a generative AI service featuring its various Titan models, and the announcements of Q, Rufus and Amelia. Whether any of these GenAI systems actually makes Amazon a serious competitor in the field remains to be seen.

Screenshot of a Google search for Andy Jassy

  3. Ryan Roslansky (LinkedIn), who has been mimicking at least one of Musk's questionable moves. Namely, LinkedIn is now following X's bad example and has been volunteering all of its users' data by default for training its genAI systems. Which systems those may be remains to be seen.


Google is still mostly busy being reasonable (I know, boring!)


Thankfully, I have Sam Altman causing my daily headaches:


Screenshot from OpenAI's community forum


Screenshot of the emails sent to users trying to force OpenAI's models to 'reason'.

In the meantime, Sam Altman is busy publishing blog posts dangerously resembling a love story gone wrong: part historical novel of knights and fair ladies, part cheap poetry, and just a dash of the occult.


Screenshot from Sam Altman's blog.

PS: Stay tuned, Artificial General Intelligence is coming in approximately a few thousand days!


Screenshot from Sam Altman's blog.

While Sam Altman does his professing and we are all busy having our eyes glued to him (it's kind of like watching a car crash; I, for one, am incapable of looking the other way), AI is out and about and already capable of some quite disconcerting things, such as:

  1. Gaining the power to convince people to reconsider their belief in a particular conspiracy theory. Just imagine what we could do if we used the same model to reinforce people's narratives. Exciting!
  2. Getting better at CAPTCHAs than humans are.
  3. Building their own closed scientific community. (For all those still not disturbed enough, you can have a look at the paper written by the AI Scientist.)


This edition's personal hell of the month is the culmination of this disconcerting image because, apparently, it is okay to date your chatbot, and you should be able to marry it as well. (Well, why not? Who am I to tell you you can't?)

This proclamation, which can be found in an article in The Verge, was made by Eugenia Kuyda, the head of chatbot maker Replika. (It's not only the tech bros causing me headaches after all.) Because, when you are busy making chatbots, why not make sure people become completely dependent on the things and socially maladapted? Sounds great!

Oh, and apparently it is also okay for the company to then read and analyze the chats with your chatbot partner or spouse. So it appears that marital privilege still does not apply; for that, you still need a human spouse. (Discrimination?)


Screenshot from the interview published in the Verge

AI Reading Recommendations

  • California's Governor Gavin Newsom has signed AB 1008 into law - clarifying that the California Consumer Privacy Act (CCPA) applies to personal information in any format, including AI systems capable of outputting such information. The new law specifies that "personal information" covers everything from physical documents and digital files to abstract digital formats like compressed data, metadata, and, notably, AI systems that can generate personal information. This move was necessary to close a rather glaring loophole. Without this clarification, companies could have trained AI models on personal data and then sold access to these models—sidestepping privacy regulations and the need to register as data brokers.
  • California's AB-2013 Generative artificial intelligence: training data transparency - By January 1, 2026, developers of generative AI systems will be legally required to disclose detailed information about the data used to train their models before releasing them to the public in California. This isn't a minor footnote; they'll need to post on their websites a high-level summary of their datasets, including the sources or owners, how the data furthers the AI's purpose, and whether it includes any personal or copyrighted information. They must reveal the number of data points, the types of data used, whether the data was purchased or licensed, any modifications they've made, and the time periods during which the data was collected. Even the use of synthetic data generation must be disclosed. (A hypothetical sketch of what such a disclosure could look like in practice follows this list.)
  • And why stop there? California is rolling out 18 AI laws. Did anyone complain about the heavily regulated European Union markets? So, next time someone complains about the EU being all stringent and mean, just tell them, 'Let's go to San Francisco,' and give them a smile. Red tape everywhere.
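
To make AB-2013's list of disclosure items a bit more concrete, here is a minimal sketch of how a developer might capture them as structured data. This is purely illustrative: the schema and the field names are our own assumptions about how to organize the bill's enumerated categories, not an official or statutory format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DatasetDisclosure:
    """Hypothetical record of the categories AB-2013 asks developers to summarize.

    Field names are illustrative assumptions, not official terms.
    """
    sources_or_owners: List[str]             # where the data came from / who owns it
    purpose_description: str                 # how the data furthers the AI's intended purpose
    num_data_points: int                     # number of data points in the dataset
    data_types: List[str]                    # e.g. "text", "images", "audio"
    contains_personal_information: bool      # personal information as defined by the CCPA
    contains_copyrighted_material: bool      # whether copyrighted works are included
    purchased_or_licensed: bool              # whether the data was purchased or licensed
    modifications: Optional[str] = None      # cleaning, filtering, or other processing applied
    collection_period: Optional[str] = None  # time period during which the data was collected
    includes_synthetic_data: bool = False    # whether synthetic data generation was used

# A toy example with made-up values:
disclosure = DatasetDisclosure(
    sources_or_owners=["Common Crawl", "licensed news archive"],
    purpose_description="General-purpose text generation",
    num_data_points=1_000_000_000,
    data_types=["text"],
    contains_personal_information=True,
    contains_copyrighted_material=True,
    purchased_or_licensed=True,
    modifications="Deduplication and toxicity filtering",
    collection_period="2016-2023",
    includes_synthetic_data=True,
)
```

Generating the required high-level website summary from a structure like this would at least keep the disclosures consistent and easy to audit.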


  • AI generates covertly racist decisions about people based on their dialect - There is a well-known issue lurking beneath the surface of our beloved AI: covert racism in language models. While hundreds of millions rely on these models for everything from writing assistance to hiring decisions, research reveals that they perpetuate subtle and not-so-subtle racial prejudices. Specifically, these AI systems harbor dialect prejudice against speakers of African American English, embedding raciolinguistic stereotypes more negative than any experimentally recorded human biases. What does this mean in the real world? Current methods to reduce racial bias, like aligning AI with human preferences, might just be hiding the problem instead of fixing it. Isn't it ironic? In our rush to automate and innovate, we're allowing technology to reinforce old prejudices under the guise of progress.


  • Devs gaining little (if anything) from AI coding assistants - Recent studies on AI coding assistants like GitHub Copilot reveal mixed results. While some developers report increased productivity, a study by Uplevel found no significant gains in key programming metrics, such as pull request cycle time and throughput. In fact, Copilot introduced 41% more bugs, according to their analysis. The study also suggests that AI tools haven't helped reduce developer burnout, with developers spending more time reviewing AI-generated code. Some companies, like Gehtsoft USA, find AI-generated code challenging to debug, often requiring a complete rewrite.
  • The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers - Another study concludes that less experienced developers showed higher adoption rates of GenAI and greater productivity gains. So, while junior developers are racing ahead, senior auditors are left sifting through a mountain of flawed code. Where does this vicious cycle end? It’s a recipe for burnout among those tasked with maintaining quality, all while the software itself suffers from increased bugs and instability. In essence, we’re trading short-term productivity gains for long-term headaches.


  • A recent study examined how non-experts perceive AI-generated medical responses compared to those from human doctors - The study involved 300 participants, and the findings were shocking: people couldn't reliably tell the difference between AI and doctor responses. Even more concerning, participants rated high-accuracy AI answers as more valid, trustworthy, and satisfactory than those from doctors, and even low-accuracy AI responses were trusted just as much as the doctors' answers. This blind trust led individuals to follow potentially harmful medical advice and seek unnecessary treatments, mirroring the reactions they had to doctors' responses. That may also be the result of having no universal healthcare. The real lesson? Forget alignment; just do not use chatbots to provide medical advice, and do not effing ask chatbots for medical advice either. See, problem solved.
  • A recent paper titled 'Jailbreaking Large Language Models with Symbolic Mathematics' introduces a technique called MathPrompt. This method cleverly encodes harmful natural language prompts into complex mathematical problems, effectively bypassing the safety mechanisms of advanced language models. The implications are alarming, yet foreseeable. In experiments conducted on 13 state-of-the-art language models, MathPrompt achieved an average attack success rate of 73.6%. This high success rate exposes a critical vulnerability: current AI safety measures aren't equipped to handle mathematically encoded inputs. 'AI safety doesn't come from working on AI safety, it comes from working on better AI systems,' as Yann LeCun said. Then please, build better AI systems or stop building them altogether.

Conclusion

That would be all for this edition of the newsletter! Do not hesitate to reach out to us or to our wonderful guest Boris Inderbitzin with any follow-up questions or comments. Otherwise, see you all next time!

