Mapping AI Governance

Next RegInt Episode

Don't miss our next episode, going live on the 6th of May and again featuring a very special guest: Rob Sullivan.

This time around, we decided to switch things up a bit and invite a tech expert to the show. With his background and years of experience in design and automation engineering, data science and computer science, including presenting at the Explainable AI conference and being featured on the cover of the Journal of Machine Learning and Knowledge Extraction, Rob is the ideal candidate for a (more technical) conversation on how LLMs function in different environments, as well as how the evolving regulatory landscape is impacting their development and application. Rob has a great ability to explain complicated things in a simple way, so be sure not to miss out on the case studies and personal experiences he will bring to the show!

Last episode: Mapping AI Governance with Katharina Koerner

As always, a big thank you to all those who joined us live. For those who didn't: since there is seldom a way to put pictures (or, in this case, a whole video recording) into words, you can always watch the last episode back across our channels.

Tea's AI News Rant - featuring some weird lawsuits, questionable business structures and a whole lot more

Elon's tweet claiming he'll drop the lawsuit if OpenAI changes its name to ClosedAI.

One of the issues Musk has with OpenAI is its allegedly false pretension of being a non-profit organisation. To let you draw your own conclusions, here are some resources:

  • the requirements for being exempt from taxes as a non-profit can be found on the IRS site
  • OpenAI's corporate structure, according to OpenAI:

OpenAI, Our structure

Something approximating OpenAI's actual corporate structure.

  • Elon has also recently updated the public on the recovery of the first Neuralink test subject; unexpectedly, the update includes Twitter bans

Elon's tweet sharing what is supposedly the first-ever tweet posted solely by thinking.

Personal hell this month: Calmara AI - the AI system that scans whether your one-night-stand-to-be has an STD (no further comments at this point, you will have to go check it out for yourselves)

AI Reading Recommendations

  • Google May Use Radio Waves to Feed its AI Models - Google wants its AI models to listen to the radio, quite literally. Google's system relies on two machine learning techniques: federated learning and, as its subset, ephemeral learning. This would keep the audio data from ever being stored at all; the system would instead rely on a continuous stream of data that is used for training and then immediately discarded (a rough sketch of the idea follows below this list). The patent for the method can be found here.
  • Malicious AI models on Hugging Face backdoor users’ machines - an article highlighting the potential issues associated with relying on open model-hosting sites for finding models. Hugging Face, as one such site, hosts, besides many useful models, hundreds of malicious AI models, some of which create a persistent backdoor into the victim's system (a toy demonstration of the underlying trick follows below this list).
  • Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries - Michael Veale and Robert Gorwa recently provided a case study of the issue highlighted in the previous article, also outlining the industry practices emerging in response to rising content-moderation demands. They conclude by proposing ideas for improving the current pressure-driven self-regulation regime, with the goal of strengthening oversight and enforcement of the existing policies on these issues.
  • Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal - the authors begin by highlighting the risks of mindlessly integrating LLMs without understanding their deficiencies. They then demonstrate the applicability of tools like the OWASP Risk Rating Methodology on a fictional scenario analysis involving a University Virtual Assistant system (the arithmetic behind the methodology is sketched below this list). Following the risk analysis, the authors also conduct technical and business impact analyses and distil their findings into a threat matrix for quick reference.

Figure: the system design of the university virtual assistant, emphasizing the diverse security measures implemented for each component.
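
For the curious, here is a rough sketch of what a "train and discard" loop could look like. This is our own toy illustration in plain Python, not Google's patented method: the linear model, the featurizer and all names are invented, and only the ephemeral-learning part is shown.

```python
import numpy as np

# Toy illustration of "ephemeral learning": every incoming audio chunk is
# used for exactly one model update and then dropped, so raw audio is never
# persisted anywhere. The linear model and featurizer are stand-ins.

rng = np.random.default_rng(0)
weights = np.zeros(64)  # tiny stand-in for an on-device model


def featurize(audio_chunk):
    # A real system would compute spectrogram-like features here.
    return audio_chunk[:64]


def ephemeral_update(audio_chunk, target, lr=0.01):
    """Consume one chunk: update the weights, store nothing."""
    global weights
    x = featurize(audio_chunk)
    error = float(weights @ x) - target
    weights -= lr * error * x
    # audio_chunk goes out of scope here -- nothing is written to disk.


for _ in range(100):  # simulate a continuous broadcast stream
    chunk = rng.normal(size=64)
    label = float(chunk.sum() > 0)  # toy training signal
    ephemeral_update(chunk, label)

print("updated from 100 chunks; chunks retained: 0")
```

In the federated variant, many devices would run updates like this locally and share only the resulting weight deltas, never the audio itself.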
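As for how a mere "model file" can backdoor a machine: many checkpoints are distributed as Python pickles, and the pickle format can embed code that runs at load time. A deliberately harmless demonstration of the mechanism (our toy, not taken from the article):

```python
import pickle

class NotAModel:
    def __reduce__(self):
        # Tells pickle what to call when the object is deserialized.
        # We call print(); an attacker would call os.system() or similar
        # to plant a persistent backdoor.
        return (print, ("this ran during pickle.loads()!",))

blob = pickle.dumps(NotAModel())
pickle.loads(blob)  # the payload fires before you ever use the "model"
```

Loading an untrusted pickled checkpoint is effectively running untrusted code, which is why code-free weight formats such as safetensors, and the malware scanners model hubs now run, exist in the first place.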
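And to give a feel for the OWASP Risk Rating arithmetic the last paper leans on: likelihood and impact are each averaged from 0-9 factor scores, bucketed into LOW/MEDIUM/HIGH, and combined in a severity matrix. The factor subset and the scores below are invented for illustration; they are not the paper's numbers.

```python
# Back-of-the-envelope OWASP Risk Rating: average factor scores (0-9),
# bucket them, then combine likelihood and impact via the severity matrix.

def level(score):
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

# Hypothetical threat: prompt injection against a university virtual assistant.
likelihood_factors = {"skill_level": 3, "motive": 4,
                      "opportunity": 7, "ease_of_exploit": 8}
impact_factors = {"loss_of_confidentiality": 7, "loss_of_integrity": 5,
                  "reputation_damage": 6, "privacy_violation": 7}

likelihood = sum(likelihood_factors.values()) / len(likelihood_factors)
impact = sum(impact_factors.values()) / len(impact_factors)

# OWASP's overall-severity lookup, keyed by (likelihood level, impact level).
severity = {
    ("LOW", "LOW"): "NOTE", ("LOW", "MEDIUM"): "LOW", ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW", ("MEDIUM", "MEDIUM"): "MEDIUM", ("MEDIUM", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "MEDIUM", ("HIGH", "MEDIUM"): "HIGH", ("HIGH", "HIGH"): "CRITICAL",
}

print(f"likelihood {likelihood:.1f} ({level(likelihood)}), "
      f"impact {impact:.1f} ({level(impact)}) -> "
      f"{severity[(level(likelihood), level(impact))]}")
```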

  • Integer tokenization is insane - the article provides a simple and rather visual explanation of how LLMs tokenize numbers, and why, as a consequence of this process, they cannot and will not ever be able to do math reliably, at least not until integer representation within the models profoundly changes. You can reproduce the weirdness yourself with the snippet after the plot below.

Plot: how composite number tokens are composed in the GPT-2 tokenizer.
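
A minimal way to see this first-hand, assuming the tiktoken library is installed (the sample numbers are arbitrary):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("gpt2")

# GPT-2's BPE vocabulary contains some multi-digit chunks but not others,
# so visually similar integers can be split into unrelated pieces.
for s in ["100", "1000", "1001", "2023", "54321"]:
    pieces = [enc.decode([t]) for t in enc.encode(s)]
    print(f"{s!r} -> {pieces}")
```

Whatever chunking the vocabulary happens to contain is what the model sees, which is exactly the arbitrariness the article's plots visualize.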

  • Look into the machine's mind - a tree visualisation accompanied by a cubical tree of bifurcating trajectories, which together visualize, and let you navigate, the LLM semantic space of a single sentence. By hovering over a word, which corresponds to a point in a sub-sequence, you can see in the cube the trajectory from the prompt to all the completions that start with that sub-sequence. A very cool demonstration of LLM complexity.
  • Meta’s A.I. Chief Yann LeCun Explains Why a House Cat Is Smarter Than The Best A.I. - Yann LeCun says: "A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs." - read the article to learn more about why cats are (still a lot) smarter than LLMs, as well as LeCun's general thoughts on achieving AGI.
  • Neuro AI. Will it be the future in AI and overcome the LLM limitations? - a 30-minute interview with Dr. Gabriele Scheler, full of gems. Do listen to hear her thoughts on why Wikipedia is smarter than LLMs, why self-driving cars are cool but only in controlled environments, and why regulation is needed as opposed to industry self-regulation.
  • The False Choice Between Digital Regulation and Innovation - the author challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. By decoupling tech regulation from its allegedly adverse effect on innovation, and by rejecting the argument that the US-EU technology gap can primarily be attributed to tech regulation, the author seeks to redirect the current scholarly conversation on digital regulation.
  • Regulators Need AI Expertise. They Can’t Afford It - an article in Wired says: "The European AI Office and the UK government are trying to hire experts to study and regulate the AI boom—but are offering salaries far short of industry compensation." A strong reading recommendation to learn just how wide the private-public expertise gap currently is and why it won't shrink any time soon.

Navigating AI Governance, AI Strategy and AI risk management - courtesy of Katharina Koerner
