Don't miss out on our next episode, going live on the 27th of February. This time it's all about The Copyright Crusade.
We are super happy to have with us the author of "Copyright and the Court Of Justice of the European Union", professor of Intellectual Property Law at Stockholm University, and Of Counsel at Bird&Bird, Eleonora Rosati. She is going to explain to us the TDM exception in Europe, the differences between the European approach and other legal systems, as well as share her thoughts on whether copyright claims in Europe are a (real) possibility. These are all topics she has extensively written and lectured on, we are all up for an enriching learning experience, so be sure to join us on the 27th!??
As always a big thank you to all those who joined us live. For those who didn't, as there is seldom a way to put pictures (or in this case a whole video recording) into words, you can always watch back the last Episode across our channels.??
An overview of some of the European DPA investigations of ChatGPT:
The car producers have been super keen on doing something with AI and chatbots too:
There have been some interesting developments in the field of house appliances as well:
- StableIdentity: Inserting Anybody into Anywhere at First Sight and InstantID: Zero-shot Identity-Preserving Generation in Seconds - two super 'fresh-off-the-shelf' papers elaborating new methods that need one single picture of your face and all of a sudden you can be featured in all sorts of situations, with all sorts of backgrounds or even appear in videos
- Chinese Deep-Synthesis Regulation - featuring solid definition of deep synthesis technologies, transparency obligations for providers of deep synthesis technologies and all synthesized content (not subject to any exceptions), examples of adhering to transparency and disclosure obligations, obligations for dispelling rumors occurring as a consequence of successfully spreading fake content, mandatory security assessments and record-keeping for all providers
- US H.R.6943 - No AI FRAUD Act - if the Act is adopted it would constitute individual freely transferable and descendible intellectual property rights over one’s likeness and voice, use and manipulation of which would be subject to consent. It includes very specific predetermined sanctions for violations and damages, anyone would be able to report suspected violations, there is a First Amendment exception, which would have to be evaluated on a case-by-case basis. And harm has to be proven in most cases but is presumed in all cases involving sexual and intimate content.
- Truth machines: synthesizing veracity in AI language models - the authors conduct a very interesting analysis of truth and its meaning as well as relation to the infamous model hallucinations, highlighting that truth is highly contextual and that the ground truth (meaning the truth as present in the raw data) is never objective or universal
- Large Legal Fictions: Profiling Legal Hallucinations in Large Language Model - in a new preprint study by Stanford RegLab and Institute for Human-Centered AI, researchers demonstrate that legal hallucinations are pervasive and disturbing: hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models. Moreover, the authors observe interesting patterns in the hallucination rates for various cases, courts, and judges.
- Secure, Governable Chips - some new ideas for guardrailing and safeguarding against malvolent uses feature on-chip mechanisms that could prevent or place boundaries around unauthorized actors’ use of export-controlled AI chips. Authors highlight that if implemented well, this would greatly aid enforcement, and reduce the need for top-down export controls that harm the competitiveness of the chip industry, instead enabling more surgical end-use/end-user–focused controls.
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training - new study exploring whether LLMs are capable of deceit. Humans definitely are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. The authors wanted to explore if an AI system could learn such a deceptive strategy, and (if it did) if we could detect it and remove it using current state-of-the-art safety training techniques.
- Controlling bad-actor-artificial intelligence activity at scale across online battlefields - the authors consider the looming threat of bad actors using artificial intelligence (AI)/Generative Pretrained Transformer to generate harms across social media globally. Guided by detailed mapping of the online multiplatform battlefield, they offer answers to the key questions of what bad-actor-AI activity will likely dominate, where, when—and what might be done to control it at scale.
- Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them. - Nearly 90 percent of top news outlets like The New York Times now block AI data collection bots from OpenAI and others. Leading right-wing outlets like NewsMax and Breitbart mostly permit them. Should we question the intent behind these statistics? Probably.
- Will AI transform law? - direct quote: "Today there seem to be over 150 papers on the topic. It’s not entirely clear why one would want to predict court decisions; presumably, it could be useful to lawyers in guiding legal strategy or businesses to assess potential litigation risks. Most of the papers don’t seem to explain their motivation. It is sadly common in AI research to see papers where the task itself is just an excuse to throw machine learning at a dataset and write up the results."
- The state of Generative AI and Machine Learning at the end of 2023 - A new survey from cnvrg.io, an Intel company, reveals that enterprise adoption of artificial intelligence solutions remains low. Despite the buzz, the implementation remains a challenge, from infrastructure to skill gaps.
- From GPT-4 to Gemini and Beyond: Assessing the Landscape of MLLMs on Generalizability, Trustworthiness, and Causality through Four Modalities - multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses. However, there is still a wide gap between their performance and the expectations of the broad public. This paper strives to enhance understanding of that gap through the lens of a qualitative study across four modalities: i.e., text, code, image, and video, ultimately aiming to improve the transparency of MLLMs.
- Generative AI Has a Visual Plagiarism Problem - the degree to which large language models (LLMs) might “memorize” some of their training inputs has long been a question. Recent empirical work shows that LLMs are in some instances capable of reproducing, or reproducing with minor changes, substantial chunks of text as well as generating other plagiaristic outputs when prompted adequately.
- AI poisoning tool Nightshade received 250,000 downloads in 5 days: ‘beyond anything we imagined’ - Nightshade, a new, free downloadable tool was designed to be used by artists to disrupt AI models scraping and training on their artworks without consent. Nightshade seeks to “poison” generative AI image models by altering artworks posted to the web, or “shading” them on a pixel level, so that they appear to a machine learning (ML) algorithm to contain entirely different content — a purse instead of a cow, let’s say. Trained on a few “shaded” images scraped from the web, an AI algorithm can begin to generate erroneous imagery from what a user prompts or asks.
?? "The only way to discover the limits of the possible is to go beyond them into the impossible." - Arthur C. Clarke. Your journey through the realms of sci-fi and beyond is truly inspiring! ?? BTW, did you know about the chance to be part of a Guinness World Record for Tree Planting? It could be a fantastic storyline for your audience! Check it out: https://bit.ly/TreeGuinnessWorldRecord ???
So thrilled to see the enthusiasm for bridging the gap between sci-fi and common sense! ?? Remember what Albert Einstein said, “Imagination is more important than knowledge." Can't wait to see what treasures you'll uncover in episode 27 while chasing those vector pirates! ? Keep navigating the seas of information with such zeal! #StayInformed #SciFiAdventure
AI Policy-Curious Attorney | Owner @ EG Legal Services | Director of Development at Center for Art Law
9 个月Tea's AI News Rant is my favorite segment??