Oct. 24, 2023 — AI's Poor Transparency Scores
Headlines You Should Know
Large language models are essentially black boxes — their creators don’t share how they’ve been trained or what exactly they’ve been trained on. A team of researchers from Stanford, MIT, and Princeton created the Foundation Model Transparency Index to measure just how forthcoming these developers are, and the scores are … not ideal.
Meta’s Llama 2 was the most transparent at only 54%, followed closely by Hugging Face’s BLOOMZ model (53%), and OpenAI’s GPT-4 came in third at a paltry 48%. Anthropic, Claude’s parent company, came in at only 36% and, perhaps knowing it needs to improve transparency, recently collected public input to improve its “constitutional AI” methodology.
It’s important to understand how these models are built and how they operate because AI adoption is growing fast and influencing everything from mundane daily tasks to critical business decisions.
Marc Andreessen, Co-Founder and General Partner of Andreessen Horowitz, headlines a group of more than 20 AI leaders at today’s second Senate AI Insight Forum. The first installment, held Sept. 13, included a closed-door meeting with Elon Musk, Sam Altman, Bill Gates, and Mark Zuckerberg, among others.
Today’s forum “coincides with the announcement of a new bill, called the Artificial Intelligence Advancement Act of 2023 (S. 3050),” according to Tech Policy Press. The proposed legislation focuses on the use of AI in financial services and defense, aiming to gather information through reports, establish a bug bounty program, and require a vulnerability analysis, among other purposes. While a blanket AI regulation seems unlikely in the near term, Congress has clearly identified these two sectors as priorities when it comes to getting AI implementation right.
Elsewhere …
Tips and Tricks
Sifting Through the Noise
What’s happening: There’s more content available than ever before (thanks in part to generative AI), and that means finding the best sources or news stories can be tough. Often the first links that show up in a search aren’t the best ones, leading educators at Stanford to start teaching students the concept of “click restraint.”
Why it matters: With repetitive stories and aggregator websites clouding the quality of search results, we need a new way to easily find what we’re looking for, including extra context.
Try it out: Even though ChatGPT reinstated its “Browse With Bing” feature, plugins like Link Reader seem to be better at sifting through the noise and finding context. For instance, I recently needed to find stories about “forever chemicals” that didn’t include direct references to water, which is where the chemicals are usually found. Browse With Bing couldn’t come up with any stories that met those criteria, but Link Reader was able to find several examples.
Quote of the Week
“In past situations when things were this difficult, the natural reaction of a Senate or a House was to ignore the problem and let someone else do the job. But with AI we can’t be like ostriches sticking our heads in the sand. Only Congress can do the job, and if we wait until after AI has taken hold in society, it will have been too late.”
— Senate Majority Leader Chuck Schumer on regulating AI
How Was Your Week With AI?