Thinks and Links | August 30, 2024


Happy Friday!


A recent news story based on research from Arize AI sheds light on how Fortune 500 companies are addressing AI in their SEC filings, and the results are eye-opening. This report quantifies much of what we've been hearing from clients. The data paints a vivid picture of AI's growing influence and underscores a crucial point: AI governance isn't just a technical consideration; it's rapidly emerging as a critical business imperative.


Here are the key takeaways:

  • 21.6% of Fortune 500 companies now mention generative AI or large language models in their annual reports. That's 108 companies, up from virtually none last year.
  • Overall AI mentions increased by 473.5% compared to 2022 filings.
  • 281 companies are now citing AI (generative and predictive) as a potential risk factor, a 250% increase from 2022.


The risk factors generally fall into four categories:

  1. Competitive risks - failing to keep pace with AI-powered competitors
  2. General harms - reputational damage or unintended consequences
  3. Regulatory risks - new AI laws disrupting business models
  4. Security risks - data leakage or heightened cybersecurity vulnerabilities


While tech companies lead in raw mentions of AI, it's notable that 91.7% of media and entertainment companies cite AI as a risk factor - the highest of any industry. This underscores how AI's impact is being felt across diverse sectors.

Interestingly, only about a third of companies mentioning generative AI highlight any potential benefits. This presents an opportunity for forward-thinking organizations to differentiate themselves by articulating how they're leveraging AI responsibly and advantageously.


For us in security and risk, these findings carry significant implications:

  1. AI governance and risk mitigation strategies are becoming essential. We need to be prepared to lead these initiatives within our organizations.
  2. There's a growing need to bridge the gap between AI innovation and security. We have a crucial role in enabling safe, responsible AI adoption.
  3. Expect increased scrutiny from boards and executives on AI-related risks and opportunities. We should be ready to provide informed guidance and clear risk assessments.


The explosion in AI discussions makes past tech waves like cybersecurity look tiny by comparison. Back in 2011, the same kind of trend research dubbed that year the "Year of the Hack," and cybersecurity mentions in filings spiked by 86.9% the following year. But AI? Mentions have skyrocketed by 473.5%! We're in the midst of a massive transformation.


It's becoming clearer every day that a company's success will hinge on how well it can harness AI's power while managing its risks. Security teams are now the linchpin in this effort. Our job? To ensure AI is adopted safely and ethically, all while defending against new vulnerabilities. It's a pivotal role in driving our organizations forward.


This is our chance to show just how crucial security is in fueling responsible innovation. We’re at the heart of an exciting era, with the opportunity to shape the future of AI in business for years to come!


Provisioning cloud infrastructure the wrong way, but faster

https://blog.trailofbits.com/2024/08/27/provisioning-cloud-infrastructure-the-wrong-way-but-faster/

Pretend you are new to cloud development... and you use AI unquestioningly to do your job. Here's a great technical overview of some of the ways the leading AI tools can steer an unwitting cloud engineer the wrong way, from hard-coding passwords to generating code that runs fine but leaves your cloud environment badly exposed. An important - and humorous - lesson for anyone using AI output without checking it.
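To make the failure mode concrete, here is a minimal Python/boto3 sketch of the kind of anti-pattern the post warns about, next to a safer variant. This is not taken from the article (which walks through infrastructure-as-code examples); the resource names, secret name, and values are hypothetical.

```python
import boto3

# Anti-pattern an AI assistant might happily generate: a hard-coded master
# password and a database that is reachable from the public internet.
rds = boto3.client("rds", region_name="us-east-1")
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="Password123!",  # hard-coded secret checked into code
    PubliclyAccessible=True,            # database exposed to the internet
)

# Safer variant: pull the credential from AWS Secrets Manager (secret name is
# illustrative) and keep the instance private.
secret = boto3.client("secretsmanager", region_name="us-east-1").get_secret_value(
    SecretId="app-db-password"
)
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword=secret["SecretString"],
    PubliclyAccessible=False,
)
```

The point isn't that the AI-generated version won't work - it will deploy just fine - it's that "it runs" and "it's safe to run" are very different bars.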


Amazon Uses GenAI to Upgrade Foundational Software

https://www.dhirubhai.net/posts/andy-jassy-8b1615_one-of-the-most-tedious-but-critical-tasks-activity-7232374162185461760-AdSz?utm_source=share&utm_medium=member_desktop

Of course, no sooner do I find an article like that example of bad cloud engineering than Andy Jassy, CEO of Amazon, posts something like this:

"The average time to upgrade an application to Java 17 plummeted from what’s typically 50 developer-days to just a few hours. We estimate this has saved us the equivalent of 4,500 developer-years of work (yes, that number is crazy but, real)."

"The upgrades have enhanced security and reduced infrastructure costs, providing an estimated $260M in annualized efficiency gains."

In the right hands and with the right security controls, there are massive advantages to be unlocked.


Every AI Talk from BSidesLV, Black Hat, and DEF CON 2024

https://tldrsec.com/p/tldr-every-ai-talk-bsideslv-blackhat-defcon-2024

Happy Labor Day long-weekend reading and watching. This fantastic summary post (a 15-minute read) pulls together all of the great work and talks shared at these recent major security industry events. Click through for details on 60+ recordings, speaker bios, presentations, and references (10+ hours of material). Whether you're into deep dives or just want to skim the one-sentence summaries, there's a ton of value and insight here for everyone!


Serious Security Flaw in Microsoft Copilot Studio (Patched)

https://australiancybersecuritymagazine.com.au/tenable-team-unearths-critical-vulnerability-in-microsoft-copilot-studio/

Tenable discovered a serious security flaw in Microsoft's Copilot Studio, involving server-side request forgery (SSRF). This vulnerability could have exposed sensitive internal data and allowed unauthorized access across different tenants—potentially leading to much bigger security risks.
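Tenable's write-up has the specifics of the flaw; Microsoft's exact fix isn't something I can show here. As a general illustration only, SSRF defenses usually come down to validating where an outbound request can go before making it - for example, refusing anything that resolves to internal or link-local addresses such as the cloud metadata endpoint. A minimal, hypothetical Python sketch:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Networks an application should normally never be tricked into fetching from.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("169.254.0.0/16"),   # link-local, incl. cloud metadata endpoint
    ipaddress.ip_network("127.0.0.0/8"),      # loopback
    ipaddress.ip_network("10.0.0.0/8"),       # private ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to internal or link-local addresses before fetching."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        resolved = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not any(resolved in net for net in BLOCKED_NETWORKS)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False - blocked
print(is_safe_url("https://example.com/report"))                # True - allowed
```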

In today’s AI-driven landscape, tools like Copilot are being pushed to market at breakneck speed to stay ahead of the competition. But this race to innovate comes with its own set of challenges, particularly around security. AI security isn't just about building smarter tools; it’s about ensuring those tools are secure from the ground up. This incident underscores the critical need for vigilance and a mature security program that can keep pace with rapid development cycles.

Microsoft has already patched the issue as of July 31, 2024, so no action is required from users. But let this be a reminder—especially in the rush to release cutting-edge AI products—security can’t be an afterthought.

For a deeper dive into the technical details and what this means for cloud security, check out Tenable's full report.


AI Safety Bill Passes in California Legislature

https://www.latimes.com/entertainment-arts/business/story/2024-08-29/newsom-scott-wiener-sb1047-ai-bill

A bill requiring AI developers to implement safety measures is on the verge of becoming law in California. SB 1047, which mandates that developers submit safety plans to the attorney general to prevent AI misuse, has passed both legislative houses and is now awaiting Governor Newsom's decision. The bill, introduced by Sen. Scott Wiener, has sparked intense debate, with supporters arguing it will protect the public and opponents fearing it could stifle innovation. While high-profile figures like Tesla's Elon Musk back the bill, others, including Meta and OpenAI, are pushing back, citing concerns about its impact on the industry. Compliance with this law may soon become one more mandate for firms to track.


Google’s Gemini 1.5 AI Models: A New Era of High-Performance AI

https://x.com/OfficialLoganK/status/1828480081574142227

Google has just unleashed a trio of new AI models under their Gemini 1.5 lineup, and they’re set to redefine the AI landscape. The lineup includes Gemini 1.5 Flash-8B, a lean, mean, high-volume task machine; Gemini 1.5 Pro, which now excels even more at complex prompts and coding; and Gemini 1.5 Flash, which boasts significant upgrades across the board. These models are now open for testing through the Gemini API and Google AI Studio.

The LMSYS Chatbot Arena has already shown the impact: 20k+ community votes propelled Gemini-1.5 Flash (0827) from #23 to #6 overall. Even the smaller Flash-8B model is taking names, outperforming bigger competitors like gemma-2-9b and matching llama-3-70b. Highly capable models like these from Google open the door to a whole range of new use cases.
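If you want to kick the tires, the models are a few lines of Python away through the google-generativeai SDK. A minimal sketch - the API key is a placeholder, and the model ID below is the stable name; check Google AI Studio for the exact experimental 0827 identifiers announced this week:

```python
import google.generativeai as genai

# Placeholder key; in practice load it from an environment variable or secret store.
genai.configure(api_key="YOUR_API_KEY")

# Model ID is illustrative; swap in the experimental 1.5 variant you want to test.
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Summarize the top three AI-related risk factors companies list in SEC filings."
)
print(response.text)
```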


OpenAI's "Strawberry" - The Next Big Model?

https://www.lesswrong.com/posts/8oX4FTRa8MJodArhj/the-information-openai-shows-strawberry-to-feds-races-to

OpenAI is making headlines with its new AI model, codenamed "Strawberry" (formerly Q*), which showcases remarkable reasoning abilities. Strawberry can solve complex math problems and programming tasks it's never encountered before and even excels in language challenges like the New York Times Connections puzzles. With fewer errors and hallucinations than existing models, it's poised to be a game-changer. OpenAI plans to launch Strawberry as part of a chatbot, possibly within ChatGPT, by fall 2024, while also using it to generate high-quality data for their next major model, "Orion." The model has already caught the attention of U.S. national security officials, who got an early look this summer. Will this set a new standard in AI capabilities, or prove just an incremental step beyond the current GPT-4o models? Only time will tell.




Have a Great Labor Day Weekend!


I’m a sucker for a great data visualization.


Subscribe to Thinks & Links direct to your inbox

You can also chat with the newsletter archive at https://chat.openai.com/g/g-IjiJNup7g-thinks-and-links-digest

