Big Data & Analytics - Thinks and Links | July 15, 2023
Randy Lariar
Data, Analytics, & AI | Leader in Optiv's Secure AI and SOC Technology Solutions | I help teams and businesses embrace innovation and manage risk
Happy Saturday!
Last weekend OpenAI granted ChatGPT Plus subscribers general access to the "Code Interpreter" version of ChatGPT. The new model is more than just a content generation engine: it uses the LLM to reason behind the scenes and acts as an agent that can solve many coding and data analysis problems.
Here's an example: I've created a PDF that contains all my prior newsletters from this year. (I reviewed these to make sure they contained only my views and links to public sites; don't send private or proprietary information into this model.[1]) With a short prompt, I can make magic happen:
Create a word cloud from this pdf:
But the full chat process is instructive, and I’ve included screenshots below so you can see for yourself, step by step, what the chatbot does.
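To make that concrete, here is a minimal sketch of the kind of Python the agent writes and executes behind the scenes for this prompt. The file name and the pypdf/wordcloud libraries are my assumptions for illustration, not a transcript of the actual session:

# A sketch of the kind of Python the agent generates for this prompt.
# Assumes the pypdf, wordcloud, and matplotlib packages; the file name is hypothetical.
from pypdf import PdfReader
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Step 1: extract the text from every page of the uploaded PDF
reader = PdfReader("newsletters_2023.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages)

# Step 2: build the word cloud from the extracted text
cloud = WordCloud(width=1200, height=600, background_color="white").generate(text)

# Step 3: render and save the image that gets returned in the chat
plt.figure(figsize=(12, 6))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("wordcloud.png", bbox_inches="tight")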
This showcases a new capability for LLMs: reasoning.
Reasoning is an emergent super-use case for Large Language Models. AutoGPT and BabyAGI are two of the many open-source projects innovating on AI agent development. These make for impressive demonstrations but have often run into snags. So far, ChatGPT Code Interpreter works remarkably well.
This is a preview of what will very likely soon come to other AI providers and be built as proprietary models within companies. Agents that can analyze data and write code will provide even greater utility.
Here's another example: I've loaded up a CSV file that represents an extract from a ticketing system (not real company data[2]); you can see the agent at work in the screenshots below:
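For flavor, the agent's code for this kind of request looks roughly like the pandas sketch below. The file name and column names (opened_date, priority) are invented for illustration; in the real session the agent infers them from the uploaded file:

# Sketch of a typical Code Interpreter analysis pass over a ticket extract.
# File name and column names are invented for illustration.
import pandas as pd

df = pd.read_csv("tickets.csv", parse_dates=["opened_date"])

# The agent's usual first step: profile the data (shape, types, missing values)
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Example analyses: monthly ticket volume and counts by priority
monthly_volume = df.set_index("opened_date").resample("M").size()
by_priority = df["priority"].value_counts()
print(monthly_volume)
print(by_priority)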
This is the entire data analyst lifecycle, accelerated and thoroughly documented. Is it perfect? No. Are human analysts perfect? Also no. And humans often provide far less documentation of the logic they follow as they build an analysis. AI models like this will soon come to tools like Microsoft Excel and do this kind of analysis natively. Data analysis in 2025 will likely include at least some speaking to the computer... like in Star Trek.
Don't run out and fire your data analytics team just yet; a few caveats remain.
But wait. One more thing!
The ticket analysis I shared earlier was on dummy data. But there's one more step I can take to make this safe for enterprise use: asking the agent to hand over its analysis as a standalone Python script I can run locally.
When I run that script on my own computer, no data leaves the firm.[3] I get the benefits of this incredible data analysis tool to speed up my own work.
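The exported script is plain Python you can read, audit, and run offline. A condensed sketch of what such a script looks like (the file path and column name are again illustrative):

# Condensed sketch of the exported, locally-run version of the analysis.
# It is ordinary pandas code with no network calls, so the real data
# never has to leave the local machine. Path and column name illustrative.
import pandas as pd

def main(path="tickets.csv"):
    df = pd.read_csv(path, parse_dates=["opened_date"])
    monthly = (
        df.groupby(df["opened_date"].dt.to_period("M"))
          .size()
          .rename("tickets_opened")
    )
    monthly.to_csv("monthly_ticket_summary.csv")
    print(monthly)

if __name__ == "__main__":
    main()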
This model is incredible, and after a week of playing with it I’m surprised every day by the creative use cases we are finding. This is just the beginning, and future versions from OpenAI and others will likely continue to push the boundaries of what we understand an AI model can do. Companies will build similar models (with greater security and data controls) that will forever change how they think about data analysis. It's worth the $20/month for ChatGPT Plus to experience the future.
Footnotes:
[1] Don't upload sensitive data to ChatGPT or any other AI tool you don't control.
[2] Seriously, don't use real company data. You'll be tempted. It is very, very easy.
[3] Also, you probably shouldn't run code you don't understand that came from an AI website... although the Python will be well commented and easy to modify.
New Optiv AI Service Brief: Security Tools for AI
Optiv knows security. Optiv knows partner tools. We are tracking closely how organizations monitor public AI and build secure private AI. We can help you find the right approach for your organization’s risk profile: one that captures the productivity gains of AI while applying the same data security standards you hold traditional applications to.
Articles Describing How Generative AI Will Enhance Security Operations
A few great examples of how the SOC can use generative AI and large language models.
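As a flavor of what these articles describe, here is a toy sketch of alert triage using the openai Python package (the pre-1.0 ChatCompletion API current as of mid-2023). The alert text and prompts are invented, and a real SOC integration would need the data controls discussed above:

# Toy sketch: asking an LLM to triage a security alert.
# Uses the openai 0.27.x-era API; assumes OPENAI_API_KEY is set in the environment.
import openai

# Invented example alert; a real integration would pull this from the SIEM
alert = "Multiple failed logins for svc-backup from 203.0.113.7, then a success at 02:14 UTC."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a tier-1 SOC analyst. Summarize the alert and suggest next steps."},
        {"role": "user", "content": alert},
    ],
)
print(response["choices"][0]["message"]["content"])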
Counter-Argument (sort of)
Bugcrowd's annual "Inside the Mind of a Hacker" report for 2023 found that 72% of hackers believe AI will not replace the creativity of humans in security research and vulnerability management. I agree: it won’t replace creativity, but it will certainly change which tasks require creativity and how quickly creative ideas can be deployed.
The report also found that generative AI technologies have increased the value of ethical hacking and security research, and that most hackers are Gen Z or Millennials. To stay safe from AI-enabled threats, organizations need to understand what AI can and can't do, assess their security readiness, include AI in their risk plans, and monitor how AI is being used.
The risks of AI are real but manageable
Bill Gates's perspective on how to handle the risks surrounding AI without becoming overwhelmed. To ensure that AI is used for good, we need to understand the risks and take steps to mitigate them. This includes developing tools to detect and prevent deepfakes, creating global regulations to prevent an AI arms race, and making sure that workers are not left behind as AI changes the workplace. We also need to be aware of the biases that AI models can inherit and take steps to ensure that AI is used responsibly. With the right approach, AI can be a powerful force for good.
Keeping an Eye on AI Regulations in Europe – Google Bard Finally Launches There
Google's AI chatbot Bard has launched in the European Union after changes to boost transparency and user controls. The Irish Data Protection Commission will continue to engage with Google on Bard post-launch, and Google has agreed to carry out a review and report back to the watchdog in three months. Meanwhile, the European Data Protection Board has a task force looking into AI chatbots' compliance with the GDPR.
Have a Great Weekend!