Generative AI vs. Predictive AI: A Cybersecurity Perspective
In the context of cybersecurity, AI promises considerable benefits; however, there is still a lot of confusion surrounding the topic, particularly around the terms generative AI and predictive AI. Given the high failure rate of AI projects (roughly 80%), let's look at how the two terms differ as they pertain to cybersecurity and how organizations can best find value in AI implementation.
1. Generative vs. Predictive AI: Different Goals, Different Applications
Predictive and generative AI are apples and oranges. The word "generative" does not refer to anything specific about the underlying technology; it describes how the model is used, namely to generate new content. In cybersecurity, GenAI can be used to create complex passwords or encryption keys and to draft targeted phishing emails for awareness training. The word "predictive," in contrast, refers to the ability to forecast future events or behaviors from historical data. Predictive AI can analyze historical attack vectors and current trends to infer likely future attack methods.
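As a minimal sketch of the generative use case above, the snippet below asks a hosted large language model to draft a simulated phishing email for an awareness exercise. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name is illustrative, and any chat-completion-style API would work similarly.

```python
# Sketch: using a hosted LLM to draft a simulated phishing email for
# security-awareness training. Assumes the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a short, clearly simulated phishing email for an internal "
    "security-awareness exercise. Mark it as a training simulation and "
    "include two telltale signs employees should learn to spot."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

print(response.choices[0].message.content)
```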
2. How Generative and Predictive AI Train
Both predictive AI and generative AI are built on machine learning (ML), a technology that learns by identifying patterns in data. Generative AI requires massive datasets for training. During training it identifies the underlying patterns, structures and relationships in that data; once training is complete, it generates new data based on that learned understanding. Predictive AI, on the other hand, needs historical data (past cyber incidents, vulnerabilities, user behavior), and it typically needs both positive and negative examples as part of its learning process. This approach is referred to as supervised machine learning because humans supervise the learning process by labeling the answers.
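To make the supervised-learning point concrete, here is a minimal sketch, assuming scikit-learn and a toy set of labeled login events (the feature names and values are hypothetical): each example carries a human-assigned label, and the model learns the pattern from those labels.

```python
# Sketch: supervised learning on labeled (positive/negative) examples.
# Features and labels are hypothetical; assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Each row: [failed_logins_last_hour, login_from_new_country, mb_transferred]
X_train = [
    [0, 0, 0],     # benign
    [1, 0, 0],     # benign
    [8, 1, 120],   # malicious
    [12, 1, 300],  # malicious
    [2, 0, 1],     # benign
    [9, 1, 80],    # malicious
]
y_train = [0, 0, 1, 1, 0, 1]  # human-supplied labels: 1 = malicious, 0 = benign

model = LogisticRegression()
model.fit(X_train, y_train)

# The fitted model can now classify unseen events.
print(model.predict([[10, 1, 150]]))  # likely [1] -> flagged as malicious
```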
3. The Power of Output
Generative AI, as the name suggests, is used to "generate" new content. The output does not have to match the input in form or format: a generative model trained on certain types of malware samples could generate new strains of malware that have never been seen before. Predictive AI "predicts" the probability of an event occurring, so it is commonly used for prediction and forecasting tasks such as the likelihood of a certain type of attack occurring, the probability that an insider is dangerous, and the possibility that a system can be exploited or compromised.
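As a simplified illustration of "predicting the probability of an event," the sketch below estimates the chance of at least one incident next month from historical monthly counts, assuming, purely for illustration, that incidents follow a Poisson process; real predictive models would use far richer features.

```python
# Sketch: estimating the probability of at least one incident next month
# from historical monthly counts. The Poisson assumption and the numbers
# are illustrative, not a production model.
import math

monthly_incidents = [2, 0, 1, 3, 1, 2, 0, 1, 2, 1, 0, 2]  # hypothetical history
rate = sum(monthly_incidents) / len(monthly_incidents)     # average incidents/month

# P(at least one incident) = 1 - P(zero incidents) under a Poisson(rate) model
p_at_least_one = 1 - math.exp(-rate)

print(f"Estimated rate: {rate:.2f} incidents/month")
print(f"Probability of at least one incident next month: {p_at_least_one:.1%}")
```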
4. Generative AI for Training, Predictive AI for Action
There are multiple use cases for GenAI in cybersecurity. For example, content generated by GenAI models can be used to train predictive AI algorithms and to augment existing cybersecurity datasets. GenAI can also generate realistic data and simulations for training and testing. Predictive AI, in turn, is well suited to detecting anomalies, automating repetitive security tasks, delivering autonomous responses to threats in real time, simulating adversary movements, and predicting maintenance needs for security systems.
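The sketch below illustrates the augmentation idea in the simplest possible terms: synthetic samples (here produced by perturbing real ones with noise, as a stand-in for GenAI output) are appended to a small training set before fitting a predictive model. It assumes NumPy and scikit-learn; all features and numbers are hypothetical.

```python
# Sketch: augmenting a small labeled dataset with synthetic samples before
# training a predictive model. Noise-based oversampling stands in for
# GenAI-produced data; assumes numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical real samples: [alerts_per_day, privileged_logins, data_transfer_gb]
X_real = np.array([[5, 1, 0.2], [40, 6, 3.5], [3, 0, 0.1], [55, 9, 5.0]])
y_real = np.array([0, 1, 0, 1])  # 1 = incident, 0 = no incident

# Generate synthetic samples by perturbing real ones (stand-in for GenAI output).
X_synth = X_real + rng.normal(scale=0.5, size=X_real.shape)
y_synth = y_real.copy()

# Train the predictive model on the augmented dataset.
X_aug = np.vstack([X_real, X_synth])
y_aug = np.concatenate([y_real, y_synth])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_aug, y_aug)
print(model.predict_proba([[45, 7, 4.0]]))  # probability of incident for a new host
```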
5. Challenges and Advantages of Interpretability
The problem with GenAI is that its output may not always meet human expectations or standards, and there may be concerns about the accuracy and robustness of that output. GenAI does not provide sources or explanations; it simply produces information and leaves the consumer to judge how trustworthy it is. This can become a major problem if GenAI output is not put through a rigorous verification process. Predictive AI, on the other hand, is far more interpretable and trustworthy because it is built on statistical techniques that are easier to interpret, understand and analyze.
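Here is a quick sketch of what "interpretable" means in practice: with a simple statistical model such as logistic regression, each feature gets a coefficient that can be read directly as pushing the prediction toward or away from "malicious." It assumes scikit-learn; the features and data are hypothetical.

```python
# Sketch: inspecting a simple predictive model's coefficients to explain its
# decisions. Data and feature names are hypothetical; assumes scikit-learn.
from sklearn.linear_model import LogisticRegression

features = ["failed_logins", "off_hours_access", "new_device"]
X = [[0, 0, 0], [1, 0, 0], [9, 1, 1], [12, 1, 0], [2, 0, 1], [8, 1, 1]]
y = [0, 0, 1, 1, 0, 1]  # 1 = malicious, 0 = benign

model = LogisticRegression().fit(X, y)

# Positive coefficients push the score toward "malicious"; magnitudes show
# how strongly each feature influences the prediction.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```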
Recommendations To Implement AI More Effectively
Below are practical recommendations to get the most value from a generative or predictive AI project.
1. Ensure the Highest-Quality Data: The output of a generative or predictive model will only be as good as the integrity of the data on which it is trained. Ensure your data is free of errors and noise; in cybersecurity especially, this matters a great deal (see the data-quality sketch after this list).
2. Don't Try to Boil the Ocean: When implementing an AI project, define a reasonable scope and look for quick wins on your first attempt. Seek the lowest-hanging fruit with the highest potential value.
3. Avoid Overconfidence: AI is not clairvoyant, supernatural, or a magic crystal ball. No matter how good the data is, there will still be limits to predicting cyber threats. Even though prediction may never be perfect, it will always beat guessing and will help reduce risk.
4. Be Realistic: Generative AI may seem human-like, but it still lacks human-level capabilities. It is unrealistic to assume it can compete with the knowledge of an experienced cybersecurity specialist, so define expectations and set objectives realistically and concretely.
5. Generative for Small Scale, Predictive for Large Scale: For delivering better user interactions and experiences, training people, configuring security systems, or reporting on security performance in natural language, GenAI may be the best choice. For security questions that require predictive and investigative intelligence, such as who did what, which systems to isolate or block, what is high priority, and what is vulnerable or suspicious, the answers to these large-scale challenges will almost always come from predictive AI.
6. Hire Expert Consultants: A skills shortage in both AI and cybersecurity can derail plans and hamper efforts. Partnering with AI consultants is recommended for projects that require deep expertise in this area.
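Referring back to recommendation 1, the sketch below shows a minimal data-quality pass over a hypothetical security-event log before it is fed to any model: dropping exact duplicates, flagging missing values, and discarding malformed timestamps. It assumes pandas; column names and values are illustrative.

```python
# Sketch: basic data-quality checks on a security-event log before training.
# Column names and values are hypothetical; assumes pandas is installed.
import pandas as pd

events = pd.DataFrame({
    "timestamp": ["2024-05-01 10:00", "2024-05-01 10:00", "not-a-date", None],
    "source_ip": ["10.0.0.5", "10.0.0.5", "10.0.0.9", "10.0.0.7"],
    "event_type": ["login_failure", "login_failure", "port_scan", None],
})

# 1. Drop exact duplicate records.
events = events.drop_duplicates()

# 2. Report missing values so they can be fixed upstream or excluded.
print("Missing values per column:\n", events.isna().sum())

# 3. Discard rows whose timestamps cannot be parsed, or that lack an event type.
events["timestamp"] = pd.to_datetime(events["timestamp"], errors="coerce")
events = events.dropna(subset=["timestamp", "event_type"])

print(events)
```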
To summarize, there are many reasons why AI is a compelling tool for cybersecurity. However, to realize its full potential, businesses should understand the nuanced differences between AI technologies, define AI objectives and scope clearly, strive for quick wins, and seek external help when needed. "AI" on its own is a vague and context-dependent term; organizations must define the specific cybersecurity outcomes they want to achieve from it.