Will Artificial Intelligence Replace Humans in the Security Operations Centre (SOC)?
Security Operations Centres (SOCs) play a crucial role in organisational cybersecurity, providing around-the-clock monitoring, detection, and threat response (Reeves & Ashenden, 2023). Traditionally, human analysts managed these tasks, relying on judgement, experience, and expertise to safeguard against cyber threats. Advances in AI have changed how SOCs operate, sparking concern that AI could eventually replace human roles (Ganesh et al., 2024).
This discussion paper argues that AI will not replace human analysts but will instead amplify their capabilities, much as autopilot systems support rather than replace pilots; human expertise, judgement, and adaptability remain crucial for addressing novel and complex challenges. While AI excels at automating repetitive tasks and handling large datasets, it lacks the nuanced judgement and strategic insight of human analysts (Hughes et al., 2024). AI’s most effective role is to augment human abilities, thereby freeing analysts to focus on complex, high-value tasks requiring contextual understanding (Hughes et al., 2024).
AI first appeared in cybersecurity with rule-based expert systems in the 1980s, but these systems were limited in their ability to adapt to new threats (Monostori, 2014). Since then, advances in machine learning and natural language processing have made AI indispensable in modern SOCs, enabling rapid analysis of vast datasets (Hughes et al., 2024). Although AI is now central to SOC functions, human oversight remains essential due to AI’s limitations with context (Zhang, 2024).
In countering the notion that AI will replace human roles, this paper also examines the evolving partnership between AI and human analysts within SOCs, exploring AI’s contributions to automation, human oversight, ethical considerations, and emerging technologies such as quantum computing and blockchain.
By highlighting the unique strengths that both AI and human expertise bring to cybersecurity, the paper argues that the future of SOCs relies on a balanced integration of AI’s analytical power with human strategic insight to effectively counter increasingly sophisticated cyber threats. The conclusion also offers forward-looking considerations for the future of this critical partnership.
AI and Human Roles in SOC Operations: Automation, Efficiency, and Collaboration
AI in SOCs enhances alignment with the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) functions—Identify, Protect, Detect, Respond, and Recover—by automating essential processes, enabling SOCs to proactively manage risks, respond swiftly to threats, and strengthen resilience (NIST, 2024). As a force multiplier, AI assists in identifying risks, enhancing protective measures, detecting threats, accelerating response, and supporting recovery, empowering SOCs with advanced data analysis and predictive capabilities to foster a proactive cybersecurity posture and strengthen organisational resilience (Kaur et al., 2023).
Many researchers and experts advocate for integrating AI into SOCs, recognising its potential to enhance operational efficiency and threat response capabilities. However, there is limited support for the notion that AI could entirely replace human analysts in SOCs. According to Gartner, the use of AI in threat detection and incident response (IR) will rise from 5% to 70% by 2028, primarily to augment, not replace, staff (Gartner, 2024). Most sources emphasise the importance of a collaborative approach, where AI augments human expertise rather than serving as a full replacement.
AI significantly enhances SOC efficiency by automating routine tasks, such as log analysis, threat detection, and incident triage. Through Machine Learning (ML) and Natural Language Processing (NLP), AI processes vast amounts of data in real-time, identifying patterns and anomalies that might otherwise go unnoticed. Tools such as Intrusion Detection Systems (IDS), Advanced Persistent Threat (APT) attack detection, Security Information and Event Management (SIEM) platforms, and User and Entity Behaviour Analytics (UEBA) enable SOCs to detect potential threats faster and with greater accuracy (Salem et al., 2024; Chan & Zhang, 2024).
Salem et al. (2024) further argue that AI alleviates the workload on human analysts by streamlining data-intensive tasks and automating the tuning of detection parameters, enhancing both the speed and reliability of threat detection. For example, machine learning-enhanced SIEM tools aggregate and correlate security event data from multiple sources, enabling SOC analysts to identify patterns indicative of attacks (Salem et al., 2024). Similarly, AI-driven UEBA solutions monitor user behaviours, using baseline patterns to detect anomalies that may indicate insider threats or account compromises (Chan & Zhang, 2024). AI can also play a key role in automating evidence collection, detecting anomalies, analysing malware, and reconstructing incident timelines that assist with digital forensics activities in the SOC, providing faster, more accurate investigations (Parkinson & Khan, 2024). Table 1 from Chan & Zhang (2024) summarises core AI techniques applied across key SOC focus areas:
Table 1: Use of AI to support core SOC processes (Chan & Zhang, 2024).
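To make the UEBA idea concrete, the sketch below trains a simple anomaly detector on baseline user activity and flags a day of unusual behaviour for analyst review. This is a minimal illustration using scikit-learn’s IsolationForest with hypothetical feature names and thresholds; it is not drawn from Chan & Zhang (2024) or any specific commercial tool.

```python
# Minimal UEBA-style anomaly detection sketch (illustrative only).
# Feature names and values are hypothetical, not from the cited sources.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour per user-day: [logins, bytes_uploaded_mb, distinct_hosts_accessed]
baseline = rng.normal(loc=[8, 120, 3], scale=[2, 30, 1], size=(500, 3))

# Fit a model of "normal" activity from historical telemetry.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Today's observations; the last row simulates a possible account compromise.
today = np.array([
    [9, 110, 3],
    [7, 150, 4],
    [40, 5000, 25],   # unusual login volume, upload size, and host spread
])

scores = model.decision_function(today)   # lower score = more anomalous
flags = model.predict(today)              # -1 = anomaly, 1 = normal

for row, score, flag in zip(today, scores, flags):
    status = "ESCALATE TO ANALYST" if flag == -1 else "normal"
    print(f"features={row.tolist()} score={score:.3f} -> {status}")
```

Even in this toy setting, the model only surfaces a score; deciding whether the flagged activity is an insider threat, a compromised account, or a benign change in working patterns remains an analyst’s call.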
Although AI automates many aspects of these tasks, it still requires human analysts to interpret outputs within a broader strategic context, particularly for incidents that deviate from known patterns (Chan & Zhang, 2024). Human-AI collaboration in SOCs is most effective when responding to complex and unpredictable cyber threats (Tilbury & Flowerday, 2024). AI excels in identifying and categorising data patterns, but human analysts provide the intuition and judgement needed to determine whether an anomaly represents a legitimate threat (Tilbury & Flowerday, 2024).
In scenarios like zero-day exploits, where new vulnerabilities are exploited before any defence or historical data is available, solely relying on AI or historical data might be inadequate (Ahmad et al., 2023). In such cases, human analysts can apply situational awareness and strategic thinking to evaluate the nature of the threat and formulate an appropriate response (Ben-Asher & Gonzalez, 2015). This synergy between AI and human judgement creates a more adaptive SOC capable of managing a dynamic threat landscape.
The Evolving Role of Human Analysts in AI-Enhanced SOCs
With AI taking on routine tasks, the roles of human analysts are evolving to focus on complex responsibilities that require strategic decision-making and a deep understanding of cybersecurity. This evolution has led to new roles within SOCs, blending traditional cybersecurity skills with expertise in AI and data science (Kaur et al., 2023). Two emerging roles, AI cybersecurity specialists and cybersecurity data scientists, illustrate the changing nature of SOC work in the age of AI (DeCarlo, 2024).
AI cybersecurity specialists are tasked with overseeing and enhancing AI-powered SOC tools, ensuring that these systems function smoothly and optimally. Their role encompasses training AI models, validating system outputs, and embedding AI capabilities into established SOC procedures (Salis, 2024). This position requires a solid grounding in cybersecurity principles along with expertise in AI technologies, including proficiency in programming languages such as Python and AI frameworks such as TensorFlow. AI cybersecurity specialists bridge the gap between technical AI implementation and strategic cybersecurity goals, ensuring that AI tools align with the organisation’s broader security objectives (Kinyua & Awuah, 2021).
Cybersecurity data scientists, on the other hand, focus on analysing large datasets to enhance threat detection and develop predictive models. This role combines expertise in cybersecurity with data analysis skills, enabling data scientists to identify patterns, detect trends, and improve AI models used in SOCs (Hero et al., 2023). Cybersecurity data scientists may also leverage behavioural psychology principles to interpret AI-driven insights into user behaviour, which help in understanding potential insider threats or unusual activities within the network (Lahcen et al., 2020). The interdisciplinary nature of these roles highlights the ongoing shift in SOC skill requirements, as professionals must increasingly combine technical, analytical, and strategic skills to work effectively with AI systems.
The rapid pace of technological change requires SOC professionals to engage in continuous learning. Training in AI technologies, data science, and ethical considerations will be essential for future SOC analysts (Monostori, 2014). Certifications and training programs, such as the SANS Institute’s SEC487: Open-Source Intelligence (OSINT) Gathering and Analysis, ISACA’s Certified Information Security Manager (CISM), and ISC2’s Systems Security Certified Practitioner (SSCP), can equip analysts with the skills needed to manage AI systems and interpret AI-generated insights. By investing in continuous learning, SOC professionals can remain effective in their evolving roles, ensuring that they can maximise the benefits of AI while addressing its limitations.
Limitations of AI and the Imperative of Human Oversight
Despite its advantages, AI has limitations that underscore the need for human oversight in SOCs. One major limitation is AI’s lack of situational awareness. AI systems can process data and identify anomalies, but they often fail to understand the broader context of a security incident (Chan & Zhang, 2024). This limitation can lead to false positives or missed threats, particularly when dealing with novel attack methods or complex threats that lack historical data. Human analysts are better equipped to assess the situational context and make strategic decisions based on a holistic understanding of the incident (Muller, 2020).
Bias is another limitation of AI. AI systems are trained on historical data, which may contain biases that impact their performance. In SOCs, biases in AI models could lead to skewed threat assessments or inaccurate prioritisation of incidents (Bonnie, 2023). For example, if an AI system is trained on data that disproportionately reflects certain types of attacks, it may become overly sensitive to these threats while overlooking others (Russell & Norvig, 2020). Human oversight is essential for detecting and mitigating these biases, ensuring that AI systems provide fair and accurate assessments (O’Neil, 2016).
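A minimal sketch of the kind of bias check an analyst might run is shown below; the attack categories, sample counts, and detection outcomes are hypothetical, chosen only to illustrate how under-represented categories can surface as lower recall.

```python
# Illustrative bias check (hypothetical categories and counts, not from the cited sources):
# compare how well the detector performs on attack types that are under-represented
# in its training data versus those that dominate it.
from collections import Counter

# Labelled evaluation alerts: (attack_category, model_detected)
evaluation = [
    ("phishing", True), ("phishing", True), ("phishing", True), ("phishing", False),
    ("ransomware", True), ("ransomware", True),
    ("insider_threat", False), ("insider_threat", False), ("insider_threat", True),
]

totals = Counter(category for category, _ in evaluation)
detected = Counter(category for category, hit in evaluation if hit)

print(f"{'category':<16}{'samples':>8}{'recall':>8}")
for category, total in totals.items():
    recall = detected[category] / total
    print(f"{category:<16}{total:>8}{recall:>8.2f}")

# A markedly lower recall on one category (here, insider_threat) is a cue for a
# human analyst to review training data coverage and re-balance or re-label it.
```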
AI-driven SOCs are also vulnerable to adversarial attacks, where malicious actors manipulate AI inputs to evade detection (Zhang et al., 2022). For instance, attackers may introduce patterns that trick AI systems into misclassifying threats as benign. Such adversarial tactics exploit AI’s reliance on data patterns, highlighting the need for human analysts who can recognise and respond to manipulative tactics (Goodfellow et al., 2015). Human oversight plays a critical role in ensuring that AI systems remain effective despite these challenges. Analysts provide a strategic layer of decision-making, validate AI-generated insights, and ensure that SOC operations align with the organisation’s security objectives (Fjeld et al., 2020).
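The toy example below illustrates the evasion idea in its simplest form: a linear detector trained on two hypothetical traffic features, and an attacker who knows its weights shifting a malicious sample just across the decision boundary. It is a sketch of the general technique, not a reconstruction of the attacks analysed by Zhang et al. (2022) or Goodfellow et al. (2015).

```python
# Toy evasion sketch (assumed features and setup): small, targeted changes to
# input features flip a linear classifier's verdict from malicious to benign.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [payload_entropy, requests_per_minute]
benign = rng.normal([3.0, 10.0], [0.5, 3.0], size=(200, 2))
malicious = rng.normal([6.5, 60.0], [0.5, 10.0], size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = np.array([[6.3, 55.0]])                     # genuinely malicious traffic
w, b = clf.coef_[0], clf.intercept_[0]
print("original verdict:", clf.predict(sample)[0])   # 1 = malicious

# Minimal shift along -w that crosses the decision boundary (w.x + b = 0).
distance = (w @ sample[0] + b) / np.linalg.norm(w)
evasive = sample - (distance + 1e-3) * w / np.linalg.norm(w)
print("evasive verdict: ", clf.predict(evasive)[0])  # 0 = benign

# Real attackers must also keep the modified traffic functional and inconspicuous,
# which is one reason human review of borderline detections still matters.
```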
Furthermore, AI models are subject to model drift, where performance degrades over time due to changes in the data environment (Leffer, 2024). To counter this, SOCs must continually retrain and update AI models to maintain accuracy and relevance. Human analysts are responsible for overseeing this process, ensuring that AI systems remain effective and up to date. This requirement for ongoing model maintenance underscores that AI is not a ‘set-and-forget’ technology but one that requires continuous human intervention (Muller, 2020).
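One common way to operationalise drift monitoring is to compare the distribution of a model input against its training-time baseline, for example with a population stability index (PSI). The sketch below uses hypothetical data and the conventional 0.10/0.25 PSI rules of thumb; the specific feature and thresholds are illustrative, not taken from Leffer (2024).

```python
# Minimal drift-monitoring sketch: compare this week's distribution of a model
# input against the training baseline and flag when retraining may be needed.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two samples of the same feature; higher = more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(1)
training_feature = rng.normal(100, 15, 5000)   # e.g. daily log volume at training time
this_week = rng.normal(130, 25, 1000)          # the environment has since shifted

psi = population_stability_index(training_feature, this_week)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift - schedule retraining and analyst review")
elif psi > 0.10:
    print(f"PSI={psi:.2f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.2f}: distribution stable")
```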
Case Studies and Real-World Evidence of Human-AI Synergy
Human-AI collaboration is becoming essential in SOCs, with AI handling data-heavy tasks and human analysts applying strategic judgement to AI outputs. Greis and Sorel (2024) found that incorporating AI into threat detection delivered time savings of 20-25%, rising to as much as 80% when AI is used to autofill security requirements and reports. Leading organisations like Check Point Software Technologies, the University of New Brunswick (UNB), and Rakuten illustrate the effectiveness of this synergy.
Check Point Software Technologies used AI-driven SOC tools for tasks like alert triage and threat detection, where AI handled data processing and human analysts provided context and made strategic decisions. This synergy improved response times and strengthened security (Check Point, 2023).
At UNB, IBM’s Watson for Cybersecurity accelerated threat detection by processing large volumes of data, which human analysts then assessed, resulting in faster incident responses and effective, informed decisions aligned with security goals (Sengupta, 2024). Rakuten integrated Darktrace’s AI for real-time anomaly detection, with human analysts verifying findings and determining responses, creating a more adaptive SOC with rapid response capabilities (Sengupta, 2024). Several Australian organisations, including ThreatDefence, Australian Super, Powerlink Queensland, TAL, and Data#3, also leverage AI to enhance their SOCs and improve their cybersecurity capabilities.
These real-world examples demonstrate the complementary nature of human and AI roles in SOCs, with AI managing large volumes of data and providing real-time monitoring while human analysts remain essential for providing context, validating findings, and guiding strategic responses. Such a collaborative approach strengthens organisational security, illustrating that integrating human insight with AI technology is the key to a more adaptive, responsive SOC.
Emerging Technologies: Quantum Computing and Blockchain in SOCs
Emerging technologies such as quantum computing and blockchain are set to revolutionise SOC operations. With its immense processing power, quantum computing could significantly enhance AI's ability to analyse larger datasets and improve threat detection capabilities through complex pattern recognition and Quantum Adversarial ML (Johnson et al., 2024). However, quantum computing also presents new risks, as it could render current encryption methods obsolete, creating new vulnerabilities for SOCs (Baseri et al., 2024). Human analysts will play a critical role in managing these risks, ensuring that SOCs remain resilient in the face of quantum threats.
Blockchain technology offers another potential solution to the challenges associated with AI-driven SOCs. It provides a decentralised, tamper-evident record of security events through mechanisms such as blockchain-backed SIEM and smart contracts, enhancing the transparency and accountability of AI systems (Bhumichai et al., 2024). According to Ramos and Ellul (2024), blockchain could also help address some of the ethical and security concerns surrounding AI deployment in SOCs, supporting algorithmic transparency, data provenance, shift-left security, and cross-chain monitoring, and thereby helping to ensure that AI systems operate fairly and without bias.
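The tamper-evidence property can be illustrated with a minimal hash chain, the core building block of such records. The sketch below is deliberately simplified; it omits the consensus and distribution mechanisms that actual blockchain-backed SIEM deployments, such as those discussed by Bhumichai et al. (2024), would add.

```python
# Minimal sketch of a blockchain-style, tamper-evident record of SOC events.
# This is a plain hash chain only - no consensus, peers, or smart contracts.
import hashlib
import json
import time

def append_event(chain, event):
    """Append an event whose hash covers the previous entry, chaining the log."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "event": event, "prev": previous_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any edited entry breaks the chain from that point on."""
    for i, record in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev"] != expected_prev or record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

log = []
append_event(log, {"type": "alert", "detail": "UEBA anomaly on account jdoe"})
append_event(log, {"type": "response", "detail": "account jdoe disabled by analyst"})
print("intact:", verify(log))                        # True

log[0]["event"]["detail"] = "no anomaly recorded"    # attempted tampering
print("after tampering:", verify(log))               # False
```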
Ethical and Security Considerations of AI in SOCs
Integrating AI into SOCs raises several ethical and security considerations, highlighting the importance of human oversight. Ethical frameworks, such as those provided by NIST and the International Organisation for Standardisation (ISO), offer guidelines for responsible AI use in cybersecurity. Section 3 of the NIST AI Risk Management Framework (AI RMF) emphasises the need for transparency, accountability, and fairness in AI systems (Raimondo et al., 2023). The ISO/IEC 42001 standard also mandates ethical AI practices, particularly clauses 7.1 and 7.2, which outline requirements for ethical governance and bias mitigation (ISO, 2020). Australian standards, such as AS ISO/IEC 27001:2023, also provide guidance on maintaining information security while integrating ethical AI considerations into SOC operations (ISO/IEC 27001:2023, 2023).
Data privacy is another key ethical concern in AI-driven SOCs. The Australian Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs) require organisations to protect personal information and ensure that AI systems do not compromise data privacy (OAIC, 2022). Human oversight is essential for ensuring compliance with these regulations. SOC analysts must monitor AI systems to ensure data handling practices align with legal requirements. By maintaining human oversight, SOCs can safeguard sensitive information and prevent unauthorised access or misuse of data (OAIC, 2024).
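As a simple illustration of such a data-handling control, the sketch below redacts obvious personal identifiers from log lines before they are passed to an AI service. The patterns are hypothetical examples only and do not constitute a complete control for compliance with the Privacy Act or the APPs.

```python
# Illustrative sketch: strip obvious personal identifiers from log lines before
# they are sent to an external AI service. The patterns below are simplistic
# examples, not a complete privacy-compliance control.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "tfn": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),   # Australian Tax File Number shape
}

def redact(line: str) -> str:
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"<{label.upper()}_REDACTED>", line)
    return line

raw = "Failed login for alice.smith@example.com from 203.0.113.42 (TFN 123 456 789)"
print(redact(raw))
# Failed login for <EMAIL_REDACTED> from <IPV4_REDACTED> (TFN <TFN_REDACTED>)
```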
Bias management is also critical in AI-driven SOCs. Biases in AI systems can lead to unfair threat assessments or skewed incident prioritisation (X. Zhang et al., 2022). To address this, organisations must monitor AI outputs for potential biases and implement corrective measures. Human analysts play a vital role in identifying and mitigating biases, ensuring that AI systems provide fair and accurate assessments (Binns, 2018). Adversarial attacks further complicate the ethical landscape of AI in SOCs. Malicious actors can manipulate AI inputs, leading to misclassifications that could compromise security. Human oversight is necessary to detect and respond to such attacks, thus preserving the integrity of AI-driven SOCs (Huang et al., 2011).
Future Directions: Advancing Human-AI Collaboration in SOCs
As AI evolves, SOC professionals must develop skills that complement AI capabilities. Training in AI technologies, data analysis, and ethical considerations will be essential for future SOC analysts (Kaur et al., 2023). By staying current with technological advancements, SOC professionals ensure they remain equipped to work effectively alongside AI systems. This ongoing adaptation is necessary for SOCs to address emerging cybersecurity challenges and maintain a robust defence.
AI-driven SOC innovations, such as predictive analytics, may enable SOCs to anticipate threats based on patterns across multiple data sources (Hero et al., 2023). Furthermore, with the advancement of AI, SOC roles will shift towards strategy and policy, requiring analysts to engage in higher-order thinking and decision-making (Salis, 2024).
Overreliance on AI, however, could undermine SOC effectiveness. While AI is a powerful tool, it lacks the flexibility and intuition that human analysts provide (Salem et al., 2024). A balanced approach ensures that AI supports human decision-making rather than replacing it, preserving the adaptive capacity of SOCs. By investing in skill development and ethical governance, SOCs can create a framework that accommodates new AI technologies while preserving human oversight (Salem et al., 2024).
Conclusion
AI is unlikely to supplant humans in SOCs, as there is little evidence to support such a shift. Instead, AI serves as a powerful tool that enhances human capabilities. By handling routine tasks and extensive data analysis, AI allows SOC analysts to focus on strategic and complex responsibilities. Furthermore, continuous improvement and adaptability are essential, so SOCs must adopt a mindset of constant refinement by regularly updating AI models, refining processes, and remaining vigilant against evolving threats.
Human oversight is crucial for contextualising AI insights and addressing ethical and security considerations. This synergy of AI and human expertise, supported by a commitment to improvement, creates a resilient SOC that is fully equipped to address current and future multifaceted cybersecurity challenges.
Future Considerations: Preparing for the Next Generation of SOCs
In preparing for the future, SOCs should focus on skill development, ethical governance, and adaptive frameworks. Continuous training ensures that SOC professionals can work effectively alongside AI as technology evolves. Ethical governance is essential for maintaining transparency and accountability in AI-driven SOCs. Adaptive frameworks that integrate new AI technologies while preserving human oversight will be crucial for sustaining a resilient cybersecurity posture.
As AI becomes more integral to SOC operations, the balance between human intuition and AI capabilities will determine the effectiveness of future cybersecurity defence. SOCs that invest in human expertise and ethical AI practices will be better equipped to defend against sophisticated cyber threats in an increasingly digital world.
References
Ahmad, R., Alsmadi, I., Alhamdani, W., & Tawalbeh, L. (2023). Zero-day attack detection: a systematic literature review. Artificial Intelligence Review, 56(10), 10733–10811. https://doi.org/10.1007/s10462-023-10437-z
Baseri, Y., Chouhan, V., & Ghorbani, A. (2024, April 16). Cybersecurity in the quantum era: Assessing the impact of quantum computing on infrastructure. arXiv. https://arxiv.org/html/2404.10659v1
Ben-Asher, N., & Gonzalez, C. (2015). Effects of cybersecurity knowledge on attack detection. Computers in Human Behavior, 48, 51-61. https://doi.org/10.1016/j.chb.2015.01.039
Bhumichai, D., Smiliotopoulos, C., Benton, R., Kambourakis, G., & Damopoulos, D. (2024). The convergence of artificial intelligence and blockchain: the state of play and the road ahead. Information, 15(5), 268. https://doi.org/10.3390/info15050268
Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of Machine Learning Research (Vol. 81, pp. 1–11). https://proceedings.mlr.press/v81/binns18a/binns18a.pdf
Bonnie, E. (2023, December 7). How Artificial Intelligence Will Affect Cybersecurity in 2024 & Beyond. Secureframe. https://secureframe.com/blog/how-will-ai-affect-cybersecurity
Chan, W., & Zhang, J. (2024, February 6). Elevating Security operations: The role of AI-Driven Automation in enhancing SOC efficiency and efficacy. https://journals.sagescience.org/index.php/jamm/article/view/128
Check Point. (2023, December 24). What is SOC Automation? Check Point Software. https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-soc/what-is-soc-automation/
DeCarlo, A. L. (2024, August 30). 4 AI cybersecurity jobs to consider now and in the future. Search Security. https://www.techtarget.com/searchsecurity/tip/AI-cybersecurity-jobs-to-consider-now-and-in-the-future
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. C., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication. https://doi.org/10.2139/ssrn.3518482
Ganesh, N., Premanand, V., Chikkam, V. S. D., Teja, P., & Kumar, N. C. A. (2024). Sustainable horizons: Generative AI’s evolution in empowering security operations centers. International Journal of Novel Research and Development, 9(7). https://www.ijnrd.org/papers/IJNRD2407096.pdf
Greis, J., & Sorel, M. (2024). The cybersecurity provider’s next opportunity: Making AI safer. McKinsey & Company. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-cybersecurity-providers-next-opportunity-making-ai-safer?cid=eml-web
Goodfellow, I., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. https://doi.org/10.48550/arXiv.1412.6572
Hero, A., Kar, S., Moura, J., Neil, J., Poor, H. V., Turcotte, M., & Xi, B. (2023). Statistics and data science for Cybersecurity. Harvard Data Science Review, 5(1). https://doi.org/10.1162/99608f92.a42024d0
Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011). Adversarial machine learning. Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, 43-58. https://doi.org/10.1145/2046684.2046692
Hughes, M., Carter, R., Harland, A., & Babuta, A. (2024, April). AI and strategic decision-making: Communicating trust and uncertainty in AI-enriched intelligence. CETaS Research Reports. https://cetas.turing.ac.uk/publications/ai-and-strategic-decision-making
ISO/IEC 27001:2023. (2023). Standards Australia. https://www.standards.org.au/standards-catalogue/standard-details?designation=as-nzs-iso-iec-27001-2023
Johnson, R., Maserrat, K., & Yeo, J. (2024, August 22). Quantum computing’s transcendence: Impacts on foundational technology. Foley & Lardner LLP. https://www.foley.com/insights/publications/2024/08/quantum-computings-transcendence-impacts-foundational-technology/
Kaur, R., Gabrijelčič, D., & Klobučar, T. (2023). Artificial intelligence for cybersecurity: Literature review and future research directions. Information Fusion, 97, 101804. https://doi.org/10.1016/j.inffus.2023.101804
Kinyua, J., & Awuah, L. (2021). AI/ML in Security Orchestration, Automation and Response: Future Research Directions. Intelligent Automation & Soft Computing, 28(2), 527–545. https://doi.org/10.32604/iasc.2021.016240
Lahcen, R. a. M., Caulkins, B., Mohapatra, R., & Kumar, M. (2020). Review and insight on the behavioral aspects of cybersecurity. Cybersecurity, 3(1). https://doi.org/10.1186/s42400-020-00050-w
Leffer, L. (2024, February 20). Yes, AI Models Can Get Worse over Time. Scientific American. https://www.scientificamerican.com/article/yes-ai-models-can-get-worse-over-time/
Monostori, L. (2014). Cyber-physical production systems: Roots, expectations and R&D challenges. Procedia CIRP, 17, 9-13. https://doi.org/10.1016/j.procir.2014.03.115
Muller, V. C. (2020). Ethics of artificial intelligence and robotics. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/ethics-ai/
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
OAIC. (2022). Australian Privacy Principles. Office of the Australian Information Commissioner. https://www.oaic.gov.au/privacy/australian-privacy-principles
OAIC. (2024, October 3). OAIC submission to the Department of Industry, Science and Resources – Safe and responsible AI in Australia discussion paper. OAIC. https://www.oaic.gov.au/engage-with-us/submissions/oaic-submission-to-the-department-of-industry-science-and-resources-safe-and-responsible-ai-in-australia-discussion-paper
Parkinson, S., & Khan, S. (2024). The role of Artificial Intelligence in digital forensics: Case studies and future directions. Assessment and Development Matters, 16(1), 42–47. https://doi.org/10.53841/bpsadm.2024.16.1.42
Raimondo, G. M., U.S. Department of Commerce, National Institute of Standards and Technology, & Locascio, L. E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). In NIST AI 100-1. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
Ramos, S., & Ellul, J. (2024). Blockchain for Artificial Intelligence (AI): enhancing compliance with the EU AI Act through distributed ledger technology. A cybersecurity perspective. International Cybersecurity Law Review, 5(1), 1–20. https://doi.org/10.1365/s43439-023-00107-9
Reeves, A., & Ashenden, D. (2023). Understanding decision making in security operations centres: building the case for cyber deception technology. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1165705
Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
Salem, A. H., Azzam, S. M., Emam, O. E., & Abohany, A. A. (2024). Advancing cybersecurity: a comprehensive review of AI-driven detection techniques. Journal of Big Data, 11(1). https://doi.org/10.1186/s40537-024-00957-y
Salis, S. (2024, August 8). What skills do SOC analysts need today? HarfangLab | Your Endpoints, Our Protection. https://harfanglab.io/blog/methodology/soc-skills-mindflow/
Sengupta, S. (2024, June). Top 5 Successful initiatives in AI and Cybersecurity. Magnus Management Group LLC. https://www.mmgllc.us/top-5-successful-initiatives-in-ai-and-cybersecurity/
Tilbury, J., & Flowerday, S. (2024). Humans and Automation: Augmenting security operation centers. Journal of Cybersecurity and Privacy, 4(3), 388–409. https://doi.org/10.3390/jcp4030020
Zhang, L. A., Hartnett, G. S., Aguirre, J., Lohn, A. J., Khan, I., Herron, M., & O’Connell, C. (2022, December 12). Operational feasibility of adversarial attacks against artificial intelligence. RAND. https://www.rand.org/pubs/research_reports/RRA866-1.html
Zhang, X., Chan, F. T. S., Yan, C., & Bose, I. (2022). Towards risk-aware artificial intelligence and machine learning systems: An overview. In Decision Support Systems (Vol. 159, p. 113800). https://doi.org/10.1016/j.dss.2022.113800