AI in CYBER SECURITY: PART TWO

More Articles on similar subjects at: UK National Cyber Security Association

This is our second article on the fascinating topic of AI in Cyber Security. The first one really stirred up some emotions! First Article Here

One reaction … "AI does not even exist yet, the most you can say is that it’s very clever programming. So, by trying to claim your product uses AI ..."

I think this is pretty much down to semantics, so it’s worthwhile highlighting what we mean by AI.

The term artificial intelligence was coined back in 1956 at the Dartmouth Artificial Intelligence Conference, where it was described in broad terms as features of intelligence that can be simulated by machines, that is, computers. So yes, AI is clever programming. Early AI systems used pattern matching and expert systems. However, machine learning (ML) is different from simply programming a set of rules: ML is a tool which enables computers to teach themselves and set their own rules.

ML has developed to become the best tool for analysing and identifying patterns in data. Using machine learning, a computer can be trained to automate tasks that would be far too time-consuming for a human to carry out. Additionally, decisions can be made based on the outcome of algorithms with minimal operator intervention.

Star Wars: C-3PO

Thanks to the movies, I think many people automatically think of AI as having human-like abilities (Star Wars’ C-3PO, etc.). This is properly known as Artificial General Intelligence (AGI), or General Artificial Intelligence, which is not yet with us, and some think it’s a long, long way off.

AGI would give a machine the ability to use sensors to understand its environment and make decisions to improve that environment. This ability, coupled with extremely fast data processing, would give it more capability than humans. AGI is also known as Strong AI.

The type of AI in use now for cyber security and most other implementations is called Narrow AI (NAI) or Weak AI, which handles single-task operations. NAI makes use of ML algorithms to find patterns in data that would be impossible, or very tedious, for humans to discover.

Deep Learning

Deep Learning is a subset of ML using structures called deep neural networks, which are designed to mimic the human brain. One feature of deep learning is that outcomes themselves can be tested and fed back into the algorithm’s layers to improve accuracy; this is the principle behind recurrent neural networks and recurrent back-propagation. This kind of unsupervised learning is ideal for working with very large sets of data, which is just what we have when we look at network traffic data.
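To make the feedback idea concrete, here is a toy sketch in plain Python (not any real deep learning framework) of a single neuron whose output error is fed back to adjust its weights. It is the same principle, at miniature scale, as back-propagation in a deep network:

```python
import math

def train_neuron(samples, labels, epochs=1000, lr=1.0):
    """Train one sigmoid neuron by feeding its output error back
    into the weights: a miniature back-propagation feedback loop."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward pass
            err = out - y                                # test the outcome
            grad = err * out * (1.0 - out)               # error fed back
            w -= lr * grad * x                           # adjust the weights
            b -= lr * grad
    return w, b

# Toy task: learn to score inputs above 0.5 as "1" and below as "0"
xs = [0.1, 0.2, 0.8, 0.9]
ys = [0, 0, 1, 1]
w, b = train_neuron(xs, ys)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

Each pass tests the outcome against the truth and feeds the error back, so accuracy improves with every loop; that is the essence of the feedback arrangement described above.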

Figure: Deep learning feedback loop to improve accuracy.

Our Growing Data Pools

As we all know, our network devices create an immense amount of data, which many organisations spend millions of pounds collecting, normalising and analysing in order to identify malware, lateral movement, and unauthorised access. Our security information and event management (SIEM) systems help get all this data into one place, create alerts, and then help the SOC team triage and respond to them. Many of today’s SIEMs have morphed into “unified” security systems which now incorporate data from intrusion prevention systems (IPS), data loss prevention (DLP) and endpoint AV protection. That’s a lot of information; too much for humans to deal with, even for a large SOC team.

Of course, our IT sprawl now includes our on-premises data centre with physical and virtual machines, plus AWS and maybe Azure or GCP cloud infrastructures. Add to that the remote devices, connections via Wi-Fi, Bluetooth, and VPN to both corporate and personal devices, and gradually there’ll be thousands of IoT devices to secure as well. (Sorry, it looks a bit daunting!) That’s why I feel that AI (that is, narrow AI) is going to become a necessary mainstream element of cyber security.

A Different Approach

Collecting log data from all the different devices, using agents to gather even more data, and then moving and storing that data is quite a drain on network bandwidth as well as compute resources. A different way of dealing with security is to analyse the network traffic itself, in real time, with several different and specific machine learning algorithms. This is using narrow AI to accomplish specific tasks at very high speed. Strategically placing a span port (tap or mirror port) at the core switch and routing the mirrored data to a high-powered AI engine enables all network traffic to be analysed in real time without placing any burden on the network.

Such a system can then watch, and use unsupervised machine learning to build up a pattern of “normal” usage for every device, every user, and every service that sends traffic through the network. Metadata alone contains enough information to spot anything that is “out of character” for a particular device or user and thereby trigger an anomaly alert. Again, ML algorithms can be used to triage these alerts, saving the SOC team an immense amount of time and effort.
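As a rough sketch of the idea (pure Python, illustrative only, not any vendor’s product), a per-device baseline can be learned from nothing more than flow metadata and then used to flag out-of-character transfers:

```python
from collections import defaultdict
import statistics

class DeviceBaseline:
    """Learn each device's 'normal' transfer size from metadata alone,
    then flag transfers that sit far outside the learned pattern."""

    def __init__(self, threshold=3.0):
        self.history = defaultdict(list)   # device -> observed flow sizes
        self.threshold = threshold         # z-score treated as anomalous

    def observe(self, device, nbytes):
        self.history[device].append(nbytes)

    def is_anomalous(self, device, nbytes):
        seen = self.history[device]
        if len(seen) < 10:                 # still learning this device
            return False
        mean = statistics.fmean(seen)
        spread = statistics.pstdev(seen) or 1.0
        return abs(nbytes - mean) / spread > self.threshold

baseline = DeviceBaseline()
for i in range(50):                        # a camera's flows normally vary
    baseline.observe("cctv-01", 1_000_000 + i * 10_000)

print(baseline.is_anomalous("cctv-01", 1_100_000))   # in character -> False
print(baseline.is_anomalous("cctv-01", 90_000_000))  # bulk export -> True
```

Real systems learn far richer patterns (ports, timings, peer groups) with proper ML models; the point of the sketch is that no packet payloads are needed, just metadata.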

OK, so much for the theory, let’s look at two more real-world examples.

(Feel free to send me a LinkedIn Message for more information on the software used.)

The Security System Itself Becomes the Security Problem!

Recently, at an investment consultancy firm, the CCTV system became the main security problem. As with many new systems, this one was internet-connected, and it had been compromised by unknown attackers. When the AI defence system was installed, it noticed that a high volume of video data was moving through the network. The attackers had gained control of the CCTV system and would have been able to watch all of the system’s video recordings, including board room meetings.

When the perpetrators began moving data from the unencrypted CCTV server towards the perimeter in an attempt to exfiltrate the files, the AI defence system recognised this as anomalous behaviour and blocked the file transfer. The restriction did not interfere with the running of the CCTV system or its server; it just prevented the anomalous behaviour.

Sometimes an IPS or DLP can severely obstruct business processes when it closes certain services in order to contain an exploit. This example of the AI system taking proportionate action highlights the benefits of AI: the defence system had calculated what was a normal pattern of use and restricted usage to that norm, thus not interfering with business as usual (BAU).

Insider Threat in a South African Company

This insider action started with reconnaissance of the internal network from an employee’s laptop. The laptop began ‘pinging’ hundreds of internal IP addresses in order to identify those which were active. It then swept the network, collecting the names of responsive machines, and scanned them all for open ports.

Of course, this activity was well outside what had been determined as the “normal activity pattern” for the laptop’s peer group (not just the laptop itself), and an alert was raised for the SOC team. The AI defence system was configured to take defensive action, so the laptop was immediately restricted to its group’s “normal activity pattern” for a one-hour period.

Within a few hours, the exploit continued. The laptop began to run commands on hundreds of other internal PCs in the IP range it had previously identified, using a remote-administration tool to deploy script files to the machines which could later be used as a back door into the network.

Because no other similar file-writes had been seen across the network in such a short period of time, this was flagged as anomalous behaviour, and the AI system instantly blocked the outgoing SMB file transfers involved. When the SOC investigated, it was found that a member of the security team had performed an unscheduled scan to look for weaknesses in the network.
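A sweep like this leaves a distinctive metadata trail: one source suddenly contacting far more distinct internal addresses than any normal workstation would. A minimal detection sketch (with the illustrative assumption that events are (timestamp, source, destination) tuples taken from mirrored traffic) might look like this:

```python
from collections import defaultdict

def find_scanners(events, window=60, max_peers=20):
    """Flag any source that contacts more than max_peers distinct
    destinations within a sliding time window (in seconds)."""
    recent = defaultdict(list)             # src -> [(ts, dst), ...]
    flagged = set()
    for ts, src, dst in sorted(events):
        # keep only this source's contacts inside the sliding window
        recent[src] = [(t, d) for t, d in recent[src] if ts - t <= window]
        recent[src].append((ts, dst))
        if len({d for _, d in recent[src]}) > max_peers:
            flagged.add(src)
    return flagged

# A laptop pings hundreds of internal addresses over five minutes,
# while a normal workstation talks to one file server now and then.
events = [(t, "laptop-7", f"10.0.0.{t % 250}") for t in range(300)]
events += [(t, "desk-3", "10.0.1.5") for t in range(0, 300, 30)]
print(find_scanners(events))               # -> {'laptop-7'}
```

In the incident described above, the equivalent threshold was learned per peer group rather than hard-coded; the fixed max_peers here is a deliberate simplification.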

Salient Points

The interesting point here is that the events were carried out by authenticated and authorised users. The events were detected, not because actions broke any network rules, but because they were “out of character” with the learned “normal pattern” of use.

Consider how much time it takes to properly configure a SIEM to get the alerts you need. Often, the SIEM never gets optimally tuned and the SOC team has to deal with hundreds of false positives every day.

The AI defence system is self-learning. As the network grows and changes, there is no need to reconfigure it, because the AI system will adjust itself.

The savings in SOC team alert-analysis time and configuration time are substantial. Added to this is the rapid automatic response to contain events, giving the SOC team the time it needs to investigate properly.

David R Bird is a board advisor on cyber security. He works in a consultative/mentoring capacity with your security team to develop and improve your cyber security programme. As well as being a Certified Information Systems Security Professional (CISSP), David is an Enterprise Architect (TOGAF practitioner), an AWS Solutions Architect (Professional) and a PRINCE2 project manager; this, together with his previous background as a finance professional, ensures that cyber security is always aligned with business objectives.
