Network Edge Zero Trust in Practice
How does zero trust apply at the network edge?
Zero-trust frameworks address a fatal flaw in security thinking: the presumption of a 'safe space' protected by a magical perimeter, or unicorn shield.
Focusing on continuous verification, access controls and breach assumption is a much better model for security.
What does this mean at the network edge? How can you possibly verify traffic continually, apply access controls, and assume breach for visitors who haven't even logged in? Even if you could, wouldn't it just slow your entire site down?
At VerifiedVisitors, we're constantly looking at ways of continually verifying visitor traffic and automating access controls at the network edge.
The famous rhyming Russian proverb doveryay, no proveryay, "trust, but verify", popularised in the West by President Reagan and repeated ad nauseam during the nuclear arms reduction talks, worked in the enormous Soviet-era bureaucracy simply because humans need some element of trust to co-operate and function. Allowing responsible people with integrity to get on with things, while verifying their actions over time, seems like a good model for society to follow.
It turns out to be a good model for machine learning to follow as well.
Verify Constantly, Trust Occasionally.
At VerifiedVisitors we constantly verify all the traffic hitting endpoints at the network edge. We work at the Content Distribution Network (CDN) layer, before the traffic hits your site. Our detectors use Machine Learning to constantly assess these threats according to the actual behaviour of the visits as shown below.
We then display the threats by risk area, across hundreds of endpoints, so you can identify and visually verify the risk types, and create policies based on your access requirements.
As the machine learning sees more traffic over time, it learns from that traffic, accepts labelled data from our customers who flag key paths and known vulnerabilities, and applies rules. It learns and gets better. It starts to trust, based on repeated verifications.
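To make the idea concrete, here is a simplified Python sketch of how repeated verification can translate into earned trust. The names and thresholds (`TrustLedger`, the 0.5 trust discount, the 0.4/0.8 cut-offs) are purely illustrative assumptions, not our detector logic.

```python
from dataclasses import dataclass


@dataclass
class TrustLedger:
    """Hypothetical per-visitor record of past verification outcomes."""
    verified_visits: int = 0
    failed_checks: int = 0

    @property
    def trust(self) -> float:
        """Trust grows with repeated clean verifications and shrinks with failures."""
        total = self.verified_visits + self.failed_checks
        if total == 0:
            return 0.0                                    # never seen before: no trust yet
        return (self.verified_visits / total) * min(1.0, total / 10)


def verdict(behaviour_risk: float, ledger: TrustLedger) -> str:
    """Blend the current behavioural risk score (0..1) with accumulated trust.

    Earned trust discounts the risk but never removes verification entirely:
    verify constantly, trust occasionally.
    """
    effective_risk = behaviour_risk * (1.0 - 0.5 * ledger.trust)
    if effective_risk > 0.8:
        return "block"
    if effective_risk > 0.4:
        return "challenge"
    return "allow"


# A repeat visitor with ten clean visits and a moderate behavioural risk score.
ledger = TrustLedger(verified_visits=10)
print(verdict(0.5, ledger))   # "allow": earned trust discounts the risk
```

First-time visitors get no such discount, so the same behavioural score would trigger a challenge instead.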
Verify Constantly, Trust Occasionally.
In the graphs above, we show the entire risk surface area of all visitors across all endpoints. The areas of high risk are clearly visible, colour coded and displayed. Where we see repeat visitors that are proven to be human, or known bots that are allowed and verified, they are displayed in green as 'trusted'. Each visitor is assigned a virtual ID, so we can track repeat visits over time. These visitors are still verified dynamically, but can now safely go into the 'trusted' bucket for this particular visit at this time.
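As a minimal sketch only, the snippet below shows one way a stable virtual ID could be derived and used to move repeat, verified visitors into a 'trusted' bucket. The attribute set, the three-visit threshold and the function names are assumptions for illustration, not the actual VerifiedVisitors implementation.

```python
import hashlib


def virtual_id(attributes: dict[str, str]) -> str:
    """Derive a stable, anonymised visitor ID by hashing a set of request attributes.

    The attribute set here (user agent, TLS fingerprint, ASN) is purely illustrative;
    any stable combination could be hashed the same way.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


# Track repeat visits per virtual ID and move proven visitors into a 'trusted' bucket.
visit_counts: dict[str, int] = {}


def bucket_for(vid: str, verified_human: bool) -> str:
    visit_counts[vid] = visit_counts.get(vid, 0) + 1
    if verified_human and visit_counts[vid] >= 3:
        return "trusted"      # still verified dynamically on every visit
    return "unverified"


vid = virtual_id({"user_agent": "Mozilla/5.0", "tls_fp": "771,4865", "asn": "AS13335"})
for _ in range(3):
    print(bucket_for(vid, verified_human=True))   # third visit lands in 'trusted'
```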
Clicking on the chart icon flips the view so you then see the detailed traffic chart for each threat type over time. This can be really useful when looking at trends, traffic patterns or particular behaviour.
This means we can learn to trust our verified visitors over time, based on their past behaviour.
Known bots are already picked up by their behavioural patterns.
This allows us to cut down on the constant need to verify all traffic, all of the time.
Verify Constantly, Trust Occasionally.
Access Control
Now that we have a robust architecture for verification at the network edge, we can set up policies for network access.
Again, this is based on the Verify Constantly, Trust Occasionally model. Many sites still take a perimeter-style approach, simply blocking all suspect visitors on the basis of a fingerprint.
That’s why you see so many messages like the one above. On this occasion, visiting the home page with a VPN was enough to trigger the message below with an enforced CAPTCHA. It’s a one-size-fits-all approach that just doesn’t work.
Nearly a third of internet users worldwide use a VPN. All the user had done was visit the home page of a site they were potentially interested in purchasing from. Some welcome.
Using a fingerprint alone relies on many ‘soft signals’, such as a mismatch between the stated user agent and the actual hardware specification, which generate many false positives, as in the VPN example above.
Combining fingerprint anomaly detection with the actual behaviour of the visitors gives a much more robust and accurate model for visitor management. It’s relatively easy for an automated client to disguise a genuine-looking fingerprint; it’s very hard, if not impossible, to disguise the actual behaviour of the client. If a hacker is attempting an account take-over attack, it has to hit the login path at some point.
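Here is a hedged sketch of how a soft fingerprint signal could be weighted together with hard behavioural evidence such as login-path targeting. The weights, thresholds and feature names are assumptions chosen for illustration, not our scoring model.

```python
def combined_risk(fingerprint_anomaly: float,
                  hits_login_path: bool,
                  failed_logins: int,
                  requests_per_minute: float) -> float:
    """Blend a soft fingerprint signal with behavioural evidence (all values illustrative).

    A fingerprint mismatch alone (e.g. a VPN user) stays well below the blocking
    threshold; it only becomes decisive when the behaviour also looks like an
    account take-over attempt against the login path.
    """
    behaviour = 0.0
    if hits_login_path:
        behaviour += 0.3
        behaviour += min(0.4, failed_logins * 0.1)         # repeated credential failures
        behaviour += min(0.3, requests_per_minute / 200)    # automation-level request rates
    return min(1.0, 0.3 * fingerprint_anomaly + 0.7 * behaviour)


# A VPN user browsing the home page: the anomaly alone is not enough to block.
print(combined_risk(0.9, hits_login_path=False, failed_logins=0, requests_per_minute=2))   # ~0.27
# A client hammering the login path with failed logins: behaviour pushes the risk high.
print(combined_risk(0.9, hits_login_path=True, failed_logins=6, requests_per_minute=120))  # ~0.97
```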
The VerifiedVisitors automated rules generator puts a dynamic rule in place to protect, for example, login paths against potential account take-over by visitors that also display significant risk from the fingerprint anomalies detected.
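One way such a dynamic rule could be expressed is sketched below, with hypothetical field names and values rather than the actual VerifiedVisitors rule syntax.

```python
from dataclasses import dataclass


@dataclass
class AccessRule:
    """Hypothetical shape of a dynamically generated access-control rule."""
    path_pattern: str      # which endpoints the rule protects
    condition: str         # detector signal that triggers it
    action: str            # what the edge does when it fires
    expires_after: str     # dynamic rules are re-evaluated, not permanent


# Protect login paths from account take-over when fingerprint anomalies are detected.
login_protection = AccessRule(
    path_pattern="/login*",
    condition="fingerprint_anomaly AND behaviour_risk >= 0.7",
    action="challenge",    # step-up verification rather than a blanket block
    expires_after="24h",
)
print(login_protection)
```

A challenge action keeps genuine customers (including VPN users) moving, while forcing suspect automation to prove itself before it reaches the login endpoint.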
Breach Assumption
Now that we have a robust architecture for verification and access control, we’ve already made potential breaches far less likely. With behavioural tracking, path analysis and fingerprinting of each user, we’ve put much more robust protection against account take-over attempts in place. However, we still need to be pro-active and vigilant, monitoring for potential breach behaviour by constantly verifying our 'trusted' traffic.
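As a closing illustration, here is a small sketch (assumed field names and a simple z-score check, not production logic) of re-checking 'trusted' visitors against their own baseline, which is the breach-assumption part of the model.

```python
from statistics import mean, pstdev


def breach_suspect(past_rates: list[float], current_rate: float) -> bool:
    """Flag a 'trusted' visitor whose current request rate departs sharply from
    their own historical baseline (a simple z-score check, for illustration)."""
    if len(past_rates) < 5:
        return False                     # not enough history to judge a deviation
    baseline = mean(past_rates)
    spread = pstdev(past_rates) or 1.0
    return (current_rate - baseline) / spread > 3.0


history = [4.0, 5.0, 6.0, 5.0, 4.0]      # requests/minute on past trusted visits
print(breach_suspect(history, 5.5))       # False: within the usual range
print(breach_suspect(history, 60.0))      # True: trusted session now behaving like a bot
```

Trusted never means unwatched: even visitors in the green bucket keep earning their status on every visit.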