Security Awareness: We say it's a 'layer 8 problem' but are we to blame?

Information security is about people, process and technology. I hear this a lot. It's an eminently sensible assertion, and it has served us well. But in a world of digital enablement and cybercrime-as-a-service, what does it mean? Is it really about people? In what way? Has our ever more sophisticated threat landscape reached a tipping point where we simply cannot train the lay user? Scrap that: have we reached a level where NO ONE can tell good from bad on the big, bad Internet?

I am not saying that users are absolved of any responsibility in cyberspace. We must educate around the perils of social engineering; it's important that everyone understands the issues with password reuse; and I'm not disputing the benefits of a clear desk policy, nor the use of privacy screens to prevent shoulder surfing. Off topic a little, on the subject of snooping: why don’t companies issue privacy screens with all laptops? I travel a lot, and every day I see someone on a train or a plane working on confidential information or typing passwords into their email or VPN solutions. If I wanted to exfil data from an organisation or steal privileged creds, I’d just buy a ticket on the 7:26am to London Paddington.

I am saying that the legacy approach to security awareness is predicated on knowing good from bad, and I assert this is no longer possible. Fox Mulder told us to trust no one; was he right all along? Sort of.

There is no one-size-fits-all approach to cyber security. The motivations, skills, funding and experience of the ‘bad guys’ vary enormously. Our ‘known-bad’ approach works well for a generic, poorly constructed 419 email requesting the transfer of large sums of money to an offshore account, but no amount of awareness is going to mitigate some of the risks associated with targeted attacks. We need to contextualise the threats, assess organisational mitigations, and make conscious risk decisions.

We need to know what we’re trying to protect against. Not always an easy task in the cat-and-mouse game of cyber protection and something made harder by some of our innate human behaviour:

Users click stuff

I don't mean users like my Mum - I'm not knocking the non-digital natives; my mother is one of the cleverest people I know (had to get it from somewhere), but computer savvy she's not. Tangentially, she told me five years ago that Sky had installed her 10Gbps DSL line (I was fairly sure at the time she'd not opted for a carrier-grade circuit at home to support her occasional browsing of The Guardian website or infrequent iPlayer download). My point (yes, I do have one) is that users click stuff because they are inquisitive, curious, emotional people, not because they’re nefariously inclined or poorly educated. We've spent aeons telling users to 'click this link' for all sorts of business and personal reasons, and now we're expecting them to know good from bad? Come on!

Add into the melting pot that social engineering and phishing scams have evolved into multi-million-dollar businesses run by professionals. The emails we receive and the websites we're asked to visit are frighteningly authentic-looking. It's not just my Mum they'd fool; it's almost everyone. We all remember the spam emails of yesteryear - spotting good from bad was almost trivial: an hour of computer-based training and job done. Now we're in a world of homograph attacks and domain squatting, which makes the burden of responsibility on the user too great.
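To make the homograph point concrete, here is a minimal sketch (the domain names are illustrative, not real incidents) showing two 'identical-looking' domains that are in fact different strings:

```python
# A minimal sketch of the homograph problem: these two strings render
# (near-)identically in many fonts, yet only one is the genuine domain.
# Domain names here are purely illustrative.
import unicodedata

real = "paypal.com"
spoof = "p\u0430ypal.com"  # U+0430 is CYRILLIC SMALL LETTER A, not the Latin 'a'

print(real == spoof)  # False: different code points behind identical-looking glyphs
for ch in spoof.split(".")[0]:
    # Show exactly which characters make up the lookalike label
    print(ch, hex(ord(ch)), unicodedata.name(ch))
```

If the difference only shows up at the level of Unicode code points, expecting a busy user to spot it by eye is asking too much; this is a job for tooling, not training.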

Users are busy

Show me an organisation that doesn’t rely on technology for critical business operations and I’ll show you an environment with a shadow IT problem. Email and mobile computing are the de facto methods of communication both within and between organisations. Try telling a journalist with a hot lead entitled 'breaking news' to contact IT and check the file is safe, or an accountant swamped at year-end not to open 'HMRC.docx'. Pressure and stress alter an otherwise pragmatic approach to security awareness. People need frictionless security and we're failing to deliver it; instead, we throw the blame back to my Mum. Some are mooting a ‘zero-trust’ approach to enterprise security architecture, and perhaps we need to extrapolate some of these principles to user awareness: assume a user is going to click EVERYTHING and visit malicious websites – let’s work back from there.

Curiouser and curiouser

Curiosity may have killed the cat and rendered Alice pint-sized, but that has not stopped users being nosy. There is an ‘on-the-fly’, subconscious risk calculation being performed by the user: yes, this USB stick could contain malware, but I’d love to read that HR spreadsheet that tells me what Steve is getting paid, and chances are it’s fine. Phishing works because we are human beings. Temptation and curiosity aside, we all know that we shouldn’t really open that link, but hey, no one is looking…

Blaming users for plugging USB pens into machines is analogous to blaming a child for falling down the stairs: there should have been appropriate protections in place to remove, or at least mitigate, the risk. If you need to use USB devices (and outside of my ivory tower I’ve seen lots of genuine use cases), please make sure that you have granular access control, logging and some form of cryptographically sound framework to a) encrypt the data and b) digitally identify that the device was issued by your IT department and can be remotely wiped.
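As a hedged sketch of that 'identify the device before you trust it' idea (all names, keys and serial numbers here are hypothetical, and this is an illustration rather than a product design), an endpoint agent might refuse to mount any stick whose serial number doesn't carry a tag issued by IT:

```python
# Illustrative only: mount removable media solely when its serial number
# carries a valid tag issued during IT provisioning, and log every decision.
# The key, serial numbers and store are hypothetical.
import hmac
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
PROVISIONING_KEY = b"replace-with-a-key-from-your-secrets-store"

def issue_tag(serial: str) -> str:
    """IT provisioning step: bind a tag to the device serial number."""
    return hmac.new(PROVISIONING_KEY, serial.encode(), hashlib.sha256).hexdigest()

def allow_mount(serial: str, tag: str) -> bool:
    """Endpoint check: allow only devices IT actually issued, and log either way."""
    expected = issue_tag(serial)
    ok = hmac.compare_digest(expected, tag)
    logging.info("usb serial=%s allowed=%s", serial, ok)
    return ok

# A device provisioned by IT passes; the stick found in the car park does not.
tag = issue_tag("SN-0001")
print(allow_mount("SN-0001", tag))                      # True
print(allow_mount("SN-FOUND-IN-CARPARK", "deadbeef"))   # False
```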

When good sites go bad!

Jamie Oliver, Spotify, Mr Chows, The New York Times - not an extract from my latest bank statement, merely a handful of 'reputable' websites which have fallen victim to some form of malvertising. There was a time in the recent past when malware came almost exclusively from file hosting, torrent and pornography sites. Whilst it is still statistically probable that these locations serve the majority of malware, it is the sites we historically considered 'safe' that are blasting a huge hole through our traditional security defences. Watering hole attacks and drive-by downloads are common; they require little effort from the criminals and have a very high success rate.

The beautifully tailored web that we have today is increasingly constructed of dynamic, user-generated content. The weird and wonderful URLs we see are invariably single-use or customised, based on that specific user action. This is another reason why the whitelist / blacklist model doesn’t work. Dynamic content can be vulnerable to a myriad of application vulnerabilities (XSS, CSRF, SQLi), and we’re serving static content from Content Delivery Networks (CDNs) which are almost always whitelisted by organisations and subject to little, if any, security protection. Our world wide web has changed; it’s time we changed the way we protect our users when they use it.
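A toy example of the point (hypothetical domains throughout): a naive domain allowlist happily passes a single-use, user-generated path on a 'trusted' CDN, because it judges the host rather than the content that host happens to be serving:

```python
# Illustrative sketch: a 'known-good' domain check says nothing about what
# the allowlisted host actually serves. All domains and paths are made up.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"cdn.example-static.com", "news.example.com"}

def allowed(url: str) -> bool:
    """Naive allowlist: trust anything hosted on an approved domain."""
    return urlparse(url).hostname in ALLOWED_DOMAINS

# The hostname is 'good', but the object behind the one-off path could be
# anything: a poisoned ad, a compromised script, a malvertising redirect.
print(allowed("https://cdn.example-static.com/u/8f3a21/dropper.js"))  # True
print(allowed("https://obviously-bad.example.net/exploit"))           # False
```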

The Business Conversation

I feel we need to be more honest with our boards and business executives when discussing end-user awareness. An hour’s annual CBT is going to provide limited benefit if it is treated as a ‘set and forget’ activity.

In an ideal world, the executive meeting would follow this format:

CxO: ‘Acme Corp have commissioned a security awareness programme. They’re sending bogus emails to users to see who clicks on the links. If they do, they’re taken to a splash page which provides them with awareness information telling them not to open files from people they don’t know or to avoid suspicious links. The Acme CISO says this is going to dramatically reduce their exposure to a data breach – CISO, why are we not doing this? What am I paying you for?’
CISO: ‘Great, user awareness is important. A lot of malware is delivered through email and web exploits. We SHOULD be giving users guidance on how to report suspicious activity and identify scams. Good security hygiene is imperative. Unfortunately, however, we’re not dealing exclusively with the kid in the garage, and the techniques criminals are using have evolved. These bogus sites often look legitimate, have credible-looking DNS names and are often delivered with the trusty padlock symbol. This type of activity can sometimes have the opposite effect, too: if users feel that clicking stuff is bad, they’re far less likely to report it when they realise that perhaps something untoward just happened on their workstation.’
CxO: ‘So, you’re telling me not to waste my money?’
CISO: ‘No – awareness is key, but let’s not forget about the ‘process and technology’ components of the InfoSec cube. If we apply controls on the assumption that everyone will click everything, we’ll be a lot safer.’

So what do we do?

Do we chuck away our computer-based training and give up? Absolutely not. The bad guys are continually evolving their tools, techniques and procedures – we need to do the same. Rather than lumping responsibility solely on the end user, we should be keeping up our end of the bargain.

We put users under the microscope with metrics around ‘who clicked’ or ‘% of users who opened a PDF file’. Let’s measure ourselves a little more. If we’re assuming (which I wish we would) that users will click anything, how about some structured blue teaming: are we able to identify and respond to attacks which target the user? Does our security architecture provide defence-in-depth protection? Do we have a malware detection capability focused on behaviour as opposed to signatures? Are we aggregating log information into our SIEM? Do we even know what our indicators of compromise look like? Polymorphic malware is at an all-time high. The Cerber ransomware is allegedly morphing every 15 seconds, generating essentially unique hashes which will always evade signature-based defences. If we’re subscribing to a zero-trust model then we need the ability to analyse the behaviour of files rather than their reputation. Let’s train our users to identify the obviously malicious, but in a world where the cyber-criminal is hiding in plain sight, reputation is only going to get you so far.
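To illustrate why hash-based signatures lose to that kind of morphing, here is a minimal, self-contained sketch; mock strings stand in for real payloads, and the 'behaviour' check is a stand-in for sandbox or EDR telemetry:

```python
# Illustrative only: trivially re-packing the same payload defeats a hash
# signature, whereas a behavioural indicator (what the sample does) survives.
# The 'payloads' are mock strings, not real malware.
import hashlib
import os

KNOWN_BAD_HASHES = {hashlib.sha256(b"encrypt_files_and_demand_ransom").hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Reputation-style check: does this exact hash appear on a blocklist?"""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def behaviour_match(sample: bytes) -> bool:
    """Behaviour-style check: flag what the sample does, not what it looks like."""
    return b"encrypt_files_and_demand_ransom" in sample

original = b"encrypt_files_and_demand_ransom"
morphed = original + os.urandom(8)  # 'polymorphic' variant: same behaviour, new hash

print(signature_match(original), signature_match(morphed))  # True  False
print(behaviour_match(original), behaviour_match(morphed))  # True  True
```

The point is simply that what a file does is a far more stable indicator than what its bytes happen to hash to.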

Steve Grossenbacher

Sr. Director of Product Marketing at Zscaler

8y

Nicely done Chris. Crazy that the Cerber ransomware is morphing every 15 seconds!

Kalpesh Sheth

Principal Technical Program Manager at Amazon Robotics

8y

Great article Chris. We as a startup have been telling CISOs that people are the new perimeter and that chasing after new shiny malware or APTs may not work in the long run; you have to focus on user behavior with proactive action (2FA) in real time. Somehow many CISOs are afraid of taking "real-time" action due to false-positive concerns. I understand, but imagine the cost of a false negative. The bad guys have to be right only once while the good guys have to be right all the time. We live in such an asymmetric world!

Florian Hartmann

Senior Manager Sales Engineering - Central Europe at CrowdStrike

8y

Great article Chris and some good points on user behaviour.

Nick Santora

cybersecurity hype man | business architect | founder at aijobs.com

8y

Great in-depth article Christopher. A lot of these points are ones we stress, especially the once-a-year death-by-PowerPoint. We speak with lots of organizations and their "awareness" experts who won't even listen to the thought of changing perceptions outside of the once-a-year delivery of education. It is mind-blowing, but sadly a reality. The best way I explain security awareness is to compare it to personal health: do you think eating one healthy meal and exercising one time a year will keep you fit and lower your overall risk of health problems? Probably not. Well then, don't expect anything to change with your cyber security culture either.
