My thoughts on: Telling users to ‘avoid clicking bad links’ still isn’t working

On 20th December 2022, NCSC published a blog post titled “Telling users to ‘avoid clicking bad links’ still isn’t working”, and they’re right. A few people in my network reached out and asked for my thoughts on this as a Security Behaviour and Culture Specialist, so I sat down to fully digest the piece and respond as I worked through each paragraph and point.

“Users frequently need to click on links from unfamiliar domains to do their job, and being able to spot a phish is not their job.”

Yes, staff do need to click on links to carry out their daily work tasks; that much is right. By telling people not to click stuff, you are blocking their delivery, which could become a performance issue in the long run, leading to a world of upset. But is spotting a phish really not their job? It’s as much their job as raising the alarm when the toaster catches fire in the kitchen. Are we now saying staff shouldn’t raise the alarm in the face of danger? Do we categorise cyber danger as less of a threat than physical danger? I know I’ve written about this before, and I still feel education lets us down. We don’t get into vans with the man offering to show us his puppies, but we will click stuff with the promise of puppies or another reward.

“We're even aware of some cases where people have forwarded suspicious emails from their home accounts to their work accounts, assuming that the security measures in place in their organisations will protect them.”

This actually shows a level of talent and diligence in spotting suspicious emails. Now, if we are saying it isn’t their job to spot a phish, how do we protect our business from accidental insiders? From those who use the same password at home and at work? From those we now say don’t have to spot a phish at work, and so aren’t furnished with the experience and knowledge to protect themselves at home, accidentally compromising your corporate systems in the process?

“the stigma of clicking can prevent people reporting it”

This is not a staff problem, this is a cultural problem. A few words in the subheading of the piece nail it: avoid ‘blame and fear’. Let staff know there will be no repercussions and that you welcome questions. Start an open dialogue between staff and security teams, where people feel happy to approach you and tell you they’ve f**ked up, and where you are supportive and thank them for the info, without the “I’m smarter than you” tone or the eye rolls. Yes, you need to ask a few questions, but switch off the interrogation lamp and chat like old friends: “and then what happened?... you’re joking… don’t worry, we'll get to the bottom of it”. You are not a Bond villain, you are a peer. We need to make it widely acceptable to click stuff and ask for help. Yes, it’s not ideal to have people clicking here, there, and everywhere, but what is ideal is them telling you they did it. After all, we have all done it!

“Mitigating credential theft for organisational services”

So, the blog goes on to say “your organisation can entirely mitigate the threat of credential theft by mandating strong authentication across its services” and lists a few options. Again, yes, they’re right, this could be achieved. However, I can count on exactly one finger the number of companies I’ve worked with who are routinely using device-based passwordless authentication. I couldn’t count on all your fingers and toes how many companies have tried to enforce MFA and failed. SSO is a beast of a thing to integrate, and it comes at a cost which often hasn’t been factored into procurement budgets and may require technical support which wasn’t planned for. So I guess this isn’t that simple to fix, or we wouldn’t be where we are now.
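For anyone wondering what “strong authentication” adds over a lone password, here is a minimal sketch of a TOTP (rotating one-time code) second factor. It assumes the open-source pyotp Python library, and the account names are made up; it illustrates the concept only, not any implementation the NCSC prescribes.

# Minimal TOTP (time-based one-time code) sketch, assuming pyotp is
# installed (pip install pyotp). Names here are illustrative only.
import pyotp
# Enrolment: the service generates a shared secret, which the person
# loads into an authenticator app (usually by scanning a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="jemma@example.com", issuer_name="ExampleCorp"))
# Login: the app shows a six-digit code that rotates every 30 seconds.
code_from_app = totp.now()
# The service verifies the code server-side, so a phished password on
# its own is no longer enough to get in.
print("Code accepted:", totp.verify(code_from_app))

The point of the sketch: even if a dodgy link tricks someone into handing over their password, the attacker still lacks the rotating second factor, which is why the blog treats this as a control that can “entirely mitigate” credential theft.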

Now we go on to look at password managers, and at encouraging the use of them. Let me ask: why just encourage? Why not make one available to staff, with the sweetener of an extra account for personal use at home, breeding the habit of unique and strong passwords and the separation of home and work accounts, reducing the likelihood of a compromise at home taking down your corporate network? Again, we encourage staff to use MFA, but see above; it just doesn’t work like that in reality.

“only your organisation's devices can access resources”

I get the concept here, but it does involve a fair bit of technical jiggery pokery, which may limit this option to larger companies. My main concern is crisis management, business continuity, and the like. This isn’t a foolproof plan, and it would restrict disaster recovery if something went wrong. The power in the admin account to lock and unlock these controls must be closely guarded with all sorts of bells and whistles. I wonder if this is a technical control that's as blocking as telling staff not to click stuff?

“Mitigating malicious downloads through defence in depth”

Now, this is quite a detailed section in the blog, so as they break down the measures, I will break down my thoughts in line.

Firstly, preventing delivery of the phishing email: email scanning, web proxies, DMARC and SPF policies. Most email scanning tools need to learn, and that takes a long time. They also need user reporting to spot anomalies and to establish what ‘normal’ looks like. But this isn’t the job of our staff, and remember, we made them scared to report stuff, so the very reporting the technical control needs in order to work is what can make the technical control more damaging.

Let me explain: the more email scanners quarantine stuff we actually need, the more damaging automated quarantines become, because we are told to dig through our spam if we can’t find something we are looking for. That gives people the green light to click on stuff in there; in the very place the email scanner puts things to protect us from them! Get my point?

DMARC and SPF policies need some technical integration that the SME may not have, or carry a cost they weren’t expecting, but they are a reasonable solution, provided they are available and configured correctly, which they often are not.
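For anyone wondering what “configured correctly” even looks like, here is a minimal sketch, assuming the open-source dnspython package and a made-up example domain, that simply checks whether SPF and DMARC records are published at all. It's a first health check an SME could run themselves, not a full validation of the records' contents.

# Minimal SPF/DMARC presence check, assuming dnspython is installed
# (pip install dnspython). This only spots the "nothing published"
# case; whether the record *content* is right still needs a review.
import dns.resolver
def txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]
def check_email_auth(domain: str) -> None:
    # SPF lives on the domain itself; DMARC on the _dmarc subdomain.
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'missing'}, DMARC {'found' if dmarc else 'missing'}")
    for record in spf + dmarc:
        print("  " + record)
check_email_auth("example.com")  # swap in your own domain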

"Preventing execution of initial code" and "Preventing further harm"

Unsurprisingly, this is all very sensible stuff, but I find myself harking back to “what about the SMEs?”. These configurations require technical know-how or come at the cost of a third party. I don’t know how to solve this, but I can tell you that I have wandered around many a cyber conference talking to exhibitors about their managed services, and when I’ve asked “how small is your smallest client?”, most won’t touch under 50 endpoints. The reason the under-50-endpoint managed service guys aren’t exhibiting is that the big boys dwarf their offer with bright lights and fancy driving simulators, which prices them out of the exhibition hall. The SME often doesn’t know which questions to ask, or even that they need this stuff, so we, collectively, need to do something about this!

"Does this mean we can stop training people to recognise suspicious links?"

Now here's the bit people have been waiting for me to comment on, right?

“if your organisation implements the measures above, and tests and maintains them, it's likely there will be a significant drop in attackers exploiting your users to gain initial access.” See above. This is a pipe dream for many.

“But it's still worth training users to spot suspicious links. Why is this?” Here comes the rage… Still worth? Worth? Still? One of the emails I received about this article asked me what I thought about turning the first line of defence into the last line of defence. Do we see the danger in this line of thinking? Look around your company. Are your people the most valuable asset you have, or is it your stock, your IT assets, your real estate, some art, the ping pong table? Most companies do not see that their people are literally the thing that keeps the lights on, that brings in the cash, that keeps the wheels turning. So are they the first or the last line of defence? They are a cog in the defence machine, just like they are a cog in your asset machine. I don’t know why “worth” infuriates me so much; perhaps it's that I read it as “are your people worth investing in?” Of course they are!

Anyway, on to what I really wanted to say. I’ve said it before and I’ll say it again: we have left it far too late to start building secure habits, like raising the cyber alarm when the cyber toaster is on fire, or when you’ve received something out of the ordinary, and we cannot let up for a second. Security training shouldn’t mean 20 minutes every 12 months; it means using every opportunity to influence secure behaviour without ramming it down people’s throats.

You need a balance of everything in the NCSC blog, to whatever degree is achievable for your organisation. I have clients who will not roll out a technical control without a security behavioural campaign before, during, and after, and I’m aware of some companies who will send out an email saying “we’ve seen dodgy looking Microsoft links being clicked, don’t log in on dodgy links”, with no technical control and no behavioural learning opportunity. In fact, when your infrastructure is based on Microsoft 365 and you email all staff to tell them not to log in to their Microsoft account, you deserve all the s**t coming your way!

The point NCSC makes about “a determined attacker who is very focused on finding a route into a particular company network may also target users' personal accounts to get to their end objective” is spot on. However, an attacker may also find the keys to something sexy in a personal account and turn their attention to your company, which was previously off their radar. I’ve mentioned staff in their home lives a few times in my response to this article, and I just want to lay it out here: spotting suspicious stuff and calling it out needs to become as common as reporting a fire, or a suspicious package at the train station, or asking for Angela. We should never stop talking about security stuff with anyone, and we should never ask the question “should we still train them?”. It shouldn’t be a chore; it should be part of everyday conversation. Then it becomes normal, not taboo or scary; it’s something we all see and talk about, there is no shame, and we can collectively help each other, our grandparents, our parents, and our kids to consider cyber safety at home. They will then take this into work and use their experiences, and those of their friends and family, to protect you, alongside the technical controls you were able to implement.

"Building a strong reporting culture"

Appreciate the intelligence: it's much faster to rule out a report than to quarantine and rebuild after an unreported event. Be nice!


In conclusion, you may have noticed that throughout my response I don’t mention users; instead I use the words staff or people. Users are faceless individuals, and I fundamentally disagree with the use of the term in the IT and security space.

I know it may seem like I’m bashing the piece, but there are some really strong points in there, and as its conclusion says, “we don't have to choose between usability and security. Bringing the two together can achieve the right level of security, while also allowing people to get on with their jobs, and without blaming them when things go wrong.”

We all know there isn’t a silver bullet; it’s probably more shrapnel scattered over a minefield that needs piecing together to form a silver-bullet-looking thing. I think we need to be clear that it is not easy, and the questions I received prior to writing this worried me, so I’ll repeat: people are a cog in the defence machine, made up of technical controls and human behaviours. There is no hierarchical ladder of defence. Stronger together and all that!

Stephen Middleton

Information Security Risk & Compliance Manager

1y

Great article… in the People, Process, Technology triad, the People element is often the most important yet most challenging…

Marc Smith

Head of Projects / Business Engagement (Corporate Security)

1y

Good article Jemma, and I concur with your assessment.

Tim Ward

Co-founder and CEO at ThinkCyber. Delivering secure behaviour change with Redflags®, real-time security awareness.

1y

Good on you for challenging this blog Jemma, absolutely agree. The author seems to be speaking purely from a technical perspective and I'm intrigued as to how his colleagues in the more social/culture space feel about it, as NCSC are working hard on Culture and Resilience. I don't believe that as an organisation, they believe that technology is the whole answer! Your dissection is great and this bit really resonated with me: "I don’t know why *worth* infuriates me so much, perhaps it's that I read “are your people worth investing in?” Of course they are!" I wonder if that sentence had some expletives in it in the first draft?

Andy Johnson

Founder, BeCyberSafe.com

1y

Totally agree with your point about implementing a no-blame culture to encourage users to be open, so that lessons can be learnt! Many moons ago before I moved into cyber security I studied for a degree in aeronautical engineering, with my dissertation being on air crashes (a bit macabre I know!). Whilst there are many reasons why aviation is so safe today, one industry-wide scheme that really stood out to me is the independent "Confidential Human Factors Incident Reporting Programme" (known as CHIRP). This is a way for anyone in aviation (eg pilots, ground crew, engineers etc) to report a safety incident or near miss to an independent third party, confidentially & without fear of blame or repercussion from their employer, so that trends can be spotted and lessons learnt & shared across industry. I've often thought a similar system could perhaps be useful within cyber security, although I've never figured out how it could be implemented (eg most users wouldn't bother going out of their way to report a near miss from a phishing email to an external body, unlike aviation professionals where near-misses can be a life & death matter). CHIRP is definitely worth a Google & a read about anyway if you're not already aware of it!

Tracey Corbett

Cyber Compliance Lead - Cyber Security Division at UK Health Security Agency

1y

Spot on as ever!
