The Smarter We Are, the Dumber We Are?
Can our overconfidence be a problem?
Years ago, when Google first released its new browser, Chrome, it had a few features that I did not like as much as their counterparts in Microsoft Internet Explorer (IE). Mind you, at the time I was a full-time Microsoft employee, but Chrome had a superior internal security model (albeit one imagined by a Microsoftie…but that is another story). Google, with Chrome, had a chance to reinvent the browser in a more secure form.
At the time, IE was suffering from over one hundred unique software vulnerabilities a year, year after year, and was the most attacked software program by a wide margin. A lot of the security problems in IE traced back to its creation nearly two decades earlier and all the legacy features it had continued to support. Google, with a brand-new browser, had none of those concerns. They could build the most secure browser possible without worrying that a security feature would change or wipe out some legacy feature or code used by hundreds of millions of people.
Note: That is why Microsoft created Microsoft Edge, a brand-new browser that did not have to support all of that bug-prone legacy IE code.
There was a lot to like about Chrome, especially security-wise, under the hood. But one feature was really disappointing: the way it treated digital certificates. In browsers, digital certificates are used mainly to help create and maintain HTTPS connections between the browser and the host website (and to verify digitally signed downloads). A digital certificate contains someone’s public key, signed by a Certification Authority (CA), attesting to the validity of the digital certificate, primarily that the contained public key really does belong to the claimed owner of the digital certificate.
Most of today’s digital certificates use a format known as x.509. x.509 details the different fields that a digital certificate can have, such as subject name, validity dates, public key, serial number, and so on. Anyone who understands digital certificates well can open any x.509 digital certificate and learn a lot about it. What may look like hieroglyphics to some is a lot of useful information to others (with me among the latter group). We “digital certificate people” pride ourselves on understanding the intricacies of digital certificates and being able to spot warning signs and frauds. Since most malware and malicious websites use “valid” digital certificates these days, recognizing the maliciously used ones is a bit of an art, one that we are usually more than willing to share.
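If you have never poked at those fields yourself, here is a minimal sketch of what that kind of inspection looks like in practice. It is purely my illustration (nothing from Google or Microsoft): Python with the third-party cryptography package pulls a live site’s x.509 certificate and prints some of the standard fields just discussed.

```python
# A minimal sketch (illustrative only) of pulling a live site's x.509
# certificate and printing some of its standard fields.
# Requires the third-party package: pip install cryptography
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import hashes

host = "example.com"  # any HTTPS-enabled site

# Open a TLS connection and grab the server's certificate in DER form.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert)

# A few of the x.509 fields mentioned above.
print("Subject:      ", cert.subject.rfc4514_string())
print("Issuer (CA):  ", cert.issuer.rfc4514_string())
print("Serial number:", hex(cert.serial_number))
print("Valid from:   ", cert.not_valid_before)
print("Valid until:  ", cert.not_valid_after)
print("SHA-256 print:", cert.fingerprint(hashes.SHA256()).hex())
```

That handful of fields is only a summary; a full dump, the way IE’s certificate dialog presented it, runs to the 15-20 fields mentioned below.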
But when Chrome first came out, if you went to an HTTPS-protected website, it would show you the digital certificate, but not most of the details. With IE and most other browsers, anyone could click on the little “lock icon” when connected to an HTTPS-enabled website and see every detail contained in the x.509 certificate, usually 15-20 fields of information. But in Chrome, all you could see was a very small summary subset of fields. It included some of the most important information, but hid most of the rest. Initially, we thought the display of digital certificates…or lack of display…was just because Chrome was in beta. But years later, the same treatment of digital certificates remained ingrained.
It was a real pain. Digital certificate experts like me got used to connecting to HTTPS-enabled websites with a second browser that showed more digital certificate details by default whenever we wanted to investigate a certificate. It was not a huge deal, but it was an annoyance. I, and others, wondered why Google refused to show more digital certificate details.
Then I came across a Google Chrome research paper that explained the digital certificate treatment. I would post the link here so you could read it, but I cannot find it. The paper, which I downloaded, was lost a long time ago during some laptop upgrade, and I cannot remember enough of the title to search for it.
But the key part was that it revealed why Google had intentionally decided not to show more details of digital certificates. While making Chrome, they had done studies and found that very few people looked at the details…only us digital certificate weirdos. And they discovered that the more details they showed, the more likely we digital certificate “experts” were to accept and use a fraudulent digital certificate. Yep, you read that right.
It turns out that the more someone knew about digital certificates, the more likely they were to incorrectly explain away an “error” they came across as an unintentional mistake made by the digital certificate owner. And this was true in my experience. If you get into digital certificates and go out of your way to inspect them and all of their details, you come across mistakes all the time. The most common mistake was that the legitimate owner of a website would get a digital certificate assigned to one host or website name and then use it on another, not realizing that the subject name in the digital certificate must match the name of the host or website it is used on. Very common mistake. So much so that when you saw it, you would likely say, “Oh, yet another clueless digital certificate user who does not understand how digital certificates work!” I promise you, we digital certificate experts were this dismissive about mere mortals trying to use them correctly.
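To make that common name-mismatch error concrete, here is a small sketch using Python’s standard ssl module against wrong.host.badssl.com, a public test endpoint from the badssl.com project (my example choice, not something from the Google study), which deliberately serves a certificate issued for a different name than the one you are visiting:

```python
# A small illustration, using only Python's standard ssl module, of the
# name-mismatch error described above.
import socket
import ssl

# Public test endpoint that serves a certificate whose subject name
# does not match this hostname.
host = "wrong.host.badssl.com"

context = ssl.create_default_context()
try:
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Connected:", tls.version())
except ssl.SSLCertVerificationError as err:
    # This is the very error an overconfident expert might wave off as
    # "yet another clueless certificate owner", when it could just as
    # easily be an attacker's certificate on a malicious site.
    print("Certificate rejected:", err)
```

The point is that the error looks identical whether the cause is a sloppy-but-legitimate website owner or an actual attacker, and that ambiguity is exactly what the experts in Google’s study resolved the wrong way.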
Anyway, back to Google’s study. What they found was that when a so-called digital certificate expert came across a real invalid, malicious digital certificate, they were more likely to pull up the details of the digital certificate, see what they thought were regular, non-malicious user errors, ignore the digital certificate warning(s), and continue on to the malicious website, compromising themselves. What!? Yes.
And I am not talking about the famous Dunning-Kruger effect, where people who are not experts think they are experts and overestimate their ability. This is similar, but distinctly different. This is real experts with strong ability simply overthinking an error to the point where they too casually dismiss it. It is expert overconfidence. Dunning-Kruger is overconfidence by people who really do not know their subject matter. In this case, it is genuine experts with solid knowledge simply making poor security decisions.
This is very counterintuitive. But I trust Google’s research, and their data clearly showed that they could protect more people more often by showing fewer digital certificate details. For a long time, I thought this was an unexplainable one-off. But as I paid more attention to other events in my life or things that I saw on TV, I saw it play out constantly, often with deadly consequences. It would happen when some experienced, knowledgeable expert thought they could safely handle something, and the event turned deadly for them. It was ship captains intentionally heading into hurricanes, losing themselves and the whole ship. It was people who owned lions as pets breaking every industry safety rule until it was too late. It was snake handlers killed by the eleventh snake bite they had received. It was plane pilots not acting early enough to handle icy conditions. Every nuclear accident I have ever read about involved one or more trained nuclear engineers ignoring and overriding the multiple automated danger alerts indicating a serious problem. Over and over, I saw real experts hurt and killed because they thought they could safely handle a situation that was clearly riskier than they believed. I am not sure this phenomenon has a name as cool as Dunning-Kruger, but it is clearly a common problem. And one that many industries recognize and actively try to prevent through education and training.
These overconfidence issues happen in the computer security world all the time. I have read many stories where one or more computer security experts saw something in a security log that was clearly anomalous and high risk, and incorrectly explained it away as a non-event. I remember one major security event where the defenders had over 15,000 security alerts about a brand new, unexplained executable. One of the most knowledgeable and trained computer security people said he knew what it was and that it was just a legitimate upgrade being pushed out. But it was not. It was an eavesdropping trojan that pwned a major retailer and compromised millions of their customers’ credit cards.
Now, I work full-time at KnowBe4. We do training to help people better spot and deal with social engineering and phishing (along with other things like compliance). In the years that I have been with KnowBe4, I have seen some of the most phishing-aware people get taken down by a phishing campaign. It happens so often that any time an overly confident person tells me there is no way they can be phished, I just laugh out loud. I have seen so many of those people turn out to be the one who got phished and ended up infecting their whole environment that it is not even surprising anymore. It seems that the people who tend to get successfully phished are the ones who understand phishing the least and the ones who understand it the most. The ones in the middle appear to be appropriately scared of it.
And I am not pointing fingers. I consider myself among the most phish-savvy people in the world. I have been aware of and fighting social engineering and phishing since 1987. I write about it all the time. I work at an anti-phishing company full-time. I research and do forensic analysis on phishing attacks all the time. I literally write and speak and present on phishing and social engineering 10 hours a day. You cannot fool me!
Except, last year, I got tricked by three different simulated phishing emails (as I wrote here previously: https://blog.knowbe4.com/shame-shame-i-got-phished). Now, my only saving grace is that I do not think I have ever been successfully phished by a real criminal. All my failures were tests, right? But who knows? I am phishable! I am the expert I am talking about who is apparently overconfident. There is a chance that I have failed a real phish and did not realize it.
And once again, just recently, I was reviewing an upcoming research paper by another vendor that showed the same thing. Many of the biggest ransomware attacks were only successful because one of the organization’s most knowledgeable and savvy people made a mistake and got tricked into providing credentials or running a Trojan horse program. I read the report, and as shocking as it might once have been, it was not shocking at all now. It made me think back to the Google Chrome research all those years ago and to my own anecdotal experiences. The more things change, the more they stay the same.
The Solution
Training and education are the best solutions for reducing cybersecurity threats when policies and technical defenses do not work. In general, security awareness training tries to build a culture of healthy awareness and skepticism in an organization so that employees can both better identify and appropriately respond to threats. That is what all security awareness training programs do.
And in the anti-phishing training world, we all know that people who are “frequent clickers” need more training and testing. When you send out a simulated phishing test and someone fails it, they should get immediate additional training. You keep giving them training until they stop clicking on real and simulated phishing emails. We all know this.
The new part of training I am recommending today is to make sure your computer security experts do not get too cocky and overconfident. In highly critical roles, such as astronauts, pilots, and law enforcement, the people in those roles are constantly taught about the danger of overconfidence and how to remain ever vigilant. We need to do the same with our cybersecurity training. All of us, especially the “experts”, need to understand that overconfidence can breed mistakes. Make sure the concept of overconfidence is a topic discussed with your cybersecurity leaders. Give examples of how an expert’s overconfidence led to a failure. Examples abound in the cybersecurity industry.
I think it can be super helpful if your respected cybersecurity leaders share personal examples of times when they failed. It is why I have written about my own experiences. When I was at Microsoft, one of the annually required cybersecurity training videos included probably the most respected technical person in all of Microsoft. And there he was, on camera, telling us how he got tricked into opening a malicious document from a spearphishing email. He even shared how he did not realize at first that he had been duped, and it was only later that he realized he had likely been the victim of a successful phishing attack. He said he was even hesitant to report the possible phishing attack (it had not been confirmed yet) because he did not want to embarrass himself. His ego was getting in the way. But then he fell back on his training, which said to report all suspected phishing attacks, even if you do not know for sure that it was a real attack. He reported it, and it was confirmed to be a real phishing attack. The malicious executable that got launched and installed on his computer was stopped before it could spread further. It was great to see the person whom I think everyone at Microsoft considered its smartest admit to being successfully phished. And what he overcame was his own ego, which saved the company a lot of damage. There is no doubt that his public confession paid dividends for years.
This is all to say: do not just concentrate on your low-end “frequent clickers”. You do need to do that, but it cannot hurt to concentrate on another group as well. Anecdotal research says your high-end security experts might need some particular training to prevent overconfidence. Training and education benefit everyone.