Okta - A Small Breach Became a Big Story

I realize that for some this didn't feel like a "small breach." And writing this, one week from the initial reports of this breach, I've watched the news cycle lose track of this in favor of stories about the January 6th investigation, the Russian war with Ukraine, and strange happenings at the Oscars. However, Okta's actions and response to the Lapsus$ breach will be a case study for some time to come, just as Colonial Pipeline's ransomware event has been. Some of the lessons from their experience will be obvious, perhaps almost to the point of cliche. Others may not be, and I hope to cover several of each in this article. As I've said before, I'm not interested in beating up on Okta, but I think there are lessons we can learn from this experience. And we'd better learn these lessons lest we be doomed to repeat this over and over again.

Lessons for a breached service provider

Believe it or not, Okta really did some things right in their response. Their timeline shows both a quick response and appropriate steps. Very quickly after identifying that an administrative account was under attack and may have been compromised, they suspended the account and terminated all of its active sessions. Perhaps they could have done that faster, but that kind of armchair quarterbacking without more context is relatively pointless: their response took just over an hour. Automation might help here, but oftentimes it is better to react with information than to simply react; and information gathering, correlation, and analysis take time.
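
To make that "stop the bleeding" step concrete, here is a minimal sketch of what automated containment against an identity provider can look like, using Okta's user lifecycle and sessions APIs as the example. This is purely illustrative and is not a description of how Okta themselves responded; the org URL, token handling, and user ID are placeholder assumptions.

```python
# Minimal containment sketch. Assumptions: an Okta org URL, an API token with
# sufficient admin scope, and the suspect user's Okta ID are already known.
import os

import requests

OKTA_ORG = os.environ["OKTA_ORG"]          # e.g. "https://example.okta.com" (placeholder)
API_TOKEN = os.environ["OKTA_API_TOKEN"]   # API token kept out of source control
HEADERS = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}


def contain_account(user_id: str) -> None:
    """Suspend an account and revoke all of its active sessions."""
    # 1. Suspend the user so no new sessions can be established.
    resp = requests.post(
        f"{OKTA_ORG}/api/v1/users/{user_id}/lifecycle/suspend",
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()

    # 2. Terminate existing sessions so an attacker holding a live session
    #    token is cut off as well; suspension alone doesn't do this.
    resp = requests.delete(
        f"{OKTA_ORG}/api/v1/users/{user_id}/sessions",
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    contain_account("00uEXAMPLEUSERID")  # hypothetical user ID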

Okta, and their subcontractor, were quick to involve a forensic team to go back and investigate the incident. They clearly understood that what they saw could have been just one item in a larger event. It turns out they were right. Kudos to them for not just assuming it was a small, isolated event. The mistake Okta admits they made is in leaving the investigation in their subcontractor's hands and in not probing enough at the beginning of the investigative process.

However, that's a more complicated issue than it may first appear. Contracts and legal obligations rear their heads. Okta may not have had the contractual ammunition they needed to be more forceful. And that's a thorny issue - contractors and their legal teams rightly want to protect sensitive data about their own operations and security practices, while customers (at least business-to-business customers) want the right to all sorts of auditing and review. Having been in some of these contractual conversations, I can say this is a point of significant contention - and I don't imagine that's going to change overnight.

But I promised a few lessons, so here they are:

  • Take immediate action to "stop the bleeding"
  • Follow-up on these events - investigate them thoroughly (and quickly)
  • Even if you don't disclose immediately, ensure you are updating your understanding of the incident as the investigation proceeds - don't wait for the "final report" to be published

So your *aaS provider got compromised, are you prepared?

If you got one of Okta's ~300 emails saying your company's authentication solution may have been compromised, what would your response have been? Admittedly, Single Sign-On and Multi-Factor Authentication solutions are critical to any quality security program, but are all your eggs in that basket? Is there a moat in front of your castle wall? Do you have anybody manning the ramparts to keep an eye out for marauders coming over the hill?

[Image: a castle on high ground with a wall and towers for a commanding view]

Yes, those are all cliches, but they exist for a reason. Defense in depth is critical. If a core security solution is compromised, what other solutions do you have to at least minimize the damage?



I'm a firm believer in the NIST CSF functions of Identify, Protect, Detect, Respond, and Recover. If you don't have other Protect capabilities to fall back on when a major part of your security program may be compromised, have you compensated with Detect and Respond capabilities?

The lesson here is that you will have a *aaS vendor get breached. It will happen several times in the next 5 years - we will all face our vendors getting breached to some extent or another. We have to accept this almost as "business as usual" going forward, build defense in depth into our Protect tooling, and ensure we are ready to Detect, Respond, and Recover as well.
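
As one concrete example of a compensating Detect capability, here is a rough sketch of independently watching the Okta System Log for sensitive events and feeding them into your own alerting, rather than relying entirely on the provider's own monitoring. The event type strings, polling interval, and alerting stub are my assumptions for illustration; verify them against what your org actually emits before depending on them.

```python
# Rough sketch of an independent detection loop over the Okta System Log API.
# Assumptions: OKTA_ORG / OKTA_API_TOKEN environment variables are set, and the
# listed eventType values match what your org actually emits (verify first).
import os
import time
from datetime import datetime, timedelta, timezone

import requests

OKTA_ORG = os.environ["OKTA_ORG"]
API_TOKEN = os.environ["OKTA_API_TOKEN"]
HEADERS = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}

# Example event types worth watching independently of the provider's alerting.
WATCHED_EVENTS = [
    "user.session.impersonation.initiate",  # support/admin impersonating a user
    "user.account.reset_password",          # password resets on privileged users
    "user.mfa.factor.deactivate",           # MFA factors being removed
]


def fetch_recent_events(minutes: int = 5) -> list:
    """Pull System Log entries for the watched event types from the last N minutes."""
    since = (datetime.now(timezone.utc) - timedelta(minutes=minutes)).isoformat()
    event_filter = " or ".join(f'eventType eq "{e}"' for e in WATCHED_EVENTS)
    resp = requests.get(
        f"{OKTA_ORG}/api/v1/logs",
        headers=HEADERS,
        params={"since": since, "filter": event_filter},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def alert(event: dict) -> None:
    """Stand-in for your real alerting path (SIEM, pager, ticket, etc.)."""
    actor = event.get("actor", {}).get("alternateId", "unknown")
    print(f"ALERT: {event['eventType']} by {actor} at {event['published']}")


if __name__ == "__main__":
    while True:
        for event in fetch_recent_events():
            alert(event)
        time.sleep(300)  # poll every five minutes
```

Whether you poll an API like this or stream provider logs into a SIEM, the design point is the same: your ability to detect misuse of a *aaS platform should not depend solely on that platform's willingness or ability to tell you about it.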

Shouldn't they have alerted us all in January?

[Image: an arrow pointing one way, composed of many smaller arrows pointing the other way]

At the risk of sounding like an Okta apologist, let's just delve into what goes into any company making the decision to disclose a breach. There are a ton of conflicting factors, loopholes, and opposing points of view. These are just a few of them:


What are we compelled to disclose?

Worldwide disclosure requirements are all over the map. Even just here in the US there is a confusing morass of disclosure requirements. However, to over-simplify, the disclosure requirements for non-government entities in the US generally revolve around private information about individuals (PII, PHI, PCI data), breaches of businesses defined as critical infrastructure, or breaches of publicly traded companies that could significantly alter their valuation (the SEC is pretty strict about that). These are all complicated, of course, by individual laws in each state identifying what that state believes PII is and what your obligations are to announce a breach, which is an additional legal hell of its own for many companies. But if your incident doesn't check at least one of those major boxes, you probably don't have to disclose it - except to customers to whom you have specific contractual obligations.

Is somebody going to "disclose" a breach for us?

One factor showing up more and more is that attackers are bragging publicly about their conquests, as we saw happen to Okta. In the past malicious actors were a bit more circumspect - but their goals were different than they are today. More recently, we've seen a security researcher claim (likely correctly) to have obtained the raw investigation results from the third party conducting the investigation, and share some fairly damaging material from them - material painting a disappointing picture of the relative lack of sophistication of the attack, and of the lack of detection and response capabilities during the first few days of the breach. Okta isn't the first to have this happen. So the impetus for self-disclosure is shifting beyond a pure "damage control" calculation. Transparency becomes almost required. Of course, that disclosure may reveal that your business isn't meeting contractual conditions and requirements you've agreed to with your customers, putting you in a very difficult situation.

Public disclosure or private?

In short, do we tell "everyone" or just those who we think may have been impacted? What a thorny issue. If you tell the world, you risk immediate brand damage. If you tell only those who may have been impacted, you risk word getting out from them anyway, and the resulting damage to your reputation may be greater or smaller than if you had disclosed publicly yourself. Sometimes this is determined for you by regulation, and other times you have to decide for yourself.

The pressure of shareholder and company value


The elephant in the room is that the company is a for-profit entity, accountable to the people/investors who fund it. Shareholders and owners really like making money, and they don't like anyone or anything that interrupts that, such as announcing an adverse occurrence. So if there is any way to justify not publishing a breach, well, you can imagine what the investors want - their uninterrupted flow of company value.

This isn't an "Okta problem," this is a regulatory problem

Yes, I said it. It is naive in the extreme to believe Okta is somehow a unique case. Think about the company you work for. Could your company detect a breach? Once detected, could you follow up to figure out the impact, or are you more likely to have detected it, stopped it, and simply assumed that was that? If your company had a breach event but had no regulatory obligation to report it, would you have?

This could be solved with clear, concise regulation. Regulation that defines what breaches must be disclosed, how to disclose them, how quickly they must be disclosed, and the penalties for failing to report one. (Should a breach you knew about and didn't report carry stiffer or lesser penalties than one you didn't know about?) All jokes about the number of laws that are clear and concise aside, if we want to compel breach notifications, this is what it will take. There are benefits to legislating this:

  • Encouraging companies to improve their security postures so they don't have to disclose or face penalties
  • Making data security into a competitive advantage
  • Raising public awareness of the frequency and severity of breaches at companies
  • Eliminating (or reducing) some of the most contentious negotiation points in any data-sharing-related contracts
  • Providing better awareness to law enforcement and global information security resources so they can understand malicious actors' patterns, tactics, techniques, and goals

I understand there are those who oppose any more regulation - and those who doubt that this would work. I'm open to other suggestions, but I can't see how the status quo is acceptable - all we're doing is repeating the same experience over and over again.
