LEARNING FROM THE CHANGE HEALTHCARE RANSOMWARE ATTACK
Written by Premal Parikh
One of the most significant cybersecurity attacks ever was the one on Change Healthcare in February 2024. It impacted healthcare services across America. According to the company, the ransomware incident cost over $800 million in the first quarter of 2024, with the full-year impact estimated between $1.3 and $1.6 billion.
Change Healthcare is part of UnitedHealth Group, one of the largest healthcare services companies in the world. This not only demonstrates that nobody is immune to cybersecurity attacks but also highlights that the time to resolution was widely considered unacceptable.
Public information shows that the attack originated via a remote access tool that wasn’t protected with multi-factor authentication (MFA).
There is clearly more to this story that we may never be told.
Here are some learnings that companies should apply in their businesses, if they haven’t already:
1. Get the basics right
The basics were missing at Change Healthcare. MFA should be enabled on all external-facing systems (if not all systems). Data should also be encrypted, so that even if it is exfiltrated, the bad actors can’t leverage it.
2. Compliance is needed, but it’s not enough
Companies may have all compliance certifications in place, but those audits test a firm’s process maturity, never its real-time posture. This doesn’t mean compliance certifications and audits like HITRUST, SOC 2, and ISO aren’t important. However, they only represent a snapshot of a firm’s maturity and processes at a specific point in time. In the case of HITRUST, less than 1% of all certified firms have reported a breach, which is very good considering they are mostly health services firms — among the most attacked.
But you can still be in that 1%.
3. External attack surface management
All mid-to-large firms should maintain real-time awareness of all their external-facing assets. Running a scan once a month (or less often) isn’t enough. Attackers are constantly scanning organizations for new services, applications, or ports that may have come online and aren’t secure. Companies need to be doing the same.
There are several ways to do this — whether with a commercial service or a product you run yourself.
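As a minimal sketch of what continuous surface scanning means in practice, the snippet below checks a list of hosts for open TCP ports using only Python’s standard library. The host inventory and port list are hypothetical placeholders; a real program would pull assets from DNS records, cloud provider APIs, and certificate-transparency logs, and run on a schedule.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical short list of ports worth watching on external assets.
COMMON_PORTS = [22, 80, 443, 3389, 5900, 8080]

def open_ports(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

def scan(hosts: list[str]) -> dict[str, list[int]]:
    """Scan many hosts concurrently and report open ports per host."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        return dict(zip(hosts, pool.map(open_ports, hosts)))
```

Comparing each run’s output against the previous run is what turns a scanner into surface-area management: any newly open port is an alert, not a statistic.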
4. Network segmentation
Assume you will be breached. Design and test your network to make sure there is strong segmentation of networks. This ensures that if one section of the network is compromised, the ransomware remains contained and is unable to spread to other sections.
Clearly, in the case of Change Healthcare, that didn’t happen. It is unclear whether that was due to bad network design or other vulnerabilities that were leveraged.
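A useful way to reason about segmentation is as a default-deny policy: cross-zone traffic is blocked unless an explicit rule allows it. The sketch below models that idea; the zone names and rules are hypothetical, and in practice this logic lives in firewalls and cloud security groups, not application code.

```python
# Illustrative "default deny" segmentation policy: traffic between zones is
# blocked unless an explicit (source zone, destination zone, port) rule allows it.
ALLOWED_FLOWS = {
    ("web-dmz", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny check: only explicitly allowed cross-zone flows pass."""
    if src_zone == dst_zone:
        return True  # intra-zone traffic is handled by host-level controls
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS
```

Note what the policy above buys you: there is no rule from the DMZ to the database tier, so ransomware landing on a web host has no direct path to the data.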
5. Continuous red team testing
A ‘once a year and done’ approach just doesn’t work. Continuously test your environment the way a hacker would, and conduct regular red team exercises against key services, with an ethical hacker trying to capture the flag.
Leverage these exercises to improve internal security controls and alerting.
6. Incident response company and training
Do you know who your incident response company is? Identify one and make sure a contract is in place that obligates them to act in the organization’s best interests in a timely manner. You don’t want to be negotiating a contract while an attack is underway.
Start by understanding your cybersecurity insurance and knowing who is on its incident response panel. Then reach out and get a contract in place. This might include paying a retainer, but for larger companies, that is good insurance. Work with them so they know your key assets and are ready to go.
Do real incident response drills and training. Play out the ‘what-if’ scenarios before they become real.
7. Business continuity / disaster recovery (BC/DR) testing
Just like incident response drills, the organization should start by documenting its BC/DR plan. The plan should include the business criticality of each application and its recovery time objective.
BC/DR testing should be as realistic as possible. Don’t limit your testing to tabletop exercises, although those are a good place to start. Instead, ‘declare an emergency’ and fail over to the backup applications and data centers.
This practice will uncover a number of issues that will need to be addressed.
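One lightweight way to make drill results actionable is to record the plan as data and compare each failover drill against it. The sketch below is an assumption about how such a plan could be structured — the application names, criticality tiers, and times are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    criticality: str   # e.g. "critical", "important", "deferrable" (hypothetical tiers)
    rto_minutes: int   # recovery time objective from the BC/DR plan

def failed_rtos(apps: list[App], measured: dict[str, int]) -> list[str]:
    """Return apps whose measured failover time (minutes) exceeded their RTO.

    Apps missing from `measured` were never drilled, so they are flagged too.
    """
    return [a.name for a in apps
            if measured.get(a.name, 10**9) > a.rto_minutes]

# Hypothetical plan and drill results.
plan = [App("claims-processing", "critical", 60),
        App("reporting", "deferrable", 1440)]
drill = {"claims-processing": 95, "reporting": 300}
```

Running the check after every drill turns BC/DR from a document into a regression test: the list of apps that missed their recovery targets is the work queue for the next quarter.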