Deception Environments: Beyond Chaos Testing
Source: https://queue.acm.org/detail.cfm?id=3494836


One of Malcolm Gladwell's Revisionist History podcast episodes is titled "Taxonomy of the Modern Mystery Story" [1]. Gladwell sets up a structure into which, he posits, every mystery story falls. There are four types of mystery stories, he says. If we imagine the four directions North, South, East, and West drawn as points on two intersecting lines, we have:

North - Law enforcement is present and highly effective. Examples of this type of story are police procedurals with brilliant police detectives.

South - Law enforcement is present but malignant.

East - Law enforcement is present but highly ineffective. Examples of this type of story are the Sherlock Holmes stories, Hercule Poirot, and, closer to home, Satyajit Ray's Feluda.

West - There is no law enforcement present. The Wild Wild West.

If we try to sketch a similar picture for methods of improving system resilience, along a single North-South line, we have in the North the familiar Chaos Testing. What lies in the South?

What is the opposite of creating an environment and then having engineers in your own company or team break its components down one by one?

I would argue it is the creation, maintenance, and use of Deception Environments: having external actors come in and break the system, observing what they do, and improving your system's resilience based on that feedback.

The idea is to deliberately let your system get compromised and leverage the expertise and ingenuity of the attacker to learn the system's weaknesses and address them. The authors of the article "Lamboozling Attackers: A New Generation of Deception" [2] call this Attack Observability.

Historically, the way to understand how real-world attackers exploit systems has been to build honeypots: toy systems left open for attackers to exploit, so that defenders can learn from the resulting incidents. Honeypots, however, suffer from a fidelity problem. Attackers quickly figure out that the systems are decoys and abandon their efforts.

This brings up the three factors that govern building systems or environments that entice attackers and enable Attack Observability:

Fidelity - How closely does the test system mimic the actual one?

Isolation - How tightly is the test system isolated from the real one?

Cost - How much does it cost to build and run the test system?

Honeypots have low cost, high isolation, but low fidelity.

Enter Deception Environments: fully built-out environments containing all the system components of the real production environment, but isolated from the production system itself.

Deception Environments have high cost and high fidelity, but may not offer the best isolation.

Using a technique the authors call Honey Patching, detected attackers are diverted by the load balancers into the Deception Environment, which holds synthetic data and provides outside-in observability, so the attackers' actions can be observed in detail.
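
To make the routing idea concrete, here is a minimal sketch (not the article's implementation) of how a front-end proxy might divert suspicious traffic to a decoy backend. The addresses, ports, and the detection rule are illustrative assumptions; a real deployment would rely on honey-patched vulnerabilities, WAF signals, or anomaly detection rather than a User-Agent check.

# Sketch of honey-patch-style routing (illustrative only): requests that trip a
# placeholder suspicion check are proxied to an isolated decoy backend serving
# synthetic data, while normal traffic goes to production unchanged.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

PRODUCTION_BACKEND = "http://127.0.0.1:8080"  # real service (assumed address)
DECEPTION_BACKEND = "http://127.0.0.1:9090"   # decoy environment (assumed address)

def looks_malicious(handler):
    # Placeholder rule; stands in for real detection signals.
    return "sqlmap" in handler.headers.get("User-Agent", "").lower()

class HoneyRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = DECEPTION_BACKEND if looks_malicious(self) else PRODUCTION_BACKEND
        with urlopen(Request(backend + self.path)) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
        # Record the routing decision so the attacker's actions can be studied.
        print(f"{self.client_address[0]} {self.path} -> {backend}")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), HoneyRouter).serve_forever()

In practice the equivalent logic would live in the load balancer itself, and the diverted session would land on hosts that are network-isolated from production.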

Deception Environments sound like a lot of work. Fortunately, with cloud computing, deployment automation, and software-defined networking (SDN), bringing up and maintaining a production-like environment isn't too difficult.
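
As a rough illustration of how little glue code this can take, here is a hedged sketch that reuses a production Docker Compose definition to stand up an isolated copy under its own project name and then seeds it with synthetic data. The file name, project name, and seeding script are assumptions for the example; a real setup might instead use Terraform, Kubernetes namespaces, or an SDN-segmented VPC.

# Sketch: bring up an isolated, production-like deception copy of the stack.
# File names, project names, and the seeding script are hypothetical.
import subprocess

COMPOSE_FILE = "docker-compose.yml"        # same stack definition production uses (assumed)
DECEPTION_PROJECT = "deception-env"        # separate project = separate network and volumes
SYNTHETIC_SEED = "seed_synthetic_data.py"  # hypothetical script that loads fake data

def up():
    # A distinct project name keeps containers, networks, and volumes
    # namespaced away from the real production deployment.
    subprocess.run(
        ["docker", "compose", "-p", DECEPTION_PROJECT, "-f", COMPOSE_FILE, "up", "-d"],
        check=True,
    )
    # Populate the decoy with synthetic, non-sensitive data.
    subprocess.run(["python", SYNTHETIC_SEED, "--project", DECEPTION_PROJECT], check=True)

def down():
    # Tear everything down (including volumes) after an engagement.
    subprocess.run(
        ["docker", "compose", "-p", DECEPTION_PROJECT, "-f", COMPOSE_FILE, "down", "-v"],
        check=True,
    )

if __name__ == "__main__":
    up()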

The payoff is highly valuable, real-world feedback on the resilience of your system.

References:

[1] Malcolm Gladwell. "Taxonomy of the Modern Mystery Story." Revisionist History. https://www.pushkin.fm/podcasts/revisionist-history/taxonomy-of-the-modern-mystery-story

[2] "Lamboozling Attackers: A New Generation of Deception." ACM Queue. https://queue.acm.org/detail.cfm?id=3494836

