Fighting Cyber Extortion through Storage

We have just entered a new era of cybercrime: the era of cyber extortion. In this era, digital attackers invade systems and demand money for either or both of two things: to give victims back access to their content, and to not disclose the stolen content to the public.

WannaCry taught us a good lesson on why we should keep software updated and back up content regularly. It also reminded us that backups often do not work when we need them the most.

Updating software and keeping backups certainly help, but I have been thinking about a more fundamental change: making attacks so hard that the cost/reward ratio of extortion becomes unattractive to attackers.

More specifically, I was thinking about borrowing from enterprise storage systems. In such systems, any piece of meaningful information goes through the following steps, seamlessly to the user: i) chop the information into M pieces; ii) generate N replicas of each piece, resulting in M x N fragments of data to store; iii) send each fragment to a particular storage device; iv) the device stores the received fragment locally. Whenever the end user requests a given piece of content, the storage system retrieves the M pieces from their locations (following complex placement logic and performance criteria) and then assembles them to recover the original content.

Now take this approach to end users of mobile and PC devices. Let's assume that the replicas are stored across a pool of resources that includes the local storage device and cloud storage providers (e.g., Dropbox, Google, Amazon, etc.).
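
To make the idea concrete, here is a minimal sketch of the chop/replicate/scatter flow described above. M, N, the backend names and the round-robin placement are assumptions for illustration only, not a real provider API.

```python
from typing import Dict, List, Tuple

M = 4  # pieces the content is chopped into
N = 2  # replicas of each piece, i.e. M x N fragments in total

def chop(content: bytes, m: int = M) -> List[bytes]:
    """Split the content into m roughly equal pieces."""
    size = -(-len(content) // m)  # ceiling division
    return [content[i * size:(i + 1) * size] for i in range(m)]

def scatter(pieces: List[bytes], backends: List[str], n: int = N) -> Dict[Tuple[int, int], Tuple[str, bytes]]:
    """Assign every replica of every piece to a backend, round-robin."""
    placement: Dict[Tuple[int, int], Tuple[str, bytes]] = {}
    slot = 0
    for i, piece in enumerate(pieces):
        for r in range(n):
            placement[(i, r)] = (backends[slot % len(backends)], piece)
            slot += 1
    return placement

def reassemble(placement: Dict[Tuple[int, int], Tuple[str, bytes]], m: int = M) -> bytes:
    """Recover the original content from any one replica of each piece."""
    pieces = []
    for idx in range(m):
        _, fragment = next(v for (i, _), v in placement.items() if i == idx)
        pieces.append(fragment)
    return b"".join(pieces)

backends = ["local-disk", "provider-a", "provider-b", "provider-c"]
placement = scatter(chop(b"family photos and tax records"), backends)
assert reassemble(placement) == b"family photos and tax records"
```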

There are various benefits to this approach:

  1. Invading a single top cloud storage provider is hard.
  2. Invading several systems is much harder than invading one.
  3. It splits the blackmail into two parts, each with a different level of difficulty. To blackmail the end user with releasing content to the public, the attacker must obtain each of the M pieces of the original content, and for that the attacker has to invade at least M systems. To blackmail the end user into paying to regain access to the content, the attacker has to invade all of the systems over which the M x N fragments are stored (see the rough numbers sketched after this list).
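
A back-of-the-envelope illustration of point 3, assuming (purely for the sake of argument) that each storage system can be breached independently with the same probability p:

```python
M, N = 4, 2   # pieces and replicas, as above
p = 0.05      # assumed probability of breaching any single system

# Disclosure blackmail: at least one replica of every one of the M pieces
# is needed, so at minimum M distinct systems must be breached.
p_disclose = p ** M

# Access-denial blackmail: every system holding any of the M x N fragments
# must be breached (assuming the fragments sit on M * N distinct systems).
p_deny_access = p ** (M * N)

print(f"breach one system:    {p:.2%}")
print(f"disclose the content: {p_disclose:.6%}")
print(f"lock the user out:    {p_deny_access:.10%}")
```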

These benefits make the attacker's life much harder and, consequently, reduce the appeal of extortion. There is an extra benefit as well: storage cost can be taken into consideration when choosing where to store the fragments.
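
As a tiny illustration of the cost angle, fragments could simply be directed to the cheapest backends first; the prices per GB below are made-up numbers, and a real system would also weigh latency, availability and trust:

```python
from typing import List

# Assumed, made-up prices per GB per month for each backend.
price_per_gb = {
    "local-disk": 0.000,
    "provider-a": 0.023,
    "provider-b": 0.020,
    "provider-c": 0.026,
}

def cheapest_backends(k: int) -> List[str]:
    """Pick the k cheapest backends for the next k fragments."""
    return sorted(price_per_gb, key=price_per_gb.get)[:k]

print(cheapest_backends(3))  # ['local-disk', 'provider-b', 'provider-a']
```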

There are challenges to get there though:

  • First, enterprise storage systems employ complex algorithms to determine the number of replicas and where to store each replica, and for that they typically take into consideration performance and availability, not security. Thus, some research would be required to understand whether security requires new algorithms for optimal results.
  • Second, how do we make such a smart storage system seamless to the end user while preventing an attacker who has successfully invaded the end user's device (physically or virtually) from recovering the pieces of information stored in the other places? This is the hardest challenge, in my opinion.
  • Third, the proposed approach would require more storage space unless someone comes up with a clever idea to reduce the required storage size (a rough sizing sketch follows this list). Would providers be able to offer the extra storage capacity for free? Would ordinary users pay for it? Or would this approach be valuable to premium users only?
  • Fourth, with people generating ever more content, making storage the bottleneck, would providers be able to support the extra storage capacity demanded by this approach at all?
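
A rough sizing sketch for the third and fourth challenges: with full replication, raw capacity grows with the replica count N regardless of how the content is chopped. The figures below are assumptions for illustration only.

```python
user_data_gb = 200   # assumed amount of content per user
N = 3                # assumed replicas kept of each piece

raw_needed_gb = user_data_gb * N          # chopping into M pieces adds no overhead by itself
extra_gb = raw_needed_gb - user_data_gb   # capacity beyond a single copy

print(f"logical data:        {user_data_gb} GB")
print(f"raw capacity needed: {raw_needed_gb} GB")
print(f"extra capacity:      {extra_gb} GB ({(N - 1):.0%} overhead)")
```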

Food for thought and perhaps some startup adventure :-)

Hey Marcos! First, if there is any interest in going forward with that idea, there has been some work on a storage overlay using multiple cloud storage providers in the backend (here is the one I know of: https://ieeexplore.ieee.org/document/6838281/). By the way, it was a Brazilian researcher who initiated that work :)

The important thing is to understand the actual reasons why a backup is safe against WannaCry: 1) the backup content is not directly accessible to WannaCry, and 2) a backup is in essence a versioning system. We assume the files in the backup are immutable, so files encrypted by WannaCry can be reverted to an older version maintained by the backup. However, if your backup is on an external disk that is accessible to WannaCry at the moment of infection, you will just end up with the same problem: your data will be encrypted. I think what matters most is the immutability aspect of versioning in protecting against these malwares. If you can guarantee the immutability of the content and can revert back to any of the older versions, then you have protection against this kind of ransomware.

Getting into more detail: if the API described above is actually the native filesystem API that any user would use to access any file, then WannaCry would also be able to rewrite all these files in encrypted form, no matter how you had pieced or replicated them, since all of that would be transparent to the user anyway, hidden behind the filesystem API. IMHO, I would imagine a solution using versioning filesystems similar to VAX/VMS, where the filesystem guarantees versioning of all files and any modification leads to another file (guaranteeing the immutability of any file version), so that you could easily revert to any older version of a file in a snap.

Regards, Sebastien
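
For what it's worth, here is a minimal sketch of the versioning idea Sebastien describes: every write creates a new, immutable version, so a ransomware-encrypted write never destroys the older versions and any file can be reverted. The class and method names are illustrative, not a real filesystem API.

```python
from collections import defaultdict
from typing import Dict, List

class VersionedStore:
    """Toy append-only store: old versions are never overwritten."""

    def __init__(self) -> None:
        self._versions: Dict[str, List[bytes]] = defaultdict(list)

    def write(self, path: str, data: bytes) -> int:
        """Append a new immutable version and return its version number."""
        self._versions[path].append(data)
        return len(self._versions[path]) - 1

    def read(self, path: str, version: int = -1) -> bytes:
        """Read the latest version by default, or any older one."""
        return self._versions[path][version]

store = VersionedStore()
store.write("taxes.xlsx", b"original spreadsheet")
store.write("taxes.xlsx", b"<encrypted by ransomware>")  # malicious overwrite
assert store.read("taxes.xlsx", version=0) == b"original spreadsheet"  # revert works
```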
