Insights into data-flow complexity from a security perspective

It’s been a while since I last blogged. In the last few months I have been educating myself on networking and security, so let me share what I have learnt so far.

I hope we are all conversant with the concept of a data center. These are physical facilities hosting the compute, storage and network infrastructure that support business applications. Data flows in and out of any data center tens of thousands, if not millions, of times a day. Each data flow carries data in a particular format over a protocol (e.g. a click on a portal requesting some information could carry JSON data over HTTPS).
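As a minimal sketch of what "a format over a protocol" means, here is a JSON-over-HTTPS request built with Python's standard library. The endpoint URL and payload fields are hypothetical, invented purely for illustration; the request is constructed but not sent:

```python
import json
from urllib.request import Request

# Hypothetical portal endpoint and payload: the format is JSON,
# the protocol is HTTPS.
payload = {"query": "account-balance", "customer_id": "12345"}
body = json.dumps(payload).encode("utf-8")

req = Request(
    "https://portal.example.com/api/info",  # hypothetical URL
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method())   # the HTTP verb carried over HTTPS
print(body.decode("utf-8"))  # the JSON data inside the flow
```

The same pattern applies to any data flow: some structured payload (JSON, XML, protobuf) riding on some transport protocol (HTTPS, gRPC, AMQP).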

Data centers have been around for many decades. In the early years, the servers hosted in these data centers needed enormous space, so a data center would house only a few servers. Over the years that followed, servers have multiplied, and a typical data center today hosts a large number of servers connected with cables and well organized in racks.

Traditional data centers host servers, disks, routers and firewalls and are designed for north-south traffic flows, where a request for information enters from outside the network perimeter (e.g. from a browser). In a traditional data center, each application is hosted on its own dedicated physical server and is thereby isolated from other applications. Security controls are achieved mostly through perimeter firewalling. While some could debate this, client-server architectures were a beautiful fit for traditional data centers, as they required minimal east-west traffic flows.

In the last 10 years or so, traditional data centers have started showing gaps in operational efficiency (doing more with less) given the growing demands on compute and storage. This has led to the rising adoption of hyper-converged infrastructure and the move to cloud computing. The trend is also to shrink on-premises data center footprints and expand off-premises ones. This move is creating an abundance of new data flows, mainly in the east-west direction. These east-west data flows are normally transparent to the end user, but they are a source of headaches for architecture governance. A rough estimate is that around 50% of data flows are east-west, and this number is only rising.

For security, east-west data flows are a different paradigm, and they call for new security controls to guarantee data integrity. So, how can we guarantee the integrity of our data flows given the abundance of east-west traffic? Here are some good practices:
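One common building block for flow-level data integrity is a keyed hash (HMAC) attached to each message, so the receiving service can detect tampering in transit. This is only a sketch of the mechanism, not a complete control; the shared key and message are made up for illustration:

```python
import hashlib
import hmac

# Hypothetical shared secret between two services exchanging east-west traffic.
SECRET_KEY = b"demo-shared-secret"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b'{"order_id": 42, "amount": 100}'
tag = sign(msg)

print(verify(msg, tag))                                  # True: unchanged
print(verify(b'{"order_id": 42, "amount": 999}', tag))   # False: tampered
```

In practice, mutual TLS between workloads gives you this integrity (plus confidentiality and authentication) at the transport layer, without hand-rolling message signing.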

  • For east-west flows, move away from a single-network-domain mindset towards segmentation: restrict lateral movement through the internal network by implementing isolation via segmentation firewalling (this is not the same as the perimeter firewalling explained above). To strike a good balance on the number of segments, pattern-based security is a good concept to follow: workloads with similar security characteristics can be homed together.
  • Harden access to the segments by implementing strict identity-based access control for all traffic (east-west and north-south), and keep to the least-privilege principle.
  • Log and monitor all traffic in the SOC's intrusion-detection tooling, which can build a security audit trail of all data flows.
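The segmentation and least-privilege ideas above can be sketched as a default-deny policy check. The segment names and allow-list here are invented for illustration; in a real deployment, this logic lives in segmentation firewalls or an SDN policy engine, not application code:

```python
# Workloads grouped into segments by similar security characteristics
# (pattern-based security); segment and workload names are hypothetical.
SEGMENT_OF = {
    "web-frontend": "dmz",
    "order-service": "app",
    "billing-db": "data",
}

# Least-privilege allow-list of (source segment, destination segment)
# pairs. Anything not listed is denied by default.
ALLOWED_FLOWS = {
    ("dmz", "app"),   # frontend may call application services
    ("app", "data"),  # application services may reach the data tier
}

def is_flow_allowed(src_workload: str, dst_workload: str) -> bool:
    """Default-deny check for an east-west flow between two workloads."""
    src = SEGMENT_OF.get(src_workload)
    dst = SEGMENT_OF.get(dst_workload)
    if src is None or dst is None:
        return False  # unknown workloads are denied outright
    if src == dst:
        return True   # traffic within a segment stays inside it
    return (src, dst) in ALLOWED_FLOWS

print(is_flow_allowed("web-frontend", "order-service"))  # True
print(is_flow_allowed("web-frontend", "billing-db"))     # False: no direct path
```

Note the design choice: the frontend cannot reach the database directly, only via the application segment, which is exactly the lateral-movement restriction segmentation is meant to enforce.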

Many more security controls can be implemented for east-west data flows; the ones above are the bare-minimum essentials to look into for keeping security integrated into a scalable networking architecture.
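The logging-and-monitoring practice can be sketched as structured flow records that SOC tooling could ingest. The field names are hypothetical, and in practice the records would ship to a SIEM rather than stdout:

```python
import json
import logging

# Structured audit logger; a real setup would forward these records
# to SOC tooling (e.g. a SIEM) instead of printing them.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("flow-audit")

def log_flow(src: str, dst: str, protocol: str, allowed: bool) -> str:
    """Emit one JSON audit record per observed data flow and return it."""
    record = json.dumps({
        "src": src,
        "dst": dst,
        "protocol": protocol,
        "decision": "allow" if allowed else "deny",
    })
    audit.info(record)
    return record

entry = log_flow("web-frontend", "billing-db", "https", allowed=False)
print(entry)
```

Denied flows are often the most valuable records here: a spike in east-west denies is a classic early indicator of lateral movement.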
