Demystifying Zero Trust Architecture

DISCLAIMER: -

Copyright © 2022 by DivIHN Integration Inc. | [email protected].

The creator of the document reserves all rights. Publication Date: May 2022. DivIHN Integration Inc. reserves the right to change the contents of this article, the features or the scope without the obligation to notify anyone of such changes. The content has been adapted using secondary research from various data points via "Google Search". Infographics and Images used in the document are the property of the respective owners and have been used for indicative purposes only. The author reserves the right to authorise and use the Intellectual Property contained in the document.

Executive Summary: -

Cyberattacks can result in severe losses of money, trust, and reputation. Organisations and businesses have every reason to upgrade their security infrastructure. The demand for an appropriate level of protection for particular types of data is being formalised in regulatory requirements, which call for technical and organisational measures to guarantee the confidentiality, integrity, availability, and privacy of the data.

Companies' cybersecurity skills are tested to their breaking point due to the more sophisticated nature of cyberattacks and a regulatory environment that requires a reactive yet intelligent response. The paradigm shift in information technology security architecture known as Zero Trust is gaining more and more attention as a potential solution to these difficulties.

Zero Trust can improve an organisation's cybersecurity posture by increasing the visibility of cyber risks and improving compliance with data protection laws. The policy of "No Trust Without Verification" ensures the same.

Treating the symptoms rather than the underlying causes is an insufficient diagnosis. The foundation of cyber-resilience is the information technology security architecture, which conceptualises and materialises information security.

When internal and external needs are aligned, the IT security architecture determines how specific technological security measures are implemented within the broader enterprise architecture.

The security architecture addresses the entirety of the life cycle of (electronic) data, beginning with its generation, usage, transfer, and storage, and ending with its archiving and destruction. It also encompasses all components, such as physical or virtualised client and server endpoints, IT and business applications, IT platforms and infrastructure, and the network linking the different resources.

The network perimeter serves as the enforcement point for security rules in the traditional approach to information technology security. Once past this point, most resources become available for use. However, since many devices on a network now have direct access to the internet, this type of perimeter policing is no longer sufficient. If a device is breached, the attacker can move through the corporate network without ever crossing the perimeter again. Since today's cyberattacks are more sophisticated, businesses need to shift to a Zero Trust Architecture (ZTA).

A Brief History: -

In his research, Forrester analyst John Kindervag came up with the term "Zero Trust," which he used to explain the significance of inherent "non-trust" when working with network traffic, regardless of where the traffic originates. When the idea was first conceived, most companies relied on their internal networks and data storage facilities; this was common practice. As a result, the term "network security" was the first to be associated with the idea.

However, many of the concepts expressed in Zero Trust networking trace their roots back to a much earlier concept called de-perimeterization, proposed in 2004 by the Jericho Forum. In the traditional model, firewalls and guarded entry points stationed at strategic locations around the perimeter are the primary deterrents against unauthorised entry.

The problem with this strategy is that there are no safeguards once the intruders have broken through the perimeter. De-perimeterization is a security strategy that involves removing the typical "boundary" security that separates a local area network (LAN) from the internet and replacing it with a segmented and multi-layered security system based on encryption and authentication. The Zero Trust architecture, also known as ZTA, offers layered security through persistent reauthentication and an inherent distrust of all devices, users, and actions, regardless of whether or not they are located within the perimeter.

What is ZTA (Zero Trust Architecture): -

The Zero Trust model is an advanced foundational framework that reimagines how businesses and individuals perceive and interact with the IT security perimeter. The concept of the perimeter has been completely revised, as it should have been.

Because you no longer have a perimeter to protect, the traditional model of having an on-premise firewall that protects the perimeter in conjunction with an endpoint agent on the device has been rendered obsolete. As a result, the firewall cannot provide adequate protection against potential threats. Since most businesses have already adopted cloud computing in some form, you cannot assume that the devices connected to your network are safe to use.

Zero Trust does away with the traditional security perimeter assumption by enforcing an explicit rule that no application or device can be trusted. This rule is more commonly known as "trust nothing, verify everything."

If your company does not implement Zero Trust, there will be many negative repercussions for the business. As a direct result of the global pandemic, ransomware attacks have skyrocketed. Remote working has become the norm in most companies, and the capacity to perform work effectively and productively has probably been hindered more than a few times. One thing is sure: the time has come to implement Zero Trust.

The concept of "Zero Trust" refers to designing and architecting security and networks in which nothing is trusted until it can prove itself trustworthy. The perimeter is moved as close to the devices, data, or applications as possible.

Zero Trust is not a singular piece of technology; it is not a single product or service that could be purchased off the shelf. It is a long-term strategy and a shift in the organisational mindset that informs all IT security decision-making. It is built upon people, processes, and technology (we'll cover this topic in more detail in our next blog post).

[Figure: NIST Zero Trust Framework]

ZTA Core Principles: -

[Figure: ZTA Core Principles]

There Is No "Inside" the Network: Suppose you are operating your entire company from an untrusted location, such as the public Wi-Fi of a coffee shop, with all of your devices connected directly to the public internet, the most dangerous of all networks. How would your company fare? Treating this as your reality forces you to implement cyber security measures that do not rely on being behind a conventional corporate perimeter.

Trust Nothing, Verify Everything: Presume that there are attackers both inside and outside your networks, launching concurrent attacks continuously from both locations. No device on the network is to be trusted; instead, each device must authenticate itself before a connection is even considered.

The Security Should Adapt In Real-Time: To achieve Zero Trust, the security policies you implement should be dynamic. They should be able to change based on the insights gained from as many different types of data sources and technologies as is practically possible.

A static policy won't protect you if a device is compromised while a user is logged in. Suppose your policy also considered the device's health, for example by identifying malicious behaviours. In that case, the policy could use this information to adapt dynamically to the current circumstances.
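As a minimal sketch of such an adaptive policy, the Python fragment below re-evaluates each request against several signals instead of a static allow list. All names and thresholds here are illustrative assumptions, not a product API:

```python
from dataclasses import dataclass

# Hypothetical signals a policy engine might evaluate on every request.
@dataclass
class AccessContext:
    user_authenticated: bool
    device_compliant: bool   # e.g. patched OS, endpoint agent healthy
    risk_score: float        # 0.0 (clean) .. 1.0 (known-bad behaviour)

def decide(ctx: AccessContext) -> str:
    """Dynamic decision: re-run per request, not once at login."""
    if not ctx.user_authenticated:
        return "deny"
    if not ctx.device_compliant or ctx.risk_score >= 0.8:
        return "deny"            # unhealthy device or high risk: block
    if ctx.risk_score >= 0.5:
        return "step_up_mfa"     # moderate risk: challenge again
    return "allow"
```

The point of the sketch is that the same user on the same device can receive different decisions as the risk score changes, which is what a static policy cannot do.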

Implementing Zero Trust: -

Step 1. Identifying and Segmenting Sensitive Data:

A company may decide to outsource the management of its entire information technology infrastructure. Nevertheless, the organisation will continue to be responsible for maintaining compliance and protecting the data. As a result, compiling a list of data storage facilities is an absolute necessity. The organisation will be able to determine the level of data protection and its criticality by using this inventory. The level of security is determined by internal requirements (such as intellectual property and business value) and external requirements (such as legal and regulatory compliance).

It is essential to differentiate between the following types of data in case one or more of the following sets of regulations or internal data requirements are applicable:

  1. PII – Personally Identifiable Information.
  2. Payment card data (subject to PCI DSS).
  3. Financial Statements and Tax Data.
  4. Intellectual Property and M&A Data.

Micro-segmenting all of the data holistically may seem too expensive. At a minimum, however, data enclaves with segmented sub-perimeters must be established for applications processing data above a certain criticality level with regard to confidentiality, availability, integrity, and privacy.

Discovering all of the data processing activities within the entirety of the IT environment is required to create a data inventory. Once you have done this, you will be able to assign the data sets to the inventory, verify that the categorisation and classification of the data sets are accurate, and make sure that a data owner has been designated.

You will also need to conduct an inventory of all data processing applications and gather additional details that will assist you in identifying the underlying IT infrastructure and the sourcing option that was utilised. This should include locations for data storage, backup locations, file sharing locations, and other storage locations. The data repository needs to be connected to the application inventory to provide the data owner with information regarding who has access to the data and where the data processing activities occur.
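The inventory described above can be modelled very simply. The following Python sketch shows how a data inventory can drive two basic checks: which stores need a segmented enclave, and which lack a designated owner. The store names, classifications, and sensitivity set are invented for illustration:

```python
# Toy data inventory: each repository is tagged with a classification,
# a data owner, and the applications that touch it.
inventory = [
    {"store": "hr-db",       "class": "PII",      "owner": "HR",      "apps": ["payroll"]},
    {"store": "cards-vault", "class": "PCI",      "owner": "Finance", "apps": ["billing"]},
    {"store": "wiki",        "class": "Internal", "owner": "IT",      "apps": ["intranet"]},
]

def sensitive_stores(inv, sensitive=frozenset({"PII", "PCI"})):
    """Stores whose classification requires a segmented enclave."""
    return [r["store"] for r in inv if r["class"] in sensitive]

def missing_owner(inv):
    """Flag repositories without a designated data owner."""
    return [r["store"] for r in inv if not r.get("owner")]
```

Connecting such an inventory to the application list is what lets the data owner see who accesses the data and where it is processed.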

Step 2. Zero In On Susceptible Data Flows:

Your company needs to be aware of sensitive and critical data flows to correctly design network segments and identify irregular activities. In this step, IT and business staff must collaborate to understand the dependencies between the business processes, the required applications, the active IT components, the data traffic, and access rights.

When trying to understand the data types that departments and corporate functions are processing, the business architecture is an excellent place to begin. Understanding the data flow is possible by connecting to the IT applications responsible for processing these data.

An excellent way to document sensitive data flows is to use swim lanes indicating which roles process sensitive data, in which step of a business process, and with which IT application.

It must be made clear whether a particular business application that processes sensitive data is operated and hosted by the local IT unit, outsourced to a partner, or provided as a cloud service. The process presupposes that adequate device management is already in place for the following hardware and network components:

  1. End-user systems such as unmanaged endpoints, BYOD, and handhelds, or devices managed by corporate IT.
  2. IT department-owned data servers, storage devices, network routers, etc.
  3. Services such as SaaS, IaaS, or PaaS.

Ensuring that all sensitive data flows have been accurately identified is imperative. The business has the contextual information about data processing; IT can ensure that all business applications, data repositories identified in Step 1, and devices are included in the analysis of sensitive data traffic.

Once you have that, you will be able to deploy adequate measures to determine how to protect data, whether it is at rest, in motion, or being used.
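As an illustration of such a flow analysis, the sketch below records each flow as (business process, application, data store, channel) and flags sensitive data moving over an unprotected channel. The flow records, store names, and channel labels are assumptions made up for this example:

```python
# Hypothetical flow records: (business process, application, data store, channel).
flows = [
    ("onboarding", "payroll", "hr-db",       "tls"),
    ("invoicing",  "billing", "cards-vault", "tls"),
    ("reporting",  "bi-tool", "hr-db",       "plaintext"),
]

# Stores classified as sensitive in the Step 1 inventory (illustrative).
SENSITIVE = {"hr-db", "cards-vault"}

def risky_flows(flows):
    """Sensitive data moving over an unencrypted channel needs remediation."""
    return [(proc, app, store) for proc, app, store, channel in flows
            if store in SENSITIVE and channel != "tls"]
```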

Step 3. Defining and Architecting Micro-Perimeters:

The previous two steps feed directly into this one. Securing the identified sensitive data repositories means putting the need-to-know/need-to-do principle into practice, based on least-privilege access controls. It requires the data owner to understand and define the roles necessary to access the data, approve access requests made by users who hold those roles, and manage entitlements.

However, user entitlements are not a one-and-done sort of thing. Access controls must be continually revised and audited. An adequate user management process handles the following broad groups of IT users distinctly:

  1. Users with access to IT applications that ingest and process sensitive data.
  2. Privileged IT users who can modify user access rights and security configurations, which gives them the ability to access sensitive data.
  3. Outside stakeholders, such as clients, vendors, and partners, with access to sensitive data.

Every user level and group will have customised levels of access. In addition to user access management, data resources with similar protection requirements, including storage and processing resources, may be grouped in a dedicated 'enclave' connected with endpoints. These endpoints can only be accessed with a certain trust level.

Protecting access to the data enclave at the Policy Enforcement Point (PEP) enables the enforcement of such policies, which offloads the responsibility of security enforcement from each individual IT system and application. A network segment in a data centre, the Cloud, or an IT service provider can all be potential locations for a data enclave.
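The need-to-know/need-to-do principle in this step amounts to a deny-by-default entitlement check. The roles and entitlement strings below are hypothetical examples, not a real IAM schema:

```python
# Role entitlements approved by the data owner (illustrative only).
# Format: "<store>:<action>".
ENTITLEMENTS = {
    "payroll_clerk": {"hr-db:read"},
    "payroll_admin": {"hr-db:read", "hr-db:write"},
}

def authorized(role: str, store: str, action: str) -> bool:
    """Need-to-know/need-to-do: anything not explicitly granted is denied."""
    return f"{store}:{action}" in ENTITLEMENTS.get(role, set())
```

Note that an unknown role falls through to an empty entitlement set, so the default answer is always "deny" rather than "allow".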

Step 4. Framework For Security Control and Policies:

The security policy framework is applied to the IT security architecture, which does the work of policy enforcement. NIST SP 800-207 distinguishes Policy Enforcement Points (PEPs) from Policy Decision Points (PDPs). The framework also ensures that policies are enforced at all gateways and data enclaves, and that an action is triggered if a policy violation occurs. PDPs/PEPs are deployed at the following levels:

  • PDPs/PEPs at User and Device Level - Only users with the appropriate entitlements can access IT systems that process sensitive data. Access is provided via managed end-user devices, unmanaged devices, or a virtual desktop infrastructure as a prerequisite for reaching confidential data.
  • PDPs/PEPs at Application Level - At the application level, access to sensitive data is granted according to the principle of least privilege, also known as the need-to-know/need-to-do framework. This ensures that application users only have access rights to the data they need to perform their jobs effectively. Privileged roles can be implemented either through step-up mechanisms or a separate login. In addition, application security controls and application logging are enforced, and logs are sent to a centralised log repository.
  • PDPs/PEPs at the IT Infra Level - Operating and maintaining IT platforms (operating systems, databases, storage, networks, and virtualisation) requires privileged IT roles at the infrastructure level.

Access control, data encryption technology, and break-glass procedures should be utilised to the greatest extent possible to restrict privileged IT accounts' access to sensitive data to the barest minimum. In addition, you can block management access through standard user access zones by employing other security measures and using dedicated management zones in conjunction with jump hosts.

Data can be accessed either through an application or through the IT infrastructure; there is no direct access at the storage level. Therefore, sensitive data should not be downloadable to a personal device, personal storage device, or public cloud storage that is not managed by the organisation's access management system.

PDPs/PEPs and the Cloud – These entail gateways for specific protocols:

-External Browsing - This gateway ensures that only legitimate websites and content are accessed and that sensitive data is not uploaded to unauthorised cloud storage services or shared with third parties. In addition, there should be mechanisms to detect infected endpoints contacting known command and control (C2) infrastructure and other anomalies, as well as the installation of malware on an endpoint.

-Internal Browsing - Access from the public internet to an internally hosted web application requires a gateway that is either a web application firewall (WAF) or a reverse proxy. An access policy must be enforced for web services reachable from the internet, and malware detection enabled.

-Secure Email Gateway – The gateway acts as a shield between internal and external email communication. It ensures email security and the detection and prevention of data leakage through email communication, including identifying spam, phishing, and malware delivered through email.

-CASB (Cloud Access Security Broker) – This gateway ensures that business data is shared and processed only via authorised vendors, with appropriate security measures for protecting data, maintaining confidentiality, and controlling access. Before sensitive information is sent to or stored in a SaaS application like Microsoft SharePoint, Microsoft Dynamics, or Salesforce, a CASB can typically encrypt or tokenise that data.

-Accessing Remotely – Remote business users, IT users, and partners with privileged access to confidential and sensitive information have to go through a dedicated gateway to gain management access to the IT infrastructure and policy administration. A "jump host" is deployed for the most privileged users.
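As an illustration of the CASB substitution step mentioned above, the sketch below tokenises sensitive fields before a record leaves for a SaaS application. A real CASB would use reversible encryption under managed keys; the truncated hash here only stands in for the substitution step, and the field names are assumptions:

```python
from hashlib import sha256

# Fields that must never leave the organisation in cleartext (illustrative).
SENSITIVE_FIELDS = {"ssn", "card_number"}

def tokenise(record: dict) -> dict:
    """Replace sensitive field values with opaque tokens before upload.

    The original record is left untouched; the caller sends the copy.
    """
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # Stand-in token: first 16 hex chars of a SHA-256 digest.
            out[key] = sha256(str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out
```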

The security policy and control framework encompasses the entirety of the information technology (IT) scope, regardless of whether the IT is housed on-premises, in an outsourced location, or in the Cloud. Before allowing access to information technology services or data, trust is not established based on a promise or a service level agreement (SLA); instead, it is demonstrated through independent verification.

The user's identity, the connecting device, the connecting location, and the service being accessed are all factors that may play a role in determining whether trust can be established.
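The PDP/PEP split described in NIST SP 800-207 can be sketched as two small functions: the PDP combines identity, device, location, and service signals into a decision, and the PEP only opens a session when that decision is "allow". All signal names and rules below are illustrative assumptions, not the standard's normative content:

```python
# Illustrative split between a policy decision point (PDP) and a
# policy enforcement point (PEP), loosely following NIST SP 800-207.
def pdp_decide(identity, device, location, service):
    """PDP: combine several signals; no single signal is sufficient."""
    if identity is None or not device.get("managed"):
        return "deny"
    trusted_path = location in {"office", "vpn"}
    if service == "hr-db" and not trusted_path:
        return "deny"        # sensitive service requires a trusted path
    return "allow"

def pep_enforce(decision, open_session):
    """PEP: the gateway opens the session only if the PDP said allow."""
    return open_session() if decision == "allow" else None
```

Keeping the decision and the enforcement separate is what lets one central policy govern many gateways.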

You need to conduct a threat assessment that simulates common threats along the critical data flows in the entirety of the IT estate controlled by your organisation to ensure that the policy framework contains all of the necessary safeguards. It lets you verify that any identified cyber risk has been mitigated to an acceptable level. It also helps ensure that detection measures are in place and working effectively.

Step 5. Constant Monitoring & Analytics:

Logging and real-time inspection for malicious activities are required for all connected endpoints and gateways (PDPs and PEPs) and for sensitive data traffic. Depending on your organisation's maturity level and size, it may be challenging to detect security-related incidents reliably if monitoring functions are spread across several different IT divisions. You should ensure that monitoring takes place in the following focus areas:

Monitoring of IT Operations - entails IT services, Network Management, and Wireless communication links.

Monitoring At The Enterprise Level - refers to the entirety of the information technology infrastructure, including all components owned and managed by the corporate IT organisation. A centralised log repository with visualisation, dashboards, and reporting provides a window into the organisation's IT security estate. In addition, a security information and event management (SIEM) system should be implemented to process and correlate events, allowing for more accurate detection of potential security breaches.

Compliance Monitoring - entails all aspects not covered by security monitoring or IT operations. In the majority of cases, compliance monitoring will cover the following elements:

-Ensuring that the baseline security configuration is monitored accurately.

-Detecting and monitoring unauthorised changes to file integrity.

-Detecting any data breaches.

-Discovery and vulnerability scanning of IT assets.

Zero Trust implements compliance monitoring as a subset of security monitoring for the enterprise. It assumes that event logs are a centralised service accessible to all monitoring functions. In practice, logs, events, and information are frequently duplicated and are not shared between the various monitoring functions.
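To make the correlation idea concrete, here is a toy sketch of the kind of rule a SIEM might apply over a centralised event stream: repeated failed logins by a user followed by a bulk download raise a combined alert that neither signal would trigger on its own. The event shapes and threshold are invented for illustration:

```python
from collections import Counter

# Toy event stream as a SIEM might see it after log centralisation.
events = [
    {"user": "bob", "type": "login_failed"},
    {"user": "bob", "type": "login_failed"},
    {"user": "bob", "type": "login_failed"},
    {"user": "bob", "type": "login_ok"},
    {"user": "bob", "type": "bulk_download"},
    {"user": "eve", "type": "login_ok"},
]

def correlate(events, fail_threshold=3):
    """Flag users whose repeated login failures precede a bulk download."""
    fails = Counter(e["user"] for e in events if e["type"] == "login_failed")
    downloads = {e["user"] for e in events if e["type"] == "bulk_download"}
    return sorted(u for u, n in fails.items()
                  if n >= fail_threshold and u in downloads)
```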

Your company must therefore determine which log information needs to be kept and whether additional requirements for legal admissibility and e-discovery might apply for investigations and audits to guarantee the appropriate handling of the data.

You should also determine whether additional measures are required to prevent deletion or alteration and keep the data's integrity intact. Be aware that the various components of enterprise security and compliance monitoring differ in the areas of concern they focus on and the speed with which they react to alerts and deviations.

Step 6. Automating and Orchestrating Enterprise Security:

Once enterprise security and compliance monitoring have been established and have reached a basic maturity level, successive improvements can follow.

Since speed is of the utmost importance in detecting and eliminating cyber threats, your company may choose to implement automated security analytics to reduce its exposure. For instance, if there is high confidence that a user is engaging in malicious behaviour, that user can be automatically disconnected from the network.
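A minimal sketch of such an automated response, in the spirit of a SOAR playbook: high-confidence detections trigger containment without waiting for an analyst, while lower-confidence alerts are escalated for human review. The alert format, confidence threshold, and action callbacks are assumptions made for illustration:

```python
# Minimal SOAR-style playbook dispatch (illustrative, not a vendor API).
def playbook(alert, quarantine, notify):
    """Contain automatically on high confidence, otherwise escalate.

    `quarantine` and `notify` stand in for integration hooks such as
    disabling an account or opening an analyst ticket.
    """
    if alert["confidence"] >= 0.9:
        quarantine(alert["user"])   # e.g. disconnect/disable the account
        return "contained"
    notify(alert)                   # human review for ambiguous cases
    return "escalated"
```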

It is also possible to automatically update the rule and policy base for access controls by integrating it into your HR department's joiner/mover/leaver process.

Such a practice would make it possible to maintain trusted identities and efficiently assign roles. To use automation, you need an orchestration layer that combines all policy enforcement points with the policy administrator.

When identifying policy violations and suspicious processing of sensitive data, the orchestration layer gives you the ability to include additional aspects, such as threat intelligence feeds or information about DNS sinkholes, to have a higher confidence level in your findings.

If you decide to go with this type of integration, you must ensure that various tools and data sources are linked together. SOAR, which stands for "Security Orchestration, Automation, and Response," is a solution stack composed of compatible software programmes that enable an organisation to collect data about potential security threats from a variety of sources and to respond to low-level security events without the intervention of a human being.

Implementing a SOAR stack increases the productivity of security-related tasks. Gartner coined the term to describe compatible products and services that help define, prioritise, standardise, and automate security incident response functions.

Gartner identifies the following as the three most significant capabilities offered by SOAR technologies:

[Figure: Gartner's three most significant SOAR capabilities]

Microsoft's Zero Trust Architecture Framework: -

[Figure: Microsoft Zero Trust Architecture]

Key Growth Drivers: -

Zero Trust has gained traction as a buzzword in recent years within enterprises and corporate environments. Much attention is placed on how this approach to cyber security can help organisations defend themselves against the new generation of attackers: better networked, more organised, and with access to tools once considered futuristic, which only a select few nation-states could afford to deploy.

However, a broader set of business drivers and demands pushes Zero Trust onto the corporate agenda. These drivers highlight the need for greater speed and adaptability in how organisations approach cyber security as they seek to survive and thrive in an increasingly digital world. The pertinent factors driving businesses to deploy this framework are:

1. The rapid pace of digitalisation increases IT complexity and drives up costs.

2. Cybercriminals are becoming more lethal and are a few steps ahead of current cyber security practices and solutions.

3. Overly stringent cyber security controls hamper the growth of digital products and services.

4. The increasing shift to the Cloud warrants a robust new approach to securing business- and mission-critical data.

5. Since the pandemic, the demand to work remotely or from home entails multiple devices and access on the go.

6. A more flexible approach is required to meet the demand for improved efficiency and convenience in business collaboration.

7. The cost of compliance is increasing due to overlapping, overly restrictive controls and increasingly stringent requirements.

8. It is becoming increasingly difficult to contain the proliferation of shadow IT without compromising business agility.

9. Managing the risks associated with mergers and acquisitions securely is becoming increasingly complicated, time-consuming, and expensive.

10. The ever-increasing complexity of vendor landscapes and supply chains necessitates a more streamlined approach to security.

Significant Challenges In Adopting Zero Trust Architecture: -

1. Adoption of Zero Trust requires the backing of a cyber organisation that is fluid, malleable, and open to new ways of doing things.

2. Bespoke approaches are often necessary to enable legacy systems (both IT and OT) to participate in Zero Trust environments.

3. To establish a foundation for trust, implementing Zero Trust requires complete end-to-end visibility into the resources at one's disposal and how they are used.

4. There is no foolproof method for Zero Trust, and no vendor offers a solution that covers the entire process from beginning to end.

5. To ensure clarity of purpose and proper alignment, the cyber function and the rest of the organisation need to work closely together.

6. The concept of Zero Trust is undergoing rapid transformation; a Zero Trust programme must adapt to constantly emerging capabilities.

7. Integration difficulties can arise because there are no industry-wide Zero Trust standards.

The key to success is to put the appropriate governance in place and have a solid understanding of where to start.

Benefits Of Adopting ZTA: -

[Figure: Benefits of Adopting ZTA]

Final Words: -

The "Zero Trust" framework does not refer to a product or a service. It is a concept that assists an organisation in gaining transparency on the data processing activities, identifying sensitive or critical data, and applying an adequate level of protective, detective, and reactive security measures.

One of the most important things to keep in mind is that trust cannot be ensured merely on the basis of a promise made by a vendor or provider, or a user's acceptance of a policy statement. Before granting access to a data enclave or a network segment, trust must be validated at policy enforcement points (PEPs) or policy decision points (PDPs).

Segmenting the network and using centrally managed next-generation firewalls is necessary to accomplish this goal. These firewalls serve as policy enforcement points in front of data enclaves, which are locations where sensitive data are stored and processed. It must be done regardless of whether the sensitive data are stored in the Cloud, on-premises, or at an IT provider's location.

In conclusion, the framework consolidates all security-relevant information and logs in a centralised location and verifies that adequate data protection measures are in place, which is the most effective method for detecting cyberattacks and unauthorised access to sensitive data.

By combining threat intelligence, correlation, and a baseline of legitimate day-to-day use of sensitive data, comprehensive security monitoring enables you to recognise suspicious data processing. PEPs and PDPs deliver the information needed to correlate security-relevant details, apply automation, and contain suspicious transactions.

DivIHN Inc. Cyber Security practice offers robust solutions for your cyber risk management needs with proven success stories with the federal government and businesses of repute. To know more, please get in touch with [email protected].
