Zero Trust Network Design
The drive for Zero Trust networking and a Software Defined Perimeter is again gaining momentum. Bad actors are becoming increasingly sophisticated, creating a pervasive sense of unease about traditional networking and security methods. So why are our network infrastructure and applications open to such severe security risks? This Zero Trust tutorial recaps some of the technological weaknesses driving the path to Zero Trust network design and Zero Trust SASE.
Key Zero Trust Network Design Discussion Points:
* TCP's weak connectivity model.
* Develop a Zero Trust architecture.
* Issues of the traditional perimeter.
* The use of micro perimeters.
We give devices I.P. addresses to connect to the Internet, and the diagram below signposts three standard pathways. None of these techniques ensures attacks will not happen; they are like preventive medicine. Given the sophistication of today's bad actors, however, we need something closer to total immunization, ensuring attacks cannot even touch your infrastructure, by implementing the concepts of a Zero Trust security strategy along with Software Defined Perimeter solutions.
Diagram: Define Zero Trust: the standard three pathways.

The idea behind the Zero Trust model and software-defined perimeter (SDP) is a connection-based security architecture designed to stop attacks. It doesn't expose the infrastructure or its applications. Instead, it lets you identify authorized users by authenticating, authorizing, and validating the devices they are on before they connect to protected resources. A Zero Trust architecture allows you to keep operating while vulnerabilities are being patched and configurations corrected. Essentially, it cloaks applications, or groups of applications, so they are invisible to attack.
Zero Trust principles
Zero Trust Networking (ZTN) and SDP are a security philosophy and a set of Zero Trust principles that, taken together, represent a significant shift in how security should be approached. Foundational security elements used before Zero Trust often achieved only coarse-grained separation of users, networks, and applications. Zero Trust, on the other hand, enhances this, effectively requiring that all identities and resources be segmented from one another. Zero Trust enables fine-grained, identity- and context-sensitive access controls driven by an automated platform. Although Zero Trust started as a narrowly focused approach of not trusting any network identity until it was authenticated and authorized, it has grown into the broader philosophy and set of principles described here.
* A key point: Traditional security boundaries
Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, a major issue with this was not just the design but also how we connected. Traditional non-Zero Trust security solutions have been unable to bridge the disconnect between network and application-level security. Traditionally, users (and their devices) obtained broad access to networks, and applications relied upon authentication-only access control.
Issue 1 – We Connect First and Then Authenticate
Connect first, authenticate second
TCP/IP is a fundamentally open network protocol designed to facilitate easy connectivity and reliable communications between distributed computing nodes. It has served us well in terms of enabling our hyper-connected world but—for various reasons—doesn’t include security as part of its core capabilities.
TCP has a weak security foundation
Transmission Control Protocol (TCP) has been around for decades and has a weak security foundation. When it was created, security was out of scope. TCP can detect and retransmit errored packets, but left at its defaults, communication packets are not encrypted, which poses security risks. In addition, TCP operates with a connect-first, authenticate-second model, which is inherently insecure: it leaves the two connecting parties wide open to attack. When clients want to communicate with and access an application, they first set up a connection. Only once the connection stage has completed successfully can the authentication stage occur, and only after authentication can data begin to flow.
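To make the sequencing concrete, here is a minimal Python sketch of the connect-first, authenticate-second pattern. The endpoint and the auth exchange are hypothetical, used only to illustrate where the network connection happens relative to authentication.

```python
import socket

# Hypothetical private application endpoint, for illustration only.
APP_HOST, APP_PORT = "app.internal.example.com", 443

# Step 1: the TCP three-way handshake completes at the network layer.
# The server has now accepted a connection from an unknown, unauthenticated
# peer; its listening service is already exposed to whoever connected.
sock = socket.create_connection((APP_HOST, APP_PORT), timeout=5)

# Step 2: only after the connection exists does the application layer ask
# for identity (for example, a token or username/password exchange).
sock.sendall(b"AUTH example-user example-token\n")
response = sock.recv(1024)

# Step 3: data flows only if authentication succeeds, but the attack
# surface was exposed back at step 1, before any identity was checked.
if response.startswith(b"OK"):
    sock.sendall(b"GET /sensitive-resource\n")
```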
Diagram: Zero Trust security. The TCP model of connectivity.

From a security perspective, the most important thing to understand is that this connection occurs purely at the network layer with no identity, authentication, or authorization. The beauty of this model is that it enables anyone with a browser to easily connect to any public web server without requiring any upfront registration or permission. This is a perfect approach for a public web server but a lousy approach for a private application.
The potential for malicious activity
With this connect-first, authenticate-second process, we are essentially opening the door to the network and the application without knowing who is on the other side. We have no idea who the client is until they have completed the connect phase, and once they have connected, they are already in the network. If the requesting client is untrustworthy and has bad intentions, then once connected, they have the opportunity to carry out malicious activity and potentially perform data exfiltration.
Developing a Zero Trust Architecture
A Zero Trust architecture requires endpoints to authenticate and be authorized before obtaining network access to protected servers. Then, encrypted connections are created between the requesting systems and the application infrastructure in real time. With a Zero Trust architecture, we must first establish trust between the client and the application before the client can set up the connection. Zero Trust is all about trust: never trust, always verify. The trust is bi-directional, between the client and the Zero Trust architecture (which can take several forms) and between the application and the Zero Trust architecture. It is not a one-time check; it is a continuous mode of operation. Only once sufficient trust has been established do we move into the next stage, authentication. Once authentication has succeeded, we can connect the user to the application. Zero Trust access events flip the entire security model and make it more robust.
* We have gone from connect first, authenticate second to authenticate first, connect second (sketched below).
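A minimal sketch of the reversed flow, assuming a hypothetical Zero Trust controller and gateway: the client authenticates to the control plane first, and only then is it handed a short-lived grant to open a connection to the single application it is entitled to reach.

```python
import requests  # assumed available; all endpoints below are hypothetical

CONTROLLER = "https://ztna-controller.example.com"

# Step 1: authenticate first. The client presents its identity and device
# posture to the control plane before any connection to the application.
auth = requests.post(
    f"{CONTROLLER}/authenticate",
    json={
        "user": "alice@example.com",
        "device_id": "laptop-42",
        "posture": "compliant",
    },
    timeout=5,
)
auth.raise_for_status()
grant = auth.json()  # e.g. {"token": "...", "gateway": "gw1.example.com"}

# Step 2: connect second. Only with the grant in hand does the client open
# a data-plane connection, and only to the one authorized service.
app = requests.get(
    f"https://{grant['gateway']}/app",
    headers={"Authorization": f"Bearer {grant['token']}"},
    timeout=5,
)
print(app.status_code)
```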
Diagram: The Zero Trust model of connectivity.

Example of Zero Trust network access
Single Packet Authorization (SPA)
The user cannot see or know where the applications are located. SDP hides the application and creates a “dark” network by using Single Packet Authorization (SPA) for the authorization. The goal of SPA, also known as Single Packet Authentication, is to overcome the open and insecure nature of TCP/IP, which follows a “connect then authenticate” model. SPA is a lightweight security protocol that validates a device or user’s identity before permitting network access to the SDP. The purpose of SPA is to allow a service to be darkened behind a default-deny firewall. Essentially, the system uses a One-Time Password (OTP) generated by an algorithm [14] and embeds the current password in the initial network packet sent from the client to the server. The SDP specification mentions using the SPA packet after establishing a TCP connection. In contrast, the open-source implementation from the creators of SPA [15] uses a UDP packet before the TCP connection.
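A minimal conceptual sketch of an SPA-style client, assuming a hypothetical gateway address and a pre-provisioned shared seed; a real implementation (fwknop, for example) defines its own packet format, cryptography, and replay protection.

```python
import hashlib
import hmac
import json
import socket
import time

# Hypothetical values for illustration only.
GATEWAY = ("sdp-gateway.example.com", 62201)
SHARED_SEED = b"pre-provisioned-client-seed"

# Build a single, self-contained authorization packet: identity, timestamp,
# and an HMAC acting as a one-time password. The gateway sits behind a
# default-deny firewall and silently drops anything that fails validation.
claim = {"client_id": "laptop-42", "timestamp": int(time.time()), "request": "tcp/443"}
payload = json.dumps(claim).encode()
otp = hmac.new(SHARED_SEED, payload, hashlib.sha256).hexdigest()
packet = payload + b"|" + otp.encode()

# Fire-and-forget UDP: no response is returned, so port scans learn nothing
# about the service hidden behind the gateway.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, GATEWAY)

# Only if the gateway validates the SPA packet does it open a pinhole for
# this client's source address; the normal TCP connection follows afterward.
```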
Issue 2 – Fixed perimeter approach to networking and security
Traditionally, security boundaries were placed at the edge of the enterprise network in a classic “castle wall and moat” approach. However, as technology evolved, remote workers and workloads became more common, and security boundaries had to follow, expanding beyond the corporate perimeter.
The traditional world of static domains
The traditional world of networking started with static domains. Networks were initially designed to create internal segments separated from the external world by a fixed perimeter. The classical network model divided clients and users into trusted and untrusted groups: the internal network was deemed trustworthy, whereas the external was considered hostile. The perimeter approach to network and security has several zones, for example, the Internet, DMZ, Trusted, and Privileged. In addition, public and private address spaces were used to separate network access. Private addresses were deemed more secure than public addresses because they were not reachable from the Internet. However, the assumption that everything behind a private address is safe is where many of our problems started.
Diagram: Zero Trust security meaning. The issues with traditional networks and security.

The fixed perimeter
The digital threat landscape is concerning. External threats hit your applications and networks from all over the world. Threats also come from inside your network: insider threats within a user group, and insider threats that cross user-group boundaries. These types of threats need to be addressed one by one. One issue with the fixed perimeter approach is that it assumes a trusted internal network and a hostile external network. However, we need to assume that the internal network is as hostile as the external one; over 80% of threats come from internal malware or malicious employees. The fixed perimeter approach to networking and security is still the foundation for most network and security professionals, even though a lot has changed since the design's inception.
We get hacked daily!
We are now at a stage where 45% of US companies have experienced a data breach: the 2022 Thales Data Threat Report found that almost half of US companies suffered a data breach in the past year, and the true figure could be higher given the potential for as-yet-undetected breaches.
We are getting hacked daily, and major networks with skilled staff are being compromised. Unfortunately, the perimeter approach to networking has failed to provide adequate security in today's digital world. It works to an extent, by delaying an attack, but with enough patience and skill a bad actor will eventually penetrate your guarded walls. If a large gate and walls guard your house, you feel safe and think you are fully protected while inside. However large and thick the perimeter protecting your house may be, there is still a chance that someone can climb the walls, reach your front door, and gain entry to your property. But if a bad actor cannot even see your house, they never get the chance to take that next step and try to breach your security.
Issue 3 – Dissolved perimeter caused by the changing environment
The environment has changed with the introduction of the cloud, advanced BYOD, machine-to-machine connections, the rise in remote access, and phishing attacks. We have many internal devices and a variety of users, such as on-site contractors, that need to access network resources. There is also a trend for corporate devices to move to the cloud, colocation facilities, and off-site customer and partner locations. In addition, the environment is becoming more diversified with hybrid architectures.
Diagram: Zero Trust concept.

These changes are causing major security problems for the fixed perimeter approach to networking and security. For example, with the cloud, the internal perimeter is stretched to the cloud, yet traditional security mechanisms are still being used for what is a completely new paradigm. The same goes for remote workers: we have an abundance of remote workers connecting from various devices and places, and again, traditional security mechanisms are still being used. As our environment evolves, our security tools and architectures must evolve with it. Let's face it: the network perimeter has dissolved, as your remote users, things, services, applications, and data are everywhere. And as the world moves to the cloud, mobile, and the IoT, the ability to control and secure everything in the network is no longer available.
Phishing attacks are on the rise
We have witnessed an increase in phishing attacks that can land a bad actor on your local area network (LAN). Phishing is a type of social engineering in which an attacker sends a fraudulent message designed to trick a person into revealing sensitive information or to deploy malicious software, such as ransomware, on the victim's infrastructure. The term “phishing” was first used in 1994, when a group of teens manually worked to obtain credit card numbers from unsuspecting users on AOL.
Hackers are inventing new ways
By 1995, they created a program called AOHell to automate their work. Since then, hackers have continued to invent new ways to gather details from anyone connected to the internet. These actors have created several programs and types of malicious software still in use today. Recently, I was a victim of a phishing email. Clicking and downloading the file is very easy if you are not educated about phishing attacks. In my case, the particular file was a .wav file. It looked safe, but it was not.
Issue 4 – Broad-level access
So, you may have heard of broad-level access and lateral movement. With traditional network and security mechanisms, known as zone-based networking, when a bad actor lands on a particular segment, i.e., a VLAN, they can see everything on that segment, which gives them broad-level access. And VLAN-to-VLAN communication is not the hardest thing to achieve, which opens the door to lateral movement.
The issue of lateral movements
Lateral movement is the set of techniques attackers use to progress through the organizational network after gaining initial access, picking up credentials along the way while avoiding detection. Adversaries use lateral movement to identify target assets and sensitive data for their attack. Lateral movement is listed as the tenth tactic in the MITRE ATT&CK framework.
No intra-VLAN filtering
This is possible because, traditionally, security devices do not filter this low down in the network, i.e., inside the VLAN; this is known as intra-VLAN filtering. A phishing email can easily land a bad actor on the LAN with broad-level access and the ability to move laterally throughout the network. For example, a bad actor might initially access an unpatched central file-sharing server, move laterally between segments to the web developers' machines, and use a keylogger to obtain the credentials needed to reach critical information on the all-important database servers. They can then carry out data exfiltration over DNS or even a social media account such as Twitter. Because firewalls generally do not inspect DNS as a file-transfer mechanism, data exfiltration using DNS will often go unnoticed.
Diagram: Zero Trust application access. One of the many security threats: lateral movement.

Issue 5 – The challenges with traditional firewalls
The limited world of 5-tuple
Traditional firewalls typically control access to network resources based on source I.P. addresses. This creates a fundamental challenge: we need to solve a user access problem, but we only have the tools to control access based on I.P. addresses. As a result, you have to group users, some of whom may work in different departments and roles, so that they access the same service from the same I.P. addresses. The firewall rules are also static; they don't change dynamically as the level of trust in a given device changes, and they convey only network information.
Perhaps the user moves to a riskier location, such as an Internet cafe, or their local firewall or antivirus software has been turned off by malware or even by accident. Unfortunately, a traditional firewall cannot detect this; it lives in the limited world of the 5-tuple. Traditional firewalls can only express static rule sets and cannot express or enforce rules based on identity information.
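A minimal sketch, with hypothetical addresses and field names, of what “the limited world of the 5-tuple” means in practice: the rule can only see network coordinates, so it has nowhere to record who the user is or how healthy the device is.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTupleRule:
    """The only fields a traditional firewall rule can express."""
    src_ip: str
    dst_ip: str
    protocol: str   # e.g. "tcp"
    src_port: str   # often "any"
    dst_port: int

# A static rule: whatever machine holds 10.1.20.15 may reach the database.
allow_db = FiveTupleRule("10.1.20.15", "10.1.30.5", "tcp", "any", 5432)

def permits(rule: FiveTupleRule, pkt: dict) -> bool:
    # The decision sees only network coordinates: there is no field for the
    # user's identity, the device's health, or where the device currently is.
    return (
        pkt["src_ip"] == rule.src_ip
        and pkt["dst_ip"] == rule.dst_ip
        and pkt["protocol"] == rule.protocol
        and pkt["dst_port"] == rule.dst_port
    )

# The rule still matches even if the laptop behind 10.1.20.15 is now sitting
# in an Internet cafe with its antivirus disabled.
print(permits(allow_db, {"src_ip": "10.1.20.15", "dst_ip": "10.1.30.5",
                         "protocol": "tcp", "dst_port": 5432}))  # True
```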
Issue 6 – A cloud-focused environment
To examine the cloud, let's make a comparison with a public parking space. Using a public cloud is like parking your car in a public lot rather than in your own garage. In a public lot there are multiple tenants: someone can take your space, and you don't know what they might do to your car.
Today we are very cloud-focused, but when moving applications to the cloud, we need to be equally security-focused. The cloud environment is less mature in providing the traditional security controls we are used to in our legacy environments, so when you put applications in the cloud, you shouldn't leave security at its defaults. Why? Firstly, we are operating in a shared model in which the tenant next to you could steal your encryption keys or data, and there have been plenty of cloud breaches. Cloud protection also suffers from firewalls with static rule sets and from authentication and key-management issues.
Control point change
One of the biggest problems is that the perimeter moves when you move to a cloud-based application. Servers are no longer under your control, and mobile devices and tablets exacerbate the problem because they can be located anywhere, so trying to control the perimeter is very difficult. More importantly, firewalls only have access to and control over network information, when they need far more context. Defining this new perimeter is exactly what ZTNA architecture and the software-defined perimeter do. Firewalls are now managed by the cloud users moving their applications to the cloud, not by I.T. teams within the cloud providers. So even though cloud providers supply security tools, the cloud consumer has to integrate security themselves to gain more security visibility than they have today.
Diagram: ZTNA. Zero Trust cloud security.

Before, we had clear network demarcation points set by a central physical firewall creating inside and outside trust zones. Anything outside was considered hostile, and anything on the inside was considered trusted.
1. Connection-centric model
The Zero Trust model flips this around and considers everything untrusted. To do this, there are no longer pre-defined fixed network demarcation points. Instead, the network perimeter that was initially set in stone is now fluid and software-based. Zero Trust is connection-centric, not network-centric. Each user on a specific device connected to the network gets an individualized connection to a specific service hidden by the perimeter. Instead of having one perimeter every user uses, SDP creates many small perimeters purposely built for users and applications. These are known as micro perimeters. Clients are cryptographically signed into these micro perimeters.
Diagram: Security micro perimeters.

2. Micro perimeters
The micro perimeter is based on user and device context and can dynamically adjust to environmental changes. So as a user moves to different locations or devices, the Zero Trust architecture can detect this and set the appropriate security controls based on the new context. The data center is no longer the center of the universe. Instead, the user on specific devices, along with their service requests, is the new center of the universe. Zero Trust does this by decoupling the user and device from the network. To remove the user from the network, there is a separation of the data plane from the control plane. The control plane is where the authentication happens first. Then the data plane, the client-to-application connection, transfers the data. Therefore the users don’t need to be on the network to gain application access. As a result, they have the least privilege and no broad-level access.
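A minimal sketch of a control-plane decision for such a micro perimeter, with hypothetical entitlements and context fields: the policy is evaluated per user, per device, and per application, and it is re-evaluated whenever the context changes, not just at login.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Signals the control plane evaluates continuously (illustrative fields)."""
    user: str
    device_id: str
    posture: str    # e.g. "compliant" or "av-disabled"
    location: str   # e.g. "corporate-lan" or "internet-cafe"

# Hypothetical per-application entitlements: each entry is its own micro perimeter.
ENTITLEMENTS = {("alice@example.com", "payroll-app"), ("bob@example.com", "wiki")}

def micro_perimeter_allows(ctx: Context, application: str) -> bool:
    """Control-plane decision; the data plane only carries traffic for
    connections this function approves."""
    healthy = ctx.posture == "compliant"
    low_risk = ctx.location in {"corporate-lan", "home-office"}
    return (ctx.user, application) in ENTITLEMENTS and healthy and low_risk

# The same user is re-assessed as their context changes.
at_desk = Context("alice@example.com", "laptop-42", "compliant", "corporate-lan")
in_cafe = Context("alice@example.com", "laptop-42", "av-disabled", "internet-cafe")

print(micro_perimeter_allows(at_desk, "payroll-app"))  # True: connection is brokered
print(micro_perimeter_allows(in_cafe, "payroll-app"))  # False: access is withdrawn
```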
A final issue 7 – The I.P. address conundrum
Everything today relies on I.P. addresses for trust, but there is a problem: an I.P. address carries no knowledge of the user, so it cannot be used to assign or validate trust in a device. There is no way for an I.P. address to do this. I.P. addresses provide connectivity but play no part in validating the trust of the endpoint or the user. Nor should I.P. addresses be used as an anchor for network locations, as they are today, because when a user moves from one location to another, the I.P. address changes.
Diagram: Three main network security flaws.

> You can't tie security to an I.P. address.
But what about the security policy assigned to the old I.P. addresses? What happens when your I.P. addresses change? Tying anything to an I.P. address is futile, because it gives us no valid hook on which to hang security policy enforcement. When you examine policy, there are several facets: the user access policy touches on authorization, the network access policy touches on what can be connected to, and user account policies touch on authentication. With any of these, I.P. addresses give no policy visibility. This is also a major problem for traditional firewalling, which relies on static configurations; for example, a static configuration may state that this particular source can reach this destination using this port number.
Security issues related to I.P.
1. Such a rule has no meaning. There is no indication of why it exists or under what conditions a packet should be allowed from one source to another.
2. There is no contextual information taken into consideration. When creating a robust security posture, we need to look at more than ports and I.P. addresses.
For a robust security posture, you need full visibility into the network to see who and what is connecting, when, how, and from which device. Unfortunately, today's firewall is static and only contains information about the network. Zero Trust, on the other hand, enables a dynamic firewall with user and device context, opened for a single secure connection at a time. The firewall remains closed at all other times, creating a “black cloud” stance regardless of whether the connections are made to the cloud or on-premises.
The rise of the next-generation firewall?
Next-generation firewalls are more advanced than traditional firewalls. They use information from layers 5 through 7 (the session, presentation, and application layers) to perform additional functions and can provide advanced features such as intrusion detection and prevention and virtual private networks. Today, most enterprise firewalls are “next generation” and typically include IDS/IPS, traffic analysis and malware detection for threat detection, URL filtering, and some degree of application awareness and control. Like the NAC market segment, vendors in this area began a journey toward identity-centric security around the same time Zero Trust ideas began percolating through the industry. Today, many NGFW vendors offer Zero Trust capabilities, but many still operate within the perimeter security model.
Still IP-based security systems
They are still IP-based systems offering limited identity- and application-centric capabilities. In addition, NGFWs are still static firewalls: most do not employ Zero Trust segmentation, and they often mandate traditional, perimeter-centric network architectures with site-to-site connections rather than offering flexible network segmentation capabilities. Like traditional firewalls, their access policy models are typically coarse-grained, giving users broader network access than is strictly necessary.