Zero Trust: Same Old Controls, or Something New and Shiny?

What is Zero Trust?

Michael H, 16 Jun 2022

One of the hottest topics in cybersecurity today is Zero Trust. Just in the past 20 months, DoD[i] and NIST[ii] published Zero Trust Architectures, CISA published a Zero Trust Maturity Model[iii], an Executive Order[iv] and OMB Memorandum[v] levying requirements on government organizations to implement Zero Trust were issued, and GSA released a Zero Trust Architecture Buyer’s Guide[vi]. Not to mention all the vendors advertising Zero Trust solutions, Gartner and Forrester publications, and the other 3,660,000,000 results from a Google search on the term “Zero Trust.” Likewise, a search on the term "How to Implement Zero Trust" yielded 80,900,000 results. It’s safe to say there is a lot of interest in Zero Trust.

But what is Zero Trust? According to Gartner[vii], Zero Trust network security “starts with a default deny posture of [trusting nothing]. Access is granted based on the identity of the humans and their devices — plus other attributes and context, such as time/date, geolocation, device posture, etc. — and adaptively offers the appropriate trust required at the time.” This is a significant paradigm shift from traditional, “trust, then verify” perimeter-based network security, which seeks to keep malicious actors out while assuming that anyone already inside the network can be trusted. The Zero Trust security model eliminates the idea of a trusted network, assuming instead that threats exist both inside and outside a network, so no user or device can be trusted implicitly. Under this model’s principle of “verify, then trust,” permissions are granted to user accounts, devices, applications, or services only once they are properly validated, and trust must be continually revalidated to maintain access.
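
To make the “default deny, then adaptively grant” idea concrete, here is a minimal sketch in Python of the kind of decision a Zero Trust policy engine makes for each access request. Everything in it (the AccessRequest fields, the allowed locations, the working-hours window) is a hypothetical illustration, not any product’s actual policy model.

```python
# A minimal, hypothetical sketch of a default-deny access decision.
# The AccessRequest fields, allowed locations, and working-hours window
# are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool        # identity proven with a second factor
    device_compliant: bool    # e.g., patched, endpoint agent healthy
    geolocation: str          # e.g., country code reported at login
    resource: str             # what the user is asking to reach

ALLOWED_LOCATIONS = {"US", "CA"}  # example context rule

def authorize(req: AccessRequest) -> bool:
    """Default deny: every identity and context check must pass."""
    if not req.mfa_verified:
        return False  # identity not strongly verified
    if not req.device_compliant:
        return False  # device posture fails
    if req.geolocation not in ALLOWED_LOCATIONS:
        return False  # context (location) fails
    if datetime.now(timezone.utc).hour not in range(6, 22):
        return False  # context (time of day) fails
    return True       # trust granted for this request only

print(authorize(AccessRequest("alice", True, True, "US", "payroll-app")))
```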

With all the buzz, you’d think Zero Trust is a brand-new silver bullet for all our cybersecurity concerns. But if you’ve worked in cybersecurity for a while, you’ll find much of Zero Trust to be hauntingly familiar. Let’s explore that further:

Since it seems that each publication addressing Zero Trust has a different take on the concept, for simplicity let’s just look at one of the Zero Trust frameworks, the DHS CISA Zero Trust Maturity Model.

The CISA model shown in Figure 1 organizes capabilities required to achieve Zero Trust into five categories (referred to as pillars): Identity, Device, Network/Environment, Application Workload, and Data. The capabilities listed in each pillar are then divided into three maturity levels: Traditional, Advanced, and Optimal. It’s here that a sense of déjà vu begins to set in when looking at the security functions called out for each block of the Maturity Model.

As we begin to examine the CISA model, we see Multifactor Authentication and Federated Identity required under the Identity pillar. Nothing new here conceptually. The original patent[viii] for the SecurID token was filed in 1993, Chip and PIN for credit cards was introduced in France in 1984[ix], and PayPal began offering a token for multifactor authentication around 2006[x]. So multifactor authentication has been available to enhance identity security for decades. Standards for Federated Identity Management (Single Sign-On) such as OpenID have been around since 2005[xi], and many commercial solution providers have broad user bases today. Again, nothing new.

Under the Device pillar, Device Compliance Enforcement and Data access management based on initial device posture are specifically called out. Requirements for automated asset management, vulnerability scanning, and patching are also levied in the detailed descriptions of this pillar. Again, not really anything new. In the SANS Consensus Audit Guidelines[xii] (the precursor to the SANS Top 20 Security Controls) dated 02 Mar 2009, the first control requires an inventory of all devices on the network and the third requires secure configurations for all network devices. NIST 800-53[xiii], published in February 2005, addressed requirements for configuration management. For over a decade, we have had Network Access Control and VPN solutions that install a client on the user’s computer and assess device security status and compliance before admitting the device onto the network. More recent methods add other sources of data, such as endpoint security scanning and network vulnerability scans, to facilitate that determination. This is the basis of the DoD comply-to-connect strategy and associated implementation framework[xiv] developed over the past decade. Everything in the Traditional and Advanced blocks of this pillar is a well-known security requirement with existing solutions.
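
For illustration, here is a rough sketch of the kind of pre-admission posture check a Network Access Control or comply-to-connect solution performs. The data sources, field names, and thresholds are assumptions made up for this example, not any vendor’s actual API.

```python
# A hypothetical sketch of a pre-admission posture check, in the spirit of
# NAC / comply-to-connect. Field names, data sources, and thresholds are
# assumptions for illustration only.

def admission_decision(agent_report: dict, scan_findings: list) -> str:
    """Combine endpoint-agent status with vulnerability scan results."""
    if not agent_report.get("av_running") or not agent_report.get("disk_encrypted"):
        return "quarantine"  # fails baseline secure configuration
    if agent_report.get("days_since_patch", 999) > 30:
        return "quarantine"  # patch level too old
    if any(f.get("severity") == "critical" for f in scan_findings):
        return "quarantine"  # open critical vulnerabilities
    return "admit"           # device may join the network

# Example usage:
print(admission_decision(
    {"av_running": True, "disk_encrypted": True, "days_since_patch": 12},
    [{"id": "example-finding", "severity": "low"}],
))  # -> admit
```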

In the Network/Environment pillar, we see macro-segmentation, traffic encryption, internal micro-segmentation using micro-perimeter controls, as well as basic network traffic analytics. Traditional firewall and VLAN technology to enable macro-segmentation has been available since the late 90s[xv]. The Information Flow Enforcement control to enforce segmentation was first added to NIST 800-53 in Rev. 1, published in 2006. Next Generation firewalls coupled with Active Directory (or other user-to-IP tie-ins) have been capable of enabling network micro-segmentation for over a decade.

The concept of micro-perimeters is a more recent development that moves the firewall to the application. For comparison, network micro-segmentation might sit at the front of a data center and control which IP addresses, and for web applications, which URLs a user can reach. The micro-perimeter sits with the application and can control access with finer granularity than the URL, such as the types of queries allowed. While the term micro-perimeter is new, the ability to provide this level of data access control at the application level is not. The Multiplexed Information and Computing Services (Multics) operating system, designed in the late 1960s and evolved through the 1980s, provided an application-based data protection capability similar to a micro-perimeter. The operating system could prevent direct user access to data, making the application the enforcement point for data access. The concepts are the same, differing only in the method of access to the application.
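
A tiny sketch may help show what enforcement at the application, rather than at the network, looks like. The roles and operation categories below are invented for illustration; the point is that the user never reaches the data store directly, and the application decides not just which resource is reachable but what kind of query is permitted.

```python
# A minimal sketch of enforcement at the application rather than the network.
# Roles and operation categories are invented for illustration; users never
# query the data store directly, the application does it on their behalf.

ALLOWED_OPERATIONS = {
    "analyst":  {"read"},            # may run read-only queries
    "engineer": {"read", "update"},  # may also modify records
}

def run_query(role: str, operation: str, query: str) -> str:
    """The application is the enforcement point for data access."""
    if operation not in ALLOWED_OPERATIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not perform '{operation}'")
    # ...here the application would execute the query against the data store...
    return f"executed {operation}: {query}"

print(run_query("analyst", "read", "count open incidents"))
# run_query("analyst", "update", "close incident 42") would raise PermissionError
```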

Other Network/Environment pillar capabilities have also been around. We have had the capability to do basic traffic analysis for 20+ years. And strong encryption for data-at-rest and data-in-motion has been recommended and commercially available for decades.

The Application Workload pillar at the Advanced maturity level requires centralized authentication and authorization for access to applications and the integration of threat protections into application workflows. Standards and solutions to meet these requirements have been available for almost forty years, dating back to when MIT developed Kerberos[xvi] in the 1980s. Requirements for application security date back to the early publications of both the SANS Top 20 Security Controls and NIST 800-53. The migration to cloud services has accelerated the use of centralized authentication services.

The final pillar, Data, requires application of least privilege data access and encryption of data at rest. The concept of least privilege dates back to 1983, when the National Computer Security Center (NCSC) Trusted Computer System Evaluation Criteria (TCSEC, a.k.a. the Orange Book)[xvii] was initially released. The original 2009 SANS critical controls included “Controlled Access Based on Need to Know,” and the original 2005 release of NIST 800-53 included the Least Privilege security control. It was not until the third revision of 800-53 in 2010 that a control specifically addressing Protection of Information at Rest and recommending encryption appeared, but even that was twelve years ago. Again, all well-known cybersecurity principles.

The question, then, is whether Zero Trust is just old wine in a new bottle, or whether it brings something new to the fight. Zero Trust is at least shining a new spotlight on required security controls that haven’t been universally applied despite having been known and recommended for many years. If so many of the tenets of Zero Trust have been recommended by security practitioners for decades, why aren’t they already in widespread use? The answer lies in the fact that security is the enemy of ease of use. Network users don’t want to be burdened with carrying tokens for multifactor authentication, remembering strong passwords, asking for access to data they didn’t need to do their job until just now, tolerating networks slowed by the need to encrypt and decrypt data, and all the other “burdens” of strong cybersecurity. Security is also hard, often requiring restructuring of network architecture and additional capital investment. Security is often seen as an unnecessary cost by senior executives, at least until after they experience a ransomware attack or major security breach. Mandatory requirements for Zero Trust may be just the incentive needed to get network owners to finally implement all those well-known security controls we’ve recommended for decades.

Zero Trust also brings with it several newer concepts. The core tenet of Zero Trust is to assume that the network has already been compromised and that malicious actors are inside the network perimeter. This idea encourages stronger identity management and access controls, shifts the defensive focus from boundary defense to role-based and attribute-based access, network segmentation, and data segmentation, and treats each application as a boundary. This leads to a key new concept: the user logs into applications, not the network, and data is accessed through the application based on the user’s need to know and right to know. The paradigm shifts from blanket trust to transactional trust.

The truly innovative elements of Zero Trust are found in the requirements for the Optimal level of the CISA Zero Trust Maturity Model. Continuous validation isn’t that hard to achieve: most centralized authentication/authorization systems issue a token with a limited lifetime, and when the token expires, re-authentication and re-authorization are required. In contrast, real-time analysis, continuous monitoring, machine learning, encryption of all data at rest and in motion, and the other Optimal requirements are beyond the reach of most current enterprises. Virtualization, cloud services, artificial intelligence, machine learning, and automation capabilities are required to achieve these goals. The technology is still evolving as new approaches and techniques improve how much of this work machines can do, because in most places there aren’t enough human resources to manage and analyze the data flow, much less do it in real time. For most network owners, this will require significant investment and a rebuild of their networks to add these capabilities. These investments and changes are unlikely to happen overnight. In the near term, network owners can still implement those legacy elements of Zero Trust that are well understood and for which many commercial solutions are available.
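
As a simple illustration of the token-lifetime mechanism, the sketch below issues a short-lived token and rejects requests once it expires, forcing re-authentication. The claim names echo common JWT conventions (sub, exp), but this is only a toy model, not a real token service.

```python
# A toy sketch of the token-lifetime mechanism: the authorization service
# issues a short-lived token, and every request checks the expiry so trust
# must be re-established when the token lapses. Claim names echo JWT (sub,
# exp), but this is not a real token implementation.
import time

TOKEN_LIFETIME_SECONDS = 15 * 60  # e.g., 15-minute sessions

def issue_token(user_id: str) -> dict:
    return {"sub": user_id, "exp": time.time() + TOKEN_LIFETIME_SECONDS}

def validate(token: dict) -> bool:
    """Reject expired tokens, forcing the caller to re-authenticate."""
    return time.time() < token["exp"]

token = issue_token("alice")
print(validate(token))  # True while the token is fresh
# Once 'exp' passes, validate() returns False and access must be revalidated.
```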

The bottom line is that Zero Trust can serve as a forcing function for the implementation of good security practices that cybersecurity practitioners have recommended for many years, while encouraging adoption of even stronger controls enabled by newer technologies. Adoption of Zero Trust principles will help minimize the blast radius of a data breach to a single user session rather than the entire enterprise. And the increased automation and centralized control required for an optimal implementation of Zero Trust will reduce the burden on administrators implementing required security controls, making those controls more dynamic and more transparent to users. Finally, remember that Zero Trust is a journey, not a destination; each step taken provides more protection for organizational data, and the trip is well worth taking.

References:

[i] Joint Defense Information Systems Agency (DISA) and National Security Agency (NSA), Department of Defense (DOD) Zero Trust Reference Architecture (2021).

[ii] Rose, S., Borchert, O., Mitchell, S., & Connelly, S., NIST Special Publication 800-207 Zero Trust Architecture (2020).

[iii] CISA Cybersecurity Division, Zero Trust Maturity Model (2021).

[iv] Exec. Order No. 14028, 86 Fed. Reg. 93 (May 12, 2021).

[v] Office of Management and Budget, Memorandum M-22-09 Moving the U.S. Government Toward Zero Trust Cybersecurity Principles (2022).

[vi] GSA, Zero Trust Architecture Buyer's Guide (2021).

[vii] McQuaid, A., MacDonald, N., Watts, J., & Handa, S. (2022). Market Guide for Zero Trust Network Access. Gartner.

[viii] Weiss, K. P. (1996, January 16). Enhanced security for a secure token code.

[ix] Roos, D. (2014, May 16). How chip and PIN credit cards work. HowStuffWorks. Retrieved May 18, 2022, from https://money.howstuffworks.com/personal-finance/debt-management/chip-and-pin-credit-cards.htm

[x] Gibson, S. (2007, August 2). Security now! Transcript of Episode #103. Retrieved May 18, 2022, from https://www.grc.com/sn/sn-103.htm

[xi] Fitzpatrick, B. (2005, May 16). Distributed identity: Yadis. LiveJournal. Retrieved May 18, 2022, from http://community.livejournal.com/lj_dev/683939.html

[xii] Gilligan, J. (2009, February 27). Consensus Audit Guidelines - Draft 1.0. SANS.org. Retrieved May 18, 2022, from http://www.sans.org/cag/guidelines.php

[xiii] NIST, Recommended Security Controls for Federal Information Systems and Organizations (2005). Gaithersburg, MD: U.S. Dept. of Commerce, National Institute of Standards and Technology.

[xiv] Defense Information Systems Agency. Fact Sheets. (n.d.). Retrieved May 23, 2022, from https://www.disa.mil/about/fact-sheets

[xv] "IEEE Standards for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks," in IEEE Std 802.1Q-1998 , vol., no., pp.1-214, 8 March 1999, doi: 10.1109/IEEESTD.1999.89204.

[xvi] S. P. Miller, B. C. Neuman, J. I. Schiller, and J. H. Saltzer, Section E.2.1. Kerberos Authentication and Authorization System, M.I.T. Project Athena, Cambridge, Massachusetts (December 21, 1987).

[xvii] National Computer Security Center, DoD 5200.28-STD Department of Defense Trusted Computer System Evaluation Criteria (1985).