In the second of a series of guest posts on information security, Kathleen Moriarty of the Center for Internet Security (CIS) takes a close look at the benefits of zero trust architectures and what their increased adoption means for the industry.
Zero trust is an important architectural shift in information security. It moves us away from the perimeter defence-in-depth models of the past toward layers of control closer to what is valued most – the data. When initially defined by an analyst at Forrester, zero trust focused on the network, isolating applications to prevent attacker lateral movement. It has since evolved to become granular and pervasive, providing authentication and assurance between components, including microservices.
As the benefits of zero trust become increasingly clear, the model is being adopted ever more pervasively, relying upon a trusted computing base and data-centric controls as defined in NIST Special Publication 800-207. So, as zero trust becomes more pervasive, what does that mean? How do IT and cybersecurity professionals manage the deployment and maintain an assurance of its effectiveness?
Zero Trust Architecture: Never Trust, Always Verify
Zero trust architectures reinforce the point that no layer of the stack trusts the underlying components, whether hardware or software. As such, security properties are verified for every dependency and interdependency, on first use and intermittently thereafter (the dynamic authentication and verification tenets of zero trust), to assure they are as expected. Each component is built as if its adjoining or dependent components may be vulnerable. Each individual component therefore assumes responsibility for assuring the trust level it asserts, and must be able to detect a compromise or even an attempted compromise.
This can be a confusing paradigm in that zero trust instils the principle of isolation at every layer. This enforces the point of so-called zero trust between components, while verification of security properties and identity is continually performed to provide assurance that expected controls are met. One component may choose not to execute if the expected properties of its dependencies cannot be assured. Zero trust architectures assert the principle of “never trust, always verify.” This enables detection and prevention of lateral movement and privilege escalation for each component, and results in higher assurance for the system and software.
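As a minimal, hypothetical sketch of this pattern, the Python snippet below shows a component refusing to use a dependency until the dependency's claimed identity and security properties have been verified against expectations. The property names, the HMAC-based signing of claims, and the `verify_dependency` helper are all assumptions for illustration, not part of any specific zero trust product or standard.

```python
import hashlib
import hmac

# Properties this component expects its dependency to prove before first use
# and on periodic re-verification (illustrative values only).
EXPECTED_PROPERTIES = {
    "identity": "payment-service",
    "min_tls_version": "1.3",
    "measurement": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def verify_dependency(claims: dict, signature: bytes, shared_key: bytes) -> bool:
    """Return True only if the claims are authentically signed and match expectations."""
    # 1. Verify the claims are authentic (signed with a key we already trust).
    message = repr(sorted(claims.items())).encode()
    expected_sig = hmac.new(shared_key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, signature):
        return False
    # 2. Verify every expected property matches exactly; any mismatch means
    #    the component declines to execute against this dependency.
    return all(claims.get(k) == v for k, v in EXPECTED_PROPERTIES.items())
```

In a real deployment the same check would run on first use and then intermittently, so that an attacker who tampers with a dependency mid-session is detected rather than trusted indefinitely.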
Core Tenets
Identity, authentication, authorisation, access controls, and encryption are among the core tenets of any zero trust architecture, where deliberate and dynamic decisions are continuously made to verify assurance between components. While zero trust is often discussed at the network layer, reflecting its origin as a Forrester concept, its definition has evolved considerably over the last decade into a pervasive concept that spans infrastructure, device firmware, software, and data.
Zero trust is often discussed as it relates to the network, with applications isolated by network segment and controls such as strong encryption and dynamic authentication enforced. Zero trust can also be applied at the microservices level, providing assurance of controls and measurements via verification between services. This granular application of the model further strengthens prevention and detection of attacker lateral movement.
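One common way to realise this between microservices is mutual TLS, where each service authenticates the other on every connection instead of trusting the network segment. The sketch below, using Python's standard ssl module, is a hypothetical client-side configuration; the certificate and CA file names and the `connect_with_mutual_tls` helper are placeholders.

```python
import socket
import ssl

def connect_with_mutual_tls(host: str, port: int) -> ssl.SSLSocket:
    """Open a connection in which both client and server must present valid certificates."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_3
    # Trust only the internal CA, not the system default store (placeholder path).
    context.load_verify_locations(cafile="internal-ca.pem")
    # Present this service's own certificate so the server can authenticate us too.
    context.load_cert_chain(certfile="client.pem", keyfile="client-key.pem")
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

A corresponding server-side context would also set verify_mode to ssl.CERT_REQUIRED so that unauthenticated peers are rejected before any request is processed.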
Infrastructure Assurance
Zero trust begins with infrastructure assurance, and it has become pervasive up the stack and across applications. A hardware root of trust (RoT) is immutable, with a cryptographic identity bound to the Trusted Platform Module (TPM). The infrastructure assurance example instils the tenets of a zero trust architecture: upon boot, the system first verifies that the hardware components are as expected.
Next, the system boot process verifies the system and each dependency against a set of so-called “golden policies”, which include expected measurements attested to with a digital signature using the cryptographic identity in the TPM. If one of the policy comparisons does not match, the process may be restarted, or the system boot may be halted. While there are several hardware and software-based RoT options, the NIST resiliency guidelines for firmware and BIOS are generally followed in developing the policies and measurements used from boot.
Attestations are signed by a RoT at each stage of the boot process and are used both to identify the relying components and to provide an assurance of trust, thus at the most basic level verifying that the system and its components are as required. The dependencies may be chained or may be verified individually. These attestations are also provided at runtime, supporting the zero trust requirement for dynamic authentication and access control – in this case, for infrastructure components. Attestations aid in the requirement to verify the identity of components, which is essential for providing assurance of those components.
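The sketch below illustrates, in a highly simplified and hypothetical form, how signed boot measurements might be compared against a golden policy: the attestation signature is checked against the root of trust's public key (using the third-party cryptography package), then each measurement is compared with its expected value. The policy format, digest placeholders, and the `verify_boot_attestation` helper are assumptions; a real implementation would rely on TPM quotes and vendor-supplied reference measurements.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

# Golden policy: expected SHA-256 digests for each boot component (placeholders).
GOLDEN_POLICY = {
    "firmware": "a1f0...e3",
    "bootloader": "9c4d...77",
    "kernel": "5b21...0a",
}

def verify_boot_attestation(measurements: dict, signature: bytes, rot_public_key) -> bool:
    """Return True only if the measurements are signed by the root of trust
    and every measured component matches the golden policy."""
    payload = repr(sorted(measurements.items())).encode()
    try:
        # Verify the attestation was produced by the hardware root of trust (RSA assumed here).
        rot_public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return False
    # Any mismatch would cause the boot process to halt or restart.
    return all(measurements.get(name) == digest for name, digest in GOLDEN_POLICY.items())
```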
Any attacker that has infiltrated the component or software would need to survive this dynamic and periodic verification and authentication to remain a threat. The attacker would also have to figure out how to escalate privileges or move laterally between isolated components that don’t trust each other.
Trusted Control Sets
The Trusted Computing Group’s (TCG) Reference Integrity Manifest, based on NIST’s Firmware Resiliency Special Publication, provides the trusted controls for policy and measurement of firmware. Further up the stack, trusted control sets that provide the verification necessary for zero trust include the CIS Controls and the CIS Benchmarks. Trusted third parties such as NIST, CIS, and TCG provide the necessary external and established vetting process to set control and benchmark requirements. An example would be attestations used to demonstrate compliance with a CIS operating system or container benchmark at a specified level of assurance.
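As a loose illustration only, the sketch below compares a handful of host settings against benchmark-style expected values and produces a pass/fail result that could be signed as evidence. The setting names and values are invented placeholders, not actual CIS Benchmark recommendations; in practice an established assessment tool such as CIS-CAT would perform this evaluation.

```python
# Placeholder expectations in the spirit of a hardening benchmark (not real CIS values).
EXPECTED_SETTINGS = {
    "password_min_length": 14,
    "ssh_root_login": "no",
    "firewall_enabled": True,
}

def assess(current_settings: dict) -> dict:
    """Return a pass/fail result per setting, suitable for signing as attestation evidence."""
    return {
        name: current_settings.get(name) == expected
        for name, expected in EXPECTED_SETTINGS.items()
    }

print(assess({"password_min_length": 14, "ssh_root_login": "yes", "firewall_enabled": True}))
# {'password_min_length': True, 'ssh_root_login': False, 'firewall_enabled': True}
```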
What Evidence Supports this Shift to Zero Trust?
Interestingly, at about the same time that zero trust architectures began to take shape, Lockheed Martin developed its Cyber Kill Chain (in 2011). The Cyber Kill Chain was first defined to separate the stages of an attack, enabling mitigation and detection defences between stages. The MITRE ATT&CK Framework, which builds on Lockheed Martin’s model and incorporates gaps identified through use and the evolving threat landscape, is used more predominantly today. For the purposes of this article, the Cyber Kill Chain will be used to simplify the correlation, but the discussion can be abstracted to the MITRE ATT&CK Framework.
The Lockheed Martin Kill Chain was developed in response to the ever-increasing sophistication of advanced persistent threat (APT) attacks, which had shifted to include supply chain attacks. By implementing defences and controls between attack phases, including requirements to prove identity (dynamically) via authentication, attackers’ lateral movement or privilege escalation attempts could be more easily detected. Moving detection and prevention earlier in the kill chain is ideal to prevent attacks from succeeding (e.g., exfiltration of data or disruption within the network).
Applying detection and prevention techniques pervasively in the stack and across applications and functions, with dynamic access controls to authenticate and verify attested components, supports zero trust architectural tenets and enables detection early in the kill chain. The evidence that the tenets of zero trust are working is clear when you consider its deployment in concert with kill chain detection controls, as reflected in attacker dwell time trends.
Reducing Dwell Time
Since the kill chain was first put into use, attacker dwell time (the time an attacker remains on a network undetected) has been dramatically reduced. This can be clearly seen in both the global and regional dwell time changes as different regions adopted the Cyber Kill Chain and zero trust defences. According to FireEye’s M-Trends annual reports, the global median dwell time was 229 days in 2013 and had fallen to 56 days by the 2020 report. The regional numbers also support the success of this architectural approach, given the known disparity in adoption of the zero trust architectural pattern and the defence frameworks of the Kill Chain and MITRE ATT&CK.
The United States was known to be an early adopter of both. Taking 2017 as an example, the median dwell time was 75 days in the Americas and 172 days in Asia. Smaller organisations, or those with fewer resources, in any region and at any point in time may experience wildly different dwell times from larger, well-resourced organisations. Even so, the dwell time numbers help demonstrate the success of these controls with tangible data.
Zero trust evolved from a network-only definition, where applications were segregated, to a more granular model that supports detecting unexpected behaviours between all components. The logical connection between zero trust and the Lockheed Kill Chain demonstrates the clear value of both models. It also helps to project the future of zero trust as increasingly data-centric, built upon a foundation of isolated components that, from boot in the infrastructure, attest to their verified identity and assurance levels up and across the stack to the microservices level.
NIST SP 800-207 defines zero trust as follows:
“Zero trust (ZT) provides a collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised. Zero trust architecture (ZTA) is an enterprise’s cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies. Therefore, a zero trust enterprise is the network infrastructure (physical and virtual) and operational policies that are in place for an enterprise as a product of a zero trust architecture plan.”
Tenets of Zero Trust
The following list is sourced from the NIST CSRC publication SP 800-207; a sketch illustrating several of these tenets in a per-request access decision follows the list.
- All data sources and computing services are considered resources
- All communication is secured regardless of location
- Access to individual enterprise resources is granted on a per-session basis
- Access to resources is determined by dynamic policy
- All owned and associated devices are in the most secure state possible
- All resource authentication and authorization are dynamic and strictly enforced
- Collect as much information as possible on current state of network infrastructure to improve security posture
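The sketch below is a hypothetical, simplified per-request access decision combining several of these tenets: identity, device posture, session age, and resource sensitivity are all evaluated dynamically, and any failed check denies access. The attribute names and thresholds are illustrative assumptions, not a prescribed policy model.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_verified: bool
    device_compliant: bool        # e.g. posture assessment passed
    session_age_minutes: int
    resource_sensitivity: str     # "low", "medium", "high"

def decide(request: AccessRequest) -> bool:
    """Grant access only if every dynamic check passes for this specific request."""
    if not (request.user_authenticated and request.device_compliant):
        return False
    if request.session_age_minutes > 60:          # force periodic re-authentication
        return False
    if request.resource_sensitivity == "high" and not request.mfa_verified:
        return False
    return True

print(decide(AccessRequest(True, False, True, 10, "high")))   # False: MFA required
print(decide(AccessRequest(True, True, True, 10, "high")))    # True
```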
An objective of the Lockheed Kill Chain is to proactively detect threats. The tenets of zero trust aid in prevention and detection along the phases of the kill chain.
- Reconnaissance: Harvesting email addresses, conference information, network data
- Weaponisation: Coupling exploit with backdoor into deliverable payload
- Delivery: Delivering weaponised bundle to the victim via email, web, USB, etc.
- Exploitation: Exploiting a vulnerability to execute code on victim’s system
- Installation: Installing malware on the asset
- Command & Control (C2): Command channel for remote manipulation of victim
- Actions on Objectives: With hands on keyboard access, intruders accomplish their original goals
Lockheed Kill Chain mapped to NIST Zero Trust Tenets
| Kill Chain Phase | Zero Trust Tenets |
|---|---|
| 1. Reconnaissance | 1 - Inventory and monitoring of all assets |
| | 2 - Encryption to limit information gathering |
| | 7 - Detection of unusual behaviours with log analysis and advanced AI/ML capabilities |
| 2. Weaponisation | |
| 3. Delivery | 5 - Increases difficulty for any delivery to be successful, as only authorised code and communications are permitted |
| 4. Exploitation | 3 - Access granted on a per-session basis to limit the scope of an attack |
| | 4 - Dynamic policy may be used to remove access for an attacker, e.g. if posture assessment fails |
| | 6 - Dynamic authentication prevents the attacker from remaining if authentication fails on retry |
| | 7 - Detection of exploit through log analysis |
| 5. Installation | 3 - Access granted on a per-session basis to limit the scope of an attack |
| | 4 - Dynamic policy may be used to remove access for an attacker, e.g. if posture assessment fails |
| | 5 - Prevents unauthorised software or firmware from executing |
| | 6 - Dynamic authentication prevents the attacker from remaining if authentication fails on retry |
| | 7 - Detection of installation through log analysis |
| 6. Command and Control | 5 - Prevents unauthorised communication on systems and network |
| | 7 - Detection of anomalous behaviours on systems and network |
| 7. Actions on Objectives | 5 - Prevents unauthorised communication on systems and network |
| | 7 - Detection of anomalous behaviours on systems and network |
______________
Read the first in this series: Transforming Information Security to Secure Business
Originally published as a CTO blog post on the CIS website on 19 January.