Day One Exploits: How to Effectively Reduce the Threat

Kathleen Moriarty

5 min read


Recent day one attacks have demonstrated how difficult it can be for organisations to react when vulnerabilities are announced.


Cyber hygiene and patching are key measures towards protecting data and systems. However, it’s not always possible or practical to patch when vulnerabilities and associated patches are announced. This problem gives rise to day one exploits.

Day one exploits are responsible for attacks such as the recent Microsoft Exchange attack that compromised hundreds of thousands of organisations. This began as a zero-day exploit and was followed by numerous day one exploits once the vulnerabilities were announced. Day one exploits were also used by Iranian threat actors about a year ago to gain access to financial sector networks via published virtual private network (VPN) vulnerabilities.

Patching Systems – Not Always an Easy Fix

The crux of the problem is the hurdles organisations face when patching systems, and this is what leads to intrusions on such a large scale. Additionally, once an organisation is infiltrated, recovery can require complete system rebuilds, as was the case with the recent attacks.

How can we build our infrastructure in such a way that patching is simplified and complete recovery is enabled?

Twenty years ago, at a previous employer, our email was Unix-based. The system administrators had the IMAP-based mail server set up with a mirror. The team could:

  • Break the mirror
  • Patch or even rebuild a system
  • Restore screened content from backup
  • Prepare the system to go back online
  • Initiate a momentary outage to bring the clean system up
  • Wipe the system that was still vulnerable
  • Prepare the system to go back online
  • Rejoin the mirror

This process allowed for the mail servers to be rebuilt or patched with a minimum of downtime and could also be followed for any other server configured this way.
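The steps above can be sketched as a simple orchestration routine. Everything here is a hypothetical placeholder: a real setup would invoke the mirroring, backup, and imaging tooling for the platform in question, but the ordering is the point.

```python
# Minimal sketch of the mirrored-rebuild cycle described above.
# Each step name is a stand-in for a real operation against the
# mirror, backup, and imaging tooling in use.

def rebuild_cycle(log):
    """Run the rebuild steps in order, recording each one in `log`."""
    steps = [
        "break_mirror",             # detach the standby from the mirror
        "patch_or_rebuild",         # patch, or reimage from a known-good image
        "restore_screened_backup",  # restore only content that passed screening
        "prepare_standby",          # apply config, verify the hygiene baseline
        "cut_over",                 # momentary outage; standby becomes primary
        "wipe_old_primary",         # wipe the still-vulnerable system
        "prepare_old_primary",      # rebuild it to the same baseline
        "rejoin_mirror",            # resynchronise the pair
    ]
    for step in steps:
        log.append(step)  # placeholder for executing the real operation
    return log
```

The key property is that the vulnerable system is never simply patched in place: it is wiped and rebuilt after the clean system has taken over, so any persistence an attacker established does not survive the cycle.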

Using “Cloud Native” Environments for Rapid Updates

Resiliency such as this is increasingly important with infiltrations that are difficult to detect. The underlying approach of patching systems to thwart vulnerabilities is, as we have established, flawed. This leads to the question “Have my systems been infiltrated?”. Patching alone does not close any backdoors that may have been created by an attacker as a method of establishing persistence. Patching cycles can take time to catch up to the systems in question. An internal capability to manage and process system reimaging is a necessity.

For many services that can be run in virtual environments, following the cloud native architectural style, it is possible to move a workload to a new instance of a patched or rebuilt application or server. Application data is screened and verified prior to being restored to this new environment. DevOps practices such as decoupling of modules and mobility in cloud native environments ensure that rapid updates are possible. This can be done without impact to the supported service.
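A toy blue/green-style cutover illustrates the workload move the paragraph describes. The router, instance names, and the screening check are all illustrative stand-ins, not any particular platform's API.

```python
# Hedged sketch of moving a workload to a freshly built instance.
# The router and screening step are hypothetical simplifications of
# a real load balancer and content-verification pipeline.

class Router:
    """Toy traffic router: requests go to whichever instance is live."""
    def __init__(self, live):
        self.live = live

    def switch_to(self, instance):
        self.live = instance  # single cutover; no user-visible downtime

def screen_data(records):
    """Placeholder screening: keep only records that passed verification."""
    return [r for r in records if r.get("verified")]

def move_workload(router, old_data):
    """Build a patched replica, restore screened data, switch traffic."""
    new_instance = {"name": "patched-replica", "data": screen_data(old_data)}
    router.switch_to(new_instance)  # traffic now hits the rebuilt instance
    return new_instance
```

Because the new instance is built from a patched image and only screened data is restored to it, the old, possibly compromised instance can be discarded entirely once traffic has moved.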

This is critical to the resiliency needed to recover from today’s attacks. The restored or rebuilt system should be configured to meet policy requirements and best practices for security configurations.

Resources and Planning

Best practices for security configurations are intended to ensure system hygiene and reduce the chance of attacks. The CIS Benchmarks and CIS Controls provide guidance as to how you should prioritise policy and control implementations to reduce risk to your organisation. There are over 100 CIS Benchmarks across more than 25 vendor families available for applications, operating systems, services, and devices.

Applications like Microsoft Exchange are more complex to patch and restore seamlessly, but this level of recoverability can be achieved with planning and the use of a database availability group (DAG). Planning is required to architect a network flexible enough to ease the recovery process, though in some cases this level of resourcing is not possible. Where virtual environments can be used, moving workloads is an excellent present-day option.

Zero Trust Architecture and DevOps

What if patching were less scary? We are all accustomed to testing patches prior to deploying updates to systems. This causes a delay in when the associated vulnerabilities can be mitigated, leaving the door open for day one exploits.

With consideration, the move toward pervasive zero trust architectures and DevOps processes could help reduce this problem. In DevOps, modules are kept minimal and reused rather than rewritten. There is a movement toward reducing the coupling of code to allow for faster updates beyond cloud native deployments. Applications have also steadily reduced any coupling to operating systems. This in turn minimises any unforeseen impact of patches in different environments.

In zero trust architectures, applications or even components do not trust other applications or components, and perform verification as expected before authorising access. This decoupling and reduction of reliance on adjoining modules, components, applications, and operating systems will enable faster patching times. Ideally, vendors will embrace these concepts and make it possible for near-immediate patching without distributed testing at each site.
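The "verify before authorising" pattern can be sketched in a few lines. The shared-secret HMAC token here is a deliberately simplified stand-in for a real mechanism such as mutual TLS or signed service tokens; the names and key are hypothetical.

```python
# Illustrative sketch of per-request verification between components:
# each component checks the caller's credential itself rather than
# trusting the network or an adjacent service. The HMAC scheme is a
# simplified stand-in for a production credential mechanism.

import hashlib
import hmac

SECRET = b"shared-service-secret"  # hypothetical per-service key

def issue_token(caller_id: str) -> str:
    """Issue a token binding the caller's identity to an HMAC."""
    mac = hmac.new(SECRET, caller_id.encode(), hashlib.sha256).hexdigest()
    return f"{caller_id}:{mac}"

def authorise(token: str) -> bool:
    """Verify every request; nothing is trusted by default."""
    caller_id, _, mac = token.partition(":")
    expected = hmac.new(SECRET, caller_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

Because each component performs this check independently, a patched or rebuilt component can be dropped into the architecture without the rest of the system having to re-establish implicit trust in it.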

Establishing Resilience Against Day One Vulnerabilities

Resilient infrastructure coupled with lower risk patching from vendors will help close the attack window for day one vulnerabilities. If operating systems and application providers increasingly embrace DevOps principles, organisations will be able to patch systems more effectively. Until then, determine if and how your architecture can be more resilient. You'll want to enable recovery of completely rebuilt systems that meet hygiene requirements.

Cloud-hosted environments often include this level of resiliency, especially if they are based on cloud native models. The threat landscape as indicated by recent attacks has demonstrated the need for this level of resiliency. It should be a priority for all organisations to determine the best risk mitigation strategy.

For additional information on the concepts described, see Transforming Information Security: Optimizing Five Concurrent Trends to Reduce Resource Drain sections 7.2 and 7.3.


Originally published as a CTO blog post on the CIS blog.


About the author

Kathleen Moriarty, technology strategist and board advisor, helping companies lead through disruption. Adjunct Professor at Georgetown SCS, also offering two corporate courses on Security Architecture and Architecture for the SMB Market. Formerly Chief Technology Officer at the Center for Internet Security, Kathleen defined and led the technology strategy, integrating emerging technologies. Prior to CIS, Kathleen held a range of positions over 13 years at Dell Technologies, including Security Innovations Principal in the Dell Technologies Office of the CTO and Global Lead Security Architect for the EMC Office of the CTO, working on ecosystems, standards, risk management and strategy. In her early days with RSA/EMC, she led consulting engagements interfacing with hundreds of organisations on security and risk management, gaining valuable insights while managing risk to business needs. During her tenure in the Dell EMC Office of the CTO, Kathleen had the honour of being appointed and serving two terms as the Internet Engineering Task Force (IETF) Security Area Director and as a member of the Internet Engineering Steering Group from March 2014 to 2018. Named one of CyberSecurity Ventures' Top 100 Women Fighting Cybercrime. She is a 2020 Tropaia Award Winner, Outstanding Faculty, Georgetown SCS. Keynote speaker, podcast guest, frequent blogger bridging a translation gap for technical content, conference committee member, and quoted in publications such as CNBC and Wired. Kathleen has over twenty-five years of experience driving positive outcomes across Information Technology Leadership, short and long-term IT Strategy and Vision, Information Security, Risk Management, Incident Handling, Project Management, Large Teams, Process Improvement, and Operations Management in multiple roles with MIT Lincoln Laboratory, Hudson Williams, FactSet Research Systems, and PSINet.
Kathleen holds a Master of Science degree in Computer Science from Rensselaer Polytechnic Institute, as well as a Bachelor of Science degree in Mathematics from Siena College. Published work: Transforming Information Security: Optimizing Five Concurrent Trends to Reduce Resource Drain, July 2020.