In late October and early November of 2022, the tech news cycle caught on to new security concerns around OpenSSL 3.x, driven by a pair of buffer overflow vulnerabilities (CVE-2022-3602 and CVE-2022-3786) that were initially rated critical. Upon exploitation, these flaws can crash a vulnerable system or, in the worst case, enable Remote Code Execution (RCE) – allowing malicious actors to run their own code on the affected machine and, from there, access sensitive information such as user credentials and private keys.
So, where is OpenSSL used? Frankly, everywhere. Most applications and operating systems today rely on OpenSSL to provide Transport Layer Security (TLS), the protocol that secures most modern connectivity. Historically, the Heartbleed vulnerability of 2014 was another OpenSSL bug; it impacted thousands of systems and resulted in hundreds of hours of patching and security updates.
It is crucial to recognize how widely OpenSSL has been adopted and how frequently it is used and, indeed, depended on. When a common tool like OpenSSL has a security bug that creates a vulnerability, it is not an obscure incident.
Walking through just one scenario of how this OpenSSL vulnerability could be exploited, the potential for dangerous outcomes becomes immediately clear.
An attacker finds an internet-exposed server running a vulnerable version of OpenSSL. This information is readily available through easy-to-execute search techniques, such as internet-wide scanning services.
The attacker exploits the vulnerability to capture SSH private keys stored on the server.
The attacker can now remotely log in to the server and effectively impersonate a valid user.
Malicious payloads such as ransomware can now be executed, and the exploited server will spread that payload to other machines on the network.
The final stage of the attack is worth emphasizing. Cyberattacks are not intended to affect just one person or device; on the contrary, exploits are built to propagate damage to as many connected machines as possible.
Every time a major vulnerability like this is announced, the first piece of advice is to patch immediately. This is a logical reaction, as patching vulnerable systems is always the ideal course of action. Unfortunately, in many environments patching or upgrading is not realistic or even possible. The ability to patch depends on both the vendor and the customer.
The vendor must release an approved software update that resolves the security vulnerability, and validating that the fix works takes time.
The customer must evaluate their systems for affected components and schedule time to install the protective upgrade.
Critical infrastructure – such as energy grids, water treatment plants, and industrial production lines – is the clearest example. OpenSSL could be running on any of these systems, though in some cases it may not even be in use. For this reason, it is imperative that vulnerability scans include all assets in an environment, not just production or internet-exposed systems.
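As a minimal sketch of what such an inventory check might look like, the snippet below flags assets whose reported OpenSSL version falls in the affected 3.0.0 through 3.0.6 range (the CVEs were fixed in 3.0.7). The hostnames, version strings, and the `is_affected` helper are all hypothetical; a real scanner would collect version data from each asset rather than a hard-coded dictionary.

```python
# Hypothetical helper: flag assets whose reported OpenSSL version falls in the
# affected 3.0.0-3.0.6 range (CVE-2022-3602/3786 were fixed in 3.0.7).
def is_affected(version: str) -> bool:
    try:
        parts = tuple(int(p) for p in version.split(".")[:3])
    except ValueError:
        return False  # unparseable version string (e.g. "1.1.1q"): triage separately
    return (3, 0, 0) <= parts <= (3, 0, 6)

# Example inventory (hostnames and versions are made up for illustration).
inventory = {
    "web-01": "3.0.5",   # affected -> prioritize for patching
    "hmi-12": "1.1.1q",  # the 1.1.x line is not affected by these CVEs
    "db-03": "3.0.7",    # already patched
}

affected = [host for host, ver in inventory.items() if is_affected(ver)]
print(affected)  # -> ['web-01']
```

Even a rough check like this makes the point: the scan must cover every asset, because the vulnerable host may be an internal machine nobody thought to look at.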
Oftentimes, critical systems, especially in operational technology (OT) and industrial settings, run on legacy operating systems. Windows CE, Windows 95, and old Linux versions remain common in these environments because of specific applications that are vital to business operations. Patching operating system vulnerabilities on these systems is impossible because the operating systems are no longer supported. And even when a patch exists, how can a system – an electrical grid, for instance – be taken out of service for an upgrade when doing so would cut power to thousands of people or more?
For the reasons just outlined, the only way to effectively “patch” critical infrastructure systems is to do so virtually. Virtual patching involves cutting all direct external connections to the target system so it cannot be exploited, as well as locking down access to specific identities with additional security posture checks.
Virtual patching is a two-step process that not only safeguards the application from exploits but also enhances its overall security level.
Remove broad public access.
Apply granular access policies with multifactor authentication (MFA) and device posturing.
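The two steps above can be sketched as an access-decision function. This is an illustrative sketch only – the class and field names are hypothetical, and a real broker would integrate an identity provider and an endpoint posture service rather than boolean flags.

```python
# Illustrative virtual-patching access decision (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool        # step 2: multifactor authentication result
    device_compliant: bool  # step 2: device posture check result
    source: str             # "internal" once broad public access is removed

def allow(req: AccessRequest, allowed_users: set[str]) -> bool:
    # Step 1: broad public access has been removed, so only brokered
    # internal connections are ever evaluated here.
    if req.source != "internal":
        return False
    # Step 2: granular identity policy plus MFA and device posture.
    return req.user in allowed_users and req.mfa_passed and req.device_compliant

print(allow(AccessRequest("alice", True, True, "internal"), {"alice"}))  # True
print(allow(AccessRequest("mallory", True, True, "public"), {"alice"}))  # False
```

The key design point is that the legacy application itself never changes: the broker in front of it enforces the identity, MFA, and posture checks the application cannot perform on its own.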
However, this poses another challenge: what if the application does not support modern security practices such as strong authentication?
With the acceleration of digital transformation initiatives aimed at moving business operations to the cloud, most security leaders have focused on protecting modern Software as a Service (SaaS) and cloud applications. This is clearly important, but it has created a security gap for the many organizations that depend on legacy and on-premises resources. The result is a widespread failure to account for the “last mile” of application security.
Cyolo extends and glues together existing identity providers, networks, and applications — all based on the principle of zero-trust. The Cyolo platform lives in your secure boundary, alongside your applications, to safely broker connections to both SaaS and legacy applications, while simultaneously enforcing strong access policies. Now, risky and vulnerable legacy applications can be hidden from the public internet and support modern security capabilities like MFA and SSO to protect your last mile.
Josh Martin is a security professional who told himself he'd never work in security. With close to 5 years in the tech industry across Support, Product Marketing, Sales Enablement, and Sales Engineering, Josh has a unique perspective into how technical challenges can impact larger business goals and how to craft unique solutions to real-world problems. Josh joined Cyolo in 2021 and previously worked at Zscaler, Duo Security, and Cisco.
Outside of Cyolo, Josh spends his time outdoors - hiking, camping, kayaking, or whatever new hobby he's trying out for the week. Or, you can find him tirelessly automating things that do NOT need to be automated in his home at the expense of his partner. Josh lives in North Carolina, USA.