Security researchers at Wiz discover another major Azure vulnerability
Cloud security vendor Wiz—which recently made news by discovering a massive vulnerability in Microsoft Azure's Cosmos DB managed database service—has found another hole in Azure.
The new vulnerability impacts Linux virtual machines on Azure, which end up with a little-known service called OMI installed as a byproduct of enabling any of several logging, reporting, and/or management options in Azure's UI.
At its worst, the vulnerability in OMI could be leveraged into remote root code execution—although thankfully, Azure's on-by-default, outside-the-VM firewall will limit exposure to most customers' internal networks only.
OMIGOD
Opting in to any of several attractive Azure infrastructure services (such as distributed logging) automatically installs a little-known service inside the Azure virtual machine in question. That service, OMI—short for Open Management Infrastructure—is intended to function much like Microsoft Windows' WMI service, enabling collection of logs and metrics as well as some remote management.
Part of the OMI specification requires authentication in order to bind commands and requests to a specific user ID (UID)—but unfortunately, a bug caused malformed requests that omit the authentication stanza entirely to be accepted as though given by the root user itself.
When configured for remote management, OMI runs an HTTPS server on port 5986, which can be connected to with a standard HTTPS client like curl and given reasonably human-readable commands in the XML-derived SOAP protocol. In other configurations, OMI only listens on a local Unix socket at /var/opt/omi/run/omiserver.sock, which limits its exploitation to local users only.
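To make that concrete, here is a minimal sketch of what talking to OMI looks like from the outside, using curl and the standard WS-Management Identify request. The hostname is a placeholder, and the /wsman path and self-signed certificate are assumptions based on OMI's defaults; the point is simply that the protocol is ordinary HTTPS carrying human-readable XML. The OMIGOD exploit itself follows the same pattern, invoking a command-execution method while leaving the Authorization header out entirely.

    # Sketch only: probe a host's WSMan endpoint. "azure-vm.example" is a placeholder,
    # and -k skips certificate checks because OMI typically presents a self-signed cert.
    curl -sk https://azure-vm.example:5986/wsman \
      -H 'Content-Type: application/soap+xml;charset=UTF-8' \
      --data '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
        xmlns:wsmid="http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd">
        <s:Header/>
        <s:Body><wsmid:Identify/></s:Body>
      </s:Envelope>'

Any response at all, even an authentication fault, tells you something is answering SOAP on that port.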
As Wiz senior security researcher Nir Ohfeld walked me through a demonstration of the vulnerability, he described it mostly in terms of privilege escalation—an attacker who gets any toehold on an affected virtual machine can issue any arbitrary command as root using OMI syntax.
In larger environments where OMI listens on a network port, not just a local Unix socket, it’s also a great way to laterally pivot—an attacker who gets a shell on one VM in a customer’s Azure local network can typically use the buggy OMI to get control of any other virtual machine on the same network segment.
As it turns out, Azure isn't the only place you'll find OMI. Organizations that adopt Microsoft System Center (which gets advertised on every new install of Windows Server 2019 and up) and use it to manage on- or off-premises Linux hosts also end up with the buggy version of OMI deployed on those managed hosts.
As Nir and I talked through the vulnerability’s scope, I pointed out the likelihood of some Azure customers both enabling logging in the UI and adding a “default allow” rule to a Linux VM’s Azure firewall—sure, it’s incorrect practice, but it happens. “Oh my god,” I exclaimed—and the Wiz team burst out laughing. As it turns out, that’s exactly what they’d named the vulnerability—OMIGOD.
A difficult bounty to collect
Despite the obvious severity of OMIGOD—which includes four separate but related bugs Wiz discovered—the company had difficulty getting Microsoft to pay it a bounty for its discovery and responsible disclosure. In a series of emails Ars reviewed, Microsoft representatives initially dismissed the vulnerabilities as “out of scope” for Azure. According to Wiz, Microsoft representatives in a phone call further characterized bugs in OMI as an “open source” problem.
This claim is complicated by the fact that Microsoft authored OMI in the first place, which it donated to The Open Group in 2012. Since then, the vast majority of commits to OMI have continued to come from Redmond-based, Microsoft-employed contributors—open source or not, this is clearly a Microsoft project.
In addition to Microsoft’s de facto ownership of the project, Azure’s own management system automatically deploys OMI—admins are not asked to hit the command line and install the package for themselves. Instead, it’s deployed automatically inside the virtual machine whenever an OMI-dependent option is clicked in the Azure GUI.
Even when Azure management deploys OMI, there’s little obvious notice to the administrator who enabled it. We found that most Azure admins seem only to discover that OMI exists if their /var partition fills with its core dumps.
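There is a quicker way to check than waiting for the disk to fill. On the distributions Azure supports, the agent typically arrives as a package simply named omi, with its files under /opt/omi and /var/opt/omi, so a package-manager query (the package name here is an assumption based on Microsoft's own repositories) settles the question:

    # Debian- and Ubuntu-based VMs
    dpkg -l omi

    # RHEL-, CentOS-, and SUSE-based VMs
    rpm -q omi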
Eventually, Microsoft relented on its refusal to pay an Azure Management bug bounty for OMIGOD and awarded Wiz a total of $70,000 for the four bugs comprising it.
A dusty corner of the supply chain
“OMI is like a Linux implementation of Windows Management Infrastructure,” Ohfeld told Ars. “Our assumption is when they moved to the cloud and had to support Linux machines, they wanted to bridge the gap, to have the same interface available for both Windows and Linux machines.”
OMI’s inclusion in Azure Management—and in Microsoft System Center, advertised directly on every new Windows Server installation—means it gets installed as a low-level component on a staggering number of critically important Linux machines, virtual and otherwise. The fact that it listens for commands on an open network port in some configurations, using extremely well-known protocols (SOAP over HTTPS), makes it a very attractive target for attackers.
With the scope of both deployment and potential vulnerability, one might reasonably expect a lot of eyeballs to be on OMI—enough that a vulnerability summed up as "you forgot to make sure the user authenticated" would be rapidly discovered. Unfortunately, this is not the case—OMI has a disturbingly low total of 24 contributors, 90 forks, and 225 "stars" (a measurement of relatively casual developer interest) over the nine years it's had a home on GitHub.
By contrast, my own ZFS management project Sanoid—which listens on no ports and has been accurately if uncharitably described as “a couple thousand lines of Perl script”—has more than twice the contributors and forks and nearly 10 times the stars.
By any reasonable standard, an infrastructure component as critically important as OMI should be receiving far more attention—which raises questions about how many other dusty corners of the software supply chain are being equally under-inspected and under-maintained.
An unclear upgrade path
Microsoft employee Deepak Jain committed the necessary fixes to OMI’s GitHub repository on August 11—but as Ars confirmed directly, those fixes had still not been deployed to Azure as of September 13. Microsoft told Wiz that it would announce a CVE on Patch Tuesday, but Wiz researchers expressed uncertainty as to how or when those fixes could be universally deployed.
“Microsoft has not shared their mitigation plan with us,” Wiz CTO Ami Luttwak told Ars, “but based on our own customer telemetry, this could be a tricky one to patch properly. OMI is embedded across multiple Azure services and each may require a different upgrade path.”
For arbitrary Linux systems remotely managed from Microsoft System Center, the upgrade path might be even more convoluted—because the Linux agents for System Center have been deprecated. Customers still using System Center with OMI-enabled Linux may need to manually update the OMI agent.
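For admins in that position, the fix will most likely look like an ordinary package upgrade once Microsoft publishes updated builds. A hedged sketch, assuming the agent was installed as the omi package from Microsoft's package repositories:

    # Debian/Ubuntu: pull updated repo metadata, then upgrade only the OMI package
    sudo apt-get update && sudo apt-get install --only-upgrade omi

    # RHEL/CentOS: the same idea with yum
    sudo yum update omi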
Mitigation for affected users
If you’re a Linux system administrator worried that you might be running OMI, you can detect it easily by looking for listening ports on TCP 5985 and 5986 (TCP 1270, for OMI agents deployed by Microsoft System Center rather than Azure) or a Unix socket located beneath /var/opt/omi.
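On a system you can shell into, a few one-liners cover both cases. The process names and socket directory below reflect OMI's defaults, so treat them as assumptions if your deployment has been customized:

    # Network listeners: 5985/5986 for Azure-style remote management, 1270 for System Center agents
    sudo ss -tlnp | grep -E ':(5985|5986|1270)\b'

    # Local-only mode: look for OMI's Unix socket
    ls -l /var/opt/omi/run/ 2>/dev/null

    # The OMI processes themselves are also easy to spot
    ps -ef | grep -E 'omi(server|engine|agent)' | grep -v grep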
If you have the Unix socket but not the ports, you’re still vulnerable until Microsoft deploys a patch—but the scope is limited to local privilege escalation only.
In the cases where OMI listens on TCP ports, it binds to all interfaces, including public ones. We strongly recommend limiting access to these ports via a Linux firewall, whether or not your OMI instance has been patched.
In particular, security-conscious administrators should carefully limit access to this and any other network service to only those network segments that actually need it. Machines running Microsoft System Center obviously need access to OMI on client systems, as does Azure's own infrastructure—but the clients themselves don't need OMI access to one another.
The best practice for reduction of network attack surface—with this and any other potentially vulnerable service—is a global firewall deny rule, with specific allow rules in place only for machines that need to access a given service.
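As a concrete sketch of that approach with iptables (the addresses are placeholders; 10.10.0.10 stands in for whichever management host genuinely needs to reach OMI):

    # Allow the management host that legitimately needs OMI, then drop everyone else
    sudo iptables -A INPUT -p tcp -s 10.10.0.10 -m multiport --dports 5985,5986,1270 -j ACCEPT
    sudo iptables -A INPUT -p tcp -m multiport --dports 5985,5986,1270 -j DROP

The same pattern translates directly to nftables, ufw, or an Azure network security group.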
Where that’s not practical—for example, in an Azure environment where the administrator isn’t certain what Microsoft network segments need to access OMI in order for Azure Management to work properly—simply denying access from other VMs on the same network segment will at least prevent lateral movement of attackers from one machine to another.
For more technical information, see Wiz's own blog post detailing its findings.