Organizations simply love OOBM. In fact, secure OOBM services constitute the backbone of any sound high-availability and disaster recovery solution involving the unplanned downtime of critical remote infrastructure. Typically, this involves having dedicated physical interfaces and LAN segmentation features that provide the level of redundancy required for troubleshooting purposes. Alas, properly implementing this level of isolation isn’t a trivial pursuit; from aging equipment to suboptimal operational design, OOBM environments can be plagued by serious security vulnerabilities, including the inadvertent exposure of management interfaces to the internet at large.
Despite the challenges, one can expect this technology to continue to lead the out-of-band connectivity space for the foreseeable future; therefore, it behooves network engineers, and cyber practitioners in general, to learn the subtleties of how to properly secure it.
What is OOBM, and what’s it used for?
We’ve all been there, right? It’s just another typical day in Sysadmin heaven when all of a sudden it seems that you’re not able to connect to a certain remote server—for all you know, similar gear is online and working as expected. In addition, recent staffing shortages have taken your last 24x7 onsite technician out of circulation for the day, so your chances of getting someone to physically troubleshoot this device are getting pretty slim. A visit to the data center is basically all that’s left; if that’s even possible.
As hinted, this scenario repeats itself over and over throughout the corporate ecosystem, affecting business continuity everywhere. To prevent it, OOBM was conceived as a secondary management plane that reduces the operational costs associated with the loss of in-band (e.g., SSH) network connectivity—attributed to either misconfiguration (human error) or hardware failure—to mission-critical systems. More recent “smart” OOBM solutions tend to provide an even bigger safety net by including intelligent monitoring over cellular standards like LTE and 5G.
On the journey to resilience, companies have also identified OOBM as an excellent scaffold for NetOps automation—the agile concept of bringing rapid and scalable deployment of network-driven applications and resources through repeatable, yet self-correcting, processes. Furthermore, this novel approach seeks to enhance the protections already afforded by the OOBM mindset by including centralized appliances able to carry out day-to-day maintenance and remediation tasks.
Remote management the OOBM way
Server technologies that support OOBM out of the box run a small-scale, sub-CPU architecture that is entirely agent-free and independent of the rest of the hardware, operating system, and so on. This includes remote access functions supported by a dedicated management interface (or port) connecting to either a segregated IP-based network or to a production one via virtual tunneling.
The first effort to standardize access to this subsystem came from Intel in 1998, when the IT behemoth released the first specifications of its so-called Intelligent Platform Management Interface (IPMI), sporting custom private management buses and protocols, common language primitives, LAN configuration requirements, and a host of extensibility options. As a result, IPMI could interact with a collection of sensors integrated into the motherboard to monitor health status, uptime, and similar metrics. Not surprisingly, IPMI became the de facto control and recovery platform, composed of specific low-level microcontrollers and satellite circuitry with enough design flexibility to allow hardware vendors adequate room for proprietary features.
This consolidation of remote access functionality around a single host was soon given a purpose and a name: lights-out management, or LOM for short. For more than two decades, LOM has enabled network operators to perform a subset of critical system-level tasks, including firmware updates, log monitoring, basic troubleshooting, and measures that once required physical proximity, such as reboots and shutdowns.
OOBM/IPMI platforms in vendor space
Soon after, companies like Dell and HP began a series of efforts to expand the OOBM market in a new direction. More concretely, in compliance with Common Information Model standards, the IPMI protocol, including any accompanying baseboard management controller (BMC) technology, gradually expanded from a relatively simple message-based specification to a full-blown embedded intelligence platform with multiple remote access possibilities and enhanced power-related diagnostics.
During this time, products such as Dell’s iDRAC (Integrated Dell Remote Access Controller) and HP’s iLO (Integrated Lights-Out) quickly began showcasing feature-rich (e.g., web-based) user consoles backed by a growing number of alerting and scripting opportunities via command-line interfaces, enhanced communication bridges, improved OS health monitoring, and new backup and restore capabilities led by the latest advancements in microcontroller integration.
The iDRAC6 system summary page - Source: dell.com
In-house technologies like iDRAC also allowed organizations to build entire portfolios of security controls and drift detections designed to stay in compliance. For instance, newly discovered vulnerabilities (CVEs) and firmware requirements would meet with timely responses to ensure round-the-clock protection, helping to avoid exposure. Dedicated BMC remote access and service ports could be protected using access policies, while physical network segregation ensured limited visibility; that is, access to ports like UDP 623 (IPMI’s service port) would be handled independently via firewalls and/or dedicated switching gear.
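To illustrate the segregation principle, a small audit script can cross-check configured BMC addresses against the approved management subnet. The following is a minimal Python sketch; the subnet, addresses, and the `misplaced_bmcs()` helper are hypothetical values for illustration, not vendor defaults or a real API.

```python
from ipaddress import ip_address, ip_network

# Hypothetical dedicated management segment (an illustrative RFC 1918 range).
MGMT_NET = ip_network("10.99.0.0/24")

def misplaced_bmcs(bmc_ips):
    """Return the BMC addresses that fall outside the management VLAN."""
    return [ip for ip in bmc_ips if ip_address(ip) not in MGMT_NET]

# Example: two iDRAC ports on the management net, one leaked onto production.
print(misplaced_bmcs(["10.99.0.10", "10.99.0.11", "192.168.1.50"]))
# -> ['192.168.1.50']
```

In practice, the list of BMC addresses would come from an asset inventory or from DHCP leases observed on the management segment.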
Security risks and similar implications
If the thought of an attacker having unfettered access to your prized server infrastructure, even when it’s powered off, doesn’t keep you up at night, it should.
Intrinsically, IPMI devices tend to exhibit a large attack surface. Over the years, researchers have identified a series of serious security mishaps, ranging from the accidental exposure of systems’ BMCs to the internet, to the failure to remove default login credentials, that have resulted in threat actors completely taking over the host at the baseboard level. Access to the IPMI under these circumstances typically grants far-reaching capabilities: modifying critical BIOS settings, rebooting or shutting down servers (resulting in loss of availability), tampering with power options, replacing any attached storage devices, mounting custom ISO images, or subverting the underlying OS.
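Probing for this kind of exposure is straightforward: IPMI’s LAN interface answers a standard ASF/RMCP Presence Ping on UDP port 623, the same handshake network scanners use to fingerprint internet-facing BMCs. The following Python sketch builds that 12-byte probe; the `probe_bmc()` helper and its timeout are illustrative assumptions, and it should only be pointed at infrastructure you are authorized to test.

```python
import socket

# ASF/RMCP "Presence Ping" -- the discovery probe defined alongside IPMI's
# LAN interface. A live BMC answers on UDP port 623 with a Presence Pong.
# Packet layout (12 bytes): 4-byte RMCP header + 8-byte ASF message.
RMCP_PING = bytes([
    0x06,                    # RMCP version 1.0
    0x00,                    # reserved
    0xFF,                    # sequence number (0xFF = no RMCP ACK expected)
    0x06,                    # message class: ASF
    0x00, 0x00, 0x11, 0xBE,  # IANA enterprise number 4542 (ASF)
    0x80,                    # message type: Presence Ping
    0x00,                    # message tag
    0x00,                    # reserved
    0x00,                    # data length
])

def probe_bmc(host: str, timeout: float = 2.0) -> bool:
    """Return True if `host` answers an RMCP Presence Ping on UDP 623."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(RMCP_PING, (host, 623))
        try:
            data, _ = s.recvfrom(512)
            return len(data) > 0
        except socket.timeout:
            return False
```

If a host on your public IP space answers this ping, its BMC is reachable from the internet and should be moved behind the management segment immediately.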
In fulfilling its promise of providing system administrators with an independent remote management platform of almost endless possibilities, the IPMI standard had also opened the door to a host of security vulnerabilities with severe implications. Case in point: in late 2021, researchers disclosed an attack against HP’s iLO module (dubbed “iLOBleed”) in which an undisclosed APT group implanted a rootkit on Iranian targets that persisted not only through system reboots, but also through attempted firmware upgrades. To add insult to injury, the absence of any significant SOC tooling to monitor activity at this level meant these attacks could go completely undetected.
How to secure IPMI-enabled devices
As mentioned, there are several key issues at play when protecting IPMI-enabled infrastructure. On the one hand, almost every major server hardware manufacturer offers one form or another of BMC-like access; in fact, over 200 vendors, according to Intel—this clearly extends the problem domain far beyond anyone’s ability to understand every possible security implication. On the other hand, threat actors are getting craftier by the day, and a new wave of out-of-band attacks could be around the corner for all we know.
So, taking matters into our own hands, here are a few tips and recommendations, in no particular order, on how to keep IPMI secure:
Avoid exposing IPMI/BMC management interfaces to the internet by using network segmentation best practices; this includes restricting any egress traffic from traversing the BMC network.
Assign IPMI traffic to a management VLAN segment and monitor traffic to other machines to ensure compliance. Have an alert system in place to notify all stakeholders if remote/unauthorized logins are detected.
Close any ports, services, consoles, interfaces, etc., that aren’t strictly necessary for administrative purposes—this significantly reduces the attack surface and the chances of an undisclosed vulnerability wreaking havoc.
Change all default accounts and passwords—including any built-in admin or preexisting anonymous account—while enforcing strong password policies that meet complexity requirements and prohibit password reuse. Additionally, disabling the Cipher Suite 0 option on certain IPMI device families can prevent attackers from bypassing authentication altogether or sending arbitrary commands.
Enable encryption on all IPMI interfaces and deploy properly signed SSL/TLS certificates accordingly.
Patch, patch, patch!—while getting your IPMI device up and running (and accessible) is the obvious first step, some IT administrators forget the second most important aspect: keeping IPMI/BMC devices constantly updated as new patches are available. Review any released firmware upgrades frequently, and apply these expeditiously.
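The password hygiene rules above can be captured in a simple policy check. The sketch below is illustrative (the length threshold and the `acceptable()` helper are assumptions, not a vendor API); the default credential pairs listed are widely documented vendor shipping values.

```python
import re

# Known factory-default IPMI credential pairs (widely documented):
# Dell iDRAC ships with root/calvin, Supermicro with ADMIN/ADMIN.
KNOWN_DEFAULTS = {("root", "calvin"), ("ADMIN", "ADMIN"), ("admin", "admin")}

def acceptable(user: str, password: str, history: set) -> bool:
    """Reject vendor defaults, reused passwords, and weak passwords."""
    if (user, password) in KNOWN_DEFAULTS:
        return False                      # still a factory default
    if password in history:
        return False                      # password reuse
    if len(password) < 12:
        return False                      # too short (illustrative threshold)
    # Require at least three of four character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) >= 3

print(acceptable("root", "calvin", set()))                 # False: default
print(acceptable("opsadmin", "C0rrect-Horse-42!", set()))  # True
```

A check like this is best wired into whatever provisioning workflow creates BMC accounts, so a weak or default credential never reaches a production device.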
Detecting exposed IPMI interfaces and their CVEs using Attack Surface Intelligence
To reduce the IPMI attack surface even further, cyber practitioners are paying closer attention to the public-facing side of organizations, and specifically, to the role that Attack Surface Intelligence (ASI) plays in identifying unknown or forgotten assets.
Exposed iLOv4 device login page
As shown above, scanning activities can easily recognize the presence of OOBM equipment on the internet, given IT staffs’ propensity to inadvertently (or not) disregard the guidelines we’ve just outlined. ASI’s continuous, round-the-clock monitoring features bring complete visibility to this situation by proactively looking for the tell-tale signs of systemic IPMI exposures and similar administrative oversights.
In particular, our ASI platform is able to detect service misconfigurations, default settings, and any corresponding CVEs attributed to publicly-accessible OOBM/IPMI infrastructure:
| Finding | Description | Risk score |
|---|---|---|
| Dell iDRAC7/iDRAC8 Code Injection/RCE (CVE-2018-1207) | Dell EMC iDRAC7/iDRAC8, versions prior to 2.52.52.52, contain a CGI injection vulnerability that could be used to execute remote code. A remote unauthenticated attacker may be able to use CGI variables to execute remote code. | 9 |
| Dell iDRAC6/7/8/9 Default Login | Dell iDRAC6/7/8/9 default login information was discovered. The default iDRAC username and password are widely known, and any user with access to the server could change the default password. | 7 |
| Detect Dell iDRAC6, iDRAC7, iDRAC8, iDRAC9 | The Integrated Dell Remote Access Controller (iDRAC) is designed for secure local and remote server management and helps IT administrators deploy, update, and monitor Dell EMC PowerEdge servers. | 1 |
| Supermicro IPMI Default Login | Default login information was discovered, leaving Supermicro IPMI devices vulnerable to unauthorized logins. | 7 |
| Supermicro BMC Login Panel | The default baseboard management controller (BMC) login panel was detected. While not a vulnerability or misconfiguration per se, care must be taken when exposing login panels of this kind to the public internet. | 1 |
| Cisco UCS KVM Login | The KVM console is an interface accessible from the Cisco UCS Manager GUI or the KVM Launch Manager that emulates a direct KVM connection. Unlike the KVM dongle, which requires a physical connection to the server, the KVM console allows you to connect to the server from a remote location across the network. | 1 |
The above is just a brief sample of the power of ASI at play; additional guiding details, such as severity levels, can certainly enhance risk prioritization initiatives on the road to remediation.
In a digital world where data center downtimes can easily translate into millions in lost revenue, out-of-band communication and IPMI protocols reign supreme. The technology’s ultimate success, however, resides in its ability to provide a high level of accessibility and resilience in the absence of in-band network infrastructure, or even electrical power (hence the term “lights out”); a clear testimony to its popularity amongst IT personnel paradoxically tasked with keeping the “lights on.”
There’s also little doubt that exposing these technologies to the public IP space is riddled with serious risks; risks just waiting to be exploited by the next APT group or perhaps by the script kiddie next door. Therefore, protecting these assets using the best practices outlined throughout this article is of the utmost importance. But if all these measures should fail, whether the issue is OOBM-related or not, you can always resort to ASI to give you that much-needed edge in closing any visibility gaps threatening your organization.
The choice is yours.