
SecurityTrails Blog · Jul 21 · by Esteban Borges

Popular Misconfigurations that Make Containerized Apps Vulnerable to Attacks


With today’s staggering number of internet users, and web applications doing more than they ever have before, scaling, maintaining, and developing large web applications has become a significant challenge for DevOps teams.

Now that applications scale across multiple public clouds, with multiple technology stacks, maintaining and deploying them calls for modern solutions. Among those in popular use are containerized applications, commonly powered by technologies like Docker. Further scalability has been made possible with the advent of technologies like Kubernetes.

Containerized applications allow DevOps teams to maintain containers running specific application configurations and versions, and to replicate them as many times as needed, all in an automated fashion. Combining Docker with technologies like Kubernetes, which makes deploying, scaling, and maintaining containerized applications easy, has proven to be a solution for modern application requirements—one that has seen a significant spike in adoption in recent years.

The adoption of new technologies, however, can always invite new mistakes and vulnerabilities. Such risk, therefore, calls for the same level of attention as a manual deployment would, to avoid an influx of vulnerabilities creeping into your containerized and automated deployment.

The most common source of vulnerabilities within technologies such as Docker and Kubernetes, and automation tools like SaltStack, Ansible, and Puppet, is outdated software versions—followed closely by a lack of hardening procedures and proper configuration reviews.

Permissions

One of the most basic forms of misconfiguration seen with Docker involves privilege escalation: containers run as root pose a far more significant security threat than containers run as a non-root user.

If a container runs under the root user, vulnerabilities in that version of Docker—or in vulnerable or misconfigured software running within the container itself—have in the past allowed attackers to escape the container and reach the host server on which it runs.

This is why running containers under users with just the right amount of permissions and access is highly recommended.

Docker supports a rootless/non-root mode of operation, allowing for significantly higher levels of security; its configuration is covered in the official Docker guide.
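Even without rootless mode, the container process itself can be dropped to an unprivileged user. As a minimal sketch (the base image, user name, and application file are illustrative), a Dockerfile can create a dedicated user and switch to it:

```dockerfile
FROM node:18-alpine

# Create an unprivileged user and group (names are illustrative)
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app
COPY --chown=app:app . .

# All subsequent instructions and the container process run as this user
USER app

CMD ["node", "server.js"]
```

Alternatively, an existing image can be started under a non-root UID with `docker run --user 1000:1000 <image>`; Docker’s rootless mode goes one step further by running the daemon itself without root.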

Data/configuration security

Where you store data is an important consideration when working with containers such as Docker; storing data inside the container itself frequently has more disadvantages than advantages. It’s highly recommended to store any user data outside the container: in the event of a vulnerability, the container can then be destroyed, upgraded, and redeployed from a clean state. This also enables further automation when handling CVEs, since the data isn’t inside the container.

Storing credentials is another frequent misconfiguration. Just as with data, it’s recommended to avoid storing credentials within the container itself, so that if a vulnerability arises, the container can be easily upgraded and relaunched without leaking any credentials.
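One common pattern (sketched here with illustrative file and image names) is to keep credentials out of the image entirely and inject them at launch time from a file that is excluded from version control:

```shell
# secrets.env lives outside the image and outside version control
# (e.g. listed in .gitignore and .dockerignore)
echo "DB_PASSWORD=example-only-change-me" > secrets.env

# Inject the credentials as environment variables at run time;
# the image itself never contains them
docker run -d --env-file ./secrets.env my-app:latest
```

This way a leaked or compromised image reveals no secrets, and rotating a credential only requires editing the env file and restarting the container.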

Mounting data outside the container is easily handled using Docker. Consider the following Docker run command:

docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started

In this example, the data is stored in a named volume (todo-db) mounted at /etc/todos inside the container. This way the container operates independently, and can be created, destroyed, or migrated while the data persists in its own separate location.

Automation security

Technologies such as Ansible, SaltStack, and Puppet are used to automate tasks that need to be executed across a large number of servers. These tools rely on configuration files (called “playbooks” in Ansible, for example) that describe what needs to be done and where it needs to be executed.

For these playbooks to work, access to servers via SSH or similar console-level access is required. However, executing these playbooks as root and/or storing root passwords in plaintext within these configurations can lead to security-related incidents in the event that configurations are leaked.

Consider the following example. To install MySQL via Ansible:

- hosts: webservers
  remote_user: vagrant
  become: true
  vars_files:
    - vars.yml

  tasks:
    - name: Install MySQL
      apt: pkg={{item}} state=present
      with_items:
        - mysql-server-core-5.5
        - mysql-client-core-5.5
        - libmysqlclient-dev
        - python-mysqldb
        - mysql-server
        - mysql-client

    - name: Start the MySQL service
      service: name=mysql state=started

    - name: Remove the test database
      mysql_db: name=test state=absent

    - name: Create deploy user for mysql
      mysql_user: name=deploy host="%" password={{mysql_root_password}} priv=*.*:ALL,GRANT

    - name: Ensure anonymous users are not in the database
      mysql_user: name='' host={{item}} state=absent
      with_items:
        - 127.0.0.1
        - ::1
        - localhost

    - name: Copy .my.cnf file with root password credentials
      template: src=templates/.my.cnf dest=/etc/mysql/my.cnf owner=root mode=0600

    - name: Update mysql root password for all root accounts
      mysql_user: name=root host={{item}} password={{mysql_root_password}}
      with_items:
        - 127.0.0.1
        - ::1
        - localhost

As shown above, we’re able to use Ansible to install a specific version of MySQL (5.5), start the service, remove any test databases, add a user, remove any anonymous users, copy over an existing my.cnf file and update the root password.

With the MySQL root password (pulled in from the plaintext vars.yml file), the deploy MySQL username, and other sensitive details such as the MySQL version all visible in these configuration files, keeping them secure becomes an urgent issue.
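Ansible’s own answer to this problem is Ansible Vault, which encrypts variable files at rest so that a leaked repository does not directly expose credentials. A minimal sketch (the playbook file name is illustrative):

```shell
# Encrypt the variables file that holds mysql_root_password;
# the file is stored as AES-encrypted ciphertext from here on
ansible-vault encrypt vars.yml

# Run the playbook, supplying the vault password interactively
# so variables are decrypted only in memory at run time
ansible-playbook site.yml --ask-vault-pass
```

The playbook itself is unchanged—`{{mysql_root_password}}` resolves as before—but anyone who obtains the repository sees only ciphertext.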

Configuration/file exposure

Docker configuration/file exposure is the perfect showcase for this kind of misconfiguration. Let’s take a look at a basic official Docker Compose file (docker-compose.yml) example used for setting up WordPress with MySQL:

version: "3.9"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}

Here we see MySQL 5.7 being set up with the latest WordPress image available on Docker Hub as well as the MySQL root password, database name, username and user password.

With the Docker container configuration listed in plain, human-readable text—including database passwords and paths to data and files—it’s a must to ensure not only that the file is stored securely, but also that any automated deployment process consuming it is properly sanitized.
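One common mitigation, sketched below, is to keep the secrets out of docker-compose.yml itself and reference environment variables instead (assuming the actual values live in a sibling .env file that is excluded from version control):

```yaml
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      # Values come from the shell environment or a local .env file,
      # not from the compose file itself
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
```

Compose substitutes `${...}` at deploy time, so an exposed compose file reveals structure but no credentials. Docker secrets (and the `_FILE` variants of these variables supported by the official mysql image) go a step further by delivering credentials as mounted files rather than environment variables.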

Our Attack Surface Intelligence (ASI) platform is able to detect this kind of misconfiguration easily, letting you detect exposed Docker configuration files in just seconds, as you see from the following screenshot:

Attack Surface Intelligence

Detecting this kind of configuration issue allows blue teams to prevent sensitive data leaks that could later be used to execute more sophisticated attacks against cloud infrastructure.

AWS Exposures

Another popular example is Dockerrun AWS configuration exposure. Dockerrun is the format used to define containers running on the Amazon Web Services (AWS) Elastic Beanstalk platform. While it has since been replaced by the standard docker-compose.yml in the Amazon Linux 2 Docker platform, various applications are still deployed via the Dockerrun JSON configuration.

Consider the official Dockerrun example from AWS:

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "php-app",
      "host": {
        "sourcePath": "/var/app/current/php-app"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/proxy/conf.d"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "php-app",
      "image": "php:fpm",
      "environment": [
        {
          "name": "Container",
          "value": "PHP"
        }
      ],
      "essential": true,
      "memory": 128,
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "php-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "php-app",
          "containerPath": "/var/www/html",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        }
      ]
    }
  ]
}

The above configuration creates two containers under the Amazon Web Services (AWS) Elastic Beanstalk service, running an Nginx reverse proxy as well as a container with PHP-FPM running within.

If exposed, the above configuration can leak key information such as container memory limits, which attackers can use to craft DoS attacks that exhaust the available memory. The configuration also includes paths to log files and configuration directories, which can expose further information—such as visitor IP addresses—that can be used to find and target end users of the web application.

Paths to log files and config paths

As you can see from the previous screenshot, this is another example of how easy it is for our ASI platform to detect such configuration mistakes.

Kubernetes exposures

Rancher is an open-source multi-cluster orchestration platform that lets operations teams deploy, manage and secure enterprise Kubernetes.

This orchestration software often falls prey to misconfigurations as well; for example, default admin credentials are frequently left unchanged, discovered, and in turn exploited by bad actors.

Kubernetes Console exposure

The Kubernetes Console (or Dashboard) is an essential part of a Kubernetes setup. It gives you a complete overview of all containers managed by the Kubernetes cluster—including their state, memory and other resource usage—along with various management functions.

Kubernetes Console exposure

Exposure of this console can lead to various types of attacks. In the past, exposed consoles have been used to set up cryptocurrency mining, which can lead to financial losses and application slowdowns due to resources being consumed by crypto mining processes.
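Rather than publishing the Dashboard on a public address, access can be tunneled through the cluster’s API server. A sketch, assuming kubectl is already configured for the cluster (the namespace and service names below match a default Dashboard install and may differ in yours):

```shell
# Start a local, authenticated proxy to the cluster's API server;
# only requests from this machine, with valid kubeconfig credentials,
# can reach in-cluster services
kubectl proxy

# The Dashboard is then reachable only locally, e.g.:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

This keeps the console off the public internet entirely, so even if Dashboard authentication is misconfigured, an attacker never reaches it.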

Kubernetes Kustomization disclosure

The Kubernetes Kustomization tool is used to customize Kubernetes objects through a “kustomization” file. According to the project’s page, Kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as-is.

In turn, this ability to customize configurations makes the new Kustomization files quite powerful, allowing you to combine various existing Kubernetes configuration files at once.

However, exposure of these files can leak large amounts of sensitive data from your organization, so it’s critical to ensure they always remain secured and sanitized.
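To see why these files are sensitive, consider a minimal kustomization.yaml (resource and file names are illustrative) that pulls a local credential file into a generated Secret:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

# secretGenerator turns local files into Kubernetes Secrets;
# an exposed kustomization file reveals exactly which credential
# files exist and where they live
secretGenerator:
  - name: db-credentials
    files:
      - db-password.txt
```

Even without the referenced files themselves, an attacker who reads this learns the layout of your deployment and the names of its secrets—useful reconnaissance for a follow-up attack.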

What about container orchestration software vulnerabilities?

Configuring these tools correctly is essential to ensure that your containerized application deployment remains secure. After all, security flaws within these tools can be as simple as the dashboard allowing authentication bypass, or as complex as the dashboard having security vulnerabilities allowing for shell injection attacks.

Consider a recent vulnerability like CVE-2020-16846, affecting SaltStack Salt (infrastructure management and container orchestration software): the CVE allowed shell injection attacks when certain web requests were sent to SaltStack’s API.

Simply put, when vulnerabilities exist within the management stack of your containerized infrastructure, the security of your entire containerized application is at risk as well.

Container orchestration software vulnerabilities

Not only does ASI do a great job of finding misconfigurations on these container orchestration platforms, it also does an amazing job of detecting vulnerabilities associated with such services. Here’s a brief list of some of the CVEs we detect:

Name: SaltStack Shell Injection (CVE-2020-16846)
Description: SaltStack Salt through 3002 allows an unauthenticated user with network access to the Salt API to use shell injections to run code on the Salt-API using the SSH client.
Severity: 9

Name: SaltStack wheel_async unauth access (CVE-2021-25281)
Description: SaltStack Salt before 3002.5 does not honor eauth credentials for the wheel_async client, allowing attackers to remotely run any wheel modules on the master.
Severity: 9

Name: Kubernetes Dashboard unauthenticated secret access (CVE-2018-18264)
Description: Kubernetes Dashboard before 1.10.1 allows attackers to bypass authentication and use the Dashboard’s Service Account to read secrets within the cluster.
Severity: 7

Name: Puppet Server and PuppetDB sensitive information disclosure (CVE-2020-7943)
Description: Puppet Server and PuppetDB provide useful performance and debugging information via their metrics API endpoints, which may contain sensitive information.
Severity: 7

Summary

The migration to containerized applications—adopted by organizations to ensure easier management, faster deployment, and efficient scaling of web applications—brings certain critical security challenges that are often overlooked amid the various advantages (and sometimes complexity) of managing these clusters of containers.

While containers have proven to help ensure consistency, reduce deployment times, and smooth over differences in software versions and configurations between development and production instances, they also pose the age-old challenge of ensuring things are correctly and securely configured. This is especially true with configuration files containing details about every aspect of the deployed software: data paths, database passwords, and other access credentials.

Integrating automated scanning and alerting for your organization’s containerized deployments via the SecurityTrails Attack Surface Intelligence platform is a key consideration for your security arsenal. From drawing attention to configuration issues to scanning for CVEs and vulnerable software running within the containers themselves, Attack Surface Intelligence provides a complete and critical overview of your organization’s container security posture.

ESTEBAN BORGES

Esteban is a seasoned cybersecurity specialist, and marketing manager with nearly 20 years of experience. Since joining SecurityTrails in 2017 he’s been our go-to for technical server security and source intelligence info.
