There’s a massive surge in the adoption of Docker containers to create, deploy, and run applications. While the technology is promising for DevOps, building apps through containerization introduces new security challenges and risks. Without proper security measures, even a single compromised Docker container can put all the other containers, as well as the underlying host, at risk of attack.

Before you start building and deploying applications using this intuitive software platform, you must harden your containerized environment to get the most out of Docker without leaving your critical applications vulnerable to attacks.

Here’s a quick guide outlining the security best practices you should follow while working with Docker containers.

Hardware Considerations

Hardware plays a vital role in hardening your container environment. When building, running, or managing containers, prefer hardware that integrates a Trusted Platform Module (TPM) to provide a basis for trusted computing.

A TPM is a tamper-resistant device that secures the hardware using cryptographic keys, provides a verified system platform, and builds a chain of trust rooted in the hardware. This chain of trust extends to the bootloader and OS kernel to cryptographically verify boot mechanisms, system images, and container images.

Host Operating System

Since all the containers share the OS kernel, the security of your Docker containers depends significantly on the host operating system. If the host is compromised, all the processes running on it become vulnerable to attack. As such, to prevent container breaches, the host OS must be appropriately secured.

To ensure this, regularly update the operating system with all the available patches and ensure that it supports your Docker Engine version. Also, remove any unnecessary software from the OS to limit the attack surface within the environment.

You can also take advantage of the built-in security configurations and policies available in Linux and in the Docker Engine itself. Below are some of the modules you should configure appropriately to protect the host.


FIPS

Federal Information Processing Standards (FIPS) specify the security requirements for protecting sensitive information. If you’re using RHEL or CentOS, you can enable FIPS mode using the following commands:

sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="fips=1 /g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
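The change takes effect after a reboot. To check whether FIPS mode is actually active, you can read the kernel flag (a sketch; the flag is exposed by most modern Linux kernels):

```shell
# 1 means FIPS mode is enabled, 0 means disabled
cat /proc/sys/crypto/fips_enabled 2>/dev/null \
  || echo "this kernel does not expose fips_enabled"
```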

To enable FIPS mode in Docker Engine, add a systemd drop-in file:

mkdir -p /etc/systemd/system/docker.service.d
echo -e "[Service]\nEnvironment=\"DOCKER_FIPS=1\"" > /etc/systemd/system/docker.service.d/fips-module.conf
systemctl daemon-reload
systemctl restart docker


AppArmor

AppArmor is a Mandatory Access Control (MAC) security module, available on Ubuntu and Debian, that is based on file system paths. You can customize AppArmor profiles to block applications from accessing unsafe directories, and you can apply a profile at container run time. For example:

docker run \
  --interactive \
  --tty \
  --rm \
  --security-opt apparmor=docker-default \
  ubuntu /bin/bash

For more on AppArmor profiles, refer to the Docker docs.


SELinux

SELinux is another MAC security module, based on type enforcement, which allows you to define a type and assign specific privileges to it. You can enable SELinux in the Docker daemon by modifying /etc/docker/daemon.json and adding the following:

{
  "selinux-enabled": true
}
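After restarting the daemon, you can confirm the option is active (this assumes a running Docker daemon on an SELinux-enabled host):

```shell
# SecurityOptions should include name=selinux
docker info --format '{{.SecurityOptions}}'
```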


Seccomp

Short for Secure Computing, Seccomp is a security module that allows you to restrict the system calls available to a given process. Attaching a Seccomp profile to your container limits the container’s access to the OS kernel.

To check if your OS kernel supports Seccomp, use the following command:

cat /boot/config-`uname -r` | grep CONFIG_SECCOMP=

The response should be:

CONFIG_SECCOMP=y
Docker has a default Seccomp profile that doesn’t require customization for most workloads. Refer to the Docker docs to learn more about the default Seccomp profile and the syscalls it allows.
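If you do need to tighten things beyond the default, you can supply a custom profile at run time. A minimal sketch, assuming you want to deny a few high-risk syscalls on top of an otherwise permissive policy (the syscall selection and file name are illustrative):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["mount", "umount2", "reboot", "init_module"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

Save this as deny-risky.json and apply it with docker run --security-opt seccomp=deny-risky.json <image>.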

Linux Capabilities

You can further reduce the attack surface using Linux capabilities, which are groups of privileges that processes can hold. You can add or drop capabilities for the processes running inside a container. By limiting the privileges of those processes, you restrict the damage they can do if they are compromised.

Unless it is necessary, avoid granting new capabilities to containers, and do not give all permissions to every container process.
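For example, a web server that only needs to bind a low-numbered port can drop everything else. A sketch, assuming a hypothetical my-web-image:

```shell
# Drop all capabilities, then re-add only the one this workload needs.
# NET_BIND_SERVICE allows binding to ports below 1024.
docker run \
  --rm \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  my-web-image
```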

Privileged Containers

Privileged containers have access to all the namespaces and can do almost everything the host can do. To keep your host secure, avoid running privileged containers. If a container needs elevated privileges, grant only the capabilities it actually requires.

In addition to avoiding privileged containers, you should also avoid running containers as the root user. By default, the user inside a Docker container is root, which means an attacker who breaks in can quickly reach sensitive information and the kernel. As a security best practice, don’t allow your containers to run as root.
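One way to enforce this at build time is to create and switch to an unprivileged user in the Dockerfile. A minimal sketch, assuming an Alpine base (the user name, image, and command are illustrative):

```dockerfile
FROM alpine:3.19

# Create an unprivileged user and group for the application
RUN addgroup -S app && adduser -S app -G app

# The container process runs as this user from here on
USER app
CMD ["sleep", "infinity"]
```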

However, if processes in the container must run as root, you can re-map root to a less privileged user on the Docker host. Refer to the Docker docs to learn how.
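As a sketch, user-namespace remapping is enabled in /etc/docker/daemon.json; with the value "default", Docker creates and uses a dockremap user for the mapping:

```json
{
  "userns-remap": "default"
}
```

Restart the daemon afterwards; root inside containers then maps to an unprivileged UID range on the host.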

Application Installation Dependencies

Your application is likely to depend on third-party Docker images. But pulling images from Docker Hub without validating their authenticity puts you at substantial risk. Make sure to use only official images that are pushed by the publisher and not modified by any other party.

Enable Docker Content Trust to verify the publisher and the integrity of all the data. Docker Content Trust is disabled by default; you can enable it from the command line:

export DOCKER_CONTENT_TRUST=1
Refer to the official documentation for detailed information on setting up signed images with Content Trust.

Another option is to use Docker Certified images, which are offered by trusted partners and curated by a professional team for Docker Hub.

You can also reduce the attack surface by using a minimal base image that doesn’t include unnecessary software packages. A minimal image not only reduces the attack surface but also improves performance. You can use BusyBox or Alpine as a minimal base image.

Another best practice is to scan images in the first place. The Docker Trusted Registry (DTR) comes with on-premises image scanning, which checks images against the CVE database of known security vulnerabilities. A clean scan shows a green checkmark shield icon. Refer to the Docker docs for more on setting up security scanning in DTR.


Multi-Stage Builds

When building an image with a Dockerfile, you pull in artifacts that are only required at build time: compilers, test dependencies, build secrets, and more. These packages increase the Docker image size and, with it, the attack surface.

To avoid this, use Docker’s multi-stage build capability, which lets you use multiple temporary images during the build and keep only the final image plus the files you explicitly copy into it. You end up with two images: one with all the dependencies needed to build the app and run the tests, and a second, smaller image containing only what the application needs at run time.
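A sketch of such a Dockerfile, assuming a Go application (the base images, versions, and paths are illustrative):

```dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: ship only the compiled binary in a minimal image
FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
USER nobody
ENTRYPOINT ["app"]
```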

Also, avoid mounting sensitive host directories into containers. If you must, mount them read-only to avoid exposing information and compromising the host. Always specify the memory and CPU the container needs instead of relying on Docker’s defaults, which place no limits on resource consumption.
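Both resource limits and read-only mounts can be set on the docker run command line. A sketch (the limits, host path, and image are illustrative):

```shell
docker run \
  --rm \
  --memory 256m \
  --cpus 0.5 \
  -v /etc/myapp:/etc/myapp:ro \
  alpine top
```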

In addition, do not mount the Docker socket inside containers, as it permits the processes to execute commands that can compromise the host. As a best practice, monitor the container for malicious behavior and make sure to scan and re-scan the image to ensure that the vulnerabilities get addressed.


Privileged Ports

Docker allows containers to bind to privileged ports (1 to 1023), which are mostly reserved for well-known network services. Do not map a container to any of these ports unless strictly necessary, as it becomes easy for an unauthorized user to intercept sensitive information such as logins or to run rogue server applications. Use ‘inspect’ to find out whether a container has any privileged port mapped.

Running docker container inspect my_container_name gives you full details about the container, including its port bindings and security profile. You can also list all containers and close any unnecessarily exposed ports.
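For example (the container name is a placeholder):

```shell
# Show the host-port bindings for one container
docker container inspect \
  --format '{{json .NetworkSettings.Ports}}' \
  my_container_name

# List the published ports of all running containers
docker ps --format '{{.Names}}: {{.Ports}}'
```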


Docker Secrets

Logs and repositories are easy for many people to access, which means any unencrypted sensitive information in them can be exposed. To prevent this, avoid passing sensitive information through environment variables, because logs often dump variable values, handing the data to anyone who can read them.

The best practice for protecting such unencrypted, sensitive data is to move it into Docker secrets. Secrets encrypt the information and provide a secure way to transmit it only to the containers that require access.
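As a sketch, assuming swarm mode is enabled (docker swarm init); the secret value, secret name, and service name are illustrative:

```shell
# Store the value as an encrypted secret in the swarm
printf 'S3cr3tPassw0rd' | docker secret create db_password -

# Only services explicitly granted the secret can read it,
# at /run/secrets/db_password inside the container
docker service create \
  --name api \
  --secret db_password \
  alpine sleep 1d
```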


Access Permissions

It is critical to apply permission levels according to roles to ensure a secure containerized environment. Consider incorporating role-based access control (RBAC) and utilizing directory solutions to effectively manage everyone’s permissions against the organization’s repositories.

Docker offers the following permission levels:

  • Anonymous Users: Can only search and pull public repositories.
  • Users: Can search and pull public repositories, and create and manage their own repositories.
  • Team Member: Can do everything a user can do, plus whatever the teams of which the user is a member grant.
  • Team Admin: Can do everything a team member can do, plus add team members.
  • Organization Admin: Can do everything a team admin can do, plus create new teams and add members to the organization.
  • Admin: Can manage everything across UCP (Universal Control Plane) and DTR (Docker Trusted Registry).


Monitoring

One of the best ways to ensure a secure containerized environment is to monitor the architecture itself, starting with performance metrics. Keep an eye on the memory and CPU usage of your containers, and look for anomalies to get a clear idea of what is happening within the environment.

Third-Party Security

Several open-source tools can help you monitor and audit your Docker containers for security. For example, Docker Bench for Security allows you to audit your containers against the industry-standard CIS benchmarks.

Similarly, Dagda is another open-source tool capable of scanning for Trojans, viruses, malware, and other vulnerabilities in Docker containers. There is also Cilium, which secures network connectivity at the kernel layer.

Known Vulnerabilities

The following are some of the known vulnerabilities that have attracted security review. Note that the bugs listed here are only a sample of the vast number of vulnerabilities that get reported regularly. You can get the list of latest security vulnerabilities here.

  • CVE-2013-1956: Increases the attack surface available to unprivileged users
  • CVE-2015-3214, CVE-2015-4036: Allows a guest OS user to execute code on the host OS
  • CVE-2015-3290, CVE-2015-5157: Allows privilege escalation that can exploit Docker containers
  • CVE-2016-5195: Allows unprivileged local users write access to read-only memory
  • CVE-2018-15664: Gives arbitrary read-write access to the host with root privileges
  • CVE-2019-15752: Allows local users to gain privileges by placing a Trojan horse