1) Docker: what it is, how to install it on Ubuntu, and how to secure it

 

Docker: What Is It?

Docker is a container-based software platform for building applications. Containers are small, lightweight runtime environments that share the operating system kernel but otherwise run in isolation from each other. Although containers as a concept have been around for some time, Docker, an open source project released in 2013, popularized the technology and helped drive the industry's move toward containerization and microservices, a movement now known as cloud-native development. Here is the complete guide to installing Docker on Ubuntu and securing it.


Why do we need containers?

One of the objectives of modern software engineering is to keep applications on the same host or cluster isolated from one another, so that they do not interfere with each other's performance or functionality. The modules, libraries, and other components that applications need in order to run can make this difficult. Virtual machines, which keep applications on the same hardware fully separate and minimize conflicts between software components and competition for hardware resources, have been one solution to this problem. But virtual machines are cumbersome: each needs its own operating system, is usually gigabytes in size, and is hard to maintain and upgrade.

Figure 1: VMs vs Docker Containers (source: Docker Blog, "Are Containers Replacing Virtual Machines?")

Docker Container

Containers, on the other hand, isolate the execution environments of applications from one another while sharing the underlying OS kernel. They are usually measured in megabytes rather than gigabytes, use far fewer resources than VMs, and start almost instantly. Containers offer a highly efficient and fine-grained framework for packaging software components into the kinds of system and service stacks a modern enterprise needs, and for managing and upgrading those components. Docker is an open source project that facilitates the development of containers and container-based applications. Initially built for Linux, Docker now also runs on Windows and macOS.

Installing Docker on Ubuntu 18.04

To install Docker on Ubuntu, you need an Ubuntu 18.04 machine with internet access and a user account with sudo privileges.

  • Update and upgrade the local package database before installation by executing the following commands:
    • sudo apt update
    • sudo apt upgrade
  • Install the packages that allow apt to transfer files over HTTPS, using the following command:
    • sudo apt install apt-transport-https ca-certificates curl software-properties-common

These packages let apt transfer files and data over HTTPS (apt-transport-https), let the system check security certificates (ca-certificates), provide a file transfer tool (curl), and add scripts for managing software (software-properties-common).

  • The next step is to add Docker's GPG key, a security mechanism that verifies the integrity of the installation files.
    • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  • Next, add the Docker repository to the apt sources using the command below:
    • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  • Update the package database with the Docker packages from the newly added repository:
    • sudo apt update
  • Run this command to ensure the installation will come from the official Docker repository:
    • apt-cache policy docker-ce
  • If everything is done correctly, this command should produce output similar to the following:
    • docker-ce:

      Installed: (none)
      Candidate: 5:19.03.5~3-0~ubuntu-bionic
      Version table:
         5:19.03.5~3-0~ubuntu-bionic 500
            500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
         5:19.03.4~3-0~ubuntu-bionic 500
            500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
         5:19.03.3~3-0~ubuntu-bionic 500

You will note from the output that docker-ce is not yet installed. The output also shows the target operating system and the candidate Docker version number. Note that version numbers may vary depending on when you install.

  • Once confirmed, install the latest version of Docker with the following command:
    • sudo apt install docker-ce
  • This installs Docker, starts the daemon, and enables it to start automatically on boot. To verify that Docker is up and working, run the following command:
    • sudo systemctl status docker

If Docker is configured and running correctly, the command will produce output like the following:

docker.service – Docker Application Container Engine

Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e

Active: active (running) since Sat 2019-12-14 07:46:40 UTC; 50s ago

Docs: https://docs.docker.com

Main PID: 2071 (dockerd)

Tasks: 8

CGroup: /system.slice/docker.service

└─2071 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/contain

This information indicates that the deployment was successful, and that Docker is up and running.

Securing Docker

Before looking at how we can secure Docker containers, we need to understand the role of a number of components.

Docker Engine: Three components make up the Docker Engine.

A Server: This module is a long-running process or daemon that is responsible for image and container management.

REST API: This interface allows for communication between the docker daemon and the docker client application.

Docker Client Tool: The Docker client uses the REST API to instruct the Docker daemon to run containerized applications.

Docker Engine Components Flow

Docker Trusted Registry is Docker's image storage system for the enterprise platform. It is distinct from Docker Hub: while Docker Hub is hosted in the cloud, Docker Enterprise Edition's trusted registry is an on-premises storage solution.

Docker Content Trust enables digital signatures for images pushed to and pulled from remote Docker registries, such as Docker Hub.

Linux control groups (cgroups) are a Linux kernel feature that lets you allocate resources such as CPU time, network bandwidth, and system memory to running processes on a host.

Linux capabilities are a kernel feature that splits the privileges traditionally reserved for the root user (UID 0) into distinct units that can be granted to or dropped from individual processes. Although privileged processes or users can bypass discretionary access control checks, they cannot bypass capability rules.
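The capability sets of any process can be inspected directly in /proc, with no Docker required; a quick way to see them for your current shell on a Linux host:

```shell
# Print the capability sets of the current shell from /proc.
# CapEff is the effective set: a full bitmask for root, and
# typically all zeros for an unprivileged user.
grep Cap /proc/self/status
```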

This section will focus on Securing Docker installed on Ubuntu.

  1. Before hosting Docker on a Linux platform, you should first audit the kernel. There are many open-source tools you can use to scan the Linux kernel, such as Lynis and OpenVAS.

The Lynis project can be cloned from its GitHub repository (CISOfy/lynis) using the git clone command:

    • git clone https://github.com/CISOfy/lynis.git

Next, use the following command to navigate to the Lynis directory and inspect the Linux System.

    • cd lynis; ./lynis audit system
  2. After testing the Linux kernel for vulnerabilities, you can apply an additional protective layer to the kernel using grsecurity. It offers security features such as the following:

Buffer overflow protection

/tmp race violation protection

/proc restrictions that do not leak information about process owners.

Prevention of execution of arbitrary code in the kernel, and so on.

Previously, you could download grsecurity patches for free and apply them to your kernel. However, stable security patches are no longer provided free of charge.

  3. You can add an extra layer of security by running Docker inside a virtual machine rather than directly on a Linux server. That way, even if there is a vulnerability in the host kernel, it does not directly impact the Docker containers.
  4. By default, Docker requires root privileges to build and maintain containers. Malicious scripts can exploit this attack surface to escalate to superuser on the Linux host and ultimately access sensitive files, folders, images, and credentials. To prevent this, we can drop capabilities such as SETGID and SETUID to stop processes from changing their GID or UID, which could otherwise lead to privilege escalation. The command below runs an Apache web server container and drops the SETGID and SETUID capabilities via --cap-drop, preventing the container from changing its GID and UID:
    • docker run -d --cap-drop SETGID --cap-drop SETUID apache

Here, GID and UID correspond to the group ID and user ID, respectively.

  5. In addition, rather than managing Docker operations such as docker run through a superuser account, you can create a dedicated user to handle them.

You can create a docker group and user as follows:

    • sudo groupadd docker

The command above creates a group called docker. Next, create a user using the following command:

    • sudo useradd user1

Finally, use the command below to add the user "user1" to the docker group so it can perform Docker operations:

    • sudo usermod -aG docker user1
  6. In a production setting you will typically run more than one container, which is where control groups (cgroups) come in: they let you control the resources allocated to each container.

If cgroups are not available on your server, you can install the tooling with the following command (for Ubuntu):

    • sudo apt-get install cgroup-bin cgroup-lite cgroup-tools cgroupfs-mount libcgroup1
  7. We can restrict containers to specific CPU resources via --cpu-shares and --cpuset-cpus. In the following example, the prodnginx container is pinned to the first CPU core via --cpuset-cpus=0 and given a CPU share weight of 20, while the testnginx container is pinned to the third core (--cpuset-cpus=2) with the same weight of 20:
    • docker run -d --name prodnginx --cpuset-cpus=0 --cpu-shares=20 nginx
    • docker run -d --name testnginx --cpuset-cpus=2 --cpu-shares=20 nginx

Then run the docker stats command to display CPU usage for the prodnginx and testnginx containers:

CONTAINER ID   NAME        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O     BLOCK I/O
845bea7263fb   prodnginx   57.69%   1.258MiB / 985.2MiB   0.13%   578B / 0B   1.33MB / 0B
189ba15e8258   testnginx   55.85%   1.25MiB / 985.2MiB    0.13%   578B / 0B   1.33MB / 0B

Defining CPU shares is a smart practice whenever more than one container runs on a Docker host.
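Note that --cpu-shares is a relative weight, not a core count: under CPU contention, each container receives roughly its share divided by the sum of all shares. A quick sketch of the arithmetic (the weights 20 and 60 are illustrative values, not from the example above):

```shell
# Two containers with CPU share weights 20 and 60 under full contention:
# each gets shares_i / total of the CPU time.
a=20; b=60
total=$((a + b))
echo "container A gets $((100 * a / total))% of CPU time"   # 25%
echo "container B gets $((100 * b / total))% of CPU time"   # 75%
```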

  8. User namespaces prevent containers from running as privileged host accounts and help stop privilege-escalation attacks.

We can enable user namespaces in Docker by configuring the /etc/subuid and /etc/subgid files, as shown below.

Create a user by using the command adduser

    • sudo adduser user2

Set up a subordinate UID range for the user "user2":

    • sudo sh -c 'echo user2:400000:65536 > /etc/subuid'

Set up a subordinate GID range for the user "user2":

    • sudo sh -c 'echo user2:400000:65536 > /etc/subgid'

Open the daemon.json file and add the following content to set the userns-remap attribute to user2:

    • vi /etc/docker/daemon.json

{
  "userns-remap": "user2"
}

Type :wq to save and close the daemon.json file, then restart Docker to enable user namespaces on the host:

    • sudo /etc/init.d/docker restart
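With the mapping user2:400000:65536 configured above, UID N inside the container corresponds to host UID 400000 + N, so container root (UID 0) is an unprivileged user on the host. A small sketch of that arithmetic:

```shell
# subuid entry "user2:400000:65536": base host UID 400000, range 65536.
# Container UID N maps to host UID base + N.
base=400000
for container_uid in 0 1 1000; do
  echo "container UID $container_uid -> host UID $((base + container_uid))"
done
```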
  9. The Docker daemon should also be configured to secure communication between the Docker client and the Docker server with TLS.

Use the following command to open the daemon.json file, then add the content shown below (replace the IP address with your actual one):

    • vi /etc/docker/daemon.json

{
  "debug": false,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.16.5:2376"]
}
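Before restarting the daemon, it is worth validating the file, since a JSON syntax error will prevent dockerd from starting. A minimal sketch using a temporary copy (the certificate paths and IP address are the placeholders from the example above):

```shell
# Write the TLS settings to a temp file and check that the JSON parses
# cleanly before copying it to /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "debug": false,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.16.5:2376"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```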

  10. Before we can use the Notary service to sign images, we need to download and install docker-compose. We will set up the Notary service using Docker Compose.

Download the latest version of Docker Compose using the command below:

    • sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

As shown below, add executable permissions to docker-compose:

    • sudo chmod +x /usr/local/bin/docker-compose

You can check whether docker-compose was installed successfully using the following command:

    • docker-compose --version

We can now install the Notary service with docker-compose:

    • git clone https://github.com/theupdateframework/notary.git

The command above clones the Notary server from the Notary repository. Start the Notary server and signer using the following commands:

    • docker-compose build
    • docker-compose up -d

Then use the command below to copy the configuration file and root CA certificate into your local Notary directory:

    • mkdir -p ~/.notary && cp cmd/notary/config.json cmd/notary/root-ca.crt ~/.notary

Now run the commands below to point the Docker client at the Notary server:

    • export DOCKER_CONTENT_TRUST=1
    • export DOCKER_CONTENT_TRUST_SERVER=https://notaryserver:4443

Using the command below, generate a delegation key pair:

    • docker trust key generate mike --dir ~/.docker/trust

Now add the delegation key as a signer, creating new target keys if the repository does not yet exist:

    • docker trust signer add --key ~/.docker/trust/mike.pub mike mikedem0/whalesay
    • docker trust sign mikedem0/nginx:latest

You can then sign your Docker images with the docker trust sign command, and use the docker pull and docker tag commands to pull an image from Docker Hub and re-tag it, respectively. You can also check Docker images for vulnerabilities and configuration flaws: Anchore Engine can scan images for known vulnerabilities, and Docker Bench for Security can check for configuration flaws.
Then you can sign the docker file with the docker trust sign command. Use the command docker pull and docker tag to remove the container file from the docker hub and re-tag, respectively. You can also search for bugs and design flaws on docker files. You should search here to figure out how to inspect bugs using Anchor Engine and Docker Bench Protection to search for configuration flaws.