Using Docker

Docker - how it works, advantages, limitations

The term "Docker" can refer to a number of things. These can be: an open source community project; tools from the open source project; Docker Inc., the company that primarily supports this project; and the tools provided by Docker The fact that the technology and company share the same name can be confusing.

Here is a brief explanation:

  • The software "Docker" is a containerization technology that enables the creation and operation of Linux® containers.
  • The open source Docker community works to improve these technologies for the benefit of all users, free of charge.
  • The company Docker Inc. builds on the Docker community's work, hardens it for security, and contributes those improvements back to the whole community. It then supports the improved, more reliable technologies for enterprise customers.

With Docker, you can treat containers like extremely lightweight, modular virtual machines. And these containers give you flexibility: you can create, deploy, copy, and move them from environment to environment, which in turn helps you optimize your apps for the cloud.


How does Docker work?

Docker technology uses the Linux kernel and kernel features such as cgroups and namespaces to isolate processes so that they can run independently of one another. This independence is the point of containers: the ability to run multiple processes and apps separately from one another. That makes better use of your infrastructure while preserving the security you would get from working with separate systems.
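
A quick way to see this isolation in action (a minimal sketch, assuming a working Docker installation and the public alpine image):

    # Inside its own PID namespace, the container sees only its own processes;
    # ps typically reports just PID 1 (the ps command itself).
    $ docker run --rm alpine ps aux

    # cgroups enforce resource limits: cap this container at half a CPU core
    # and 256 MB of memory.
    $ docker run --rm --cpus="0.5" --memory="256m" alpine echo "constrained hello"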

Container tools, including Docker, work with an image-based deployment model. This makes it easier to share an application or a package of services, with all of their dependencies, across multiple environments. Docker also automates the deployment of the application (or of the combinations of processes that make up an application) within this container environment.
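
As a rough sketch of that model (the file names and base image here are only illustrative), an application and its dependency are described once in a Dockerfile, built into an image, and then run identically in any environment:

    # Dockerfile (app.py is a hypothetical application file)
    FROM python:3.12-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]

    # Build and run it (assumes app.py exists alongside the Dockerfile)
    $ docker build -t myapp:1.0 .    # package the app and its dependencies
    $ docker run --rm myapp:1.0      # run it the same way on a laptop or a server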

These tools are built on top of Linux containers - which is what makes Docker user-friendly and unique - and they give users unprecedented access to applications. They enable significantly faster provisioning as well as version control and version distribution.
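
Distribution works the same image-based way; assuming a registry you can push to (the registry host below is a placeholder), sharing a versioned image is just tagging and pushing:

    $ docker tag myapp:1.0 registry.example.com/team/myapp:1.0
    $ docker push registry.example.com/team/myapp:1.0
    $ docker pull registry.example.com/team/myapp:1.0    # on any other machine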

Is Docker technology the same as traditional Linux containers?

No. Docker technology was originally built on LXC technology - which most people associate with "traditional" Linux containers - but it has since moved away from that dependency. LXC was useful as lightweight virtualization, but it did not offer a good developer or user experience. Docker technology offers more than the ability to run containers: it also simplifies the process of creating and building containers, shipping images and, among other things, versioning images.

Traditional Linux containers use an init system that can manage multiple processes, which means an entire application can run as a single unit. Docker technology, by contrast, encourages breaking applications down into their individual processes and provides the tools to do so. This granular approach has its advantages.
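
To make the contrast concrete, here is a sketch of that decomposition (image and container names are illustrative): instead of one container whose init system supervises a web server and a database together, each process gets its own container, connected over a shared network:

    $ docker network create appnet
    $ docker run -d --name db  --network appnet -e POSTGRES_PASSWORD=example postgres:16
    $ docker run -d --name web --network appnet -p 8080:80 nginx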

The advantages of Docker containers

Modularity

The Docker approach to containerization focuses on the ability to take down just one part of an application for a repair or an upgrade, without having to take the entire application out of service. In addition to this microservices-based approach, you can share processes among multiple apps - much as in a service-oriented architecture (SOA).

Layer and image version control

Each Docker image is made up of a series of layers that are combined into a single image. A new layer is created whenever the image changes: every time a user specifies an instruction such as RUN or COPY, a new layer is created.
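
In Dockerfile terms, that looks like this (a minimal, illustrative example): each instruction adds one layer on top of the base image's layers.

    FROM ubuntu:24.04
    # Layer: install a dependency
    RUN apt-get update && apt-get install -y curl
    # Layer: add application configuration (config.yml is hypothetical)
    COPY config.yml /etc/myapp/config.yml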

With Docker, these layers are reused when building new containers, which speeds up development enormously. Unchanged layers are shared between images, which further improves speed, size, and efficiency. Version control is an integral part of layering: with each new layer, you essentially have a built-in change log and thus full control over your container images.
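
That change log can be inspected directly; assuming an image tagged myapp:1.0 as in the sketch above, docker history prints one row per layer, including the instruction that created it:

    $ docker history myapp:1.0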

Rollback

Probably the best thing about layering is rollback: reverting to a previous version. Every image has layers. Not satisfied with the current iteration of an image? Just roll it back to the previous version. This approach supports agile development and, on the tooling side, makes continuous integration and deployment (CI/CD) possible.
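
In practice, a rollback can be as simple as relaunching from the previous tag (the container and tag names here are illustrative):

    $ docker rm -f web                      # remove the unsatisfying current version
    $ docker run -d --name web myapp:0.9    # relaunch from the earlier image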

Fast deployment

Getting new hardware up and running used to take days and considerable effort. With Docker-based containers, deployment can be reduced to seconds. By creating a container for each process, you can quickly share those processes with new apps. And since an operating system does not have to boot to add or move a container, provisioning times are much shorter. On top of that, at this speed of development you can create and destroy the data generated by your containers easily and cheaply, without concern.
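
For comparison (using the public nginx image purely as an example), bringing a service up takes seconds, and tearing it down again - together with its writable layer - is a single command:

    $ docker run -d --name quickweb -p 8080:80 nginx    # serving within seconds
    $ docker rm -f quickweb                             # gone again just as fast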

Docker technology is therefore a granular, controllable, microservices-based approach that is significantly more efficient.

Are there any restrictions on using Docker?

Docker on its own is ideally suited to managing individual containers. But as you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration become very difficult. At some point you have to step back and group containers so you can provide services such as networking, security, and telemetry across all of your containers. This is where Kubernetes comes in.
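
The difference in approach shows up directly on the command line; here is a rough sketch (deployment and image names are illustrative) contrasting imperative docker run calls with Kubernetes' declarative replica management:

    # Docker alone: every instance is started, and later tracked, by hand
    $ docker run -d --name web1 myapp:1.0
    $ docker run -d --name web2 myapp:1.0

    # Kubernetes: declare the desired state once; the cluster keeps it running
    $ kubectl create deployment myapp --image=myapp:1.0
    $ kubectl scale deployment myapp --replicas=3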

With Docker, you don't get the same UNIX-like functionality that traditional Linux containers offer. This includes the ability to run processes such as cron or syslog inside the container alongside your app. There are also limits on cleaning up grandchild processes after your child processes have been terminated - something traditional Linux containers handled inherently. These shortcomings can be remedied by changing the configuration file and setting up these capabilities from the start, but that is not obvious at first glance.
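
Docker does offer a partial remedy here; one option (assuming a reasonably recent Docker version) is the --init flag, which runs a small init process as PID 1 inside the container so that terminated child processes are properly reaped:

    $ docker run --init -d nginx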

There are also other Linux subsystems and devices that are not namespaced, including SELinux, cgroups, and /dev/sd* devices. If an attacker gains control of these subsystems, the host is at risk. Keeping the architecture simple means the host kernel is shared with the containers, and that is what opens up this security hole. This is a difference from virtual machines, which are much more strictly separated from the host system.

The Docker daemon can also raise security concerns. To run Docker containers, you are most likely using the Docker daemon, a persistent runtime for containers. The Docker daemon requires root rights, so it is important to be particularly careful about who is given access to it and where its processes live. For example, a local daemon has a smaller attack surface than one exposed in a more public location, such as on a web server.
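
A quick check worth making on any Docker host (the paths shown are the common defaults): the daemon's socket is the gateway to those root rights, so its permissions - and the membership of the docker group - determine who can effectively act as root:

    $ ls -l /var/run/docker.sock    # typically owned by root:docker
    $ getent group docker           # everyone listed here can control the daemon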