Dockerizing Your Applications: The Ultimate Solution for Cross-Platform Deployment

Ahmet B. Simsek
12 min read · Jan 27, 2023

--

Software code inside various containers

Docker is a tool that allows you to run applications in a consistent way, like a box where all the necessary parts, settings and dependencies are included, so that you can run the same application in different places with the same outcome. For example, imagine you want to make a cake, but in different places. Instead of sharing the recipe and making sure that all ingredients and ovens are the same, you put all the ingredients and oven settings in a “Docker container”, and now you can make the same cake with the same recipe and ingredients in any kitchen with the same result.

In slightly more technical terms;
Docker is a powerful platform for developing, delivering and operating distributed applications. It enables programmers to bundle an application together with all of its dependencies into a single, portable container that can run on any Linux- or Windows-based machine. This makes it simple to test and deploy apps across various environments and removes the “it works on my computer ¯\_(ツ)_/¯ ” difficulties when working on code with coworkers. Additionally, by offering lightweight and isolated environments, Docker makes applications simple to scale and manage and promotes resource efficiency.
In this article, we’ll talk about Docker’s fundamentals, the reasons we need it, possible alternatives, how to use it with a sample .NET Core project, the advantages of using it with .NET Core and some performance notes.

Let’s dive into Docker more 👇

  • What is Docker?
    Docker is a platform that makes it simple for developers to build, distribute and operate programs inside containers. Thanks to containers, which are small, portable and self-sufficient environments, applications can run reliably across several contexts, including on-premises, cloud and hybrid environments.
  • Why do we need Docker?
    Docker is needed because it enables predictable, consistent settings for developing, testing and deploying applications. Additionally, it facilitates sharing and distribution of programs and the optimal use of resources by packaging applications and their dependencies.
  • Does it affect performance in a positive way?
    Docker makes it possible to use resources more effectively and helps speed up application startup. It can also cut down on the number of servers required for deployment, which can save money. Depending on the particular use case, however, it can add some overhead and affect an application’s overall performance.
  • Are there any negative points to Dockerizing the software?
    A built-in monitoring and logging solution is not offered by Docker. To monitor and resolve problems in the containers, you will need to use third-party tools or develop your own solution. Additional security considerations are needed when operating containers in a production environment, including making sure the containers are configured correctly, the images are safe, and there are no known vulnerabilities.

Docker Basics and Commons

  • Containers: A container is a compact, mobile, and independent environment that enables a program to function reliably in many settings. Because containers offer separation for the program and its dependencies, moving and running the application on various computers is simple.
  • Images: An image serves as a container’s template. It has every file, setting, and dependency required for a program to function. A Dockerfile, a script containing instructions for creating the image, is used to generate an image.
  • Docker Daemon: The Docker daemon is the background service that manages containers and images. The Docker CLI is used to interact with the Docker daemon.
  • Docker Hub: Docker Hub is the primary registry for storing and exchanging Docker images. It can be used both as a public registry that is open to everyone and as a private registry for businesses. (https://hub.docker.com)
  • Volumes: Data is preserved in containers using volumes. They make it possible to keep information outside of the container’s filesystem so that it can endure deletion of the container.
  • Networks: With the help of Docker, virtual networks can be built and utilized to link containers to the host computer and one another. This enables host exposure of container ports and communication amongst containers.
  • Compose: A tool for creating and operating multi-container applications is called Docker Compose. By specifying the services, networks and volumes in a single yaml file; it enables the development of complicated applications.
  • Swarm: A tool for orchestrating the deployment and administration of containers is called Docker Swarm. It enables the development of a swarm, which is a collection of machines that collaborate to manage a number of services.
  • Kubernetes: Kubernetes is an open-source container orchestration system. It is used to automatically deploy, scale, and manage containerized applications. To manage the containers operating on a cluster of machines, it may be used in combination with Docker.
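To make the Compose idea above concrete, here is a minimal sketch of a docker-compose.yml for a hypothetical two-service stack; the image names, ports and volume name are illustrative assumptions, not taken from any project in this article;

```yaml
# Illustrative two-service stack: a web app plus a PostgreSQL database.
# Service names, ports and the volume name are assumptions for this sketch.
services:
  web:
    image: mywebapp:latest        # hypothetical application image
    ports:
      - "8080:80"                 # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example  # do not hard-code secrets in real projects
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume survives container removal
volumes:
  dbdata:
```

Running `docker compose up -d` in the same directory would start both containers on a shared default network, where the web service can reach the database simply by the hostname `db`.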

These are some of the fundamental ideas and details of Docker, which may be effectively and consistently utilized for developing, deploying, and maintaining containerized applications.

Docker Images and Containers In Details

A Docker image is the template from which containers are created. It contains every file, setting, and dependency required for a program to run. An image is generated from a Dockerfile, a script containing instructions for creating the image. Once an image has been created, it may be saved, distributed, and used to start any number of containers.

An image is made up of a series of layers. Every instruction in a Dockerfile adds a new layer to the image. Each layer serves as the foundation for the next when a container is built from an image. This enables efficient layer sharing between images, which minimizes image size and speeds up the creation of new containers.
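As a rough illustration of layering, every instruction in the sketch below produces one layer, and order matters because changing an early layer invalidates the cache for everything after it (the file and image names are hypothetical);

```dockerfile
# Each instruction creates a layer; stable layers go first so they stay cached.
FROM alpine:3.19              # layer 1: base image
RUN apk add --no-cache curl   # layer 2: rarely changes -> good cache hit rate
COPY app.sh /usr/local/bin/   # layer 3: changes often -> keep it late
CMD ["app.sh"]                # metadata only, adds no filesystem layer
```

Running `docker history <image>` lists the layers and their sizes, which is a quick way to see where an image’s bulk comes from.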

A Docker container is a running instance of an image. When a container is created, it is given its own filesystem, process tree, and network interfaces. As a result, the container is isolated from the host machine and from other containers, giving it its own view of the system and preventing it from accessing host resources unless explicitly permitted.

Each container has a distinct ID that is used to manage and identify it. Since containers are lightweight and portable, they may be launched, halted, and removed without having an impact on the host computer or other containers. They can also be transported across environments without requiring any modifications. This makes it simple to test and deploy apps in various settings.

Developing, testing, and deploying applications can be done consistently and effectively using Docker’s key ideas of containers and images.

Docker CLI

Using the command line, the Docker CLI (Command Line Interface) lets you communicate with and control Docker. It supports a range of operations, including creating and managing containers, images, networks and volumes.

Here are some common operations that can be performed using the Docker CLI;

Pulling images from a registry: docker pull <image>
Running a container: docker run <image>
Listing running containers: docker ps
Stopping a container: docker stop <container-id>
Removing a container: docker rm <container-id>
Building an image: docker build -t <image-name> .
Pushing an image to a registry: docker push <image>
Inspecting a container's configuration: docker inspect <container-id>
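Put together, a typical session using these commands might look like the following sketch; the nginx image and port numbers are just examples, and the container ID printed by docker ps will differ on your machine;

```shell
docker pull nginx:alpine                 # fetch the image from Docker Hub
docker run -d -p 8080:80 nginx:alpine    # start it detached, publish container port 80 on host port 8080
docker ps                                # note the container ID in the first column
docker stop <container-id>               # stop the running container
docker rm <container-id>                 # remove it once stopped
```

After the `docker run` line, the web server inside the container would be reachable at http://localhost:8080 until it is stopped.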

The Docker CLI also provides a set of commands for managing networks and volumes, such as creating, listing and removing them;

Create a new network: docker network create <network-name>
List all networks: docker network ls
Inspect a network: docker network inspect <network-name>
Remove a network: docker network rm <network-name>
Connect container to a network: docker network connect <network-name> <container-id>
Disconnect container from a network: docker network disconnect <network-name> <container-id>
Create a new volume: docker volume create <volume-name>
List all volumes: docker volume ls
Inspect a volume: docker volume inspect <volume-name>
Remove a volume: docker volume rm <volume-name>
Mount a volume to a container: docker run -v <volume-name>:/<container-path> <image>
Remove dangling (unused) volumes: docker volume rm $(docker volume ls -qf dangling=true). Note that a volume cannot be unmounted from a running container; it is released when the container that uses it is removed.
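As a small worked example of the network and volume commands above, the following sketch puts two containers on the same user-defined network and persists data in a named volume (all names here are illustrative);

```shell
docker network create appnet                  # user-defined bridge network
docker volume create appdata                  # named volume for persistent data
docker run -d --name web --network appnet \
  -v appdata:/data nginx:alpine               # joins appnet and mounts appdata at /data
docker run --rm --network appnet alpine \
  ping -c 1 web                               # containers on appnet resolve each other by name
```

Because `appdata` lives outside the container’s filesystem, removing the `web` container and starting a new one with the same `-v` flag would reattach the same data.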

Some people also prefer to use a GUI tool to manage containers and images. Docker Desktop is the official desktop application for Docker, and it can make some of these jobs simpler.

Docker Desktop can be downloaded from Docker’s official web page;
https://www.docker.com/products/docker-desktop

Screenshot for Docker Desktop application

Here is a sample of a simple Docker-based .NET Core Web API project 👇

First, let’s start by creating the .NET Core Web API project;

Screenshot from Visual Studio’s new project creation screen

Then, let’s define the project name and solution path (I named the project “DockerizedSampleApp” for this example);

Screenshot from Visual Studio’s new project name and path configuration screen

Now this is the important part: checking the “Enable Docker” checkbox will create the “Dockerfile” for our sample project; (I used Linux as the Docker OS, which will be the base OS my application runs on)

Screenshot from Visual Studio’s new project’s additional information configuration screen

When we complete all of these steps successfully, Visual Studio should generate our sample project with the following files, including the “Dockerfile”;

Screenshot from Visual Studio’s solution explorer window

Well done 👏 We’ve just created a sample .NET Core Web API project with Docker support, including our Dockerfile;

Screenshot from dockerfile

Let’s have a look at that “Dockerfile” in detail, section by section 👇

  • Section 1:
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443



The first section, "FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base",
specifies the base image that will be used for the final Docker image.
This image contains the ASP.NET Core 7.0 runtime and is designed for
running ASP.NET web applications. The "AS base" at the end of the line
means that this stage can be referred to as "base" in later sections
of the Dockerfile.

The next line, "WORKDIR /app", sets the working directory of the
container to the "/app" folder.

The next two lines, "EXPOSE 80" and "EXPOSE 443", document that the
container will listen on these network ports at runtime; they do not
publish the ports themselves, which is done with the "-p" flag when
the container is run.
  • Section 2:
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["DockerizedSampleApp/DockerizedSampleApp.csproj", "DockerizedSampleApp/"]
RUN dotnet restore "DockerizedSampleApp/DockerizedSampleApp.csproj"
COPY . .
WORKDIR "/src/DockerizedSampleApp"
RUN dotnet build "DockerizedSampleApp.csproj" -c Release -o /app/build



"FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build", specifies a new image to
use as the build environment, using the SDK image for .NET Core 7.0.
It then copies the project file and runs the dotnet restore command to
restore the dependencies of the application.

"WORKDIR /src",
"COPY ["DockerizedSampleApp/DockerizedSampleApp.csproj", "DockerizedSampleApp/"]",
"RUN dotnet restore "DockerizedSampleApp/DockerizedSampleApp.csproj""
sets the working directory to /src, copies the project file and runs a
command to restore the dependencies of the project.

"COPY . ." copies all the files from build context to the /src directory
of the container,

"WORKDIR "/src/DockerizedSampleApp""
"RUN dotnet build "DockerizedSampleApp.csproj" -c Release -o /app/build"
sets the working directory to /src/DockerizedSampleApp and runs a
command to build the application in release mode and output it to
the /app/build directory.
  • Section 3:
FROM build AS publish
RUN dotnet publish "DockerizedSampleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false



"FROM build AS publish" uses the build image as the starting point and runs
the dotnet publish command to publish the application.

"RUN dotnet publish "DockerizedSampleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false"
runs a command to publish the application in release mode and output it to
the /app/publish directory and set useapphost configuration to false.
  • Section 4:
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DockerizedSampleApp.dll"]



The final stage, "FROM base AS final", starts again from the lightweight
runtime "base" image, copies the published application from the publish
stage and sets the entry point for the container to the application's
DLL. Starting from the runtime image rather than the SDK image keeps
the final image small.

"WORKDIR /app" sets the working directory of the container to /app

"COPY --from=publish /app/publish ." copies the published files from
publish stage to the final image

The final line "ENTRYPOINT ["dotnet", "DockerizedSampleApp.dll"]" sets the
entry point for the container to run the command
"dotnet DockerizedSampleApp.dll" when the container starts.

TL;DR

In plain words, the given “Dockerfile” does the following;

It starts by using the "mcr.microsoft.com/dotnet/aspnet:7.0" image as the 
base image and sets the working directory to "/app". It also exposes
ports 80 and 443.

Next, it uses the "mcr.microsoft.com/dotnet/sdk:7.0" image as the build
image, sets the working directory to "/src" and copies the project files.

Then it runs the command "dotnet restore" on the project file to restore
its dependencies.

Next, it copies all the files and sets the working directory to the project
folder.

Then it runs the command "dotnet build" on the project file to build it
in release mode and output the result to the "/app/build" directory.

Next, it uses the "build" image as the publish image and runs the command
"dotnet publish" on the project file to publish it in release mode and
output the result to the "/app/publish" directory.

Finally, it uses the "base" image as the final image, sets the working
directory to "/app" and copies the published files from the previous stage
to the final image. It also sets the entrypoint command to run the
application using "dotnet" and providing the "DockerizedSampleApp.dll"
file as the command argument.

Let’s run our application in Docker container and see the output;

In Visual Studio it’s very simple: a single click will execute your Dockerfile and run your application inside the configured container;

Screenshot to run our application using Docker

And here is the magic;

Screenshot from our application’s Swagger page which indicates it is running

Our application is running inside our Docker container and can be reached on port 32772, the host port that was mapped to the container for us.

Screenshot from docker desktop showing our sample project’s container

In the Docker Desktop application you can also see that a container was created with the name of our project, and that its status in the screenshot is “Running”. You can also see the ports that let you reach your application inside the container.

Real-world use cases for Docker;

1- Web Applications: Since Docker provides uniform and portable environments, it is frequently used to deploy web apps. This makes it simple to test and deploy apps in various contexts and to scale the application as necessary.

2- Microservices: Docker can create tiny, narrowly focused containers that are simple to deploy and scale, making it a good fit for microservices design. This makes it simple to create and maintain complicated applications made up of several separate services.

3- CI/CD: Docker may be used as a component of a pipeline for continuous integration and continuous deployment since it enables the construction of consistent build and test environments. This makes it simple to develop, test and deploy apps in an automated and repeatable manner.

4- Cloud-Native Applications: Docker is frequently used to deploy cloud-native apps because it enables the production of portable, lightweight containers that are simple to set up in cloud settings.

5- Database and Data Processing: In a containerized environment, Docker may be used to run databases and data processing programs. This makes it possible for resource segregation and simple application scaling.

6- IoT: Docker can be used to deploy apps on IoT (Internet of Things) devices such as the Raspberry Pi, enabling the same program to run on several devices and making it simple to update the application.

7- Machine learning and Artificial Intelligence (AI): Since Docker enables the establishment of isolated environments in which the programs may function and guarantees that all requirements are satisfied, it can be used to deploy machine learning and AI applications.

Docker explained in the simplest way;

Docker is a tool that allows you to run applications in a consistent way, like a box, where all the necessary parts, settings and dependencies are included, so that you can run the same application in different places with the same outcome.

Imagine you want to make a cake, but in different places. Instead of sharing the recipe and making sure that all ingredients and ovens are the same, you put all the ingredients and oven settings in a “Docker container”, and now you can make the same cake with the same recipe and ingredients in any kitchen with the same result.

I appreciate your interest in my article about Docker. We have now covered Docker’s basic concepts and commands, together with a sample application.

I hope you can put this knowledge to use in your projects. 🙏

Please feel free to ask any additional questions you may have. Good luck and;

Happy Dockerizings 🤗
