Install Your Own Wazuh Server on Ubuntu

Wazuh has become an essential tool for security management in information systems. Thanks to its ability to detect intrusions, ensure data integrity, and monitor security, many companies and individuals choose to set up their own Wazuh server. Here I will explain how you can install and configure your Wazuh server, step by step, without using complicated lists or enumerations.

What is Wazuh and Why Should You Use It?

Wazuh is an open-source security platform that provides intrusion detection, integrity monitoring, incident response, and compliance auditing. Its versatility makes it ideal for both small businesses and large corporations. Furthermore, being open-source, Wazuh is completely free and allows modifications to meet any specific needs.

Initial Preparations Before Installation

Before you dive into the installation of Wazuh, it is crucial that you prepare your system. This involves ensuring that the operating system is updated and setting up the environment to support the installation of Wazuh through Docker. Here is how you do it:

First, it is necessary to disable the firewall to prevent it from interfering with the installation process. To do this, simply execute in the terminal:

ufw disable

This command will disable the firewall, ensuring that it will not block any of the necessary connections during the installation.

Next, you must ensure that all system packages are updated and that git is installed, as you will need it to clone the Wazuh repository. Execute:

apt update && apt install git

With these commands, your system will be updated and ready for the next phase.

Installing Docker

Wazuh in Docker simplifies dependency management and ensures the platform runs isolated and secure. To install Docker, you can use the script provided by Docker, which sets everything up automatically:

curl -sSL https://get.docker.com/ | sh

Once Docker is installed, it is essential to ensure it automatically runs at system startup:

systemctl start docker
systemctl enable docker

These commands will start the Docker service and configure it to automatically start at each system boot.
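
If you want to confirm that Docker is working before moving on, an optional sanity check is:

docker info
docker run --rm hello-world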

Docker Compose

If you installed Docker with the script above, you do not need to install this tool. But if you already had Docker and it does not support the “docker compose” plugin syntax, you can install the standalone docker-compose binary like this:

curl -L "https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

If you use the standalone binary, run the commands below that say “docker compose” as “docker-compose” instead.

Setting Up the Wazuh Environment

With Docker configured, the next step is to prepare the environment specific to Wazuh. Move to the /opt directory to keep the security-related files organized:

cd /opt

Now, it is time to clone the most recent version of the Wazuh repository for Docker:

git clone https://github.com/wazuh/wazuh-docker.git -b v4.7.3

This command downloads all the necessary files to run Wazuh in a Docker container.

Generating Certificates and Starting Up Wazuh

Before starting Wazuh, you must generate the necessary certificates for the proper functioning of the Wazuh components. Navigate to the correct directory and execute the certificate generator:

cd wazuh-docker/single-node/
docker compose -f generate-indexer-certs.yml run --rm generator

With the certificates generated, you are now ready to start all the Wazuh services:

docker compose up -d

This last command brings up all the containers Wazuh needs to run properly in single-node mode, which is ideal for test environments or small deployments.

Verification of the Installation

Once all the previous steps are completed, it is important to verify that everything is working as expected. You can check the status of the Docker containers to ensure that all Wazuh services are active and running. Additionally, access the Wazuh web interface to start exploring the functionalities and available settings.
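
For example, a quick sketch of such a check (this assumes the default single-node setup; admin / SecretPassword is the repository default and should be changed):

cd /opt/wazuh-docker/single-node
docker compose ps
# The indexer API answers on port 9200 with a self-signed certificate, hence -k
curl -k -u admin:SecretPassword https://localhost:9200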

Customization and Monitoring

With your Wazuh server now operational, the next step is to customize the configuration to adapt it to your specific needs. Wazuh offers a wide variety of options for configuring rules, alerts, and automatic responses to incidents. Take advantage of the available documentation to explore all the possibilities that Wazuh offers.

Installing and configuring your own Wazuh server may seem like a complex task, but by following these steps, you will have a robust computer security system without needing large investments. Not only will it improve the security of your information, but it will also provide you with a powerful tool to monitor and proactively respond to any incident.

Wazuh Password Change

Stop the service using Docker Compose:

docker compose down

Generate the hash of the new password using the Wazuh container:

Run the following command to start the hash script (use the image tag that matches the version you deployed; here, 4.7.3):

docker run --rm -ti wazuh/wazuh-indexer:4.7.3 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh

Enter the new password when prompted and copy the generated hash.

Update the internal users file with the hash of the new password:

Open the file with a text editor like vim:

vim config/wazuh_indexer/internal_users.yml

Paste the generated hash for the admin user.
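
The admin block in internal_users.yml should end up looking something like this (the hash value is a placeholder, not a real one):

admin:
  hash: "$2y$12$PASTE_THE_GENERATED_HASH_HERE"
  reserved: true
  backend_roles:
  - "admin"
  description: "Admin user"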

Update the docker-compose.yml file with the new password:

Open the docker-compose.yml file:

vim docker-compose.yml

Enter the new password wherever INDEXER_PASSWORD appears (in the version of the file used here, lines 24 and 81).
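
Both occurrences are environment variables of the services that connect to the indexer; after editing they should look something like this (the password is illustrative):

environment:
  - INDEXER_USERNAME=admin
  - INDEXER_PASSWORD=MyNewStr0ngPassword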

Raise the services again with Docker Compose:

docker compose up -d

This restarts the service stack.

Access the container and run the security script:

Access the container:

docker exec -it single-node-wazuh.indexer-1 bash

Define the variables and run the security script:

export INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=$INSTALLATION_DIR/certs/root-ca.pem
KEY=$INSTALLATION_DIR/certs/admin-key.pem
CERT=$INSTALLATION_DIR/certs/admin.pem
export JAVA_HOME=/usr/share/wazuh-indexer/jdk
bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/wazuh-indexer/opensearch-security/ -nhnv -cacert $CACERT -cert $CERT -key $KEY -p 9200 -icl

Exit the container:

exit

This process allows you to update the admin password for Wazuh using Docker, making sure to follow all the steps correctly to ensure the changes are effective.

Docker Container Security: Best Practices and Recommendations

Hey there, tech enthusiast! If you’re here, it’s probably because you’ve heard a lot about Docker and how crucial it is to keep our containers secure. But do you know how to do that? If the answer is no, you’re in the right spot. Today, I’ll guide you through discovering the best practices and recommendations for Docker container security. Let’s dive in!

Understanding the Importance of Docker Security
Before diving into the world of Docker and its security, it’s essential to grasp why it’s so critical. Docker has revolutionized the way we deploy applications, making the process more streamlined and efficient. However, like any technology, it’s not without its vulnerabilities. A poorly configured Docker container can be a gateway for cybercriminals. And as we all know, it’s better to be safe than sorry.

The Principle of Least Privilege
First off, let’s talk about the principle of least privilege. It’s a golden rule in IT security. The idea is to grant programs and processes only the privileges they genuinely need to get their job done. Nothing more.

In the Docker context, this means you should avoid running containers with root privileges unless absolutely necessary. If an attacker manages to access a container with root privileges, they could potentially take control over the entire host system. So, whenever possible, limit those privileges.
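
As a minimal sketch, a Dockerfile that creates and switches to an unprivileged user (names are illustrative):

FROM alpine:3.19
# Create a dedicated group and user for the application
RUN addgroup -S app && adduser -S app -G app
# Everything from here on runs as the unprivileged user
USER app
CMD ["id"]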

Trusted Images
Now, let’s focus on images. They’re the foundation of our containers. But where do you get them from? Not all images available on Docker Hub are safe. Some might have known vulnerabilities or even hidden malware.

I recommend only using images from trustworthy sources. If possible, opt for official images or those from well-known providers. And if you decide to build your own images, ensure they’re up to date and follow good security practices in their design.

Vulnerability Scanning
Let’s talk tools! Nowadays, there are specific solutions designed to scan Docker containers for vulnerabilities. These tools can identify issues in images before they’re deployed. It’s a proactive approach to tackle risks.

I advise integrating these scans into your continuous integration and delivery process. This way, every time a new version of your application is prepared, you can be sure the containers are clean and ready for action.
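
For instance, with the open-source scanner Trivy (one option among several; the image name is an example):

trivy image --severity HIGH,CRITICAL myapp:1.0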

Networking and Communications
Another critical aspect is networking. Docker allows you to create virtual networks for your containers to communicate with each other. However, not all containers should talk to one another. In fact, in many cases, it’s preferable they’re isolated.

By familiarizing yourself with Docker networks, you can configure them so only specific containers have access to others. This reduces the attack surface and limits the potential lateral movement of any intruders.
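
A hedged sketch of that isolation (image and network names are illustrative):

docker network create backend
docker network create frontend
# The API is only reachable from the backend network
docker run -d --name api --network backend my-api:1.0
docker run -d --name web --network frontend -p 80:80 my-web:1.0
# Attach web to both networks so only it can talk to api
docker network connect backend web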

Regular Updates
One thing that should never be missing from your security routine is updates. Keeping Docker, your containers, and the applications running in them updated is vital. Updates don’t only introduce new features but also patch vulnerabilities.

So, always stay tuned to Docker news and updates. If a critical vulnerability emerges, you’ll want to be among the first to address it.

Limit Access
Last but not least, limit access to your containers. Not everyone in your organization needs access to every Docker function. Define roles and permissions and grant them wisely. And, of course, ensure any access is backed by robust authentication and, if possible, multi-factor authentication.

So, what did you think of this journey through Docker security? I hope you found it beneficial and will implement these recommendations. Cybersecurity is an ongoing task, requiring our attention and care. But with the right tools and best practices, you can rest easy knowing your Docker containers are well-protected. Catch you next time!

Docker Container Performance Optimization: Practical Tips for Best Performance

Hello, and welcome to a new post! Today, we’re diving into a crucial topic for any developer using Docker: how to optimize Docker container performance. You might have landed here wondering, “How can I make my Docker containers run as efficiently as possible?” Well, you’re in the right place!

Why do you need to optimize Docker containers?

First, it’s important to understand why you need to optimize your Docker containers. Docker is a fantastic tool that allows developers to package and distribute their applications in containers really effectively. However, like any other technology, it’s not perfect and might require some optimization to ensure your application runs as well as possible.

Imagine you’re driving a car. If you don’t change the oil regularly or check the brakes, your car is likely not going to perform at its best. The same goes for Docker. If you don’t make an effort to optimize your containers, you can end up with suboptimal performance.

How to know if your Docker containers need optimization?

Well, the million-dollar question, how do you know if your Docker containers need optimization? Several signs might indicate that you need to work on optimizing your Docker containers.

If you observe that your applications take too long to load, or if your containers use an excessive amount of CPU or memory, it’s likely you need to make some adjustments. Another indicator could be if you see your containers crash frequently, or if you notice that your applications are unable to handle the amount of traffic you expected.

Understanding Docker and Resource Optimization

To be able to optimize the performance of your Docker containers, you first need to understand how Docker uses system resources. Docker runs on a host machine and uses that machine's resources to run containers. By default, a container can consume as much CPU and memory as the host allows, but Docker lets you set limits on the resources each container may use.

Now, with a better understanding of how Docker uses system resources, we can explore how to optimize the performance of your Docker containers.

Reducing Docker Image Size

One effective way to improve the performance of your Docker containers is by reducing the size of your Docker images. Large images can slow down the startup of your containers and increase memory usage. Therefore, by reducing the size of your Docker images, you can help improve the speed and efficiency of your containers.

There are several ways to do this. One is by using smaller base images. For instance, instead of using an Ubuntu base image, you could use an Alpine base image, which is significantly smaller. Another strategy is to remove any unnecessary files from your images. This includes temporary files, cache files, and packages that aren't necessary for running your application.
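
As an illustrative sketch, an Alpine-based image that also avoids leaving the package index in the layer (the app.py file is an assumption):

FROM alpine:3.19
# --no-cache installs packages without storing the apk index in the image
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]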

Limiting Resource Usage

Another strategy to optimize your Docker containers is to limit resource usage. As mentioned before, Docker lets you cap the resources each container can use, and you can tune these limits to ensure your containers aren't using more than they need.

For example, you can limit the amount of CPU a container can use by setting a CPU limit in your Docker configuration file. Similarly, you can limit the amount of memory a container can use by setting a memory limit.
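
For example (the values and image are illustrative):

docker run -d --name web --cpus="1.5" --memory="512m" nginx:alpine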

Efficiently Using Storage in Docker

Storage is another crucial resource that Docker uses, and it can affect the performance of your containers. Therefore, it’s vital that you use Docker’s storage as efficiently as possible.

One tip to do this is to limit the amount of data your containers are writing to disk. The more data a container writes to disk, the slower it will be. Therefore, if you can reduce the amount of disk writes, you can improve your containers’ performance.

Additionally, keep in mind that Docker uses a storage layer to manage container data. Each time a container writes data to disk, Docker creates a new storage layer. This can slow down your containers, especially if they’re writing large amounts of data. Therefore, it’s recommended that you optimize the use of Docker’s storage layer.
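
One hedged option for scratch data that does not need to survive the container is a tmpfs mount, which keeps those writes in memory instead of the copy-on-write storage layer:

docker run -d --name web --tmpfs /tmp:rw,size=64m nginx:alpine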

Optimizing Networks in Docker

Last but not least, the network is a crucial resource in Docker that can also affect the performance of your containers. Networking in Docker can be complex as it involves communication between containers, between containers and the host machine, and between containers and the outside world.

One way to optimize networking in Docker is by using custom networks. Docker allows you to create your own networks and assign containers to these networks. This can be helpful for optimizing container-to-container communication, as you can group containers that need to communicate with each other on the same network.

Additionally, you can optimize networking in Docker by adjusting network parameters. Docker allows you to adjust various network parameters, such as buffer size, network congestion, and flow control. By adjusting these parameters, you can help improve Docker’s network efficiency.
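
As one example of this kind of tuning (the value is illustrative, and only namespaced net.* sysctls can be set this way):

docker run -d --sysctl net.core.somaxconn=1024 nginx:alpine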

And that’s all…

I hope these tips have helped you understand how you can optimize the performance of your Docker containers. Remember that each application is unique, and what works for one might not work for another. Therefore, it’s important to experiment and find the optimization strategies that work best for your applications.

Until the next post!

How to debug applications in Docker containers: Your ultimate guide

Hey there, fearless developer! If you’re here, it’s because you’re looking for how to debug your applications in Docker containers. We understand this process can seem complex, but don’t worry! You’re in the right place. Throughout this post, you will learn the tricks and techniques to deploy and debug your applications efficiently.

Understanding Docker and containers

Before diving into the intricacies of debugging, it’s good to briefly clarify what Docker is and why containers are so relevant in modern application development. Docker is a tool that allows developers like you to package applications and their dependencies into containers. These containers are lightweight and portable, allowing you to run your applications on any operating system that supports Docker, without worrying about tedious configuration tasks.

Tools for debugging in Docker

Debugging from the host

First, let’s talk about how you can debug your applications from the same host where the Docker container is running. This is useful in situations where you want to track what’s happening in your application in real-time without needing to access the container.

You can use tools like docker logs, which allows you to view your applications’ logs in real-time. Plus, you can use docker top to view the processes that are running inside your container. This allows you to see what’s consuming resources and if there’s any process that shouldn’t be running.
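
For example (the container name is illustrative):

docker logs -f --tail 100 my_container
docker top my_container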

Accessing the container

Occasionally, you will need to directly access the container to debug your application. Docker allows you to do this using the docker exec command, which lets you run commands inside your container as if you were on the host operating system.

Once inside the container, you can use the debugging tools installed on your image. For example, if you’re working with a Python application, you could use pdb to debug your code.
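
A sketch of that flow, assuming a Python image with pdb available:

docker exec -it my_container sh
# Inside the container:
python3 -m pdb app.py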

Debugging with Docker Compose

Docker Compose is another tool that will be useful in debugging your applications. Docker Compose allows you to define and run multi-container applications with a simple description in a YAML file.

Like with Docker, you can access your applications’ logs with docker-compose logs, and you can also access the container with docker-compose exec.
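
For example (the service name web is an assumption taken from your docker-compose.yml):

docker-compose logs -f web
docker-compose exec web sh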

Techniques for debugging applications in Docker

Runtime debugging

Runtime debugging allows you to inspect your application’s state while it’s running. You can do this using tools like pdb (for Python) or gdb (for C/C++) within your container.

These tools allow you to put breakpoints in your code, inspect variables, and step through your application’s execution, allowing you to see exactly what’s happening at each moment.

Post-mortem debugging

Post-mortem debugging is done after your application has crashed. This allows you to inspect your application’s state at the moment of failure.

Post-mortem debugging is especially useful when you encounter intermittent or hard-to-reproduce errors. In these cases, you can set up your application to generate a memory dump in case of failure, which you can later analyze to find the problem.
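
For a Python application, one hedged way to get a post-mortem session is to launch it under pdb (Python 3.2+); on an uncaught exception it drops into the debugger instead of just printing a traceback:

python3 -m pdb -c continue app.py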

Tracing and Profiling

Another useful technique in debugging applications in Docker is tracing and profiling. This gives you detailed information about your application’s execution, such as how long each function takes to execute or memory usage.

There are various tools that allow you to trace and profile your applications in Docker, like strace (for Linux-based systems) or DTrace (for Unix-based systems).
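
A sketch of tracing a container's main process with strace (this assumes strace is installed in the image and the container runs with the SYS_PTRACE capability; names are illustrative):

docker run -d --name traced --cap-add=SYS_PTRACE my-app:1.0
# Attach to PID 1 inside the container and summarize its syscalls; stop with Ctrl+C
docker exec -it traced strace -f -c -p 1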

Final tips

Before wrapping up, I’d like to give you some tips to make your experience debugging applications in Docker as bearable as possible:

  • Make sure you have a good understanding of how Docker works. The better you understand Docker, the easier it will be to debug your applications.
  • Familiarize yourself with the debugging tools available for your programming language.
  • Don’t forget the importance of good logs. A good logging system can be your best ally when debugging problems in your applications.
  • Use Docker Compose to orchestrate your multi-container applications. This will make it easier to debug problems that arise from the interaction between various containers.

In summary, debugging applications in Docker containers can be a complex task, but with the right tools and techniques, you’ll be able to do it efficiently and effectively. Remember, practice makes perfect, so don’t get frustrated if it seems complicated at first. Cheer up and let’s get debugging!

Migrating from Docker Swarm to Kubernetes: A Case Study

Hello everyone! Today, I’m going to share an exciting story with you – how we decided to migrate from Docker Swarm to Kubernetes. You might be wondering: why make this change? Well, there are various reasons, and all of them add up to make Kubernetes a very appealing option. Let’s get into it!

Why the Change: Kubernetes Advantages over Docker Swarm

Docker Swarm is great, don’t get me wrong. It’s easy to use, has a gentle learning curve, and deployments are quick. However, if you’re looking for a tool with greater scalability, robustness, and flexibility, Kubernetes is your guy.

On the one hand, Kubernetes takes the trophy when it comes to scalability. Its ability to handle a large number of containers in a cluster is something that Kubernetes excels at. And if you add the possibility of managing several clusters at once, we have an indisputable winner.

Moreover, Kubernetes boasts a rich and diverse ecosystem. It offers a wide range of plugins and extensions, greatly increasing its flexibility. On top of that, the community that backs it is very active, with constant updates and improvements. In contrast, the Docker Swarm community, although dedicated, can’t compete in terms of size and activity.

Our Scenario: Where We Started

We were in a situation where we had already implemented Docker Swarm in our infrastructure. We had several services running on Swarm, which worked well and served their purpose. But we knew we could improve our architecture.

The Path to Kubernetes: First Steps

The first step to migrating from Docker Swarm to Kubernetes is creating a Kubernetes cluster. In our case, we chose to use Google Kubernetes Engine (GKE) for its ease of use and powerful functionalities. However, there are other options, like AWS EKS or Azure AKS, that you might also consider.

Once we created our cluster, we set to work on converting our Docker Compose Files to Kubernetes. This is where Helm comes in. Helm is a package manager for Kubernetes that allows us to define, install, and upgrade applications easily.

From Swarm to Cluster: Conversions and Configurations

Converting Docker Compose files to Helm files isn’t tricky, but it does require attention to detail. Luckily, there are tools like Kompose that make our lives easier. Kompose automatically converts Docker Compose files into Kubernetes files.
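
For example (file and output paths are illustrative):

kompose convert -f docker-compose.yml -o k8s/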

Once we converted our files, it was time to define our configurations. Kubernetes’ ConfigMaps and Secrets are the equivalent to environment variables in Docker Swarm. Here, we needed to make some modifications, but in general, the process was quite straightforward.
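
As a sketch of the equivalents (names and values are illustrative):

kubectl create configmap app-config --from-literal=LOG_LEVEL=info
kubectl create secret generic app-secrets --from-literal=DB_PASSWORD='s3cret'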

Deploying on Kubernetes: Challenges Faced

Now, with our Kubernetes cluster ready and our Helm files prepared, it was time to deploy our services. This is where we encountered some challenges.

The first challenge was managing network traffic. Unlike Docker Swarm, which uses an overlay network to connect all nodes, Kubernetes uses a different approach called CNI (Container Network Interface). This required a change in our network configuration.

Additionally, we had to adjust our firewall rules to allow traffic between the different Kubernetes services. Fortunately, Kubernetes’ Network Policies made this task easier.

The next challenge was managing volumes. While Docker Swarm uses volumes for persistent storage, Kubernetes uses Persistent Volumes and Persistent Volume Claims. While the concept is similar, the implementation differs somewhat.

In our case, we used Docker volumes to store data from our databases. When migrating to Kubernetes, we had to convert these volumes into Persistent Volumes, which required some additional work.
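
On the Kubernetes side, such a volume starts with a PersistentVolumeClaim; a minimal sketch (name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi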

Finally, we faced the challenge of monitoring our new Kubernetes cluster. Although there are many tools for monitoring Kubernetes, choosing the right one can be complicated.

In our case, we opted for Prometheus and Grafana. Prometheus provides us with a powerful monitoring and alerting solution, while Grafana allows us to visualize the data in an attractive way.

Surprises Along the Way: What We Didn’t Expect

As with any project, we ran into a few surprises along the way. Some of them were pleasant, others not so much.

On one hand, we were pleasantly surprised by how easily we could scale our services on Kubernetes. Thanks to the auto-scaling function, we were able to automatically adjust the number of pods based on workload. This allowed us to improve the performance of our services and save resources.
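
For instance, CPU-based auto-scaling can be set up with a single command (deployment name and thresholds are illustrative):

kubectl autoscale deployment my-app --cpu-percent=80 --min=2 --max=10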

On the other hand, we encountered some issues with updates. Unlike Docker Swarm, where updates are quite straightforward, in Kubernetes we had to grapple with Rolling Updates. Although they are a powerful feature, they require some practice to master.

Mission Accomplished!: Kubernetes Up and Running

Finally, after overcoming challenges and learning from surprises, we successfully migrated from Docker Swarm to Kubernetes. Now, our services run more efficiently, and we have greater flexibility and control over our infrastructure.

I’m sure that we still have a lot to learn about Kubernetes. But, without a doubt, this first step has been worth it. The migration has allowed us to improve our architecture, optimize our services, and prepare for future challenges.

And you, have you considered migrating from Docker Swarm to Kubernetes? What do you think of our experience? We’re eager to hear your impressions and learn from your experiences!

10 Essential Docker Tricks You Should Know

Before diving into Docker’s tricks, it’s vital to understand what Docker is and why it has become such a critical tool in the world of development and system administration. Docker is a container platform that allows developers and system administrators to efficiently and securely package, distribute, and manage applications. At the heart of Docker are containers, isolated environments where applications run, bypassing the issues of “it works on my machine”.

But now let’s get to what really matters, those Docker tricks that will make your life much easier.

Leveraging the Dockerfile

The Dockerfile is the document that defines how your Docker image, which will give life to your container, is built. You can see it as a recipe, where each line is an instruction to add ingredients (layers) to your image.

Don’t forget about .dockerignore: Just like with git, Docker has a system to ignore files. Any file or folder specified in .dockerignore will not be copied into the Docker image. This is useful for ignoring unnecessary files, like logs, node_modules files, or any others not needed to run your application.
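
A typical .dockerignore might look like this (the entries are examples):

.git
node_modules
*.log
tmp/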

Use of cache layers: Docker caches the image layers every time you build one. If you haven’t made changes to that specific layer, Docker will reuse it, speeding up the building process. If you place the instructions that change the least at the beginning of the Dockerfile, you’ll be able to take full advantage of this feature.
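
A sketch of that ordering for a Node.js image (file names are the npm defaults; the example assumes a package-lock.json exists):

FROM node:20-alpine
WORKDIR /app
# Dependency manifests change rarely, so this layer is usually served from cache
COPY package*.json ./
RUN npm ci
# Application source changes often, so it goes last
COPY . .
CMD ["node", "server.js"]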

Playing with Containers

Containers are the essence of Docker, but they can also be tricky to handle if you don’t know some tricks.

Efficient container management: Docker offers several commands to manage containers. For example, you can use docker ps -a to see all your containers (even the ones that are stopped), docker stop $(docker ps -aq) to stop all running containers, and docker rm $(docker ps -aq) to remove all containers.

Docker logs for debugging: If something goes wrong with your application, you can check your container logs with docker logs CONTAINER_ID. You can even follow the logs in real-time with docker logs -f CONTAINER_ID.

Docker Images: Less is More

Docker images can be huge if not handled correctly. Here are a few tricks to keep them as light as possible.

Use minimalist base images: There are many Docker base images available, but not all are created equal. Some are very heavy and contain a lot of things you probably don’t need. Try using minimalist base images like Alpine, which only have the essentials.

Remove cache after installations: When you install something with apt, yum, or any other package manager, it generates a cache that is not necessary to run your application. You can delete it on the same line where you install the package, so you don’t generate a new layer: for instance, you can use RUN apt-get update && apt-get install -y my-package && rm -rf /var/lib/apt/lists/*.

Orchestrating Containers with Docker Compose

Docker Compose is an incredible tool that allows you to define and manage multiple containers at once. And, of course, it also has its own tricks.

Use environment variables: Docker Compose lets you define environment variables in a .env file, which you can then use in your docker-compose.yml file. This is very handy to avoid writing the same things over and over again.

Dependencies between containers: With Docker Compose, you can define dependencies between containers using the depends_on option. This ensures that Docker Compose starts the containers in the correct order.
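
A minimal sketch combining the .env variable and depends_on ideas (service names and values are illustrative):

# .env
DB_PASSWORD=example

# docker-compose.yml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  web:
    image: my-web:1.0
    depends_on:
      - db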

Creating Networks with Docker

Docker allows you to create networks to connect your containers so that they can communicate with each other.

Creating and managing networks: You can create a network with the docker network create command. Once the network is created, you can connect a container to it with docker network connect.

Inspecting networks: If you want to see what containers are connected to a network, you can use docker network inspect.

Optimizing Docker Use

Finally, there are some general tricks that will help you make the most of Docker.

Docker system prune: Over time, you are likely to accumulate a bunch of images, containers, and networks that you no longer use. You can remove them all with docker system prune.

Using Docker with CI/CD: Docker fits perfectly into any continuous integration and continuous deployment process. You can build your Docker image as part of your CI/CD pipeline, test it, and then deploy it to production.

Using Multi-Stage Builds for Efficient Images

In Docker, you can use multi-stage builds to optimize your images. This involves dividing your Dockerfile into multiple stages, where each can use a different base image. For example, you can have one stage to compile your application and another to run it. This allows you to include only what is necessary in the final image, keeping it as lightweight as possible.
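
As an illustrative sketch with a Go application (module and file names are assumptions):

# Build stage: includes the full Go toolchain
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Runtime stage: only the compiled binary
FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
CMD ["app"]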

Configuring Volumes for Data Persistence

Data stored in containers is ephemeral and is lost when the container is deleted. To maintain persistent data, you can configure volumes in Docker. This allows you to store data outside the container, ensuring it is not lost even if the container is deleted or updated.
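
For example (names and the password are illustrative):

docker volume create app-data
docker run -d --name db -e POSTGRES_PASSWORD=example -v app-data:/var/lib/postgresql/data postgres:16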

Using Docker Secrets to Manage Sensitive Data

Managing sensitive data such as passwords and access tokens is crucial. Docker Secrets provides a secure way to store and manage this information. Secrets are encrypted in transit and at rest, providing an additional layer of security for your sensitive data.
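
Keep in mind that Docker Secrets are a Swarm feature; a minimal sketch (names are illustrative, and the official postgres image reads *_FILE variables):

echo "s3cret" | docker secret create db_password -
docker service create --name db --secret db_password \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/db_password postgres:16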

Optimization with Health Checks

Health checks in Docker allow you to automatically verify the status of your containers. You can define commands or instructions that Docker will periodically execute to ensure that your application is running correctly. This is especially useful for quickly detecting problems and improving the availability and reliability of your services.
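
For example, in a Dockerfile (endpoint and timings are illustrative; nginx:alpine ships wget via BusyBox):

FROM nginx:alpine
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost/ || exit 1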

I hope these tricks will help you master Docker and make your life a little easier. Remember, Docker is a powerful tool, but it can also be complicated. With these tips and tricks, you’ll be able to get the most out of Docker and avoid some of the most common issues.

Best practices building images with Dockerfiles

Order matters

In Dockerfiles, order matters a lot. For example, copying an executable into the image with a COPY or ADD instruction and then running it is not the same as trying to run it before it has been added. This seems obvious, but it is one of the main errors that cause a Dockerfile to fail when building an image from it.
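
A minimal sketch of the correct order (file names are illustrative):

FROM alpine:3.19
# The script must be copied in before it can be run
COPY setup.sh /setup.sh
RUN chmod +x /setup.sh && /setup.sh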

Lighten the image by deleting files

Whenever you create an image, remember to delete temporary files that the application will not need at runtime, since this saves disk space. For example, if we download a compressed file and only use its extracted content, we should delete the compressed file to make the image lighter.
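
For example, a Dockerfile fragment that downloads, unpacks, and deletes the archive in the same layer (URL and paths are illustrative, and wget is assumed to be in the base image):

RUN wget -q https://example.com/app.tar.gz \
 && mkdir -p /opt/app \
 && tar -xzf app.tar.gz -C /opt/app \
 && rm app.tar.gz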

Reduces the number of files

Avoid installing packages you do not need. Otherwise the image you are creating will consume more memory and disk, and it can also create more security problems, since you will have to maintain and update those extra packages in each version.

Avoid including files that you should not by using “.dockerignore”

Avoid including files that should not be there, such as files containing personal data, by using a “.dockerignore” file. These files are similar to “.gitignore” files, and with a few lines we can avoid leaking information.

Specify the base image version and dependencies

It is important to use specific versions rather than base images and dependencies with no version pinned. Not pinning versions can lead to unexpected bugs that are hard to track down.
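
For example, pinned versions for both the base image and a dependency (the versions shown are illustrative):

FROM python:3.12-slim
RUN pip install --no-cache-dir flask==3.0.3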

Use the correct base image

It is important to use base images that are as small as possible, such as Alpine or Busybox, whenever you can. On the other hand, some applications need specific images to work; in that case there is not much more to say: use them.

Finally, whenever possible use official base images; this avoids problems such as images with embedded malware.

Reuse images

If all the images running on your hosts are based on Ubuntu:20.04, for example, using that same base image can save you more disk space than switching to a small image like Alpine or Busybox, because the shared base layers are already stored on disk.

Why we should containerize our applications

Why should we containerize our applications? First of all, it should be noted that an application can run correctly in a system without containers or inside a container. It can run correctly in either mode.

So why “waste time” moving the application to containers?

When we prepare an application to run in containers we are not wasting time. On the contrary, we are gaining time in the future.

Let me explain: when an application is prepared to run in containers, we make it more independent of the underlying system. We can update the system where the containers run without affecting the application and, conversely, update the application image without affecting the base system. In short, we give the application a layer of isolation.

It is important to highlight that the image we prepare for the application should comply with the OCI (Open Container Initiative) standards (see https://opencontainers.org/). If the image is OCI-compliant, we can run our application's image on any compatible runtime, such as:

  • Docker
  • Containerd
  • Cri-o
  • Rkt
  • Runc

So, what else do we gain by having the application ready to run in a container?

Beyond running the image on the standalone runtimes and managers listed above, such as Docker, we can hand it to an orchestrator such as:

  • Docker Swarm (not the most used)
  • Kubernetes (the most widely used orchestrator)

This type of orchestrators provide great advantages for our application, such as high availability, scalability, monitoring, flexibility, etc. They provide an extra abstraction layer that makes it easier to manage networks, volumes, instance management, and everything related to container management.

For example, using Kubernetes you can have an application in production and have it scale based on CPU or RAM usage. You can also make sure that there are a certain number of instances. And most importantly, you can deploy without causing a disaster by very quickly managing a rollback if necessary.
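
For instance, rolling back a bad deploy is a one-liner (the deployment name is illustrative):

kubectl rollout undo deployment/my-app
kubectl rollout status deployment/my-app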

Conclusions

Just a few years ago the industry in general only saw this as viable for non-production environments (except for the most daring), but recently we are seeing increasingly widespread adoption of this type of technology. In fact, the vast majority of the major cloud players now offer managed container services.

La entrada Why we should containerize our applications se publicó primero en Aprende IT.

]]>
https://aprendeit.com/en/why-we-should-containerize-our-applications/feed/ 0
The docker system command

Hello again! This article talks about how to get general information about the whole docker system and do “cleanup” of containers, images and volumes with docker system.

Docker system

The docker system command has several subcommands:

• docker system info
• docker system df
• docker system events
• docker system prune

View docker host information

To see information about the host from Docker, we can run the command docker system info or its shorthand docker info, like this:

[ger-pc ~]# docker info
Client:
Debug Mode: false

Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 5
Server Version: 19.03.7-ce
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d76c121f76a5fc8a462dc64594aea72fe18e1178.m
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.5.8-1-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.233GiB
Name: ger-pc
ID: MWH3:24AM:UAZJ:E5UL:TRI3:F4NZ:Y3JD:IMKP:7G2V:VV6I:L5XF:J2TW
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

This way we can see the hardware, Docker version, and system characteristics we are working with.

View disk usage of each Docker component

We can see the disk capacity used by each image, container, volume and build cache. The docker system df command will give us a brief summary of the disk for each docker component as you can see in the following example:

[ger-pc ~]# docker system df 
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 6 1 1.006GB 803.4MB (79%)
Containers 1 0 2B 2B (100%)
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
[ger-pc ~]#

If we also want to know how much disk space each individual image, container, or volume occupies, add the -v option to the command, as in the following example:

[ger-pc ~]# docker system df -v
Images space usage:

REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
nginx alpine 89ec9da68213 3 days ago 19.94MB 0B 19.94MB 0
archlinux latest 9651b9e35f39 2 weeks ago 412.2MB 0B 412.2MB 0
ubuntu 18.04 4e5021d210f6 5 weeks ago 64.21MB 0B 64.21MB 0
centos 8 470671670cac 3 months ago 237.1MB 0B 237.1MB 0
ubuntu 19.04 c88ac1f841b7 3 months ago 69.99MB 0B 69.99MB 0
centos 7 5e35e350aded 5 months ago 203MB 0B 203MB 1

Containers space usage:

CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES
6e034771f891 centos:7 "/bin/bash" 0 2B 25 hours ago Exited (137) 15 hours ago container1

Local Volumes space usage:

VOLUME NAME LINKS SIZE

Build cache usage: 0B

CACHE ID CACHE TYPE SIZE CREATED LAST USED USAGE SHARED
[ger-pc ~]#

View Docker system events in real time

The docker system events command reports Docker events in real time. This can help when the system has a failure and we want to understand what happened so it does not recur.
The command syntax is very simple:

docker system events
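
You can also narrow the stream with filters, for example (a sketch):

docker system events --since 10m --filter event=die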

 

Remove Docker leftovers

Docker has a very fast way to clean up the system.
The docker system prune command can help a lot with this cleanup.

The syntax of the command is:

docker system prune

An example of the output is:

[ger-pc ~]# docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
6e034771f891225529dc6fc7eef6f40d537820d7607f68885edc10f2c71c6f9d

Total reclaimed space: 2B
[ger-pc ~]#

As you can see, it will only remove:

  • Stopped containers
  • Networks not used by at least one container
  • Dangling images
  • Dangling build cache

To skip the confirmation prompt, add the -f parameter (“docker system prune -f”); this deletes all of the above at once, without asking.

If we also want to delete all unused images (not only dangling ones), add -a, so the command looks like this:

docker system prune -a

To remove volumes as well, add the --volumes parameter as in the following example:

docker system prune --volumes

 

If you are interested in learning Docker you can purchase our book here

Docker para novatos


If you liked the article, share it on your social networks so we can reach more people!

Best regards
