Ubuntu df/du Commands: Checking Disk Space Usage

In the Linux system, `df` and `du` are core tools for disk space management, used to view overall partition usage and specific directory/file space respectively. `df` (Disk Free) analyzes partition-level usage: The basic command is `df -h` (human-readable units). Key parameters include `-T` (display file system type) and `-i` (check inode usage). Output columns include partition device (e.g., `/dev/sda2`), total capacity, used/available space, usage percentage, and mount point (e.g., `/`). Note that `tmpfs` is a memory-based virtual partition and can usually be ignored. `du` (Disk Usage) focuses on directory/file details: Common commands are `du -sh` (quickly sum a directory's size), `du -ah` (list every file, not just directories), and `du --max-depth=1` (only first-level subdirectories). Examples include `du -sh /home` to check total directory usage, and `du -ah /tmp | sort -hr | head -n 10` to identify large files. **Key Differences**: `df` checks overall partition usage (e.g., clean up when root partition usage exceeds 85%), while `du` drills into specific content to locate which directories and files are actually consuming that space.
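
A short annotated session tying the two tools together (paths such as `/var` are only examples):

```bash
# Partition-level view: human-readable sizes plus filesystem type
df -hT

# Inode usage: a partition can be "full" with free bytes but no free inodes
df -i

# Total size of a single directory
sudo du -sh /home

# First-level subdirectories of /var, largest first
sudo du -h --max-depth=1 /var | sort -hr

# Ten largest entries under /tmp
du -ah /tmp | sort -hr | head -n 10
```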

Read More
System Information Viewing: Usage of the Ubuntu uname Command

`uname` is a lightweight and practical system information viewing tool in Ubuntu that requires no additional installation. It can quickly obtain basic information such as kernel version, hostname, and hardware architecture, making it suitable for beginners. Basic usage of `uname`: Executing it directly displays the kernel name (default: `Linux`). Common parameter functions: - `-a` (or `--all`): Displays all system information, including kernel name, hostname, kernel version, hardware architecture, and operating system name (e.g., `Linux my-ubuntu 5.15.0-76-generic x86_64 GNU/Linux`); - `-r` (or `--kernel-release`): Displays the kernel release version; - `-n` (or `--nodename`): Displays the hostname; - `-m` (or `--machine`): Displays the hardware architecture (e.g., `x86_64`); - `-v` (or `--kernel-version`): Displays the detailed kernel version; - `-o` (or `--operating-system`): Displays the operating system name (usually `GNU/Linux`). Application scenarios include quickly checking system information, script automation tasks (e.g., adapting software for different architectures), and comparing kernel versions across multiple devices. In summary, `uname` is a fast, dependency-free way to check kernel and hardware details directly from the terminal.
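
A quick sketch of typical `uname` calls, plus a hypothetical architecture switch of the kind used in install scripts:

```bash
uname        # kernel name only: Linux
uname -a     # everything at once
uname -r     # kernel release, e.g. 5.15.0-76-generic
uname -m     # hardware architecture, e.g. x86_64

# Script use: branch on architecture (the messages are placeholders)
case "$(uname -m)" in
  x86_64)  echo "fetch the amd64 build" ;;
  aarch64) echo "fetch the arm64 build" ;;
  *)       echo "unsupported architecture" ;;
esac
```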

Read More
Essential for Terminal: Monitoring System Resources with Ubuntu's top Command

In the Ubuntu system, the `top` command is a practical tool for monitoring system resources in the terminal, which can dynamically display the status of CPU, memory, processes, etc. To start it, open the terminal (Ctrl+Alt+T) and enter `top` (ordinary users can run it; root/`sudo` additionally allows signaling other users' processes). The core areas of the interface include: system overview information (uptime, number of users, load), process summary (total processes, running/sleeping/zombie counts), CPU status (`us` user mode, `id` idle, `wa` IO wait), memory (total/used/free/cached), Swap, and the process list (PID, `%CPU`/`%MEM`, etc.). Common shortcut keys: `P` (sort by CPU), `M` (sort by memory), `1` (per-core display on multi-core CPUs), `k` (terminate a process), `q` (quit). Practical scenarios: Use `P` + `k` to troubleshoot CPU-hungry processes, `M` to monitor memory leaks (where `RES` keeps rising), and diagnose high load through `load average` (high `wa` indicates an IO bottleneck; high `us` calls for program optimization). Mastering the core shortcuts allows efficient system management, making `top` an essential daily monitoring tool.
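
A minimal sketch; the `-o` sort flag assumes the procps-ng `top` that ships with Ubuntu:

```bash
top                   # interactive: P/M to sort, 1 for per-core view, k to kill, q to quit

# Non-interactive snapshot, handy in scripts or over SSH
top -b -n 1 | head -n 15

# Start already sorted by memory instead of CPU
top -o %MEM
```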

Read More
Common Issues and Solutions when Using `apt install` on Ubuntu

The following are common issues and solutions for Ubuntu's `apt install`: **1. Unable to locate package**: Verify the package name spelling (use `apt search` for confirmation), run `sudo apt update` to refresh sources, or fix misconfigured sources (e.g., switch to a faster mirror). **2. Unable to acquire lock**: Caused by a lingering `apt` process. Terminate it (find the PID via `ps aux | grep apt`, then `sudo kill PID`), or, as a last resort once no `apt` process is running, delete the lock file: `sudo rm /var/lib/dpkg/lock`, then retry the installation. **3. Could not resolve domain name**: Check network connectivity (use `ping` for testing), modify DNS settings (edit `/etc/resolv.conf` to add 8.8.8.8, etc.), or temporarily switch to an HTTP source. **4. Dependency error**: Run `sudo apt install -f` to automatically fix dependencies, or manually install missing packages before retrying. **5. Insufficient permissions**: Add `sudo` before the command (e.g., `sudo apt install <package-name>`). **6. Software fails to start after installation**: Check the installation status (`sudo dpkg -l | grep <package-name>`) and reinstall the package if it turns out to be missing or broken.
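
A condensed troubleshooting sequence matching the numbered issues above (`htop` stands in for whatever package you are installing):

```bash
sudo apt update                 # 1. refresh the index before anything else
apt search htop                 # 1. confirm the exact package name

ps aux | grep -i apt            # 2. find a lingering apt process holding the lock
# sudo kill <PID>               # 2. only if that process is genuinely stuck

ping -c 3 archive.ubuntu.com    # 3. check that the mirror resolves and responds

sudo apt install -f             # 4. repair broken dependencies

dpkg -l | grep htop             # 6. verify the package actually installed
```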

Read More
Cleaning Up Ubuntu System: Detailed Explanation of the `apt autoremove` Command

After installing and uninstalling software in Ubuntu, residual unnecessary dependency packages often remain, occupying disk space and bloating the system. `apt autoremove` can automatically clean up these "useless automatic dependencies": packages that were installed incidentally to satisfy dependencies but are no longer required by any other software. This command requires administrative privileges, with the basic syntax being `sudo apt autoremove`. After execution, it lists the packages to be removed and the space to be freed; enter `y` to confirm. Optional flags include `-y` for automatic confirmation (check the list without flags first to assess the risk) or `--purge` to also remove configuration files (not the default behavior). Distinct from `apt clean` (clears the package cache) and `apt remove` (removes a named package while leaving its dependencies behind), `autoremove` focuses specifically on cleaning unused dependencies. Before use, simulate the run with `--dry-run` to preview exactly what would be removed. It is safer to run after updating the software sources, and `-y` should be used cautiously to prevent accidental deletions. Regular use frees disk space, and a mistakenly removed dependency can simply be reinstalled to restore functionality.
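
A safe sequence, previewing before deleting anything:

```bash
sudo apt autoremove --dry-run   # preview: nothing is changed yet
sudo apt autoremove             # remove unused automatic dependencies (confirm with y)
sudo apt autoremove --purge     # also delete their configuration files
```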

Read More
Essential for System Updates: The Difference Between `apt update` and `upgrade` in Ubuntu

Updating Ubuntu systems relies on `apt update` and `apt upgrade`, which serve different purposes and must be executed in sequence. `apt update` is used to refresh the package index (checking the latest list), ensuring the system is aware of available software versions and dependencies. In contrast, `apt upgrade` upgrades installed software to the latest versions based on this index (utilizing the list to update software). **Key distinction**: **`apt update` must be executed first**. Otherwise, outdated information may lead to upgrade failures or version incompatibilities. **Correct procedure**: 1. Run `sudo apt update` in the terminal to update the package list. 2. Then execute `sudo apt upgrade` to upgrade installed software. **Notes**: - If `update` fails, check your network or switch to a different source (e.g., Aliyun or Tsinghua mirrors). - Use `sudo apt --fix-broken install` to resolve dependency conflicts. - Kernel/driver upgrades require a system restart. - Regularly update systems and back up data; prefer LTS (Long-Term Support) versions for stability. In short, `update` checks the package list, and `upgrade` uses this list to update software. Both are essential, and following the sequential execution is critical.
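
The full routine in order, with the optional checks mentioned above:

```bash
sudo apt update                  # 1. refresh the package index
apt list --upgradable            # optional: see what would change
sudo apt upgrade                 # 2. upgrade installed packages
sudo apt --fix-broken install    # if dependency conflicts appear
sudo reboot                      # only needed after kernel/driver upgrades
```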

Read More
Ubuntu Software Installation: A Beginner's Guide to the apt install Command

The most common and secure way for Ubuntu beginners to install software is the `apt install` command. First, open the terminal (shortcut `Ctrl+Alt+T` or search for "Terminal"). Before installation, execute `sudo apt update` to refresh the software source information. For installation, use `sudo apt install <package name>`; multiple packages can be installed at once (separated by spaces). To uninstall, use `sudo apt remove` (preserves configuration files) or `sudo apt purge` (also removes configuration files). Common issues: incorrect software name (search with `apt search`), unavailable sources (check network or change sources), insufficient permissions (ensure `sudo` is used). Security tips: only install software from the official repository; do not manually download `.deb` files. Core steps: update sources → install → verify. Proficiency comes with practice.
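
A minimal install/uninstall session (`htop`, `curl`, `git`, and `vim` are just example packages):

```bash
sudo apt update                 # refresh sources first
sudo apt install htop           # install a single package
sudo apt install curl git vim   # several at once, space-separated
sudo apt remove htop            # uninstall, keep configuration files
sudo apt purge htop             # uninstall and delete configuration files
```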

Read More
Safe Deletion: A Correct Guide to Using rm -rf in Ubuntu

This article introduces the safe usage of the `rm -rf` command in Ubuntu to avoid accidental data deletion. The `rm -rf` command consists of `rm` (remove), `-r` (recursive), and `-f` (force). Its danger lies in the fact that one mistyped path can lead to irreversible file deletion or an unbootable system (e.g., `rm -rf /`). Key principles for safe use: 1. **Confirm the target**: Use `ls` to check the files/directories before deletion to ensure the path and content are correct. 2. **Replace `-f` with `-i`**: The `-i` parameter prompts for confirmation, preventing accidental deletions. 3. **Be cautious with directory deletion**: When deleting a directory containing subdirectories, first `cd` to its parent directory, verify the contents with `ls`, then delete the directory by its explicit name (modern `rm` refuses `rm -rf .` and `rm -rf ..` outright as a safeguard). 4. **Avoid high-risk commands**: Never execute commands like `rm -rf /` or `rm -rf ~/*`. After accidental deletion, tools like `extundelete` or `testdisk` can be attempted for recovery, but prevention is crucial. By developing the habit of "check first, confirm, and avoid blind operations," the `rm -rf` command can be used safely.
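
A sketch of the "check first, then delete" habit (`~/old_project` is a hypothetical target):

```bash
ls -la ~/old_project     # 1. inspect exactly what would be deleted
rm -ri ~/old_project     # 2. recursive but interactive: confirms each item
# rm -rf ~/old_project   # 3. only once you are certain of the exact path
```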

Read More
Ubuntu chmod Command: A Comprehensive Guide to Modifying File Permissions

This article introduces the basics of file permission management in Ubuntu and the usage of the `chmod` command. Permissions are divided into three user categories: owner (u), group (g), and others (o), with permission types being read (r), write (w), and execute (x), corresponding to different operations. Directory permissions are special: `x` grants entry into the directory, and `w` allows creating/deleting files. The `chmod` command has two syntaxes: symbolic notation (role+operation+permission, e.g., `u+x` adds execute permission to the owner) and numeric notation (three digits representing the sum of permissions for u/g/o, where r=4, w=2, x=1, e.g., 754 means u=rwx, g=rx, o=r). Operations should follow the principle of least privilege to avoid `777` (full access). Insufficient directory permissions cause "Permission denied," requiring checks on `x`/`r` permissions. Distinguish `x` permissions for files (execution) and directories (entry). `chmod` is a core tool for permission management. Using symbolic or numeric notation reasonably, combined with the least privilege principle, ensures system security.
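
Both notations side by side (the filenames are placeholders):

```bash
chmod u+x script.sh       # symbolic: add execute for the owner
chmod g-w,o-r notes.txt   # remove group write and others' read
chmod 754 deploy.sh       # numeric: u=rwx(7), g=rx(5), o=r(4)
chmod -R 755 public/      # recursive, for a whole directory tree
ls -l deploy.sh           # verify: -rwxr-xr--
```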

Read More
Beginner's Guide: Fundamentals of Ubuntu File Permission Management

Ubuntu file permission management is fundamental to system security, controlling three types of permissions (read r, write w, execute x) for three categories of subjects (owner, group, others). Permissions can be represented in two ways: symbolic (e.g., rwxr-xr--) and numeric (where r=4, w=2, x=1; e.g., 754). To view permissions, use `ls -l`; the first column displays permission information. To modify permissions, `chmod` is used (symbolic mode like `u+x` or numeric mode like `755`). `chown` and `chgrp` change the owner and group, respectively. **Note**: Directories require execute permission (x) to be accessed. Default file permissions are 644, and directories are 755. Avoid 777 permissions. When using `chmod` and `chown` on critical files, use `sudo`. Mastering basic permissions suffices for daily needs; always follow security principles and practice regularly.
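
A short sketch of reading and changing ownership; the user `alice` and group `developers` are hypothetical:

```bash
ls -l report.txt                         # e.g. -rw-r--r-- 1 alice alice 1024 ...
sudo chown alice report.txt              # change the owner
sudo chgrp developers report.txt         # change the group
sudo chown alice:developers report.txt   # both in one step
chmod 640 report.txt                     # owner rw, group r, others nothing
```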

Read More
mv Command: Ubuntu File Moving/Renaming Tips

`mv` is a commonly used file management command in the Ubuntu system, whose core function is to **move files/directories** or **rename files/directories**. The basic syntax is `mv [options] source_file/directory target_location/new_filename`. If the target is a directory, the file/directory is moved into it; if it is a new filename, a rename is performed. **Moving operation**: move a file into another directory (e.g., `mv test.txt ~/Documents/`), using absolute paths (`mv ~/Downloads/data.csv /tmp/`) or relative paths (`mv ../Desktop/report.pdf ./`). **Renaming operation**: essentially a move within the same directory under a new name (e.g., `mv oldname.txt newname.txt`); to rename while moving across directories, specify the target path together with the new name. **Common parameters**: `-i` prompts for confirmation before overwriting, `-n` skips existing files, and `-v` displays the operation process. Note that the target directory must exist, and `mv` performs a "move" (the source file is gone afterwards, unlike a "copy"). A mistaken move can usually be undone by moving the file back, but an overwritten target is hard to recover, so prefer `-i` or `-n`. Mastering the syntax and parameters allows efficient handling of most file management needs.
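
Typical moves and renames in one place (`~/logs` must already exist for the last line):

```bash
mv test.txt ~/Documents/       # move into a directory
mv oldname.txt newname.txt     # rename in place
mv -i draft.txt ~/Documents/   # ask before overwriting
mv -n draft.txt ~/Documents/   # never overwrite an existing file
mv -v *.log ~/logs/            # show each move as it happens
```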

Read More
The `cp` Command: How to Copy Files in Ubuntu

In the Ubuntu system, `cp` is a basic command for copying files/directories without deleting the source files. The basic format is `cp source_file/directory target_location`. Common parameters include: `-i` (prompt for confirmation before overwriting), `-r` (recursively copy directories, **required**), and `-v` (show detailed process). **Scenario Examples**: - Copy a single file to the current directory: `cp test.txt .` - Copy to a specified directory (requires `docs` to exist): `cp test.txt docs/` - Copy multiple files: `cp file1.txt file2.txt docs/` - Copy a directory (must use `-r`; auto-creates target directory): `cp -r docs/ backup/` - Confirm overwrites with `-i`: `cp -i test.txt docs/` **Notes**: - Omitting `-r` when copying a directory will cause failure. - The target file is overwritten by default when it exists; use `-i` for safety. - Hidden files (e.g., `.bashrc`) can be copied directly. - `-r` automatically creates the target directory if it does not exist. **Key Takeaways**: Basic format, `-r` for directories, `-i` to confirm overwrites, and `-v` to view the process.

Read More
Ubuntu rm Command: The Correct Way to Delete Files/Directories

This article introduces the correct usage of the `rm` command in the Ubuntu system to avoid accidentally deleting important data. `rm` is a core tool for deleting files/directories; it deletes directly by default without sending files to the trash, making recovery difficult after deletion. **Basic Usage**: Delete a single file with `rm filename`; to delete a directory, use the `-r` (recursive) option: `rm -r directoryname`. Common options include: `-i` (interactive confirmation, prompting before deletion to prevent accidental removal), `-f` (force deletion, ignoring errors, use with caution), and `-v` (verbose, showing deletion progress). **Safety Notes**: Avoid using `rm *` or `rm -rf *` (which delete all contents of the current directory). Do not delete system-critical directories (e.g., `/etc`). Before deleting a directory, use `ls` to confirm its structure; for empty directories, `rmdir` is safer. If something is accidentally deleted, note that files removed via the terminal bypass the graphical trash bin; recovery tools like `extundelete` can be attempted instead (they require installation, and you should avoid writing to the disk after the deletion). **Summary**: Always confirm the target before deletion, prioritize using `-i`, avoid dangerous commands, and ensure data security.
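
The safer defaults in practice (`my_dir`, `empty_dir`, and the filenames are examples):

```bash
ls my_dir/          # check the contents before deleting anything
rm -i notes.txt     # confirm before removing a file
rm -rv my_dir/      # recursive + verbose for a directory
rmdir empty_dir/    # safest for empty directories: fails if not empty
```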

Read More
Quick Start: Creating Folders with Ubuntu mkdir

This article introduces the basic command `mkdir` for creating directories in the Ubuntu system. Short for "make directory", `mkdir` is used to create empty directories and is an essential tool for organizing files. **Basic usage**: To create a single folder in the current directory, use the command format `mkdir <directory name>` (e.g., `mkdir projects`). For creating directories at a specified path (relative or absolute), directly specify the path: e.g., `mkdir ~/Documents/notes` or `mkdir /tmp/temp_files`. To create nested directories (e.g., `a/b/c`), the regular `mkdir` will fail if parent directories do not exist. In this case, use the `-p` option (`--parents`) to automatically create all parent directories (e.g., `mkdir -p workspace/code/python`). **Common issues**: Use `-p` when parent directories do not exist; if permission is insufficient, use `sudo` (with caution). **Summary**: The core syntax of `mkdir` is `mkdir [options] path`. It creates single directories by default, requires `-p` for nested directories, and uses `sudo` for permission issues.
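
The three cases from above in one sketch:

```bash
mkdir projects                    # single directory in the current path
mkdir ~/Documents/notes           # at a specific path (parent must exist)
mkdir -p workspace/code/python    # create the whole parent chain
mkdir -pv backup/{daily,weekly}   # -v prints each directory as it is created
```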

Read More
Essential Ubuntu: Using the pwd Command to View Current Directory Path

In the Ubuntu system, `pwd` (Print Working Directory) is a practical command that displays the current working directory, helping users clarify their location in the file system. The file system is structured as a tree with the root directory `/` as the starting point, and the current path represents the user's specific position within this structure (e.g., the user's home directory is commonly denoted by `~`). The basic usage is straightforward: after opening the terminal (`Ctrl+Alt+T`), entering `pwd` will display the current path (e.g., `/home/yourname`). It has two lesser-known options: `-P` shows the physical path (resolving symbolic links to display the real location), and `-L` shows the logical path (the default, keeping symbolic links in the displayed path). For example, if `link_to_docs` is a soft link in the home directory pointing to `~/Documents`, then after changing into it, `pwd -L` displays `/home/yourname/link_to_docs`, while `pwd -P` shows `/home/yourname/Documents`. Mastering `pwd` helps avoid file operation errors, and combining it with `cd` to switch paths enables efficient file management. It is a fundamental tool for file management.
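
The symlink example above, reproduced as a runnable demo (`/home/yourname` is a placeholder):

```bash
cd ~
ln -s ~/Documents link_to_docs   # create a soft link for the demo
cd link_to_docs
pwd -L   # logical path (default):  /home/yourname/link_to_docs
pwd -P   # physical path:           /home/yourname/Documents
```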

Read More
Step-by-Step Tutorial: Detailed Explanation of the ls Command in Ubuntu

In Ubuntu, `ls` is a commonly used command to view directory contents. The basic usage is `ls` (displays non-hidden files in the current directory, sorted alphabetically). Its core lies in option combinations: `-a` shows hidden files (including `.` and `..`); `-l` displays detailed information (including permissions, owner, size, modification time, etc.); `-h` works with `-l` to show sizes in units like KB/MB; `-t` sorts by modification time, `-r` reverses the order, `-S` sorts by size, `-d` only shows directory names, and `--color=auto` differentiates file types by color. Combinable options include `-lha` (detailed + hidden + size) and `-ltr` (detailed + time + reverse). It can also view specified paths, such as `ls /home/user/Documents`. Common combinations are `ls -l` (detailed), `ls -a` (hidden), `ls -lha` (detailed hidden size), etc. It is recommended to use `man ls` for more help.
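
The most useful combinations at a glance:

```bash
ls -lha                                # long listing + hidden files + readable sizes
ls -ltr                                # long listing by time, newest at the bottom
ls -lS                                 # largest files first
ls -d */                               # only the directories in the current path
ls --color=auto /home/user/Documents   # a specific path, with color coding
```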

Read More
Ubuntu Newbie Guide: How to Use the cd Command?

This article introduces the use of the `cd` command in the Ubuntu system, which is a core tool for directory switching, similar to clicking on folders in Windows. **Basic Usage**: The format is `cd target_directory`. You can directly enter a subdirectory of the current directory (e.g., `cd Documents`), or access another user's home directory via `~username` (requires permissions, e.g., `cd ~root`). **Path Distinction**: Relative paths start from the current directory (`..` represents the parent directory, e.g., `cd ..`); absolute paths start from the root directory `/`. You can use `~` to refer to the home directory (e.g., `cd ~/Pictures`) or write the full path directly (e.g., `cd /usr/share/doc`). **Common Tips**: `cd -` returns to the previous directory, `cd ~` goes straight to the home directory, and `cd ..` moves up to the parent directory. **Common Issues**: Directory does not exist or is misspelled (paths are case-sensitive; use `ls` to check); directories with spaces require quotes or backslashes (e.g., `cd "my docs"`); system directories requiring elevated permissions need `sudo` (ordinary users should mainly work within their home directory). Finally, use `pwd` to confirm the current directory. Mastering paths plus these few techniques covers everyday directory navigation.
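
All of the tips above as one terminal session:

```bash
cd Documents        # relative: into a subdirectory
cd /usr/share/doc   # absolute: from the root
cd ..               # up one level
cd ~                # home directory (a bare `cd` does the same)
cd -                # back to the previous directory
cd "my docs"        # names with spaces need quotes (or my\ docs)
pwd                 # confirm where you ended up
```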

Read More
Hands-On Test of Z-Image: An Efficient Image Generation Model with 6B Parameters

Z-Image is an efficient image generation model with 6B parameters, achieving or even surpassing the performance of mainstream competing models with only 8 inference steps (8 NFEs). It runs smoothly on consumer-grade devices with 16 GB of VRAM. The model has three variants: Turbo (lightweight and real-time, suitable for AIGC applications and mini-programs), Base (undistilled, for secondary fine-tuning), and Edit (specialized for image editing), with Turbo being the most valuable for practical deployment. In hands-on tests, generation at 1024×1024 resolution took 0.8 seconds (with Flash Attention + model compilation), with peak memory usage of 14 GB. Technically, its S3-DiT architecture improves parameter efficiency, the Decoupled-DMD distillation algorithm enables 8-step inference, and DMDR fuses RL with DMD to optimize quality. Its strengths lie in bilingual text rendering, photorealistic generation, low-VRAM deployment, and image editing. Limitations include that only the Turbo variant is currently open-sourced, and that extreme stylized generation and model compilation time still need optimization. Z-Image balances performance, efficiency, and practicality, making it a good fit for small and medium-sized teams and developers looking to lower deployment barriers.

Read More
Nginx Port and Domain Binding: Easily Achieve Domain Access to the Server

This article explains how to bind ports and domains in Nginx to host multiple websites/services on a single server. The core is to distinguish different sites by "port + domain name". Nginx configures virtual hosts through the `server` block, with key directives including `listen` (port), `server_name` (domain name), `root` (file path), and `index` (home page). Prerequisites: the server needs Nginx installed, the domain must be registered and resolved to the server's public IP, and the server must be confirmed reachable. Practical cases cover two scenarios: 1. The same domain name on different ports (e.g., binding ports 80 and 443 for `www.myblog.com`, with an HTTPS certificate required for the latter); 2. Different domain names on different ports (e.g., `www.myblog.com` on port 80, `blog.myblog.com` on port 8080). Configuration files are stored in `/etc/nginx/conf.d/`, and each `server` block should include `listen` and `server_name`. Verification: execute `nginx -t` to check syntax, use `systemctl restart nginx` to apply changes, and verify access via a browser. Common issues: configuration errors (check syntax), domain resolution not yet effective (wait for DNS propagation or check with `nslookup`), and port conflicts (change the port or stop the service occupying it).
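
A sketch of the second scenario as a shell session; the domains, ports, and web roots follow the article's examples and should be replaced with your own:

```bash
sudo tee /etc/nginx/conf.d/blog.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name www.myblog.com;
    root /var/www/myblog;
    index index.html;
}

server {
    listen 8080;
    server_name blog.myblog.com;
    root /var/www/blog;
    index index.html;
}
EOF

sudo nginx -t && sudo systemctl restart nginx
```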

Read More
Common Nginx Commands: Essential Start, Stop, Restart, and Configuration Check for Beginners

This article introduces the core commands for Nginx daily management to help beginners get started quickly. There are two ways to start Nginx: using `nginx` for source code installation, and `sudo systemctl start nginx` for system services installed via yum/apt. Verification can be done by `ps aux | grep nginx` or accessing the test page. For stopping, there are quick stop (`nginx -s stop`, which may interrupt ongoing requests) and graceful stop (`nginx -s quit`, recommended, waiting for current requests to complete). The difference lies in whether the service is interrupted. For restarting, there are two methods: reloading the configuration (`nginx -s reload`, essential after configuration changes without interruption) and full restart (`systemctl restart`, which may cause brief interruption). Configuration checks require first verifying syntax with `nginx -t`, then applying changes with `nginx -s reload`. `nginx -T` can display the complete configuration. Common commands for beginners include start/stop, reload, and syntax checking. Note permissions, configuration paths, and log troubleshooting. Mastering these commands enables efficient daily Nginx operation and maintenance.
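
The whole toolbox in one place:

```bash
sudo systemctl start nginx   # start (packages installed via apt/yum)
sudo nginx -s quit           # graceful stop: lets current requests finish
sudo nginx -s stop           # fast stop: may cut active requests
sudo nginx -t                # check configuration syntax
sudo nginx -s reload         # apply config changes without downtime
sudo nginx -T                # dump the full effective configuration
ps aux | grep nginx          # verify the worker processes are running
```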

Read More
Nginx Beginner's Guide: Configuring an Accessible Web Server

Nginx is a high-performance, lightweight web server/reverse proxy, ideal for high-concurrency scenarios. It features low resource consumption, flexible configuration, and ease of use. **Installation**: On mainstream Linux systems (Ubuntu/Debian/CentOS/RHEL), install via `apt` or `dnf`. Start Nginx with `systemctl start nginx`, enable it at boot with `systemctl enable nginx`, then verify with `systemctl status nginx` or by accessing the server's IP address. **Core Configuration**: Configuration files are located in `/etc/nginx/`, where `nginx.conf` is the main configuration file and `conf.d/` stores virtual host configurations. Create a website directory (e.g., `/var/www/html`), write an `index.html` file, and add a `server` block in `conf.d/` (listening on port 80 and pointing at the website directory). **Testing & Management**: After modifying configurations, use `nginx -t` to check syntax and `systemctl reload nginx` to apply changes. Ensure port 80 is open (firewall settings) and file permissions are correct before testing access. Common commands include `systemctl start/stop/restart/reload nginx` and status checks. **Summary**: install, write a minimal `server` block, verify with `nginx -t`, reload, and confirm in the browser.
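
A minimal end-to-end sketch under those defaults; on Ubuntu you may also need to disable the distribution's default site in `sites-enabled/` so it does not shadow this one:

```bash
sudo mkdir -p /var/www/html
echo '<h1>Hello from Nginx</h1>' | sudo tee /var/www/html/index.html

sudo tee /etc/nginx/conf.d/mysite.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name _;        # catch-all, for testing by IP address
    root /var/www/html;
    index index.html;
}
EOF

sudo nginx -t && sudo systemctl reload nginx
```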

Read More
Nginx Dynamic and Static Content Separation: Speed Up and Stabilize Your Website Loading

Nginx static-dynamic separation separates static resources (images, CSS, JS, etc.) from dynamic resources (PHP, APIs, etc.). Nginx focuses on quickly returning static resources, while backend servers handle dynamic requests. This approach can improve page loading speed, reduce backend pressure, and enhance scalability (static resources can be deployed on CDNs, and dynamic requests can use load balancing). The core of implementation is distinguishing requests using Nginx's `location` directive: static resources (e.g., `.jpg`, `.js`) are directly returned using the `root` directive with specified paths; dynamic requests (e.g., `.php`) are forwarded to the backend (e.g., PHP-FPM) via `fastcgi_pass` or similar. In practice, within the `server` block of the Nginx configuration file, use `~*` to match static suffixes and set paths, and `~` to match dynamic requests and forward them to the backend. After verification, restart Nginx to apply the changes and optimize website performance.
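
A sketch of such a `server` block; the domain, web root, and PHP-FPM socket path are assumptions that vary by setup:

```bash
sudo tee /etc/nginx/conf.d/split.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;
    root /var/www/app;

    # Static files: served straight from disk, cached by the browser
    location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
        expires 7d;
    }

    # Dynamic requests: handed off to PHP-FPM
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;  # socket path varies
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx
```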

Read More
Introduction to Nginx Caching: Practical Tips for Improving Website Access Speed

Nginx caching temporarily stores frequently accessed content to "trade space for time," enhancing access speed, reducing backend pressure, and saving bandwidth. It mainly includes two types: proxy caching (Nginx caches backend responses in reverse-proxy scenarios, fetching from the origin only on a miss) and browser-side HTTP caching (driven by the backend's `Cache-Control` headers and stored locally by the client). Dynamic and frequently changing content (e.g., user information, real-time data) should not be cached. Configuring proxy caching requires defining a cache path and parameters with `proxy_cache_path` (e.g., cache size, key-zone rules), enabling it in a `location` (e.g., `proxy_cache my_cache`), and reloading Nginx after verifying the configuration. Management involves checking cache status (logging `HIT/MISS`), clearing caches (manually deleting cache files or using the `ngx_cache_purge` module), and optimization (caching only static resources, setting `max-age` reasonably). Common issues: for cache misses, check configuration, backend headers, or permissions; for stale content, verify the `Cache-Control` headers. Key points: cache only static content, monitor hit status via logs, and never cache dynamic content.
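
A minimal proxy-cache sketch; the domain, backend address, and sizes are placeholders:

```bash
sudo tee /etc/nginx/conf.d/cache.conf >/dev/null <<'EOF'
# Storage path, shared key zone, size cap, and eviction window
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location /static/ {
        proxy_pass http://127.0.0.1:8000;   # placeholder backend
        proxy_cache my_cache;
        proxy_cache_valid 200 301 10m;      # how long to keep good responses
        add_header X-Cache-Status $upstream_cache_status;  # shows HIT/MISS
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx
```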

Read More
Configuring HTTPS in Nginx: A Step-by-Step Guide to Achieving Secure Website Access

This article introduces the necessity and practical methods of configuring HTTPS for websites. HTTPS ensures data transmission security through SSL/TLS encryption, preventing user information from being stolen. It also improves search engine rankings and user trust (since browser "insecure" prompts can affect the experience), making it an essential configuration for modern websites. The core of the configuration is using free Let's Encrypt certificates (obtained via the Certbot tool). On Ubuntu/Debian systems, execute `apt install certbot python3-certbot-nginx` to install Certbot and the Nginx plugin. Then, use `certbot --nginx -d example.com -d www.example.com` to obtain the certificate for the specified domain names. Certbot will automatically configure Nginx (listening on port 443, setting SSL certificate paths, and redirecting HTTP to HTTPS). Verification methods include checking certificate status (`certbot certificates`) and accessing the HTTPS site in a browser to check for the lock icon. It is important to pay attention to the certificate path, permissions, and firewall port configurations. Let's Encrypt certificates are valid for 90 days; Certbot sets up automatic renewal, which can be tested with `certbot renew --dry-run`. In summary, HTTPS configuration is simple and enhances security, SEO, and user experience, making it an essential skill for modern websites.
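
The full Certbot flow on Ubuntu/Debian (substitute your real domain for `example.com`):

```bash
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot certificates        # inspect what was issued
sudo certbot renew --dry-run     # rehearse the automatic renewal
```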

Read More
Nginx Virtual Hosts: Deploying Multiple Websites on a Single Server

This article introduces the Nginx virtual host feature, which allows a single server to host multiple websites, thereby reducing costs. The core idea is that one Nginx instance emulates multiple independent servers, one `server` block per site. There are three implementation methods in Nginx: domain name-based (the most common, where different domains correspond to different websites), port-based (distinguished by different ports, suitable for scenarios without additional domains), and IP-based (for servers with multiple IPs, where different IPs correspond to different websites). Before configuration, Nginx needs to be installed, website content prepared (e.g., directories `/var/www/site1` and `/var/www/site2` with homepages), and domain name resolution (or optional test domains) arranged. Taking the domain name-based method as an example, the steps are: create the configuration file `/etc/nginx/sites-available/site1.com`, write a `server` block (listening on port 80, matching the domain name, specifying the root directory), configure the second website similarly, create a soft link to `sites-enabled`, test with `nginx -t`, and restart Nginx. For other methods: the port-based method requires specifying a different port (e.g., 8080) in the `server` block; the IP-based method requires the server to bind multiple IPs, with the `listen` directive in the configuration file specifying the IP and port. Common issues include permissions, configuration errors, and domain name resolution, which require checking directory permissions, syntax, and confirming that the domain name points to the server's IP. In summary, Nginx's virtual host feature is a cost-effective solution for hosting multiple websites on a single server, with flexible configuration options based on domain names, ports, or IPs to meet various deployment needs.
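
The domain-based steps as a shell session; `site1.com` and its web root follow the article's example and are placeholders:

```bash
sudo tee /etc/nginx/sites-available/site1.com >/dev/null <<'EOF'
server {
    listen 80;
    server_name site1.com www.site1.com;
    root /var/www/site1;
    index index.html;
}
EOF

sudo ln -s /etc/nginx/sites-available/site1.com /etc/nginx/sites-enabled/
# ...repeat for the second site, then:
sudo nginx -t && sudo systemctl restart nginx
```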

Read More