Nginx Static Resource Service: Rapid Setup for Image/File Access
Nginx is well suited to hosting static resources such as images and CSS thanks to its high performance, light weight, stability, and strong concurrency, improving access speed and saving server resources. For installation, run `sudo apt install nginx` on Ubuntu/Debian or `sudo yum install nginx` on CentOS/RHEL; after startup, visit `localhost` to verify. For the core configuration, create `static.conf` in `/etc/nginx/conf.d/`: listen on port 80, use `location` blocks to match paths (e.g., `/images/` and `/files/`), point `root` at the resource directory, and enable directory browsing with `autoindex on` (with options to control how sizes and times are displayed). For testing, create `images` and `files` directories under `/var/www/static`, place files in them, run `nginx -t` to check the configuration, and apply changes with `systemctl reload nginx`; then access `localhost/images/xxx.jpg` or `localhost/files/xxx.pdf`. Key considerations include the Nginx user's file permissions and confirming that configuration reloads take effect. Setting up an Nginx static resource service is simple, with path matching and directory browsing at its core, and is ideal for rapid static resource hosting; it can be extended with features such as image compression and anti-leeching.
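A minimal sketch of the `static.conf` described above, assuming resources live under `/var/www/static` (all paths are illustrative):

```nginx
# /etc/nginx/conf.d/static.conf
server {
    listen 80;
    server_name localhost;

    # /images/a.jpg -> /var/www/static/images/a.jpg
    location /images/ {
        root /var/www/static;
    }

    # Downloadable files with directory browsing enabled
    location /files/ {
        root /var/www/static;
        autoindex on;               # list directory contents
        autoindex_exact_size off;   # human-readable sizes
        autoindex_localtime on;     # local modification times
    }
}
```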
Nginx Load Balancing: Simple Configuration for Multi-Server Traffic Distribution
This article introduces Nginx load balancing configuration to solve the problem of excessive load on a single server. At least two backend servers running the same service are required, with Nginx installed and the backend ports open. The core configuration has two steps: first, define the backend server group with `upstream` (supporting round-robin, weights, and passive health checks, e.g., `server 192.168.1.100:8080 weight=2;` or `max_fails=2 fail_timeout=10s`); second, configure `proxy_pass` to point at this group in the `server` block, passing the client's `Host` and real IP (`proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr;`). Verification involves running `nginx -t` to check syntax, `nginx -s reload` to reload, and testing access to confirm request distribution. Common issues such as unresponsive backends or configuration errors can be resolved by checking firewalls and logs. Advanced strategies include IP hashing (`ip_hash`) and URL hashing (which requires an additional module).
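A sketch of the two-step configuration using the example address from the summary; the pool name and the second server's address are assumptions:

```nginx
# Step 1: define the backend group (http context)
upstream backend_pool {
    server 192.168.1.100:8080 weight=2;                      # receives ~2x the traffic
    server 192.168.1.101:8080 max_fails=2 fail_timeout=10s;  # passive health check
}

# Step 2: proxy requests to the group
server {
    listen 80;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;              # preserve the original Host
        proxy_set_header X-Real-IP $remote_addr;  # pass the client's real IP
    }
}
```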
Introduction to Nginx Reverse Proxy: Easily Achieve Frontend-Backend Separation
In a web front-end/back-end separation architecture, an Nginx reverse proxy solves problems such as cross-origin requests, complex domain-name management, and back-end exposure. A reverse proxy acts as an intermediary server: users visit Nginx, which forwards requests to the real back-end service, transparently to the user. With front-end and back-end separated, the reverse proxy can unify domain names (users only need to remember one), hide the back-end address (improving security), and distribute requests by path (e.g., `/` for the front-end and `/api` for the back-end). Nginx is simple to install (`apt install nginx` on Ubuntu, `yum install nginx` on CentOS). The core of the configuration is the `location` block: front-end static files use `root` and `index` to point at the front-end directory, while the back-end API uses `proxy_pass` to forward to the real address, with `proxy_set_header` passing header information. In practice, place the front-end files in the Nginx directory; after the back-end service starts, use `location` to distinguish paths. Nginx intercepts and forwards requests, letting users complete front-end/back-end interaction through a single domain name. A reverse proxy also supports extensions such as load balancing and caching, making it a key tool in front-end/back-end separation architectures.
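A sketch of the path-based split; the front-end directory and back-end address are assumptions:

```nginx
server {
    listen 80;
    server_name example.com;

    # Front-end: built static files
    location / {
        root /var/www/frontend;
        index index.html;
    }

    # Back-end: forward /api to the real service
    location /api {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```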
Detailed Explanation of Nginx Configuration Files: Server Block and Location for Beginners
The core of Nginx configuration lies in server blocks (virtual hosts) and location blocks (path distribution). The main configuration file (nginx.conf) includes the global context (with directives like worker_processes), the events context (with worker_connections), and the http context (which contains multiple server blocks). A server block defines a website using directives such as listen (port), server_name (domain name), root (root directory), and index (homepage). Location blocks match requests by path, supporting exact, prefix, and regular-expression types, with priority: exact match (`=`) > prefix with `^~` > regular expressions (checked in order, first match wins) > longest ordinary prefix. After configuration, use `nginx -t` to verify syntax and `nginx -s reload` to apply changes. After mastering the basics (port, domain name, static path), beginners can progressively learn advanced features like dynamic request forwarding and caching.
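A sketch illustrating the match types and their priority (the paths are made up):

```nginx
server {
    listen 80;
    server_name example.com;

    location = /status {      # exact match: checked first
        return 200 "exact\n";
    }

    location ^~ /static/ {    # prefix that suppresses regex checking
        root /var/www/html;
    }

    location ~ \.php$ {       # regex: beats ordinary prefixes when it matches
        return 403;
    }

    location / {              # ordinary prefix: the fallback
        root /var/www/html;
        index index.html;
    }
}
```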
Learn Nginx from Scratch: A Step-by-Step Guide to Installation and Startup
This article introduces the basics of learning Nginx, emphasizing its light weight, efficiency, and flexible configuration, which make it well suited to web server setup. Nginx supports both Windows and Linux. Installation is explained using Ubuntu/Debian and CentOS/RHEL as examples: on Ubuntu, run `apt update` followed by `apt install nginx`; on CentOS, first install the EPEL repository and then run `yum install nginx`. After starting with `systemctl start nginx`, visit `localhost` to verify that the default welcome page is displayed. The core configuration files live in `/etc/nginx/`, where the `default` configuration file defines listening on port 80, the root directory `/var/www/html`, and so on. Common commands cover starting/stopping, reloading, and syntax checking. The article also covers common troubleshooting (port conflicts, configuration errors) and how to customize the homepage. On Windows, download the archive, extract it, and start Nginx from the command line. Finally, it encourages hands-on practice to master advanced features.
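The Ubuntu/Debian install-and-verify sequence condensed into commands (CentOS swaps in `yum` and the EPEL repository):

```bash
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
curl http://localhost           # should print the default welcome page
sudo nginx -t                   # syntax-check the configuration
sudo systemctl reload nginx     # apply configuration changes
```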
Node.js File System: Quick Reference Guide for Common fs Module APIs
This article introduces the core APIs of the `fs` module in Node.js, helping beginners quickly get started with file operations. The `fs` module provides both synchronous and asynchronous APIs: synchronous methods (e.g., `readFileSync`) block execution and are suitable for simple scripts, while asynchronous methods (e.g., `readFile`) are non-blocking and handle results via callbacks, making them ideal for high-concurrency scenarios. Common APIs include: reading files with `readFile` (asynchronous) or `readFileSync` (synchronous); writing with `writeFile` (overwrite mode); creating directories with `mkdir` (supports recursive creation); deleting files/directories with `unlink`/`rmdir` (non-empty directories require `fs.rm` with `recursive: true`); reading directories with `readdir`; getting file information with `stat`; and checking existence with `existsSync`. Advanced tips: use the `path` module for path handling; always check for errors in asynchronous operations; optimize memory usage for large files with streams; and be mindful of file permissions. Mastering the basic APIs will cover most common scenarios, with further learning needed for complex operations like stream processing.
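A short sketch exercising several of these APIs; file and directory names are placeholders:

```js
const fs = require('fs');

// Asynchronous read: non-blocking, result delivered to the callback
fs.readFile('notes.txt', 'utf8', (err, data) => {
  if (err) return console.error('read failed:', err.message);
  console.log(data);
});

// Synchronous variant blocks until done (fine for simple scripts):
// const text = fs.readFileSync('notes.txt', 'utf8');

// Recursive directory creation, then an overwrite-mode write
fs.mkdir('logs/app', { recursive: true }, (err) => {
  if (err) return console.error(err);
  fs.writeFile('logs/app/today.log', 'started\n', (err) => {
    if (err) console.error(err);
  });
});

// Removing a non-empty directory (Node 14.14+):
// fs.rm('logs', { recursive: true, force: true }, () => {});
```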
Non-blocking I/O in Node.js: Underlying Principles for High-Concurrency Scenarios
This article explains Node.js non-blocking I/O and its advantages. Traditional synchronous blocking I/O forces the program to wait for I/O completion, leaving the CPU idle and resulting in very low efficiency under high concurrency. Non-blocking I/O, by contrast, initiates a request without waiting, immediately executes other tasks, and signals completion through callback functions that are uniformly scheduled by the event loop. Node.js implements non-blocking I/O through the event loop and the libuv library: asynchronous I/O requests are handed over to the kernel (e.g., Linux epoll) by libuv; the kernel monitors I/O completion, and upon completion the corresponding callback is added to the task queue, so the main thread is never blocked and can continue processing other tasks. The high-concurrency capability arises because the single-threaded JS engine never blocks while a large number of I/O requests wait concurrently: total elapsed time approaches that of the slowest single request rather than the sum of all requests. libuv abstracts over platform-specific I/O models and maintains the event loop (handling microtasks, macrotasks, and I/O callbacks) to schedule callbacks uniformly. Non-blocking I/O enables Node.js to excel in web servers, real-time communication, and I/O-intensive data processing; it is the core of Node.js's high-concurrency handling, efficiently supporting tasks like front-end tooling and API services.
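A minimal demonstration of the non-blocking flow; the numbered logs show the main thread continuing while the read is in flight:

```js
const fs = require('fs');

console.log('1: request the file');

fs.readFile(__filename, 'utf8', (err, data) => {
  // Run by the event loop once libuv/the kernel reports completion
  if (err) throw err;
  console.log('3: file arrived,', data.length, 'chars');
});

console.log('2: keep working while I/O is pending');
// Prints 1, 2, 3: the read never blocks the main thread
```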
Node.js REPL Environment: An Efficient Tool for Interactive Programming
The Node.js REPL (Read-Eval-Print Loop) is an interactive programming environment that provides immediate feedback through an input-execute-output loop, making it suitable for learning and debugging. To start, install Node.js and enter `node` in the terminal; you'll see the `>` prompt. Basic operations include simple calculations (e.g., `1+1`), variable definition (`var message = "Hello"`), and testing functions/APIs (e.g., `add(2,3)` or the array `map` method). Common commands are `.help` (list commands), `.exit` (quit), `.clear` (reset the session context), and `.save`/`.load` (file operations), with support for arrow-key history navigation and Tab auto-completion. The REPL enables quick debugging, API testing (e.g., the `fs` module), and temporary script execution. Note that variables live only for the session, making the REPL ideal for rapid validation rather than large-scale project development. It is an efficient tool for Node.js learning, accelerating code verification and debugging.
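A sample session showing the loop in action (output shown is what a typical Node version prints):

```text
$ node
> 1 + 1
2
> const message = "Hello"
undefined
> message.toUpperCase()
'HELLO'
> [1, 2, 3].map(n => n * 2)
[ 2, 4, 6 ]
> .exit
```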
Building RESTful APIs with Node.js: Routing and Response Implementation
This article introduces the core process of building a RESTful API with Node.js and Express. Node.js is well suited to high-concurrency services thanks to its non-blocking I/O and single-threaded model, and paired with the lightweight, efficient Express framework it is ideal for beginners. For preparation, install Node.js (the LTS version is recommended), initialize the project, and install Express via `npm install express`. The core is creating a service with Express: import the framework, instantiate it, and define routes. Use methods like `app.get()` to handle the HTTP verbs (GET/POST/PUT/DELETE), with the `express.json()` middleware to parse JSON request bodies. Each verb maps to an operation: GET retrieves resources, POST creates, PUT updates, and DELETE removes. Data is passed through route parameters and request bodies, and responses return status codes such as 200, 201, and 404. Advanced content includes route modularization (splitting route files) and 404 handling. Finally, test the API with Postman or curl. After mastering this, you can connect a database to extend functionality and complete a basic API.
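A sketch of the routing pattern described; the in-memory `users` array stands in for a database:

```js
const express = require('express');
const app = express();

app.use(express.json());                 // parse JSON request bodies

let users = [{ id: 1, name: 'Alice' }];

app.get('/users', (req, res) => res.status(200).json(users));

app.post('/users', (req, res) => {
  const user = { id: users.length + 1, name: req.body.name };
  users.push(user);
  res.status(201).json(user);            // 201: created
});

app.put('/users/:id', (req, res) => {
  const user = users.find(u => u.id === Number(req.params.id));
  if (!user) return res.status(404).json({ error: 'not found' });
  user.name = req.body.name;
  res.json(user);
});

app.delete('/users/:id', (req, res) => {
  users = users.filter(u => u.id !== Number(req.params.id));
  res.status(204).end();
});

app.listen(3000, () => console.log('API on :3000'));
```

Test with, e.g., `curl -X POST -H "Content-Type: application/json" -d '{"name":"Bob"}' localhost:3000/users`.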
Frontend Developers Learning Node.js: The Mindset Shift from Browser to Server
This article introduces why front-end developers should learn Node.js and the core points of doing so. Based on Google Chrome's V8 engine, Node.js enables JavaScript to run on the server side, removing the barrier that keeps front-end developers from building back-end services and enabling full-stack development. Its core features include non-blocking I/O (handling concurrent requests through the event loop), a full-access runtime (able to operate on files and ports), and the CommonJS module system. For front-end developers moving toward the back end, several mindset shifts are required: from a sandboxed (API-limited) runtime environment to a full-access one; from treating asynchronous programming as an auxiliary tool (e.g., setTimeout) to a core design principle (to avoid blocking the server); and from ES Modules to CommonJS (require/module.exports) for the module system. The learning path: master the foundational modules (fs, http), understand asynchronous programming (callbacks/Promise/async), develop APIs with frameworks like Express, and explore the underlying principles of tools such as Webpack and Babel. In summary, Node.js lets front-end developers build full-stack capabilities without switching languages, understand server-side logic, and expand their career horizons; it is a key bridge between front-end and back-end development.
Node.js Buffer: An Introduction to Handling Binary Data
In Node.js, when dealing with binary data such as images and network transmission data, the Buffer is a core tool for efficiently storing and manipulating byte streams. It is a fixed-length array of bytes, where each element is an integer between 0 and 255. Buffer cannot be dynamically expanded and serves as the foundation for I/O operations. There are three ways to create a Buffer: `Buffer.alloc(size)` (specifies the length and initializes it to 0), `Buffer.from(array)` (converts an array to a Buffer), and `Buffer.from(string, encoding)` (converts a string to a Buffer, requiring an encoding like utf8 to be specified). A Buffer can read and write bytes via indices, obtain its length using the `length` property, convert to a string with `buf.toString(encoding)`, and concatenate Buffers using `Buffer.concat([buf1, buf2])`. Common methods include `write()` (to write a string) and `slice()` (to extract a portion). Applications include file processing, network communication, and database BLOB operations. It is important to note encoding consistency (e.g., matching utf8 and base64 conversions), avoid overflow (values exceeding 255 will be truncated), and manage off-heap memory reasonably to prevent leaks. Mastering Buffer is crucial for understanding Node.js binary data processing.
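The creation methods and common operations in one sketch:

```js
const buf1 = Buffer.alloc(4);              // 4 zero-filled bytes
const buf2 = Buffer.from([72, 105]);       // from a byte array ("Hi")
const buf3 = Buffer.from('你好', 'utf8');   // from a string, encoding required

buf1[0] = 72;                              // write a byte by index
console.log(buf1.length);                  // 4 (fixed length)
console.log(buf2.toString('utf8'));        // "Hi"
console.log(buf3.toString('base64'));      // re-encode the same bytes

const joined = Buffer.concat([buf2, buf3]);
console.log(joined.toString('utf8'));      // "Hi你好"

buf1[1] = 300;                             // overflow: stored as 300 & 255
console.log(buf1[1]);                      // 44
```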
A Guide to Using `exports` and `require` in Node.js Module System
The Node.js module system enables code reuse, organization, and avoids global pollution by splitting files. Each .js file is an independent module; content inside is private by default and must be exposed via exports. Exports can be done through `exports` (mounting properties) or `module.exports` (directly assigning an object), with the latter being the recommended approach (as `exports` is a reference to it). Imports use `require`, with local modules requiring relative paths and third-party modules directly using package names. Mastering export and import is fundamental to Node.js development and enhances code organization capabilities.
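A two-file sketch of the export/import pattern (file names are illustrative):

```js
// math.js: expose an API by assigning module.exports
function add(a, b) { return a + b; }
function sub(a, b) { return a - b; }

module.exports = { add, sub };   // recommended style

// The property-mounting style works because `exports`
// starts out as a reference to module.exports:
// exports.add = add;

// app.js: relative path for local modules, bare name for packages
// const { add } = require('./math');
// const express = require('express');
// console.log(add(2, 3));      // 5
```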
What Can Node.js Do? 5 Must-Do Practical Projects for Beginners
Node.js is a runtime based on Chrome's V8 engine that enables JavaScript to run on the server side. Its core advantages are non-blocking I/O and an event-driven architecture, making it suitable for handling high-concurrency asynchronous tasks. Its application scenarios are broad: web applications (e.g., with the Express/Koa frameworks), API interfaces, real-time applications (e.g., messaging with Socket.io), command-line tools, and data analysis/crawlers. For beginners, the article recommends 5 practical projects: a personal blog (Express + EJS + file reading/writing), a command-line to-do list (commander + JSON storage), a RESTful API (Express + JSON data), a real-time chat application (Socket.io), and a weather query tool (axios + third-party APIs). These projects cover core knowledge points such as route design, asynchronous operations, and real-time communication. In summary, getting started with Node.js requires hands-on practice: begin with simple projects, practice consistently while consulting documentation and examples, and these projects will gradually build the key practical skills.
Node.js Event Loop: Why Is It So Fast?
This article uses the analogy of a coffee shop waiter to explain the core mechanism of Node.js for efficiently handling concurrent requests—the event loop. Despite being single-threaded, Node.js can process a large number of concurrent requests efficiently, with the key lying in the collaboration between non-blocking I/O and the event loop: when executing asynchronous operations (such as file reading and network requests), Node.js delegates the task to the underlying libuv library and immediately responds to other requests. Once the operation is completed, the callback function is placed into the task queue. The event loop is the core scheduler, processing tasks in fixed phases: starting with timer callbacks (Timers), system callbacks (Pending Callbacks), followed by the crucial Poll phase to wait for I/O events, and then handling immediate callbacks (Check) and close callbacks (Close Callbacks). It ensures the ordered execution of asynchronous tasks through the call stack, task queues, and phase-based processing. The efficient design stems from three points: non-blocking I/O avoids CPU waiting, callback scheduling is executed in an ordered manner across phases, and the combination of single-threaded execution with asynchronous concurrency achieves high throughput. Understanding the scheduling logic of the event loop helps developers write more efficient Node.js code.
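The phase ordering can be observed directly; inside an I/O callback (the Poll phase), the Check phase always runs before the next Timers pass:

```js
const fs = require('fs');

setTimeout(() => console.log('timeout'), 0);    // Timers phase
setImmediate(() => console.log('immediate'));   // Check phase

fs.readFile(__filename, () => {
  // From within Poll, Check comes before the next Timers pass,
  // so "immediate (I/O)" reliably prints before "timeout (I/O)"
  setTimeout(() => console.log('timeout (I/O)'), 0);
  setImmediate(() => console.log('immediate (I/O)'));
});
```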
Writing Your First Web Server with Node.js: A Quick Start with the Express Framework
This article introduces the method of building a web server using Node.js and Express. Based on the V8 engine, Node.js enables JavaScript to run on the server side, while Express, as a popular framework, simplifies complex tasks such as routing and request handling. For environment preparation, first install Node.js (including npm), and verify it using `node -v` and `npm -v`. Next, create a project folder, initialize it with `npm init -y`, and install the framework with `npm install express`. The core step is writing `server.js`: import Express, create an instance, define a port (e.g., 3000), use `app.get('/')` to define a GET request for the root path and return text, then start the server with `app.listen`. Access `http://localhost:3000` to test it. Extended features include adding more routes (e.g., `/about`), dynamic path parameters, returning JSON (`res.json()`), and hosting static files (`express.static`). The key steps are summarized as: installing tools, creating a project, writing routes, and starting the test, laying the foundation for subsequent learning of middleware, dynamic routing, etc.
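A `server.js` along the lines described (the `public` directory for static files is an assumption):

```js
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => res.send('Hello from Express!'));
app.get('/about', (req, res) => res.json({ page: 'about' }));
app.get('/users/:id', (req, res) => res.send(`User ${req.params.id}`));

app.use(express.static('public'));   // host static files

app.listen(port, () => console.log(`http://localhost:${port}`));
```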
Introduction to Node.js Asynchronous Programming: Callback Functions and Promise Basics
Node.js, because JavaScript is single-threaded, requires asynchronous programming to handle high-concurrency I/O operations (such as file reading and network requests); otherwise synchronous operations block the main thread and performance suffers. The core of asynchronous programming is ensuring that time-consuming operations do not block the main thread, with results delivered via callbacks or Promises upon completion. Callback functions were the foundation of early asynchronous programming: for example, the callback of `fs.readFile` receives `err` and `data`. This style is simple and intuitive but prone to "callback hell" (deep nesting, poor readability), and error handling requires repetitive `if (err)` checks. Promises address callback hell: created with `new Promise`, they have three states, pending (in progress), fulfilled (success), and rejected (failure), and they enable linear, readable asynchronous code through `.then()` chaining and centralized error handling with `.catch()`, laying the groundwork for `async/await`. Core value: callbacks are foundational, Promises improve readability, and asynchronous thinking is key to building efficient Node.js programs.
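The same read expressed both ways; `readFilePromise` is a hand-rolled wrapper for illustration (Node also ships promise-based `fs` APIs):

```js
const fs = require('fs');

// Callback style: err-first convention, nesting grows per step
fs.readFile('a.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log(data);
});

// Promise wrapper: pending -> fulfilled (resolve) or rejected (reject)
function readFilePromise(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) reject(err);
      else resolve(data);
    });
  });
}

// Linear chaining with centralized error handling
readFilePromise('a.txt')
  .then(data => console.log(data))
  .catch(err => console.error('read failed:', err.message));
```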
Detailed Explanation of Node.js Core Module fs: Easily Implement File Reading and Writing
The `fs` module in Node.js is the core tool for interacting with the file system, offering both synchronous and asynchronous APIs. Synchronous methods block code execution, while asynchronous methods are non-blocking and suit high concurrency; beginners are advised to learn the asynchronous operations first. Basic operations cover file reading and writing: read asynchronously with `readFile` (using a callback to handle errors and data) or synchronously with `readFileSync` (wrapped in `try/catch`). Writing can overwrite (`writeFile`) or append (`appendFile`). Directory operations include `mkdir` (supports recursive creation), `readdir` (lists directory contents), and `rmdir` (removes only empty directories). Path handling should use the `path` module, combined with `__dirname` (the directory containing the script) to construct absolute paths and avoid reliance on the process's working directory. For large files, streams (Stream) should be used to read/write data in chunks and avoid excessive memory usage. Common issues: path errors are solved with absolute paths, and large files should be processed with `pipe` on streams. Practical advice: start with simple read/write and directory operations, combine them with the `path` module, and understand the advantages of the asynchronous non-blocking model.
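A sketch combining `__dirname`-based paths with stream piping for large files (file names are placeholders):

```js
const fs = require('fs');
const path = require('path');

// Absolute paths: robust no matter where `node` is launched from
const src = path.join(__dirname, 'big-input.log');
const dst = path.join(__dirname, 'copy.log');

// pipe() moves data in chunks, keeping memory usage flat
fs.createReadStream(src)
  .pipe(fs.createWriteStream(dst))
  .on('finish', () => console.log('copy complete'));
```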
Node.js npm Tools: A Comprehensive Guide from Installation to Package Management
This article introduces the core knowledge of Node.js and npm. Node.js is a JavaScript runtime environment based on Chrome's V8 engine, and npm is its default package manager for downloading, installing, and managing third-party packages.

**Installation**: Node.js can be installed on Windows, Mac, and Linux via the official website or a package manager (npm is installed alongside Node.js). After installation, verify with `node -v` and `npm -v`.

**Core npm functions**:
- Initialize a project with `npm init` to generate `package.json` (the project configuration file).
- Install dependencies: locally (the default, project-only) or globally (`-g`, system-wide); categorized as production (`--save`) or development (`--save-dev`) dependencies.
- Manage dependencies: view, update, uninstall (`npm uninstall`), and so on.

**Common commands**: `npm install` (install), `npm list` (view), `npm update` (update), etc. For slow access from mainland China, switch to the Taobao mirror registry (`npm config set registry`) or use cnpm.

**Notes**: Avoid committing `node_modules` to Git, use version ranges (`^` or `~`) deliberately, and prefer local dependency installation. npm is a core tool for Node.js development; mastering it improves efficiency.
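The workflow condensed into commands; package names are examples, and the mirror URL assumes the current npmmirror registry:

```bash
npm init -y                      # generate package.json
npm install express              # production dependency
npm install --save-dev eslint    # development-only dependency
npm install -g nodemon           # global install
npm list --depth=0               # top-level dependencies
npm update                       # update within semver ranges
npm uninstall express            # remove a dependency
npm config set registry https://registry.npmmirror.com   # mirror registry
```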
A Step-by-Step Guide to Installing Node.js and Configuring the Development Environment
Node.js is a JavaScript runtime environment based on Chrome's V8 engine, supporting backend development and extending JavaScript to server, desktop, and other domains, making it suitable for full-stack beginners. Installation varies by system: for Windows, download the LTS version installer and check "Add to PATH"; for Mac, use Homebrew; for Linux (Ubuntu), run `apt update` followed by `apt install nodejs npm`. VS Code is recommended for environment configuration—install the Node.js extension, create an `index.js` file, input `console.log('Hello, Node.js!')`, and execute `node index.js` in the terminal to run. npm is a package manager; initialize a project with `npm init -y`, install dependencies like `lodash` via `npm install lodash`, and use `require` in code. After setup, you can develop servers, APIs, etc., with regular practice recommended.
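An `index.js` for verifying the setup, extended with the `lodash` dependency mentioned above:

```js
const _ = require('lodash');   // installed via `npm install lodash`

console.log('Hello, Node.js!');
console.log(_.chunk([1, 2, 3, 4, 5], 2));   // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```

Run it with `node index.js` from the project directory.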
Getting Started with Node.js: The First Step in JavaScript Backend Development
Node.js is a JavaScript runtime environment built on the V8 engine, enabling JavaScript to run on the server side without a browser and facilitating full-stack development. Its core advantages: no language switch is needed for full-stack development, non-blocking I/O handles concurrent requests efficiently, its light weight allows rapid project development, and npm provides a rich ecosystem of packages. Installation is simple: after downloading the LTS version from the official website, verify success by running `node -v` and `npm -v`. For the first program, create a `server.js` file, use the `http` module to write an HTTP server, and listen on a port to return "Hello World". Core capabilities include file operations with the `fs` module and npm package management (such as installing `figlet` to render ASCII-art text). Node.js is easy to get started with; begin with hands-on practice, then explore the Express framework or full-stack projects.
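The first program as outlined, using only the built-in `http` module (port 3000 is an assumption):

```js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World');
});

server.listen(3000, () => {
  console.log('Server running at http://localhost:3000/');
});
```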
pandas Sorting Operations: An Introduction and Practical Guide to the sort_values Function
This article introduces the `sort_values` function in pandas, applicable to sorting DataFrame/Series data. Core parameters: `by` specifies the column(s) to sort by (required), `ascending` controls ascending/descending order (default `True`, ascending), and `inplace` determines whether to modify the original data (default `False`, returning a new object). Basic usage: single-column sorting, e.g., ascending by "Chinese" (the default) or descending by "Math"; multi-column sorting passes a list of column names with corresponding ascending/descending directions (e.g., first by "Chinese" ascending, then by "Math" descending). Setting `inplace=True` modifies the original data directly; it is recommended to preserve the original data (the default `False`). Practical example: after adding a "Total Score" column, sort by total score descending to clearly display the ranking of overall results. Notes: for multi-column sorting, the `by` and `ascending` lists must have the same length; prioritize data safety to avoid accidentally overwriting the original data. Sorting is a foundational step in data processing and becomes more important when combined with subsequent analyses (e.g., TopN).
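A sketch of the usage described; the score data is made up:

```python
import pandas as pd

df = pd.DataFrame({
    "Chinese": [88, 92, 75],
    "Math":    [95, 80, 90],
})

asc = df.sort_values(by="Chinese")                 # single column, ascending (default)
desc = df.sort_values(by="Math", ascending=False)  # descending

# Multi-column: by/ascending lists must have equal length
mixed = df.sort_values(by=["Chinese", "Math"], ascending=[True, False])

# TopN-style ranking by a derived total; inplace=False (default) keeps df intact
df["Total"] = df["Chinese"] + df["Math"]
ranking = df.sort_values(by="Total", ascending=False)
print(ranking)
```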
Pandas Super Useful Tips: Getting Started with Data Cleaning, Easy for Beginners to Master
Data cleaning is crucial for data analysis, and pandas is an efficient tool for the task. This article teaches beginners core data cleaning with pandas: first install pandas and import data (via `pd.read_csv()` or by creating a sample DataFrame), then use `head()` and `info()` for an initial inspection. For missing values: identify with `isnull()`, remove with `dropna()`, or fill with `fillna()` (e.g., mean/median). Duplicates are detected via `duplicated()` and removed with `drop_duplicates()`. Outliers can be identified through `describe()` statistics or logical filtering (e.g., keeping only rows with income ≤ 20000). Data type conversion uses `astype()` or `to_datetime()`. The beginner workflow is: Import → Inspect → Handle missing values → Duplicates → Outliers → Type conversion. The article emphasizes hands-on practice to apply these tools flexibly to real-world data problems.
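The workflow on a tiny made-up dataset:

```python
import pandas as pd

df = pd.DataFrame({
    "name":   ["Ann", "Bob", "Bob", "Cat"],
    "age":    [25, None, None, 31],
    "income": [5000, 8000, 8000, 999999],
})

df.info()                                          # initial inspection
print(df.isnull().sum())                           # missing values per column

df["age"] = df["age"].fillna(df["age"].median())   # fill missing with the median
df = df.drop_duplicates()                          # drop exact duplicate rows
df = df[df["income"] <= 20000]                     # filter implausible incomes
df["age"] = df["age"].astype(int)                  # type conversion
print(df)
```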
Pandas Data Merging: Basic Operations of merge and concat, Suitable for Beginners
This article introduces two data-merging tools in pandas, `merge` and `concat`, suitable for beginners to master quickly.

**concat**: no join keys, direct concatenation, either row-wise (axis=0) or column-wise (axis=1). Row concatenation (axis=0) suits tables with the same structure (e.g., multi-month data); use `ignore_index=True` to reset the index and avoid duplicates. Column concatenation (axis=1) requires the row counts to match and is used for merging by row position (e.g., student information + grade table).

**merge**: merging on common keys (e.g., name, ID), similar to a SQL JOIN, supporting four methods: `inner` (the default, keeps only matching keys), `left` (keeps all rows of the left table), `right` (keeps all rows of the right table), and `outer` (keeps all keys). When key names differ, specify them with `left_on`/`right_on`.

**Key difference**: concat concatenates without keys, while merge matches by keys. Beginners should note: for column-wise concat the row counts must be consistent; merge controls the merge scope with the `how` parameter; and watch out for index duplication and key-name mismatches.
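Both tools on toy tables; all data and column names are made up:

```python
import pandas as pd

jan = pd.DataFrame({"name": ["Ann", "Bob"], "sales": [10, 20]})
feb = pd.DataFrame({"name": ["Ann", "Cat"], "sales": [15, 5]})

# concat: same-structure tables stacked row-wise, index reset to avoid duplicates
both = pd.concat([jan, feb], axis=0, ignore_index=True)

info = pd.DataFrame({"name": ["Ann", "Bob"], "class": ["A", "B"]})
scores = pd.DataFrame({"student": ["Ann", "Bob"], "math": [90, 85]})

# merge: SQL-JOIN style; left_on/right_on bridge differing key names
merged = pd.merge(info, scores, left_on="name", right_on="student", how="inner")

# how="outer" keeps all keys from both sides
outer = pd.merge(jan, feb, on="name", how="outer")
print(both, merged, outer, sep="\n\n")
```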
Beginner's Guide to pandas Index: Mastering Data Sorting and Renaming Effortlessly
An index is a key element in pandas for identifying data positions and content, similar to row numbers/column headers in Excel. It serves as the "ID card" of the data, with core functions including quick data location and support for sorting and merging operations.

**Data sorting**:
- **Series**: sort by index with `sort_index()` (ascending by default; set `ascending=False` for descending), and by values with `sort_values()` (same default and parameter).
- **DataFrame**: sort by column values with `sort_values(by=column_name)`, and by row index with `sort_index()`.

**Renaming indexes**:
- Modify row/column labels with `rename()`, e.g., `df.rename(index={old_name: new_name})` or `df.rename(columns={old_name: new_name})`.
- Direct assignment: `df.index = [new_index]` or `df.columns = [new_column_names]`, with length consistency required.

**Notes**:
- Distinguish between the row index (`df.index`) and the column index (`df.columns`).
- When modifying indexes, prefer `rename()` for partial changes and reserve direct assignment for replacing all labels at once.
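The sorting and renaming operations in one sketch:

```python
import pandas as pd

s = pd.Series([3, 1, 2], index=["c", "a", "b"])
print(s.sort_index())                    # by index: a, b, c
print(s.sort_values(ascending=False))    # by values: 3, 2, 1

df = pd.DataFrame({"score": [90, 85]}, index=["Ann", "Bob"])
print(df.sort_values(by="score"))        # rows ordered by a column
print(df.rename(index={"Ann": "Anna"}, columns={"score": "math"}))

# Direct assignment replaces every label; lengths must match
df.index = ["stu1", "stu2"]
df.columns = ["math"]
print(df)
```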
Pandas Data Statistics: 5 Common Functions to Quickly Master Basic Analysis
Pandas is a powerful tool for processing tabular data in Python. This article introduces 5 basic statistical functions to help beginners quickly master data analysis skills.

- **sum()**: Calculates the total sum, automatically ignoring missing values (NaN). Using `axis=1` allows summation by rows, which is useful for total statistics (e.g., total scores).
- **mean()**: Computes the average, reflecting central tendency, but is sensitive to extreme values. Suitable for scenarios without extreme values.
- **median()**: Calculates the median, which is robust to extreme values and better reflects the "true level of most data."
- **max()/min()**: Returns the maximum/minimum values, respectively, for statistical extremes (e.g., highest/lowest scores).
- **describe()**: Provides a one-stop statistical summary, outputting count, mean, standard deviation, quantiles, etc., to comprehensively understand data distribution and variability.

These functions address basic questions like "total amount, average, middle level, and extreme values," serving as the "basic skills" of data analysis. Subsequent learning can advance to skills like `groupby` for more advanced statistics.
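All five on a small score table (the data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "Chinese": [80, 90, None],
    "Math":    [95, 70, 88],
})

print(df.sum())                            # column totals, NaN ignored
print(df.sum(axis=1))                      # row totals (e.g., per-student score)
print(df["Math"].mean())                   # average: sensitive to extremes
print(df["Math"].median())                 # middle value: robust to extremes
print(df["Math"].max(), df["Math"].min())  # extremes
print(df.describe())                       # count, mean, std, quantiles at once
```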