Introduction to the curl Command
This lab provides a comprehensive introduction to the curl command, a vital tool for system administration tasks involving data transfer across protocols like HTTP, FTP, and more. You will discover the power and flexibility of curl, starting with basic usage and progressing to more complex operations. We will begin by validating the curl installation and its version, then proceed to retrieve web page content and download files directly from the command line. This hands-on lab emphasizes practical examples, illustrating why curl is an indispensable asset for networking and communication within system administration.
This lab explores the following key areas:
- An Overview of the curl Command
- Retrieving Web Page Content Using curl
- Downloading Files with curl
Understanding the curl Command
This section dives into the curl command, a robust tool for seamless data transfer through various protocols, including HTTP and FTP. Curl is a command-line utility that enables interaction with web servers, facilitating file downloads and various network operations critical for any system administrator.
First, let's verify the installed curl version within our Ubuntu 22.04 Docker container. This step is fundamental for ensuring compatibility and access to the latest features:
curl --version
Example output:
curl 7.81.0 (x86_64-pc-linux-gnu) libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 libidn2/2.3.2 libpsl/0.21.0 (+libidn2-2.3.2) libstdc++/9.4.0 libssh/0.9.6/openssl/zlib nghttp2/1.47.0 librtmp/2.3
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets
This output displays essential information such as the curl version number, supported protocols (like HTTP, FTP, and more), and a list of enabled features. Understanding these details is crucial for effective troubleshooting and leveraging curl's capabilities.
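If a script depends on a particular protocol or feature, you can test for it directly instead of reading the list by eye. A minimal sketch (the feature name to match is just an illustration) that checks whether HTTP/2 support is compiled in:

curl --version | grep -q HTTP2 && echo "HTTP/2 supported" || echo "HTTP/2 not supported"

The grep -q option suppresses output and only sets the exit status, which makes this pattern convenient in shell conditionals.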
Next, let's retrieve the content of a webpage using curl. We will use the official curl project homepage as our example:
curl https://curl.se
Example output:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>curl - transfer data with URL</title>
...
The output reveals the HTML code of the curl project's main page, directly fetched and presented in your terminal, highlighting curl's ability to grab web content swiftly.
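Because curl writes the fetched page to standard output, it combines naturally with other command-line tools through pipes. As a small illustrative sketch, the following extracts only the page title from the HTML (redirecting stderr hides the progress meter; the -s option covered in the next section achieves the same more cleanly):

curl https://curl.se 2>/dev/null | grep -i "<title>"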
In the subsequent steps, we will expand on these basics, covering how to download files and work with a variety of protocols, further solidifying curl as an essential tool for system administration tasks.
How to Retrieve Web Page Content with curl
This section details the process of using curl to efficiently retrieve web page content. Mastering this ability is fundamental for various tasks, including monitoring website status and automated data extraction.
First, we revisit the curl project's homepage, this time using the -o option. This option saves the retrieved HTML content into a local file:
curl -o curl_homepage.html https://curl.se
Example output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31748 100 31748 0 0 93644 0 --:--:-- --:--:-- --:--:-- 93644
The -o flag directs curl to save the output to the file named curl_homepage.html. The terminal output provides a progress report, showing download statistics such as the total file size, current download speed, and estimated time to completion.
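After the transfer completes, it is worth confirming that the file actually landed on disk. A quick check, assuming the command above was run in the current directory:

ls -lh curl_homepage.html        # confirm the file exists and check its size
head -n 5 curl_homepage.html     # peek at the first few lines of the saved HTML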
The -s (silent) option can be added to suppress this progress output, displaying only the raw fetched content. This is particularly useful in automated scripts and cron jobs:
curl -s https://curl.se
Example output:
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<title>curl - transfer data with URL</title>
...
</head>
</html>
By using the -s option, we ensure a cleaner output stream, focusing solely on the retrieved HTML without progress updates.
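In a script or cron job you usually want silence on success but still want to be told about failures. A minimal sketch, assuming an example output path and URL, that combines -s with -S (show errors even when silent) and -f (return a non-zero exit code on HTTP errors):

#!/bin/sh
# Fetch the page quietly; -f fails on HTTP error responses,
# -S still prints an error message even though -s hides the progress meter.
curl -sSf -o /tmp/curl_homepage.html https://curl.se || echo "Download failed" >&2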
The -I or --head option is also highly useful. It allows you to retrieve only the HTTP headers of a webpage, which is valuable for server diagnostics and confirming resource availability without downloading the entire page content:
curl -I https://curl.se
Example output:
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Wed, 19 Apr 2023 06:34:26 GMT
Content-Type: text/html
Content-Length: 31748
Last-Modified: Fri, 07 Apr 2023 14:37:54 GMT
Connection: close
ETag: "64306f62-7b0c"
Accept-Ranges: bytes
The resulting output contains HTTP headers from the web server, including status codes, content types, and server information. These headers can assist in diagnosing connection problems and understanding server responses.
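Headers are especially handy for scripted health checks. One common pattern, sketched here with the same example URL, uses curl's -w (write-out) option to print only the HTTP status code:

# -o /dev/null discards the response, -w prints the status code after the transfer
curl -s -I -o /dev/null -w "%{http_code}\n" https://curl.se

A 200 indicates the page is reachable, while values like 404 or 500 point to missing resources or server-side problems.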
Next, we will explore using curl to download files, covering essential techniques for obtaining resources directly from the command line, a common task for a Linux system administrator.
Downloading Files Using the curl Command
This section will guide you through downloading files from the web using curl. Understanding this process is vital for acquiring software packages, configuration files, and other essential resources efficiently.
We begin by downloading a file from the curl project's website, employing the -O option to save the file using its original name from the remote server:
curl -O https://curl.se/download/curl-7.81.0.tar.gz
Example output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3.8M 100 3.8M 0 0 6901k 0 --:--:-- --:--:-- --:--:-- 6901k
The -O option tells curl to save the downloaded file as curl-7.81.0.tar.gz, preserving the original filename. The output shows real-time progress information like total file size, transfer speed, and completion status.
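Before extracting a downloaded archive, it is good practice to confirm it arrived intact. A simple sanity check on the tarball fetched above:

ls -lh curl-7.81.0.tar.gz            # check the size matches the reported download
tar -tzf curl-7.81.0.tar.gz | head   # list the first few entries without extracting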
You can also use the -o option to specify a new filename for the downloaded file:
curl -o curl_source.tar.gz https://curl.se/download/curl-7.81.0.tar.gz
Example output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3.8M 100 3.8M 0 0 6901k 0 --:--:-- --:--:-- --:--:-- 6901k
In this case, the downloaded file will be saved as curl_source.tar.gz upon completion.
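Two related options are worth knowing for real-world downloads: -L follows HTTP redirects (many download links point to a mirror), and -C - resumes a partially completed transfer instead of starting over. A sketch reusing the same example URL:

# Follow redirects and resume an interrupted download of the same archive
curl -L -C - -o curl_source.tar.gz https://curl.se/download/curl-7.81.0.tar.gz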
Curl also supports downloading files from FTP servers. This is an example of downloading a file from an FTP server:
curl -O ftp://ftp.example.com/file.zip
Example output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 12.3M 100 12.3M 0 0 6901k 0 --:--:-- --:--:-- --:--:-- 6901k
This command downloads file.zip from the FTP server ftp.example.com to your current directory. Ensure you have the necessary permissions to access the FTP server and file.
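If the FTP server requires authentication, credentials can be supplied with the -u option. The host, user name, and password below are placeholders, not real accounts:

# Authenticate against the FTP server; replace user, password, and host with real values
curl -u user:password -O ftp://ftp.example.com/file.zip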
Remember to replace the provided URLs and filenames with your desired targets.
In our next step, we summarize what we have learned in this lab and reflect on the practical implications of using curl within a system administration environment.
Lab Summary
Throughout this lab, we have explored the curl command, a powerful and flexible command-line tool for data transfer using various protocols. We began by examining the curl version and features installed on our Ubuntu 22.04 Docker container. We then used curl to fetch the HTML content of the curl project's homepage, showcasing both terminal output and saving the content to a file. We concluded by demonstrating how to use curl to download files from web and FTP servers, a frequent task for Linux administrators and an essential building block for automation, whether running as root or as an unprivileged user.