nginx is a reverse proxy. Think of it as a router: you have several applications running on your server, and nginx cleanly distributes incoming network traffic among them. nginx has some really good tutorials out there. Don't like nginx? You can also use Apache.
Initial setup
nginx is packaged in every major distribution. Simply look up "<your distro> nginx install" and run the command for your package manager.
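For example, on the common package managers (double-check against your distro's documentation):

    # Debian/Ubuntu
    sudo apt install nginx
    # Fedora
    sudo dnf install nginx
    # Arch
    sudo pacman -S nginx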
Let's say you only want to return a single line of text that reads "This is intentionally left blank." Then, in your /etc/nginx/nginx.conf configuration, you can add (inside a server block):

    location / {
        return 200 'This is intentionally left blank.';
    }
If that's all you want, don't use nginx; it'd be easier to write a one-line Bash script to serve a single page.
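Roughly what that one-liner could look like, assuming a traditional netcat (the exact flags differ between netcat variants, so treat this as a sketch):

    while true; do printf 'HTTP/1.1 200 OK\r\nConnection: close\r\n\r\nThis is intentionally left blank.' | nc -l -p 8080 -q 1; done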
In /etc/nginx/nginx.conf, the files /etc/nginx/conf.d/*.conf and /etc/nginx/sites-enabled/* are included. Therefore, if you have a complicated setup, you can split your configuration among multiple files.
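For reference, on Debian-style packages those includes sit inside the http block of /etc/nginx/nginx.conf and look roughly like this (the exact defaults vary by distribution):

    http {
        # ... global settings ...
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }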
To test your configuration, nginx -t will tell you what syntax is wrong, if any. For instance, some directives can't be used in certain contexts.
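A successful check and reload look roughly like this:

    $ sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    $ sudo systemctl reload nginx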
A single domain
You own the domain example.com and you want to serve your amazing blog to the Internet. Create the file /etc/nginx/conf.d/example.com.conf or /etc/nginx/sites-enabled/example.com.conf, or just edit /etc/nginx/nginx.conf directly. You'll need to point your DNS records to the IP that nginx is running on.
    server {
        server_name example.com;
        root /var/www/html;
        index index.html;
        listen 80;
        listen [::]:80;
    }
The server_name directive tells nginx to use this configuration only for this domain. For example, when there are subdomains or multiple domains running on a single nginx instance, you'll want nginx to serve the right content for each domain.
The root directive tells nginx where to serve files from. So, inside /var/www/html, you'll want an index.html file, or more. Everything inside /var/www/html will be served via nginx.
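If you're starting from scratch, something like this gives you a page to serve (the path matches the root directive above):

    sudo mkdir -p /var/www/html
    echo '<h1>Hello from nginx</h1>' | sudo tee /var/www/html/index.html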
The index directive tells nginx that if someone goes to your website example.com, it should try to find index.html and serve that, without requiring the user to directly request index.html.
The listen directive tells nginx which ports to serve the files in root on. A bare 80 listens over IPv4, while [::]:80 listens over IPv6.
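Once DNS points at your server and nginx has been reloaded, a quick sanity check looks something like this (output trimmed; your headers will differ):

    $ curl -I http://example.com/
    HTTP/1.1 200 OK
    Server: nginx
    Content-Type: text/html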
Multiple domains
Say you want a blog, a forum, a wiki, a streaming service, etc. If you want each of these in its own root folder to keep things neat:
    server {
        root /var/www/blog;
        index index.html;
        server_name blog.example.com;
        listen 80;
        listen [::]:80;
    }
    server {
        root /var/www/phpbb;
        index index.html;
        server_name forum.example.com;
        listen 80;
        listen [::]:80;
    }
    server {
        root /var/www/mediawiki;
        index index.html;
        server_name wiki.example.com;
        listen 80;
        listen [::]:80;
    }
Please note that you'll need PHP add-ons and more configuration to get phpBB and MediaWiki running; this is just a basic example.
If you want to protect your server from people accessing it directly by IP (anyone crawling by IP address is probably not up to anything good), you can add a configuration:
    server {
        # default_server makes this block catch requests that match no other server_name;
        # 444 is nginx's special "close the connection without responding" code.
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 444;
    }
that rejects them.
Add-ons
nginx modules are added at compile time. This makes adding obscure modules frustrating when you use a pre-compiled package, though the only two that really matter here are proxy, to proxy any arbitrary application, and fastcgi, to proxy php-fpm.
proxy
I believe all nginx builds include the proxy module by default, so there's no need to discuss how to install it (Gentoo doesn't; follow its documentation).
To use a reverse proxy, remember the port your service is running on, and then add it into your nginx configuration:
    server {
        server_name example.com;
        listen 80;
        listen [::]:80;
        location / {
            proxy_pass http://localhost:8080;
        }
    }
This is a simple setup for an executable running on port 8080. Make sure your firewall does not allow outside access to that port; otherwise anyone can reach the service directly, bypassing nginx's protection.
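Many applications also want to know the original client address and hostname, which a plain proxy_pass hides. These proxy_set_header lines are a common addition; whether your app needs them depends on the app:

    location / {
        proxy_pass http://localhost:8080;
        # Forward the original hostname and client address to the upstream app.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }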
Suppose you want multiple webapps, then:
    server {
        server_name example.com;
        listen 80;
        listen [::]:80;
        location / {
            proxy_pass http://localhost:8080;  # Homepage
        }
        location /tags {
            proxy_pass http://localhost:8081;  # Cool web app 1
        }
        location /wiki {
            proxy_pass http://localhost:8082;  # Cool web app 2
        }
        ...
    }
If someone navigates to example.com, nginx serves them data from the service running on port 8080.
If someone navigates to example.com/tags, nginx serves them data from the service running on port 8081.
If someone navigates to example.com/wiki, nginx serves them data from the service running on port 8082.
Why's there no root or index? It doesn't matter here: nginx isn't serving files from disk at all, since it passes every request straight through to the service. This shows the purpose of the location directive.
fastcgi
fastcgi lets you run php files.
php is not installed by default. You'll need to find the most recent php version and install php7.4 php7.4-fpm, or whatever the current version happens to be. With systemd, systemctl enable php7.4-fpm and systemctl start php7.4-fpm will get you up and running.
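A quick check that the service actually came up (adjust the version number to whatever you installed; on Ubuntu the binary is php-fpm7.4):

    systemctl status php7.4-fpm
    php-fpm7.4 -t   # tests the FPM configuration, analogous to nginx -t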
To be honest, just look up nginx php ubuntu and you'll find a tutorial that steps you through installing each add-on required for Ubuntu.
In your nginx configuration, you'll need to set up:
    server {
        listen 80;
        server_name example.com;
        root /var/www/;
        index index.php;
        location ~ \.php$ {
            try_files $uri =404;
            # Point this at the socket for the php-fpm version you installed,
            # e.g. /run/php/php7.4-fpm.sock on Ubuntu.
            fastcgi_pass unix:/run/php/php7.4-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
This allows any php file to be passed through the fastcgi module, which executes the php file in accordance with /etc/php/<version num>/fpm/php-fpm.conf (note that the bottom of php-fpm.conf includes pool.d/www.conf, which is where the user configuration actually lives).
The fastcgi_pass directive is the equivalent of proxy_pass, but for FastCGI: php-fpm speaks the FastCGI protocol rather than plain HTTP, which is why it gets its own directive.
The fastcgi_index directive is equivalent to index.
The fastcgi_param directive passes information into the fastcgi server. The rest of the params are inside /etc/nginx/fastcgi_params, which is only 20 or so more params.
We only want to send *.php files to the fastcgi server, so we make sure that only files ending in .php are actually passed to it (note the location ~ \.php$ directive).
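To confirm the whole chain works, a throwaway test file helps (remove it afterwards; phpinfo() leaks server details to anyone who can load it):

    echo '<?php phpinfo();' | sudo tee /var/www/info.php
    # then visit http://example.com/info.php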
TLS
In order to get https on your domain, you need to set up TLS (HTTPS is HTTP over SSL, and TLS is the modern successor to SSL). In nginx, listening over https is simple:
    server {
        listen 443 ssl;       # IPv4
        listen [::]:443 ssl;  # IPv6
        # ... rest of configuration
    }
If you've got a domain name (example.com), this alone won't make browsers happy. The configuration accepts SSL connections, but it doesn't have a certificate yet. You can generate a self-signed certificate yourself, but no one is going to trust it.
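If you just want to experiment, a self-signed certificate can be generated like this (the -subj value and file paths are arbitrary examples; browsers will still warn about the result):

    openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
        -subj "/CN=example.com" \
        -keyout /etc/nginx/selfsigned.key -out /etc/nginx/selfsigned.crt

    # and in the server block:
    #   ssl_certificate     /etc/nginx/selfsigned.crt;
    #   ssl_certificate_key /etc/nginx/selfsigned.key;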
Instead, we can use a free certificate authority such as Let's Encrypt or ZeroSSL (clients like certbot and dehydrated handle the protocol for you). Let's Encrypt is the most common, and is a straightforward setup.
apt install certbot python3-certbot-nginx
Once certbot is installed, ensure your domain is pointed to the correct nginx server, then run:
certbot --nginx -d example.com -d ...
where you can keep chaining -d <domain> for each domain you have. python3-certbot-nginx will find the right nginx configuration to edit, and certbot will make sure you've got rights to that domain. You can't just run certbot against google.com; you need to own the domain and control the server the domain points to. At this point, python3-certbot-nginx should have edited your nginx configuration with certbot's certificate auto-configured. If you chose to redirect http to https, you'll see:
    1  server {
    2      listen 443 ssl;
    3      listen [::]:443 ssl;
    4      ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    5      ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    6      include /etc/letsencrypt/options-ssl-nginx.conf;
    7      ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    8      root /var/www;
    9  }
    10 server {
    11     if ($host = example.com) {
    12         return 301 https://$host$request_uri;
    13     }
    14     listen 80;
    15     listen [::]:80;
    16     server_name example.com;
    17     return 404;
    18     root /var/www;
    19 }
When you request example.com over plain http, you hit the server block starting on line 10, as specified by the server_name on line 16. You'll get redirected to the https version of example.com by the return 301 on lines 11-13, which sends you to the server block starting on line 1.
These two server blocks load the same data; both point to /var/www, but one runs http while the other runs https.
The configurations in /etc/letsencrypt/options-ssl-nginx.conf and /etc/letsencrypt/ssl-dhparams.pem specify how SSL is used. options-ssl-nginx.conf gives basic settings, most importantly which protocols are allowed (TLSv1.2 and TLSv1.3) and the list of ciphers that nginx will use.
Let's Encrypt also lets you skip the redirect to https, which gives you:
    server {
        listen 80;
        listen [::]:80;
        listen 443 ssl;
        listen [::]:443 ssl;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
        root /var/www;
    }
ACME
ACME stands for Automatic Certificate Management Environment. Certificates expire after a period of time so that they don't remain valid long after your site or server has gone dead.
Let's Encrypt is simple: just run certbot renew and it'll renew your certificate.
Let's Encrypt's certificates are valid for 3 months, so you can renew every 3 months before they expire. Or have a cronjob do it for you! Type crontab -e and then add:
0 12 * * * /usr/bin/certbot renew --quiet
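You can also verify that renewal will work without actually renewing anything:

    certbot renew --dry-run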
Examples
You want to run a blog that hosts static pages and a wiki that runs MediaWiki.
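A sketch combining the pieces above: a static blog on blog.example.com and MediaWiki served through php-fpm on wiki.example.com. The root paths and the PHP socket are assumptions to adjust for your install, and MediaWiki's own documentation has the full recommended nginx rules.

    server {
        listen 80;
        listen [::]:80;
        server_name blog.example.com;
        root /var/www/blog;
        index index.html;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name wiki.example.com;
        root /var/www/mediawiki;
        index index.php;
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/run/php/php7.4-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }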