Nginx

nginx is a reverse proxy. Think of it as a router: you have a lot of applications running on your server, and nginx distributes incoming network traffic among them cleanly. nginx has some really good tutorials out there. Don't like nginx? You can also use Apache.

Initial setup

nginx is packaged in every major distribution. Just look up "<your distro> nginx install" and run the command for your package manager.
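
For example, on a Debian- or Ubuntu-style system (adjust for your package manager), the whole setup is roughly:

root #apt install nginx
root #systemctl enable --now nginx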

Let's say you only want to return a single line of text, "This is intentionally left blank." Then in your /etc/nginx/nginx.conf configuration you can add:

location / {
    return 200 'This is intentionally left blank.';
}
Note: If that's all you want, don't use nginx; it'd be easier to write a one-line Bash script to serve a single page.

/etc/nginx/nginx.conf includes /etc/nginx/conf.d/*.conf and /etc/nginx/sites-enabled/*. Therefore, if you have a complicated setup, you can split your configuration among multiple files.
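
On a stock install, the part of /etc/nginx/nginx.conf that pulls those files in looks roughly like this (exact paths can differ by distribution):

http {
    # ... global http settings ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}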

To test your configuration,

root #nginx -t

This will tell you what syntax is wrong, if there happens to be any. For instance, some directives can't be used in certain contexts.
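
A common workflow is to test the configuration and, only if the test passes, reload the running server:

root #nginx -t && systemctl reload nginx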

A single domain

You own the domain example.com and you want to serve your amazing blog to the Internet. Create the file /etc/nginx/conf.d/example.com.conf or /etc/nginx/sites-enabled/example.com.conf, or just edit /etc/nginx/nginx.conf directly. You'll need to point your DNS records to the IP that nginx is running on.

server {
    server_name example.com;
    root /var/www/html;
    index index.html;
    listen 80;
    listen [::]:80;
}

The server_name directive tells nginx to use this server block only for that domain. When there are subdomains or multiple domains running on a single nginx instance, this is how each request gets the content corresponding to its domain.

The root directive tells nginx where to serve files from. Inside /var/www/html you'll want an index.html file (and whatever else makes up your site); everything inside /var/www/html will be served by nginx.

The index directive tells nginx that if someone goes to your website example.com, it will try to find index.html and serve that, without requiring the user to directly specify index.html.

The listen directive tells nginx from which ports to serve the files in the root directive. A single 80 is over IPv4, while [::]:80 is over IPv6.

Multiple domains

Say you want a blog, a forum, a wiki, a streaming service, and so on. To keep things neat, give each of these its own root folder:

server {
    root /var/www/blog;
    index index.html;
    server_name blog.example.com;
    listen 80;
    listen [::]:80;
}
server {
    root /var/www/phpbb;
    index index.html;
    server_name forum.example.com;
    listen 80;
    listen [::]:80;
}
server {
    root /var/www/mediawiki;
    index index.html;
    server_name wiki.example.com;
    listen 80;
    listen [::]:80;
}

Please note that you'll need php add-ons and more configuration to get phpBB and MediaWiki running, but this is just a basic example.

Subservers

One can also use a single server block with different location directives:

server {
    root /var/www/html;
    index index.html;
    server_name example.com;
    listen 80;
    listen [::]:80;

    location /blog {
        alias /var/www/blog;
    }
    location /wiki {
        alias /var/www/mediawiki;
    }
    location /forum {
        alias /var/www/phpbb;
        index index.php;
        # php stuff
    }
}

The location directive can use regex:

server {
    listen 80;
    listen [::]:80;
    location ~ ^/~(.+?)(\/.*)?$ {
        alias /home/$1/website$2;
    }
}

This regex maps a tilde-prefixed path to the corresponding user's home directory; hence, a public-access Unix server where each user has a public HTML page.
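
For example, a request like the following would be served out of that user's home directory (assuming a user named alice with a website folder):

http://example.com/~alice/notes/index.html  ->  /home/alice/website/notes/index.html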

Simple security

If you want to protect your server from people accessing it directly by IP (if they're crawling raw IPs, they're probably not up to anything good), you can set up a configuration:

server {
    listen 80;
    listen [::]:80;
    server_name _;
    return 444;
}

that rejects them (444 is nginx's special non-standard code that closes the connection without sending any response).
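
Note that server_name _ only catches these requests if this block is the one nginx falls back to for the port; marking it as the default server makes that explicit. A minimal sketch:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;
}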

Add-ons

nginx modules are added at compile time. This makes obscure modules frustrating to add when nginx comes from a pre-compiled package, though the only two that should really matter are proxy, to proxy any arbitrary service, and fastcgi, to proxy php-fpm.

proxy

I believe all nginx packages default to including the proxy add-on, so there's no need to discuss how to install it (Gentoo is the exception; it selects nginx modules at build time).

To use a reverse proxy, remember the port your service is running on, and then add it into your nginx configuration:

server {
    server_name example.com;
    listen 80;
    listen [::]:80;
    location / {
        proxy_pass http://localhost:8080;
    }
}

This is a simple setup for an executable running on port 8080. I would ensure your firewall does not allow outside access to these ports, otherwise anyone can directly access the service without nginx's protection.
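
For example, you can have the service bind only to 127.0.0.1 so it's unreachable from outside regardless of firewall rules, and open only nginx's ports in the firewall (shown here with ufw, if that's what you use):

root #ufw allow 80/tcp
root #ufw allow 443/tcp
root #ufw enable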

Suppose you want multiple webapps, then:

server {
    server_name example.com;
    listen 80;
    listen [::]:80;
    location / {
        proxy_pass http://localhost:8080; # Homepage
    }
    location /tags {
        proxy_pass http://localhost:8081; # Cool web app 1 
    }
    location /wiki {
        proxy_pass http://localhost:8082; # Cool web app 2
    }
    ...
}

If someone navigates to example.com, nginx serves them data from the service running on port 8080.

If someone navigates to example.com/tags, nginx serves them data from the service running on port 8081.

If someone navigates to example.com/wiki, nginx serves them data from the service running on port 8082.

Why's there no root or index? They aren't needed: nginx isn't serving files here, it's passing everything straight through to the service.

This shows the purpose of the location directive.
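
A common addition (not required for the examples above) is a few proxy_set_header lines inside each proxied location, so the backend sees the original host and client address rather than nginx's:

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }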

flask

Running a simple flask server can be done in Python:

from flask import Flask
app = Flask(__name__)

@app.route('/flask')
def main():
    return "Flask app"

app.run(port=9999)

And in nginx, the configuration should be:

server {
    location /flask {
        proxy_pass http://localhost:9999;
    }
}

fastcgi

fastcgi lets you run php files.

php is not installed by default. Find the most recent php version and install it, e.g. php7.4 php7.4-fpm (or whatever version is current). With systemd, systemctl enable php7.4-fpm and systemctl start php7.4-fpm will get you up and running.

To be honest, just look up nginx php ubuntu and you'll find a tutorial that steps you through installing each add-on required for Ubuntu.
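
On a Debian- or Ubuntu-style system that boils down to something like (substitute whatever php version your distribution currently ships):

root #apt install php7.4 php7.4-fpm
root #systemctl enable --now php7.4-fpm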

In your nginx configuration, you'll need to set up:

server {
    listen 80;
    server_name example.com;
    root /var/www/;
    index index.php;
    location ~ \.php$ {
        try_files $uri =404;
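        # note: the socket path below depends on your installed php-fpm version,
        # e.g. /run/php/php7.4-fpm.sock on newer Debian/Ubuntu systems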
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

This allows any php file to be passed through the fastcgi module, which executes the php file according to /etc/php/<version num>/fpm/php-fpm.conf (note that the bottom of php-fpm.conf includes pool.d/www.conf, which is actually where the user configurations are set up).

The fastcgi_pass directive is equivalent to proxy_pass but for php. Why is php special? I don't know.

The fastcgi_index directive is equivalent to index.

The fastcgi_param directive passes in information into the fastcgi server. The rest of the params are inside /etc/nginx/fastcgi_params, which is only 20 or so more params.

We only want *.php files to go to the fastcgi server, so we make sure that only files ending in .php are passed to it (note the location ~ \.php$ directive).

TLS

To get https on your domain, you need to set up SSL (HTTPS = HTTP over SSL), which has since been superseded by TLS. In nginx, running over https is simple:

server {
    listen 443 ssl;      # IPv4
    listen [::]:443 ssl; # IPV6
    # ... rest of configuration
}

If you've got a domain name (example.com), this alone won't make browsers happy. This configuration has an SSL connection, but it does not have a certificate yet. You can generate a self-signed certificate yourself, but no one is going to trust it.
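
If you just want to try it out, a throwaway self-signed certificate can be generated with openssl and wired in like this (example paths):

root #openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/selfsigned.key -out /etc/nginx/selfsigned.crt

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate     /etc/nginx/selfsigned.crt;
    ssl_certificate_key /etc/nginx/selfsigned.key;
    # ... rest of configuration
}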

Instead, we can use a free service such as Let's Encrypt or ZeroSSL (or an alternative ACME client like Dehydrated). Let's Encrypt is the most common and is straightforward to set up.

root #apt install certbot python3-certbot-nginx

Once certbot is installed, ensure your domain is pointed to the correct nginx server, then run:

root #certbot --nginx -d example.com -d ...

You can keep chaining -d <domain> for each domain you have. python3-certbot-nginx will find the right nginx configuration to edit, and certbot will make sure you've got rights to that domain. You can't just run certbot on google.com; you need to own the domain and the IP that domain points to. At this point, python3-certbot-nginx should have edited your nginx configuration so that certbot's certificate is configured automatically. If you force https, you'll see:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    root /var/www;
}
server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 404;
    root /var/www;
}

When you request example.com over plain http, you hit the second server block, as selected by its server_name.

Its return 301 redirects you to the https version of example.com, which is handled by the first server block.

These two servers are loading the same data, both are pointing to /var/www, but one runs http, while the other runs https.

The configurations in /etc/letsencrypt/options-ssl-nginx.conf and /etc/letsencrypt/ssl-dhparams.pem specify how SSL is used. options-ssl-nginx.conf sets the basic options, most importantly which protocols are allowed (TLSv1.2 and TLSv1.3) and the list of ciphers nginx will offer.

Let's Encrypt allows you to not redirect to https, which gives you:

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    root /var/www;
}

ACME

ACME stands for Automatic Certificate Management Environment. Certificates expire after a period of time so they don't remain valid long after your site or server has gone dead.

Let's Encrypt is simple. Just run certbot renew and it'll renew your certificate.

Let's Encrypt's certificates are valid for 3 months, so you can renew manually every 3 months before they expire. Or, have a cronjob do it for you! Type

root #crontab -e

and then add:

0 12 * * * /usr/bin/certbot renew --quiet
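
You can check that unattended renewal will actually work by doing a dry run first:

root #certbot renew --dry-run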

Examples

You want to run a blog that hosts static pages and a wiki that runs mediawiki.

Here are the steps you'd take with a fresh system (everything run as root):

apt install nginx php7.4 php7.4-fpm git

Make sure you get the php version that's most recent or the one that's used by whatever software you're trying to use. This example uses version 7.4.

Then, make your folders and grab your content:

mkdir /var/www/wiki; cd /var/www/wiki
git clone https://github.com/wikimedia/mediawiki .
mkdir /var/www/blog; cd /var/www/blog
echo "Here's all my blog files" > index.html

Configure nginx to point at these files, edit /etc/nginx/sites-enabled/sites.conf:

server {
    root /var/www/blog;
    index index.html;
    server_name blog.example.com;
    listen 80;
    listen [::]:80;
}
server {
    root /var/www/mediawiki;
    index index.php;
    server_name wiki.example.com;
    listen 80;
    listen [::]:80;
    location ~ \.php {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:7777;
        fastcgi_index index.php;
        include fastcgi_params;
        include fastcgi.conf;
    }
}

Please note that with MediaWiki specifically, more configuration is typically added, like denying access to deleted images, cached files, etc. To do that, paste your URL into the shortURLs generator and step through the configuration it gives you. Finally, MediaWiki uses mysql for its database, though this is explained when you follow the installation guide.

At this point, we have nginx pointing to port 7777 for our fastcgi server to run the php files. We need to configure fpm to do this:

root #vim /etc/php/7.4/fpm/pool.d/www.conf

and write

listen = 127.0.0.1:7777

Update everything with systemd,

root #systemctl restart nginx
root #systemctl restart php7.4-fpm

and the two sites should work.
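
To sanity-check both virtual hosts from the server itself (even before DNS is set up), you can fake the Host header with curl; you should get your blog page and MediaWiki's setup page back, respectively:

curl -H "Host: blog.example.com" http://127.0.0.1/
curl -H "Host: wiki.example.com" http://127.0.0.1/index.php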