This page is the hub for LUG's documentation.


All of our documentation is intentionally public so that other student organizations or individuals can replicate aspects of our infrastructure if they want. Anything sensitive (private keys, break-glass passwords, etc.) should go in the LUG Bitwarden instead.


Topics should generally be broken out into individual articles and linked on this page, unless there's only a very small amount of content.


== [[Docs/Infrastructure|Infrastructure]] ==
General infrastructure notes


=== [[Docs/Plans|Plans]] ===
Pending upgrades/maintenance to our infrastructure


=== Network ===

==== [[Docs/Cables|Cables]] ====
Physical cabling and "layer 1" network config.

==== [[Docs/Switches|Switches]] ====
Switch and layer 2 network configs (VLANs).

==== Main networks ====
We have two main networks:

* 10.10.0.0/24 - Management (OOB management services like [https://www.dell.com/en-us/lp/dt/open-manage-idrac Dell iDRAC] / [https://www.hpe.com/us/en/hpe-integrated-lights-out-ilo.html HP iLO])
* 10.10.1.0/24 - LAN (servers/VMs)

We may also be getting a <code>/27</code> of Tech's <code>141.219.0.0/16</code> block through IT (a /27 is 32 addresses, so roughly 28-30 usable once the network, broadcast, and gateway addresses are taken out).

The plan is to use reverse NAT to map the public IPs to select internal IPs, since we won't have enough public addresses for every VM (so we can't do it like IT and use exclusively publicly routable addresses).

==== VPN networks ====
In addition, there are two main VPN networks:

* 10.10.10.0/24 - OpenVPN
* 10.10.11.0/24 - WireGuard
** 10.10.11.0/25 - WireGuard admin range (access to Management + LAN, no WAN)
** 10.10.11.128/25 - WireGuard user range (access to LAN only, no WAN)
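For illustration, here's roughly what a member config in the user range looks like under this split-tunnel scheme; the keys, endpoint, and assigned address below are placeholders, not real values, and only the <code>AllowedIPs</code> split is the point.

<pre>
# Hypothetical WireGuard peer config for the user range (all values are placeholders)
[Interface]
PrivateKey = <member private key>
Address    = 10.10.11.130/32

[Peer]
PublicKey  = <LUG gateway public key>
Endpoint   = vpn.lug.mtu.edu:51820       # placeholder hostname/port
# Split tunnel: only LAN is routed; an admin config would also list 10.10.0.0/24
AllowedIPs = 10.10.1.0/24
</pre>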
=== Servers ===


==== [[Docs/Leskinen|Leskinen]] ====
The primary storage server.

Currently holds Shell home directory backups and media for Maho.

==== [[Docs/Maho|Maho]] ====
The GPU compute server.

Currently hosts a [https://studio.blender.org/films/ Blender Open Studio Films] mirror via Jellyfin and a local LLM host for members to experiment with.

==== [[Docs/Mirrors|Mirrors]] ====
The Linux mirror server at mirrors.lug.mtu.edu


==== [[Docs/OPNsense|OPNsense]] (Lasanga/Ravioli) ====
Router/firewall and layer 3+ network configs.

Our firewall/router runs [https://www.pfsense.org/ pfSense], soon to be [https://opnsense.org/ OPNsense].

All IP addressing of servers and virtual machines happens through DHCP and can be viewed in the pfSense 'DHCP Leases' tab (except the Proxmox nodes, which don't support DHCP and require static addressing). Otherwise, most configuration can be viewed by poking around the web interface.

View the WebUI for the specific firewall rules, but some of the more basic/essential ones are:

# Management cannot communicate with LAN/WAN (the internet), and LAN cannot communicate with Management.
## Generally, Management should be restricted from everything else (maybe even from other iDRAC hosts).
## OOB management services tend to be ''super'' vulnerable; there are dozens of [https://github.com/mgargiullo/cve-2018-1207 premade scripts] that instantly pwn iDRACs and hand out a root shell just by being pointed at the right IP address.
## Because of this, the iDRAC web login interface should only be reachable by people you're okay with having root on the server.
# WireGuard
## The admin/user split exists so that every member can be given a WireGuard config for the internal network without us having to worry about them trivially getting root on all the servers by running premade exploits like [https://github.com/mgargiullo/cve-2018-1207 these] against the iDRACs.
## If someone shows up to a couple of meetings they're probably fine to get an admin config; this is mostly peace of mind, so we don't have to worry about configs handed to people who came to one meeting at the start of the semester and were never seen again.
## Neither config should have access to WAN, just to prevent someone getting LUG in hot water by torrenting (or doing something similarly dumb) through the VPN.
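Purely as an illustration of the intent behind those rules (this is not a dump of the actual ruleset, which lives in the firewall WebUI), the policy looks roughly like this written as pf rules with made-up interface macros:

<pre>
# Illustrative pf-style rules only - not our real configuration
block in quick on $MGMT_IF from 10.10.0.0/24 to any                                # Management talks to nothing
block in quick on $LAN_IF  from 10.10.1.0/24 to 10.10.0.0/24                       # LAN cannot reach Management
pass  in quick on $WG_IF   from 10.10.11.0/25   to { 10.10.0.0/24, 10.10.1.0/24 }  # WG admins: Management + LAN
pass  in quick on $WG_IF   from 10.10.11.128/25 to 10.10.1.0/24                    # WG users: LAN only
block in quick on $WG_IF   from 10.10.11.0/24   to any                             # no WAN for anyone on the VPN
</pre>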
==== [[Docs/Proxmox Cluster|Proxmox Cluster]] ====
Our Proxmox cluster, which runs the majority of our services.

Most of our infrastructure consists of VMs in the Proxmox cluster, so everything can be [https://en.wikipedia.org/wiki/High_availability highly available] (meaning VMs can jump to another Proxmox node if one goes down).

In the panel for each VM in the WebUI, make sure to enable the guest agent; Debian auto-installs the QEMU guest agent on first install when it detects it's running inside a VM.
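For reference, enabling the agent option from a node's shell looks like the sketch below (the VM ID is a placeholder; the same setting is also exposed in the VM's Options tab in the WebUI):

<pre>
# On a Proxmox node: enable the QEMU guest agent option for VM 101 (placeholder ID)
qm set 101 --agent enabled=1

# Inside the guest, if the agent didn't get installed automatically:
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent
</pre>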


===== Proxmox nodes =====
The nodes in the cluster include:

* [10.10.1.20] Kurisu
* [10.10.1.21] Okabe (currently offline; temporarily running Windows 10 LTSC to poke around with the [[Locked HGST drives|HGST drives]])
* [10.10.1.22] Daru
* [10.10.1.23] Luka
* [10.10.1.24] Mayuri
* [10.10.1.25] MrBraun (HP server)

These are also listed in [[Servers]], since they're all physical servers in the GLRC rack.

Note that all of these addresses are static and must be changed manually on each host (Proxmox doesn't currently support DHCP for its own management address). The process is loosely outlined in the comments [https://forum.proxmox.com/threads/proxmox-change-ip-address.145254/ here].
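In short, the change comes down to editing the node's network config and <code>/etc/hosts</code> to match, then restarting networking. A rough sketch, where the bridge/NIC names, gateway, and hostname are illustrative rather than copied from our nodes:

<pre>
# /etc/network/interfaces (illustrative - check the node's real bridge/NIC names)
auto vmbr0
iface vmbr0 inet static
    address 10.10.1.20/24
    gateway 10.10.1.1            # assumed gateway, verify against the firewall
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# /etc/hosts must agree with the new address, or the pve services get confused
10.10.1.20    kurisu.lug.mtu.edu kurisu    # hostname/domain assumed for the example

# Apply the change (or just reboot the node)
systemctl restart networking
</pre>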


===== Virtual machines =====
The VMs in the cluster include:

* [10.10.1.8] Huskybot IRC<->Matrix<->Discord bridge
* [10.10.1.9] LUG IRC server (running ergo)
* [10.10.1.15] Webserver running NGINX; hosts the lug.mtu.edu homepage and serves as a reverse proxy for all other webservers behind our NAT (see the sketch below)
* [10.10.1.16] This MediaWiki instance
* [10.10.1.70] Socksproxy (so members using the split-tunneled LUG VPN have an easy way to route traffic through LUG)
* [10.10.1.76] debian (Noah's course project; will eventually be deleted)
* [10.10.1.170] hashtopolis (RedTeam Hashtopolis server for CTFs)
* [10.10.1.172] badapple (parrot.live-style Bad Apple service)
* [10.10.1.202] "Main-MC" (purpose unclear; ask Allen)
* [10.10.1.212] [https://papermc.io/software/velocity Velocity] reverse proxy for Minecraft servers (so we can offer unlimited servers to clubs/halls on campus without running out of public IPs)
* [10.10.1.224] Allen's gaming VM (runs Windows)
* [10.10.1.229] "Kube-Minecraft" (purpose unclear; ask Allen)

You can see all VMs listed in the Proxmox WebUI.
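As an illustration of the webserver's reverse-proxy role, here's a minimal NGINX server block; the vhost name and certificate paths are made up for the example, not copied from the real config.

<pre>
# Illustrative vhost on the 10.10.1.15 webserver (not the actual config)
server {
    listen 443 ssl;
    server_name wiki.lug.mtu.edu;                     # hypothetical subdomain

    ssl_certificate     /etc/ssl/lug/fullchain.pem;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/lug/privkey.pem;

    location / {
        proxy_pass http://10.10.1.16;                 # forward to the MediaWiki VM
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
</pre>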


===== Updating nodes =====
Proxmox runs on top of Debian, so the updating process is mostly the same:

# <code>apt update && apt upgrade</code>
# (Optional) Remove the annoying "no valid subscription" popup from the web dashboard: <code>sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service</code>
# (Optional) Manually migrate all VMs to other Proxmox nodes first; Proxmox doesn't do this automatically, and any VMs still running on the host when it reboots will be offline until the host comes back up (see the sketch below).
# (Optional but recommended) <code>reboot</code> the node.

For the major version bumps, you may need to run <code>apt update && apt upgrade</code>, followed by <code>apt dist-upgrade</code>; this is the process on Debian, but I haven't tested it on Proxmox.

Check the [https://pve.proxmox.com/wiki/Category:Upgrade Proxmox wiki's 'Upgrade' category] for specific instructions when the time comes.
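The migration step is just a couple of <code>qm</code> commands per VM; a quick sketch, with a placeholder VM ID and target node name:

<pre>
# See what's currently running on this node
qm list

# Live-migrate VM 101 (placeholder ID) to another node, e.g. daru
qm migrate 101 daru --online

# ...then upgrade/reboot this node, and migrate the VMs back the same way
</pre>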


===== Updating VMs =====
All VMs run Debian to keep things homogeneous and easy to upgrade/automate, except for a few Windows VMs like Allen's scuffed Win10 LTSC gaming VM; those are presumed self-managed.

The update process is the same as on any Debian system:

# <code>apt update && apt upgrade</code>
## If the kernel or systemd gets updated, it's a good idea to <code>reboot</code>.
# For major version bumps (a new Debian stable lands roughly every two years), run the aforementioned <code>apt update && apt upgrade</code>, followed by <code>apt dist-upgrade</code>.

Updates need to be automated with [https://docs.ansible.com/ Ansible] at some point.
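A minimal sketch of what that playbook could look like, assuming an inventory group called <code>debian_vms</code> (the group name is made up; none of this exists yet):

<pre>
# upgrade.yml - hypothetical playbook, not something we actually have
- hosts: debian_vms
  become: true
  tasks:
    - name: Update the apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether the guest wants a reboot
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if the kernel or a core library was updated
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
</pre>

It would be run with something like <code>ansible-playbook -i inventory upgrade.yml</code> against the Debian VMs.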


==== [[Docs/Shell|Shell]] ====
The shared multi-tenant server for LUG members/alums at shell.lug.mtu.edu

==== Fileserver ====
Coming soon; currently unprovisioned (waiting on a new PSU and on fixing the [[Locked HGST drives|HGST drives]]).

=== Services ===

==== BlueSky ====

==== [[Docs/IRC|IRC]] ====

==== Website ====

==== Wiki ====


== Org Management ==

=== Wiki ===

==== Docs ====
How to create/manage pages in this category ("Docs").

==== Meeting Minutes ====
<insert process/methodology for making/formatting meeting minutes (e.g. wiki page guidelines)>


=== Time-sensitive ===

* Email IT for new certs (example template to use; make sure to keep the SubjectAltName, etc.)
* Install-a-thons
* Shirt printing / stickers


=== Budget ===

* USG meetings
* Making presentable diagrams and representations of data

=== MTU Policies and Procedures ===
https://www.mtu.edu/umc/services/websites/requirements/

* All (sub)domains need to be approved by UMC (University Marketing and Communications).
* IT handles IP addressing and SSL certificates.
* USG handles funding and reimbursements.
