Docs

From MTU LUG Wiki
This page is the hub for LUG's documentation.


All of our documentation is intentionally public so that other orgs (or students) can replicate aspects of our infrastructure if they want. Anything sensitive (private keys, break-glass passwords, etc.) should go in the LUG Bitwarden.


Topics should generally be broken out into individual articles and linked on this page, unless there's only a very small amount of content.
= Servers & Services =


== Proxmox Cluster ==
The majority of our infrastructure runs as VMs in the Proxmox cluster, so everything can be [https://en.wikipedia.org/wiki/High_availability highly available] (meaning VMs can move to another Proxmox node if one goes down).


In the panel for each VM in the web UI, make sure to enable the guest agent; Debian automatically installs the QEMU guest agent on first install when it detects it's running inside a VM.
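If the agent option wasn't enabled when the VM was created, it can also be flipped from a node's shell; a sketch, where the VMID <code>100</code> is a placeholder:

```shell
# On a Proxmox node: enable the QEMU guest agent option for VM 100
# (takes effect at the next VM power cycle)
qm set 100 --agent enabled=1

# Inside the guest, if Debian didn't install the agent automatically:
apt update && apt install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
```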


=== Proxmox Nodes ===
The nodes in the cluster include:


* [10.10.1.20] Kurisu
* [10.10.1.21] Okabe (currently offline; temporarily running Windows 10 LTSC to poke around with [[Locked HGST drives|HGST drives]])
* [10.10.1.22] Daru
* [10.10.1.23] Luka
* [10.10.1.24] Mayuri
* [10.10.1.25] MrBraun (HP server)
Note that all of these addresses are static and must be changed manually on each host (Proxmox doesn't currently support DHCP for its nodes). The process is loosely outlined in the comments [https://forum.proxmox.com/threads/proxmox-change-ip-address.145254/ here].
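For reference, the re-addressing process touches a handful of files; this is a sketch (the interface name and addresses are illustrative, not our actual config):

```shell
# Files to touch when re-addressing a node (values are illustrative):
#
#   /etc/network/interfaces   -> address/gateway on the vmbr0 bridge
#   /etc/hosts                -> the node's hostname must resolve to the new IP
#   /etc/pve/corosync.conf    -> the node's ring0_addr, if clustered
#
# Quick sanity check that nothing still references the old address:
grep -r "10.10.1.20" /etc/network/interfaces /etc/hosts /etc/pve/corosync.conf

# Then reboot the node (safest) or restart networking:
systemctl restart networking
```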


These nodes are also listed in [[Servers]], since they're all physical servers in the GLRC rack.


=== Virtual Machines ===
The VMs in the cluster include:


* [10.10.1.2] PXE boot server (inactive)
* [10.10.1.8] Huskybot IRC<->Matrix<->Discord bridge
* [10.10.1.9] LUG IRC server (running ergo)
* [10.10.1.12] Invidious (private YouTube frontend, currently inactive)
* [10.10.1.14] BookStack (alternative knowledgebase for documentation; inactive, we're using this wiki instead)
* [10.10.1.15] Webserver running NGINX; hosts the lug.mtu.edu homepage and serves as a reverse proxy for all other webservers behind our NAT
* [10.10.1.16] This MediaWiki instance
* [10.10.1.17] NetBox (network/rack-related documentation; currently inactive, overly complicated for our needs)
* [10.10.1.70] Socksproxy (so members using the split-tunneled LUG VPN have an easy way to route traffic through LUG)
* [10.10.1.71] VM for accessvillage.net (contact [[User:D2wn|Noah]] with any issues)
* [10.10.1.76] debian (Noah's course project; will eventually be deleted)
* [10.10.1.99] Noah's personal VM for random stuff
* [10.10.1.170] hashtopolis (RedTeam Hashtopolis server for CTFs)
* [10.10.1.172] badapple (parrot.live-like Bad Apple service)
* [10.10.1.202] "Main-MC" (idk; ask Allen)
* [10.10.1.212] [https://papermc.io/software/velocity Velocity] reverse proxy for Minecraft servers (so we can offer unlimited servers to clubs/halls on campus without running out of public IPs)
* [10.10.1.224] Allen's gaming VM (runs Windows)
* [10.10.1.229] "Kube-Minecraft" (idk; ask Allen)
All VMs are listed in the Proxmox web UI.


=== Updating Nodes ===
Proxmox runs on top of Debian, so the update process is mostly the same:


# (Optional) Manually migrate all VMs to other Proxmox nodes first; Proxmox doesn't do this automatically, and any VMs running on the host will go offline during a reboot until the host comes back up
# <code>apt update && apt upgrade</code>
# (Optional) Remove the annoying no-subscription popup from the web dashboard: <code>sed -Ezi.bak "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service</code>
# (Optional but recommended) <code>reboot</code> the node
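The manual migration step can be done from a node's CLI as well as the web UI; a sketch, where the VMID <code>100</code> and target node <code>daru</code> are placeholders:

```shell
# See which VMs are running on this node
qm list

# Live-migrate VM 100 to the node named "daru" before rebooting
qm migrate 100 daru --online
```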


For the yearly major version bumps, you may need to run <code>apt update && apt upgrade</code> followed by <code>apt dist-upgrade</code>; this is the process on Debian, but we haven't tested it on Proxmox. Check the [https://pve.proxmox.com/wiki/Category:Upgrade Proxmox wiki's 'Upgrade' category] for specific instructions when the time comes.
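Proxmox also ships a pre-upgrade checklist script named after the version pair (e.g. <code>pve7to8</code>, <code>pve8to9</code>); it's worth running before attempting a major bump:

```shell
# Flags known blockers (cluster health, repo config, old packages, etc.)
# before the major-version dist-upgrade; name varies by release
pve8to9 --full
```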


=== Updating VMs ===
All VMs run Debian to keep things homogeneous and easy to upgrade/automate, except for a few Windows VMs (like Allen's scuffed Win10 LTSC gaming VM); those are presumed self-managed.


The update process is the same as on any Debian system:


# <code>apt update && apt upgrade</code>
## If the kernel or systemd gets updated, it's a good idea to <code>reboot</code>
# For major version bumps (roughly one per year), run the aforementioned <code>apt update && apt upgrade</code>, followed by <code>apt dist-upgrade</code>


Updates should eventually be automated with [https://docs.ansible.com/ Ansible].
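Until a proper playbook exists, Ansible can already push the same update ad hoc with its <code>apt</code> module; a sketch, where the inventory file and group name are hypothetical:

```shell
# Update the package cache and dist-upgrade every host in the (made-up)
# debian_vms group, escalating to root on each host (-b)
ansible debian_vms -i hosts.ini -b \
  -m ansible.builtin.apt \
  -a "update_cache=true upgrade=dist"
```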


== Mirrors ==
Mirrors is a standalone [https://www.dell.com/en-us/shop/povw/poweredge-r730xd/1000 Dell R730xd] server (the 3.5" drive-bay variant) running FreeBSD; all of its services are managed by Salt.


We're in the process of rebuilding it, but in the meantime this is how we've been managing it:


* Certificate maintenance: put the new certificate in /usr/share/salt/<somewhere>, where Salt will copy it to /etc/nginx/<somewhere>.
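After dropping a new cert into the Salt tree, something like the following applies the state that copies it into place (the exact state layout isn't documented here, so treat this as an assumption):

```shell
# Masterless apply on the FreeBSD host: renders all configured states,
# copying managed files (including the cert) into place
salt-call --local state.apply
```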


== Shell ==
The shared multi-tenant server for LUG members/alums at shell.lug.mtu.edu.


== Firewall/Router/Network ==
Our firewall/router runs [https://www.pfsense.org/ pfSense], soon to be [https://opnsense.org/ OPNsense].


All IP addressing of servers and virtual machines happens through DHCP and can be viewed in the pfSense 'DHCP Leases' tab (except for the Proxmox nodes, which don't support DHCP and require static addressing).


Otherwise, most of the configuration can be found by poking around the web interface.


=== Firewall rules ===
See the web UI for the full rule set; some of the more basic/essential rules:


# Management cannot communicate with LAN or WAN (the internet), and LAN cannot communicate with Management.
## Generally, Management should be restricted from everything else (maybe even from other iDRAC servers?).
## OOB services tend to be ''super'' vulnerable; there are dozens of [https://github.com/mgargiullo/cve-2018-1207 premade scripts] that instapwn iDRACs and give you a root shell just by pointing them at the IP address.
## Because of this, the iDRAC web login interface should only be accessible to people you'd be okay with having root on the server.
# Wireguard
## The admin/user split exists so that all members can be given a WireGuard config for the internal network without worrying about them trivially getting root on every server by running premade exploits ([https://github.com/mgargiullo/cve-2018-1207 like these]) against the iDRACs.
## If someone shows up to a couple of meetings, they're probably fine to get an admin config; this is mostly peace of mind, so we don't need to worry about configs given to people who came to one meeting at the beginning of the semester and were never seen again.
## Neither config should have access to WAN, just to prevent someone getting LUG in hot water by torrenting (or something similarly dumb) through the VPN.


=== Main networks ===
We have two main networks:

* 10.10.0.0/24 - Management (OOB management services like [https://www.dell.com/en-us/lp/dt/open-manage-idrac Dell iDRAC] / [https://www.hpe.com/us/en/hpe-integrated-lights-out-ilo.html HP iLO])
* 10.10.1.0/24 - LAN (servers/VMs)

We may also be getting a <code>/27</code> out of Tech's <code>141.219.0.0/16</code> block through IT (a /27 gives 30 usable addresses, minus whatever the gateway needs).


The plan is to use reverse NAT to map the public IPs to select internal IPs, since we won't have enough for every VM (so we can't do it like IT and use exclusively publicly routable addresses).


=== VPN Networks ===
In addition, there are two main VPN networks:


* 10.10.10.0/24 - OpenVPN
* 10.10.11.0/24 - WireGuard
** 10.10.11.0/25 - WireGuard admin range (access to Management+LAN, no WAN)
** 10.10.11.128/25 - WireGuard user range (access to LAN only, no WAN)
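For illustration, the only real difference between the two config types is the address range and <code>AllowedIPs</code>; a sketch with placeholder keys and a hypothetical endpoint (the split-tunnel behavior comes from never routing 0.0.0.0/0 through the tunnel):

```ini
# Admin config: address from 10.10.11.0/25, routes Management + LAN
[Interface]
Address = 10.10.11.5/25
PrivateKey = <member-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.lug.mtu.edu:51820   # hypothetical endpoint
AllowedIPs = 10.10.0.0/24, 10.10.1.0/24

# A user config instead takes an address from 10.10.11.128/25 and sets
# AllowedIPs = 10.10.1.0/24 only (LAN; no Management, and no WAN either way)
```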

== Fileserver ==
Coming soon; currently unprovisioned (waiting on a new PSU and on fixing the [[Locked HGST drives|HGST drives]]).

= Management =

== Time-sensitive ==
* Email IT for new certs (there's an example template to use; make sure to keep SubjectAltName, etc.)
* Install-a-thons
* Shirt printing / stickers

== Budget ==
* USG meetings
* Making presentable diagrams and representations of data

= Sub-articles =

The topics broken out into their own articles, linked from this page:

* [[Docs/Infrastructure|Infrastructure]]: general infrastructure notes
** [[Docs/Plans|Plans]]: pending upgrades/maintenance to our infrastructure
* Network
** [[Docs/Cables|Cables]]: physical cabling and "layer 1" network config
** [[Docs/Switches|Switches]]: switch and layer 2 network configs (VLANs)
* Servers
** [[Docs/Leskinen|Leskinen]]: the primary storage server; currently holds Shell home-directory backups and media for maho
** [[Docs/Maho|Maho]]: the GPU compute server; currently hosts a [https://studio.blender.org/films/ Blender Open Studio Films] mirror via Jellyfin and a local LLM host for members to experiment with
** [[Docs/Mirrors|Mirrors]]: the Linux mirror server at mirrors.lug.mtu.edu
** [[Docs/OPNsense|OPNsense]] (Lasanga/Ravioli): router/firewall and layer 3+ network configs
** [[Docs/Proxmox Cluster|Proxmox Cluster]]: our Proxmox cluster, running the majority of our services
** [[Docs/Shell|Shell]]: the shared multi-tenant server for LUG members/alums at shell.lug.mtu.edu
* Services: BlueSky, [[Docs/IRC|IRC]], Website, Wiki
* Org Management
** Wiki
*** Docs: how to create/manage pages in this category ("Docs")
*** Meeting Minutes: <insert process/methodology for making/formatting meeting minutes (e.g. wiki page guidelines)>
** Time-sensitive: email IT for new certs (example template to use; make sure to keep SubjectAltName, etc.), install-a-thons, shirt printing / stickers
** Budget: USG meetings; making presentable diagrams and representations of data
** MTU Policies and Procedures ([https://www.mtu.edu/umc/services/websites/requirements/ UMC website requirements]):
*** All (sub)domains need to be approved by UMC (University Marketing & Communication)
*** IT handles IP addressing and SSL certificates
*** USG handles funding and reimbursements


''Latest revision as of 08:09, 4 November 2025.''