Minutes 2025-11-06
- No scheduled presentation this week
- Average LUG Meeting
- Someone disassembling their Framework laptop
- Another person installing Arch onto their laptop
- Impromptu talk on mpd by Freya!
- Binds to localhost by default
- mpc list Artist/Album to list content based on the ~/Music/ directory
- Apparently builds this list into a database on first run and keeps it cached in memory to avoid repeated disk operations
- mpc add <path/to/song>
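A minimal sketch of the mpd/mpc workflow mentioned above. The config path and values reflect mpd's documented defaults (localhost bind, ~/Music), not anything Freya specifically showed:

```shell
# ~/.config/mpd/mpd.conf -- relevant defaults, shown as comments:
#   music_directory   "~/Music"       # directory mpd scans for songs
#   bind_to_address   "127.0.0.1"     # binds to localhost by default

# The first run builds mpd's song database from music_directory;
# later mpc queries hit the cached database rather than the disk:
mpc list Artist            # list every artist mpd knows about
mpc list Album             # list every album
mpc add "path/to/song"     # queue a song (path relative to music_directory)
mpc play
```

All of these commands talk to a running mpd instance over localhost, so they are only meaningful with the daemon up.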
- Today is yapping night
- Hardware upgrade
- Need to reshuffle all servers
- We don't control the bottom half of the rack since that's ITO, but we should still try to move the big 2U servers lower
- Thinking we'll move 2U to the bottom of our space, firewalls on the top, etc.
- Still need to finish our migration from pfSense to OPNsense
- Currently have one OPNsense box, one pfSense box
- Should graph out a diagram of current rack and where everything should go
- LUG is eating good now
- We came in with 3 mostly-dead servers that IT was begging us to upgrade
- They even gave us old hand-me-down R610s
- Then Tim came in clutch with the new servers that make up our current infra
- So long as you know how to use servers, he is willing to provide the hardware
- When Simone updates from East Hall (which has 1Gb/s Ethernet, while all other res halls only get 100Mb/s), is he saturating Mirrors' entire link?
- Quite possibly
- Allen said to tell him the next time Simone updates, and he will pull up the UniFi switch bandwidth page to track exactly how many Mbps are being used
- Some corporations and downstream mirrors may not be too happy with Simone...
- NCSA has a 40Gb/s LAN because they found super cheap Mellanox ConnectX-3 cards with two 40Gb/s ports
- Great for Ceph
- Simone was lobbying for Ceph until Noah told him about the unfortunate drive bay situation
- We have 3 R630's and 2 R620's in our Proxmox Cluster
- The 3 R630's have 8 bays each that can be used for Ceph
- but the 2 R620's only have two bays each
- They're missing the cage and backplane that adds the other 8
- Probably more economical to just buy new servers at that point anyway
- So in effect, if two R630's went down, that'd be 16 of the 28 OSDs offline in Ceph, which would certainly cause a problem
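The bay math above, as a quick sanity check (counts taken straight from the minutes; a sketch, not a Ceph sizing calculation):

```shell
# 3x R630 with 8 Ceph-usable bays each, 2x R620 with 2 bays each
total=$(( 3 * 8 + 2 * 2 ))   # 28 OSDs cluster-wide
down=$(( 2 * 8 ))            # 16 OSDs lost if two R630s fail
echo "$down/$total OSDs offline"   # prints 16/28 OSDs offline
```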
- "And for that reason, I'm out"
- Or more literally, it may make more sense to not go Ceph and go with a traditional storage setup
- One dedicated storage server, exporting data via NFS or iSCSI
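A minimal sketch of what the NFS variant of that setup could look like. The hostname (storage01), subnet, and paths are placeholders, not anything that was decided:

```shell
# /etc/exports on the hypothetical storage box -- export a data
# directory read-write to the cluster subnet:
#   /srv/vmdata  10.0.0.0/24(rw,sync,no_root_squash)

sudo exportfs -ra          # reload the export table
showmount -e localhost     # verify the export is visible

# On each client node, mount the export:
sudo mount -t nfs storage01:/srv/vmdata /mnt/vmdata
```

iSCSI would be the block-level alternative; NFS is the simpler file-level one.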
- Redundant Mirrors would be nice so maintenance doesn't always have to happen at 5AM on a holiday
- But that may add more latency
- The big thing with mirrors is that mirror lists are sorted by latency, so we want ours to stay low-latency
- NCSA had to switch off Caddy for latency reasons
- It was faster to curl LUG's homepage than NCSA's homepage... from an NCSA server in the same rack as their website
- LUG uses Nginx for every server, that may be the difference
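One quick way to reproduce that comparison with curl's write-out timing (URLs here are placeholders):

```shell
# Total time to fetch each homepage; since mirror lists are
# latency-sorted, numbers like these are what actually ranks you
curl -o /dev/null -s -w 'LUG:  %{time_total}s\n' https://lug.example.edu/
curl -o /dev/null -s -w 'NCSA: %{time_total}s\n' https://ncsa.example.edu/
```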
- There is apparently a good website to check wayland protocol information (wayland.app)
- Logan has already reached the point of having to draw a line in the sand with SAT2711 (Linux Fundamentals)
- Let it be known that if a student installs Gentoo as their distro of choice for the course tasks, he won't help
- Plant data recovery happenings
- Was apparently in the process of backing things up to Google Drive right when the SSD died
- Arch installer remounts root as read-only then proceeds to complain it's mounted read-only
- RedTeam after hours returns once again