Minutes 2025-11-06

Revision as of 03:51, 7 November 2025 by D2wn (talk | contribs) (initial commit)
  1. No scheduled presentation this week
  2. Average LUG Meeting
    1. Someone disassembling their Framework laptop
    2. Another person installing Arch onto their laptop
  3. Impromptu talk on mpd by Freya!
    1. Binds to localhost by default
    2. mpc list artist / mpc list album to list content from the music directory (~/Music/)
      1. Apparently mpd builds this database on first run and keeps it cached to avoid repeated disk operations
    3. mpc add <path/to/song>
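The points above can be sketched as a minimal mpd.conf; the paths and values here are common defaults for illustration, not necessarily what Freya showed:

```
# ~/.config/mpd/mpd.conf — illustrative values, not LUG's actual config
music_directory    "~/Music"
db_file            "~/.local/share/mpd/database"   # tag database built on first scan
bind_to_address    "127.0.0.1"                     # localhost-only, per the talk
port               "6600"
```

With mpd running against something like this, `mpc list artist`, `mpc list album`, and `mpc add <path/to/song>` all work off the cached database rather than rescanning the disk.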
  4. Today is yapping night
  5. Hardware upgrade
    1. Need to reshuffle all servers
      1. We don't control the bottom half of the rack since that's ITO, but we should still try to move the big 2U servers lower
    2. Thinking we'll move 2U to the bottom of our space, firewalls on the top, etc.
    3. Still need to finish our migration off pfSense to OPNsense
      1. Currently have one OPNsense box, one pfSense box
    4. Should draw up a diagram of the current rack and where everything should go
    5. LUG is eating good now
      1. We came in with 3 mostly-dead servers that IT was begging us to upgrade
      2. They even gave us old hand-me-down R610's
      3. Then Tim came in clutch with the new servers that make up our current infra
      4. So long as you know how to use servers, he is willing to provide the hardware
    6. When Simone updates from East Hall (which has 1Gb/s Ethernet, while all other res halls only get 100Mb/s), is that saturating all of Mirrors' link?
      1. Quite possibly
      2. Allen said to tell him the next time Simone updates, and he will pull up the UniFi switch bandwidth page to track exactly how many Mbps are being used
      3. Some corporations and downstream mirrors may not be too happy with Simone...
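For scale, a quick back-of-envelope check of what one saturated East Hall link looks like; the 4 GB transfer size is an assumption for illustration, not a measured figure:

```shell
# Rough numbers for the saturation question; iso_mb is a hypothetical download size
link_mbps=1000                    # East Hall Ethernet
iso_mb=4096                       # assumed 4 GB transfer

mb_per_s=$(( link_mbps / 8 ))     # 1000 Mb/s ≈ 125 MB/s of payload
echo "peak throughput: ${mb_per_s} MB/s"
echo "4 GB transfer:   $(( iso_mb / mb_per_s )) s"
```

So a single updating client really can occupy the whole link for half a minute at a stretch, which is roughly what the UniFi bandwidth page would show while it runs.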
  6. NCSA has 40Gb/s LAN because they found super cheap Mellanox ConnectX-3's with 2 40Gb/s ports
    1. Great for Ceph
      1. Simone was lobbying for Ceph until Noah told him about the unfortunate drive bay situation
        1. We have 3 R630's and 2 R620's in our Proxmox Cluster
        2. The 3 R630's have 8 usable bays each for Ceph (24 total)
        3. but the 2 R620's only have two bays each
          1. They're missing the cage and backplane that adds the other 8
          2. Probably more economical to just buy new servers at that point anyway
        4. So in effect, this means if two R630's went down, that'd be 16/28 OSDs offline in Ceph, which would certainly cause a problem
      2. "And for that reason, I'm out"
        1. Or more literally, it may make more sense to skip Ceph and go with a traditional storage setup
        2. One dedicated storage server, exporting data via NFS or iSCSI
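The failure arithmetic above can be sanity-checked directly; the bay counts are the ones from the minutes:

```shell
# Sanity-check the worst-case OSD loss discussed above
r630s=3; bays_per_r630=8          # three R630's, 8 usable bays each
r620s=2; bays_per_r620=2          # two R620's, only 2 bays each

total=$(( r630s * bays_per_r630 + r620s * bays_per_r620 ))   # 28 OSDs
lost=$(( 2 * bays_per_r630 ))                                # two R630's down
echo "worst case: ${lost}/${total} OSDs offline"
```

Losing more than half the OSDs in one failure event is exactly the kind of imbalance Ceph's redundancy can't paper over, hence the traditional-storage conclusion.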
  7. Redundant Mirrors may be nice so maintenance doesn't always have to happen at 5AM on a holiday
    1. But that may add more latency
    2. The big thing with mirrors is that clients often pick them by latency, so we want to make sure ours stays low-latency
  8. NCSA had to switch off Caddy for latency reasons
      1. It was faster to curl LUG's homepage than NCSA's homepage... from an NCSA server on the same rack as their website
        1. LUG uses Nginx on every server; that may be the difference
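A sketch of that comparison using curl's built-in timing; the hostnames below are placeholders, not the real LUG or NCSA URLs:

```shell
# Print total fetch time for each homepage (placeholder URLs).
# %{time_total} is curl's end-to-end transfer time in seconds.
for url in "https://lug.example.org/" "https://ncsa.example.edu/"; do
  curl --max-time 5 -o /dev/null -sS \
    -w "%{time_total}s  %{url_effective}\n" "$url" || true
done
```

Running this from a host on the same rack as the slow site makes the comparison fair, since it takes the WAN path out of the picture.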
  9. There is apparently a good website to check wayland protocol information (wayland.app)
  10. Logan is already at the point where he had to draw a line in the sand with SAT2711 (Linux Fundamentals)
    1. Let it be known that if a student installs Gentoo as their distro of choice to do their tasks on, he won't help
  11. Plant data recovery happenings
    1. Was apparently in the process of backing things up to Google Drive right when the SSD died
    2. The Arch installer remounts root as read-only, then proceeds to complain that it's mounted read-only
  12. RedTeam after hours returns once again