Minutes 2025-01-16
RAID (and ZFS) By: Noah
- RAID stands for “Redundant Array of [Inexpensive/Independent] Disks”
- Used by enterprises for high availability and drive resiliency
Drawbacks versus flat disks
- increased power draw
RAID levels
- RAID0
- Storage is that of every disk combined
- Fastest for reading and writing
- not really used anymore on its own, since a single disk failure loses the whole array
- RAID1
- Each disk holds a full copy of the data, mirrored across every disk
- Common for boot drives
- RAID5
- Any one disk is allowed to fail in a given array without losing any data
- used to be the gold standard, but fell out of favor as disk sizes grew: longer rebuilds raise the odds of a second failure mid-rebuild
- RAID6
- Any two disks can fail in a given array without losing data
- the new gold standard
- if a disk fails during a rebuild, the array can tolerate that second failure without losing data
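The single-failure tolerance of RAID5 comes from a per-stripe XOR parity block: XOR-ing the parity with the surviving disks reproduces the lost block. A minimal sketch (hypothetical toy data, not any real RAID implementation):

```python
from functools import reduce

def parity(blocks):
    """XOR the corresponding bytes of all blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three hypothetical data "disks", each holding one 4-byte block.
disks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
p = parity(disks)

# Simulate losing disk 1: XOR-ing the parity block with the surviving
# disks recovers the missing block exactly.
recovered = parity([p, disks[0], disks[2]])
assert recovered == disks[1]
```

RAID6 extends this idea with a second, independently computed parity block (Reed-Solomon rather than plain XOR), which is what lets it survive two failures.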
Nested RAID levels
- RAID10 (stripe of mirrors)
- RAID01 (mirror of stripes; bad, since it survives fewer failure combinations)
- WMTU uses RAID60
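Why RAID10 beats RAID01 can be shown by brute force: with four disks, enumerate every two-disk failure and check which layouts survive. A small sketch (the disk/group layout here is illustrative):

```python
from itertools import combinations

DISKS = [0, 1, 2, 3]
MIRROR_PAIRS = [{0, 1}, {2, 3}]   # RAID10: stripe across two mirror pairs
STRIPES      = [{0, 1}, {2, 3}]   # RAID01: mirror of two stripes

def raid10_survives(failed):
    # The array survives while every mirror pair still has one live disk.
    return all(pair - failed for pair in MIRROR_PAIRS)

def raid01_survives(failed):
    # One failed disk kills its whole stripe, so a full stripe must remain intact.
    return any(not (stripe & failed) for stripe in STRIPES)

two_disk = [set(c) for c in combinations(DISKS, 2)]
print(sum(raid10_survives(f) for f in two_disk), "of", len(two_disk))  # 4 of 6
print(sum(raid01_survives(f) for f in two_disk), "of", len(two_disk))  # 2 of 6
```

RAID10 only dies when both disks of the same mirror fail (2 of the 6 combinations), while RAID01 dies whenever each stripe loses any one disk (4 of 6).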
Weird RAID levels
- RAID2
- implemented its own error correction
- no reason to use it at all anymore, because HDDs implement their own error correction
- RAID3
- Striped data at the byte level rather than the block level
- Replaced by RAID5
- used in video streaming
ZFS
- ZFS acts as the storage administrator, while the user only has to check in
- arrays are now pools
- logical volumes are now datasets
- has its own approach to logical structure
- calls RAID levels something different: RAID0 -> stripe, RAID1 -> mirror, RAID5/6 -> raidz1/raidz2
- Described as paranoid because it doesn’t trust anything from your disks and constantly checks and compares information to parity calculation.
- The shell server runs ZFS in RAID10 as one single zpool, which lets us put quotas on datasets so no user can get out of hand
- Extra features
- has a cache vdev (L2ARC) that serves reads, managed as a ring buffer
- SLOG (separate log device), which stores the ZFS intent log (ZIL)
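The "paranoid" behavior described above can be sketched as a toy model: the checksum is stored with the metadata that points at a block, every read is verified against it, and a bad copy is healed from a good mirror. This is illustrative only, not the real ZFS on-disk format:

```python
import hashlib

class MirroredBlock:
    """Toy two-way mirror with a checksum held in metadata, ZFS-style."""
    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]   # two-way mirror
        self.checksum = hashlib.sha256(data).hexdigest()   # lives with the pointer

    def read(self) -> bytes:
        for i, copy in enumerate(self.copies):
            if hashlib.sha256(copy).hexdigest() == self.checksum:
                # Self-heal: rewrite every other copy from the verified one.
                for j in range(len(self.copies)):
                    if j != i:
                        self.copies[j][:] = copy
                return bytes(copy)
        raise IOError("all copies failed checksum")

blk = MirroredBlock(b"important data")
blk.copies[0][0] ^= 0xFF                  # silent corruption on the first disk
assert blk.read() == b"important data"    # caught and served from the mirror
assert blk.copies[0] == blk.copies[1]     # corrupted copy was repaired
```

The key point is that the disk's own answer is never trusted: the data must match a checksum stored one level up, which is what catches silent bit rot that plain RAID would happily return.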
- biggest issue: licensing
- Sun was trying to stay clear of the GPL and keep other companies from directly shipping ZFS, so it used the GPL-incompatible CDDL
- Oracle now owns ZFS and offers first-party support on Solaris
- Oracle also maintains Btrfs support
- OpenZFS
- forked from the last open-source version of ZFS
- under the CDDL
- used to be better supported on the BSDs
- BTRFS
- has made a few filesystem advancements since ZFS
- Great for home labs
- biggest problem: no stable RAID5/6, which might never be supported
- Better Linux integration: no need to worry about loading out-of-tree kernel modules
- CEPH
- Different take on filesystems
- only 6-9 servers with CEPH under LUG jurisdiction
- Treats all disks in the cluster as one giant group
- Overkill for our purposes
- the cluster self-manages
- common for VM images
- can pick and choose the redundancy per pool
LUG News
- CypherCon is happening soon
- Won't be reimbursed by RED TEAM or the college
- happening in Milwaukee
- provided two free hotel rooms
- Got a subnet from IT
- Total of 30-ish public addresses
- can use it to reverse-net for people who don’t have their own public IP addresses
- potential Zigbee talk
- hosting an access control booth to do OSDP challenges
- Tunnel Bob
- Some random guy keeps breaking into the University of Wisconsin-Madison tunnels
- Cops have chased him through the tunnels and have yet to apprehend him
- Guys at MIT did something similar in the '80s, mapping out the steam tunnels ("vadding"/"tunneling")
- "vadding" is mentioned in the Jargon File on most systems; that's where the term comes from
- Mentions "elevator rodeo"
- "Watch out for elevator counterweights"
- Idea for making cards that are LUG-themed
- Trying to get the equipment for LUG shirts