Minutes 2026-01-22

Meeting Minutes 01/22/26

  • Scott Raiford presenting on how to use Kubernetes
  • A little banter about the meeting and how LUG runs :P

Scott's Story:

  • At a previous job they lost a guy who knew a lot of the systems. There's one system where they have to shut everything down and bring it back up 15 minutes later.
  • The next guy comes in to see how the code worked. The code is full of message boxes that say "I am here" and then it SEGFAULTS.
  • There was no timing logic in the script at all; he had conned the company into paying him :P

The Presentation:

  • Kubernetes allows the hosting of a lot of software on very small amounts of hardware
    • Uses containers very efficiently to make this work
  • Linux has the kernel, and on top of that the userspace, which is where your programs live.
    • Userspace programs ask the kernel to do things like file access or sending packets.
    • They will also ask the kernel about network adapters or other processes on the system
  • A container is a namespace for processes
    • If the kernel is asked what processes are running, it lies and says only the namespace's programs are running
    • Multiple namespaces are possible
    • Keeps applications from interfering with each other
  • Kubernetes is solving the problem of managing containers in a scalable manner
    • Example:
      1. 3 nodes in a cloud
      2. nginx is running on node 1, gateway on nodes 2 and 3
      3. node 1 dies
      4. nginx is moved into another node
      5. when that other node dies, nginx is moved again
    • This is what Kubernetes does, allowing things to keep running even if hardware fails.
    • The apiserver knows the state we want (a set of software that should be running); if there's a mismatch with what's actually running, Kubernetes changes things to make them the way they should be.
      • The apiserver and its database are redundant, so the control plane is resilient
    • istio: routes traffic based on hostname, etc. (a rough example appears after this list)
    • Controllers can be deployed as pods. apiserver itself is a container
    • Working example:
      • A Containerfile defines what is in the container, what is installed, what is exposed, etc.
      • Trivia game needs to be run on a node, with a service that passes traffic into it
      • Write YAML to describe how to actually run the software and how to pass traffic into it (a hedged sketch of such manifests follows this list)
        • Push the YAML to the apiserver; it gets stored in etcd and acted on by Kubernetes
      • The YAML states how many copies of the software should run
      • A controller configures things to keep them running according to the spec; this can have side effects.
      • If any node in the cluster gets traffic on the service's port, it is forwarded through a virtual network to the trivia app
      • A change in the config will trigger a rebuild of the pods (groups of containers)
      • several minutes of troubleshooting have been omitted from these minutes
      • 19:54: IT WORKS!!
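
A minimal sketch of what the manifests from the working example typically look like, using a hypothetical trivia app. The image name, labels, ports, and replica count are placeholders for illustration, not the actual values used in the demo.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: trivia
      spec:
        replicas: 2                  # desired state: how many copies should run
        selector:
          matchLabels:
            app: trivia
        template:
          metadata:
            labels:
              app: trivia
          spec:
            containers:
              - name: trivia
                image: registry.example.com/trivia:latest   # placeholder image built from the Containerfile
                ports:
                  - containerPort: 8080
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: trivia
      spec:
        type: NodePort               # traffic hitting this node port on any node is forwarded to a trivia pod
        selector:
          app: trivia
        ports:
          - port: 80
            targetPort: 8080
            nodePort: 30080

Pushing these to the apiserver (for example with kubectl apply -f) records the desired state in etcd; controllers then create the pods and keep the replica count satisfied, rescheduling onto healthy nodes if one dies.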
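
For the istio point above, a rough idea of hostname-based routing with a VirtualService; the hostname and service name are assumptions for illustration.

      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: trivia
      spec:
        hosts:
          - trivia.example.com        # requests for this hostname...
        http:
          - route:
              - destination:
                  host: trivia        # ...are routed to the trivia Service above
                  port:
                    number: 80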

Extra Little Notes

  • Each control node runs an apiserver
  • There is a kubernetes service; anything that wants to make a request can ask it, and the request will be load-balanced to one of the apiservers
  • Meant to handle failure: if one node goes down, others can pick up the slack
  • Outside of Kubernetes, you can run a few instances of software like HAProxy so that when one dies you still have load-balancing.
  • Kubernetes will constantly try to repair the cluster if things are killed