distortion park
Apr 25, 2011


helm values files can get a bit complex sometimes, so it can be nice to template them out as well for maximum simplicity and ease of use

dads friend steve
Dec 24, 2004

that is, no poo poo, what our Platform Team decided to do. the template files are loving ghastly and the sole guy who built it left, so it’s unmaintainable

distortion park
Apr 25, 2011


brb, just checking out my worker's taints

Progressive JPEG
Feb 19, 2003

putting the nerd in containerd

git apologist
Jun 4, 2003

echinopsis posted:

sounds neat

it isn’t

echinopsis
Apr 13, 2004

by Fluffdaddy
sounds poo poo then

fresh_cheese
Jul 2, 2014

MY KPI IS HOW MANY VP NUTS I SUCK IN A FISCAL YEAR AND MY LAST THREE OFFICE CHAIRS COMMITTED SUICIDE
are polling loops just the default way to manage internet scale distributed clusters nowadays??

what happened to pub/sub queue designs?? i mean openstack is a colossal pile of crap for a lot of reasons but they did at least base the cluster comms on a mq topology.

why the hell does an idle 3 controller and 2 worker kubernetes cluster consume 6 cores of cpu capacity while doing nothing?? is there really that much work involved in understanding how much nothing is happening??? wtf?

distortion park
Apr 25, 2011


I'm not an expert, but IME pub/sub stuff sucks for synchronization. If you mess up handling a message you still need some way to eventually do the thing - so you end up building a reconciliation system anyway. The watch-loop stuff just goes straight to the reconciliation-based system.

distortion park
Apr 25, 2011


really want to emphasize how far i am from being in any way knowledgeable about this though

Progressive JPEG
Feb 19, 2003

for a queue you'd probably just want to run kafka but it's a memory hog and a pita to tune/maintain once you get significant load going through it

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

fresh_cheese posted:

why the hell does an idle 3 controller and 2 worker kubernetes cluster consume 6 cores of cpu capacity while doing nothing?? is there really that much work involved in understanding how much nothing is happening??? wtf?

sounds like you hosed something up bud

fresh_cheese
Jul 2, 2014

MY KPI IS HOW MANY VP NUTS I SUCK IN A FISCAL YEAR AND MY LAST THREE OFFICE CHAIRS COMMITTED SUICIDE

CRIP EATIN BREAD posted:

sounds like you hosed something up bud

that is highly likely

I should let the smart cloud people handle this


oh wait...

I HAVE GOUT
Nov 23, 2017
kube owns, and it's multiplied my productivity because of how easy it is to deploy a new project.

p much all you need to know is pods, deployments/statefulsets/daemonsets, services, and ingress. and then u copy paste yamls to do most of what u want. metallb too if ur baremetal (the only way to run bc cloud costs are unreasonable for personal projs)
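
for reference, a minimal sketch of the kind of yaml that gets copy-pasted around (all names and the image are placeholders, not anything from this thread):

code:
# a Deployment plus a Service in front of it; everything named here is a placeholder
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80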

the only problem ive had so far is cloud storage. nfs/samba are complete poo poo for anything besides minio. for any db you pretty much have to deploy directly onto a specific node's ssd using hostpath. which is risky unless you replicaset it onto three nodes total for uptime.
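
roughly what that hostPath pinning looks like (the node label value and paths here are hypothetical):

code:
# pin a db pod to one specific node and mount a directory on its ssd
# (the hostname and path are made up for illustration)
apiVersion: v1
kind: Pod
metadata:
  name: db-0
spec:
  nodeSelector:
    kubernetes.io/hostname: node-with-ssd    # the one node with the fast disk
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      hostPath:
        path: /mnt/ssd/postgres              # directory on that node's disk
        type: DirectoryOrCreate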

ive tried ceph multiple times, but its a complete joke. i might give up and do gluster, but idk. i really just want a zero maintenance file storage thats decent. and everything ive looked at so far wants u to waste ur life janitoring it

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

rook-ceph seems fine to me

Progressive JPEG
Feb 19, 2003

can use local-path-provisioner instead of hostPath, much less scary while being functionally equivalent

meanwhile for replicated storage ive been using longhorn and its worked fine even with my 4gb rpi nodes. but ive just used it for smaller stuff like configs where its not too io-intensive and where i don't want the pod to be stuck in pending if the host node dies
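
from the claim side both options look the same, just a different storageClassName - "local-path" is local-path-provisioner's stock class and "longhorn" is longhorn's:

code:
# identical shape either way; swap the storageClassName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # or "longhorn" for the replicated option
  resources:
    requests:
      storage: 1Gi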

suffix
Jul 27, 2013

Wheeee!
i'd use gce-pd or aws-ebs for db storage - i was surprised how few problems we had with gke storage considering the whole distributed stateful system thingie
though preferably i'd provision a managed db
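
the claim side is the usual PVC, just pointed at the cloud provisioner's class - class names vary by cluster, e.g. gke's legacy default "standard" is gce-pd backed:

code:
# cloud-disk-backed claim; storageClassName depends on the cluster setup
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard    # check `kubectl get storageclass` for your cluster's names
  resources:
    requests:
      storage: 20Gi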

Progressive JPEG
Feb 19, 2003

oh yeah if you're running it on one of them then just use their stuff, the above recommendations assume onprem

Qtotonibudinibudet
Nov 7, 2011



Omsk halfwit, tell me, are you a junkie? I just live somewhere around there too, we could do drugs together
today's kubernetes:

Helm can install ValidatingAdmissionWebhooks that validate Secrets. When it does this, it's usually also installing the Pod that runs the webhook service. If that Pod is not running, all attempts to validate resources will fail, and Kubernetes will reject the change. Helm tries to update a Secret containing release info immediately after deploying the Pod that must be running for Kubernetes to allow that change, and it does not retry if that update fails. When the update does fail, Helm then refuses to modify the release at all, because the previous action is still "in progress" according to the update it made to its metadata Secret before deploying the very thing that prevented it from updating that Secret.

thanks, Kubernetes. thubernetes

Nomnom Cookie
Aug 30, 2009



validating and mutating webhooks are a great way to brick your cluster. I guess they do other stuff too but I’ve mostly observed them causing problems

Progressive JPEG
Feb 19, 2003

i think it says a lot about how much helm sucks when everyone is now just DIYing their own operators that do nothing other than deploy a service, rather than continuing to deal with helm's bullshit

i still sparingly use helm but just because i don't quite hate it enough to bother with figuring out a replacement yet

imo the best way to use it is "helm template | kubectl apply -f -" whenever possible, because helm's own handling of updates is super broken, e.g. treating updates to specs as purely additive. handling that part with "kubectl apply" avoids the broken logic in helm. but "helm template" itself will silently fail if you want to use lookup calls. it will treat the lookup items as not found rather than just safely failing the operation like you'd expect

now that i've written all this i am really leaning towards dropping helm since i only use it for:
- random one-time generated internal secrets via this one weird trick that helm devs hate (silently fails with "helm template" per above; see the sketch after this list)
- templating fixed/external secret values out into a separate yaml file that i keep in a password manager
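
the trick, roughly (a hypothetical chart template - under "helm template" the lookup silently comes back empty, so it regenerates instead of reusing):

code:
# templates/secret.yaml -- sketch of the generate-once trick:
# reuse the existing value on upgrade, otherwise generate a fresh one.
# under "helm template" the lookup silently returns nothing, so this
# regenerates the secret on every render instead of failing loudly
{{- $existing := lookup "v1" "Secret" .Release.Namespace "internal-secret" }}
apiVersion: v1
kind: Secret
metadata:
  name: internal-secret
type: Opaque
data:
  password: {{ if $existing }}{{ index $existing.data "password" }}{{ else }}{{ randAlphaNum 32 | b64enc }}{{ end }}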

Progressive JPEG
Feb 19, 2003

Nomnom Cookie posted:

validating and mutating webhooks are a great way to brick your cluster. I guess they do other stuff too but I’ve mostly observed them causing problems

for validators at least, might be able to use "failurePolicy: Ignore" to fail open if the pod is down. for example ive been using (and happy with) opa/gatekeeper and that's their default
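
on the webhook config itself it's one field - a trimmed sketch, all names hypothetical:

code:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validator            # hypothetical
webhooks:
  - name: validate.example.com
    failurePolicy: Ignore            # fail open: if the webhook pod is down, admit anyway
    clientConfig:
      service:
        name: example-webhook-svc    # hypothetical service in front of the webhook pod
        namespace: default
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["secrets"]
    sideEffects: None
    admissionReviewVersions: ["v1"]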

idk about mutators, they always seemed like sort of a bad idea since suddenly it isn't WYSIWYG anymore but i haven't really seen much of them either

Qtotonibudinibudet
Nov 7, 2011



Omsk halfwit, tell me, are you a junkie? I just live somewhere around there too, we could do drugs together

Nomnom Cookie posted:

validating and mutating webhooks are a great way to brick your cluster. I guess they do other stuff too but I’ve mostly observed them causing problems

ideally, if you need to validate a non-CRD resource, you use them with label selectors to not validate random stuff you'll never touch anyway. we, stupidly, use the presence of a specific key in a Secret for our filtering, so we can't do that. fortunately, you can also do "!value" in label selectors, so you can exclude Helm's Secrets.
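
concretely, something like this on the webhook - helm 3 labels its release secrets with "owner: helm":

code:
# excerpt from a webhook config: skip anything helm owns
objectSelector:
  matchExpressions:
    - key: owner
      operator: NotIn
      values: ["helm"]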

i just wish operators didn't need an actual operator deployment--you'd think Red Hat would have learned from everyone hating Tiller and Helm 3's (actually well done) state management. you should be able to just run operator CLI poo poo locally and have it vomit out YAML to apply, more or less. instead they've exacerbated the problem by making everyone roll their own Tiller

Helm's additive/3-way diff thing is actually useful, but it shouldn't need to be. one of the shittier things about templates for users is that if you want to add in a field to some templated resource, it kinda just has to be in the template and exposed by values.yaml. the net effect of this is that your values.yaml slowly grows to just be the entire API spec for any templated resource, but organized differently than the spec for complicated historical reasons. the 3-way diff means that you can technically get around this, since your changes persist through upgrades despite not being in the template, but it's also poo poo because it's tied to the release and there's no standard way to persist it.

the actual solution would be first-class support for kustomize, which, while it has its own serious limitations, is real good at taking a diff and applying it to some resource. Helm kinda pretends to have this by allowing you to run arbitrary post-processing scripts on template output, but that's more annoying to set up. you should be able to just provide the static diff files
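
what the static-diff version could look like if you glue the two together by hand today (a hypothetical layout):

code:
# kustomization.yaml -- hypothetical: patch helm's rendered output directly
# instead of threading the field through values.yaml
resources:
  - rendered.yaml          # output of `helm template`
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          limits:
            memory: 512Mi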

neosloth
Sep 5, 2013

Professional Procrastinator
I thought kustomize was bad and then I had to write a helm chart and kustomize is good enough actually

Nomnom Cookie
Aug 30, 2009



Progressive JPEG posted:

for validators at least, might be able to use "failurePolicy: Ignore" to fail open if the pod is down. for example ive been using (and happy with) opa/gatekeeper and that's their default

idk about mutators, they always seemed like sort of a bad idea since suddenly it isn't WYSIWYG anymore but i haven't really seen much of them either

linkerd provides a mutating webhook and that works fine. it's not modifying your containers or anything

Progressive JPEG
Feb 19, 2003

sharding? more like sharting

Qtotonibudinibudet
Nov 7, 2011



Omsk halfwit, tell me, are you a junkie? I just live somewhere around there too, we could do drugs together
oh boy! work got us free subscriptions to professional development content and there's a "Certified Kubernetes Application Developer" course!

wait, no, this course covers "how to write a Deployment spec in a not stupid way" and not "writing custom controller go code better and understanding the kubernetes libraries"

eh, well, maybe the CKA course will be of some interest for learning something about the ops side that i don't deal with daily

current annoyance/dilemma, which i think i've brought up before, is wanting to migrate from using Helm to an operator for our preferred deployment system. i am mad that the operator framework provides no option for running tasks outside of an application in the cluster, as if it were fully impossible to run something to manage Deployments via an external client, maybe utilizing static internal cluster state info

Getting rid of Tiller without loss of functionality was the best thing Helm did, but operators are essentially stuck in a world where you not only still need to run Tiller, you need to run a different Tiller instance for each application, with the Tiller version fully under the control of app devs. hopefully they keep it up to date, because users sure as hell can't!

Nomnom Cookie
Aug 30, 2009



operator framework doesnt support it because its a pretty deranged thing to want

Nomnom Cookie
Aug 30, 2009



like when you say "maybe utilizing static internal cluster state info" i start thinking that an operator probably isnt what you actually want to make, because the whole point of an operator is that it sits around watching events on a resource and reacting to them. if you don't care about getting updates from the apiserver then what you're making is not an operator more or less by definition

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
I’ve read this entire thread and I still don’t know what a kubernete is

akadajet
Sep 14, 2003

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

it's when u like cloud computing so much you run your own virtualized cloud on top of the one made and maintained by professionals

graph
Nov 22, 2006

aaag peanuts

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

a miserable pile of docker containers op

Nomnom Cookie
Aug 30, 2009



imagine a container crashing on a worker node, forever. that's the kubernetes vision of the future

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

akadajet posted:

it's when u like cloud computing so much you run your own virtualized cloud on top of the one made and maintained by professionals

sometimes it runs on top of the one made and maintained by your own it jackasses instead

akadajet
Sep 14, 2003

lick the taint of my pod

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

you know how a mainframe runs batch jobs to do transaction processing and it’s all coordinated by a job control language that describes the data flow so the OS can maximize the throughput

it's basically like that, except reinvented from first principles by UNIX nerds for doing web poo poo and implemented via the UNIX philosophy of "string together lots of components that do only one thing in a janky-rear end way that doesn't quite fit the way the other components work, so you need to introduce more points of failure in between to translate between them" and in theory the system will ensure scalability

what I’m saying is that the UNIX folks have never bothered to learn about how mainframe and minicomputer transaction processing and database systems work, and have had to reinvent all of that stuff poorly, often without the OS support that enables the best performance and compatibility

DEC VMS, HP MPE, Stratus VOS, the various IBM mainframe and midrange operating systems, etc. are really interesting to learn about without applying “UNIX is better” prejudice, especially in light of how you might use them to implement applications in the modern style

Progressive JPEG
Feb 19, 2003

it's an API and reference implementation for running things in containers on other computers

the API acts like a config spec where you say what you want things to look like, then the implementation tries to get it there

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

the problem it's solving is you have a bunch of servers and you want to schedule some processes amongst them, but you don't really care which server each goes to and you want it to handle failing nodes or processes automatically. also you want them all to be isolated (this is done using containers)

the non-trivial parts of this are generally things like persistent storage and setting things up so the processes can all talk to each other over IP (each one gets its own IP address)

ate shit on live tv
Feb 15, 2004

by Azathoth
Don't forget that whoever implemented your docker orchestration system doesn't understand that routing is a thing so all the IPs have to be in the same network. This is why you can only ever have one Availability Zone in AWS, it's just impossible to work in any other way.

Nomnom Cookie
Aug 30, 2009



ate shit on live tv posted:

Don't forget that whoever implemented your docker orchestration system doesn't understand that routing is a thing so all the IPs have to be in the same network. This is why you can only ever have one Availability Zone in AWS, it's just impossible to work in any other way.

this is a weirdly specific gripe that i dont understand

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

its a bunch of golang and yaml meant to abstract away the chore of scheduling containers onto a server cluster. its poo poo
