|
i use helm templates but only for two things:

- generating an arbitrary random token for a Secret if the Secret doesn't already exist, for things like internal creds between services
- embedding config files into configmaps without needing to keep a nested copy in the configmap yaml

and i never use other people's helm charts because they're all the union of every possible option that every passerby needed so that it would run on their SPARC cluster with 7 bit bytes or whatever. if a helm chart is the only option then i'll render it to yaml with default settings and then edit/use the result, because otherwise it's hard to tell wtf the thing is doing when it's 89% templating logic
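
the configmap-embedding pattern is roughly this, sketched with illustrative names (a `files/app.conf` sitting in the chart next to `templates/`):

```yaml
# templates/configmap.yaml -- the config lives as a real file in the
# chart and gets pulled in at render time, no nested copy in the yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  app.conf: |-
{{ .Files.Get "files/app.conf" | indent 4 }}
```

note the `.Files.Get` line deliberately starts at column 0, letting `indent 4` produce all of the leading whitespace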
|
# ¿ Jul 15, 2021 01:27 |
|
|
# ¿ Apr 27, 2024 09:49 |
|
maybe go one level up with containerd apis to start with: https://containerd.io/docs/getting-started/
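
if you just want to poke at containerd before writing against the api, the bundled `ctr` client is the quickest way in. a rough sketch (needs a running containerd daemon and root; image choice illustrative):

```shell
ctr images pull docker.io/library/redis:7-alpine   # refs must be fully qualified
ctr run --rm -t docker.io/library/redis:7-alpine demo-redis
ctr containers ls
```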
|
# ¿ Jul 20, 2021 13:49 |
|
Fortaleza posted: You replace it with a well-disciplined puppet setup, making it so the only yamls are simple flat structures for hiera values.

puppet is also known for:
- laying off a bunch of people march 2020
|
# ¿ Sep 14, 2021 08:08 |
|
the reason to write an operator is resume driven development
|
# ¿ Oct 18, 2021 00:32 |
|
putting the nerd in containerd
|
# ¿ Nov 8, 2021 07:18 |
|
for a queue you'd probably just want to run kafka but it's a memory hog and a pita to tune/maintain once you get significant load going through it
|
# ¿ Nov 12, 2021 20:43 |
|
can use local-path-provisioner instead of hostPath, much less scary while being functionally equivalent

meanwhile for replicated storage i've been using longhorn and it's worked fine even with my 4gb rpi nodes. but i've just used it for smaller stuff like configs where it's not too io-intensive and where i don't want the pod to be stuck in pending if the host node dies
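
switching is basically just a storageClassName away; a minimal sketch assuming the stock local-path-provisioner install:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-config
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # dynamically provisioned, but data still lives on one node's disk
  resources:
    requests:
      storage: 128Mi
```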
|
# ¿ Nov 13, 2021 02:14 |
|
oh yeah if you're running it on one of them then just use their stuff, the above recommendations assume onprem
|
# ¿ Nov 16, 2021 08:08 |
|
i think it says a lot about how much helm sucks when everyone is now just diying their own operators that do nothing other than deploy a service, rather than continuing to deal with helm's bullshit

i still sparingly use helm, but just because i don't quite hate it enough to bother with figuring out a replacement yet

imo the best way to use it is "helm template | kubectl apply -f -" whenever possible, because helm's own handling of updates is super broken, e.g. treating updates to specs as purely additive. handling that part with "kubectl apply" avoids the broken logic in helm. but "helm template" itself will silently fail if you use lookup calls: it treats the lookup items as not found rather than safely failing the operation like you'd expect

now that i've written all this i am really leaning towards dropping helm since i only use it for:

- random one-time generated internal secrets via this one weird trick that helm devs hate (silently fails with "helm template" per above)
- templating fixed/external secret values out into a separate yaml file that i keep in a password manager
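
for the curious, the one weird trick is roughly this (secret name illustrative). with a real `helm install`/`helm upgrade` the lookup finds the existing Secret and reuses the token; with `helm template` there's no cluster to look at, so it silently regenerates on every render:

```yaml
# templates/secret.yaml -- generate the token once, then keep reusing it
{{- $existing := lookup "v1" "Secret" .Release.Namespace "internal-creds" }}
apiVersion: v1
kind: Secret
metadata:
  name: internal-creds
data:
  token: {{ if $existing }}{{ index $existing.data "token" }}{{ else }}{{ randAlphaNum 32 | b64enc }}{{ end }}
```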
|
# ¿ Nov 23, 2021 23:17 |
|
Nomnom Cookie posted: validating and mutating webhooks are a great way to brick your cluster. I guess they do other stuff too but I’ve mostly observed them causing problems

for validators at least, you might be able to use "failurePolicy: Ignore" to fail open if the webhook pod is down. for example i've been using (and been happy with) opa/gatekeeper and that's their default

idk about mutators, they always seemed like sort of a bad idea since suddenly it isn't WYSIWYG anymore, but i haven't really seen much of them either
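
the fail-open bit is one field on the webhook registration; a minimal sketch with made-up service/rule names:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-policy
webhooks:
  - name: validate.demo.example.com
    failurePolicy: Ignore   # admission proceeds if the webhook is unreachable; Fail blocks everything
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: demo-webhook
        namespace: demo
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```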
|
# ¿ Nov 23, 2021 23:44 |
|
sharding? more like sharting
|
# ¿ Jan 6, 2022 03:28 |
|
it's an API and reference implementation for running things in containers on other computers. the API acts like a config spec where you say what you want things to look like, then the implementation tries to get it there
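
e.g. the config-spec part of "i want three copies of this image running somewhere" is just:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3          # the desired state; controllers create/restart/reschedule to match it
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: demo
          image: nginx:1.27
```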
|
# ¿ Jan 7, 2022 00:23 |
|
ate poo poo on live tv posted: I pipe all my bash commands into nc which connects to other servers running nc -l which is piped into bash. This is my orchestration strategy.

you might like dask
|
# ¿ Jan 8, 2022 04:32 |
|
assuming it's some cloud stuff, they should just launch multiple actually-isolated clusters. this also applies to people who think they want a single cluster spanning multiple AZs
|
# ¿ May 23, 2022 09:47 |
|
oh has anyone tried azure k8s? i assume it's trash but figure it's good to check how bad
|
# ¿ May 23, 2022 09:48 |
|
imo helm chart installs are extremely bad and dumb, and all anybody really wants out of helm is some semi-reasonable yaml templating. the half-assed package management stuff the helm devs keep trying to shoehorn into there just makes it more cumbersome without providing any of the benefits that you'd expect from a real package manager, like managed updates/upgrades. so at that point why even bother dealing with that

imo the pro move is to pipe the output of "helm template" into regular "kubectl apply" and sidestep helm's logic for that entirely, using it only for a template rendering stage - but this doesn't work if you're doing something involving "lookup", like one-time random secrets. the built-in "helm install" implementation has a bunch of bizarre corner cases around e.g. subtractive edits that break things in hard to detect ways. meanwhile in a CD context treating the template/apply as separate stages also lets you shove the rendered output into source control
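
concretely the pro move is something like this (release/chart names illustrative), with the rendered file checked in so diffs show up in review:

```shell
helm template myrelease ./mychart -f values.yaml > rendered.yaml
git add rendered.yaml           # rendered output goes in source control
kubectl apply -f rendered.yaml  # helm's install/upgrade logic never runs
```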
|
# ¿ Jun 3, 2022 05:34 |
|
afaict the "standard" way to get away from the helm devs' bad opinions is to just write your own gosh darn operator+CRDs via kubebuilder or similar. that's a bridge too far for me rn but i may just give up and do that at some point because it doesn't look like there's gonna be a "helm except just the parts you want done well" tool anytime soon
|
# ¿ Jun 3, 2022 05:37 |
|
i'm running a $6 digitalocean vps with 1gb ram. originally planned to put a k3s server on it, but that consumed 400mb when empty/idle, with flannel/servicelb/traefik already turned off

decided to just run everything as plain "restart=always" docker containers orchestrated via terraform, and the combined system memory usage for 17 containers spanning a bunch of different stuff is around that same 400mb

sorta amazing how much overhead even a "minimal" k8s install creates
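
the terraform side of that is pretty small; a sketch using the kreuzwerker/docker provider (image choice illustrative, and `image_id` is the newer-provider attribute name):

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

resource "docker_image" "redis" {
  name = "redis:7-alpine"
}

resource "docker_container" "redis" {
  name    = "redis"
  image   = docker_image.redis.image_id
  restart = "always"   # dockerd itself handles the "orchestration"
}
```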
|
# ¿ Sep 8, 2022 14:55 |
|
i recently replaced a bunch of stuff that was deploying things via helm templates with terraform's k8s support and it's been overall a good move

tfstate management is of course an exercise left to the reader as usual, but everything else (templating, secrets management, one-off passwords for things, not leaving random old poo poo lying around as the deployment evolves, structure in general) is waaay nicer

can also deploy public/3rdparty helm charts directly from tf and that seems to work fine as well. was previously opposed to that when using helm directly, keeping copies of the full yamls in source control instead. but now tf just handles it transparently

one catch is tf is pretty bad at crd management because it tries to look up the cr even when the crd might not exist yet. the "official" solution is a separate preceding stage for just adding the crds. was able to avoid that in the one case where it was an issue by using the helm chart version of the thing
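
a sketch of what the replacement looks like, with made-up names: the hashicorp random + kubernetes providers cover the one-off password case, and helm_release covers 3rdparty charts

```hcl
resource "random_password" "token" {
  length  = 32
  special = false
}

resource "kubernetes_secret" "internal_creds" {
  metadata {
    name      = "internal-creds"
    namespace = "demo"
  }
  data = {
    token = random_password.token.result   # provider base64-encodes for you
  }
}

resource "helm_release" "thirdparty" {
  name       = "thirdparty"
  repository = "https://charts.example.com"   # placeholder repo
  chart      = "some-chart"
  namespace  = "demo"
}
```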
|
# ¿ Sep 8, 2022 15:42 |
|
the home shoestring trino cluster was running ubuntu 20.04 with k3s. this was sort of by accident, because the cluster started as a few 4gb rpi4s a couple years ago, at a time when ubuntu was producing prebuilt aarch64 rpi images that worked with k3s. the cluster grew organically from there, but it was all still managed by a couple ansible yamls: one for turning off ubuntu's endless poo poo and one for installing k3s onto there

given the situation it made sense to just have a clean slate when upgrading from 20.04 to 22.04. i got things mostly working after a couple hours but it was very unstable, with all pods randomly crashing with no logs, even in a stock/empty k3s cluster on a single machine. this instability turned into a weekend-consuming pita with nothing explaining or solving the problem, and i really didn't want to be dealing with it

there wasn't anything to lose at this point so i ended up just trying talos os. after a couple hours i had an empty talos cluster up with the instability fixed, and with my janitorial workload significantly reduced. being able to delete the aforementioned ansible yamls was very satisfying

one catch with talos is it wants a dedicated storage device to itself on each machine, while any persistent storage should go on separate devices. you can put stuff directly in the "ephemeral" partition within /var but that is easy to wipe in e.g. a talos upgrade if you aren't careful. probably a good idea to have separate devices anyway

but overall if you're wanting to do some on-prem k8s without regular effort to maintain the underlying os then talos seems real good so far. at least until their funding dries up or whatever their situation is
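
for flavor, the whole bring-up is a handful of talosctl calls against the machine's api (node IP is a placeholder), instead of ansible runs against a mutable os:

```shell
talosctl gen config my-cluster https://10.0.0.10:6443   # emits controlplane.yaml, worker.yaml, talosconfig
talosctl apply-config --insecure -n 10.0.0.10 --file controlplane.yaml
talosctl bootstrap -n 10.0.0.10 -e 10.0.0.10            # start etcd on the first control plane node
talosctl kubeconfig -n 10.0.0.10 -e 10.0.0.10
```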
|
# ¿ Sep 8, 2022 16:23 |
|
to clarify talos is basically an "appliance" linux distro whose sole purpose is to provide a k8s environment, managed via cli/api. i think it uses kubeadm underneath
|
# ¿ Sep 8, 2022 16:31 |
|
tbh i'd rather pay someone $6/mo than risk interacting with oracle for free
|
# ¿ Sep 8, 2022 17:49 |
|
for single node just use docker compose
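
i.e. something along these lines (images/ports made up), which buys restart-on-failure and a little dependency ordering without any control plane:

```yaml
# compose.yaml
services:
  redis:
    image: redis:7-alpine
    restart: always
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: always
    depends_on: [redis]
    ports:
      - "8080:8080"
```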
|
# ¿ Sep 11, 2022 19:42 |
|
1.24 of what
|
# ¿ Sep 30, 2022 17:49 |
|
oh god are you using aks
|
# ¿ Sep 30, 2022 17:50 |
|
|
BedBuglet posted: I've been spending the last two weeks having a special kind of hatefest at helm. Set up makefiles so that when a developer builds the repo it builds all their docker images tagged as developer images and pushes them to artifactory, then does a helm lint/template/package on their helm charts, updating the values.yaml, and pushes up into artifactory. Works great.

i've switched a bunch of helm stuff to terraform and it's been a lot smoother overall, despite the usual caveats around terraform itself like needing to deal with safe tfstate storage etc
|
# ¿ Jan 29, 2023 17:55 |