Qtotonibudinibudet
how do we not have a thread to complain about everyone's favorite YAML datacenter operating system

i am mad about red hat IBM whatever. i never want to hear about whatever loving "what if we made OUR OWN THING" bullshit IBM is pushing this month for something that already exists in the k8s ecosystem, just so they can make consulting dollars hawking it. just participate in the lovely OSS governance system everyone else has to

swear to god i am going to yeet the next salesperson who asks about certified operators into the sun. having to maintain separate builds to meet arbitrary redhat requirements and ensure that every container uses a blessed RHEL base image (entirely for the purpose of containing a single golang binary) is loving pointless beyond being able to tell technically naive customers that something is "certified"


Qtotonibudinibudet

MrQueasy posted:

aw poo poo, someone put quotes around the 'on' again.

accidental norway boolean is some good poo poo
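for anyone who hasn't been bitten: YAML 1.1 parsers treat a bunch of unquoted scalars (yes/no/on/off, including NO the Norway country code) as booleans, so an innocent-looking config like this made-up snippet goes sideways:

```yaml
# hypothetical config: YAML 1.1 implicit typing turns these into booleans
countries:
  - DE
  - NO           # congratulations, this list now contains `false`
proxy:
  enabled: on    # parses as `true`; fine until something expects the string "on"
  region: "NO"   # quoting is the fix, hence the joke above
```

(YAML 1.2 dropped most of this, but plenty of parsers in the wild still behave like 1.1, hence the periodic surprise)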

Qtotonibudinibudet

epitaph posted:

a team at my last job decided they’d be better off inventing their own orchestration instead of using kubernetes. last i checked they’d spent 4 months reinventing cron poorly on top of zookeeper.

moral of the story: kubernetes may have a lot of incidental complexity, but can be very helpful for building distributed systems automation.

hey theres like twenty companies out there that have the internal engineering muscle and use cases to get their zookeeper/nomad stuff right and useful. it's like how netflix and sony have viable in-house freebsd poo poo that works for them. there's probably also a ton of companies that /arent those companies/ and have one engineer on some sorta crazy vision quest trying to build their dream orchestration poo poo out of twigs and spit in the outback

could be worse, could be workin on some infra built on dc/os or ECS

godspeed to the devops architect dude i had one meeting with, who was working on utilizing some of our poo poo in ECS. at wework. in late 2019. he was an extremely chill dude, ill give him that


Qtotonibudinibudet

carry on then posted:

you, a moron: `kubectl exec -it my-lovely-pod-0 -- /bin/bash`

me, a genius: `oc rsh my-cool-pod-0`

my beef with openshift is that they seem to feel the need to NIH like loving everything and entirely control the product. i know dealing with k8s oss governance can kinda suck but goddamn you don't need to replace the ENTIRE user-facing part of the ecosystem (unless you intentionally want to do this to get lockin and consulting $$$ where you're the one billing all the consultants, which given IBM is... yeah). this poo poo probably could have been a kubectl plugin

dealing with the idiosyncrasies of EKS versus GCP versus roll-your-own is enough of its own workload but at least the UX part is the same, and i don't need to know about whatever OpenShift's replacement tooling or CRs are, esp since the OpenShift users seem to be less familiar with what they're doing and can't translate their questions to vanilla k8s terms at all. kubectl and a lot of the prominent stock tooling aren't so bad that they desperately need a replacement; they usually have enough people working with them that the UX pain gets hammered out over time

Qtotonibudinibudet

Bored Online posted:

you know what is worse than yaml? helm charts. here, write some loving yaml but also with go templating. loving hell

yah as someone who maintains a chart i can affirm that Helm is in many ways quite poo poo. really wish they had a proper kustomize integration so all the many many feature requests i get for "add this new deployment feature into the deployment" could be handled with "this doesn't need a loop or advanced template functionality, it is just stuffing a YAML blob into a defined place in another YAML blob, which is exactly what kustomize does, use that instead" but their solution for that is "uh you can run a bash script that invokes kustomize" which is poo poo.
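to illustrate what i mean (a sketch with made-up names, not our actual chart), the whole class of request is handled by a kustomize overlay on top of whatever the chart renders, no template changes needed:

```yaml
# kustomization.yaml: stuff a YAML blob into a defined place in another YAML blob
resources:
  - rendered.yaml                  # e.g. the output of `helm template`
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app               # hypothetical deployment name
      spec:
        template:
          spec:
            terminationGracePeriodSeconds: 120
```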

it's useful for de-duplicating poo poo for our 8 mostly-identical Services but other than that it's a hot mess.

also someone submitted a PR the other day that'd let you put a template inside values.yaml and render it inside one of the chart templates. i hate that this is even possible and i want to die.

Qtotonibudinibudet

Progressive JPEG posted:

i never use other people's helm charts because they're all the union of every possible option that every passerby needed

there's some very telling bit in a Helm commentary "why this feature is the way it is" post to the effect of "when we started this project, the idea was that you'd maintain your own chart for filling out the specifics of your environments, prod, test, etc. the author and user were expected to be the same person. in practice, we have wound up with chef recipes, where the users are using charts as a shortcut to not understanding how the app they want is deployed"

helm would probably be a much saner place if general user audience charts weren't really a thing

Qtotonibudinibudet

echinopsis posted:

Its time to be honest with you all


I don’t know what kuberneetus is

my version is that it's a datacenter operating system. all the traditional single-machine OS scheduling/hardware interaction/etc. stuff that you wouldn't want to deal with as an application developer and instead have the kernel do on your behalf, that's what kubernetes is supposed to handle for you, just for lots of networked machines.

i have tried to give this explanation to our salespeople and im pretty sure it doesn't really work because lol salespeople don't understand what an operating system is either.

Qtotonibudinibudet

refleks posted:

have you met financial services companies?

for them there's azure on-prem!


pointsofdata posted:

I don't really see the benefits of using kubernetes over say aws + terraform so far, it seems pretty similar for run of the mill business stuff

choice my friend, choice! a vibrant and diverse ecosystem of cloud computing management software is key to ongoing improvement through competition

for example, you can choose between an RBAC system that is practically useless in kubernetes and an RBAC system that is incomprehensible in AWS

Qtotonibudinibudet

eschaton posted:

why the gently caress are containers even a thing just use jails smdh

yeah just let me find the three engineers with extensive prod experience managing fleets of app instances using BSD jails. i assume they're all more beard than flesh and bone at this point

in unrelated news, the k8s blogs have some choice nuggets:

> The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them.

wait you mean users didn't get a resource that applies to pods via a binding to the pod's associated serviceaccount and only does anything when you enable the special resource admission controller?

i know there has to be some sort of "well, these are the tools we have currently built for these APIs in the kubelet security poo poo, so this is what we're using, gently caress if the ux makes no sense" reason behind why PSPs work this way, but still, lol

I JUST WANTED TO FORCE READ ONLY CONTAINER FILESYSTEMS GODDAMNIT KUBERNETES
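for the spectators, here's roughly what "force read only container filesystems" looks like under PSP (all names invented, the other required fields set to whatever passes validation):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: readonly-root
spec:
  readOnlyRootFilesystem: true
  # PSP makes you restate a bunch of unrelated policy too, or pods get rejected outright
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
---
# none of the above does anything until you grant `use` on the PSP to the pod's
# serviceaccount AND the PodSecurityPolicy admission plugin is enabled on the apiserver
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-readonly-root
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["readonly-root"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: use-readonly-root
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-readonly-root
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```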

Qtotonibudinibudet

Bored Online posted:

just copy what one of the better engineers does

there are no better engineers. we are but a morass of bad engineers; occasionally an okay enough idea bubble percolates up to the top of the swamp muck and pops, scattering scant detritus of goodness across our technology plain

Qtotonibudinibudet

dads friend steve posted:

literally nothing about k8s is simple. it cannot possibly be simple; there are a million moving parts to even the smallest cluster

breh how do you think a conventional single machine operating system and kernel stack works

it's not some magical rock you kick and say "SEND MY PACKET TO NETWORK ADDRESS" and "SCHEDULE MY PROCESS" and then have that magically be so

we just added another bullshit layer atop that, with a standard config language whose complexity is just shy of turing completeness

Qtotonibudinibudet

carry on then posted:

took a building operators workshop. knowing how to write one in go with the sdk seems handy enough but apparently people are turning ansible playbooks into operators which seems kind of unmaintainable?

Nomnom Cookie posted:

why on earth would you make an operator backed by ansible. just use ansible. the point of the operator is that you use go or something to do stuff your normal tooling can't handle well

it is the distant past of 2-3 years ago. you are red hat and want the k8s community to adopt your new doodad, which ideally means essentially writing a separate go app to manage your app (with a feature-barren, alpha-stability library, because it hasn't matured yet). this is a tough ask when there are a ton of alternatives. so, to entice people in, you offer two halfway versions of your thing that allow devs to take advantage of your platform without developing a new go app: instead, you can use a popular k8s packaging/lifecycle tool or a popular generic linux app provisioning system (which you conveniently control!) as a basis. many applications already have stuff written for one or the other, ready to adapt into an operator! this lower-barrier-to-entry option means more packages in your ecosystem (even if a lot of them are kinda poo poo), which is good for your salespeople when pitching openshift support contracts

i assume ansible operators are as silly as (or sillier than) helm operators, which basically have you set up a very privileged serviceaccount and a CRD so that a robot can obfuscate away running helm cli commands for you. the dev side is lol since you have to manage an extra CRD, roll out a new release for every chart version release (there's no mechanism to just say "use this chart version" at runtime; the specific version is baked into the operator), and deal with frequent library updates that don't really do anything for you. you get OLM, which mostly seems like a massively overengineered, unintuitive version of dependabot that breaks in weird ways
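for reference, the entire helm-operator "dev experience" is roughly a watches.yaml that maps a CRD you now own to a chart baked into the operator image, plus CRs whose spec gets passed through as values (sketch, group/kind made up):

```yaml
# watches.yaml baked into the operator image: one entry per CRD -> chart mapping.
# the chart version is whatever was built in; there's no "use chart X.Y.Z" at runtime
- group: example.com
  version: v1alpha1
  kind: MyApp
  chart: helm-charts/myapp
```

```yaml
# what users then create; the CR's .spec is handed to the chart as values, more or less
apiVersion: example.com/v1alpha1
kind: MyApp
metadata:
  name: demo
spec:
  replicaCount: 3
```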

as an added bonus, red hat have established a certification program that serves gently caress all purpose but is a requirement for providing your operator on red hat's airgapped network version of openshift. to do this, you need to maintain a separate version of your operator that is mostly identical to your other one but with subtle incompatibilities, so you can't easily maintain both from the same git repo. you also need to publish poo poo to red hat's docker registry, which is an unimaginable pile of garbage that fails to acknowledge pushes half the time. it's great fun.

Qtotonibudinibudet
today's kubernetes:

Helm can install ValidatingAdmissionWebhooks that validate Secrets. When it does this, it's usually also installing the Pod that runs the webhook service. If that Pod is not running, all attempts to validate resources will fail, and Kubernetes will reject the change. Helm tries to update a Secret containing release info immediately after deploying the Pod that must be running for Kubernetes to allow that change, and it does not retry if that update fails. If the update fails, Helm will refuse to modify the release after, because the previous action is still "in progress" per the update it made to its metadata Secret before deploying the thing that prevented it from updating that Secret.

thanks, Kubernetes. thubernetes

Qtotonibudinibudet

Nomnom Cookie posted:

validating and mutating webhooks are a great way to brick your cluster. I guess they do other stuff too but I’ve mostly observed them causing problems

ideally, if you need to validate a non-CRD resource, you use them with label selectors to not validate random stuff you'll never touch anyway. we, stupidly, use the presence of a specific key in a Secret for our filtering, so we can't do that. fortunately, you can also do "!value" in label selectors, so you can exclude Helm's Secrets.
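concretely, it's something like this in the webhook config to keep the validator's paws off helm's release Secrets (helm 3 labels them owner: helm; every name here is invented):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: my-secret-validator
webhooks:
  - name: validate.secrets.example.com
    # skip anything labeled owner: helm, i.e. helm's own release-tracking Secrets
    objectSelector:
      matchExpressions:
        - key: owner
          operator: NotIn
          values: ["helm"]
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["secrets"]
        operations: ["CREATE", "UPDATE"]
    failurePolicy: Fail
    clientConfig:
      service:
        name: my-webhook-svc
        namespace: default
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
```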

i just wish operators didn't need an actual operator deployment--you'd think Red Hat would have learned from everyone hating Tiller and Helm 3's (actually well done) state management. you should be able to just run operator CLI poo poo locally and have it vomit out YAML to apply, more or less. instead they've exacerbated the problem by making everyone roll their own Tiller

Helm's additive/3-way diff thing is actually useful, but it shouldn't need to be. one of the shittier things about templates for users is that if you want to add in a field to some templated resource, it kinda just has to be in the template and exposed by values.yaml. the net effect of this is that your values.yaml slowly grows to just be the entire API spec for any templated resource, but organized differently than the spec for complicated historical reasons. the 3-way diff means that you can technically get around this, since your changes persist through upgrades despite not being in the template, but it's also poo poo because it's tied to the release and there's no standard way to persist it.

the actual solution would be first-class support for kustomize, which, while it has its own serious limitations, is real good at taking a diff and applying it to some resource. Helm kinda pretends to have this by allowing you to run arbitrary post-processing scripts on template output, but that's more annoying to set up. you should be able to just provide the static diff files

Qtotonibudinibudet
oh boy! work got us free subscriptions to professional development content and there's a "Certified Kubernetes Application Developer" course!

wait, no, this course covers "how to write a Deployment spec in a not stupid way" and not "writing custom controller go code better and understanding the kubernetes libraries"

eh, well, maybe the CKA course will be of some interest for learning something about the ops side that i don't deal with daily

current annoyance/dilemma, which i think i've brought up before, is wanting to migrate from using Helm to an operator for our preferred deployment system. i am mad that the operator framework provides no option for running tasks outside an application in the cluster, as if it were fully impossible to run something to manage Deployments via an external client, maybe utilizing static internal cluster state info

Getting rid of Tiller without loss of functionality was the best thing Helm did, but operators are essentially stuck in a world where you not only still need to run Tiller, you need to run a different Tiller instance for each application, with the Tiller version fully under the control of app devs. hopefully they keep it up to date, because users sure as hell can't!

Qtotonibudinibudet

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

i tried explaining it to our salespeople via a metaphor: kubernetes does for a cluster of machines what an operating system does for a single one (sharing CPU time and memory space, providing hardware abstractions for applications), and i'm pretty sure our sales team has no better understanding of kubernetes as a result

Nomnom Cookie posted:

like when you say "maybe utilizing static internal cluster state info" i start thinking that an operator probably isnt what you actually want to make, because the whole point of an operator is that it sits around watching events on a resource and reacting to them. if you don't care about getting updates from the apiserver then what you're making is not an operator more or less by definition

for i guess what are essentially legacy reasons (building our initial k8s poo poo before operator framework existed) we have helm to deploy our application/CRDs/associated supporting resources and upgrade it to new versions, and then a separate controller to continuously handle transforming other resources into configuration. these could be theoretically combined into a single operator like what you're describing

the reason i'd like an operator that doesn't need to actively watch resources is to basically get something like the helm half, but without having to handle complex logic in go templates, because go templates are complete rear end to deal with and a massive pain to test. management of the application itself doesn't need to run all the time, just when users update app settings or upgrade to a new version.

that'd be useful both as a transition point to an operator that's doing both and for users that are real picky about what permissions you grant to service accounts

Qtotonibudinibudet

Hed posted:

i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is way more part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

jobs are, well, jobs: https://kubernetes.io/docs/concepts/workloads/controllers/job/

it spawns a pod and checks if it succeeds. if it fails, it tries again. if it doesn't fail, it marks itself done and does nothing more.

on a schedule, https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

it's jobs, but you get a new one every tick of the schedule

for triggers you generally need some sort of controller that will watch or poll the API and spawn a job when the trigger happens
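a minimal sketch (batch/v1, image and names made up), in case a concrete manifest helps:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"        # each tick spawns a Job, which spawns a Pod
  jobTemplate:
    spec:
      backoffLimit: 3          # retries before the Job gives up and marks itself failed
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: report
              image: registry.example.com/report:latest
              args: ["--since", "24h"]
```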

Qtotonibudinibudet

Scud Hansen posted:

Went to check the Operator SDK docs on github one day and they were gone, fused with the Red Hat Borg Cube. I went to the new URL for the docs, hosted by Red Hat, and it was pages full of "Lorem Ipsum" test text. Very cool!

the more i understand the ecosystem the more i struggle to understand what operator sdk actually does beyond being a rebranded kubebuilder

i guess OLM is there but it seems kinda annoying for no obvious purpose

this is, however, nothing in comparison to the actual red hat branded stuff you upload your operators to, which is an incredible combination of incomprehensible and broken

Qtotonibudinibudet
reviving my dead thread to wonder and/or complain about cluster admins just categorically denying any cluster-level permissions in kubernetes, an environment which explicitly has a number of cluster-level resources

what the gently caress bad thing is someone going to do with GET permissions on CRDs
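for context, the terrifying cluster-level grant in question is literally this (name hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-reader
rules:
  # read-only access to CRD definitions, i.e. the thing being categorically denied
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
```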

Qtotonibudinibudet

Scud Hansen posted:

They took away our admin access on the cluster and hired a guy just to admin it and he's the most disagreeable prick I've ever worked with. But now every time I need to do something I have to grovel to the cluster troll and beg him to run commands for me. I am a broken man

after many years of developers having free rein over their environments via containers, finally the natural order is restored and bofhs' god-given control over the platform is the law of the land again

Progressive JPEG posted:

assuming its some cloud stuff they should just launch multiple actually-isolated clusters

this also applies to people who think they want a single cluster spanning multiple AZs

please, the customers want a "single cluster" spanning multiple providers and think Ingress implementations will somehow provide this

no, none of these people have the slightest idea what SIG multi cluster is

Qtotonibudinibudet
i really do hate that every CD tool on the market has added a helm option and that they universally use helm template instead of helm install, each with its own idiosyncrasies

for the unfamiliar, the helm devs didn't really intend for the template command to be used this way (im not really sure _how_ they intended it to be used), so this does poo poo you probably don't want when actually deploying software, like:
- not applying a namespace
- rendering ALL templates simultaneously regardless of purpose, such as templates that should only render for tests and templates that should only render during certain lifecycle events
- not talking to the cluster at all, so that any conditionals for CRD or API version availability just always return false unless you manually instruct helm that these things are available. naturally not all of the CD tools set these automatically, nor do they let you provide arguments for calling helm
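as a concrete example of that last point, a pretty common chart conditional looks like this (sketch); under a bare `helm template` it silently evaluates to false unless the tool passes `--api-versions`/`--kube-version` through:

```yaml
{{- /* only render the ServiceMonitor if the API is actually available; with
       `helm template` there's no cluster to ask, so this is false by default */}}
{{- if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ .Release.Name }}-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
  endpoints:
    - port: metrics
{{- end }}
```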

this naturally results in an endless line of requests for snowflake toggles in values.yaml to adjust template behavior for some particular tool's bullshit. no, begone. file an issue with jenkins or argo and tell them to fix their poo poo. if they can't, let them fight it out with the helm devs to add the features they need. our config is already enough of a mess without adding 50 toggles that won't make sense to novices

Qtotonibudinibudet

Cerberus911 posted:

All those template problems have flags to change the behaviour and do what you want. If someone requests changes to a chart because of their template output, tell them to fix their poo poo.

they do, yes. sadly this cool code that builds the template command for you with a hard-coded subset of available arguments doesn't let you use them

this is more a complaint about dealing with OSS community bullshit. jenkins may be the party at fault here but that doesn't stop people telling me to paper over their failings and complaining when i tell them to submit an issue elsewhere because other chart authors have already acquiesced to just duplicating features

Qtotonibudinibudet

nudgenudgetilt posted:

very few orgs can actually use that published $whatever without having to alter it to their needs. instead of maintaining simple internal whatever that describes just the organization's needs, they end up maintaining a fork of a complex whatever that tries to do everything for everyone.

forking and modifying was the ostensible original use model for helm, and it probably would have been okay-ish, but nobody does it, thanks to a combination of "UGH ITS SO MUCH EXTRA WORK" (it's not, unless you're doing massive modifications to a bunch of core templates) and "i am babby who has never done a kubernetes, i just wanna fill in value and get Deployment, no i do not understand what a Deployment is please i do not want to learn new things". so every official chart ends up being a mess of every possible feature ever, to the point of fully replicating every field of every resource it spawns in values.yaml, but organized differently than the underlying resources because lol organic growth. it's a crime that helm lacks first-class support for kustomize to let it handle the 85% of requests that are "just add this additional field to the Deployment or w/e" without writing a post-processing script. after spurning all these alternatives, everyone proceeds to complain that the values.yaml is too complex

carry on then posted:

i'm glad we transitioned to publishing operators for our own products

months deep into trying to replace a chart with one, i am unconvinced, at least for basic app deployment poo poo (once you're taking action based on custom resources it's a different story). now you have:

- the same values.yaml problem where it's each vendor organizing core k8s resources in their own way (i wanted to try and avoid this; the more cavalier engineer on the team has rammed through config design with the barest review possible to get ready for Big Sales Event, promising we can change them after. we won't)
- redoing all of Helm's state management from scratch, cause controller-runtime and kubebuilder aren't really prescriptive about how to do that, using something akin to Tiller, which everyone rightly wanted to get rid of
- an additional layer of red hat bullshit with unclear, conflicting documentation and guidance because there are apparently 3 people that understand the additional poo poo red hat added, all of whom have left the company. red hat is literally paying us to comply with all the extra openshift requirements, and still can't find someone who can answer our questions authoritatively. choice moments include:
-- someone acknowledging that the original OLM config design was poo poo, so they changed it, but offered no migration path (rather, they promised to find someone who could describe the migration path, and we never heard from them again). didn't hear back on recommended approaches for handling both a "community" and "certified" operator, where the design makes it functionally impossible to easily maintain both from the same git repo because they use mutually incompatible config instead of an overlay
-- an engineer saying something to the effect of "yeah, our validation servers just uh... break, a lot. you gotta just retry and open a support ticket if the retry just breaks it further"
-- a red hat person asking why a helm feature (we originally started with the comedy helm-based operator poo poo) wasn't working, when it wasn't working because red hat's docs say "you must override this in such a fashion that the standard approach doesn't work", with no response to our questions about what their recommendations were to make it less unintuitive. we've received this same question on three separate occasions

ultimately it's not really clear wtf operators offer that isn't just kubebuilder and controller-runtime (which, to be fair, do have a significant degree of involvement from red hat afaik). they add on OLM (poo poo? idk, at least from the app dev perspective idk what it's doing or how to best use it, and it's cumbersome to work with) and a CLI tool that i have no obvious use for (jfc just give me a flat file format) that makes backwards-incompatible changes every few versions

Qtotonibudinibudet

carry on then posted:

i mean this is the crux of it, it sounds like it's way overkill for what you're trying to do, but for us (delivering a java application server image that users are going to take and customize with their own config and apps, and allowing them to reconfigure on the fly to capture debug data) the extra flexibility of CRs and a running agent managing things is worth the extra complexity in development, because the end-deployer experience does get simpler.

we do also update configuration on the fly based on API resource changes, but via a controller that essentially predates the operator framework and controller-runtime. we've since ported it over to use controller-runtime. i was kinda expecting the operator framework stuff to actually provide something beyond what we were doing already, but nah, not really. it's just more of the same, with a lot of marketing fluff. we're now managing Deployments also (because we're implementing a standard that does require spawning Deployments when someone creates another API resource), but beyond being able to react to resource CRUD instead of requiring something external run "helm install" it doesn't seem like there's much on that side we couldn't do with Helm--the basic "fill this envvar with the name of some other resource you're creating" glue work is entirely doable with templates, even if actually writing the templates sucks

that last part matters a lot though--being able to use a proper type system, write unit tests, and get failure reports more useful than "couldn't parse the output YAML, good luck finding the source of the problems in the templates" is arguably far more useful than anything it's providing capability-wise

Qtotonibudinibudet
i love how the industry is so excited about kubernetes being so hot yet never understands poo poo about it. our product people have decided that we need to sell our operator somehow. attempts to explain that this is basically like trying to sell one of those windows installer exes separate from the program it's installing are falling on deaf ears

it makes even less sense given that we deal entirely with big enterprise contracts where the actual listed SKUs are basically window dressing to justify whatever price sales was going to charge anyway

Qtotonibudinibudet

nudgenudgetilt posted:

is argocd still the only gitops tooling with commit signature verification?

how is this not a standard feature by now? does everyone doing gitops just blindly trust that github will never get owned?

i mean, there's "github will never get owned" and "github will never get owned by someone who burns a github exploit on you"

Qtotonibudinibudet

Jonny 290 posted:

dont nest virtualization or containers unless you have a specific reason to, basically

our CI system:

kind running on VMs go brrrrrrrrrrrrrr

Qtotonibudinibudet
not sure which i hate more:

- Azure deciding as of 1.24 that HTTP LoadBalancer Services must have mandatory HTTP healthchecks that just default to "GET /", because it's a well-known part of the HTTP standard that you just have to respond with 200s to those requests, even though the standard Pod readiness/liveness mechanism is right there and was working fine before. to their credit, they at least let you configure the path (so they're not as boneheaded as GCP), but this isn't of much use for our HTTP service that doesn't allow any requests that don't present a client cert first. (see the sketch after this list)
- Azure support apparently not knowing about this at all and just saying "idk, no idea why it doesn't work anymore, just flip the appProtocol to tcp" like yeah, sure, we'd love to just change the setting that's been there forever (for everyone) to have other customers test how that changes behavior on other cloud providers
- Random sales engineer saying "just make it configurable!" like they aren't the same people constantly complaining how there are too many settings and we don't produce ready-made manifests for the exact config they happened to want that day
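for anyone following along, the field from the first bullet lives on the Service port (sketch, names made up; the probe-path override is an azure-specific Service annotation whose exact key i'm not going to misquote from memory, check the cloud-provider-azure docs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-mtls-frontend
spec:
  type: LoadBalancer
  selector:
    app: my-mtls-frontend
  ports:
    - name: https
      port: 443
      targetPort: 8443
      appProtocol: https   # on AKS >=1.24 this buys you an HTTP probe against "/";
                           # "flip it to tcp" is the support-suggested workaround
```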

Qtotonibudinibudet

Progressive JPEG posted:

oh god are you using aks

well, some of our customers are

they're not the worst ones though--AKS at least has a somewhat active OSS cloud provider repo and responded to the PR i opened to add an override (but has gone silent since saying "yep this'll work", so vov). today i encountered, wonder of wonders, someone using kubernetes on the loving IBM CLOUD (nee Bluemix)

unlike AKS, IBM Cloud does not make iffy assumptions about standard behaviors of HTTP applications. IBM Cloud simply rejects any LoadBalancer Service that sets appProtocol outright. yes, any value whatsoever. i guess some dumbfuck saw a standard field and, knowing that their infra couldn't do anything more complicated than a basic TCP or UDP load balancer, said "nuh uh, we're not gonna fail gracefully and just create a basic L4 LB following the protocol field anyway, we're gonna force you to remove that config". while they do have an OSS repo for their cloud provider, it has received an astonishing 0 issues and PRs over its 2 or so years of existence. this is simultaneously not encouraging and hilarious--IBM's cloud offering is so irrelevant that nobody even bothers to report problems despite it being poo poo

kudos to <other vendor in our space> who just accepted a PR to make this configurable, because why ask other vendors in the ecosystem to not be garbage when we can instead just turn the ostensibly vendor-agnostic parts of the system into a minefield of "you shouldn't actually need to change this, but you need this one weird trick if you use this vendor". im not sure why im surprised--it's basically the same as the rest of the software ecosystem--but it is grating when we get continuous complaints about "ugh too many config options i thought you were supposed to make this simple for us!" followed by "oh but could you add a config option for this thing that shouldn't need it? we need it for uh... reasons". the reason there are too many config options is u. or rather the reason is that some middle manager let IBM wine and dine them knowing that they were gonna jump ship and not have to deal with their lovely vendor choice

Qtotonibudinibudet
do not use namespaces for prod/test/dev separation, you will hate your existence. don't even have a separate prod and then a multi-environment non-prod cluster if you can avoid it. plenty of poo poo is cluster-wide and not really possible to isolate. namespaces are for (kinda lovely) isolation between applications and account permissions

afaik back in the day it was an official gke recommendation to even spin up separate clusters for some levels of application isolation cause hey, free control instances. then they started charging for control instances. oh no.

Qtotonibudinibudet
more fun helm poo poo:

someone tried to suggest that a values.yaml key use the `tpl` function when rendering its value for https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core because they have various environments with different topologies

clearly the only way to fill this is to inject arbitrary template output into the field, in case, idk, maybe you need the otherwise strongly-typed field to contain a value from template. like sure, maybe you might need to template TopologySpreadConstraint because maybe there's some unknown field inside that may contain _the entire universe_ as generated by a template, or a 🌴 emoji. who knows? 🌴🌴🌴🌴🌴

loving templates inside templates interpreting config values as templates so you can template while you template
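the requested pattern, roughly (not our actual chart), for anyone who wants to feel something:

```yaml
# chart template: treat the values entry itself as a template and render it with `tpl`,
# so values.yaml can contain more template. templates all the way down
spec:
  topologySpreadConstraints:
    {{- tpl (toYaml .Values.topologySpreadConstraints) . | nindent 4 }}
```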

never give an engineer a tool that can perform multiple functions ever. hammer nail nail hammer palm tree palm tree palm tree 🌴🌴🌴🌴🌴🌴

https://www.youtube.com/watch?v=k1Zwhi6ag7g

Qtotonibudinibudet

carry on then posted:

you need knative for that

nobody uses knative lol


Qtotonibudinibudet
poo poo you find in what are ostensibly supposed to be technical standards documents

quote:

We use the term “metaresource” to describe the class of objects that only augment the behavior of another Kubernetes object, regardless of what they are targeting.

“Meta” here is used in its Greek sense of “more comprehensive” or “transcending”, and “resource” rather than “object” because “metaresource” is more pronounceable than “metaobject”. Additionally, a single word is better than a phrase like “wrapper object” or “wrapper resource” overall, although both of those terms are effectively synonymous with “metaresource”.

if your copy has even a hint that you may need to consult a classics professor to understand how to implement a technology standard, please reconsider what you are writing

fire half the engineers and give me a squad of technical writers for the love of god

the same document proceeds into some sort of stage play between hypothetical users. it's like someone heard the joke about the paxos paper flying under the radar for decades because nobody gives a gently caress about hypothetical extinct greek island political systems and thought it was a template for success

  • Reply