distortion park
Apr 25, 2011


i followed a kubernetes tutorial and it didn't make me touch any yaml op

distortion park
Apr 25, 2011


I don't really see the benefits of using kubernetes over, say, aws + terraform so far; it seems pretty similar for run-of-the-mill business stuff.


I guess if you wanted to run it on your own hardware it would let you do that, but my impression is that most people do it in the cloud anyway

distortion park
Apr 25, 2011


i keep hearing k8s promoters at work describing doing things with k8s as "easy" or "simple" and feel like i'm living on another planet.

distortion park
Apr 25, 2011


i hate helm so, so much. Like wtf, at least if it was in code it would all be typechecked.
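
(for illustration only - a rough sketch of what "config in code" could look like: plain Python dataclasses rendered to YAML, so a type checker catches the obvious mistakes instead of helm silently templating garbage. the names and fields here are made up, not a real chart)

code:
# hypothetical sketch: typed config instead of helm templates
# (illustrative only -- these names are made up, not a real chart)
from dataclasses import dataclass
import yaml  # pip install pyyaml

@dataclass
class Container:
    name: str
    image: str
    port: int  # a type checker flags port="8080" here; helm would not

@dataclass
class Deployment:
    name: str
    replicas: int
    container: Container

    def to_manifest(self) -> dict:
        # build a plain dict in the shape of an apps/v1 Deployment
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": self.name},
            "spec": {
                "replicas": self.replicas,
                "selector": {"matchLabels": {"app": self.name}},
                "template": {
                    "metadata": {"labels": {"app": self.name}},
                    "spec": {
                        "containers": [{
                            "name": self.container.name,
                            "image": self.container.image,
                            "ports": [{"containerPort": self.container.port}],
                        }]
                    },
                },
            },
        }

d = Deployment("api", replicas=3, container=Container("api", "api:1.2.3", 8080))
print(yaml.safe_dump(d.to_manifest()))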

distortion park
Apr 25, 2011


CMYK BLYAT! posted:

it is the distant past of 2-3 years ago. you are red hat and want the k8s community to adopt your new doodad, which ideally means essentially writing a separate go app to manage your app (with a feature-barren, alpha-stability library, because it hasn't matured yet). this is a tough ask when there are a ton of alternatives. so, to entice people in, you offer two halfway versions of your thing that allow devs to take advantage of your platform without developing a new go app: instead, you can use a popular k8s packaging/lifecycle tool or a popular generic linux app provisioning system (which you conveniently control!) as a basis. many applications already have stuff written for one or the other, ready to adapt into an operator! this lower-barrier-to-entry option means more packages in your ecosystem (even if a lot of them are kinda poo poo), which is good for your salespeople when pitching openshift support contracts

i assume ansible operators are as silly as (or sillier than) helm operators, which basically have you set up a very privileged serviceaccount and a CRD so that a robot can obfuscate away running helm cli commands for you. the dev side is lol since you have to manage an extra CRD, roll out a new release for every chart version release (there's no mechanism to just say "use this chart version" at runtime; the specific version is baked into the operator), and deal with frequent library updates that don't really do anything for you. you get OLM, which mostly seems like a massively overengineered, unintuitive version of dependabot that breaks in weird ways

as an added bonus, red hat have established a certification program that serves gently caress all purpose but is a requirement for providing your operator on red hat's airgapped network version of openshift. to do this, you need to maintain a separate version of your operator that is mostly identical to your other one but with subtle incompatibilities, so you can't easily maintain both from the same git repo. you also need to publish poo poo to red hat's docker registry, which is an unimaginable pile of garbage that fails to acknowledge pushes half the time. it's great fun.

Sounds like I should get a red hat support contract to help with all this op

distortion park
Apr 25, 2011


helm values files can get a bit complex sometimes, so it can be nice to template them out as well, for maximum simplicity and ease of use

distortion park
Apr 25, 2011


brb, just checking out my worker's taints

distortion park
Apr 25, 2011


I'm not an expert, but IME pub/sub stuff sucks for synchronization. If you mess up handling a message you still need some way to eventually do the thing, so you end up building a reconciliation system anyway. The watch-loop stuff is just going straight to the reconciliation-based system
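
(to make the watch-loop point concrete, here's a rough sketch of the reconciliation pattern - hypothetical names, no real k8s client involved: compare desired state with observed state and converge, so dropping one event doesn't matter because the next pass catches it)

code:
# hypothetical sketch of a reconcile loop -- not real k8s client code,
# just the level-triggered pattern: "make observed match desired"
import time

def get_desired_replicas() -> int:
    return 3  # in real life: read from a spec / CRD / database

def get_observed_replicas() -> int:
    return 1  # in real life: ask the actual system what exists right now

def create_replica() -> None:
    print("creating replica")

def delete_replica() -> None:
    print("deleting replica")

def reconcile() -> None:
    desired, observed = get_desired_replicas(), get_observed_replicas()
    for _ in range(desired - observed):   # too few: create the difference
        create_replica()
    for _ in range(observed - desired):   # too many: delete the difference
        delete_replica()

while True:
    # missing one pass is fine: the next one sees current state and fixes it,
    # which is exactly what you end up hand-building on top of pub/sub anyway
    reconcile()
    time.sleep(30)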

distortion park
Apr 25, 2011


really want to emphasize how far i am from being in any way knowledgeable about this though

distortion park
Apr 25, 2011


Hed posted:

i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is a much bigger part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger, what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

We use an external, managed airflow instance and then run stuff using the k8s airflow operator. Yes, it would be a lot simpler and cheaper to just run stuff on a big server
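
(the bare-k8s answer to "run a container on a schedule" is a CronJob, fwiw. below is a rough sketch of the airflow setup described above: a DAG that launches the job container as a pod on a schedule. the dag name, namespace and image are made up, and the KubernetesPodOperator import path and arguments vary by provider version, so treat it as illustrative only)

code:
# hypothetical sketch: managed Airflow launching a job container as a k8s pod.
# import path / arguments differ across provider versions -- illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

with DAG(
    dag_id="nightly_batch_job",         # made-up name
    schedule_interval="0 2 * * *",      # cron-style schedule
    start_date=datetime(2022, 1, 1),
    catchup=False,
) as dag:
    run_job = KubernetesPodOperator(
        task_id="run_job",
        name="nightly-batch-job",
        namespace="batch",                               # assumed namespace
        image="registry.example.com/batch-job:latest",   # your job container
        get_logs=True,                                   # stream pod logs into airflow
    )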

distortion park
Apr 25, 2011


maybe (definitely) I'm dumb and my colleagues are too, but ime k8s doesn't achieve the devops goals, or anything beyond really basic self-serve capabilities. the vast majority of our developers can't debug issues with our setup without help from the infrastructure team (sorry, "devops team") and can't work out how to do things which aren't part of the existing wrappers or patterns. this wasn't the case with octopus deploy, and it was less bad on ECS (although understanding networking was very hard there as well)

distortion park
Apr 25, 2011


The CNCF certification exams only let you use the official docs, no googling

distortion park
Apr 25, 2011


freeasinbeer posted:

I’m a K8s architect, so I’m biased, but every time I use ECS I’m annoyed that once you step outside the golden path you end up building so much stuff. And it doesn’t have a ton of the nicer primitives that K8s has built in.

Kubernetes also has such a huge ecosystem that whatever you’re doing has probably been solved countless times.

if you're doing something really basic (like running some stateless web apis) then ECS (via fargate) is really nice and has decent docs. And it has some features that are much harder to do in k8s: I think one of essentialContainer and containerDependency is a real PITA to recreate in k8s, and requires adding some custom bash code or magic shared files.
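
(a rough sketch of the "magic shared files" workaround, in case that sounds abstract: the dependent container runs a little wrapper that blocks until the other container drops a sentinel file on a shared emptyDir volume, then execs the real entrypoint. the path and names here are made up for illustration)

code:
# hypothetical sketch of the shared-file dependency workaround
# usage: python wait_for_dep.py <real-command> [args...]
import os
import subprocess
import sys
import time

SENTINEL = "/shared/dependency-ready"   # assumed shared emptyDir mount path

def wait_for_dependency(timeout_s: int = 120) -> None:
    # poll until the other container creates the sentinel file
    deadline = time.time() + timeout_s
    while not os.path.exists(SENTINEL):
        if time.time() > deadline:
            sys.exit("dependency never became ready")
        time.sleep(1)

if __name__ == "__main__":
    wait_for_dependency()
    # hand off to the real entrypoint once the dependency signalled readiness
    subprocess.run(sys.argv[1:], check=True)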

distortion park
Apr 25, 2011


carry on then posted:

kubernetes: probably too complex for your use case

distortion park
Apr 25, 2011


please alternate daily with "kubernetes: actually, it's super simple"

distortion park
Apr 25, 2011


my homie dhall posted:

i think kubernetes is only complex if you need persistent storage or if you do something stupid like install a service mesh

maybe using a service mesh is the root of our problems (we've certainly spent a lot of time messing with config values after the infrastructure team added it and we started getting random networking errors). But it's also hard to say no to something described like this:


quote:

Kubernetes supports a microservices architecture through the Service construct. It allows developers to abstract away the functionality of a set of Pods, and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and perform rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.

Istio, announced last week at GlueCon 2017, addresses these problems in a fundamental way through a service mesh framework. With Istio, developers can implement the core logic for the microservices, and let the framework take care of the rest – traffic management, discovery, service identity and security, and policy enforcement. Better yet, this can be also done for existing microservices without rewriting or recompiling any of their parts. Istio uses Envoy as its runtime proxy component and provides an extensible intermediation layer which allows global cross-cutting policy enforcement and telemetry collection.

The current release of Istio is targeted to Kubernetes users and is packaged in a way that you can install in a few lines and get visibility, resiliency, security and control for your microservices in Kubernetes out of the box


It almost sounds compulsory for a microservices architecture

distortion park
Apr 25, 2011


that wasn't our experience of it btw; we did have to change a bunch of things, e.g. our apps now all have little loops at the beginning that sleep until they confirm that the envoy sidecar is up and fully functional before they try to do anything
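
(something like this, roughly - not our actual code. istio's sidecar exposes a readiness endpoint, commonly http://localhost:15021/healthz/ready, but the port/path is an assumption here and can differ between istio versions and configs)

code:
# rough sketch of the startup loop described above -- hypothetical, not real app code
import time
import urllib.request

SIDECAR_READY_URL = "http://localhost:15021/healthz/ready"  # assumed istio default

def wait_for_envoy(timeout_s: int = 60) -> None:
    # poll the sidecar's readiness endpoint until it answers 200 or we give up
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(SIDECAR_READY_URL, timeout=1) as resp:
                if resp.status == 200:
                    return
        except OSError:
            pass  # sidecar not up yet, keep polling
        time.sleep(1)
    raise RuntimeError("envoy sidecar never became ready")

if __name__ == "__main__":
    wait_for_envoy()
    # ... only now start making outbound calls / serving traffic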

distortion park
Apr 25, 2011


dads friend steve posted:

istio seems like an insanely overcomplicated solution that ends up being much bigger than the problem it claims to solve

I’m real glad the team at work that wanted us to standardize on it just kinda gave up and moved on

in one of the other threads people were making fun of someone for not wanting to learn how deployments work on k8s (which is fine, as an app developer you should care about how your deployments work and be able to change things about them). but the combination of needing to know about k8s and a service mesh (and therefore a load of details about how networking works because there will definitely be hosed up bugs and weird config params not interacting well) and your own special snowflake CI/CD pipeline is basically impossible for a junior dev, and takes up a lot of the mental capacity of anyone else who is trying to deliver application features.

i want to say that the problem is that "self-serve" devops systems are being chosen by the people who dedicate their jobs to infrastructure rather than by the people focused on application and feature development, but I don't have much confidence in that statement.

distortion park
Apr 25, 2011


we've been migrating services from ecs to k8s for a while now and about 50% of them have resulted in some unplanned downtime. the end result is sometimes a bit better, sometimes a bit worse, but definitely not worth all the investment by the infra team and all the new poo poo that the rest of the team has had to learn about and debug.

if we were starting from scratch then it might make sense to start with k8s, but as a migration target for some http apis from a perfectly functional ecs/fargate setup it was completely unjustified.

distortion park
Apr 25, 2011


dads friend steve posted:

it’s an interesting point. on the flip side, right now in my org we have the dev team trying to push through an IAC standardization, but they’re also of the mindset that they don’t want to and don’t have time to learn poo poo that should be handled by an infra / platform team. which is fine and valid, but i don’t believe it’s a recipe for success to have the people who want to minimize their own long-term responsibility and involvement in a system designing that system

which I guess was the original industry motivation behind devops as a proper role, but no one in my group, dev or ops, is interested in becoming devops lol

agreed, i think outside some specific setups (some Vercel stuff, maybe simple fly.io type things, some cloud provider services) a full devops role is too hard right now to be broadly achievable, even if it remains a good goal.

distortion park
Apr 25, 2011


Nomnom Cookie posted:

I sincerely wish for everyone who believes that k8s isn’t complex to not run into one of the many, many “edge cases” that make k8s complex. edge cases in scare quotes because they weren’t until k8s showed up

e.g. reliably serving traffic during deployments https://scribe.rip/kubernetes-dirty-endpoint-secret-and-ingress-1abcf752e4dd

distortion park
Apr 25, 2011


I should point out that idk if the problem I originally posted about is impossible to solve in general, but it definitely didn't occur using ECS Fargate and definitely did occur running the same system on eks. This was a pretty small system with light but consistent load

distortion park
Apr 25, 2011


I like ECS because the documentation/public blogs about how to do basic poo poo are pretty good, and most people end up with similar setups. Can't say the same for k8s, where there are a million options for everything

distortion park
Apr 25, 2011


nrook posted:

I have a personal project where I need to deploy a django webapp + postgres (+ a staging instance of the same webapp), and since it's a personal project my budget is like $25/mo. In real life I would use k8s for this obviously but at home I'm not going to janitor a loving self-administered kubernetes cluster on a single VPS node. what should I do instead? I'm definitely using containers because there is no way I am going to try to deploy python apps without them.

I hear docker swarm is lightweight and easy to use but I also hear it is for clowns so I'm a bit reluctant. I guess I could just use docker compose

this might be free using fly.io, which also has a very nice user experience

e: I think it should be, as long as you share the pg instance between staging and prod (use different dbs or schemas within the instance): https://fly.io/docs/reference/postgres/

distortion park fucked around with this message at 08:39 on Sep 12, 2022

distortion park
Apr 25, 2011


Corla Plankun posted:

i want prod to be in its own cluster because a few jobs ago we had a situation where a non prod system created too much 'cluster metadata stuff' and brought the whole thing down even though it was just a dumb batch process (i don't know what exactly the "stuff" was, but using Argo for batch processes that were too small created a shitload of pods an hour that lived for like 4 minutes each and somehow this overtaxed some k8s system that was supposed to keep track of data about the cluster. this was apparently an unrecoverable issue because all of the k8s stuff was unresponsive)

agreed, not least because of how dumb you'll look when an issue on staging brings down prod. everyone outside engineering is going to see it as an obviously avoidable fuckup
