my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

ate poo poo on live tv posted:

Don't forget that whoever implemented your docker orchestration system doesn't understand that routing is a thing so all the IPs have to be in the same network. This is why you can only ever have one Availability Zone in AWS, it's just impossible to work in any other way.

you can always use an overlay
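
e.g. swarm mode gives you a VXLAN overlay that spans hosts. a minimal compose sketch (swarm assumed, every name here made up):

version: "3.8"

services:
  app:
    image: example.com/app:latest   # hypothetical image
    networks:
      - cluster_net

networks:
  cluster_net:
    driver: overlay     # VXLAN-backed, spans the swarm nodes
    attachable: true    # lets standalone containers join too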

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
also is that even true? I thought in AWS you can solve that with routing tables

jesus WEP
Oct 17, 2004


Bored Online posted:

its a bunch of golang and yaml meant to abstract away the chore of scheduling containers onto a server cluster. its poo poo
young man yamls at cloud

ate shit on live tv
Feb 15, 2004

by Azathoth

Nomnom Cookie posted:

this is a weirdly specific gripe that i dont understand

Which part don't you understand? Why writing the code so that layer 2 adjacency is required is bad? Or something else?

The gripe comes from my background as a network engineer: every single place I've worked made a dumb decision that predates me where their dumb application requires vlans to function. This inevitably means that "for redundancy" they need the same IP space in multiple locations, which wreaks havoc on network stability as I'm forced to use poo poo from the 80's like spanning tree, and we are stuck using ipv4 forever. This gets even worse when they decide to port their application to "the cloud": now they are stuck within a single availability zone in a single region, so their application is highly prone to failure, creating a headache for me.

ate shit on live tv
Feb 15, 2004

by Azathoth

my homie dhall posted:

also is that even true? I thought in AWS you can solve that with routing tables

If your application expects layer 2 adjacency, you can't route that traffic. A trivial example would be some kind of janky fintech application that uses the 224.0.0.0/24 multicast space (the link-local control block, which routers never forward no matter the TTL).

Nomnom Cookie
Aug 30, 2009



ate poo poo on live tv posted:

Which part don't you understand? Why writing the code so that layer 2 adjacency is required is bad? Or something else?

The gripe comes from my background as a network engineer: every single place I've worked made a dumb decision that predates me where their dumb application requires vlans to function. This inevitably means that "for redundancy" they need the same IP space in multiple locations, which wreaks havoc on network stability as I'm forced to use poo poo from the 80's like spanning tree, and we are stuck using ipv4 forever. This gets even worse when they decide to port their application to "the cloud": now they are stuck within a single availability zone in a single region, so their application is highly prone to failure, creating a headache for me.

ah you are responsible for some application that was written by people holding the packets wrong, and then cloud is made for people who don't even know what vlans are so that causes a lot of pain. i think i get it now

12 rats tied together
Sep 7, 2006

i dont even know how you could hold the packets so wrong as to require layer 2 connectivity. that should be completely transparent to IP, which is a layer 3 protocol

Nomnom Cookie
Aug 30, 2009



sending raw ethernet frames directly to cluster peers "for performance". like if you didn't hear a skinny 23 year old white guy talking earnestly about routing overhead when you read that sentence, you're still a junior

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

kube drives a certain kind of control-freak nerd absolutely nuts and for that reason alone i can't hate it completely

dads friend steve
Dec 24, 2004

Nomnom Cookie posted:

sending raw ethernet frames directly to cluster peers "for performance". like if you didn't hear a skinny 23 year old white guy talking earnestly about routing overhead when you read that sentence, you're still a junior

what the gently caress

that can’t be what ate poo poo on live tv is talking about

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

jesus WEP posted:

kubernetes: young man yamls at cloud

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
I’m honestly surprised nobody has designed a “modern” transaction processing system with a “modern” syntax for job control to supplant all this garbage

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

eschaton posted:

I’m honestly surprised nobody has designed a “modern” transaction processing system with a “modern” syntax for job control to supplant all this garbage

I don’t know about modern but you can do something like that with JBatch, which uses an xml-based jcl to define jobs

12 rats tied together
Sep 7, 2006

you have to figure this garbage came from google, which is primarily an adtech company, which (before they started just cheating at the adtech part) meant putting a bunch of network interfaces all around the world and saturating them fully as efficiently as possible

it would be significantly harder to ship specialized mainframes around the world and janitor them all perfectly than it would be to buy whatever is available, abstract node performance out to "1000ths of a cpu" and "MiB of memory", and then write a lovely control plane on top so you can treat them all effectively the same

just like any other adtech company there are likely multiple dozens of google SWE actively firefighting the internet firehose of poo poo at any given moment, since "just capturing it all" is the main challenge and your primary revenue driver (again, unless you just start cheating)

there is certainly a part later down the road where the firehose of poo poo is processed by a relatively normal job processing system, for billing and customer metrics. IME this tends to be hp vertica but im sure google has their own thing

ate shit on live tv
Feb 15, 2004

by Azathoth

dads friend steve posted:

what the gently caress

that can’t be what ate poo poo on live tv is talking about

There are a few things I've encountered where "layer 2 adjacency" is required, because the application does something dumb.

VMware vMotion. Requires matching IP space at the destination and used to require layer 2 the whole way, and this dumb requirement made Cisco and a bunch of other vendors create OTV (Cisco proprietary) and standardize EVPN-VXLAN.

SANs. I'm not a storage guru, but I know there is some old-rear end Storage Area Network protocol that can't be routed for reasons, even though its payload is in an IP packet. I assume storage has a latency requirement, and rather than communicate that latency requirement to their customers, back in the day they either used link-local addressing, so it couldn't be routed at all, or they set the IP packets to TTL 1, which effectively makes them unroutable.

Various discovery protocols. Kafka used to be bad for this, but it wasn't the only one by any means. Tons of apps still use broadcasts to find neighbor nodes, and broadcasts cannot pass layer 3 boundaries. IPX was one of these. Bonjour and various server initial-configuration protocols (MAAS, PXE, etc.) were also all layer 2 bound for a long time. But even if they've been fixed now, you'll still run into the Kafka problem: even though the nodes communicate over plain old IP packets with >1 TTL, since they lack a discovery mechanism for creating their cluster, the administrators don't know how to create their cluster and continue to insist on "vlans."

post hole digger
Mar 21, 2011

jesus WEP posted:

young man yamls at cloud

hah

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

eschaton posted:

I’m honestly surprised nobody has designed a “modern” transaction processing system with a “modern” syntax for job control to supplant all this garbage

when i think alternative to koobs i think nomad and get sad

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

12 rats tied together posted:

it would be significantly harder to ship specialized mainframes around the world and janitor them all perfectly than it would be to buy whatever is available, abstract node performance out to "1000ths of a cpu" and "MiB of memory", and then write a lovely control plane on top so you can treat them all effectively the same

except you don’t need specialized mainframe hardware to do real transaction processing, even to do it fast; the specialized hardware is mainly for uptime and to continue scaling decades-old applications

for example DEC and HP had transaction processing systems for VMS and MPE and both of those ran on hardware not substantially different from any other 32-bit workstation, server, or minicomputer—at some point I’ll get my hands on an HP rp2400 series server (ideally an rp2470) that can natively boot all of MPE/iX, HP-UX, Linux, and BSD

k8s and pods are essentially more reinvention of stuff figured out during the 1960s and 1970s by people who never really look outside UNIX (and, increasingly, “web technology”)

Nomnom Cookie
Aug 30, 2009



dads friend steve posted:

what the gently caress

that can’t be what ate poo poo on live tv is talking about

not literally, but that level of brain damage, yes. have you not done thing with packet before

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine
m-m-m-multicast

Bored Online
May 25, 2009

We don't need Rome telling us what to do.
yaml-lamma-ding-dong

12 rats tied together
Sep 7, 2006

eschaton posted:

except you don’t need specialized mainframe hardware to do real transaction processing, even to do it fast; the specialized hardware is mainly for uptime and to continue scaling decades-old applications

for example DEC and HP had transaction processing systems for VMS and MPE and both of those ran on hardware not substantially different from any other 32-bit workstation, server, or minicomputer—at some point I’ll get my hands on an HP rp2400 series server (ideally an rp2470) that can natively boot all of MPE/iX, HP-UX, Linux, and BSD

k8s and pods are essentially more reinvention of stuff figured out during the 1960s and 1970s by people who never really look outside UNIX (and, increasingly, “web technology”)

i dont know anything about these systems so i will take your word for it. out of genuine curiosity, no rhetoric intended, can I take a uhhh HP MPE (i dont know what this means) transaction processing system and run it at ~3000-3500 transactions/sec per node @ 2500 nodes in a regional datacenter? the transactions are inbound HTTP calls that happen when people load a website that i have a 1px transparent png on.

is this something that would be meaningfully better on this tech from the 60s and 70s? the comparison points here are usually just using bare metal, vs using kubernetes (which is stupid).

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
MPE is the HP 3000 series operating system; the transaction processing system is layered on top of OS facilities (created in part to support transaction processing)

unfortunately I don’t know what kind of transactions/sec you could get with the final releases of MPE on HP Integrity servers (it was discontinued in the late 2000s as HP consolidated on UNIX) but I’m pretty sure it was up there, I’ll do some digging

dads friend steve
Dec 24, 2004

Nomnom Cookie posted:

not literally, but that level of brain damage, yes. have you not done thing with packet before

hell no I haven’t. I live in layer 7, making my living the old fashioned way - writing fart apps in Java and Go

ate shit on live tv
Feb 15, 2004

by Azathoth

Nomnom Cookie posted:

not literally, but that level of brain damage, yes. have you not done thing with packet before

I pipe all my bash commands into nc, which connects to other servers running nc -l, which is piped into bash. This is my orchestration strategy.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
based on references to tpmC benchmarks it looks like the midrange models of the early 2000s were able to handle a couple thousand update-intensive transactions per second (and I think the benchmark is intended to model financial transactions, not eg log entries)

they could also be clustered against a Fibre Channel SAN for more throughput, backup, and so on

you probably wouldn’t want to buy thousands of those systems though since we’re talking like a hundred grand per

what I’m really saying is that the implementation techniques used for old-school transaction processing systems could result in amazing throughput on modern hardware if the “all the world is Linux” and “we’re inventing things that are completely brand new, gotta start from first principles” and “gotta use HTTP everywhere” mindsets were set aside, without necessarily sacrificing maintainability

12 rats tied together
Sep 7, 2006

thank you for doing the research, that is pretty sick actually. i agree with your take on it, it sucks that the most successful technology company is google and they make all of their money from banner and video ads over http, and its probably poisoned computer for good

ate shit on live tv
Feb 15, 2004

by Azathoth

eschaton posted:

based on references to tpmC benchmarks it looks like the midrange models of the early 2000s were able to handle a couple thousand update-intensive transactions per second (and I think the benchmark is intended to model financial transactions, not eg log entries)

they could also be clustered against a Fibre Channel SAN for more throughput, backup, and so on

you probably wouldn’t want to buy thousands of those systems though since we’re talking like a hundred grand per

what I’m really saying is that the implementation techniques used for old-school transaction processing systems could result in amazing throughput on modern hardware if the “all the world is Linux” and “we’re inventing things that are completely brand new, gotta start from first principles” and “gotta use HTTP everywhere” mindsets were set aside, without necessarily sacrificing maintainability

I think having APIs standardized over HTTP was a net good. If you want something better, there is gRPC, which again is from Google, but it was specifically designed for fleet orchestration, not end-user requests.

Anyway, ban advertising.

Progressive JPEG
Feb 19, 2003

ate poo poo on live tv posted:

I pipe all my bash commands into nc, which connects to other servers running nc -l, which is piped into bash. This is my orchestration strategy.

you might like dask

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Silver Alicorn posted:

I’ve read this entire thread and I still don’t know what a kubernete is

i tried explaining it to our salespeople via a metaphor about how an operating system manages sharing CPU time/memory space and providing hardware abstractions for applications on a single machine, but for a machine cluster, and am pretty sure our sales team has no better understanding of kubernetes as a result

Nomnom Cookie posted:

like when you say "maybe utilizing static internal cluster state info" i start thinking that an operator probably isnt what you actually want to make, because the whole point of an operator is that it sits around watching events on a resource and reacting to them. if you don't care about getting updates from the apiserver then what you're making is not an operator more or less by definition

for i guess what are essentially legacy reasons (building our initial k8s poo poo before operator framework existed) we have helm to deploy our application/CRDs/associated supporting resources and upgrade it to new versions, and then a separate controller to continuously handle transforming other resources into configuration. these could be theoretically combined into a single operator like what you're describing

the reason i'd like an operator that doesn't need to actively watch resources is to basically get something like the helm half, but without having to deal with handling complex logic in go templates, because go templates are complete rear end to deal with and a massive pain to test. management of the application itself doesn't need to run always, just at the times users update app settings or upgrade to a new version.

that'd be useful both as a transition point to an operator that's doing both and for users that are real picky about what permissions you grant to service accounts

Cybernetic Vermin
Apr 18, 2005

i do appreciate kubernetes and this general category of system for having managed to carve loose a giant piece of functionality full of pitfalls for system administrators to handle instead of doing it very badly somewhere code-adjacent. hats off, i intend to never learn all of this.

Cybernetic Vermin
Apr 18, 2005

e: oops, double post

Nomnom Cookie
Aug 30, 2009



CMYK BLYAT! posted:

i tried explaining it to our salespeople via a metaphor about how an operating system manages sharing CPU time/memory space and providing hardware abstractions for applications on a single machine, but for a machine cluster, and am pretty sure our sales team has no better understanding of kubernetes as a result

for i guess what are essentially legacy reasons (building our initial k8s poo poo before operator framework existed) we have helm to deploy our application/CRDs/associated supporting resources and upgrade it to new versions, and then a separate controller to continuously handle transforming other resources into configuration. these could be theoretically combined into a single operator like what you're describing

the reason i'd like an operator that doesn't need to actively watch resources is to basically get something like the helm half, but without having to deal with handling complex logic in go templates, because go templates are complete rear end to deal with and a massive pain to test. management of the application itself doesn't need to run always, just at the times users update app settings or upgrade to a new version.

that'd be useful both as a transition point to an operator that's doing both and for users that are real picky about what permissions you grant to service accounts

ok so you know you don't have to use operator framework. again, if you're not watching for changes it's not an operator. it's just code that reads from the cluster and then writes to the cluster and then terminates. replace the helm crap with some more CRDs, run a script to turn the new resources into whatever is the result of a deployment, and when everyone realizes that this is stupid you have a clear path to handle those CRDs with the current controller/a new one.

it sounds like these concerns are separable so maybe you want to end up with two controllers but anyway, why are you stuck on operator framework. just don't use it until you need to use it
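
fwiw a bare-bones CRD is not much yaml. a minimal sketch, every name here made up:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: appconfigs.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: appconfigs
    singular: appconfig
    kind: AppConfig
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # accept arbitrary spec, fine for a sketch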

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

Cybernetic Vermin posted:

i do appreciate kubernetes and this general category of system for having managed to carve loose a giant piece of functionality full of pitfalls for system administrators to handle instead of doing it very badly somewhere code-adjacent. hats off, i intend to never learn all of this.

feeling incredibly owned rn

Hed
Mar 31, 2004

Fun Shoe
i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is way more part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

Hed posted:

i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is way more part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

you can use aws ecs and run fields of json

12 rats tied together
Sep 7, 2006

containers auto-terminate when their entrypoint finishes, so a job in kubernetes is just a resource that has a set of containers that spin up, do something, and exit. the containers probably run software specifically for this

if that sounds stupid and way harder than celery for no benefit, thats because it is

El Mero Mero
Oct 13, 2001


ya mlyp

distortion park
Apr 25, 2011


Hed posted:

i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is way more part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

We use an external, managed airflow instance and then run stuff using the k8s airflow operator. yes it would be a lot simpler and cheaper to just run stuff on a big server

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Hed posted:

i'm reading a kubernetes book right now and am questioning what i'm doing.

I want to run containers that are small self-contained jobs, so the async task queue is way more part of the solution than serving a crud website. So if I have jobs (containers) that I want to run on a schedule or by trigger what's the equivalent in kochos?

If this were just hosted on a server I'd use something like django with celery as the async task queue. I don't see anything but the fields of yaml for configuring new sets of things to run in k8s.

jobs are, well, jobs: https://kubernetes.io/docs/concepts/workloads/controllers/job/

it spawns a container and checks if it succeeds. if it fails, it tries again. if it doesn't fail, it marks itself done and does nothing more.
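
the whole thing is a dozen or so lines of yaml. a minimal sketch, image and names made up:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-task
spec:
  backoffLimit: 3            # retry the pod up to 3 times if it fails
  template:
    spec:
      restartPolicy: Never   # let the Job controller own the retries
      containers:
        - name: task
          image: example.com/my-task:latest   # hypothetical image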

on a schedule, https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

it's jobs, but you get a new one every tick of the schedule
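
same idea on a schedule, again a sketch with made-up names:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-task-nightly
spec:
  schedule: "0 3 * * *"      # cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: example.com/my-task:latest   # hypothetical image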

for triggers you generally need some sort of controller that polls the API and spawns a job when the trigger happens
