VikingofRock
Aug 24, 2008




ComradeCosmobot posted:

as long as you can transparently substitute an accessor for the typed field access, sure

if your language can't do that, you can be sure that your business requirements will suddenly change to require you to touch the new foo.last_updated field every time you change the value of foo.butt

Does anyone know if we will be able to overload operator .() in C++20? Because if so, :parrot:

Soricidus
Oct 21, 2010
freedom-hating statist shill

Arcsech posted:

that would be a reasonable argument if java had a way to declare a variable as non-null and have that verified by the type checker

it does not

javac doesn’t have it built in, no, but the nullness checking built into intellij works well. I believe the checker framework does something equivalent if you’re not using the good ide.

NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

only time I would enforce private fields w/ trivial getters/setters would be as part of a public API. because in that case changing a field access into a function call really has a significant cost (breaking third-party code).

and I imagine if you're in a sufficiently ~~enterprise~~ development environment, you could be working on the same library with another department whose pointy-haired boss would yell if you refactored their calling code from 'int x = y.butt' to 'int x = y.getButt()', making it effectively third-party code even within the same assembly

otherwise, keeping it as field access is more informative and your IDE should handle the refactoring trivially

Volte
Oct 4, 2004

woosh woosh
the method by which you expose your class's internal data layout for direct user manipulation seems barely relevant. the very concept should be avoided. if you need a record type, use a POD struct. the default access specifier is a hint about this. you can even combine POD structs and classes for the best of both worlds. otherwise, your classes should have roles and their methods should be semantically relevant to those roles, not tightly coupled to the internal data layout of the class. (sometimes they will be incidentally the same, like a size/resize pair of methods that may boil down to a trivial getter/setter, but as long as the role of those methods is to get and set the conceptual size of the object and not to set a variable called 'size', i think it's fine)
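
(to make that record-vs-role split concrete: a rough python sketch with invented names, since Volte is talking C++; a @dataclass stands in for the POD struct)

```python
from dataclasses import dataclass

# record type: just data, the fields *are* the interface
@dataclass
class Point:
    x: float
    y: float

# role-based class: methods are semantic, the layout stays private
class Buffer:
    def __init__(self):
        self._items = []

    def size(self) -> int:
        # "the conceptual size of the object", not "read the variable _items"
        return len(self._items)

    def resize(self, n: int) -> None:
        # grow or shrink to n elements; incidentally a trivial setter today,
        # but the contract is the role, not the field
        del self._items[n:]
        self._items.extend(None for _ in range(n - len(self._items)))
```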

jeffery
Jan 1, 2013
you think this thing will optimize by may?

Private Speech
Mar 30, 2011

I HAVE EVEN MORE WORTHLESS BEANIE BABIES IN MY COLLECTION THAN I HAVE WORTHLESS POSTS IN THE BEANIE BABY THREAD YET I STILL HAVE THE TEMERITY TO CRITICIZE OTHERS' COLLECTIONS

IF YOU SEE ME TALKING ABOUT BEANIE BABIES, PLEASE TELL ME TO

EAT. SHIT.


VikingofRock posted:

Does anyone know if we will be able to overload operator .() in C++20? Because if so, :parrot:

python sort of has this via the @property decorator, but then again python has a lot of things
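
(e.g. the foo.last_updated scenario from upthread, sketched with @property; the Foo class and its fields are invented)

```python
import datetime

class Foo:
    def __init__(self):
        self._butt = 0
        self.last_updated = None

    @property
    def butt(self):
        return self._butt

    @butt.setter
    def butt(self, value):
        # the "requirements suddenly change" case: touch last_updated on
        # every write, while callers keep writing plain `foo.butt = x`
        self._butt = value
        self.last_updated = datetime.datetime.now()
```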

gonadic io
Feb 16, 2011

>>=
I have two types of IDs, type A and type B. I need to store their 1:1 mapping in a way that is accessible from multiple different k8s microservices and in a way that introduces as little latency as is reasonable. Lookups are done in both directions, i.e. converting between the two freely.

Only one of these services needs to write into it, and not very often. The rest read from it, and often.

At this point I'm thinking either hitting a db directly or possibly a tiny rest service over a db. One table, two columns.
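
(a minimal sketch of the one-table version, with sqlite standing in for whatever db gets picked; the primary key indexes one direction and the UNIQUE constraint indexes the other, so lookups are cheap both ways)

```python
import sqlite3
from typing import Optional

conn = sqlite3.connect("mapping.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS id_map (
        id_a TEXT PRIMARY KEY,      -- indexed: A -> B lookups
        id_b TEXT NOT NULL UNIQUE   -- indexed: B -> A lookups
    )
""")
conn.commit()

def a_to_b(id_a: str) -> Optional[str]:
    row = conn.execute("SELECT id_b FROM id_map WHERE id_a = ?", (id_a,)).fetchone()
    return row[0] if row else None

def b_to_a(id_b: str) -> Optional[str]:
    row = conn.execute("SELECT id_a FROM id_map WHERE id_b = ?", (id_b,)).fetchone()
    return row[0] if row else None
```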

redleader
Aug 18, 2005

Engage according to operational parameters
my friend, have you considered mongodb?

cinci zoo sniper
Mar 15, 2013




redleader posted:

my friend, have you considered mongodb?

we're supposed to learn from mistakes itt :negative:

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.

gonadic io posted:

I have two types of IDs, type A and type B. I need to store their 1:1 mapping in a way that is accessible from multiple different k8s microservices and in a way that introduces as little latency as is reasonable. Lookups are done in both directions, i.e. converting between the two freely.

Only one of these services needs to write into it, and not very often. The rest read from it, and often.

At this point I'm thinking either hitting a db directly or possibly a tiny rest service over a db. One table, two columns.

gently caress REST, use grpc

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

yeah, this doesn't sound like you have any real constraints, tech-wise, just so long as you can atomically update the A->B and B->A mappings (if you can tolerate non-atomic updates your job is even easier). And, since your workloads are read-heavy, you can do the ol' "stick a cache in front of your DB" thing.

What do you mean concretely when you say "minimal latency"? What's your expected RPS? What's your current k8s deployment like? (if you're living across datacentres / AZs then choosing DB X over DB Y won't matter when your requests are bounded by the speed of light between DCs)
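
(if the two directions do end up in separate tables or keyspaces, the "atomically update" part is just one transaction around both writes; a sketch, again with sqlite and invented names)

```python
import sqlite3

conn = sqlite3.connect("mapping.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS a_to_b (id_a TEXT PRIMARY KEY, id_b TEXT NOT NULL);
    CREATE TABLE IF NOT EXISTS b_to_a (id_b TEXT PRIMARY KEY, id_a TEXT NOT NULL);
""")

def put_mapping(id_a: str, id_b: str) -> None:
    # `with conn` wraps a transaction: both directions commit together or
    # roll back together, so readers never see a half-written mapping
    with conn:
        conn.execute("INSERT OR REPLACE INTO a_to_b VALUES (?, ?)", (id_a, id_b))
        conn.execute("INSERT OR REPLACE INTO b_to_a VALUES (?, ?)", (id_b, id_a))
```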

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall roll away,
generation shall follow generations' course



if only one service ever writes, what about just concatenating them to one id with a special character or whatever in between

nm i guess some services want id A and some want id B?

gonadic io
Feb 16, 2011

>>=

redleader posted:

my friend, have you considered mongodb?

We currently use cosmos lol

Our k8s stuff is all in one region, not guaranteed to be on the same node. Some of it talks to kafka too, and some of it talks to cosmos.

Krankenstyle posted:

if only one service ever writes, what about just concatenating them to one id with a special character or whatever in between

nm i guess some services want id A and some want id B?

Yeah, and also we might pass one around before the other is created at all (so before the mapping entry exists). Cool idea though.

Aramoro
Jun 1, 2012




Dijkstracula posted:

yeah, this doesn't sound like you have any real constraints, tech-wise, just so long as you can atomically update the A->B and B->A mappings (if you can tolerate non-atomic updates your job is even easier). And, since your workloads are read-heavy, you can do the ol' "stick a cache in front of your DB" thing.

What do you mean concretely when you say "minimal latency"? What's your expected RPS? What's your current k8s deployment like? (if you're living across datacentres / AZs then choosing DB X over DB Y won't matter when your requests are bounded by the speed of light between DCs)

A little REST service with an L2 cache sounds like the best option for a read-often, write-seldom workload; that'll have the best read performance, I would think.
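
(the read path of that little service fits in a screenful; a sketch with flask, with an in-process dict standing in for the real L2 cache, which is tolerable only because writes are rare)

```python
import sqlite3
from flask import Flask, abort, jsonify

app = Flask(__name__)
conn = sqlite3.connect("mapping.db", check_same_thread=False)
cache = {}  # in-process stand-in for the L2 cache

@app.route("/a2b/<id_a>")
def a2b(id_a):
    if id_a not in cache:
        row = conn.execute(
            "SELECT id_b FROM id_map WHERE id_a = ?", (id_a,)
        ).fetchone()
        if row is None:
            abort(404)
        cache[id_a] = row[0]
    return jsonify(id_b=cache[id_a])
```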

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Finster Dexter posted:

okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

we deployed it at my last company but it was just one devops dude who sorta knew what he was doing and me helping him. it did not solve our problems and only made them worse (which i had warned about, but whatever). we were way better off just deploying our own stuff on aws.

at the end of the day it's a stateful, although distributed, application and you're putting 100% of your eggs in that basket. you can't just roll it out with rancher and expect it to work, like we did.

gonadic io
Feb 16, 2011

>>=
ours is fine on gcp. idk what you mean by unstable, like are nodes going down and bringing all your poo poo down with them? lol as hell

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer
do you need this to be the source of truth for those mappings or can you recover them from somewhere else in a full disaster scenario?
because i'd probably just use a managed redis service and let them all go directly to it. adding a tiny rest api in front of a db is possible, but if you control all of the services and want the fewest failure points, why not just go direct*?

*good reasons: you want better metrics on access patterns; you want to ensure only some services can write, rather than relying on well-behaved services not to overwrite poo poo; or you need to protect availability from a runaway client that DoSes you and want the api to handle throttling incoming requests.
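
(the direct-to-redis version of the two-way write, sketched with redis-py; the host and key prefixes are invented. the MULTI/EXEC pipeline means readers never see one direction without the other)

```python
import redis

r = redis.Redis(host="my-managed-redis.internal", port=6379)

def put_mapping(id_a: str, id_b: str) -> None:
    # queue both SETs and run them as a single MULTI/EXEC transaction
    pipe = r.pipeline(transaction=True)
    pipe.set(f"a2b:{id_a}", id_b)
    pipe.set(f"b2a:{id_b}", id_a)
    pipe.execute()

def a_to_b(id_a: str):
    val = r.get(f"a2b:{id_a}")
    return val.decode() if val is not None else None
```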

Arcsech
Aug 5, 2008

Finster Dexter posted:

okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

i have not personally run kubernetes, but i know some folks who do

either have a dedicated team to handle its bullshit or pay google/some other cloud provider to do it for you

suffix
Jul 27, 2013

Wheeee!

Finster Dexter posted:

okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

on gcp, ime kubernetes itself is stable and reliable - even the persistent storage stuff, which was a surprise to me
otoh nginx ingress, google load balancers and the kubernetes network routing are all untrustworthy and liable to drop traffic,
there's churn in the cluster with upscaling/downscaling, upgrading, etc.
and of course our own services manage to misbehave and get evicted, or on rare occasions starve other more critical services

overall it has simplified ops but it's basically chaos monkey, you'll have a bad time if you don't have redundancy and retry logic

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

one of our big new products is basically kubernetes with a bunch of stuff piled on top and it has major reliability issues in the field

could be kube, could be us, why not both?

necrotic
Aug 2, 2005
I owe my brother big time for this!
we use k8s for everything, across multiple clouds. We're even shipping on-premise packages using k8s so we don't have to mess with a bunch of different infrastructures. Doing our first non-cloud deploy soon, too.

There are definitely some pain points but it's been great to have a consistent platform no matter where it is deployed, for any scale.

necrotic
Aug 2, 2005
I owe my brother big time for this!

suffix posted:

or on rare occasions starve other more critical services

Your critical services should have matching resource requests and limits so they cannot be starved. Proper resource limits and requests are hugely important to a stable k8s experience. Also pod priorities.
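
(concretely: requests == limits gets the pod the "Guaranteed" QoS class, which is last in line for eviction. a sketch with the official kubernetes python client; all names are invented)

```python
from kubernetes import client

# requests == limits  =>  "Guaranteed" QoS: the pod can't be starved by
# noisy neighbours and is evicted last under node pressure
resources = client.V1ResourceRequirements(
    requests={"cpu": "500m", "memory": "512Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="critical-service",
    image="example.io/critical-service:1.0",
    resources=resources,
)

pod_spec = client.V1PodSpec(
    containers=[container],
    priority_class_name="critical",  # assumes this PriorityClass exists
)
```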

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall roll away,
generation shall follow generations' course



are read-only class properties possible in python? i can make instance ones with @property but ive been googling a bunch and not finding anything for classes...?

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

Finster Dexter posted:

okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

we deployed our cluster about a year ago and it's been rock solid. never have to gently caress with k8s itself since it just seems to keep going without much intervention besides an upgrade. plus we are saving a lot of money and can give some of our smaller clients deployments in multiple availability zones without spinning up dedicated EC2 instances for them.

Private Speech
Mar 30, 2011

I HAVE EVEN MORE WORTHLESS BEANIE BABIES IN MY COLLECTION THAN I HAVE WORTHLESS POSTS IN THE BEANIE BABY THREAD YET I STILL HAVE THE TEMERITY TO CRITICIZE OTHERS' COLLECTIONS

IF YOU SEE ME TALKING ABOUT BEANIE BABIES, PLEASE TELL ME TO

EAT. SHIT.


Krankenstyle posted:

are read-only class properties possible in python? i can make instance ones with @property but ive been googling a bunch and not finding anything for classes...?

you can override __setattr__, but most of the time you just prepend __ to your property and hope that all your users are well-behaved (they won't be)

it depends on what you're trying to do

e: I guess the property() function can do what you want better than __setattr__ (it's different from the decorator, you'd put it into class context)

e2: as with everything in python, it's terrible

Private Speech fucked around with this message at 13:15 on Mar 13, 2019
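
(for the actual class-level read-only case, the usual trick, sketched here and just as terrible, is a property on a metaclass: class attribute access routes through the metaclass, so the getter fires and assignment fails)

```python
class ReadOnlyMeta(type):
    @property
    def answer(cls):
        # Config.answer goes through the metaclass, so this getter runs
        return cls._answer

class Config(metaclass=ReadOnlyMeta):
    _answer = 42

print(Config.answer)  # 42
Config.answer = 7     # AttributeError: the metaclass property has no setter
# caveat: only class-level access works; Config().answer raises AttributeError
```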

gonadic io
Feb 16, 2011

>>=

Aramoro posted:

A little REST service with an L2 cache sounds like the best option for a read-often, write-seldom workload; that'll have the best read performance, I would think.

Update to this: it has Been Decided that we're going to have gcp memorystore as the backer, and a kafka connect pod running with the kafka-redis-connector plugin. This is apparently easier than writing a 10-line bespoke app.
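
(for the record, the 10-line bespoke app would be something like this; a sketch with kafka-python and redis-py, with the topic name and message shape invented)

```python
import json
import redis
from kafka import KafkaConsumer

r = redis.Redis(host="memorystore-host.internal", port=6379)
consumer = KafkaConsumer("id-mappings", bootstrap_servers=["kafka:9092"])

for msg in consumer:
    m = json.loads(msg.value)  # assumed shape: {"id_a": ..., "id_b": ...}
    pipe = r.pipeline(transaction=True)  # write both directions atomically
    pipe.set(f"a2b:{m['id_a']}", m["id_b"])
    pipe.set(f"b2a:{m['id_b']}", m["id_a"])
    pipe.execute()
```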

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall roll away,
generation shall follow generations' course



Private Speech posted:

you can override __setattr__, but most of the time you just prepend __ to your property and hope that all your users are well-behaved (they won't be)

it depends on what you're trying to do

e: I guess the property() function can do what you want better than __setattr__ (it's different from the decorator, you'd put it into class context)

e2: as with everything in python, it's terrible

ugh yeah. thx.

also found a thousand lovely implementations here:
https://stackoverflow.com/questions/128573/using-property-on-classmethods

im just gonna stick with a class method

jeffery
Jan 1, 2013
having a non-computer guy as an admin

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.

gonadic io posted:

ours is fine on gcp. idk what you mean by unstable, like are nodes going down and bringing all your poo poo down with them? lol as hell

For starters. We've had that happen multiple times, but also annoying bullshit like k8s deciding to move pods off a hosed up node, but then for whatever raisin it can't provision persistent volumes in the zone that node is in... so we end up having to do weird bullshit like re-provision nodes in different zones or create special pvc storage classes that are restricted by zone.

It loving sucks and there are new things every day that make me hate k8s, but I'm not sure if it's k8s or just bad configuration on our part. I dunno, I'm not really that experienced with the ops side of things, so have had to rely on CTO to do all this.

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.

suffix posted:

kubernetes network routing are all untrustworthy and liable to drop traffic,

WHAT WHAT WHAT??

Is this documented anywhere... this seems like 100% a deal-breaker, imo. Our system is a giant piece of poo poo built by Russian contractors that is tightly coupled with ultimate trust in NATS as a message queue, and if traffic is just lost, that could explain a LOT of issues we've been having.

If this is a documented known issue, then we need to get off it ASAP

necrotic
Aug 2, 2005
I owe my brother big time for this!

Finster Dexter posted:

For starters. We've had that happen multiple times, but also annoying bullshit like k8s deciding to move pods off a hosed up node, but then for whatever raisin it can't provision persistent volumes in the zone that node is in... so we end up having to do weird bullshit like re-provision nodes in different zones or create special pvc storage classes that are restricted by zone.

It loving sucks and there are new things every day that make me hate k8s, but I'm not sure if it's k8s or just bad configuration on our part. I dunno, I'm not really that experienced with the ops side of things, so have had to rely on CTO to do all this.

Yeah, volumes can be annoying, especially in AWS. There are improvements around making sure the pod ends up on a node in the correct AZ, but the best approach is to set the affinity of those deployments to a single AZ, and to use pod priorities to ensure those pods can be scheduled correctly. GCP and Azure don't have this issue as badly.

necrotic
Aug 2, 2005
I owe my brother big time for this!

Finster Dexter posted:

WHAT WHAT WHAT??

Is this documented anywhere... this seems like 100% a deal-breaker, imo. Our system is a giant piece of poo poo built by Russian contractors that is tightly coupled with ultimate trust in NATS as a message queue, and if traffic is just lost, that could explain a LOT of issues we've been having.

If this is a documented known issue, then we need to get off it ASAP

There are a lot of different ways this can be solved, depending on the load balancer in use. In GCP the http balancers have a fixed 10 minute keep-alive that you need to be aware of and have your services support. We moved to an L4 balancer with an internal ingress controller, which helped a lot but has its own annoyances.

I have some articles I can share, just need to go find them again.
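
("support" here mostly means keeping the backend's keep-alive timeout longer than the balancer's 10 minutes, so the balancer never reuses a connection the backend has already closed. e.g., if a service runs under gunicorn, a sketch of the config:)

```python
# gunicorn.conf.py -- sketch; 620s > the balancer's 600s keep-alive, so the
# backend never hangs up on a connection the balancer still considers live
worker_class = "gthread"  # sync workers don't hold keep-alive connections
workers = 4
keepalive = 620
bind = "0.0.0.0:8080"
```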

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
Yeah we moved to L4 load balancers as well, iirc.

suffix
Jul 27, 2013

Wheeee!

Finster Dexter posted:

WHAT WHAT WHAT??

Is this documented anywhere... this seems like 100% a deal-breaker, imo. Our system is a giant piece of poo poo built by Russian contractors that is tightly coupled with ultimate trust in NATS as a message queue, and if traffic is just lost, that could explain a LOT of issues we've been having.

If this is a documented known issue, then we need to get off it ASAP

packet loss is a possibility in any network, so lol if you're not confirming delivery
in some limited testing we saw more than there reasonably should be, especially when pods went up and down

no idea about the cause but i figured some combination of endpoint updates being slow and stuff like
https://tech.xing.com/a-reason-for-unexplained-connection-timeouts-on-kubernetes-docker-abd041cf7e02
i don't know if that one has been fixed or not

we're using tcp connections, so mostly dropped packets would be retried and just result in some latency jitter, but sometimes we had connection timeouts
all stuff that should show up in your logs and metrics

everything should be retried in any case, but just in case we try to keep client-facing stuff mostly static, e.g. less aggressive autoscaling so we have a fixed set of pods

suffix fucked around with this message at 18:31 on Mar 13, 2019
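
(the "everything should be retried" part, sketched with python-requests; the url and numbers are invented. by default urllib3 only retries idempotent methods, which is what you want here)

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,
    backoff_factor=0.2,                # exponential backoff between attempts
    status_forcelist=[502, 503, 504],  # the "pod went away mid-deploy" codes
)
session.mount("http://", HTTPAdapter(max_retries=retry))
session.mount("https://", HTTPAdapter(max_retries=retry))

# a timeout turns silent packet loss into an error the retry logic can see
resp = session.get("http://some-service.default.svc.cluster.local/id", timeout=2)
```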

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
i am going through a second level of imposter syndrome wherein i doubt my ability to ever give a gently caress about solving a business problem beyond finding whatever cute and satisfying technical solutions that i can. i am seriously considering going back to someplace like redhat where i can just write easy code at work and spend my free time programming stuff i enjoy.

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

Finster Dexter posted:

WHAT WHAT WHAT??

Is this documented anywhere... this seems like 100% a deal-breaker, imo. Our system is a giant piece of poo poo built by Russian contractors that is tightly coupled with ultimate trust in NATS as a message queue, and if traffic is just lost, that could explain a LOT of issues we've been having.

If this is a documented known issue, then we need to get off it ASAP

what do you do if a switch or availability zone goes down?

MononcQc
May 29, 2007

DONT THREAD ON ME posted:

i am going through a second level of imposter syndrome wherein i doubt my ability to ever give a gently caress about solving a business problem beyond finding whatever cute and satisfying technical solutions that i can. i am seriously considering going back to someplace like redhat where i can just write easy code at work and spend my free time programming stuff i enjoy.

ask to see if you can get more direct information on how a customer (whether real or not) actually uses the thing. It might turn out to be helpful to gain more empathy for their use cases or why it's important, and alternatively provide a way to actually implement nothing at all and still leave them happy.

but it's hard as hell to care about people who don't exist, or who don't care about a feature they have you add just so it fills a checklist on a b2b contract, when the checklist item has just been copied from the competition and you gotta have all the boxes even if nobody cares

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

necrotic posted:

Yeah, volumes can be annoying, especially in AWS. There are improvements around making sure the pod ends up on a node in the correct AZ, but the best approach is to set the affinity of those deployments to a single AZ, and to use pod priorities to ensure those pods can be scheduled correctly. GCP and Azure don't have this issue as badly.

I'm not sure if it's a function of kops or kubernetes itself, but all the nodes get labelled with "failure-domain.beta.kubernetes.io/zone=AWS_AZ", which you can easily use for node affinity to a specific AZ. this is useful if you actually have persistent volume claims. that's a shortcoming of EBS though, since a volume is created in one AZ and not available in other AZs.

we don't actually have any persistent volumes because everything is in S3 or a database.

what state do you need to maintain in a file? seems like you would have the same issue just using AWS autoscaling groups. this really isn't a kubernetes-specific issue.
for ASGs I guess you can create an ASG in each AZ and attach the volume on startup via user data scripts, but that's basically the same as using node affinity per deployment.
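
(pinning a deployment to one AZ with that label, sketched with the official kubernetes python client; the zone value is invented)

```python
from kubernetes import client

# require scheduling onto nodes carrying the zone label kops/kubernetes sets
affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[
                client.V1NodeSelectorTerm(
                    match_expressions=[
                        client.V1NodeSelectorRequirement(
                            key="failure-domain.beta.kubernetes.io/zone",
                            operator="In",
                            values=["us-east-1a"],
                        )
                    ]
                )
            ]
        )
    )
)
```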

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

MononcQc posted:

ask to see if you can get more direct information on how a customer (whether real or not) actually uses the thing. It might turn out to be helpful to gain more empathy for their use cases or why it's important, and alternatively provide a way to actually implement nothing at all and still leave them happy.

but it's hard as hell to care about people who don't exist, or who don't care about a feature they have you add just so it fills a checklist on a b2b contract, when the checklist item has just been copied from the competition and you gotta have all the boxes even if nobody cares

your advice is really good and it's benefited me in the past. as you say, though, I think i'm mostly traumatized by my previous job where we didn't really have any users and we were building a very complicated product under the steering of people who were just guessing at what the users wanted.

my biggest problem with this job hunting cycle has been reflecting positively on what i did at my previous company. i was huge for morale and a critical player but we were working on something that was fundamentally never going to work and basically all of my efforts ended in failure. i grew a lot as a programmer, but when i get questions like 'what did you accomplish at your previous company' i've had a hard time putting a nice bow on that.

anyhow i should be posting in the interviewing thread, sorry.

DONT THREAD ON ME fucked around with this message at 20:04 on Mar 13, 2019
