ComradeCosmobot posted:
as long as you can transparently substitute an accessor for the typed field access, sure

Does anyone know if we will be able to overload operator.() in C++20? Because if so,
|
|
# ? Mar 12, 2019 07:42 |
|
Arcsech posted:
that would be a reasonable argument if java had a way to declare a variable as non-null and have that verified by the type checker

javac doesn't have it built in, no, but the nullness checking built into intellij works well. I believe the checker framework does something equivalent if you're not using the good ide.
|
# ? Mar 12, 2019 09:07 |
|
only time I would enforce private fields w/ trivial getters/setters would be as part of a public API. because in that case changing a field access into a function call really has a significant cost (breaking third-party code). and I imagine if you're in a sufficiently ~~enterprise~~ development environment, you could be working on the same library with another department whose pointy-eared boss would yell if you refactored their calling code from 'int x = y.butt' to 'int x = y.getButt()', making it effectively third-party code even within the same assembly.

otherwise, keeping it as field access is more informative and your IDE should handle the refactoring trivially
|
# ? Mar 12, 2019 10:41 |
|
the method by which you expose your class's internal data layout for direct user manipulation seems barely relevant. the very concept should be avoided. if you need a record type, use a POD struct. the default access specifier is a hint about this. you can even combine POD structs and classes for the best of both worlds. otherwise, your classes should have roles and their methods should be semantically relevant to those roles, not tightly coupled to the internal data layout of the class. (sometimes they will be incidentally the same, like a size/resize pair of methods that may boil down to a trivial getter/setter, but as long as the role of those methods is to get and set the conceptual size of the object and not to set a variable called 'size', i think it's fine)
|
# ? Mar 12, 2019 14:26 |
you think this thing will optimize by may?
|
|
# ? Mar 12, 2019 16:20 |
|
VikingofRock posted:
Does anyone know if we will be able to overload operator.() in C++20? Because if so,

python sortof has this via the @property decorator, but then again python has a lot of things
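for reference, a minimal sketch of the python version (class and attribute names made up):

```python
class Rect:
    def __init__(self, w, h):
        self._w = w   # "private" by convention only
        self._h = h

    @property
    def area(self):
        # reads like a plain field access (r.area) but runs this method
        return self._w * self._h

    @area.setter
    def area(self, value):
        # scale both sides so the rectangle hits the requested area
        scale = (value / self.area) ** 0.5
        self._w *= scale
        self._h *= scale

r = Rect(3, 4)
print(r.area)        # 12 -- looks like a field, calls the getter
r.area = 48          # calls the setter
print(r._w, r._h)    # 6.0 8.0
```

so a bare attribute can later be swapped for a property without touching calling code, which is the "transparent substitution" part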
|
# ? Mar 12, 2019 18:04 |
|
I have two types of IDs, type A and type B. I need to store their 1:1 mapping in a way that is accessible from multiple different k8s microservices and in a way that introduces as little latency as is reasonable. Lookups are done in both directions, i.e. converting between the two freely. Only one of these services needs to write into it, and not very often. The rest read from it, and often. At this point I'm thinking either a db directly or possibly a tiny rest service over a db. One table, two columns.
|
# ? Mar 12, 2019 19:11 |
|
my friend, have you considered mongodb?
|
# ? Mar 12, 2019 19:38 |
redleader posted:
my friend, have you considered mongodb?

we're supposed to learn from mistakes itt
|
|
# ? Mar 12, 2019 19:43 |
|
gonadic io posted:
I have two types of IDs, type A and type B. I need to store their 1:1 mapping in a way that is accessible from multiple different k8s microservices and in a way that introduces as little latency as is reasonable. Lookups are done in both directions, i.e. converting between the two freely.

gently caress REST, use grpc
|
# ? Mar 12, 2019 19:43 |
|
yeah this doesn't sound like you have any real constraints, tech-wise, just so long as you can atomically update the A->B and B->A mappings (if you can tolerate non-atomic updates your job is even easier). And, since your workloads are read-heavy, you can do the ol' "stick a cache in front of your DB" thing.

What do you mean concretely when you say "minimal latency"? What's your expected RPS? What's your current k8s deployment like? (if you're living across datacentres / AZs then choosing DB X over DB Y won't matter when your requests are bounded by speed of light between DCs)
|
# ? Mar 12, 2019 19:46 |
|
nm i guess some services want id A and some want id B?
|
# ? Mar 12, 2019 20:25 |
|
redleader posted:
my friend, have you considered mongodb?

We currently use cosmos lol

Our k8s stuff is all in one region, not guaranteed to be on the same node. Some of it talks to kafka too, and some of it talks to cosmos.

Krankenstyle posted:

Yeah and also we might pass one around before the other is created at all (so before the mapping entry). Cool idea though.
|
# ? Mar 12, 2019 20:32 |
|
Dijkstracula posted:
yeah this doesn't sound like you have any real constraints, tech-wise, just so long as you can atomically update the A->B and B->A mappings (if you can tolerate non-atomic updates your job is even easier). And, since your workloads are read-heavy, you can do the ol' "stick a cache in front of your DB" thing.

Little REST service with an L2 cache sounds like the best option for a read-often, write-seldom workload, and it'll have the best read performance I would think.
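the read-through-cache part of that, as a minimal sketch (the db call is a stub and the ttl is arbitrary):

```python
import time

CACHE_TTL_S = 60.0   # arbitrary; tune to how stale a mapping may be
_cache = {}          # id_a -> (id_b, expires_at)

def db_lookup(id_a):
    # stand-in for the real query, e.g. SELECT id_b FROM mapping WHERE id_a = ?
    raise NotImplementedError

def a_to_b(id_a):
    hit = _cache.get(id_a)
    if hit is not None and hit[1] > time.monotonic():
        return hit[0]                                    # cache hit: no db round-trip
    id_b = db_lookup(id_a)                               # cache miss: one db read
    _cache[id_a] = (id_b, time.monotonic() + CACHE_TTL_S)
    return id_b
```

since writes are rare, an expiry-based cache like this is usually enough; invalidating on write only matters if serving a stale mapping for up to one ttl is unacceptable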
|
# ? Mar 12, 2019 20:47 |
|
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."
|
# ? Mar 12, 2019 23:06 |
|
Finster Dexter posted:
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

we deployed it at my last company but it was just one devops dude who sorta knew what he was doing and me helping him. it did not solve our problems and only made them worse (which i had warned about, but whatever). we were way better off with just deploying our own stuff on aws. at the end of the day it's a stateful, although distributed, application and you're putting 100% of your eggs in that basket. you can't just roll it out with rancher and expect it to work, like we did.
|
# ? Mar 13, 2019 00:05 |
|
ours is fine on gcp. idk what you mean by unstable, like are nodes going down and bringing all your poo poo down with them? lol as hell
|
# ? Mar 13, 2019 00:15 |
|
do you need this to be the source of truth for those mappings or can you recover them from somewhere else in a full disaster scenario? because i'd probably just use a managed redis service and let them all go directly to it. adding a tiny rest api in front of a db is possible but if you control all of the services and want the least failure points, why not*?

*good reasons:
- you want better metrics on access patterns
- you want to ensure only some can write and not rely on well-behaved services to not overwrite poo poo
- you need to protect the service availability from a runaway client that DoSes you and you want the api to handle throttling different incoming requests
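if the services do go straight to redis, a sketch of the two-key layout with redis-py, using a transactional pipeline so both directions commit together (host and key prefixes invented):

```python
import redis

r = redis.Redis(host="my-managed-redis", port=6379, decode_responses=True)

def put_mapping(id_a: str, id_b: str) -> None:
    # MULTI/EXEC under the hood: both directions land atomically
    pipe = r.pipeline(transaction=True)
    pipe.set(f"a2b:{id_a}", id_b)
    pipe.set(f"b2a:{id_b}", id_a)
    pipe.execute()

def a_to_b(id_a: str):
    return r.get(f"a2b:{id_a}")   # None if the mapping doesn't exist yet

def b_to_a(id_b: str):
    return r.get(f"b2a:{id_b}")
```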
|
# ? Mar 13, 2019 00:48 |
|
Finster Dexter posted:
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

i have not personally run kubernetes, but i know some folks who do, and they either have a dedicated team to handle its bullshit or pay google/some other cloud provider to do it for them
|
# ? Mar 13, 2019 01:09 |
|
Finster Dexter posted:
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

on gcp, ime kubernetes itself is stable and reliable - even the persistent storage stuff, which was a surprise to me

otoh nginx ingress, google load balancers and the kubernetes network routing are all untrustworthy and liable to drop traffic, there's churn in the cluster with upscaling/downscaling, upgrading etc. and of course our own services manage to misbehave and get evicted, or on rare occasions starve other more critical services

overall it has simplified ops but it's basically chaos monkey, you'll have a bad time if you don't have redundancy and retry logic
|
# ? Mar 13, 2019 01:48 |
|
one of our big new products is basically kubernetes with a bunch of stuff piled on top and it has major reliability issues in the field. could be kube, could be us, ¿por qué no los dos?
|
# ? Mar 13, 2019 01:52 |
|
we use k8s for everything, across multiple clouds. We're even shipping on-premise packages using k8s so we don't have to mess with a bunch of different infrastructures. Doing our first non-cloud deploy soon, too. There are definitely some pain points but it's been great to have a consistent platform no matter where it is deployed, for any scale.
|
# ? Mar 13, 2019 05:01 |
|
suffix posted:
or on rare occasions starve other more critical services

Your critical services should have matching resource requests and limits so they cannot be starved. Proper resource limits and requests are hugely important to a stable k8s experience. Also pod priorities.
|
# ? Mar 13, 2019 05:08 |
|
are read-only class properties possible in python? i can make instance ones with @property but ive been googling a bunch and not finding anything for classes...?
|
# ? Mar 13, 2019 05:53 |
|
Finster Dexter posted:
okay you guys that have k8s, are your clusters fiddly as hell? Like, our clusters (including prod) are unstable and the CEO constantly rips on the CTO because of k8s with stuff like "i think all our problems are because of k8s, etc. etc."

we deployed our cluster about a year ago and it's been rock solid. never have to gently caress with k8s itself since it just seems to keep going without much intervention besides an upgrade. plus we are saving a lot of money and can give some of our smaller clients deployments in multiple availability zones without spinning up dedicated EC2 instances for them.
|
# ? Mar 13, 2019 13:01 |
|
Krankenstyle posted:
are read-only class properties possible in python? i can make instance ones with @property but ive been googling a bunch and not finding anything for classes...?

you can overload __setattr__, but most of the time you just prepend __ to your property and hope that all your users are well-behaved (they won't be)

it depends on what you're trying to do

e: I guess the property() function can do what you want better than __setattr__ (it's different from the decorator, you'd put it into class context)

e2: as with everything in python, it's terrible

Private Speech fucked around with this message at 13:15 on Mar 13, 2019 |
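fwiw the class-level version of the property() suggestion usually goes on a metaclass, since class attribute lookup consults type(cls); a sketch with invented names:

```python
class Meta(type):
    @property
    def version(cls):
        # read-only at the class level: the property lives on the metaclass
        return cls._version

class Config(metaclass=Meta):
    _version = "1.0"

print(Config.version)       # "1.0"
try:
    Config.version = "2.0"  # property has no setter -> AttributeError
except AttributeError as e:
    print("rejected:", e)

# caveat: instances don't see it -- Config().version raises AttributeError,
# because instance lookup only searches Config's MRO, not the metaclass
```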
# ? Mar 13, 2019 13:10 |
|
Aramoro posted:
Little REST service with an L2 cache sounds like the best option for a read-often, write-seldom workload, and it'll have the best read performance I would think.

Update to this: it has Been Decided that we're going to have gcp memorystore as the backer, and a running kafka connect pod with the kafka-redis-connector plugin running. This is apparently easier than writing a 10 line bespoke app.
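for comparison, the bespoke app would be roughly this sketch with confluent-kafka and redis-py (topic name, key layout, and message shape all invented):

```python
import json
import redis
from confluent_kafka import Consumer

r = redis.Redis(host="memorystore-host", port=6379, decode_responses=True)
c = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "id-mapping-writer",
    "auto.offset.reset": "earliest",
})
c.subscribe(["id-mappings"])   # invented topic

while True:
    msg = c.poll(1.0)
    if msg is None or msg.error():
        continue
    m = json.loads(msg.value())          # assume {"a": ..., "b": ...}
    pipe = r.pipeline(transaction=True)  # write both directions atomically
    pipe.set(f"a2b:{m['a']}", m["b"])
    pipe.set(f"b2a:{m['b']}", m["a"])
    pipe.execute()
```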
|
# ? Mar 13, 2019 13:26 |
|
Private Speech posted:
you can overload __setattr__, but most of the time you just prepend __ to your property and hope that all your users are well-behaved (they won't be)

ugh yeah. thx. also found a thousand lovely implementations here: https://stackoverflow.com/questions/128573/using-property-on-classmethods

im just gonna stick with a class method
|
# ? Mar 13, 2019 15:08 |
having a non-computer guy as an admin
|
|
# ? Mar 13, 2019 15:56 |
|
gonadic io posted:
ours is fine on gcp. idk what you mean by unstable, like are nodes going down and bringing all your poo poo down with them? lol as hell

For starters. We've had that happen multiple times, but also annoying bullshit like k8s deciding to move pods off a hosed up node, but then for whatever raisin it can't provision persistent volumes in the zone that node is in... so we end up having to do weird bullshit like re-provision nodes in different zones or create special pvc storage classes that are restricted by zone. It loving sucks and there are new things every day that make me hate k8s, but I'm not sure if it's k8s or just bad configuration on our part. I dunno, I'm not really that experienced with the ops side of things, so have had to rely on the CTO to do all this.
|
# ? Mar 13, 2019 16:30 |
|
suffix posted:
kubernetes network routing are all untrustworthy and liable to drop traffic,

WHAT WHAT WHAT?? Is this documented anywhere... this seems like 100% a deal-breaker, imo. Our system is a giant piece of poo poo built by Russian contractors that is tightly coupled with ultimate trust in NATS as a message queue, and if traffic is just lost, that could explain a LOT of issues we've been having. If this is a documented known issue, then we need to get off it ASAP
|
# ? Mar 13, 2019 16:33 |
|
Finster Dexter posted:
For starters. We've had that happen multiple times, but also annoying bullshit like k8s deciding to move pods off a hosed up node, but then for whatever raisin it can't provision persistent volumes in the zone that node is in

Yeah volumes can be annoying, especially in AWS. There are improvements around making sure the pod ends up on a node in the correct AZ, but the best approach is to set the affinity of those deployments to only one AZ and to use pod priorities to ensure those pods can be scheduled correctly. GCP and Azure don't have this issue as badly.
|
# ? Mar 13, 2019 16:38 |
|
Finster Dexter posted:
WHAT WHAT WHAT??

There's a lot of different ways this can be solved, depending on the load balancer in use. In GCP the http balancers have a fixed 10 minute keep alive that you need to be aware of and have your services support. We moved to an L4 balancer with an internal ingress controller which helped a lot, but has its own annoyances. I have some articles I can share, just need to go find them again.
|
# ? Mar 13, 2019 16:40 |
|
Yeah we moved to L4 load balancers as well, iirc.
|
# ? Mar 13, 2019 16:59 |
|
Finster Dexter posted:
WHAT WHAT WHAT??

packet loss is a possibility in any network so lol if you're not confirming delivery

in some limited testing we saw more than there reasonably should be, especially when pods went up and down

no idea about the cause but i figured some combination of endpoint updates being slow and stuff like https://tech.xing.com/a-reason-for-unexplained-connection-timeouts-on-kubernetes-docker-abd041cf7e02
i don't know if that one has been fixed or not

we're using tcp connections so mostly dropped packets would be retried and just result in some latency jitter but sometimes we had connection timeouts

all stuff that should show up in your logs and metrics

everything should be retried in any case but just in case we try to keep client-facing stuff mostly static, e.g. less aggressive autoscaling so we have a fixed set of pods

suffix fucked around with this message at 18:31 on Mar 13, 2019 |
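the "everything should be retried" part as a minimal sketch with requests (attempt count and backoff numbers arbitrary):

```python
import time
import requests

def get_with_retries(url, attempts=4, base_delay=0.2):
    """Retry transient network failures with exponential backoff."""
    for i in range(attempts):
        try:
            resp = requests.get(url, timeout=2.0)
            resp.raise_for_status()
            return resp
        except (requests.ConnectionError, requests.Timeout):
            # only network-level failures are retried here; http errors
            # (raise_for_status) surface immediately
            if i == attempts - 1:
                raise                              # out of attempts
            time.sleep(base_delay * (2 ** i))      # 0.2s, 0.4s, 0.8s, ...
```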
# ? Mar 13, 2019 18:28 |
|
i am going through a second level of imposter syndrome wherein i doubt my ability to ever give a gently caress about solving a business problem beyond finding whatever cute and satisfying technical solutions that i can. i am seriously considering going back to someplace like redhat where i can just write easy code at work and spend my free time programming stuff i enjoy.
|
# ? Mar 13, 2019 19:50 |
|
Finster Dexter posted:
WHAT WHAT WHAT??

what do you do if a switch or availability zone goes down?
|
# ? Mar 13, 2019 19:53 |
|
DONT THREAD ON ME posted:
i am going through a second level of imposter syndrome wherein i doubt my ability to ever give a gently caress about solving a business problem beyond finding whatever cute and satisfying technical solutions that i can. i am seriously considering going back to someplace like redhat where i can just write easy code at work and spend my free time programming stuff i enjoy.

ask to see if you can get more direct information on how a customer (whether real or not) actually uses the thing. It might turn out to be helpful to gain more empathy in their use cases or why it's important, and alternatively provide a way to actually implement nothing at all and still leave them happy.

but it's hard as hell to care about people who don't exist, or who don't care about a feature they have you add just so it fills a checklist on a b2b contract demand, when the checklist item has just been copied from competition and you gotta have all the boxes even if nobody cares
|
# ? Mar 13, 2019 19:56 |
|
necrotic posted:
Yeah volumes can be annoying, especially in AWS. There are improvements around making sure the pod ends up on a node in the correct AZ, but the best approach is to set the affinity of those deployments to only one AZ and to use pod priorities to ensure those pods can be scheduled correctly. GCP and Azure don't have this issue as badly.

I'm not sure if it's a function of kops or kubernetes itself, but all the nodes get labelled with "failure-domain.beta.kubernetes.io/zone=AWS_AZ" that you can easily use for node affinity to a specific AZ. this is useful if you actually have persistent volume claims. that's a shortcoming of EBS though, since a volume is created in an AZ and not available in other AZs.

we don't actually have any persistent volumes because everything is in S3 or a database. what state do you need to maintain in a file?

seems like you would have the same issue just using AWS autoscaling groups. this really isn't a kubernetes-specific issue. for ASGs I guess you can create an ASG in each AZ and attach the volume on startup via user data scripts, but that's basically the same as using node affinity per deployment.
|
# ? Mar 13, 2019 19:58 |
|
MononcQc posted:
ask to see if you can get more direct information on how a customer (whether real or not) actually uses the thing. It might turn out to be helpful to gain more empathy in their use cases or why it's important, and alternatively provide a way to actually implement nothing at all and still leave them happy.

your advice is really good and it's benefited me in the past. as you say, though, I think i'm mostly traumatized by my previous job where we didn't really have any users and we were building a very complicated product under the steering of people who were just guessing at what the users wanted.

my biggest problem with this job hunting cycle has been reflecting positively on what i did at my previous company. i was huge for morale and a critical player but we were working on something that was fundamentally never going to work and basically all of my efforts ended in failure. i grew a lot as a programmer, but when i get questions like 'what did you accomplish at your previous company' i've had a hard time putting a nice bow on that.

anyhow i should be posting in the interviewing thread, sorry.

DONT THREAD ON ME fucked around with this message at 20:04 on Mar 13, 2019 |
# ? Mar 13, 2019 20:02 |