12 rats tied together
Sep 7, 2006

xtal posted:

Insomnia is like an even worse version of a Git GUI. Just grow up and use cURL like a professional.

just import requests and create a requests.session, imo
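
something like this (minimal sketch, the url and auth header are made up):
code:
import requests

# a session reuses the underlying tcp connection and carries cookies and
# headers across calls, which is most of what a gui client does for you anyway
session = requests.Session()
session.headers.update({"Authorization": "Bearer <token goes here>"})

resp = session.get("https://api.example.com/things")
resp.raise_for_status()
print(resp.json())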

12 rats tied together
Sep 7, 2006

soap was a way to build web apis, everyone dropped it because the only people who like xml do so as a personality trait

swagger is a way of describing an api, a bunch of tools (such as swashbuckle) build on it to make fancy documentation pages

postman is a program that exists because the web browser included with microsoft windows doesnt have a way for desktop users to easily send http requests other than "get me this page"


all of this stuff you would learn in the first 3 months of employment by osmosis, so dont worry about it

12 rats tied together
Sep 7, 2006

javascript would be fine if the javascript programmer lifecycle eventually culminated in "understanding web technology in general (especially cors)", but for some reason it usually turns into inventing ways to write javascript in more and worse places

12 rats tied together
Sep 7, 2006

Pie Colony posted:

the bay area isn't the only place in the US to get paid and it's dumb to pretend like it is. FAANGs have multiple offices around the US. maybe your base salary living in colorado is 150k instead of 200k, but your 150k/yr stock grant is the same. and post-covid the remote opportunities have only increased, i'm fully remote clearing 500k/yr (although i have to stay within the US unfortunately)

also fully remote and i cleared 500k in one year, previous year was 260 something. this is all pre-tax of course

future years would have been also around 500 but that place was actually killing me so i left despite the vesting schedule

12 rats tied together
Sep 7, 2006

you dont even need a high school diploma to break into figgieland, just a willingness to make an idiot of yourself in tech interviews over and over again and then, eventually, to relocate across the country. i've absolutely bombed over 70 tech interviews.

the relocation part is probably optional now

12 rats tied together
Sep 7, 2006

if your discord bot uses the websocket streaming interface thing for channel presence and you can speak intelligently about that in an interview you have already satisfied the prior experience requirement for a junior developer

12 rats tied together
Sep 7, 2006

jesus WEP posted:

the bitwise xor is just a magic spell you learn that will swap the characters in place

for networking peeps the spell is bitwise and, and we use it to find the network portion of an ipv4 address, and then either send the packet to the default gateway (if the address is not in the same network) or directly to the host
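
spelled out in python (quick sketch, the addresses are made up):
code:
import ipaddress

# the spell: network portion = address AND netmask
host = int(ipaddress.IPv4Address("192.168.1.10"))
dest = int(ipaddress.IPv4Address("192.168.1.55"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))  # a /24

print(ipaddress.IPv4Address(host & mask))  # 192.168.1.0, the network portion

# same network portion -> send directly to the host,
# different -> send to the default gateway
print("direct" if (host & mask) == (dest & mask) else "default gateway")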

12 rats tied together
Sep 7, 2006

they are all bullshit, yeah

if all you want to know about is requests per time, that's reasonably classifiable as a metric, so you can cut a lot of it out by looking for metrics tooling instead of "observability". prometheus is the latest emerging standard here and to give them some credit its quite easy to instrument your code with it, assuming that your webserver doesn't already have a prometheus exporter for requests/sec
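
to put a number on "quite easy", this is roughly the whole job with the official python client (the metric name and port are made up):
code:
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("myapp_requests_total", "requests handled by this process")

start_http_server(8000)  # exposes /metrics for the prometheus scraper to hit

while True:
    REQUESTS.inc()   # call this wherever you actually handle a request
    time.sleep(0.1)  # stand-in for real work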

observability in theory includes logs, events, metrics, and usually some kind of tooling for reading them all in a useful way to find correlations and poo poo. in practice, every monitoring-adjacent vendor slaps the term on their website for SEO

12 rats tied together
Sep 7, 2006

abraham linksys posted:

i'd always avoided prometheus since when i originally looked into it, it seemed like it required self-hosting - like, even if you used a "cloud" variant you're still supposed to have an on-prem prometheus database that you send stuff to the cloud from - but it looks like grafana labs recently(?) introduced a thing called the "grafana agent" which is "prometheus without the database since you're just sending all your poo poo to the cloud"

for enterprise use, yes absolutely. for a low stakes project though the prometheus scraper is its own (ephemeral) storage, so you'd basically just be running a sidecar process on your server that looks for metrics at localhost:6579 or whatever, and then you'd run the AlertManager project also on the same box, looking at localhost:whatever your scraper port is.

you'll lose all of your data when the scraper crashes/goes down but it would catch "requests per 30 seconds now is 500% higher than the previous 30 seconds" reliably enough to be useful.

if you have the inclination/effort to use a more involved setup for it, that's definitely for the best, but in a pinch you could deploy the local scraper with a single query+alert, and forward that to a local AlertManager which pings your phone over AWS SNS or whatever

12 rats tied together
Sep 7, 2006

a really nice thing about prometheus that i think a lot of corp monitoring teams and enterprise orgs overlook is that its composed of decoupled but extremely easy to replicate pieces. you don't need a "staging" for your prometheus architecture: if you want to experiment with the query syntax literally all you have to do is spin up the prometheus scraper container and change the config file around

you can point your own scraper at a production server and query the same exact metrics that production scrapes, and you can run the same exact queries that the production scrapers do to experiment with alerts, aggregations, etc


the downside is that the prometheus query language could not be any more hostile without becoming an actual parody language

12 rats tied together
Sep 7, 2006

they're right, if you can get a job where the only thing you have to do is care about an arbitrary number of the exact same webserver, you have an easy job. rest and vest

12 rats tied together
Sep 7, 2006

back when kubernetes had breaking api changes every 22 hours and the AWS ECS agent would lose connectivity to the ECS service every 10 minutes i worked at a place that did exactly that (docker-compose and ansible) as a homemade container scheduler and it worked really well

after i left (2017) they switched to kubernetes which meant hiring a bunch of people who knew kubernetes to reimplement exactly the same pattern, with no additional benefits, but with more agents this time

basically what im saying is not only is that a totally fine one box deployment system, thats a legitimate multi-box deployment system as well

12 rats tied together
Sep 7, 2006

Plank Walker posted:

what options do i have for chaining together a bunch of rest microservices into a single rest api call? these "micro" services all do long running operations that may or may not fail so ideally i'd have logging at each step plus the ability to conditionally retry steps if e.g. one step gives a 503 error without having to re-run the entire workflow up to that step, and also ideally would be deploying to .net core

since this is .net you probably have to do all of this work yourself but theres kind of 2 approaches here that i usually see in the wild:

1) split all of your rest calls into async jobs and when someone hits your rest endpoint, submit them all at the same time. make sure your dependent jobs have a little spinner step that runs, checks for all of their ancestor's completion status, and reschedules themselves at some random time in the future if they arent ready to run yet

2) define a directed graph class and a directed graph node class and then turn each rest call into a directed graph node class instance. implement onsuccess/onfailure callbacks for them such that you can link them together, and then in your parent directed graph class, implement a graph walk function that goes through them from root to end by invoking their onsuccess/onfailure links
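
a hand-wavy sketch of option 2 in python, since i dont have a .net toolchain handy (every name here is invented, the c# version is the same shape):
code:
import requests

class StepNode:
    def __init__(self, name, url, max_retries=3):
        self.name = name
        self.url = url
        self.max_retries = max_retries
        self.on_success = []  # downstream nodes to walk when this step passes
        self.on_failure = []  # e.g. cleanup or alerting nodes

    def then(self, node):
        self.on_success.append(node)
        return node  # returning the child makes chaining read nicely

    def run(self):
        for _ in range(self.max_retries):
            resp = requests.post(self.url)
            if resp.status_code == 503:
                continue  # conditionally retry just this step
            resp.raise_for_status()
            return True
        return False  # out of retries, walk the on_failure links

class StepGraph:
    def __init__(self, root):
        self.root = root

    def walk(self):
        frontier = [self.root]
        while frontier:
            node = frontier.pop(0)
            print(f"running step: {node.name}")  # per-step logging hook
            frontier.extend(node.on_success if node.run() else node.on_failure)

# validate -> charge -> fulfill, each step retrying its own 503s
root = StepNode("validate", "http://svc-a/validate")
root.then(StepNode("charge", "http://svc-b/charge")).then(
    StepNode("fulfill", "http://svc-c/fulfill"))
StepGraph(root).walk()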

12 rats tied together fucked around with this message at 20:01 on Apr 7, 2021

12 rats tied together
Sep 7, 2006

there are times when its reasonable to expect a medium amount of perf for local testing. im happy to announce that the objectively best work device is a macbook pro and you use the office web apps, or better yet, the google business suite

12 rats tied together
Sep 7, 2006

Powerful Two-Hander posted:

have they made the support for lists of things better in blazor? when I tried it last it got messy as soon as you tried to add elements to lists of objects

lists of things isnt even good in c# so i have no idea how they would fix this in blazor

12 rats tied together
Sep 7, 2006

venv is for package isolation, pyenv is for version isolation, fortunately pyenv-virtualenv exists so you can do both

e: i use pipenv, though

12 rats tied together
Sep 7, 2006

conda is basically both put together but iirc its mostly for data science doers because it has nice packages for installing bullshit-dependency-chain scientific libraries

i think often people actually still pip install poo poo inside a "conda environment", they just use conda to install whatever specific bundle of binary dependencies are required for the specific version of python matplotlib works with this specific helper wrapper for autogenerating d3.js visualizations from that specific version of scipy and also it includes a compiled version of some nvidia thing that lets numpy use your gpu

or whatever. i dont actually use any of that poo poo but i worked at a consulting firm for a long time where everyone did

12 rats tied together
Sep 7, 2006

am i going loving crazy here? there isn't a self dunder, is there?

12 rats tied together
Sep 7, 2006

Corla Plankun posted:

`abstract sealed private ataraxic vorpal List<Fart> buttfarter (Velocity velocity)`: a statement dreamed up by the completely deranged

the best part of working in a lovely c# code base is finding all the overloads and extension methods for abstract sealed private ataraxic List<Fart> buttfarter since the signature is so loving long that you have to scroll right to find the part thats different

12 rats tied together
Sep 7, 2006

i would suggest flask + alembic over django but you did say your hosting provider has some django support already

12 rats tied together
Sep 7, 2006

bob dobbs is dead posted:

i would only suggest this to peeps who are sick of djangos poo poo

:hai:

12 rats tied together
Sep 7, 2006

agreed but i would add that the point of complexity where python becomes bad is such a high watermark that the vast majority of web services/platforms should be written and glued together fully in python, but terminate at more performant or domain specific tooling when needed

none of that applies if you are working outside the domain of web poo poo, of course

12 rats tied together
Sep 7, 2006

Share Bear posted:

back end application? depends on how big and traffic i suppose

i think the typical stateless back end application can scale to an arbitrarily large amount of "big" and "traffic" in python, or more accurately, its going to blow up and become lovely but it will have everything to do with service design and nothing to do with python specifically

you could write grubhub again today fully in python, cassandra is still java and postgres is still c. the bottleneck will be because you did something loving stupid in cassandra or postgres and not because the 30 line flask application running in a container processes an order in 10 seconds of local time + 1.5 second network round trip compared to a c++ program that does it in 0 seconds of local time + 1.5 second network round trip
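
the "30 line flask application" is barely an exaggeration, its something shaped like this (sketch; save_order is a hypothetical helper hiding the datastore call):
code:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    # the real latency lives in here, in cassandra/postgres land,
    # not in these few lines of python
    order_id = save_order(order)  # hypothetical helper
    return jsonify({"order_id": order_id}), 201

if __name__ == "__main__":
    app.run()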

12 rats tied together
Sep 7, 2006

what kind of deployment has a model where a single(?) python process is polling for work and assigning it to one of 20,000 cores?

12 rats tied together
Sep 7, 2006

ok I think I follow - I assume there's some sort of load balancer here that is pushing work to like, gunicorn or some type of webserver, which is spawning workers inefficiently across a bunch of servers that each have a bunch of cores

one way to address this would be to push serialized jobs or work_needed events to a location that your workers pull from or react to. you can be a lot more intentional about which python processes are running on your servers this way, probably use something like supervisord and taskset to pin python processes to each core instead of relying on the webserver scheduler

you could do this in push-mode too by configuring each worker on its own tcp port and then having your load balancer forward to every pid+port. thinking about it for a sec this is pretty much how a container scheduler implementation would work too
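
the pull-mode worker is barely any code (sketch; the queue name and job shape are made up, and redis is just one possible backend):
code:
import json

import redis

r = redis.Redis(host="localhost", port=6379)

# run one of these per core, pinned with e.g. `taskset -c 3 python worker.py`
while True:
    _, raw = r.blpop("work_needed")  # blocks until a serialized job arrives
    job = json.loads(raw)
    handle(job)  # hypothetical handler for your actual work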

12 rats tied together
Sep 7, 2006

epitaph posted:

assumes you know how many workers you should reasonably run. easy to calculate if all time is spent on cpu, harder with lots of switching between waiting and running. we have a system now where workers are allowed to accept if they receive a token from a worker going to sleep (i.e. waiting in select/poll for some rpc/db call to complete). it works ok.

i don't really like scaling anything based on CPU, especially on a server that is primarily running a GC language. i would much rather have "we know that our service operating on this hardware can usually process x events/sec" and then scale based on outstanding unprocessed events. this requires that all of your events are similar in time-to-process, which has been the case for all systems ive worked on that do 7 figures of events/sec, but probably is not always true.

in adtech where our events came right from "someone somewhere opened a webpage" and we couldn't know that the events existed until they did, you can usually do some holt winters poo poo to get a baseline, but we just scaled on CPU and it was "fine"
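
the holt winters poo poo, sketched with statsmodels and completely fabricated numbers:
code:
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# fake per-minute event counts with an hourly cycle
rng = np.random.default_rng(0)
t = np.arange(600)
history = 1000 + 300 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 20, 600)

# fit a baseline that knows about the trend and the 60-minute seasonality
model = ExponentialSmoothing(
    history, trend="add", seasonal="add", seasonal_periods=60
).fit()
baseline = model.forecast(5)  # expected events/min for the next 5 minutes

observed = 2400  # hypothetical current reading
if observed > 1.5 * baseline[0]:
    print("way above baseline: scale out and/or page someone")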

the token model is really clever though, i guess the workers have plenty of time to generate a secure token since they're just chilling waiting for network i/o, and then you could monitor the amount of currently valid tokens to get an easy view into # of active workers, or put limits on allowed active tokens to maybe avoid the classic self-DDOS scenarios.

12 rats tied together
Sep 7, 2006

it describes almost all business activity across every business

12 rats tied together
Sep 7, 2006

modern languages like rust, golang, python, etc. intentionally not supporting overloads is one of the best decisions each of them has made

12 rats tied together
Sep 7, 2006

Jabor posted:

it actually makes total sense for Calculator.Add(float, float) to return a float, while Calculator.Add(int, int) returns an int

this is an example of something that should be a language feature (operator overloading), not a type system feature (method overloading)

its ok if 1+1 returns an int and 1.0+1 returns a float, it's not ok if FindThing returns a row id or a record uuid depending on whether or not i pass it either

e: for example python will let you "plus sign operator" a float to an int, but it won't let you call the add method on an int with a float parameter
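
concretely:
code:
print(1 + 1.0)           # 2.0 -- the operator dispatches across types
print((1).__add__(1.0))  # NotImplemented -- int's add method declines,
                         # and "+" falls back to (1.0).__radd__(1)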

12 rats tied together fucked around with this message at 19:07 on May 19, 2021

12 rats tied together
Sep 7, 2006

sure. syntax feature is probably a better word to use, the words on the screen that developers interpret and derive meaning from

the distinction is that operators are already syntax convenience, its ok to overload them further so that DeliveryOrder can be trivially inserted into an array, or whatever. whether or not you correctly overload the add operator so that people can insert them into arrays with + vs array.append is just an implementation detail, the common understanding is that the object will be inserted into the array

when someone is looking at DeliveryOrder.GetPaymentDetails, they are interested in the business logic, which is not common understanding and should not be hidden away behind GetPaymentDetails(String: StripeTransactionId) vs GetPaymentDetails(Uuid: OurTransactionUuid)

12 rats tied together
Sep 7, 2006

i disagree completely but i did construct this example out of thin-air bullshit with a bunch of invisible carried context. it does track with my actual objections to operationalized method overloads in large codebases though

"our transaction id" vs "stripe transaction id" implies a fundamentally different code path, which would be best modeled in different methods this time because stripe is a remote payment processor and our transaction uuid is, for most businesses, likely just a record that we received an order and processed it through stripe or some other processor

it is possible to write method overloads that provide a useful abstraction, yes, and its also possible to write some absolute loving dogshit where you have 8 factorial different parameter combinations. if you allow them, you have to police them, which is a bunch more work immediately but also a time bomb for when you eventually try to decouple the processor from the emitter and you start popping poo poo off of a queue.

which overload do we hit? who knows, thats why my PR adjusts the signature for all 16 of these overloads even though we haven't processed a StripeTransaction(id, timestamp, source_country, is_prepaid, reprocess=True) in the past 4 years. i would rather see GetPaymentDetails(FulfilledTransaction: transaction), where FulfilledTransaction is a type that has a provider field, which could be stripe, and which is not an overload
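
sketch of that last version (names lifted from the hypothetical above, the fetch helpers are invented):
code:
from dataclasses import dataclass

@dataclass
class FulfilledTransaction:
    provider: str       # "stripe", or any other processor
    provider_ref: str   # e.g. the stripe transaction id
    our_uuid: str       # our own record of the processed order

def get_payment_details(transaction: FulfilledTransaction):
    # one signature; the provider field drives the per-processor branching
    if transaction.provider == "stripe":
        return fetch_from_stripe(transaction.provider_ref)  # hypothetical
    return fetch_from_our_records(transaction.our_uuid)     # hypothetical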

12 rats tied together
Sep 7, 2006

thank you for the swift post. yes, this delivery order.get payment details is a bad thing to discuss hypothetically.

i guess you can characterize my position as more overloads means more function signatures means more code janitoring required to achieve the same level of code quality

12 rats tied together
Sep 7, 2006

its true that the source of the problem is not whether method overloading or control flow constructs were used, but the assertion i am making is that:

code:
do_thing(1) [...]
do_thing(1, 2) [...]
do_thing(1, 2, 3) [...]
<repeats based on aggregate organizational rot>
is materially worse than:
code:
do_thing(1) // but 1 is an encapsulation/abstraction and requires unpacking and deferring to control flow
i say this because "guessing where the demarcation is between the exactly correct overloads" is a way harder rule to follow for anyone, let alone juniors, than "if you ever check isNull, you arent supposed to be here, write another function with a different name"

the first one you are almost guaranteed to be at least partially wrong at all times, the second one, you find out when you're wrong right away

12 rats tied together
Sep 7, 2006

Private Speech posted:

it seems more like a complaint coming from a non-statically-typed perspective that doesn't always apply otherwise

none of the popular general purpose dynamic languages in use today support method overloading as discussed, they almost universally require you to take an alternative approach where you pass an abstraction, unpack it, and cede to control flow

overloading is fairly fundamental to c++ but i dont think it would be fair to characterize c++ and its syntax as a good and healthy ecosystem that all programming languages should strive to emulate. recent attempts at systems programming languages do not support method overloading at all, as a design choice, even

Soricidus posted:

method overloading is fine and the people who use it to write unreadable code would not, in fact, suddenly start writing readable code if the language forced them to call their methods “doStuff1” and “doStuff2”

i never like going down this path of reasoning because it always ends up in some dead-end argument like this, which i even made myself last page. zooming out for a second: people will always write bad code, but it is less work for the code janitor to go "doStuff2 is stupid, please come up with a real name and justification for this new abstraction, PR rejected" than it is to review the array of overloads scaling between 1-8 parameters (plus this PR which is introducing parameter #9) and make it suck less

the rule of thumb "if you are checking isNull, this is the wrong place for your code" is correct >99% of the time and avoids >99% of wrong abstraction building

12 rats tied together
Sep 7, 2006

even in a static lang doStuff(1) + doStuff(1,2) and doStuff1 + doStuff2 are exactly the same thing to the compiler: different functions.

overloading is syntactical convenience that lets you not have to come up with a new function name, which is useful and good only if you continually do the work to ensure that your overloads shouldn't have different function names

12 rats tied together
Sep 7, 2006

Boiled Water posted:

I don't disagree, it's a great editor, but the learning curve is so steep it might as well be a circle

this is an accurate depiction except each time you loop the circle you transcend into another state of being that non-terminal-editor-users will never understand

12 rats tied together
Sep 7, 2006

yagni shouldn't even be classified as "extreme programming". nothing is ever correct period, never mind "the first time"

12 rats tied together
Sep 7, 2006

Plorkyeran posted:

that specific example compares unfavorably to using a regular expression ("^\s*([a-zA-Z]\w*)\s*$")

i agree with you on almost everything, but not this lmfao

12 rats tied together
Sep 7, 2006

the only acceptable regex is the one in the ci tool that fails a pr if it detects that someone is trying to merge a regex

12 rats tied together
Sep 7, 2006

theres a lot of low hanging fruit you can knock out quickly but yeah you soon hit diminishing returns trying to make normal python (esp. cpython) go fast in a single process.

op you should rewrite it in cython
