FlapYoJacks
Feb 12, 2009

Beamed posted:

meanwhile the company you left is still millions of dollars in the hole right?

happy endings are so heartwarming

Fun story about that!
They hired a guy, and one of my ex-coworkers whom I'm still friends with gchatted me and offered me lunch in exchange for a quick brain dump for him, which I agreed to.

The lunch was excellent, and the new guy was not only lied to repeatedly about the sheer scale of what he was getting into, but also everything he would be responsible for.

He quit the next day. :v:


Reminder: A month before I quit, the CEO at that place said he could replace me in 2 weeks.

Edit*
At the lunch, I didn't talk about the company, or the politics inside, as those * may * have changed.
The only thing I talked about was what I was responsible for, what I did, the overall architecture of the projects, what I did for DevOps, the servers I set up (Chef, oVirt, Jenkins, GitLab), how I implemented drivers, build scripts, programming languages, etc.

I may not like the company, but I won't talk about anything I don't know about either, and I certainly won't gossip to somebody I don't know.

FlapYoJacks fucked around with this message at 23:55 on Oct 29, 2018


DaTroof
Nov 16, 2000

CC LIMERICK CONTEST GRAND CHAMPION
There once was a poster named Troof
Who was getting quite long in the toof

ratbert90 posted:

Fun story about that!
They hired a guy, and one of my ex-coworkers whom I'm still friends with gchatted me and offered me lunch in exchange for a quick brain dump for him, which I agreed to.

The lunch was excellent, and the new guy was not only lied to repeatedly about the sheer scale of what he was getting into, but also everything he would be responsible for.

He quit the next day. :v:


Reminder: A month before I quit, the CEO at that place said he could replace me in 2 weeks.

Edit*
At the lunch, I didn't talk about the company, or the politics inside, as those * may * have changed.
The only thing I talked about was what I was responsible for, what I did, the overall architecture of the projects, what I did for DevOps, the servers I set up (Chef, oVirt, Jenkins, GitLab), how I implemented drivers, build scripts, programming languages, etc.

I may not like the company, but I won't talk about anything I don't know about either, and I certainly won't gossip to somebody I don't know.

your yospos story arc has been fascinating and i wish you the best

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

MononcQc posted:

my fav bit of the kafka architecture is whenever someone notices that oh yeah topic compaction is not enough to guarantee reliable long term storage (i.e. re-partitioning fucks with all the keys and therefore linear history of entries) so you need another canonical data source to act as a kind of backup, and so what you do is put a consumer that materializes the views in a DB.

But that's nice because you can use the DB for some direct querying. Except for some stateful component doing stream analysis over historical data; every time that component restarts, you need to sync the whole state to build the thing afresh, but doing this from a DB is not super simple so you do it from Kafka, but since Kafka can't necessarily tell you it has all the data and the DB is the one that's canonically right, you end up building ad-hoc diffs between a DB and a live stream for every restart

And there's like no good solution, you just cover your ears and hope you never make it to that point because you know you'll be hosed janitoring and reconciling two data sources that don't necessarily have a good way to talk to each other aside from some small component/microservice written in a language only 1 person knew and they left 3 months ago

they named kafka correctly, in other words
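
The failure mode MononcQc describes can be sketched in miniature. This is a toy model, no real Kafka anywhere: a compacted topic is just "last write per key wins", the DB is a dict, and `reconcile` is the ad-hoc restart diff the post complains about. Every name here is made up for illustration.

```python
def compact(log):
    """Simulate topic compaction: keep only the latest record per key."""
    state = {}
    for key, value in log:
        if value is None:          # tombstone: the key was deleted
            state.pop(key, None)
        else:
            state[key] = value
    return state

def materialize(log, db):
    """The 'consumer that materializes the views in a DB' from the post."""
    for key, value in log:
        if value is None:
            db.pop(key, None)
        else:
            db[key] = value

def reconcile(db, replayed_log):
    """The ad-hoc diff you end up writing for every restart: which keys
    disagree between the canonical DB and a replay of the topic."""
    stream_state = compact(replayed_log)
    keys = set(db) | set(stream_state)
    return {k: (db.get(k), stream_state.get(k))
            for k in keys if db.get(k) != stream_state.get(k)}

log = [("a", 1), ("b", 2), ("a", 3), ("b", None)]
db = {}
materialize(log, db)                  # db is now {"a": 3}
# After a re-partition, the replayed log may have extra/missing records:
partial_replay = [("a", 1), ("a", 3), ("c", 9)]
print(reconcile(db, partial_replay))  # {'c': (None, 9)} -> sources disagree
```

The point of the toy: once the two stores can disagree, nothing in either system tells you which one is right, so the diff (and the decision about what to do with it) is on you.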

Oneiros
Jan 12, 2007



MononcQc posted:

my fav bit of the kafka architecture is whenever someone notices that oh yeah topic compaction is not enough to guarantee reliable long term storage (i.e. re-partitioning fucks with all the keys and therefore linear history of entries) so you need another canonical data source to act as a kind of backup, and so what you do is put a consumer that materializes the views in a DB.

But that's nice because you can use the DB for some direct querying. Except for some stateful component doing stream analysis over historical data; every time that component restarts, you need to sync the whole state to build the thing afresh, but doing this from a DB is not super simple so you do it from Kafka, but since Kafka can't necessarily tell you it has all the data and the DB is the one that's canonically right, you end up building ad-hoc diffs between a DB and a live stream for every restart

And there's like no good solution, you just cover your ears and hope you never make it to that point because you know you'll be hosed janitoring and reconciling two data sources that don't necessarily have a good way to talk to each other aside from some small component/microservice written in a language only 1 person knew and they left 3 months ago

Welp, time to go break the news to the team I've been working with that just implemented a new service using multiple compacted topics as long-term datastores.

Oneiros fucked around with this message at 04:01 on Oct 30, 2018

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
what’s the catch with amazon aurora vs. a postgres in rds? I’m not very conversant in aws so naturally it’s falling on me to set a thing up. where by thing I mean boring crud app that may one day have hundreds of users

also if the correct answer is "just install postgres on an ec2 instance" I’m ok with it

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost

pokeyman posted:

what’s the catch with amazon aurora vs. a postgres in rds? I’m not very conversant in aws so naturally it’s falling on me to set a thing up. where by thing I mean boring crud app that may one day have hundreds of users

also if the correct answer is "just install postgres on an ec2 instance" I’m ok with it

you gotta say how big it is in rds, you don't in aurora. you also get better graphs. they say it's faster but it's mostly a lie. for this you pay a solid chunk more

if you have "hundreds of users" its not worth it

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
i talked to someone who knows about the internals of aurora stuff and they said there's really no downside to it, and it shouldn't have any issues.

however, you can do just plain RDS, still.

weird thing is the smallest instance type you can use for aurora in postgres is way bigger than the smallest mysql aurora. it's pretty expensive at the low end.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
not having to janitor the postgres install in an ec2 instance is a big enough gain that i would recommend using rds, tho

ThePeavstenator
Dec 18, 2012

:burger::burger::burger::burger::burger:

Establish the Buns

:burger::burger::burger::burger::burger:

toiletbrush posted:

I don't know if it's still the case, but the commonly suggested JSON library for .NET used to spot 'date like' strings in JSON requests and reformat them into its own format, even if the field being deserialised is a string, making validation that a date was provided in a certain format a massive pain in the rear end/impossible. The author still insists this is the correct behaviour.

Yeah I think I posted about this a while ago. I still pop into that GitHub issue to see it still be updated :allears:

e: yup lol

ThePeavstenator posted:

lol, spent 3 hours in our dumbest and most simple API that literally just does CRUD operations trying to figure out why a consumer storing an ISO date string "2018-07-23T16:20:00Z" was getting back "7/23/2018 4:20:00 PM". The API isn't supposed to parse the string at all and the only transformation any data undergoes involves mapping the API data model to a DB data model before storing it in Cosmos DB. Weirdly, the string was actually correctly stored as an ISO string in Cosmos DB as well.

Turns out that Newtonsoft.Json, a dependency of several Microsoft packages including the Cosmos DB one, has a really cool feature: it parses ISO date strings from JSON as DateTimes by default even though JSON has no concept of a Date type. :suicide:

ThePeavstenator posted:

best part of that 2 y/o (and still recently active) github issue:

thomaslevesque posted:

@JamesNK could you explain the reasons behind this design? It seems pretty strange to me that a string value should arbitrarily be interpreted as a date just because it looks like a date...

JamesNK posted:

Because JSON doesn't have a date format.

ThePeavstenator fucked around with this message at 05:23 on Oct 30, 2018
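
For contrast with the behaviour being complained about: JSON itself has no date type, so a spec-faithful parser round-trips an ISO 8601 timestamp as a plain string, untouched. Python's stdlib `json` shown here just to illustrate that baseline; the date-sniffing is an extra layer Newtonsoft adds on top.

```python
import json

# An ISO 8601 timestamp in JSON is just a string. A parser that doesn't
# guess at types hands it back exactly as it arrived.
payload = '{"when": "2018-07-23T16:20:00Z"}'
decoded = json.loads(payload)
assert decoded["when"] == "2018-07-23T16:20:00Z"  # still a plain string
assert json.dumps(decoded) == payload             # round-trips unchanged
```

For what it's worth, the usual workaround on the Newtonsoft side is setting `DateParseHandling.None` on the serializer settings, which tells it to leave date-looking strings alone.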

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
also wait i missed "hundreds of users" lol, just throw it on the smallest regular RDS instance you can, you'll be fine.

rds owns, postgres owns

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.

ratbert90 posted:


The lunch was excellent, and the new guy was not only lied to repeatedly about the sheer scale of what he was getting into, but also everything he would be responsible for.

He quit the next day. :v:


Reminder: A month before I quit, the CEO at that place said he could replace me in 2 weeks.

fucken nice

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
ty team

redleader
Aug 18, 2005

Engage according to operational parameters

ratbert90 posted:

Fun story about that!
They hired a guy, and one of my ex-coworkers whom I'm still friends with gchatted me and offered me lunch in exchange for a quick brain dump for him, which I agreed to.

The lunch was excellent, and the new guy was not only lied to repeatedly about the sheer scale of what he was getting into, but also everything he would be responsible for.

He quit the next day. :v:


Reminder: A month before I quit, the CEO at that place said he could replace me in 2 weeks.

Edit*
At the lunch, I didn't talk about the company, or the politics inside, as those * may * have changed.
The only thing I talked about was what I was responsible for, what I did, the overall architecture of the projects, what I did for DevOps, the servers I set up (Chef, oVirt, Jenkins, GitLab), how I implemented drivers, build scripts, programming languages, etc.

I may not like the company, but I won't talk about anything I don't know about either, and I certainly won't gossip to somebody I don't know.

your stories are the wind beneath my wings

Nomnom Cookie
Aug 30, 2009



bob dobbs is dead posted:

you gotta say how big it is in rds, you don't in aurora. you also get better graphs. they say its faster but it's mostly a lie. for this you pay a solid chunk more

if you have "hundreds of users" its not worth it

aurora can be 64tb vs 6tb for rds. yes, this mattered. replication is totally different and better than what you can do with rds (drbd for ha and binlog replication for read replicas)

upgrades on rds are not bulletproof so have fun with your ticket if it busts. np if you have enterprise support and get the good response time

I would not touch rds, if possible, because of the upgrade problems. if you have a dba already then consider ec2 otherwise shell out for aurora. or just don’t do major upgrades in place if that is feasible

Nomnom Cookie
Aug 30, 2009



I cannot stress enough how much it sucks to have a production db totally hosed and inaccessible and be stuck with a 24 hour turnaround playing ticket tag with support peons

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
Doing a major in place upgrade seems like an obv bad idea, though. Unless you're running a super trivial web thing I guess.

I've only been devops'ing for like 4 months so I'm obv terrible, but seems like just doing a standard blue-green setup completely mitigates the issue here, and sure you get better support times with aurora, but seems moot if the upgrade with aurora is just as issue-prone as anything else in rds.

Nomnom Cookie
Aug 30, 2009



Finster Dexter posted:

Doing a major in place upgrade seems like an obv bad idea, though. Unless you're running a super trivial web thing I guess.

I've only been devops'ing for like 4 months so I'm obv terrible, but seems like just doing a standard blue-green setup completely mitigates the issue here, and sure you get better support times with aurora, but seems moot if the upgrade with aurora is just as issue-prone as anything else in rds.

you mean, create a replica, wait for it to sync, stop all writers, wait for replication to catch up, run the upgrade, switch your dns, and restart the writers? upgrade takes multiple days, mostly snapshot time, when you have several tb. stopping writes for that long would have been visible to customers and the upgrade button worked fine in dev (similar data size, same MySQL version). might be ok for 5.6 to replicate from 5.5 in which case the downtime would be minimal and acceptable

but if you’re going through all that, what is rds saving you, exactly? May as well run on ec2

aurora afaik doesn’t have any big bang upgrades to deal with cause they forked at 5.6.10 and just cherry pick fixes from upstream. they also have the ability to mangle it however they want to handle upgrades as they like rather than write tooling around what upstream poops out. since migrating from rds to aurora that db has gone from a constant headache to solid & ignorable
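
The upgrade dance in that first paragraph can be written out as a checklist. Pure sketch: every helper below is hypothetical, and in real life each step wraps an RDS API call (create read replica, run the upgrade, flip DNS), with the waits taking hours to days at multi-TB scale, which is exactly the catch being argued about.

```python
# Blue-green-ish major version upgrade, as described in the post above.
# run_step is whatever executes/logs a step; here it's just a callback.
def upgrade_via_replica(run_step):
    steps = [
        "create read replica",
        "wait for replica to sync",
        "stop all writers",
        "wait for replication to catch up",
        "run major version upgrade on replica",
        "switch DNS to upgraded replica",
        "restart writers",
    ]
    for step in steps:
        run_step(step)    # writers are down from step 3 through step 7
    return steps

performed = []
upgrade_via_replica(performed.append)
print(len(performed))  # 7 steps, several of which block on snapshot time
```

The write-outage window spans everything between "stop all writers" and "restart writers", which is why a multi-day snapshot makes this a non-starter for customer-visible databases.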

cinci zoo sniper
Mar 15, 2013




ratbert90 posted:

Fun fact about my current boss:
He not only promised that on the current project we would get time to re-architect, but he delivered on that promise.

We had 6 full months to rewrite and re-architect the application (into something admittedly far better), and in the end it's the nicest software, with full test coverage and end-to-end tests, and it scales well.

He will fight tooth and nail for us, and our team has earned a rep of being fixers now. :v: This is why India was so incredibly scared of us taking over even a tiny bit of their code, because they knew what I am doing would happen. They had been ignoring my requests for Jenkins access for months, and then they got a new DevOps guy in America. A bottle of scotch and 1 day later, I had the code, how they built, and how they deployed, and I am now starting to ask questions that they don't want to ask.

Questions such as:
Why do we have a 92Mb sql file in an application you swear is microservices based? :v:

what do you even put in sql to get it to 92mb, 10 million entries hardcoded into where foo in (bar) clause? :staredog:

redleader
Aug 18, 2005

Engage according to operational parameters

cinci zoo sniper posted:

what do you even put in sql to get it to 92mb, 10 million entries hardcoded into where foo in (bar) clause? :staredog:

you know as well as i do that it's tons of xml

cinci zoo sniper
Mar 15, 2013




redleader posted:

you know as well as i do that it's tons of xml

that’s an option even more “why” than the one i mentioned

redleader
Aug 18, 2005

Engage according to operational parameters

cinci zoo sniper posted:

that’s an option even more “why” than the one i mentioned

how else are you gonna fake an object db in a real db?

redleader
Aug 18, 2005

Engage according to operational parameters
i'm guessing the entire schema of that database is:

SQL code:
CREATE TABLE objects (id int not null auto_increment, data xml null)

cinci zoo sniper
Mar 15, 2013




:suicide:

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

Kevin Mitnick P.E. posted:

you mean, create a replica, wait for it to sync, stop all writers, wait for replication to catch up, run the upgrade, switch your dns, and restart the writers? upgrade takes multiple days, mostly snapshot time, when you have several tb. stopping writes for that long would have been visible to customers and the upgrade button worked fine in dev (similar data size, same MySQL version). might be ok for 5.6 to replicate from 5.5 in which case the downtime would be minimal and acceptable

but if you’re going through all that, what is rds saving you, exactly? May as well run on ec2

aurora afaik doesn’t have any big bang upgrades to deal with cause they forked at 5.6.10 and just cherry pick fixes from upstream. they also have the ability to mangle it however they want to handle upgrades as they like rather than write tooling around what upstream poops out. since migrating from rds to aurora that db has gone from a constant headache to solid & ignorable

i think the biggest issue is you're using the mysql flavor of RDS.

why?

cinci zoo sniper
Mar 15, 2013




CRIP EATIN BREAD posted:

i think the biggest issue is you're using the mysql flavor of RDS.

why?

yeah, this. mysql 8 i can understand, but greenfield 5 in tyool 2018

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
we've been running a postgres RDS instance for ~4 years now.

backups are automatic, upgrades take mere seconds, and unplanned outages: 0.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

all i know about databases is that if i need one i should probably use postgres

and ideally leave it to someone else

duz
Jul 11, 2005

Come on Ilhan, lets go bag us a shitpost


cinci zoo sniper posted:

what do you even put in sql to get it to 92mb, 10 million entries hardcoded into where foo in (bar) clause? :staredog:

all the pngs used, including some that aren't

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.

CRIP EATIN BREAD posted:

i think the biggest issue is you're using the mysql flavor of RDS.

why?

Yeah, I stupidly assumed Postgres. MySql is a garbage fire inside a dumpster fire, imo.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

pointer to brownstar

Nomnom Cookie
Aug 30, 2009



CRIP EATIN BREAD posted:

i think the biggest issue is you're using the mysql flavor of RDS.

why?

decision made before I was hired

cinci zoo sniper
Mar 15, 2013




Kevin Mitnick P.E. posted:

decision made before I was hired

i feel you. all our gigantic legacy poo poo has mysql backends

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
I wonder if MariaDB is any better than mysql.

Also, I wonder if anything came about with those rumors that oracle would just actively make mysql suck a lot even more to push people into OracleDB upsales.

Finster Dexter fucked around with this message at 16:05 on Oct 30, 2018

cinci zoo sniper
Mar 15, 2013




Finster Dexter posted:

I wonder if MariaDB is any better than mysql.

Also, I wonder if anything came about with those rumors that oracle would just actively make mysql suck a lot to push people into OracleDB upsales.

maria had parity with mysql for most uses, not sure if that has held since oracle released mysql 8, which is doing giant strides to catch up with postgre

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
instead of playing catch-up with an inferior database i'd recommend just using postgres because i can't imagine a single use-case where mysql > postgres.

except maybe in billable hours for maintenance?

Nomnom Cookie
Aug 30, 2009



i know how to do a mysql and would need upwards of two hours to transfer those skills to postgres so i could definitely see speccing mysql when idgaf and am not accountable for the success or failure of the product

cinci zoo sniper
Mar 15, 2013




CRIP EATIN BREAD posted:

instead of playing catch-up with an inferior database i'd recommend just using postgres because i can't imagine a single use-case where mysql > postgres.

except maybe in billable hours for maintenance?

just commenting on maria, i do recommend options other than postgre only to people who don’t need rdbms recommendations

cinci zoo sniper
Mar 15, 2013




until double digit terabytes it works splendidly

Arcsech
Aug 5, 2008

CRIP EATIN BREAD posted:

instead of playing catch-up with an inferior database i'd recommend just using postgres because i can't imagine a single use-case where mysql > postgres.

except maybe in billable hours for maintenance?

isnt mysql slightly faster if you're basically just using it as a key-value store?

but if you want a key-value store just use redis instead


FlapYoJacks
Feb 12, 2009

cinci zoo sniper posted:

what do you even put in sql to get it to 92mb, 10 million entries hardcoded into where foo in (bar) clause? :staredog:

code:
cat bullshit_db.sql | wc -l
7179

It's also apparently a full snapshot from the latest release, so that's.... terrible as well.
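
Rough math on why 7,179 lines can still be a 92 MB file, using only the figures from the post above: the dump has to average around 13 KB per line, i.e. enormous single-statement INSERTs or inlined blobs rather than anything a human wrote.

```python
# Average line size of the dump, from the numbers quoted above.
size_bytes = 92 * 1024 * 1024   # the "92Mb sql file"
lines = 7179                    # from wc -l
print(size_bytes // lines)      # 13437 -> ~13 KB per line
```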
