Chopstick Dystopia
Jun 16, 2010


lowest high and highest low loser of: WEED WEEK
I'm getting better at querying mongo and I hate it.

pseudorandom
Jun 16, 2010



Yam Slacker

CRIP EATIN BREAD posted:

I've never met a person asking about a K/V store that had a workload that even came close to requiring something other than a postgres table with the default database settings.


poo poo.

This is a fantastic time for this discussion to pop up, since I'm supposed to start implementing something like this soon. I managed to convince my team to go serverless for a new feature that's small but could have high volume. We were already planning to use an AWS Lambda thing to handle intake, but I suggested using DynamoDB as a K/V store rather than doing some caching strategy to look up data from our actual server+db.

The K/V store will likely only be storing a few thousand rows at most for the foreseeable future. I know I'm a terrible programmer, but am I the terrible programmer?

Adhemar
Jan 20, 2004

Kellner, da ist ein scheussliches Biest in meiner Suppe.

pseudorandom posted:

poo poo.

This is a fantastic time for this discussion to pop up, since I'm supposed to start implementing something like this soon. I managed to convince my team to go serverless for a new feature that's small but could have high volume. We were already planning to use an AWS Lambda thing to handle intake, but I suggested using DynamoDB as a K/V store rather than doing some caching strategy to look up data from our actual server+db.

The K/V store will likely only be storing a few thousand rows at most for the foreseeable future. I know I'm a terrible programmer, but am I the terrible programmer?

Seems fine. I’d have to know more about your use case and proposed design to determine how terrible of a programmer you are.

Aramoro
Jun 1, 2012




redleader posted:

what rdbms and full-text search thing do you prefer/have used for this?

We support Oracle, SQL Server and Azure SQL with Solr for text searching, using JMS to update the Solr cluster on CRUD. The RDBMS is not fully within our control, which is why we support multiple vendors there. The JMS we were already using for something else, so it all slotted together pretty nicely. We have one outstanding issue of indexing multilingual data correctly, but we think we have a solution for that.

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
I've never used Solr, but I used elasticsearch back in its pre-1.0 days, and hoo boy was it not production ready. I will say that I've been using it again lately and it's a lot more stable than it used to be, and has suited our needs so far. The thing I still hate about it is the json-based DSL for querying. It's fairly obtuse and I almost always have to go to the documentation even to do something very simple.

No idea how that compares to Solr, and I'd like to spend time checking that out, next chance I get.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
I just got a robocall from Belize. which of you is doing this?

CPColin
Sep 9, 2003

Big ol' smile.
:iiam:

Arcsech
Aug 5, 2008

Finster Dexter posted:

I've never used Solr, but I used elasticsearch back in its pre-1.0 days, and hoo boy was it not production ready. I will say that I've been using it again lately and it's a lot more stable than it used to be, and has suited needs so far. The thing I still hate about it is the json-based DSL for querying. It's fairly obtuse and I almost always have to go to documentation even to do something very simple.

No idea how that compares to Solr, and I'd like to spend time checking that out, next chance I get.

elasticsearch got a lot more stable, yeah, especially with the new distributed consensus layer introduced in 7.0 and the newish sequence number stuff

yeah the json dsl is kind of obtuse sometimes but if you’re okay using the free default distribution (vs the OSS version) it speaks sql now (tho still no joins)
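e.g. here's the same search as a json dsl query vs through the sql endpoint (index and field names are made up, sketch from memory):

```json
GET /posts/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "body": "kafka" } },
        { "range": { "posted_at": { "gte": "2019-01-01" } } }
      ]
    }
  }
}

POST /_sql?format=txt
{
  "query": "SELECT author, posted_at FROM posts WHERE MATCH(body, 'kafka') AND posted_at >= '2019-01-01'"
}
```

the sql endpoint is part of the free default distribution, not the OSS one, hence the caveat above.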

Corla Plankun
May 8, 2007

improve the lives of everyone

leper khan posted:

I just got a robocall from Belize. which of you is doing this?

couldnt be me; i "died", am "dead"

matti
Mar 31, 2019

working on my dumb library
code:
int main(void)
{
	freopen("log.txt", "w", stderr);

	if (!crql_open_term(struct frame_info, frame, frame_cb,
	    CRQL_TRUE_COLOR, 0, 0, "Hello, world!", NULL)) {
		return crql_print_error();
	}

	for (;;);
}

int frame_cb(struct crql_msg *msg)
{
	struct frame_info *frame = crql_get_frame(msg);

	switch (msg->type) {
	case CRQL_CREATE:
		frame->str = msg->create;
		return 1;
	case CRQL_DRAW:
		return draw_frame(frame, &msg->draw);
	case CRQL_KEY:
		return read_key(frame, &msg->key);
	case CRQL_QUIT:
		exit(msg->quit);
	}

	return crql_default_cb(msg);
}
im really happy about the ergonomics here for being a win32 style message passing tui api.
if your compiler doesnt support anonymous unions go to hell.
e: coupla fixes
ee: api design when drunk is hard

matti fucked around with this message at 23:54 on Jul 26, 2019

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord

matti posted:

if your compiler doesnt support anonymous unions go to hell.

matti
Mar 31, 2019

The freopen function first attempts to close any file that is associated with the specified stream. Failure to close the file is ignored. The error and end-of-file indicators for the stream are cleared.

pff. like you would hope a failure would set the error flag but no actually the exact opposite happens. what a world

matti
Mar 31, 2019

im sure it all works in practice but when writing code examples for documentation id rather not do anything platform specific since you soon get into a chicken and egg situation

Nomnom Cookie
Aug 30, 2009



Arcsech posted:

elasticsearch got a lot more stable, yeah, especially with the new distributed consensus layer introduced in 7.0 and the newish sequence number stuff

yeah the json dsl is kind of obtuse sometimes but if you’re okay using the free default distribution (vs the OSS version) it speaks sql now (tho still no joins)

also you can do a major version upgrade without a cluster restart. this is new in...6 I think. maybe 7. 1.x was pretty drat painful, 2.x not so much, 5.x pretty ok except for the restart issue

Nomnom Cookie
Aug 30, 2009



here's something i think would work good but haven't tried. set up a replication client for mysql/postgresql that turns binlog into kafka records. feed that into solr/elasticsearch. i'm sure one can think of lots of uses for a db changelog besides updating search

yeah i'm far from the first one to think of that. debezium does exactly what i was thinking of

pram
Jun 10, 2001
lol uhh its a pretty common use case, it even has an acronym bro (change data capture)

pram
Jun 10, 2001
also dont use kafka thx

Nomnom Cookie
Aug 30, 2009



only other time i've ever encountered that acronym it was plastered on an oracle product so i assumed it was dumb bullshit

simble
May 11, 2004

there’s even an app for it called kafka connect
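fwiw registering a debezium mysql connector with kafka connect looks roughly like this (property names from memory circa debezium 0.x, hostnames/credentials made up; double-check against the debezium docs):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.internal",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka.internal:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

you POST that to kafka connect's REST api and it starts turning the binlog into per-table topics.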

DaTroof
Nov 16, 2000

CC LIMERICK CONTEST GRAND CHAMPION
There once was a poster named Troof
Who was getting quite long in the toof

Nomnom Cookie posted:

only other time i've ever encountered that acronym it was plastered on an oracle product so i assumed it was dumb bullshit

it's worth looking at to see if your db architecture is hosed up but as a general rule regarding oracle your instinct is correct

Plorkyeran
Mar 21, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

matti posted:

The freopen function first attempts to close any file that is associated with the specified stream. Failure to close the file is ignored. The error and end-of-file indicators for the stream are cleared.

pff. like you would hope a failure would set the error flag but no actually the exact opposite happens. what a world

reporting failures when closing a file is pretty pointless

DaTroof
Nov 16, 2000

CC LIMERICK CONTEST GRAND CHAMPION
There once was a poster named Troof
Who was getting quite long in the toof

Plorkyeran posted:

reporting failures when closing a file is pretty pointless

what

Plorkyeran
Mar 21, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
what sort of errors are you expecting to get from fclose() that you will be able to actually do anything with? if you want to handle errors from the implicit flush just call flush explicitly first.

DaTroof
Nov 16, 2000

CC LIMERICK CONTEST GRAND CHAMPION
There once was a poster named Troof
Who was getting quite long in the toof
are you saying the fclose could succeed even though the flush didn't

not trying to argue here, i sincerely think i'm confused

e: if youre saying the most significant error message will come from the flush, i guess we're on the same page

DaTroof fucked around with this message at 03:51 on Jul 27, 2019

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

always make sure your fids are clunked, dammit

Plorkyeran
Mar 21, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

DaTroof posted:

are you saying the fclose could succeed even though the flush didn't

not trying to argue here, i sincerely think i'm confused

e: if youre saying the most significant error message will come from the flush, i guess we're on the same page

freopen() does the following three steps:

1. flush the file
2. close the file
3. open a new file

the actual close step can only fail due to being given an invalid fd, so there aren't any actual errors it can report. flushing the file can fail even when given valid input, but the problem with reporting errors from that step in freopen() is that there's overlap between the error codes of write() and open(); if you get an EIO from freopen() you wouldn't know if it failed to flush, or if it failed to create the file you asked it to open. if you want to handle errors when flushing you need to instead call fflush() first, and after that succeeds it's no longer possible for fclose() to fail.

pseudorandom name
May 6, 2007

close() can fail with EIO

Plorkyeran
Mar 21, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
[EIO] A previously-uncommitted write(2) encountered an input/output error.

you don't have a previously-uncommitted write if you explicitly commit your writes before reopening

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

EIEIO

Sapozhnik
Jan 2, 2005

Nap Ghost
there's a big stink rn about linux's fsync(2) error reporting behavior being dumb and backwards and causing data corruption in postgres

long story short if a block device IO error causes a write to be lost then the next fsync fails but after that the error is considered "reported" and all subsequent fsyncs succeed.

Sapozhnik
Jan 2, 2005

Nap Ghost

Sapozhnik posted:

there's a big stink rn

but enough about my posting

Plorkyeran
Mar 21, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Sapozhnik posted:

there's a big stink rn about linux's fsync(2) error reporting behavior being dumb and backwards and causing data corruption in postgres

long story short if a block device IO error causes a write to be lost then the next fsync fails but after that the error is considered "reported" and all subsequent fsyncs succeed.

the really awful part is that unless you're running a really recent kernel version the "next" fsync() may be one in a different process, so you can call write() immediately followed by fsync(), get success from both of them, and still have the write be lost because someone else ate your error

Bloody
Mar 3, 2013

lol Linux is so bad

redleader
Aug 18, 2005

Engage according to operational parameters

Bloody posted:

lol computers is so bad

redleader
Aug 18, 2005

Engage according to operational parameters
can't wait for the inevitable Linus email where he repeatedly insults the postgres devs' intelligence for daring to find a problem with the kernel

Shaggar
Apr 26, 2006

Bloody posted:

lol Linux is so bad

Shaggar
Apr 26, 2006

redleader posted:

can't wait for the inevitable Linus email where he repeatedly insults the postgres devs' intelligence for daring to find a problem with the kernel

linux tech tips: the database is bad!

pram
Jun 10, 2001

Bloody posted:

lol Linux is so bad

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

Plorkyeran posted:

the really awful part is that unless you're running a really recent kernel version the "next" fsync() may be one in a different process, so you can call write() immediately followed by fsync(), get success from both of them, and still have the write be lost because someone else ate your error

ah gently caress


is there a nice writeup somewhere on this?

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

fsucc()
