Jabor
Jul 16, 2010

#1 Loser at SpaceChem

H2Eau posted:

i guess I'm just spoiled, because we break our js frontends up into tiny pieces, so a fresh npm i and webpack takes a whole 15-30 seconds

15 minutes is some real bullshit

30 seconds is too long imo, our incremental builds take about that long and it's long enough to break your flow.

ideally you'd be down in the single digits.
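for context, webpack 5's persistent filesystem cache is the usual first lever for warm-build times. a minimal sketch, assuming webpack 5 with a TypeScript config and otherwise default settings (the entry path is illustrative):

```typescript
// webpack.config.ts -- minimal sketch, assuming webpack 5; the filesystem
// cache persists compiled modules between runs so warm builds skip
// re-transpiling unchanged files
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'development',
  entry: './src/index.ts',      // illustrative entry point
  cache: {
    type: 'filesystem',         // defaults to node_modules/.cache/webpack
    buildDependencies: {
      config: [__filename],     // invalidate the cache when this config changes
    },
  },
};

export default config;
```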

animist
Aug 28, 2018

DaTroof posted:

that was my first guess. possible bonus, someone added `rm -rf node_modules` to the start of the build because they couldn't figure out how else to resolve dependency changes

:thunk:

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.
Heh, rm -rfing the whole node_modules directory sure does a lot to goose npm metrics

Corla Plankun
May 8, 2007

improve the lives of everyone

CRIP EATIN BREAD posted:

other cool thing: having a support@companyname.com address that automatically creates JIRA support tickets, and somehow one of our vendors for our satellite comm stuff started sending emails there for some stupid conference.

so now we've got JIRA task spam, which is great because JIRA wasn't slow enough as it was.



everyone involved in the creation of JIRA should be euthanized. making plugins use REST calls while still being INSTALLED INSIDE THE GODDAMN SOFTWARE ITSELF is ridiculous. REST is not IPC........

ops recently migrated our local confluence install to the cloud atlassian version and it is somehow even slower than our already loving insanely slow jira

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

CRIP EATIN BREAD posted:

i looked at the build process and i see that there's something called rimraf and it's a javascript port of rm -rf...........

:thunk:

holy poo poo I was right, it really is Mirror Universe Lisp
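for anyone who hasn't met it: rimraf really is just recursive delete. a minimal sketch of how it tends to show up in a build's "clean" step, assuming rimraf v3's classic callback/sync API (v4+ moved to a promise-based named export):

```typescript
// clean.ts -- minimal sketch assuming rimraf v3
import rimraf from 'rimraf';

// the "nuke node_modules before every build" move from a few posts up:
rimraf.sync('node_modules');

// async form with a node-style callback:
rimraf('dist', (err) => {
  if (err) throw err;
  console.log('dist/ removed');
});
```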

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

brap posted:

there's nothing particularly wrong with what the package does.

yes there is, JavaScript doesn’t need to touch the filesystem at all

it’s for animating web pages and client side validation of their form elements

redleader
Aug 18, 2005

Engage according to operational parameters
i like xslt

i also may have stockholm syndrome

Nomnom Cookie
Aug 30, 2009



xslt is a really cool idea with the worst syntax ever created for anything

edit: like somebody read about correspondences between sexps and xml and thought woah that means xml is code :2bong:

floatman
Mar 17, 2009
I have a database that stores json as strings. These jsons are taken out of the database and wrapped in an XML format to be sent over http. Bonus: the json values are CSV strings.

Anyway, how do you tell people you work with that just because they can't solve the problem doesn't mean the problem isn't simple? It's literally a mocking expectation that is not being hit with the correct expected arguments.
Me: have you broken the expectation down so it doesn't use the convenience method, but instead uses a closure, so that you can inspect that the objects are the same and that the object comparison algorithms are sane?
Them: no, that can't be the problem, the problem must be this obscure stack overflow page from 2002.
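a minimal sketch of the closure-style check, assuming a jest-style mock (`sendInvoice` and the invoice shape are made-up names for illustration): instead of asserting through the convenience matcher, pull the actual call arguments off the mock and compare them directly, so the diff tells you exactly which field or type is off.

```typescript
import { expect, jest, test } from '@jest/globals';

test('invoice is sent with the object we built', () => {
  const sendInvoice = jest.fn();
  const invoice = { id: 42, total: '19.99' };

  sendInvoice(invoice);

  // convenience-method style -- fails with an opaque "expected call" message
  // when the arguments are subtly different (e.g. total as a number):
  // expect(sendInvoice).toHaveBeenCalledWith({ id: 42, total: 19.99 });

  // closure/inspection style -- grab the actual argument and compare it yourself:
  const [actual] = sendInvoice.mock.calls[0];
  expect(actual).toEqual(invoice);
});
```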

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall pass away,
generation shall follow generations' course



2 month anniversary of contacting some researchers about an article they wrote & the software they said would be made available :woop:

the professor keeps saying it'll be ready "next week" and the students are silent as the grave when he CCs them lol

good thing it's not business critical for us but just a cool thing we wanna try out

NihilCredo
Jun 6, 2011

restrain anger in every way possible:
that one thing will defame you more than many virtues will commend you

c tp s: when i grow up i want to marry System.Collections.Concurrent.ConcurrentQueue<T>

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.

NihilCredo posted:

c tp s: when i grow up i want to marry System.Collections.Concurrent.ConcurrentQueue<T>

Get in line, buddy.


(Better move to Utah?)

Main Paineframe
Oct 27, 2010
maintaining a large xslt-based project is living hell

unless there's some kind of tooling that makes it easier to deal with, but if so, we don't use it here

it's just notepad++ and "find in file" all the way down

CPColin
Sep 9, 2003

Big ol' smile.

prisoner of waffles posted:

Get in line, buddy.

Let's all race to get in line!

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.

Main Paineframe posted:

maintaining a large xslt-based project is living hell

unless there's some kind of tooling that makes it easier to deal with, but if so, we don't use it here

it's just notepad++ and "find in file" all the way down

lomarf, sorry for you're lots

e: genuinely sorry, main paineframe, u seem like a cool and learned poster

Powerful Two-Hander
Mar 10, 2004

Mods please change my name to "Tooter Skeleton" TIA.


last time I had to do any "proper" xml I tried using the visual studio tools and well, that sucked.

altova xmlspy used to be good but that was 10 years ago

also I once spent about 2 days writing Regex transforms in xslt 2.0 which wasn't supported by anything at all at the time except some weird command line package

30 TO 50 FERAL HOG
Mar 2, 2005



xml isnt bad but attributes vs elements is real annoying and honestly json just does it better

cinci zoo sniper
Mar 15, 2013




BIGFOOT EROTICA posted:

xml isnt bad but attributes vs elements is real annoying and honestly json just does it better

agreed, the attributes vs elements inconsistency irks me. imo, for document storage (rather than transport) such a thing as an xml attribute should not exist. just make a tree, however nested and deep, and if you need some attributive properties, add them as elements

prisoner of waffles
May 8, 2007

Ah! well a-day! what evil looks
Had I from old and young!
Instead of the cross, the fishmech
About my neck was hung.
attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho

Soricidus
Oct 21, 2010
freedom-hating statist shill

BIGFOOT EROTICA posted:

xml isnt bad but attributes vs elements is real annoying and honestly json just does it better

json is frustratingly close to being decent. allow comments and allow trailing commas and maybe actually define some rules on how numbers should be interpreted and it’d be pretty good
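the number complaint is concrete, not hypothetical: the spec never says what precision a parser has to keep, so the same document can mean different things to different consumers. quick illustration in node:

```typescript
// JSON numbers are just "numbers" per the spec; JavaScript parses them as
// IEEE 754 doubles, so integers above 2^53 silently lose precision
const raw = '{"id": 9007199254740993}';   // 2^53 + 1

const parsed = JSON.parse(raw);
console.log(parsed.id);                   // 9007199254740992 -- off by one
console.log(JSON.stringify(parsed));      // {"id":9007199254740992}

// a parser in another language (or one with bigint support) may keep the exact
// value, so two "conforming" consumers disagree about the same document
```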

NihilCredo
Jun 6, 2011

restrain anger in every way possible:
that one thing will defame you more than many virtues will commend you

prisoner of waffles posted:

attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho

xsd can enforce both requirements though, and xsd is one of the best reasons to use xml anyway

without an xsd, an automatic checker can only tell you "i don't have the slightest clue which elements appear in this document or which attributes they may or may not have, but by God you can rest assured none of them appear more than once", which isn't terribly useful

with an xsd, the attribute mechanism is a completely redundant feature that introduces an entirely parallel system of syntax/grammar/parsing/navigating

NihilCredo fucked around with this message at 18:59 on Oct 25, 2018

Xarn
Jun 26, 2015
schema-less formats are bad though

cinci zoo sniper
Mar 15, 2013




prisoner of waffles posted:

attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho



what i'm getting at is that in my work i deal with several dozen xml document providers, and each of them does either the top or the bottom of the pic above, so it's a pain in the rear end that i have to retrieve data one way there and another way here

i could rephrase my complaint as follows - for a given business document, feel free to use attributes for technical properties, but use strictly elements for business information

that is about as arbitrary as the xml standard's own guidance on elements vs attributes, so my wish is for attributes to not exist at all and leave people no option other than the bottom way. in an abstract world, i'd have everyone do it like in the 3rd example here

cinci zoo sniper
Mar 15, 2013




also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
https://json-schema.org/
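a minimal sketch of what using it looks like in node, assuming the `ajv` validator package (the schema and data here are made up for illustration):

```typescript
import Ajv from 'ajv';

const schema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    locations: { type: 'array', items: { type: 'string' }, minItems: 1 },
  },
  required: ['name'],
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(schema);

const doc = { name: 'walmart', locations: [] };
if (!validate(doc)) {
  // e.g. instancePath "/locations", message "must NOT have fewer than 1 items"
  console.error(validate.errors);
}
```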

galenanorth
May 19, 2016

bob dobbs is dead posted:

next time, do small samples first, do the caching, do cache clearance. dunno which scraping lib you're using

Well, I decided to cut my losses and reduce the search area from the entire U.S. down to a rectangle bounding Maine. There was a problem where the Walmart store locator returns zero results when it can't interpret the address, rather than returning an error, and I expect the other store locators which require addresses will do the same thing. I was able to work around it and detect all Walmart locations in Maine by instructing the program to go a level deeper and subdivide the grid unit into four grid units, so that one of the addresses generated by Google would likely be valid.

I implemented server response caching, though I still have to do some refactoring so that the program doesn't pause the way it does when it's actually requesting from the server, and that's going great

Edit: I couldn't find a scraping lib that handles geospatial scraping from store locators over a grid, aside from this GitHub project. I had to make several months of modifications: it was written like it was by someone used to another programming language, in terms of indentation and style; it kept track of a lot of statistics and logging information that I didn't need, which made it harder to read; and it had a lot of bugs directly in the algorithm, like doing a radius search with a circle inscribed in each square grid unit instead of circumscribing it, so that locations in the gaps between circles were missed. It didn't keep track of unique records, so there were duplicates. That sort of thing.
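a minimal sketch of the subdivide-on-empty idea described above; `searchNear` is a hypothetical stand-in for "hit the store locator at this point with this radius", and the key detail is that each square's search circle circumscribes the square (half its diagonal) so nothing falls into the gaps between circles:

```typescript
interface Box { north: number; south: number; east: number; west: number; }
interface Store { id: string; lat: number; lon: number; }

// hypothetical stand-in for querying the vendor's store locator
async function searchNear(lat: number, lon: number, radiusKm: number): Promise<Store[]> {
  return []; // a real version would fetch and parse the locator's response
}

const KM_PER_DEG = 111; // rough, but fine for sizing a search radius

async function scrape(box: Box, depth = 0, maxDepth = 4): Promise<Store[]> {
  const lat = (box.north + box.south) / 2;
  const lon = (box.east + box.west) / 2;

  // radius that circumscribes the box (half its diagonal), so adjacent
  // circles overlap instead of leaving uncovered gaps between them
  const halfH = ((box.north - box.south) / 2) * KM_PER_DEG;
  const halfW = ((box.east - box.west) / 2) * KM_PER_DEG * Math.cos((lat * Math.PI) / 180);
  const found = await searchNear(lat, lon, Math.hypot(halfH, halfW));

  // zero results can mean "no stores" or "locator couldn't geocode the point",
  // so split into four quadrants and go one level deeper before giving up
  if (found.length === 0 && depth < maxDepth) {
    const quads: Box[] = [
      { north: box.north, south: lat, west: box.west, east: lon },
      { north: box.north, south: lat, west: lon, east: box.east },
      { north: lat, south: box.south, west: box.west, east: lon },
      { north: lat, south: box.south, west: lon, east: box.east },
    ];
    const nested = await Promise.all(quads.map((q) => scrape(q, depth + 1, maxDepth)));
    // dedupe by id: overlapping circles will report the same store twice
    return [...new Map(nested.flat().map((s) => [s.id, s] as [string, Store])).values()];
  }
  return found;
}
```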

galenanorth fucked around with this message at 19:48 on Oct 25, 2018

cinci zoo sniper
Mar 15, 2013





i know about it. the real world adoption rate is 0, as far as i am concerned

Chalks
Sep 30, 2009

cinci zoo sniper posted:

also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it

XML has some good features, but on the other hand, CDATA

Shaggar
Apr 26, 2006

cinci zoo sniper posted:

also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it

attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element.

realistically you just enforce it all with schemas so who really cares

gonadic io
Feb 16, 2011

>>=

we're considering using this over avro

Soricidus
Oct 21, 2010
freedom-hating statist shill

cinci zoo sniper posted:

i know about it. the real world adoption rate is 0, as far as i am concerned

i asked a vendor about it recently, to which they said they totally used it, just not for the product I cared about

but then xml schema usage is similarly patchy ime

Zaxxon
Feb 14, 2004

Wir Tanzen Mekanik

gonadic io posted:

we're considering using this over avro

It's better than nothing but it's not particularly great. I don't have much experience with avro so I can't really compare. We use it at my job.

redleader
Aug 18, 2005

Engage according to operational parameters

Main Paineframe posted:

maintaining a large xslt-based project is living hell

unless there's some kind of tooling that makes it easier to deal with, but if so, we don't use it here

it's just notepad++ and "find in file" all the way down

oh, didn't realise i had a co-worker here

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

cinci zoo sniper posted:

i know about it. the real world adoption rate is 0, as far as i am concerned

i provided schema files for an api we provided to an outside contractor but i don't think they used those

i used it in our own tests though

Nomnom Cookie
Aug 30, 2009



Shaggar posted:

attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element.

realistically you just enforce it all with schemas so who really cares

realistically, as someone who consumes XML, there are three cases

1. there is no schema
2. there is a schema but it is wrong
3. the xml is generated by serialization and the schema is redundant

the correct approach in all three cases is "read it into a tree and grab what you need with xpath"
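a minimal sketch of the "read it into a tree and xpath it" approach, using the browser-standard DOMParser/XPath APIs (in node you'd reach for packages like `@xmldom/xmldom` plus `xpath` instead; the document here is made up):

```typescript
const xml = `
  <order id="42">
    <line sku="A-1"><qty>2</qty></line>
    <line sku="B-7"><qty>5</qty></line>
  </order>`;

const doc = new DOMParser().parseFromString(xml, 'application/xml');

// grab every sku with a quantity greater than 3, schema or no schema
const result = doc.evaluate(
  '//line[number(qty) > 3]/@sku',
  doc,
  null,
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
  null,
);

for (let i = 0; i < result.snapshotLength; i++) {
  console.log(result.snapshotItem(i)?.nodeValue); // "B-7"
}
```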

NihilCredo
Jun 6, 2011

restrain anger in every way possible:
that one thing will defame you more than many virtues will commend you

Kevin Mitnick P.E. posted:

realistically, as someone who consumes XML, there are three cases

1. there is no schema
2. there is a schema but it is wrong
3. the xml is generated by serialization and the schema is redundant

the correct approach in all three cases is "read it into a tree and grab what you need with xpath"

case 2 means being able to say 'hey, we have objective evidence that the problem isn't on our side and it wasn't a case of miscommunication, it's *your* software that is not respecting the specs' which saves a crazy amount of time and therefore money

Shaggar
Apr 26, 2006
in my case it's
1) it matches schema so it gets parsed
2) it doesn't match schema so it doesn't get parsed
3) client pays to transform their thing into something that matches schema so it can be parsed.

cinci zoo sniper
Mar 15, 2013




Shaggar posted:

attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element.

realistically you just enforce it all with schemas so who really cares

ah, yeah i see what cardinality refers to now. i often deal with multiples of an element - say, an address element has 5 address blocks noting the places where said person has lived over the years, so that is fine for what i work with

cinci zoo sniper
Mar 15, 2013




Soricidus posted:

i asked a vendor about it recently, to which they said they totally used it, just not for the product I cared about

but then xml schema usage is similarly patchy ime

my luck with xml schemas is much better, all vendors so far have given them as soon as asked. one small vendor asked us to sign an extra nda specifically for the schema before providing it, but that's it - also not my thing to care about, i just forward these things to legal and say "please sign", since we have a bunch of nda agreements with each vendor anyway, on top of the general legal requirements we adhere to as a company that deals with pii

Nomnom Cookie
Aug 30, 2009



NihilCredo posted:

case 2 means being able to say 'hey, we have objective evidence that the problem isn't on our side and it wasn't a case of miscommunication, it's *your* software that is not respecting the specs' which saves a crazy amount of time and therefore money

yeah if that's an option then have at it i guess. in our case if we try to tell data providers their data is poo poo their response is gonna be "lol buy a spoon then" and possibly referring us to other people who have purchased spoons to eat their poo poo with. or maybe they just have nfk and couldn't fix it if they wanted to
