|
H2Eau posted:i guess I'm just spoiled, because we break our js frontends up into tiny pieces, so a fresh npm i and webpack takes a whole 15-30 seconds 30 seconds is too long imo, our incremental builds take about that long and it's long enough to break your flow. ideally you'd be down in the single digits.
|
# ? Oct 25, 2018 02:02 |
|
|
DaTroof posted:that was my first guess. possible bonus, someone added `rm -rf node_modules` to the start of the build because they couldn't figure out how else to resolve dependency changes
|
# ? Oct 25, 2018 02:03 |
|
Heh, rm -rfing the whole node_modules directory sure does a lot to goose npm metrics
|
# ? Oct 25, 2018 02:12 |
|
CRIP EATIN BREAD posted:other cool thing: having a support@companyname.com creating JIRA support tickets automatically and somehow one of our vendors for our satellite comm stuff started sending emails there for some stupid conference. ops recently migrated our local confluence install to cloud atlassian version and it is somehow even slower than our already loving insanely slow jira
|
# ? Oct 25, 2018 02:55 |
|
CRIP EATIN BREAD posted:i looked at the build process and i see that there's something called rimraf and it's a javascript port of rm -rf........... holy poo poo I was right, it really is Mirror Universe Lisp
|
# ? Oct 25, 2018 05:40 |
|
brap posted:there's nothing particularly wrong with what the package does. yes there is, JavaScript doesn't need to touch the filesystem at all; it's for animating web pages and client-side validation of their form elements
|
# ? Oct 25, 2018 05:42 |
|
i like xslt. i also may have stockholm syndrome
|
# ? Oct 25, 2018 07:18 |
|
xslt is a really cool idea with the worst syntax ever created for anything edit: like somebody read about correspondences between sexps and xml and thought woah that means xml is code
|
# ? Oct 25, 2018 07:28 |
|
I have a database that stores json as strings. These jsons are taken out of the database and wrapped in an XML format to be sent over http. Bonus: the json values are CSV strings. Anyway, how do you tell people you work with that just because they can't solve the problem doesn't mean the problem isn't simple? It's literally a mocking expectation that is not being hit with the correct expected arguments. Me: have you broken the expectation down to not use the convenience method, but instead use a closure so that you can inspect that the objects are the same and the object comparison algorithms are sane? Them: no, that can't be the problem, the problem should be this obscure stack overflow page from 2002.
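the closure/inspection approach described above can be sketched with Python's unittest.mock; the publisher/payload names here are invented for illustration, not from the actual codebase:

```python
from unittest.mock import Mock

def send_report(publisher, payload):
    # trivial stand-in for the unit under test
    publisher.publish(payload)

publisher = Mock()
send_report(publisher, {"id": 1, "rows": [1, 2]})

# convenience method: fails with one opaque diff if the objects mismatch
publisher.publish.assert_called_once_with({"id": 1, "rows": [1, 2]})

# closure-style inspection: pull the actual argument out and compare it
# piece by piece, which also exercises the objects' own __eq__ logic
(actual,), _kwargs = publisher.publish.call_args
assert actual == {"id": 1, "rows": [1, 2]}
assert actual["rows"] == [1, 2]
```

the point being that the second style tells you *which* field differs instead of just "expected call not found".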
|
# ? Oct 25, 2018 08:10 |
|
2 month anniversary of contacting some researchers about an article they wrote & the software they said would be made available. the professor keeps saying it'll be ready "next week" and the students are silent as the grave when he CCs them lol good thing it's not business critical for us, just a cool thing we wanna try out
|
# ? Oct 25, 2018 10:35 |
|
c tp s: when i grow up i want to marry System.Collections.Concurrent.ConcurrentQueue<T>
|
# ? Oct 25, 2018 13:55 |
|
NihilCredo posted:c tp s: when i grow up i want to marry System.Collections.Concurrent.ConcurrentQueue<T> Get in line, buddy. (Better move to Utah?)
|
# ? Oct 25, 2018 14:05 |
|
maintaining a large xslt-based project is living hell unless there's some kind of tooling that makes it easier to deal with, but if so, we don't use it here it's just notepad++ and "find in file" all the way down
|
# ? Oct 25, 2018 14:11 |
|
prisoner of waffles posted:Get in line, buddy. Let's all race to get in line!
|
# ? Oct 25, 2018 15:39 |
|
Main Paineframe posted:maintaining a large xslt-based project is living hell lomarf, sorry for you're lots e: genuinely sorry, main paineframe, u seem like a cool and learned poster
|
# ? Oct 25, 2018 16:21 |
|
last time I had to do any "proper" xml I tried using the visual studio tools and, well, that sucked. altova xmlspy used to be good but that was 10 years ago. also I once spent about 2 days writing regex transforms in xslt 2.0, which wasn't supported by anything at all at the time except some weird command line package
|
# ? Oct 25, 2018 18:32 |
|
xml isnt bad but attributes vs elements is real annoying and honestly json just does it better
|
# ? Oct 25, 2018 18:35 |
BIGFOOT EROTICA posted:xml isnt bad but attributes vs elements is real annoying and honestly json just does it better agreed, the attributes vs elements inconsistency irks me. imo, for document storage (rather than transport) no such thing as an xml attribute should exist. just make a tree, however nested and deep, and if you need some attributive properties - add them as elements
|
|
# ? Oct 25, 2018 18:40 |
|
attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho
|
# ? Oct 25, 2018 18:51 |
|
BIGFOOT EROTICA posted:xml isnt bad but attributes vs elements is real annoying and honestly json just does it better json is frustratingly close to being decent. allow comments and allow trailing commas and maybe actually define some rules on how numbers should be interpreted and it’d be pretty good
|
# ? Oct 25, 2018 18:51 |
|
prisoner of waffles posted:attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho xsd can enforce both requirements though, and xsd is one of the best reasons to use xml anyway. without an xsd, an automatic checker can only tell you "i don't have the slightest clue which elements appear in this document and which attributes they may or may not have, but by God you can rest assured none of them appear more than once", which isn't terribly useful. with an xsd, it's a completely redundant feature that introduces an entirely parallel system of syntax/grammar/parsing/navigating NihilCredo fucked around with this message at 18:59 on Oct 25, 2018 |
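for illustration, a hedged sketch of the cardinality point: the element and attribute names below are made up, but it shows how an xsd expresses the 0..N bound on elements that plain well-formedness checking can't, while attribute uniqueness comes for free from XML itself:

```xml
<!-- hypothetical fragment: the "id" attribute is unique per element by
     XML rules anyway, while minOccurs/maxOccurs give <item> an explicit
     0..10 cardinality that only a schema can enforce -->
<xs:element name="order">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="item" type="xs:string" minOccurs="0" maxOccurs="10"/>
    </xs:sequence>
    <xs:attribute name="id" type="xs:integer" use="required"/>
  </xs:complexType>
</xs:element>
```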
# ? Oct 25, 2018 18:56 |
|
schema-less formats are bad though
|
# ? Oct 25, 2018 18:57 |
prisoner of waffles posted:attributes are guaranteed to have 0 or 1 cardinality and their order is insignificant tho what i'm getting at is that in my work i have several dozen xml document providers i work with, and each of them does either the top or the bottom of the pic above, and it's a pain in the rear end that i have to retrieve data one way there, and another way here. i could rephrase my complaint as follows - for a given business document, feel free to use attributes for technical properties, but use strictly elements for business information. that is about as arbitrary as the xml standard defines the use of elements vs attributes, so my wish is for attributes to not exist at all and leave no options to people other than to do it the bottom way. in an abstract world, i'd have everyone do like in the 3rd example here
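the two provider styles being contrasted (data in attributes vs data in child elements) can be sketched like this; the person/name/city fields are invented for illustration:

```python
import xml.etree.ElementTree as ET

# same record, two provider styles: consumers must know which one
# a given provider chose and use a different lookup for each
attr_style = ET.fromstring('<person name="Ann" city="Riga"/>')
elem_style = ET.fromstring(
    "<person><name>Ann</name><city>Riga</city></person>"
)

assert attr_style.get("name") == "Ann"        # attribute lookup
assert elem_style.findtext("name") == "Ann"   # element lookup
```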
|
|
# ? Oct 25, 2018 19:03 |
also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it
|
|
# ? Oct 25, 2018 19:05 |
|
https://json-schema.org/
|
# ? Oct 25, 2018 19:06 |
|
bob dobbs is dead posted:next time, do small samples first, do the caching, do cache clearance. dunno which scraping lib you're using Well, I decided to cut my losses and reduce the search area from the entire U.S. down to a rectangle bounding Maine. There was a problem where the Walmart store locator returns zero results when it can't interpret the address, rather than returning an error, and I expect the other store locators which require addresses will do the same thing. I was able to work around it and detect all Walmart locations in Maine by instructing the program to go a level deeper and subdivide the grid unit into four grid units, so that one of the addresses generated by Google would likely be valid. I implemented server response caching, though I still have to do some refactoring so that the program doesn't have to pause like it does when it's requesting from the server, and that's going great

Edit: I couldn't find a scraping lib that handles geospatial scraping from store locators over a grid, aside from this GitHub project. I had to make several months of modifications because it was written like it was by someone used to another programming language, in terms of indentation and style, and it kept track of a lot of statistics and logging information that I didn't need, which made it harder to read. It had a lot of bugs directly in the algorithm, like doing a radius search consisting of a circle in a square unit instead of a square unit in a circle, so that locations in the gaps between circles were missed. It didn't keep track of unique records, so there were duplicates. That sort of thing. galenanorth fucked around with this message at 19:48 on Oct 25, 2018 |
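the subdivide-on-empty idea described above can be sketched roughly like this; the locator callback, coordinate scheme, and minimum cell size are all invented for illustration, not from the actual program:

```python
def search_cell(locator, west, south, east, north, min_size=0.05):
    """Query one grid cell; if it comes back empty (which may mean
    "address not understood" rather than "no stores"), split it into
    four quadrants and retry, down to a minimum cell width."""
    results = locator(west, south, east, north)
    if results:
        return results
    if (east - west) <= min_size:
        return []  # genuinely empty (or unresolvable) cell
    mid_x = (west + east) / 2
    mid_y = (south + north) / 2
    found = []
    for w, s, e, n in [
        (west, south, mid_x, mid_y), (mid_x, south, east, mid_y),
        (west, mid_y, mid_x, north), (mid_x, mid_y, east, north),
    ]:
        found.extend(search_cell(locator, w, s, e, n, min_size))
    return found
```

a real version would also need the caching and deduplication mentioned above, since quadrant queries near cell edges can return the same store twice.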
# ? Oct 25, 2018 19:09 |
i know about it. the real world adoption rate is 0, as far as i am concerned
|
|
# ? Oct 25, 2018 19:11 |
|
cinci zoo sniper posted:also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it XML has some good features, but on the other hand, CDATA
|
# ? Oct 25, 2018 19:14 |
|
cinci zoo sniper posted:also im not sure i understand what xml attribute cardinality refers to (language barrier generally), and i concur with posters above that json should have normal number handling, comments, and formal schema support for me to like it attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element. realistically you just enforce it all with schemas so who really cares
|
# ? Oct 25, 2018 19:15 |
|
we're considering using this over avro
|
# ? Oct 25, 2018 19:17 |
|
cinci zoo sniper posted:i know about it. the real world adoption rate is 0, as far as i am concerned i asked a vendor about it recently, to which they said they totally used it, just not for the product I cared about but then xml schema usage is similarly patchy ime
|
# ? Oct 25, 2018 19:19 |
|
gonadic io posted:we're considering using this over avro It's better than nothing but it's not particularly great. I don't have much experience with avro so I can't really compare. We use it at my job.
|
# ? Oct 25, 2018 19:20 |
|
Main Paineframe posted:maintaining a large xslt-based project is living hell oh, didn't realise i had a co-worker here
|
# ? Oct 25, 2018 19:23 |
|
cinci zoo sniper posted:i know about it. the real world adoption rate is 0, as far as i am concerned i provided schema files for an api we provided to an outside contractor, but i don't think they used those. i used it in our own tests though
|
# ? Oct 25, 2018 19:23 |
|
Shaggar posted:attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element. realistically, as someone who consumes XML, there are three cases: 1. there is no schema, 2. there is a schema but it is wrong, 3. the xml is generated by serialization and the schema is redundant. the correct approach in all three cases is "read it into a tree and grab what you need with xpath"
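that tree-plus-xpath style can be sketched with Python's stdlib ElementTree (which supports a subset of XPath); the order document here is invented:

```python
import xml.etree.ElementTree as ET

# ignore the schema entirely: parse into a tree, grab only what you need
doc = ET.fromstring("""
<order id="42">
  <customer><name>Ann</name></customer>
  <item sku="a1"/><item sku="b2"/>
</order>
""")

order_id = doc.get("id")                                # "42"
name = doc.findtext("customer/name")                    # "Ann"
skus = [item.get("sku") for item in doc.findall(".//item")]
```

this degrades gracefully across all three cases above, since nothing in it depends on a schema being present or correct.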
|
# ? Oct 25, 2018 19:33 |
|
Kevin Mitnick P.E. posted:realistically, as someone who consumes XML, there are three cases case 2 means being able to say 'hey, we have objective evidence that the problem isn't on our side and it wasn't a case of miscommunication, it's *your* software that is not respecting the specs' which saves a crazy amount of time and therefore money
|
# ? Oct 25, 2018 19:38 |
|
in my case its 1) it matches schema so it gets parsed, 2) it doesn't match schema so it doesn't get parsed, 3) client pays to transform their thing into something that matches schema so it can be parsed.
|
# ? Oct 25, 2018 19:40 |
Shaggar posted:attributes are unique so you can only have 0 or 1 attribute with the same name on an element. you can have 0 to N elements with the same name under an element. ah, yeah, i see what cardinality refers to now. i often deal with multiples of an element - say, an address has 5 address blocks to note places where said person lived over the years, so that is fine for what i work with
|
|
# ? Oct 25, 2018 19:41 |
Soricidus posted:i asked a vendor about it recently, to which they said they totally used it, just not for the product I cared about my luck with xml schemas is much better, all vendors so far have given them as soon as asked. one small vendor asked us to sign an extra nda prior to providing the schema, for it specifically, but that's it - also not my thing to care about, i just forward these things to legal and say "please sign", since we have a bunch of nda agreements with each vendor anyways, on top of general legal requirements we adhere to as a company that deals with pii
|
|
# ? Oct 25, 2018 19:43 |
|
|
NihilCredo posted:case 2 means being able to say 'hey, we have objective evidence that the problem isn't on our side and it wasn't a case of miscommunication, it's *your* software that is not respecting the specs' which saves a crazy amount of time and therefore money yeah if that's an option then have at it i guess. in our case if we try to tell data providers their data is poo poo their response is gonna be "lol buy a spoon then" and possibly referring us to other people who have purchased spoons to eat their poo poo with. or maybe they just have nfk and couldn't fix it if they wanted to
|
# ? Oct 25, 2018 19:46 |