Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
The Dunning-Kruger effect. That is my only explanation for this stuff.

Anyone is entitled to write crappy code. We all do it when prototyping something or when the schedule demands we get a product out the door.

When a good programmer gets reports that his code is running slowly, or realizes he can't add a feature because of a bad design decision, he fixes the code. He runs a profiler on it. He researches better algorithms. Especially when that person has made millions of dollars and now has his or her full time to devote to the product, plus the ability to hire additional developers.

Code's history is excusable. Its present condition is not.

That's what gets me: people who don't just write crappy code - they are proud of it and continue to actively avoid learning a better way of doing things. The only explanation is that their incompetence is so pronounced they are unable to recognize just how incompetent they are.


edit: removed derail to focus on the real point

Simulated fucked around with this message at 20:30 on Apr 9, 2012


Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

dis astranagant posted:

I'm still having trouble wrapping my head around how that's even supposed to work. The words all parse but I just can't make the logic happen.

The result of the first ternary expression is fed in as the boolean test of the next one, so imagine that the first test has parens around it and its result is coerced to a boolean to feed the next ternary on the right.
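Roughly, in C# terms (PHP is the one that parses the chain left-to-right; the values and the Truthy() helper are made-up stand-ins for PHP's string-to-bool coercion, since C# won't do it for you):

```csharp
using System;

class TernaryChain
{
    // Hypothetical stand-in for PHP's truthiness coercion.
    static bool Truthy(string s) => !string.IsNullOrEmpty(s) && s != "0";

    static void Main()
    {
        string arg = "B";
        // PHP parses  a ? b : c ? d : e  as  (a ? b : c) ? d : e,
        // so the chain evaluates left to right:
        string step1 = (arg == "B") ? "bus" : "car";      // first ternary runs...
        string result = Truthy(step1) ? "horse" : "feet"; // ...and its RESULT feeds the next test
        Console.WriteLine(result); // "horse" either way - both "bus" and "car" are truthy
    }
}
```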

Having trouble understanding why it outputs horse means you have functioning brain cells left.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Suspicious Dish posted:

They're considering adding a promise/Deferred to the standard library instead of using continuation passing, which should help things a bit. Unfortunately, you can't emulate inlineCallbacks.

Event loops aren't that big a deal when debugging, you just have to learn to deal with asynchronous programming.

There are two major problems with node.js, in my opinion: 1) JavaScript has no facilities for asynchronous programming outside of continuation passing (if you use Mozilla JS, there are generators, which means you can implement a simple trampoline). And 2) this may have changed considerably since 2010, but when I last tried to implement an application in node, I felt like I was constantly catching up with very rapid stdlib changes. I was handling multipart uploads, and twice in 2-3 months _ry decided that the multipart form API wasn't good enough and rewrote it from scratch.

I don't think _ry ever wrote anything based on node.js in that timeframe that ran in production. This goes back to my belief that when you're not working in the trenches of your own platform every day, you cannot possibly know what the best additions to it are.

And that's why C# is adding async/await. Let the compiler turn your code inside out later.
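For anyone who hasn't seen it, here's the before/after in miniature (the method names are mine, not from any announcement):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class AsyncSketch
{
    static readonly HttpClient client = new HttpClient();

    // Continuation passing: you turn the code inside out by hand.
    static Task<int> GetLengthCps(string url) =>
        client.GetStringAsync(url).ContinueWith(t => t.Result.Length);

    // async/await: same control flow, but the compiler builds the
    // state machine and wires up the continuation for you.
    static async Task<int> GetLengthAsync(string url)
    {
        string body = await client.GetStringAsync(url);
        return body.Length;
    }

    static async Task Main() =>
        System.Console.WriteLine(await GetLengthAsync("https://example.com"));
}
```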

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
Apparently all database columns are varchar too. That sure explains a lot of the MySQL databases I see. Foreign keys? Pfffffft, that's for losers.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Huragok posted:

Wait, all textual columns are varchar, or all columns are varchar?

All. That's the only explanation I can think of for why auto-converting all string comparisons to numbers "makes things easier" when working with a database. My naive approach would be to store numbers in the database and convert the request parameter to a number explicitly, thus performing a basic sanity check against a random value submitted by an anonymous internet user, but then again I'm not a PHP developer.
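In C# terms, the "naive" approach is just this (parameter values invented for the sketch):

```csharp
using System;

class ParamCheck
{
    // Explicit conversion doubles as a sanity check: garbage from an
    // anonymous internet user fails the parse instead of being silently coerced.
    static int? ParseId(string raw) =>
        int.TryParse(raw, out int id) && id > 0 ? id : (int?)null;

    static void Main()
    {
        Console.WriteLine(ParseId("42"));                // 42
        Console.WriteLine(ParseId("42 OR 1=1") is null); // True - rejected
    }
}
```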

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
I was making a joke about people who just create varchar columns for everything because I (wrongly) assumed the MySQL API couldn't be that stupid.

Then I made a joke about the "naive" approach because even an entry-level programmer should know that you should perform both sanity checks and validity checks on *anything* sent to you, especially when it comes from the web.

My attempt at humor has been outfoxed by awful open source software yet again.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
ASP.Net will let you do the same thing, and so will the MVC partial views. It's just shorthand for wrapping the text between the code bits in a string and then doing an output stream write of that string.
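i.e. a template like `<p>Hello @name</p>` conceptually boils down to something like this (hypothetical generated method; real Razor output has more plumbing):

```csharp
using System.IO;

class TemplateSketch
{
    static void Execute(TextWriter output, string name)
    {
        output.Write("<p>Hello ");  // literal text between the code bits
        output.Write(name);         // the code bit
        output.Write("</p>");       // trailing literal text
    }

    static void Main() => Execute(System.Console.Out, "world");
}
```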

Wait, it's PHP; it probably does something much dumber than that...

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Strong Sauce posted:

prepared statements/parameterized queries have only been in PHP since ~5.0, that's not exactly a solution I could have used.

:what:

Holy gently caress.
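For reference, this is all a parameterized query is - sketched here with SqlClient (table and column names invented; assumes an open connection):

```csharp
using System.Data.SqlClient;

class SafeQuery
{
    static string GetUserName(SqlConnection conn, int userId)
    {
        // The parameter travels separately from the SQL text, so user input
        // can never be reinterpreted as SQL - no concatenation, no injection.
        using (var cmd = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", userId);
            return (string)cmd.ExecuteScalar();
        }
    }
}
```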

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
I try to answer uncommon questions on SO, but then again I only have like 130 points/karma/epeen/whatever. I don't answer questions to get points and don't give a poo poo how many I have. Just help out your fellow devs and be satisfied in doing a good job.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
The real horror is that those things are still live (as of this posting). Google alerts, CVE alerts, etc anyone? Anyone?

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Zombywuf posted:

CGI is a perfectly good way of running code.

True, just not your code :razz:

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Zombywuf posted:

And continuing the Adobe theme, OMG WTF PDF. The land where remote code execution is a feature.

OMG is loving right. I watched the whole talk and drat. Not only is Acrobat a piece of poo poo but the file format itself is insane. She had one file that was simultaneously a working Windows calculator EXE, a PDF that executed JavaScript, and a ZIP file.

PDF itself is ambiguous - you can replace objects just by appending new versions at the end. You can also specify strings either length-first or by an end token, so you can do fun stuff like putting one object at the front of the file with a really long declared length but an early end-of-stream token embedded in it; one parsing mode will read the rest of the file as part of that object, while Acrobat's renderer will use the correct data. You can also embed commands that execute arbitrary programs with arbitrary arguments. Only the most recent version of Acrobat even prompts you first - older versions just blindly execute them when the document is opened.

It gets crazier. PDF and ZIP both support deflate, so you can hide a malicious payload in the ZIP as a deflated PDF exploit, then hide the PDF headers and tags in the ZIP file attribute fields so a virus scanner will decompress it and find no PDF documents to examine: ZIP ignores the comments/attributes, and the ZIP bits are enclosed in a PDF object stream. Parts of the PDF API can only be answered by rendering the document, meaning you can hide your encrypted stuff behind a key that is only available by loving rendering the PDF, including executing whatever other bits of code are necessary to render it properly (even including OpenGL calls). You can also have different bits that apply only to printers - yet another avenue of attack. Did I mention that PDFs can access databases via ADBC, with no security or restrictions on it? And that they can do XMLRPC/Ajax calls?


I am seriously considering never opening another PDF file.


P.S. She took a known two-year-old PDF exploit, performed a few trivially easy tricks, and it turned out that only two of 30+ antivirus packages identified it. Neither was a well-known name, meaning almost every single computer with Acrobat installed is 100% vulnerable.

Simulated fucked around with this message at 06:08 on May 7, 2012

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
The video is in four parts and somewhat annoying to watch because the presenter is obviously an inexperienced public speaker, but I suggest you watch it anyway. PDF is seriously a horror. At least as bad as PHP.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

shrughes posted:

You end up never knowing what format your code is really in, and that has side effects like making ad-hoc perl scripts for large renamings or refactorings much harder to write. That's not the only side effect; you surely get others, because you decided to make things complicated instead of keeping them simple.

I agree about not storing/editing ASTs but you are missing the point on this one.

The compiler has the best knowledge about what each symbol in the code means. It knows the difference between a type named Foo and a local variable named Foo, something that is extremely difficult to get right with a regex. That's part of the reason MS is exposing the compiler as a service in the next version of VS - so you can write code that writes code, with full access to the compiler and intellisense. It's an extremely powerful concept once you wrap your brain around it.

Bottom line: refactors should never be dumb text replacements... They should use the compiler's knowledge of the code to make sure the refactor is verifiably correct. Same for renames.
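To make that concrete, here's the sort of thing compiler-as-a-service enables, using the Roslyn API as it eventually shipped (the snippet being analyzed is made up):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class SymbolAwareness
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class Foo { static Foo instance; void M() { int Foo = 1; System.Console.Write(Foo); } }");
        var compilation = CSharpCompilation.Create("demo",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        // The compiler knows one "Foo" reference is the type and the other is
        // the local variable - something a regex can't reliably tell apart.
        foreach (var id in tree.GetRoot().DescendantNodes()
                               .OfType<IdentifierNameSyntax>()
                               .Where(n => n.Identifier.Text == "Foo"))
        {
            var symbol = model.GetSymbolInfo(id).Symbol;
            Console.WriteLine($"{id.SpanStart}: {symbol?.Kind}"); // NamedType, then Local
        }
    }
}
```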


Edit: as for diffs, why does it have to be one or the other? Why can't the diff tool use both textual analysis and compiler services to figure things out? I'm not aware of any such tool but I wish there were one. It is often immediately obvious that the diff tool has picked up a stray bit of code between two methods and made a hash of the file, where even rudimentary knowledge of the context would tell it that it was looking at two different methods. It could even trivially handle things like source code reorganization.

Simulated fucked around with this message at 02:49 on May 10, 2012

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Plorkyeran posted:

Extending the syntax generally breaks IDEs and such.

Hello attributes/annotations, nice to meet you!

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
Hey man, VB was doing automatic silent type coercion before it was cool :colbert:. Variant says get off my lawn, you drat kids!

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
If you control the output, use tab-separated. Tabs are almost *never* part of the valid data stream for these kinds of files. Why the hell would you use really common characters like commas or quotes? That would be like making the question mark your end-of-line character instead of... I dunno, the unprintable CR/LF, which are also almost never part of a valid data stream.

Hell is other programmers.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
It is truly the best kind of troll. Just enough crazy to keep you guessing. Bravo sir, bravo. :golfclap:

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
Why is having a machine-readable way to document data interchange (whether XML schema or the proposed JSON schema) a bad idea?

Without it, writing tools that interact with these APIs is really difficult, leaving the programmer to read the docs and code everything by hand. I wish I could point a tool at a URL and have it materialize classes matching the JSON data structure, complete with enums for flags, etc. That's one of the few good things about the WebService/WS way: just point at the WSDL URL and any modern tool can spit out all the boilerplate easily. My objection to XML is how drat complicated it is, not the idea in general.
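e.g. given a documented response shape like `{ "orderId": 850, "status": "NEW", "lines": [ { "sku": "X1", "qty": 2 } ] }` (a shape I just made up), the tool would materialize something like:

```csharp
using System.Collections.Generic;

// What a codegen tool could emit from a machine-readable schema,
// enums for the flag-like fields included:
enum OrderStatus { New, Pending, Shipped }

class OrderLine
{
    public string Sku { get; set; }
    public int Qty { get; set; }
}

class Order
{
    public int OrderId { get; set; }
    public OrderStatus Status { get; set; }
    public List<OrderLine> Lines { get; set; }
}
```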

It does make me wonder... Would an HTTP HELP verb serve such a purpose? Instead of everyone coming up with their own API doc format, could you figure out some way to support it directly? I haven't really thought this through, so there may be good reasons against it, but I like the idea of a standard machine-readable way to communicate about APIs and data structures.


As an aside, it seems like people who only ever deal with twitter-esque services are the ones who never see the need for anything even a little bit complicated. Not everything in the world is a simple picture upload or blog post; sometimes you have to support really complex things like 850 telephone service orders or student school records that require precision and/or complex amounts of data in complex relationships. Edit: which are a massive bitch to code by hand.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

xf86enodev posted:

This is btw the main reason why JSON is the best data exchange format.

I have to write these APIs, and just like relying on the user to make security decisions is an automatic failure, relying on the programmer to read and fully comprehend the docs is doomed from the start. I have no desire to reinvent XML (or God forbid XSD), but some basic ability to describe my API in a standardized way would be extremely helpful. Depending on the library/platform, it could even be automatic and verified by the compiler to match the code (to some degree).


Jonnty posted:

Are you suggesting some sort of automated parsing system? Excellent idea, we could call it a "parser" and put it in the standard library of Literally Every Language.

JavaScript's idea of what constitutes an object doesn't always mesh with other languages... And some platforms may or may not have JSON parsers. Everyone ends up reinventing the wheel over and over.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

xf86enodev posted:

There is a standardised way to describe an API, it's called English. Sure there's a margin of error depending on how well your dudes write and read but this margin grows smaller the better they cooperate with each other.
On the other hand making people try and comprehend specs goes a long way towards catching design errors early on. Relying too much on tools to autogenerate and verify your stuff only leads to a false sense of security of "Welp, it compiles. Time to move on"

The big selling point of XML was that it's machine-readable and platform-agnostic. In my opinion this is all moot as long as we still have people writing code. And that code is bound by esoteric implementation details. So the easier it is for humans to grasp the structure and layout of data the better.

So how's that hint/intellisense working out for you when you're four layers deep in a complex JSON structure and can't remember the field name? Or when the value is a choice among six possibilities? Just look it up on the web, right? So why bother having intellisense at all?

Most people who miss my point never have to work with data structures more complex than a twitter post, picture upload, or blog entry so they imagine the world to be just that simple. I'm not saying you are guilty of that though, it's just a general observation.

JSON is worth using because browsers have parsers for it and because, used RESTfully, it traverses proxies, firewalls, etc. It has other benefits, but those are the significant ones when you want your support to be as broad as possible. The same API can service some web widget and an integration partner, and you don't have a mile-long config file (ugh, WCF, you rear end in a top hat). I merely wished for an agreed-upon standard way to describe a REST API over HTTP and JSON data structures using JSON itself. I don't feel like that adds much complexity/overhead, and you are certainly free to ignore it if you don't have a use for it.


One example of what I am envisioning is a tool that can scan the JSON structure and automatically create an appropriate CoreData backing store and Objective-C classes, and insert the boilerplate RESTKit code for you, including stuff like enum values. That doesn't free you from writing the docs or having to understand the API, but it does give you caching, offline editing, querying, and a bunch of other stuff without having to manually create the entities and keep them in sync with the JSON, or manually create the RESTKit mapping.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

xf86enodev posted:

That's a good one. I don't use intellisense et al. Sure, I use auto-completion to speed up my typing but there just isn't a tool that can take away my responsibility to "get the big picture". Sometimes I take notes on a piece of paper if things get too complicated or on days when there are too many distractions.

But when data structures or pieces of business logic get too big to be handled by a human being, that's a design error.

So you are reframing your avoidance of productivity enhancements as somehow my failure to understand the big picture?

What does understanding the big picture have to do with using tools (or even code) to generate code? Are you presupposing that if I use a tool to auto-generate classes from specs, I can't possibly understand how the underlying mechanism and protocols work? You also seem to be mixing up "big picture" with "menial copy-paste from the API spec".

Who said the problems you are capable of imagining fully represent the desirable design spaces? Just because you haven't worked with complicated systems doesn't mean they don't exist or even that they are inelegantly designed.

I am trying to be careful about accusing you of being a rear end here, but it really seems like you are being one of those :smug: web 2.0 asses who think anything more complex than a twitter post is too complicated. News flash: you try designing an interface for accurately transmitting medical records and tell me you memorized all the categories, status codes, routing codes, etc. Or a student's school records. Or an 850 order for telephone service. Or a multi-hundred-GB multidimensional data set with dynamically calculated recursive properties on a distributed real-time system that versions, branches, and merges data (not code!). I've worked on all of these. Some were less elegant than others, but all were extremely complex real-world systems that no one person can ever know everything about, which is precisely why I loving focus on the big picture and let the compiler/tools do the menial poo poo for me (unless there is a low-level problem, in which case I disassemble the libraries, grab packets with wireshark, or do whatever else is needed, because using high-level tools doesn't mean I don't understand how the whole system works).

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
C#, ASP.Net MVC with the Razor engine, and Entity Framework 4 kick rear end. The nuget package support gives you a very fink/macports-style package manager interface for automatically pulling in dependencies. The Entity Framework power tools rock too - right-click, reverse engineer classes, and boom, it spits out POD objects corresponding to your existing database. Never write explicit data access code again, and with LINQ you can still create query expressions functionally that get translated into joins and the like under the hood.
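A taste of what that looks like (entities invented for the example; this is the DbContext flavor rather than EF4's ObjectContext, but the idea is the same):

```csharp
using System.Data.Entity;
using System.Linq;

class Customer { public int Id { get; set; } public string Name { get; set; } }
class Order { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }

class Shop : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

class Demo
{
    static void Main()
    {
        using (var db = new Shop())
        {
            // Composed functionally here, but translated into a single
            // SQL join under the hood - no hand-written data access code.
            var bigSpenders =
                from c in db.Customers
                join o in db.Orders on c.Id equals o.CustomerId
                where o.Total > 1000
                select new { c.Name, o.Total };

            foreach (var row in bigSpenders)
                System.Console.WriteLine($"{row.Name}: {row.Total}");
        }
    }
}
```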

C# is a great language to learn - with lambdas and LINQ, dynamic support, delegates (function pointers), type inference, async/await, etc., it has far surpassed Java and will give you a good stepping stone to the other C-family languages if you ever want to move that way.

Plus learning a new platform will break all those assumptions you have in your brain that come from always working on Unix, or Apache, or in dynamic languages, etc. Until you step into something completely different you won't realize how limited you were.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
PHP is a great example of mediocrity at work... Some dude created his own templating system, some other dudes hacked on it, and wanna-be coders everywhere adopted it en masse. Popularity then gets used as proof that it's great.

For reference, I saw all sorts of similar horrors from coders back when I was doing ASP classic/VBScript... including a whole host of Byzantine logic and "fun" type coercion crap. The difference is that VBScript was designed as a language by people who knew what they were doing, so it produces far fewer WTFs from the platform and language itself.

Did you love the mixing of logic and presentation in Classic ASP, but hate all that useless language design? PHP: party like it's 1998!

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
I've done this before for certain cases, but we always made the "system" IDs negative numbers so regular transactions could go on with IDENTITY/auto values and we could do identity inserts for the system records (which are infrequent enough that we can just scan the table for new IDs, since that path isn't concurrent either).

Of course when it is up to me I just add a column to store the record type like a sane person.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

necrobobsledder posted:

From what I've seen of Coffeescript, its primary use case is in helping write more OOP-like code using Javascript without getting lost in closure after closure.

I would argue that this is not necessarily a feature... at least not for someone who isn't a JavaScript expert. Forcing your brain to unlearn existing class-based OO systems and then really live and breathe prototypes+closures expands your understanding in the same way learning multiple spoken languages as a child does.

Then you can circle back around and learn to hate JavaScript all over again, like a parent who bent over backwards to give you everything in life, but was also horribly abusive. You love them but you also wish they would hurry up and die.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
I'm not surprised that people whose only RDBMS experience is with MySQL think that SQL is overrated. Most of the MySQL databases I come across have no indexes whatsoever. I explained what a view is and why you'd want one to a PHP "developer" recently and it was a literal :psyboom:

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
That website's not terrible at first glance but I haven't read it in depth.

The Wikipedia article isn't half bad either: http://en.wikipedia.org/wiki/Database_index

At its most basic level, anytime you have a medium-to-large table and you regularly run a query against a column (it appears in the WHERE clause), that column should be indexed. What I tell beginners is that an index is kinda like a hash table or dictionary, only instead of looking up objects you can look up database rows very quickly. Obviously that's not strictly true and there is a lot more to it than that but it gets the point across.

The best scenario is where you take your top X queries and create an index for each one with the columns that appear in the WHERE clause (and for Oracle/MySQL often in the same order). When you do that, the query can return all the matching rows in a trivial amount of time even if there are millions of rows, whereas a table scan could take minutes or more.

Another good rule of thumb is to index foreign keys, which will usually speed up your joins by quite a bit; if your database is any good at query optimization, it should be able to stitch together indexes across various joins, groups, etc. For example, if one of the tables in the join has 100 million rows, the best plan is probably to use whatever indexes that table has to filter it down to the 100 rows you care about before bothering to join against any of the other tables. SQL Server and Oracle will also split the query up and execute parts in parallel if possible; I can't recall whether MySQL does that yet.
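In practice it's a one-liner per index - table and columns invented for the sketch, shown here being run from C# since that's where I live:

```csharp
using System.Data.SqlClient;

class IndexSetup
{
    // Hypothetical table: Orders(Id, CustomerId, Status, CreatedAt).
    static void CreateIndexes(SqlConnection conn)
    {
        // Index the columns your top queries filter on, in WHERE-clause order...
        Run(conn, "CREATE INDEX IX_Orders_Status_CreatedAt ON Orders (Status, CreatedAt)");
        // ...and the foreign key your joins go through.
        Run(conn, "CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId)");
    }

    static void Run(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}
```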

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

ToxicFrog posted:

Except when doing so would make your code cleaner, more concise, or easier to debug, in which case the compiler will just take a steaming poo poo in your mouth and tell you to convert by hand.

There are a lot of things I dislike about Java but its schizophrenia regarding objects vs primitives probably heads the list.

Oh poo poo, C# has for-loop iterators that use almost identical syntax to JavaScript... Let's deliberately pick a different syntax.

Oh poo poo, C# added Generics... Let's add them too. Oh all these generic parameters? gently caress 'em, it's all Object at runtime baby!

Oh poo poo, C# added lambda expressions, closures, and functions as first-class objects... Well, who wants to actually persist variables modified in the outer scope? That poo poo's for sperglords. We'll have our own lambdas, with blackjack, and hookers... In fact, forget the lambdas. And the blackjack. Eh, gently caress it, forget the whole thing.

Repeat ad-infinitum.

P.S. what's a Delegate? :v:

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

rjmccall posted:

C# added generics about a year after Java did, and their decision to use reified generics instead of type erasure forced a huge compatibility break that was actually a serious problem for C# programmers at the time (and still noticeably bloats C# deployments). The Java developers decided they really didn't have that option, given that they had a large base of deployed software that they cared about.


Java added closures via anonymous inner classes in 1997, in Java 1.1. When they did so, they decided on a particular, somewhat conservative model for capturing variables. Since then, they've decided that all of their closure-like extensions should behave essentially the same way, i.e. no mutable captures.

You can fault them for these decisions, but saying the language features were added in response to C# is pretty dumb.

You know I think you're a genius, dude; I wasn't serious, more lamenting the design-by-committee approach and eternally slow pace, something C++ has also long suffered from. Actually, that applies to a lot of things. There's a huge benefit to having a single group make decisions (see Objective-C and LLVM from Apple, or C# from MS), but it has drawbacks too.

I look at Java programmers the way people in this thread look at PHP programmers. Once you've used something much better, it's hard to go back. Java was a good step forward but it hasn't kept up with the times. I'm also starting to understand why the LISP people are so insufferable... Once you start using functions as first-class objects, it opens up whole new ways to write better code with less boilerplate; anything else seems primitive by comparison.
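Concretely, this is the kind of thing I mean - note the mutable capture, which is exactly what Java's anonymous-inner-class model refuses to allow (toy example, obviously):

```csharp
using System;

class Closures
{
    static void Main()
    {
        int count = 0;                      // ordinary outer-scope local
        Action increment = () => count++;   // the lambda captures the variable
                                            // itself, not a frozen copy of it
        increment();
        increment();
        Console.WriteLine(count);           // 2 - the function is a first-class
                                            // value and the capture is mutable
    }
}
```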


I'm not sure what you're referencing with the C# transition. The runtimes install side by side and you can use old assemblies in the newer one, so it was relatively painless; now that we're on the other side, it's a non-issue. On bloat, the runtime only instantiates a generic template once for all reference types, so I'm not sure what you're referring to there. For value types it instantiates once per value type, to avoid the automatic box/unbox people were discussing earlier.


Edit: just to be clear, I do not think Java stole, borrowed, or did anything in response to C#, and neither language is truly new as they're in the C family and borrow from a lot of earlier work.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Plorkyeran posted:

FWIW I don't think anyone is particularly happy about having things like PILE OF POO in Unicode, but a universal character set that supports everything is sort of forced to include every retarded thing someone has decided to encode as characters.

Like everything else in Unicode, once you know the history it makes a ton of sense and you can't imagine it any other way.

When Apple was designing the iPhone, it turned out that Japanese Emoji were non-standard and unique to different carriers, so sending an Emoji to someone on another network was impossible. They proposed making their Emoji glyphs part of the Unicode standard (and to my knowledge donated them for public use). I believe Google also helped out. Thus we now have 722 carrier-agnostic, platform-agnostic standardized glyphs, including a pile of poop. (Some were explicitly included to map to the old carrier-specific symbols.)

This is precisely what Unicode is for, otherwise you end up with incompatible nation/vendor/carrier/whatever-specific character sets.


edit: On Unicode in general, it nearly broke some of my coworkers' brains when I explained that 0x82 is not automatically the accented "é". Extended-ASCII files were being read as UTF-8 and coming into Unicode with the missing-glyph symbol everywhere, because UTF-8 only matches ASCII for 0-127; to read the files properly you had to make the user tell you the source codepage, so that 0x82 "é" could be mapped to the appropriate Unicode code point. They literally could not understand how 0x41 isn't "A", it's just a number that happens to accidentally* map to the same code point in ASCII and UTF-8.

I decided that explaining how Unicode is just an abstract list of code points, and how you need fonts with glyphs to draw them and encodings to process strings, was a bridge too far.

*purposefully, I know
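If you want to watch the mismatch happen, a quick C# demo (codepage 850 stands in for whatever legacy codepage the files were really in; on .NET Core you'd have to register CodePagesEncodingProvider first):

```csharp
using System;
using System.Text;

class CodepageDemo
{
    static void Main()
    {
        byte[] raw = { 0x82 }; // é in DOS codepage 850, NOT a valid standalone UTF-8 byte

        // Decoded with the codepage the file was actually written in:
        Console.WriteLine(Encoding.GetEncoding(850).GetString(raw)); // é

        // Misread as UTF-8: invalid sequence, replaced by the missing-glyph U+FFFD.
        Console.WriteLine(Encoding.UTF8.GetString(raw)); // �
    }
}
```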

Simulated fucked around with this message at 03:57 on Mar 19, 2013

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

comedyblissoption posted:

You just defended a pile of poo poo being in unicode.

Well... that's my point. It seems stupid at first, but when you realize it enabled Emoji to be cross-carrier and cross-platform, it makes total sense. The real horror is whichever Japanese carrier included a pile of poop in its original implementation.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

yaoi prophet posted:

I mean, I guess the tone of your bug report doesn't matter if you don't care about it actually being fixed and just want sweet sweet internet points. Where did this weird trend of blaming Tumblr for everything bad come from?

Overly-sensitive social justice warriors who constantly try to one-up each other about how totally inclusive and equality-minded they are while simultaneously attacking everyone else they think beneath them, or the ones who see signs of *insert issue here* even in everyday mundane actions. For some reason this stuff has really manifested itself in certain corners of Tumblr.

You non-PoC cis-privileged scumbag.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
When you don't know how to teach/manage or measure performance, just pick some random things you *can* measure and use that.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Jabor posted:

Not just JVM optimizations, CPU-level optimizations can break this as well (unless you want to kill performance by having your compiler put memory barriers in what looks like single-threaded code).

Especially once you go outside of x86-land and its almost-strict memory ordering.

Bingo... Just because the JVM (or CLR) promises not to reorder things doesn't mean poo poo to the CPU. Now that even your phone is multi-core, the phrase "concurrent" really loving means concurrent. The local CPU may never even see an inconsistent view but the thread may run on a different core that does. Don't be clever, just use synchronization primitives in a straightforward way. The number of times a profiler showed synchronization as the bottleneck compared to the number of times people claimed it was must be around 1:100.
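In other words, write the boring version. A minimal sketch:

```csharp
class Counter
{
    private readonly object gate = new object();
    private long value;

    // No volatile tricks, no hand-rolled fences: take the lock on both
    // the write path and the read path and let the runtime handle the rest.
    public void Increment() { lock (gate) value++; }
    public long Read()      { lock (gate) return value; }
}
```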

Programs are meaninglessly executed by meaningless machines producing meaningless output exactly as designed. Concurrency is a great way to prove how little the computer adheres to your idea of what is/is not possible or logical.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Suspicious Dish posted:

It's implemented at the filesystem driver level, so unless something is trying super hard to prevent you from using forward slashes, it should just work.

The real horror is MAX_PATH, a gift that will keep on giving.

As runner-up, I nominate varargs/va_list. No, don't update the ABI or come up with some way to know the number and types/sizes of the arguments; just make every single function and every user of those functions manually keep track of two pieces of state: the metadata about the argument list and the actual arguments themselves. Why specify information once when twice will do?

Actually, strike that - as far as specifying information twice goes, I nominate C's headers and #include as the ultimate waste of time. How many millions of hours of programmer effort can we waste? It's a giant study, only you aren't compensated and you can't opt out until you die. And no one gets a doctoral thesis out of it.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Sinestro posted:

Using ClearCase is like volunteering to get mugged so you can then have an icepick jammed in your eye socket.

And then the entire city is leveled by a nuclear explosion, instantly incinerating your lifeless husk of a body.

I can't decide which is worse: the fat client ("an NFS share masquerading as a repo using flaky custom NFS drivers") or CCRC ("gently caress your loving discordance detected, fucker"). What I can say is that if you ever wanted to apply C's manual header and #include management to your source control system, config specs are right up your alley.


I am trying to check in about 5000 updated files for a fairly big upgrade of a third-party library right now. I have been at it for over a week and it still isn't done. If I try to do too many at once, Java consumes 1.2GB of memory, hits the 32-bit limit, and dies. It takes about 4-6 seconds per file, not including other overhead. No one else can touch that branch because commits aren't done as a transaction, so the whole thing is hosed until I'm done. That also includes merges, so using separate branches doesn't help.

Mercurial or git could have done it in two minutes.

gently caress.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Uziel posted:

This is a horror I had to face. Our prototype system was in limited testing to see if the idea would work in the real world rather than in theory, and was only supposed to be used by a subset of users. Over the weekend the business owner's director launched all of their areas and users without asking, then said we couldn't take it offline because it was "business critical".

This is the oldest trick in the book for bypassing what people consider red tape. If a system is just for limited testing, make sure you stick checks in the code that blow everything up if the limits are exceeded.
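Something as dumb as this would have saved them (the limit and the exception are made up for the sketch):

```csharp
using System;

class PilotGuard
{
    const int MaxPilotUsers = 50; // the agreed limit for the trial

    static void EnforcePilotLimit(int activeUsers)
    {
        // If someone quietly rolls the prototype out to the whole company,
        // blow up loudly instead of becoming "business critical" by accident.
        if (activeUsers > MaxPilotUsers)
            throw new InvalidOperationException(
                "Pilot limit exceeded: this prototype is not approved for production use.");
    }
}
```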

On the other hand, corporate IT departments are crap, so I don't necessarily blame anyone for trying it. In larger organizations they seem to exist to prevent anyone from spending money on anything. I was once on a support call for a large Fortune 100 company and asked if we could set up a temporary database to debug an issue. I was told their outsourced IT charges them $1200 to create a database and it would take six weeks to get the necessary approvals. :psyduck:

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice
On a slightly different note, I've said it before but when I first saw that a message to nil in Objective-C does nothing and returns zero I thought it was the dumbest thing ever. Now I realize that's the way all languages should work. 99.9% of the time when some reference is null, I just want to ignore it and/or skip that section of code. It is extremely rare that I want to throw an exception, assert, crash, etc.

Yet in most languages, I have to constantly litter my code with if (dicks != null) { dicks.butts(); }


If you aren't going to do that, then all references should be non-nullable unless they are declared Nullable<T>. At least then I would know.
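As it happens, C# later shipped exactly this as the ?. null-conditional operator (C# 6), so the wish wasn't crazy:

```csharp
using System;

class NullPropagation
{
    class Dicks { public void Butts() => Console.WriteLine("butts"); }

    static void Main()
    {
        Dicks dicks = null;

        if (dicks != null) { dicks.Butts(); }  // the old litter

        dicks?.Butts();                        // nil-messaging style: a silent no-op when null

        int? len = ((string)null)?.Length;     // null propagates through the whole chain
        Console.WriteLine(len.HasValue);       // False
    }
}
```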


Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Dren posted:

One of the main goals of Go was to solve the problem of the C compiler taking forever to parse nested C header includes. I don't support the practice of forcing users to include all header dependencies; I think it's better to write a self-contained header with guards and suffer the compilation slowdown. But your code probably compiles faster for the trouble you're enduring.

Or, you know, just have support for #import, because it's the compiler's loving job to figure it out. Oh, I've seen this header before? Cool, just skip it. There. Done. No duplicates. We also need @import in the standard, like, yesterday. I've already told the compiler about my class 27 different times, why not make me tell the linker too? I so thoroughly enjoy telling the computer poo poo it should already know, repeatedly. Otherwise, like, why get out of bed in the morning?


Now this is crazy talk, I know, but why not just have the compiler scan my source files and suss out the prototypes itself, with, I don't know... maybe letting me stick my qualifiers next to the implementation instead of in a completely separate file? Then the compiler could spit out a machine-readable version of the metadata and I wouldn't have to mess with headers at all! Nah... that's too crazy. After all, a compiler that did that would use like one whole MB of memory! We can't go around wasting memory willy-nilly - our PDP-11 only has 512K! Better to make all programmers deal with it for all eternity.
