|
Break after return, while on the minor side of the horrors here, still looks awful and should be throwing a warning.
|
# ? Oct 20, 2017 17:38 |
|
|
|
JawnV6 posted:Break after return, while on the minor side of the horrors here, still looks awful and should be throwing a warning. A lot of people ignore warnings, unfortunately.
|
# ? Oct 20, 2017 17:56 |
JawnV6 posted:Break after return, while on the minor side of the horrors here, still looks awful and should be throwing a warning. In Java, isn't unreachable code after a return an actual error? That may be one thing it does right. (Maybe not; I do occasionally place an early return in a function to disable part of it for debugging.)
|
|
# ? Oct 20, 2017 18:17 |
|
nielsm posted:In Java, isn't unreachable code after a return an actual error? That may be one thing it does right. Yup, it causes a compile error.
|
# ? Oct 20, 2017 18:21 |
|
Dumb Lowtax posted:Send them the equivalent short code and make them justify why it's any longer Not pictured: a comment above where I did exactly this. He could have copy-pasted it. Evidently the lightbulb finally came on when I suggested he REPL the two bits of code and compare the results. His PR was finally approved. E: also without breaks after returns!
|
# ? Oct 20, 2017 19:26 |
|
nielsm posted:In Java, isn't unreachable code after a return an actual error? That may be one thing it does right. Me too, you gotta if(true) return; though.
|
# ? Oct 20, 2017 20:04 |
|
AstuteCat posted:Yes. That is all that this function, far as I can tell, is intended to do. crazy theory here: parseInt reverses the sign of the data in memory, so it's necessary to call parseInt twice on each variable in this function in order to keep the signs consistent, because writing a version of parseInt that doesn't flip the sign of whatever data is passed in would just be pointless busy work. why would you ever do that
|
# ? Oct 21, 2017 08:33 |
QuarkJets posted:crazy theory here: parseInt reverses the sign of the data in memory, so it's necessary to call parseInt twice on each variable in this function in order to keep the signs consistent If parseInt modifies the value it takes, doesn't that make this UB (if this is C/C++)? Since the evaluation order is unspecified.
|
|
# ? Oct 21, 2017 08:50 |
|
How would a comparison on the returned values even occur if the two function evaluations didn't finish before the comparison?
|
# ? Oct 21, 2017 11:09 |
|
This is JavaScript code so, really, there are bigger problems at play here than just the code in question. In the end it was just a simple inversion of the logic making it less readable and a failure to engage brain when this was pointed out.
|
# ? Oct 21, 2017 13:48 |
QuarkJets posted:How would a comparison on the returned values even occur if the two function evaluations didn't finish before the comparison? Good point. I didn't think that one through.
|
|
# ? Oct 21, 2017 19:03 |
|
Is there a version of Hanlon's Razor that says it's more likely the programmer was just a poo poo developer who develops a convoluted mess rather than any sort of higher-level thinking going on that you're missing? Corollary: If you use ++i in any way that's not directly interchangeable with i++, then you're not as clever as you think you are. MisterZimbu fucked around with this message at 13:44 on Oct 22, 2017 |
# ? Oct 22, 2017 13:41 |
|
MisterZimbu posted:Is there a version of Hanlon's Razor that says it's more likely the programmer was just a poo poo developer who develops a convoluted mess rather than any sort of higher-level thinking going on that you're missing? Sturgeon's Law: 90% of everything is crud. Statistically, the code you're looking at is crud. quote:Corollary: If you use ++i in any way that's not directly interchangable with i++, then you're not as clever as you think you are. i++ is the horror here behavior-wise, but they're both horrors in terms of the kind of code they enable. Access and mutation should not be combined into a single operator!
|
# ? Oct 22, 2017 14:48 |
|
TooMuchAbstraction posted:i++ is the horror here behavior-wise, but they're both horrors in terms of the kind of code they enable. Access and mutation should not be combined into a single operator! there are contexts where combining access and mutation is important. admittedly I can’t think of any where it isn’t also important for this to be atomic, which ++ isn’t
|
# ? Oct 22, 2017 16:11 |
I do i++/++i array subscript sometimes, but I don't really think I'm being clever. I also realize I'm just making it harder to read for other maintainers. I'm just too lazy to write the extra line; there's some subconscious or neolithic-brain part of me that seems to think that conserving code lines is in any way a good thing.
|
|
# ? Oct 22, 2017 18:02 |
|
Didn’t it come about because a lot of architectures have read-then-increment, decrement-then-read, etc kind of instructions?
|
# ? Oct 22, 2017 18:10 |
|
Yes. And that’s very useful, in an early C context where the language is a high-level assembler without any particularly sophisticated optimiser. Less so nowadays, and frankly it should never have been a part of Java or C#, but I guess the habit is hard to kick.
|
# ? Oct 22, 2017 19:06 |
|
Go got it right by making it a statement.
|
# ? Oct 22, 2017 19:43 |
|
sarehu posted:Go got it right by making it a statement. Haskell got it right by making it immutable.
|
# ? Oct 22, 2017 19:53 |
|
Soricidus posted:atomic, which ++ isn’t It isn't? Doesn't it compile down to like a single processor instruction?
|
# ? Oct 23, 2017 17:26 |
|
Increment is three operations: fetch value, increment, store new value; the operation can be interrupted in between the fetch and the store. On Intel x86 you get free atomic load and store operations as long as the data is aligned and equal to or smaller than the machine word size; everything else needs a special function. MrMoo fucked around with this message at 17:33 on Oct 23, 2017 |
# ? Oct 23, 2017 17:28 |
|
Doom Mathematic posted:It isn't? Doesn't it compile down to like a single processor instruction? a single processor instruction does not necessarily imply atomic, and the C specification doesn't guarantee it will compile to a single processor instruction anyway
|
# ? Oct 23, 2017 17:31 |
MrMoo posted:The load and store are separate memory accesses; the only atomic standard ops on x86 are load and store. To be fair, if it's a small loop with not much logic, and the counted variable isn't used outside of it, the compiler could probably justify keeping it in a register if your target has enough of them.
|
|
# ? Oct 23, 2017 17:34 |
|
Joda posted:To be fair, if it's a small loop with not much logic, and the counted variable isn't used outside of it, the compiler could probably justify keeping it in a register if your target has enough of them. yes, but if you care about whether an operation is atomic or not, you want stronger guarantees than “the compiler could probably justify ...” so arguably ++ and -- are actively harmful, because they make something look like a single operation that is not in fact guaranteed to be one. it’s too late to fix that in C but there was really no excuse for keeping it there in Java — although at least AtomicInteger is easy to find if you need it
|
# ? Oct 23, 2017 18:18 |
|
Soricidus posted:yes, but if you care about whether an operation is atomic or not, you want stronger guarantees than the compiler could probably justify ...

Plenty of languages that don't have ++ still have +=, which is similarly non-atomic but potentially confusing to people with a naive understanding of atomicity. Other things that do not make any atomicity guarantees at the PL level: simple loads and stores (well, except in Java, and even Java doesn't guarantee atomicity for long and double).

I'm generally a strong believer in the ability of programming languages to promote correctness, but atomicity is really an exception, because there's a fundamental conflict of goals: programming languages intentionally try to encourage abstraction and composition, but abstraction and composition inherently break low-level atomicity unless you have a drastically invasive high-level design like transactional memory. And transactional memory is pretty much a dead idea, and for good reasons. The better design approach for PLs is to "define it away" by providing more effective tools for concurrency and memory isolation.
|
# ? Oct 23, 2017 21:13 |
|
you'll only be able to take away my ++ out of my cold, dead hands; come at me bro
|
# ? Oct 23, 2017 22:10 |
|
QuarkJets posted:you'll only be able to take away my ++ out of my cold, dead hands; come at me bro i += 1 is only 3 more characters. It's time to move on.
|
# ? Oct 23, 2017 22:12 |
|
TooMuchAbstraction posted:i += 1 is only 3 more characters. It's time to move on. That doesn't need to exist either. i = i + 1 is fine.
|
# ? Oct 23, 2017 23:04 |
|
Yeah, and who needs multiplication, it's just: while((j = j - 1)) i = i + i;
|
# ? Oct 23, 2017 23:09 |
Go literally thinks you should do that for integer exponentiation
|
|
# ? Oct 23, 2017 23:20 |
|
code:
code:
|
# ? Oct 23, 2017 23:23 |
|
JawnV6 posted:
"Idiomatic" C code is, quite often, too clever by half yes. The biggest problem with the *dst++ = *src++ construction is that the operator evaluation order is kind of obtuse. Also, there's nothing wrong with code:
code:
If you're working in C you absolutely should be able to read code that uses the "increment pointer and perform some operation on the location in one line" construction because it's common. The only thing *a_ptr++ really has going for it is... well... the fact that it's idiomatic. It's common and widely known, so it conveys dense, easily parsable information to people who are already familiar with the construction. It's useful by inertia alone, but I'm not confident it was a construction that ever should have become popular, because it's hella confusing if you're not familiar with it. Especially the K&R strcpy cousin that adds a while loop, assignment side effects, and null terminators into the mix, which is most people's first time seeing that construction nowadays and is one of the most impenetrable things this side of languages invented for code golf if you've never seen it before. Linear Zoetrope fucked around with this message at 00:00 on Oct 24, 2017 |
# ? Oct 23, 2017 23:51 |
|
What the hell is typeof? That can't be standard C.
|
# ? Oct 24, 2017 01:59 |
|
qsvui posted:What the hell is typeof, that can't be standard C. It's a common extension. It's also completely unnecessary here because sizeof can take an expression operand.
|
# ? Oct 24, 2017 02:13 |
|
rjmccall posted:And transactional memory is pretty much a dead idea, and for good reasons. The better design approach for PLs is to "define it away" by providing more effective tools for concurrency and memory isolation. Is "something went wrong, I guess" at the end of a giant batch of memory ops rolling back the entire thing not a useful behavior?
|
# ? Oct 24, 2017 02:48 |
|
It's just lost a lot of interest. TM works alright if you basically don't have contention. If you do have contention, though, guaranteeing progress is hard even in practical cases — you basically need all other threads to gradually stop trying to begin transactions, even unrelated ones, because you can't easily know ahead of time whether a transaction is indeed unrelated — so either the overhead is much higher than advertised or you have really hard-to-predict and hard-to-analyze failure modes.

That statement about progress is also true of things like compare-and-swap loops but much less likely in practice, in part because of the nature of things generally being attempted but also because progress is supposed to be the sort of thing you're aware of as a programmer using atomics, and nobody ever said atomics were a feature for novices, whereas people do make grandiose claims about how easy TM makes lock-free concurrency.

Also, people have been pinning their performance hopes on hardware solutions, but hardware TM is inevitably going to have fixed transaction size limits, which means that a lot of things which would be merely bad ideas suddenly become impossible. For example, you can use TM to splice something out of a linked data structure (like a doubly-linked list) because you only need to touch some bounded number of nodes, but you generally can't use hardware TM to safely walk the entire data structure because the transaction would grow linearly with the size.
|
# ? Oct 24, 2017 04:14 |
|
Linear Zoetrope posted:Or even code:
|
# ? Oct 24, 2017 05:51 |
|
JawnV6 posted:
|
# ? Oct 24, 2017 07:19 |
|
rjmccall posted:I'm generally a strong believer in the ability of programming languages to promote correctness, but atomicity is really an exception, because there's a fundamental conflict of goals: programming languages intentionally try to encourage abstraction and composition, but abstraction and composition inherently break low-level atomicity unless you have a drastically invasive high-level design like transactional memory.
|
# ? Oct 24, 2017 08:09 |
|
|
|
JawnV6 posted:Wait I wanted to touch on this. More than the first x86 rollout being patched away, what's happening with it? It was *the* concurrency savior back when I was learning. Software transactional memory (as seen in functional languages) also hasn't seen as wide usage as was hoped. I think what happened is that a bunch of people invented techniques to make parallel programming safe, but then realised that the real challenge is to also make it fast. You see this happening over and over again (although some research twists the story by indeed focusing on performance, albeit asymptotic cost on implausible machine models).
|
# ? Oct 24, 2017 09:02 |