|
ExcessBLarg! posted:The maintainer of OpenSSL telling users to get lost really isn't viable.

Yes it is.

ExcessBLarg! posted:It is the job of a package maintainer to patch programs in the necessary ways to make them fit well with the rest of the distribution, which may include handling bugs upstream won't acknowledge.

No it's not, and this is a mistake.
|
# ? Aug 27, 2016 23:08 |
|
|
ExcessBLarg! posted:The maintainer of OpenSSL telling users to get lost really isn't viable. It is the job of a package maintainer to patch programs in the necessary ways to make them fit well with the rest of the distribution, which may include handling bugs upstream won't acknowledge. Debian might not do this as much as Ubuntu or Mint or something, but they still do it. Is there no way to get Valgrind to put in exceptions? Maybe that could have been integrated into the installation package?
|
# ? Aug 27, 2016 23:44 |
Qwertycoatl posted:Compilers really will optimise that sort of thing away: Before anyone objects to you using an int instead of an unsigned char[], the same thing happens with the latter: C code:
|
|
# ? Aug 28, 2016 00:31 |
|
vOv posted:Hmm, I guess you're right. Reading from a trap representation is undefined, and the only type guaranteed not to have any trap representations is unsigned char (the code in question just used char). To get undefined behavior it still has to be possible for that memory to contain a trap representation, right? So on almost all architectures, even ones with registers that can trap, you would still be able to copy an array of shorts from one memory address to another without initializing anything. Which would make your code slightly less portable, but mostly in an academic sense. Or is that wrong, and the compiler can pretend there's a trap? Edit: VikingofRock posted:
That doesn't really prove anything, because the compiler might just be pretending that your uninitialized memory contains whatever value is most convenient for optimization. Look what happens if you poke x a little bit: code:
Dylan16807 fucked around with this message at 01:22 on Aug 28, 2016 |
# ? Aug 28, 2016 01:08 |
|
Absurd Alhazred posted:Is there no way to get Valgrind to put in exceptions? Maybe that could have been integrated into the installation package? There is a way, but a distro shouldn't be doing that behind the user's back unless they're absolutely certain it's a false positive
|
# ? Aug 28, 2016 01:35 |
|
Series DD Funding posted:There is a way, but a distro shouldn't be doing that behind the user's back unless they're absolutely certain it's a false positive I was thinking more of a "Allow OpenSSL to add itself as an exception to Valgrind to avoid spurious errors. (N/y)" kind of thing.
|
# ? Aug 28, 2016 01:56 |
|
Why would anyone want valgrind not to warn about the uninitialised memory read bugs in openssl? They were bugs.
|
# ? Aug 28, 2016 03:06 |
Dylan16807 posted:To get undefined behavior it still has to be possible for that memory to contain a trap representation, right? So on almost all architectures, even ones with registers that can trap, you would still be able to copy an array of shorts from one memory address to another without initializing anything. Which would make your code slightly less portable, but mostly in an academic sense. Or is that wrong, and the compiler can pretend there's a trap? Maybe I'm misunderstanding this, but I don't think that's actually copying the value from y to x? I might've lost track of this discussion though.
|
|
# ? Aug 28, 2016 03:10 |
|
return0 posted:Why would anyone want valgrind not to warn about the uninitialised memory read bugs in openssl? They were bugs. The claim was that they were using uninitialized memory reads as an intentional entropy source. It might be unwise, but that makes it a poor design decision, not a bug.
|
# ? Aug 28, 2016 03:16 |
|
VikingofRock posted:Maybe I'm misunderstanding this, but I don't think that's actually copying the value from y to x? I might've lost track of this discussion though. It's adding code to initialize y to 0 before copying it, then optimizing further to just "return 0;". It's not getting you any useful entropy, but it's also not hurting anything the way undefined behavior would. It depends on which optimization you're worried about. Edit: y not x Dylan16807 fucked around with this message at 03:52 on Aug 28, 2016 |
# ? Aug 28, 2016 03:40 |
Dylan16807 posted:It's adding code to initialize x to 0 before copying it, then optimizing further to just "return 0;". It's not getting you any useful entropy, but it's also not hurting anything the way undefined behavior would. Fair enough. I googled around a bit, and I found a couple articles talking about this. This article (by my reading) says that this was UB in C99 but not C11, and this article claims that reading an indeterminate value is UB full stop. I don't really know what to make of it all myself.
|
|
# ? Aug 28, 2016 03:50 |
|
Dylan16807 posted:It's adding code to initialize y to 0 before copying it, then optimizing further to just "return 0;". It's not getting you any useful entropy, but it's also not hurting anything the way undefined behavior would.
|
# ? Aug 28, 2016 04:38 |
|
Volte posted:That sounds like undefined behaviour to me. If you change x /= 2 to x += 1 then the whole thing just changes to retq. Then change return x to return x + 1 and nothing changes in the assembly code at all. The compiler knows that x is an indeterminate value, and so any expression that uses it must be indeterminate as well. Kind of like how NaN propagates. If the compiler can figure out that the return value of your RNG function is actually the result of an expression containing an indeterminate value, then it may well return something bogus instead of using any of the other sources of entropy. You could end up with a random number function that returns 0 every time just because you threw in an uninitialized variable. I think that the idea that it "can't hurt" is based on the flawed assumption that every line of code you write is going to be compiled into an equivalent assembly instruction---that reading from an uninitialized memory location is going to actually do just that--when in fact reading such a value is also a very strong statement to the compiler basically saying "this value is a wildcard, do whatever you want". There's a difference between undefined behavior and an unspecified value.
|
# ? Aug 28, 2016 05:29 |
|
Volte posted:That sounds like undefined behaviour to me. If you change x /= 2 to x += 1 then the whole thing just changes to retq. Then change return x to return x + 1 and nothing changes in the assembly code at all. The compiler knows that x is an indeterminate value, and so any expression that uses it must be indeterminate as well. Kind of like how NaN propagates. The difference is that [unknown value] divided by 2 has constraints on what values it can contain, so it can't return garbage. [unknown value] + 1 could be anything, so it optimizes out the math. It defines [unknown value] as whatever was in eax minus 1. If it was actually Undefined Behavior, the compiler could return an impossible value, or throw out the entire function, and every function that calls it. quote:If the compiler can figure out that the return value of your RNG function is actually the result of an expression containing an indeterminate value, then it may well return something bogus instead of using any of the other sources of entropy. You could end up with a random number function that returns 0 every time just because you threw in an uninitialized variable. I think that the idea that it "can't hurt" is based on the flawed assumption that every line of code you write is going to be compiled into an equivalent assembly instruction---that reading from an uninitialized memory location is going to actually do just that--when in fact reading such a value is also a very strong statement to the compiler basically saying "this value is a wildcard, do whatever you want". Dylan16807 fucked around with this message at 06:12 on Aug 28, 2016 |
# ? Aug 28, 2016 06:07 |
|
Suspicious Dish posted:No it's not, and this is a mistake. The OpenSSL bug was a big mistake, for sure, but the motivation behind it was consistent with Debian packaging procedures. Absurd Alhazred posted:The claim was that they were using uninitialized memory reads as an intentional entropy source. It might be unwise, but that makes it a poor design decision, not a bug.
|
# ? Aug 28, 2016 06:23 |
|
ExcessBLarg! posted:This is how the Debian project operates though. Many packages have some amount of patching to deal with distribution integration issues. For stable releases there's also often bug fixes done at the package level for things that upstream hasn't acknowledged, or won't support, or that needs to be backported to the version Debian is including in stable. As someone whose code has been packaged and broken by Debian, I don't like their policy, especially since it has probably cost me multiple weeks of combined effort tracking down "upstream" bugs that turned out to be bogus Debian patches I cannot get removed.
|
# ? Aug 28, 2016 06:35 |
|
Probably dumb question: malloc doesn't initialize like calloc, so other than performance reasons why not just malloc some swath of memory, read that, and then immediately free it?
|
# ? Aug 28, 2016 08:29 |
|
malloc does initialize memory if you need to grab a new page. well, "initialize", since zero pages are always zero, from the kernel, for security reasons.
|
# ? Aug 28, 2016 08:57 |
|
Jsor posted:Probably dumb question: malloc doesn't initialize like calloc, so other than performance reasons why not just malloc some swath of memory, read that, and then immediately free it? The compiler can optimize that just the same. Like, getting rid of unnecessary malloc/free pairs is a thing too.
|
# ? Aug 28, 2016 09:08 |
|
Jsor posted:Probably dumb question: malloc doesn't initialize like calloc, so other than performance reasons why not just malloc some swath of memory, read that, and then immediately free it? There are all sorts of these bullshit ways you can try to add more entropy, but that is what they are: just bullshit. None of them are guaranteed to provide any meaningful entropy, so you always need some solid mechanism (or several) that can stand by itself anyway. Why not just stick to that one, and avoid the potential problems of additional code that does weird stuff?
|
# ? Aug 28, 2016 10:02 |
|
ExcessBLarg! posted:So when you get regular bug reports from angry users running Valgrind on their own programs, which complains about libssl reading from uninitialized memory, what would you do? not touch it, I'm a volunteer whose job is packaging, not forking. if people get angry about valgrind warnings they should complain to openssl
|
# ? Aug 28, 2016 12:13 |
|
nah i'll go in and make a change in a library that impacts the security of millions of people without any oversight
|
# ? Aug 28, 2016 12:14 |
|
ExcessBLarg! posted:This is how the Debian project operates though. Many packages have some amount of patching to deal with distribution integration issues. For stable releases there's also often bug fixes done at the package level for things that upstream hasn't acknowledged, or won't support, or that needs to be backported to the version Debian is including in stable. This wasn't a feature written upstream, a feature that would be supported upstream, nor was it backporting. It was silencing warnings from a debugging tool. quote:The OpenSSL bug was a big mistake, for sure, but the motivation behind it was consistent with Debian packaging procedures. No, it was a naive change made with no thought or oversight that impacted countless users and the project itself. quote:Bug or not, I think everyone (but OpenSSL) viewed it as distasteful and that OpenSSL probably just shouldn't do that. Years later the LibreSSL guys found a shitload more distasteful things that OpenSSL was doing, intentional or not, and gutted the whole thing. They at least did it the right way. Yes, OpenSSL is bad, but asking a packaging volunteer to rewrite or maintain such a specialist domain package is not the debian way. The work debian does is to make it work, make it compile, make it redistributable. This was about silencing a warning from a debugging tool by commenting out a line of code without any real understanding of what it did or why it was there. I wouldn't do it because I have already seen what happens countless times before when a naive developer fixes compiler warnings by shotgun until they're silent
|
# ? Aug 28, 2016 12:19 |
|
Dylan16807 posted:The difference is that [unknown value] divided by 2 has constraints on what values it can contain, so it can't return garbage. [unknown value] + 1 could be anything, so it optimizes out the math. It defines [unknown value] as whatever was in eax minus 1. If it was actually Undefined Behavior, the compiler could return an impossible value, or throw out the entire function, and every function that calls it. quote:For example if a copy of your data happened to be in that memory location and you xor, oops you now have a big zero. But in the debian case it's running a hash function over all the accumulated data, so the addition of uninitialized bytes should be safe from any accidental data loss.
|
# ? Aug 28, 2016 17:36 |
|
This is why it's important to call bullshit every time someone trots out the "C is a high-level assembler and it's obvious to everyone how this will be translated to machine code…" gambit.
|
# ? Aug 28, 2016 17:53 |
|
I wouldn't be surprised if compilation predictability goes up the higher level you go.
|
# ? Aug 28, 2016 18:19 |
|
Yeah, and then the JIT and GC are unpredictable.
|
# ? Aug 28, 2016 18:23 |
|
My understanding is that (please correct me if I'm wrong here):

1) Uninitialized locations contain "indeterminate values".

2) "Indeterminate values" are either "unspecified values" or "trap representations," except for unsigned char, which is guaranteed to never be a trap representation.

3) A trap representation is a bit sequence which doesn't represent a valid value of the type. Using a trap representation is undefined behaviour (anything can happen).

4) An unspecified value is just that: a value which is unspecified. That doesn't mean the compiler is free to do anything at all (as with undefined behaviour).

5) Even though the spec only guarantees that unsigned char has no trap representation, it's possible that a given type in a given implementation will have no trap representations. For example, I believe that the usual int implementation has no trap representations, since all possible 32 (or 64?) bit sequences are valid values. Therefore an "indeterminate value" of an int in such an implementation is always an unspecified value, and there should be no undefined behaviour from reading an uninitialized int in such an implementation.

In conclusion, the C spec was written for the convenience of compiler writers and not C programmers and we should probably all use Rust or something.
|
# ? Aug 28, 2016 18:24 |
|
It wouldn't be an issue if people were sticking to well-defined behaviour which has no chance of being optimized away. Taking advantage of "tricks" is almost always going to cause you pain somewhere down the road.
|
# ? Aug 28, 2016 18:49 |
|
comedyblissoption posted:the single responsibility principle is an opportunity for extreme bikeshedding Evil_Greven fucked around with this message at 18:57 on Aug 28, 2016 |
# ? Aug 28, 2016 18:52 |
|
Undefined and unspecified behaviour are different but my understanding is that computations that use unspecified values (at least in the non-unsigned-char case) are undefined behaviour. What "happens to work in practice in most cases" isn't really relevant since that's exactly the pitfall that should be avoided.
|
# ? Aug 28, 2016 18:54 |
|
Volte posted:Undefined and unspecified behaviour are different but my understanding is that computations that use unspecified values (at least in the non-unsigned-char case) are undefined behaviour. I was going to respond to this but I realized I would just be writing the exact same post I just wrote verbatim. Please see the distinction between indeterminate values (which could have trap representations and UB) and unspecified values.
|
# ? Aug 28, 2016 18:59 |
|
Volte posted:Undefined and unspecified behaviour are different but my understanding is that computations that use unspecified values (at least in the non-unsigned-char case) are undefined behaviour. What "happens to work in practice in most cases" isn't really relevant since that's exactly the pitfall that should be avoided. (That's if you take the address of the value. If you don't take the address then it's undefined no matter what. Because C is as arcane as possible.)
|
# ? Aug 28, 2016 19:05 |
|
I said reading from unspecified values is undefined behaviour. quote:The behavior is undefined in the following circumstances: quote:3.17.2
|
# ? Aug 28, 2016 19:10 |
|
Volte posted:I said reading from unspecified values is undefined behaviour. quote:Certain object representations need not represent a value of the object type. If the stored value of an object has such a representation and is read by an lvalue expression that does not have character type, the behavior is undefined. If such a representation is produced by a side effect that modifies all or any part of the object by an lvalue expression that does not have character type, the behavior is undefined. Such a representation is called a trap representation. quote:If the lvalue designates an object of automatic storage duration that could have been declared with register storage class (never had its address taken), and that object is uninitialized (not declared with an initializer, and no assignment to it has been performed prior to the use), the behavior is undefined.
|
# ? Aug 28, 2016 19:40 |
|
There's also this post on the topic: http://blog.frama-c.com/index.php?post/2013/03/13/indeterminate-undefined If I run that test code on a current version of gcc, I get "j:0 c:1". So GCC has gone out of its way to stop interpreting this as undefined behavior.
|
# ? Aug 28, 2016 19:48 |
|
Debian packager induced bugs are unfortunately not at all uncommon.
|
# ? Aug 28, 2016 23:17 |
|
tef posted:Yes, OpenSSL is bad, but asking a packaging volunteer to rewrite or maintain such a specialist domain package is not the debian way. It is the Debian way. Debian sees free software upstreams not as partners, but as code repositories they can pull from to build their own OS. When something happens that the Debian team doesn't like, instead of attempting to coordinate on fixes, they will just say "fork you, got mine" (IceWeasel, xscreensaver, OpenSSL, mICQ, quodlibet, the GNOME/systemd mess, etc.) Debian sees their own staff as smarter and more capable than upstream, and I've encountered this mentality many, many times.
|
# ? Aug 28, 2016 23:51 |
|
I am shocked, shocked that free software advocates are making their own modifications to source code.
|
# ? Aug 29, 2016 01:37 |
|
|
Trying to fix bugs at the package level instead of contributing your code to the project oughtta be a coding horror
|
# ? Aug 29, 2016 01:40 |