|
Jabor posted:Is this why people implement their own memory allocator because they think the platform one isn't fast enough? Usually people implement their own memory allocator for special cases where it can beat an allocator that has to be fast for the general case, and where performance actually matters, right?
|
# ? Apr 19, 2014 15:51 |
|
Jabor posted:Is this why people implement their own memory allocator because they think the platform one isn't fast enough? There are lots of workloads for which a general purpose allocator isn't optimally fast or efficient. It's often quite easy to measure; every browser or VM developer has spent time in those mines, done internal recycling, customized size classification and region-release policies, etc. Would you spread the entropy across the CPU pools, then? That would seem to give worse randomness to the vast majority of cases (in which there isn't significant contention for urandom).
|
# ? Apr 19, 2014 16:04 |
|
Subjunctive posted:There are lots of workloads for which a general purpose allocator isn't optimally fast or efficient. It's often quite easy to measure; every browser or VM developer has spent time in those mines, done internal recycling, customized size classification and region-release policies, etc. That was a snarky reference to the recent Heartbleed kerfuffle, which was exacerbated by OpenSSL using a custom allocation pool, nominally due to performance concerns. quote:Would you spread the entropy across the CPU pools, then? That would seem to give worse randomness to the vast majority of cases (in which there isn't significant contention for urandom). You can give all the pools most of the entropy, as long as they're in different states and you remix frequently. The pool being depleted of entropy while still giving you numbers is already something you need to be concerned about with urandom; anything where getting worse performance is preferable to having that happen should be using /dev/random instead.
|
# ? Apr 19, 2014 16:22 |
|
Jabor posted:You can give all the pools most of the entropy, as long as they're in different states and you remix frequently. The pool being depleted of entropy while still giving you numbers is already something you need to be concerned about with urandom; anything where getting worse performance is preferable to having that happen should be using /dev/random instead. That makes sense. If all else fails you can feed in the private key, right?
|
# ? Apr 19, 2014 16:54 |
|
This post keeps coming up. http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/ Written in the style of Coda Hale's bcrypt post. However, I still haven't found the answer to my question: what happens when /dev/urandom runs out of entropy in the pool? He has this quote: quote:But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I’ve made it sound; it’s akin to AES-CTR blocking based on how much “key” is left in the “keystream”. But it doesn't sound silly to me at all. Key reuse and management is a serious issue and one of the biggest issues facing practical cryptosystems today. Reusing your key is what's broken a lot of cryptosystems in the past. So, what does urandom do when it runs out of entropy in the pool? Return Mersenne Twister data?
|
# ? Apr 19, 2014 17:01 |
|
Suspicious Dish posted:Return Mersenne Twister data? I was looking into this earlier today, and turned up this presentation. Key slides are p11 and p13, which suggest it uses a nonlinear feedback shift register. Soo yeah not a great idea to use it for cryptographic purposes.
|
# ? Apr 19, 2014 17:12 |
|
That's what it uses to put entropy in the pool from random events. What I'm wondering is what happens when the entropy runs out.
|
# ? Apr 19, 2014 17:53 |
|
This was discussed a while ago and there was a link about how urandom and random both pull from the same prng, it's just that random tries to estimate how much entropy is in the pool and it will block if there's not enough. The article went on to suggest that the measurement of entropy in the pool was a very inexact thing (and therefore sort of silly) and recommended that people use urandom without worrying about it since you're trusting the prng either way. There were some cases where urandom shouldn't be used like in a VM when the system is fresh because the VM can't get entropy from keyboard/mouse and some other sources that real hw uses.
|
# ? Apr 19, 2014 18:37 |
|
For some reason I thought Intel and AMD chips were shipping with hardware RNGs at this point, something about using the CPU temp sensor. I guess not.
|
# ? Apr 19, 2014 19:08 |
|
Dren posted:This was discussed a while ago and there was a link about how urandom and random both pull from the same prng, it's just that random tries to estimate how much entropy is in the pool and it will block if there's not enough. The article went on to suggest that the measurement of entropy in the pool was a very inexact thing (and therefore sort of silly) and recommended that people use urandom without worrying about it since you're trusting the prng either way. There were some cases where urandom shouldn't be used like in a VM when the system is fresh because the VM can't get entropy from keyboard/mouse and some other sources that real hw uses. I can imagine that if you know somebody used urandom without entropy for a very long time, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things.
|
# ? Apr 19, 2014 19:17 |
|
revmoo posted:For some reason I thought Intel and AMD chips were shipping with hardware RNGs at this point, something about using the CPU temp sensor. I guess not. Nobody trusts hardware RNGs.
|
# ? Apr 19, 2014 19:56 |
|
Suspicious Dish posted:I can imagine that if you know somebody used urandom without entropy for a very long, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things.
|
# ? Apr 19, 2014 20:57 |
|
I'm pretty sure that modern Linux snapshots something and then uses it to seed (u)random on startup, which should make VMs OK other than the first boot. Isn't Linux unusual in having distinct random and urandom these days anyway?
|
# ? Apr 19, 2014 20:59 |
|
Suspicious Dish posted:I can imagine that if you know somebody used urandom without entropy for a very long, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things. One of the points the guy made but didn't delve into very much, and he made clear that this was a matter of some debate, was that the function used to measure entropy for the purposes of /dev/random is pretty naive. Point being that even if there is "enough" entropy the measure of "enough" isn't very good to start with so whether you use /dev/random or /dev/urandom in reality the quality of randomness is largely in the hands of the prng. I really can't recall where the article was but I'll poke around and see if I can dig it up. Subjunctive posted:I'm pretty sure that modern Linux snapshots something and then uses it to seed (u)random on startup, which should make VMs OK other than the first boot. With VMs it was brought up that creating your ssh keys pretty soon after install was a bad idea but that you could do something to seed the VM's entropy from the host OS and then the snapshot thing you're talking about would take over and stuff would be fine. edit: here's the article http://www.2uo.de/myths-about-urandom/ this one got brought up too but it's not as good as the first: http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/ Dren fucked around with this message at 23:14 on Apr 19, 2014 |
# ? Apr 19, 2014 23:05 |
|
Subjunctive posted:Isn't Linux unusual in having distinct random and urandom these days anyway?
|
# ? Apr 20, 2014 00:04 |
|
Suspicious Dish posted:That's what it uses to put entropy in the pool from random events. What I'm wondering is what happens when the entropy runs out. Dren posted:edit: here's the article http://www.2uo.de/myths-about-urandom/ Dren's article answers Suspicious Dish's question basically with "it doesn't happen". The CSPRNG used by both urandom and random can use 256 bits of entropy to generate a stream of cryptographically random (unpredictable) numbers for longer than one could need. I have a moderate-to-severe nerdcrush on Thomas Ptacek, but I agree that the first article is better. (djb argues that more mixing of entropy can hurt, if you are using a malicious source. I don't think that attack worries me as much as "attacker gets snapshot of RNG state", but it's interesting to consider.) In the mid-nineties, a friend of mine broke SSL in early Netscape versions because they seeded the RNG with the current time. This caused a future co-worker of mine to shave his head in shame. That we're even having this sort of conversation on a comedy forum makes me so warm inside.
|
# ? Apr 20, 2014 01:44 |
|
Yep. My (incorrect) assumption was that mixing in more true entropy obtained from "true random" sources was required for the CSPRNG to continue to be secure. It seems that given 256 bits as an initial seed, and never adding more, the CSPRNG is quite good at generating 'secure' random numbers for a long time. Adding more entropy over time is optional, and only defends against the attacker knowing the CSPRNG's internal state. I'm not sure how well I trust that (it seems to me that if you obtained 100 or so numbers, you might be able to figure out the internal state and the next one in the chain) but it's nice to know.
|
# ? Apr 20, 2014 01:51 |
|
Subjunctive posted:Dren's article answers Suspicious Dish's question basically with "it doesn't happen". The CSPRNG used by both urandom and random can use 256 bits of entropy to generate a stream of cryptographically random (unpredictable) numbers for longer than one could need. I have a moderate-to-severe nerdcrush on Thomas Ptacek, but I agree that the first article is better. (djb argues that more mixing of entropy can hurt, if you are using a malicious source. I don't think that attack worries me as much as "attacker gets snapshot of RNG state", but it's interesting to consider.) Keep in mind that a big problem with /dev/random or /dev/urandom usage is pointed out in the last few sentences of the first article for VMs that do stuff like restore from snapshots all the time, since the random state always gets restored as well. That's something I hadn't thought of before I read the article even though it seems incredibly obvious.
|
# ? Apr 20, 2014 02:14 |
|
Suspicious Dish posted:I'm not sure how well I trust that (it seems to me that if you obtained 100 or so numbers, you might be able to figure out the internal state and the next one in the chain) but it's nice to know. For some non-cryptographic PRNGs, that's the case; if you get 624 consecutive outputs from Mersenne Twister, you can reconstruct the internal state and predict the rest. For CSPRNGs, though, your statement is sort of equivalent to this: given 100 words of ciphertext encrypted with a 256-bit key, you can figure out what the next byte would be. CSPRNGs are built on the same primitives as our best ciphers (AES, Twofish, etc.), so predictability would lead to many forms of compromise. (Roughly, I think it works like this: initialize a counter to a secret value n. Use e.g. a 256-bit key to encrypt a sequence of n, n+1, .... The resulting ciphertext is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.) Edit: ^^^^^ Dren posted:Keep in mind that a big problem with /dev/random or /dev/urandom usage is pointed out in the last few sentences of the first article for VMs that do stuff like restore from snapshots all the time, since the random state always gets restored as well. That's something I hadn't thought of before I read the article even though it seems incredibly obvious. Yeah, you need state hygiene (though the state injection is adding entropy to the pool rather than replacing it, AIUI, so you also get whatever natural input comes from booting the machine, getting an IP address, waiting on I/O). I wonder how we handle that for machine reimaging in clusters at work, now that I think about it. I'm sure it's something reasonable, because we have very wise people on the problem, but I'm curious... Subjunctive fucked around with this message at 02:37 on Apr 20, 2014 |
# ? Apr 20, 2014 02:32 |
|
Yeah, I'm not a cryptographer either. Unfortunately, cryptography has been sold to me as an arcane set of magic I shouldn't touch lest I create my own terrible cryptosystem, so I don't really know how to begin to answer such a question either. Nor would I know what a satisfying proof would look like.
|
# ? Apr 20, 2014 02:42 |
|
quote:(Roughly, I think it works like this: initialize a counter to a secret value n. Use a f.e. 256-bit key to encrypt a sequence of n, n+1, .... The resulting cipher text is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.) You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (and a zero nonce). The reason to do something fancier than that is because if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable.
|
# ? Apr 20, 2014 03:31 |
|
shrughes posted:You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (and a zero nonce). The reason to do something fancier than that is because if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable.
|
# ? Apr 20, 2014 04:31 |
|
shrughes posted:You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (and a zero nonce). The reason to do something fancier than that is because if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable. coffeetable fucked around with this message at 09:19 on Apr 20, 2014 |
# ? Apr 20, 2014 09:11 |
|
If they need a source of randomness may I suggest BYOB
|
# ? Apr 20, 2014 09:25 |
|
Subjunctive posted:(Roughly, I think it works like this: initialize a counter to a secret value n. Use a f.e. 256-bit key to encrypt a sequence of n, n+1, .... The resulting cipher text is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.) So from the theoretical side, the key result is the Goldreich-Levin Theorem, which says that if f is a one-way function, then given f(x) and a random bit string r, no efficient algorithm can predict the inner product of x and r (mod 2) with probability non-negligibly better than 1/2 - that bit is a "hard-core bit" of f.
This is useful for CSPRNG purposes because it means you can take a uniformly random n bit input and "stretch" it out to an n+1 bit output that also looks uniformly random to computationally bounded observers, and which can't be used to deduce the internal state x (again, at least by computationally bounded observers). Even better, you can safely repeat this process a polynomial number of times! coffeetable fucked around with this message at 10:24 on Apr 20, 2014 |
# ? Apr 20, 2014 10:19 |
|
coffeetable posted:If you fix n = 0 then you get a cryptographically trivial sequence of bits, since there exists an efficient probabilistic algorithm that given the output so far can predict the next bit with odds significantly better than 1/2. PRNGs and CSPRNGs can't be defined in a wholly deterministic setting. They need to have access to some sort of randomness. The key is your source of randomness.
|
# ? Apr 20, 2014 13:17 |
|
shrughes posted:The key is your source of randomness. Does this mean that one doesn't need to be concerned about known-plaintext attacks against the underlying cipher? Wikipedia says that the initial counter state needs to be kept secret for both block and stream underlying ciphers, but I admit that I'm not sure as to the exact reason.
|
# ? Apr 20, 2014 14:05 |
|
Subjunctive posted:Does this mean that one doesn't need to be concerned about known-plaintext attacks against the underlying cipher? Right. Because ciphers that can't withstand known-plaintext attacks are broken. Subjunctive posted:Wikipedia says that the initial counter state needs to be kept secret for both block and stream underlying ciphers, but I admit that I'm not sure as to the exact reason. No it doesn't.
|
# ? Apr 20, 2014 14:34 |
|
shrughes posted:Right. Because ciphers that can't withstand known-plaintext attacks are broken. Right, duh. quote:No it doesn't. ? Wikipedia posted:the initial values (i.e., key and "plaintext") must not become known to an attacker, however good this CSPRNG construction might be. Otherwise, all security will be lost.
Can I impose on you to elaborate? It sounds like I'm missing something, but I'm not sure what it is.
|
# ? Apr 20, 2014 15:08 |
|
Subjunctive posted:Can I impose on you to elaborate? It sounds like I'm missing something, but I'm not sure what it is. So right now we're talking about this part of the Wikipedia article: quote:•A secure block cipher can be converted into a CSPRNG by running it in counter mode. This is done by choosing a random key and encrypting a zero, then encrypting a 1, then encrypting a 2, etc. The counter can also be started at an arbitrary number other than zero. Obviously, the period will be 2^n for an n-bit block cipher; equally obviously, the initial values (i.e., key and "plaintext") must not become known to an attacker, however good this CSPRNG construction might be. Otherwise, all security will be lost. Here Wikipedia says the counter state can be started at an arbitrary number other than zero. It doesn't say it has to be. Not only is that what Wikipedia says, it's also true. Also, it's not important that the "plaintext" here be kept from the attacker (as long as the key is kept secret). It can be all zeros. What's important is that the key be kept secret, and that the counter values don't get reused with the same key.
|
# ? Apr 20, 2014 16:00 |
|
shrughes posted:Here Wikipedia says the counter state can be started at an arbitrary number other than zero. It doesn't say it has to be. Not only is that what Wikipedia says, it's also true. I am not having a good Sunday. Doesn't it say quite explicitly that "the initial values (i.e., key and "plaintext") must not become known to an attacker"? Do you mean that the counter sequence isn't the plaintext? If that's the case, what is the plaintext for this purpose? If counter starts at the same zero each time, is that not the same as having a fixed IV? I guess that's only a problem if there's key reuse? (Thanks for your patience.)
|
# ? Apr 20, 2014 16:53 |
|
Both the key AND the plaintext have to be secret. If just the plaintext is known the cipher based CSPRNG is not compromised. If just the key is known it depends but it's significantly more dangerous than known plaintext.
|
# ? Apr 20, 2014 17:35 |
|
Subjunctive posted:I am not having a good Sunday. Doesn't it say quite explicitly that "the initial values (i.e., key and "plaintext") must not become known to an attacker"? Do you mean that the counter sequence isn't the plaintext? If that's the case, what is the plaintext for this purpose? That line is wrong, or badly written. The counter sequence and the plaintext (which can just be zero) can become known to the attacker, but the key can't. The plaintext is what the plaintext is when you run CTR mode for encryption -- take the counter value, encrypt it with the key, XOR it with the plaintext, and you've got the ciphertext. After all, with a block cipher, you should be able to give an attacker the outputs of encrypting 0, 1, 2, 3, 4, ..., N, or any values, with a given key, without them being able to figure out what the key is or what the output for any other input value would look like. quote:If counter starts at the same zero each time, is that not the same as having a fixed IV? I guess that's only a problem if there's key reuse? Right, you don't want to reuse the same key with the same counter. That's why in CTR mode you use a different nonce when reusing the key for different multi-block messages (the nonce taking the high half of the counter and the lower half being incremented each block).
|
# ? Apr 20, 2014 17:36 |
|
Thanks, that makes sense.
|
# ? Apr 20, 2014 20:45 |
|
Volmarias posted:Not using sizeof for the buffer I generally do this: code:
|
# ? Apr 20, 2014 20:53 |
|
There's nothing wrong with using snprintf, the horror was using snprintf to fill a string and then printing that string using fprintf. He could have just skipped everything involving snprintf altogether.
|
# ? Apr 20, 2014 20:55 |
|
astr0man posted:There's nothing wrong with using snprintf, the horror was using snprintf to fill a string and then printing that string using fprintf. He could have just skipped everything involving snprintf altogether. Ah, that makes sense. I try to adhere to POSIX standards as best as I can when writing C. I also use: while (fgets(buff, sizeof(buff), fp) != NULL) to fill a buffer from the output of popen as well. Is there a cleaner way to do that?
|
# ? Apr 20, 2014 20:58 |
|
ratbert90 posted:Am I the coding horror? I thought snprintf was fine for known string sizes. Some code only checks snprintf for truncation, assuming that with certain use cases it's impossible to error. Other code checks for both truncation and error, with the only error case checked being "snprintf(...) == -1". To me, if you're going to check the result of snprintf at all, it makes the most sense to ensure it's in the range [0, size), since, even if a negative return value other than -1 is implausible/"impossible", it's just as easy to check for it too. Even if I "know" the usage of snprintf can't result in truncation or error on the platform, I'd still do a result check as an assert "just in case". If you really know the usage of snprintf can't result in truncation or error, or more likely just don't care (e.g., preparing a string for log output) and specifically don't want to check the result, then casting the return value to "(void)" at least states that it's a conscious decision to not check the return value instead of an oversight. Although in this case, you definitely care if the string is truncated when passing it to popen.
|
# ? Apr 20, 2014 22:30 |
|
ratbert90 posted:while((fgets, buff, sizeof(buff), fp) != NULL); to fill a buffer from the output of popen as well. Is there a cleaner way to do that?
|
# ? Apr 20, 2014 22:42 |
|
astr0man posted:There's nothing wrong with using snprintf, the horror was using snprintf to fill a string and then printing that string using fprintf. He could have just skipped everything involving snprintf altogether.
|
# ? Apr 20, 2014 22:54 |