return0
Apr 11, 2007

Jabor posted:

Is this why people implement their own memory allocator, because they think the platform one isn't fast enough?

That said, there isn't a reason not to have per-cpu urandom state if someone wants to put in the work to implement it.

Usually people implement their own memory allocator for special cases where a custom one beats an allocator that has to be fast for the general case, and where that performance actually matters?
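For the classic case (lots of same-sized objects, single thread) the whole trick is just a free list over a preallocated slab; a minimal sketch, with the block size, pool size, and names all made up for illustration:

code:

#include <stddef.h>

#define BLOCK_SIZE  64      /* arbitrary; must hold the largest object you pool */
#define POOL_BLOCKS 1024

typedef union block {
    union block *next;                  /* link while the block sits on the free list */
    unsigned char bytes[BLOCK_SIZE];    /* payload while the block is handed out */
} block;

static block pool[POOL_BLOCKS];
static block *free_list;
static size_t next_unused;

static void *pool_alloc(void)
{
    if (free_list) {                    /* recycle: no syscalls, no locks, no size classes */
        block *b = free_list;
        free_list = b->next;
        return b;
    }
    if (next_unused < POOL_BLOCKS)      /* otherwise hand out the next fresh block */
        return &pool[next_unused++];
    return NULL;                        /* pool exhausted; caller falls back to malloc */
}

static void pool_free(void *p)
{
    block *b = p;                       /* only valid for pointers that came from the pool */
    b->next = free_list;
    free_list = b;
}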


Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Jabor posted:

Is this why people implement their own memory allocator, because they think the platform one isn't fast enough?

That said, there isn't a reason not to have per-cpu urandom state if someone wants to put in the work to implement it.

There are lots of workloads for which a general purpose allocator isn't optimally fast or efficient. It's often quite easy to measure; every browser or VM developer has spent time in those mines, done internal recycling, customized size classification and region-release policies, etc.

Would you spread the entropy across the CPU pools, then? That would seem to give worse randomness to the vast majority of cases (in which there isn't significant contention for urandom).

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Subjunctive posted:

There are lots of workloads for which a general purpose allocator isn't optimally fast or efficient. It's often quite easy to measure; every browser or VM developer has spent time in those mines, done internal recycling, customized size classification and region-release policies, etc.

That was a snarky reference to the recent Heartbleed kerfuffle, which was exacerbated by OpenSSL using a custom allocation pool, nominally due to performance concerns.

quote:

Would you spread the entropy across the CPU pools, then? That would seem to give worse randomness to the vast majority of cases (in which there isn't significant contention for urandom).

You can give all the pools most of the entropy, as long as they're in different states and you remix frequently. The pool being depleted of entropy while still giving you numbers is already something you need to be concerned about with urandom; anything where getting worse performance is preferable to having that happen should be using /dev/random instead.
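The shape I'm imagining is basically per-CPU buffers over one shared pool (a rough sketch; the global generate step is a stand-in for the locked remix-and-generate, and here it just reads the system urandom so the sketch compiles on its own):

code:

#include <stdio.h>
#include <stddef.h>

#define PERCPU_BUF 256

struct percpu_rng {
    unsigned char buf[PERCPU_BUF];   /* keystream cached for this CPU */
    size_t left;                     /* unread bytes remaining in buf */
};

/* stand-in for the locked, shared remix-and-generate step */
static void global_pool_generate(unsigned char *out, size_t len)
{
    FILE *fp = fopen("/dev/urandom", "rb");
    if (fp) {
        fread(out, 1, len, fp);
        fclose(fp);
    }
}

static void percpu_random(struct percpu_rng *rng, unsigned char *out, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++) {
        if (rng->left == 0) {                           /* refill is the only contended step */
            global_pool_generate(rng->buf, PERCPU_BUF);
            rng->left = PERCPU_BUF;
        }
        out[i] = rng->buf[PERCPU_BUF - rng->left];
        rng->left--;
    }
}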

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Jabor posted:

You can give all the pools most of the entropy, as long as they're in different states and you remix frequently. The pool being depleted of entropy while still giving you numbers is already something you need to be concerned about with urandom; anything where getting worse performance is preferable to having that happen should be using /dev/random instead.

That makes sense. If all else fails you can feed in the private key, right?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
This post keeps coming up.

http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

Written in the style of Coda Hale's bcrypt post.

However, I still haven't found the answer to my question: what happens when /dev/urandom runs out of entropy in the pool?

He has this quote:

quote:

But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I’ve made it sound; it’s akin to AES-CTR blocking based on how much “key” is left in the “keystream”.

But it doesn't sound silly to me at all. Key reuse and management is a serious issue and one of the biggest issues facing practical cryptosystems today. Reusing your key is what's broken a lot of cryptosystems in the past.

So, what does urandom do when it runs out of entropy in the pool? Return Mersenne Twister data?

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY

Suspicious Dish posted:

Return Mersenne Twister data?

I was looking into this earlier today, and turned up this presentation. Key slides are p11 and p13, which suggest it uses a nonlinear feedback shift register.

Soo yeah not a great idea to use it for cryptographic purposes.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
That's what it uses to put entropy in the pool from random events. What I'm wondering is what happens when the entropy runs out.

Dren
Jan 5, 2001

Pillbug
This was discussed a while ago and there was a link about how urandom and random both pull from the same PRNG; it's just that random tries to estimate how much entropy is in the pool and will block if there's not enough. The article went on to suggest that the measurement of entropy in the pool is a very inexact thing (and therefore sort of silly) and recommended that people use urandom without worrying about it, since you're trusting the PRNG either way. There were some cases where urandom shouldn't be used, like in a VM when the system is fresh, because the VM can't get entropy from the keyboard/mouse and some other sources that real hardware uses.
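The "just read urandom" approach really is this small, by the way (a sketch, with error handling pared to the minimum):

code:

#include <stdio.h>

/* read len random bytes from /dev/urandom; returns 0 on success, -1 on failure */
static int get_random_bytes(unsigned char *buf, size_t len)
{
    FILE *fp = fopen("/dev/urandom", "rb");
    if (!fp)
        return -1;
    size_t got = fread(buf, 1, len, fp);    /* never blocks, unlike /dev/random */
    fclose(fp);
    return got == len ? 0 : -1;
}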

revmoo
May 25, 2006

#basta
For some reason I thought Intel and AMD chips were shipping with hardware RNGs at this point, something about using the CPU temp sensor. I guess not.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Dren posted:

This was discussed a while ago and there was a link about how urandom and random both pull from the same PRNG; it's just that random tries to estimate how much entropy is in the pool and will block if there's not enough. The article went on to suggest that the measurement of entropy in the pool is a very inexact thing (and therefore sort of silly) and recommended that people use urandom without worrying about it, since you're trusting the PRNG either way. There were some cases where urandom shouldn't be used, like in a VM when the system is fresh, because the VM can't get entropy from the keyboard/mouse and some other sources that real hardware uses.

I can imagine that if you know somebody used urandom without entropy for a very long time, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things.

pseudorandom name
May 6, 2007

revmoo posted:

For some reason I thought Intel and AMD chips were shipping with hardware RNGs at this point, something about using the CPU temp sensor. I guess not.

Nobody trusts hardware RNGs.

Soricidus
Oct 21, 2010
freedom-hating statist shill

Suspicious Dish posted:

I can imagine that if you know somebody used urandom without entropy for a very long time, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things.
In a practical timeframe? Only if urandom uses a very insecure PRNG, and in general these algorithms are adequately secure - the published attacks I can think of have relied on accessing the entropy pool or predicting its state soon after startup. If the entropy pool is random and secret then you don't really need to add more, unless there are weaknesses that haven't been published yet.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I'm pretty sure that modern Linux snapshots something and then uses it to seed (u)random on startup, which should make VMs OK other than the first boot.

Isn't Linux unusual in having distinct random and urandom these days anyway?
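Roughly what the init scripts do, as I understand it (a sketch; the seed path is an assumption, and writing to /dev/urandom mixes bytes into the pool without crediting the entropy estimate):

code:

#include <stdio.h>

/* stir a saved seed file back into the kernel pool at boot */
static void restore_seed(const char *seed_path)   /* e.g. "/var/lib/random-seed" */
{
    unsigned char seed[512];
    size_t n;

    FILE *in = fopen(seed_path, "rb");
    if (!in)
        return;                                   /* first boot: nothing saved yet */
    n = fread(seed, 1, sizeof(seed), in);
    fclose(in);

    FILE *out = fopen("/dev/urandom", "wb");
    if (out) {
        fwrite(seed, 1, n, out);                  /* mixed into the pool, entropy count unchanged */
        fclose(out);
    }
}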

Dren
Jan 5, 2001

Pillbug

Suspicious Dish posted:

I can imagine that if you know somebody used urandom without entropy for a very long time, you can see repeating patterns in the PRNG output which would allow you to predict keys and other sorts of things.

One of the points the guy made but didn't delve into very much (and he made clear that this was a matter of some debate) was that the function used to measure entropy for the purposes of /dev/random is pretty naive. Point being that even if there is "enough" entropy, the measure of "enough" isn't very good to start with, so whether you use /dev/random or /dev/urandom, in reality the quality of randomness is largely in the hands of the PRNG.

I really can't recall where the article was but I'll poke around and see if I can dig it up.

Subjunctive posted:

I'm pretty sure that modern Linux snapshots something and then uses it to seed (u)random on startup, which should make VMs OK other than the first boot.

Isn't Linux unusual in having distinct random and urandom these days anyway?

With VMs it was brought up that creating your ssh keys pretty soon after install was a bad idea, but that you could do something to seed the VM's entropy from the host OS; then the snapshot thing you're talking about would take over and stuff would be fine.

edit: here's the article http://www.2uo.de/myths-about-urandom/
this one got brought up too but it's not as good as the first: http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

Dren fucked around with this message at 23:14 on Apr 19, 2014

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Subjunctive posted:

Isn't Linux unusual in having distinct random and urandom these days anyway?
/dev/random and /dev/urandom are the same thing on FreeBSD and OS X/iOS, and I suspect the other BSDs do the same.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Suspicious Dish posted:

That's what it uses to put entropy in the pool from random events. What I'm wondering is what happens when the entropy runs out.

Dren posted:

edit: here's the article http://www.2uo.de/myths-about-urandom/
this one got brought up too but it's not as good as the first: http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

Dren's article answers Suspicious Dish's question basically with "it doesn't happen". The CSPRNG used by both urandom and random can use 256 bits of entropy to generate a stream of cryptographically random (unpredictable) numbers for longer than one could need. I have a moderate-to-severe nerdcrush on Thomas Ptacek, but I agree that the first article is better. (djb argues that more mixing of entropy can hurt, if you are using a malicious source. I don't think that attack worries me as much as "attacker gets snapshot of RNG state", but it's interesting to consider.)

In the mid-nineties, a friend of mine broke SSL in early Netscape versions because they seeded the RNG with the current time. This caused a future co-worker of mine to shave his head in shame. That we're even having this sort of conversation on a comedy forum makes me so warm inside.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Yep. My (incorrect) assumption was that mixing in more true entropy obtained from "true random" sources was required for the CSPRNG to continue to be secure. It seems that given 256 bits as an initial seed, and never adding more, the CSPRNG is quite good at generating 'secure' random numbers for a long time. Adding more entropy over time is optional, and only defends against the attacker knowing the CSPRNG's internal state.

I'm not sure how well I trust that (it seems to me that if you obtained 100 or so numbers, you might be able to figure out the internal state and the next one in the chain) but it's nice to know.

Dren
Jan 5, 2001

Pillbug

Subjunctive posted:

Dren's article answers Suspicious Dish's question basically with "it doesn't happen". The CSPRNG used by both urandom and random can use 256 bits of entropy to generate a stream of cryptographically random (unpredictable) numbers for longer than one could need. I have a moderate-to-severe nerdcrush on Thomas Ptacek, but I agree that the first article is better. (djb argues that more mixing of entropy can hurt, if you are using a malicious source. I don't think that attack worries me as much as "attacker gets snapshot of RNG state", but it's interesting to consider.)

In the mid-nineties, a friend of mine broke SSL in early Netscape versions because they seeded the RNG with the current time. This caused a future co-worker of mine to shave his head in shame. That we're even having this sort of conversation on a comedy forum makes me so warm inside.

Keep in mind the big problem with /dev/random or /dev/urandom usage that the first article points out in its last few sentences: VMs that restore from snapshots all the time restore the random state as well. That's something I hadn't thought of before I read the article even though it seems incredibly obvious.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Suspicious Dish posted:

I'm not sure how well I trust that (it seems to me that if you obtained 100 or so numbers, you might be able to figure out the internal state and the next one in the chain) but it's nice to know.

For some non-cryptographic PRNGs, that's the case; if you get (I think) 624 consecutive outputs from Mersenne Twister, you can reconstruct the internal state and predict the rest. For CSPRNGs, though, your statement is sort of equivalent to this: given 100 words of ciphertext encrypted with a 256-bit key, you can figure out what the next byte would be. CSPRNGs are built on the same primitives as our best ciphers (AES, Twofish, etc), so predictability would lead to many forms of compromise. (Roughly, I think it works like this: initialize a counter to a secret value n. Use a (say) 256-bit key to encrypt the sequence n, n+1, .... The resulting ciphertext is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.)

Edit: ^^^^^

Dren posted:

Keep in mind the big problem with /dev/random or /dev/urandom usage that the first article points out in its last few sentences: VMs that restore from snapshots all the time restore the random state as well. That's something I hadn't thought of before I read the article even though it seems incredibly obvious.

Yeah, you need state hygiene (though the state injection is adding entropy to the pool rather than replacing it, AIUI, so you also get whatever natural input comes from booting the machine, getting an IP address, waiting on I/O). I wonder how we handle that for machine reimaging in clusters at work, now that I think about it. I'm sure it's something reasonable, because we have very wise people on the problem, but I'm curious...

Subjunctive fucked around with this message at 02:37 on Apr 20, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Yeah, I'm not a cryptographer either. Unfortunately, cryptography has been sold to me as an arcane set of magic I shouldn't touch lest I create my own terrible cryptosystem, so I don't really know how to begin to answer such a question. Nor would I know what a satisfying proof would look like. :sigh:

shrughes
Oct 11, 2008

(call/cc call/cc)

quote:

(Roughly, I think it works like this: initialize a counter to a secret value n. Use a (say) 256-bit key to encrypt the sequence n, n+1, .... The resulting ciphertext is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.)

You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (with a zero nonce). The reason to do something fancier than that is that if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable.
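Concretely, something like this against OpenSSL's EVP interface (a sketch; names and error handling are mine, and the 256-bit key is the only secret):

code:

#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

/* fill out[0..len) with keystream: AES-256-CTR over a zero "plaintext" */
static int ctr_prng(unsigned char *out, int len)
{
    unsigned char key[32], iv[16];
    unsigned char *zeros;
    EVP_CIPHER_CTX *ctx;
    int outl = 0, ok;

    if (RAND_bytes(key, sizeof(key)) != 1)  /* the 256-bit key is the randomness */
        return -1;
    memset(iv, 0, sizeof(iv));              /* counter starts at 0: fine for a one-shot key */

    zeros = calloc(1, len);
    ctx = EVP_CIPHER_CTX_new();
    ok = zeros && ctx
        && EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv) == 1
        && EVP_EncryptUpdate(ctx, out, &outl, zeros, len) == 1
        && outl == len;                     /* CTR is a stream mode, no padding */

    EVP_CIPHER_CTX_free(ctx);
    free(zeros);
    return ok ? 0 : -1;
}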

Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!

shrughes posted:

You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (with a zero nonce). The reason to do something fancier than that is that if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable.
Also, the reason to include actual entropy is that if a hacker does get your RNG state, you don't want him to be able to use it for all time. It'll eventually "go sour" for him.

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY

shrughes posted:

You could pick n = 0 and you'll get a cryptographically strong sequence of bits. That's just encrypting /dev/zero in CTR mode (with a zero nonce). The reason to do something fancier than that is that if you get hacked and your RNG state is leaked, you don't want your server's random number history to be recoverable.
If you fix n = 0 then you get a cryptographically trivial sequence of bits, since there exists an efficient probabilistic algorithm that, given the output so far, can predict the next bit with odds significantly better than 1/2. PRNGs and CSPRNGs can't be defined in a wholly deterministic setting. They need to have access to some sort of randomness.

coffeetable fucked around with this message at 09:19 on Apr 20, 2014

Deus Rex
Mar 5, 2005

If they need a source of randomness, may I suggest BYOB :D

coffeetable
Feb 5, 2006

TELL ME AGAIN HOW GREAT BRITAIN WOULD BE IF IT WAS RULED BY THE MERCILESS JACKBOOT OF PRINCE CHARLES

YES I DO TALK TO PLANTS ACTUALLY

Subjunctive posted:

(Roughly, I think it works like this: initialize a counter to a secret value n. Use a (say) 256-bit key to encrypt the sequence n, n+1, .... The resulting ciphertext is a cryptographically strong sequence of bits. IANAcryptographer, though, and it's been more than a decade since I was intimate with Yarrow internals.)

So from the theoretical side, the key result is the Goldreich-Levin Theorem, which says if you have
  • a random private bitstring x
  • a random public bitstring r
  • a one-way permutation f
  • a function b(x, r) that's 1 if there's an odd number of 1s in x & r and 0 otherwise
then anyone who only knows f(x) and r can't guess the bit b(x, r) any better than chance.

This is useful for CSPRNG purposes because it means you can take a uniformly random n-bit input and "stretch" it out to an (n+1)-bit output that also looks uniformly random to computationally bounded observers, and which can't be used to deduce the internal state x (again, at least by computationally bounded observers). Even better, you can safely repeat this process a polynomial number of times!
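Written out, the one-bit stretch is just (standard notation; b is the inner product mod 2):

code:

G(x, r) = \bigl( f(x),\ r,\ b(x, r) \bigr), \qquad b(x, r) = \bigoplus_{i=1}^{n} x_i r_i

On a uniformly random (x, r) that's 2n input bits going to 2n+1 output bits no bounded observer can tell from uniform; discount the public r and it's exactly the n-to-(n+1) stretch.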

coffeetable fucked around with this message at 10:24 on Apr 20, 2014

shrughes
Oct 11, 2008

(call/cc call/cc)

coffeetable posted:

If you fix n = 0 then you get a cryptographically trivial sequence of bits, since there exists an efficient probabilistic algorithm that, given the output so far, can predict the next bit with odds significantly better than 1/2. PRNGs and CSPRNGs can't be defined in a wholly deterministic setting. They need to have access to some sort of randomness.

The key is your source of randomness.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrughes posted:

The key is your source of randomness.

Does this mean that one doesn't need to be concerned about known-plaintext attacks against the underlying cipher?

Wikipedia says that the initial counter state needs to be kept secret for both block and stream underlying ciphers, but I admit that I'm not sure as to the exact reason.

shrughes
Oct 11, 2008

(call/cc call/cc)

Subjunctive posted:

Does this mean that one doesn't need to be concerned about known-plaintext attacks against the underlying cipher?

Right. Because ciphers that can't withstand known-plaintext attacks are broken.

Subjunctive posted:

Wikipedia says that the initial counter state needs to be kept secret for both block and stream underlying ciphers, but I admit that I'm not sure as to the exact reason.

No it doesn't.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrughes posted:

Right. Because ciphers that can't withstand known-plaintext attacks are broken.

Right, duh.

quote:

No it doesn't.

?

Wikipedia posted:

  • A secure block cipher can be converted into a CSPRNG by running it in counter mode. This is done by choosing a random key and encrypting a zero, then encrypting a 1, then encrypting a 2, etc. The counter can also be started at an arbitrary number other than zero. Obviously, the period will be 2^n for an n-bit block cipher; equally obviously, the initial values (i.e., key and "plaintext") must not become known to an attacker, however good this CSPRNG construction might be. Otherwise, all security will be lost.
  • A cryptographically secure hash of a counter might also act as a good CSPRNG in some cases. In this case, it is also necessary that the initial value of this counter is random and secret. However, there has been little study of these algorithms for use in this manner, and at least some authors warn against this use.
  • Most stream ciphers work by generating a pseudorandom stream of bits that are combined (almost always XORed) with the plaintext; running the cipher on a counter will return a new pseudorandom stream, possibly with a longer period. The cipher is only secure if the original stream is a good CSPRNG (this is not always the case: see RC4 cipher). Again, the initial state must be kept secret.

Can I impose on you to elaborate? It sounds like I'm missing something, but I'm not sure what it is.

shrughes
Oct 11, 2008

(call/cc call/cc)

Subjunctive posted:

Can I impose on you to elaborate? It sounds like I'm missing something, but I'm not sure what it is.

So right now we're talking about this part of the Wikipedia article:

quote:

• A secure block cipher can be converted into a CSPRNG by running it in counter mode. This is done by choosing a random key and encrypting a zero, then encrypting a 1, then encrypting a 2, etc. The counter can also be started at an arbitrary number other than zero. Obviously, the period will be 2^n for an n-bit block cipher; equally obviously, the initial values (i.e., key and "plaintext") must not become known to an attacker, however good this CSPRNG construction might be. Otherwise, all security will be lost.

Here Wikipedia says the counter state can be started at an arbitrary number other than zero. It doesn't say it has to be. Not only is that what Wikipedia says, it's also true.

Also, it's not important that the "plaintext" here be kept from the attacker (as long as the key is kept secret). It can be all zeros. What's important is that the key be kept secret, and that the counter values don't get reused with the same key.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

shrughes posted:

Here Wikipedia says the counter state can be started at an arbitrary number other than zero. It doesn't say it has to be. Not only is that what Wikipedia says, it's also true.

Also, it's not important that the "plaintext" here be kept from the attacker (as long as the key is kept secret).

I am not having a good Sunday. Doesn't it say quite explicitly that "the initial values (i.e., key and "plaintext") must not become known to an attacker"? Do you mean that the counter sequence isn't the plaintext? If that's the case, what is the plaintext for this purpose?

If counter starts at the same zero each time, is that not the same as having a fixed IV? I guess that's only a problem if there's key reuse?

(Thanks for your patience.)

KaneTW
Dec 2, 2011

Both the key AND the plaintext have to be secret. If just the plaintext is known, the cipher-based CSPRNG is not compromised. If just the key is known, it depends, but it's significantly more dangerous than known plaintext.

shrughes
Oct 11, 2008

(call/cc call/cc)

Subjunctive posted:

I am not having a good Sunday. Doesn't it say quite explicitly that "the initial values (i.e., key and "plaintext") must not become known to an attacker"? Do you mean that the counter sequence isn't the plaintext? If that's the case, what is the plaintext for this purpose?

That line is wrong, or badly written. The counter sequence and the plaintext (which can just be zero) can become known to the attacker, but the key can't.

The plaintext is just whatever you feed in when you run CTR mode for encryption -- take the counter value, encrypt it with the key, XOR it with the plaintext, and you've got the ciphertext.

After all, with a block cipher, you should be able to give an attacker the outputs of encrypting 0, 1, 2, 3, 4, ..., N, or any values, with a given key, without them being able to figure out what the key is or what the output for any other input value would look like.

quote:

If counter starts at the same zero each time, is that not the same as having a fixed IV? I guess that's only a problem if there's key reuse?

Right, you don't want to reuse the same key with the same counter. That's why in CTR mode you use a different nonce when reusing the key for different multi-block messages (the nonce taking the high half of the counter and the lower half being incremented each block).
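Per block it's just (nonce in the high half of the counter block, block index i in the low half):

code:

C_i = P_i \oplus E_K(\mathrm{nonce} \,\|\, i), \qquad i = 0, 1, 2, \ldots

Reuse the same key with the same (nonce, i) and two ciphertext blocks get XORed with the same keystream block, which is exactly the reuse problem above.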

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Thanks, that makes sense.

FlapYoJacks
Feb 12, 2009

Volmarias posted:

Not using sizeof for the buffer

No guarantee that the message will fit in the buffer

Not using fprintf :frog:

I generally do this:

code:

#include <stdio.h>
#include <errno.h>

#define MAX_BUFF 256    /* arbitrary size for this example */

static int butt_farts(void)
{
    FILE *fp;
    char cmd[MAX_BUFF] = {0};
    char line[MAX_BUFF] = {0};
    int retval = 0;

    /* sizeof(cmd) keeps snprintf from writing past the buffer */
    snprintf(cmd, sizeof(cmd), "butt farts");

    fp = popen(cmd, "r");
    if (!fp)
    {
        printf("Butt did not produce farts, errno: %i\n", errno);
        return -1;
    }

    /* some_string_handling: read whatever the command printed */
    while (fgets(line, sizeof(line), fp) != NULL)
    {
        /* ... */
    }

    retval = pclose(fp);
    if (retval != 0)
    {
        /* more error handling */
        return -1;
    }

    return 0;
}

Am I the coding horror? I thought snprintf was fine for known string sizes. :psyduck:

astr0man
Feb 21, 2007

hollyeo deuroga
There's nothing wrong with using snprintf; the horror was using snprintf to fill a string and then printing that string with fprintf. He could have just skipped everything involving snprintf altogether.

FlapYoJacks
Feb 12, 2009

astr0man posted:

There's nothing wrong with using snprintf; the horror was using snprintf to fill a string and then printing that string with fprintf. He could have just skipped everything involving snprintf altogether.

Ah, that makes sense. I try to adhere to POSIX standards as best as I can when writing C. I also use:

while (fgets(buff, sizeof(buff), fp) != NULL); to fill a buffer from the output of popen as well. Is there a cleaner way to do that?

ExcessBLarg!
Sep 1, 2001

ratbert90 posted:

Am I the coding horror? I thought snprintf was fine for known string sizes. :psyduck:
Only if you don't check the return value of snprintf for truncation or error. Actually I should qualify that a bit since there's not universal agreement on best practice.

Some code only checks snprintf for truncation, assuming that with certain use cases it's impossible to error. Other code checks for both truncation and error, but with the only error case being "snprintf(...) == -1". To me, if you're going to check the result of snprintf at all, it makes the most sense to ensure it's in the range [0, size), since, even if a negative return value other than -1 is implausible/"impossible", it's just as easy to check for that too.

Even if I "know" the usage of snprintf can't result on truncation or error on the platform, I'd still do a result check as an assert "just in case".

If you really know the usage of snprintf can't result in truncation or error, or more likely just don't care (e.g., preparing a string for log output) and specifically don't want to check the result, then casting the return value to "(void)" at least states that it's a conscious decision not to check the return value rather than an oversight. Although in this case, you definitely care if the string is truncated when passing it to popen.
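For the record, the check I mean is just this (function name and format string are placeholders):

code:

#include <stdio.h>

/* returns 0 if buf holds the full, NUL-terminated string, -1 on error or truncation */
static int build_cmd(char *buf, size_t size, int pid)
{
    int n = snprintf(buf, size, "kill -0 %d", pid);
    if (n < 0 || (size_t)n >= size)
        return -1;    /* n < 0: output error; n >= size: result was truncated */
    return 0;
}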

ExcessBLarg!
Sep 1, 2001

ratbert90 posted:

while (fgets(buff, sizeof(buff), fp) != NULL); to fill a buffer from the output of popen as well. Is there a cleaner way to do that?
fread shouldn't return short (except on error or EOF), so you don't need to call it in a loop. Except, apparently, there are cases where fread has returned short as a result of libc bugs. Furthermore, fread doesn't need to scan for linebreaks like fgets does.
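i.e. something like this, assuming you're happy with whatever fits in one buffer:

code:

#include <stdio.h>

/* read everything the popen'd command printed, up to size - 1 bytes */
static int slurp(FILE *fp, char *buff, size_t size)
{
    size_t n = fread(buff, 1, size - 1, fp);   /* only short on EOF or error */
    buff[n] = '\0';
    return ferror(fp) ? -1 : (int)n;
}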


Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!

astr0man posted:

There's nothing wrong with using snprintf; the horror was using snprintf to fill a string and then printing that string with fprintf. He could have just skipped everything involving snprintf altogether.
Also, the horror is not doing fprintf(fp, "%s", buf) or fputs(buf, fp). If buf has a % in it, fprintf will go looking for arguments that were never passed and pull garbage off the stack, which causes all sorts of problems.
