|
rotor posted:n&1 is a little weird, but I don't see what makes it a coding horror!! It's at least hardware dependent. Unless it says somewhere in the C++ specification that integers have to be represented exactly as they are commonly today...
|
# ? Jan 6, 2009 00:56 |
|
|
|
Ugg boots posted:It's at least hardware dependent. Unless it says somewhere in the C++ specification that integers have to be represented exactly as they are commonly today... Do you mean 2's complement instead of sign bit or something? In that case it would still work for all positive numbers.
|
# ? Jan 6, 2009 00:57 |
|
hexadecimal posted:Do you mean 2's complement instead of sign bit or something? In that case it would still work for all positive numbers. Where does it say that 00000001 is 1? What if hardware specifies that the sign bit is the least significant bit instead of the most significant bit? What if it stores the least significant bit in the most significant bit's place for integers? Edit: Also stop PMing people and get in #cobol, it's more fun.
|
# ? Jan 6, 2009 01:00 |
|
The bigger horror is that an optimizing compiler (i.e. all of them) will convert x mod 2^n to an appropriate bitwise operation. Don't try to outsmart the compiler with micro-optimizations.
|
# ? Jan 6, 2009 01:02 |
|
Ugg boots posted:Where does it say that 00000001 is 1? What if hardware specifies that the sign bit is the least significant bit instead of the most significant bit? What if it stores the least significant bit in the highest significant bits place for integers? The LSB that is the LSB is not the true LSB! Hail Discordia! You're right in a global sense, of course, at least on some globe where you said something right.
|
# ? Jan 6, 2009 01:31 |
|
for the record, (n&1) is about twice as fast as (n%2) in javascript code:
modulus: 2237 bittwiddle: 1498
rotor fucked around with this message at 01:35 on Jan 6, 2009 |
# ? Jan 6, 2009 01:31 |
|
rotor posted:for the record, (n&1) is about twice as fast as (n%2) in javascript Good lord, it had better be a lot faster than that. n%2 is a floating-point operation in JavaScript. EDIT: also, JavaScript obviously doesn't have an optimizing compiler. rjmccall fucked around with this message at 01:37 on Jan 6, 2009 |
# ? Jan 6, 2009 01:33 |
|
rjmccall posted:Good lord, it had better be a lot faster than that. n%2 is a floating-point operation in JavaScript. iunno man, run the benchmark urself
|
# ? Jan 6, 2009 01:36 |
|
rotor posted:iunno man, run the benchmark urself Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in?
|
# ? Jan 6, 2009 01:40 |
|
rjmccall posted:Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in? build a better one then. Ran it in FF3 and IE6. IE has this thing where it yells at you if the script blocks too long, so those results are suspect.
|
# ? Jan 6, 2009 01:42 |
|
rjmccall posted:Folly of microbenchmarks, I think. Your benchmark time is probably dominated by loop and general interpretation overhead. What did you run it in? Uhhhhhhh it's the same loop. I ran a simple benchmark in C; without any optimizations, & is faster than %. With -O3 they are essentially the same.
|
# ? Jan 6, 2009 01:43 |
|
narbsy posted:Uhhhhhhh it's the same loop. Right. So let's say operation A takes 1ms to do a million iterations, operation B takes 100ms to do a million iterations, and loop overhead costs you 100ms. So the benchmark takes 101ms for A and 200ms for B; voilà, a 100x difference turns into a 2x difference. And trust me that that sort of loop overhead is not unreasonable in naive interpreters. There are tons of articles out there about how silly this sort of microbenchmarking is, but you should at least subtract out the empty-loop overhead. The reason I asked about the browser you were using was just curiosity (also you should always mention the host platform/configuration when reporting benchmark results); Firefox's JS implementation leapt forward in 3.1. rjmccall fucked around with this message at 01:55 on Jan 6, 2009 |
# ? Jan 6, 2009 01:53 |
|
rjmccall posted:Right. So let's say operation A takes 1ms to do a million iterations, and operation B takes 100ms to do a million iterations, and loop overhead costs you 100ms. So the benchmark takes 101ms for A and 200ms for B; voilà, a 100x difference turns into a 2x difference. And trust me that that sort of loop overhead is not unreasonable in naive interpreters. it's the exact same loop, but like I said feel free to make a better one. I'm sorry, but simply saying "It must be X, regardless of what the benchmarks may say" is a little too close to faith-based computing for my tastes.
|
# ? Jan 6, 2009 01:56 |
|
Just out of curiosity, can anyone name a CPU in significant use today that doesn't use 8 bits per byte and/or two's complement to represent signed integers? Edit: well blow me down, maybe there are a few network appliances out there that handle one's complement arithmetic because: quote:The IPv4 header checksum uses ones' complement arithmetic. minato fucked around with this message at 02:10 on Jan 6, 2009 |
# ? Jan 6, 2009 02:01 |
|
rjmccall posted:The reason I asked about the browser you were using was just curiosity (also you should always mention the host platform/configuration when reporting benchmark results); Firefox's JS implementation leapt forward in 3.1. I posted the source, run it your drat self on whatever platform you like and check the results, jeez
|
# ? Jan 6, 2009 02:03 |
|
code:
modulus: 3237 bittwiddle: 1884
|
# ? Jan 6, 2009 02:35 |
|
rotor posted:it's the exact same loop, but like I said feel free to make a better one. I'm sorry, but simply saying "It must be X, regardless of what the benchmarks may say" is a little too close to faith-based computing for my tastes. That's not what I'm saying at all. I'm saying your benchmark is crap and doesn't say anything useful, so we should ignore it. Look, I adjusted your benchmark to also post the results of an empty loop, then got stable results for a bunch of different implementations on the machine I'm using to post this. There are some good reasons why that still doesn't capture the performance difference properly, but whatever, you don't care. All these results are on a 2.16GHz core duo, 2GB RAM, MacOS 10.5.6, fairly heavy background load. Firefox 3.0.5: 600ms empty, 1150ms modulus, 800ms twiddle, i.e. 550ms modulus, 200ms twiddle. Firefox Jan. 5th nightly: 45/172/54, i.e. 127ms modulus, 7ms twiddle. Safari 3.2.1: 650/1450/1100, i.e. 800ms modulus, 450ms twiddle. WebKit nightly: 50/163/68, i.e. 113ms modulus, 18ms twiddle. So I see the performance difference as anything from 2x (when you would have said 1.3x) to 18x (when you would have said 3x).
|
# ? Jan 6, 2009 02:35 |
|
Wow, you people sure got worked up over & vs. %. I just hope this doesn't lead to ww3. (The axis of %)
|
# ? Jan 6, 2009 02:38 |
|
rjmccall posted:That's not what I'm saying at all. I'm saying your benchmark is crap and doesn't say anything useful, so we should ignore it. quote:Firefox 3.0.5: 600ms empty, 1150ms modulus, 800ms twiddle, i.e. 550ms modulus, 200ms twiddle. 2x, 3x, meh. Seems like it's in the same ballpark to me. Good to see the other engine results tho. rotor fucked around with this message at 07:24 on Jan 6, 2009 |
# ? Jan 6, 2009 02:42 |
|
Ugg boots posted:Where does it say that 00000001 is 1? Now, I agree that doing &1 to test for even/odd is silly, but this is a pretty dumb point. If you can't assume something as elementary as that, then bitwise operations quickly become nearly (completely?) useless.
|
# ? Jan 6, 2009 04:24 |
|
Why do you consider &1 to be silly? Is it code clarity, performance, portability, or something else? Code clarity I could understand, and same goes for performance unless we're talking about some highly-optimized graphics rendering loop, but I can't imagine anyone seriously citing portability.
|
# ? Jan 6, 2009 05:29 |
|
minato posted:Why do you consider &1 to be silly? Is it code clarity, performance, portability, or something else? I'd consider it silly for two reasons. The first is clarity: using a bitwise operation to determine even/oddness instead of the conventional modulo is unclear, and it falls into the same category as if(p), which was mentioned earlier because it too assumes 0 is false. The second is that if the code is C, as hexadecimal was thinking of, the performance benefit really isn't large: it doesn't show up much in un-optimized code, and barely at all in optimized code.
|
# ? Jan 6, 2009 07:00 |
|
Steve French posted:Now, I agree that doing &1 to test for even/odd is silly, but this is a pretty dumb point. If you can't assume something as elementary as that, then bitwise operations quickly become nearly (completely?) useless. No they aren't. They only become useless if you're doing bitwise operations on integers and expecting the result to reflect the numerical value rather than the representation. In all worlds 0xFF & 0x01 equals 0x01, but not in all numerical representations does the 0x01 bit mean the same thing. What if, for example, a system only had a floating point math processor, so all integers were stored as floating point numbers? Ignoring the precision problems, & 0x01 would not tell you whether the number is even or odd, but in all representations it would give you the least significant bit.
|
# ? Jan 6, 2009 07:06 |
|
Ugg boots posted:What if, for example, a system only had a floating point math processor, so all integers are floating point numbers. Ignoring the precision problems, & 0x01 would not tell you if the number is even or odd, but in all representations it would give you the least significant bit. Well, considering we are talking about C, and in C bitwise operators may only be applied to integral operands, that's another pretty bogus "what if." It's entirely reasonable to assume that the representation of a particular positive integer value is its binary equivalent (aside from endianness issues).
|
# ? Jan 6, 2009 07:15 |
|
Ugg boots posted:No they aren't. They only become useless if you're only doing bitwise operations on integers expecting the bitwise operations to have a result on the numerical representation. In all worlds doing 0xFF & 0x01 would equal 0x01, but not in all numerical representations does the 0x01 bit represent the same thing. What if, for example, a system only had a floating point math processor, so all integers are floating point numbers. Ignoring the precision problems, & 0x01 would not tell you if the number is even or odd, but in all representations it would give you the least significant bit. If I were floWenoL, I would be posting "Look how stupid you are." Bitwise anding a floating point representation with the floating point representation of 1 won't give you the least significant bit; it will give you some power of 2 or NaN. Arguing a point by bringing up these fantasmical scenarios is worthless jerkery.
|
# ? Jan 6, 2009 07:17 |
|
shrughes posted:If I were floWenoL, I would be posting "Look how stupid you are." Bitwise anding a floating point representation with the floating point representation of 1 won't give you the least significant bit; it will give you some power of 2 or NaN. Arguing a point by bringing up these fantasmical scenarios is worthless jerkery. Not even that. As I said above, bitwise operators don't even work with floating point operands: "error: invalid operands to binary &"
|
# ? Jan 6, 2009 07:30 |
|
This whole bit-fiddling discussion seems pretty silly to me. Just look up in the relevant language standard (i.e. C89/90/99 or whatever the relevant, current C++ standard is) what the meaning of these bit-wise operations is supposed to be, and from then on it's the compiler writer's job to ensure that the appropriate semantics of the operations are respected on whatever crazy moon-architecture you're compiling your program for. And while I've no clue about the situation as regards bit-wise operators in C++, iirc the most recent C language standards all prescribe that they operate as expected on at least the positive integers. So while I'm pretty sure that the result of an expression like (-1)>>2 is implementation-defined*, 1<<4 or 17&1 are all well-defined expressions with semantics defined by the language standard, even if the constants involved are of a signed integer type. *unlike ((unsigned)(-1))>>2, because the most recent C standards *require* conversions from signed to an unsigned integer type to behave as reduction modulo 2^N, i.e. ((unsigned)(-1)) is guaranteed to be UINT_MAX. EDIT: Oh and also to the people who posted things like code:
PrBacterio fucked around with this message at 08:23 on Jan 6, 2009 |
# ? Jan 6, 2009 08:17 |
|
Why are we doing performance evaluations on an interpreted language?
|
# ? Jan 6, 2009 17:21 |
|
ih8ualot posted:Why are we doing performance evaluations on an interpreted language? yeah man it's javascript who cares about performance right? the only time programmers should care about performance is when they're programming games and real-time systems.
|
# ? Jan 6, 2009 17:27 |
|
Because cycle counting is too depressing
|
# ? Jan 6, 2009 17:29 |
|
ih8ualot posted:Why are we doing performance evaluations on an interpreted language? Google does it, why can't we?
|
# ? Jan 6, 2009 21:45 |
|
rotor posted:yeah man it's javascript who cares about performance right? the only time programmers should care about performance is when they're programming games and real-time systems. I didn't mean it like that. Of course performance is important for any application you write. I just meant that in terms of determining instruction efficiency, results from a compiled and optimized language might prove to be a better metric. But I could be wrong.
|
# ? Jan 6, 2009 23:14 |
|
ih8ualot posted:I didn't mean it like that. Of course performance is important for any application you write. Actually, the best metric would be results from the language you're using.
|
# ? Jan 6, 2009 23:15 |
|
ih8ualot posted:I just meant that in terms of determining instruction efficiency, results from a compiled and optimized language might prove to be a better metric. But I could be wrong. did you miss the part where there's no difference between the two in C and a big difference in javascript?
|
# ? Jan 6, 2009 23:34 |
|
Well, I just got served.
|
# ? Jan 7, 2009 01:19 |
|
rotor: I just wanted to say that I missed you while you were gone.
|
# ? Jan 7, 2009 02:39 |
|
geetee posted:rotor: I just wanted to say that I missed you while you were gone. I missed u too. protip: never go on vacation with small children, it sucks.
|
# ? Jan 7, 2009 02:42 |
|
rotor posted:I missed u too. protip: never go on vacation with small children, it sucks. But what would you do if you need to feast?
|
# ? Jan 7, 2009 05:04 |
|
GUYS GUYS What if the internal representation was trinary? WHAT THEN? &1 is a PORTABILITY HORROR get a grip seriously!
|
# ? Jan 7, 2009 15:03 |
|
|
|
Ugg boots posted:What if, for example, a system only had a floating point math processor, so all integers are floating point numbers. Hahaha better yet, how do you define modulus for float numbers?
2.0 % 2.0 = 0
2.1 % 2.0 = exception: modulus on float with remainder, core dumped
heeen fucked around with this message at 15:25 on Jan 7, 2009 |
# ? Jan 7, 2009 15:07 |