floWenoL
Oct 23, 2002

FastEddie posted:

No, they're bitwise shifts, because they operate on bits, rather than the boolean interpretation of the collections of bits. I have no idea what a logical shift would be.

Just another name.


floWenoL
Oct 23, 2002

Presto posted:

If the object is signed, it does an arithmetic shift which keeps the sign bit.

No, if the integer is signed and negative, the behavior is implementation-defined.

floWenoL
Oct 23, 2002

Ari posted:

This happens whenever I need to reference a function from math.h or stdlib.h - I just gave an example here from math.h. The stdio functions work fine though.

Try "-lm".

floWenoL
Oct 23, 2002

HB posted:

Yeah, sorry, I meant that its size was determined at compile-time rather than typing-time.

What is this "typing-time"?

floWenoL
Oct 23, 2002

Is it legal to reinterpret_cast a char * to signed char * or unsigned char *? Some sources online indicate 'yes', but I can't find anything in the standard to back me up.

floWenoL
Oct 23, 2002

That Turkey Story posted:

Yes.

I mean 'legal' as in it is guaranteed to do what you expect when you read from or write to the pointer. Can you show me where in the standard that's implied? 5.2.10.7 seems to indicate otherwise:

5.2.10.7 posted:

A pointer to an object can be explicitly converted to a pointer to an object of different type.65) Except that converting an rvalue of type “pointer to T1” to the type “pointer to T2” (where T1 and T2 are object types
and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value, the result of such a pointer conversion is unspecified.

floWenoL
Oct 23, 2002

Dransparency posted:

Even though I don't need the support for pointers, would this be a better approach for my problem?

Yes.

quote:

I've been avoiding it so far because I want to make this as standardized as possible, plus I don't know the syntax for a member function with additional template parameters from its class. Can I just use nested templates?

Yes. Depending on what exactly class Foo is supposed to do, there might not even be a need for it.

floWenoL
Oct 23, 2002

Plastic Jesus posted:

I came across this a year or two ago:

Apparently, you don't know enough to check the dates on what you read on the internet as that was written in 1989. The line:

quote:

The result is often thousands of needless lines of code passing through the lexical analyzer, which is (in good compilers) the most expensive phase.

should also have raised a red flag, as compilers these days skip files they can see are entirely protected by an include guard.

floWenoL
Oct 23, 2002

Presto posted:

His only argument is that multiple inclusions slow down the compiler. My response would be: Yes? And?

And compile time is a major bottleneck for large C/C++ programs. Pike's advice might be outdated, but including (non-system) headers willy-nilly is a good way to kill compile performance.

floWenoL
Oct 23, 2002

ColdPie posted:

Yeah, not to mention they must be kept in a precise order. One.h must come before Two.h! Hope you don't forget!

Seems like a good idea on the surface, but yeah, that just doesn't work. Though it does seem like there should be a way to prevent this (actual picture from one of my recent projects):

You know what else probably seemed like a good idea on the surface? You uploading an 800kb png and thumbnailing it.

floWenoL fucked around with this message at 05:24 on Mar 27, 2008

floWenoL
Oct 23, 2002

ColdPie posted:

I figure there should be a way to prevent that kind of thing but hey v:)v

No programming language will ever prevent bad design. :colbert:

floWenoL
Oct 23, 2002

Plastic Jesus posted:

I just said that I don't do this for libraries for exactly the reasons you just listed. I do it within my own projects for exactly the reasons I just listed. I will probably flatten things when they go to release so that you do not kill me. But during development it is _nice_ to know where dependencies lie.

It is, but there are tools like makedepend for that. It's absurd to do something so tedious and error-prone manually for so little benefit.

floWenoL
Oct 23, 2002

Presto posted:

I have about 750,000 lines of code at work (C/C++/Fortran) that builds in 4 1/2 minutes on a fairly old dual 2.2 GHz Xeon machine with parallel make, so compile time isn't a real concern.

Honestly, that's not a very large project; have you tried building Firefox, KDE, or OpenOffice lately? Anyway, the issue isn't the from-scratch build time, but the build time after a one-line header change, and making sure that isn't approximately equal to the from-scratch build time.

floWenoL
Oct 23, 2002

Drx Capio posted:

I think the complexity of sorting an array of 20 numbers is pretty insignificant, and I only gave the pseudocode as an example. The person writing the program would probably prioritize good random numbers over a fast shuffle. Besides, I would probably just use random_shuffle anyway.

I think the person writing the program would probably prioritize "having my program not crash" over anything else, which sorting with a random metric would do. To fix that, you'd have to make sure that your predicate returns consistent results if you pass it the same pair twice, at which point your level of complexity is way beyond just doing the shuffle.

floWenoL
Oct 23, 2002

more falafel please posted:

NEVER assume this. You have no idea how many times this comes up. If you want your code to compile on 64-bit, DO NOT ever assume this. Like, ever.

Ever.

Or at least use uintptr_t :shobon:.

floWenoL
Oct 23, 2002

ShoulderDaemon posted:

Because int* foo, bar; is the same as int *foo; int bar;, not int *foo; int *bar;.

Some C++ coding styles do put the * next to the type, but they also mandate (or strongly suggest) one variable per declaration.

floWenoL
Oct 23, 2002

Smackbilly posted:

Okay but if you're eschewing all forms of exceptions, then you're also tossing out STL and basically working in C-with-classes. I suppose some people do that, though.

What are you talking about? The STL throws exceptions in only a few places.

floWenoL
Oct 23, 2002

Smackbilly posted:

Always and never, respectively. There's no reason to use C I/O functions in C++ when there are C++ I/O functions that do everything the C functions do.

Except the C++ I/O functions are much more verbose and heavyweight. Also, good luck using C++ I/O for binary I/O.

floWenoL
Oct 23, 2002

Insurrectum posted:

Alright, so I'm trying to teach myself C++, and to begin I just wanted to write a simple adventure game program that just creates a series of rooms with descriptions and North, South, East, and West rooms. I'm really just trying to get a feel for pointers, since I never used them in Java.

Yes you have, they're just called 'references' and you can't do arithmetic on them. The problem is that you're thinking that "Room" is a reference type like in Java whereas in C++ it's actually a value type; that is, getnorthroom() etc. should all return Room * and setnorthroom() etc. should all take in Room *.

floWenoL
Oct 23, 2002

ColdPie posted:

I use that all the time and see no reason why it would be considered bad practice given that the first enum is explicitly defined to be 0 (or is that actually part of the standard?). It's pretty obvious what's going on and completely removes the maintenance for keeping track of the number of elements.

On the other hand, it introduces an invalid value (kNumWhatever) that is treated like a valid value.

floWenoL
Oct 23, 2002

Mustach posted:

Why do you use a struct declaration instead of a namespace?

Because not everyone programs in C++.

floWenoL
Oct 23, 2002

Mustach posted:

That makes sense, thanks.
He could only have been talking about C++. In C, declaring an enumeration inside of a struct doesn't offer the same benefits as in C++ — the enum types and constants are all still in the same namespace, as if they were declared outside of the struct.

You're absolutely right; I don't know what I was thinking.

floWenoL
Oct 23, 2002

JoeNotCharles posted:

code:
$ git reset --hard HEAD WebKit.pro
Cannot do partial --hard reset.

I think git checkout is what I want, but what the hell is that error message?

You can do a partial checkin and then a hard reset:

code:
git add -i
<interactively pick which chunks to check in>
git commit
git reset --hard

floWenoL
Oct 23, 2002

What do you guys think of this?

floWenoL
Oct 23, 2002

Avenging Dentist posted:

Also the "no using in .cc files" rule is laffo. That's pretty much the only place you should have using declarations.

The wording is confusing, but if you expand the arrow, it explains that you are allowed to use using in .cc files.

And yes, for anyone who isn't aware, the arrows do something, namely provide more details.

floWenoL
Oct 23, 2002

That Turkey Story posted:

C++ nerd meltdown

Trap loving sprung!

Edit:
Actually kind of backfired because now I want to do a point-by-point rebuttal. :argh:

floWenoL fucked around with this message at 09:53 on Jun 30, 2008

floWenoL
Oct 23, 2002

Zombywuf posted:

Also, exceptions, use them.

Boy howdy it sure must be nice to live in your world where performance doesn't matter!

floWenoL
Oct 23, 2002

That Turkey Story posted:

That was initially what I thought, Flowenol, but if you read their rationale, performance isn't even mentioned:

It isn't? Perhaps I've said too much. :tinfoil:

floWenoL
Oct 23, 2002

Zombywuf posted:

How much of a performance hit are exceptions in modern compilers? Obviously throwing exceptions is a big hit, but does just having the exception handling code there cause much of a slowdown?

It's more of a memory hit I believe, but yes, it's there (mostly from RTTI). Don't believe the C++ 'pay only for what you use' hype.

floWenoL
Oct 23, 2002

That Turkey Story posted:

This is C++. You can pass objects as references, that's how the language works. Using non-const references for parameters which modify an object as opposed to a pointer is the preferred C++ way of having functions which modify arguments as it removes the issue of making it unclear about the handling of null pointers and overall it simplifies syntax. If it is unclear that an argument to a function should take a non-const reference then that is generally a poorly named function and/or the person trying to use it doesn't understand its purpose, which either way is a programmer error. The only time I see people have issues with this are when they come from a C background where references just didn't exist so they just got used to always passing out parameters by pointer. There is a better solution in C++ so use it and stop clinging to an outdated C-style approach.

Just because it's the "preferred C++ way" doesn't mean it's the best way. Your argument as to the clarity of a function taking in a non-const reference equally applies to whether it can take in a NULL pointer argument. If the only time you've seen issues with this is with people from a strong C background (not necessarily a bad thing) then you obviously have not done much maintenance programming; I'd hate to have to grep through mounds of header files just to find where a variable is modified instead of simply searching for where a pointer to said variable is passed in, which would (mostly) suffice with this convention.

That Turkey Story posted:

As for auto_ptr, I agree with you. In particular, there really isn't an alternative to auto_ptr for returning a dynamically allocated object and having the result be properly automatically managed. You simply cannot do this with scoped_ptr. When C++0x comes around this will be a different story, but at the moment, auto_ptr is the ideal standard solution.

Leaving aside the brokenness of having assignment mean 'move' instead of 'copy' sometimes (C++ overloads so many things already, why not poach an operator for this? I suggest "x << y;"), honestly once you remove exception handling that removes maybe 80% of the need for smart pointers, and in the remaining cases auto_ptr is hardly an 'ideal' solution; look at all the people that have tried to write replacements for it, even Alexandrescu!

floWenoL
Oct 23, 2002

ZorbaTHut posted:

I strongly agree with the Google style guide here :v:

I've seen a surprising number of bugs caused by people writing reasonable, sensible equations that break in unexpected horrifying ways when the types involved are unsigned. For example:

I was going to reply to that (and I still plan to address TTS's other points) but I think TTS meant that he was just tired, and he wasn't replying to that specifically?

floWenoL
Oct 23, 2002

TSDK posted:

When talking about failures caused specifically by unchecked integer overflow, using an unsigned int instead of an int doesn't actually improve all that much on your rate of failure in general. You then have to weigh up the relatively small gain in failure rate versus the likelihood that unsigned ints could cause a problem.

In my opinion, the likelihood that an unsigned int would cause a problem relative to a signed int is very very small indeed, so I'll take that failure rate improvement happily and go on my merry unsigned way.

I'd say it's more probable that you'd get bitten by unsigned integer underflow than overflow. You also have to take into account that using unsigned inhibits compiler optimizations and so that 'free' range boost may come at a performance cost.

Really, 'signed' and 'unsigned' are truly horrible names as the differences between the two are more than just their signedness. I guess 'signed-with-undefined-overflow' and 'modulo-some-power-of-2' don't exactly roll off the tongue quite as easily.

quote:

In a related anecdote, I've been bitten before by using video encoder software that just gave up after the first 2Gb of a 3Gb file and blanked the image (but not the sound) for the last 3rd of the movie clip. Clearly someone with an always-int mentality had written the file loader for the image compression part, and someone with an unsigned-int-mentality had written the file loader for the sound compression part :)

That's not a problem with unsigned vs. signed; that's a problem with using 32 bits instead of 64 for file/memory sizes. :v

floWenoL
Oct 23, 2002

That Turkey Story posted:

Wait, what? If anything I can think of cases where unsigned operations can be optimized whereas signed operations cannot, not the other way around. Maybe you know something I don't, but either way, that doesn't change the fact that using an unsigned type may be correct whereas a signed type isn't (or vice versa). Pick your type based on what makes your code more correct.

Overflow behavior for signed integers is undefined, and so that enables the compiler to pretty much assume that signed integers don't overflow and make optimizations appropriately. Consider,

code:
bool foo(int a) {
  return (a + 3) > a;
}
The compiler can assume that a + 3 doesn't overflow and thus can compile the function to simply:

code:
_Z3fooi:
.LFB2:
        movl    $1, %eax
        ret
whereas replacing 'int' with 'unsigned int' would force rollover behavior, and thus the compiler cannot assume that a + 3 > a:

code:
.globl _Z3fooj
        .type   _Z3fooj, @function
_Z3fooj:
.LFB2:
        leal    3(%rdi), %eax
        cmpl    %eax, %edi
        setb    %al
        movzbl  %al, %eax
        ret
Of course, there may be optimizations that work for unsigned only (>=0 tests can be eliminated, etc.) but I'm pretty sure those are far less applicable in most code.

In any case, the recommendation (as I understand it) applies to what to pick as a default signedness. If you need unsigned of course you should use it, but the number of use cases where you really need unsigned (other than bitfields) and not just a bigger int is rare.

floWenoL
Oct 23, 2002

Avenging Dentist posted:

So your argument basically boils down to "but what if I really want to use the wrong tool for this job?"

Yes, iterators are the right tool for the job 100% of the time! :downs:

floWenoL
Oct 23, 2002

That Turkey Story posted:

First, before going into anything at all, I'd recommend using iterators here which as a side-effect avoids the issue of sign entirely, and if for some reason you didn't do that, I'd still say use string::size_type instead of int and just don't write an algorithm which relies on negative values.

It's worth pointing out that using iterators here isn't correct either. For an empty string, begin() == end(), and so you'd be comparing one before the beginning of the array, which isn't necessarily valid (according to the standard). Not to mention the fact that if you used == instead of < (as is common with iterators) that would be an error, too.

floWenoL
Oct 23, 2002

Avenging Dentist posted:

Then show me an example where both iterators are inappropriate and the unsigned-ness of size_t is an issue. :colbert:

It's not that iterators are entirely inappropriate, it's just that sometimes using indices is clearer; recommending "always use iterators" is the C++ equivalent of the (equally fallacious) mantra of "always use pointers". If you have a vector and you know you won't have enough elements to run into size issues, using iterators is unnecessary, verbose, and in fact may introduce bugs due to the fact that you have to repeat the name of the container twice, which is exacerbated by the fact that said verbosity encourages copying and pasting iterator-based for loops so as to avoid having to type out vector<blah blah>::const_iterator or "typename T::const_iterator" (don't forget the typename!) yet again.

floWenoL
Oct 23, 2002

Entheogen posted:

I use Java for bigger projects because it's easier to work with. C++ is good for small programs when you just want to calculate some stuff though.

Yeah, C++ is a toy language that will never be used for large-scale applications or for applications that require performance.

quote:

oh ok, is all the fuss about because many functions take size_t instead of int? i don't see how this could become an issue unless you are dealing with large indecies.

I like how you jumped in the discussion without knowing anything of what's being discussed.

It's spelled "indices", btw.

floWenoL
Oct 23, 2002

more falafel please posted:

You don't even need BOOST_FOREACH for this, just use std::for_each. It'll work for anything that has a T::iterator type (and begin(), end() and operator++ on the iterator), and makes way more sense than using the macro version.

I think I would kill myself if I had to define a new functor every time I wanted to loop with an iterator (or deal with C++'s bastardized version of functional programming).

floWenoL
Oct 23, 2002

Zombywuf posted:

std::swap will usually be optimised to do this internally, i.e. it won't copy the contents of the vectors, just swap the pointers to the internal arrays. It's also better from a readability perspective, i.e. swap(A, B) does what it says.

Just a nitpick, but you mean A.swap(B). I'm not sure if there's a specialized version of swap(A, B) for vectors, but it's worth noting that if there is, with ADL it is not std::swap that will be called (which is good, as std::swap works by copying) unless you write std::swap(A, B).


floWenoL
Oct 23, 2002

Zombywuf posted:

Your code doesn't copy, my point was that std::swap does it in a cleaner way.

Unless someone overrode operator& for vectors somewhere before. :q:
