PrBacterio
Jul 19, 2000

Zakath posted:

Just like today C# compiled this:

code:
private void SomeFunc()
{
    someNumber =+ someOtherNumber;
}
I just couldn't understand why the value of someNumber wasn't increasing...
In some early versions of C this would actually have done exactly what you intended. The C family's now-familiar combined arithmetic assignment operators originally looked just like that, i.e. "=+", "=*" and so on, and they were changed to their current form precisely because that spelling is ambiguous with statements of the kind you just posted :engleft:
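
To see the ambiguity concretely, here's a minimal sketch (variable names made up; modern C shown, where the old spellings now parse as ordinary assignments):

code:
#include <stdio.h>

int main(void)
{
	int x = 10;
	int y = 1;

	x =- y;	/* early C read this as x -= y; modern C parses it as x = -y */
	x =+ y;	/* early C read this as x += y; modern C parses it as x = +y, i.e. x = y */

	printf("%d\n", x);	/* prints 1; under the old operators x would still be 10 */
	return 0;
}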

Null Pointer
May 20, 2004

Oh no!

Zombywuf posted:

To be fair the compiler designer is supposed to have done that for you.

"Any sufficiently smart compiler..." is the start of a lot of questionable ideas, not the least of which is Itanium. A lot of really simple-sounding optimizations turn out to be very difficult or impossible.

tef
May 30, 2004

-> some l-system crap ->
I would argue that the language in question isn't exactly amenable to the sort of static analysis that would aid such an optimisation. A sufficiently smart compiler can only do so much with a language.

I feel obliged to mention chapel here http://chapel.cray.com/ :3:

Opinion Haver
Apr 9, 2007

Null Pointer posted:

"Any sufficiently smart compiler..." is the start of a lot of questionable ideas, not the least of which is Itanium. A lot of really simple-sounding optimizations turn out to be very difficult or impossible.

What exactly happened with Itanium?

HORATIO HORNBLOWER
Sep 21, 2002

no ambition,
no talent,
no chance

Zakath posted:

Just like today C# compiled this:

code:
private void SomeFunc()
{
    someNumber =+ someOtherNumber;
}
I just couldn't understand why the value of someNumber wasn't increasing...

This is also valid C.

code:
int main(void)
{
	int x = 1;
	int y = 2;

	x = +y;

	return x;
}
The unary + operator (essentially a no-op) was included solely for symmetry with unary -, and as I recall dmr (rip) later considered it something of a mistake.
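
(It's not quite a total no-op in modern C, mind you - it still performs the integer promotions, which a quick sketch can show via sizeof:)

code:
#include <stdio.h>

int main(void)
{
	char c = 'a';

	/* unary + applies the integer promotions, so +c has type int */
	printf("%zu %zu\n", sizeof c, sizeof +c);	/* typically prints "1 4" */
	return 0;
}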

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
For some reason, it's been picked up in JavaScript, because there unary + calls ToNumber (which winds up invoking valueOf) and does nothing else. So you'll see stuff like:

Dumb JavaScript Tutorials posted:

You can use +new Date to get the current time in milliseconds!

not realizing that it's just the same as (new Date).valueOf(), or Number(new Date), or something like that.

Oh, and because you can overload it in Python, you can do some fun tricks with it. In fact, my ailment.py is a coding horror in and of itself. Just for some mindless fun.

Null Pointer
May 20, 2004

Oh no!

yaoi prophet posted:

What exactly happened with Itanium?
IA-64 makes the compiler (or assembly programmer) responsible for almost everything, including instruction scheduling and data hazard resolution. The first Itanium chip was delayed for some years, in large part due to the fact that compilers turned out to be harder to write than the HP engineers thought. As far as I know the compilers still aren't very good.

Some day, when the space aliens come and give us their hypothetical future space alien compiler technology, we will all be running Itanium processors.

Bozart
Oct 28, 2006

Give me the finger.

Null Pointer posted:

IA-64 makes the compiler (or assembly programmer) responsible for almost everything, including instruction scheduling and data hazard resolution. The first Itanium chip was delayed for some years, in large part due to the fact that compilers turned out to be harder to write than the HP engineers thought. As far as I know the compilers still aren't very good.

This isn't entirely true, since it used the bizarro idea of branch predication instead of branch prediction to resolve out-of-order hazards. I always figured that the compilers faced a chicken-and-egg problem rather than an issue of difficulty. Then again, I never took a class in compiler design in college, but I did learn computer architecture from a bitter ex-Intel guy who correctly called AMD's resurgence on the back of x86's zombie power.
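
(For the unfamiliar: predication turns a control dependence into a data dependence. A rough C sketch of the two styles, function names made up - IA-64 generalized the second one by letting nearly every instruction carry a predicate:)

code:
/* branchy version: the CPU has to predict which way the if goes */
void clamp_branch(int *a, int n)
{
	for (int i = 0; i < n; i++)
		if (a[i] < 0)
			a[i] = 0;
}

/* predicated style: both "paths" become straight-line code and the
   condition just selects the result (think x86 cmov) */
void clamp_select(int *a, int n)
{
	for (int i = 0; i < n; i++)
		a[i] = (a[i] < 0) ? 0 : a[i];
}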

Zombywuf
Mar 29, 2008

Null Pointer posted:

"Any sufficiently smart compiler..." is the start of a lot of questionable ideas, not the least of which is Itanium. A lot of really simple-sounding optimizations turn out to be very difficult or impossible.

It's not a particularly complex bit of code, and it seems like nearly every single CPU tech enhancement of the last 20 years has been about making this kind of code faster. The horror is that all the enhancements have made it nearly impossible to optimise for.

Blotto Skorzany
Nov 7, 2008

He's a PSoC, loose and runnin'
came the whisper from each lip
And he's here to do some business with
the bad ADC on his chip
bad ADC on his chiiiiip

Zombywuf posted:

It's not a particularly complex bit of code, and it seems like nearly every single CPU tech enhancement of the last 20 years has been about making this kind of code faster. The horror is that all the enhancements have made it nearly impossible to optimise for.

It's not 'all the enhancements' that have done it, really; it's only two that are ruining the party, on a corner case of very specific alignments.

It's not hard to avoid at all if you know the pitfall is there, but knowing the pitfall is there is a doozy :(

Zhentar
Sep 28, 2003

Brilliant Master Genius
I wouldn't exactly call it 'ruining the party' either - the fact that the one-loop version is only three times slower is actually a success of CPU tech enhancement. Going back to the Pentium Pro again, its L1 cache was only 2-way set associative, meaning it could only keep 2 of those 4 arrays in the L1 cache at a time. Every single iteration of the single-loop version would involve multiple L1 cache misses, which I'm betting would lead to far more than a 3x difference between the two versions.
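
(For reference, the loop example under discussion is roughly this - array names assumed from the Stack Overflow post:)

code:
#define N 100000

double a[N], b[N], c[N], d[N];

/* single loop: four streams contend for the same cache sets when the
   arrays happen to land on the same alignment */
void one_loop(void)
{
	for (int i = 0; i < N; i++) {
		a[i] += b[i];
		c[i] += d[i];
	}
}

/* split loops: only two streams live at a time, which even a 2-way
   set associative L1 can keep resident */
void two_loops(void)
{
	for (int i = 0; i < N; i++)
		a[i] += b[i];
	for (int i = 0; i < N; i++)
		c[i] += d[i];
}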

Deus Rex
Mar 5, 2005

markup on the abortion of a website I've inherited is littered with these:

code:
<span style="font-weight: bold">OH THE HUMANITY</span>

Null Pointer
May 20, 2004

Oh no!

Zombywuf posted:

It's not a particularly complex bit of code, and it seems like nearly every single CPU tech enhancement of the last 20 years has been about making this kind of code faster. The horror is that all the enhancements have made it nearly impossible to optimise for.

This specific loop splitting example is not a trivial optimization. For example, synonymous virtual addresses could introduce a true (RAW) dependency between loop iterations, and it is either impractical or impossible to detect this situation. This is obviously a rare counter-example, but it can happen, and as a language implementer you now have a choice to make about how you will allow the user to express this possibility.

For these sorts of optimizations in general, at compile-time you know absolutely nothing about the memory hierarchy of the target machine. There are very many possible configurations and they all have distinct pathological cases which are potentially dependent on input. This means any optimization needs to be chosen at run-time, which effectively means you will need to emit a different branch for every known type of cache.
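
(C's eventual answer to that expressiveness choice is the restrict qualifier - a minimal sketch, function name made up, in which the programmer rather than the compiler asserts that no aliasing is possible:)

code:
/* without restrict the compiler must assume dst and src might overlap,
   creating a possible read-after-write dependency between iterations;
   with restrict the programmer promises they don't, and loop splitting
   or vectorization becomes legal */
void add_arrays(int n, double *restrict dst, const double *restrict src)
{
	for (int i = 0; i < n; i++)
		dst[i] += src[i];
}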

Zombywuf
Mar 29, 2008

Null Pointer posted:

For these sorts of optimizations in general, at compile-time you know absolutely nothing about the memory hierarchy of the target machine. There are very many possible configurations and they all have distinct pathological cases which are potentially dependent on input. This means any optimization needs to be chosen at run-time, which effectively means you will need to emit a different branch for every known type of cache.

Soooo, what you're saying is that optimising for modern hardware is the real horror?

Null Pointer
May 20, 2004

Oh no!

Zombywuf posted:

Soooo, what you're saying is that optimising for modern hardware is the real horror?

No, the real horror is expecting the compiler to solve a design problem.

If you are interested in high performance there are many safe assumptions you can make at the design level: spatial and temporal locality, and sequential access with a short stride, are all good decisions regardless of the specific processor you are using. You can't say nearly as much at the level of bit fiddling. (Edit: try the loop example from the Stack Overflow post using a single array with the values from A, B, C and D interleaved. The compiler and CPU cannot help you when you choose a stupid data structure like four arbitrary arrays.)
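
(Something like this, presumably - names made up; interleaving turns four strided streams into one sequential one:)

code:
#define N 100000

/* one array of records instead of four arbitrary arrays: each iteration
   touches a single contiguous 32-byte record, so the access pattern is
   one sequential stream regardless of cache associativity */
struct row { double a, b, c, d; };
struct row rows[N];

void one_loop_interleaved(void)
{
	for (int i = 0; i < N; i++) {
		rows[i].a += rows[i].b;
		rows[i].c += rows[i].d;
	}
}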

Modern hardware might be difficult to understand, but it really is a lot faster. Keep in mind that the pathological cases are simply the same as not having a cache at all.

Null Pointer fucked around with this message at 22:33 on Dec 20, 2011

Zombywuf
Mar 29, 2008

Null Pointer posted:

Modern hardware might be difficult to understand, but it really is a lot faster. Keep in mind that the pathological cases are simply the same as not having a cache at all.

Faster, but harder to optimise for. Basically I miss the 68K.

blorpy
Jan 5, 2005

It isn't harder to optimize for, because everyone who graduated from a CS program and had a course on computer architecture can reason about it.

Zombywuf
Mar 29, 2008

You say that, but then such people write out-of-place quicksort.
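
(For reference, the in-place version is just the classic partition step - a from-memory sketch using Lomuto partitioning, no scratch arrays anywhere:)

code:
/* in-place quicksort (Lomuto partition): no allocation, good locality */
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

void quicksort(int *a, int lo, int hi)
{
	if (lo >= hi)
		return;

	int pivot = a[hi];
	int i = lo;

	for (int j = lo; j < hi; j++)
		if (a[j] < pivot)
			swap(&a[i++], &a[j]);
	swap(&a[i], &a[hi]);

	quicksort(a, lo, i - 1);
	quicksort(a, i + 1, hi);
}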

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe
My computer architecture class was a sham. I still don't know why I should give a about a cpu cache.

blorpy
Jan 5, 2005

Zombywuf posted:

You say that, but then such people write out-of-place quicksort.
They weren't true Scotsmen.

tef
May 30, 2004

-> some l-system crap ->

MEAT TREAT posted:

My computer architecture class was a sham. I still don't know why I should give a about a cpu cache.

you might say you're cache oblivious

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

Deus Rex posted:

markup on the abortion of a website I've inherited is littered with these:

code:
<span style="font-weight: bold">OH THE HUMANITY</span>

code:
<style>
span[style="font-weight: bold"]{
    font-weight:normal !important;
}
</style>
Problem solved.

Opinion Haver
Apr 9, 2007

Wheany posted:

code:
<style>
span[style="font-weight: bold"]{
    font-weight:normal !important;
}
</style>
Problem solved.

The next time I really want to gently caress with a web dev I'm using this trick on them.

Bozart
Oct 28, 2006

Give me the finger.

Markov Chain Chomp posted:

It isn't harder to optimize for, because everyone who graduated from a CS program and had a course on computer architecture can reason about it.

This is pretty silly, since the compiler is abstracting away the specifics of the architecture and the architecture is abstracting away the implementation of the CPU. The benefit of knowing specific cache edge cases and how the compiler interacts with them is only significant in a few domains that I can think of, and in almost all cases the time spent would be better invested in code improvements that help in all environments or, if that is somehow impossible, in faster hardware. Don't rice out your programs. Also, don't expect big O to be anything more than a guideline - discontinuous performance is practically inevitable. The only horror is that this is a very broad problem produced by the complexities of trying to make faster computers.

Impotence
Nov 8, 2010
Lipstick Apathy

yaoi prophet posted:

The next time I really want to gently caress with a web dev I'm using this trick on them.

Don't forget that certain things carry weight, so if they use multiple CSS files you can also override them, or use comically specific > rules.

blorpy
Jan 5, 2005

Bozart posted:

This is pretty silly, since the compiler is abstracting away the specifics of the architecture and the architecture is abstracting away the implementation of the CPU. The benefit of knowing specific cache edge cases and how the compiler interacts with them is only significant in a few domains that I can think of, and in almost all cases the time spent would be better invested in code improvements that help in all environments or, if that is somehow impossible, in faster hardware. Don't rice out your programs. Also, don't expect big O to be anything more than a guideline - discontinuous performance is practically inevitable. The only horror is that this is a very broad problem produced by the complexities of trying to make faster computers.

A good guideline for debating someone is to talk about things that are relevant to what they're talking about. Zombywuf said that optimizing is hard. I said it isn't. Your argument is that you shouldn't do it or something. Maybe you quoted the wrong person?

evensevenone
May 12, 2001
Glass is a solid.
Sure, any CS student who has taken an architecture class can reason about what optimizations might make sense for a certain processor if they make certain assumptions about what their compiler does, but how many people keep up-to-date on what every new processor does, what every new version of their compiler does with their processor, and are willing to maintain processor-specific code paths that take advantage of their optimizations, and check to make sure their optimizations actually provide a performance benefit, and continue to provide a benefit as things evolve?

Unless you have an incredibly stable platform (i.e. writing for consoles or HPC) it's stupid and contrary to the intended development model for the industry. It means you have more complicated, harder-to-maintain code with increased likelihood of regressions occurring with new compiler features or new processors.

w00tz0r
Aug 10, 2006

I'm just so god damn happy.
"If we use Unicode instead of UTF-8, we don't have to worry about character sizes."

Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!

pigdog posted:

:psyduck:

Reminds me of one of the first BASIC programs I wrote as an 11-year-old. I can't even remember the syntax of the language, but the gist of it was like this:

code:
10 initialize_graphics
20 x = rand(screen_width)
30 y = rand(screen_height)
40 color = rand(number_of_colors)
50 putpixel(x, y, color)
60 GOTO 20
Can you guess what the output was?




It should have "snowed" randomly colored pixels, but in reality the pixels formed clear, diagonal bands of color on the screen, like this: "///". It totally boggled my teacher's mind. :smug:
Linear congruential generators are the real horror. Unfortunately, they're still popular. Dammit, George Marsaglia is smarter than you are, use his methods!
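
(A sketch of the genre, using RANDU's real constants - successive outputs are correlated enough that plots show structure instead of snow:)

code:
#include <stdio.h>

/* RANDU, the classic bad LCG: x' = 65539 * x mod 2^31 (seed must be odd).
   Its consecutive triples famously fall on just 15 planes in 3-D, and
   plots of its output show structure instead of noise. */
static unsigned seed = 1;

static unsigned bad_rand(void)
{
	seed = (seed * 65539u) & 0x7fffffffu;
	return seed;
}

int main(void)
{
	for (int i = 0; i < 10; i++) {
		unsigned x = bad_rand() % 80;	/* fake screen coordinates */
		unsigned y = bad_rand() % 25;
		printf("%u %u\n", x, y);
	}
	return 0;
}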

Fake edit: Dammit, he's dead? Now it's only a matter of time before Knuth goes, too. :<

The Gripper
Sep 14, 2004
i am winner

Zemyla posted:

Linear congruential generators are the real horror. Unfortunately, they're still popular. Dammit, George Marsaglia is smarter than you are, use his methods!

Fake edit: Dammit, he's dead? Now it's only a matter of time before Knuth goes, too. :<
I always liked this DNS cache poisoning write-up for how it visualizes the issues some RNG systems have had (and still have). About halfway through they plot the results of calls to the implementation's random() (for transaction IDs), and the results are really simple patterns.

It also goes into some detail about how implementations like that introduce vulnerabilities, the first example being that the next random number (in the system they were demonstrating) was predictable if you knew only 3 previously generated numbers.
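
(The general shape of the attack, as a toy sketch - the constants here are the classic ANSI C example ones, not whatever the write-up's actual target used; the point is that once the algorithm and constants are known, one observed ID predicts all the rest:)

code:
#include <stdio.h>

/* the classic ANSI C example LCG; the constants are public knowledge */
static unsigned next(unsigned x)
{
	return (x * 1103515245u + 12345u) & 0x7fffffffu;
}

int main(void)
{
	unsigned observed = 42424242u;	/* one "random" transaction ID we sniffed */
	unsigned x = observed;

	/* the attacker can now replay the generator and predict every future ID */
	for (int i = 1; i <= 3; i++) {
		x = next(x);
		printf("predicted ID %d: %u\n", i, x);
	}
	return 0;
}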

It's a good brief read guys, read it!

The Gripper fucked around with this message at 21:45 on Dec 21, 2011

Bozart
Oct 28, 2006

Give me the finger.

Markov Chain Chomp posted:

A good guideline for debating someone is to talk about things that are relevant to what they're talking about. Zombywuf said that optimizing is hard. I said it isn't. Your argument is that you shouldn't do it or something. Maybe you quoted the wrong person?

You basically got what I was saying, "you shouldn't do it [almost all of the time]" and thought that wasn't relevant to what you were saying?

It isn't easy just because "everyone who graduated from a CS program and had a course on computer architecture can reason about it." You don't have to reason about it at all: if you are trying to optimize at that level you are almost always doing something wrong, because it is specific to the hardware you are running on.

blorpy
Jan 5, 2005

.

blorpy fucked around with this message at 01:25 on Dec 22, 2011

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

w00tz0r posted:

"If we use Unicode instead of UTF-8, we don't have to worry about character sizes."

This is true, if by "Unicode" they mean UTF-32. And if you ignore combining characters.
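
(The combining-character caveat, sketched - both spellings below render as "é", so fixed-width UTF-32 buys you a fixed size per code point, not per character the user sees:)

code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t precomposed[] = { 0x00E9 };		/* U+00E9 LATIN SMALL LETTER E WITH ACUTE */
	uint32_t decomposed[]  = { 0x0065, 0x0301 };	/* 'e' + U+0301 COMBINING ACUTE ACCENT */

	printf("%zu vs %zu code points for the same glyph\n",
	       sizeof precomposed / sizeof precomposed[0],
	       sizeof decomposed / sizeof decomposed[0]);
	return 0;
}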

Hammerite fucked around with this message at 13:54 on Dec 22, 2011

Dicky B
Mar 23, 2004

code:
switch (pname)
{
	case GL_TEXTURE_ENV_MODE:	if (p==GL_COMBINE)
								{
									bExtensionUsed = GL_TRUE;
								}
								break;

	case GL_COMBINE_RGB:
	case GL_COMBINE_ALPHA:		switch(p)
								{
									case GL_INTERPOLATE:    if (!gles::limits::inst().bOrthogonalTexEnvCombine)
															{
																static bool warning_displayed = false;
																if (!warning_displayed)
																{
																	logger << "implementation has limited support for GL_INTERPOLATE, which application will fail to represent.\n";
																	warning_displayed = true;
																}
															}
									case GL_REPLACE:
									case GL_MODULATE:
									case GL_ADD:
									case GL_ADD_SIGNED:
									case GL_SUBTRACT:
									case GL_DOT3_RGB:
									case GL_DOT3_RGBA:		bExtensionUsed = GL_TRUE; break;
									default:				gles::error::set(GL_INVALID_ENUM);
															return;
								}
								break;

	case host::GL::SOURCE0_RGB:
	case host::GL::SOURCE1_RGB:
	case host::GL::SOURCE2_RGB:
	case host::GL::SOURCE0_ALPHA:
	case host::GL::SOURCE1_ALPHA:
	case host::GL::SOURCE2_ALPHA:		switch(p)
								{
									case GL_TEXTURE:
									case GL_CONSTANT:
									case GL_PRIMARY_COLOR:
									case GL_PREVIOUS:       bExtensionUsed = GL_TRUE; break;
									default:				gles::error::set(GL_INVALID_ENUM);
															return;
								}
								break;

	case GL_OPERAND0_RGB:
	case GL_OPERAND1_RGB:
	case GL_OPERAND2_RGB:		switch(p)
								{
									case GL_SRC_COLOR:
									case GL_ONE_MINUS_SRC_COLOR:
									case GL_SRC_ALPHA:
									case GL_ONE_MINUS_SRC_ALPHA:	bExtensionUsed = GL_TRUE; break;
									default:						gles::error::set(GL_INVALID_ENUM);
																	return;
								}
								break;

	case GL_OPERAND0_ALPHA:
	case GL_OPERAND1_ALPHA:
	case GL_OPERAND2_ALPHA:		switch(p)
								{
									case GL_SRC_ALPHA:
									case GL_ONE_MINUS_SRC_ALPHA:	bExtensionUsed = GL_TRUE; break;
									default:						gles::error::set(GL_INVALID_ENUM);
																	return;
								}
								break;

	case GL_RGB_SCALE:
	case GL_ALPHA_SCALE:		bExtensionUsed = GL_TRUE;
								if (params[0]!=1.0f && params[0]!=2.0f && params[0]!=4.0f)
								{
									gles::error::set(GL_INVALID_ENUM);
									return;
								}
								break;
}

Hughlander
May 11, 2005

Hammerite posted:

This is true, if by "Unicode" they mean UTF-32. And if you ignore combining characters.

For the low low price of having to worry about endian issues!

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!

Dicky B posted:

code:
switch (pname)
{
	case GL_TEXTURE_ENV_MODE:	if (p==GL_COMBINE)
								{
									bExtensionUsed = GL_TRUE;
								}
								break;



Is the horror the indentation? I can't really get past that part to even look at the code.

Dicky B
Mar 23, 2004

The code by itself is pretty horrible but the indentation is what made me laugh when I saw it.

This entire codebase is pretty wacko

code:
template <class ApiType, typename R,
          typename A1 = nil, typename A2 = nil, typename A3 = nil,
          typename A4 = nil, typename A5 = nil, typename A6 = nil,
          typename A7 = nil, typename A8 = nil, typename A9 = nil,
          typename A10 = nil,typename A11 = nil>
class Functor :
    public FunctorBase<ApiType, R(APIENTRY*)(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11)>
{
    private:
        typedef FunctorBase<ApiType, R(APIENTRY*)(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11)> BaseName;
        bool m_supported_within_context;
                                            
    public:
        typedef R(APIENTRY* proc_t)(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11);


        Functor ( const char* proc_name
                  , const char* alt1 = 0
                  , const char* alt2 = 0
                  ) :
            FunctorBase<ApiType, R(APIENTRY*)(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11)>
                (proc_name, alt1, alt2), m_supported_within_context(true)
        {}

        void disactivate_call()
        {
            m_supported_within_context = false;
                                                   
            std::string nps = "WARNING: Your Hardware is not capable to run " + BaseName::proc_names[0] + "\n";
            myoutput(nps.c_str());
        }

        R operator()(A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6, A7 a7, A8 a8, A9 a9, A10 a10, A11 a11)
        {
            R ret = 0;
            if(!m_supported_within_context) {
                std::string nps = "ERROR: Your Hardware is not capable to run " + BaseName::proc_names[0] + "needed to emulate you App! \n";
                myoutput(nps.c_str());
                return ret;
            }
            this->pre_call();
            if (this->proc) ret = this->proc(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11);
            this->post_call();
            return ret;

        }
};

PalmTreeFun
Apr 25, 2010

*toot*

Dicky B posted:

The code by itself is pretty horrible but the indentation is what made me laugh when I saw it.

This entire codebase is pretty wacko

Oh my god this poo poo makes me want to barf looking at it.

feedmegin
Jul 30, 2008

Null Pointer posted:

IA-64 makes the compiler (or assembly programmer) responsible for almost everything, including instruction scheduling and data hazard resolution. The first Itanium chip was delayed for some years, in large part due to the fact that compilers turned out to be harder to write than the HP engineers thought. As far as I know the compilers still aren't very good.

Not really. In theory the idea was that the Itanium chips could be very simple (and thus very highly clocked) because of this philosophy, but the instruction set architecture doesn't really reflect that. It's actually pretty complex, partly because it was a design by committee between Intel and HP (the latter wanting to provide good backwards compatibility with PA-RISC, their previous instruction set architecture). Bad compilers could have been forgiven a lot if the chips were like twice as fast as their competitors, but they weren't.

Scaevolus
Apr 16, 2007

yaoi prophet posted:

What exactly happened with Itanium?

Over-optimistic sales forecasts. :v:
