Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

JawnV6 posted:

With the furious attention to power management, I'm surprised enthusiasts are still able to wring correct performance out of overclocks. Intel's very good at characterizing speed paths and binning. Overclocking is built on the assumption that they screwed up one of those. Or, as I suspect, the speed path that put a part into a lower bin isn't hit by the OC stress test.
Overclocking isn't about assuming that Intel's validation or binning isn't good; it's about being willing to run outside of nominal specifications or within additional constraints. If Intel says a CPU will do 3.5GHz at 1.20V at 105°C on a lovely motherboard with four DIMMs, what will it do at 1.25V on a good motherboard with two DIMMs? I'm certainly not complaining about the reduced overclocking headroom that comes from making chips more energy-efficient, but overclocking has sucked on the last two generations purely because Intel was lazy about IHS mounting.

How stable and well-validated an overclock is depends on the goals of the overclocker. There are overclockers who don't care about stability and reliability and just toss a 4670K in their Gigabyte motherboard, turn up Load-Line Calibration, then jack up the clocks until they can't make it through a round of CoD. Someone who knows what they're doing feeds the CPU stable and safe power, keeps temperatures in check, and does extensive testing with a variety of intense workloads. Most importantly, it's a deliberate decision whether you want an overclock that is stable or one that is merely able to complete a benchmark.
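(Roughly what that second kind of overclocker's routine looks like, as a minimal sketch rather than anyone's actual validation procedure: load every core and watch temperatures. It assumes Python with the psutil package on a Linux box that exposes its sensors, and the 90°C ceiling is made up. A real run would rotate through a variety of heavy workloads and run for hours, not a minute.)

code:

import multiprocessing as mp
import time

import psutil  # assumed available; sensors_temperatures() is Linux-only


def burn(_):
    """Busy-loop forever to keep one core loaded with integer work."""
    x = 0
    while True:
        x = (x * 31 + 7) % 1000003


def max_core_temp():
    """Highest reported CPU temperature, or None if no sensors are exposed."""
    temps = psutil.sensors_temperatures()
    readings = [t.current for entries in temps.values() for t in entries]
    return max(readings) if readings else None


if __name__ == "__main__":
    TEMP_LIMIT_C = 90                                # made-up ceiling; pick your own comfort level
    workers = mp.Pool(mp.cpu_count())
    workers.map_async(burn, range(mp.cpu_count()))   # one spinning task per core
    try:
        for _ in range(60):                          # sixty one-second checks
            t = max_core_temp()
            print("max core temp:", t)
            if t is not None and t > TEMP_LIMIT_C:
                print("over the limit - back the overclock off")
                break
            time.sleep(1)
    finally:
        workers.terminate()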

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Alereon posted:

I'm certainly not complaining about the reduced overclocking headroom that comes from making chips more energy-efficient, but overclocking has sucked on the last two generations purely because Intel was lazy about IHS mounting.

I really disagree with this, fundamentally... I don't think we're able to properly, intuitively appreciate the differences between older, 2D transistor lithography on the 32nm and up processes and the modern tri-gate transistors on 22nm and upcoming sub-22nm processes. And I don't think that overclockers even really try to get into an understanding of the logic of the CPUs to understand how to try to validate them at home. Asus makes a lot of money off of their enthusiast boards (even in non-overclocker configurations), so they have a vested interest in making a general assessment of the overclocking range of a given chip, but it says nothing about real, vital, long-term stability or the kinds of low level damage that definitely occur over time.

I just don't trust overclocking anymore, and am perfectly fine with the trajectory away from it (of necessity, and the right move) towards efficiency. I think that with Haswell in particular we don't have the proper tools to assess stability... And that our definitions of stability are poorly framed as a result. Passing a bunch of tests that don't get to the reason that it wasn't binned higher from the factory probably doesn't mean much. The lack of bluescreens only means that whatever low-level instability there is, isn't well exposed and may not manifest for some time. Yeah, I found that out the hard way with my 2600K, but at least that was the culmination of a very long time using the 2D lithography and larger transistors which, I suspect, were inherently less vulnerable to the heat-->resistance-->longevity issue that contributes to architectural degradation over time and thus instability.

I understand and can sympathize with the argument that high end chips aimed at overclockers should not disadvantage them from the outset, and that the IHS issue is a real gently caress-up on their part in that regard, since it presents an artificial limitation strictly due to a heat-transfer bottleneck that sits entirely underneath the IHS. But I don't think that gets to the larger question of why overclockers overclock, or how they expect or hope the future will go. The end result of engineering for efficiency just doesn't leave a space for chips to be run out of specification, because specifications will be tighter and tighter in the (true!) name of efficiency, and thus few if any avenues for wringing more out of the chips by ignoring other limitations. I don't think those limitations will be what they used to be.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Agreed posted:

My opinion. Really Good Overclocking pretty much ended with the 2600K

If we're going for price/overclock, the 2500K is king; otherwise we could just say 2700K. I imagine most 2700K samples clock as high as, if not higher than, 2600Ks. Can't say for sure, though.

JawnV6
Jul 4, 2004

So hot ...

Alereon posted:

Someone who knows what they're doing feeds the CPU stable and safe power, keeps temperatures in check, and does extensive testing with a variety of intense workloads.

This person is closer to the CoD kid than Intel Validation. By orders of magnitude.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HalloKitty posted:

If we're going for price/overclock, the 2500K is king; otherwise we could just say 2700K. I imagine most 2700K samples clock as high as, if not higher than, 2600Ks. Can't say for sure, though.

It's academic, but the fact that you're making the distinction shows that even among enthusiasts there's an understanding of purpose-built products. I bought my 2600K before 2700Ks were even a thing, and it ran 4.7GHz as stable as I could possibly determine from June 2011 to around the middle of this year. In retrospect, I am fairly certain that there were low level instabilities all along, but not significant enough to manifest as anything that would trip up the imperfect tools we have for attempting, in our blundering way (by comparison to Intel), to validate a given clock well outside of specifications, and not significant enough to bluescreen - but who knows what the unseen cost to rarely-used performance was when it comes to all of the usage scenarios of the sophisticated logic of such small, vulnerable transistors over such a span?

I put much more faith in Intel or AMD to validate their products for a given clockrate than I do in us. Or in Asus' engineering staff, not because they're incompetent (far from it!) but because they have a real dog in the fight of getting people to keep overclocking, whether or not it's actually a good idea from a cost:benefit standpoint, and whether or not we can intuitively evaluate risk over the medium term, i.e. what we consider to be a system's useful life.

I'm just saying, I have reasons that I'll not mourn overclocking when it goes away in the name of efficiency - and I will appreciate the change from all the questionable, specifications-be-damned poo poo we do to these processors to get them to run faster than we actually need them to almost all the time, to instead appreciating guaranteed good performance (barring the expected industrial-yield product malfunctions). It'll be a change in the overall ecosystem of building computers, most of all for those of us who call ourselves enthusiasts and bolt multi-pound contraptions to the socket to adequately dissipate the heat that results from the out-of-spec power we burn through these things, but I don't think it will be a change for the worse.

It'll also be nice to see LLC just go away completely. I wonder how much instability that particular decision has caused, given that it's very specifically and intentionally loving with the power delivery behavior Intel designed the processors for. All in the name of a little more performance... a little more value. Fundamentally, that's what I doubt going forward: the value proposition of overclocking.

gary oldmans diary
Sep 26, 2005
I was interested in seeing low-power desktop boards with the soldered R series chips, once upon a time. Man, did I pick the wrong horse.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Agreed posted:

I really disagree with this, fundamentally... I don't think we're able to properly, intuitively appreciate the differences between older, 2D transistor lithography on the 32nm and up processes and the modern tri-gate transistors on 22nm and upcoming sub-22nm processes. And I don't think that overclockers even really try to get into an understanding of the logic of the CPUs to understand how to try to validate them at home.
This is only relevant to extreme overclocking though, not to the vast majority, who don't want to push ridiculous currents through their chips to squeeze out the last 100-200MHz. My problem with Haswell overclocking is the IHS issue, simply because I'd be happy with what I would get (and what others have got) after delidding and properly remounting. The frustration is that my already reasonably set expectations are being missed.

quote:

I just don't trust overclocking anymore, and am perfectly fine with the trajectory away from it (of necessity, and the right move) towards efficiency. I think that with Haswell in particular we don't have the proper tools to assess stability... And that our definitions of stability are poorly framed as a result. Passing a bunch of tests that don't get to the reason that it wasn't binned higher from the factory probably doesn't mean much. The lack of bluescreens only means that whatever low-level instability there is, isn't well exposed and may not manifest for some time. Yeah, I found that out the hard way with my 2600K, but at least that was the culmination of a very long time using the 2D lithography and larger transistors which, I suspect, were inherently less vulnerable to the heat-->resistance-->longevity issue that contributes to architectural degradation over time and thus instability.
I feel like you're letting the choices you made color your perceptions of overclocking in general. You could have just as easily backed off a bit on clockspeeds and voltages and had a very different experience. Nobody reasonable has ever believed that any stable overclock is safe; if you want to make risky choices with temperatures or voltages to achieve high clockspeeds, that is a personal decision. Part of the problem is similar to gambling: it's easy to get carried away. Before I start overclocking, I make conservative decisions about which voltages I consider safe, and I don't exceed them. It really comes down to whether you want a safe and stable overclock or you want the last few hundred MHz. Safe and stable may not mean "as far as you can, then back off 5%"; to me it primarily involves looking at the "elbow" of the voltage/power graphs. Then again, during my formative years I was overclocking to desperately keep old hardware usable that I couldn't afford to replace, so I always had to be super careful not to break it; maybe that's why I'm more cautious than most.
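(For what it's worth, the "elbow" doesn't have to be eyeballed off a graph; it's easy to locate numerically. A minimal sketch with made-up voltage/power measurements follows; the heuristic here is just "the sample farthest from the straight line joining the endpoints," not any standard overclocking tool.)

code:

# Hypothetical sketch: find the "elbow" of a voltage-vs-power curve.
# The measurements below are invented for illustration.
def find_elbow(voltages, powers):
    """Index of the sample farthest from the chord joining the first
    and last points -- a simple knee-finding heuristic."""
    x0, y0 = voltages[0], powers[0]
    x1, y1 = voltages[-1], powers[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    best_i, best_d = 0, -1.0
    for i, (x, y) in enumerate(zip(voltages, powers)):
        d = abs(dy * (x - x0) - dx * (y - y0)) / norm  # distance to the chord
        if d > best_d:
            best_i, best_d = i, d
    return best_i

vcore = [1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.35]   # tested Vcore settings
watts = [62, 68, 75, 84, 97, 115, 140]               # measured package power (W)

i = find_elbow(vcore, watts)
print(f"elbow near {vcore[i]:.2f} V ({watts[i]} W); past this, power climbs steeply")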

I think the idea that overclocking has to die for efficiency is a bit overblown. Yes, of course overclocking headroom has to shrink; that's a fine trade for me. Anything the processor can do can be functionally tested, and while you'll never achieve the confidence of Intel's validation, you're not overclocking the processor in a medical device, so if it crashes during a Dota game just turn it down 100MHz. Hell, I'd just switch to the enthusiast platform if it wasn't a goddamned generation behind...

Hogburto posted:

I was interested in seeing low-power desktop boards with the soldered R series chips, once upon a time. Man, did I pick the wrong horse.
Alternatively, you picked the obviously correct horse and so did everyone else, so you didn't win anything :)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Alereon posted:

This is only relevant to extreme overclocking though, not to the vast majority, who don't want to push ridiculous currents through their chips to squeeze out the last 100-200MHz. My problem with Haswell overclocking is the IHS issue, simply because I'd be happy with what I would get (and what others have got) after delidding and properly remounting. The frustration is that my already reasonably set expectations are being missed.

I feel like you're letting the choices you made color your perceptions of overclocking in general. [:words: on your own overclocking procedure goes here]

1. Your definition of moderate overclocking here can't be said to genuinely shift the goalpoasts, but I'd almost say it's an accidental effort to do so - they just weren't established in the first place. I know we're discussing in good faith, don't think I'm suggesting otherwise. But the idea of "moderate" overclocking occupies a temporary gap that neither matches the previous capabilities of overclocking nor differs substantially from the trajectory I've noted I'm personally satisfied with, man. Because...

2. Every 100-200MHz gained through whatever method of validation you personally consider to be stable is just as much "free performance" over a non-overclocked configuration as the 100-200MHz before it. As that shrinks (and unless there is some dramatic shift away from engineering for efficiency that suddenly introduces a lot of untapped overhead, it will continue to shrink), the difference will be even less practical and more "well, what the heck, why not." Which is just as easily "what the heck, why?"

As for me, come on, it's me, do you think I won't continue to overclock while it's available? I feel like the whole bit about how you overclock is a bit of a non-sequitur; what performance I was able to get, I got within Intel's voltage and temperature specifications. There are simply consequences to overclocking, and two years of running fast was good while it lasted but reduced the useful life of the chip at that speed. I've backed down to 4.5GHz and it's back to "stable" (that word means so much and it means so little in this context). The only thing informed by my experiences is that I don't intend to be aiming high or pulling any stunts like delidding to grab some nebulous performance crown (with the Haswell build that I've already got all the parts for to replace this computer).

I'm still playing the "free" performance game, I'll just be glad when it's over, and as far as the future of overclocking, I feel like I can see the writing on the wall pretty clearly. Limited time only. Enjoy it while it lasts and try not to cook your chips.

Josh Lyman
May 24, 2009


Agreed posted:

The enthusiast sector could use a break anyway, I'm hoping that engineering toward efficiency kills overclocking dead for good at some point in the near future because there just isn't any headroom to be had on any variation of current processes. It's too much hassle for too little reward, and they provide solutions that are appropriate to various goals at understandable price points. If overclockers get left out in the cold, and I count, I've got a shitload of 200mm case fans and a big, three-fan NH-D14 and all that jazz, fine, gently caress it, I'll come inside where it's warm and stop wasting money to wrestle performance out of parts that only have the overhead sometimes and be happier to buy fully warrantied and validated chips with higher core counts, etc., for future projects that call for more processing power than can be supplied by a stock configuration.
I don't think that's the tradeoff we're facing. The choice seems to be a 3.4GHz part that might run at 4GHz vs. a part that just runs at 3.4GHz.

I will say, however, that my 3570K + 16GB + SSD config doesn't really leave me wanting on the general use front. User experience improvements, as they have been for the last half decade, come almost entirely from the GPU and sweet, sweet 27" monitors.

Josh Lyman fucked around with this message at 10:16 on Oct 19, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Josh Lyman posted:

I don't think that's the tradeoff we're facing. The choice seems to be a 3.4GHz part that might run at 4GHz vs. a part that just runs at 3.4GHz.

I will say, however, that my 3570K + 16GB + SSD config doesn't really leave me wanting on the general use front. User experience improvements, as they have been for the last half decade, come almost entirely from the GPU and sweet, sweet 27" monitors.

I don't think it's really a choice, though, is the thing. This is the correct, smart, necessary way to continue building processors for a range of tasks; I'd like to see the numbers because I suspect that overclocking enthusiasts are a rounding error away from being completely irrelevant compared to everyone else (consumer and enterprise). So it's more about getting comfy with the future than it is about voting on a thing. I remember when we had something very similar to this discussion when the Anandtech podcast hosted an overclocking guy and many of our resident ASIC EEs weighed in...

The only disagreement seems to be on whether we should be upset about overclocking eventually going away, not whether or not it will. Which, truth be told, seems fairly rhetorical?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Josh Lyman posted:

I don't think that's the tradeoff we're facing. The choice seems to be a 3.4GHz part that might run at 4GHz vs. a part that just runs at 3.4GHz.

I will say, however, that my 3570K + 16GB + SSD config doesn't really leave me wanting on the general use front. User experience improvements, as they have been for the last half decade, come almost entirely from the GPU and sweet, sweet 27" monitors.

I disagree to some extent; I think the biggest improvement in experience in the last 5 years has been the wider use of SSDs, by a long, long way.

I'm still running a 6970 (unlocked 6950) and I can't find a good reason to upgrade, even though it's a great time to piss away money on graphics cards, and my main monitor is a U2410, which is probably a four year old model now.

Josh Lyman
May 24, 2009


HalloKitty posted:

I disagree to some extent; I think the biggest improvement in experience in the last 5 years has been the wider use of SSDs, by a long, long way.

I'm still running a 6970 (unlocked 6950) and I can't find a good reason to upgrade, even though it's a great time to piss away money on graphics cards, and my main monitor is a U2410, which is probably a four year old model now.
I guess I meant that AFTER the SSD, improvements come from the GPU. Once you switch from HDD to SSD, there isn't much improvement left on that front, but you can always upgrade your video card.

But yeah, my SSD desktop made my HDD laptop unbearable to use.

Ham Sandwiches
Jul 7, 2000

Agreed posted:

I put much more faith in Intel or AMD to validate their products for a given clockrate than I do in us. Or in Asus' engineering staff, not because they're incompetent (far from it!) but because they have a real dog in the fight of getting people to keep overclocking, whether or not it's actually a good idea from a cost:benefit standpoint, and whether or not we can intuitively evaluate risk over the medium term, i.e. what we consider to be a system's useful life.

I'm just saying, I have reasons that I'll not mourn overclocking when it goes away in the name of efficiency - and I will appreciate the change from all the questionable, specifications-be-damned poo poo we do to these processors to get them to run faster than we actually need them to almost all the time, to instead appreciating guaranteed good performance (barring the expected industrial-yield product malfunctions). It'll be a change in the overall ecosystem of building computers, most of all for those of us who call ourselves enthusiasts and bolt multi-pound contraptions to the socket to adequately dissipate the heat that results from the out-of-spec power we burn through these things, but I don't think it will be a change for the worse.

I probably don't have the CPU-specific chops to debate what you're saying on a technical level. I just wanted to offer some additional perspective:

I think you make excellent points about the technical aspects of overclocking. If I can paraphrase the main point I get out of this, I take it to mean basically: "We don't have the tools, time, or expertise to properly validate overclock stability; the manufacturer is in a better position to maximize efficiency and stability. With the new power-oriented design of chips the two are related, so the days of free overclocking gains are gone - it will just lead to reduced stability with marginal gains at best."

Yes, I believe that's a true statement if maximum performance and efficiency are the vendor's goals when making the design. From what I've read, even with Haswell, this wasn't really the case. Oftentimes when companies are delivering incremental improvements to their products, they have a sense both of the maximum amount they could improve the product and of the amount they actually want to improve it. In most companies, there are lots of competing internal pressures in terms of market-specific issues, profit margins, desired product positioning, manufacturing concerns, as well as long-term or strategic considerations.

So let's use a recent example, and ignore the question of whether Intel intentionally used crappy TIM to connect the IHS in Ivy Bridge; let's just examine the choice they faced once they discovered the issue. What reason does (or did) Intel have to use a better TIM in Haswell? Wouldn't that mean you could buy their cheaper chips and have an easier time overclocking them to performance similar to their more expensive chips? Whether you feel that was a factor in the decision or not, it seems to me there was an incentive to leave the current TIM alone: it maintains their preferred relationship between CPU price and performance, and there was little incentive to do otherwise.

I also feel that stability is something that enthusiasts can treat as a bit of a luxury, if they're after performance. If you have a use case that requires Intel validated levels of stability, sure, don't overclock. A medical equipment company probably needs their gear to work properly, but I personally don't care if my computer doesn't exhibit any instability that I notice. I think this is a bit of a 'if there's instability happening on your computer, and it doesn't impact you, who cares' way of looking at it, but that's how it strikes me. If I really really care about my FPS in a CPU bound game, and I get 10-20% more with no crashes in that game by overclocking my CPU - that seems like a gain to me.

Maybe I'm wrong, and these chips are simply as fast and as power-efficient as they could possibly be. And maybe they're becoming so complex and delicate that only the manufacturer can validate them at a given frequency. If that's the case then sure - what's the point of overclocking? However, as long as companies have many competing interests internally, and are ultimately about using an entrenched position to maximize profits and shareholder value, I think the potential for overclocking to be a useful solution to (created?) performance problems still exists. Again, maybe the analogy doesn't hold, but I think there's plenty of precedent for a company holding back a technology or evolution because it wasn't necessary - Nvidia delaying a product launch because ATI was behind on their new design, etc. In a market-driven world, I think business concerns take precedence over technical performance, and that ultimately opens the door for overclocking to undo certain business-driven decisions.

Finally, the last two CPU generations seem to be slowing down in terms of performance gains, and that performance is still not enough to satisfy many use cases. I would agree with the "why bother" take on overclocking if the performance were there - but if people still need more performance than they can purchase off the shelf for certain use cases, I think that will tend to drive them towards overclocking. I can't buy a chip from Intel that costs 2x the price of a 4770 and offers any meaningful improvement in single-threaded performance, so what are my options?

Zhentar
Sep 28, 2003

Brilliant Master Genius

JawnV6 posted:

With the furious attention to power management, I'm surprised enthusiasts are still able to wring correct performance out of overclocks. Intel's very good at characterizing speed paths and binning. Overclocking is built on the assumption that they screwed up one of those. Or, as I suspect, the speed path that put a part into a lower bin isn't hit by the OC stress test.

Intel characterizes speed paths at a certain TDP. A good part of what overclockers get out of chips is just accepting an extra 50 or 100 watts of power dissipation.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
At the same time, we don't really *validate* the chip there. The best we can do is give it the worst we have and hope it doesn't crash, but it's not anything as rigorous as validation. Overclocking stability testing is to validation as pre-CAD rocket motor design is to post-CAD - instead of knowing exactly what we're doing and why, we're just stumbling around trying different things until the problems are manageable.

P.N.T.M.
Jan 14, 2006

tiny dinosaurs
Fun Shoe
Question:

Sandy/Ivy saw gains in power more than efficiency.

Haswell brought gains in efficiency, more so than power.


I read somewhere (maybe this thread), that the "core ix" series is an evolution of the Pentium III architecture. If that's true, is the recent power ceiling a result of the architecture reaching its upper limit of power optimization?

Is there any recourse? Or is the architecture in fact the pinnacle of Intel's research into consumer chips?

Guitarchitect
Nov 8, 2003

I guess this is the right place to ask this?

Pre-amble:
I currently have a Core i5-760 (LGA 1156) on a Micro ATX motherboard. I'm sick of my computer tower because it's ugly and it's bigger than it needs to be (17" tall) and I did not have the foresight to get a Mini-ITX motherboard. Getting a new case is turning into a frickin' system re-build now that I find out a 1156 board is hard to find new, and nothing 1155/1150 will work with my 1156 chip. Ugh.

Question:
Are the newest i5s worth getting, or is a current i3 almost equivalent to my i5-760? And if you know anything about motherboards - am I right to assume that since LGA 1156 is discontinued, it's not worth attempting to find an LGA 1156 board?

I use my system mainly for media and heavy photo editing (I'm a photographer), so I'm not at all concerned with making this a powerhouse gaming rig or anything.

craig588
Nov 19, 2005

by Nyc_Tattoo

P.N.T.M. posted:

I read somewhere (maybe this thread), that the "core ix" series is an evolution of the Pentium III architecture.

That was the original Core series sometime around 2005-2006, before they added the "ix" part to the Core brand. They were kind of like an extremely highly clocked Pentium 3 with a lot of cache, expanded instruction sets, and multiple cores.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

P.N.T.M. posted:

Question:

Sandy/Ivy saw gains in power more than efficiency.

Haswell brought gains in efficiency, more so than power.

Actually, Sandy was a significant jump in both power and efficiency. Ivy was less of a jump on power than Haswell and was focused on efficiency. Haswell falls somewhere in between the Westmere -> Sandy and Sandy -> Ivy jumps.

quote:

I read somewhere (maybe this thread), that the "core ix" series is an evolution of the Pentium III architecture. If that's true, is the recent power ceiling a result of the architecture reaching its upper limit of power optimization?

It is true, going way, way back. The Core i series (which started with Nehalem) comes from Core 2, which came from Core, which came from Pentium M, which came from the Pentium III Tualatin, the mobile-focused laptop CPU. And so on back and back, but Tualatin was as far back as Intel reached past the Pentium 4 to make a mobile chip.

The performance "ceiling" is a misleading term - at stock clocks, Haswell is unambiguously faster than Ivy Bridge and is the fastest x86 microarchitecture ever brought to market. You're talking about the overclocked performance, though, and THAT is a combination of a few factors: 1) the scaling of heat generation with frequency, 2) the limits of parallelization in general workloads, and 3) the focus on efficiency that allows Haswell to scale down to single-digit watts.

For clarity: what you call "power" I'm going to call "performance." "Power" will refer to electrical power.

P = C · V² · f

The power dissipation of a CPU is equal to its capacitance times the square of its voltage times its frequency. Leaving aside capacitance for a moment (which the end user does not have much of a hand in varying), frequency enters the equation linearly, but the voltage required to reach higher clocks enters as a square, so increasing performance by ramping up clock speed sends power dissipation through the roof. Around the end of the Pentium 4 era, Intel realized that power and the related cooling requirements made it infeasible to continue increasing performance through clock frequency alone. And it wasn't just heat - anyone can cool a couple hundred watts in something the size of a desktop computer - it was heat density. You can't get rid of that heat effectively when it's being generated by a device the size of a dime; it's just too much, too fast, for economical materials science to move it away.
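(To put toy numbers on that: the clocks and voltages below are invented but ballpark-typical for a stock-versus-overclock jump, and capacitance is left as an arbitrary constant since only the ratio matters.)

code:

def dynamic_power(c, volts, hz):
    """Dynamic power dissipation: capacitance * voltage^2 * frequency."""
    return c * volts ** 2 * hz

C = 1.0                                   # arbitrary effective capacitance
stock = dynamic_power(C, 1.10, 3.5e9)     # say, 3.5 GHz at 1.10 V
oc = dynamic_power(C, 1.25, 4.5e9)        # say, 4.5 GHz at 1.25 V

print(f"clock: {4.5 / 3.5:.2f}x stock, dynamic power: {oc / stock:.2f}x stock")
# ~1.29x the clock costs ~1.66x the dynamic power, before counting leakage.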

So enter the revised work on Pentium M, which became Core, and eventually became the dominant Core 2 microarchitecture. The whole point of that move is that once you stop relying on frequency scaling, you limit the gains in per-core performance, but you can fit more and more transistors on the chip for a much more reasonable power cost and increase your performance when running multiple tasks in parallel.

This same strategy had been part of individual CPU cores for a while. Internally, a core decodes x86 instructions into multiple smaller operations, and then it executes those operations in parallel. This is a big part of how per-core performance has increased since we stopped scaling frequency. The other big part is adding transistors to do more and more operations in fewer cycles - for example, the AES-NI extensions, which increased the performance of cryptography functions by an order of magnitude using dedicated execution units.

But this parallelism strategy presents a problem: at some point your workload is as parallel as it usefully can be. There are parts of computer algorithms that must be executed in sequence and cannot be parallelized, and this puts an upper bound on how much your workload can be sped up by increasing core parallelism or the number of cores. So this limits your gains in per-core performance as your chip gets more and more sophisticated, once you've taken frequency scaling off the table.
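(That sequential-fraction bound is Amdahl's law. A quick back-of-the-envelope sketch, with the serial fractions invented purely for illustration:)

code:

def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup from `cores` parallel units when `serial_fraction`
    of the work has to run sequentially: 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.05, 0.25, 0.50):            # 5%, 25%, 50% serial work (hypothetical)
    for n in (2, 4, 8, 1_000_000):      # the last one approximates "infinite" cores
        print(f"serial={s:.0%} cores={n:<8} speedup={amdahl_speedup(s, n):6.2f}x")

Even a small sequential slice caps the total speedup (5% serial work means a hard ceiling of 20x no matter how many cores you add), which is why piling on cores and width stops paying off well before you run out of die area.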

And then you get into increasing efficiency. Remember, again: at stock clocks, Haswell is unambiguously faster. And stock clocks are what Intel cares about, because of the need to increase efficiency. The "performance ceiling" is in overclocked parts.

You can effectively scale a microarchitecture for commercialization by about an order of magnitude. As in, if you want to hit 10W, more or less, you can only top out at 100W, more or less, before it becomes infeasible to increase clock speeds any further (at least in sellable quantities) and uneconomical (and parallelization-limited) to add more cores. As the requirement to hit lower-power devices has become more important, the trade-off has been the ability to clock higher for a small-to-moderate increase in power use.

At a practical level, it helps to think of it like this: overclocking exploits over-engineering in the signalling speed of signal paths within the processor. But if your chip is over-engineered, that means its signal paths are faster than they strictly need to be at stock clocks, and that extra speed margin costs power the chip doesn't need to spend.

Intel could continue to make processors focused on higher performance, and easily. But that's not where the money is. The money is in mobile devices and increasing the performance you can get for small amounts of power. And following the money therefore comes inherently at the expense of high-power performance.

The overclocking performance ceiling is primarily economic, with a helping of parallelization limitations.

quote:

Is there any recourse? Or is the architecture in fact the pinnacle of Intel's research into consumer chips?

"Recourse?" Overclocking will wither as efficiency keeps gaining. Stock clocks performance will keep increasing, because increasing stock-clocks performance is what Intel does, but as long as the power target needs to drop, stock clocks performance will increasingly be the best you can get.

This is not the death of faster computers, not by a longshot. It's a consequence of the fact that most people don't need their computers to be faster; they need them to be smaller and last longer on battery. Haswell is absolutely brilliant. It scales from a dual-core 11.5W-max, 4.5W average tablet all the way to a 15-core, ~130W high-performance server chip. It is ludicrously good at what it does, once you know what that is. And judging by Intel's track record, it will continue to get better.

Factory Factory fucked around with this message at 00:39 on Oct 24, 2013

P.N.T.M.
Jan 14, 2006

tiny dinosaurs
Fun Shoe
I didn't want to venture into overclocking with my statement. I have a 3770K myself, and I am leaving it alone until I think I might need a squeeze of juice to keep my rig alive. Call me wasteful, but I got it for a huge discount, and I don't think I can justify OCing it while its default settings are more than enough power for the next two years.

Thanks for translating for me. You answered my question exactly. I heard "Haswell has 10% performance gains over Ivy Bridge" and never looked back at Sandy Bridge or otherwise. I see now that that was a very respectable increase. Also, when you put it like this:

Factory Factory posted:

dual-core 8W-max, 4.5W average tablet all the way to a 15-core, ~130W high-performance server chip

I can understand what a marvel it is. I was worried about 14nm, but now I am looking forward to it.

P.N.T.M. fucked around with this message at 01:04 on Oct 24, 2013

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

craig588 posted:

That was the original Core series sometime around 2005-2006, before they added the "ix" part to the Core brand. They were kind of like an extremely highly clocked Pentium 3 with a lot of cache, expanded instruction sets, and multiple cores.

In a very coarse sense, kinda.

Basically, Netburst (Pentium 4) was a dead end for Intel.

At the same time, Intel Israel (I think) had developed the Pentium M, that nice little laptop CPU that kind of brought about usable laptops, with Centrino branding (Pentium M plus an Intel wireless card).

Intel tossed NetBurst into the bin of history and based the original Core off the Pentium M, which itself looked back past the Pentium 4 to the III to start over.

At least that's how I remember it.

Nintendo Kid
Aug 4, 2011

by Smythe
An interesting side effect of that is that a lot of those "list your system specs" programs people would use at the time would identify both Pentium M and first gen Core Duo/Solo chips as "Pentium III" even while identifying the Pentium IV and the contemporary AMD chips correctly.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
Thank you, Intel, for caving in. Whether these chips will come in LGA or in some form of BIOS-tweakable BGA remains to be confirmed.

Ragingsheep
Nov 7, 2009
Can someone give me an indication of how Bay Trail compares to a 2 yr old Sandy Bridge i3?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Ragingsheep posted:

Can someone give me an indication of how Bay Trail compares to a 2 yr old Sandy Bridge i3?

Pretty wimpy. Half to a third of the performance, depending on the i3.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Ragingsheep posted:

Can someone give me an indication of how Bay Trail compares to a 2 yr old Sandy Bridge i3?
The Pentium 2020M 2.4GHz in these Anandtech benchmarks is a good stand-in for a 35W mobile Sandy Bridge i3. The 17W Sandy Bridge i3s have clocks around 1.4GHz, which would make Bay Trail similar for single-threaded workloads but faster for multi-threaded, and much more power efficient.

Gwaihir
Dec 8, 2009
Hair Elf
So Intel dropped this sorta bombshell today:
http://www.forbes.com/sites/jeanbaptiste/2013/10/29/exclusive-intel-opens-fabs-to-arm-chips/

They're opening their foundries up to other customers, TSMC style, starting with a next gen ARM chip of all things.

Imagine GPUs jumping all the way from 28nm to Intel's 14nm process, skipping the usual TSMC issues and crap associated with that.

JawnV6
Jul 4, 2004

So hot ...
It's really hard to see how opening up some fab capacity for an FPGA vendor totally guarantees they'll turn around and kneecap Atom efforts by jumping on a low-margin Apple A7 contract. Oh wait, an analyst said that. If we're suddenly giving credence to analysts, reminder that Apple laptops are switching to ARM by the end of the year.

regulargonzalez
Aug 18, 2006
UNGH LET ME LICK THOSE BOOTS DADDY HULU ;-* ;-* ;-* YES YES GIVE ME ALL THE CORPORATE CUMMIES :shepspends: :shepspends: :shepspends: ADBLOCK USERS DESERVE THE DEATH PENALTY, DON'T THEY DADDY?
WHEN THE RICH GET RICHER I GET HORNIER :a2m::a2m::a2m::a2m:

I read the OP and scanned the last couple pages and have read several reviews of the Haswell, but haven't been able to find an answer to a fairly basic question: Can I run secondary monitors off the integrated graphics while simultaneously running my main display off my standalone video card?

Thanks!

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
28nm to 14nm with no fab problems is a bit of a red herring. Unless Intel is fundamentally different about how it runs its foundries, the biggest problems are going to be 1) the inescapable practical engineering of fitting a design to a physical chip, and 2) the engineering around the hiccups that the foundry hid from you to make their process sound attractive. Intel delayed the 22nm transition by a couple months and the 14nm transition by a-year-no-wait-no-delay-no-wait-six-months; they may be faster than TSMC, but it's by no means smooth sailing to get a product to market on a new process.

However, it does sound like this is a "premium" sort of deal. Is that just "pay extra for the best process?" Or does it imply insider access to R&D progress and technical reports?

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib

regulargonzalez posted:

I read the OP and scanned the last couple pages and have read several reviews of the Haswell, but haven't been able to find an answer to a fairly basic question: Can I run secondary monitors off the integrated graphics while simultaneously running my main display off my standalone video card?

Thanks!

Yes. Just make sure to install the Intel graphics drivers, and enable the integrated GPU in BIOS.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

That could shake things up. Oh, interesting times. How's Global Foundries doing? :allears:

regulargonzalez
Aug 18, 2006
UNGH LET ME LICK THOSE BOOTS DADDY HULU ;-* ;-* ;-* YES YES GIVE ME ALL THE CORPORATE CUMMIES :shepspends: :shepspends: :shepspends: ADBLOCK USERS DESERVE THE DEATH PENALTY, DON'T THEY DADDY?
WHEN THE RICH GET RICHER I GET HORNIER :a2m::a2m::a2m::a2m:

unpronounceable posted:

Yes. Just make sure to install the Intel graphics drivers, and enable the integrated GPU in BIOS.

Great, thanks!

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Agreed posted:

That could shake things up. Oh, interesting times. How's Global Foundries doing? :allears:

More like Global Floundries. :smuggo:

Naw, they're doing okay. Just hired a real big name away from Samsung and announced a joint venture with the guys who invented semiconductor radio.

canyoneer
Sep 13, 2005


I only have canyoneyes for you
From the first handshake to the first foundry wafer out the door is not a matter of weeks; it's a very long process measured in months or years.

And most market analysts are terrible, and have some really silly ideas of what it takes to run a semiconductor fab.

And this isn't Intel's first encounter with ARM products in their revenue stream.
http://en.wikipedia.org/wiki/XScale
So, this isn't very stunning news.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

canyoneer posted:

And this isn't Intel's first encounter with ARM products in their revenue stream.
http://en.wikipedia.org/wiki/XScale
So, this isn't very stunning news.
To be fair, though, the market is totally different today. Back then ARM wasn't competing with x86 because it was mostly used in embedded devices; today ARM has a majority share of the computing market and is poised to expand aggressively into spaces previously occupied by x86 and now x64.

JawnV6 posted:

If we're suddenly giving credence to analysts, reminder that Apple laptops are switching to ARM by the end of the year.
While that was Charlie being Charlie, I think he's a lot more right than anyone was willing to acknowledge prior to Cyclone. He set a 2-3 year timeline (from May 2011) based on an introduction of a 64-bit ARM core from nVidia in Q1 2013. Instead we got a much faster 64-bit ARM core from Apple in Q4 2013, pushing his target dates back to 2014-2015. The fact is that Apple has designed in-house the widest ARM core that exists, and I don't think they'd do that if they planned to keep it only in cellphones and tablets.

Bonus Edit: To expand a bit, why did Apple decide to take the substantial risk of being the first company EVER to release a 64-bit ARM processor, shipping devices based on it before ARM even has the final reference implementations ready for licensing? Further, why did they decide to make that core the widest, most complex ARM core ever designed? Obviously devices will be moving to 4GB of RAM eventually (though that may be 3+ cycles away for Apple since they are slow at RAM), and negotiating the 64-bit transition before it is really needed smooths the way a bit. A wider core has efficiency advantages because it takes a lower clock speed to achieve the same performance, meaning lower voltage. It just seems like they shot right past the optimal point for phones and tablets into something more akin to the new Atoms, though it will take further analysis of both products to say this for sure. It'll also be interesting to see how the 64-bit Kraits look in comparison to Cyclone and previous Kraits.
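(That wide-and-slow efficiency argument is just the P = C · V² · f relation from earlier in the thread. A toy comparison, with the IPC, clock, voltage, and capacitance figures invented purely for illustration:)

code:

def dynamic_power(c, volts, hz):
    """Dynamic power dissipation: capacitance * voltage^2 * frequency."""
    return c * volts ** 2 * hz

# Same throughput either way: 3 inst/cycle * 1.30 GHz == 2 inst/cycle * 1.95 GHz.
wide = dynamic_power(c=1.5, volts=0.90, hz=1.30e9)    # wider core: more switching capacitance, lower clock and voltage
narrow = dynamic_power(c=1.0, volts=1.05, hz=1.95e9)  # narrower core: higher clock needs more voltage

print(f"wide core burns {wide / narrow:.0%} of the narrow core's dynamic power")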

Alereon fucked around with this message at 02:02 on Oct 31, 2013

roadhead
Dec 25, 2001

Alereon posted:


Bonus Edit: To expand a bit, why did Apple decide to take the substantial risk of being the first company EVER to release a 64-bit ARM processor, shipping devices based on it before ARM even has the final reference implementations ready for licensing? Further, why did they decide to make that core the widest, most complex ARM core ever designed? Obviously devices will be moving to 4GB of RAM eventually (though that may be 3+ cycles away for Apple since they are slow at RAM), and negotiating the 64-bit transition before it is really needed smooths the way a bit. A wider core has efficiency advantages because it takes a lower clock speed to achieve the same performance, meaning lower voltage. It just seems like they shot right past the optimal point for phones and tablets into something more akin to the new Atoms, though it will take further analysis of both products to say this for sure. It'll also be interesting to see how the 64-bit Kraits look in comparison to Cyclone and previous Kraits.

iLaptop. Something cheaper than the Air that runs an ARM build of OS X?

Gwaihir
Dec 8, 2009
Hair Elf
I doubt Apple would actually charge significantly less for ARM-based versions of its machines, because why would they? Their target market doesn't give a poo poo what chip is inside the machine, nor do they know that it might be cheaper for Apple to put one in there in place of an Intel chip.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Apple doesn't have a reason to switch to ARM for notebooks for the same reason that Windows RT has no reason to exist: Intel is competitive in the low-power/high-performance SoC market with x86. It may have looked in 2011 like ARM was necessary to jump to the next level of power savings, but that's no longer the case.

Without switching their entire computer line to ARM, an ARM notebook would create fragmentation in their OS X market.

bull3964 fucked around with this message at 15:21 on Oct 31, 2013

computer parts
Nov 18, 2010

PLEASE CLAP

Gwaihir posted:

I doubt Apple would actually charge significantly less for ARM-based versions of its machines, because why would they? Their target market doesn't give a poo poo what chip is inside the machine, nor do they know that it might be cheaper for Apple to put one in there in place of an Intel chip.

Yeah, having used Apple computers since I was like 5, the main problem they have always had is a lack of standardization. The switch to Intel was the best move because it gave people a legitimate reason to buy their computers (even if they make lovely drivers for Windows, it still runs Windows), and it spurred developer interest in the platform as well.

I could maybe see it if the Surface RT had been a success and Microsoft actually let people write desktop apps for it, but there's really no demand for it unless you want to maximize battery life (and arguably that's what the iPad is for anyway).
