cinder
Aug 25, 2000
Has AMD directly addressed the questionable performance of BD versus their existing offerings? I had read through the previously linked thread mentioning AMD's desire to answer questions directly from the enthusiast crowd and I'm interested to see their responses, but it doesn't seem like it has been posted yet.


WhyteRyce
Dec 30, 2001

cinder posted:

Has AMD directly addressed the questionable performance of BD versus their existing offerings? I had read through the previously linked thread mentioning AMD's desire to answer questions directly from the enthusiast crowd and I'm interested to see their responses, but it doesn't seem like it has been posted yet.

There's a thread on the HardOCP forums where Kyle is asking for questions that will be answered by AMD as part of an interview article.

freeforumuser
Aug 11, 2007

cinder posted:

Has AMD directly addressed the questionable performance of BD versus their existing offerings? I had read through the previously linked thread mentioning AMD's desire to answer questions directly from the enthusiast crowd and I'm interested to see their responses, but it doesn't seem like it has been posted yet.

I could understand that, except they totally trolled us so hard after all their BD performance and perf/watt pre-launch claims didn't materialize at all in the final product. I doubt asking AMD about how BD sucked so hard is going to give us any honest answers.

Zhentar
Sep 28, 2003

Brilliant Master Genius

PC LOAD LETTER posted:

But then why did they blow all that die space for all that cache if it had little impact on performance and consumed so much more power? Doesn't seem to add up.

Because it is useful for server workloads, and they only designed a single Bulldozer die. I would guess that the decision was made to conserve engineering resources. I think the cost is much more a matter of die space than of power.

rscott
Dec 10, 2009

Zhentar posted:

Because it is useful for server workloads, and they only designed a single Bulldozer die. I would guess that the decision was made to conserve engineering resources. I think the cost is much more a matter of die space than of power.

L3 cache is pretty simple and designing a die with a lower amount/no L3 cache would probably boost their margins quite significantly considering how loving huge the thing is as it stands right now.

PC LOAD LETTER
May 23, 2005
WTF?!

Zhentar posted:

Because it is useful for server workloads, and they only designed a single Bulldozer die. I would guess that the decision was made to conserve engineering resources.
I thought L3 cache was almost always easy to add or remove, since it's modular and doesn't touch anything hinky like the L1 does. It's fairly fault-tolerant too, right?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
My guess is that they didn't want to have to find something to do with all the partially defective, die-harvested server CPUs (which are plentiful due to poo poo yields), since pretty much nobody would buy a two- or three-module Bulldozer for their server. This way all the really good dies go on to become expensive 16-core Opterons, and the rest end up as desktop FX processors. It took a while for AMD to release the Athlon IIs using a new die without L3 cache, remember.

Zhentar
Sep 28, 2003

Brilliant Master Genius

PC LOAD LETTER posted:

I thought L3 cache was almost always easy to add or remove, since it's modular and doesn't touch anything hinky like the L1 does. It's fairly fault-tolerant too, right?

Removing the L3 cache may not be a big deal, but take a look at the die layout. The L3 cache is in the middle of the die, and the edges running the full length of the die are taken up by HyperTransport PHYs on one side and the DDR3 PHY on the other. Actually making the die smaller after taking out the L3 cache would require more significant layout changes. Designing a layout that leaves the L3 cache somewhere easier to chop off would mean increasing the average distance from the cores, increasing latency and hurting performance.

Plus, the L3 cache is still only about 20% of the die area. Given how late Bulldozer was anyway, I'm not sure spending more time to save 20% of the die would have been worth it.
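For a sense of scale, here's some rough napkin math on what chopping ~20% off the die could mean per wafer. The ~315mm² die area and the clean-shrink assumption are just the numbers being kicked around in this thread, not anything official:

```python
from math import pi, sqrt

# Napkin math: gross dies per 300mm wafer for the full Bulldozer die vs. a
# hypothetical L3-less version. Die area (~315 mm^2) and the "L3 is ~20% of
# the die" figure are assumptions for illustration, not official numbers.

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common first-order estimate: wafer area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    return int(pi * d**2 / (4 * die_area_mm2) - pi * d / sqrt(2 * die_area_mm2))

full_die = 315              # assumed Orochi die area in mm^2
no_l3_die = full_die * 0.8  # assuming cutting the L3 really translates to 20% less area

print(gross_dies_per_wafer(full_die))    # ~186 gross dies
print(gross_dies_per_wafer(no_l3_die))   # ~238 gross dies, roughly 25-30% more per wafer
```

Of course that ignores yields and die harvesting, which is the point Alereon made above about where the partially defective dies go.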

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Intel has released the Core i7 2700K; as expected, the only difference from the i7 2600K is 100MHz and $15 (though there's currently a $55 premium on Newegg).

Longinus00
Dec 29, 2005
Ur-Quan
Looks like phoronix finally got around to benchmarking the 8150 (skip to page 6+). I wouldn't normally bring up such a trashy site but these are the first linux benchmarks I know about and it seems to do okay. Too bad about the crazy power draw.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Longinus00 posted:

Looks like phoronix finally got around to benchmarking the 8150 (skip to page 6+). I wouldn't normally bring up such a trashy site but these are the first linux benchmarks I know about and it seems to do okay. Too bad about the crazy power draw.

Do they just not own a 2600K or what's the deal there?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
Those benchmarks are interesting, because they show it being competitive a fair few times.
It seems like if your application is threaded enough, and can abuse some of the new features, Bulldozer is pretty reasonable (although as you said, hot/power hungry).

Back in the world of desktops and gaming - lightly threaded scenarios, as most of us will notice - Bulldozer's design was a bad bet.

Longinus00
Dec 29, 2005
Ur-Quan

Agreed posted:

Do they just not own a 2600K or what's the deal there?

Phoronix isn't big enough to get sent production samples or anything, same reason why the review is so late. I think he might have purchased this 8150 out of pocket so it doesn't surprise me he doesn't have a very comprehensive field to test against (notice the lack of hexcore k10).

PC LOAD LETTER
May 23, 2005
WTF?!

Zhentar posted:

Removing the L3 cache may not be a big deal, but take a look at the die layout.
...
Plus, the L3 cache is still only about 20% of the die area.
I'm sure it's not a cut n' paste operation to remove or add L3 cache, but I somehow doubt it would've been that much of a problem to do in time for launch. AMD likely knew well and good how BD would perform early this year at the very least. AFAIK cache uses a fair amount of power too. The die savings would've been nice, especially considering how drat big BD is when it's supposed to be small due to the whole module approach, but cutting the cache would have big power savings too right?

HalloKitty posted:

Those benchmarks are interesting, because they show it being competitive a fair few times. It seems like if your application is threaded enough, and can abuse some of the new features, Bulldozer is pretty reasonable (although as you said, hot/power hungry).
Mmm, still gets beat pretty handily by an un-overclocked i5-2500K many times too, though. Given the power consumption I'd be kind of surprised if the "designed for servers" BD takes off in the server market at all.

PC LOAD LETTER fucked around with this message at 09:37 on Oct 25, 2011

Zhentar
Sep 28, 2003

Brilliant Master Genius

PC LOAD LETTER posted:

I'm sure it's not a cut n' paste operation to remove or add L3 cache, but I somehow doubt it would've been that much of a problem to do in time for launch. AMD likely knew well and good how BD would perform early this year at the very least. AFAIK cache uses a fair amount of power too. The die savings would've been nice, especially considering how drat big BD is when it's supposed to be small due to the whole module approach, but cutting the cache would have big power savings too right?

Early this year would have been way too late for that big of a change. I think early last year would have been doable, if not a bit late. And no, I don't think it would have been a big power savings. The caches can make up a pretty large portion of the leakage power, because of the sheer number of transistors in them, and it's harder to power gate them without impacting performance, but that's mostly a concern for idle power. Under load, the cache shouldn't be a significant portion of the power consumption.


Edit: Google's preview of this paper claims a 16MB L3 cache for some Xeon has, on average, a dynamic power consumption of 1.7W. That's at 65nm, so the 32nm BD cache should be capable of even less.
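As a rough sketch of how small a slice that is even with a generous leakage allowance (all of these numbers are illustrative guesses, not measurements):

```python
# Back-of-envelope: even padding that 1.7W dynamic number with a deliberately
# pessimistic leakage guess, the L3 is a small slice of a 125W part.
# These values are illustrative assumptions, not measurements.
l3_dynamic_w = 1.7   # the paper's 65nm Xeon figure; 32nm should be lower still
l3_leakage_w = 5.0   # assumed worst-case leakage for the L3 arrays
chip_tdp_w   = 125   # FX-8150 TDP

l3_share = (l3_dynamic_w + l3_leakage_w) / chip_tdp_w
print(f"L3 worst-case share of TDP: {l3_share:.0%}")   # ~5%
```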

Zhentar fucked around with this message at 15:15 on Oct 25, 2011

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Intel 6-core, 12-thread processors are stomping BD. They are going to be so hosed when Intel comes out with the next-generation 8-core, 16-thread chips.

HalloKitty posted:

Those benchmarks are interesting, because they show it being competitive a fair few times.
It seems like if your application is threaded enough, and can abuse some of the new features, Bulldozer is pretty reasonable (although as you said, hot/power hungry).

Back in the world of desktops and gaming - lightly threaded scenarios, as most of us will notice - Bulldozer's design was a bad bet.

It looks like what the other benchmarks showed: gets smoked in the single/lightly-threaded stuff but can be competitive or even a tick faster than Intel in a few things. The X6 wasn't all that bad in certain tasks: 7-zip, compiling, POVray, Cinebench or whatever it's called...

Longinus00 posted:

Phoronix isn't big enough to get sent production samples or anything, same reason why the review is so late. I think he might have purchased this 8150 out of pocket so it doesn't surprise me he doesn't have a very comprehensive field to test against (notice the lack of hexcore k10).

He said he received the standard 'press kit' from AMD so I don't think he bought it on NewEgg.

PC LOAD LETTER
May 23, 2005
WTF?!

Zhentar posted:

Early this year would have been way too late for that big of a change.
I dunno. They sure released a fixed Phenom II quickly IIRC. Different problem but still, they can certainly fix some stuff relatively quickly. I just have a real hard time believing it takes nearly 2 years to move around stuff like the L3 or HT links or whatever. That is almost half as long as it takes to design a whole new CPU core itself.

Zhentar posted:

That's at 65nm, so the 32nm BD cache should be capable of even less.
Wow, BD is even more hosed than I thought then. If they can't get the power usage significantly down by lopping off stuff like cache, then they probably have no hope of even approaching the power efficiency of Intel's chips until they do a totally new arch.

freeforumuser
Aug 11, 2007

PC LOAD LETTER posted:

I dunno. They sure released a fixed Phenom II quickly IIRC. Different problem but still, they can certainly fix some stuff relatively quickly. I just have a real hard time believing it takes nearly 2 years to move around stuff like the L3 or HT links or whatever. That is almost half as long as it takes to design a whole new CPU core itself.

Wow, BD is even more hosed than I thought then. If they can't get the power usage significantly down by lopping off stuff like cache, then they probably have no hope of even approaching the power efficiency of Intel's chips until they do a totally new arch.

AMD had an easy way out with Phenom: add an extra 4MB of L3 and downsize to 45nm. BD can do neither. Tweaking microarchitectures isn't AMD's strong suit; just look at how long it took them to squeeze a 6% IPC gain out of Phenom II with Llano (~2.5 years), and how the Athlon XP/64/Phenom II uarch was totally unchanged other than adding more cache/a smaller process/MOAR COURS over their product cycles.

Zhentar
Sep 28, 2003

Brilliant Master Genius

PC LOAD LETTER posted:

I dunno. They sure released a fixed Phenom II quickly IIRC. Different problem but still, they can certainly fix some stuff relatively quickly. I just have a real hard time believing it takes nearly 2 years to move around stuff like the L3 or HT links or whatever. That is almost half as long as it takes to design a whole new CPU core itself.

Pretty much the fastest possible turnaround for a change is 6 weeks (not counting designing the change itself), and that's for a basic, metal-layer-only change (i.e. it doesn't change any transistors, only the wires connecting them). Things like the Phenom II TLB fix involve few, if any, changes to the transistors, and just flip around connections to get slightly different (but correct) behavior.

Moving stuff around means having to do a new floorplan, redoing a lot of the layout, verification, testing, mask generation, and production, plus likely additional revisions to fix issues or improve yields. It takes a long time relative to designing a whole new core because it requires going back to relatively early stages of the design process.

I wasn't able to find any really good sources about the design timeline, so I'm going off of memory about how long this kind of stuff takes... hopefully someone more knowledgeable about the process can fill in details more accurately.

PC LOAD LETTER posted:

Wow, BD is even more hosed than I thought then. If they can't get the power usage significantly down by lopping off stuff like cache, then they probably have no hope of even approaching the power efficiency of Intel's chips until they do a totally new arch.

BD's power consumption is definitely not some minor design artifact... increasing its efficiency substantially will require significant design changes. With process and architecture improvements it should be able to get a lot closer to being competitive (maybe with Trinity), but it does seem like AMD made some seriously poor design decisions when it comes to power consumption.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has a short news item about the upcoming AMD Trinity APU. Launching in Q1 of 2012, it should include two Piledriver modules and a Radeon HD 7000-series GPU (probably 7600-class), as well as support for DDR3-2133 (the fastest JEDEC DDR3). The GPU is VLIW4 like the Radeon HD 6900-series, so it will probably have 384 shader cores. Shaders come in blocks of 64 so they can't stay at 400, and going from 1600 on the 5870 down to 1536 on the 6970 still offered a pretty nice performance boost thanks to the improved per-shader efficiency and boosted clockspeeds.
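Rough napkin math on where the 384-shader and bandwidth figures come from (the SIMD count is an assumption for a 7600-class part, not something AMD has confirmed):

```python
# Assumed VLIW4 config for a 7600-class Trinity GPU, plus dual-channel
# DDR3-2133 bandwidth. None of this is confirmed by AMD; it's just the
# arithmetic behind the guesses above.
simd_engines   = 6                     # assumed; VLIW4 SIMDs come in blocks of 64 shaders
shaders        = simd_engines * 64     # -> 384

transfers_per_s = 2133e6               # DDR3-2133
bytes_per_xfer  = 8                    # 64-bit channel
channels        = 2                    # dual-channel
bandwidth_gbs   = transfers_per_s * bytes_per_xfer * channels / 1e9

print(shaders)                # 384
print(round(bandwidth_gbs))   # ~34 GB/s
```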

PC LOAD LETTER
May 23, 2005
WTF?!
The big thing there IMO is the support for much faster DDR3. Llano's GPU turned out to be more bandwidth-limited than I thought, so DDR3-2133 will probably make a real big difference.

freeforumuser
Aug 11, 2007

Alereon posted:

Anandtech has a short news item about the upcoming AMD Trinity APU. Launching in Q1 of 2012, it should include two Piledriver modules and a Radeon HD 7000-series GPU (probably 7600-class), as well as support for DDR3-2133 (the fastest JEDEC DDR3). The GPU is VLIW4 like the Radeon HD 6900-series, so it will probably have 384 shader cores. Shaders come in blocks of 64 so they can't stay at 400, and going from 1600 on the 5870 down to 1536 on the 6970 still offered a pretty nice performance boost thanks to the improved per-shader efficiency and boosted clockspeeds.

"In terms of speed, AMD is claiming up to 20% increase over Llano."

Oh, I c wut u did thar AMD. A nice-sounding statement, but so ambiguous that it is close to useless. It could very well be +40% GPU performance with the -20% coming from BD, assuming they didn't fix BD at all.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Keep in mind we're talking about two improved Piledriver modules, die shrunk to 28nm. It would be kind of hard for that not to beat Llano pretty handily (though I also would have said that about a quad-module Bulldozer versus a Phenom II X4...). GPU performance should be pretty nice, probably in the neighborhood of a Radeon HD 6570, given expected clockspeed bumps, the increase to 34GB/sec of bandwidth, and improved VLIW4 architecture.

Doctor Goat
Jan 22, 2005

Where does it hurt?

freeforumuser posted:

"In terms of speed, AMD is claiming up to 20% increase over Llano."

Oh, I c wut u did thar AMD. A nice-sounding statement, but so ambiguous that it is close to useless. It could very well be +40% GPU performance with the -20% coming from BD, assuming they didn't fix BD at all.

Any 20% increase is nice over Llano, which is currently a pretty nice mobile/SFF/HTPC platform. I don't know why anyone would use it in a full desktop, though.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Hog Butcher posted:

Any 20% increase is nice over Llano, which is currently a pretty nice mobile/SFF/HTPC platform. I don't know why anyone would use it in a full desktop, though.
It's pretty much the best option if you want a low-cost system without a dedicated graphics card. The closest competition is probably the Core i3 2105 (the i3 2125 is only $5 more but has a higher overall platform price too), which has a faster CPU, but its HD 3000 is less than half as fast, which in this case is the difference between not/barely playable and running well. While most people probably don't care about games (though one could say the rise of casual gaming and in-browser 3D shooters is changing this), graphics performance is becoming increasingly important as more applications are GPU-accelerated. For example, Google Maps just added WebGL support, where responsiveness depends on GPU performance, and every web browser today uses the GPU for rendering.

Another major factor for performance and compatibility with these kinds of applications is driver quality, and AMD's drivers are lightyears ahead of Intel's. Anyone who's tried to get GPU acceleration working for Firefox on a system with Intel graphics (which makes rendering a lot more snappy on slower laptops) knows how painful it can be, especially if you have a laptop from an OEM that's blocked in the official Intel drivers. Intel is also right out if you want GPGPU via DirectCompute/OpenCL/WebCL, which isn't much of a factor right now but definitely will be going forward.

Bonus Edit: Also, the i3 doesn't have Turbo Boost (neither does the A8-3850), and if Trinity does, that could give it a pretty compelling advantage.

Alereon fucked around with this message at 05:48 on Oct 26, 2011

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe
Total platform cost is an interesting point, and for desktop Llano vs i3 that's one thing. For laptops, my recent experience is completely different.

I had a choice of an i3-2310M, the same i3 with an NVidia 520M (for a whole $2 NZD more), or an A4, all for very close prices. As far as I'm aware the A4 doesn't really compare to an i3 in terms of performance, so I had a look for an A8 laptop in the same shop. The A8 cost 40% more than the other laptops. For the laptop market Llano seems to be in the wrong price range.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Devian666 posted:

Total platform cost is an interesting point, and for desktop Llano vs i3 that's one thing. For laptops, my recent experience is completely different.

I had a choice of an i3-2310M, the same i3 with an NVidia 520M (for a whole $2 NZD more), or an A4, all for very close prices. As far as I'm aware the A4 doesn't really compare to an i3 in terms of performance, so I had a look for an A8 laptop in the same shop. The A8 cost 40% more than the other laptops. For the laptop market Llano seems to be in the wrong price range.
That's not really a reasonable comparison; a laptop with an AMD A8 processor is competing with Intel Core i5s, which start from $579 USD on Newegg for an i5 2430M right now. A comparable A8-3800M notebook is only $549, and has twice the cores and much better graphics performance. There's even an A8-3800M notebook with an additional slow HD 6470M 1GB dGPU for only $519, but I'd rather not pay for it in weight and battery life. You can get an A6-3400M for $499, which is the same price as an i3 2330M notebook, and again has twice the cores and better graphics performance, but with the added benefit of Turbo over the i3. The AMD A4s are definitely poo poo processors though, given that they don't have the graphics performance leadership or core count to make up for the slow CPU.

The nVidia Geforce 520M is actually SLOWER than the HD 3000; the only reason it's included is so you can use applications that aren't compatible with Intel GPUs (and because nVidia is probably giving them away for free at this point). You need a minimum of a Geforce GTX 560M or Radeon HD 6750M before you get performance meaningfully better than integrated graphics. GPUs below this mark have no more memory bandwidth than integrated graphics, and on the nVidia side (and below the 6500M on AMD) have only 4 ROPs, which means gaming is off the table.
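Rough numbers behind the bandwidth point (the dGPU memory config here is an assumed typical low-end 64-bit DDR3 setup, not any specific card's spec):

```python
# Compare an assumed low-end mobile dGPU memory setup against the dual-channel
# system RAM an IGP shares. Illustrative configs, not specific card specs.
def bandwidth_gbs(transfers_per_s, bus_bits, channels=1):
    return transfers_per_s * (bus_bits / 8) * channels / 1e9

lowend_dgpu = bandwidth_gbs(1800e6, 64)      # 64-bit DDR3-1800: ~14.4 GB/s
igp_shared  = bandwidth_gbs(1333e6, 64, 2)   # dual-channel DDR3-1333: ~21.3 GB/s

print(f"low-end dGPU: {lowend_dgpu:.1f} GB/s vs shared system RAM: {igp_shared:.1f} GB/s")
```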

I'm bummed that we're not seeing more variety of Llano laptops, especially using the MX-series of 45W processors. The higher TDP means markedly better base clock speeds and lower power usage/better battery life (as a rule, the higher the TDP, the higher the power efficiency), at the cost of requiring a slightly better cooling system. This lack of variety is probably due to constrained supply due to low yields on AMD's 32nm process. This is sort of like the situation with the AMD E-350 processors, where they would be absolutely incredible low-cost netbook processors (and the platform cost makes this quite possible), but they can sell every machine they make at the $500 price because of how compellingly better they are than Atoms, so there's no reason to make any that are a better value.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Are there any credible sales numbers by CPU for laptops?

Mathhole
Jun 2, 2011

rot in hell, wonderbread.
Just saw this:
http://www.prnewswire.com/news-releases/the-portland-group-adds-support-for-amds-bulldozer-architecture-132609878.html

So servers can now be properly optimized to run on Bulldozer hardware. When this is done for desktop chips as well, I need to see some new benchmarks.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".
I'm an idiot, move on.

movax
Aug 30, 2008

I read the Windows NTDebugging Blog quite often, and they just put up a write-up on debugging a CLOCK_WATCHDOG_TIMEOUT, which has recently come into the limelight as happening with Bulldozer. Interesting read if you want to see a Microsoft engineer step through and isolate the problem.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Alereon posted:

That's not really a reasonable comparison; a laptop with an AMD A8 processor is competing with Intel Core i5s, which start from $579 USD on Newegg for an i5 2430M right now. A comparable A8-3800M notebook is only $549, and has twice the cores and much better graphics performance. There's even an A8-3800M notebook with an additional slow HD 6470M 1GB dGPU for only $519, but I'd rather not pay for it in weight and battery life. You can get an A6-3400M for $499, which is the same price as an i3 2330M notebook, and again has twice the cores and better graphics performance, but with the added benefit of Turbo over the i3. The AMD A4s are definitely poo poo processors though, given that they don't have the graphics performance leadership or core count to make up for the slow CPU.

The nVidia Geforce 520M is actually SLOWER than the HD 3000; the only reason it's included is so you can use applications that aren't compatible with Intel GPUs (and because nVidia is probably giving them away for free at this point). You need a minimum of a Geforce GTX 560M or Radeon HD 6750M before you get performance meaningfully better than integrated graphics. GPUs below this mark have no more memory bandwidth than integrated graphics, and on the nVidia side (and below the 6500M on AMD) have only 4 ROPs, which means gaming is off the table.

I'm bummed that we're not seeing more variety of Llano laptops, especially using the MX-series of 45W processors. The higher TDP means markedly better base clock speeds and lower power usage/better battery life (as a rule, the higher the TDP, the higher the power efficiency), at the cost of requiring a slightly better cooling system. This lack of variety is probably due to constrained supply due to low yields on AMD's 32nm process. This is sort of like the situation with the AMD E-350 processors, where they would be absolutely incredible low-cost netbook processors (and the platform cost makes this quite possible), but they can sell every machine they make at the $500 price because of how compellingly better they are than Atoms, so there's no reason to make any that are a better value.

I don't get the benefit of Newegg as I don't live in the US.

It's not a fair comparison, but it's what's available in the shop. There weren't any A6s there, which depending on the price might have been enough to convince me; it was only A4 or A8.

My decision-making process was also balancing power consumption, quality of the laptop case, etc. The A4 had a crappy laptop case to top it off. If they are having yield issues with Llano then this goes some way to explain why there aren't more options available. It's difficult to compete like the E-350 if you can't deliver the CPUs.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
My main point was just that the store you went to had lovely selection and pricing; it's not really anything to do with the product or how it stacks up in the market.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Alereon posted:

My main point was just that the store you went to had lovely selection and pricing; it's not really anything to do with the product or how it stacks up in the market.

I decided to check on pricespy.co.nz to see how prices stack up across the country. The pricing is consistent, and of the few Llano laptops available the A4 is always priced around the range of an i3-2310M.

Perhaps it would be better to say that my country has lovely selection and pricing.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
News on the next-gen videocard front: nVidia's 28nm die-shrink of its 40nm products is experiencing unexpected problems and will require a respin, pushing schedules back at least another 6 weeks. The rub is that as a straight shrink they really shouldn't be experiencing problems like this, so it's indicative of a larger issue. Rumors are that the 28nm process is being very problematic, though not as bad as the 40nm process was. These 28nm products are targeted for the mobile market, since it's reasonable to assume that the desktop Kepler designs will probably be too big/hot/late to go into laptops.

Ragingsheep
Nov 7, 2009
Wait, so a 3000HD on my i3 laptop would be better than the 6370m that it currently has?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Ragingsheep posted:

Wait, so a 3000HD on my i3 laptop would be better than the 6370m that it currently has?
Significantly, yes. Is it possible to disable the graphics card in your laptop?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
This site shows them as being pretty similar in performance. You'd probably want to leave it enabled, because it will have dedicated RAM instead of using your system RAM. That, and the drivers are probably better for games. Additionally, here's a nice comparison from AnandTech, comparing 5470m and Intel 3000 HD. The CPUs are a little different (the Macbook there having an advantage), but the 5470m is completely comparable to the 6370m.

But yes, at this level, discrete graphics stop being such a great bullet point.

HalloKitty fucked around with this message at 09:05 on Oct 27, 2011

Ragingsheep
Nov 7, 2009

HalloKitty posted:

This site shows them as being pretty similar in performance. You'd probably want to leave it enabled, because it will have dedicated RAM instead of using your system RAM. That, and the drivers are probably better for games.

But yes, at this level, discrete graphics stop being such a great bullet point.

Well, it's mostly for non-gaming anyway, I have a desktop for that.

Back on topic...is there any indication that Piledriver will actually be competitive with Sandy Bridge, let alone Ivy?


freeforumuser
Aug 11, 2007

Ragingsheep posted:

Well, it's mostly for non-gaming anyway, I have a desktop for that.

Back on topic...is there any indication that Piledriver will actually be competitive with Sandy Bridge, let alone Ivy?



By AMD's own claims of Piledriver being 10% faster than BD (and that's assuming an IPC increase and not just a 10% higher clock), it won't even touch Llano on a per-core basis in terms of IPC. It will probably need a ~600MHz higher clock just to match Llano. Ivy Bridge? Furgetaboutit.
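As a sketch of where a figure like ~600MHz comes from (the per-core IPC gap here is an assumed illustrative number, not a benchmark result):

```python
# If Piledriver's per-core IPC still trails Llano's by some margin, the clock
# needed to match it per core is llano_clock / (1 - gap). The 17% gap is an
# assumed illustrative value, not a measurement.
llano_clock_ghz = 2.9    # A8-3850
ipc_gap         = 0.17   # assumed per-core IPC deficit vs Llano

clock_needed_ghz = llano_clock_ghz / (1 - ipc_gap)
extra_mhz = (clock_needed_ghz - llano_clock_ghz) * 1000
print(f"~{clock_needed_ghz:.1f} GHz needed, i.e. roughly +{extra_mhz:.0f} MHz")   # ~+590 MHz
```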
