|
Galler posted:I was really hoping this would provide the basis for a cheap and efficient but beefy DIY ESXi/Xen/whatever platform, but it's looking like the extra cores probably don't make up for the extra price, heat, and weak single-core performance compared to the Phenom II X6 or 2600. Nothing wrong with an X6 or i5 with 16GB for a pretty solid ESX box.
|
# ? Oct 14, 2011 02:12 |
|
You left out: it has been replaced by the previous generation product at a lower price.
|
# ? Oct 14, 2011 02:14 |
|
What the hell are they going to do with that problem? The Phenom II X6 1055T ($149.99) should overclock pretty trivially to any of the Black Edition specs, and thus make Bulldozer look foolish. And it also features this ad copy:quote:Energy efficiency is important to AMD, allowing you to enjoy a cool, quiet PC while saving energy and reducing heat, noise and the effect of your computer on the environment. Energy efficiency innovations include Cool’n’Quiet and AMD CoolCore. These technologies reduce power consumption by balancing the processor activity. ...which is rather unfortunate in light of Bulldozer's extraordinary power draw. I mean, do they kill what has been one of their golden geese now that keeping it around means kind of looking like assholes? They could keep having them made, but there's only so much fab time and space available, and obviously they're going to want to scoot people along to Bulldozer. The problem is that Bulldozer isn't actually any better - just more expensive, more power hungry, and embarrassing in how uncompetitive it is with Intel's current-gen stuff despite prices and marketing that beg you to think otherwise. The sunk cost fallacy starts to look pretty frightening when it's your company's path and perception at stake.
|
# ? Oct 14, 2011 02:40 |
|
wipeout posted:Guy I know vaguely is adamant he wants to buy a CPU for £700 or so; all he does is game.
|
# ? Oct 14, 2011 02:49 |
|
Agreed posted:What the hell are they going to do with that problem? The Phenom II X6 1055T $149.99 should overclock pretty trivially to any of the Black Edition specs, and thus make Bulldozer look foolish. And it also features this ad copy: A die-shrunk 32nm GPU-less Llano with L3 cache and AVX would have a smaller die for the same cores, and would have been much faster than BD per clock/core, since 32nm Llano already has ~6% higher IPC than 45nm K10. This hypothetical 8-core chip at 3.6GHz (compared to the 3.3GHz 1100T) would have scored around (8/6) * (3600/3300) * (1.06) * 5.9 = 9.09 in Cinebench 11.5, which would have been a true multithreaded monster compared to the cocktease 6.0 score of the current FX-8150. It's staggering how the gently caress anyone at AMD could think BD was a good idea when they could already make something much better with much less effort.
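That back-of-envelope estimate checks out arithmetically; here's the same calculation as a quick sketch (every input is the post's own claim, not a measured figure):

```python
# Hypothetical 8-core, 3.6GHz, 32nm K10-derived chip, scaled from the
# Phenom II X6 1100T (6 cores, 3.3GHz, Cinebench 11.5 score of 5.9).
# All numbers below are the post's claims, not measurements.
cores_ratio = 8 / 6          # assumes near-linear multithreaded scaling
clock_ratio = 3600 / 3300    # 3.6GHz vs. the 1100T's 3.3GHz
ipc_gain    = 1.06           # claimed +6% IPC for 32nm Llano over 45nm K10
x6_score    = 5.9            # 1100T Cinebench 11.5 multithreaded score

estimate = cores_ratio * clock_ratio * ipc_gain * x6_score
print(f"{estimate:.2f}")     # → 9.10, vs. 6.0 for the actual FX-8150
```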
|
# ? Oct 14, 2011 07:07 |
|
freeforumuser posted:A die-shrunk 32nm GPUless Llano with L3 cache and AVX would have a smaller die for the same cores, and would have been much faster than BD per clock/core since it is already +6% IPC than 45nm K10. This comparison seems to suggest that even an i3-2100 kicks a 4-core Llano's rear end at most stuff, and that the Llano's performance is virtually identical to a Phenom II X4. While I do agree that it probably would have been a more sensible avenue to pursue, nonetheless I don't really see how it could be a big win for AMD even if they could get reliable 32nm yields of Llano at such a high core count (and considering they're having a lot of trouble with their fab partners getting decent yields with the current processors, I don't think that's a given at all). Maybe there's a light at the end of the tunnel here, with some future iteration of Llano being AMD's "Pentium M" to get them out of the Bullprescott situation they're in. At least it's already in progress, and the majority of the R&D spend is recoverable if their fabrication partners can get on board. Bulldozer, on the other hand, is a total albatross for them: they've released it, and if they immediately say "what the gently caress were we thinking" they look totally incompetent. And I think it's entirely possible the improvements needed to bring Llano to competitive desktop performance levels might just be out of reach for them completely for the time being.
|
# ? Oct 14, 2011 07:59 |
|
So Bulldozer is AMD's Merced?
|
# ? Oct 14, 2011 13:26 |
|
Bob Morales posted:So Bulldozer is AMD's Merced? It's not *that* much of a complete waste of money and effort - at least it's still x86/x64 and not some one-off instruction set that almost no one uses. If threads that should be co-operating are being assigned to separate modules, and L2 and L3 are being wasted on inter-module communication as opposed to actually caching main memory like they should be, the ripple effect of a simple scheduling fix could be huge. Even 10-15% at this point would at least be enough of an improvement not to look completely foolish next to the 2500K or the 1100T (once they drop the price of the 8150 to $235 or so) - but this is just wild speculation. The seemingly abundant supply of the 6100 suggests that yields of fully working Bulldozers are probably not that good :/
|
# ? Oct 14, 2011 14:12 |
|
Alereon posted:We should adapt the Programming Language Checklist. This is pretty awesome, heh. You should probably PM Crackbone anyway to get him to add your little Bulldozer blurb to the OP. Sometimes people do read it! Again though, at least this chip manages to mostly "keep pace" with Sandy Bridge outside of a few applications. Not as big of a performance gulf as, say, NetBurst vs. the A64. Intel's marketing and buddy-buddy (honestly kind of suspicious) relationships with OEMs kept power-hungry Pentium 4s shipping like crazy even though Athlons were running circles around them. If I recall correctly, Adobe Premiere 6.5 was one benchmark where an Athlon 64 that was behind by nearly a gigahertz in clock would still beat a Pentium 4 by a very healthy margin. It took until Premiere 7 (Premiere Pro) for the Pentium to start winning that benchmark.
|
# ? Oct 14, 2011 14:46 |
|
Some new developments: http://quinetiam.com/?p=2356 The site claims to be working on a patch that increases performance by ~40% across the board. They claim their patch forces Windows 7 to recognize the Bulldozer processor as having 8 cores. With the patch, their Passmark CPU score increased by about 4500 points, from 8500 to 13000. What do you guys think, bullshit or are they onto something?
|
# ? Oct 16, 2011 04:06 |
|
quote:The one thing that is for-sure here is that every hardware review website rushed to be the first to publish an AMD FX-8150 review, they all used the same generic benchmarks and NONE did any real world computing. The game is fixed, the big-dog spreads around the most ad-dollars.
|
# ? Oct 16, 2011 04:09 |
|
THAT drat DOG posted:Some new developments: Isn't the whole problem that Windows is recognizing it as 8 full independent cores?
|
# ? Oct 16, 2011 04:11 |
|
The site actually says 40% under specific conditions, not across the board, in relation to core usage in Windows 7. quote:There is most definitely a Windows 7 AMD FX – software patch in the works. By most estimates the AMD Bulldozer FX is underperforming by 40-70% in most Windows 7 benchmarks. By forcing Windows 7 to recognize 8 cpu cores a huge performance hit has happened. The Bulldozer FX-8xxx design… really isn’t 8 cores, it’s a 4 core CPU with an extra integer pipeline on each core. If the FX-8xxx series scale according to the 4 and 6 core Bulldozer design then there is a serious bug in Windows 7 that is crippling the FX-8150 performance. This is about spreading FPU or SSE work between the cores rather than clogging the FPU on one module while another is idle. If the patch works there will be some improvement, but it'll be completely dependent on the application/game.
|
# ? Oct 16, 2011 04:16 |
|
Won't that end up doing jack-poo poo for the worst case scenario of Bulldozer: single-core application performance?
|
# ? Oct 16, 2011 04:44 |
|
Cool Matty posted:Won't that end up doing jack-poo poo for the worst case scenario of Bulldozer: single-core application performance? Correct. It won't do anything there. So in, for example, World of Warcraft, where the game is one thread and the audio is a second thread (probably all integer), you'll still get the same terrible performance. Though there might be some benefit where the GPU offloads work to the CPU - it would at least land on an FPU that isn't as heavily loaded. Either way, I'll believe results when I see them.
|
# ? Oct 16, 2011 04:55 |
|
Devian666 posted:The site actually says 40% under specific conditions and not across the board. Why is Windows 7 getting poo poo when Linux needed a kernel patch as well?
|
# ? Oct 16, 2011 05:54 |
|
Probably because everyone knows Linux already has compatibility issues with a lot of hardware.
|
# ? Oct 16, 2011 06:36 |
|
Why didn't AMD have a patch ready to go before the review sites got hold of the CPUs?
|
# ? Oct 16, 2011 06:46 |
|
WhyteRyce posted:Why is Windows 7 getting poo poo when Linux needed a kernel patch as well? The Linux kernel patch only increased performance by up to 10%; a 40% performance increase is pretty crazy all things considered. AMD has had previous problems with Windows kernel scheduling (aka the Phenom Cool'n'Quiet problem), so I suppose this isn't unprecedented. That's also a problem that core parking would have solved; it just took until Intel got on the case for that to be implemented in the kernel. I wouldn't personally believe anything until more people can actually try this stuff out and see what changes are really being made. BlackMK4 posted:Probably because everyone knows Linux already has compatibility issues with a lot of hardware. Considering how Bulldozer is supposed to be made to improve server performance, I'm sad that all/most of the benchmarks so far have been for desktop apps on Windows. Hopefully someone will get their hands on some Opterons and start doing those tests. Longinus00 fucked around with this message at 08:11 on Oct 16, 2011 |
# ? Oct 16, 2011 08:08 |
|
Longinus00 posted:Considering how bulldozer is supposed to be made to improve server performance I'm sad that all/most the benchmarks so far have been for desktop apps on windows. Hopefully someone will get their hands on some opterons and start doing those tests. The only appropriate server testing I've seen on a mainstream site was on AnandTech. They expressed interest in testing the Bulldozer cores the same way but didn't have any to test.
|
# ? Oct 16, 2011 12:14 |
|
The performance gains would have to be incredibly dramatic if it were to compete on power draw. I'd imagine that's something people running more than one computer really take into consideration (though it ought to be more on everyone's mind - and it's such a disjunction, too, because they've got ATI putting out incredibly impressive power:performance cards; you can run two 6950s for about the same or a bit less power draw than one GTX 570/580). It kind of feels like there's a second wind here, but I don't see it personally. They knew the usage conditions, they had years to make it not be terrible under those conditions - what's the deal? Seriously, I would be totally shocked if a patch a few weeks into the launch got them 40% performance, because what the gently caress have they been up to since they got their units working internally and realized "hrm, our processor kinda eats poo poo compared to the competition, what are we going to do about that?"
|
# ? Oct 16, 2011 12:32 |
|
Longinus00 posted:The linux kernel patch only increased performance by up to 10%, a 40% performance increase is pretty crazy all things considered. AMD has had previous problems with windows kernel scheduling (aka phenom cool and quiet problem) so I suppose this isn't unprecedented. That also a problem that core parking would have solved, it just took until intel got on the case to get that implemented into the kernel. Meh, I will just tell AMD to suck it up. Nobody is going to optimize for your CPU when it runs current code molasses-slow. How about designing a CPU that is actually fast NOW in the first place instead of this pathetic whining, AMD?
|
# ? Oct 16, 2011 12:41 |
|
Devian666 posted:Correct. It won't do anything. If you have 4 threads running that are doing fpu calculations, this patch should increase the performance significantly. As it is windows is loading up "cores" 1-4, which only have access to fpus 1 and 2. Instead, it should load up "cores" 1,3,5, and 7 which will get it access to all four fpus.
|
# ? Oct 16, 2011 14:44 |
|
adorai posted:If you have 4 threads running that are doing fpu calculations, this patch should increase the performance significantly. As it is windows is loading up "cores" 1-4, which only have access to fpus 1 and 2. Instead, it should load up "cores" 1,3,5, and 7 which will get it access to all four fpus. Even in single-threaded apps you have other things going on in the background. JF_AMD said that the most efficient use of BD resources is to load each module up with its two threads and put any unused modules in a low-power state. If you had BD running two threads scheduled as you mention - one thread on module 1, one thread on module 2 to avoid sharing the FPU - the first module would have roughly half its shared resources sitting idle and using power. Same on module 2. That means the clock speed can't ramp as high while remaining within TDP, and fewer resources are actually in use. With 4 threads, you'd have roughly half of all the shared resources within the modules sitting idle at full clock speed, using power. That's unavoidable, since you can only put unused modules to sleep, not unused cores. Windows 7 currently behaves that way sometimes because it has no concept of modules. Core parking in Windows 8 removes that behavior where possible, and improves performance and power use over 7. This blog guy might be playing Chinese whispers with a developer, but it seems bizarre he didn't just quote them, link to them, or show us the "unstable" .reg file. I still want to believe. EDIT - http://quinetiam.com/?p=1810 GRINDCORE MEGGIDO fucked around with this message at 18:23 on Oct 16, 2011 |
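On Linux you can approximate the module-aware placement adorai describes by hand with CPU affinity. A sketch, assuming (for illustration - check /proc/cpuinfo on real hardware) that an FX-8xxx's logical CPUs enumerate as sibling pairs (0,1), (2,3), (4,5), (6,7), one pair per module:

```python
import os

# Sibling pairs as an FX-8xxx typically enumerates: one pair per module.
module_siblings = [(0, 1), (2, 3), (4, 5), (6, 7)]

# "Spread": one core from each module, so four FPU-heavy threads each get
# their own FPU. "Packed": both cores of two modules, sharing two FPUs.
spread = {first for first, _ in module_siblings}                # {0, 2, 4, 6}
packed = {cpu for pair in module_siblings[:2] for cpu in pair}  # {0, 1, 2, 3}

# Pin the current process (pid 0 = self); child threads/processes inherit
# the mask. Guarded so the sketch is harmless on small or non-Linux boxes.
if hasattr(os, "sched_setaffinity") and (os.cpu_count() or 0) >= 8:
    os.sched_setaffinity(0, spread)
```

This is exactly the trade-off described above: the spread mask wins when threads are FPU-bound, while JF_AMD's packed placement wins on power because idle modules can sleep.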
# ? Oct 16, 2011 15:06 |
|
wipeout posted:EDIT - It's a conspiracy by Intel.
|
# ? Oct 16, 2011 21:32 |
|
Ex-AMD engineer tries to explain (partially) what happened with Bulldozer http://www.xbitlabs.com/news/cpu/display/20111013232215_Ex_AMD_Engineer_Explains_Bulldozer_Fiasco.html quote:According to Cliff A. Maier, an AMD engineer who left the company several years ago, the chip designer decided to abandon the practice of hand-crafting various performance-critical parts of its chips and rely completely on automatic tools. While usage of tools that automatically implement certain technologies into silicon speeds up the design process, they cannot ensure maximum performance and efficiency. Some people in the comments were joking about how this guy is just mad that he was replaced by a robot WhyteRyce fucked around with this message at 06:08 on Oct 17, 2011 |
# ? Oct 17, 2011 05:50 |
|
Wasn't one of Bulldozer's big marketing points "We're not hand-tooling anything, which means we can change to a new process really easily"?
|
# ? Oct 17, 2011 07:31 |
|
I think that had more to do with Bobcat. Still plenty that is lol-worthy in the old pre-launch BD slides given what we know now, though.
PC LOAD LETTER fucked around with this message at 11:38 on Oct 17, 2011 |
# ? Oct 17, 2011 11:29 |
|
PC LOAD LETTER posted:I think that had more to do with Bobcat. Still plenty that is lol worthy in the old pre launch BD slides given what we know now though. If Intel has taught us anything about designing a good x86 processor, it's to improve the decoders, branch predictors and out-of-order resources as much as possible, since those are the main bottlenecks. Sharing a decoder between 2 cores is a recipe for disaster, aka BD.
|
# ? Oct 17, 2011 14:31 |
|
I don't think 20% slower and 20% larger would have been even a primary reason why AMD's newest batch of processors are so uncompetitive. The performance numbers make the efficiency far worse than that, actually. While it's nice to be able to have some automated tools, for something as performance-critical as a mainstream x86 CPU I'd have thought AMD would have a library of hand-optimized layouts that can be tightly grouped together and that are designed to be easier for tools to optimize. That's what I used to do with the tools I used when working with FPGAs, and it made developing new IP cores much faster with minimal wasted space on the die, and you could still understand the resulting RTL design enough to optimize easily by hand. For almost every performance-sensitive design that was going to be fabbed, we had a contractor we'd hire whose job was literally to take these maybe-300k-gate designs and place and route them efficiently by hand - every transistor. Those designs were certainly faster, but the contractor became too expensive (and took way too long), the designs grew much more complex, and the tools started to catch up for all these designs while he stayed about the same in productivity. Now, with modern CPUs, you can't expect a human to fully P&R a whole 800-million-transistor design and hand-optimize it all, so I dunno wtf AMD was doing before they switched to SoC designs. The funny thing about these automatic synthesis and place & route tools is that occasionally they can optimize something a human couldn't have come up with through hand analysis (like modern software compilers). I was working on a design for what amounts to a DSP and saw that the tools had just hard-wired a couple of spots in my logic to a high signal. It turned out that the cryptographic hash algorithm I was using had some noticeable collisions, and the synthesis tools discovered them for me.
|
# ? Oct 17, 2011 14:57 |
|
It sounded like a mix of exaggeration and disgruntled-employee talk. If you follow his posts around you'll see him (repeatedly) make the claim that all the engineers from the golden days are no longer with the company, which is probably more telling about BD's issues. Although that too sounds like more disgruntled ex-employee complaining.
|
# ? Oct 17, 2011 17:01 |
|
Given what he said about BD largely panned out I don't think you can hand wave away what those ex engineers said as "disgruntled employees bitching" or exaggeration or something. As for the scheduler being the problem...I don't think anyone outside of AMD knows exactly what is wrong with BD. Most likely its a combo of several design problems and process issues.
|
# ? Oct 18, 2011 01:50 |
|
PC LOAD LETTER posted:Given what he said about BD largely panned out I don't think you can hand wave away what those ex engineers said as "disgruntled employees bitching" or exaggeration or something. I think those guys are floundering because they don't have enough money. They're having to cut corners somewhere, be it the architecture team, process, software support, packaging, etc. They can't fire on all cylinders. In an ideal world they'd have an army of software engineers preparing drivers and updates for the major operating systems while the hardware team gets the actual hardware ready. If they really have switched to a ton of EDA tools as well, I can see a disconnect between some old guard engineers and fresh guys that studied with EDA in school. I know I'm a baby engineer and I had EDA tools at my disposal during school, but I've had to go back to the dark ages a bit in supporting some legacy products.
|
# ? Oct 18, 2011 02:07 |
|
movax posted:I think those guys are floundering because they don't have enough money. They're having to cut corners somewhere, be it the architecture team, process, software support, packaging, etc. They can't fire on all cylinders. In an ideal world they'd have an army of software engineers preparing drivers and updates for the major operating systems while the hardware team gets the actual hardware ready. The eternal optimist in me wants to say they automated Bulldozer while the hand-tuned transistor work was (is) being done for Piledriver. Perhaps these chips are more or less the same at the block level, and all the improvement in PD will come from tweaking the circuits down to as few gates as possible and other tuning. Otherwise I just don't know anymore - this is obviously not the product we needed to come out of AMD to actually keep Intel on their toes. Did they even have to drop the price of the 2600K in response?
|
# ? Oct 18, 2011 14:12 |
roadhead posted:The eternal optimist in me wants to say they automated bulldozer while the hand-tuned transistor work was(is) being done for Piledriver. I don't think Intel cares at this point. They are the performance leader in the CPU market, and AMD is bleeding money trying to move on from being the low-cost leader and has fallen on its face at the moment. The only thing they would drop pricing on is the 2500K, since it's their main profit driver at this point and a cut there would REALLY undercut the small market share AMD has in the first place. I like AMD as the low-cost CPU alternative, and it helped me out personally: my old Intel system died over the summer, and I was able to replace the motherboard, CPU and RAM for less than the cost of a new out-of-production LGA775 motherboard, with a little performance boost and room to upgrade in the future. I really wish they hadn't hyped it as an SB killer and had just done what they do best: not as powerful, but cheaper and just as good.
|
|
# ? Oct 18, 2011 15:13 |
|
I wonder if you could put together an experimental benchmark that demonstrates the performance loss you get when modules aren't being used properly. It could be very simple. First, run a benchmark and have it set to use the first 4 cores that Windows 7 sees. Then, run that benchmark again, but have it use every other core. It would be interesting to see if there was a consistent performance difference.
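A rough version of that experiment is easy to sketch with processor affinity (Linux shown here, since Python exposes `os.sched_setaffinity` there; `busy_work` is a stand-in for a real FPU-heavy kernel, and the {0,1,2,3} vs. {0,2,4,6} masks assume sibling cores enumerate in adjacent pairs):

```python
import os
import time
from multiprocessing import Pool

def busy_work(_):
    # Stand-in floating-point load; swap in a real kernel for useful numbers.
    x = 0.0
    for i in range(1, 1_000_000):
        x += i ** 0.5
    return x

def timed_run(mask):
    # Pin ourselves before forking so the 4 workers inherit the mask.
    os.sched_setaffinity(0, mask)
    start = time.perf_counter()
    with Pool(4) as pool:
        pool.map(busy_work, range(4))
    return time.perf_counter() - start

if __name__ == "__main__" and (os.cpu_count() or 0) >= 8:
    packed = timed_run({0, 1, 2, 3})  # both cores of two modules
    spread = timed_run({0, 2, 4, 6})  # one core from each module
    print(f"packed: {packed:.2f}s  spread: {spread:.2f}s")
```

If the module-sharing theory holds, the spread run should finish measurably faster on an FX-8xxx; a consistent gap on Bulldozer but not on a true 8-core part would be the interesting result.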
|
# ? Oct 19, 2011 17:30 |
|
Shouldn't Prime95 be able to do that given its configurable core affinity? How Windows sees them shouldn't have much of an impact on what Prime95 does with 'em, and with specified parameters I would think you could get a sense of relative performance.
|
# ? Oct 19, 2011 17:34 |
|
Just in case you were still interested in Bulldozer http://scalibq.wordpress.com/2011/10/19/amd-bulldozer-can-it-get-even-worse/ quote:A number of reviewers have reported problems with a Blue Screen Of Death on AMD’s Bulldozer, even with stock settings:
|
# ? Oct 19, 2011 19:01 |
|
WhyteRyce posted:Just in case you were still interested in Bulldozer No worry of that. I can barely believe anyone bothered... That's just tragic. A BSOD shouldn't normally be triggerable by a game any more, especially since it'll be running without admin rights. It must have something to do with the CPU itself, or with the way the kernel is assigning threads to the CPU. Delayed, power hungry and therefore hot, uncompetitive even for the price, over-marketed (8-core!) and now buggy. Indeed. HalloKitty fucked around with this message at 20:50 on Oct 19, 2011 |
# ? Oct 19, 2011 20:48 |
|
Given all the bad press, I'm surprised the 8120 and 8150 are both sold out on Newegg
|
# ? Oct 19, 2011 20:53 |