|
The hell? They can't be right, the integrated graphics core has an R 6550? That's what I've got in my desktop right now...
|
# ? May 21, 2011 14:20 |
|
|
Tab8715 posted:The hell? They can't be right, the integrated graphics core has an R 6550? That's what I've got in my desktop right now... Llano's IGP line is .
|
# ? May 21, 2011 15:33 |
|
Tab8715 posted:The hell? They can't be right, the integrated graphics core has an R 6550? That's what I've got in my desktop right now... To be fair, it isn't directly comparable to desktop-class cards because of the reduced memory bandwidth. Llano has 29.8GB/sec shared between the CPU cores and GPU, desktop cards (those intended for 3D) have at least 64GB/sec. The lower-end cards intended for HTPC and video applications do have as little as 28.8GB/sec though, so Llano will easily mean videocards are only necessary for gaming.
|
# ? May 21, 2011 18:38 |
|
Alereon posted:To be fair, it isn't directly comparable to desktop-class cards because of the reduced memory bandwidth. Llano has 29.8GB/sec shared between the CPU cores and GPU, desktop cards (those intended for 3D) have at least 64GB/sec. The lower-end cards intended for HTPC and video applications do have as little as 28.8GB/sec though, so Llano will easily mean videocards are only necessary for gaming. Actually, video cards aren't anywhere near as bandwidth-intensive as their interconnects suggest. Everything below a dual-GPU card actually loses only a handful of percent of its performance scaling all the way down to a PCIe x4 connect. Also, PCIe 2.0 x16 only runs at 8 GB/s (500 MB/s per lane times 16 lanes). You might be thinking gigabits. VVVV Ahh, derp. 'kay. Factory Factory fucked around with this message at 19:05 on May 21, 2011 |
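A quick sanity check on that interconnect math, as a sketch (the per-lane figures are the published PCIe rates; the helper function is just for illustration):

```python
# Rough PCIe bandwidth arithmetic, per direction.
# PCIe 2.0 runs 5.0 GT/s per lane with 8b/10b encoding -> 500 MB/s usable.
MB_PER_LANE = {"1.x": 250, "2.0": 500}

def pcie_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Usable link bandwidth in GB/s for a given PCIe generation and width."""
    return MB_PER_LANE[gen] * lanes / 1000

print(pcie_bandwidth_gbs("2.0", 16))  # 8.0 GB/s -- a full x16 slot
print(pcie_bandwidth_gbs("2.0", 4))   # 2.0 GB/s -- the x4 case mentioned above
```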
# ? May 21, 2011 18:55 |
|
Factory Factory posted:Actually, video cards aren't anywhere near as bandwidth-intensive as their interconnects suggest. Everything below a dual-GPU card actually loses only a handful of percent of its performance scaling all the way down to a PCIe x4 connect. I'm talking about memory bandwidth: a Radeon HD 6570 has a 128-bit 4GHz GDDR5 memory bus, but the Radeon HD 6550 is going to have to make do with a 128-bit 1866MHz DDR3 bus, shared with the CPU.
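The arithmetic behind those two bus figures, as a sketch (assuming the quoted speeds are effective transfer rates, i.e. 4GT/s GDDR5 and DDR3-1866):

```python
def mem_bandwidth_gbs(bus_bits: int, mega_transfers: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfers/sec."""
    return (bus_bits / 8) * mega_transfers / 1000

print(mem_bandwidth_gbs(128, 4000))  # 64.0 GB/s  -- HD 6570's GDDR5
print(mem_bandwidth_gbs(128, 1866))  # ~29.9 GB/s -- Llano's DDR3, shared with the CPU
```

That second number is where the ~29.8GB/sec shared figure earlier in the thread comes from.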
|
# ? May 21, 2011 19:02 |
|
I still expect it to slaughter the Intel HD 3000, though.
|
# ? May 21, 2011 22:14 |
|
Alereon posted:I'm talking about memory bandwidth: a Radeon HD 6570 has a 128-bit 4GHz GDDR5 memory bus, but the Radeon HD 6550 is going to have to make do with a 128-bit 1866MHz DDR3 bus, shared with the CPU. This is true, but if you look at the Zacate APUs and how well they perform with just a single 1066 DDR3 channel, then this might not be so bad at all. I have no clue if it's because today's CPUs have so much L2/L1 cache that memory bandwidth isn't too important past a certain point, or if it's because they're hiding a bunch of cache in the GPU itself, or something else, but AMD appears to be getting some pretty good performance out of relatively low bandwidth. They may actually be able to get close to a "real" 6550. That'd be a heck of a bargain chip if they pull that off, particularly for a laptop.
|
# ? May 22, 2011 01:26 |
|
PC LOAD LETTER posted:This is true, but if you look at the Zacate APUs and how well they perform with just a single 1066 DDR3 channel, then this might not be so bad at all. I have no clue if it's because today's CPUs have so much L2/L1 cache that memory bandwidth isn't too important past a certain point, or if it's because they're hiding a bunch of cache in the GPU itself, or something else, but AMD appears to be getting some pretty good performance out of relatively low bandwidth. They may actually be able to get close to a "real" 6550. That'd be a heck of a bargain chip if they pull that off, particularly for a laptop. The hell? This is going in a laptop?
|
# ? May 22, 2011 02:41 |
|
Zacate is the mobile/desktop middle-ground chip, right? So it's going in mid- to high-end laptops, budget desktops, media HTPCs, and there were rumours last year that it could fit into a 12" or larger tablet. Of course it would have had heat/battery issues in a tablet, and according to market research nobody wants one bigger than 10".
|
# ? May 22, 2011 03:03 |
|
Verizian posted:Zacate is the mobile/desktop middle-ground chip, right? So it's going in mid- to high-end laptops, budget desktops, media HTPCs, and there were rumours last year that it could fit into a 12" or larger tablet. Zacate is ultra-portable only. Llano (later BD?) will go into laptops.
|
# ? May 22, 2011 03:15 |
|
What would be neat is if Llano could make gaming on a laptop cheaper and more feasible.
|
# ? May 22, 2011 05:46 |
|
spasticColon posted:What would be neat is if Llano could make gaming on a laptop cheaper and more feasible. That's not too hard. Look at what AMD did with the E-350; they're known for GPU excellence.
|
# ? May 22, 2011 22:28 |
|
Sinestro posted:Zacate is ultra-portable only. Llano (later BD?) will go into laptops. Not necessarily. AMD's not putting restrictions on Zacate like Intel has with Atom. As a result, there are a number of sub-$350 15" laptops with E-350s, and I believe that there are some even cheaper models with the C-50. Acer's even got a Windows tablet with an even-lower-power version of the C-50, although reviews haven't been kind.

Sinestro posted:That's not too hard. Look at what AMD did with the E-350; they're known for GPU excellence. It'll be interesting to see what happens as the power levels move up, though. AMD has a dynamite GPU design team, but throwing a powerful processor and a powerful GPU on the same die means that you're going to need a lot of cooling when things throttle up. From what I understand, while the E-350 has the GPU and CPU on the same die, it's not really a well-integrated setup; it's a bit like Intel's Clarkdale approach, with two discrete areas on the chip that just happen to have a very short on-die interconnect. Mobile limits have always been more about power and cooling than what's capable at the top end of performance, and it remains to be seen how well AMD can turn CPU/GPU integration into power savings. Space Gopher fucked around with this message at 23:15 on May 22, 2011 |
# ? May 22, 2011 23:12 |
|
The Brazos APU is pretty tightly integrated, it's definitely not comparable to Clarkdale or Atom, which don't have integrated memory controllers. Clarkdale actually used separate chips (32nm CPU cores, 45nm northbridge) on the same package, Atom uses a single chip but with two independent ICs linked by an on-die FSB. These approaches require little development work and provide some of the cost savings of integrated memory controllers, but you don't get the performance improvements offered by an IMC (and an actual IMC uses even less power and die space). The main limitations on Brazos performance are the very low CPU clocks and the lack of Turbo support, both of which should be remedied in the 28nm die shrink. New chipsets probably also wouldn't hurt (especially for nettops), but overall platform power usage is already pretty low.
|
# ? May 22, 2011 23:48 |
|
Tab8715 posted:The hell? This is going in a laptop? Yea, there'll be Llano laptop chips. Model TDP is supposed to be 25-45W depending on the chip you get. Obviously the top-end one will have the highest TDP, so if you want that 6550-ish performance + quad Phenom II cores (aka Husky) you can kiss good battery life goodbye, but decent battery life may still be possible since that power rating is for the CPU+GPU+NB. edit: Looks like we've got a good leak on BD clocks and prices from ASUS. e2: Looks like we got some prices for some Llano-based laptops. In Euros, and it has a discrete GPU in it too (commonplace CF in a low-to-mid-range laptop ahoy!), but still it gives you a good idea of what they'll be like in dollars. PC LOAD LETTER fucked around with this message at 13:15 on May 24, 2011 |
# ? May 23, 2011 04:18 |
|
Anand Lal Shimpi is rather unsubtly confirming the BD-at-Computex rumor on Twitter.
Sinestro fucked around with this message at 21:57 on May 24, 2011 |
# ? May 24, 2011 20:19 |
|
I'm kinda over the 8110. Frugality may win out, but I still want it.
|
# ? May 24, 2011 21:44 |
|
PC LOAD LETTER posted:Yea, there'll be Llano laptop chips. Model TDP is supposed to be 25-45W depending on the chip you get. Obviously the top-end one will have the highest TDP, so if you want that 6550-ish performance + quad Phenom II cores (aka Husky) you can kiss good battery life goodbye, but decent battery life may still be possible since that power rating is for the CPU+GPU+NB. TDP isn't a great way to look at power consumption and battery life any more. It specifies a sustained maximum power draw, but it doesn't give you any information about how the chip performs with lighter loads. Intel's current Sandy Bridge mobile quads have high TDPs, but still get excellent battery life under typical light-usage scenarios like web browsing because they're aggressive about clocking down, sleeping, and even gating off parts of the CPU that aren't in active use. It remains to be seen if AMD can match Intel's progress on that front, but I wouldn't assume that a 45W TDP automatically means poor runtime.
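To make that concrete, a toy runtime calculation (the battery size and all the wattages below are made-up round numbers, not measurements of any real chip):

```python
# Battery runtime tracks *average* platform draw, not the CPU's TDP ceiling.
battery_wh = 48.0  # hypothetical 6-cell laptop battery

scenarios = {
    "light browsing (CPU mostly idle/clock-gated)": 10.0,  # watts, whole platform
    "sustained load (CPU pinned near its 45W TDP)": 55.0,
}
for name, avg_watts in scenarios.items():
    print(f"{name}: {battery_wh / avg_watts:.1f} hours")
# Same battery, same "45W TDP" chip: roughly 4.8 hours vs 0.9 hours.
```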
|
# ? May 24, 2011 22:11 |
|
Space Gopher posted:TDP isn't a great way to look at power consumption and battery life any more. It specifies a sustained maximum power draw, but it doesn't give you any information about how the chip performs with lighter loads. Intel's current Sandy Bridge mobile quads have high TDPs, but still get excellent battery life under typical light-usage scenarios like web browsing because they're aggressive about clocking down, sleeping, and even gating off parts of the CPU that aren't in active use. It remains to be seen if AMD can match Intel's progress on that front, but I wouldn't assume that a 45W TDP automatically means poor runtime. Wow, I am now even more excited about Bulldozer.
|
# ? May 25, 2011 00:09 |
|
Space Gopher posted:TDP isn't a great way to look at power consumption and battery life any more. It specifies a sustained maximum power draw, but it doesn't give you any information about how the chip performs with lighter loads. Intel's current Sandy Bridge mobile quads have high TDPs, but still get excellent battery life under typical light-usage scenarios like web browsing because they're aggressive about clocking down, sleeping, and even gating off parts of the CPU that aren't in active use. It remains to be seen if AMD can match Intel's progress on that front, but I wouldn't assume that a 45W TDP automatically means poor runtime.
|
# ? May 25, 2011 01:55 |
|
PC LOAD LETTER posted:edit: Looks like we've got a good leak on BD clocks and prices from ASUS. Could this be AMD heralding their triumphant return?
|
# ? May 25, 2011 05:39 |
|
If this reaches IPC similar to even Nehalem, then drat
HalloKitty fucked around with this message at 09:19 on May 25, 2011 |
# ? May 25, 2011 09:15 |
|
Fudzilla has an article about AMD's Fusion strategy (direct link to AMD presentation slides), as well as another article with more details about the upcoming Fusion Z-series APUs for tablets. Intel has announced they're slashing the prices on their upcoming 32nm Atoms by about 50%, so competition is really starting to heat up in the low-power computing arena. I'm thinking we'll see Atoms in cheap, low-performance devices (like ChromeOS smartbooks), with Fusion processors making significant headway in Windows-based devices that require higher performance. The real question for AMD longer-term is whether they can write Linux/Android graphics drivers that will allow them to capture some of that market, or if they'll just leave it to the ARM SoCs. AMD has two years until Intel produces a competitive Atom (22nm Silvermont in 2013), so they'd better take advantage of this time by executing well. Let's all just pray the rumors of Carly Fiorina being selected as the new AMD CEO were false, otherwise AMD is just plain done.
|
# ? May 27, 2011 23:16 |
|
There is some talk that BD isn't going to show up at Computex because "client" means Llano, and the Q3 server release will include the workstation/enthusiast parts.
|
# ? May 28, 2011 18:40 |
|
AMD says it has sold 5 million Fusion APUs so far, and that it is sold out, with demand far exceeding supply. Engadget is patting themselves on the back for predicting the death of the netbook, but I think the reality is that consumers are catching on to the fact that Atoms simply aren't fast enough for netbooks. They were barely adequate when the netbook form factor first appeared, but because the Atom never evolved, the computing demands of basic web browsing far outpaced it. It's unfortunate but understandable that Fusion APUs are ending up in low-end laptops rather than the netbooks they'd be perfect for.
|
# ? May 29, 2011 03:14 |
|
Alereon posted:It's unfortunate but understandable that Fusion APUs are ending up in low-end laptops rather than the netbooks they'd be perfect for. I recommended a Fusion laptop to one of the guys in the office as the cheap low-end laptop for his family to use. Initially his kids moaned at him for buying a lovely laptop, but once they started using it all the moaning went away. Between it having enough CPU and GPU to actually do stuff and the 4.5-hour battery life, they're pretty happy with it. AMD will do pretty well once they release their entire range of Fusion chips as indicated in the slides.
|
# ? May 29, 2011 11:52 |
|
Bulldozer has been delayed until approximately late July due to performance issues. Apparently the B0 and B1 steppings had unacceptable performance, so AMD is spinning up a B2 stepping and hoping that it will make a big enough difference. So much for those optimistic performance projections
|
# ? May 30, 2011 05:55 |
|
Nordic Hardware is reporting that the Bulldozer lineup has been canceled, replaced with a new lineup with much more conservative clockspeed targets, to be launched in September. This news story came out before we got confirmation at Computex that AMD was going to have to make a new stepping, so it seems pretty likely right now. Since Llano still seems to be on track, I'm thinking this is more of an issue with the Bulldozer architecture than the manufacturing process (though we have no idea what the clockspeed targets are on Llano). Unfortunately, this probably spells the end of any chance AMD had at competing with Sandy Bridge in terms of per-thread performance, and doesn't bode well for their chances with well-threaded workloads when compared to Sandy Bridge-E. It seems like Bulldozer is going to end up like the Thuban hex-core Phenom IIs, not the fastest, but the cheapest way to get 6+ cores if you have an application that can use them all. On the plus side, we know that Intel's next-generation Ivy Bridge CPUs have been pushed back from Q1 2012 to Q2, meaning AMD is going to have a generous period of graphics dominance with Llano and Brazos.
|
# ? May 30, 2011 09:05 |
|
I planned to build a new computer next month; my desktop is still a single-core 1.8GHz Athlon with an X700 card. So does this affect the release dates of new AM3+ motherboards?
|
# ? May 30, 2011 09:48 |
|
I think it's fair to say that the 900-series chipsets and boards will be delayed to launch alongside Bulldozer, but that's just a rebrand anyway and you can buy AM3+ boards now (list linked in the OP). Realistically though, you should probably put some serious thought into an Intel Sandy Bridge CPU and Z68 board. AMD could still pull off something awesome, but that's looking less likely and less worth waiting for.
|
# ? May 30, 2011 10:04 |
|
I dunno how kosher it is to repeat this, but Star War Sex Parrot gave an offhand comment in the System Building thread about being both really disappointed with Bulldozer and bound by NDA about the subject. Also mentioned was how it seemed nice for workstations and servers, but otherwise was not impressive. I'm thinking the FPU design backfired - the half-as-many-as-cores, double-wide, bifurcating thing just doesn't sound like it would really hold up to the floating-point-strong Intel Core processors. I was optimistic for BD, but ever since I heard about its design, I felt like it was pushing the Phenom "more cores" strategy out too far. Rather than being designed for today's (or even tomorrow's) most pressing processing needs, the chips are designed toward some highly-parallelized vision of software years from now, putting power in the wrong places compared to what most people usually wait on their processors for now.
|
# ? May 30, 2011 11:00 |
|
It's a drat shame AMD dropped the ball, again. They're lucky they have a decent GPU to cram onto a single die with their older cores. Llano will make a good mainstream chip, but that is a real disappointment to those of us who were hoping for something more than that. FPU though? I'd be surprised if the FPU was the problem. For whatever reason, the problems with AMD designs seem to pop up in the L1 cache and the decoders/schedulers.
|
# ? May 30, 2011 13:02 |
|
Factory Factory posted:I felt like it was pushing the Phenom "more cores" strategy out too far. Rather than being designed for today's (or even tomorrow's) most pressing processing needs, the chips are designed toward some highly-parallelized vision of software years from now, putting power in the wrong places compared to what most people usually wait on their processors for now.
|
# ? May 30, 2011 13:29 |
|
Factory Factory posted:I'm thinking the FPU design backfired - the half-as-many-as-cores, double-wide, bifurcating thing just doesn't sound like it would really hold up to the floating-point-strong Intel Core processors I have to say I agree with this, it just never made sense to me to do it. Not having an FPU was bad on single core CPUs back in the 90s - why would we make sure that a design in the 2010s lacked an FPU for half the cores? Especially since it seems like it was kinda supposed to be the AMD version of hyperthreading.
|
# ? May 30, 2011 14:03 |
|
fishmech posted:I have to say I agree with this, it just never made sense to me to do it. Not having an FPU was bad on single core CPUs back in the 90s - why would we make sure that a design in the 2010s lacked an FPU for half the cores? That is not true. There are 2 128-bit FPUs in a module, but for 256-bit FP calculation, they can fuse. Unless AVX is being used on one core, there are two FPUs, each tied to one core. Sinestro fucked around with this message at 01:15 on May 31, 2011 |
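A sketch of the sharing arrangement Sinestro is describing (simplified flop counting that ignores FMA; it just illustrates the 128-bit vs. fused 256-bit trade-off):

```python
PIPE_BITS = 128               # each of the two FP pipes in a module
SP_BITS = 32                  # single-precision float
lanes = PIPE_BITS // SP_BITS  # 4 SP lanes per pipe

# Two threads running 128-bit SSE code: each core gets its own pipe.
per_core_sse = lanes          # 4 SP ops/cycle for *each* of the two cores
# One thread running 256-bit AVX: the two pipes fuse into a single unit.
fused_avx = lanes * 2         # 8 SP ops/cycle, but serving only one core

print(per_core_sse, fused_avx)  # 4 8
```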
# ? May 30, 2011 18:05 |
|
We can speculate, but personally I want to see the chips out and benchmarked by someone reliable before I write off AMD
|
# ? May 30, 2011 18:16 |
|
Sinestro posted:That is not true. There are 2 128-bit FPUs in a module, but for 256-bit FP calculation, they can fuse. Unless AVX is being used on one core, there are two FPUs, each tied to one core. The actual problem for AMD appears to have been that Bulldozer couldn't scale to high enough clockspeeds to offer competitive per-thread performance, which is what has always been the big risk of going with >4 core CPUs.
|
# ? May 30, 2011 21:31 |
|
That makes lots more sense, especially given the rumors of poor yields and delays from GF's 32nm process.
|
# ? May 30, 2011 22:28 |
|
Alereon posted:The actual problem for AMD appears to have been that Bulldozer couldn't scale to high enough clockspeeds to offer competitive per-thread performance, which is what has always been the big risk of going with >4 core CPUs. The clockspeed is one issue. However, I had a look at the article and there's one thing that sticks in my mind. The main feature of this architecture that I see is going from 3 to 4 decode units. That will certainly provide some improvement, with the theoretical best being a 33% increase in instructions converted to micro-ops per clock cycle, but it is also a limit. I'm wondering when AMD will provide a 5-module part, and the corresponding 5 decoders. Though that could be linked to the current limits of memory speed.
|
# ? May 31, 2011 01:09 |
|
|
x86 is pretty IPC-limited; IIRC the Athlon had 3 decoders and only averaged around 1.5 IPC throughput. 4 is already overkill, and adding a 5th would be a waste. Resources would probably be better spent on a bigger/faster cache, better branch prediction, or improved clockspeed. I don't believe memory bandwidth is an issue right now either; almost nothing seems to be limited by it for desktop workloads.
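Putting rough numbers on that, as a back-of-the-envelope sketch (the 3-decoder and 1.5-IPC figures are just the ones cited above):

```python
decoders = 3    # Athlon's decode width, per the post above
avg_ipc = 1.5   # rough average instruction throughput on typical x86 code

print(f"average decode utilization: {avg_ipc / decoders:.0%}")  # 50%
# Even a 3-wide front end sits about half idle on average, so a 5th decoder
# would mostly add idle silicon compared to spending it on caches or prediction.
```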
|
# ? May 31, 2011 02:15 |