rscott
Dec 10, 2009

BangersInMyKnickers posted:

The problem with AMD going after the high-end market is that at the high end, price sits below performance and overclockability on the list of considerations, and on those it looks like Sandy Bridge is going to win out. The profit margins are much better there, and all Intel has to do is bin the better chips appropriately, so it's a no-brainer for them to do it, but I seriously doubt AMD will compete with Intel's high end in any serious way.

The problem with Sandy Bridge and the OC market is that they're bundling features in weird-ass ways. Want to use the faster integrated graphics on the K-series chips and overclock at the same time? You have to wait for the super expensive Z68 (or whatever) chipset to come out! It doesn't make sense, and you know Intel wants to kill overclocking and force everyone to buy the more expensive chips, but they can't, because they'd be ceding mindshare (not really marketshare) to AMD.

rscott
Dec 10, 2009

BangersInMyKnickers posted:

Do you really see a situation where people are going to be looking to overclock and use integrated graphics? Virtually anyone who is going to be overclocking is also doing 3D work that will warrant discrete graphics in the first place.

Isn't the whole point of the improved integrated graphics to make a discrete card unnecessary unless you want to do high-end gaming/CAD work or whatever? I know a fair number of people (maybe 20% of my "nerdy" friends) who have pretty high-end CPUs and shit GPUs because they don't really play games, they just run a lot of multithreaded applications. And regardless, why include the HD 3000 graphics with the K-series processors when, if what you say is true, they're rarely going to be used in the first place? The whole thing just doesn't make a lot of sense.

rscott
Dec 10, 2009
AMD was on par with Intel or kicked its ass in performance and price/performance metrics basically from 1999 to 2006, excepting 2002/early 2003 when T-Breds and Bartons kind of got left in the dust by Northwood P4s. Intel's blunders didn't start with NetBurst; they started with backing Rambus as the memory platform for P3s and P4s instead of DDR like AMD did.

rscott
Dec 10, 2009
Basically, if Intel had released an updated 440BX chipset with ATA-66 support and official 133 MHz FSB support, AMD wouldn't have made nearly the gains it made in 2000, VIA wouldn't have made a fuckton of money making chipsets, etc. etc. Backing Rambus was a bigger mistake than the NetBurst uarch. In fact, really the only big problem with NetBurst was that it lived about two years too long, and that's because no one anticipated hitting the 4GHz wall. Think about it: it's been six years since NetBurst was retired and we still don't have CPUs clocking that high. Stuff like Tejas was supposed to be hitting 25GHz by now if you look at old Intel roadmaps.

rscott
Dec 10, 2009
Basically every AMD chipset up until the nForce 2 kind of sucked in its own way. Most of VIA's chipsets had shitty memory controller performance, and most SiS stuff was terrifically unstable.

rscott
Dec 10, 2009
Jesus, this is a blunder on the level of NetBurst for AMD, and I don't believe they can afford a mistake like that. Hopefully the graphics division can keep the company afloat long enough for AMD to either work the kinks out of the process or pull their heads out of their asses and deliver a product that doesn't have the IPC of eight-year-old parts.

rscott
Dec 10, 2009

roadhead posted:

I'm hoping against hope the thread scheduling is all whack and threads that should be sharing a module (and its L2) aren't, so the L3 is being used for things it shouldn't be (under normal circumstances).

The only other explanation I can think of is that they really did alter the integer pipelines significantly from Phenom II (lengthened them, I fear), in which case there's no way a simple Windows 7 patch can save it.

How could they not see that it wasn't reliably beating the x4 and x6 Phenom IIs a while back, unless they had some synthetic (most likely simulated) situations where it was?

IIRC they did lengthen the integer pipelines, because they were talking about +30% clock speed over Thuban and I don't think that's possible just from a process shrink. Of course, GF's 32nm node seems like shit (surprise), so I'm sure they lost some of the gains they had there, but that doesn't excuse coming out with a uarch in 2011 with 60% less IPC than Sandy Bridge. I don't know how the math works out exactly, but I'm willing to guess that would put it roughly at the same IPC as the original K8 uarch.
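To make that arithmetic concrete, here's a rough back-of-envelope sketch in Python. The IPC figures are illustrative assumptions normalized to Sandy Bridge = 1.0 (taken from the "60% less IPC" claim above), not benchmark measurements, and the clocks are ballpark:

[code]
# Back-of-envelope single-thread throughput: performance ~ IPC * clock.
# IPC values are assumptions normalized to Sandy Bridge = 1.0, taken from
# the "60% less IPC than SB" claim; they are not measured numbers.

chips = {
    "Sandy Bridge": {"ipc": 1.00, "clock_ghz": 3.4},        # assumed baseline
    "Thuban":       {"ipc": 0.75, "clock_ghz": 3.3},        # assumed K10 IPC
    "Bulldozer":    {"ipc": 0.40, "clock_ghz": 3.3 * 1.3},  # 60% less IPC, promised +30% clock
}

for name, c in chips.items():
    perf = c["ipc"] * c["clock_ghz"]
    print(f"{name:12s}  IPC {c['ipc']:.2f} x {c['clock_ghz']:.2f} GHz = {perf:.2f}")
[/code]

Even granting the full +30% clock bump that never materialized, 0.40 x 4.29 = 1.72 still trails an assumed Thuban at 0.75 x 3.3 = 2.48, which is the scale of the problem.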

rscott
Dec 10, 2009

Cool Matty posted:

AMD bought the rest of the stock. Disaster control.

So they're going to end up dumped in the desert next to a bunch of copies of E.T.?

rscott
Dec 10, 2009

Zhentar posted:

Because it is useful for server workloads, and they only designed a single Bulldozer die. I would guess the decision was made to conserve engineering resources. I think the expense is much larger in terms of die space than power.

L3 cache is pretty simple, and designing a die with less (or no) L3 would probably boost their margins quite significantly, considering how fucking huge the thing is as it stands right now.
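For a sense of scale, here's a hedged back-of-envelope area estimate in Python. The SRAM cell size and array overhead multiplier are assumed ballpark figures for a 32nm process, not die-shot measurements; only the ~315 mm² Orochi die size is a published number:

[code]
# Rough estimate of die area for 8 MB of L3 SRAM at 32nm.
# CELL_UM2 and OVERHEAD are assumptions, not die-shot measurements.

CELL_UM2 = 0.17            # assumed 6T SRAM cell area at 32nm (um^2)
OVERHEAD = 2.5             # assumed factor for tags, sense amps, routing, redundancy
L3_BYTES = 8 * 1024 * 1024
DIE_MM2  = 315             # published Bulldozer (Orochi) die size

bits = L3_BYTES * 8
raw_mm2   = bits * CELL_UM2 / 1e6      # um^2 -> mm^2
total_mm2 = raw_mm2 * OVERHEAD

print(f"raw cells:     {raw_mm2:5.1f} mm^2")
print(f"with overhead: {total_mm2:5.1f} mm^2 "
      f"(~{100 * total_mm2 / DIE_MM2:.0f}% of the {DIE_MM2} mm^2 die)")
[/code]

The result swings a lot with the assumed overhead factor, but even the low end is a meaningful chunk of die you could trade for margin on a cut-down desktop part.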

rscott
Dec 10, 2009
TSMC having process issues? Well, I never!

rscott
Dec 10, 2009
Paper launches? I guess AMD needs something to get the taste of Bulldozer out of their mouths.

rscott
Dec 10, 2009
It's cheaper to use higher-speed memory on a narrower bus than it is to use a wide bus: less die space spent on the memory controller and so on. That being said, the GTX 680 has almost exactly the same memory bandwidth as the GTX 580, which is interesting to me because it's basically a bet on nVidia's part that games will be shader-limited in the future rather than bandwidth-limited.
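The arithmetic behind that comparison is just bus width times effective data rate; using the shipping reference specs of both cards:

[code]
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate (Gbps).
# Figures below are the reference-board specs for each card.

def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gbps

print(f"GTX 580: {bandwidth_gbs(384, 4.008):.1f} GB/s")  # wide bus, slower GDDR5
print(f"GTX 680: {bandwidth_gbs(256, 6.008):.1f} GB/s")  # narrow bus, faster GDDR5
[/code]

Both land at roughly 192 GB/s, so the 680's extra shader throughput comes with essentially zero extra bandwidth.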
