Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
It's only server/enterprise workloads that could plausibly make use of that extra speed, though. Intel's efficient use of cache means that the benefits of RAM speed have flattened out around DDR3-1333 for three or four generations now.


TOOT BOOT
May 25, 2010

Factory Factory posted:

It's only server/enterprise workloads that could plausibly make use of that extra speed, though. Intel's efficient use of cache means that the benefits of RAM speed have flattened out around DDR3-1333 for three or four generations now.

That and Dwarf Fortress (seriously)

TOOT BOOT fucked around with this message at 05:48 on Aug 3, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
More on this, conveniently enough: AnandTech just put out a review of some non-standard form-factor SuperMicro servers, and they digressed for a bit about server memory. Once you've read it, it becomes clear that DDR4 is basically designed to address all the technical hurdles to increasing per-processor memory density and speed: per-DIMM point-to-point communication instead of shared-signal channels, lower voltage (i.e. reduced wear and tear, lower power consumption), and specifications for higher speeds and possibly increased density through 3D transistors.

E: Upon further reading, AnandTech put out an overview and benchmark of server memory technologies, starting with a digression on some SuperMicro servers.
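A quick sketch of what the lower voltage buys: dynamic power scales roughly with f·C·V², so holding frequency and capacitance equal isolates the voltage term. (The 1.5 V and 1.2 V figures are the JEDEC nominals for DDR3 and DDR4; the equal-f, equal-C assumption is a simplification.)

```python
# Rough dynamic-power comparison for DDR3 (1.5 V nominal) vs. DDR4 (1.2 V).
# Dynamic power scales with f * C * V^2; holding f and C equal isolates
# the voltage term. This is a simplification, not a full power model.

def relative_dynamic_power(v_new, v_old):
    """Ratio of dynamic power at v_new to dynamic power at v_old."""
    return (v_new / v_old) ** 2

ratio = relative_dynamic_power(1.2, 1.5)
print(f"DDR4 @ 1.2 V uses ~{ratio:.0%} of DDR3's dynamic power "
      f"({1 - ratio:.0%} reduction), all else equal")
```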

Factory Factory fucked around with this message at 17:03 on Aug 3, 2012

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."
While preparing for an upgrade to Ivy Bridge, I've found that Xeon CPUs are cheaper than their i7 counterparts. As far as I can tell, aside from the i7 having a GPU (some Xeons even have them) and the Xeon supporting ECC memory, it looks like a good way to save $50 on upgrading.

Is there something I am missing out on? ECC memory isn't compulsory, is it? Consumer chipsets will work with the Xeons just fine?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Richard M Nixon posted:

While preparing for an upgrade to Ivy Bridge, I've found that Xeon CPUs are cheaper than their i7 counterparts. As far as I can tell, aside from the i7 having a GPU (some Xeons even have them) and the Xeon supporting ECC memory, it looks like a good way to save $50 on upgrading.

Is there something I am missing out on? ECC memory isn't compulsory, is it? Consumer chipsets will work with the Xeons just fine?
My understanding is you don't need to use ECC memory, and they'll work in consumer boards with a current BIOS. The downsides are the lack of overclockability compared to the K-series models and lower clockspeeds and turbo bins; also, the processor graphics and QuickSync are pretty important, so I wouldn't consider models without them. I'd look more closely at the Core i5 3570K. I know the i7 is sexy, but are you REALLY going to use 8 threads or the extra 2MB of cache? And even if you can, do you want to pay that much for the gains? You might consider just spending the difference on a high-end air cooler and a better motherboard and overclocking the hell out of it.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I built an ESXi host out of a SB Xeon and C206 workstation board using non registered/ecc ram and it works great.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."

Alereon posted:

My understanding is you don't need to use ECC memory, and they'll work in consumer boards with a current BIOS. The downsides are the lack of overclockability compared to the K-series models and lower clockspeeds and turbo bins; also, the processor graphics and QuickSync are pretty important, so I wouldn't consider models without them. I'd look more closely at the Core i5 3570K. I know the i7 is sexy, but are you REALLY going to use 8 threads or the extra 2MB of cache? And even if you can, do you want to pay that much for the gains? You might consider just spending the difference on a high-end air cooler and a better motherboard and overclocking the hell out of it.
QuickSync isn't something I'm familiar with, but after a few minutes of reading, I gather it's hardware support for video transcoding? I'm not certain that's a feature I would ever take advantage of, unless there are other uses for it as well.

Overclocking does sound like it would be an issue for Xeons. I'm assuming I could still increase the memory speed, but the locked multiplier would stop higher gains. I wasn't even looking at the K-series i7 to begin with, though. I often see all 8 'cores' being used fairly heavily; I wish I could see just how much the extra cache is helping.

Apparently triple-channel memory was a fad, and current-gen boards are dropping to 4 DIMM slots on all but the enthusiast models, so I'll be losing 2x2GB unless I upgrade that too...

I'm coming from a Haswell E: Bloomfield (I'm an idiot) i7, so the architectural improvements alone are an enticing reason to upgrade, but it looks like I need to thoroughly explore where my system is weakest at the moment.

mayodreams posted:

I built an ESXi host out of a SB Xeon and C206 workstation board using non registered/ecc ram and it works great.
Good to know that everything would work out if that's the way I go.

Richard M Nixon fucked around with this message at 00:30 on Aug 4, 2012

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One warning: on those C20x boards, UDIMMs rapidly become more expensive than RDIMMs once you start looking for 8GB sticks. That's fine if you'll never want more than 16GB in the machine, but if you're looking at 32GB or more, you're kinda screwed; you'd need to step up to E5 Xeons and different motherboards starting around $190. Then there's also the downclocking "feature" when you populate a board with lots of UDIMMs (wish I could find the paper showing exactly how it works, but I believe Sandy Bridge inherited this from Nehalem), and all of a sudden RDIMMs look like a wonderful bargain.
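As a sketch of that crossover - all prices below are made-up placeholders for illustration, not quotes for any real C20x-era part; the point is the price-per-GB flip once you need 8 GB sticks, not the exact dollars:

```python
# Price-per-GB comparison behind the UDIMM vs. RDIMM decision.
# All prices are hypothetical placeholders; the structure of the
# argument (unbuffered 8 GB carries a premium, registered doesn't)
# is what matters.

def price_per_gb(price_usd, size_gb):
    return price_usd / size_gb

sticks = {
    "4 GB UDIMM": (25, 4),   # hypothetical commodity price
    "8 GB UDIMM": (90, 8),   # hypothetical: 8 GB unbuffered carries a premium
    "8 GB RDIMM": (55, 8),   # hypothetical: registered 8 GB is commodity
}

for name, (price, size) in sticks.items():
    print(f"{name}: ${price_per_gb(price, size):.2f}/GB")
```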

JawnV6
Jul 4, 2004

So hot ...

Richard M Nixon posted:

I'm coming from a Haswell i7

You're from the future?!?!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

JawnV6 posted:

You're from the future?!?!

Based on no context at all, and just that quote, I'd guess he's reeeeally into the upcoming gen and just used "from" in a weird way.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Richard M Nixon posted:

I'm assuming I could still increase the memory speed ...

Sure, if you want nothing to happen except your power bill increasing slightly.

Shaocaholica
Oct 29, 2002

Fig. 5E
Soooooo....memory technology as currently implemented has hit a wall? Is there a bottleneck?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Not at all. In fact, we were just discussing how DDR4 will enable faster speeds and higher per-server densities in enterprise workloads.

What's happened is that Intel's CPU caches have gotten big enough and well-controlled enough that the chips just don't really need faster RAM than stock for peak performance in the vast majority of non-enterprise workloads.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Shaocaholica posted:

Soooooo....memory technology as currently implemented has hit a wall? Is there a bottleneck?

The opposite, actually. Huge advances in the efficiency of the storage-->execution pathway, the highly-integrated memory pathway, excellent cache usage, etc. mean that you don't need RAM faster than DDR3-1333. The only performance scaling these days is from what's below 1333 (DDR3-1066) up to 1333 at 9-9-9-24. Past that, Sandy Bridge sees scaling so minimal as to be pretty much negligible, and Ivy Bridge is even better. Servers benefit from different kinds of RAM, so we're strictly talking about desktop space here, and in that realm Sandy Bridge gets a roughly 3-6% performance increase specifically in HIGHLY RAM-intensive applications (the most in extensive compression/decompression operations, second most in encoding video, and some but not much in rendering audio - I went with 1600MHz because the price was identical and I'll take 2% for free, why not). On Ivy Bridge it's more like 2-5%, with a similar split in application performance improvements.

Videogames see percentages of a framerate increase between 1333mhz and 2100mhz+ overclocked baloney RAM.

It's not that we're bottlenecked, it's that Intel has managed to systematically integrate all the poo poo that used to bottleneck RAM performance, to the extent that even lower-speed RAM now sees the same performance as extremely high-speed RAM, and the processor can be taken full advantage of without paying a lot of money and drastically overvolting the 22nm memory controller for stupid fast RAM. :cool: (Integrated graphics does see scaling, iirc, but that's a pretty niche application still - maybe post-HD 4000 is where laptops start advertising faster RAM, since integrated graphics can show more substantial performance scaling.)



On the AMD side of things RAM performance scaling is still a pretty big deal because they're not nearly as well-designed to efficiently use resources, and that goes for Bulldozer, Llano, the whole works. Which reeeeally sucks for them. Having to buy expensive RAM only to get inferior performance anyway does not put smiles on people's faces. Such is life.
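For reference, the raw numbers behind those speed grades - a back-of-envelope sketch with common JEDEC-ish timings, not any particular kit. It shows why faster kits look big on paper while measured gains stay in the low single digits: peak bandwidth climbs, but true latency barely moves, and the caches hide most of the rest.

```python
# Theoretical peak bandwidth and true CAS latency for a few DDR3 speed
# grades, dual-channel. Timings are typical JEDEC-ish values, not a
# specific product's spec sheet.

def peak_bandwidth_gbs(transfers_mt_s, channels=2, bus_bytes=8):
    """Peak transfer rate in GB/s: MT/s x 8 bytes per 64-bit channel."""
    return transfers_mt_s * bus_bytes * channels / 1000

def cas_latency_ns(transfers_mt_s, cas_cycles):
    """Actual CAS latency in ns; the I/O clock is half the transfer rate."""
    clock_mhz = transfers_mt_s / 2
    return cas_cycles / clock_mhz * 1000

for mts, cas in [(1333, 9), (1600, 9), (2133, 11)]:
    print(f"DDR3-{mts}: {peak_bandwidth_gbs(mts):.1f} GB/s peak, "
          f"CAS {cas} = {cas_latency_ns(mts, cas):.1f} ns")
```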

Agreed fucked around with this message at 00:13 on Aug 4, 2012

Shaocaholica
Oct 29, 2002

Fig. 5E
But isn't there money to be made selling DDR4 to the masses even if they don't need it? What if you want to run server apps on consumer hardware? No DDR4 for you!

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
The market for those things is, what, a dozen drooling idiots and IT supervisors who are terrible at their jobs?

I mean, you're saying "Why not spend hundreds of thousands, if not millions, of dollars on engineering and fab design to raise the consumer cost of your product without adding any functionality or features?"

Shaocaholica
Oct 29, 2002

Fig. 5E

Factory Factory posted:

I mean, you're saying "Why not spend hundreds of thousands, if not millions, of dollars on engineering and fab design to raise the consumer cost of your product without adding any functionality or features?"

I'm not saying this is a good thing but isn't that how the market works? New tech = more sales. People recycling their old DDR3 ram isn't going to increase sales. If Intel already has a DDR4 controller designed then I don't see how the cost of putting that into consumer chips is all that staggering. Also, isn't it better for everyone to be on the same memory architecture?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
"Bigger numbers = better" stopped being a driving force in the market with the end of the Pentium 4 line. New technology is only a driver of sales if there's some actual benefit to using it. Giving Haswell DDR4 means:
  • Intel has to design and validate a new memory controller based on hard-to-find, brand new memory across every single product line, and cannot re-use any of their extensive work on DDR3. You can't just re-use the validation from one chip across a whole product line.
  • Motherboard manufacturers have to design entirely new memory topologies and interconnects for their motherboards and cannot re-use any of their extensive work on DDR3.
  • DRAM manufacturers have to shift production resources to making DDR4 SDRAM, and cannot re-use any of their extensive work on DDR3.
  • System manufacturers have to buy all these expensive new parts and cannot use any existing stocks of DDR3, and they may have difficulty getting all this new tech in volume.
So you're jacking up the cost of a system hugely. By the time it gets to the consumer, that $200 Haswell chip that performs a bit better actually adds $400 to the price tag because of all the extra cost needed on the RAM and motherboard. And you can't carry over your old RAM, and the new RAM won't be as cheap as the current stuff for years.

And what did this get you? A number that means nothing to most people is higher, and that's it. You don't launch Word faster, you don't render video faster, you don't play games faster. You have provided no value for the money. Fewer people will now buy your poo poo simply because it's more expensive, and they'd rather spend that much on something else. If the system were cheaper, sure, they'd buy, but it's not, so you lost sales.

It's doubly stupid when one of Haswell's big innovations is a chip-integrated VRM, which is specifically designed to make motherboards easier to design and cheaper, lowering the total cost of the system.
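The compounding-cost argument can be sketched numerically - every figure below is a hypothetical placeholder, and only the structure (early-adopter premiums stacking across components) is the point:

```python
# Back-of-envelope for how early-adopter DDR4 premiums could stack up
# to an "adds $400" platform cost. All numbers are made-up placeholders
# chosen for illustration, not sourced estimates.

premiums_usd = {
    "DDR4 DIMMs vs. equivalent DDR3": 200,  # new process, low volume
    "new-topology motherboard": 120,        # fresh layout/validation cost
    "CPU validation passed to buyer": 80,   # amortized engineering
}

total = sum(premiums_usd.values())
print(f"Estimated platform premium: ${total}")
```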

Factory Factory fucked around with this message at 01:25 on Aug 4, 2012

Shaocaholica
Oct 29, 2002

Fig. 5E
Aren't those the same catch-22s we've faced every time we've switched memory architectures? How did the previous transitions work out, and how will the DDR4 transition be different, if at all?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Because DDR, DDR2, and DDR3 were all smaller changes than going from DDR3 to DDR4, and those changes were matched with new CPU architectures or platforms that actually needed more bandwidth.

It's not good practice to invest money in things that are loving useless.

movax
Aug 30, 2008

DDR3 moving termination on-DIMM probably lowered the blood pressure of countless engineers; now it's only mildly irritating to do DDR memory layout.

Lower voltage for DDR4 should make mobile people happy too.

Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."
Doesn't the "rise" (well, they're not exactly taking over from dedicated GPUs yet, of course) of APUs necessitate more bandwidth eventually?

Llano benchmarks showed significant scaling with overclocked memory, if I recall. GPUs need all the bandwidth they can get, and if the performance of integrated GPUs keeps increasing I can see the need for increased bandwidth to the CPU/APU.

Being cost-effective to the point where it would make sense to stick with one over a dedicated GPU with its own memory bus is another matter, of course.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
In a laptop APU, sure, the lower power consumption, increased bandwidth, and eventually higher density would be great. For desktops, though, the extra cost of faster RAM reaches the point where a discrete GPU makes sense before the faster RAM stops adding performance.

But that's all focusing on AMD stuff. On Intel stuff, DDR3-1600 is still good enough to top out HD 4000 and a quad-core CPU both.

We're expecting Haswell to include a GPU much faster than HD 4000. But nobody except the engineers working on it knows whether it needs more RAM bandwidth at all, let alone whether it needs more than DDR3 can provide. Presumably it would, but it's strategically feasible that it will focus on improving compute and shader performance without trying to push its target screen resolution or sustainable AA levels too much.

I imagine that a lot of cost/benefit went into whether to match the chip with DDR4 or DDR3. The way the market is heading - putting as much performance as possible into smaller thermal envelopes and cheaper "good enough" systems - a switch to DDR4 doesn't really mesh with the strategy. Not at this point in the game, anyway.

I mean, look at the Radeon 4850/4870, the GeForce 680, or quad-core Sandy Bridge like the i7-2600K - instead of pushing for "bigger, faster, hotter," they all were compromise products tailored to serve the needs of a large market segment first, then scaled up or down afterwards. The 4870 was the highest-powered single GPU in its line, scaling up only through CrossFire; the 680 is the smaller, gaming-oriented version of Kepler, with Big Kepler apparently reserved only for HPC/enterprise compute products; and Sandy Bridge had a specific, incompatible revision for workstations and servers that skipped the on-chip graphics (a consumer-oriented feature) entirely in return for even more cores. All these products were/are extremely successful, game changers, and market-upsetters, and none of them were built on a "bigger is better" approach.

Shaocaholica
Oct 29, 2002

Fig. 5E
The lower-power/cooler angle is interesting. Aren't there way more sales of laptops than desktops anyway?

movax
Aug 30, 2008

Shaocaholica posted:

The lower-power/cooler angle is interesting. Aren't there way more sales of laptops than desktops anyway?

Yes - even five years ago, tech mags/"journalists" were predicting the doom of the desktop. Laptops have been outselling desktops for a while now.

Really, laptops used to be incredibly lovely (Pentium 4-M anyone?), but the performance gap has narrowed insanely. The biggest gap is probably GPU performance at this point, but hey, that's why you get a desktop.

Shaocaholica
Oct 29, 2002

Fig. 5E

movax posted:

Yes - even five years ago, tech mags/"journalists" were predicting the doom of the desktop. Laptops have been outselling desktops for a while now.

Really, laptops used to be incredibly lovely (Pentium 4-M anyone?), but the performance gap has narrowed insanely. The biggest gap is probably GPU performance at this point, but hey, that's why you get a desktop.

Did those external GPU boxes ever catch on? Seems like that would be cheaper than a whole desktop unless there are other bottlenecks to consider.

Even my C2D Thinkpad has a docking station that will take a single slot, half length PCIe GPU. It wouldn't take much to bump that up to a double slot/full length/200W+.

Shaocaholica fucked around with this message at 06:57 on Aug 4, 2012

movax
Aug 30, 2008

Shaocaholica posted:

Did those external GPU boxes ever catch on? Seems like that would be cheaper than a whole desktop unless there are other bottlenecks to consider.

Even my C2D Thinkpad has a docking station that will take a single slot, half length PCIe GPU. It wouldn't take much to bump that up to a double slot/full length/200W+.

The hardware has been around for awhile, but most consumer stuff like Thunderbolt is kind of bandwidth starved (PCIe 2.0 x4). The software support isn't quite there yet either, unfortunately. Work needs to be done by both OS vendors and hardware vendors. I'm sure the BIOS guys have some work to do if they want to roll with ACPI-mediated PCIe hot plug.
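The bandwidth-starvation point in numbers - assuming the link behind first-generation Thunderbolt is PCIe 2.0 x4, with roughly 500 MB/s of usable bandwidth per lane after 8b/10b encoding overhead:

```python
# Why an external GPU over an early Thunderbolt-class link is
# bandwidth-starved: PCIe 2.0 delivers ~500 MB/s usable per lane, so a
# x4 link provides a quarter of a desktop x16 slot's bandwidth.

def pcie2_bandwidth_gbs(lanes):
    """Approximate usable PCIe 2.0 bandwidth in GB/s (~0.5 GB/s/lane)."""
    return lanes * 0.5

print(f"Thunderbolt-class x4 link: {pcie2_bandwidth_gbs(4):.1f} GB/s")
print(f"Desktop x16 slot:          {pcie2_bandwidth_gbs(16):.1f} GB/s")
```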

Deathreaper
Mar 27, 2010
I remember reading about the Asus XG station a while back:
http://www.pcworld.com/article/128401/external_pciexpress_graphics_for_laptops.html

I thought it was a pretty neat idea but it never materialized :(

Zhentar
Sep 28, 2003

Brilliant Master Genius

Factory Factory posted:

In a laptop APU, sure, the lower power consumption, increased bandwidth, and eventually higher density would be great.

Even then, moving to a faster external memory interface isn't the answer. You want to solder some LPDDR2 right on top of your GPU. When your bus is only a millimeter long, with no routing issues, you can afford to make it ridiculously wide (512 bit or even 1024 bit) to make up for low frequencies, and you get massive power and latency savings in the process. And you get big cost and space savings when you don't have to put that stuff on the motherboard, to boot.
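A sketch of that wide-and-slow trade-off - the bus widths and clocks below are illustrative values, not any product's specs:

```python
# The trade Zhentar describes: a very wide on-package bus at a modest
# LPDDR2-class transfer rate can beat a fast but narrow external
# interface. Numbers are illustrative, not product specs.

def bandwidth_gbs(bus_bits, transfers_mt_s):
    """Peak bandwidth in GB/s for a bus of bus_bits at transfers_mt_s MT/s."""
    return bus_bits / 8 * transfers_mt_s / 1000

external = bandwidth_gbs(bus_bits=128, transfers_mt_s=1600)    # dual-channel DDR3-1600
on_package = bandwidth_gbs(bus_bits=1024, transfers_mt_s=400)  # wide, slow, LPDDR2-ish

print(f"128-bit @ 1600 MT/s:  {external:.1f} GB/s")
print(f"1024-bit @ 400 MT/s: {on_package:.1f} GB/s")
```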

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Agreed posted:

that you don't need RAM faster than DDR3 1333mhz.

When I built my Sandy Bridge 2500K system, I could swear the sweet spot was 1600..

Of course, if you start talking about using integrated graphics or AMD Fusion then RAM speed matters more.

Edit: Yeah, it was an AnandTech article: http://www.anandtech.com/show/4503/sandy-bridge-memory-scaling-choosing-the-best-ddr3/8

vvv Their wording: "The sweet spot appears to be at DDR3-1600".

Not that it matters much really, since 1600 is cheap anyway. But the message of staying away from the "EXTREME" stuff is, as usual, a relevant one.

HalloKitty fucked around with this message at 23:08 on Aug 4, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HalloKitty posted:

When I built my Sandy Bridge 2500K system, I could swear the sweet spot was 1600..

Of course, if you start talking about using integrated graphics or AMD Fusion then RAM speed matters more.

Edit: Yeah, it was an AnandTech article: http://www.anandtech.com/show/4503/sandy-bridge-memory-scaling-choosing-the-best-ddr3/8

We take different messages away from the same article. The performance difference between those two speeds (or, for that matter, between 1333mhz and the fastest stuff on the market) is not significant according to their own benchmarks, unless you do a ton of compression/decompression or video encoding.

Edit: Nothing on this page seems particularly suggestive that you should choose spendy fast RAM over abundant commodity 1333mhz RAM, to my eyes. What are you seeing there that makes 1600 the "sweet spot?"

Shaocaholica
Oct 29, 2002

Fig. 5E
There's a whopping <3% price difference between the cheapest 1333 2x4GB kit on newegg and the 1600 kit. $39 vs $40. Why not?
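For the record, that works out to:

```python
# The $39-vs-$40 arithmetic: the step up from the cheapest 1333 kit to
# the 1600 kit is a ~2.6% premium, comfortably under 3%.

def premium_pct(cheap, dear):
    return (dear - cheap) / cheap * 100

print(f"Premium: {premium_pct(39, 40):.1f}%")
```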

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

Shaocaholica posted:

There's a whopping <3% price difference between the cheapest 1333 2x4GB kit on newegg and the 1600 kit. $39 vs $40. Why not?
I went with 1600mhz G-Skill when I built my 2600K system since the price difference was so minimal, so I agree with going up that far.


I sold my 2133mhz review kit though since I'd never use it.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Shaocaholica posted:

There's a whopping <3% price difference between the cheapest 1333 2x4GB kit on newegg and the 1600 kit. $39 vs $40. Why not?

I'm running 16GB of 1600mhz RAM in my system, which I built in June 2011 when that cost a poo poo-lot more than it does now. At a $1 difference, I'd get the 1600MHz kit too. But...

The answer to "why not spend more money than you have to?" is an individual question and a slippery slope. You are asking the wrong person, really, I've got a 2600K (from when there was no 2700K), my motherboard's a P67 Sabertooth model, and I have a GTX 580 and a GTX 680 in my computer, 3 SSDs, and 4TB of HDD space. I buy needlessly expensive computer parts regularly. Come on aboard, you can start with RAM and work your way up.

movax
Aug 30, 2008

Agreed posted:

I'm running 16GB of 1600mhz RAM in my system, which I built in June 2011 when that cost a poo poo-lot more than it does now. At a $1 difference, I'd get the 1600MHz kit too. But...

The answer to "why not spend more money than you have to?" is an individual question and a slippery slope. You are asking the wrong person, really, I've got a 2600K (from when there was no 2700K), my motherboard's a P67 Sabertooth model, and I have a GTX 580 and a GTX 680 in my computer, 3 SSDs, and 4TB of HDD space. I buy needlessly expensive computer parts regularly. Come on aboard, you can start with RAM and work your way up.

Also remember, the above poster has no plans to put his kid through college ;)

Or those parts will start appearing in SA-Mart.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

movax posted:

Also remember, the above poster has no plans to put his kid through college ;)

Or those parts will start appearing in SA-Mart.

The point is that whether it's $1 or $10 or $100 or $1000, overspending is overspending. I can justify many aspects of the system, which is used professionally for real-time audio work... but some of it is just totally indefensible, and the answer is "got it because I wanted it, even though the price difference isn't in line with the performance improvement." Feel free to spend $1, but what about that dude in the parts picking thread trying to get people to spend a $30 premium on 2100mhz+ DDR3? Hey hey, get a 5% improvement! (in stuff you won't do...)

I don't judge how people spend their money but we all try to get people set up with sensible rigs, right? And any wasted money could have been spent better elsewhere or saved. Any amount.

KillHour
Oct 28, 2007


Sometimes, it's worth it to spend money on the self satisfaction alone - even if you never use it.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

Sometimes, it's worth it to spend money on the self satisfaction alone - even if you never use it.

This isn't an argument that I'm trying to make, rather a perspective that's taken more or less officially in the parts picking thread. We try to get people to lay out their needs, and from there we can give them a reasonable budget. If they're gamers running a 27" monitor or professionals who bring work home doing a lot of compiling or whatever, nobody tries to say "ok, you need an i3 system, and integrated graphics should be ample" - but there are a lot of cases where spending more gets you gently caress, all, in actual return on investment. Unless you just like the act of parting with your money for its own sake - in which case there have gotta be better ways to do it than wasting it on capabilities you'll never notice in a computer that will never be used to its limits.

We're really penny-pinching talking about a literal $1 difference given how little a buck is worth, but it's an attitude. And the answer is that for the vast, vast, vast, vast majority of users, there will never be a good reason to have picked anything greater than DDR3 1333mhz for Sandy Bridge or Ivy Bridge systems. They just will not ever do anything where the extremely-special-limited-case 2% performance difference is of significance to them.

eggyolk
Nov 8, 2007


You're right about RAM, but when people are trying to budget build, that $30 wasted on memory could be the difference between running their favorite game at 60fps versus 30fps. If you've got Benjamins to blow then why bother asking for advice? We're here for the majority of computer builders who are trying to stretch every last dollar for every ounce of usable performance. If someone wants to blow money on a useless upgrade then fine, but don't criticize us for trying to deter them.


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

:psyduck: I'm not - I'm the guy saying that it's a waste of money to get DDR3-1600 even if it only costs an extra buck. Do it anyway if you must, but it's still not going to help anything the vast majority of people do. How did you get "that guy thinks we're jerks for trying to help people spend on a budget" from any of my posts? Hell, I even said this:

Agreed posted:

This isn't an argument that I'm trying to make, rather a perspective that's taken more or less officially in the parts picking thread. We try to get people to lay out their needs and then from there we can give them a reasonable budget.

and I'm a regular poster in the parts picking thread, trying to do just that.
