movax
Aug 30, 2008

necrobobsledder posted:

This is the computer architecture undergraduate gold standard textbook most CS / CE students use. Be forewarned that you're not going to really understand the newfangled features processors have been using for 20+ years out of it, but you'll start to get the theory and basic fundamentals of what makes ISAs and hardware tick (no pun intended). My class had us write some ASM and glue some software-emulated components together to create an ISA, set up I/O, and load an OS. We also wrote some hilarious software exploit of our own ISA because so many students were Java weenies. The previous class I had in digital logic had us write a small CPU and I believe a DRAM with some Verilog (early on, just with RTL-level circuit emulators). The EE equivalent course would have you do it on a breadboard or in PSpice. However, true to most engineering courses, we learned far more practical things in our hands-on embedded systems course where we started writing kernels, loading them onto real Atmel-based CPUs, and writing our own network and I/O schedulers as exercises. Although by then I had already done a lot of this in my internships... but it was fun to slack off in class and still do well because you'd already done it before in the Real World when people were busting their rear end around you.

Hah, exactly the book I thought it was, I thought it was pretty good. Reminds me that Imagination Technologies bought 'em at the end of last year. Most of my MIPS experience has been low-level stuff for the PS1/PS2 and using the PIC32 from Microchip (I think that's MIPS M4K). A lot of Japanese vendors (NEC) seem to use MIPS, maybe they're used to synthesizing it / laying it out and have no reason to go over to ARM.

WHERE MY HAT IS AT
Jan 7, 2011

necrobobsledder posted:

This is the computer architecture undergraduate gold standard textbook most CS / CE students use. Be forewarned that you're not going to really understand the newfangled features processors have been using for 20+ years out of it, but you'll start to get the theory and basic fundamentals of what makes ISAs and hardware tick (no pun intended). My class had us write some ASM and glue some software-emulated components together to create an ISA, set up I/O, and load an OS. We also wrote some hilarious software exploit of our own ISA because so many students were Java weenies. The previous class I had in digital logic had us write a small CPU and I believe a DRAM with some Verilog (early on, just with RTL-level circuit emulators). The EE equivalent course would have you do it on a breadboard or in PSpice. However, true to most engineering courses, we learned far more practical things in our hands-on embedded systems course where we started writing kernels, loading them onto real Atmel-based CPUs, and writing our own network and I/O schedulers as exercises. Although by then I had already done a lot of this in my internships... but it was fun to slack off in class and still do well because you'd already done it before in the Real World when people were busting their rear end around you.

Cool, maybe I'll order that. I'm hoping we'll get more into this kind of stuff this year in our RTOS dev course, but this gives me something to chew on until the end of summer.

Dotcom656
Apr 7, 2007
I WILL TAKE BETTER PICTURES OF MY DRAWINGS BEFORE POSTING THEM

movax posted:

Hah, exactly the book I thought it was, I thought it was pretty good. Reminds me that Imagination Technologies bought 'em at the end of last year. Most of my MIPS experience has been low-level stuff for the PS1/PS2 and using the PIC32 from Microchip (I think that's MIPS M4K). A lot of Japanese vendors (NEC) seem to use MIPS, maybe they're used to synthesizing it / laying it out and have no reason to go over to ARM.

I learned MIPS (or rather a clone of it) in my CS computer organization class and later in my architecture class, and I thought it would be useless. I didn't realize NEC and the PS1/PS2 used it a lot. I just changed my major to Comp Engineering, so I'm picking up the books people in here recommend since I really like the architecture side of things.

Just took a closer look at the textbook. I used a slightly different edition for my CS-level class that has the subtitle "The Hardware/Software Interface", same authors as well. I guess they are the gold standard; I figured it might have been an older edition.

Dotcom656 fucked around with this message at 14:10 on Jul 19, 2013

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I found out I used the second edition. Wow, they spent something like 10 years getting through two editions, and then in the next 8 they pump out three? Maybe they needed more money due to the decline in their company... but it's not like Hennessy needed the money, given he's president of Stanford and all.

It's so common that people know of it the way they know SICP, CLRS, or perhaps the Tanenbaum networking book - it's H&P / HP for the hardware side of the CS/CE spectrum.

JawnV6
Jul 4, 2004

So hot ...

Dotcom656 posted:

Just took a closer look at the textbook. I used a slightly different edition for my CS-level class that has the subtitle "The Hardware/Software Interface", same authors as well. I guess they are the gold standard; I figured it might have been an older edition.

Two different books. Hw/sw interface is the intro text that goes over the basic 5 stage pipeline, Quantitative Approach goes into more depth on superscalar, OoO, Tomasulo, all that fun stuff.
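
To make the "basic 5 stage pipeline" part concrete, here's a minimal toy sketch in C (not from either book; every name in it is invented for the example) of instructions marching through IF/ID/EX/MEM/WB one stage per cycle, with hazards, stalls, and forwarding all ignored. Quantitative Approach is where the superscalar/Tomasulo machinery picks up from this picture.

```c
/* Toy model of the classic five-stage pipeline: with no stalls or hazards,
 * instruction i sits in stage (cycle - i) during a given cycle. */
#include <stdio.h>

#define STAGES 5
static const char *stage_names[STAGES] = { "IF", "ID", "EX", "MEM", "WB" };

int main(void) {
    const char *program[] = { "lw", "add", "sub", "sw" };   /* made-up instruction stream */
    const int n = sizeof(program) / sizeof(program[0]);

    for (int cycle = 0; cycle < n + STAGES - 1; cycle++) {
        printf("cycle %d:", cycle + 1);
        for (int i = 0; i < n; i++) {
            int stage = cycle - i;                          /* which stage instruction i occupies */
            if (stage >= 0 && stage < STAGES)
                printf("  %s:%s", program[i], stage_names[stage]);
        }
        printf("\n");
    }
    return 0;
}
```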

Dotcom656
Apr 7, 2007
I WILL TAKE BETTER PICTURES OF MY DRAWINGS BEFORE POSTING THEM

JawnV6 posted:

Two different books. Hw/sw interface is the intro text that goes over the basic 5 stage pipeline, Quantitative Approach goes into more depth on superscalar, OoO, Tomasulo, all that fun stuff.

Oh. Awesome! I'll have to start reading this one then.

WhyteRyce
Dec 30, 2001

necrobobsledder posted:

I found out I used the second edition. Wow, they spent something like 10 years getting through two editions, and then in the next 8 they pump out three? Maybe they needed more money due to the decline in their company... but it's not like Hennessy needed the money, given he's president of Stanford and all.


I used to think they intentionally left typos in there so they could have an excuse to crank out new editions

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech's Podcast 22 is up, a special edition with Dustin Sklavos focusing on Haswell on the desktop from an enthusiast perspective.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Anand posted:

In power-sensitive architectures, frequency headroom is wasted power. As power is more sensitively and effectively engineered for, overclocking will be over.

:smith:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The hell of it is that there's just no need to kill overclocking, and they can keep making money off of enthusiasts who are grabbing performance that isn't at all "free" - overclocking costs a premium for a reason.

He's pretty much wrong about "stock voltage overclocking is done," but he's got a reasonable perspective in general. I also disagree with him strongly on the "200MHz isn't anything" idea. No, that's 200MHz. On every core. Pretty useful for multithreaded real-time applications, drat it, I don't want to have to spend a grand to get this kind of performance on a server setup :smith:

Edit 2: I wish the louder guy was better at making his case. He seems constantly put on the spot, and even though there are good reasons for a lot of the stuff he's trying to say, he's just not saying it very well. But he is saying it loudly, which comes across very poorly compared to the calm, collected manner in which the AnandTech guy is making his case. It feels like the AnandTech guy knows what he's talking about and the other guy is ill-prepared.

How about bringing up that crappy IHS installation, beyond just "bad thermal paste" and into "it's too far from the die, seriously, this could possibly be intentional market segmentation if the -E series gets properly soldered IHSes while people who don't want to pay $600+ get to gently caress around with fixing a bush-league mistake that might not even be a mistake, since there's no performance pressure from a strong competitor in the desktop space." What the heck, man.

Final edit: Well no wonder, that's the Anand in AnandTech, right? The overclocker guy just clearly doesn't have the same grasp and isn't really trying to be a futurist the way that Anand is; the end of the podcast makes that clear. IPC improvements and power efficiency are very exciting, but performance is all that matters when the difference in power draw can be measured in fractions of a lightbulb. Though I felt that was somewhat disingenuous as a thing to be concerned about in the desktop space - you're not going to feel the extra heat from 140W on a super overclocked CPU compared to half the wattage on a non-overclocked one, that's just not a realistic concern in my opinion. Obvious concern in the mobile space, non-issue for desktops. Lower power consumption all around is great, but overclocking isn't dead yet and shouldn't be - nor should the market be as artificially segmented as it is with no real competition from AMD these days.

Wish a different, better-prepared guy had come to discuss the topic, but I gleaned from their discussion that there must have been some kind of Facebook beef or something, hell, I don't know.

Agreed fucked around with this message at 20:52 on Jul 19, 2013

Chuu
Sep 11, 2004

Grimey Drawer
I think you might be slightly missing the point. We used to get such great overclocking chips because the microarchitectures were engineered to have lots of headroom. One of the tradeoffs was increased power consumption. Unless Intel starts creating separate architectures for the desktop/server market and the mobile market, they simply are not willing to make that tradeoff anymore. It doesn't really matter that enthusiast users would rather keep things the way they were; their voice is just not strong compared to mobile and server needs.

We're not in a world yet where 'stock voltage overclocking is done' but in many ways it's a failure on Intel's part if we don't get there.

(By the way, I personally love the AnandTech podcast; that episode was just not one of the better examples. Try the one from the week before, where you have an editor who is willing to engage with Anand rather than constantly backing off.)

Chuu fucked around with this message at 06:30 on Jul 20, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Oh, I get it, I just don't like the idea of it. I'm sure I'll come around, and everyone else will too, because 1. we won't have a choice, and 2. it probably won't matter anymore.

Overclocking for professionals is so prosumer it hurts. Once you get out of that niche and start getting a regular clientele, it makes a hell of a lot more sense to run a system that isn't being pushed beyond its boundaries, just for stability's sake; 100% stable because Dell or whoever says so and will back it up is fantastic once you can afford it. I run my CPU fast to grab the best single-threaded AND multi-threaded performance I can without spending $2500 minimum for a comparably fast Xeon system, because real-time audio with modern high-footprint plugins becomes vastly more demanding the lower the latency gets. Right now, it saves me a lot of money.

Overclocking for fun is a different thing, and in the past it has been pretty cool. Moving my old Q9550 from its stock clocks to 3.4GHz/core without even touching the voltage was neat as poo poo. Yes I will take 600MHz for free, thankya! Sandy Bridge will do that too, to a nice degree... But something about these new processors and their 3D transistors, be it size or density or just microarchitectural changes (Ivy was a shrink, but it was also a significant change in its own right), means they just don't behave like the previous ones did. In almost every usage case this is a good thing, because performance doesn't stagnate for the end user the way it does for the overclocker; it just doesn't go up as fast as power usage goes down.

And for most of their customers that is the end of the story. I don't blame them, but I can still be upset about it because for now it affects me :v:

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I think I agree with Anand to the extent that an additional 5% isn't much, but it is galling that the only reason we're not getting an extra 5-10% is that Intel doesn't feel the need to have a good thermal interface between the CPU die and the heat spreader, since AMD is so far behind that there is no competitive pressure. All I really want is for Intel to fix the adhesive thickness issue, go back to a soldered connection, and stop removing ISA features like TSX from K-edition SKUs.

I think the biggest reason Intel did these things was to prevent an overclocked 4770K from being a compelling alternative to Ivy Bridge-E. This shouldn't be necessary because Ivy Bridge-E is an eight-core processor, but Intel only sells six-core harvested versions to consumers since AMD can't push them to offer more. An overclocked quad-core Haswell on an 8-series platform obviously beats a stock hex-core Ivy Bridge-E on a 7-series platform, so Intel removed the ISA features that helped the most and limited the amount of easily attainable overclocking.

ehnus
Apr 16, 2003

Now you're thinking with portals!
I've got a Pineview mini-ITX machine I run Linux on but it's a couple years old. I like that the machine uses basically no power and is completely silent, but it would be kinda nice to upgrade. Any idea when the next generation of Atoms will be available in a desktop-ish form factor?

KennyG
Oct 22, 2002
Here to blow my own horn.

Factory Factory posted:

I'd prefer October/November with 6/8 core CPUs, but it looks like I may have been conflating the 6/8 core stuff with Haswell-E (due out sometime in 2014 and, I'd say, 100% worth waiting for over Ivy Bridge-E).

Care to share why?

TSX? Is it really that big a deal unless you are running a database server? I've heard some people talk about it lately, but I can't really figure out why someone would care that much.

Fake Edit: 33% more cores and DDR4. Ok, I'll buy that argument.

Can someone please explain why someone not doing HFT or similar would care about TSX?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Well, from the standpoint of market penetration of a Good Idea, it's pretty dumb to cut off ISA improvements from people just for the purposes of market segmentation. That'll go far... toward making many apps that *could* benefit from TSX not do so until it's switched on for more users. And it helps nobody.

r.y.f.s.o.
Mar 1, 2003
classically trained
I mentioned this in the building thread but here seems better:

I just picked up someone else's custom-built i5-4670S machine for a pretty good price, and I've been googling around so I'm reasonably sure, but I want to double-check: the S variant is just a lower base clock speed, and that's it, right?

KennyG
Oct 22, 2002
Here to blow my own horn.

Agreed posted:

Well, from the standpoint of market penetration of a Good Idea, it's pretty dumb to cut off ISA improvements from people just for the purposes of market segmentation. That'll go far... toward making many apps that *could* benefit from TSX not do so until it's switched on for more users. And it helps nobody.

I agree with you about the idea that instruction sets should not be a segmenting factor in most cases, especially not in minor SKU distinctions like -R vs -K vs stock, etc. (Xeon vs i3/5/7 may be different). However, given that the -E chips don't suffer from that, I think you misinterpreted my question. Ivy Bridge doesn't have TSX, so Ivy-E vs Haswell-E doesn't need to worry about market segmentation preventing future adoption (especially given the relatively small market of the -E chips). I want to know why anyone would want/need (in concrete terms) a TSX-enabled chip.

Sell the feature as if I were a college freshman, or worse, a CEO. There are costs associated with getting a TSX chip, and at this point, other than high-frequency trading, I can't come up with a use case that substantially benefits from it.

Ninja Rope
Oct 22, 2005

Wee.
If not TSX, VT-d should be in every SKU.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

KennyG posted:

Sell the feature as if I were a college freshman, or worse, a CEO. There are costs associated with getting a TSX chip, and at this point, other than high-frequency trading, I can't come up with a use case that substantially benefits from it.

TSX could be adapted to game engines fairly easily; there are a bunch of pathfinding algorithms that would probably run a lot faster with TSX.
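
For the curious, a rough sketch of what that could look like with Intel's RTM intrinsics; the function and variable names here are made up for the example, and real code would check CPUID for RTM support and retry a few times before falling back to the lock:

```c
/* Sketch of an RTM-wrapped update to some shared pathfinding state, with a
 * mutex fallback. Assumes the CPU reports RTM support (check CPUID in real
 * code). Build with: gcc -O2 -mrtm -pthread rtm_sketch.c */
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static int fallback_held = 0;      /* read inside transactions so they abort if the lock is taken */
static int open_list_size;         /* hypothetical shared pathfinding state */

static void update_open_list(int delta) {
    if (_xbegin() == _XBEGIN_STARTED) {
        if (fallback_held)         /* someone is in the locked path: bail out */
            _xabort(0xff);
        open_list_size += delta;   /* the actual shared update, done transactionally */
        _xend();
        return;
    }
    /* Transaction aborted (conflict, capacity, interrupt...): take the real lock. */
    pthread_mutex_lock(&fallback_lock);
    fallback_held = 1;
    open_list_size += delta;
    fallback_held = 0;
    pthread_mutex_unlock(&fallback_lock);
}

int main(void) {
    update_open_list(+1);
    return 0;
}
```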

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

ehnus posted:

I've got a Pineview mini-ITX machine I run Linux on but it's a couple years old. I like that the machine uses basically no power and is completely silent, but it would be kinda nice to upgrade. Any idea when the next generation of Atoms will be available in a desktop-ish form factor?

I think "late 2013" was the last firm guidance we got at the end of 2012, though I'd be surprised if you had something nettopish to buy before Q1 2014. These processors may be under the Celeron or Pentium branding on the desktop.

r.y.f.s.o. posted:

I mentioned this in the building thread but here seems better:

I just picked up someone else's custom-built i5-4670S machine for a pretty good price, and I've been googling around so I'm reasonably sure, but I want to double-check: the S variant is just a lower base clock speed, and that's it, right?

S-series processors have a 65W TDP instead of 84W; this means a 300MHz lower base CPU clock and a somewhat lower average graphics clock. Operation on fewer than four cores shouldn't be affected TOO much thanks to Turbo mode.

movax
Aug 30, 2008

Ninja Rope posted:

If not TSX, VT-d should be in every SKU.

Definitely; Windows XP Mode, at a minimum, is an example of "widely deployed" virtualization. There's market segmentation, and then there's just petty segmentation. VT-d/VT-c certainly don't need to be in every SKU (and part of it is the chipset's role anyway), but everyone should have virtualization extensions.

I'm sure some Intel goon could comment on this more than I could (and then promptly get fired), but I doubt VT-x is a feature that can be used to bin chips differently, so it isn't that. (Unless you're somehow laser-fusing off a lot of extra functionality in one fell swoop.)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I don't see the point in VT-d outside enterprise scenarios. PEG passthrough would be the only interesting consumer case, and there's still no real effort in making it work properly, from both software and hardware vendors. At this point, with Windows 8 shipping Hyper-V by default, I'm surprised that neither NVidia nor ATI bothered to use that enlightened driver VMBus poo poo to allow for accelerated graphics inside a VM.

TSX, however, I see more of an advantage in. Not the transactional memory so much as the lock elision. The operating system and multithreaded apps use locks all over the drat place, so reducing overhead and friction from synchronisation would be a pretty big plus.
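
As a sketch of the lock-elision side, here's roughly what a toy spinlock looks like with GCC's HLE hints (minimal example, not production code; conveniently, on non-TSX CPUs the XACQUIRE/XRELEASE prefixes are ignored and this degrades to an ordinary spinlock):

```c
/* Toy spinlock with hardware lock elision via GCC's HLE memory-order hints.
 * Build with: gcc -O2 -mhle hle_sketch.c */
static int lock_var = 0;
static long shared_counter;

static void elided_lock(int *l) {
    /* XACQUIRE-prefixed exchange: the hardware may elide the write and run
     * the critical section transactionally instead. */
    while (__atomic_exchange_n(l, 1, __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        while (__atomic_load_n(l, __ATOMIC_RELAXED))   /* spin without hammering the line */
            __builtin_ia32_pause();
}

static void elided_unlock(int *l) {
    /* XRELEASE-prefixed store: commits the elided region. */
    __atomic_store_n(l, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}

void bump(void) {
    elided_lock(&lock_var);
    shared_counter++;              /* critical section */
    elided_unlock(&lock_var);
}

int main(void) {
    bump();
    return 0;
}
```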

Gonkish
May 19, 2004

Random question, but how do I calculate the wattage of my CPU? I'm the one that ended up getting a golden chip and have my i5-4670K at 4.8 GHz. I don't know what the draw is on it, though, and I'm wondering if I can support a GTX 770 in addition to the overclock with my SeaSonic X650 Gold 650W PSU.

Wattage makes my head hurt. I hate math, etc.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

You can, don't worry about it.
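
For the curious, the rough math behind that "don't worry about it" looks something like the sketch below; every figure in it is a loose assumption for illustration, not a measured or official number.

```c
/* Back-of-the-envelope PSU budget: assumed draws for an overclocked 4670K,
 * a GTX 770, and the rest of the system versus a 650 W unit. */
#include <stdio.h>

int main(void) {
    int cpu_w  = 150;   /* i5-4670K at ~4.8 GHz, well above its 84 W TDP (assumed) */
    int gpu_w  = 230;   /* GTX 770 board power, ballpark */
    int rest_w = 75;    /* board, RAM, drives, fans (assumed) */
    int psu_w  = 650;   /* SeaSonic X650 rating */

    int load = cpu_w + gpu_w + rest_w;
    printf("estimated load: %d W of %d W (%d W headroom)\n",
           load, psu_w, psu_w - load);   /* 455 W of 650 W, ~195 W spare */
    return 0;
}
```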

Gonkish
May 19, 2004

Alright, just wanted to be absolutely positive, prior to dropping the cash. :) Thanks!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Only caveat would be "do you plan to SLI down the road?", in which case you'd be running up against the limits of the amperage and also probably hitting component drift, since this would presumably be some time after the 770 stops being great and you actually need another one to make games pretty. But that's a bad plan, so don't do that anyway :unsmith:

Gonkish
May 19, 2004

Yeah, my current plan is get the 770, and then, maybe two or three (or four?) years down the road, replace it.

canyoneer
Sep 13, 2005


I only have canyoneyes for you

ehnus posted:

I've got a Pineview mini-ITX machine I run Linux on but it's a couple years old. I like that the machine uses basically no power and is completely silent, but it would be kinda nice to upgrade. Any idea when the next generation of Atoms will be available in a desktop-ish form factor?

http://liliputing.com/2013/06/intel-nuc-mini-computers-with-haswell-chips-coming-in-q3-2013.html

If you want to go all out, the rumor is that some of the Haswell-based NUCs will have totally fanless, passive cooling later this year

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.

canyoneer posted:

http://liliputing.com/2013/06/intel-nuc-mini-computers-with-haswell-chips-coming-in-q3-2013.html

If you want to go all out, the rumor is that some of the Haswell-based NUCs will have totally fanless, passive cooling later this year

Exactly what I've been holding out for. :allears:

Ninja Rope
Oct 22, 2005

Wee.
I don't suppose any of them will support IPMI, will they? :getin:

PUBLIC TOILET
Jun 13, 2009

Is it safe to say that if I were to build a new machine with Xeon, there won't be any new Xeon CPUs until next year? Everything I've heard seems to indicate that new Xeons won't be out until next year and even then they will technically be Ivy Bridge Xeons and not Haswell ones.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

PUBLIC TOILET posted:

Is it safe to say that if I were to build a new machine with Xeon, there won't be any new Xeon CPUs until next year? Everything I've heard seems to indicate that new Xeons won't be out until next year and even then they will technically be Ivy Bridge Xeons and not Haswell ones.

Ivy Bridge Xeons (E5s and E7s) are coming out in September. Haswell Xeons are coming out next year. The Xeon E3 v3 is already Haswell-based but tops out at 4 cores.

cycleback
Dec 3, 2004
The secret to creativity is knowing how to hide your sources
Has anyone seen any information on Ivy Bridge E5-1600 series chips? Any idea of a release date? I am debating purchasing another workstation with a Sandy Bridge E5-1620 or waiting to see if Intel releases updated Ivy Bridge E5-1600 chips. I am a little concerned about the new E5 Xeons because all I have seen are large core counts. I am software-license limited and would prefer fewer cores with higher clock speeds, i.e. the E5-1600 series.

KennyG
Oct 22, 2002
Here to blow my own horn.

cycleback posted:

I am software-license limited and would prefer fewer cores with higher clock speeds, i.e. the E5-1600 series.

This brings up an interesting (and often overlooked) thorn in Intel's side. It's no secret that about 8-10 years ago, Intel realized they were approaching a diminishing-returns problem with per-core processor efficiency. Obviously, they went multi-core. The problem is that a lot of the licensing makes you pay dearly for the privilege. In the enterprise space, the vendors who licensed based on capability usually did it by processor count/architecture (see Oracle). With the proliferation of multi-core chips, most vendors have stuck with a core-based definition of "processor" rather than the socket-based definition most people think of. As core counts skyrocket over the next 10-20 years in the search for more computing power, the software companies' licensing methodologies present a serious challenge to Intel's adoption rate and its bottom line.

Oracle's enterprise licensing model is $47,500 * cores * architecture multiplier. Oracle's x86 multiplier has been 0.5 for years. Today a two-socket, 8-core-per-socket Xeon box will cost you $10k in hardware but $47,500 * 16 * 0.5 in Oracle DB licensing, or $380,000 (plus ~$100k a year in 'maintenance'). Heaven help us!

It will be interesting to see when/if Intel starts putting on a full-court press to get the big vendors to relax their core-based pricing and move back to a socket-based model.
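
Putting the post's numbers into a quick sketch for contrast (the $47,500 list price and the 0.5 x86 core factor are from above; the per-socket alternative is purely hypothetical):

```c
/* Per-core licensing as described above versus a hypothetical per-socket model. */
#include <stdio.h>

int main(void) {
    double list_price  = 47500.0;          /* per-"processor" list price from the post */
    double core_factor = 0.5;              /* Oracle's x86 core factor */
    int sockets = 2, cores_per_socket = 8;

    double per_core   = list_price * sockets * cores_per_socket * core_factor;
    double per_socket = list_price * sockets;   /* hypothetical socket-based pricing */

    printf("core-based licensing:        $%.0f\n", per_core);    /* $380,000 */
    printf("socket-based (hypothetical): $%.0f\n", per_socket);  /* $95,000  */
    return 0;
}
```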

Aquila
Jan 24, 2003

KennyG posted:

This brings up an interesting (and often overlooked) thorn in Intel's side. It's no secret that about 8-10 years ago, Intel realized they were approaching a diminishing-returns problem with per-core processor efficiency. Obviously, they went multi-core. The problem is that a lot of the licensing makes you pay dearly for the privilege. In the enterprise space, the vendors who licensed based on capability usually did it by processor count/architecture (see Oracle). With the proliferation of multi-core chips, most vendors have stuck with a core-based definition of "processor" rather than the socket-based definition most people think of. As core counts skyrocket over the next 10-20 years in the search for more computing power, the software companies' licensing methodologies present a serious challenge to Intel's adoption rate and its bottom line.

Oracle's enterprise licensing model is $47,500 * cores * architecture multiplier. Oracle's x86 multiplier has been 0.5 for years. Today a two-socket, 8-core-per-socket Xeon box will cost you $10k in hardware but $47,500 * 16 * 0.5 in Oracle DB licensing, or $380,000 (plus ~$100k a year in 'maintenance'). Heaven help us!

It will be interesting to see when/if Intel starts putting on a full-court press to get the big vendors to relax their core-based pricing and move back to a socket-based model.

Holy gently caress, I'll remember that number every time I get angry at PostgreSQL.

Dual-socket Ivy Bridge Xeons should be very, very fast by all indications; hopefully as fast as the E3-12xx v2 Ivy Bridge chips that are already available (in single socket) in four-core variants. I am, however, a bit skeptical that dual-socket Haswell Xeons will be out in only a year, given how delayed the Ivy Bridge Xeons are.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I think part of the delay is new Xeon E7 chips for more than four sockets, which haven't been refreshed since Westmere EX. Not sure if those are getting refreshed immediately with Haswell-E; I'm not sure the power density per socket could be handled until Broadwell.

As for the Haswell-E out in 2014, even leaving aside Intel saying they want to sync uarchs better between consumer and server, there's a very good reason: DDR4 SDRAM. The server market wants that poo poo bad. Higher speed, lower latency, and higher density. Plus PCIe 4.0 and TSX. There's a lot to sell it even if you have not-old SNB-E hardware.

Factory Factory fucked around with this message at 18:22 on Jul 25, 2013

E820h
Mar 30, 2013

Factory Factory posted:

I think part of the delay is new Xeon E7 chips for more than four sockets, which haven't been refreshed since Westmere EX. Not sure if those are getting refreshed immediately with Haswell-E; I'm not sure the power density per socket could be handled until Broadwell.

As for the Haswell-E out in 2014, even leaving aside Intel saying they want to sync uarchs better between consumer and server, there's a very good reason: DDR4 SDRAM. The server market wants that poo poo bad. Higher speed, lower latency, and higher density. Plus PCIe 4.0 and TSX. There's a lot to sell it even if you have not-old SNB-E hardware.

Is PCIe 4.0 due that soon? I thought there were still some teething issues on the SerDes / channel side.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Beyond the performance improvements, DDR4 is just plain lower power as well, which is oftentimes a bigger factor in datacenter hardware upgrades for enterprises: power is a huge chunk of a DC's operational expenses, simply because these machines are basically pegged for their lifetime. With a very modest 8 banks of RAM (people tend to pack blades with 32+ RDIMMs), you're looking at about 30W minimum just for memory, since they're RDIMMs. Multiply that across 400+ servers, each sustaining around 500W (and spiking like crazy at peak load), and a couple million in hardware (especially with the tax breaks and depreciation on durable assets) is hardly much next to nearly six figures a month in power alone.
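
To put some admittedly made-up numbers on just the memory slice of that (the per-DIMM wattage follows the ~30W-per-8-RDIMMs figure above; the PUE and electricity rate are assumptions):

```c
/* Back-of-the-envelope: what the RDIMMs alone cost to power across a fleet. */
#include <stdio.h>

int main(void) {
    int servers        = 400;
    int dimms          = 32;      /* RDIMMs per box, per the post */
    double w_per_dimm  = 3.75;    /* ~30 W per 8 RDIMMs */
    double pue         = 1.8;     /* facility overhead multiplier (assumed) */
    double usd_per_kwh = 0.10;    /* assumed utility rate */
    double hours_year  = 24 * 365.0;

    double mem_kw = servers * dimms * w_per_dimm / 1000.0;      /* memory-only IT load */
    double cost   = mem_kw * pue * hours_year * usd_per_kwh;    /* annual dollars */

    printf("memory draw: %.1f kW across the fleet\n", mem_kw);  /* 48.0 kW */
    printf("annual cost: ~$%.0f at these assumptions\n", cost); /* ~$75,700 */
    return 0;
}
```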

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

E820h posted:

Is PCIe 4.0 due that soon? I thought there was some teething issues still on the SerDes / channel side.

It's slated for Skylake for sure, and it would repeat what SNB-E did with PCIe 3.0 (which was supposed to debut with Ivy Bridge). And that was basically, "We can't officially validate this for 3.0 because 3.0 is not done yet, but it's totally 3.0." Then, lo and behold, motherboard manufacturers who did the same thing release a BIOS update and bam, PCIe 3.0.

Though, that said, everything Haswell-E is currently a rumor and could be totally wrong. Even the PCIe Gen4 part isn't consistent in the rumors (some sources and supposed leaks say Gen3), and even if the capability is there, it may only be enabled on later-arriving Xeons that have been properly validated.

Factory Factory fucked around with this message at 22:28 on Jul 25, 2013
