Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

spasticColon posted:

Are there any other advantages to the Z68 chipset other than running integrated graphics and overclocking at the same time?
PCI-E port bifurcation, meaning running two PCI-E x8 slots from the x16 connection on the CPU. Come to think of it, this could be how they connected the Thunderbolt controller?


Alereon
Feb 6, 2004


movax posted:

You can already bifurcate with the P67 and H67, just depends if your motherboard maker has configured the soft-straps appropriately in BIOS. My P8P67 will happily bifurcate the x16 PEG link into 2 x8 when needed (which is nice considering I have two RAID controllers).

I think the Thunderbolt controller would use a x4 link out of the PCH; it can't support x8, IIRC. Just DP + external PCIe x4.
I'm pretty sure H67 doesn't support bifurcation though, so if they wanted the full PCIe v2.0 x4 bandwidth they may have resorted to connecting it to the CPU, especially if they had any desire to make use of switchable graphics or Quick Sync. If they ran it off the southbridge it would be competing for bandwidth with the other system devices.

Alereon
Feb 6, 2004

Anandtech has a new article up about Intel's Ivy Bridge CPU, Panther Point chipset, and SSD roadmaps. It looks like they're not being too douchey about 7-series chipsets: all consumer chipsets will have IGP and overclocking support. The differentiation is that the H77 chipset doesn't support PCIe port bifurcation, Z75 doesn't support Smart Response Technology (SSD caching), and Z77 adds support for 3-way PCIe port bifurcation (8+4+4) on top of everything else. All chipsets (except X79?) support USB 3.0. Intel also plans to introduce a 20GB SSD 310 specifically for use with SSD caching; let's hope they don't do anything lovely like REQUIRE an Intel SSD.

Alereon
Feb 6, 2004


Ashex posted:

Does anyone know why Intel hasn't released the i5-2500T to retailers yet? My build is all prepped, just waiting on that part :(
The Core i5 2500T/2500S processors are OEM-only and not available at retail. This thread at Anandtech has details from an Intel employee. Your best option would probably be an i5 2400S. Keep in mind that actual power usage at the same load level will be identical regardless of TDP; higher-TDP processors can just be loaded harder.

Alereon fucked around with this message at 18:12 on May 6, 2011

Alereon
Feb 6, 2004


Ashex posted:

I'm building a low power pc and so I'm being super picky about parts (it's all going to be running off a 160W PicoPSU with a DA-2 power brick). I assumed it would be getting to some retailers after I read this review. The reviewer had contacted Intel who said that any retailer could order it from them, so I pre-ordered from eXtremePCgear.com
I think the confusion is that Intel says it's available to anyone who wants to order a tray quantity of 1,000 CPUs, which might be a lot for a retailer to stomach on a low-volume item, especially one not intended for retail sale. Unfortunately, I think you're stuck with underclocking a Core i5 2400S if your platform can't tolerate another 20W (which, again, is only PEAK power; you actually save power with higher-TDP processors).

Alereon
Feb 6, 2004

We recently had the news that the top end Sandy Bridge-E processors are going to be 8-core processors (rather than 6 as previously assumed), which should give a pretty nice boost. The 130W TDP should provide room for pretty nice Turbo scaling as well.

Alereon
Feb 6, 2004

You're not transferring bytes around though, you're doing large block transfers where the main limitation is total bandwidth. The main issue with Nehalem was that Lynnfield was a more efficient design that came later, so it was able to do more with less. Sandy Bridge-E is an evolution of Sandy Bridge, so it has the potential to perform better on workloads that respond well to additional memory bandwidth or more than 4 cores (of which there aren't too many).

Alereon
Feb 6, 2004

Here's Anandtech's review of the Intel Z68 chipset and Smart Response SSD Caching technology.

Alereon
Feb 6, 2004

Anandtech has an article up about the Silvermont architecture, the replacement for the Atom. It will launch in 2013 on the 22nm process and is a more complex out-of-order design, like modern ARM CPUs and AMD's Bobcat (Atom was a simple in-order design, like older ARM CPUs and the original Pentium). This has the potential to be the first Atom with desktop-class performance, especially if it's paired with a decent integrated GPU (maybe something derived from Ivy Bridge?).

Alereon fucked around with this message at 22:48 on May 18, 2011

Alereon
Feb 6, 2004

Anandtech has posted a summary of the Intel 2011 Investor Meeting. They're pushing REALLY hard to get their 32nm Atom SoC (Medfield) into smartphones, but this is an uphill battle as nobody would make a phone based on their 45nm Atom SoC (Moorestown). It's also been revealed that Intel expects to release a 14nm Atom codenamed "Airmont" in 2014 at around the same time as their desktop processors, but we know even less about this than we do about the 22nm "Silvermont" Atom coming in 2013.

The most interesting bit of news is that Intel intends to create a new line of processors in the 10-20W TDP range. These are intended to be significantly more powerful than the <10W Atom CPUs, but presumably more efficient than the severely-underclocked >17W CULV Core CPUs. We don't know anything about the architecture yet, but it will likely either be an enhanced Atom (of the Silvermont generation) or a cut-down derivative of Haswell (Intel's new architecture on 22nm, successor to Ivy Bridge).

Alereon
Feb 6, 2004

Ivy Bridge has been delayed for another quarter, until Q2 2012. This gives AMD a lot of time to lock in the mainstream and low-end markets with their Llano and Brazos CPUs, thanks to their overwhelming graphics performance. To cross-post from the Bulldozer thread, AMD is in serious trouble, with reports that the planned Bulldozer launch has been canceled due to inability to meet clock speed targets, and that instead AMD will launch a series of less aggressively-clocked Bulldozer CPUs based on a new stepping in September. Anandtech got confirmation at Computex that the launch would be delayed so that AMD could spin a new Bulldozer stepping due to poor performance, but their estimates were a launch in July, with no mention of having to cancel the previously planned models.

Alereon
Feb 6, 2004

More selected Computex Intel news:

Ivy Bridge CPUs will have configurable dynamic TDPs, making Turbo Boost dramatically more effective. Rather than allowing some slight headroom over the rated TDP, Turbo Boost will now enable the CPU to more than double it for short periods, dramatically improving both responsiveness and power efficiency (the higher TDP a processor is allowed, the more energy efficient it is). The CPU will clock back down to normal when the load goes away or the temperature rises outside of the optimal range. The "configurable" part of the TDP is that the processor can change its TDP based on the situation; the example was docking a laptop in a docking station with power and improved cooling, allowing the processor to raise its TDP target.

Ivy Bridge is a "tick" in Intel's tick-tock product cadence, where ticks are die shrinks with minimal architectural changes, and tocks are new architectures on the same process. Because Ivy Bridge has much more substantial changes than are typical of just a die shrink, Intel has taken to calling it a "Tick+". Not a new architecture (that's Haswell), but not just a die shrink either. Also, Ivy Bridge will have USB 3.0 and Thunderbolt integrated into the chipset.

Intel has announced their new Ultrabook platform, which is intended to be a new form factor for ultra-thin, ultra-light, high-performance notebooks with long battery life, a <$1000 price, and a suite of specific Intel technologies. The term "Ultrabook" has been trademarked by Intel; you'll only see a laptop called an Ultrabook if it meets Intel's trademark licensing requirements.

SSDs (or at least SSD caching) are mandatory, with the required Intel Rapid Start Technology using the flash memory for Hibernation, allowing the system to hibernate and wake back up quickly. Intel is using this rapid sleep/resume functionality for Intel Smart Connect Technology, which periodically wakes the system for short periods (a few seconds) to sync things like your IM, e-mail, and social networking feeds. The idea is that even while your machine is off, it maintains the appearance of being always on and always connected. Intel intends for Ultrabooks to use processors with a TDP between 10 and 20W; right now that's the CULV Core i5/i7 CPUs with a 17W TDP, but starting with Haswell they intend to launch processors targeted directly at that TDP range.

Edit: Turns out the Thunderbolt news was wrong, it's still a separate controller chip and not required or anything. I was a little surprised by that news initially, as the Thunderbolt controller is a HUGE chip.

Alereon fucked around with this message at 22:35 on Jun 1, 2011

Alereon
Feb 6, 2004


Tab8715 posted:

Wow, that is incredibly annoying. I'd love a new laptop (R61i Currently) but the HD 3000 just doesn't cut it for me. If I buy a $1k laptop, the best I can get out of Starcraft II is only low details, the hell?

The gaming laptops aren't that bad, Sager comes close but I'd rather just have a Thinkpad.
You're pretty much the exact target audience for AMD's Fusion A-series-based laptops that are due out this month or next. They're using CPU cores derived from the Phenom II (die-shrunk and tweaked for power efficiency), combined with an on-die Radeon HD 6550 GPU. Anandtech has had a lot of news about the upcoming Fusion processors this week since they're at Computex; the ones you're interested in are codenamed Llano.

Bonus Edit: Also, Lenovo does have the IdeaPad Y560p for $849, which has a quad-core Sandy Bridge i7 and a Radeon HD 6570M. Certainly no gaming powerhouse, but it's a step up from integrated graphics.

Alereon fucked around with this message at 03:25 on Jun 3, 2011

Alereon
Feb 6, 2004

I was speccing out a low-power SFF machine and came across a nice model Intel quietly launched: the 65W Core i5 2405S for $215.99. The 2405S has Intel HD Graphics 3000 like the 2500K, but a lower TDP for power-constrained applications. It seems like a pretty nice way to make a small box more capable without a dedicated videocard.

Alereon
Feb 6, 2004


Risky posted:

I'll never understand why the P8P67 boards are the ones EVERYONE has yet they are the ones that are the most problematic.
The more people who buy a product, the more complaints you'll see about it; Asus still does pretty well compared to the competition. I mean, look at Gigabyte: their boards have had garbage power delivery at least since LGA-1155 came out, and people still buy/recommend them. This isn't even something nebulous like anecdotal product experiences; you can monitor the voltages and see exactly how far out of spec they are, tell exactly what Gigabyte did wrong, and see the consequences.

Specifically, Gigabyte boards have an absolutely retarded implementation of Loadline Calibration (aka vDroop Compensation), and it's enabled by default. Loadline Calibration changes how power is delivered to the CPU (in a way that violates the Intel spec), with the goal of reducing vDroop, the drop in CPU core voltage seen when the CPU is under heavy load. To greatly simplify, Loadline Calibration overvolts the CPU when it's under load and drops the voltage back down when idle, in theory providing a steady voltage.

The problem is that when the load is removed, the core voltage momentarily overshoots the target. This happens even when Loadline Calibration is disabled, but Intel designed the power delivery spec so that at stock voltage this overshoot remains within the acceptable range for the CPU. When Loadline Calibration is enabled, the overshoot is dramatically higher, and if it exceeds the CPU's limits it can cause a hang, bluescreen error, restart, or shutdown. On some boards, Gigabyte's Loadline Calibration implementation exceeds the stock voltage by 0.10v even when the system is sitting idle; the overshoot when dropping back from full load is obviously even greater.

When people were buying LGA-1155 systems en masse and using the Gigabyte boards recommended in the Parts Picking Megathread, the Haus of Tech Support forum was filled with threads from people who had system problems when exiting games or when SpeedStep/Enhanced C-States were enabled (because of overshoot when the CPU drops into power-saving modes); the common factor was always Gigabyte boards. Updating the BIOS, disabling Loadline Calibration, or disabling power-saving helped, but disabling power-saving sucks, and even then sometimes the only fix was switching motherboard brands. The really lovely thing is that motherboard reviewers sometimes notice and comment on the bad power delivery, but somehow don't end the review with "this is a poo poo board and you should not buy it."
Then again, this shouldn't surprise me with the number of times I've seen sites give good reviews to hardware that didn't even do what it claimed to.

And you see kids, this post I just typed is why you should never stop smoking :420:.

Alereon
Feb 6, 2004

I'm not sure, I would verify that Loadline Calibration is disabled in the BIOS and not really worry about it if you're not having stability problems.

Alereon
Feb 6, 2004


Grumperfish posted:

On my EX58-UD3R core voltage doesn't seem to deviate much when (C1E/SS) power states change, but with C6 and LLC enabled I noted a sudden split-second jump to 1.4V on an OCCT log. Given my (anecdotal) experience, LLC and C1E's more or less fine, but there's probably a reason why Gigabyte set C6 disabled by default.
One thing to keep in mind is that we're talking about a voltage spike so brief you're not actually going to be able to measure it without extremely sensitive equipment, so take the numbers you're seeing in hardware monitoring programs with a grain of salt. There's really no good reason to have LLC enabled, as it reduces overclocking ability and exposes your CPU to more voltage than necessary.

Alereon
Feb 6, 2004


Agreed posted:

... even on an Asus board? For me setting it to high was the key to 4.7GHz rather than 4.5GHz. Totally stable, very safe temperatures, Offset mode with all power savings in the Intel specification enabled because I am a fan of the planet when I'm not pretending to kill aliens; without LLC on, it's not stable unless I go with a much higher voltage offset that causes it to idle and run at unnecessarily high voltages. What I look at is how much wattage it's using and that seems fine, so what's wrong with LLC if you aren't using Gigabyte? The chips seem to be able to handle the juice just fine.
You'll always get better overclocking performance with LLC off and just raising the voltage than you will with LLC on. This will increase power consumption when not at max load, but the alternative is pumping high voltage transients into your chip, which is a Bad Thing(tm). Here's a good article from Anandtech about power delivery and Loadline Calibration, using a 45nm Q9000-series quad-core and an Asus motherboard for testing. The principles are the same on current CPUs, but they're even more sensitive to higher voltages due to the smaller manufacturing process.

If you actually care about the power consumption of your chip, do some tests to find out where its sweet spot is. There's a point where increasing the core voltage and clockspeed starts having massive current-draw implications for incredibly minor performance gains, and that point is usually around 200MHz or so below the core limit. If you could cut power draw 50% by dropping the clockspeed only 5%, that's probably worth it.

Bonus Edit: Here's an article from Xbitlabs showing how power consumption changes when overclocking various CPUs, though it doesn't cover Sandy Bridge. The Core i7 860 they tested overclocked from 2.8GHz to 3.4GHz with only a minor power usage increase, but each step beyond that caused increasingly massive jumps in power consumption.
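
To make the sweet-spot idea concrete, here's a rough sketch using the usual dynamic-power approximation P ∝ f·V². The clock/voltage pairs are made up purely for illustration, not measured from any real chip:

```python
# Dynamic CPU power scales roughly with frequency times voltage squared.
# These (clock MHz, core voltage) pairs are hypothetical -- the point is
# the shape of the curve, not the exact numbers.
steps = [(2800, 1.10), (3100, 1.15), (3400, 1.20), (3700, 1.30), (4000, 1.45)]

def rel_power(mhz, volts, base=steps[0]):
    """Power relative to the first step, using P ~ f * V^2."""
    base_mhz, base_v = base
    return (mhz / base_mhz) * (volts / base_v) ** 2

for mhz, volts in steps:
    print(f"{mhz} MHz: {rel_power(mhz, volts):.2f}x power")
```

On numbers like these, the last step (3700 to 4000 MHz, about 8% more clock) costs roughly a third more power, which is the knee the post is talking about.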

Alereon fucked around with this message at 03:52 on Jul 24, 2011

Alereon
Feb 6, 2004


greasyhands posted:

How does a 7% cpu voltage bump increase the entire system draw (of which CPU is obviously only a significant fraction) 18%?
The relationship between voltage and power consumption is quadratic, not linear. It's approximated by (V/Vs)², with V being the actual voltage and Vs the stock voltage. Power conversion efficiency and losses due to increased temperatures are an additional factor, but the largest factor is a genuine increase in CPU power draw (you can tell because the additional power is turning into heat in the CPU). I don't know enough about semiconductors to explain why, however.

Note that the same ISN'T true for undervolting below stock, because power leakage increases as voltage decreases (as do resistive losses). You rapidly hit diminishing returns as you push voltage down.
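
The square-law approximation is easy to sanity-check against the 7% bump from the quoted question. This is a sketch of the approximation only; it ignores any accompanying clock increase and VRM losses, which is where the rest of the observed 18% whole-system increase would come from:

```python
def power_ratio(v_new, v_stock):
    """Approximate CPU power multiplier for a voltage change at a
    fixed clock, using the P ~ V^2 approximation."""
    return (v_new / v_stock) ** 2

# A 7% core-voltage bump:
bump = power_ratio(1.07, 1.00)
print(f"{(bump - 1) * 100:.1f}% more CPU power")  # 14.5% more CPU power
```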

Alereon
Feb 6, 2004


Longinus00 posted:

Joule's first law states P = IV. Substituting in Ohm's law (I=V/R) gets you P = V²/R. If you increase voltage from V1 to V2 you can find the increase in power as the ratio P2/P1 = (V2/V1)².
Thanks for clarifying, they never taught me a drat thing about energy/electricity in school :(

Alereon
Feb 6, 2004


Agreed posted:

Still, if he has info on Asus like he does on gigabyte then I would like to hear him out.
Here's the Anandtech article I linked earlier, it's their original investigation of LLC using an Asus motherboard. They haven't done another article for newer CPUs/motherboards, but they do occasionally remind readers that disabling LLC is their standing recommendation. Asus boards are much less likely than Gigabyte to result in hardware damage or instability, but the reality is that you'll get better overclocking performance with LLC disabled and manually managing voltages, and if you're at your max safe voltage and are enabling LLC to achieve stability, you're just tricking yourself into using a higher voltage.

Alereon
Feb 6, 2004

Anandtech has posted an article testing memory bandwidth scaling with Sandy Bridge i7 processors, including at heavily overclocked speeds (4.8GHz). This confirms previous tests showing that Sandy Bridge isn't memory-bandwidth limited at all, but it's a bit surprising to see almost no scaling even when overclocked. The best bang for the buck right now is DDR3-1600 CL9, but if you REALLY need to shave off a couple bucks, DDR3-1333 CL9 doesn't really hurt anything.

This also confirms that we can expect no performance improvements from the quad-channel memory architecture on the LGA-2011 Sandy Bridge-E platform, just what the additional CPU cores and integrated PCI-E controller offer.

Alereon
Feb 6, 2004


Combat Pretzel posted:

I fail to see the relation between this and the Anandtech test. They didn't do tests with the system forced to single channel mode. I'd figure that multiple memory heavy threads could profit from more channels. At least with an increasing amount of cores. Some random loose Googling revealed some old tests, where there were minimal effects with dual core CPUs. If we're talking about six hyperthreaded cores, this might end up in a more noticeable effect.
Overclocking the CPU so heavily simulates the bandwidth demands of moving from a quad-core to a hex-core CPU. If going from 3.5GHz to 4.8GHz still didn't reveal any memory bandwidth bottlenecks, it's unlikely the additional cores would, especially since they'd have to be clocked to fit inside a reasonable TDP.
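
As a back-of-the-envelope check, assuming aggregate bandwidth demand scales roughly with cores × clock:

```python
# A heavy overclock on four cores approximates adding cores at stock clocks,
# since bandwidth demand scales roughly with cores * clock.
def demand(cores, ghz):
    return cores * ghz

stock = demand(4, 3.5)
print(round(demand(4, 4.8) / stock, 2))  # 1.37 -- the overclocked test
print(round(demand(6, 3.5) / stock, 2))  # 1.5  -- hex-core at the same clock
```

And 1.5x is an upper bound for the hex-core, since in practice it would be clocked lower to stay inside its TDP.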

Alereon
Feb 6, 2004


Agreed posted:

Well, the default setting still has LLC, you cannot disable it altogether,
My expectation would be that "Normal" means "follow Intel specs," but I'm not sure about that. It seems ridiculous that anyone would release a board that couldn't run at Intel specs.

quote:

and there is a separate adjustment to control CPU current (allowing for between 100% and 140%).
The CPU Current Limit should be maxed out, otherwise you're artificially limiting overclocking.

quote:

Further, in offset voltage control mode, it seems you don't have especially precise control over the voltage anyway I and many other Asus and Asrock users have noted that it has brief excursions to wherever it feels like it ought to be when overclocking.
The offset just adds a set amount of voltage to what it would be, meaning that when the CPU is running at a lower speed and voltage it just adds say 0.10v rather than running it at the flat setting. You're still going to see differences from the set voltage due to vDroop as well as voltage overshoot when returning from high-load (LLC just exaggerates this overshoot).

quote:

Interesting to read that 1600 is the sweet spot for RAM at higher clocks, I had heard 1333 before now. Guess I made an alright choice going with 1600 9-9-9-24 and kicking it up to 1T then?
Basically, DDR3-1600 CL9 is the sweet spot right now because while the performance increases are marginal, so are the price increases, so you might as well just get it. It also might help you some day in the future if you want to drop that DDR3-1600 into a system that WOULD benefit from the memory bandwidth.

One interesting memory-related fact: As memory gets more complex, less and less of the chip is devoted to the DRAM cells storing data and more to logic and hardware related to maintaining signal integrity. Back in the day a DRAM chip was almost entirely memory with a small amount of other stuff, now it's mostly other stuff, and by the time DDR4 comes out there will be a tiny island of memory in the middle of logic/timing/signal hardware on each chip.

Alereon
Feb 6, 2004


spasticColon posted:

A single threaded app that needs a lot of CPU would only cause one of the cores to hit full speed right? Isn't that how Sandy Bridge chips work?
Cores can't be clocked independently, but unused cores can be power-gated (shut off) and the remaining cores can Turbo up using the TDP headroom.

Alereon
Feb 6, 2004

Rumors from SemiAccurate are that Sandy Bridge-E (Nehalem replacement) has been pushed back until Q1 of next year due to bugs with the on-die PCI-E 3.0 controller.

Alereon
Feb 6, 2004


PC LOAD LETTER posted:

Supposedly 1.35v is the max for 24/7 use. A little higher is certainly fine for a while but you usually need water cooling at that point unless you like the sound of jet engines in your case. I don't think anyone knows exactly how long it'll take to actually kill the chip at a given voltage outside of Intel though. Doing quick n' dirty googles shows people who've gone up to 1.5v and have had thier chips suddenly die already.

e: "Turbine/jet engine" is pretty subjective admittedly. Personally I don't mind the WHOOOOOOSH of the several moderate rpm 120mm fans in my case but most people I know flip out at that sort of thing. YMMV
\/\/\/\/\/\/\/
Just a reminder that water cooling is always louder than air cooling. The pumps alone in one of those commercial Antec/Corsair kits are about as loud as a noisy case fan, and the fans have to spin much faster to get equivalent cooling because water cooling is so much less efficient (remember, the water is just moving heat from the CPU to a radiator, and heat pipes move heat faster without a pump). It's possible to build a custom water cooling system that will outperform air (by using a massive car radiator, for example, or putting the radiator underground), but that's not what most people mean when they talk about water cooling.

Alereon
Feb 6, 2004


PC LOAD LETTER posted:

Well that depends on how you do it. The cheap pre built kits can indeed suck, once you get around or over $100 with dual or triple 120mm fan radiators they seem to get good to decent with low noise.
Even if you buy one of those expensive water cooling kits, you're still not going to get performance rivaling air cooling. Silent PC Review just did a review of the Antec Kuhler water coolers; they're made by Asetek, same as the Corsair and other popular water cooling kits. When tested with the same fans or at the same noise levels as air coolers, they came in near the bottom of all heatsinks tested. They only have acceptable performance when the fans and pump run at very high speeds (and thus noise levels), which is how they ship to you.

Alereon
Feb 6, 2004


Combat Pretzel posted:

Naw, I thought the Sandy Bridges all run at 1333 and the upcoming Ivy Bridges would bump it to 1600. I run four 4GB DIMMs, the CPU would probably not dig 1600, anyway.
Sandy Bridge has memory dividers for up to 2133MHz; Intel just only officially supports up to 1333MHz. And yeah, pushing memory speeds with multiple DIMMs per channel just doesn't work too well.

Alereon
Feb 6, 2004

I posted this in the Overclocking thread awhile back, but I suggest anyone in the market for a cooler take a look at the Thermalright HR-02 Macho. It performs almost as well as the $80-$100 dual-fan tower coolers but costs only $49.99, making it a tremendous value. It also includes a Thermalright TY-140 140mm fan, which is the best fan currently on the market, with the most airflow at the lowest noise levels. Given that the fan retails for $15-20 on its own, it's really an incredible deal. It's more expensive than the Cooler Master Hyper 212+, but the performance is a lot better.

Alereon
Feb 6, 2004


spasticColon posted:

Regarding the 1T VS 2T command rate crap does having the 1T command rate help gaming performance at all or is still like the DDR1 days meaning 1T command rate doesn't get better gaming performance? I have my Corsair Vengeance RAM at DDR3 1600 9-9-9-24 2T (XMP Mode) right now.
The only thing in gaming that's even the slightest bit sensitive to memory latency is running PhysX on the CPU, and even then it's a tiny difference. If you can get it to work, great, but it's not a big deal at all.

Alereon
Feb 6, 2004

I wouldn't mind them selling K-unlock codes for <$20 each.

Alereon
Feb 6, 2004

SemiAccurate is reporting that bugs in the PCI-Express 3.0 PHY may lead Intel to launch Sandy Bridge-E without PCIe 3.0 support. Previous rumors were that SB-E was pushed back to work on getting PCIe 3.0 working, but apparently that wasn't successful enough.

Alereon
Feb 6, 2004


Tab8715 posted:

Hmm, on the subject of hardware none of the OEMs are down with the ultrabook platform.
I think the situation is similar to AMD's with their Fusion netbooks: OEMs don't want to sell them at a competitive price because they're still able to move product and make more profit at a higher price. While Ultrabooks would take over the market at $1000, they'll still sell at >$1000, just like how Brazos netbooks would obsolete Atom netbooks at $300, but people will pay $400-450 for the superior performance.

Alereon
Feb 6, 2004


COCKMOUTH.GIF posted:

That's a pretty good thought, too. I was thinking about weighing the option of going with a Xeon but I was always under the impression they were more expensive. I haven't really taken a good look at their cost now compared to the regular desktop line. If the price/features are right, I could always go with Sandy Bridge once Ivy Bridge comes out. Maybe then Sandy Bridge Xeon stuff will be cheaper and have more on-board features.
Here's the Intel Sandy Bridge Xeon lineup, note that only half the processors have on-die graphics. They're not really bad deals compared to the i5/i7s, just clocked a bit lower.

Alereon
Feb 6, 2004


necrobobsledder posted:

A Xeon E3-1230 is ~$240: http://www.newegg.com/Product/Product.aspx?Item=N82E16819115083
2x4GB ECC DDR3 RAM ~$80: http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262
C202 motherboard (admittedly the cheapest of the bunch, but I didn't give a drat about SATA3 for my needs) $160: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182252

So comparing costs vs. a 2600k, which does get slightly better performance:
$314 v. $240 = -$74
2x4GB DDR3 I just bought for $42 = +$38
H67 motherboard from Intel is $130 = +$30
----
-$6 - oh really now?

So... basically for the same costs here you trade off motherboard features, a tad bit of CPU performance (but the E3-1230 has lower idle than the i7 if power efficiency is a bigger concern like it is for me) and are locked into paying for ECC RAM that's about 80% more expensive... when it's one of the cheapest parts of a modern system.
You don't need a server motherboard and ECC RAM to use a Xeon. Just use a decent Z68 motherboard and make sure you get an E3-1235 or better (for the integrated graphics, which is certified for workstation applications) and you should be good to go.

Alereon
Feb 6, 2004


Wedesdo posted:

Are Xeons also limited to 4 bins above their max turbo, in terms of overclocking?
The i5-equivalents are limited to 3 bins, the normal ones to 4; the low-power one turbos up to 9 bins (the lovely dual-core turbos up to 12, but gently caress that).
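
Since a Sandy Bridge "bin" is one 100MHz multiplier step, the overclocking ceilings are simple to work out. The clocks below are hypothetical, just to show the arithmetic:

```python
BIN_MHZ = 100  # one multiplier step on Sandy Bridge

def oc_ceiling_mhz(max_turbo_mhz, extra_bins):
    """Highest multiplier-based clock for a turbo-bin-limited chip."""
    return max_turbo_mhz + extra_bins * BIN_MHZ

# e.g. a hypothetical chip with a 3.7 GHz max turbo and a 4-bin limit:
print(oc_ceiling_mhz(3700, 4))  # 4100
```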

Alereon
Feb 6, 2004

More Ivy Bridge info from Anandtech:

Here's a slide deck regarding CPU changes in Ivy Bridge
Here's a slide deck regarding GPU changes in Ivy Bridge
Here are a couple notes regarding overclocking changes in Ivy Bridge (slide from the CPU deck above)

The Intel HD Graphics in Ivy Bridge will have full DX11 support along with GPU computing support.

I'm thinking of making a thread dedicated to low power computing, specifically ARM, Atom, Bobcat, and the future Intel Haswell/Silvermont architectures, that may happen soon if I stop feeling lazy and tired.

Alereon
Feb 6, 2004


incoherent posted:

Those x79 boards and the memory slots :stare:

http://www.anandtech.com/show/4793/x79-motherboards-from-gigabyte-msi-at-idf-2011
I really have to wonder how they're going to fit effective power delivery components into that little amount of space on boards with 8 DIMM slots. That said, DDR3-1600 is currently under $5/GB without rebate (okay, for some reason when I click on that link it shows as $54.99, but it comes up as $39.99 in a Newegg search), and memory prices are going to continue to plunge through Q4, meaning you could be filling your system with 32GB of RAM for peanuts. Where are our 8GB DIMMs already?

Alereon fucked around with this message at 08:48 on Sep 14, 2011


Alereon
Feb 6, 2004

I've been thinking for awhile that it would make more sense to put Thunderbolt controllers on high-end graphics cards, since they already have multiple DisplayPort ports and more PCI-Express bandwidth than they know what to do with. It would also be a smaller percentage of the cost on a graphics card versus a motherboard, and it would let you add ports to an existing system.
