|
spasticColon posted:Are there any other advantages to the Z68 chipset other than running integrated graphics and overclocking at the same time?
|
# ¿ May 5, 2011 00:14 |
|
|
movax posted:You can already bifurcate with the P67 and H67, just depends if your motherboard maker has configured the soft-straps appropriately in BIOS. My P8P67 will happily bifurcate the x16 PEG link into 2 x8 when needed (which is nice considering I have two RAID controllers).
|
# ¿ May 5, 2011 21:15 |
|
Anandtech has a new article up about Intel's Ivy Bridge CPU, Panther Point chipset, and SSD roadmaps. It looks like they're not being too douchey about 7-series chipsets: all consumer chipsets will have IGP and overclocking support. The differentiation is that the H77 chipset doesn't support PCIe port bifurcation, Z75 doesn't support Smart Response Technology (SSD caching), and Z77 adds support for 3-way PCIe port bifurcation (8+4+4) as well as everything else. All chipsets (except X79?) support USB 3.0. Intel also plans to introduce a 20GB SSD 310 specifically for use with SSD caching; let's hope they don't do anything lovely like REQUIRE an Intel SSD.
|
# ¿ May 6, 2011 17:53 |
|
Ashex posted:Does anyone know why Intel hasn't released the i5-2500T to retailers yet? My build is all prepped, just waiting on that part Alereon fucked around with this message at 18:12 on May 6, 2011 |
# ¿ May 6, 2011 18:08 |
|
Ashex posted:I'm building a low-power PC and so I'm being super picky about parts (it's all going to be running off a 160W PicoPSU with a DA-2 power brick). I assumed it would be getting to some retailers after I read this review. The reviewer had contacted Intel, who said that any retailer could order it from them, so I pre-ordered from eXtremePCgear.com
|
# ¿ May 6, 2011 18:28 |
|
We recently had the news that the top end Sandy Bridge-E processors are going to be 8-core processors (rather than 6 as previously assumed), which should give a pretty nice boost. The 130W TDP should provide room for pretty nice Turbo scaling as well.
|
# ¿ May 7, 2011 21:42 |
|
You're not transferring bytes around though, you're doing large block transfers where the main limitation is total bandwidth. The main issue with Nehalem was that Lynnfield was a more efficient design that came later, so it was able to do more with less. Sandy Bridge-E is an evolution of Sandy Bridge, so it has the potential to perform better on workloads that respond well to additional memory bandwidth or more than 4 cores (of which there aren't too many).
|
# ¿ May 7, 2011 23:03 |
|
Here's Anandtech's review of the Intel Z68 chipset and Smart Response SSD Caching technology.
|
# ¿ May 11, 2011 09:01 |
|
Anandtech has an article up about the Silvermont architecture, which is the replacement for the Atom. It will be launched in 2013 on the 22nm process, and is a more complex out-of-order design, like modern ARM CPUs and the AMD Bobcat (Atom was a simple in-order design, like older ARM CPUs and the original Pentium). This has the potential to be the first Atom with desktop-class performance, especially if it's paired with a decent integrated GPU (maybe something derived from Ivy Bridge?).
Alereon fucked around with this message at 22:48 on May 18, 2011 |
# ¿ May 13, 2011 06:27 |
|
Anandtech has posted a summary of the Intel 2011 Investor Meeting. They're pushing REALLY hard to get their 32nm Atom SoC (Medfield) into smartphones, but this is an uphill battle as nobody would make a phone based on their 45nm Atom SoC (Moorestown). It's also been revealed that Intel expects to release a 14nm Atom codenamed "Airmont" in 2014 at around the same time as their desktop processors, but we know even less about this than we do about the 22nm "Silvermont" Atom coming in 2013. The most interesting bit of news is that Intel intends to create a new line of processors in the 10-20W TDP range. These are intended to be significantly more powerful than the <10W Atom CPUs, but presumably more efficient than the severely-underclocked >17W CULV Core CPUs. We don't know anything about the architecture yet, but it will likely either be an enhanced Atom (of the Silvermont generation) or a cut-down derivative of Haswell (Intel's new architecture on 22nm, successor to Ivy Bridge).
|
# ¿ May 18, 2011 22:47 |
|
Ivy Bridge has been delayed for another quarter, until Q2 2012. This gives AMD a lot of time to lock in the mainstream and low-end markets with their Llano and Brazos CPUs, thanks to their overwhelming graphics performance. To cross-post from the Bulldozer thread, AMD is in serious trouble, with reports that the planned Bulldozer launch has been canceled due to inability to meet clock speed targets, and that instead AMD will launch a series of less aggressively-clocked Bulldozer CPUs based on a new stepping in September. Anandtech got confirmation at Computex that the launch would be delayed so that AMD could spin a new Bulldozer stepping due to poor performance, but their estimates were a launch in July, with no mention of having to cancel the previously planned models.
|
# ¿ May 30, 2011 10:18 |
|
More selected Computex Intel news: Ivy Bridge CPUs will have configurable dynamic TDPs, making Turbo Boost dramatically more effective. Rather than allowing slight headroom over the rated TDP, Turbo Boost will now let the CPU more than double it for short periods, dramatically improving both responsiveness and power efficiency (the higher the TDP a processor is allowed, the more energy-efficient it is, since it can finish its work and return to idle sooner). The CPU clocks back down to normal when the load goes away or the temperature rises outside the optimal range. The "configurable" part is that the processor can change its TDP target based on the situation; the example given was docking a laptop in a docking station with power and improved cooling, allowing the processor to raise its TDP target.
Ivy Bridge is a "tick" in Intel's tick-tock product cadence, where ticks are die shrinks with minimal architectural changes and tocks are new architectures on the same process. Because Ivy Bridge has much more substantial changes than are typical of a die shrink, Intel has taken to calling it a "Tick+": not a new architecture (that's Haswell), but not just a die shrink either. Also, Ivy Bridge will have USB 3.0.
Intel has announced their new Ultrabook platform, intended to be a new form factor for ultra-thin, ultra-light, high-performance notebooks with long battery life, a <$1000 price, and a suite of specific Intel technologies. The term "Ultrabook" has been trademarked by Intel; you'll only see a laptop called an Ultrabook if it meets Intel's trademark licensing requirements. SSDs (or at least SSD caching) are mandatory, with the required Intel Rapid Start Technology using the flash memory for hibernation, allowing the system to hibernate and wake back up quickly. Intel is using this rapid sleep/resume functionality for Intel Smart Connect Technology, which periodically wakes the system for short periods (a few seconds) to sync things like your IM, e-mail, and social networking feeds. 
The idea is that even while your machine is off, it maintains the appearance of being always on and always connected. Intel intends for Ultrabooks to use processors with a TDP between 10-20W; right now that's the CULV Core i5/i7 CPUs with a 17W TDP, but starting with Haswell they intend to launch processors targeted directly at that TDP range. Edit: Turns out the Thunderbolt news was wrong, it's still a separate controller chip and not required or anything. I was a little surprised by that news initially, as the Thunderbolt controller is a HUGE chip. Alereon fucked around with this message at 22:35 on Jun 1, 2011 |
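As a sketch of how a burst-then-settle power limit behaves (modeled loosely on the dual-limit scheme Intel already uses for Turbo Boost; every number below is invented for illustration, not a confirmed Ivy Bridge parameter):

```python
import math

# Toy model of a configurable TDP with short-term burst headroom: the chip
# may draw well above its sustained TDP when a load first arrives, with the
# allowance decaying toward the sustained limit as the thermal/energy budget
# is spent. The 17W/33W/10s figures are made up for illustration.

def power_limit(t_seconds, sustained_w=17.0, burst_w=33.0, tau=10.0):
    """Allowed package power (watts) t seconds after a load arrives."""
    return sustained_w + (burst_w - sustained_w) * math.exp(-t_seconds / tau)

for t in (0, 5, 30):
    print(f"t={t:>2}s  limit={power_limit(t):.1f} W")
```

Docking the laptop would, in this model, just swap in a higher `sustained_w`/`burst_w` pair.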
# ¿ May 31, 2011 21:15 |
|
Tab8715 posted:Wow, that is incredibly annoying. I'd love a new laptop (R61i currently) but the HD 3000 just doesn't cut it for me. If I buy a $1k laptop, the best I can get out of Starcraft II is only low details, the hell? Bonus Edit: Also, Lenovo does have the IdeaPad Y560p for $849, which has a quad-core Sandy Bridge i7 and a Radeon HD 6570M. Certainly no gaming powerhouse, but it's a step up from integrated graphics. Alereon fucked around with this message at 03:25 on Jun 3, 2011 |
# ¿ Jun 3, 2011 03:15 |
|
I was speccing out a low-power SFF machine and came across a nice model Intel quietly launched: the 65W Core i5-2405S for $215.99. The 2405S has Intel HD Graphics 3000 like the 2500K, but a lower TDP for power-constrained applications. It seems like a pretty nice way to make a small box more capable without a dedicated videocard.
|
# ¿ Jun 13, 2011 23:36 |
|
Risky posted:I'll never understand why the P8P67 boards are the ones EVERYONE has yet they are the ones that are the most problematic.
Specifically, Gigabyte boards have an absolutely retarded implementation of Loadline Calibration (aka vDroop compensation), and it's enabled by default. Loadline Calibration changes how power is delivered to the CPU (in a way that violates the Intel spec), with the goal of reducing vDroop, the drop in CPU core voltage seen when the CPU is under heavy load. To greatly simplify, Loadline Calibration overvolts the CPU when it's under load, dropping the voltage back down when idle, in theory providing a steady voltage.
The problem is that when the load is removed, the core voltage momentarily overshoots the target voltage. This happens even when Loadline Calibration is disabled, but Intel designed the power delivery spec so that at stock voltage this overshoot remains within the acceptable range for the CPU. When Loadline Calibration is enabled, the overshoot is dramatically higher, and if high enough it can exceed the CPU's limits and cause a hang, bluescreen error, restart, or shutdown. Gigabyte's Loadline Calibration implementation exceeds the stock voltage by 0.10v even when the system is sitting idle on some boards; the overshoot when dropping back from full load is obviously even greater.
When people were buying LGA-1155 systems en masse and used the Gigabyte boards recommended in the Parts Picking Megathread, the Haus of Tech Support forum was filled with threads from people who had system problems when exiting games or when SpeedStep/Enhanced C-States were enabled (because of overshoot when the CPU goes into power-saving modes); the common factor was always Gigabyte boards. Updating the BIOS, disabling Loadline Calibration, or disabling power-saving helped, but disabling power-saving sucks, and even then sometimes the only fix was switching motherboard brands. 
The really lovely thing is that motherboard reviewers sometimes notice and comment on the bad power delivery, but somehow don't end the review with "this is a poo poo board and you should not buy it." Then again, this shouldn't surprise me given the number of times I've seen sites give good reviews to hardware that didn't even do what it claimed to. And you see, kids, this post I just typed is why you should never stop smoking.
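If it helps to see why droop exists at all, here's a crude model. Every constant in it is invented for illustration; the real loadline resistance and transient tolerances come from Intel's VR design spec, which I'm not quoting here:

```python
# Crude vDroop illustration. On a full load release the VRM output rings
# above its loaded voltage by roughly (current step) * (transient constant).
# With the spec loadline, the loaded voltage sits BELOW the idle target, so
# the ring peaks near the target; with aggressive Loadline Calibration
# (r_loadline ~ 0) the same ring peaks well above it. The 2 milliohm and
# 80 A figures are made up, not Intel's actual numbers.

def release_peak(v_target, i_load, r_loadline, k_transient=0.002):
    """Peak Vcore just after a full load release (all units volts/amps/ohms)."""
    v_loaded = v_target - i_load * r_loadline  # steady-state Vcore under load
    return v_loaded + i_load * k_transient     # inductive kick on release

# 80 A load release at a 1.20 V target:
print(f"spec loadline: {release_peak(1.20, 80, 0.002):.2f} V")  # peaks near target
print(f"LLC enabled:   {release_peak(1.20, 80, 0.000):.2f} V")  # peaks well above it
```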
|
# ¿ Jul 22, 2011 09:56 |
|
I'm not sure, I would verify that Loadline Calibration is disabled in the BIOS and not really worry about it if you're not having stability problems.
|
# ¿ Jul 22, 2011 22:58 |
|
Grumperfish posted:On my EX58-UD3R core voltage doesn't seem to deviate much when (C1E/SS) power states change, but with C6 and LLC enabled I noted a sudden split-second jump to 1.4V on an OCCT log. Given my (anecdotal) experience, LLC and C1E's more or less fine, but there's probably a reason why Gigabyte set C6 disabled by default.
|
# ¿ Jul 23, 2011 02:26 |
|
Agreed posted:... even on an Asus board? For me setting it to high was the key to 4.7GHz rather than 4.5GHz. Totally stable, very safe temperatures, Offset mode with all power savings in the Intel specification enabled because I am a fan of the planet when I'm not pretending to kill aliens; without LLC on, it's not stable unless I go with a much higher voltage offset that causes it to idle and run at unnecessarily high voltages. What I look at is how much wattage it's using and that seems fine, so what's wrong with LLC if you aren't using Gigabyte? The chips seem to be able to handle the juice just fine. If you actually care about the power consumption of your chip, do some tests to find out where its sweet spot is. There's a point where increasing the core voltage and clock speed starts having massive current draw implications for incredibly minor performance gains, and that point is usually around 200MHz or so below the core limit. If you could cut power draw 50% by dropping the clock speed only 5%, that's probably worth it. Bonus Edit: Here's an article from Xbitlabs showing how power consumption changes when overclocking for various CPUs, though it doesn't cover Sandy Bridge. The Core i7 860 they tested overclocked from 2.8GHz to 3.4GHz with only a minor power usage increase, but each step beyond that caused increasingly massive jumps in power consumption. Alereon fucked around with this message at 03:52 on Jul 24, 2011 |
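To put toy numbers on that sweet spot: dynamic power scales roughly as frequency times voltage squared, and voltage has to climb fast near the chip's limit. The frequency/voltage pairs below are invented to show the shape of the curve, not measured Sandy Bridge values:

```python
# Back-of-the-envelope dynamic power model, P ~ f * V^2, normalized to a
# stock operating point. All operating points are hypothetical.

def rel_power(freq_ghz, vcore, base=(3.4, 1.10)):
    """Power relative to the base (frequency, voltage) operating point."""
    f0, v0 = base
    return (freq_ghz / f0) * (vcore / v0) ** 2

for f, v in [(3.4, 1.10), (4.2, 1.20), (4.5, 1.30), (4.7, 1.42)]:
    print(f"{f} GHz @ {v:.2f} V -> {rel_power(f, v):.2f}x power")
```

In this toy curve the last 200MHz (about 4% more clock) costs roughly 25% more power, which is exactly the kind of knee I'm talking about.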
# ¿ Jul 24, 2011 03:48 |
|
greasyhands posted:How does a 7% cpu voltage bump increase the entire system draw (of which CPU is obviously only a significant fraction) 18%? Note that the same ISN'T true for undervolting below stock: leakage makes up a growing share of total power as voltage drops (as do resistive losses), so you rapidly hit diminishing returns as you push voltage down.
|
# ¿ Jul 24, 2011 04:47 |
|
Longinus00 posted:Joule's first law states P = IV. Substituting in Ohm's law (I=V/R) gets you P = V²/R. If you increase voltage from V1 to V2 you can find the increase in power as the ratio P2/P1 = (V2/V1)².
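Plugging the 7% bump from a few posts up into that ratio (resistive model only; real CPU power also scales with frequency, and VRM/PSU losses add on top, so this is a floor rather than the whole answer to the 18% system-draw question):

```python
# V^2 scaling from Joule's law + Ohm's law: P2/P1 = (V2/V1)^2 at fixed
# resistance. Frequency changes and conversion losses are ignored here.

def power_ratio(v1, v2):
    """P2/P1 for a voltage change from v1 to v2 at fixed resistance."""
    return (v2 / v1) ** 2

# A 7% core voltage bump:
print(f"+{(power_ratio(1.00, 1.07) - 1) * 100:.1f}% CPU power")  # -> +14.5%
```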
|
# ¿ Jul 24, 2011 08:53 |
|
Agreed posted:Still, if he has info on Asus like he does on gigabyte then I would like to hear him out.
|
# ¿ Jul 25, 2011 05:50 |
|
Anandtech has posted an article testing memory bandwidth scaling with Sandy Bridge i7 processors, including at heavily overclocked speeds (4.8GHz). This confirms previous tests showing that Sandy Bridge isn't memory bandwidth limited at all, but it's a bit surprising to see almost no scaling even when overclocked. The best bang for the buck right now is DDR3-1600 CL9, but if you REALLY need to shave off a couple bucks, DDR3-1333 CL9 doesn't really hurt anything. This also suggests we can expect no performance improvements from the quad-channel memory architecture on the LGA-2011 Sandy Bridge-E platform, beyond what the additional CPU cores and integrated PCI-E controller offer.
|
# ¿ Jul 25, 2011 12:24 |
|
Combat Pretzel posted:I fail to see the relation between this and the Anandtech test. They didn't do tests with the system forced to single channel mode. I'd figure that multiple memory heavy threads could profit from more channels. At least with an increasing amount of cores. Some random loose Googling revealed some old tests, where there were minimal effects with dual core CPUs. If we're talking about six hyperthreaded cores, this might end up in a more noticeable effect.
|
# ¿ Jul 25, 2011 17:45 |
|
Agreed posted:Well, the default setting still has LLC, you cannot disable it altogether,
quote:and there is a separate adjustment to control CPU current (allowing for between 100% and 140%).
quote:Further, in offset voltage control mode, it seems you don't have especially precise control over the voltage anyway
I and many other Asus and ASRock users have noted that it has brief excursions to wherever it feels like it ought to be when overclocking.
quote:Interesting to read that 1600 is the sweet spot for RAM at higher clocks, I had heard 1333 before now. Guess I made an alright choice going with 1600 9-9-9-24 and kicking it up to 1T then?
One interesting memory-related fact: as memory gets more complex, less and less of the chip is devoted to the DRAM cells storing data and more to logic and hardware related to maintaining signal integrity. Back in the day a DRAM chip was almost entirely memory with a small amount of other stuff; now it's mostly other stuff, and by the time DDR4 comes out there will be a tiny island of memory in the middle of logic/timing/signal hardware on each chip.
|
# ¿ Jul 26, 2011 06:52 |
|
spasticColon posted:A single threaded app that needs a lot of CPU would only cause one of the cores to hit full speed right? Isn't that how Sandy Bridge chips work?
|
# ¿ Jul 28, 2011 05:25 |
|
Rumors from SemiAccurate are that Sandy Bridge-E (Nehalem replacement) has been pushed back until Q1 of next year due to bugs with the on-die PCI-E 3.0 controller.
|
# ¿ Aug 4, 2011 04:16 |
|
PC LOAD LETTER posted:Supposedly 1.35v is the max for 24/7 use. A little higher is certainly fine for a while, but you usually need water cooling at that point unless you like the sound of jet engines in your case. I don't think anyone knows exactly how long it'll take to actually kill the chip at a given voltage outside of Intel though. Doing quick n' dirty googles shows people who've gone up to 1.5v and have had their chips suddenly die already.
|
# ¿ Aug 4, 2011 23:12 |
|
PC LOAD LETTER posted:Well that depends on how you do it. The cheap pre-built kits can indeed suck; once you get around or over $100, with dual or triple 120mm-fan radiators, they seem to get decent to good with low noise.
|
# ¿ Aug 5, 2011 04:04 |
|
Combat Pretzel posted:Naw, I thought the Sandy Bridges all run at 1333 and the upcoming Ivy Bridges would bump it to 1600. I run four 4GB DIMMs, the CPU would probably not dig 1600, anyway.
|
# ¿ Aug 5, 2011 22:24 |
|
I posted this in the Overclocking thread awhile back, but I suggest anyone in the market for a cooler take a look at the $49.99 Thermalright HR-02 Macho. It performs almost as well as the $80-$100 dual-fan tower coolers at half the price, making it a tremendous value. It also includes a Thermalright TY-140 140mm fan, which is the best fan currently on the market, with the most airflow at the lowest noise levels; given that the fan retails for $15-20 on its own, it's really an incredible deal. It's more expensive than the Cooler Master Hyper 212+, but the performance is a lot better.
|
# ¿ Aug 5, 2011 23:51 |
|
spasticColon posted:Regarding the 1T vs. 2T command rate crap, does having a 1T command rate help gaming performance at all, or is it still like the DDR1 days, where 1T didn't get better gaming performance? I have my Corsair Vengeance RAM at DDR3-1600 9-9-9-24 2T (XMP mode) right now.
|
# ¿ Aug 6, 2011 05:58 |
|
I wouldn't mind them selling K-unlock codes for <$20 each.
|
# ¿ Aug 18, 2011 00:37 |
|
SemiAccurate is reporting that bugs in the PCI-Express 3.0 PHY may lead Intel to launch Sandy Bridge-E without PCIe 3.0 support. Previous rumors were that SB-E was pushed back to work on getting PCIe 3.0 working, but apparently that wasn't successful enough.
|
# ¿ Sep 11, 2011 00:29 |
|
Tab8715 posted:Hmm, on the subject of hardware none of the OEMs are down with the ultrabook platform.
|
# ¿ Sep 13, 2011 00:34 |
|
COCKMOUTH.GIF posted:That's a pretty good thought, too. I was thinking about weighing the option of going with a Xeon but I was always under the impression they were more expensive. I haven't really taken a good look at their cost now compared to the regular desktop line. If the price/features are right, I could always go with Sandy Bridge once Ivy Bridge comes out. Maybe then Sandy Bridge Xeon stuff will be cheaper and have more on-board features.
|
# ¿ Sep 13, 2011 04:26 |
|
necrobobsledder posted:A Xeon E3-1230 is ~$240: http://www.newegg.com/Product/Product.aspx?Item=N82E16819115083
|
# ¿ Sep 13, 2011 20:26 |
|
Wedesdo posted:Are Xeons also limited to 4 bins above their max turbo, in terms of overclocking?
|
# ¿ Sep 13, 2011 23:17 |
|
More Ivy Bridge info from Anandtech:
Here's a slide deck regarding CPU changes in Ivy Bridge
Here's a slide deck regarding GPU changes in Ivy Bridge
Here are a couple notes regarding overclocking changes in Ivy Bridge (slide from the CPU deck above)
The Intel HD Graphics in Ivy Bridge will have full DX11 support along with GPU computing support. I'm thinking of making a thread dedicated to low-power computing, specifically ARM, Atom, Bobcat, and the future Intel Haswell/Silvermont architectures; that may happen soon if I stop feeling lazy and tired.
|
# ¿ Sep 14, 2011 03:09 |
|
incoherent posted:Those x79 boards and the memory slots Alereon fucked around with this message at 08:48 on Sep 14, 2011 |
# ¿ Sep 14, 2011 08:45 |
|
|
I've been thinking for a while that it would make more sense to put Thunderbolt controllers on high-end graphics cards, since they already have multiple DisplayPort ports and more PCI-Express bandwidth than they know what to do with. It would also be a smaller percentage of the cost on a graphics card versus a motherboard, and it would let you add ports to an existing system.
|
# ¿ Sep 14, 2011 11:18 |