|
Funkalicious posted:Can anybody tell me if using the integrated GPU on my Core i5 3570K will have any effect on CPU performance? I just want to use it to run a third monitor, as this is not possible with my current video card. It shouldn't. With poor cooling, it might cause the CPU to heat up faster than it otherwise would, which could make it throttle down. Also, in a very memory-intensive application there could be some difference, since the integrated GPU does take away a fraction of your available memory and memory bandwidth. But in real-world scenarios, I don't think you'll see any difference.
|
# ? Jan 12, 2013 06:20 |
|
Will Haswell increase the number of 6 Gbps SATA ports available? From my understanding, IVB only supports two natively, correct? Does it make any particular difference whether the ports are native or not?
|
# ? Jan 13, 2013 12:52 |
|
The Lynx Point chipsets for Haswell will have six native SATA 3.0 ports. Non-native ports have generally turned out to be sub-par.
|
# ? Jan 13, 2013 13:03 |
|
Grim Up North posted:The Lynx Point chipsets for Haswell will have six native SATA 3.0 ports. Non-native ports have generally turned out to be sub-par. And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right?
|
# ? Jan 13, 2013 17:44 |
|
Bob Morales posted:And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right? It's thinking like this that has me looking for some of the older pre-universal-TRIM drives with good onboard garbage collection to stick on my non-Intel SATA ports. Fast storage is awesome, and finding good deals now and then leads to accumulating more small drives to maximize GB/$ rather than just sticking one or two massive ones (relatively speaking) on the SATA 3 ports. SATA2 SSDs are still fast as all hell; you just lose the absurdly good sequential speeds while keeping the random-access advantage.
|
# ? Jan 13, 2013 22:44 |
|
I have a question about CPU allocation. I have a 3570K, and I recently tried setting affinity while rendering in 3ds Max: three cores for the render and the last one to play L4D. The game's frame rate was too low to be playable, so I took another core away from the renderer, but the improvement was tiny. I did make sure those two cores were not running anywhere near 100%. So why aren't two cores on my CPU enough to play a relatively average video game? Ok thanks VVV Ervin K fucked around with this message at 18:59 on Jan 15, 2013 |
# ? Jan 15, 2013 06:59 |
|
Affinity isn't a good way to manage that. Try setting the renderer's process priority to 'low' instead. The game should work normally then, unless the renderer is using the GPU as well. This is what I do when encoding video in the background and it works well: no apparent performance penalty in-game and 100% CPU usage. Hooray for CPU scheduling.
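The reason priority beats affinity: affinity fences the renderer onto specific cores even when the game is idle, while a low priority lets the renderer use every core but yield instantly whenever the game wants one. A minimal POSIX sketch of the idea (on Windows the equivalent is a priority class, set via Task Manager or `start /low`):

```python
import os

# POSIX "niceness" runs from -20 (most favored) to 19 (least favored).
# A renderer reniced to 19 still eats 100% of otherwise-idle CPU time,
# but the scheduler preempts it the moment a normal-priority process
# (the game) becomes runnable.
current = os.nice(0)             # incrementing by 0 just reads the value
os.nice(max(0, 19 - current))    # drop this process to the lowest priority
print(os.nice(0))                # -> 19: background-only scheduling
```

Unprivileged processes can always lower their own priority this way; raising it back requires root, which is why the renderer (not the game) is the one you demote.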
|
# ? Jan 15, 2013 07:38 |
|
AnandTech (subjectively) compared Haswell GT3e to a 650M: http://www.anandtech.com/show/6600/intel-haswell-gt3e-gpu-performance-compared-to-nvidias-geforce-gt-650m Supposedly they're roughly comparable, which is a huge leap compared to Ivy Bridge. While I'm guessing this draws too much power to go into real ULV Ultrabooks, I'd love to see it in something that's still moderately thin and light.
|
# ? Jan 16, 2013 21:44 |
|
I understand that power draw will actually be rather low; most of the performance gain comes from the on-die memory, which actually reduces power usage by avoiding accesses to system RAM. There are a lot more shader ALUs on Haswell, but that was an intentional choice to allow a lower graphics clock speed, again improving power efficiency. My understanding is that GT3 will be used on the quad-core notebook CPUs with current TDPs of 35W-55W; you can fit a 35W CPU into an Ultrabook-like device if you put effort into cooling.
|
# ? Jan 16, 2013 21:58 |
|
I'm pretty sure Intel will distinguish between GT3 and GT3e, though, the latter being the model that was demonstrated and which uses the eDRAM buffer. It remains to be seen which SKUs get GT3 and which get GT3e.
|
# ? Jan 16, 2013 23:06 |
|
I could be wrong, but I don't believe there are separate GT3/GT3e devices. My understanding is that you have the current GT1/GT2 lineup (meaning that which chips get GT1 vs. GT2 is roughly similar, not that they are the same as Ivy Bridge), with the addition of GT3, which is GT2 plus Crystalwell DRAM. It's early, so we don't have many details, though I think there's probably more in the SemiAccurate articles behind the paywall.
|
# ? Jan 17, 2013 00:13 |
|
Bob Morales posted:And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right? It's an issue this gen for people who want to run RAID 1 with SSD caching. You really want everything to be on a SATA III port.
|
# ? Jan 17, 2013 01:07 |
|
That shouldn't really matter; hard drives can't saturate SATA 300, so you only "need" a SATA 600 port for the SSD. Of course, SSD caching is kind of an obsolete solution, due to the severe limitations of Intel SRT and the DataPlex software (and the plunging prices of SSDs). It will be interesting to see if Samsung does something cool with DataPlex now that they've acquired it.
|
# ? Jan 17, 2013 01:19 |
|
Alereon posted:I could be wrong, but I don't believe there are separate GT3/GT3e devices. My understanding is that you have the current GT1/GT2 lineup (meaning that which chips get GT1 vs. GT2 is roughly similar, not that they are the same as Ivy Bridge), with the addition of GT3, which is GT2 plus Crystalwell DRAM. It's early, so we don't have many details, though I think there's probably more in the SemiAccurate articles behind the paywall. I can try to hunt down a better link later, but IDF coverage on AnandTech suggested GT1, GT2, and GT3 configs, plus an "optional" eDRAM configuration. GT1 is a cut-down EU count, GT2 matches HD 4000's EU count, GT3 is 2x HD 4000, and GT3e is GT3 plus eDRAM.
|
# ? Jan 17, 2013 01:28 |
|
Well, this is neat: base clock overclocking might return in Haswell.
|
# ? Jan 17, 2013 03:22 |
|
Endymion FRS MK1 posted:Well this is neat, base clock overclocking might return in Haswell. Unfortunately that guy is not quite right. Intel will offer base-clock strap overclocking like the LGA 2011 Sandy Bridge-E i7s. Finer-grained BCLK overclocking requires a PLL on every frequency domain, which is a big deal on a power-sensitive chip. Strap-based overclocking will let you get a boost, but only a coarse one. There won't be much limit-pushing or fine control that way.
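To see why straps are coarse: core frequency is BCLK × multiplier, and a strap only lets BCLK jump between a few fixed values rather than moving in fine steps. The 100/125/167 MHz figures below are the Sandy Bridge-E precedent, used purely for illustration; Haswell's actual straps weren't confirmed at the time.

```python
# Frequency = BCLK x multiplier. With straps, BCLK can only take a few
# fixed values, so the achievable frequencies form a coarse grid rather
# than a continuum.
straps_mhz = [100, 125, 167]   # SB-E-style straps (illustrative)
multiplier = 34

for bclk in straps_mhz:
    ghz = bclk * multiplier / 1000
    print(f"{bclk} MHz strap x {multiplier} = {ghz:.3f} GHz")
```

At a fixed multiplier the jump from one strap to the next is huge (here 3.4 → 4.25 → 5.678 GHz), which is why straps alone can't deliver the fine-grained limit-pushing that per-MHz BCLK control used to.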
|
# ? Jan 17, 2013 04:55 |
|
Alereon posted:Of course, SSD caching is kind of an obsolete solution, due to the severe limitations of Intel SRT and the DataPlex software (and the plunging prices of SSDs).
|
# ? Jan 17, 2013 18:12 |
|
Factory Factory posted:Unfortunately that guy is not quite right. Intel will offer base-clock strap overclocking like the LGA 2011 Sandy Bridge-E i7s. Finer-grained BCLK overclocking requires a PLL on every frequency domain, which is a big deal on a power-sensitive chip. Strap-based overclocking will let you get a boost, but only a coarse one. There won't be much limit-pushing or fine control that way. Has anyone ever shown a realistic improvement from the few hundred MHz you get out of the exotic overclocking stuff? Especially since PCIe is sensitive to the BCLK.
|
# ? Jan 17, 2013 18:19 |
|
Since the BCLK-locked CPUs, Sandy and Ivy Bridge, aren't really hurting for memory bandwidth, no. You can already clock up the CPU, RAM, and IGP independently. The only concrete effect I've ever heard of BCLK changes having is squeezing more MHz out of a CPU whose PLL won't take another multiplier jump but which otherwise isn't at its frequency wall (we're talking over 5 GHz here). This varies by architecture: AMD and Nehalem CPUs benefit from the increased QPI/HyperTransport speeds, or at least need them to be a certain minimum proportion of the main CPU clock, and/or they see performance benefits from RAM speed increases.
|
# ? Jan 17, 2013 19:51 |
|
This seems like as good a place to discuss it as any: Intel to add Cisco in Foundry Effort. If true, this is bad news for Intel's competitors. When volumes go up in Intel fabs, it keeps costs low for their other products (production costs, that is, not prices). No acknowledgment from either company yet, though. canyoneer fucked around with this message at 00:34 on Jan 18, 2013 |
# ? Jan 18, 2013 00:01 |
|
I haven't seen anyone discuss this yet here: Intel to exit the desktop motherboard business. Honestly, this blows. Intel boards have always been rock-solid reliable for me*, and I just built my brother a new computer with one. They (usually) had the most sensical/least retarded distribution of PCIe slots on the board. Also, they tended to be first to kick legacy poo poo off their boards (though PS/2 seems to be hanging on a while longer, depending on the board). Most importantly (to me), they offered access to an Intel network interface without too significant a price premium. </anecdote> * Okay, my DG965WH has an eccentric issue where it will regularly BSOD in Windows XP if it's using onboard video and there's no card in the PCIe x16 slot, even at idle, but that's such an edge case it's irrelevant to me, since Intel onboard video before the HD 3000 sucks anyway.
|
# ? Jan 24, 2013 14:14 |
|
Navaash posted:I haven't seen anyone discuss this yet here: I'm quite sure Intel boards have been made by Foxconn for ages, and Foxconn doesn't exactly have a reputation for quality in the PC business. Information on that is hard to come by because hardly anyone pays attention to non-overclocking boards.
|
# ? Jan 24, 2013 16:30 |
|
Malcolm XML posted:Has anyone ever shown a realistic improvement from the few hundred MHz you get out of the exotic overclocking stuff? Overclocking isn't about realistic improvement. It's about raw numbers that go up when you clock. The person with the highest numbers wins.
|
# ? Jan 24, 2013 16:37 |
|
Combat Pretzel posted:I wish SSD caching would be a function of the OS. Put it in the new Storage Spaces stack, or whatever. In ZFS it was called L2ARC and worked drat fine. I think the all-in-one approach Seagate is taking with their Momentus XT drives is the best long-term approach, but at the moment it primarily does read caching because they only have 16GB of SSD to work with (and a chunk of that is reserved for OS boot acceleration). If they start getting 32-64+ GB of SSD on there, then instead of just read acceleration it could become a two-stage storage mechanism where write data is also staged there as an NV cache, similar to what high-end RAID controllers and NASes do.
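The two-stage idea above is essentially a write-back cache: writes land on the small fast tier and spill to the big slow tier on eviction. A toy sketch of that policy (the class and its capacity accounting are purely illustrative, not anything Seagate actually ships):

```python
from collections import OrderedDict

class TwoStageStore:
    """Toy write-back cache: a small fast tier (the SSD portion)
    in front of a large slow tier (the spinning disk)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # LRU order: oldest entry first
        self.slow = {}
        self.cap = fast_capacity

    def write(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)                # newest at the back
        while len(self.fast) > self.cap:          # over capacity:
            k, v = self.fast.popitem(last=False)  # evict the oldest...
            self.slow[k] = v                      # ...down to the slow tier

    def read(self, key):
        if key in self.fast:                      # cache hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]                    # miss: fetch and promote
        self.write(key, value)
        return value
```

The win is that writes complete at fast-tier speed and only the eviction traffic touches the slow tier, which is the same reason RAID controllers put NV caches in front of their disks.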
|
# ? Jan 24, 2013 18:16 |
|
teh z0rg posted:Overclocking isn't about realistic improvement. It's about raw numbers that go up when you clock. The person with the highest numbers wins. Eh, not true. Some things do benefit massively from overclocking: anything that burns maximum CPU for a sustained period, like video encoding.
|
# ? Jan 24, 2013 18:17 |
|
Navaash posted:I haven't seen anyone discuss this yet here: drat it. It's not really a surprise, though. Which manufacturer is the closest to making motherboards like Intel's? (with regards to decent on-board components, good layout and configuration of headers/jumpers, informative manual, etc.)
|
# ? Jan 25, 2013 04:30 |
|
THEY CALL HIM BOSS posted:drat it. It's not really a surprise, though. Which manufacturer is the closest to making motherboards like Intel's? (with regards to decent on-board components, good layout and configuration of headers/jumpers, informative manual, etc.) Asus boards tend to be quite decent. I mean, usually better than the rest (Gigabyte, ASRock, etc.). They do have their own flaws, though; I can't defend them to infinity.
|
# ? Jan 25, 2013 04:43 |
|
Surprisingly I've always liked Gigabyte and Biostar.
|
# ? Jan 25, 2013 04:51 |
|
ASRock is white-label Asus.
|
# ? Jan 25, 2013 04:56 |
|
Looks like I'm only buying Asus now, since no one else seems to use the onboard Intel GbE controller
|
# ? Jan 25, 2013 05:00 |
|
WhyteRyce posted:Looks like I'm only buying Asus now, since no one else seems to use the onboard Intel GbE controller Ahhh ha! If that's true then I suppose I'm going ASUS. I need gigabit LAN and I'd rather not have to add a separate NIC.
|
# ? Jan 25, 2013 06:06 |
|
MSI's GD- series boards also use the Intel NIC, like the Z77A-GD55.
|
# ? Jan 25, 2013 06:21 |
|
I've always stuck to Asus and ASRock. ASRock boards in particular have been incredibly solid. I know they're the same company, but the lower price point is nice.
|
# ? Jan 25, 2013 18:47 |
|
Actually, Asus spun off ASRock years ago. It's an independent company now.
|
# ? Jan 25, 2013 19:15 |
|
As far as I heard a few years ago, ASRock is a brand of Pegatron (the "Peg" comes from Pegasus), which was the OEM arm of Asus. So you could say buying ASRock gets you the original Asus quality. E: Scratch that, a quick Google confirms that Pegatron is no longer an ODM for Asus. They've been completely independent since the third quarter of 2012. Grim Up North fucked around with this message at 19:51 on Jan 25, 2013 |
# ? Jan 25, 2013 19:39 |
|
What's the difference between Intel and other integrated NICs anyway? I always see people wanting Intel NIC or claiming they're the best but no one ever says why.
|
# ? Jan 25, 2013 19:52 |
|
Intel NICs have offload functionality for various jobs in the TCP/IP stack. Realtek NICs are dumb PHYs.
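To make "offload" concrete: without checksum offload, the CPU computes the 16-bit ones'-complement Internet checksum (RFC 1071) over every segment it sends or receives, while an offload-capable NIC does that arithmetic in hardware. A software version of the per-packet work looks like:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: the per-packet arithmetic the CPU
    performs when the NIC has no checksum offload."""
    if len(data) % 2:
        data += b"\x00"                    # pad to a whole 16-bit word
    total = 0
    for i in range(0, len(data), 2):       # sum big-endian 16-bit words
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                     # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                 # ones' complement of the sum
```

Verification is the same arithmetic: a packet whose checksum field is already filled in sums to 0xFFFF, so the function returns 0. Multiply this by every packet at gigabit rates and the appeal of doing it in NIC silicon is clear.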
|
# ? Jan 25, 2013 19:55 |
|
I've had nothing but trouble when trying to configure NIC teaming and some other features on servers with Broadcom NICs, while the Intel ones have been easy and reliable. In a home environment, stuff like that and TOE aren't going to matter a whole lot, but if a couple extra bucks means I'm not paying Broadcom for their lovely chips, then I'm all for it.
|
# ? Jan 25, 2013 19:57 |
|
I think I remember a handful of people with HDHomeRun Primes having to upgrade their NICs because the Realteks on their boards were garbage and they were getting artifacting and stuttering. It's not a common thing, though. And I'm assuming those Realteks have crappy power management. Gotta save those precious milliwatts. WhyteRyce fucked around with this message at 20:13 on Jan 25, 2013 |
# ? Jan 25, 2013 20:10 |
|
Realtek NICs have also had annoying driver bugs that broke connectivity with some but not all websites, and they get stuck in "deep sleep mode" (requiring power to be cut for a while) more often than others. Very annoying.
|
# ? Jan 25, 2013 22:31 |