lazydog
Apr 15, 2003

Funkalicious posted:

Can anybody tell me if using the integrated GPU on my Core i5-3570K will have any effect on CPU performance? I just want to use it to run a third monitor, as this is not possible with my current video card.

It shouldn't.

If you had poor cooling, it might cause the CPU to overheat faster than it otherwise would, which could cause it to throttle down.

Also, I guess if you had a very memory intensive application, there could be some difference, since the integrated GPU does take away a fraction of your available memory and memory bandwidth. But in real world scenarios, I don't think you'll see any difference.
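To put rough numbers on it (back-of-the-envelope only, assuming dual-channel DDR3-1600, which is typical for a 3570K build):

```python
# Back-of-the-envelope illustration, not a measurement.
channels = 2
bytes_per_transfer = 8         # each DDR3 channel is 64 bits wide
transfers_per_second = 1600e6  # DDR3-1600 = 1600 MT/s

peak_bandwidth = channels * bytes_per_transfer * transfers_per_second
print(f"Theoretical peak memory bandwidth: {peak_bandwidth / 1e9:.1f} GB/s")  # ~25.6

# Scanning out a third 1920x1080 monitor at 60 Hz from the iGPU reads roughly:
scanout = 1920 * 1080 * 4 * 60  # 32-bit pixels, 60 refreshes per second
print(f"Scanout for one 1080p60 display: {scanout / 1e9:.2f} GB/s")           # ~0.50
```

Driving the extra display eats maybe 2% of theoretical memory bandwidth, which is why you generally won't notice it.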

Henry Black
Jun 27, 2004

If she's not making this face, you're not doing it right.
Fun Shoe
Will Haswell increase the number of 6Gbps SATA ports available? From my understanding, IVB only supports two natively, correct? Does it make any particular difference whether the ports are native or not?

Grim Up North
Dec 12, 2011

The Lynx Point chipsets for Haswell will have six native SATA 3.0 ports. Non-native ports have generally turned out to be sub-par.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Grim Up North posted:

The Lynx Point chipsets for Haswell will have six native SATA 3.0 ports. Non-native ports have generally turned out to be sub-par.

And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Bob Morales posted:

And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right?

It's thinking like this that has me looking for some of the older pre-universal-TRIM drives with good built-in garbage collection to stick on my non-Intel SATA ports :mad:

Fast storage is awesome, and the way good deals come and go leads to accumulating more smaller drives to maximize GB/$ rather than just sticking one or two massive ones (relatively speaking) on the SATA3 ports.

SATA2 SSDs are still fast as all hell; you just lose the absurdly good sequential speeds that go with the random-access advantage.

Ervin K
Nov 4, 2010

by Jeffrey of YOSPOS
I have a question about CPU affinity. I have a 3570K and I recently tried setting affinity while rendering in 3ds Max, giving the renderer just three cores and leaving the last one free to play L4D. The game's frame rate was too low to be playable, so I took another core away from the renderer, but the improvement was tiny. I did make sure those two cores were not running anywhere near 100%. So why aren't two cores on my CPU enough to play a relatively average video game?


Ok thanks VVV

Ervin K fucked around with this message at 18:59 on Jan 15, 2013

Spatial
Nov 15, 2007

Affinity isn't a good way to manage that. Try setting the renderer's process priority to 'low' instead. The game should work normally then, unless the renderer is using the GPU as well.

This is what I do when encoding video in the background and it works well: no apparent performance penalty in-game and 100% CPU usage. Hooray for CPU scheduling.
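If you'd rather script it than click through Task Manager, here's a minimal sketch using the psutil package on Windows (the 3dsmax.exe process name is just an example; swap in whatever your renderer is actually called):

```python
import psutil  # third-party: pip install psutil

# Find the renderer's process(es) and drop them to "Low" priority so the
# scheduler favors the game. The priority constant below is Windows-only.
for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == "3dsmax.exe":  # example name only
        proc.nice(psutil.IDLE_PRIORITY_CLASS)  # same as "Low" in Task Manager
        print(f"Lowered priority of PID {proc.pid}")
```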

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
AnandTech (subjectively) compared Haswell GT3e to a GT 650M: http://www.anandtech.com/show/6600/intel-haswell-gt3e-gpu-performance-compared-to-nvidias-geforce-gt-650m

Supposedly they're roughly comparable, which is a pretty huge leap compared to Ivy Bridge. While I'm guessing this draws too much power to go into real ULV ultrabooks, I'd love to see it in something that's still moderately thin and light.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I understand that power draw will actually be rather low; most of the performance comes from the on-die memory, which actually reduces power usage by avoiding accesses to system RAM. There are a lot more shader ALUs on Haswell, but that was an intentional choice to allow a lower graphics clock speed, again improving power efficiency. My understanding is that GT3 will be used on the quad-core notebook CPUs with current TDPs from 35W to 55W; you can fit a 35W CPU into an Ultrabook-like device if you put effort into cooling.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I'm pretty sure Intel will distinguish between GT3 and GT3e, though, the latter being the model that was demonstrated and which uses the eDRAM buffer. It remains to be seen which SKUs get GT3 and which get GT3e.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I could be wrong, but I don't believe there are separate GT3/GT3e devices. My understanding is that you have the current GT1/GT2 lineup (meaning that which chips get GT1 vs. GT2 is roughly similar, not that they are the same as Ivy Bridge), with the addition of GT3, which is GT2+Crystalwell DRAM. It's early, so we don't have many details, though I think there's probably more in the SemiAccurate articles that are behind the paywall.

Chuu
Sep 11, 2004

Grimey Drawer

Bob Morales posted:

And generally that's not an issue because who's going to have more than 2 high-speed (SSD) devices on one board, right?

It's an issue this gen for people who want to run RAID 1 with SSD caching. You really want everything to be on a SATA III port.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
That shouldn't really matter, hard drives can't saturate SATA300, so you only "need" a SATA600 port for the SSD. Of course, SSD caching is kind of an obsolete solution, due to the severe limitations of Intel SRT and the DataPlex software (and the plunging prices of SSDs). It will be interesting to see if Samsung does something cool with DataPlex now that they've acquired it.
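For reference, the rough arithmetic behind that (illustrative figures, not benchmarks):

```python
# SATA uses 8b/10b encoding, so only 80% of the line rate carries data.
def usable_mb_per_s(line_rate_gbps):
    return line_rate_gbps * 1e9 * 0.8 / 8 / 1e6

print(f"SATA 3Gb/s: ~{usable_mb_per_s(3):.0f} MB/s")  # ~300
print(f"SATA 6Gb/s: ~{usable_mb_per_s(6):.0f} MB/s")  # ~600

# A fast 7200 RPM hard drive manages maybe 150-180 MB/s sequential, well under
# the 3Gb/s ceiling, while a good SATA SSD pushes ~500 MB/s and wants 6Gb/s.
```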

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Alereon posted:

I could be wrong, but I don't believe there are separate GT3/GT3e devices. My understanding is that you have the current GT1/GT2 lineup (meaning that which chips get GT1 vs. GT2 is roughly similar, not that they are the same as Ivy Bridge), with the addition of GT3, which is GT2+Crystalwell DRAM. It's early, so we don't have many details, though I think there's probably more in the SemiAccurate articles that are behind the paywall.

I can try to hunt down a better link later, but IDF coverage on AnandTech suggested GT1, GT2, and GT3 configs, plus an "optional" eDRAM configuration. GT1 is a low EU count, GT2 is HD 4000-level EUs, GT3 is 2x HD 4000, and GT3e is GT3 plus eDRAM.

Endymion FRS MK1
Oct 29, 2011

I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.
Well, this is neat: base clock overclocking might return in Haswell.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Unfortunately that guy is not quite right. Intel will offer base clock strap overclocking like the LGA 2011 Sandy Bridge-E i7s. Finer-grained BCLK overclocking requires a PLL on every frequency domain, which is a big deal on a power-sensitive chip. Strap-based overclocking will let you get a boost, but only a coarse one. There won't be much limit-pushing or fine control that way.
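To illustrate how coarse that is, a quick sketch assuming the 100/125/166 MHz straps from LGA 2011 Sandy Bridge-E (Haswell's final straps weren't confirmed at the time):

```python
# Assumed straps, borrowed from Sandy Bridge-E for illustration.
straps_mhz = [100, 125, 166]
multiplier = 34  # e.g. a locked part's top multiplier

for bclk in straps_mhz:
    print(f"{bclk} MHz x {multiplier} = {bclk * multiplier / 1000:.2f} GHz")

# 3.40 -> 4.25 -> 5.64 GHz with nothing usable in between: that's the "coarse"
# part. Fine-grained BCLK changes would drag PCIe/DMI along unless every
# frequency domain gets its own PLL.
```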

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Alereon posted:

Of course, SSD caching is kind of an obsolete solution, due to the severe limitations of Intel SRT and the DataPlex software (and the plunging prices of SSDs).
I wish SSD caching were a function of the OS. Put it in the new Storage Spaces stack, or whatever. In ZFS it's called L2ARC and works drat fine.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Factory Factory posted:

Unfortunately that guy is not quite right. Intel will offer base clock strap overclocking like the LGA 2011 Sandy Bridge-E i7s. Finer-grained BCLK overclocking requires a PLL on every frequency domain, which is a big deal on a power-sensitive chip. Strap-based overclocking will let you get a boost, but only a coarse one. There won't be much limit-pushing or fine control that way.

Has anyone ever shown a realistic improvement from the few hundred MHz you get out of the exotic overclocking stuff?

Especially since PCI stuff is sensitive to the BCLK.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Since the BCLK-locked CPUs, Sandy and Ivy Bridge, aren't really hurting for memory bandwidth, no. You can clock up the CPU, RAM, and IGP independently already. The only concrete effect I've ever heard of BCLK changes having is squeezing more MHz out of a CPU whose PLL won't take another multiplier jump but which otherwise isn't at its frequency wall (we're talking over 5 GHz here).

This varies by architecture. AMD and Nehalem CPUs benefit from the increased QPI/HyperTransport speeds, or at least need them to be a certain minimum proportion of the main CPU clock, and/or they see performance benefits from RAM speed increases.

canyoneer
Sep 13, 2005


I only have canyoneyes for you
This seems like as good a place to discuss it as any:
Intel to add Cisco in Foundry Effort

If true, this is bad news for Intel's competitors. When volumes go up in Intel fabs, it keeps costs low for their other products. (that is, production costs, not prices :v:)

No acknowledgment from either company though.

canyoneer fucked around with this message at 00:34 on Jan 18, 2013

Navaash
Aug 15, 2001

FEED ME


I haven't seen anyone discuss this yet here:

Intel to exit the desktop motherboard business

Honestly, this blows. Intel boards have always been rock-solid reliable for me*, and I just built my brother a new computer with one. They (usually) had motherboards with the most sensical/least retarded distribution of PCIe ports on the board. Also, they tended to be first to kick legacy poo poo off of their boards (though PS/2 seems to be hanging on a while longer, depending on the board). Most importantly (to me), it enabled access to an Intel network interface without too significant a price premium. </anecdote>

* Okay, my DG965WH has an eccentric issue in that the computer will regularly BSOD in Windows XP if it's using onboard video and there's no card in the PCI-E x16 slot, even if idle, but that's such an edge case that's irrelevant to me since Intel onboard video before HD3000 sucks anyway

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Navaash posted:

I haven't seen anyone discuss this yet here:

Intel to exit the desktop motherboard business

Honestly, this blows. Intel boards have always been rock-solid reliable for me*, and I just built my brother a new computer with one. They (usually) had motherboards with the most sensical/least retarded distribution of PCIe ports on the board. Also, they tended to be first to kick legacy poo poo off of their boards (though PS/2 seems to be hanging on a while longer, depending on the board). Most importantly (to me), it enabled access to an Intel network interface without too significant a price premium. </anecdote>

* Okay, my DG965WH has an eccentric issue in that the computer will regularly BSOD in Windows XP if it's using onboard video and there's no card in the PCI-E x16 slot, even if idle, but that's such an edge case that's irrelevant to me since Intel onboard video before HD3000 sucks anyway

I'm quite sure Intel boards have been made by Foxconn for ages, and Foxconn doesn't exactly have a reputation for quality in the PC business. Information on that is hard to come by because hardly anyone pays attention to non-overclocking boards.

teh z0rg
Nov 17, 2012

Malcolm XML posted:

Has anyone ever shown a realistic improvement from the few hundred MHz you get out of the exotic overclocking stuff?

Especially since PCI stuff is sensitive to the BCLK.

Overclocking isn't about realistic improvement. It's about raw numbers that go up when you clock. The person with the highest numbers wins.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Combat Pretzel posted:

I wish SSD caching were a function of the OS. Put it in the new Storage Spaces stack, or whatever. In ZFS it's called L2ARC and works drat fine.

I think the all-in-one approach Seagate is taking with their Momentus XT drives is the best long-term approach, but at the moment it is primarily doing read caching because they only have 16GB of SSD to work with (and a chunk of that is reserved for OS boot acceleration). If they start putting 32-64+ GB of SSD on there, then instead of just read acceleration it could become a two-stage storage mechanism where write data is also staged there as an NV cache, similar to what high-end RAID controllers and NASes are doing.
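For anyone curious what "read caching" means here, a toy sketch of the idea (nothing to do with Seagate's actual firmware, just an LRU cache in front of a slow tier):

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: a small fast tier in front of a big slow tier."""
    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store   # stands in for the hard drive
        self.cache = OrderedDict()     # stands in for the SSD portion

    def read(self, block_id):
        if block_id in self.cache:             # hit: serve from the fast tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]          # miss: read the slow tier...
        self.cache[block_id] = data            # ...and promote the block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

hdd = {i: f"block-{i}" for i in range(1000)}
cache = ReadCache(capacity_blocks=16, backing_store=hdd)
cache.read(42)  # miss, gets promoted
cache.read(42)  # hit
```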

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

teh z0rg posted:

Overclocking isn't about realistic improvement. It's about raw numbers that go up when you clock. The person with the highest numbers wins.

Eh, not true. Some things do benefit massively from overclocking: anything that burns maximum CPU for a while, like video encoding.

PUBLIC TOILET
Jun 13, 2009

Navaash posted:

I haven't seen anyone discuss this yet here:

Intel to exit the desktop motherboard business

Honestly, this blows. Intel boards have always been rock-solid reliable for me*, and I just built my brother a new computer with one. They (usually) had motherboards with the most sensical/least retarded distribution of PCIe ports on the board. Also, they tended to be first to kick legacy poo poo off of their boards (though PS/2 seems to be hanging on a while longer, depending on the board). Most importantly (to me), it enabled access to an Intel network interface without too significant a price premium. </anecdote>

* Okay, my DG965WH has an eccentric issue in that the computer will regularly BSOD in Windows XP if it's using onboard video and there's no card in the PCI-E x16 slot, even if idle, but that's such an edge case that's irrelevant to me since Intel onboard video before HD3000 sucks anyway

drat it. It's not really a surprise, though. Which manufacturer is the closest to making motherboards like Intel's? (with regards to decent on-board components, good layout and configuration of headers/jumpers, informative manual, etc.)

Volguus
Mar 3, 2009

THEY CALL HIM BOSS posted:

drat it. It's not really a surprise, though. Which manufacturer is the closest to making motherboards like Intel's? (with regards to decent on-board components, good layout and configuration of headers/jumpers, informative manual, etc.)

Asus boards tend to be quite decent. I mean... usually better than the rest (Gigabyte, ASRock, etc.). They do have their own flaws, though; I can't defend them to infinity.

NerdPolice
Jun 18, 2005

GINYU FORCE RULES
Surprisingly I've always liked Gigabyte and Biostar.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ASRock is white-label Asus.

WhyteRyce
Dec 30, 2001

Looks like I'm only buying Asus now, since no one else seems to use the onboard Intel GbE controller.

PUBLIC TOILET
Jun 13, 2009

WhyteRyce posted:

Looks like I'm only buying Asus now, since no one else seems to use the onboard Intel GbE controller.

Ahhh ha! If that's true then I suppose I'm going ASUS. I need Gigabit LAN and I'd rather not have to use a new NIC.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
MSI's GD-series boards also use the Intel NIC, like the Z77A-GD55.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I've always stuck to Asus and ASRock. ASRock boards in particular have been incredibly solid; I know they're the same company, but the lower price point is nice.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Actually, Asus spun off ASRock years ago. It's an independent company now.

Grim Up North
Dec 12, 2011

From what I heard a few years ago, ASRock is a brand of Pegatron (the "Peg" as in Pegasus), which was the OEM arm of Asus and still builds a certain percentage of Asus-branded boards, with the other Asus boards being produced by FIC.
So you could say buying ASRock gets you the original Asus quality. :v:


E: Scratch that, a quick Google confirmed that Pegatron is no longer an ODM for Asus. They seem to have been completely independent since the third quarter of 2012.

Grim Up North fucked around with this message at 19:51 on Jan 25, 2013

Mayne
Mar 22, 2008

To crooked eyes truth may wear a wry face.
What's the difference between Intel and other integrated NICs anyway? I always see people wanting an Intel NIC or claiming they're the best, but no one ever says why.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Intel NICs have offload functionality for various jobs in the TCP/IP stack. Realtek NICs are dumb PHYs.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I've had nothing but trouble when trying to configure NIC teaming and some other features on servers with Broadcom NICs, while the Intel ones have been easy and reliable. In a home environment, stuff like that and TOE aren't going to matter a whole lot, but if a couple extra bucks means I'm not paying Broadcom for their lovely chips then I'm all for it.

WhyteRyce
Dec 30, 2001

I think I remember a handful of people with HDHomeRun Primes having to upgrade their NICs because the Realteks on their boards were garbage and they were getting artifacting and stuttering. It's not a common thing, though.

And I'm assuming those Realteks have crappy power management. Gotta save those precious milliwatts

WhyteRyce fucked around with this message at 20:13 on Jan 25, 2013

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Realtek NICs have also had annoying driver bugs that broke connectivity with some but not all websites, and they get stuck in "deep sleep mode" (requiring power to be cut for a while) more often than others. Very annoying.
