Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

SanitysEdge posted:

On the topic of CPU heat, why do people bother putting the heat spreader back on after delidding their processor? Why even ship the processor with the heat spreader in the first place?
The CPU is essentially a small glass square; the heat spreader is there to keep it from being chipped or cracked by excessive or uneven mounting pressure. The reason you delid the processor is to remove the glue that holds the heat spreader to the package (not the thermal paste between the CPU and heat spreader), because it's that glue that adds enough thickness to prevent good thermal contact between the heat spreader and CPU. Once the glue is removed your temperature problems are fixed.

InstantInfidel
Jan 9, 2010

BEST :10bux: I EVER SPENT

Oblivion590 posted:

The chart here actually indicates energy consumption, even though it says power consumption. As long as the performance speedup is larger than the power increase, the chip is more energy-efficient. This sort of confusion between energy and power is fairly common.

edit: nevermind, I hadn't looked at that chart. That is without a doubt the most backward and asinine way they could display the data, what the gently caress.

e2: reading the rest of the article seems to suggest that those guys are complete idiots and are out of their depth, which explains a lot.
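(To make the quoted energy-vs-power point concrete, here's a minimal sketch with made-up numbers: a chip that draws 20% more power but finishes the job 30% sooner still consumes less energy overall.)

code:
#include <stdio.h>

/* Made-up numbers purely for illustration: energy = power * time. */
int main(void) {
    double p_old = 50.0, t_old = 100.0;           /* watts, seconds */
    double p_new = p_old * 1.2;                   /* 20% more power...    */
    double t_new = t_old * 0.7;                   /* ...but 30% less time */
    printf("old chip: %.0f J\n", p_old * t_old);  /* 5000 J */
    printf("new chip: %.0f J\n", p_new * t_new);  /* 4200 J -> more energy-efficient */
    return 0;
}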

InstantInfidel fucked around with this message at 02:47 on Jun 12, 2013

Shaocaholica
Oct 29, 2002

Fig. 5E

SanitysEdge posted:

#shsc is run by an elitist administration; it isn't the channel for the SH/SC forum.
I have been banned from it a few times. That's all I'll say about it because I don't want to poo poo up the thread with IRC drama.

On the topic of CPU heat, why do people bother putting the heat spreader back on after delidding their processor? Why even ship the processor with the heat spreader in the first place?

You don't -have- to put it back on, but if you keep it off you have to modify the mounting mechanism for both the CPU and the heatsink, since both of those parts are built around the assumption that the IHS is there. Without the IHS, the bare CPU die sits about 2.5mm deeper into the socket area. That's deep enough that the CPU retention mechanism will prevent any cooler from making direct contact with the die, so that has to go. It just becomes a big ol' project that not a lot of people are willing to embark on, and it requires tools and additional hardware. Most people just slap the IHS back on, boot their machines up, and are content with the massive temp drop.

Shaocaholica
Oct 29, 2002

Fig. 5E

Alereon posted:

because it's that glue that adds enough thickness to prevent good thermal contact between the heat spreader and CPU. Once the glue is removed your temperature problems are fixed.

Well, removing the heat spreader entirely does yield some gains, since it takes it out of the equation altogether. The heat spreader adds two more thermal surfaces, each with varying degrees of imperfection, and since the inner surface is machined and recessed, it can't be ground flat with traditional industrial grinders, which need a non-recessed surface.

Not a big deal but also not a terribly difficult project. As they say, in for a penny, in for a pound.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

InstantInfidel posted:

edit: nevermind, I hadn't looked at that chart. That is without a doubt the most backward and asinine way they could display the data, what the gently caress.

e2: reading the rest of the article seems to suggest that those guys are complete idiots and are out of their depth, which explains a lot.
It's a chart showing the total amount of energy consumed by each processor under identical conditions; why do you have a problem with it? Lower bars are better because lower energy usage is better, and the difference between the bars is proportional to the difference in energy consumption; that's well-presented data. SPCR is also a well-respected site in their niche of silent/low-noise computing, particularly for HTPCs. The composite score poo poo at the end of the article is stupid, but doing subjective analysis with numbers always is and yet readers still demand it. The rest of the article seems reasonable as a supplement to the more in-depth coverage on sites like Anandtech.

Edit: I think the FIVR was a good choice, it only adds like 7W. If it wasn't for the IHS glue issue we'd all be a lot happier with Haswell.

Alereon fucked around with this message at 04:52 on Jun 12, 2013

Yudo
May 15, 2003

You all are right. I'm just saying the chip gets hot and I wish it were less so. Integrating the voltage regulator and using cheap goo have not helped what was already something of an issue with IVB.

Today I encoded some videos (of my cat. I lead a sad life) and it was so much faster than my old i5 I forgot all about whining. For an hour or so.

InstantInfidel
Jan 9, 2010

BEST :10bux: I EVER SPENT

Alereon posted:

It's a chart showing the total amount of energy consumed by each processor under identical conditions; why do you have a problem with it? Lower bars are better because lower energy usage is better, and the difference between the bars is proportional to the difference in energy consumption; that's well-presented data. SPCR is also a well-respected site in their niche of silent/low-noise computing, particularly for HTPCs. The composite score poo poo at the end of the article is stupid, but doing subjective analysis with numbers always is and yet readers still demand it. The rest of the article seems reasonable as a supplement to the more in-depth coverage on sites like Anandtech.

Edit: I think the FIVR was a good choice, it only adds like 7W. If it wasn't for the IHS glue issue we'd all be a lot happier with Haswell.

You're right, there's nothing wrong with it. They just chose some very esoteric units for that and a couple of other charts, and I think they could have gotten the same point across much more succinctly and just as effectively by linking to some of those charts as sources rather than dropping one in every couple of sentences. Regardless, it's a better article than I gave it credit for.

hobbesmaster
Jan 28, 2008

InstantInfidel posted:

You're right, there's nothing wrong with it. They just chose some very esoteric units for that and a couple of other charts, and I think they could have gotten the same point across much more succinctly and just as effectively by linking to some of those charts as sources rather than dropping one in every couple of sentences. Regardless, it's a better article than I gave it credit for.

Watt-hours are not an esoteric unit; have you ever looked at a power bill?

EconOutlines
Jul 3, 2004

Reading the reviews, it seems Haswell is not so good at overclocking and those with Sandy Bridge should mostly stay put. My 2600K is overclocked to 4.4GHz and won't go any further, and I encode video 1-2 times a day, time permitting, using Handbrake or x264. Would Haswell be a worthwhile upgrade for someone who encodes often, or should I keep waiting?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
No, there is absolutely no reason to upgrade from Sandy Bridge. I mean hell, I have a Core 2 Quad Q9550 @ 3.6GHz and I'm on the fence about Haswell now, though having 12MB of L2 cache makes using DDR2 suck a lot less. I'll probably upgrade in the fall because my stuff's getting old, but I almost want to hold out for the refresh next year.

Yudo
May 15, 2003

I think the only way Handbrake would be noticeably faster with Haswell is if you used the QuickSync-enabled version. I don't have the iGPU drivers installed; I did try the OpenCL version (using an old AMD card) and it was quite a bit faster than the CPU alone, though I didn't really inspect the quality. Handbrake is not really optimized for either yet, from what I understand.

InstantInfidel
Jan 9, 2010

BEST :10bux: I EVER SPENT

hobbesmaster posted:

Watt-hours are not an esoteric unit; have you ever looked at a power bill?

Yes, and they're used there because it's a convenient unit for measuring consumption at that scale. In the real world, a joule is a much more widely recognized unit of energy.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Stop being dumb. A Watt is a Joule per second. A Watt-hour is 3600 Joules. You are literally using a calculator to complain about having to multiply.
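(A minimal worked example of that multiplication, with a made-up chart value:)

code:
#include <stdio.h>

int main(void) {
    /* 1 W = 1 J/s, so 1 Wh = 3600 J. The 23.5 Wh figure is made up. */
    double wh = 23.5;
    printf("%.1f Wh = %.0f J\n", wh, wh * 3600.0);
    return 0;
}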

syzygy86
Feb 1, 2008

InstantInfidel posted:

Yes, and they're used there because it's a convenient unit for measuring consumption at that scale. In the real world, a joule is a much more widely recognized unit of energy.

Stop trolling. Everything related to electrical energy uses kilowatt hours (or sometimes Watt hours) as a unit of energy since it's a more natural unit for electrical systems. If you want to use Joules feel free to do so without bitching about it to the rest of us.

edit: Ugh, FF beat me by a phone call.

Yudo
May 15, 2003

I get my electricity usage as hogsheads of whale oil burned per hour. Much easier to interpret.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Those of us interested in getting ECC RAM support on the cheap with Intel parts for strange reasons appear to have a viable option without resorting to Xeons... by going back to Ivy Bridge Core i3 CPUs. Official Intel FAQs and ARK seem to confirm this: the Core i3 Haswells coming out don't have ECC support, yet the last-gen Core i3s do. Granted, there's no VT-d support on these i3s, but they just might do the trick for the few occasions where you want a low-power CPU but also want some warm fuzzy feelings from using ECC memory (some lower-power home servers come to mind). Heck, if you have some UDIMMs lying around and want to downsize without buying extra memory, it's viable as well.

There's a neat C program that someone on HardOCP wrote that'll spit out your ECC settings at runtime, too, if you want to validate that your ECC setup actually works.
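(I don't have that program handy, but here's a rough sketch of the same idea on Linux, reading the EDAC sysfs counters; it assumes an EDAC driver is loaded and that mc0 exists, which varies by platform.)

code:
#include <stdio.h>

/* Read one EDAC counter; returns -1 if the file isn't there. */
static long read_count(const char *path) {
    FILE *f = fopen(path, "r");
    long v = -1;
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void) {
    long ce = read_count("/sys/devices/system/edac/mc/mc0/ce_count");
    long ue = read_count("/sys/devices/system/edac/mc/mc0/ue_count");
    if (ce < 0 && ue < 0)
        printf("No EDAC counters found - ECC reporting probably isn't active.\n");
    else
        printf("corrected: %ld, uncorrectable: %ld\n", ce, ue);
    return 0;
}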

Yudo posted:

Handbrake is not really optimized for either yet, from what I understand.
It probably couldn't be without some significant changes to the x264 library that Handbrake uses for its encoding. They've only managed to partially do hardware acceleration with OpenCL, last I heard. The big disadvantage to me with hardware encoding is that there's a possible floating-point precision loss shoving stuff through a GPGPU pipeline (you don't exactly get 128-bit precision without losing a lot of performance, last I saw), and that translates into potential quality loss. That's borne out by a few reviews I've read that compared hardware-accelerated encoders and noticed varying degrees of output-quality and encoding-speed trade-offs.

But hey, at least the x264 guys are looking at doing it in OpenCL instead of just CUDA so that idiots with money like me considering a new Mac Pro can take advantage of those $1k of GPUs.

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
Someone, somewhere must have committed to an entire warehouse full of them to justify Intel supporting ECC on the i3.

SRQ
Nov 9, 2009

Are Haswell desktop chipsets able to do graphics switching? I decided to leave the integrated graphics on in the BIOS, and now I see both the HD 4600 and my GTX 660 in Device Manager in Windows, but I have no idea how to actually use the HD 4600.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

SRQ posted:

Are Haswell desktop chipsets able to do graphics switching? I decided to leave the integrated graphics on in the BIOS, and now I see both the HD 4600 and my GTX 660 in Device Manager in Windows, but I have no idea how to actually use the HD 4600.

This is a "no, but"/"yes, if" answer.

Switching like you would have on a laptop with Nvidia Optimus or AMD Enduro tech will not happen. That requires driver-level support from the GPU manufacturer, and that is not offered by either Nvidia or AMD for desktop systems. Nvidia was working on an "Optimus for desktops" called Synergy around 2011, but they shitcanned it before release.

However, you can still make use of the HD 4600 in a few ways:

  1. Just install the HD Graphics drivers, and the HD 4600 will become available for OpenCL operations and QuickSync. This is a relatively new feature; it used to be you needed a monitor plugged in to an HD Graphics monitor port in order to use it at all. Shoot, I can't find this in the update notes, did I dream it?
  2. Plug monitors into the motherboard and drive up to three extra displays off the HD 4600.
  3. If your motherboard supports it, install Lucid Virtu MVP and set it up in i-Mode. This will enable switching graphics very similar to but not exactly like Optimus - the HD 4600 will be your primary display device, but 3D applications can have their render calls redirected to the discrete video card. It has a couple of downsides, though: 1) you'll save MAYBE a single Watt off your system's power draw, 2) you don't gain any functionality compared to option 1 above when you're using an Nvidia card with adaptive vsync (E: if I'm wrong about option 1, you'll gain QuickSync support), and 3) because this isn't an Nvidia-native solution, the Nvidia driver will lose the ability to tell what game/program is sending it render commands, and you will lose all program-specific rendering tweaks in the driver. This can be a cool 10%-15% performance drop.
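(For option 1, here's a minimal sketch of checking that the iGPU actually shows up as an OpenCL device once the driver is installed; plain OpenCL 1.x calls, link with -lOpenCL, and the device names are whatever your drivers report.)

code:
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            /* Expect to see both the HD 4600 and the discrete card here. */
            printf("GPU device: %s\n", name);
        }
    }
    return 0;
}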

Factory Factory fucked around with this message at 08:33 on Jun 14, 2013

veedubfreak
Apr 2, 2005

by Smythe

Alereon posted:

No, there is absolutely no reason to upgrade from Sandy Bridge. I mean hell, I have a Core 2 Quad Q9550 @ 3.6GHz and I'm on the fence about Haswell now, though having 12MB of L2 cache makes using DDR2 suck a lot less. I'll probably upgrade in the fall because my stuff's getting old, but I almost want to hold out for the refresh next year.

I justified upgrading from my i7 920 to an i5 3570K because I bought a new SSD and wanted the SATA 3 and PCIe 3.0 support. Then I sold my X58/920/RAM to a guy I work with for $250, so my upgrade only cost $120 :)

Chuu
Sep 11, 2004

Grimey Drawer

incoherent posted:

Someone, somewhere must have committed to an entire warehouse full of them to justify Intel supporting ECC on the i3.

I would suspect it's commercial NAS vendors. i3 + ECC is a really sweet spot for both them and the FreeNAS folk.

SRQ
Nov 9, 2009

Factory Factory posted:

This is a "no, but"/"yes, if" answer.

Switching like you would have on a laptop with Nvidia Optimus or AMD Enduro tech will not happen. That requires driver-level support from the GPU manufacturer, and that is not offered by either Nvidia or AMD for desktop systems. Nvidia was working on an "Optimus for desktops" called Synergy around 2011, but they shitcanned it before release.

However, you can still make use of the HD 4600 in a few ways:

  1. Just install the HD Graphics drivers, and the HD 4600 will become available for OpenCL operations and QuickSync. This is a relatively new feature; it used to be you needed a monitor plugged in to an HD Graphics monitor port in order to use it at all. Shoot, I can't find this in the update notes, did I dream it?
  2. Plug monitors into the motherboard and drive up to three extra displays off the HD 4600.
  3. If your motherboard supports it, install Lucid Virtu MVP and set it up in i-Mode. This will enable switching graphics very similar to but not exactly like Optimus - the HD 4600 will be your primary display device, but 3D applications can have their render calls redirected to the discrete video card. It has a couple of downsides, though: 1) you'll save MAYBE a single Watt off your system's power draw, 2) you don't gain any functionality compared to option 1 above when you're using an Nvidia card with adaptive vsync (E: if I'm wrong about option 1, you'll gain QuickSync support), and 3) because this isn't an Nvidia-native solution, the Nvidia driver will lose the ability to tell what game/program is sending it render commands, and you will lose all program-specific rendering tweaks in the driver. This can be a cool 10%-15% performance drop.

Thanks, I was hoping there was a way, because I've found a few ancient video games are less buggy on Intel graphics on my laptop.

Shaocaholica
Oct 29, 2002

Fig. 5E
http://vr-zone.com/articles/intel-core-i7-ivy-bridge-e-core-i3-haswell-lineup-detailed/37832.html

I thought DDR4 was coming to Xeon first? Looks like enthusiast desktop will have first stab?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
-E is the same exact silicon as 1P/2P Xeons, just with features disabled.

PUBLIC TOILET
Jun 13, 2009

The more I read about Haswell-E for 2014, the more I want to wait even further for it.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

PUBLIC TOILET posted:

The more I read about Haswell-E for 2014, the more I want to wait even further for it.
DDR4 is going to be fuckoff expensive.

SRQ
Nov 9, 2009

Yeah, so basically unless you are willing to wait until 2016, don't worry about it.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:55 on Mar 23, 2021

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party
Next-gen Xeon Phi is on PCIe and QPI and has eDRAM.

Apparently, Intel has decided it's had enough of NV in HPC.

Yudo
May 15, 2003

Comparing Phi to Tesla is a little odd to me. I am not at all an expert regarding HPC, but it seems like they are good at different things: Tesla for compute, Phi for anything with branches (http://clbenchmark.com/compare.jsp?config_0=15887974&config_1=14378297 and yes, I know neither is optimized for OpenCL). Further, GPGPU is no breeze to code for, but CUDA is mature; Intel's MIC platform is not so much, and code still has to be tailored just as with GPGPU.

I'm sure this will pressure NV, but if I were buying something to crunch matrix math, I don't know why I would want a Phi rather than a GPU.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Yudo posted:

Comparing Phi to Tesla is a little odd to me. I am not at all an expert regarding HPC, but it seems like they are good at different things: Tesla for compute, Phi for anything with branches (http://clbenchmark.com/compare.jsp?config_0=15887974&config_1=14378297 and yes, I know neither is optimized for OpenCL). Further, GPGPU is no breeze to code for, but CUDA is mature; Intel's MIC platform is not so much, and code still has to be tailored just as with GPGPU.

I'm sure this will pressure NV, but if I were buying something to crunch matrix math, I don't know why I would want a Phi rather than a GPU.
Phi and Tesla are targeting exactly the same markets, as both have similar memory bandwidth, arithmetic throughput, memory capacity, etc. You can argue that CUDA is more mature, and for the moment that's probably true; however, Phi certainly has better tools and a quicker ramp due to being able to leverage the existing x86 ecosystem.

Phi on QPI gives them two things versus PCIe:

1. Coherent memory with the CPU. I can't possibly overstate the importance of this in terms of programming model improvements. No more manual memcpys, no more crazy async task queues designed for the sole purpose of keeping PCIe busy, none of that. It also allows the local memory (whether it's only the eDRAM or if they're going to solder some GDDR5 or something onto the motherboard) to act as a huge cache for the still dramatically larger DDR4 system memory.
2. Latency drops to ~0. This too is transformational. Suddenly, you can target much finer grained units of work, which means you have to do less work to your codebase. You can even start thinking about compiler-directed offload rather than developer-directed. Your strong scaling works dramatically better than dealing with PCIe. Couple this with rumored on-die fabric (Intel did buy QLogic and the Aries fabric from Cray), and it becomes obvious that a machine where each node has Xeon, Phi, and Intel fabric could scale dramatically better than something that uses PCIe-attached GPUs and fabric.

Basically, QPI takes the worst pain points of CUDA that aren't writing a well-optimized kernel and completely removes them.
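(A rough sketch of what point 1 buys you, using the offload pragma as it exists today for PCIe-attached Knights Corner; the explicit inout/length clause below is exactly the manual copying that a coherent QPI-attached part would make unnecessary. Compilers without MIC support just ignore the pragma and run the loop on the host.)

code:
#include <stddef.h>
#include <stdio.h>

void scale(double *a, size_t n, double k) {
    /* PCIe-attached Phi: the data has to be shipped across the bus explicitly. */
    #pragma offload target(mic) inout(a : length(n))
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
    /* With coherent memory you'd just hand over the pointer; no in/out clauses. */
}

int main(void) {
    double v[4] = {1, 2, 3, 4};
    scale(v, 4, 2.0);
    printf("%.1f %.1f %.1f %.1f\n", v[0], v[1], v[2], v[3]);
    return 0;
}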

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

PUBLIC TOILET posted:

The more I read about Haswell-E for 2014, the more I want to wait even further for it.

Is that going to be the only hope for someone that wants 32 PCI-E lanes?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Other than getting a board with a PLX chip, yeah. A PLX is almost as good though.

E: I'm not sure how reliable DailyTech is, but it's reporting that Intel is pushing Broadwell back to 2015 due to problems with the 14nm node. Problems at 14nm are not surprising, but it's still a bummer. Apparently 2014 will bring a "Haswell Refresh" instead, like we've been getting Kepler and GCN refreshes from Nvidia and AMD on 28nm.

Apparently that will help sync up the new Silvermont Atoms with the higher-power release cycle, though. It also looks to mark the first time in a while that the top-end Extreme parts will be on the same uArch as the mainstream parts, rather than lagging a year behind, as Haswell-E's release date is unchanged.

:ninja: Oh, they source it from VR-Zone. So VR-Zone reported this.

Factory Factory fucked around with this message at 05:21 on Jun 18, 2013

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Factory Factory posted:

Other than getting a board with a PLX chip, yeah. A PLX is almost as good though.
better for some things (P2P transfers in particular)

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy

Factory Factory posted:

:ninja: Oh, they source it from VR-Zone. So VR-Zone reported this.

Um, so this news from a site whose writer uses "off of" in a table is to be taken seriously?

Also I like this comment: "This is good news guys. This means we don't have to run out and buy new stuff." If only all tech progress would stop, our crap would be the latest forever! :downs: :haw:

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
What, the phrase "PCIe lanes off of processor"? I don't get it. That's not the best wording, but it's clear and unambiguous as a contrast to the PCH's PCIe lanes.

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy
There is no situation in the English language where "off of" is correct, and using it is a sure sign of the author not being a professional writer, which puts the research in question, at least a little bit. That's what I meant.

That being said, I've checked out some other sources, and even more than a week back there was already talk of that Haswell refresh, so I guess it's true. My apologies.

JawnV6
Jul 4, 2004

So hot ...
You're expecting perfect grammar and command of the English language from processor tech nerds? I'm almost inclined to think the inverse of that relationship would hold up better.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

flavor posted:

There is no situation in the English language where "off of" is correct, and using it is a sure sign of the author not being a professional writer, which puts the research in question, at least a little bit. That's what I meant.
Bitching about grammar is the stupidest and most pointless kind of criticism, and it indicates you have no actual point to make but still feel like arguing for some reason. The rules of prescriptive grammar are for English class; once you get to the real world, you (should) understand that people who talk differently from you are not wrong, even if it really bothers you.

AllanGordon
Jan 26, 2010

by Shine
Is this the first time Intel has delayed a die shrink?

And could this mean we should expect a die shrink only every 3 years now instead of every 2?
