|
SanitysEdge posted:On the topic of CPU heat, why do people bother putting the heat spreader back on after delidding their processor? Why even ship the processor with the heat spreader in the first place?
|
# ? Jun 12, 2013 02:00 |
|
Oblivion590 posted:The chart here actually indicates energy consumption, even though it says power consumption. As long as the performance speedup is larger than the power increase, the chip is more energy-efficient. This sort of confusion between energy and power is fairly common. edit: nevermind, I hadn't looked at that chart. That is without a doubt the most backward and asinine way they could display the data, what the gently caress. e2: reading the rest of the article seems to suggest that those guys are complete idiots and are out of their depth, which explains a lot. InstantInfidel fucked around with this message at 02:47 on Jun 12, 2013 |
# ? Jun 12, 2013 02:43 |
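The energy-vs-power distinction Oblivion590 is drawing can be made concrete with two made-up data points (the wattages and runtimes below are illustrative, not figures from the article):

```python
# Energy (J) = power (W) x time (s). A chip that draws more power can
# still consume less total energy if it finishes the job proportionally
# faster. All numbers here are invented for illustration.

def energy_joules(power_watts: float, runtime_seconds: float) -> float:
    return power_watts * runtime_seconds

old = energy_joules(65.0, 100.0)  # slower chip: 65 W for 100 s -> 6500 J
new = energy_joules(84.0, 70.0)   # faster chip: 84 W for  70 s -> 5880 J

# Higher power draw, lower energy consumed: the speedup (100/70, ~1.43x)
# exceeds the power increase (84/65, ~1.29x).
assert new < old
```

That is exactly the case where a "power consumption" chart and an "energy consumption" chart point in opposite directions.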
|
SanitysEdge posted:#shsc is run by elitist administration, it isn't the channel for the SH/SC forum. You don't -have- to put it back on, but if you keep it off you have to modify the mounting mechanism for both the CPU and heatsink, since both of those parts are built around the assumption that the IHS is there. Without the IHS, the CPU's bare die sits about 2.5mm deeper into the socket area. That's deep enough that the CPU retention mechanism will prevent any cooler from making direct contact with the die, so that has to go. It becomes a big ol' project that not a lot of people are willing to embark on, and it requires tools and additional hardware. Most people just slap the IHS back on, boot their machines up, and are content with the massive temp drop.
|
# ? Jun 12, 2013 03:08 |
|
Alereon posted:because it's that glue that adds enough thickness to prevent good thermal contact between the heat spreader and CPU. Once the glue is removed your temperature problems are fixed. Well, removing the heatspreader entirely does yield some gains, since it takes those surfaces out of the equation altogether. The heatspreader adds two thermal surfaces, each with its own imperfections, and because the inner surface is machined and recessed, it can't be ground flat using traditional industrial grinders, which require a non-recessed surface. Not a big deal, but also not a terribly difficult project. As they say, in for a penny, in for a pound.
|
# ? Jun 12, 2013 03:12 |
|
InstantInfidel posted:edit: nevermind, I hadn't looked at that chart. That is without a doubt the most backward and asinine way they could display the data, what the gently caress. Edit: I think the FIVR was a good choice, it only adds like 7W. If it weren't for the IHS glue issue, we'd all be a lot happier with Haswell. Alereon fucked around with this message at 04:52 on Jun 12, 2013 |
# ? Jun 12, 2013 04:38 |
|
You all are right. I'm just saying the chip gets hot and I wish it were less so. Integrating the voltage regulator and using cheap goo has not helped what was already something of an issue with IVB. Today I encoded some videos (of my cat. I lead a sad life) and it was so much faster than my old i5 that I forgot all about whining. For an hour or so.
|
# ? Jun 12, 2013 04:51 |
|
Alereon posted:It's a chart showing the total amount of energy consumed by each processor under identical conditions, why do you have a problem with it? Lower bars are better because lower energy usage is better, and the difference between the bars is proportional to the difference in energy consumption; that's well-presented data. SPCR is also a well-respected site in their niche of silent/low-noise computing, particularly for HTPCs. The composite score poo poo at the end of the article is stupid, but doing subjective analysis with numbers always is, and yet readers still demand it. The rest of the article seems reasonable as a supplement to the more in-depth coverage on sites like Anandtech. You're right, there's nothing wrong with it. They just chose some very esoteric units for that and a couple of other charts, and I definitely believe they could have gotten the same point across much more succinctly, and just as effectively, by linking to some of those charts as sources rather than dropping one in every couple of sentences. Regardless, it's a better article than I gave it credit for.
|
# ? Jun 12, 2013 22:32 |
|
InstantInfidel posted:You're right, there's nothing wrong with it. They just chose some very esoteric units for that and a couple other charts they had, and I definitely believe that they could have gotten across the same point much more succinctly and equally as effectively by linking to some of those charts as sources rather than dropping one in every couple of sentences. Regardless, it's a better article than I gave it credit for. Watt hours are not an esoteric unit, have you ever looked at a power bill?
|
# ? Jun 13, 2013 00:01 |
|
The reviews show that Haswell is not so good at overclocking and that those with Sandy Bridge should, for the most part, stay with it. My 2600K is overclocked to 4.4GHz and won't go any further, and I encode video 1-2 times a day, time permitting, using Handbrake or x264. Would Haswell be a worthwhile upgrade for someone who encodes often, or should I still wait?
|
# ? Jun 13, 2013 02:29 |
|
No, there is absolutely no reason to upgrade from Sandy Bridge. I mean hell, I have a Core 2 Quad Q9550 @ 3.6GHz and I'm on the fence about Haswell now, though having 12MB of L2 cache makes using DDR2 suck a lot less. I'll probably upgrade in the fall because my stuff's getting old, but I almost want to hold out for the refresh next year.
|
# ? Jun 13, 2013 02:37 |
|
I think the only way Handbrake would be noticeably faster with Haswell is if you used the Quicksync-enabled version. I don't have the iGPU drivers installed; I did try the OpenCL version (using an old AMD card) and it was quite a bit faster than the CPU alone, though I didn't really inspect the quality. Handbrake is not really optimized for either yet, from what I understand.
|
# ? Jun 13, 2013 03:37 |
|
hobbesmaster posted:Watt hours are not an esoteric unit, have you ever looked at a power bill? Yes, and they're there because it's a convenient unit for a large-scale operation to measure them as such. In the real world, a joule is a much more widely recognized unit for energy.
|
# ? Jun 13, 2013 03:49 |
|
Stop being dumb. A Watt is a Joule per second. A Watt-hour is 3600 Joules. You are literally using a calculator to complain about having to multiply.
|
# ? Jun 13, 2013 04:01 |
|
InstantInfidel posted:Yes, and they're there because it's a convenient unit for a large-scale operation to measure them as such. In the real world, a joule is a much more widely recognized unit for energy. Stop trolling. Everything related to electrical energy uses kilowatt hours (or sometimes Watt hours) as a unit of energy since it's a more natural unit for electrical systems. If you want to use Joules feel free to do so without bitching about it to the rest of us. edit: Ugh, FF beat me by a phone call.
|
# ? Jun 13, 2013 04:07 |
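For the record, the conversion being argued over is a single multiplication: a watt is a joule per second and an hour is 3600 seconds, so 1 Wh = 3600 J. A trivial sketch:

```python
SECONDS_PER_HOUR = 3600

def wh_to_joules(watt_hours: float) -> float:
    # 1 W sustained for 1 hour = 1 J/s * 3600 s = 3600 J
    return watt_hours * SECONDS_PER_HOUR

def joules_to_kwh(joules: float) -> float:
    # The unit on a power bill: 1 kWh = 3.6 million joules
    return joules / (1000 * SECONDS_PER_HOUR)

assert wh_to_joules(1) == 3600
assert joules_to_kwh(3.6e6) == 1.0
```

Either unit carries the same information; Wh just falls out naturally when the instrument measures watts and the test runs for a fixed wall-clock time.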
|
I get my electricity usage as hogsheads of whale oil burned per hour. Much easier to interpret.
|
# ? Jun 13, 2013 08:09 |
|
Those of us interested in getting ECC RAM support on the cheap with Intel parts for strange reasons appear to have a viable option without resorting to Xeons... by going back to Ivy Bridge Core i3 CPUs. Official Intel FAQs and ARK seem to confirm this: the Core i3 Haswells coming out don't have support for ECC, yet the last-gen Core i3s do. Granted, there's no VT-d support on these i3s, but it just might do the trick for the few occasions you want a low-power CPU but want some warm fuzzy feelings from using ECC memory (some lower-power home servers come to mind). Heck, if you have some UDIMMs lying around and want to downsize without buying extra memory, it's viable as well. There's a neat C program that someone on HardOCP wrote that'll spit out some ECC settings at runtime too, if you want to validate that your ECC setup actually works. Yudo posted:Handbrake is not really optimized for either yet, from what I understand. But hey, at least the x264 guys are looking at doing it in OpenCL instead of just CUDA, so that idiots with money like me considering a new Mac Pro can take advantage of those $1k of GPUs.
|
# ? Jun 13, 2013 15:14 |
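The HardOCP program mentioned above isn't linked, but on Linux a common way to sanity-check that ECC reporting is actually live is to read the kernel's EDAC error counters under /sys/devices/system/edac/. A rough sketch, assuming the EDAC driver for your memory controller is loaded (the `summarize` helper is my own, not from the thread):

```python
import glob
import os

def read_edac_counters(edac_root="/sys/devices/system/edac/mc"):
    """Collect corrected/uncorrected error counts per memory controller.

    Returns {} if no EDAC driver is loaded (or not running on Linux).
    """
    counters = {}
    for mc in glob.glob(os.path.join(edac_root, "mc*")):
        counts = {}
        for name in ("ce_count", "ue_count"):
            path = os.path.join(mc, name)
            if os.path.exists(path):
                with open(path) as f:
                    counts[name] = int(f.read().strip())
        if counts:
            counters[os.path.basename(mc)] = counts
    return counters

def summarize(counters):
    """One human-readable line for the whole machine."""
    if not counters:
        return "no EDAC memory controllers found (driver not loaded, or no ECC)"
    return "; ".join(
        f"{mc}: {c.get('ce_count', 0)} corrected, {c.get('ue_count', 0)} uncorrected"
        for mc, c in sorted(counters.items())
    )

if __name__ == "__main__":
    print(summarize(read_edac_counters()))
```

If the controller shows up at all, ECC is enabled end to end; a nonzero ce_count over time is the "warm fuzzy" proof that corrections are really happening.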
|
Someone, somewhere must have committed to an entire warehouse full of them to justify Intel supporting ECC on the i3.
|
# ? Jun 14, 2013 06:54 |
|
Are Haswell Desktop chipsets able to do graphics switching? I decided to leave the integrated on in the bios, and now I see both the HD4600 and my GTX660 in device manager within windows, but I have no idea how to actually use the HD4600.
|
# ? Jun 14, 2013 06:55 |
|
SRQ posted:Are Haswell Desktop chipsets able to do graphics switching? I decided to leave the integrated on in the bios, and now I see both the HD4600 and my GTX660 in device manager within windows, but I have no idea how to actually use the HD4600. This is a "no, but"/"yes, if" answer. Switching like you would have on a laptop with Nvidia Optimus or AMD Enduro tech will not happen. That requires driver-level support from the GPU manufacturer, and that is not offered by either Nvidia or AMD for desktop systems. Nvidia was working on an "Optimus for desktops" called Synergy around 2011, but they shitcanned it before release. However, you can still make use of the HD 4600 in a few ways:
Factory Factory fucked around with this message at 08:33 on Jun 14, 2013 |
# ? Jun 14, 2013 08:08 |
|
Alereon posted:No there is absolutely no reason to upgrade from Sandy Bridge. I mean hell, I have a Core 2 Quad Q9550 @ 3.6Ghz and I'm on the fence about Haswell now, though having 12MB of L2 cache makes using DDR2 suck a lot less. I'll probably upgrade in the fall because my stuff's getting old, but I almost want to hold out for the refresh next year. I justified upgrading from my i7 920 to an i5 3570K because I bought a new SSD and wanted the SATA 3 and PCIe 3 support. Then I sold my X58/920/RAM to a guy I work with for $250, so my upgrade only cost $120.
|
# ? Jun 14, 2013 19:19 |
|
incoherent posted:Someone, somewhere must of committed to an entire warehouse full of them to justify intel supporting ECC on the i3. I would suspect it's commercial NAS vendors. i3+ECC is a really sweet spot for both commercial NAS vendors and the FreeNAS folk.
|
# ? Jun 15, 2013 00:09 |
|
Factory Factory posted:This is a "no, but"/"yes, if" answer. Thanks, I was hoping there was a way, because I've found a few ancient video games are less buggy on Intel with my laptop.
|
# ? Jun 15, 2013 21:29 |
|
http://vr-zone.com/articles/intel-core-i7-ivy-bridge-e-core-i3-haswell-lineup-detailed/37832.html I thought DDR4 was coming to Xeon first? Looks like enthusiast desktop will have first stab?
|
# ? Jun 17, 2013 00:26 |
|
-E is the same exact silicon as 1P/2P Xeons, just with features disabled.
|
# ? Jun 17, 2013 00:44 |
|
The more I read about Haswell-E for 2014, the more I want to wait even further for it.
|
# ? Jun 17, 2013 05:30 |
|
PUBLIC TOILET posted:The more I read about Haswell-E for 2014, the more I want to wait even further for it.
|
# ? Jun 17, 2013 05:52 |
|
Yeah, so basically unless you are willing to wait until 2016, don't worry about it.
|
# ? Jun 17, 2013 07:26 |
|
.
sincx fucked around with this message at 05:55 on Mar 23, 2021 |
# ? Jun 17, 2013 08:28 |
|
Next-gen Xeon Phi is on PCIe and QPI and has eDRAM. Apparently, Intel has decided it's had enough of NV in HPC.
|
# ? Jun 18, 2013 02:12 |
|
Comparing Phi to Tesla is a little odd to me. I am not at all an expert regarding HPC, but it seems like they are good at different things: Tesla at raw compute, Phi at anything with branches (http://clbenchmark.com/compare.jsp?config_0=15887974&config_1=14378297 and yes, I know neither is optimized for OpenCL). Further, GPGPU is no breeze to code for, but CUDA is mature; Intel's MIC platform is not so much, and code still has to be tailored just like with GPGPU. I'm sure this will pressure NV, but if I were buying something to crunch matrix math, I don't know why I would want a Phi rather than a GPU.
|
# ? Jun 18, 2013 02:48 |
|
Yudo posted:Comparing Phi to Tesla is a little odd for me. I am not at all an expert regarding HPC, but it seems like they are good at different things: Tesla compute, Phi anything with branches (http://clbenchmark.com/compare.jsp?config_0=15887974&config_1=14378297 and yes I know neither are optimized for OpenCL). Further GPGPU is no breeze to code for, but CUDA is mature; Intel's MIC platform is not so much and code still has to be tailored just like with GPGPU. Phi on QPI gives them two things versus PCIe: 1. Coherent memory with the CPU. I can't possibly overstate the importance of this in terms of programming model improvements. No more manual memcpys, no more crazy async task queues designed for the sole purpose of keeping PCIe busy, none of that. It also allows the local memory (whether it's only the eDRAM or if they're going to solder some GDDR5 or something onto the motherboard) to act as a huge cache for the still dramatically larger DDR4 system memory. 2. Latency drops to ~0. This too is transformational. Suddenly, you can target much finer grained units of work, which means you have to do less work to your codebase. You can even start thinking about compiler-directed offload rather than developer-directed. Your strong scaling works dramatically better than dealing with PCIe. Couple this with rumored on-die fabric (Intel did buy QLogic and the Aries fabric from Cray), and it becomes obvious that a machine where each node has Xeon, Phi, and Intel fabric could scale dramatically better than something that uses PCIe-attached GPUs and fabric. Basically, QPI takes the worst pain points of CUDA that aren't writing a well-optimized kernel and completely removes them.
|
# ? Jun 18, 2013 03:20 |
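One way to see why point 2 matters: with a fixed per-offload latency there is a minimum amount of work below which offloading loses to just running on the host. A toy model (every latency and per-item cost below is invented for illustration, not a measured figure for Phi or Tesla):

```python
def offload_wins(n_items, host_ns_per_item, dev_ns_per_item, launch_latency_ns):
    """True if offloading n_items beats running them on the host.

    Toy model: device time = fixed launch/transfer latency + per-item time.
    """
    host_time = n_items * host_ns_per_item
    device_time = launch_latency_ns + n_items * dev_ns_per_item
    return device_time < host_time

def break_even_items(host_ns_per_item, dev_ns_per_item, launch_latency_ns):
    """Smallest n for which offloading wins (device must be faster per item)."""
    per_item_gain = host_ns_per_item - dev_ns_per_item
    return int(launch_latency_ns // per_item_gain) + 1

# PCIe-attached accelerator: say ~10 us of launch+copy overhead per offload.
pcie = break_even_items(100.0, 10.0, 10_000.0)  # needs 112+ items to pay off
# Coherent, near-zero-latency attach (the QPI scenario): say ~100 ns.
qpi = break_even_items(100.0, 10.0, 100.0)      # pays off from 2 items
```

Dropping the latency floor by two orders of magnitude shrinks the profitable grain size by the same factor, which is why fine-grained and compiler-directed offload only becomes plausible once the interconnect stops charging a PCIe-sized toll per kernel.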
|
PUBLIC TOILET posted:The more I read about Haswell-E for 2014, the more I want to wait even further for it. Is that going to be the only hope for someone that wants 32 PCI-E lanes?
|
# ? Jun 18, 2013 04:34 |
|
Other than getting a board with a PLX chip, yeah. A PLX is almost as good though. E: I'm not sure how reliable DailyTech is, but it's reporting that Intel is pushing Broadwell back to 2015 due to problems with the 14nm node. Problems with 14nm are not surprising, but it's still a bummer. Apparently 2014 will bring a "Haswell Refresh" instead, like we've been getting Kepler and GCN refreshes from Nvidia and AMD on 28nm. Apparently that will help sync up the new Silvermont Atoms with the higher-power release cycle, though. It also looks to mark the first time in a while that the top-end Extreme parts will be on the same uArch as the mainstream parts, rather than lagging a year behind, as Haswell-E's release date is unchanged. Oh, they source it from VR-Zone. So VR-Zone reported this. Factory Factory fucked around with this message at 05:21 on Jun 18, 2013 |
# ? Jun 18, 2013 04:39 |
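As a sanity check on why a PLX is only "almost" as good: the switch multiplies lanes but not upstream bandwidth. A quick back-of-envelope using the published PCIe 3.0 link rate:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so usable
# bandwidth is roughly 8e9 * (128/130) / 8 bytes/s, ~0.985 GB/s per lane.
TRANSFER_RATE = 8e9        # transfers per second per lane
ENCODING_EFFICIENCY = 128 / 130

def lane_bandwidth_gbs():
    """Approximate usable PCIe 3.0 bandwidth per lane, in GB/s."""
    return TRANSFER_RATE * ENCODING_EFFICIENCY / 8 / 1e9

def link_bandwidth_gbs(lanes):
    return lanes * lane_bandwidth_gbs()

# Two GPUs behind a PLX switch each get an x16 link to the switch, but they
# share a single x16 uplink to the CPU: ~15.75 GB/s aggregate, the same
# total as running both cards at x8 directly off the CPU. The switch helps
# mainly when the cards aren't saturating the uplink at the same moment.
uplink = link_bandwidth_gbs(16)
```

So the PLX buys you full-width links and peer-to-peer traffic between the cards, not extra CPU bandwidth.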
|
Factory Factory posted:Other than getting a board with a PLX chip, yeah. A PLX is almost as good though.
|
# ? Jun 18, 2013 05:21 |
|
Factory Factory posted:Oh, they source it from VR-Zone. So VR-Zone reported this. Um, so this news from a site whose writer uses "off of" in a table is to be taken seriously? Also I like this comment: "This is good news guys. This means we don't have to run out and buy new stuff." If only all tech progress would stop, our crap would be the latest forever!
|
# ? Jun 18, 2013 07:42 |
|
What, the phrase "PCIe lanes off of processor?" I don't get it. That's not the best wording, but it's clear and unambiguous to contrast PCH PCIe lanes.
|
# ? Jun 18, 2013 08:53 |
|
There is no situation in the English language where "off of" is correct, and using it is a sure sign that the author is not a professional writer, which puts the research in question, at least a little bit. That's what I meant. That being said, I've checked out some different sources, and talk of that Haswell refresh was already circulating more than a week back, so I guess it's true. My apologies.
|
# ? Jun 18, 2013 10:32 |
|
You're expecting perfect grammar and command of the English language from processor tech nerds? I'm almost inclined to think the inverse of that relationship would hold up better.
|
# ? Jun 18, 2013 15:44 |
|
flavor posted:There is no situation in the English language where "off of" is correct, and using it is a sure sign of the author not being a professional writer, which puts the research in question, at least a little bit. That's what I meant.
|
# ? Jun 18, 2013 16:15 |
|
|
Is this the first time Intel has delayed a die shrink? And could this mean we should only hope to see a die shrink every three years now instead of two?
|
# ? Jun 18, 2013 19:27 |