|
Don Lapre posted:I understand you don't understand putting things into holes. hey, it's all close together and hard to see in there
|
# ? Jan 4, 2016 00:40 |
|
movax posted:My boss led development of the PCU that controls FIVR; he gets foamy at the mouth and mutters 'politics and loving Haifa' every time an article about FIVR being removed comes up. Has the FIVR ever popped up on LGA2011 series, though? That's the one place I really hope it doesn't go in as it makes negative sense in that socket's market.
|
# ? Jan 4, 2016 00:59 |
|
I've installed stock heatsinks more times than I can count without breaking anything. Even ones that had a broken peg. I don't understand why people are so mystified by them and complain about the difficulty, but then praise aftermarket stuff that requires backplate installation, spacers, gaskets, washers, pegs, etc. Stock heatsinks are cheap, but drat if they aren't super easy to handle and require zero tools.
|
# ? Jan 4, 2016 02:47 |
|
Don Lapre posted:The intel stock cooler is perfectly fine for stock speeds and is not difficult at all to use. Most people simply have no idea how the drat push pins work and do poo poo like spin them all the way round. The push pins go one way when locked, and turn 90 degrees to release. I've seen reviews where a 4790K will throttle with it, but that aside, I'm not referring to the fact that it's difficult to install initially (it isn't), but that it can be difficult to re-fit after years of use, because the plastic surrounding the pins that splays out to fix the cooler has a tendency to stay splayed out. So when you re-fit it, it can feel like you're up against the normal resistance, but instead you snap one half of the plastic around the pin. I'm not talking about the releasing mechanism, which is easy in itself, although if the plastic has splayed apart and won't spring back, you often have to pinch the pins together from the underside of the board. You can tell it's been designed to be fitted once, quickly, and left alone. Indeed it is fine for that, and that will be the majority of cases. HalloKitty fucked around with this message at 07:54 on Jan 4, 2016
# ? Jan 4, 2016 07:51 |
|
I'm not sure if this is the right thread but I want to ask a question to sanity check myself. I have an i7-4790K running at stock clocks. When I do an h.264 encode the temp in some cores hits 90C. Should I be concerned about this temp or is that still in the acceptable range?
|
# ? Jan 4, 2016 14:57 |
|
I'd sweat profusely if temps were even near 70C. I'd power down my system in panic if temps hit 90C, and check for broken fans.
|
# ? Jan 4, 2016 14:58 |
|
All this talk of stock coolers has got me wondering if they are all actually the same model? For example, would the cooler found on a 2014 i7 be the same model as the one on a 2014 i3? Are the more recent models better performing, or has the design remained unchanged over the last five years?
|
# ? Jan 4, 2016 17:18 |
|
Xir posted:I'm not sure if this is the right thread but I want to ask a question to sanity check myself. I have an i7-4790K running at stock clocks. When I do an h.264 encode the temp in some cores hits 90C. Should I be concerned about this temp or is that still in the acceptable range? It's perfectly OK. Your CPU can get up to 100C, and it will just start throttling at that point. Encoders use some of the same stuff prime95's holy gently caress mode does, so you see similar temps. It won't harm anything. DeaconBlues posted:All this talk of stock coolers has got me wondering if they are all actually the same model? For example, would the cooler found on a 2014 i7 be the same model as the one on a 2014 i3? Are the more recent models better performing, or has the design remained unchanged over the last five years? They are similar but not the same. i7 coolers have copper slugs.
|
# ? Jan 4, 2016 17:22 |
|
HalloKitty posted:So when you re-fit it, it can feel like you're up against the normal resistance, but instead, you snap one half of the plastic around the pin. Yeah the old Socket A/370 style mounts were difficult to use but more modern Socket 939 and AM3 clips aren't bad at all and aren't easily damaged. The lever versions in particular are some of the easiest around to use.
|
# ? Jan 4, 2016 17:34 |
|
Update on my situation: I am a colossal loving moron. The cord had gone over the fan, I believe, and was preventing it from spinning. So no wonder it was loving burning up. I've kept my eye on it since noticing that at noon and the temp has stayed around 40 or so (still underclocked), I'm going to crank it back up to full speed in a few hours and see what happens. I'm still going to put on the new cooler and such tomorrow because I do not entirely trust my stock cooler anymore and because I love tweaking my pc (though thank god my Antec 900 has a CPU cutout so I won't have to tear my PC apart). It has basically only been on a few hours total since the issue began, so hopefully damage was basically non-existent. Thanks for your help, goons.
|
# ? Jan 4, 2016 21:21 |
|
Modern CPUs are actually very good at preventing you from cooking them, functional fan or not--they'll just downclock themselves like hell when they get too hot. About the only way you can really cook them to death these days is if you have no heatsink whatsoever, and/or you bump the voltage up a good bit for the hell of it.
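The downclocking described above can be sketched as a toy control loop. To be clear, the temperatures, thresholds, and multiplier steps below are made-up illustration values, not Intel's actual thermal-management algorithm:

```python
# Toy sketch of thermal throttling: when the die temperature nears TjMax
# (about 100C on these chips), the CPU drops its multiplier until it
# cools off. All constants are illustrative, not Intel's real logic.

TJMAX_C = 100.0
BASE_CLOCK_MHZ = 100
MAX_MULT = 40   # 4.0 GHz nominal
MIN_MULT = 8    # 800 MHz floor

def throttled_multiplier(temp_c: float, current_mult: int) -> int:
    """Step the multiplier down at TjMax, and back up once cool."""
    if temp_c >= TJMAX_C:
        return max(MIN_MULT, current_mult - 4)   # aggressive back-off
    if temp_c < TJMAX_C - 10:
        return min(MAX_MULT, current_mult + 1)   # slowly recover
    return current_mult                          # hold steady near the limit

mult = MAX_MULT
for temp in [70, 85, 100, 101, 99, 95, 80]:
    mult = throttled_multiplier(temp, mult)
    print(f"{temp:5.1f}C -> {mult * BASE_CLOCK_MHZ} MHz")
```

This is also why the blocked-fan stories in the thread end with low framerates rather than dead chips: the clocks stay pinned low until the heat actually has somewhere to go.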
|
# ? Jan 4, 2016 21:26 |
|
DrDork posted:Modern CPUs are actually very good at preventing you from cooking them, functional fan or not--they'll just downclock themselves like hell when they get too hot. About the only way you can really cook them to death these days is if you have no heatsink whatsoever, and/or you bump the voltage up a good bit for the hell of it. I recently did that "wires blocking the CPU cooler fan from spinning at all" thing, and mine made it through several benchmarks; I couldn't figure out why it was topping out at ~40fps in games, though. Poor lil guy was throttling itself to stay alive.
|
# ? Jan 4, 2016 21:40 |
|
I had my Q6600 machine shut down several times within minutes of starting rendering video recently. Turned out that the heatsink was extremely dusty, to the point that I don't think I've ever seen one so bad before. Not sure what was up with that as I do clean it out once in a while, but the temps got pretty ridiculous before I blew it out again.
|
# ? Jan 4, 2016 21:48 |
|
I suspect that I recently had a Xeon E5 cook itself. The server arrived with a loose heatsink; after some tests the problem was identified (it throttled to ~1/2 of expected performance under load). After the heatsink was reattached, the CPU would only perform at 2/3 of peak. The problem followed the CPU across different heatsinks and sockets (correct thermal paste, etc.). I expected that the throttling would've prevented any permanent damage, so it was surprising to see ongoing performance issues.
|
# ? Jan 4, 2016 22:18 |
|
I think the throttling can save it when it is able to at least vent some heat. A heatsink that is attached properly but with no fan should still work well enough to keep from damaging anything, but a heatsink that isn't attached is much more likely to kill it, as there's nowhere for that instant build-up of heat to go quickly enough.

I did see a kid at a local LAN with a new 4770K who couldn't figure out why his PC wouldn't play CS:GO at 200FPS like he was told it would. People were checking drivers, the Windows install, etc., but nobody thought to check the temps. It looked like a throttle issue to me, as it would play at >100FPS for a few seconds, then drop down to the 30-50FPS range for the remainder of a match. Take a look at the temps: yep, 99C. The heatsink was hanging by a single one of the pushpin arms, and they were turned to the released position. The kid said he followed the arrows. Set him straight and the comp went on to play in the 40-60C range and 200+FPS like he had been told. I agree that some people just shouldn't build or work on their own PCs, even in this easy-as-pie day and age.

Also, the stock coolers used to be better IMO than the ones they have now. The ones that came with the early Core 2 Quads or 1st-gen i7s were massive stockers, and with the copper core, the things were practically an upgrade for a lower-end CPU that came with only an aluminum core and shorter fins. I was able to overclock a few lower-end C2Qs and i7s to a modest OC with those "upgraded" coolers. Of course any serious OC needs a good aftermarket cooler, but for the most part the stocker is just fine at stock clocks, as long as you install it, make sure the pins go into the holes, and push the 4 pins until you hear the drat click.
|
# ? Jan 4, 2016 22:35 |
|
Broadwell-E is still slated for Q2, right? Any more details yet whether it is start or end of Q2? Hoping the speculation of 600ish bucks for an eight core one turns out true.
|
# ? Jan 4, 2016 23:14 |
|
mobby_6kl posted:I had my Q6600 machine shut down several times within minutes of starting rendering video recently. Turned out that the heatsink was extremely dusty, to the point that I don't think I've ever seen one so bad before. Not sure what was up with that as I do clean it out once in a while, but the temps got pretty ridiculous before I blew it out again. This is a big one, especially in homes/offices that go through a renovation while still in use. Drywall powder is loving horrific.
|
# ? Jan 4, 2016 23:56 |
|
I recently decided to use some of my bonus money to build a new machine; I figured I hadn't upgraded in about 5 or 6 years and may as well do it now. This was my build:

1TB Samsung Evo 850
ASUS Maximus VIII Hero
i5 6600K
G.Skill Ripjaws V 32GB (4x8GB) DDR4 3000 1.35v
Hyper 212 EVO
Nvidia GTX 980ti

Got pretty much everything installed and running quickly, since this isn't the first system I've built. However, I'm not sure if it's my mobo or the new Z170 chipset, but I've never had a mobo have a problem with a posted speed of RAM before. Initially I ordered some older Ripjaws that I guess were originally meant for X99 boards, since their speed was only 2400MHz and they only took 1.2v. I had some issues getting XMP working properly and the board booting, so I ordered sticks that Asus themselves had said were verified compatible with the board. The new 3000MHz sticks arrived and I put them in; sure enough, they had the same problem. If I went with the speeds XMP was supposed to run them at, the board wouldn't post and would fall back on the failsafe speed of 2133MHz. It was only after some googling that I came to see that 2133MHz is actually the stock speed for these sticks, and any other quoted speed is basically overclocking, albeit an OC straight from the manufacturer. After also reading that giving it a bit more voltage, or turning the speed down slightly, could help, I did manage to get the board stable. So right now I'm running the memory at 2933MHz with 1.36v and things are stable. My question to you guys is this: is this a normal thing with every Z170 board, or do I have either faulty RAM or a faulty mobo that I should return?
|
# ? Jan 5, 2016 02:11 |
|
Stable memory speed is dependent on the CPU's memory controller, and it's a bit like the silicon lottery.
|
# ? Jan 5, 2016 02:17 |
|
real_scud posted:G.Skill Ripjaws V 32GB (4x8gb) DDR4 3000 1.35v I've got the same issue with the same RAM but specced at 3200. It will only run at 3000 speeds and any higher will cause the mobo to boot at 2133
|
# ? Jan 5, 2016 07:18 |
|
dud root posted:I've got the same issue with the same RAM but specced at 3200. It will only run at 3000 speeds and any higher will cause the mobo to boot at 2133
|
# ? Jan 5, 2016 13:32 |
|
Any reason someone would want faster than DDR4-2133 on pre-Skylake anyway? I've seen the Anandtech benchmark, and the differences between all the variants from 2133 to 3200 (or 3400, I don't remember) disappeared into the error noise. Only memory-specific benchmarks showed a slight improvement. Probably better to spring for tighter CAS timings instead?
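The CAS trade-off is easy to put numbers on: first-word latency in nanoseconds is CL cycles at the DDR I/O clock (half the MT/s rate), i.e. 2000 x CL / (MT/s). The CL pairings below are typical-looking values picked for illustration, not specs for any particular kit:

```python
# First-word latency: CL cycles at the DDR I/O clock (half the MT/s rate).
# latency_ns = CL / (MT/s / 2) * 1000 = 2000 * CL / MT/s
def cas_latency_ns(mts: int, cl: int) -> float:
    return 2000 * cl / mts

# Assumed, illustrative speed/CL pairings:
for mts, cl in [(2133, 15), (2666, 16), (3000, 15), (3200, 16)]:
    print(f"DDR4-{mts} CL{cl}: {cas_latency_ns(mts, cl):.2f} ns")
```

The takeaway is that a faster kit with a looser CL can land at the same absolute latency as a slower kit with a tight one, which is why the bandwidth bump rarely shows up outside memory-specific benchmarks.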
|
# ? Jan 5, 2016 14:49 |
|
if you want ddr4-3200 to actually be usable you should have gone with the x99 platform, as the memory controllers on those aren't lovely. on skylake in general you don't want to go over 2400 because 1) there's no real benefit to cpu function with the exception of border cases 2) the memory controllers cannot actually handle speeds that high (your sample is actually one of the better ones, most skylake chips poo poo their pants at 2700-2800) 3) you're paying more for better integrated graphics performance, which you're not using. unless you want to get sick framerates on fallout 4, then by all means Anime Schoolgirl fucked around with this message at 15:11 on Jan 5, 2016
# ? Jan 5, 2016 15:07 |
|
These guys found significant differences in game performance between DDR4-2133 and DDR4-2666 with an i3 6100 but they seem to be the only ones and I don't know if they've tested it with any other CPU so v0v
|
# ? Jan 5, 2016 15:07 |
|
Lord Windy posted:How do you guys use so much RAM? I have 16 and through normal use have never used more than 8 Any modern OS will use unused-by-applications RAM for disk cache, so having extra should still give you some benefit.
|
# ? Jan 5, 2016 15:16 |
|
New NUCs announced, there's the usual ones but also going to be a quad core Iris Pro model with TB3: http://arstechnica.com/gadgets/2016/01/intels-next-nuc-will-be-a-quad-core-mini-pc-with-iris-pro-and-thunderbolt-3/ Somewhat related if you haven't seen it yet, Razer announced a TB3 GPU enclosure/dock: http://www.razerzone.com/gaming-systems/razer-blade-stealth#gpu-support
|
# ? Jan 7, 2016 05:17 |
|
Ah poo poo, found out that Intel plans to release Broadwell-E at Computex. That's June, plus whatever time it needs to trickle down to the shops.
|
# ? Jan 7, 2016 16:58 |
|
japtor posted:New NUCs announced, there's the usual ones but also going to be a quad core Iris Pro model with TB3: Hm, I bet the i5 version of that would make a decent HTPC + Steam remote play box. Wonder if it supports 4K60 output.
|
# ? Jan 8, 2016 01:19 |
|
japtor posted:New NUCs announced, there's the usual ones but also going to be a quad core Iris Pro model with TB3: Why are all of these video card enclosures always so big? This one is at least smaller than the Alienware one, but these things are the size of full mini-ITX computers.
|
# ? Jan 8, 2016 07:00 |
|
With graphics cards the size they are, it's hard to go smaller. And yes, I realize there are ITX options for some cards, but there aren't that many. You can probably also guess what dictates the size of ITX enclosures.
|
# ? Jan 8, 2016 07:49 |
|
I do notice that Razer is quick to put a '40 Gigabits is a lot, look at this tall bar' graph up there to sell the concept to people, but 40 gigabits per second is 5GB/second (and that's saturating the whole available bus). PCIe 3.0 x16 is ~16GB/sec, and 2.0 x16 is half that. For all intents and purposes, Razer's TB3 box will give you PCIe 3.0 x5-equivalent performance - at the most - and that doesn't even really factor in the (probably infinitesimal) lag you'll most likely get by piping in a video signal over a highly-glorified TB3->PCIe bridge chip. Given that the laptop they're trying to pair it to only has QHD and UHD options...I think they're pulling a snow job. BIG HEADLINE fucked around with this message at 09:48 on Jan 8, 2016 |
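The arithmetic in that post is easy to verify. Using PCIe 3.0's 8 GT/s per lane with 128b/130b encoding (~0.985 GB/s of payload per lane):

```python
# Sanity-check the TB3 vs PCIe numbers.
TB3_GBIT = 40                       # Thunderbolt 3 total link rate, Gb/s
tb3_gbyte = TB3_GBIT / 8            # = 5 GB/s

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
pcie3_lane_gbyte = 8 * (128 / 130) / 8   # ~0.985 GB/s per lane
pcie3_x16 = pcie3_lane_gbyte * 16        # ~15.75 GB/s

print(f"TB3:       {tb3_gbyte:.2f} GB/s")
print(f"PCIe3 x16: {pcie3_x16:.2f} GB/s")
print(f"TB3 is worth ~{tb3_gbyte / pcie3_lane_gbyte:.1f} PCIe 3.0 lanes")
```

So 5 GB/s really does work out to roughly five PCIe 3.0 lanes of bandwidth, before any protocol overhead on the Thunderbolt side.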
# ? Jan 8, 2016 09:45 |
|
BIG HEADLINE posted:I do notice that Razer is quick to put a '40 Gigabits is a lot, look at this tall bar' graph up there to sell the concept to people, but 40 gigabits per second is 5GB/second (and that's saturating the whole available bus). PCIe 3.0 x16 is ~16GB/sec, and 2.0 x16 is half that. For all intents and purposes, Razer's TB3 box will give you PCIe 3.0 x5-equivalent performance - at the most - and that doesn't even really factor in the (probably infinitesimal) lag you'll most likely get by piping in a video signal over a highly-glorified TB3->PCIe bridge chip. it turns out that graphics is barely affected by halving or quartering the pcie link bandwidth: https://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/ this is not really an issue even at 4k. nobody is waiting for slow-rear end pcie transfers when they are rendering frames.
|
# ? Jan 8, 2016 10:23 |
|
VulgarandStupid posted:Why are all of these video card enclosures always so big? This one is at least smaller than the Alienware one, but these thing are the size of full mini-itx computers. BIG HEADLINE posted:I do notice that Razer is quick to put a '40 Gigabits is a lot, look at this tall bar' graph up there to sell the concept to people, but 40 gigabits per second is 5GB/second (and that's saturating the whole available bus). PCIe 3.0 x16 is ~16GB/sec, and 2.0 x16 is half that. For all intents and purposes, Razer's TB3 box will give you PCIe 3.0 x5-equivalent performance - at the most - and that doesn't even really factor in the (probably infinitesimal) lag you'll most likely get by piping in a video signal over a highly-glorified TB3->PCIe bridge chip. http://www.techpowerup.com/mobile/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/21.html I am curious about the bandwidth when outputting to the internal screen though and how that affects performance. Off the top of my head at worst that'd eat a bit over a third of the bandwidth?
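For the internal-screen question, here's a back-of-envelope figure, assuming uncompressed 24-bit RGB frames and ignoring blanking and protocol overhead:

```python
# Rough bandwidth needed to push rendered frames back to the internal
# panel, assuming uncompressed 24-bit RGB (blanking/overhead ignored).
def display_gbit(width: int, height: int, hz: int, bpp: int = 24) -> float:
    return width * height * hz * bpp / 1e9

TB3_GBIT = 40.0
for name, (w, h) in {"QHD": (2560, 1440), "UHD": (3840, 2160)}.items():
    gbit = display_gbit(w, h, 60)
    print(f"{name}@60: {gbit:.1f} Gb/s (~{100 * gbit / TB3_GBIT:.0f}% of TB3)")
```

Under these assumptions UHD@60 comes to about 30% of the link and QHD@60 about 13%, so "a bit over a third" is in the right ballpark for the UHD panel once real-world overhead is added.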
|
# ? Jan 8, 2016 11:01 |
|
Malcolm XML posted:it turns out that graphics is barely affected by halving or quartering the pcie link bandwidth: https://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/ This is, as I understand it, because you load the entire dataset of what you might want to render into the memory of your GPU. A smaller pipe from PC to eGPU should only mean slightly longer loading times.
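Under that model the narrower link mostly costs you at load time, when assets cross the bus once. A rough estimate with assumed numbers (the 4 GB asset size is hypothetical):

```python
# If textures/geometry live in VRAM after loading, link speed mainly
# affects upload time, not frame time. Illustrative numbers only.
ASSETS_GB = 4.0          # hypothetical level's worth of GPU assets

links = {
    "PCIe 3.0 x16": 15.75,   # GB/s
    "TB3 (best case)": 5.0,  # GB/s
}
for name, gb_per_s in links.items():
    print(f"{name}: ~{ASSETS_GB / gb_per_s:.2f} s to upload {ASSETS_GB:.0f} GB")
```

Half a second extra on a level load is a far easier sell than a per-frame penalty, which matches the Puget results linked above.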
|
# ? Jan 8, 2016 11:30 |
|
ASUS is claiming that using Thunderbolt does add noticeable latency, so they custom built a PCIe-over-USB-C solution that they say achieves 99.99% performance of PCIe x16. Hopefully somebody will benchmark both solutions.
|
# ? Jan 8, 2016 16:34 |
|
I'm shocked it's taken so long for someone to market a commercial external GPU. Most gaming laptops are disgustingly gaudy and it'd be nice to be able to have a Thinkpad or MacBook play real games.
|
# ? Jan 8, 2016 18:25 |
|
Tab8715 posted:I'm shocked it's taken so long for someone to market a commercial external GPU. Most gaming laptops are disgustingly gaudy and it'd be nice to be able to have a Thinkpad or MacBook play real games. The biggest issue was the lack of an interface, but USB 3.1 seems to be good enough for it. Alienware released an external GPU enclosure fairly recently, but used a proprietary port, which meant most people couldn't use it. Before that, there was some ExpressCard stuff going on, but it was limited to something like PCI-E 2.0 x2.
|
# ? Jan 8, 2016 18:54 |
|
Thunderbolt has been around for 2-4 years, I don't see why we couldn't use that interface.
|
# ? Jan 8, 2016 18:56 |
|
Tab8715 posted:Thunderbolt has been around for 2-4 years, I don't see why we couldn't use that interface. Because nothing ships with Thunderbolt besides a few random Sony laptops and Apple computers. The Sony laptops did use it for external GPU stuff, but IIRC they weren't that good.
|
# ? Jan 8, 2016 18:58 |
|
|
Tab8715 posted:I'm shocked it's taken so long for someone to market a commercial external GPU. Most gaming laptops are disgustingly gaudy and it'd be nice to be able to have a Thinkpad or MacBook play real games. For what it's worth, my $99 Kangaroo with an Atom CPU can do the Steam In-Home Streaming thing from my real PC pretty much OK; it only looks a little messed up once in a while, briefly. I bet any laptop or even a NUC could do it flawlessly.
|
# ? Jan 8, 2016 19:24 |