Kazinsal
Dec 13, 2011



Man, I hope Skylake-K doesn't make my SB-E completely irrelevant. I mean, yeah, it's an old chip, but it's still fast for current stuff. I'm just banking on it still being fast enough a year from now.


Kazinsal
Dec 13, 2011



evilweasel posted:

More than enough, CPUs don't get that much increase in processing power each generation these days and virtually nothing caps it out.

I wonder how soon we're going to start seeing Moore's Law degrade rapidly, as the increase in transistors per square inch from one die generation to the next approaches "zero, unless you like massive electromigration and quantum tunneling". Is 10nm going to be it? IIRC you start to see gnarly quantum tunneling happening at that point.

Kazinsal
Dec 13, 2011



Disabling hyperthreading entirely for a single game may be one of the stupidest things I've heard suggested for an extra couple frames.

You basically turn your i7 into a (possibly greatly) more expensive i5 by doing that.

Kazinsal
Dec 13, 2011



My SB-E (i7-3820) is still rarely a bottleneck at 3.8 GHz (up from 3.6 GHz stock). I have, however, considered following one of the Nehalem-style BCLK strap overclocking guides to push it to 4.625 GHz, though I'm not sure if my Corsair H60 is enough for that...

Kazinsal
Dec 13, 2011



What the gently caress does Intel not having an LTE modem have to do with loving ANY of this poo poo?

I'm with Jawn on this one. You've gotta be from a parallel world. Do your Vulcans have goatees?

Kazinsal
Dec 13, 2011



Had that happen on my SB-E. At first I chalked it up to Windows being unable to accurately determine my clock speed (task manager thinks it's ~5.1 GHz, it's actually 4.2) but it turns out it only happens if I try using sleep mode. I would assume it affects regular SB if it affects SB-E.

Kazinsal
Dec 13, 2011



imurdaddy415 posted:

What do you think of a 5820K for CF fury x? would the 5830k be worth the extra for the larger pcie lane?

CPU's good. Take some advice and buy a pair of 980Tis though, the Fury X is pretty lackluster unless you plan on playing games at 4K.

Kazinsal
Dec 13, 2011




Dang, dude. That's double the CPU PassMark score. You're gonna have a good time.

Kazinsal
Dec 13, 2011



Dumbass friend of mine needs a quiet 1366 cooler on a budget. Like, a $40 budget.

212 EVO with the fan speed dramatically lowered?

Kazinsal
Dec 13, 2011



He got an i7-940 and board for $100 (CAD, too -- we're in Canada) so he decided it would be a good media/seed server or something. No plans to majorly overclock it as far as I know. Of course, the only place he has to put it in is his bedroom so he's kind of hoping for a way to keep it cool without driving him nuts.

I need to remind him that he's a broke college student and he needs to stop doing stupid things like this when he could probably get decent streaming performance out of a god damned RPi 2...

Kazinsal
Dec 13, 2011



He doesn't have the stock cooler -- it's a used build made of parts thrown together.

I'll let him know to just grab a 212 and drop the fan speed if it keeps him awake or whatever. Thanks guys!

Kazinsal
Dec 13, 2011



I can't remember if this was posted at any point, so in case it hasn't, here it is: Prime95 and similar heavy complex workloads freeze Skylake CPUs up on certain exponents. Intel acknowledges the problem at the top of the second page of the thread. Microcode update incoming to fix it.

Kazinsal
Dec 13, 2011



real_scud posted:

For once I'd like to 'win' and get one that can OC to 5.2 or something.

Try LN2. :science:

Kazinsal
Dec 13, 2011



I think if you got a stable 5.1 GHz Xeon you win for clocks, IPC, and power consumption all in one.

Kazinsal
Dec 13, 2011



Windows client doesn't do PAE on 32-bit anymore. Hasn't since XP SP2. 32-bit Server does, and PAE is a literal requirement to enter 64-bit long mode, so that one is a given.

There's a highly unofficial kernel patch for Windows 7 to enable PAE but ymmv on using unofficial kernel patches in a production environment.
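
If you want to see what your chip actually advertises, CPUID spells it out: PAE is bit 6 of leaf 1's EDX, NX is bit 20 of leaf 0x80000001's EDX, and long mode is bit 29 of that same register. Quick and dirty sketch using GCC's cpuid.h helper (illustrative only, not tested on anything exotic):

[code]
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1: standard feature flags. Bit 6 of EDX = PAE. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("PAE:       %s\n", (edx & (1u << 6)) ? "yes" : "no");

    /* Leaf 0x80000001: extended flags. Bit 20 = NX, bit 29 = long mode (64-bit). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("NX bit:    %s\n", (edx & (1u << 20)) ? "yes" : "no");
        printf("Long mode: %s\n", (edx & (1u << 29)) ? "yes" : "no");
    }

    return 0;
}
[/code]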

Kazinsal
Dec 13, 2011



EdEddnEddy posted:

... but why would anyone really want to build a brand spanking new system with the latest hardware, and then run an out of date OS on it?

Paranoia.

Windows 10's telemetry is no more extensive than any previous Windows beta's, but some idiots making things up to scare people blew any sense of rationality out of the water. Not to mention that these "privacy conscious" people are too daft to hit the "customize settings" button and just blindly smack Express Settings, which is a "do everything that Microsoft recommends" option.

But no, it's an OS problem. Users are innocent. Microsoft is the devil. We are all forced to use Windows 98 and IE4. It is the era of toastytech and AMD CPUs. Dehumanize yourself and face to Bill.

Kazinsal
Dec 13, 2011



Yeah, there's a lot more opt-out available in 10, and explicit statements on what's collected. You can straight up turn telemetry off in Enterprise with a GPO.

Kazinsal
Dec 13, 2011



JawnV6 posted:

So driver support on an entirely new OS is 100% perfect? There's no legitimate reason to prefer a software stack that's been proven to work with whatever goofy devices are floating around a particular lab or office, paranoia about telemetry is the only possible reason to avoid that upgrade.

In my experience it's rock solid. This isn't the XP->Vista era anymore; we're not dealing with a majorly changed driver model and framework being tripped up on old code (though it is closer internally to Windows 8.1 than Windows 7, or 8 for that matter -- 8.1 introduced a new user-mode driver framework that is designed for moving kernel-mode drivers out of kernel space).

Yes, if you're running sketchy bespoke unsigned drivers with crap code quality in test mode and it barely holds together on Windows 7 then there's no guarantee it's going to work on Windows 10. But in that case it's probably a miracle it works on Windows 7.

I've honestly had better performance in Windows 10 than I had in Windows 7. Better network throughput, lower temperatures and more idle time, and all my software still works.

Kazinsal
Dec 13, 2011



That's where I get a little confused myself, because PAE is a requirement for NX. The NX bit is the uppermost bit of a page entry in PAE page tables.

So, technically, NX on 32-bit Windows uses PAE, but the extra physical address space isn't exposed.
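
For the curious, the PAE/long-mode page table entry is 64 bits wide and execute-disable sits way up at bit 63 -- which is why the old non-PAE 32-bit format has nowhere to put it. Rough sketch of just the relevant bits (the field names are mine, not Microsoft's or Intel's):

[code]
#include <stdint.h>

/* A few bits of a 64-bit PAE/long-mode page table entry.
 * Names are my own; only the bits relevant to the NX discussion are shown. */
#define PTE_PRESENT   (1ULL << 0)   /* page is mapped */
#define PTE_WRITABLE  (1ULL << 1)   /* writes allowed */
#define PTE_NX        (1ULL << 63)  /* execute-disable: only exists in the 64-bit entry format */

static int page_is_executable(uint64_t pte)
{
    /* Executable iff present and the NX bit is clear (and EFER.NXE is enabled system-wide). */
    return (pte & PTE_PRESENT) && !(pte & PTE_NX);
}
[/code]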

Kazinsal
Dec 13, 2011



Tab8715 posted:

How does a mobile i5 compare to the desktop i5? I'm sure if it's faster but is it enough to play modern games? I'd rather buy one of these for my next PC than anything else.

Mobile Core i-series is a minefield. i5s are dual-core, except ones that have a Q at the end of the model number, which are quad-core. The dual-core mobile i5s are hyperthreaded, while the quad-core mobile i5s are not. All mobile i7s support hyperthreading, but similar to the mobile i5s, unless they have a Q or X in the model number they're dual-core.

The mobile i7 quad-core line is generally close to par performance-wise with a similar desktop chip, unless thermal throttling kicks in. If thermal throttling kicks in, though, your lap is already on fire because the Tjunction max for those is like 100 C.
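
If you wanted to turn that rule of thumb into something you can eyeball, it'd look roughly like this -- purely my own sketch of the naming convention for that era, ARK is the actual source of truth:

[code]
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Rough decode of a mobile Core i5/i7 model number, per the rule of thumb above.
 * Era-specific and unofficial: check Intel ARK before buying anything. */
struct mobile_core { int cores; bool hyperthreading; };

static struct mobile_core guess_mobile_core(const char *family, const char *model)
{
    bool quad = strchr(model, 'Q') || strchr(model, 'X');   /* Q or X in the model = quad-core */
    struct mobile_core c;

    if (strcmp(family, "i5") == 0) {
        c.cores = quad ? 4 : 2;
        c.hyperthreading = !quad;   /* dual-core mobile i5s have HT, the quads don't */
    } else {                        /* "i7" */
        c.cores = quad ? 4 : 2;
        c.hyperthreading = true;    /* all mobile i7s have HT */
    }
    return c;
}

int main(void)
{
    struct mobile_core c = guess_mobile_core("i7", "6700HQ");
    printf("%d cores, HT %s\n", c.cores, c.hyperthreading ? "on" : "off");
    return 0;
}
[/code]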

Kazinsal
Dec 13, 2011



Integrated graphics is literally a fairly basic GPU on the CPU die. The lowest-end ones these days are enough to do 1080p video encode/decode smoothly and play lighter games at passable settings.

And then there's the Intel Iris Pro 580, which can do 4K 60fps encode/decode and is smack between a GTX 750 and 750 Ti for single-precision GFLOPS. Too bad it only has 128 MB of eDRAM and is only available on a handful of high-end mobile chips.
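
Napkin math on the GFLOPS claim (peak single-precision; the boost clocks are my ballpark assumptions, not gospel):

[code]
#include <stdio.h>

/* Peak single-precision GFLOPS = execution units * FLOPs per cycle per unit * clock (GHz).
 * Gen9 EU: two SIMD-4 FPUs with FMA -> 16 FLOPs/cycle. CUDA core with FMA -> 2 FLOPs/cycle.
 * Clocks below are approximate boost clocks and are my assumption. */
static double gflops(int units, int flops_per_cycle, double ghz)
{
    return units * flops_per_cycle * ghz;
}

int main(void)
{
    printf("Iris Pro 580: %.0f GFLOPS\n", gflops(72, 16, 1.00));   /* ~1152 */
    printf("GTX 750:      %.0f GFLOPS\n", gflops(512, 2, 1.02));   /* ~1044 */
    printf("GTX 750 Ti:   %.0f GFLOPS\n", gflops(640, 2, 1.02));   /* ~1306 */
    return 0;
}
[/code]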

Kazinsal fucked around with this message at 06:20 on Apr 9, 2016

Kazinsal
Dec 13, 2011



Yup. The OP is from 2011 though so read the last couple pages to catch up on Zen. http://forums.somethingawful.com/showthread.php?threadid=3380752

Kazinsal
Dec 13, 2011



So, speculation on the next generation of Intel processors being what it is, would it be worth it to go from an i7-3820 (3.6-3.8 GHz, 4C/8T) to a Xeon E5-2670 (2.6-3.3 GHz, 8C/16T) for about $90 shipped, or should I hold out for whatever the 7th generation is going to be? I do about equal parts gaming and virtualization, and one of the things I saw on Intel ARK was that the E5 does PCIe 3.0, not just 2.0 like the 3820.
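
For context on the PCIe part, the per-lane math roughly doubles going from 2.0 to 3.0 (theoretical peaks only, real traffic never hits these):

[code]
#include <stdio.h>

/* Theoretical per-direction PCIe bandwidth for an x16 slot.
 * PCIe 2.0: 5 GT/s per lane with 8b/10b encoding   -> ~0.5   GB/s per lane.
 * PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane. */
int main(void)
{
    double lanes = 16.0;
    double gen2 = 5.0 * (8.0 / 10.0)    / 8.0 * lanes;  /* GT/s * coding efficiency / bits per byte */
    double gen3 = 8.0 * (128.0 / 130.0) / 8.0 * lanes;

    printf("PCIe 2.0 x16: %.1f GB/s per direction\n", gen2);    /* ~8.0   */
    printf("PCIe 3.0 x16: %.2f GB/s per direction\n", gen3);    /* ~15.75 */
    return 0;
}
[/code]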

Kazinsal
Dec 13, 2011



NATEX usually has a couple in stock. They're used, tested, and in working condition with a 60 day warranty. They're basically a wholesale previous-previous-generation electronics recycling/reseller.

Kazinsal
Dec 13, 2011



sincx posted:

25% increase from Haswell to Skylake seems REALLY generous. I thought it was more like 10-15%?

If even that. In a lot of gaming situations it's within a few percent.

Kazinsal
Dec 13, 2011



EdEddnEddy posted:

I ask myself the same with my SandyBridge-E 3930K. BW-E Really offers diddly for an upgrade at the cost. I feel I am still 2 -E generations before I bite on another upgrade.

For a 2500/2700K Quad core, sure one of the newer -E series would be a upgrade for CPU task and the number of PCI-E Lanes you can get if you don't get the bottom -E Chip, but outside of that, those SB chips are still drat fine.

Meanwhile, I'm on a gently overclocked SB-E 3820 -- everything I touch seems to not like overclocking nearly as far as other people can push things -- and wondering if I should jump on 6700K, wait a bit and jump on 6850K, or wait longer and jump on 7700K.

Kazinsal
Dec 13, 2011



EdEddnEddy posted:

Well the 3820 takes some finesse as it isn't a K series so OC'ing it is a whole different bag.

You could always swap it out for a 3930K or 1650 Xeon which is unlocked allowing you to OC to 4.4~4.8ghz with the right cooling and board.

Though if done right, the 3820 was able to OC to the mid 4's easy as well. What MB are you using and what have you tried?

Gigabyte GA-X79-UP4 rev1.0. I can do 4.0 GHz stable with just the multiplier. I followed a BCLK strap OC guide for getting up to 4.625 GHz but it wouldn't POST. :(

Kazinsal
Dec 13, 2011



EdEddnEddy posted:

There are probably other things you have to tone down to get it to post that high besides just voltage and whatnot. But I would have to see the BIOS screens you have and figure out what does what vs my ASUS bios I have been messing with for years.

Can you push it to 4.625 and then reduce the multiplier to bring it down to 4.4 or so just to get things rolling?

This is exactly the board and BIOS I have, and the guide I followed to no avail. Maybe it needs more volts... :science:

Kazinsal
Dec 13, 2011



AEMINAL posted:

When I first overclocked my 60 hz monitor to 76 hz this is all I did for a while. Can't imagine what 144 hz is like :stonk:

It's a blessing and a curse. Holy gently caress is it ever smooth, but once you use a 144 Hz monitor any 60 Hz monitor is going to feel like it has the input lag of a cheap mid-2000s LCD TV.

Kazinsal
Dec 13, 2011



Boiled Water posted:

What sort of work can justify an -E processor? I mean compared to just farming our work to an AWS instance.

A lot of video work goes much faster on processors with more than four cores and loads of fast memory. And you really don't want to be sending a terabyte of raw 4K footage to an AWS instance to do your editing and rendering on.

Kazinsal
Dec 13, 2011



Holy wow Intel needs an old-fashioned AMD rear end kicking.

Kazinsal
Dec 13, 2011



PerrineClostermann posted:

What's the performance difference between a C2D Conroe and a Skylake Pentium?

Skylake Pentium wins by a margin of about 2.5:1.

Kazinsal
Dec 13, 2011



HardOCP reviews the i7-7700K.

TL;DR same IPC, same perf/clock, same stock clocks. 10-20W less under heavy load.

Zen better not be a dumpster fire, save us AMD

Kazinsal
Dec 13, 2011



Sormus posted:

Its page 286, only 100 more pages till 386 and 200 till 486!

This thread is now in 16-bit protected mode.

Anime Schoolgirl posted:

i dunno, 45 less watts than a comparable Xeon Broadwell-EP/Broadwell-E SKU for the same performance is at least eyebrow-raising.

Yeah that's the kind of thing that would definitely get me to build a home server, and probably a new desktop as well if it's inexpensive.

Kazinsal
Dec 13, 2011



PerrineClostermann posted:

Would a 3D printed delidder be strong enough to withstand multiple deliddings? The typical filament used is pretty weak structurally, isn't it?

3D printing has gotten a lot more "real" in the things it can produce with a consumer/hobbyist printer that costs under a thousand dollars. Low-end printers only do PLA, which is a biodegradable plastic made from starches and often used in recyclable plastic containers. Beefier printers can work with ABS filament, which is what a lot of molded plastic commercial goods are made out of -- including Lego bricks, and think of how solid those are when you step on them. You also need a better printer to work with ABS, since it requires hotter temperatures and a heated bed, usually covered in mildly adhesive tape, in order to not warp.

Printed PLA is more brittle than ABS, and not usually something you want to make actual working tools out of. Printed ABS can be really strong, though, and survives a fair bit of mechanical stress, but it's not food-safe without sealing. While both PLA and ABS should be printed in a well-ventilated area, this goes doubly for ABS since it releases fumes that are potentially toxic if inhaled too much.

Kazinsal
Dec 13, 2011



With an overclocking board and CPU combo you can also set Turbo Boost up to run on all cores at once (instead of only on a certain number of cores at once), or to run all the time on all cores, or to disable all dynamic clock lowering and run at full tilt 24/7.

Kazinsal
Dec 13, 2011



As long as you're not hitting some ridiculous temperature (like, 80+ C) it's going to maintain itself indefinitely.

Kazinsal
Dec 13, 2011



What x86 needed thirty years ago was more than eight loving general purpose registers. Especially since one is the stack pointer and one is the stack frame pointer (unless your code is using frame pointer optimization, which makes debugging a massive pain in the rear end). x86-64 added another eight, but that's still a pitiful number of registers compared to x86's contemporaries. The 68000 had eight data registers and eight address registers, though one of them was a dedicated stack pointer that was automatically swapped out in a transition between user and kernel mode. ARM has a whole mess of registers, including a number that are banked depending on what CPU mode you're in, such that you have a different stack pointer and link register for each exception type.

x86 already has register renaming internally for micro-op parallelism, and has for decades. But until x86-64, you had six, sometimes seven registers to work with. That's it. A lot of intermediate results in calculations had to be at best a cache hit, possibly a trip out to main memory, or -- worse yet -- a page fault and the resulting memory juggling.

e: Itanium, on the other hand, had 128 general-purpose integer registers, 128 floating point registers, 64 one-bit predicate registers (used for compare results), and eight branch registers.
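
To make the register pressure thing concrete, here's the kind of toy loop where 32-bit x86 has to start spilling intermediates to the stack while x86-64 can park everything in r8-r15. My own illustration -- actual allocation is entirely up to the compiler:

[code]
#include <stddef.h>

/* Eight running sums, plus a pointer and a loop counter: ten live values.
 * Compiled for 32-bit x86 (~7 usable GPRs), some of these have to live on the
 * stack; compiled for x86-64 (15 usable GPRs including r8-r15) they can all
 * stay in registers. Toy example only. */
void sum_columns(const int *data, size_t rows, int out[8])
{
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;

    for (size_t i = 0; i < rows; i++) {
        s0 += data[i * 8 + 0];
        s1 += data[i * 8 + 1];
        s2 += data[i * 8 + 2];
        s3 += data[i * 8 + 3];
        s4 += data[i * 8 + 4];
        s5 += data[i * 8 + 5];
        s6 += data[i * 8 + 6];
        s7 += data[i * 8 + 7];
    }

    out[0] = s0; out[1] = s1; out[2] = s2; out[3] = s3;
    out[4] = s4; out[5] = s5; out[6] = s6; out[7] = s7;
}
[/code]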

Kazinsal fucked around with this message at 13:27 on Dec 27, 2016

Kazinsal
Dec 13, 2011



Jay's a dude who knows what he's talking about, and can put it into practice. I've bought AMD GPUs for almost a decade now, through thick and thin, and I appreciate and agree with his honesty and knowledge of how to really get the most oomph you can out of your purchases.

Linus pretends to know what he's talking about and would be paying for his ridiculously high project failure rate if it weren't for his massive sponsorships that come from being the Pewdiepie of computer hardware.


Kazinsal
Dec 13, 2011



EdEddnEddy posted:

I think, if Xpoint ever actually becomes a good thing (for consumers) then maybe that could be the kicker I need to actually care to upgrade past SB-E.

Cheap Haswell-E equivalent cores on whatever Zen's platform is going to be is the kicker for me.

I have a feeling if Zen sucks I'll just end up grabbing a 3930K off eBay to replace this 3820 and hope that tides me over until Zen 2 or whatever the next thing is that Intel produces and is actually a worthy upgrade arrives.
