EoRaptor
Sep 13, 2003

by Fluffdaddy

Chuu posted:

I was pretty excited when I saw the announcement since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS that could be loaded up with ECC Memory. Then I saw the 8GB limit. Seriously Intel?

Just imagine Paul Otellini jumping around on stage, sweating and yelling "MARKET SEGMENTATION! MARKET SEGMENTATION! MARKET SEGMENTATION!" and you've pretty well got the reason for it.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Alereon posted:

Note that Haswell-E requires DDR4 memory.

I'm not too interested in Haswell-E, but it might knock prices for current high end stuff around a bit. DDR4 is going to be a tougher sell, but even if people just see it as a premium option, the 'newer technology' aspect should put some price pressure on DDR3.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Alereon posted:

Remember that Iris Pro comes with 64/128MB of L4 cache which can have a more general impact on performance. There also are Iris Pro Core i5s.

I'm more interested in what market intel is going after with a K series that has Iris Pro. A general rule is that the bigger the die, the lower the overclock potential, and Iris Pro is a huge amount of silicon. Is it separately clocked? Can you turn it off and use the 'area' to help with cooling?

The performance potential of 128MB of fast, local L4 cache is nice, but few programs will really be able to take advantage of it, and the CPU cache control hardware won't be optimized for it, so it may not yield as much benefit as it could.
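
To put a number on "take advantage of it": the sort of test that would show it is a pointer-chasing microbenchmark whose working set either fits inside that 128MB or spills out to DRAM. A rough C sketch (my own toy example, not anything rigorous; sizes and timings are illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* 16M nodes * 8 bytes = 128MB working set; shrink NODES to fit inside the L4
   and the per-hop latency should drop noticeably, grow it and it shouldn't. */
#define NODES (16UL * 1024 * 1024)

int main(void)
{
    size_t *next = malloc(NODES * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: builds a single random cycle so the hardware
       prefetchers can't guess the access pattern. */
    for (size_t i = 0; i < NODES; i++) next[i] = i;
    srand(1);
    for (size_t i = NODES - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 16) ^ (size_t)rand()) % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* Each load depends on the previous one, so this measures memory latency,
       which is exactly what a big L4 is supposed to hide. */
    clock_t start = clock();
    size_t p = 0;
    for (size_t hop = 0; hop < NODES; hop++) p = next[p];
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("ended at %zu, ~%.1f ns per hop\n", p, secs * 1e9 / NODES);
    free(next);
    return 0;
}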

EoRaptor
Sep 13, 2003

by Fluffdaddy

Henrik Zetterberg posted:

This makes no sense whatsoever.

If it's there just for the iGPU, then my statement is false. However, if the CPU can access it as a L4 cache, then you are relying on the cache control logic to correctly decide what data to keep in that cache, what to flush, what to precache, and to keep the cache coherent among all the threads that could access it. This is a very specialized bit of logic, and is tailored very specifically for the cache size that is on the CPU. It can certainly use a larger cache, but the performance improvement won't be as great as if that logic had been built for the larger cache pool from the start.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Combat Pretzel posted:

I suppose it's good that I held off. Not that hardware lock elision nets big rear end performance improvements, but given various highly threaded apps and games, I take anything as soon the various apps and threading libraries support it.

Yeah, this looked like a great feature to let multi-threaded applications both run more quickly and run more safely, all with very little work needed outside of compiler and library support.

It'll still happen, but now everybody is going to be much more wary of it and implementation will be massively slowed.

I guess I'm waiting for Broadwell Desktop 2015? I think that's what intel meant when they said it would be fixed in the next Broadwell CPUs.

EoRaptor
Sep 13, 2003

by Fluffdaddy

r0ck0 posted:

Are there going to be any more CPUs made for the z97 chipset? Is the 4790k the last and the greatest for this mobo?

Broadwell has, so far, very different power needs than Haswell. Even if they preserve LGA1150, you would likely need a new MB to accommodate the new power requirements.

Skylake will absolutely need a new socket, as the switch to DDR4 is a big move.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Malcolm XML posted:

looks like they might just dump broadwell entirely. Why even bother with Skylake so close and broadwell being essentially a marginal improvement?

Skylake will be a new chipset and a new memory type, and both will carry a premium (probably a pretty big one for the memory).

Broadwell will socket right into existing motherboards and use existing, cheap memory.


Producing both might seem foolish, but intel has already done all the work to get Broadwell to market, so the additional cost of actually making and delivering it is very small by comparison, and there is certainly a segment of the market to fulfill. We still see older Pentium and Core 2 CPUs being made simply to meet a cost:value market niche.

I'm betting the performance of Skylake and Broadwell for most day to day loads, and even for most gaming loads, will be effectively identical.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Jan posted:

*Existing 9 series chipset motherboards.

Yes, sorry, also only if they get a firmware update, and some just won't ever work, etc, etc. I just always assume that will be the case for this type of thing.

EoRaptor
Sep 13, 2003

by Fluffdaddy

HERAK posted:

DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users.

The DDR4 spec for voltages is pretty dated; remember that the spec was finalized in 2011. We've pushed very hard on performance per watt since then.

The DDR4L (LPDDR4) spec that exists for laptops is better; it pushes voltage down to 1.05V without sacrificing performance. Desktops could probably switch to SODIMM formats and adopt this spec without any end user impact, but I don't know if that is being seriously considered or not. You could make a traditional LPDDR4 DIMM, but I don't think anyone has actually bothered to.

Expect DDR4 to stick around for a long time, though. No one has proposed a spec that solves any of DDR4's problems in a way that is affordable for consumers.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Ninja Rope posted:

What changes in chip technology would lead to this?

It's a "14nm" design with FinFET. It should have a huge amount of space on die for making IPC improvements possible, though I doubt any individual component contributes more than a percent or two.

Add it all up, though:
Cache design
Memory controller
Integer unit design / count
Pipeline design / depth
Vector unit design / count

You only need a tiny bit everywhere to make a big overall difference; five independent 2% gains, for example, compound to roughly a 10% improvement. This has been intel's overall strategy for a number of years: even when they claim a brand new design, it's often a reshuffling of already extant compute units, with maybe one or two sections sporting something new.


I'm just hoping the K variants aren't crippled by some horrible marketing decision and get the full feature set (virtualization, transactional memory, etc).

EoRaptor
Sep 13, 2003

by Fluffdaddy

Combat Pretzel posted:

I thought you could retrofit lock elision into existing apps via the system's locking primitives (--edit: or threading libraries, depending on your platform)? Obviously not as effective as specifically making direct use of the relevant instructions, but it should result in some difference?

Same for the non-K version.
There are two TSX modes. One re-uses an existing x86 instruction prefix pair that is not normally associated with memory locking. Older CPUs will ignore them; newer CPUs will use HLE to try a basic transaction on the memory area. You need to recompile with a TSX-aware compiler and library, but you don't need to change your code at all.
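
The first mode ends up looking something like this sketch of an elided spinlock (my own example, assuming gcc with -mhle; the __ATOMIC_HLE_* hints are a GCC extension, and in practice this lives inside the threading library rather than your code):

/* HLE-elided spinlock sketch. On a pre-TSX CPU the emitted XACQUIRE/XRELEASE
   prefixes decode as the old REPNE/REPE prefixes and are ignored, so the lock
   degrades gracefully to a plain spinlock. */
static int lock_word = 0;   /* 0 = free, 1 = held */

static void hle_lock(void)
{
    /* The HLE hint asks the CPU to elide the lock and run the critical
       section as a transaction; if that aborts, it re-runs and takes the
       lock for real. */
    while (__atomic_exchange_n(&lock_word, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE)) {
        while (lock_word)
            __builtin_ia32_pause();   /* be polite while spinning */
    }
}

static void hle_unlock(void)
{
    __atomic_store_n(&lock_word, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}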

The other is a new set of instructions that lets you finely control a transaction attempt, and catch the fallout of the success/failure. You need to change your code to handle this new method, as well as needing compiler and library support.
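
The second mode is the _xbegin/_xend intrinsics from immintrin.h. A sketch (mine, assuming gcc or clang with -mrtm, and you'd check CPUID for RTM before ever calling it):

#include <immintrin.h>

static volatile int fallback_lock = 0;   /* taken only when the transaction fails */

void add_to_counter(long *counter, long delta)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Reading the lock word puts it in our read set, so anyone grabbing
           the fallback lock aborts this transaction instead of racing it. */
        if (fallback_lock)
            _xabort(0xff);
        *counter += delta;               /* transactional fast path */
        _xend();
        return;
    }
    /* Conflict/capacity/interrupt abort: fall back to an ordinary lock. */
    while (__atomic_exchange_n(&fallback_lock, 1, __ATOMIC_ACQUIRE))
        while (fallback_lock) ;
    *counter += delta;
    __atomic_store_n(&fallback_lock, 0, __ATOMIC_RELEASE);
}

The status value _xbegin returns also tells you why a transaction aborted, which is the 'catch the fallout' part.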

LiquidRain posted:

If you see desktop boards with DDR3L support. I imagine you'll only see DDR4. DDR3L is likely there for lower-cost convertible tablets or some such until DDR4 reaches price parity.

Skylake supports DDR3L and DDR4, and a motherboard can offer support for both via the UniDIMM standard, which is a modified SO-DIMM spec (same pin count, new notch location) that allows for either DDR3L or DDR4. You cannot mix and match DDR3L/DDR4 in the same system, but you can switch at any time. It's not clear how well supported UniDIMM will be.

EoRaptor
Sep 13, 2003

by Fluffdaddy

necrobobsledder posted:

Porting pthreads, for example, to support TSX and lock elision in userspace is technically viable but anyone interested in stability will be negatively impacted potentially and cause some friction and may need a little more battle testing before it can actually be considered mainline support. I don't want my production app to be a guinea pig for hardware transactional memory when I upgraded my Postgres version for a new query type, for example.
Last I remember looking at that instruction I don't remember any library actually using that for spin locks or semaphores, so it wouldn't be used by those libraries for what you'd hope for. REPNE/REPE is used for repeating instructions on a string sequence (which is kind of a clever instruction set to re-purpose honestly). But they're used for writing a loop as a single instruction and on TSX processors is now a safe sequence is the only difference. Atomic memory swap operations and the straight up lock prefixed instructions on x86 are what pthreads would compile to probably (I haven't touched pthreads for 12 years, I make zero claims on wtf is actually current). So this means not a whole lot of performance or anything without modifying the LOCK prefixed instructions, unfortunately.

However, implicit XACQUIRE/XRELEASE does make it possible on a TSX supporting processor to treat the memory region as free to other threads and will support write ordering resolution plus cache coherency conflict resolution that should be working for Skylake at least. This means that if your compiler spit out a null terminated string strlen function it will be a transaction that's protected from other threads - that's kinda cool, but it's kinda one of those things that people have done with lockless programming on x86 for a long time now (instruction sequences that will force reordering by some fluke of internals creating hardware-based memory fences). Sure, this should help for many cases of multithreaded programming performance and sanity headaches, but it won't help much beyond trivial locking failures. Then again, if they do cool stuff like creating lock digraphs to do deadlock detection and resolution with nested transactions or something, that would be super rad. But I see no such documentation so far.


Pretty sure that the few places that will get use out of this are database vendors and HFT shops that actually do multithreaded transactions but somehow haven't managed to just go lockless programming by now despite their talent availability. For everyone else that's not rewriting their locking primitives to support the instructions, TSX can give you better protection against certain segfaults in multithreaded code for userspace application code. Instead of outright crashing, you'll get a free, safe transaction that's a little faster depending upon how much and whether context switching overhead outweighs your locking overhead.

Edit:
TL;DR: TSX unfortunately does not actually get you this for all situations even with the backwards-compatibility help http://devopsreactions.tumblr.com/post/110529123748/lockess-algorithm

Absolutely true that performance will only come with dedicated code. I think the main benefit of the hardware support will be that you cannot create faulty lockless code that potentially corrupts data. It will also make validating results much more straightforward.

For actual use cases, I'm thinking Apple has the leg up here. If I squint a bit, the threading wrapper code for Swift looks like it could be tweaked to take advantage of TSX with very little work to change already written code. This could give much better thread performance throughout the entire O/S and application stack, and though I doubt it would be visible to end users as performance, it will probably pop out as less heat and longer battery life.

For big data databases, the amount of inflight transactions possible would need to be greatly increased, and we will probably see that happen on the Xeon lineup. Getting it into Skylake is probably about developer usage, not data center usage (yet).

EoRaptor
Sep 13, 2003

by Fluffdaddy

Grapeshot posted:

As far as I understand it, UniDIMM is supposed to be for SODIMMs only and incompatible with both standard DDR3 and DDR4 so you won't be using your old memory like that.

My quiet hope was that we'd switch to the SO-DIMM format for desktop boards as well. It would simplify a lot of stuff, and DDR4 is as good a time as any.

EoRaptor
Sep 13, 2003

by Fluffdaddy

necrobobsledder posted:

The sheer density of chips from high density compute server lines on DIMM boards cannot be achieved in SO-DIMM form factors, and servers are not going away - they'll go away after SO-DIMMs I would argue (no more laptops or small nettops being made, that is). You try putting 32 chips of the same size as what's on a typical DIMM now into a SO-DIMM and see how well that turns out.

The supposed trend to go towards "micro" servers that might use SO-DIMMs that some industry pundits tried to push back in 2012 as "the next thing for servers" didn't happen. Instead, Intel just made their Xeon lines a lot more power efficient and the raw performance loss going down to those Moonshot servers or whatever just wasn't worth it for pretty much anyone. If companies like Google and Amazon aren't using it for their massive farms, it is likely not as economically efficient as advertised.

I was referring just to desktop. Server DIMM requirements are so far removed from desktop that there is effectively no overlap anyway.

Besides, the cap intel puts on the desktop cpu memory controller (32GB total, 8GB per DIMM) makes it worthless for any serious server usage.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Sidesaddle Cavalry posted:

I can't answer that question directly, but I can hypothesize that it wouldn't solve the issue of two production lines like blowfish mentioned. I understand there's a difference between ECC and non-ECC RAM, but memory makers would still need to assemble for two different form factors.

We weren't talking only about DIMM production lines, more that you'd stop producing a line of desktop DIMMs in favour of producing extra laptop SO-DIMMs. You've just saved a bunch of design, testing, and validation. There would be some savings in the retail channel as well; server DIMMs are already pretty rare there, so the removal of an entire product line (desktop DIMMs) would clear inventory space and reduce inventory management overhead.

You'd also make consumer MB design slightly easier, as the space requirements for memory slots would go down.

If you think that the design/testing/validation for server dimms shares anything with desktop dimms, be assured it doesn't. Beyond any ECC requirements, the DIMM itself needs to be much, much stricter on electrical tolerances to keep EM noise down, so larger banks of dimms (slots) can all be populated without errors creeping in. Even though it shares a basic shape with desktop dimms, there really isn't any relationship between them once you begin the design process.

EoRaptor
Sep 13, 2003

by Fluffdaddy

PC LOAD LETTER posted:

If I'm reading this right it sure looks like things haven't changed much fundamentally and the 'front' vs 'back end' metaphor still works pretty well even with a very modern x86 chip that can do uop fusion and has a trace cache. Sometimes you can get a 1:1 uop vs x86 instruction ratio but sometimes you still see multiple uops even with new instructions. Seems to be all over the place really.

I don't think there is going to be a better solution than the current method of profiling applications, determining what they do most, optimizing that or introducing instructions that optimize certain actions and seeing what sticks.

The lag between instruction availability, compiler support, application support, and instruction universality (e.g. >80% of CPUs currently in use have it) is so huge that it's always going to be hard to predict what will actually turn out to be useful by the time it's generally usable.
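
That lag is also why shipping software mostly does runtime dispatch instead of assuming the new instructions exist. A minimal sketch of the usual pattern (illustrative names, using GCC's __builtin_cpu_supports):

/* Ship both paths, pick one at runtime. The binary keeps working on the
   CPUs that don't have the new instructions yet. */
static long sum_scalar(const int *v, long n)   /* baseline, runs anywhere */
{
    long s = 0;
    for (long i = 0; i < n; i++) s += v[i];
    return s;
}

__attribute__((target("avx2")))                /* compiler may vectorize with AVX2 */
static long sum_avx2(const int *v, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++) s += v[i];
    return s;
}

long sum(const int *v, long n)
{
    return __builtin_cpu_supports("avx2") ? sum_avx2(v, n)
                                          : sum_scalar(v, n);
}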

We typically have cycles where different areas get focus (hardware, compiler, language) for a bit, but even then it's hard to say where we currently are, only to look back and see where we were and try to go on from there.

It's fun to watch, because there is real innovation happening all the time by some very smart people (and groups of people). The whole drive toward multithread/multiprocess as we ran into Ghz scaling limits was really interesting, and we are still seeing the results. :)

EoRaptor
Sep 13, 2003

by Fluffdaddy

Welmu posted:

Intel is dropping the stock cooler from Skylake-S processors.

It was a good bit of money for something >90% of the buying market never used.

I'm surprised it didn't happen years ago.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Ak Gara posted:

There doesn't seem to be a water cooling thread so I'll ask here. My 5ghz 2500k is quite loud using an H100 so I was looking into putting together a custom loop (+ SLI 680's.)

I've read that sometimes adding a second radiator to your loop only drops the temps by a few degrees due to already being at the thermal capacity limit of the water block itself. Is that correct?

Yes. Unless the radiator is actually warm/hot to the touch and its fans are running all the time, it's not the bottleneck.

EoRaptor
Sep 13, 2003

by Fluffdaddy

VostokProgram posted:

...Whatever happened to the memristor, anyway?

Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use.

Xpoint is a type of memristor, though, so we are finally getting there. Don't hold your breath for logic gates built with them, though.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Pryor on Fire posted:

Well the good news is that CPU progress has slowed to such a glacial pace that you can just keep the same CPU/mobo for 5-10 years without any compelling reason to upgrade so plugging a new CPU into a socket isn't really something that happens anymore :v:

I think the next big improvements for computers will come from outside the CPU. We are already seeing SSDs as a meaningful upgrade that is more cost efficient than a new CPU, and things like xpoint (or similar) that move faster storage closer to the CPU, as well as HBM (or similar) that move faster memory closer to the CPU, will be the next big 'must haves' for computers.

GPU growth and integration will continue apace for a while, and the VR products coming in the next few years might give them a boost, but we are already heavily into the 'branding, not innovation' business model, so don't expect anything amazing.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Combat Pretzel posted:

Why wouldn't one want XPoint, if it's even faster solid state memory? --edit: I mean with NVMe interface.

Intel hasn't shown any desire to use xpoint as a flash ram replacement in SSD's or phones or what have you.

They are targeting server memory via specialty DIMMs that allow a huge increase in the amount of memory a server can have, by using a blend of xpoint and regular memory on a single DIMM. This is either managed by the CPU itself or by an 'xpoint aware' memory manager (or both!)

On the consumer front, I'd actually expect Apple to be the first ones to use xpoint in their mac pro series. They have the total control over hardware and operating system you need to turn around such a product quickly, and price isn't the first concern for people purchasing workstation class mac products. Xpoint in a high end laptop would also make a lot of sense, if the price is justifiable.

EoRaptor
Sep 13, 2003

by Fluffdaddy

I completely missed this. Oops.

Durinia posted:


This part is awesome.


Did I fall asleep and wake up in a world where memory capacity was a constraint in the PC space?

It's more that you can stick a terabyte of memory in a server, but now for the same price you can stick 4TB in. For workstations, this is basically the same selling point. It applies especially well to mac pros, which are heavily used for video and image editing, where having lots of memory helps but you are generally only working with small chunks of it at a time.

For laptops, the fact that xpoint requires no refresh cycle means it should be much more power efficient than dram. So, a system with 4GB of dram and 4GB of xpoint should perform as if it has 8GB of memory but have the battery life equal to the 4GB model. It gets even better as you increase the amount of xpoint memory in the system.

EoRaptor
Sep 13, 2003

by Fluffdaddy

fishmech posted:

Because nothing ships with Thunderbolt besides a few random Sony laptops and Apple computers. The Sony laptops did use it for external GPU stuff, but IIRC they weren't that good.

Also, intel refused to certify any device which broke out the thunderbolt port into a PCIe port in an external enclosure, so that kiboshed graphics cards along with everything else.

They changed that with Thunderbolt 3 and adopted the USB Type-C connector, so now there is much less of a barrier to entry and much more flexibility to implement devices.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Ludicrous Gibs! posted:

I've got an I5-2500 non-k that's coming up on 5 years old now. Since OC'ing isn't an option, I take it an upgrade to Skylake is probably a good idea when I build my VR rig in a month or so? Should I go for an OC-able chip this time?

The biggest boost here is that two USB 3 ports are required for Oculus VR (and probably others), and getting a new motherboard that includes a bunch will fulfill that nicely.

If you can hold off making decisions around GPUs until at least April 7th, we will have some more news about next gen nVidia GPUs, and probably AMD as well, which should help with any planning you are making.

EoRaptor
Sep 13, 2003

by Fluffdaddy

VulgarandStupid posted:

The bootleg market for VCDs was huge, I went to China 13 years ago and in some stores they have the legitimate DVDs/VCDs on the normal displays. Then in cabinets underneath, they had all the bootlegs, which were obviously much cheaper. I don't think its very hush hush over there.

FYI Most DVD players could play VCDs.

The DVD logo spec actually specifies that (S)VCD's must be playable. This wasn't well tested, and some models wouldn't play them at all, but it ended up not mattering because the DVD encryption was broken so quickly.

EoRaptor
Sep 13, 2003

by Fluffdaddy
In a good sign, Kaby Lake CPU's are already available for hardware development: http://arstechnica.com/gadgets/2016/05/intels-post-tick-tock-kaby-lake-cpus-definitely-coming-later-this-year/
Intel is giving every indication that it is on track for a 2H16 release, though not all features are completely finalized/announced.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Platystemon posted:

If it were true, a lot of people would need to know about it well in advance.

If Intel breaks compatibility, you won’t hear about it as a rumour. It will be publicly announced.

The only people who know about it before then will be working for Intel under non‐disclosure agreements.

I think the article is writing things that aren't really true or even probable to try to hype up its cachet. The SIMD stuff at the end is probably the entire extent of the movement on Intel's part; removing the obsolete SIMD stuff like MMX or the first few SIMD generations would be fine, as even under emulation, newer CPUs will outperform the older ones that had the dedicated hardware. I could also see a high density targeted CPU design that does away with FP emulation entirely, along with more aggressively dropping other features, in a push for markets that don't use them, like high performance computing or storage computing. Desktop computing and general purpose server computing won't see anything so major.

Dropping legacy x86 stuff would be weird, because Intel already emulates all that. Modern x86 CPUs use a totally non-x86-compatible architecture for execution, and have a decoder that takes each x86 op and produces one (more or less*) micro-op that then passes through the execution stage(s). The resulting output is then put back into 'x86' so the program can find the result it expects in the way it expects. It's this type of system that provides all the 'lift' needed for branch prediction, SMT, pre-loading the cache, etc. x86-64 is treated the same way, just with a different decoder to produce the micro-ops. There is no reason to drop anything, because none of it really exists anyway.

*Most x86 instructions produce one or two micro-ops, but some instructions so commonly occur together that these 'sets' of x86 instructions only produce a single micro-op.
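
A toy example of that footnote (hedged, since exact codegen depends on the compiler and flags): the compare-and-branch pairs in a loop like this are classic macro-fusion candidates, which is why the x86 instruction count and the micro-op count stop lining up one to one.

/* The 'i < n' check and the 'v[i] != 0' test each typically compile to a
   cmp (or test) followed immediately by a conditional jump; Intel decoders
   fuse such pairs into a single micro-op. */
long count_nonzero(const int *v, long n)
{
    long hits = 0;
    for (long i = 0; i < n; i++)
        if (v[i] != 0)
            hits++;
    return hits;
}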

EoRaptor
Sep 13, 2003

by Fluffdaddy

EdEddnEddy posted:

Today at CES Intel showed off a few Laptops with Cannon Lake slated for Q4 2017 and some VR stuff. Yay? Just seems odd to just release an Uninspired Kabby Lake and already demoing another CPU thats also coming out in the same year (if at the end of it). And probably another lacking much if anything outside of power savings. Not sure what they can do for VR specific optimizations unless they are going to tap into the chips encoding capabilities directly for real time inside out positional tracking or something.

This isn't actually new behaviour from intel; it's just new branding on something they already did.

In the past, they have launched a desktop processor model, then followed up with a low power laptop variant (-U, -ULV) some months later. At the same time, a set of low power desktop versions (-T) appears, and Xeon server variants (-E) show up around then as well. These all used the same branding as the current generation, but are actually improved designs for the process node.

Now, intel has chosen to stretch out the timeline a bit and rebrand those improved designs as a separate, new CPU model. That's why we saw Kaby Lake laptop parts in 3Q 2016, before the scheduled desktop part, as that was where the best return on investment would be. Intel intended to launch the desktop chips very soon after, but ran into some unspecified manufacturing issues, and the desktop chips slid into 2017. Thus we are seeing the collision with Cannonlake. If intel hadn't chosen to put Kaby Lake out as a new brand, and instead we got ultra low power Skylake CPUs for laptops and desktops, nobody would have noticed or commented on it.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Kazinsal posted:

Ryzen MacBooks when?

Intel pays Apple a boatload of cash to use Intel CPUs. It's admittedly not a direct monetary payment, but Intel handles all the development work for Apple mainboards, including arranging manufacturing and ensuring supply in preference over other OEMs. I'm pretty sure they also help out with a bunch of the EFI stuff and driver development, though that's much less clear-cut.

In other Intel news, it seems Coffee Lake will remain on 14nm, there will be a 6-core i7-branded version of it, and Cannon Lake 10nm is going to start as a Xeon brand and work its way down to consumer, instead of the usual consumer variants first. https://arstechnica.com/gadgets/2017/02/intel-coffee-lake-14nm-release-date/

EoRaptor
Sep 13, 2003

by Fluffdaddy

ConanTheLibrarian posted:

Given they've targeted mobile first with their last few releases, presumably because of the power savings, could this mean the 10nm process doesn't have significantly less power use than 14nm?

If I was to guess, I'd say it's probably more price related than anything. The mobile market can't charge a premium for low power chips in ultralights the way it used to, and Intel needs a return on the 10nm investments it's made. New server chips, especially ones that have added logic to accelerate particular problems (encryption is a big one) still command a premium.

I'm sure we are well into diminishing returns in performance and power savings from process shrinks, both from lower percentage shrinks and from the reality that only tiny parts of the logic on the chip are actually 10nm; the rest ends up being much bigger. That's been true for a while, and most power savings have come from designing in aggressive sleep states and power gating.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Tab8715 posted:

Is Coffeelake or Iceland expected to bring anything new to the table aside from minor performance gains?

It seems like most of us with Sandy Bridge systems will be waiting for Tiger Lake.

Nope. Coffee Lake was supposed to be the next process shrink, but now it's not, so it'll just bring ????. The only thing we know about it for sure is that the Coffee Lake Xeon's are launching ahead of the desktop and mobile parts, and that Kaby Lake Xeons may never appear because of this.

I also think that intel's 'secret' internal codename poo poo has been hijacked by marketing and it's no longer a worthwhile way to talk about intel processors. :/

EoRaptor
Sep 13, 2003

by Fluffdaddy

Don Lapre posted:

Well its not straight gallium. Whatever the alloy is it definitely doesn't freeze at 29.8c otherwise the tubes would all rupture.

Most materials shrink when they freeze, not expand. Water is in the minority, even if it is so common. Gallium wouldn't rupture a container if frozen. :eng101:

EoRaptor
Sep 13, 2003

by Fluffdaddy

mcbexx posted:

Overclocking question:

I still have a 2500K, which has been running at 4.4GHz for well over two years now.
Lately I keep getting occasional Bluescreens with a sound loop (mostly when watching video content in the browser or with MP-HC) while playing and browsing at the same time.

Most of the time it is a "WHEA_UNCORRECTABLE_ERROR", most recently I also got a "CLOCK_WATCHDOG_TIMEOUT". The event manager always just shows that the system has been rebooted due to an sudden error, there are no events logged immediately prior to that.

I haven't touched the VCore voltage yet, it's still on auto.
Maybe I need to adjust it - problem is I have no clue. Is it too high or too low?
Googling suggests that the WHEA_UNCORRECTABLE_ERROR can be caused by undervolting, so should I crank VCore up a notch (0.01V)?

When under load (Prime95) at 4.4GHz Vcore is at ~1.390V according to the utility that came with the mainboard (Asrock Z77 Extreme 6).
It's idling at 1.6GHz and around 0.975V
CPU temp is at 65C/149F under load and 35C/98F when idle, so nothing out of the ordinary.

I also already checked the system files with sfc /scannow, as this was mentioned as another possible cause.

There's an overclocking thread which may have answers.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Boiled Water posted:

I guess it's a good marketing move when you need to convince your manager not to buy pleb tier chips.

It seems to have been originally conceived to better mark out chips based on features, not core count. So, a company that wanted the best AES performance in a platform accelerator, for instance, would previously have bought e7 class chips, even though an e3 with the same AES engine enabled would be better because it had a faster base and turbo clock, as the platform isn't dual socket or high memory.

It's currently a bit jacked up, because it was pasted over the existing product stack, and marketing got their hands on it. I wonder if intel can make it stick or not. Their target market is really OEM's, not customers, and OEM's have resisted such changes in the past.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Phenotype posted:

How soon will "fixed" chips hit the market? I've been building a system since Black Friday and I just got my 8700k in the mail a few days ago, still unopened. I'm debating whether or not it's worth just returning it and getting an AMD processor, or waiting a little while till fixed Intel chips are available that won't take that performance hit.

This is a flaw in the way TLB caches are currently designed. It's highly unlikely there will be any sort of fixed silicon until the next generation of processors. A decent shot at explaining it is given here: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/ If that's true, then the entire TLB would need to be physically split into a RING 0 cache and a RING 3 cache, with a hardware gate between them that can enforce access levels at all times, even during speculative execution. This is not a simple tweak of a few transistors.

EoRaptor fucked around with this message at 23:31 on Jan 3, 2018

EoRaptor
Sep 13, 2003

by Fluffdaddy

Phenotype posted:

Well in that case, is it better at this point to return it and go with an AMD processor? I looked at benchmarks for the Ryzen 1800x and they're all noticeably slower than the 8700k on most tasks, even if the 8700k takes a 25% performance hit. I plan to use it for gaming and multitasking with a number of remote sessions open, so I'm not sure the issue is going to affect me much, but still. I've been saving money and putting together pretty close to a top-of-the-line machine and it's really lovely to hear literally a few days before all the parts finish arriving.

AMD claims to not be affected, but the current patches for Linux include AMD in their mitigation/fix, so ??? to that for now. I personally wouldn't worry about it, as the highest impacts are on very specific workloads that people don't run on their desktops, most applications either see no performance impact from the fix, or one that is sub 5%. The big security issue is the ability to discover information about other VMs in a virtualized environment, which doesn't apply to desktop usage either.

Edit: Ah, reading the above blogs shows that there are actually two types of attacks, not one, which is probably why information available prior to this disclosure was so confusing. It seems intel is affected by both, and AMD only by one. The one that affects intel and AMD is one that could have an impact on desktop users, so that shouldn't influence your purchasing choices. Performance impact for mitigation should be the same for both intel and AMD cpu's as well.

EoRaptor fucked around with this message at 23:44 on Jan 3, 2018

EoRaptor
Sep 13, 2003

by Fluffdaddy

Craptacular! posted:

ASUS will patch back to Skylake. ASRock has no comment. MSI made a vague statement that said "Older chipsets may need more time to wait, as it's up to Intel to release required resources. No ETA given."

It should be noted it's not really clear if you self-built your computer whether or not your motherboard manufacturer actually needs to do anything, or if Intel will simply be releasing tools themselves. They're not eager to begin committing to putting out fires until Intel tells them whether or not they have to, but Intel is going to want pre-built partners like HP, Dell, etc to distribute the patch through their points of customer contact because a generic Intel alert won't get grandma's attention quite like "Hello CYNTHIA, we here at Dell ask if you could point your INSPIRON 6120 to our site at..."

As a pretty big laptop manufacturer, it may simply be that Asus is jumping to the call sooner than everyone else because, whether Intel handles it for self-built or not, they're going to be asked to do something.

Dell isn't going very far back at all. You can check out what they will patch (and what they won't) here: http://www.dell.com/support/meltdown-spectre

Captain Hair posted:

So I'm asking more out of curiosity than fear, but I have a bunch of friends/family that are running Xeon chips on ye olde core2duo boards (asus p5q and the like).

I thought I'd heard someone mention Xeon earlier, just wondering if they're more at risk or anything? Also I realise it's very unlikely that any of these old boards will get a bios or microcode update, however since getting these old Xeons to run in their boards requires editing the microcode of the bios to include the Xeon chips, I'm assuming *if* these old boards were to get a patch it would make all these old Xeon units unusable till I edited the new bios, correct?

If you edited the BIOS to include actual microcode, and not just the basic CPU ID support, you could edit the BIOS again with the updated microcode and flash it yourself. This depends on Intel producing a microcode update for a CPU that old and it being available publicly in a format you can incorporate into whatever BIOS you have available.

EoRaptor fucked around with this message at 13:36 on Jan 10, 2018

EoRaptor
Sep 13, 2003

by Fluffdaddy

GRINDCORE MEGGIDO posted:

How can they make a spectre proof hardware design?

Physically split the TLB cache in two, with one half used as it is today, and the other acting as a shadow copy that can only be used by the CPU while it is speculating. The shadow cannot be read or written by any normal process, and only gets pushed up to the main cache if speculation succeeds. Take advantage of transactional memory support to keep the overhead low.

EoRaptor
Sep 13, 2003

by Fluffdaddy

repiv posted:

Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? :thunk:

I think the fix will sit in the classic cost vs time vs quality triangle, and that if intel is choosing time and quality, the fix will be fine, if somewhat expensive*. As long as overall CPU performance doesn't suffer versus the previous generation, even if it doesn't improve by much, and Meltdown and Spectre are both blocked, the market will accept it.


* Expensive will probably come down to how much silicon space they end up spending on it. There is actually empty space and other 'padding' on current CPU designs, so if they can make use of that then the manufacturing cost won't change meaningfully and you only need to eat the development costs. If they need to grow the chip, then things are less clear about where the compromises will come.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Cygni posted:

Interesting sorta post mortem on the Spectre/Meltdown patches for Intel. As with the other testing, shows pretty much no impact to gaming numbers and anywhere from "unnoticeable" to "goddamnit" impacts to storage performance depending on what you are doin.

https://www.anandtech.com/show/12566/analyzing-meltdown-spectre-perf-impact-on-intel-nuc7i7bnh

I think there is a chance for the storage impact to be somewhat mitigated. It only seems to affect NVMe drives under Windows, and it might be possible to change how the NVMe driver behaves to help out. Hopefully this can be explored by MS (and Samsung, who like to write their own driver) and an improvement found.
