|
Chuu posted:I was pretty excited when I saw the announcement since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS that could be loaded up with ECC Memory. Then I saw the 8GB limit. Seriously Intel? Just imagine Paul Otellini jumping around on stage, sweating and yelling "MARKET SEGMENTATION! MARKET SEGMENTATION! MARKET SEGMENTATION!" and you've pretty well got the reason for it.
|
# ¿ Dec 13, 2012 16:34 |
|
Alereon posted:Note that Haswell-E requires DDR4 memory. I'm not too interested in Haswell-E, but it might knock prices for current high end stuff around a bit. DDR4 is going to be a tougher sell, but even if people just see it as a premium option, the 'newer technology' aspect should put some price pressure on DDR3.
|
# ¿ Mar 19, 2014 00:05 |
|
Alereon posted:Remember that Iris Pro comes with 64/128MB of L4 cache which can have a more general impact on performance. There also are Iris Pro Core i5s. I'm more interested in what market Intel is going after with a K series that has Iris Pro. A general rule is that the bigger the die, the lower the overclocking potential, and Iris Pro is a huge amount of silicon. Is it clocked separately? Can you turn it off and use the area to help with cooling? The performance potential of 128MB of fast, local L4 cache is nice, but few programs will really be able to take advantage of it, and the CPU's cache control hardware won't be optimized for it, so it may not yield as much benefit as it could.
|
# ¿ Mar 20, 2014 16:03 |
|
Henrik Zetterberg posted:This makes no sense whatsoever. If it's there just for the iGPU, then my statement is false. However, if the CPU can access it as a L4 cache, then you are relying on the cache control logic to correctly decide what data to keep in that cache, what to flush, what to precache, and to keep the cache coherent among all the threads that could access it. This is a very specialized bit of logic, and is tailored very specifically for the cache size that is on the CPU. It can certainly use a larger cache, but the performance improvement won't be as great as if that logic had been built for the larger cache pool from the start.
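As a toy illustration of the cache-size point (a generic LRU simulation I'm adding for illustration, with an invented access pattern; it has nothing to do with Intel's actual cache control logic): a cache sized just below a workload's working set can be nearly worthless, one that just covers it captures almost all the benefit, and extra capacity beyond that adds nothing.

```python
from collections import OrderedDict

def lru_hit_rate(accesses, capacity):
    """Simulate a fully associative LRU cache; return the fraction of hits."""
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)          # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict the least recently used
            cache[addr] = True
    return hits / len(accesses)

# A working set of 100 lines touched round-robin: LRU's pathological case.
accesses = list(range(100)) * 10

for cap in (50, 99, 100, 200):
    print(cap, lru_hit_rate(accesses, cap))
# 50 and 99 never hit; 100 and 200 both land at 0.9
```

The jump from zero benefit at 99 entries to 90% hits at 100 is the kind of workload-fit effect that tuned cache management logic is built around.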
|
# ¿ Mar 21, 2014 00:34 |
|
Combat Pretzel posted:I suppose it's good that I held off. Not that hardware lock elision nets big rear end performance improvements, but given various highly threaded apps and games, I take anything as soon the various apps and threading libraries support it. Yeah, this looked like a great feature to let multi-threaded applications run both more quickly and more safely, all with very little work needed outside of compiler and library support. It'll still happen, but now everybody is going to be much warier of it and adoption will be much slower. I guess I'm waiting for Broadwell Desktop in 2015? I think that's what Intel meant when they said it would be fixed in the next Broadwell CPUs.
|
# ¿ Aug 12, 2014 20:42 |
|
r0ck0 posted:Are there going to be any more CPUs made for the z97 chipset? Is the 4790k the last and the greatest for this mobo? Broadwell has, so far, very different power requirements than Haswell. Even if they keep LGA1150, you would likely need a new motherboard to accommodate the new power delivery. Skylake will absolutely need a new socket, as the switch to DDR4 is a big move.
|
# ¿ Sep 26, 2014 15:58 |
|
Malcolm XML posted:looks like they might just dump broadwell entirely. Why even bother with Skylake so close and broadwell being essentially a marginal improvement? Skylake will be a new chipset and a new memory type, and both will carry a premium (probably a pretty big one for the memory). Broadwell will socket right into existing motherboards and use existing, cheap memory. Producing both might seem foolish, but Intel has already done all the work to get Broadwell to market, so the additional cost of actually making and delivering it is very small by comparison, and there is certainly a segment of the market to serve. We still see older Pentium and Core 2 CPUs being made simply to fill a cost:value niche. I'm betting the performance of Skylake and Broadwell for most day-to-day loads, and even for most gaming loads, will be effectively identical.
|
# ¿ Oct 27, 2014 16:49 |
|
Jan posted:*Existing 9 series chipset motherboards. Yes, sorry: also only if they get a firmware update, and some just won't ever work, etc., etc. I just always assume that will be the case for this type of thing.
|
# ¿ Oct 27, 2014 17:37 |
|
HERAK posted:DDR4 introduced some more voltage regulation and reduced the power required which is important for servers but not so much for home users. The DDR4 voltage spec is pretty dated; remember that it was finalized in 2011, and we've pushed very hard on performance per watt since then. Better is the DDR4L (LPDDR4) spec that exists for laptops, which pushes voltage down to 1.05V without sacrificing performance. Desktops could probably switch to SO-DIMM formats and adopt this spec without any end-user impact, but I don't know if that is being seriously considered. You could make a traditional LPDDR4 DIMM, but I don't think anyone has actually bothered to. Expect DDR4 to stick around for a long time, though. No one has proposed a spec that solves any of DDR4's problems in a way that is affordable for consumers.
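For a rough sense of what a 1.2V-to-1.05V drop buys, here is a first-order, back-of-the-envelope sketch (dynamic switching power only, P = C·V²·f; it ignores I/O, termination, and refresh power, so real-world savings will differ):

```python
def dynamic_power_ratio(v_new, v_old):
    """At fixed frequency and capacitance, dynamic power P = C * V^2 * f,
    so the power ratio between two supply voltages is (v_new / v_old)^2."""
    return (v_new / v_old) ** 2

# DDR4's standard 1.2 V versus the 1.05 V low-voltage spec discussed above.
ratio = dynamic_power_ratio(1.05, 1.2)
print(f"{(1 - ratio) * 100:.0f}% less switching power")  # prints "23% less switching power"
```

A ~23% cut in switching power from a 0.15V drop is why the server folks care so much about these decimal-point voltage revisions.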
|
# ¿ Feb 6, 2015 02:47 |
|
Ninja Rope posted:What changes in chip technology would lead to this? It's a "14nm" design with FinFET. It should have a huge amount of die space available for IPC improvements, though I doubt any individual component contributes more than a percent or two. Add it all up, though: cache design, memory controller, integer unit design/count, pipeline design/depth, vector unit design/count. You only need a tiny bit everywhere to make a big overall difference. This has been Intel's overall strategy for a number of years: even when they claim a brand new design, it's often a reshuffling of already extant compute units, with maybe one or two sections sporting something new. I'm just hoping the K variants aren't crippled by some horrible marketing decision and get the full feature set (virtualization, transactional memory, etc.)
|
# ¿ Apr 26, 2015 03:37 |
|
Combat Pretzel posted:I thought you could retrofit lock elision into existing apps via the system's locking primitives (--edit: or threading libraries, depending on your platform)? Obviously not as effective as specifically making direct use of the relevant instructions, but it should result in some difference?

Yes, that part works through the existing locking primitives. The other interface is a new set of instructions that lets you finely control a transaction attempt and catch the fallout of its success or failure. You need to change your code to handle this new method, as well as needing compiler and library support.

LiquidRain posted:If you see desktop boards with DDR3L support. I imagine you'll only see DDR4. DDR3L is likely there for lower-cost convertible tablets or some such until DDR4 reaches price parity.

Skylake supports DDR3L and DDR4, and a motherboard can offer both via the UniDIMM standard, a modified SO-DIMM spec (same pin count, new notch location) that accepts either DDR3L or DDR4 modules. You cannot mix DDR3L and DDR4 in the same system, but you can switch at any time. It's not clear how well supported UniDIMM will be.
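As a sketch of the transaction-style control flow those instructions expose (a single-threaded toy model with invented names, not real TSX semantics): writes are buffered until an explicit commit, and an abort makes them vanish with no unlock or rollback code needed.

```python
class ToyTransaction:
    """Toy model of begin / speculative write / commit-or-abort."""

    def __init__(self, data):
        self.data = data            # committed, visible state
        self._shadow = None         # speculative buffer during a transaction

    def begin(self):
        self._shadow = dict(self.data)

    def write(self, key, value):
        self._shadow[key] = value   # lands in the buffer, not in self.data

    def commit(self, conflict=False):
        if conflict:                # e.g. another core touched our cache lines
            self._shadow = None     # abort: speculative writes just vanish
            return False
        self.data = self._shadow    # success: all writes become visible at once
        self._shadow = None
        return True

t = ToyTransaction({"balance": 100})
t.begin()
t.write("balance", 90)
t.commit(conflict=True)             # aborted attempt leaves state untouched
print(t.data["balance"])            # 100
t.begin()
t.write("balance", 90)
t.commit()                          # clean attempt commits
print(t.data["balance"])            # 90
```

Real hardware does the conflict detection itself via the cache coherence protocol; the code only has to supply a fallback path for the abort case.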
|
# ¿ Apr 27, 2015 02:59 |
|
necrobobsledder posted:Porting pthreads, for example, to support TSX and lock elision in userspace is technically viable but anyone interested in stability will be negatively impacted potentially and cause some friction and may need a little more battle testing before it can actually be considered mainline support. I don't want my production app to be a guinea pig for hardware transactional memory when I upgraded my Postgres version for a new query type, for example. Absolutely true that performance will only come with dedicated code. I think the main benefit of the hardware support will be that you can't write faulty lockless code that corrupts data, and it will also make validating results much more straightforward. For actual use cases, I think Apple has a leg up here. If I squint a bit, the threading wrapper code for Swift looks like it could be tweaked to take advantage of TSX with very little change to already-written code. This could give much better threading performance throughout the entire OS and application stack, and though I doubt it would be visible to end users as performance, it will probably show up as less heat and longer battery life. For big databases, the number of in-flight transactions possible would need to be greatly increased, and we will probably see that happen on the Xeon lineup. Getting it into Skylake is probably about developer usage, not data center usage (yet).
|
# ¿ Apr 27, 2015 17:24 |
|
Grapeshot posted:As far as I understand it, UniDIMM is supposed to be for SODIMMs only and incompatible with both standard DDR3 and DDR4 so you won't be using your old memory like that. My quiet hope was that we'd switch to the SO-DIMM format for desktop boards as well. It would simplify a lot of things, and DDR4 is as good a time as any.
|
# ¿ Apr 29, 2015 14:06 |
|
necrobobsledder posted:The sheer density of chips from high density compute server lines on DIMM boards cannot be achieved in SO-DIMM form factors, and servers are not going away - they'll go away after SO-DIMMs I would argue (no more laptops or small nettops being made, that is). You try putting 32 chips of the same size as what's on a typical DIMM now into a SO-DIMM and see how well that turns out. I was referring just to desktops. Server DIMM requirements are so far removed from desktop that there is effectively no overlap anyway. Besides, the cap Intel puts on the desktop CPU memory controller (32GB total, 8GB per DIMM) makes it worthless for any serious server usage.
|
# ¿ Apr 29, 2015 17:28 |
|
Sidesaddle Cavalry posted:I can't answer that question directly, but I can hypothesize that it wouldn't solve the issue of two production lines like blowfish mentioned. I understand there's a difference between ECC and non-ECC RAM, but memory makers would still need to assemble for two different form factors. We weren't talking only about DIMM production lines; more that you'd stop producing desktop DIMMs in favour of extra laptop SO-DIMMs, saving a bunch of design, testing, and validation. There would be some savings in the retail channel too: server DIMMs are pretty rare there already, so removing an entire product line (desktop DIMMs) would clear inventory space and reduce inventory management. You'd also make consumer motherboard design slightly easier, as the space requirements for memory slots would go down. If you think the design/testing/validation for server DIMMs shares anything with desktop DIMMs, be assured it doesn't. Beyond any ECC requirements, a server DIMM needs much, much stricter electrical tolerances to keep EM noise down, so that large banks of DIMM slots can all be populated without errors creeping in. Even though it shares a basic shape with a desktop DIMM, there really isn't any relationship between them once you begin the design process.
|
# ¿ Apr 29, 2015 20:07 |
|
PC LOAD LETTER posted:If I'm reading this right it sure looks like things haven't changed much fundamentally and the 'front' vs 'back end' metaphor still works pretty well even with a very modern x86 chip that can do uop fusion and has a trace cache. Sometimes you can get a 1:1 uop vs x86 instruction ratio but sometimes you still see multiple uops even with new instructions. Seems to be all over the place really. I don't think there is going to be a better solution than the current method: profile applications, determine what they do most, optimize that (or introduce instructions that accelerate certain actions), and see what sticks. The lag between instruction availability, compiler support, application support, and instruction universality (e.g. >80% of CPUs currently in use have it) is so huge that it's always going to be hard to predict what will actually turn out to be useful by the time it's generally usable. We typically have cycles where different areas (hardware, compiler, language) get focus for a bit, but even then it's hard to say where we currently are; we can only look back, see where we were, and try to go on from there. It's fun to watch, because there is real innovation happening all the time by some very smart people (and groups of people). The whole drive toward multithreading/multiprocessing as we ran into GHz scaling limits was really interesting, and we are still seeing the results.
|
# ¿ Jun 4, 2015 19:30 |
|
Welmu posted:Intel is dropping the stock cooler from Skylake-S processors. It was a good bit of money for something >90% of the buying market never used. I'm surprised it didn't happen years ago.
|
# ¿ Jul 2, 2015 17:36 |
|
Ak Gara posted:There doesn't seem to be a water cooling thread so I'll ask here. My 5ghz 2500k is quite loud using an H100 so I was looking into putting together a custom loop (+ SLI 680's.) Yes. Unless the radiator is actually warm/hot to the touch and its fans are running all the time, it's not the bottleneck.
|
# ¿ Jul 28, 2015 20:27 |
|
VostokProgram posted:...Whatever happened to the memristor, anyway? Turns out making them reliable over long periods is hard, especially at the feature sizes modern manufacturing methods use. XPoint is a type of memristor, though, so we are finally getting there. Don't hold your breath for logic gates built with them, though.
|
# ¿ Jul 31, 2015 14:59 |
|
Pryor on Fire posted:Well the good news is that CPU progress has slowed to such a glacial pace that you can just keep the same CPU/mobo for 5-10 years without any compelling reason to upgrade so plugging a new CPU into a socket isn't really something that happens anymore I think the next big improvements for computers will come from outside the CPU. We are already seeing SSDs as a meaningful upgrade that is more cost efficient than a new CPU, and things like XPoint (or similar) that move faster storage closer to the CPU, as well as HBM (or similar) that move faster memory closer to the CPU, will be the next big must-haves. GPU growth and integration will continue apace for a while, and the VR products coming in the next few years might give them a boost, but we are already heavily into the 'branding, not innovation' business model, so don't expect anything amazing.
|
# ¿ Aug 2, 2015 23:11 |
|
Combat Pretzel posted:Why wouldn't one want XPoint, if it's even faster solid state memory? --edit: I mean with NVMe interface. Intel hasn't shown any desire to use XPoint as a flash replacement in SSDs or phones or what have you. They are targeting server memory via specialty DIMMs that allow a huge increase in the amount of memory a server can have, by blending XPoint and regular DRAM on a single DIMM. This is managed either by the CPU itself or by an 'XPoint-aware' memory manager (or both!). On the consumer front, I'd actually expect Apple to be the first to use XPoint, in their Mac Pro series. They have the total control over hardware and operating system you need to turn such a product around quickly, and price isn't the first concern for people buying workstation-class Mac products. XPoint in a high-end laptop would also make a lot of sense, if the price is justifiable.
|
# ¿ Nov 6, 2015 20:09 |
|
Durinia posted:wat? I completely missed this. Oops.

Durinia posted:It's more that, you can stick a terabyte of memory in a server, but now for the same price you can stick 4TB in.

For workstations, this is basically the same selling point. It applies especially well to Mac Pros, which are heavily used for video and image editing, where having lots of memory helps but you generally only work with small chunks of it at a time. For laptops, the fact that XPoint requires no refresh cycle means it should be much more power efficient than DRAM. So a system with 4GB of DRAM and 4GB of XPoint should perform as if it had 8GB of memory, but with battery life equal to the 4GB model. It gets even better as you increase the amount of XPoint in the system.
|
# ¿ Nov 6, 2015 20:22 |
|
fishmech posted:Because nothing ships with Thunderbolt besides a few random Sony laptops and Apple computers. The Sony laptops did use it for external GPU stuff, but IIRC they weren't that good. Also, Intel refused to certify any device that broke out the Thunderbolt port into a PCIe slot in an external enclosure, which kiboshed graphics cards along with everything else. They changed that with Thunderbolt 3, and adopted the USB Type-C connector, so now there is a much lower barrier to entry and much more flexibility to implement devices.
|
# ¿ Jan 8, 2016 19:34 |
|
Ludicrous Gibs! posted:I've got an I5-2500 non-k that's coming up on 5 years old now. Since OC'ing isn't an option, I take it an upgrade to Skylake is probably a good idea when I build my VR rig in a month or so? Should I go for an OC-able chip this time? The biggest boost here is that two USB 3 ports are required for Oculus VR (and probably other headsets), and a new motherboard that includes a bunch of them will cover that nicely. If you can hold off making decisions around GPUs until at least April 7th, we will have some more news about next-gen NVIDIA GPUs, and probably AMD as well, which should help with any planning you are doing.
|
# ¿ Mar 31, 2016 00:37 |
|
VulgarandStupid posted:The bootleg market for VCDs was huge, I went to China 13 years ago and in some stores they have the legitimate DVDs/VCDs on the normal displays. Then in cabinets underneath, they had all the bootlegs, which were obviously much cheaper. I don't think its very hush hush over there. The DVD logo spec actually specifies that (S)VCDs must be playable. This wasn't well tested, and some models wouldn't play them at all, but it ended up not mattering because the DVD encryption was broken so quickly.
|
# ¿ May 11, 2016 20:10 |
|
In a good sign, Kaby Lake CPUs are already available for hardware development: http://arstechnica.com/gadgets/2016/05/intels-post-tick-tock-kaby-lake-cpus-definitely-coming-later-this-year/ Intel is giving every indication that it is on track for a 2H16 release, though not all features are completely finalized/announced.
|
# ¿ Jun 1, 2016 02:21 |
|
Platystemon posted:If it were true, a lot of people would need to know about it well in advance. I think the article is hyping things that aren't really true or even probable. The SIMD stuff at the end is probably the entire extent of the movement on Intel's part: removing obsolete SIMD extensions like MMX or the first few SIMD generations would be fine, as even under emulation, newer CPUs will outperform the older ones that had the dedicated hardware. I could also see a high-density-targeted CPU design that does away with FP emulation entirely, along with more aggressively dropping other features, in a push for markets that don't use them, like high performance computing or storage computing. Desktop computing and general purpose server computing won't see anything so major. Dropping legacy x86 stuff would be weird, because Intel already emulates all of it. Modern x86 CPUs use a totally non-x86-compatible architecture for execution, with a decoder that takes each x86 op and produces one (more or less*) micro-op that then passes through the execution stage(s). The resulting output is then put back into 'x86' so the program can find the result it expects in the way it expects. It's this type of system that provides all the 'lift' needed for branch prediction, SMT, pre-loading the cache, etc. x86-64 is treated the same way, just with a different decoder to produce the micro-ops. There is no reason to drop anything, because none of it really exists anyway. *Most x86 instructions produce one or two micro-ops, but some x86 instructions occur together so commonly that these 'sets' of instructions produce only a single micro-op.
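A toy sketch of that decode step (the mnemonics, micro-op names, and fusion rule are invented for illustration; real decoders and their fusion cases are far more involved): each x86 instruction maps to one or more micro-ops, and one common pair fuses into a single micro-op.

```python
# Invented decode table: x86 mnemonic -> internal micro-ops.
DECODE = {
    "mov":  ["load_or_alu"],
    "add":  ["alu"],
    "push": ["store", "alu"],   # a push is a store plus a stack-pointer update
    "cmp":  ["alu_flags"],
    "jne":  ["branch"],
}

def decode(instructions):
    uops = []
    i = 0
    while i < len(instructions):
        # Macro-fusion: a compare immediately followed by a conditional
        # branch becomes one fused compare-and-branch micro-op.
        if (instructions[i] == "cmp" and i + 1 < len(instructions)
                and instructions[i + 1] == "jne"):
            uops.append("cmp_jne_fused")
            i += 2
        else:
            uops.extend(DECODE[instructions[i]])
            i += 1
    return uops

print(decode(["mov", "cmp", "jne", "push"]))
# ['load_or_alu', 'cmp_jne_fused', 'store', 'alu']
```

The back end only ever sees the right-hand column, which is why "dropping" legacy instructions would mostly just mean deleting decode table entries, not execution hardware.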
|
# ¿ Dec 27, 2016 06:59 |
|
EdEddnEddy posted:Today at CES Intel showed off a few Laptops with Cannon Lake slated for Q4 2017 and some VR stuff. Yay? Just seems odd to just release an Uninspired Kabby Lake and already demoing another CPU thats also coming out in the same year (if at the end of it). And probably another lacking much if anything outside of power savings. Not sure what they can do for VR specific optimizations unless they are going to tap into the chips encoding capabilities directly for real time inside out positional tracking or something. This isn't actually new behaviour from Intel; it's just new branding on something they already did. In the past, they have launched a desktop processor model, then followed up with low-power laptop variants (-U, -ULV) some months later. At about the same time, a set of low-power desktop versions (-T) appears, and Xeon server variants (-E) show up as well. These all used the same branding as the current generation, but are actually designs improved for the process node. Now Intel has chosen to stretch out the timeline a bit and rebrand those improved designs as a separate, new CPU model. That's why we saw Kaby Lake laptop parts in 3Q 2016, before the scheduled desktop parts: that was where the best return on investment would be. Intel intended to launch the desktop chips very soon after, but ran into some unspecified manufacturing issues, and the desktop chips slid into 2017. Thus we are seeing the collision with Cannon Lake. If Intel hadn't chosen to put Kaby Lake out as a new brand, and instead we had gotten ultra-low-power Skylake CPUs for laptops and desktops, nobody would have noticed or commented on it.
|
# ¿ Jan 5, 2017 21:21 |
|
Kazinsal posted:Ryzen MacBooks when? Intel pays Apple a boatload of cash to use Intel CPUs. It's admittedly not a direct monetary payment, but Intel handles all the development work for Apple mainboards, including arranging manufacturing and ensuring supply in preference over other OEMs. I'm pretty sure they also help out with a bunch of the EFI stuff and driver development, though that's much less clear-cut. In other Intel news, it seems Coffee Lake will remain on 14nm, there will be a 6-core i7-branded version of it, and 10nm Cannon Lake is going to start as a Xeon brand and work its way down to consumer parts, instead of the usual consumer variants first. https://arstechnica.com/gadgets/2017/02/intel-coffee-lake-14nm-release-date/
|
# ¿ Feb 13, 2017 16:10 |
|
ConanTheLibrarian posted:Given they've targeted mobile first with their last few releases, presumably because of the power savings, could this mean the 10nm process doesn't have significantly less power use than 14nm? If I was to guess, I'd say it's probably more price related than anything. The mobile market can't charge a premium for low power chips in ultralights the way it used to, and Intel needs a return on the 10nm investments it's made. New server chips, especially ones that have added logic to accelerate particular problems (encryption is a big one) still command a premium. I'm sure we are well into diminishing returns in performance and power savings from process shrinks, both from lower percentage shrinks and from the reality that only tiny parts of the logic on the chip are 10nm, the rest ends up being much bigger. That's been true for a while, and most power savings have come from designing in aggressive sleep states and power gating.
|
# ¿ Feb 13, 2017 21:51 |
|
Tab8715 posted:Is Coffeelake or Iceland expected to bring anything new to the table aside from minor performance gains? Nope. Coffee Lake was supposed to be the next process shrink, but now it's not, so it'll just bring ????. The only thing we know about it for sure is that the Coffee Lake Xeons are launching ahead of the desktop and mobile parts, and that Kaby Lake Xeons may never appear because of this. I also think that Intel's 'secret' internal codename poo poo has been hijacked by marketing and it's no longer a worthwhile way to talk about Intel processors. :/
|
# ¿ Feb 28, 2017 03:27 |
|
Don Lapre posted:Well its not straight gallium. Whatever the alloy is it definitely doesn't freeze at 29.8c otherwise the tubes would all rupture. Most materials shrink when they freeze, not expand. Water is in the minority, even if it is so common. Gallium wouldn't rupture a container if frozen.
|
# ¿ Feb 28, 2017 20:03 |
|
mcbexx posted:Overclocking question: There's an overclocking thread which may have answers.
|
# ¿ Mar 11, 2017 02:47 |
|
Boiled Water posted:I guess it's a good marketing move when you need to convince your manager not to buy pleb tier chips. It seems to have been originally conceived to better mark out chips based on features, not core count. A company that wanted the best AES performance in a platform accelerator, for instance, would previously have bought E7-class chips, even though an E3 with the same AES engine enabled would be better, because it has faster base and turbo clocks and the platform isn't dual-socket or high-memory. It's currently a bit jacked up, because it was pasted over the existing product stack and marketing got their hands on it. I wonder if Intel can make it stick or not. Their target market is really OEMs, not end customers, and OEMs have resisted such changes in the past.
|
# ¿ May 5, 2017 04:16 |
|
Phenotype posted:How soon will "fixed" chips hit the market? I've been building a system since Black Friday and I just got my 8700k in the mail a few days ago, still unopened. I'm debating whether or not it's worth just returning it and getting an AMD processor, or waiting a little while till fixed Intel chips are available that won't take that performance hit. This is a flaw in the way TLB caches are currently designed. It's highly unlikely there will be any sort of fixed silicon until the next generation of processors. A decent shot at explaining it is given here: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/ If that's true, then the entire TLB would need to be physically split into a Ring 0 cache and a Ring 3 cache, with a hardware gate between them that can enforce access levels at all times, even during speculative execution. This is not a simple tweak of a few transistors. EoRaptor fucked around with this message at 23:31 on Jan 3, 2018 |
# ¿ Jan 3, 2018 23:22 |
|
Phenotype posted:Well in that case, is it better at this point to return it and go with an AMD processor? I looked at benchmarks for the Ryzen 1800x and they're all noticeably slower than the 8700k on most tasks, even if the 8700k takes a 25% performance hit. I plan to use it for gaming and multitasking with a number of remote sessions open, so I'm not sure the issue is going to affect me much, but still. I've been saving money and putting together pretty close to a top-of-the-line machine and it's really lovely to hear literally a few days before all the parts finish arriving. AMD claims not to be affected, but the current patches for Linux include AMD in their mitigation/fix, so ??? to that for now. I personally wouldn't worry about it: the highest impacts are on very specific workloads that people don't run on their desktops, and most applications either see no performance impact from the fix or one that is below 5%. The big security issue is the ability to discover information about other VMs in a virtualized environment, which doesn't apply to desktop usage either. Edit: Ah, reading the above blogs shows that there are actually two types of attacks, not one, which is probably why the information available prior to this disclosure was so confusing. It seems Intel is affected by both, and AMD only by one. The one that affects both Intel and AMD could have an impact on desktop users, so it shouldn't influence your purchasing choice either way. The performance impact of the mitigation should be the same for both Intel and AMD CPUs as well. EoRaptor fucked around with this message at 23:44 on Jan 3, 2018 |
# ¿ Jan 3, 2018 23:34 |
|
Craptacular! posted:ASUS will patch back to Skylake. ASRock has no comment. MSI made a vague statement that said "Older chipsets may need more time to wait, as it's up to Intel to release required resources. No ETA given."

Dell isn't going very far back at all. You can check out what they will patch (and what they won't) here: http://www.dell.com/support/meltdown-spectre

Captain Hair posted:So I'm asking more out of curiosity than fear, but I have a bunch of friends/family that are running Xeon chips on ye olde core2duo boards (asus p5q and the like).

If you edited the BIOS to include actual microcode, and not just the basic CPU ID support, you could edit the BIOS again with the updated microcode and flash it yourself. This depends on Intel producing a microcode update for a CPU that old, and on it being publicly available in a format you can incorporate into whatever BIOS you have. EoRaptor fucked around with this message at 13:36 on Jan 10, 2018 |
# ¿ Jan 10, 2018 13:31 |
|
GRINDCORE MEGGIDO posted:How can they make a spectre proof hardware design? Physically split the TLB cache in two: one used as it is today, and a shadow of it that can only be used by the CPU while speculating. The shadow cannot be read or written by any normal process, and only gets pushed up to the main cache if the speculation succeeds. Take advantage of transactional memory support to keep the overhead low.
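A toy sketch of that shadow-structure idea (my illustration of the concept, not Intel's actual design): speculative fills go to a side structure that normal lookups never see, get promoted only when the speculation retires, and are discarded on a squash, so a mispredicted path leaves no footprint an attacker can time.

```python
class ShadowedTLB:
    """Toy model: architectural translations vs. speculative shadow fills."""

    def __init__(self):
        self.main = {}      # architecturally visible translations
        self.shadow = {}    # fills made while speculating; invisible outside

    def fill(self, vpage, ppage, speculative):
        (self.shadow if speculative else self.main)[vpage] = ppage

    def retire(self):       # speculation confirmed: promote shadow entries
        self.main.update(self.shadow)
        self.shadow.clear()

    def squash(self):       # misprediction: speculative fills just vanish
        self.shadow.clear()

    def lookup(self, vpage):  # what any normal (or attacking) access sees
        return self.main.get(vpage)

tlb = ShadowedTLB()
tlb.fill(0x1000, 0x8000, speculative=True)
print(tlb.lookup(0x1000))   # None: speculative fill is invisible
tlb.squash()
print(tlb.lookup(0x1000))   # None: a squashed fill never becomes visible
tlb.fill(0x1000, 0x8000, speculative=True)
tlb.retire()
print(tlb.lookup(0x1000))   # 32768 (0x8000): promoted on retire
```

The Spectre leak exists precisely because real caches behave like `main` and `shadow` merged into one: a squashed path still leaves timing-visible state behind.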
|
# ¿ Jan 10, 2018 21:14 |
|
repiv posted:Oh that's surprising. Is physically fixing the bugs within 12-18 months of disclosure at all feasible, or were Intel quietly working on a fix before Google independently found the flaws? I think the fix sits in the classic cost vs. time vs. quality triangle: if Intel is choosing time and quality, the fix will be fine, if somewhat expensive*. As long as overall CPU performance doesn't suffer versus the previous generation, even if it doesn't improve by much, and Meltdown and Spectre are both blocked, the market will accept it. * Expensive will probably come down to how much silicon space they end up spending on it. There is actually empty space and other 'padding' on current CPU designs, so if they can make use of that, the manufacturing cost won't change meaningfully and they only need to eat the development costs. If they need to grow the chip, then it is less clear where the compromises will come.
|
# ¿ Jan 26, 2018 22:49 |
|
Cygni posted:Interesting sorta post mortem on the Spectre/Meltdown patches for Intel. As with the other testing, shows pretty much no impact to gaming numbers and anywhere from "unnoticeable" to "goddamnit" impacts to storage performance depending on what you are doin. I think there is a chance for the storage impact to be somewhat mitigated. It only seems to affect NVMe drives under Windows, and it might be possible to change how the NVMe driver behaves to help out. Hopefully this can be explored by MS (and Samsung, who like to write their own driver) and an improvement found.
|
# ¿ Mar 24, 2018 01:36 |