BlankSystemDaemon
Mar 13, 2009



What I'm a little bit excited about is getting a quad-core non-SMT 7W CPU that has SHA512 as part of its supported crypto-primitives, plus the higher bandwidth on the new platforms.
That lets me dream about a passively cooled router/file-server/HTPC which has ZFS with sha512t256 checksumming both on-disk and in-memory.


BlankSystemDaemon
Mar 13, 2009



BangersInMyKnickers posted:

Why the hell are you trying to use sha512 for checksumming??
While SamDabbers is absolutely correct that it's good if you're using dedup on ZFS, I'm not doing that (because dedup on ZFS as it's currently implemented is loving nuts to use for anything except a few very specific use-cases).
SHA512 was always likely to be one of the checksums that would get accelerated in hardware, whereas fletcher2 and fletcher4 are unlikely to be accelerated, and crc32c isn't supported by ZFS (despite having hardware acceleration via SSE4.2). Skein, the only other option, is unlikely to ever be accelerated in hardware.
SHA512 is already supported by QuickAssist, and the Chelsio crypto accelerators driven by FreeBSD's ccr(4) driver support it too - on top of that, beyond x86 there's ARM, POWER9/10, and RISC-V, which have added or are planning to add instructions for it.

BlankSystemDaemon
Mar 13, 2009



BangersInMyKnickers posted:

I mean, yeah, if you've got QA cards to offload to then by all means go nuts but there just aren't enough permutations on a 128kb block to make a hash collision probable over sha256 compared to some other corruption event.

https://blogs.oracle.com/bonwick/zfs-deduplication-v2
The reason why SHA512 was chosen over SHA256 for ZFS (which, as you probably know, has variable records between 512 bytes and 256 kilobytes) is that even in software, for any message beyond 448 bytes, SHA512 on a 64-bit ISA is up to 60% faster than SHA256 (under ideal conditions; in the real world it's closer to ~50% faster), because it only has 25% more rounds while operating on 64-bit quadwords per instruction, whereas SHA256 only works on 32-bit doublewords.
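If you want to sanity-check that on your own machine, here's a minimal Python sketch using hashlib - the exact ratio depends on your CPU and on the OpenSSL build backing hashlib, so treat the numbers as illustrative:

code:
# Rough benchmark of SHA256 vs SHA512 throughput via Python's hashlib.
# On most 64-bit machines SHA512 comes out noticeably ahead on large buffers.
import hashlib
import time

def throughput_mb_s(algo: str, buf: bytes, rounds: int = 200) -> float:
    """Return MB/s for hashing `buf` `rounds` times with `algo`."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algo, buf).digest()
    elapsed = time.perf_counter() - start
    return len(buf) * rounds / elapsed / 1e6

record = b"\xaa" * (128 * 1024)  # a record-sized message
for algo in ("sha256", "sha512"):
    print(f"{algo}: {throughput_mb_s(algo, record):.0f} MB/s")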

craig588 posted:

I CRCed my console dumps and tested a few of them with MD5 after someone said CRC isn't valid anymore. If they passed CRCing they passed MD5 checks. I think if you're not worried about an actual adversary you can verify with pretty much anything.
CRC isn't valid as a cryptographically safe checksum (ie. it can't stand up to a dedicated attacker), but it still works to tell you if a random bit gets flipped on a disk.
If combined with integrity checking and optionally encryption (like GEOM/GELI in FreeBSD can do), it's perfectly reasonable to implement it - which is what the FFS/UFS creator is currently working on for FreeBSD.
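To illustrate craig588's point, here's a quick Python sketch comparing a dump against a known-good copy with both CRC32 and MD5 - either one will catch random bit-flips, and neither should be trusted against a deliberate attacker (the file names are just examples):

code:
# Compare a dump against a known-good copy with both CRC32 and MD5.
# Either catches random bit-flips; neither is safe against a dedicated attacker.
import hashlib
import zlib

def crc32_of(path: str) -> int:
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def md5_of(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print("CRC32 matches:", crc32_of("dump.bin") == crc32_of("dump_copy.bin"))
print("MD5 matches:  ", md5_of("dump.bin") == md5_of("dump_copy.bin"))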

BlankSystemDaemon
Mar 13, 2009



Asus might be the only company that still makes motherboards which aren't garishly LED'ed, so there must be a lot of people buying the ones that could make a rainbow feel envious.

BlankSystemDaemon
Mar 13, 2009



Big Mackson posted:

I am planning to go from 4770k to 9600k and i hope it will be moar performance. + going from DDR3 to DDR4.
Back in the Pentium days before SMT, SMP and everything else (including speculative attacks), there was a general rule to aim for a 700MHz boost in raw clock frequency regardless of generational IPC improvements in order to feel the added performance on a day-to-day basis.
So far as I know, this is still true for the purposes of vidya gayms and most programs, since most of them still perform most of their workload on a single high-clocked core - and since the i5-9600K boosts to 4.6GHz for a period of time, it sounds like an excellent upgrade.

The big difference with respect to memory isn't that it's DDR4 over DDR3, it's the memory bandwidth you end up getting - which at the top end is almost doubled.

Plus, the two extra cores can always improve multitasking with all the bullshit and third party programs that seem to run in the background on modern Windows.

BlankSystemDaemon
Mar 13, 2009



Lambert posted:

FLAC is great, it's for people that want to experience music at its highest quality.
Archival reasons alone are enough of a justification for using lossless formats when disk-space has continued to grow at an astonishing rate (although it's a pity that bandwidth hasn't kept up).
The earliest part of my digital music collection, which I've been working on for almost 25 years, has been converted from one format to another enough times that any lossy codec would statistically be likely to start exhibiting loss of audio quality. What makes FLAC great is what makes any well-compressed lossless format great: when a new format comes along, you just have to spend some electricity re-compressing, which gets faster as long as CPUs keep getting faster/more SMP/SMT.
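The re-compression pass is trivial to script, too - here's a minimal Python sketch assuming the flac command-line tool; "newenc" is a placeholder for whatever the hypothetical new format's encoder would be called, and the music path is made up:

code:
# Re-compression pass: decode every FLAC to WAV, then hand the WAV to the
# new format's encoder. "newenc" is a placeholder for that hypothetical
# encoder, and /srv/music is an example path; the flac -d/-f/-o flags are real.
import pathlib
import subprocess

MUSIC = pathlib.Path("/srv/music")

for flac_file in MUSIC.rglob("*.flac"):
    wav_file = flac_file.with_suffix(".wav")
    # Lossless decode: the PCM coming out is bit-identical to what went in.
    subprocess.run(["flac", "-d", "-f", "-o", str(wav_file), str(flac_file)], check=True)
    # Re-encode with the (hypothetical) new format's encoder.
    subprocess.run(["newenc", str(wav_file)], check=True)
    wav_file.unlink()  # drop the intermediate WAV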

Riflen posted:

I put FLACs on sdcards. Check mate. :smug:
That's fine, as long as you use ZFS. You just need a minimum of 64MB per SD card. :smugbert:

BlankSystemDaemon
Mar 13, 2009



SwissArmyDruid posted:

2011-ish is the year I switched off spinning rust and onto an SSD and never looked back. I was one of the lucky motherfuckers that got an OCZ Vertex 3s that did NOT exhibit any of the controller problems that others were having.
I was one of the people who bought the Intel X25-M, which didn't have the SandForce controller that plagued basically every other ODM on the market.

BlankSystemDaemon
Mar 13, 2009



How all ya'all feel about not using RAID0 is how I feel about using anything other than ZFS. :v:

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.
I believe they're called CONCAT arrays.

Paul MaudDib posted:

I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten.

It seems like if it's supported, spanned should be preferable in most cases. You can't boot from a spanned volume though, while you could boot from hardware RAID.
A proper implementation of CONCAT will only lose the data on the drive that fails - as opposed to RAID0, where a single failure not only takes the entire array with it, but the MTBF of the array also drops with every disk you add (an N-disk stripe has roughly 1/N the MTBF of a single disk).
Also, gconcat(8) aka GEOM CONCAT in FreeBSD can be booted from just fine, as long as you place the firmware-compatible boot-block on the firmware's first disk (ie. what the BIOS calls C: and what UEFI calls disk0) - so it's a question of whether Windows 10 still uses NTLDR62 or has been updated to support Storage Spaces/ReFS.
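For the MTBF point, the back-of-the-envelope math is simple enough to put in a few lines of Python (assuming independent disk failures and a made-up datasheet MTBF figure):

code:
# Back-of-the-envelope MTBF for an N-disk stripe, assuming independent
# failures: losing any one disk loses the array, so the array MTBF is
# roughly the single-disk MTBF divided by N.
SINGLE_DISK_MTBF_HOURS = 1_000_000  # example figure, not from a real datasheet

for disks in (1, 2, 4, 8):
    array_mtbf = SINGLE_DISK_MTBF_HOURS / disks
    print(f"{disks} disk(s) striped: ~{array_mtbf:,.0f} hours MTBF")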

BlankSystemDaemon fucked around with this message at 12:07 on Sep 1, 2019

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

Right, you only lose what was on that disk... but files can be scattered across multiple disks at a block level, and likely will be for performance reasons, so you will lose half of every file on average.

Sadly storage spaces cannot be booted like ZFS spans, or so I’ve read.
ZFS can't do SPAN arrays; it stripes data across vdevs, so any pool with more than one vdev automatically becomes RAIDN0, where N is either blank, 1, 5, 6, or 7.

BlankSystemDaemon
Mar 13, 2009



Anand are pretty good about including POST and boot times on the various motherboards they review, so if that's something anyone cares about, they may wanna pay attention to those articles, as there can be huge variances even for boards from the same manufacturer (though the variances aren't always explained, which I feel like Real Anand Of Old articles would've done).

I don't turn my computer off, because it's configured to go into a pretty energy-preserving state when it's idle - the whole CPU package uses 4W, the monitor turns off, and a power meter on the mains shows only 11W when idle.

BlankSystemDaemon
Mar 13, 2009



Happy_Misanthrope posted:

I have no idea that DDR4's auto settings for memory timing caused such a delay on boot. It's not a huge issue for me but just bugged me that bios POST was the slowest factor in my system boot - is there a utility like Ryzen Dram Calculator for figuring out timings for just standard Intel? Or is one even needed? I have standard 2666 DDR4 for an i5 9400 system, where would I go to get the specific timing settings? Cpu-Z only seems to cover a fraction of the fields my bios has.
All hardware nowadays has firmware which is responsible for figuring out timings, not just memory. All of that auto-configuration takes time.
Also, it's probably UEFI, not BIOS, which makes it significantly worse, as the UEFI boot process is horrendously long and complicated.

Isn't AMD's equivalent to XMP called AMP, or something along those lines? I'm pretty sure they have an equivalent system, so you might wanna see if that's an option.

EDIT: Apparently it can be called X-AMP, DOCP, EOCP, or some other acronym depending on whether you have MSI, Asus, Gigabyte or some other manufacturer?

BlankSystemDaemon fucked around with this message at 10:06 on Sep 4, 2019

BlankSystemDaemon
Mar 13, 2009



eames posted:

What’s the spectre situation with these old platforms and noname boards? I assume there’s no performance impact because you never get a BIOS update anyway. :confuoot:
The firmware updates sometimes supplied by vendors for these speculative execution attacks all have one thing in common: they just load microcode as part of the firmware startup sequence. Windows, Linux, and FreeBSD (as well as the other BSDs) can apply these microcode updates at runtime too, and it can easily be integrated into the boot-up sequence.
Also, FreeBSD and Linux at least have software mitigations for some of these speculative execution attacks.

Once the vBSDCon presentations are done (vBSDCon started today, and will continue tomorrow) and the videos have been processed, there will be a video by Colin Percival (FreeBSD security officer emeritus and haver of tarsnap) that goes into the history of these speculative attacks, including how he found some back in 2005.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

AMD does not use a DDIO-like structure, so no. That said, even the researchers note that this is not a particularly probable attack to see in the wild anytime soon, so a lot of users are probably ok with leaving things as-is for now. But yeah, add it to the list of poo poo Intel needs to fix in future chips--which is gonna be hard without taking a solid performance hit on some workloads.

It's gonna be real interesting to see how the chip security world reacts to AMD actually being a player again, though: for years they've basically ignored them because they had such a small market share. But now that they're good, maybe AMD will get some security love and we'll see what sorts of interesting flaws pop up there, too.
While this is certainly a big problem for HPC clusters and :yaybutt: and the like, since it relies on RDMA, which doesn't transcend broadcast domains, it doesn't seem like that big of a problem for anyone else.
It almost annoys me more that the people who did the awesome work of discovering it went ahead and overloaded an existing term, making it more difficult to casually talk about, since now we have to go out of our way to make it clear from context that "So how about that netcat thing" is about the attack rather than the tool.

BlankSystemDaemon
Mar 13, 2009



K8.0 posted:

Don't even talk to me if your CPU is only running at 3.6 Roentgen.
Psch, I've been exposed to 60 gray over a 6-week period and whatever the amount is for ~15 chest and abdominal CT-scans.

BlankSystemDaemon
Mar 13, 2009



omeg posted:

I think that dose would be very fatal.
:iiam: that I'm here then.

Alternatively it's because all of that radiation has been spread over more than 3 years so far.
IMRT was 2 gray of radiation in a very small and well-defined area (located by a titanium clip as well as MR and CT scans), every weekday during a 6-week period in which I also received chemotherapy. After that is when the regular scans started, of which all but the first of the abdominal and chest CT-scans have been 15-minute scans with contrast, to minimize exposure.

Consider that Igor Fedorovich Kostin, the reporter who filmed down into the Chernobyl reactor through the open door of a helicopter flying over it, died at 78 a few years ago (in a car crash, as I recall). So while there's a definite correlation, it's more complex than radiation == dead.

BlankSystemDaemon fucked around with this message at 19:27 on Sep 22, 2019

BlankSystemDaemon
Mar 13, 2009



omeg posted:

I might have mixed greys with sieverts... Radiation units are weird.
Gray is the absorbed dose and Sievert is the equivalent dose, as far as I remember (from talking with the particle physicist that's attached to that oncological department). Sieverts account for what type of radiation it is with a simple multiplication factor; Alpha gets a 20x multiplier, Beta and Gamma get a 1x multiplier. Alpha and Beta are stopped by a simple layer of clothing, whereas Gamma can go right through you.

But you are right, 60 Gray absorbed dose in the entire body, of any kind of radiation, would leave you dead within the hour.
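The conversion is just a multiplication, so as a sketch (the weighting factors are the ICRP ones as I remember them - alpha 20x, beta/gamma/x-ray 1x - so double-check before relying on it):

code:
# Equivalent dose (sieverts) from absorbed dose (grays), using the
# radiation weighting factors mentioned above: alpha ~20x, beta/gamma/x-ray 1x.
WEIGHTING = {"alpha": 20.0, "beta": 1.0, "gamma": 1.0, "xray": 1.0}

def equivalent_dose_sv(absorbed_gy: float, radiation: str) -> float:
    return absorbed_gy * WEIGHTING[radiation]

print(equivalent_dose_sv(2.0, "xray"))   # a 2 Gy photon fraction -> 2 Sv
print(equivalent_dose_sv(2.0, "alpha"))  # the same dose as alpha -> 40 Sv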

BlankSystemDaemon
Mar 13, 2009



They're HEDT chips, so because of market segmentation of course ECC won't be available.

BlankSystemDaemon
Mar 13, 2009



SamDabbers posted:

I'm running a pair of 16GB ECC UDIMMs (Samsung M391A2K43BB1-CRC) in an ASRock X370 Taichi with an R7 1700. ECC is both detected and enabled. I have not seen a single bit error reported yet though.

code:
$ dmesg | grep -i edac
[    0.274126] EDAC MC: Ver: 3.0.0
[   14.105194] EDAC amd64: Node 0: DRAM ECC enabled.
[   14.105196] EDAC amd64: F17h detected (node 0).
[   14.105239] EDAC MC: UMC0 chip selects:
[   14.105241] EDAC amd64: MC: 0:     0MB 1:     0MB
[   14.105242] EDAC amd64: MC: 2:  8191MB 3:  8191MB
[   14.105243] EDAC amd64: MC: 4:     0MB 5:     0MB
[   14.105244] EDAC amd64: MC: 6:     0MB 7:     0MB
[   14.105246] EDAC MC: UMC1 chip selects:
[   14.105247] EDAC amd64: MC: 0:     0MB 1:     0MB
[   14.105248] EDAC amd64: MC: 2:  8191MB 3:  8191MB
[   14.105248] EDAC amd64: MC: 4:     0MB 5:     0MB
[   14.105249] EDAC amd64: MC: 6:     0MB 7:     0MB
[   14.105250] EDAC amd64: using x8 syndromes.
[   14.105250] EDAC amd64: MCT channel count: 2
[   14.105356] EDAC MC0: Giving out device to module amd64_edac controller F17h: DEV 0000:00:18.3 (INTERRUPT)
[   14.105370] EDAC PCI0: Giving out device to module amd64_edac controller EDAC PCI controller: DEV 0000:00:18.0 (POLLED)
[   14.105371] AMD64 EDAC driver v3.5.0
$ sudo dmidecode -t memory
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 3.0.0 present.

Handle 0x000E, DMI type 16, 23 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: Multi-bit ECC
	Maximum Capacity: 256 GB
	Error Information Handle: 0x000D
	Number Of Devices: 4
Sure, it's supported - but ECC support generally breaks down a couple of ways:
1) ECC is supported in that the system lets you POST with it
2) ECC is supported and single bit errors get corrected silently (which can be bad because what happens if a memory DIMM starts failing and producing a lot of errors?)
3) ECC is supported and single-bit errors get corrected and reported to the OS via a non-maskable interrupt (this is the most preferred option, as the OS can then decide whether it should panic or not)
4) ECC is supported and the CPU is reset to prevent corruption (this generally happens in systems which are built with multiple levels of redundancy)
Add a couple more options for when using lock-stepped memory that can correct two-bit errors, plus RAIM/memory mirroring/memory sparing, which are irrelevant here.

So which is it? Unless you know beforehand, or know an engineer at Asrock, it can be an absolute pain in the arse to find out.
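On Linux, one way to find out whether corrected errors are actually being reported to the OS (i.e. whether you're in category 3) is to keep an eye on the EDAC counters in sysfs - a minimal sketch, assuming the usual EDAC sysfs layout:

code:
# Peek at the Linux EDAC error counters, which only tick up if the platform
# actually reports ECC events to the OS (category 3 above).
# Assumed sysfs layout: /sys/devices/system/edac/mc/mc*/{ce_count,ue_count}.
import glob
import pathlib

for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
    mc_path = pathlib.Path(mc)
    ce = (mc_path / "ce_count").read_text().strip()  # corrected errors
    ue = (mc_path / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc_path.name}: corrected={ce} uncorrected={ue}")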

BlankSystemDaemon
Mar 13, 2009



That's good. Now we can make a list of motherboards which we know work how they should when it comes to ECC, and we can start by adding the ASRock X370 Taichi.
Not that I can afford one for a new system, but for my own gratification I'd like to know whether the 8 SATA ports are connected to the southbridge, and likewise the ASMedia controller that adds two more SATA ports.

BlankSystemDaemon
Mar 13, 2009



Malcolm XML posted:

Too bad they discontinued the x299/itx

I now want 18cores in mini itx
That's gonna be a big rear end socket on a Mini-ITX board.

BlankSystemDaemon
Mar 13, 2009



Isn't a niche market like HEDT also more likely to go with an E-ATX motherboard to get four PCIe slots for SLI or something equally silly?

BlankSystemDaemon
Mar 13, 2009



eames posted:

Ha this guy says Moore's Law is Not Dead
ha that guy is Jim Keller :stare:

Stumbled across this recent presentation with a bunch of interesting statements. The sunny cove slides before the one I linked suggests they're going really wide.
I watched that presentation a little while ago (and thought I linked it in this thread, but apparently not). I do wonder how wide they're planning to go - it seems like they're suddenly expecting to get more than 1 IPC on general workloads?

BlankSystemDaemon
Mar 13, 2009



gradenko_2000 posted:

so HFT is just but coin mining but real?
It's basically buttcoin mining, in that the money doesn't exist before they create it by having automated systems trade as fast as possible (much faster than humanly possible) - which is why it's been mandated that every server in the datacenters for HFT has to have the same length of fiberoptic cable.

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

Absolutely nonsense unfortunately because randomization of latency is what will screw over HFT traders, not consistent latency. There was an exchange started a while ago that was designed expressly for quants that didn't work work HFT approaches because they added long loops and some dropped trades everywhere and you were not able to buy better QoS. Just add captchas randomly and a solid trading strategy works just fine in many respects for long and even option centric positions. Naked shorts are illegal and all but shorting and triggered trading setups like stop loss positions exacerbate sell-off cycles but are legal and are oftentimes done by retail traders and industrial scale alike.
Tom Scott had a video where he visited one of the datacenters where it definitely happens and the video is featured in this article, which also has some more details on it.

EDIT: That work they did in the datacenter eventually became a whitepaper by the SEC and there's been some talk of making it mandated, which is what I was wrongly remembering as already having been mandated.
EDIT2: Oh, I just realized, the paper is linked in the article too. orz

BlankSystemDaemon fucked around with this message at 16:33 on Oct 29, 2019

BlankSystemDaemon
Mar 13, 2009



Shipon posted:

Like I said, it was drummed up scaremongering from an infosec consultancy industry that tries to boost its public profile. Anyone who needed to mitigate these problems had ways to get around it and the performance penalty was absolutely not worth it for everyone who wasn't in that segment.
That's an especially hot take, since the only reason more people haven't had their information/passwords/et cetera ad nauseam stolen via Javascript cache attacks is that both Mozilla and Google lowered the timer resolution accessible to Javascript in Firefox and Chrom(e|ium).
There are still plenty of systems that have not received microcode updates in their firmware because the vendor hasn't bothered updating the firmware even though it's still officially supported, and that run software which hasn't been updated to either automatically enable the mitigations or have them manually enabled.

BlankSystemDaemon
Mar 13, 2009



Doh004 posted:

I'm speccing out a custom build for my own dedicated Plex server and am curious about Quick Sync. Do I need the integrated graphics in order to have Quick Sync enabled (Plex uses this primarily for video encoding). I'm looking at the i3-9100 here: https://www.newegg.com/intel-core-i3-9th-gen-core-i3-9100/p/N82E16819118022 but can shave off $100 to go without the graphics.

I have a spare 1050 ti that *should* fit in my case which can provide the encoding for me, but I'm trying to see if I actually need it.
Yep, QuickSync uses fixed-function hardware that's included in the iGPU of Intel chips, so while it doesn't use GPGPU execution units like nvenc, it does need the iGPU to be on the die.
These fixed-function decoding and encoding blocks are only good for a subset of each codec they support, which means you're at the mercy of whatever profiles, bitrates and settings Intel chose to fix in the hardware.
Unfortunately for all of us, Intel have decided that the baseline profile, which is primarily used for video conferencing and mobile video, is good enough - so it looks absolute shite if you're transcoding onto your TV, for example.
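If you want to know whether the QuickSync path works at all before committing to an iGPU-only build, a rough smoke test is to encode a synthetic clip with ffmpeg's h264_qsv encoder - a sketch assuming an ffmpeg build with QSV support:

code:
# Smoke test for QuickSync: encode a few seconds of a synthetic test pattern
# with ffmpeg's h264_qsv encoder. Assumes an ffmpeg build with QSV support;
# if this fails, hardware transcoding in Plex is unlikely to work either.
import subprocess

result = subprocess.run(
    [
        "ffmpeg", "-hide_banner", "-y",
        "-f", "lavfi", "-i", "testsrc=duration=5:size=1280x720:rate=30",
        "-c:v", "h264_qsv",
        "/tmp/qsv_test.mp4",
    ],
    capture_output=True,
    text=True,
)
print("QuickSync encode", "works" if result.returncode == 0 else "failed")
if result.returncode != 0:
    print(result.stderr[-2000:])  # tail of ffmpeg's error output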

BlankSystemDaemon fucked around with this message at 14:34 on Nov 3, 2019

BlankSystemDaemon
Mar 13, 2009



SwissArmyDruid posted:

https://blogs.intel.com/technology/2019/11/ipas-november-2019-intel-platform-update-ipu/


Intel plz. I remember when six or more CVEs in A YEAR was unusual, but 77 in a month?
I might argue that the reason it was 6 CVEs in a year was that they just didn't publish information about it publicly, and only informed their partners like the three-letter agencies?

There's also http://tpm.fail/ which we've known about for almost a month.

BlankSystemDaemon
Mar 13, 2009



eames posted:

In *theory* your CPU could leak the content of your unlocked password manager window while you visit a site with a malicious javascript ad.
Minor detail, but this hasn't been applicable since the first wave of userspace information leaks back around Spectre/Meltdown, as both Chrome and Firefox (and hopefully all other browsers running javascript) have coarsened their timer resolution such that javascript can no longer be used for it.

BlankSystemDaemon
Mar 13, 2009



ratbert90 posted:

Ok.

The 3950x is $150 more expensive than a 10900x, has 6 more cores, uses less power, has a higher sustained boost clock, has 45MB more L3 cache, can support ECC memory, and has PCIe 4.0 support.
Minor detail, but when you say ECC, what mode are you talking about?
It can be one of these:
  • The system can go beyond POST with ECC memory plugged in
  • The memory can correct one or two bit error(s), but doesn't generate a non-maskable interrupt to notify the system
  • The system can correct one or two bit error(s), and does generate a non-maskable interrupt to notify the system
  • A non-maskable interrupt is generated, and it causes the CPU to reset rather than write corrupt data to disk
1 is fine if you don't actually care about system stability (but then why pay the extra ~$20-50 that ECC costs?), 2 is bad because it effectively makes the system behave like it doesn't have ECC once you have enough memory errors, 3 is optimal as it lets you pick whether to ignore the NMI or panic the system, and 4 is only used in the most critical setups with active/active high-availability.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

I personally would not rely on ECC for any board that is not a server-oriented board.
Not even that will guarantee anything unless it's from a reputable ODM like Supermicro.

Eletriarnation posted:

Sure, so hypothetically AMD could push out an AGESA update that breaks ECC across the whole Ryzen line and tell everyone to gently caress off. In that case ASRock would just have to stop advertising support, and anyone with a working ECC system would have to stop updating or lose that feature.

The reason I'm calling it FUD is that as far as I can tell there's no reason for AMD to do this (and there are some reasons for them not to do it), with nothing indicating that it's likely to happen. The actual reality today is that ECC seems to work just fine on Ryzen platforms which advertise it. The idea that it won't in the future is, again as far as I can tell, based on nothing except "hey they can't get sued if they decide to do this".
20 years of industry experience has taught me that "ECC supported via (un)registered DIMMs" doesn't mean "the firmware will generate non-maskable interrupts so you get to decide what happens", no matter how much anyone hopes that that's what it means.

BlankSystemDaemon
Mar 13, 2009



BangersInMyKnickers posted:

ECC without NMI is unacceptable for server applications, but I do not agree that is matters that much for home/gaming use. Even perfectly healthy dimms can bitflip occasionally and this will stop that from impacting you, as well as generally increasing overall reliability. Yeah, if you are losing a dimm then troubleshooting it will be a pain in the rear end but a memtest will generally turn it up eventually while reducing the likelihood of system problems being caused by ram.
With ECC and NMIs you still get the benefit of knowing when your memory is going bad - it's not like you need to configure it to crash the system if the ECC memory can't fix the bit errors - and it'll still catch those single bit-flips.

EDIT: Microsoft published a study in 2007 that showed, based on the information they'd gathered, that even consumer PCs would benefit from having ECC in terms of reducing lost work/productivity, iirc. Google also published a paper showing that bitflips are much more common than previously assumed, and there have been other studies on the frequency of cosmic rays and how often they lead to system crashes.

BlankSystemDaemon fucked around with this message at 23:49 on Nov 15, 2019

BlankSystemDaemon
Mar 13, 2009



We still haven't seen any non-Goldmont CPUs with SHA Extensions, have we?

EDIT: The news of a Comet Lake NUC just popped into my RSS feeds, so there's that.

BlankSystemDaemon fucked around with this message at 19:26 on Nov 17, 2019

BlankSystemDaemon
Mar 13, 2009



Asimov posted:

I agree and in a consumer-friendly world there would be a little "opt out of speculation execution exploits" button for consumers. Intel's explanation would probably be that their REAL customers are server farms, and that you should upgrade your hardware anyway. In addition, consider the bad press coverage that would occur if a data breach happened that could be blamed on un-patched operating systems and CPU architecture. mix that up into a stew and you get mandatory performance-limiting software patches.
Practically no HPC cluster uses the exploit mitigations, because all of the OSS operating systems, including Linux, let you turn them off (though there's no standard way to do it), and the clusters are all heavily firewalled from even accessing the internet.

BlankSystemDaemon
Mar 13, 2009



For what it's worth, the biggest issue with the side-channel attacks was that they allowed javascript running in browsers to be used to leak kernel information in memory, including passwords stored in password vaults - and that has been mitigated on both Chrom(e|ium) and Firefox, which covers something like 98% of the browser market, by making the timers available to javascript less precise.

So if you're like me and have an old laptop where the performance impact is rather high (even with the big speed increases in Firefox), you might forego them and rely on the browser mitigation instead.
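On Linux you can at least see what the kernel thinks applies to your machine and which mitigations are active before deciding to turn any of them off - a small sketch, assuming a reasonably recent kernel that exposes the vulnerabilities directory in sysfs:

code:
# Print the kernel's view of speculative-execution vulnerabilities and the
# mitigations in effect. The directory exists on reasonably recent kernels.
import pathlib

vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")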

BlankSystemDaemon
Mar 13, 2009



It's at times like these I wish we had a shared CPU thread, pun fully intended.


A Success on Arm for HPC: AnandTech Found a Fujitsu A64fx Wafer
and Arm Server CPUs: You Can Now Buy Ampere’s eMAG in a Workstation according to AnandTech.

BlankSystemDaemon
Mar 13, 2009



The modern implementation of SMT that Intel has was introduced in 2008 (with Nehalem) and bears very little resemblance to the 2002 version which appeared in the Pentium 4 Northwood-C - which I had in a machine that has long since given up the blue smoke.

Nehalem introduced a shitload of changes, including but not limited to the big change associated with transitioning EM64T/IA-32e to Intel 64 (which, if I recall correctly, reduced the differences with AMD64, though not to perfect compatibility), turbo-boost functionality, a new MMU (which AMD had years before Intel), as well as the branch predictor we know and love today, a second-level TLB, plus hardware-accelerated virtualization and SLAT. They also did quite a lot of work on compare-and-swap for atomic operations.
They also got rid of the Northbridge, moved the DRAM controller onto the CPU itself, started requiring QPI links for cross-socket connections, and hung the Southbridge off the primary CPU, making NUMA significantly worse to deal with.

EDIT: Most of that is from memory or scanning FreeBSD man-pages, so I might not have all the details correct.

BlankSystemDaemon fucked around with this message at 00:44 on Dec 11, 2019

BlankSystemDaemon
Mar 13, 2009



EdEddnEddy posted:

And overall, has any of these flaws led to an actual attack that we know of? Patch it all you want, there has to be countless unpatched hardware systems in the wild still.

In the strive for performance improvements and advancement, before you know it they are just going to label the CPU itself is vulnerable to being used to take your info and put it on the Internet by a user, so it just needs to be powered off completely to save you the headache.
The sneaky part about at least the Meltdown and Spectre attacks is that you couldn't know if it was happening to you - but it's unlikely to have been used by anyone except state-level threat actors.
Nothing has leaked about anyone getting hit from it, and there was that whole debacle about Intel only informing certain of their customers until a Linux developer blew the whole thing wide open.

Paul MaudDib posted:

the 5-year solution is Lakefield, you will have slower in-order (side benefit more efficient!) processors for handling tasks that are designated as "secure" and fast insecure cores for gaming and other poo poo that just needs to go fast, with caching systems that don't overlap.
Intel has 30 years of favoring performance over security, so I have serious doubts that they'll fix it in as little as 5 years, since it's inevitable there's more stuff out there that doesn't come from the branch predictors or the processor doing out-of-order execution.

EDIT: Also, wasn't Lakefield originally scheduled for 2019? :v:

BlankSystemDaemon fucked around with this message at 00:42 on Dec 11, 2019

BlankSystemDaemon
Mar 13, 2009



Endymion FRS MK1 posted:

I have a dumb question. How do these things get patched? I have an 8086K, do I just rely on motherboard bios updates or is there something else I actively have to do?
Intel has to (and has, consistently) produce fixes in the form of binary microcode blobs that can be loaded into the CPU.
These can be applied at runtime, following a spec Intel has laid out whereby the binary data is written to special CPU registers and the microcode is transparently loaded once it's been verified. The other way to load it is to have the firmware do the injection, but that requires your particular motherboard vendor to supply you with a firmware update.

EDIT: I forgot to mention that if you keep your OS up-to-date, these should all be either available or enabled by default.
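If you want to check whether a new microcode revision has actually been loaded (whether by the OS or by a firmware update), on x86 Linux it shows up in /proc/cpuinfo - a minimal sketch:

code:
# Show the currently loaded microcode revision per logical CPU on x86 Linux,
# taken from the "microcode" field in /proc/cpuinfo. A successful update
# (from the OS or from firmware) shows up as a higher revision.
revisions = {}
with open("/proc/cpuinfo") as f:
    cpu = None
    for line in f:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":
            cpu = value
        elif key == "microcode" and cpu is not None:
            revisions[cpu] = value

for cpu, rev in revisions.items():
    print(f"cpu{cpu}: microcode revision {rev}")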

BlankSystemDaemon fucked around with this message at 11:16 on Dec 11, 2019


BlankSystemDaemon
Mar 13, 2009



That's great and all, but considering this quote:

"Anandtech posted:

It’s worth also pointing out, based on the title of this slide, that Intel still believes in Moore’s Law. Just don’t ask how much it’ll cost.
It seems that Intel has forgotten that Moore's Law includes the cost of the product, and not just the relationship between transistor density and time.

  • Reply