Fame Douglas
Nov 20, 2013

by Fluffdaddy
Ah yes, all the other totally secure Intel features were hacked (including their super secure Management Engine), but VT-d is clearly impenetrable.

DrDork posted:

True, but the point remains that unless you're living under a technological rock, you're already assuming those risks with basically anything else you're doing. To worry about the security concerns of physically connected devices via IOMMU by comparison is, perhaps, worrying about the wrong things.

Worrying about the security of devices directly connected to your PC is absolutely something you should do. It's not superfluous, like double-spacing. Even regular USB devices can do a ton of harm.

Potato Salad
Oct 23, 2014

nobody cares


Fame Douglas posted:

VT-d is clearly impenetrable

that is a very strange and dubious claim

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

Cygni posted:

To add to that, if you saw those USB4 speed numbers of 20Gbps or 40Gbps and got happy, good news! USB-IF hosed that up too! USB4 20Gbps certified devices only have to transfer data at... 10Gbps. The other 10Gbps can come from either data or the required DisplayPort passthrough support. So even if a port says USB4 20Gbps, there is no guarantee it will run at anything higher than 10Gbps for actual data. There is also no branding standard to figure out which USB4 20Gbps ports and devices support what, so being cynical, I think it's likely that device makers won't bother supporting anything but the required portions.

I was trying to test some cables recently and there's no obvious way of knowing what rate your PC and the device are "syncing" at.
If you have an eGPU you can do some VRAM fill rate test and that's about it.
(Maybe someone should make a fake USB stick that writes everything to /dev/null for speed tests)
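
About the closest you can get, on Linux at least, is the per-device "speed" attribute in sysfs, which reports the negotiated rate in Mb/s. A rough sketch, assuming the usual /sys/bus/usb/devices layout; no promises about how cleanly USB4/Thunderbolt tunnels show up there:
code:
/* Sketch: print the negotiated speed the Linux USB stack reports for each
 * connected device, straight from sysfs. Values are strings like 480, 5000,
 * 10000, 20000 (Mb/s). */
#include <stdio.h>
#include <string.h>
#include <glob.h>

int main(void) {
    glob_t g;
    if (glob("/sys/bus/usb/devices/*/speed", 0, NULL, &g) != 0)
        return 1;                       /* no devices found or no sysfs */
    for (size_t i = 0; i < g.gl_pathc; i++) {
        FILE *f = fopen(g.gl_pathv[i], "r");
        if (!f)
            continue;
        char speed[32] = "";
        if (fgets(speed, sizeof speed, f))
            speed[strcspn(speed, "\n")] = '\0';   /* strip trailing newline */
        printf("%s -> %s Mb/s\n", g.gl_pathv[i], speed);
        fclose(f);
    }
    globfree(&g);
    return 0;
}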

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

priznat posted:

I was tasked with buying a few USB-C cables for connectivity to lab analyzers and am positively paralyzed by the options on amazon. It sucks trying to figure out which ones are high speed (important for getting 4-8GB data traces out of the analyzer quickly) and which aren't. UGH!

When in doubt, buy the USB-C cables directly from Google, especially if it's someone else footing the bill. It's the only way you know you're getting the genuine article. Don't trust Amazon, even for "name brand" items.

It looks like he hasn't updated it in a while, but this is a list curated by Benson Leung, the guy who originally broke the news that not all USB-C cables are created equal: https://www.amazon.com/hz/wishlist/genericItemsPage/IOGWYK5NXWAE?filter=&ref_=pdp_new_wl&sort=default&type=wishlist&_encoding=UTF8

His Amazon review page also reads like a "buy this, not that" list: https://www.amazon.com/gp/profile/amzn1.account.AFLICGQRF6BRJGH2RRD4VGMB47ZA/ref=cm_cr_srp_d_gw_btm?ie=UTF8

BIG HEADLINE fucked around with this message at 04:33 on Feb 26, 2021

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Fame Douglas posted:

Ah yes, all the other totally secure Intel features were hacked (including their super secure Management Engine), but VT-d is clearly impenetrable.

Do you have any idea how absurd your false equivalency is?

In one corner, we have Intel ME. This is literally a computer inside your computer, Xzibit style: an entire extra processor core, not visible to your normal operating system, running an Intel-provided Minix 3 OS complete with TCP/IP networking, file systems, drivers, a motherfucking web server, and more.

Of course ME got hacked. It was never not getting hacked. Intel didn't even choose an open source OS designed for security! They picked a pedagogical OS designed to be easy for CS undergrads to modify for schoolwork.

In the other corner, we have VT-d. VT-d is an IOMMU. An IOMMU is just an MMU for I/O devices.

What's an MMU? A device for translating virtual addresses to physical addresses. This is not a complex task; it consists of roughly the following steps:

1. Split the virtual address (VA) into two fields, the virtual page number and the offset. For x86, since pages are 4 KB (12 address bits), the offset is the low 12 bits and the page number is all the remaining address bits.

2. Look up the physical page number for the virtual page number.

3. Splice the physical page number with the offset and bam, you've got your translated address.

The hardware required to do #1 and #3 is so trivial it's impossible for there to be bugs. #2 is the "hard" part, and even that isn't particularly hard. The job can be very rigorously defined, it's simple, and the hardware you need to do it is not even a Turing complete (programmable) computer.
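
To make it concrete, here's the whole algorithm spelled out in C. This is a toy, single-level, purely illustrative model with made-up sizes, not a description of VT-d's actual multi-level page walk:
code:
/* Toy software model of the three steps above: 4 KB pages, so the low
 * 12 bits are the offset and the rest is the page number. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define NUM_PAGES  16   /* tiny example address space */

/* Step 2's lookup table: virtual page number -> physical page number,
 * with a valid bit standing in for the access check. */
struct pte { uint64_t ppn; int valid; };
static struct pte page_table[NUM_PAGES];

int translate(uint64_t va, uint64_t *pa) {
    uint64_t vpn    = va >> PAGE_SHIFT;   /* step 1: split off page number  */
    uint64_t offset = va & PAGE_MASK;     /*         ...and the offset      */
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;                        /* page fault / IOMMU fault       */
    *pa = (page_table[vpn].ppn << PAGE_SHIFT) | offset;  /* step 3: splice  */
    return 0;
}

int main(void) {
    page_table[3] = (struct pte){ .ppn = 42, .valid = 1 };
    uint64_t pa;
    if (translate((3ull << PAGE_SHIFT) | 0x123, &pa) == 0)
        printf("physical address: 0x%llx\n", (unsigned long long)pa);
    return 0;
}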

Is it possible for an MMU to have an exploitable bug? Sure. Is the scope for that even a millionth as large as the scope for security flaws in an entire loving Minix distro complete with application software intentionally designed to be capable of reimaging your hard drive from a network server while you think your computer is off? Oh gently caress no, gently caress off with this bullshit about how it's exactly the same level of risk.

BlankSystemDaemon
Mar 13, 2009



VT-d (and any IOMMU) will surely protect the :yaybutt:, just like SGX was supposed to.
Too bad SGX was so susceptible to so many speculative execution and/or timing attacks that any article covering it has to mention half a dozen of them just to get things out of the way.
And speaking of secure things that Intel swears everyone should use, TXT also has issues.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Fame Douglas posted:

Worrying about the security of devices directly connected to your PC is absolutely something you should do. It's not superfluous, like double-spacing. Even regular USB devices can do a ton of harm.

In a general security sense, yeah, you should be aware of what is getting plugged in to your system. The point is that worrying about TB3/4 devices being notably more dangerous than current USB because of PCIe access via IOMMU is maybe a little misguided.

But yeah, you really shouldn't be plugging random poo poo in, and if you're super concerned about security you straight up physically disable the ability to plug random poo poo in to begin with--neither TB nor legacy USB ports work well once you fill them with glue :shrug:

JawnV6
Jul 4, 2004

So hot ...

BlankSystemDaemon posted:

VT-d (and any IOMMU) will surely protect the :yaybutt:, just like SGX was supposed to.

can you even pretend to describe the threat model for this BS equivalence, like is your cloud server getting multiple devices plugged in somehow

Beef
Jul 26, 2004
Have there been any known attacks that used any of the speculative shenanigans? Hardware vectors seem so much :effort: compared to the bazillion software/networking vulnerabilities out there.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

BlankSystemDaemon posted:

VT-d (and any IOMMU) will surely protect the :yaybutt:, just like SGX was supposed to.
Too bad SGX was so susceptible to so many speculative execution and/or timing attacks that any article covering it has to mention half a dozen of them just to get things out of the way.
And speaking of secure things that Intel swears everyone should use, TXT also has issues.

Please try to understand at least a little about the technology you're posting about instead of just reflexively making GBS threads out whataboutisms.

MMUs are 1970s (1960s maybe?) tech, and IOMMUs are just "hey let's apply this well-worn MMU concept to everything which can touch memory, not just software".

An MMU is like a combination bouncer and guide. It has a list of who's allowed into what rooms (physical memory pages) at the club, and every club member has a potentially unique-to-them virtual numbering scheme for the rooms. Every time someone wants to get into a virtual room, they have to ask the bouncer for directions. It consults its page table, and either sends them along to the correct physical room or tells them to gently caress off because that one's off limits.

The scope for problems is mostly "did the OS put the right access controls in the page table". The hardware which enforces these access controls on behalf of the OS can be so simple that it's possible to formally verify it (mathematically prove that it does what its specification says it should, and no more).
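
Spelled out, the bouncer's whole job is a couple of bit tests against flags the OS already wrote into the entry. A toy sketch with made-up flag names (real x86 PTEs have more bits, but the shape is the same):
code:
/* Sketch of the "bouncer" half only: given a page table entry's permission
 * bits, decide allow or fault. The entries themselves come from the OS. */
#include <stdio.h>

#define PTE_PRESENT  (1u << 0)
#define PTE_WRITABLE (1u << 1)

enum access { ACCESS_READ, ACCESS_WRITE };

/* Returns 1 if the access is allowed, 0 if the MMU should raise a fault. */
int bouncer_allows(unsigned pte_flags, enum access kind) {
    if (!(pte_flags & PTE_PRESENT))
        return 0;                              /* not on the list at all */
    if (kind == ACCESS_WRITE && !(pte_flags & PTE_WRITABLE))
        return 0;                              /* read-only room */
    return 1;
}

int main(void) {
    printf("%d\n", bouncer_allows(PTE_PRESENT, ACCESS_WRITE));                /* 0 */
    printf("%d\n", bouncer_allows(PTE_PRESENT | PTE_WRITABLE, ACCESS_WRITE)); /* 1 */
    return 0;
}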

The most likely attack surface is the cache (the TLB) that MMUs typically use to accelerate page table lookups. In principle, any cache is at risk of enabling some form of timing attack. Is that likely to happen? Seems doubtful. But maybe this is why Apple's going with one IOMMU per DMA device - they're being preemptively paranoid about timing sidechannel attacks.

More importantly, if someone figures out how to break MMUs with timing sidechannels, guess what? You should be more worried about the non-IO MMU, the one which allows your OS to isolate processes from each other. You will have far more immediate threats than someone plugging a rogue Thunderbolt device into your computer.

BlankSystemDaemon
Mar 13, 2009



Jesus loving christ, I know what a loving MMU is - I've just spent the last good while figuring out translation lookaside buffer depessimizations, because guess what, VMX transitions invalidate TLBs and page-structure caches.

The point was that all these things which Intel say are good for security turn out to be loving full of holes, and even the things that are good for performance turn out to also be loving full of holes.

BlankSystemDaemon fucked around with this message at 00:55 on Feb 27, 2021

movax
Aug 30, 2008

Let's try to keep it civil here, dudes -- please don't make it personal by poking each other in the eye.

Deep breath and remember this is all sand we (people) tricked into thinking, and people are always the worst part of any technical problem. VT-d / IOMMUs are tools and tools can get broken / may be poorly designed / can potentially get taken advantage of. On paper, the concept of an IOMMU makes perfect sense + is, IMO, a very sensible evolution in computing technology. Practically, because meatbags are responsible for this and the management of any technical project of sufficient complexity is a study and discipline of its own (systems engineering), poo poo's gonna break.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BlankSystemDaemon posted:

The point was that all these things which Intel say are good for security turn out to be loving full of holes, and even the things that are good for performance turn out to also be loving full of holes.

I mean, you're not technically wrong, but at the same time basically no reasonably complex general-purpose computer we have ever made has managed to stand up to prolonged security probing. Intel gets the worst of it because they were the only serious game in town for the decade prior to Zen 3. But everyone else making consumer electronics has also gotten all sorts of things wrong over the years.

Possible vulnerabilities are only relevant in the context of a particular threat profile. If you're already assuming people can jack arbitrary untrusted devices into your machine, worrying about some currently non-existent attack against a subsystem that has so far resisted efforts to subvert it is a pretty strange concern, given that USB already exists, with everything that brings with it.

Potato Salad
Oct 23, 2014

nobody cares


There exist various classes and scales of risk. If you're hyperfocusing on the safer, difficult-to-attack things, either you've got a huge security team with an infinite budget or you're setting your priorities wrong.

Okay, so I need to be mindful of quality whenever attaching a DMA device to a hypervisor, and I should put core security functions like auth or platform health attestation on hosts running Epyc with traditional virtualized TCP/IP networking. Onward to removing weird browser extensions, or patching firmware, or any other topics that address dramatically larger oceans of risk.

Potato Salad fucked around with this message at 14:24 on Feb 27, 2021

Cygni
Nov 12, 2005

raring to post



If these end up being the real prices, then as expected, the i9s are truly "premium" products that are well off the price/performance arc for gaming. Firmly in the "don't buy unless you already have a 3090" territory.

On the other hand, the 11700KF is priced directly against the feature-comparable 5800X (8/16 with overclocking and no integrated graphics). The 5800X is $450 at Microcenter/BestBuy when in stock.

The early leaked benchmarks have record single core scores but strangely gimped multi core scores, and Intel has apparently told reviewers that a BIOS fix is incoming and numbers aren't final. So take the leaks (even with retail parts) with a grain of salt.

Ugly In The Morning
Jul 1, 2010
Pillbug

Cygni posted:



If these end up being the real prices, then as expected, the i9s are truly "premium" products that are well off the price/performance arc for gaming. Firmly in the "don't buy unless you already have a 3090" territory.

On the other hand, the 11700KF is priced directly against the feature-comparable 5800X (8/16 with overclocking and no integrated graphics). The 5800X is $450 at Microcenter/BestBuy when in stock.

The early leaked benchmarks have record single core scores but strangely gimped multi core scores, and Intel has apparently told reviewers that a BIOS fix is incoming and numbers aren't final. So take the leaks (even with retail parts) with a grain of salt.

I’m wondering how the 11700KF stacks up against the 10900KF for gaming if the single core stuff is that much better.

E: like in theoretical performance, I know either one will blow away any extant game right now.

Ika
Dec 30, 2004
Pure insanity

Apparently some German retailers messed up the embargo and already fulfilled orders. Guess benchmarks will start being posted in the next couple of days.

Kazinsal
Dec 13, 2011



e: ^^^ well those retailers aren't going to see advance shipments from Intel in the future :rip:

I'm betting most 11700KFs will be able to overclock similarly to 11900KFs. At the very least they'll meet the 11900KF's stock boosts, almost assuredly.

Quite looking forward to these.

Kazinsal fucked around with this message at 05:02 on Mar 1, 2021

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
those i5 prices are tasty if the (single-core) performance is going to be anything close to Zen 3, as compared to a 5600X

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Zen 4 will supposedly support AVX-512: https://www.techpowerup.com/279129/amd-zen-4-microarchitecture-to-support-avx-512

SwissArmyDruid
Feb 14, 2014

by sebmojo
....but why though.

BlankSystemDaemon
Mar 13, 2009



Because AMD wants to sell CPUs to supercomputer manufacturers.

Kazinsal
Dec 13, 2011



Yep, that’s a play to get enormous Epyc servers into HPC farms. A real smart one, too.

repiv
Aug 13, 2009

They don't even necessarily need to go wider in hardware; AVX-512 has so many new features over AVX2 that it's worth having even if they run it at half rate.

BlankSystemDaemon
Mar 13, 2009



Assuming you don't mix your workloads, sure it's a great idea.
Running AVX512 code alongside x86_64/amd64 or x87 code isn't the best idea.
Even if we disregard the added latency of SSE, MMX and everything else (which is the case at least for a lot of kernel code, which operates on the principle of finishing small operations the fastest, since that's what there's most of), performance is pessimized by a lot if running any AVX512 instructions down-clocks the CPU core from 3GHz to 1.6 or 1.2GHz just to keep it from overheating.
You're better off pinning an entire OS kernel to one core, and running every userspace task doing AVX512 instructions on every other core in the system.
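
For the userspace half of that, pinning is easy enough on Linux with sched_setaffinity(2). A minimal sketch that keeps the current process off core 0; actually reserving core 0 for the kernel is a boot-parameter job (e.g. isolcpus) and isn't shown here:
code:
/* Sketch: restrict this (AVX-512-heavy) process to cores 1..N-1, leaving
 * core 0 alone. Linux-specific; _GNU_SOURCE is needed for the CPU_* macros. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t set;
    CPU_ZERO(&set);
    for (long cpu = 1; cpu < ncpus; cpu++)   /* everything except core 0 */
        CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof set, &set) != 0) {   /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to cores 1..%ld; run the AVX-512 work from here\n", ncpus - 1);
    return 0;
}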

BlankSystemDaemon fucked around with this message at 15:46 on Mar 1, 2021

repiv
Aug 13, 2009

The thermal issues of AVX-512 only really come into play when executing 512-bit instructions at full rate (as Intel's implementation does). There's a useful middle ground where you use 128-bit or 256-bit AVX-512 instructions, or run the 512-bit instructions at half rate, which gets you all the new features without the throttling.
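
To make "256-bit AVX-512" concrete, here's a rough sketch using the VL encodings: an AVX-512 mask register driving a 256-bit add, so the 512-bit datapath never gets touched. The intrinsic names are the standard Intel ones; the example around them is made up, and you'd build it with something like -mavx512f -mavx512vl:
code:
/* Sketch: AVX-512 per-lane masking at 256-bit width via AVX-512VL. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int out[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);

    /* Add only the even lanes (mask 0b01010101); odd lanes keep va.
     * AVX2 alone would need a separate add plus a blend to do this. */
    __mmask8 k = 0x55;
    __m256i vr = _mm256_mask_add_epi32(va, k, va, vb);

    _mm256_storeu_si256((__m256i *)out, vr);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);    /* prints: 11 2 33 4 55 6 77 8 */
    printf("\n");
    return 0;
}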

AMD might come up with a more graceful way of handling full-rate AVX-512 anyway, like they did with OG AVX. Intel chips panic and downclock as soon as they see an AVX instruction, but Zen 2/3 AVX has no immediate penalty; they only slow down if there's enough continuous AVX load for the boost system to notice the swing in voltage/thermals.

repiv fucked around with this message at 16:05 on Mar 1, 2021

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

A lot of received knowledge about AVX-512 is either no longer accurate on newer processors, obsolete, an edge case extended to the general case, or just wrong.

BlankSystemDaemon
Mar 13, 2009



In my defense, I hadn't exactly planned on getting a life-threatening disease that made me unable to work.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

The thermal issues of AVX-512 only really come into play when executing 512-bit instructions at full rate (as Intel's implementation does). There's a useful middle ground where you use 128-bit or 256-bit AVX-512 instructions, or run the 512-bit instructions at half rate, which gets you all the new features without the throttling.

Ice Lake almost entirely eliminates AVX downclocking, at least on the consumer chips. The chips only lose 100 MHz off the maximum possible single-core turbo if you're running the heaviest AVX-512 instructions in 1T mode; in all other scenarios they run at full speed. So AVX-512-light, AVX2-heavy, and multithreaded loads (with any instructions) all run at full speed now.

https://travisdowns.github.io/blog/2020/08/19/icl-avx512-freq.html

I haven't seen anything yet on whether this applies to the server chips as well, but probably, hopefully? Either way, the efficiency gains on newer nodes appear to have fixed a lot of the power problems involved with AVX even without moving to half-rate AVX-512, which is a pretty obvious win for feature set even if you don't go for the full 512b unit width.

This will also be something interesting to look at on Rocket Lake, to see whether it has the same AVX-512 boost behavior as Skylake-X. I would assume yes since it's still 14nm, but maybe Intel is working on it given how much of a problem it's been for them (and how much it's probably hurt adoption). AMD can get AVX working without weird downclocking anomalies, on lovely GloFo nodes no less.


The really ironic thing is that imo the future of AVX-512 is in jeopardy on the Intel side since Alder Lake's LITTLE cores won't support AVX-512. The expectation I've heard is that they will be forced to disable AVX-512 on the big cores as a result even though they nominally support it, so the chip as a whole effectively won't support AVX-512 unless you turn off the little cores. And that's their desktop platform. Laptops have it (and have for a while) but those will probably move to big.LITTLE sooner rather than later for power reasons, leaving server once again as the only thing that uniformly supports it.

Or maybe they will figure out some way to pin those threads to the big cores, but that still means using AVX-512 means giving up the little cores and the MT performance they provide, so it'll be a tradeoff.

Paul MaudDib fucked around with this message at 18:56 on Mar 1, 2021

repiv
Aug 13, 2009

Weren't there some ARM chips that had hardware NEON on the big cores and microcoded/emulated NEON on the little cores? Maybe they'll do something along those lines.

CatHorse
Jan 5, 2008
Anandtech review of i7-11700K
https://www.anandtech.com/show/16535/intel-core-i7-11700k-review-blasting-off-with-rocket-lake

lkz
May 1, 2009
Soiled Meat
So apparently Anandtech got a Core i7-11700K early and ran a bunch of benchmarks: https://www.anandtech.com/show/16535/intel-core-i7-11700k-review-blasting-off-with-rocket-lake

Edit: little slow to draw I guess lol

Inept
Jul 8, 2003

The heatsink slayer has arrived



shrike82
Jun 11, 2005

seems bad

quote:

Users looking at our gaming results will undoubtedly be disappointed. The improvements Intel has made to its processor seem to do very little in our gaming tests, and in a lot of cases, we see performance regressions rather than improvements. If Intel is promoting +19% IPC, then why is gaming so adversely affected? The answer from our side of the fence is that Rocket Lake has some regressions in core-to-core performance and its memory latency profile.
...
The danger is that during our testing, the power peaked at an eye-catching 292 W. This was during an all-core AVX-512 workload, automatically set at 4.6 GHz, and the CPU hit 104ºC momentarily. There’s no indication that the frequency reduced when hitting this temperature, and our cooler is easily sufficient for the thermal load, which means that on some level we might be hitting the thermal density limits of wide mathematics processing on Intel’s 14nm. In order to keep temperatures down, new cooling methods have to be used.

Ugly In The Morning
Jul 1, 2010
Pillbug

shrike82 posted:

seems bad

Time for geothermal cooling. :getin:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I wasn't expecting anything more than "too little, too late," but that's actually terrible. Just suck it up and hunt down a 5000 series chip.

obviously still losing to AMD in productivity at massively higher power is bad, but getting beaten by Skylake in gaming is just sad

WhyteRyce
Dec 30, 2001

:lol: I remember the days when we had to use Peltiers to heat the chips up to 100C to test the corners. Don't know what the gently caress they do now for that.

redeyes
Sep 14, 2002

by Fluffdaddy
yeah gently caress Intel, that chip shouldn't even exist. Also wtf is this performance regression in math libraries?! It's bad!

repiv
Aug 13, 2009

lol what a shitshow

Is 12th gen expected to be on a new node or are we getting a 14nm++++++++++ 500w space heater?

Inept
Jul 8, 2003

Alder Lake is 10nm and supposed to be out later this year. We'll see.
