Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS
I won the 1st place prize in the intel serious sam lottery and part of the prize is a 10700k.
I'm thinking of doing a custom loop ITX build for fun. Is there a handy tier list of z490 mobos?
What is the sweet spot for ram speed for intel?


SuperTeeJay
Jun 14, 2015

Fauxtool posted:

What is the sweet spot for ram speed for intel?
The conventional wisdom was not to bother with anything faster than 3200MHz but some more recent benchmarking shows FPS gains at 3600MHz and (to a lesser extent) 4000MHz. I'd go for 3600/C16 or C14 in a new build.

Kazinsal
Dec 13, 2011



The price/perf jump from 3200 to 3600 is a much more reasonable ratio than the price/perf jump from 3600 to 4000.

Canna Happy
Jul 11, 2004
The engine, code A855, has a cast iron closed deck block and split crankcase. It uses an 8.1:1 compression ratio with Mahle cast eutectic aluminum alloy pistons, forged connecting rods with cracked caps and threaded-in 9 mm rod bolts, and a cast high

The best z490 itx motherboards are either the gigabyte aorus ultra or the msi meg unify. They have pretty much the same vrm, but the gigabyte will most likely run a few C cooler due to a slightly larger heatsink. The msi should have slightly better memory overclocking capabilities. The msi has a thunderbolt 3 port and realtek 2.5Gb lan where the gigabyte uses an intel nic and no thunderbolt. These are the main differences. I'm a happy msi z490i user, but mostly due to it being fifty dollars cheaper at time of purchase.

Cygni
Nov 12, 2005

raring to post

Looks like Rocket Lake will launch on March 15th. Dunno if that means actual availability on that day or not though.

https://videocardz.com/newz/intel-rocket-lake-s-to-be-available-on-march-15th-alder-lake-s-to-feature-10nm-enhanced-superfin-architecture

Shrimp or Shrimps
Feb 14, 2012


SuperTeeJay posted:

The conventional wisdom was not to bother with anything faster than 3200MHz but some more recent benchmarking shows FPS gains at 3600MHz and (to a lesser extent) 4000MHz. I'd go for 3600/C16 or C14 in a new build.

Do ram speeds tend to matter less the higher resolution you go, e.g. 1080p vs 4K? I'm assuming yes, because you move to being GPU bound rather than CPU bound?

Also, what's the general strategy for undervolting / overclocking / increasing efficiency for intel these days? Is cache undervolting recommended? What about downclocking cache to get a better core clock? Or what about overclocking cache? Is undervolting iGPU safe when not using it? Does it even do anything? What about igpu unslice?

VCCIO and system agent, as I understand it, might need voltage bumps when overclocking memory and / or enabling XMP profile on memory. I definitely need to push both a touch for my 6700k/z270 asrock to get my 3200 ram xmp profile stable.

Shrimp or Shrimps fucked around with this message at 06:51 on Feb 18, 2021

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

Shrimp or Shrimps posted:

Also, what's the general strategy for undervolting and / or increasing efficiency for intel these days? Is cache undervolting recommended? What about downclocking cache to get a better core clock? Or what about overclocking cache? Is undervolting iGPU safe when not using it? Does it even do anything? What about igpu unslice?

VCCIO and system agent, as I understand it, might need voltage bumps when overclocking memory and / or enabling XMP profile on memory. I definitely need to push both a touch for my 6700k/z270 asrock to get my 3200 ram xmp profile stable.

same questions as this goon mostly

my 10700k arrived way sooner than I expected so this build is happening sooner than later. What's the general overclocking strat these days?
The last time I seriously tried was on a 2500k and that was relatively simple: adjust the multiplier, raise voltage until stable, repeat.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Shrimp or Shrimps posted:

Do ram speeds tend to matter less the higher resolution you go, e.g. 1080p vs 4K? I'm assuming yes, because you move to being GPU bound rather than CPU bound?

yes, it's a question of CPU bound vs GPU bound and at higher resolutions you're more GPU bound so you can get away with whatever on the CPU side. But RAM does matter for stuff like open-world or "sim" games that are mostly CPU-bound even at higher resolutions - Fallout 4 was RAM-bound even like 6 years ago, you got fantastic scaling up to at least 4000.
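The resolution-scaling point can be sketched with a toy frame-time model: a frame is ready when the slower of the CPU and GPU finishes, so CPU-side gains (like faster RAM) only show up when the CPU is the longer pole. The millisecond figures below are made-up illustrative numbers, not benchmarks.

```python
# Toy model: frame rate is limited by whichever of CPU or GPU
# takes longer per frame. All timings here are hypothetical.

def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

cpu_slow_ram = 8.0   # hypothetical CPU frame time with slow RAM
cpu_fast_ram = 7.0   # hypothetical CPU frame time with fast RAM

# At 1080p the GPU finishes quickly, so the CPU is the bottleneck
# and faster RAM is visible in the frame rate:
print(fps(cpu_slow_ram, gpu_ms=6.0))   # 125 fps
print(fps(cpu_fast_ram, gpu_ms=6.0))   # ~143 fps

# At 4K the GPU takes far longer, hiding the CPU difference:
print(fps(cpu_slow_ram, gpu_ms=16.0))  # 62.5 fps
print(fps(cpu_fast_ram, gpu_ms=16.0))  # 62.5 fps
```

A CPU-bound sim or open-world game is just the case where `cpu_ms` stays the longer pole even at 4K, which is why RAM tuning still pays off there.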

AFAIK nobody really cares about undervolting on Intel, there is very little meaningful "boost management" beyond "at X cores you get Y clocks". It's the last gasp of the "old" paradigm where you just find whatever your best all-core clock at whatever you consider reasonable voltage, then you tune your RAM, your cache, and your ringbus. The latter two are pretty significant on Intel as well, ringbus is like tuning infinity fabric, and I don't really know if cache tuning exists on AMD as an independent thing, but the closer you can get ringbus and cache to "1:1" the better. The gains from these are essentially compounded, if you get 10% extra from cache and 10% extra from ringbus you will get 21% extra total performance.
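The compounding arithmetic above is just multiplying the two speedups rather than adding them:

```python
# Two independent speedups multiply: 1.10 * 1.10 = 1.21, i.e. 21%.
cache_gain = 0.10    # 10% from cache tuning
ringbus_gain = 0.10  # 10% from ringbus tuning

total = (1 + cache_gain) * (1 + ringbus_gain) - 1
print(f"{total:.0%}")  # 21%
```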

Obviously, just as with AMD, the more voltage you shove into all of those things, the faster they will be damaged.

Paul MaudDib fucked around with this message at 07:30 on Feb 18, 2021

SuperTeeJay
Jun 14, 2015

Fauxtool posted:

my 10700k arrived way sooner than I expected so this build is happening sooner than later. What's the general overclocking strat these days?
The last time I seriously tried was on a 2500k and that was relatively simple: adjust the multiplier, raise voltage until stable, repeat.
Guides aimed at obtaining the maximum stable overclock will recommend changing all sorts of settings, but getting all cores to run at the highest single core boost or a bit higher is as simple as you say. I'd also change the load line calibration from Auto to a medium setting and then start hitting it with OCCT/Aida64 and Small FFTs in P95 until it stops falling over.
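The "raise the multiplier, bump voltage until it stops falling over" loop can be sketched like this. Everything here is illustrative: `passes_stress_test` is a placeholder standing in for a real OCCT/P95 run, and the multiplier/voltage numbers (in millivolts, to keep the arithmetic exact) are made up.

```python
# Sketch of the classic trial-and-error overclocking loop.
# passes_stress_test() is a fake stand-in for hours of OCCT/P95;
# the numeric "stability boundary" is invented for demonstration.

def passes_stress_test(multiplier, vcore_mv):
    # Hypothetical chip: each extra multiplier bin needs +20 mV.
    return vcore_mv >= 1250 + 20 * (multiplier - 47)

def find_overclock(start_mult=47, max_vcore_mv=1350):
    stable = None
    mult, vcore = start_mult, 1250
    while True:
        while not passes_stress_test(mult, vcore):
            vcore += 10                # raise voltage 10 mV and retest
            if vcore > max_vcore_mv:   # hit the voltage ceiling:
                return stable          # keep the last stable combo
        stable = (mult, vcore)
        mult += 1                      # try one multiplier bin higher

print(find_overclock())  # (52, 1350) with these fake numbers
```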

BlankSystemDaemon
Mar 13, 2009



Has the recommendation to disable load-line calibration and spread spectrum when doing overclocking changed?

redeyes
Sep 14, 2002

by Fluffdaddy

BlankSystemDaemon posted:

Has the recommendation to disable load-line calibration and spread spectrum when doing overclocking changed?

The spread spectrum maybe, but you can't disable LLC... you can set it differently though.

BlankSystemDaemon
Mar 13, 2009



redeyes posted:

The spread spectrum maybe, but you can't disable LLC... you can set it differently though.
My old workstation's UEFI (which has all the pretty UI, but doesn't do UEFI boot at all) has toggles to disable both.

redeyes
Sep 14, 2002

by Fluffdaddy

BlankSystemDaemon posted:

My old workstation's UEFI (which has all the pretty UI, but doesn't do UEFI boot at all) has toggles to disable both.

I'd guess disabling sets it to 'normal', kind of odd though.

BlankSystemDaemon
Mar 13, 2009



redeyes posted:

I'd guess disabling sets it to 'normal', kind of odd though.
I dunno. I've ended up replacing that workstation with a server with a shitload more cores and memory. :shrug:

Don't have the money to buy stuff new enough that it does all the fancy overclocking.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
We're getting our first announcements of DDR5 memory: https://videocardz.com/newz/asgard-announces-ddr5-4800-memory-for-intel-12th-gen-core-alder-lake-series

That said, the timings are supposed to be 40-40-40, so as has been covered previously, DDR5 is probably going to have such loose timings at the start that DDR4 will likely be better for a while yet in some use-cases.
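The loose-timings point can be made concrete with the usual first-word latency formula: CL cycles at the memory clock, which is half the data rate, so latency in ns is `2000 * CL / data_rate`:

```python
# First-word latency in nanoseconds. The memory clock is half the
# DDR data rate, so one cycle is 2000 / data_rate(MT/s) ns.

def first_word_latency_ns(cl, data_rate_mts):
    return 2000 * cl / data_rate_mts

print(first_word_latency_ns(16, 3600))  # DDR4-3600 CL16 -> ~8.9 ns
print(first_word_latency_ns(40, 4800))  # DDR5-4800 CL40 -> ~16.7 ns
```

By this measure the launch DDR5 kit has nearly double the absolute latency of a commodity DDR4-3600 CL16 kit, which is the "DDR4 will likely be better for a while" argument in a nutshell.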

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Question for the thread concerning Management Engine fuckery.

I have a Z390. I know I'm supposed to stick to the 12.x F/W, the most recent of which is 12.0.71.1681, as 14.x and 15.x are for the 400 and 500 series respectively. But what about the MEI *software*, which is now in the 15.x branch?

Cygni
Nov 12, 2005

raring to post

Rocket Lake will apparently be announced/go on pre-order on 3/16, with reviews and availability on 3/30. lol. The Big 3 sure do love these lovely embargo games.

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

It's good to be king a duopolist.

Beef
Jul 26, 2004
Death to marketing and sales departments, I say. And pass the savings on to YOU!

Cygni
Nov 12, 2005

raring to post

Just wanted to share a rabbit hole I went down. In short, USB sucks rear end!

A lot of these Z590 boards have USB 3.2 Gen 2x2 Type-C ports (yes that is the actual name) that will go up to 20Gbps. If you were wondering if future USB4/TB4 devices will actually use that port at its 20Gbps speed, the answer is no. It is going to negotiate down to 10Gbps. 2x2 ports will only work at 20Gbps with 2x2 specific devices. Also 2x2 specific devices will not run at 20Gb/s on TB4/USB4 ports, either. So it is an orphaned standard, both on the port and device side. Because the USB group is dumb as fuckin poo poo. It is theoretically possible that someone could develop a future combo device driver that recognized both standards and switched between them, but considering how short lived this 2x2 standard is already looking and that only Asmedia has ever released any host or device silicon in the first place, i would put the chance of that at 0%.

To add to that, if you saw those USB4 speed numbers of 20Gbps or 40Gbps and got happy, good news! USB-IF hosed that up too! USB4 20Gbps certified devices only have to transfer data at... 10Gbps. The other 10Gbps can come from data or the required displayport pass through support. So even if a port says USB4 20Gbps, there is no guarantee it will run at anything higher than 10Gb/s for actual data. There is also no branding standard to figure out which USB4 20Gbps ports and devices support what, so being cynical, I think its likely that device makers won't bother supporting anything but the required portions.

Meaning if you plug a device into a host where both carry the old 2x2 logo: Congrats, you will get 20Gbps!

However, if you plug a device into a host where both carry the brand new fancy USB4 20Gbps logo: Well tough luck fucko, you're probably gettin 10Gbps.

On the USB4 40Gbps side, i started reading about all the PCIe tunneling tomfoolery and how it is severely speed limited in most current devices, and TB3 support not being guaranteed, and TB4, and all of that, but I think I'm done caring. poo poo sucks. If anyone knows more about this than me, feel free to correct me!
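The negotiation rules described above can be boiled down to a toy lookup. This models the post's claims, not the spec itself: Gen 2x2 only hits 20Gbps against a matching Gen 2x2 peer, and mixing it with USB4/TB4 on either side falls back to single-lane 10Gbps.

```python
# Toy model of USB 3.2 Gen 2x2 link negotiation as described in
# the post (not authoritative): 20Gbps requires 2x2 on BOTH ends;
# any 2x2 <-> USB4/TB4 pairing negotiates down to 10Gbps.

def negotiated_gbps(host, device):
    if host == "gen2x2" and device == "gen2x2":
        return 20
    # Orphaned standard: no 2x2 interop with USB4/TB4 at 20Gbps.
    return 10

print(negotiated_gbps("gen2x2", "gen2x2"))  # 20
print(negotiated_gbps("gen2x2", "usb4"))    # 10
print(negotiated_gbps("usb4", "gen2x2"))    # 10
```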

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
:stare: goddamn that’s nuts. USB definitely does seem to be the redheaded stepchild of pc connectivity compared to ethernet, PCIe and sas/sata.

GRINDCORE MEGGIDO
Feb 28, 1985


Why is USB such a clusterfuck?

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
It might be a clusterfuck, but most people in here probably never had to go through three different ISA serial port cards before finding one that'd work reliably with the family's 33.6kbps modem without making GBS threads the bed.

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
Don’t forget about the absolute fuckery that is compatible cables.

BlankSystemDaemon
Mar 13, 2009



Or just IRQ fuckery.

Fame Douglas
Nov 20, 2013

by Fluffdaddy
Another thing about USB that I hate is that they keep changing all the old names with every revision, so manufacturers can sell the same old USB 3.0 port as the new hotness.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

USB’s been hosed since “full speed” vs “high speed”; it has always been a compatibility driven standard. If you want enforced performance standards there’s always FireWire.

Fame Douglas
Nov 20, 2013

by Fluffdaddy
I'd say the issue is less the backwards compatibility, more the way marketing tries to obfuscate differences by constantly renaming everything. Also, IEEE 1394 (called FireWire by some brands) is very much a dead standard.

WhyteRyce
Dec 30, 2001

It sucks, but I guess that's what you get with a low cost bus that needs to support shitloads of devices from various vendors, where various concessions and exceptions are requested by every player in the market.

BlankSystemDaemon
Mar 13, 2009



Any bus that exposes DMA to an external port is asking for a security nightmare, so it's for the best that IEEE1394 is dead.

And since USB is heading in that direction, surely nothing can go bad, all possible lessons have been learned, right? :shepicide:

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord
my blood pressure went up just reading that, holy moley

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I was tasked with buying a few USB-C cables for connectivity to lab analyzers and am positively paralyzed by the options on amazon. It sucks trying to figure out which ones are high speed (important for getting 4-8GB data traces out of the analyzer quickly) and which aren't. UGH!

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

WhyteRyce posted:

It sucks but I guess that's what you get with the low cost bus that need to support shitloads of various devices from various vendors and various concessions and exceptions are requested by every player in the market.

Yup. The reason USB sucks is that it is very PCindustry.txt. Design by committee, and the committee includes actors which wanna make cheap low-effort junk.

BlankSystemDaemon posted:

Any bus that exposes DMA to an external port is asking for a security nightmare, so it's for the best that IEEE1394 is dead.

And since USB is heading in that direction, surely nothing can go bad, all possible lessons have been learned, right? :shepicide:

The CPU is also a "DMA" agent that can read and write any byte of physical memory. Because some of the things running on a CPU can't be trusted, we virtualize their view of memory and make them go through a page table crafted by a trusted piece of the system (the OS) to access memory.

The same exact solution exists for DMA peripherals. It's called an IOMMU, or VT-d in Intel parlance. This is why Intel requires VT-d (or equivalent) for Thunderbolt 4 certification.

IOMMUs actually ought to be used for every DMA peripheral, not just Thunderbolt ones. It isn't just about security, it's also stability: anything which can touch all of memory and has a bug can crash the system (or worse). Apple is doing this on Apple Silicon Macs; one of their slides from the presentation last year showed that everything which can DMA (including on-chip devices) gets its own private IOMMU.

IOMMUs were a known and accepted way to deal with IEEE1394 security implications back when it was still a living breathing evolving standard, btw. Your paranoia is extremely out of date.
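The IOMMU idea in the post above can be sketched with a toy translation table: the device only ever presents I/O-virtual page numbers, the trusted side (the OS) decides which physical pages those map to, and anything unmapped faults instead of reaching arbitrary memory. Page numbers here are arbitrary made-up values.

```python
# Toy sketch of IOMMU address translation: DMA goes through a
# table crafted by the OS, mirroring how CPU page tables confine
# untrusted code. Not a model of any real IOMMU's page walk.

class IOMMUFault(Exception):
    pass

class IOMMU:
    def __init__(self):
        self.table = {}          # io-virtual page -> physical page

    def map(self, iova_page, phys_page):
        self.table[iova_page] = phys_page

    def translate(self, iova_page):
        if iova_page not in self.table:
            raise IOMMUFault(f"device touched unmapped page {iova_page}")
        return self.table[iova_page]

mmu = IOMMU()
mmu.map(iova_page=0, phys_page=0x42000)  # the one buffer the OS allows

print(hex(mmu.translate(0)))             # permitted DMA: 0x42000
try:
    mmu.translate(7)                     # rogue access anywhere else
except IOMMUFault as e:
    print("blocked:", e)
```

The per-device variant Apple described is just one of these tables per DMA agent instead of one shared table, so a compromised device can't even see mappings set up for its neighbors.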

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

PCjr sidecar posted:

USB’s been hosed since “full speed” vs “high speed”; it has always been a compatibility driven standard. If you want enforced performance standards there’s always FireWire.

Yup. The main reason USB 2.0 and "high speed" were even created was that after Steve Jobs decided to try to extract lots of revenue for Apple per Firewire port, someone decided USB needed to take on Firewire's role, so they hacked in something that would give it a big paper number to make it look competitive.

It wasn't, though. Real world FW400 performance was far better than USB 2.0 480 Mbps. This is because USB 2.0 was mostly just 1.0 with a faster line rate, but the USB bus protocol wasn't designed for high performance since it had to be extremely cheap to implement.

Nevertheless, USB 2.0 effectively killed FW400's chance at becoming the standard of choice for things like external hard drives, video capture devices, etc. It was cheap, good enough, and had the powerful players in the PC industry behind it. (this stuff went down before Apple released the iPod or iPhone, so they were the opposite of the behemoth they are today)

So that was the first clumsy retrofit which set us down the path towards "USB 3.2 Gen 2x2".

WhyteRyce
Dec 30, 2001

Split packets and rate matching were a cool way to handle FS/HS compatibility. At least in theory, and not when you are trying to debug USB issues. SS just threw some more lines elsewhere on the connector and had them go somewhere else, which I guess is its own kind of elegance.

BlankSystemDaemon
Mar 13, 2009



BobHoward posted:

IOMMUs were a known and accepted way to deal with IEEE1394 security implications back when it was still a living breathing evolving standard, btw. Your paranoia is extremely out of date.
Yes, I know - it's not exactly new - and yet Intel was still shipping products without VT-d up until 2016, which are still found in products sold today.
It's still notably missing from most ARM chips that you find in, for example, Chromebooks; I've not yet been able to check whether it's in the M1.

All of this assumes there aren't any problems with the implementation, and considering the track record that Intel has for implementing features that we all depend on (whether we want to or not) but which are found to be less than secure (see: hyperthreading, management engine, uefi, et cetera ad nauseum), I'm not holding my breath.
Which was my entire point to begin with.

BlankSystemDaemon fucked around with this message at 23:02 on Feb 25, 2021

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

BlankSystemDaemon posted:

Yes, I know - it's not exactly new - and yet Intel were still shipping products without VT-d up until 2016, which are still found in products sold today.

I said it before: by Intel's own rules vendors can't certify something without VT-d or equivalent as having TBT 4.0 ports. So if you're paranoid, look for the certification.

quote:

It's still notably missing from most ARM chips that you find in, for example, Chromebooks, I've not yet been able to check whether it's in the M1.

You will never find VT-d itself on an ARM chip, because VT-d is an Intel marketing name for Intel's x86 IOMMU. Intel has officially clarified that TBT 4.0 certification requires any IOMMU capable of passing TBT conformance tests, even though some of their material calls out VT-d as the requirement. They're just using VT-d as a synonym for IOMMU because it's an Intel trademark and it helps market the brand.

You won't find VT-d implemented in an AMD CPU either. AMD has their own x86 IOMMU, which appears to be named AMD-Vi, and iirc AMD includes it in all AMD64 CPUs rather than trying to use it as a premium feature like Intel does.

Apple M1 has IOMMUs. As is often the case I'm unsure whether you even read what I wrote. To recap, Apple's presentation on Apple Silicon said that every DMA device gets its own IOMMU. (Without getting into specifics, Apple claimed that per-device IOMMUs provide extra protection compared to the Intel model where all devices share one IOMMU.)

quote:

All of this assumes there aren't any problems with the implementation, and considering the track record that Intel has for implementing features that we all depend on (whether we want to or not) but which are found to be less than secure (see: hyperthreading, management engine, uefi, et cetera ad nauseum), I'm not holding my breath.
Which was my entire point to begin with.

IOMMUs have been around a while, have been beaten on for years, and are extremely small attack surfaces compared to things like ME, UEFI, or even HT. It's not a risky new tech.

I'll put it this way: do you use any cloud hosted service? (like, say, the Something Awful forums?) Then you have been relying on Intel VT-d not having exploitable flaws, because major cloud hosting service providers rely on VT-d (among other things) to isolate instances from each other.

Fame Douglas
Nov 20, 2013

by Fluffdaddy

BobHoward posted:

I'll put it this way: do you use any cloud hosted service? (like, say, the Something Awful forums?) Then you have been relying on Intel VT-d not having exploitable flaws, because major cloud hosting service providers rely on VT-d (among other things) to isolate instances from each other.

All those CPU bugs of the past few years tell us that no, cloud instances aren't properly isolated from each other.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Fame Douglas posted:

All those CPU bugs of the past few years tell us that no, cloud instances aren't properly isolated from each other.

True, but the point remains that unless you're living under a technological rock, you're already assuming those risks with basically anything else you're doing. To worry about the security concerns of physically connected devices via IOMMU by comparison is, perhaps, worrying about the wrong things.


Potato Salad
Oct 23, 2014

nobody cares


Fame Douglas posted:

All those CPU bugs of the past few years tell us that no, cloud instances aren't properly isolated from each other.

Yes, the microarchitectural attacks of the last couple of years permit an attacker to massage cache management and execution in a way that exfiltrates data from neighboring processes.

I'm not aware of any of them that was an attack on the IOMMU, however. A few of them attacked the way cache is loaded during speculative execution; a few have exploited physical weaknesses in Intel's secure memory feature (a flag that purported to manage secure memory IDs for you, permitting operating system maintainers to backport the feature at the level of the OS so that software developers don't have to change all their legacy code to take advantage of the speed benefits, but which of course Intel implemented poorly); and one was an attack taking advantage of transient electrical phenomena in non-ECC RAM.

Like, guest escape via the IOMMU isn't really something that happens. It's likelier an attack is going after other Intel features, hardware weaknesses, and things like bad optical media or floppy disk drivers written by hypervisor devs in an era before caring about memory safety was considered cool.

Potato Salad fucked around with this message at 02:36 on Feb 26, 2021
