JawnV6
Jul 4, 2004

So hot ...

movax posted:

is that getting back die area? Or removal of microcode

what's the difference

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

D. Ebdrup posted:

They should give up on OoO altogether.

hell yea

go back to stack operation architecture while we're at it

BlankSystemDaemon
Mar 13, 2009



I'm sure we as a species can manage to build faster in-order processors than we were building back in the 90s - and at the very least, we can build more cores on each processor. The only ones who stand to lose are the people who run HPC on firewalled networks, and those machines generally won't be susceptible to the kinds of attacks that OoO permits.

Memory has also gotten a lot faster, so while the memory wall (which is the whole reason we're in this mess) is still there it's not as big of a problem.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:55 on Mar 23, 2021

JawnV6
Jul 4, 2004

So hot ...
I think you run out of applicable tricks once you've thrown enough gates at in-order. Just tossing in 'oh just make a lot of tiny cores!!' like that absolves you of further tradeoffs is asinine. It can occupy a different part of the power/perf curve without being a wistfully unused panacea for all other architectural issues.

Cygni
Nov 12, 2005

raring to post

Comet Lake-H mobile CPUs will launch April 3rd, and Comet Lake-S for desktop will launch April 30th:

https://videocardz.com/newz/intel-to-announce-10th-gen-core-comet-lake-s-on-april-30

movax
Aug 30, 2008

I've been eyeing grabbing the new XPS 13 for work — 16:10 screen and lots of shinies, but... Ice Lake under the hood, not Comet Lake. Maybe I'll wait another year.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

movax posted:

I've been eyeing grabbing the new XPS 13 for work — 16:10 screen and lots of shinies, but... Ice Lake under the hood, not Comet Lake. Maybe I'll wait another year.

But isn't Ice Lake the good one, not Comet Lake?
Ice Lake is the 10nm design with the way better GPU
Comet Lake is 14nm with a poo poo GPU.

Intel has really made their line-up even more confusing this time

HalloKitty fucked around with this message at 23:18 on Mar 26, 2020

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Yeah, Ice Lake is the one you want.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


What the hell is the point of releasing Comet Lake? It's cheap?

movax
Aug 30, 2008

HalloKitty posted:

But isn't Ice Lake the good one, not Comet Lake?
Ice Lake is the 10nm design with the way better GPU
Comet Lake is 14nm with a poo poo GPU.

Intel has really made their line-up even more confusing this time

Personally, I don't care about the iGPU performance, but I was more talking about this type of thing: https://www.anandtech.com/show/15385/intels-confusing-messaging-is-comet-lake-better-than-ice-lake

I think for some workloads, Ice Lake in its current state (mostly lower clocks, lower boost) actually loses to Comet Lake. The line between these two architectures seems fuzzy, as best I can tell — the move to 10 nm is important, but it may not necessarily be a slam dunk over its predecessor.

Cygni
Nov 12, 2005

raring to post

Tab8715 posted:

What the hell is the point of releasing Comet Lake? It's cheap?

10nm is still comparatively low volume and its advantage is mostly in power usage/density, not performance. With Intel canning the single socket Cooper Lake, I imagine the vast majority of 10nm capacity is going to go to the Ice Lake SP Xeons.

Cygni
Nov 12, 2005

raring to post

Production parts/boards are leaking, like always!

10900K with an Asrock Z490M Pro4 motherboard:

https://browser.geekbench.com/v5/cpu/1584149

Raw data:

https://browser.geekbench.com/v5/cpu/1584149.gb5

Seems to sit at 5075 MHz for most of the test. Scores are more or less a 9900KS in single core and a 3900X in multicore, which is to say Good and pretty much as expected. But obvi price+temps are gonna make or break whether it's a great all-rounder, or just something that only "price is no object" gamers will want.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

D. Ebdrup posted:

I'm sure we as a species can manage to build faster in-order processors than we were building back in the 90s - and at the very least, we can build more cores on each processor. The only ones who stand to lose are the people who run HPC on firewalled networks, and those machines generally won't be susceptible to the kinds of attacks that OoO permits.

Faster in-order than the 90s has already been done. So have chips with shitloads of simple in-order cores. Neither has been commercially successful against fewer OoO cores. Gene Amdahl knew what the gently caress he was talkin' about.

quote:

Memory has also gotten a lot faster, so while the memory wall (which is the whole reason we're in this mess) is still there it's not as big of a problem.

No. It's far more of a problem than it was in the 1990s. Why do you think modern chips have so many layers of cache hierarchy, and why is each layer so huge?

Caches, by the way, are just as much The Problem as OoO itself. Most of these microarchitectural data sampling attacks I've read up on rely on cache timing side channels to move sampled data from a microarchitecturally dead branch to a live one.

At this point it's likely that literally every microarchitectural feature which results in non-constant instruction timings is a risk. Naive appeals to the simple days of yore aren't gonna cut it; we need to figure out how to make these features work safely.

BlankSystemDaemon
Mar 13, 2009



Doesn't Amdahl's Law state that if one task out of a set of tasks, that takes X hours, is the only one that cannot be parallelized, it's still possible to gain a maximum of X times improvement, meaning you still get a linear improvement?
And that, excepting the special case, it's not just a law of diminishing returns, since oftentimes quick fixes to code tend to not provide as much improvement as code that takes more time to develop?

I know that there are tasks which are very hard to parallelize, but I remember a time back when I was working on video encoding, where it was thought to be impossible to do that multi-threaded. Nowadays, even x264 and x265 are capable of doing it.
So isn't it "just" a question of solving those hard-to-parallelize problems?

If a guy like Andy Glew (who now works at SiFive) can come along and introduce the whole concept of OoO, SMT, and using caches to combat the memory wall, at least according to his published works, then surely other people might be able to solve other problems of a similar magnitude?

EDIT: Curiously, some of the original patents that Andy filed have already run out, and the patents that underpin all of our current processors are soon to run out.

BlankSystemDaemon fucked around with this message at 13:25 on Mar 27, 2020

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
That's not quite what Amdahl's law says. It's more like this: Assume a program you are trying to accelerate takes 10 hours to execute on a single processor. Assume further that 1 hour of the program's work is serial, i.e. there is no way to parallelize it. Therefore: no matter how many processor cores you throw at the problem, you are unable to run the task in less time than 1 hour. You could have a million processors and it would not give you a 1,000,000x speedup, only a 10x.

The question is always this: How much of the work cannot be parallelized?
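
If you want to play with the numbers, here's a minimal Python sketch of that relationship (the 0.1 serial fraction is just the 1-hour-out-of-10 example above; the core counts are arbitrary):

code:
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: total speedup is capped by the fraction of work that stays serial
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 10-hour job, 1 hour of it unavoidably serial -> serial_fraction = 0.1
for cores in (1, 2, 8, 64, 1_000_000):
    print(f"{cores:>9} cores -> {amdahl_speedup(0.1, cores):.2f}x")
# even a million cores converges on a 10x speedup, never more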

In video encoding, you gave an example which isn't quite what you think it is. Nobody actually figured out how to parallelize encoders in a perfect way. Instead, they accept a tradeoff: as you add more threads, you get worse visual quality at the same output bitrate (or similar quality at worse output bitrate, your pick). You will essentially never get bit-identical output to the single- or minimally-threaded version of the compression algorithm, because the parallelized version is inherently not the same.

Why is that so? Because video encoding produces heavily serialized output, and serialized output implies the work is also serialized. This is because encoding involves finding frame-to-frame similarities which can be left out of the output bitstream. Some frames (I-frames) are still encoded independently so they can be decompressed without reference to others (you have to do this to support seeking to random positions in the video). All other frames are encoded as derivatives of other frames. This means there's just tons of dependency chains.

If you deliberately break those chains - for example, you emit I-frames with greater frequency - you can create more opportunities to break up the work into chunks which have no dependencies with each other. The cost is that you don't compress as well, because building longer dependency chains is how you increase the compression ratio in video encoding.
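
To make that concrete, here's a toy Python sketch of the chunking idea — encode_segment() is a made-up stand-in for a real encoder, and this is not how x264 actually threads internally; it just shows why forcing an I-frame at each chunk boundary creates independently encodable work:

code:
from concurrent.futures import ThreadPoolExecutor

def encode_segment(frames):
    # stand-in for a real encoder compressing one independent GOP
    return b"".join(frames)

def parallel_encode(frames, gop_size, workers=4):
    # each chunk starts at a forced I-frame, so chunks share no dependency chains
    chunks = [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(encode_segment, chunks))

# smaller gop_size = more parallel chunks, but more I-frames = worse compression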

Fortunately, in video encoding you can often get away with worse quality or worse bitrate, and it's not the end of the world if results are not bit-for-bit identical.

Unfortunately, those properties are definitely not true of all the things people want to do with computers.

kaworu
Jul 23, 2004

Uh, hey guys, I have a bit of a query...

So basically, as of February I was in the market for a new laptop and looking around. The general consensus seemed to be that it would be *smart* to wait, because the laptop I wanted was the Alienware Area 51-m (I know, I have my reasons) but there were some aspects that made the timing bad. For one, a "super" version of the NVIDIA RTX 2070 Mobile was going to be coming out sometime soon. And then there was the rollout of the new 10th-generation Intel processors. It just seemed like the model was due for a refresh in a matter of a few months, and waiting would be prudent.

Then the whole coronavirus thing happened, and my old laptop started getting more and more buggy with blue-screens happening daily from the video card crashing. This was early March, and around that time a 17% discount became available, which would knock a cool ~$550 or so off the sticker price. That, combined with rumblings of supply-chain issues, and I just pulled the trigger - I am very happy I did! After all, this is like a laptop where I can ACTUALLY upgrade both the CPU and GPU with relative ease.


Anyway, my question basically just has to do with the processor I wound up with on this laptop: an i7-9600K. You might be thinking "hey, that's not a mobile processor!" and you'd be right - this laptop is a throwback to bulky desktop-replacement machines, and has enough cooling tech in it to easily run a desktop-class processor without exploding or catching fire. Frankly, the 9600K runs pretty drat cool in this with room to overclock it - this machine was designed to carry a goddamn i9-9900K and run it on full-blast for hours, so it can handle quite a bit.

Anyway, I actually wasn't aware the 9600K didn't have hyper-threading when I got it at first, and I thought I'd made a big mistake... But frankly, I think I made the right choice. I only render video very occasionally, and given what I *think* I know about processors, it should be *almost* just as good as the 9900K when it comes to things like gaming.

But my question is basically this: when should I even begin to think about upgrading the processor? I am sorta thinking that I can skip this coming generation (at the least). I'm actually conflicted, to say the least, about the whole AMD vs Intel/NVIDIA thing. Seems that AMD is way ahead of Intel right now on processors, and Intel is sort of playing catch-up.

Sorry for the rambling effortpost guys, it's just how I roll :cool:

Shaocaholica
Oct 29, 2002

Fig. 5E
Heh I just got 1080p24 'smooth' playback of modern encoded h.264 'movies' on a Pentium-M 780. That's a 2005 part, 32-bit single core single thread. Kinda like a 2.2GHz Pentium III. Had to take some liberties on the decoder settings but it's very watchable and only an AV nerd would be able to spot the decoding shortcuts.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

BobHoward posted:

In video encoding, you gave an example which isn't quite what you think it is. Nobody actually figured out how to parallelize encoders in a perfect way. Instead, they accept a tradeoff: as you add more threads, you get worse visual quality at the same output bitrate (or similar quality at worse output bitrate, your pick). You will essentially never get bit-identical output to the single- or minimally-threaded version of the compression algorithm, because the parallelized version is inherently not the same.

Why is that so? Because video encoding produces heavily serialized output, and serialized output implies the work is also serialized. This is because encoding involves finding frame-to-frame similarities which can be left out of the output bitstream. Some frames (I-frames) are still encoded independently so they can be decompressed without reference to others (you have to do this to support seeking to random positions in the video). All other frames are encoded as derivatives of other frames. This means there's just tons of dependency chains.

If you deliberately break those chains - for example, you emit I-frames with greater frequency - you can create more opportunities to break up the work into chunks which have no dependencies with each other. The cost is that you don't compress as well, because building longer dependency chains is how you increase the compression ratio in video encoding.

eh, with video encoding I don't think this specifically is true. I don't think anyone uses shorter keyframe limits (fewer frames between keyframes) when they use more threads, nor does keyframe placement necessarily work like that. You don't analyze N chunks of the video in parallel, because it's serial, and the optimal placement can't be independently determined without churning through the previous frames.

Instead, what you have is something closer to branch-and-bound, where N threads are searching for the best way to "shorthand" the elements of the picture and how they're moving (motion estimation). You are looking for the best encoding out of some set of possible encodings that you define. Higher "quality presets" (slower, veryslow, etc.) let you search at finer precision and across a deeper temporal range, but a single-threaded encoding does not get to search any "state space" that a multithreaded encoding does not. In fact the multithreaded implementation may be able to search deeper than the single-threaded one for a given amount of time.

I just tried it on a 1-minute clip of 480p video with crf=29 and the standard behavior does not change any internals according to the x264 config dump. The multithreaded file was binary-different but was actually smaller (2611 kB vs 2618 kB for a single-threaded run). I guess I would need to look at PSNR but I don't really know what I'm looking at there.

edit: with crf=39 (much more stringent, you are probably not hitting an "optimal case") the file was 1000 kB vs 1001 kB for single-threaded and I don't really see much visual difference between the two (both have really chunky blocking).
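
A rough Python sketch of how you could reproduce that kind of comparison with ffmpeg's libx264 wrapper — not necessarily the exact commands used above; clip.mp4 and the CRF are placeholders, and it assumes ffmpeg is on your PATH:

code:
import os
import subprocess

def encode(threads, out):
    # threads=0 lets libx264 pick a thread count automatically
    subprocess.run(
        ["ffmpeg", "-y", "-i", "clip.mp4",
         "-c:v", "libx264", "-crf", "29", "-threads", str(threads), "-an", out],
        check=True,
    )
    return os.path.getsize(out)

single = encode(1, "single.mkv")
multi = encode(0, "multi.mkv")
print(single, multi)  # expect binary-different files with slightly different sizes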

Paul MaudDib fucked around with this message at 22:27 on Mar 27, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E
I used to do this kind of stuff for fun at work. Run a bunch of encodes with different parameters, load all the encodes into Nuke, visualize the mathematical difference from the source footage for all encodes simultaneously, and just eyeball the differences in artifacts.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

kaworu posted:

But my question is basically this: when should I even begin to think about upgrading the processor? I am sorta thinking that I can skip this coming generation (at the least). I'm actually conflicted, to say the least, about the whole AMD vs Intel/NVIDIA thing. Seems that AMD is way ahead of Intel right now on processors, and Intel is sort of playing catch-up.

Honestly? Probably never. Upgrades to a laptop like that are usually an unfortunate waste of time/effort/money. I'm not sure how you managed to spend >$3000 and only end up with a 9600k, but whatever, that's a perfectly fine CPU, and will likely be a perfectly fine CPU for at least another 3 years. The 2070 in there will likely be a solid mid-class card for at least 2-3 years. So realistically you're probably looking at ~3 years before it starts to feel overly slow, unless you were making the mistake of thinking you could ride it to 150+ FPS on Ultramaxx settings on AAA games 3 years from now, in which case...yeah, it just won't.

Anyhow, what I'm saying is that by the time the CPU/GPU are due for an upgrade, you'll almost certainly be better off just getting a new mid-range rig anyhow, for various reasons. Hopefully monitor tech will have improved and OLED or micro-LED will be available. DDR5 will be in play. And while in theory you can just drop in a desktop CPU for ~$300, the 2070 MXM upgrade kit right now costs about $1000, and there's no reason to think that the equivalent kit in 3 years won't cost a similar amount--if it's even available. So now you're talking $1300+ to upgrade a laptop that's already gonna be otherwise behind the curve and showing the usual 3 years of wear (are the hinges broke yet? if not, they will be soon!). Which is...not much less than what you can get a 2070 laptop for today.

tl;dr don't bother upgrading, just wait ~3 years until it's time to move on and then get another $1500 laptop and call it a day.

Shrimp or Shrimps
Feb 14, 2012


The GPU in the A51M is proprietary and not MXM, too, so a lot of the price or possibility of upgrading that depends on whether Alienware ever releases an upgrade kit, and if they do it will definitely have that Alienware tax, likely in excess of the MXM tax.

But for the CPU, OP, unless your workload substantially shifts to multicore workloads, going from an overclocked 9600k to a stock 9900k probably won't ever be worth it for gaming.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

kaworu posted:

But my question is basically this: when should I even begin to think about upgrading the processor? I am sorta thinking that I can skip this coming generation (at the least). I'm actually conflicted, to say the least, about the whole AMD vs Intel/NVIDIA thing. Seems that AMD is way ahead of Intel right now on processors, and Intel is sort of playing catch-up.

Given that the A51m uses desktop processors, a 9900KS is your highest-end option, and honestly, it's probably only worth doing in ~2-3 years when people are unloading them for whatever the new hotness is then.

"Upgradeable" laptops are really only designed for the kids of Saudi princes and team-affiliated streamers/pro gamers who have expense accounts and sponsors.

kaworu
Jul 23, 2004

DrDork posted:

I'm not sure how you managed to spend >$3000 and only end up with a 9600k,

Oh, *poo poo*. I uh, misspoke greatly. The processor in this thing is an i7-9700K - as in the desktop model. You take off the panel in the back of the laptop and I mean, there's a Z390 chipset and the processor right there, and swapping out the CPU on this thing is practically no different than doing it on a desktop rig. Which is very cool! But given that it is an (overclocked) i7-9700K and not the 9600K (thankfully), I am hoping that it does at least have more than 2-3 years of life?

Oh, and this laptop only cost me ~$2,400 or so plus tax, coming out to around ~$2580. The subtotal was something like $3200 before discounts, though. In addition to that 17% discount, I managed to finagle an additional $300 discount by working the salesman a bit and pretending to be really on the fence - so all in all I saved something like $800+ with all the discounts combined, and it made a killer laptop I'd never be able to afford... barely affordable. I'd always rather spend a bit more money on a quality product that actually lasts... Before I went with Alienware, I was absolutely getting a new laptop every 2-3 years, and it was loving RIDICULOUS - they were always riddled with problems and breaking down and would have really cheap and poorly made hardware. I had never tried Alienware because they seemed too gimmicky, but in 2015 I said "gently caress IT!" and plunked down an extra $500 bucks to get the new Alienware 15 that had just come out, even though I could've gotten a Clevo/Asus/MSI/whatever for less money with the same hardware. And it was the best computer I had ever bought by a mile - worked for over 5 years without a single problem or hiccup or issue during that time, and I worked that laptop hard. Anyway.

And with the GPU, possibly upgrading that *is* a little bit of a question mark. You see, rather than being soldered to the motherboard like it almost always would be, it's attached with this proprietary tech called "Dell Graphics Form Factor" (DGFF) which allows you to remove it with some ease. I'm unsure if your average end-user can properly re-apply a heat sink (apparently Dell is a bit unsure too according to an article I just read) so I'd guess they'd give people the option of a "ship and return" kind of thing. DGFF is a pretty new concept, and despite my doubts about Dell's competency in making it work, hope springs eternal. I kinda hope they learn from MSI and the whole "Pascal" failure.

Anyway - I honestly think that an i7-9700K + RTX 2070 is a pretty strong setup - with a screen at 1080p, I cannot imagine that any game (except maybe Metro) would fail to run at at least 60FPS on max settings. In terms of benchmarks for that GPU that seems to be the case, and I honestly think that the 9700K will be powerful enough... Well, I hope.


I mean, when I was spec'ing the laptop, I had a choice between 4 processors, and these were my options - please tell me if you think I chose well, or poorly.

#1 i7-9700 + $0 - this is what the laptop comes with at its base configuration
#2 i7-9700K + $100 - this is what I went with, for reasons I will explain
#3 i9-9900 + $350 - I might have gotten this one at a certain point but it would have been a mistake - if you're going to spend an additional $350 you may as well spend the extra $100 and go all out, no?
#4 i9-9900K + $450 - I'd like to have gotten this, but in truth the $2400 I spent on this laptop was already $400 over my budget so, yeah.

So, those were my options... think I made the right choice?

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

A 9700k and a 2070 is gross overkill for 1080p60.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ItBreathes posted:

A 9700k and a 2070 is gross overkill for 1080p60.

The more overkill it is, the more you can let it clock down and the more efficiently and cooler it’ll run. Set a frame limiter for 60fps.

Zen2 APUs would be better for a laptop though.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

kaworu posted:

Oh, *poo poo*. I uh, misspoke greatly.

Ah, ok. I thought you meant that the 17% discount = $550, which implied a >$3000 price, rather than the much more reasonable price you actually paid. You absolutely made the right choice not getting the 9900k: totally not worth the price.

Anyhow, the problem with upgrading the CPU is the chipset/socket: there's no really meaningful CPU upgrade for you off a 9700k that makes any sense within the 9-series chips (a 9900k just adds hyper-threading and a little extra L3), and Comet Lake will use a new socket, so in ~3 years when things are feeling a little slow, you won't be able to upgrade to a "modern" CPU: you'll be stuck with 9-series chips.

The Dell-proprietary GPU socket is a huge concern. As was already pointed out, even if they bother making a drop-in RTX 3080 (or whatever) kit, it'll be monstrously expensive. The 2070 one right now is $1000. And a 2070 will run 1080@144 for a good while.

Again, the upgradability is mostly a gimmick. By the time you would want to upgrade, the price structure won't make any sense for you to actually do so: just sell it for $500, take the $1500 you'd have spent upgrading, and buy a new $2000 laptop.

DrDork fucked around with this message at 03:02 on Mar 28, 2020

kaworu
Jul 23, 2004

ItBreathes posted:

A 9700k and a 2070 is gross overkill for 1080p60.

Well, it's a desktop replacement though. Part of the idea is that you can plug this thing into a monitor with an external mouse and keyboard, and it can run games at much higher resolutions. That's how the laptop is intended to be used.

I can understand why a laptop like this is not practical for most people - it really doesn't make much sense for your average person! But me, I'm living a bit of a nomadic life right now, and I am not entirely sure where I'm going to end up living or what I'm going to have access to sometimes. I might have access to a desk and a monitor, I might not. I might end up staying somewhere that is practical for desktop computing, but I cannot rely on it. Lugging around a desktop would be utterly impractical for me in my life, and I've never been the kind of guy who uses his laptop in the coffee shop or anything - my phone is more than enough computer when I'm out and about. So something like this is absolutely perfect for me, to be frank.

And hearing that it is overpowered as hell makes me feel *good* - the whole idea behind buying a gaming laptop (from my point of view) is getting something that is *WAY more* powerful than you need, so in 5 years it hopefully won't be obsolete... Who knows.

And you guys are probably right about the upgrade stuff, kind of a pipe dream/poorly executed concept. I am fine with what I have though :)

kaworu fucked around with this message at 03:39 on Mar 28, 2020

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
if it is really that easy to access the CPU and GPU, it will at least be easy to change the thermal paste if it starts overheating in a year or two

Shrimp or Shrimps
Feb 14, 2012


Paul MaudDib posted:

if it is really that easy to access the CPU and GPU, it will at least be easy to change the thermal paste if it starts overheating in a year or two

On most laptops these days it's pretty easy. On the A51M it is, uh, not that easy because of the 'ribcage' or whatever they call it, and cabling being wired over the top of it. It's actually a little weird to me how difficult they made it to get the whole heatpipe and sink assembly off for a repaste, considering it's absolutely a product aimed at enthusiasts.

The A51M with newer bios revisions however is going to run cool on the GPU because the GPU is set to throttle hard at 75c (instead of 82 or whatever) to avoid the VRM issues they were having at launch killing 2080 GPUs because the VRMs are/were passively cooled.

@kaworu the A51M is definitely a beast like most "DTR" laptops. It's going to reliably max games out at 1080p60fps for a long while. I also definitely understand your particular use case for a DTR, and am looking to possibly get one myself once we hit the 3xxx generation of nVidia GPU or AMD equivalent (lol).

But to be honest, 5 years I don't think is something to bank on. I guess what I'm saying is, 5 years tends to be quite optimistic for a gaming laptop, even one with desktop components. Like, a 5 year old desktop is running, what, a GTX980 and 6700K? For 1080p60 that's mostly fine, but if you're going for high refresh rates or 4K, it's game-dependent.

Even assuming the hardware is relatively performant then, other things can go wrong with laptops (like a hinge breaking or the charger port breaking or whatever) that can meaningfully shorten the life of a laptop if you can't A) fix it yourself once it's out of warranty, or B) if you can, but replacement parts aren't available.

With the new generation of consoles coming out with some very powerful hardware, it's going to raise the bar in PC gaming, too.

Definitely be happy with your purchase and enjoy gaming on it! You are absolutely right that that 9700K is giving more gaming performance than a 9880H mobile 8c/16t part simply due to being allowed to clock up higher and maintain a higher power draw, and of course it completely outclasses a 9750H 6c/12t part.

But to answer your original query, the only reason you would have to upgrade to a 9900K is if your workload shifts primarily to a multithread-benefitting one, like if you started rendering videos every day, doing it quicker was imperative, and it benefited from hyperthreading.

For gaming it's unlikely to ever be a worthwhile upgrade.

Shrimp or Shrimps fucked around with this message at 06:55 on Mar 28, 2020

Budzilla
Oct 14, 2007

We can all learn from our past mistakes.

Paul MaudDib posted:

if it is really that easy to access the CPU and GPU, it will at least be easy to change the thermal paste if it starts overheating in a year or two
Or get a laptop cooler to help out.

JawnV6
Jul 4, 2004

So hot ...

D. Ebdrup posted:

If a guy like Andy Glew (who now works at SiFive) can come along and introduce the whole concept of OoO, SMT, and using caches to combat the memory wall, at least according to his published works, then surely other people might be able to solve other problems of a similar magnitude?

so last page OoO was a dead-end garbage idea that the world should give up on, but here in the brave new world of 549 it's the only ticket to get past the memory wall?

BlankSystemDaemon
Mar 13, 2009



JawnV6 posted:

so last page OoO was a dead-end garbage idea that the world should give up on, but here in the brave new world of 549 it's the only ticket to get past the memory wall?
Good job completely missing my point, which was that if OoO/SMT aren't the solution to the memory wall problem because they are inherently problematic in implementation (as certainly seems to be the case), there may be some other solution(s) which is only waiting for a genius to come along and invent it.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

there may be some other solution(s) which is only waiting for a genius to come along and invent it.

This is true of an enormously large number of problems, though, and companies can't rely on some genius breakthrough that redefines the entire computational landscape. So we'll keep churning on OoO and trying to mitigate the security aspects of it until Mr. Genius pops up at some indeterminate point in the future. Or maybe Mr. Genius figures out how to fix the security issues of predictive OoO execution and it's all fine! :iiam:

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

Or maybe Mr. Genius figures out how to fix the security issues of predictive OoO execution and it's all fine! :iiam:
That part they're at least already working on, as it doesn't require genius. The problem is that industry veterans who I've heard talk about this have all agreed it could easily take 20 years to ensure that the problems have been fixed.

EDIT: ...which probably explains why one of the experts who said this ended up working on redesigning silicon.

BlankSystemDaemon fucked around with this message at 21:00 on Mar 30, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, it's amazing how deeply rooted some of the security issues are in modern designs. 20 years doesn't sound outlandish at all, honestly.

So we're basically waiting for Mr. Genius to magically fix in-order design, or Mr. Slightly-Less-Genius to magically fix up OoO's security issues in a reasonable time-frame. :shrug:

JawnV6
Jul 4, 2004

So hot ...

D. Ebdrup posted:

Good job completely missing my point, which was that if OoO/SMT aren't the solution to the memory wall problem because they are inherently problematic in implementation (as certainly seems to be the case), there may be some other solution(s) which is only waiting for a genius to come along and invent it.
what's "inherently problematic" about them? some german university can sniff secrets out of SGX at one bit per week? you're needlessly conflating "what shared hosting providers will buy" with viable computer architecture strategies.

and the lack of attempted solutions isn't indicative of a dearth of geniuses or ideas. specific to x86, what if you holy poo poo this violated NDA wow whoops soz

JawnV6 fucked around with this message at 20:07 on May 5, 2020

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

Yeah, it's amazing how deeply rooted some of the security issues are in modern designs. 20 years doesn't sound outlandish at all, honestly.

So we're basically waiting for Mr. Genius to magically fix in-order design, or Mr. Slightly-Less-Genius to magically fix up OoO's security issues in a reasonable time-frame. :shrug:
So like I started with saying: we're screwed, and might as well go back 20 years unless we're doing HPC. :P

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Except with OoO everyone who isn't a shared hosting provider can more or less keep on keepin' on with minimal risk. Going back decades seems like a poor exchange against what, for the vast majority of people, will only be a theoretical issue that applies to someone else.

snickothemule
Jul 11, 2016

wretched single ply might as well use my socks
Hey fellas, with all the self-isolation going on I think I'm developing a fresh case of brain worms. I'm looking at a 1660 v4 ES chip (6900K equivalent) that is reported to be overclockable. Being an engineering sample, I understand, is a huge risk, but having 2 extra cores over my 6800K could do wonders for some of my workloads in photogrammetry and give me 40 PCIe lanes instead of the current 28 (which isn't a problem now, but I may end up adding a U.2 drive down the track).

I have a strong sense this is a foolish endeavor but...brain worms.

I keep telling myself to just wait for the 4900x or equivalent and stop horsing around, but it's been years since I've done anything with this machine and I have the itch to muck around.
