Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
Apparently the Alienware BIOS won't even accept a 7700K.

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

I'm sure the answer was roughly "because it's there". Any time you erect a barrier like that, people will try to tear it down and see.
idk, I'm trying to think of an example of a technical solution that made sense at a particular point in time, but so many folks started using it that it locked the manufacturer into producing compatible iterations instead of allowing some flexibility. is there anything like that? hard to say.

BobHoward posted:

Don't know that I agree about the Transmeta approach being inevitable. Agree that they had some smart people, but I think they failed due to fundamental issues with the concept. Every time someone tries to veer away from brainiac core designs, the brainiac cores just keep on winning.
idk what 'brainiac' cores means, but my feeling is that "cpu perf" is solved in an incredibly myopic way most of the time: 1% gains on a synthetic benchmark that starts with empty caches and has nothing to do with the "steady state" of a long compute process running for minutes. CMS (Transmeta's Code Morphing Software) is one of the few places you can solve that sort of mismatch. the things most folks think matter (e.g. branch prediction) are useless diversions; for most things, if you could cut read bandwidth in half you'd double real perf, and that type of elision is going to be invisible/impossible to most uarchs.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

JawnV6 posted:

idk what 'brainiac' cores means, but my feeling is that "cpu perf" is solved in an incredibly myopic way most of the time: 1% gains on a synthetic benchmark that starts with empty caches and has nothing to do with the "steady state" of a long compute process running for minutes. CMS (Transmeta's Code Morphing Software) is one of the few places you can solve that sort of mismatch. the things most folks think matter (e.g. branch prediction) are useless diversions; for most things, if you could cut read bandwidth in half you'd double real perf, and that type of elision is going to be invisible/impossible to most uarchs.

'brainiac' is a somewhat loose term for going deep on superscalar / out-of-order / speculative execution / etc. A brainiac CPU has control logic that's very clever about analyzing a nominally serial stream of instructions to extract instruction level parallelism (ILP). Apple's Firestorm cores (the big cores in A14 and M1) are probably the most brainiac design currently being sold to consumers.
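
To make "extract ILP" concrete, here's a toy sketch of the dataflow view a brainiac core chases. The block of pseudo-instructions and its dependencies are invented for illustration; it just compares strict in-order issue against the critical-path limit an ideally wide OoO core could reach.

code:

# toy ILP estimate: in-order issue (1 instr/cycle) vs. the dataflow
# critical path an ideally wide out-of-order core could follow
deps = {
    "load_a": set(),
    "load_b": set(),
    "add1":   {"load_a", "load_b"},
    "load_c": set(),
    "mul1":   {"add1", "load_c"},
    "load_d": set(),
    "add2":   {"load_d"},          # independent of the mul chain
    "store":  {"mul1", "add2"},
}

def dataflow_depth(deps):
    memo = {}
    def depth(i):
        if i not in memo:
            memo[i] = 1 + max((depth(p) for p in deps[i]), default=0)
        return memo[i]
    return max(depth(i) for i in deps)

serial = len(deps)                # one instruction per cycle, in order
limit = dataflow_depth(deps)      # critical path, infinitely wide core
print(f"in-order: {serial} cycles, dataflow limit: {limit} cycles, "
      f"ILP ~{serial / limit:.1f}")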

Transmeta hoped to gain power and area efficiency advantages by going anti-brainiac: the CPU core was an in-order VLIW design with next to no silicon or power spent on control logic. CMS was there to smooth over the classic issues with trying to use VLIW in general purpose computers, and also attempted to provide OoO-like features in software. (I think that's what your flexibility comments were about, yeah?)
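
The shape of the code-morphing idea reduces to something like this toy: interpret guest blocks, profile them, translate the hot ones once, then dispatch straight to the translated version. The two-op guest ISA and the hotness threshold are invented; the real CMS emitted native VLIW code into a translation cache and did vastly more analysis.

code:

HOT = 10  # invented hotness threshold
GUEST = {"loop": [("add", "x", 1), ("jlt", "x", 1000, "loop")]}

def interpret_block(label, state):
    # slow path: re-decode the guest ops on every pass
    for ins in GUEST[label]:
        if ins[0] == "add":
            state[ins[1]] += ins[2]
        elif ins[0] == "jlt":
            return ins[3] if state[ins[1]] < ins[2] else None
    return None

def translate_block(label):
    # one-time "translation", hard-coded for this one toy block; the
    # real thing generated host machine code
    def native(state):
        state["x"] += 1
        return "loop" if state["x"] < 1000 else None
    return native

counts, tcache = {}, {}

def run(label, state):
    while label is not None:
        counts[label] = counts.get(label, 0) + 1
        if label not in tcache and counts[label] >= HOT:
            tcache[label] = translate_block(label)
        step = tcache.get(label)
        label = step(state) if step else interpret_block(label, state)
    return state

print(run("loop", {"x": 0}))   # -> {'x': 1000}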

I don't see why only CMS enables optimizations like elision of half the read BW. If transformations along those lines are possible, they ought to be possible at compile time too. And when you do them at compile time, you get more generality (easier to reason about the semantics of the code when you still have the AST) and zero runtime overhead.
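
For instance, here's the same kind of load elision done ahead of time, as a minimal local CSE pass over an invented three-address IR. No alias analysis is modeled, so it quietly assumes nothing wrote to p between the two loads; a real compiler has to prove that.

code:

def local_cse(block):
    seen = {}   # (op, operands) -> name of the first result
    out = []
    for name, op, *args in block:
        key = (op, tuple(args))
        if op == "load" and key in seen:
            out.append((name, "copy", seen[key]))   # elide the repeat load
        else:
            seen.setdefault(key, name)
            out.append((name, op, *args))
    return out

ir = [
    ("t1", "load", "p"),
    ("t2", "add", "t1", "1"),
    ("t3", "load", "p"),        # same address, nothing stored in between
    ("t4", "add", "t3", "t2"),
]
for ins in local_cse(ir):
    print(ins)   # t3 becomes ('t3', 'copy', 't1'): half the reads gone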

Also, I don't buy that things like clever branch predictors are useless outside synthetic benchmarks. Sure, if you're gonna spend several minutes just crunching FP math, they might look useless. But in real-world general purpose computing, branchy pointer-chasing integer code is very important, actually. That's why people invest a lot of money in branch predictors. In fact, I would flip your claim on its head: I think the true value of modern branch predictors is hard to observe in simplistic synthetic benchmarks.
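
To put a rough number on why that matters, here's a toy two-bit saturating-counter predictor (the textbook baseline, nowhere near what actually ships) run over a well-patterned loop branch versus a data-dependent coin flip:

code:

import random

def predict_accuracy(outcomes):
    state, hits = 0, 0   # 0..3: strong-NT, weak-NT, weak-T, strong-T
    for taken in outcomes:
        hits += (state >= 2) == taken
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / len(outcomes)

random.seed(0)
loopy = ([True] * 9 + [False]) * 1000           # taken 9x, then loop exit
coinflip = [random.random() < 0.5 for _ in range(10_000)]

print(f"loop-like branch: {predict_accuracy(loopy):.1%}")    # ~90%
print(f"data-dependent:   {predict_accuracy(coinflip):.1%}") # ~50%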

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

'brainiac' is a somewhat loose term for going deep on superscalar / out-of-order / speculative execution / etc. A brainiac CPU has control logic that's very clever about analyzing a nominally serial stream of instructions to extract instruction level parallelism (ILP). Apple's Firestorm cores (the big cores in A14 and M1) are probably the most brainiac design currently being sold to consumers.

Transmeta hoped to gain power and area efficiency advantages by going anti-brainiac: the CPU core was an in-order VLIW design with next to no silicon or power spent on control logic. CMS was there to smooth over the classic issues with trying to use VLIW in general purpose computers, and also attempted to provide OoO-like features in software. (I think that's what your flexibility comments were about, yeah?)
no like, imagine if the x86 ISA wasn't set in stone and it had, like, RISC-V levels of churn in which subsets were supported: would any of the goofy segment stuff still be shipping? the promise of transmeta and what CMS could do would largely be undone by targeting the "native" core; it'd be a tech-debt albatross just like the corners of x86

BobHoward posted:

I don't see why only CMS enables optimizations like elision of half the read BW. If transformations along those lines are possible, they ought to be possible at compile time too. And when you do them at compile time, you get more generality (easier to reason about the semantics of the code when you still have the AST) and zero runtime overhead.
the amount of info available at runtime is staggeringly large. like, c'mon, "do it at compile time": how fine-grained are JIT engines? are they monitoring each branch and re-writing poorly-predicted paths on the fly yet? what if I can adjust my code path based on my ping to the server, how's the compiler sussing out that one? there are ILP/TLP gains possible with a larger window/visibility than a ROB/OoO engine can possibly support, and a CMS would sit right there.
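
a dumb toy of the kind of re-optimization a static compiler can't do (threshold and names invented): dispatch checks message kinds in observed-frequency order and re-sorts itself as the live workload shifts, so the common case always exits early

code:

from collections import Counter

class AdaptiveDispatch:
    def __init__(self, handlers):
        self.handlers = handlers
        self.order = list(handlers)    # initial guess: declaration order
        self.counts = Counter()
        self.calls = 0

    def __call__(self, msg):
        self.counts[msg["kind"]] += 1
        self.calls += 1
        if self.calls % 1000 == 0:     # periodically re-specialize
            self.order.sort(key=lambda k: -self.counts[k])
        for kind in self.order:        # hot kinds migrate to the front
            if msg["kind"] == kind:
                return self.handlers[kind](msg)

dispatch = AdaptiveDispatch({"ping": lambda m: "pong",
                             "data": lambda m: len(m["body"])})
for _ in range(5000):
    dispatch({"kind": "data", "body": "xyz"})  # workload turns data-heavy
print(dispatch.order)                          # -> ['data', 'ping']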

BobHoward posted:

Also, I don't buy that things like clever branch predictors are useless outside synthetic benchmarks. Sure, if you're gonna spend several minutes just crunching FP math, they might look useless. But in real-world general purpose computing, branchy pointer-chasing integer code is very important, actually. That's why people invest a lot of money in branch predictors. In fact, I would flip your claim on its head: I think the true value of modern branch predictors is hard to observe in simplistic synthetic benchmarks.
I don't quite recall making that argument. I worked on a cpu team and there's just no good way to simulate architectural-improvement effects in that long-term steady-state kind of thing. branch predictors are more solved than anyone deep in the weeds is aware; it's chasing nines, not 50% swings. there's no easy way to even ask "what if we dedicated 30% of a die to X" because any given architect is given purview over like a million gates at most.

brainiac cores beget brainiac cores. there is a better path, one transmeta was likely on, but iterating in the 1-5% range per block will never get there

Beef
Jul 26, 2004

JawnV6 posted:

I worked on a cpu team and there's just no good way to simulate architectural improvement effects in that long-term steady-state kind of thing.

There have been developments in the past decade on architectural simulators, and there are simulation techniques now that do that kind of analysis successfully, e.g. Graphite or Sniper. We're still talking a 1000x slowdown compared to hardware, but at least you can simulate a few billion cycles of a multi-socket multi-core. However, slow-as-fuck cycle-accurate simulators are still the architects' bread and butter; it's hard to make them trust anything else. You can bet they will have to start trusting other simulation techniques if they want to move beyond *spits on the floor* SPEC benchmark workloads.
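
The sampling trick, reduced to a cartoon with invented numbers (real interval simulation also has to warm caches and branch state, fast-forward architectural state, etc.): simulate short windows in detail, skip between them, and extrapolate.

code:

import random
random.seed(1)

# stand-in workload: per-"instruction" cost drifts through phases
def cost(i):
    return 1 + (i // 200_000) % 3 + random.random()

N = 1_000_000
detailed = sum(cost(i) for i in range(N))     # ground truth (the slow way)

WINDOW, PERIOD = 1_000, 50_000                # detail window every 50k
windows = [sum(cost(i) for i in range(s, s + WINDOW))
           for s in range(0, N, PERIOD)]
estimate = sum(windows) / (len(windows) * WINDOW) * N

print(f"true cycles: {detailed:,.0f}  sampled estimate: {estimate:,.0f}")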

JawnV6
Jul 4, 2004

So hot ...

Beef posted:

There have been developments in the past decade on architectural simulators, and there are simulation techniques now that do that kind of analysis successfully, e.g. Graphite or Sniper. We're still talking a 1000x slowdown compared to hardware, but at least you can simulate a few billion cycles of a multi-socket multi-core. However, slow-as-fuck cycle-accurate simulators are still the architects' bread and butter; it's hard to make them trust anything else. You can bet they will have to start trusting other simulation techniques if they want to move beyond *spits on the floor* SPEC benchmark workloads.

yeah, my knowledge is likely about that stale. I'm sure the preference for cycle-accurate comes from scars: some "rare" case turns out to hit 50% of transactions in practice and the perf never gets realized. but even those cases are more likely to favor small-scale analysis, right? it's easier to simulate 'double the load ports' than to rejigger pipeline stages.

benchmarks aren't great but TPC-D was the first to catch row hammer :v:

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

JawnV6 posted:

no like, imagine if the x86 ISA wasn't set in stone and it had, like, RISC-V levels of churn in which subsets were supported: would any of the goofy segment stuff still be shipping? the promise of transmeta and what CMS could do would largely be undone by targeting the "native" core; it'd be a tech-debt albatross just like the corners of x86

Ok, I think we were talking past each other on this one; I was originally just saying people tried to reverse-engineer it for the same reason Mallory said he wanted to climb Everest ("because it's there"). I don't think anyone who worked on that seriously wanted to write native code for the Crusoe core.

(The one real world exception to that rule which would've become a thing had Transmeta become popular: backdoors. Whether intentional or not, any way to escape the JIT and execute native code probably would've had awful security consequences.)

quote:

the amount of info available at runtime is staggeringly large. like, c'mon, "do it at compile time": how fine-grained are JIT engines? are they monitoring each branch and re-writing poorly-predicted paths on the fly yet? what if I can adjust my code path based on my ping to the server, how's the compiler sussing out that one? there are ILP/TLP gains possible with a larger window/visibility than a ROB/OoO engine can possibly support, and a CMS would sit right there.

The thing about these fancy JIT engines with all the dynamic behavior is that (at least in browser land) the overhead is high enough that they have to get tricksy about when to even do it. A few years back I read about WebKit having something like four or five different tiers of javascript JIT engines, each with progressively more expensive optimizations, and the lowest tier is basically just an interpreter because it turns out that for many kinds of code, it's less important to have best performance than it is to minimize unexpected disruptions in performance due to the JIT engine recompiling things or whatever.

And in spite of all that incredible effort and tuning, native code is still faster.

(ok ok, it's not sane to expect javascript JITs to approach native code because JS is so awful, I know it's not a fair comparison)
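
You can see why the tiering exists with a back-of-envelope model (every number below is invented): each tier runs faster but costs more to enter, so for code that only executes a handful of times the expensive compile is a pure loss, while hot code earns it back.

code:

TIERS = [  # (name, one-time entry cost, cost per execution)
    ("interp",   0,     100),
    ("baseline", 2_000,  20),
    ("optim",   50_000,   2),
]
PROMOTE_AT = [0, 50, 1_000]   # executions before each tier kicks in

def tiered_cost(n_execs):
    total, tier = 0, 0
    for i in range(n_execs):
        while tier + 1 < len(TIERS) and i >= PROMOTE_AT[tier + 1]:
            tier += 1
            total += TIERS[tier][1]    # pay the compile when promoting
        total += TIERS[tier][2]
    return total

for n in (10, 100, 100_000):
    print(f"{n:>7} execs: tiered={tiered_cost(n):,}  "
          f"interp-only={n * TIERS[0][2]:,}  "
          f"optimize-upfront={TIERS[2][1] + n * TIERS[2][2]:,}")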

quote:

I don't quite recall making that argument. I worked on a cpu team and there's just no good way to simulate architectural-improvement effects in that long-term steady-state kind of thing. branch predictors are more solved than anyone deep in the weeds is aware; it's chasing nines, not 50% swings. there's no easy way to even ask "what if we dedicated 30% of a die to X" because any given architect is given purview over like a million gates at most.

brainiac cores beget brainiac cores. there is a better path, one transmeta was likely on, but iterating in the 1-5% range per block will never get there

Ok the first paragraph seems like more talking past each other.

As for the second, yeah, it is an interesting path not taken. You're making an argument that we're trapped on the hill we know, and we can't see across the valley to a higher hill, and there's thorns and shit in there so we don't wanna go. I'm sympathetic; there are all kinds of examples of local minima / maxima capturing branches of technology, but I guess what I disagree on is that the Transmeta-ish hill is necessarily taller than the brainiac hill. But I don't have hard data to back that up.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

JawnV6 posted:

no like, imagine if the x86 ISA wasn't set in stone and it had, like, RISC-V levels of churn in which subsets were supported: would any of the goofy segment stuff still be shipping? the promise of transmeta and what CMS could do would largely be undone by targeting the "native" core; it'd be a tech-debt albatross just like the corners of x86
That "goofy segment stuff" was implemented on a processor with orders of magnitude less gates than anything out there now. They could plop the original 286 core in, all 134000 transistors of it, just to run "legacy" code on. it wouldn't be a rounding error on the gate count.

When you talk about visibility there's a cost there, too. The more you look at, the more work you need to do. You risk spending time to optimize an active loop only to have it finish before you make the changes, with your super-optimizer always playing catch-up to what the code is actually doing. Sure, you could store the result for the next time execution goes back there, but when will that be, and will the data conditions be the same when it does?
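
Back-of-envelope on that catch-up risk (all numbers invented): if the loop exits before the optimizer's one-time cost is amortized, you ate the cost for nothing.

code:

def total_cost(trips, detect=1_000, opt_cost=50_000, slow=100, fast=60):
    if trips <= detect:
        return trips * slow                  # never even flagged as hot
    ran_slow = detect + opt_cost // slow     # trips elapsed while optimizing
    if trips <= ran_slow:
        return trips * slow + opt_cost       # paid for the opt, never reaped
    return ran_slow * slow + opt_cost + (trips - ran_slow) * fast

for n in (500, 1_200, 100_000):
    print(f"{n:>7} trips: with-optimizer={total_cost(n):,}  "
          f"never-optimize={n * 100:,}")     # 100 = the slow per-trip cost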

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

The thing about these fancy JIT engines with all the dynamic behavior is that (at least in browser land) the overhead is high enough that they have to get tricksy about when to even do it. A few years back I read about WebKit having something like four or five different tiers of javascript JIT engines, each with progressively more expensive optimizations, and the lowest tier is basically just an interpreter because it turns out that for many kinds of code, it's less important to have best performance than it is to minimize unexpected disruptions in performance due to the JIT engine recompiling things or whatever.

And in spite of all that incredible effort and tuning, native code is still faster.
oh my stars, ~*4 or 5*~ layers of abstractions, can you imagine?

this is where brainiac cores have you hooked. you can't fathom something affecting execution without it being a big, dumb instruction that has to go down the pipe and gum up everything to get that kind of feedback. despite reams of such structures ferrying information around the core all the time, none of it can bubble up to a flexible layer that could do something about it, so you treat it as an immutable fact of the world.

do you know what limits single-core full-blast computing? it's very likely the ability of the thermal system to cool a single hot spot. you can get 'free' perf by core hopping: taking the workload to another core and heating up a point there. doing it in pure SW is too slow, you kill the perf gain shuffling things around with big SW hooks. HW alone can't do a good job of it; like y'all are saying, hopping at the wrong point would be disastrous, and HW doesn't have the scope. but something in a middle layer that had a global picture of "this task will last 10s," HW acceleration to get the relevant state from one core to the other, and perhaps some freedom to tune the hopping frequency could squeeze out that thermal headroom. the idea's been kicked around forever but it's not in use because of current architectural limitations.
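
a sketch of that in simulation form (constants pulled out of thin air, one-blob 'thermal model', so strictly a cartoon): the pinned core saturates its hot spot and throttles, while the hopping one pays a migration tax but keeps more headroom

code:

T_AMB, T_MAX = 40.0, 95.0
HEAT, COOL = 0.8, 0.7   # degrees per tick: active heating / idle cooling
HOP_COST = 5            # ticks of work lost migrating state between cores

def run(ticks, hop_every=None):
    temps, active, work = [T_AMB, T_AMB], 0, 0
    for t in range(1, ticks + 1):
        if hop_every and t % hop_every == 0:
            active = 1 - active          # hop to the cooler core
            work -= HOP_COST             # migration isn't free
        if temps[active] < T_MAX:        # full-speed tick
            work += 1
            temps[active] += HEAT
        else:                            # thermal throttle: cool instead
            temps[active] = max(T_AMB, temps[active] - COOL)
        idle = 1 - active
        temps[idle] = max(T_AMB, temps[idle] - COOL)
    return work

print("pinned :", run(2000))                 # saturates one hot spot
print("hopping:", run(2000, hop_every=60))   # comes out ahead here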

BobHoward posted:

As for the second, yeah, it is an interesting path not taken. You're making an argument that we're trapped on the hill we know, and we can't see across the valley to a higher hill, and there's thorns and poo poo in there so we don't wanna go. I'm sympathetic, there's all kinds of examples of local minima / maxima capturing branches of technology, but I guess what I disagree on is that the Transmeta-ish hill is necessarily taller than the brainiac hill. But I don't have hard data to back that.
the brainiac hill is going to continue iterating in that single-digit-per-block range even when there are decent improvements visible because they fall in the gap of hw/sw cooperation and nobody can make the leap. transmeta spanned that exact gap and would have had access to a wide range of such improvements.

Harik posted:

That "goofy segment stuff" was implemented on a processor with orders of magnitude less gates than anything out there now. They could plop the original 286 core in, all 134000 transistors of it, just to run "legacy" code on. it wouldn't be a rounding error on the gate count.
pretend I said "coherency" instead. and just to run with your example: what if each modern core dedicated 134k transistors to make up a 286, and then had to ask it questions about execution state all the time? it's not the gate count that would kill you, it's routing all of the pipeline stages up to that one critical section. and this effectively already happens: there are certain inter-core interrupt paths that still look up ordering in a 386-era file, and it is a significant point of contention. who knew that stitching up 286s, which could only talk by yanking a request line, with gigabyte-per-second fabric checking in a few billion times a second might not scale!

Harik posted:

When you talk about visibility there's a cost there, too. The more you look at, the more work you need to do. You risk spending time to optimize an active loop only to have it finish before you make the changes, with your super-optimizer always playing catch-up to what the code is actually doing. Sure, you could store the result for the next time execution goes back there, but when will that be, and will the data conditions be the same when it does?
yet again, brainiac cores have you hooked. there's simply no way to do any sort of optimization without a global stop-the-world garbage collection, considering each blob of code one at a time without regard for context, then re-starting the merry-go-round at full speed? this is a limitation of current cores, not a global statement about all possible optimizations.

oh well, guess there's simply no way to ever do anything except widen the OoO window, add a couple load ports, tweak a cache size. all this runtime info we're collecting anyway can't effectively be sampled and acted on by big, dumb instructions blocking useful work, so there's no use in any form of HW/SW co-operation. I mean you said yourself that 286's are free and we could have one kicking around for no die cost, just do the optimization work on one of those for starters.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Is that HW/SW co-op the kind of thing Apple could pull off or would they be limited by the ARM ISA?

Beef
Jul 26, 2004
Conway's law applies.

I'll go on a slight tangent here and mention that we don't see new Transmetas around not just because of current architectural limitations, but also because of the way hardware development gets organized and funded.

I have a big soft spot for cross-layer hw/sw co-design. It's been a buzzword in academia and industry for a while now, but what materializes in reality is ultimately not co-design. A big part is that the abstraction layers are also organizational border lines. Another part is just how large-scale hardware projects get managed and funded.

I'm lucky to be involved in a project that organizationally blurs those lines, but there is still a huge practical and knowledge barrier between the 'hardware guys' and the 'software guys'. The best-case scenario I've seen is that software/workload people get a chair at the table early in the architectural design, and the software stack gets developed in parallel with a functional simulator. But that doesn't mean you get a feedback/iteration loop. Hardware people have their timetable and it's *tight*. By the time your functional simulator is finally up to snuff and your software stack starts chugging along, it's way too late to make big hardware design changes. We (the SW guys) are lucky to just be able to catch bugs before tapeout.

Beef
Jul 26, 2004
edit: dupe, looks like the 502 response did get through

Beef fucked around with this message at 09:50 on Apr 21, 2021

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!
looking at prebuilt desktops, is 11th Gen i5/i7 worth the price premium (looks like $100-$300) over 10th Gen?

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Ok Comboomer posted:

looking at prebuilt desktops, is 11th Gen i5/i7 worth the price premium (looks like $100-$300) over 10th Gen?

At a $100+ price premium, no

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!
^^addendum

I haven’t bought a computer since my 2013 MacBook Pro

I have no clue what’s worthwhile but I have between $1500-$2k (maaaaybe 2.5k) to spend on a computer that will come with a 3060 and that I want to last me 5+ years doing Adobe stuff and playing games and whatnot (gpu can be upgraded, mobo is forever). Not tippety-top of the line but definitely “good enough”.

I’m realistically looking between the 11700, the 11700K, and the 11900/11900K.

It’s an XPS, I know I gotta put a $70 Noctua cooler on it. What do I buy?

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gradenko_2000 posted:

At a $100+ price premium, no

Unfortunately I have to buy 11th Gen if I want a 3060, which I probably do vs a 1660, which would be my other option.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen

You don't need to make it a K model

i5 is plenty for gaming and light productivity, EDIT: but if you're only picking between i7 and i9 for whatever other reason then the i7 is plenty

Check your buying options to make sure you're not inadvertently signing up for something you don't want

https://twitter.com/GamersNexus/status/1385275530996600835?s=19

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gradenko_2000 posted:

If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen

You don't need to make it a K model

i5 is plenty for gaming and light productivity, EDIT: but if you're only picking between i7 and i9 for whatever other reason then the i7 is plenty

Check your buying options to make sure you're not inadvertently signing up for something you don't want

https://twitter.com/GamersNexus/status/1385275530996600835?s=19

I’m very aware of this scam Dell’s running, thanks for reminding me

Prescription Combs
Apr 20, 2005
   6
I feel like that's being blown way out of proportion. It's very clearly marked on the page...

Cygni
Nov 12, 2005

raring to post

If you have to buy an 11th gen and you don't know what the motherboard will let you do, I would go 11700K. There is zero reason for the 11900K to exist, and the 11700 (non-K) is going to be horrendously TDP limited.

Normally you can override the TDP limits and it's fine... but you can never guarantee that option will exist in a Dell prebuilt.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gradenko_2000 posted:

If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen

You don't need to make it a K model

i5 is plenty for gaming and light productivity, EDIT: but if you're only picking between i7 and i9 for whatever other reason then the i7 is plenty

Cygni posted:

If you have to buy an 11th gen and you don't know what the motherboard will let you do, I would go 11700K. There is zero reason for the 11900K to exist, and the 11700 (non-K) is going to be horrendously TDP limited.

Normally you can override the TDP limits and it's fine... but you can never guarantee that option will exist in a Dell prebuilt.

Ok- XPS 8940 ordered: 3060 Ti, 32GB RAM, 2TB SSD, 500W, and I went with the 11700K for the above reason. At ~$2k, what’s another $100. Blu-ray drive so I can rip my Kazaa downloads.

Got it in white, natch. Reminds me of a classic white box PC from Back In The Day. May it synthesize fondly with my memories of my first childhood XPS from 20 years ago, and may it last me at least twice as long without going to absolute dogshit in four years, getting replaced with a second, black, 2004 XPS that went to dogshit even faster. Inshallah

Thanks for the help. Hope I didn’t buy an expensive lemon. It arrives....”by June 2nd”, fml.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their fucking "ATX" PSUs.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

SourKraut posted:

There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their fucking "ATX" PSUs.

That IS the upsize PSU, dogg

The basic one is 350W, this is the 500W one.

What’s worse is that this is miles better than what I would’ve dealt with if I’d gone with the HP I was initially going to buy.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Ok Comboomer posted:

That IS the upsize PSU, dogg

The basic one is 350W, this is the 500W one.

What’s worse is that this is miles better than what I would’ve dealt with if I’d gone with the HP I was initially going to buy.

Oh that sucks, sorry. fucking Dell (and double-fucking-HP)

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

SourKraut posted:

Oh that sucks, sorry. fucking Dell (and double-fucking-HP)

Eh, I have a feeling a 3060+i7 will be plenty for me for the foreseeable future. My expectations/experiences are relatively low, coming from MacWorld, and this drought plus trends in the last five years of GPUs lead me to expect that we won’t be seeing any major leaps here shaking up the market for a good long while.

You’ll appreciate this, SK: 20 years ago my family’s 5-year-old Mac Performa pizza box died. 5th-grader me was pretty sure we’d end up with one of those iconic colorful G3s I’d experienced at the library, so imagine my horror and bemusement when a colossal white XPS tower showed up for Christmas with a note from “Santa” that my dad wrote in the customer note when he placed the order at work. That horror soon died with Rogue Squadron, and that computer was an unstoppable beast for all of 12 months before the dangers of the early-2000s internet filled it with molasses.

Anyways, right now I feel like I’m repeating history a bit. Turning my back on a colorful iMac that I’m quite keen on and would likely serve me well to buy an expensive Dell tower for the sake of “PeRfORmaNCe GaMiNg”. I also picked up a USB-C 4K monitor to go with it, and the plan is to probably end up with an Apple Silicon MacBook that will also make use of the monitor by the end of the year, so hopefully both systems will coexist nicely.

Alternatively, I’ll either/also slap a Mac mini on top of the Dell tower for a Master-Blaster setup, or better yet- put an iMac next to the Dell display and try to dual-screen it. 24” at 4.5K and 27” 4K won’t look terrible together at all, no sir. They definitely won’t give you horrid eye strain and weird ocular confusion effects. :shepspends:

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD
I know you already ordered the K, but for anyone else: the K CPUs come with better coolers in the current XPS line, so it's probably worth it.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

~Coxy posted:

I know you already ordered the K, but for anyone else: the K CPUs come with better coolers in the current XPS line, so it's probably worth it.

yeah, that was the other part of it. Idk if it’s in this thread or the GPU one, but the base cooler is totally inadequate (like, ‘routinely hits ~100 degrees C and throttles’-inadequate) and people recommend either sourcing the upgraded Dell one or throwing in a Noctua NH-U9S with some 20mm screws. Hopefully this saves me the time + labor of sourcing and mounting a $70 aftermarket cooler (plus thermal paste, etc), even if micro center is like 10 min away.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

SourKraut posted:

There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their fucking "ATX" PSUs.

At least they're not using proprietary ATX connectors on their *motherboards* anymore. They used to change the pinout so you couldn't easily replace or upgrade the PSU without seeking one out that had the correct connectors, yet the socket would still physically accept a regular ATX PSU's plug, because *reasons*. PC Power & Cooling sold Dell-wired units, and I nearly killed my P2 450 box plugging an Antec unit in; I realized shit was awry when I smelled burning plastic before I even hit the power button. That smell probably saved my PC.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Ok Comboomer posted:

Ok- XPS 8940 ordered: 3060 Ti, 32GB RAM, 2TB SSD, 500W, and I went with the 11700K for the above reason.

That's a good choice, should last you a good long while.

The PSU should be fine: the 3060 Ti pulls 200W at stock, and the 11700K pulls 190W at stock. If you lift the power limits it can chow down as much as 200-260W, but even then you'd still have some allowance in your PSU budget, assuming it's anything decent
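
Quick arithmetic on that, using the stock figures above (peripheral draw for the board, drives, and fans isn't counted here, so pad accordingly):

code:

gpu = 200    # 3060 Ti stock board power, watts
psu = 500
for label, cpu in (("stock limits", 190), ("limits lifted", 260)):
    total = gpu + cpu
    print(f"{label}: {total}W of {psu}W, {psu - total}W left over")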

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.

BIG HEADLINE posted:

At least they're not using proprietary ATX connectors on their *motherboards* anymore. They used to change the pinout so you couldn't easily replace or upgrade the PSU without seeking one out that had the correct connectors, yet the socket would still physically accept a regular ATX PSU's plug, because *reasons*. PC Power & Cooling sold Dell-wired units, and I nearly killed my P2 450 box plugging an Antec unit in; I realized shit was awry when I smelled burning plastic before I even hit the power button. That smell probably saved my PC.

yeah, now they just use proprietary connectors too, and the power supply only supplies 12V with the motherboard providing the other voltages

granted, the power supplies are high quality, but

Arzachel
May 12, 2012

Wild EEPROM posted:

yeah, now they just use proprietary connectors too, and the power supply only supplies 12V with the motherboard providing the other voltages

granted, the power supplies are high quality, but

I'd love to get a 12V-only board and PSU for SFF builds, but it seems to be limited to OEM use.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Ok Comboomer posted:

Eh, I have a feeling a 3060+i7 will be plenty for me for the foreseeable future. My expectations/experiences are relatively low, coming from MacWorld, and this drought plus trends in the last five years of GPUs lead me to expect that we won’t be seeing any major leaps here shaking up the market for a good long while.

You’ll appreciate this, SK: 20 years ago my family’s 5-year-old Mac Performa pizza box died. 5th-grader me was pretty sure we’d end up with one of those iconic colorful G3s I’d experienced at the library, so imagine my horror and bemusement when a colossal white XPS tower showed up for Christmas with a note from “Santa” that my dad wrote in the customer note when he placed the order at work. That horror soon died with Rogue Squadron, and that computer was an unstoppable beast for all of 12 months before the dangers of the early-2000s internet filled it with molasses.

Anyways, right now I feel like I’m repeating history a bit. Turning my back on a colorful iMac that I’m quite keen on and would likely serve me well to buy an expensive Dell tower for the sake of “PeRfORmaNCe GaMiNg”. I also picked up a USB-C 4K monitor to go with it, and the plan is to probably end up with an Apple Silicon MacBook that will also make use of the monitor by the end of the year, so hopefully both systems will coexist nicely.

Alternatively, I’ll either/also slap a Mac mini on top of the Dell tower for a Master-Blaster setup, or better yet- put an iMac next to the Dell display and try to dual-screen it. 24” at 4.5K and 27” 4K won’t look terrible together at all, no sir. They definitely won’t give you horrid eye strain and weird ocular confusion effects. :shepspends:

Yeah, I think that build will last you a good amount of time. I saw you mentioned the 3060 Ti in the GPU thread, but it should be able to easily handle a couple of 4K monitors for content/media creation.

I remember the colorful G3s fondly, and going to Circuit City or CompUSA (Apple section) to look at them and wishing I could own one. My family's first "modern" PC was a Packard Bell Pentium 133 that couldn't run anything well and that, to this day, I'm amazed didn't burn our house down given the poor build quality.

I complained enough about it that my parents ultimately bought me some generic brand beige box from Sam's Club that had a Cyrix MediaGX processor in it that, while slightly better than the Packard Bell, still couldn't run anything well. But hey, "MediaGX" sounds really fancy and cool, right?

I had started to learn how to assemble my own system around this time, but was still apprehensive for some reason, so I ended up getting a "gaming focused" desktop with my first job, an eMachines eMonster 500. While everyone can and should shit on eMachines, I still have fond memories of it, and interestingly enough, it can still boot up and load Windows 2000 to this day. I joke that it's probably the only functioning eMachine in the present day.

Around this time, one of my best friends' dad got me interested in a new beta OS called OS X, and so ultimately I saved up and bought a PowerMac G4 Quicksilver and learned what actual good build quality was compared to all the lovely Wintel systems that I'd used before it.

Anyway, FWIW, I'm hoping they do colorful 27" (or probably 30" if the rumors are correct?) iMacs with the M2 or so, because I think they'll end up being really nice. I do have a 2018 Mac Mini with eGPU connected via TB3 and it's surprisingly good, and the combination can handle any of the games with OS X support that I've been throwing at it. But the system you got from Dell should be pretty great for gaming too!

repiv
Aug 13, 2009

ah shit here we go again

https://twitter.com/FreeBSDHelp/status/1388280497097252866

how much performance do we lose this time

BlankSystemDaemon
Mar 13, 2009



repiv posted:

ah shit here we go again

https://twitter.com/FreeBSDHelp/status/1388280497097252866

how much performance do we lose this time
All of it.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
I was hoping someone would do it, and they did: an 11900K laptop :stare:

https://twitter.com/JarrodsTech/status/1387946871906201600?s=20

hobbesmaster
Jan 28, 2008

MaxxBot posted:

I was hoping someone would do it, and they did: an 11900K laptop :stare:

https://twitter.com/JarrodsTech/status/1387946871906201600?s=20

Oh Clevo :allears:

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
It's weird that it's easy to slap a desktop CPU in a laptop, but making a new high-wattage power brick is a total non-starter and you have to use two off-the-shelf bricks.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Perplx posted:

It's weird that it's easy to slap a desktop CPU in a laptop, but making a new high-wattage power brick is a total non-starter and you have to use two off-the-shelf bricks.

I'm not an EE, but my first guess would be that there are limits to the power you can push through a passively-cooled (and, in fact, sealed-in-plastic) transformer before it starts setting everything on fire.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
https://m.youtube.com/watch?v=ysvZIcb3XAQ

This was also done for the 10900K
