|
Apparently the Alienware BIOS won't even accept a 7700K.
|
# ? Apr 10, 2021 23:14 |
|
|
# ? Apr 20, 2024 04:04 |
|
BobHoward posted:I'm sure the answer was roughly "because it's there". Any time you erect a barrier like that, people will try to tear it down and see.

BobHoward posted:Don't know that I agree about the Transmeta approach being inevitable. Agree that they had some smart people, but I think they failed due to fundamental issues with the concept. Every time someone tries to veer away from brainiac core designs, the brainiac cores just keep on winning.
|
# ? Apr 12, 2021 17:40 |
|
JawnV6 posted:idk what 'brainiac' cores means, but my feeling is that "cpu perf" is solved in an incredibly myopic way most of the time. 1% gains on a synthetic benchmark that starts with empty caches and has nothing to do with the "steady state" of a long compute process running for minutes. CMS is one of the few places you can solve that sort of mismatch. the things most folks think matter (e.g. branch prediction) are useless diversions, most things if you cut read bandwidth in half you'd double real perf and that type of elision is going to be invisible/impossible to most uarch.

'brainiac' is a somewhat loose term for going deep on superscalar / out-of-order / speculative execution / etc. A brainiac CPU has control logic that's very clever about analyzing a nominally serial stream of instructions to extract instruction level parallelism (ILP). Apple's Firestorm cores (the big cores in A14 and M1) are probably the most brainiac design currently being sold to consumers.

Transmeta hoped to gain power and area efficiency advantages by going anti-brainiac: the CPU core was an in-order VLIW core with next to no silicon or power spent on control logic. CMS was there to smooth over the classic issues with trying to use VLIW in general purpose computers, and also attempted to provide OoO-like features in software. (I think that's what your flexibility comments were about, yeah?)

I don't see why only CMS enables optimizations like elision of half the read BW. If transformations along those lines are possible, they ought to be possible at compile time too. And when you do them then, you get more generality (easier to reason about the semantics of the code when you still have the AST) and zero runtime overhead.

Also, I don't buy that things like clever branch predictors are useless outside synthetic benchmarks. Sure, if you're gonna spend several minutes just crunching FP math, they might look useless. But in real-world general purpose computing, branchy pointer-chasing integer code is very important, actually. That's why people invest a lot of money in branch predictors. In fact, I would flip your claim on its head: I think the true value of modern branch predictors is hard to observe in simplistic synthetic benchmarks.
|
# ? Apr 13, 2021 21:49 |
|
BobHoward posted:'brainiac' is a somewhat loose term for going deep on superscalar / out-of-order / speculative execution / etc. A brainiac CPU has control logic that's very clever about analyzing a nominally serial stream of instructions to extract instruction level parallelism (ILP). Apple's Firestorm cores (the big cores in A14 and M1) are probably the most brainiac design currently being sold to consumers.

BobHoward posted:I don't see why only CMS enables optimizations like elision of half the read BW. If transformations along those lines are possible, they ought to be possible at compile time too. And when you do them then, you get more generality (easier to reason about the semantics of the code when you still have the AST) and zero runtime overhead.

BobHoward posted:Also, I don't buy that things like clever branch predictors are useless outside synthetic benchmarks. Sure, if you're gonna spend several minutes just crunching FP math, they might look useless. But in real-world general purpose computing, branchy pointer-chasing integer code is very important, actually. That's why people invest a lot of money in branch predictors. In fact, I would flip your claim on its head: I think the true value of modern branch predictors is hard to observe in simplistic synthetic benchmarks.

brainiac cores beget brainiac cores, there is a better path that transmeta was likely on but iterating in the 1~5% range per block will never get there
|
# ? Apr 15, 2021 19:00 |
|
JawnV6 posted:I worked on a cpu team and there's just no good way to simulate architectural improvement effects in that long-term steady-state kind of thing.

There have been developments in the past decade on architectural simulators, and there are simulation techniques now that do that kind of analysis successfully, e.g. Graphite or Sniper. We're still talking 1000x slowdown compared to hardware, but at least you can simulate a few bil cycles of a multi-socket multi-core. However, slow-as-gently caress cycle-accurate simulators are still the architects' bread and butter, it's hard to make them trust anything else. You can bet they will have to start trusting other simulation techniques if they want to start moving beyond *spits on the floor* SPEC benchmark workloads.
|
# ? Apr 15, 2021 20:38 |
|
Beef posted:There have been developments in the past decade on architectural simulators, and there are simulation techniques now that do that kind of analysis successfully, e.g. Graphite or Sniper. We're still talking 1000x slowdown compared to hardware, but at least you can simulate a few bil cycles of a multi-socket multi-core. However, slow-as-gently caress cycle-accurate simulators are still the architects' bread and butter, it's hard to make them trust anything else. You can bet they will have to start trusting other simulation techniques if they want to start moving beyond *spits on the floor* SPEC benchmark workloads.

yeah, my knowledge is likely about that stale. im sure that the preference for cycle-accurate is from scars, some "rare" case turns out to hit 50% of transactions in practice and the perf doesn't get realized. but even those are more likely to favor small scale analysis right? easier to simulate 'double the load ports' instead of rejiggering pipeline stages. benchmarks aren't great but TPC-D was the first to catch row hammer
|
# ? Apr 15, 2021 21:39 |
|
JawnV6 posted:no like, imagine if the x86 ISA wasn't set in stone and they had like, RISC-V levels of churn on what subsets they supported, would any of the goofy segment stuff still be shipping? the promise of transmeta and what the CMS could do would largely be undone by targeting the "native" core, it'd be a tech debt albatross just like corners of x86

Ok I think we were talking past each other on this one, I was originally just saying people tried to RE it for the same reason Mallory said he wanted to climb Everest ("because it's there"). I don't think anyone who worked on that seriously wanted to write native code for the Crusoe core. (The one real world exception to that rule which would've become a thing had Transmeta become popular: backdoors. Whether intentional or not, any way to escape the JIT and execute native code probably would've had awful security consequences.)

quote:the amount of info available at runtime is staggeringly large. like c'mon "do it at compile time", how fine-grained are JIT engines, are they monitoring each branch and re-writing poor predictors on the fly yet? what if I can adjust my code path based on my ping to the server, how's the compiler sussing out that one. there are ILP/TLP gains possible with a larger window/visibility than a ROB/OoO engine can possibly support and a CMS would sit right there.

The thing about these fancy JIT engines with all the dynamic behavior is that (at least in browser land) the overhead is high enough that they have to get tricksy about when to even do it. A few years back I read about WebKit having something like four or five different tiers of javascript JIT engines, each with progressively more expensive optimizations, and the lowest tier is basically just an interpreter because it turns out that for many kinds of code, it's less important to have best performance than it is to minimize unexpected disruptions in performance due to the JIT engine recompiling things or whatever. And in spite of all that incredible effort and tuning, native code is still faster. (ok ok, it's not sane to expect javascript JITs to approach native code because JS is so awful, I know it's not a fair comparison)

quote:I dont quite recall making that argument. I worked on a cpu team and there's just no good way to simulate architectural improvement effects in that long-term steady-state kind of thing. branch predictors are more solved than anyone deep into the weeds is aware and it's chasing nines not 50% swings. there's no easy way to even ask "what if we dedicated 30% of a die to X" because at most any given architect is given purview of like a million gates at most.

Ok the first paragraph seems like more talking past each other. As for the second, yeah, it is an interesting path not taken. You're making an argument that we're trapped on the hill we know, and we can't see across the valley to a higher hill, and there's thorns and poo poo in there so we don't wanna go. I'm sympathetic, there's all kinds of examples of local minima / maxima capturing branches of technology, but I guess what I disagree on is that the Transmeta-ish hill is necessarily taller than the brainiac hill. But I don't have hard data to back that.
|
# ? Apr 15, 2021 22:00 |
|
JawnV6 posted:no like, imagine if the x86 ISA wasn't set in stone and they had like, RISC-V levels of churn on what subsets they supported, would any of the goofy segment stuff still be shipping? the promise of transmeta and what the CMS could do would largely be undone by targeting the "native" core, it'd be a tech debt albatross just like corners of x86

When you talk about visibility there's a cost there, too. The more you look at, the more work you need to do. You risk spending time to optimize an active loop only to have it finish before you make the changes, with your super-optimizer always playing catch-up to what the code is actually doing. Sure, you could store it for the next time it goes back there, but when will that be, and will the data conditions be the same when it does?
|
# ? Apr 19, 2021 01:36 |
|
BobHoward posted:The thing about these fancy JIT engines with all the dynamic behavior is that (at least in browser land) the overhead is high enough that they have to get tricksy about when to even do it. A few years back I read about WebKit having something like four or five different tiers of javascript JIT engines, each with progressively more expensive optimizations, and the lowest tier is basically just an interpreter because it turns out that for many kinds of code, it's less important to have best performance than it is to minimize unexpected disruptions in performance due to the JIT engine recompiling things or whatever.

this is where brainiac cores have you hooked. you can't fathom something affecting execution without it being a big, dumb instruction that has to go down the pipe and gum up everything to get that kind of feedback. despite reams of such structures that ferry information around the core, all the time, none of it can bubble up to a flexible layer that could do something about it, so you treat it as an immutable fact of the world.

do you know what limits single-core full blast computing? it's very likely the ability for the thermal system to cool a single hot spot. you can get 'free' perf by core hopping, taking the workload to another core and heating up a point there. doing it in pure SW is too slow, you kill the perf gain shuffling things around with big SW hooks. HW alone can't do a good job of it, like y'all are saying hopping at the wrong point will be disastrous and it doesn't have scope. but something in a middle layer that had a global picture of "this task will last 10s," HW acceleration to get the relevant state from one core to the other, and perhaps had some freedom to tune the hopping frequency could squeeze out that thermal headroom. the idea's been kicked around forever but it's not in use because of current architectural limitations.

BobHoward posted:As for the second, yeah, it is an interesting path not taken. You're making an argument that we're trapped on the hill we know, and we can't see across the valley to a higher hill, and there's thorns and poo poo in there so we don't wanna go. I'm sympathetic, there's all kinds of examples of local minima / maxima capturing branches of technology, but I guess what I disagree on is that the Transmeta-ish hill is necessarily taller than the brainiac hill. But I don't have hard data to back that.

Harik posted:That "goofy segment stuff" was implemented on a processor with orders of magnitude less gates than anything out there now.

They could plop the original 286 core in, all 134000 transistors of it, just to run "legacy" code on. it wouldn't even be a rounding error on the gate count.

Harik posted:When you talk about visibility there's a cost there, too. The more you look at the more work you need to do. You risk spending time to optimize an active loop only to have it finish before you make the changes, with your super-optimizer always playing catch-up to what the code is actually doing. Sure, you could store it for the next time it goes back there, but when will that be, and will the data conditions be the same when it does?

oh well, guess there's simply no way to ever do anything except widen the OoO window, add a couple load ports, tweak a cache size. all this runtime info we're collecting anyway can't effectively be sampled and acted on by big, dumb instructions blocking useful work, so there's no use in any form of HW/SW co-operation. I mean you said yourself that 286's are free and we could have one kicking around for no die cost, just do the optimization work on one of those for starters.
|
# ? Apr 20, 2021 18:53 |
|
Is that HW/SW co-op the kind of thing Apple could pull off or would they be limited by the ARM ISA?
|
# ? Apr 20, 2021 20:57 |
|
Conway's law applies.

I'll go on a slight tangent here and mention that it's not just current architectural limitations that keep new Transmetas from appearing, but also the way hardware development gets organized and funded. I have a big soft spot for cross-layer hw/sw co-design. It's been a buzzword in academia and industry for a while now, but what materializes in reality is ultimately not co-design. A big part is that the abstraction layers are also organizational border lines. Another part is just how large-scale hardware projects get managed and funded.

I'm lucky to be involved in a project that organizationally blurs those lines, but there is still a huge practical and knowledge barrier between the 'hardware guys' and 'software guys'. The best-case scenario I've seen is that software/workload people get a chair at the table early in the architectural design, and the software stack gets developed in parallel with a functional simulator. But that doesn't mean you get a feedback/iteration loop. Hardware people have their timetable and it's *tight*. By the time your functional simulator is finally up to snuff and your software stack starts chugging along, it's way too late to make big hardware design changes. We (SW guys) are lucky to just be able to catch bugs before tapeout.
|
# ? Apr 21, 2021 09:27 |
|
edit: dupe, looks like 502 response did get through
Beef fucked around with this message at 09:50 on Apr 21, 2021 |
# ? Apr 21, 2021 09:35 |
|
looking at prebuilt desktops, is 11th Gen i5/i7 worth the price premium ($100-$300 it looks) over 10th Gen?
|
# ? Apr 27, 2021 19:17 |
|
Ok Comboomer posted:looking at prebuilt desktops, is 11th Gen i5/i7 worth the price premium ($100-$300 it looks) over 10th Gen?

At a $100+ price premium, no
|
# ? Apr 27, 2021 21:51 |
|
^^addendum I haven’t bought a computer since my 2013 MacBook Pro. I have no clue what’s worthwhile, but I have between $1500-$2k (maaaaybe $2.5k) to spend on a computer that will come with a 3060 and that I want to last me 5+ years doing Adobe stuff and playing games and whatnot (gpu can be upgraded, mobo is forever). Not tippety-top of the line but definitely “good enough”. I’m realistically looking between the 11700, 11700K, and the 11900/11900K. It’s an XPS, I know I gotta put a $70 Noctua cooler on it. What do I buy?
|
# ? Apr 27, 2021 21:55 |
|
gradenko_2000 posted:At a 100+ dollar price premium no

Unfortunately I have to buy 11th Gen if I want a 3060, which I probably do vs a 1660, which would be my other option.
|
# ? Apr 27, 2021 21:56 |
|
If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen. You don't need to make it a K model; an i5 is plenty for gaming and light productivity. EDIT: but if you're only picking between i7 and i9 for whatever other reason, then the i7 is plenty.

Check your buying options to make sure you're not inadvertently signing up for something you don't want https://twitter.com/GamersNexus/status/1385275530996600835?s=19
|
# ? Apr 27, 2021 22:07 |
|
gradenko_2000 posted:If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen

I’m very aware of this scam Dell’s running, thanks for reminding me
|
# ? Apr 27, 2021 22:17 |
|
I feel like that's being blown way out of proportion. It's very clearly marked on the page...
|
# ? Apr 27, 2021 22:25 |
|
If you have to buy an 11th gen and you don't know what the motherboard will let you do, I would go 11700K. There is zero reason for the 11900K to exist, and the 11700 (non-K) is going to be horrendously TDP-limited. Normally you can override the TDP limits and it's fine... but you can never guarantee that option will exist in a Dell prebuilt.
|
# ? Apr 27, 2021 22:25 |
|
gradenko_2000 posted:If you need to make it an 11th gen to get the 3060, then might as well make it an 11th gen

Cygni posted:If you have to buy an 11th gen and you don't know what the motherboard will let you do, i would go 11700k. There is zero reason for the 11900k to exist, and the 11700(non-k) is going to be horrendously TDP limited.

Ok- XPS 8940 ordered: 3060Ti, 32gb RAM, 2TB SSD, 500W, I went with the 11700k for the above reason. At ~$2k what’s another $100. Blu-ray drive so I can rip my Kazaa downloads. Got it in white, natch. Reminds me of a classic white box PC from Back In The Day. May it synthesize fondly with my memories of my first childhood XPS from 20 years ago, and may it last me at least twice as long without going to absolute dogshit in four years, getting replaced with a second, black, 2004 XPS that went to dogshit even faster. Inshallah

Thanks for the help. Hope I didn’t buy an expensive lemon. It arrives....”by June 2nd”, fml.
|
# ? Apr 27, 2021 23:06 |
|
There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their loving "ATX" PSUs.
|
# ? Apr 27, 2021 23:09 |
|
SourKraut posted:There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their loving "ATX" PSUs.

That IS the upsize PSU, dogg. The basic one is 350W, this is the 500W one. What’s worse is that this is miles better than what I would’ve dealt with if I’d gone with the HP I was initially going to buy.
|
# ? Apr 27, 2021 23:12 |
|
Ok Comboomer posted:That IS the upsize PSU, dogg

Oh that sucks, sorry. loving Dell (and double-loving-HP)
|
# ? Apr 27, 2021 23:17 |
|
SourKraut posted:Oh that sucks, sorry. loving Dell (and double-loving-HP)

Eh, I have a feeling a 3060+i7 will be plenty for me for the foreseeable future. My expectations/experiences are relatively low, coming from MacWorld, and this drought plus trends in the last five years of GPUs lead me to expect that we won’t be seeing any major leaps here shaking up the market for a good long while.

You’ll appreciate this, SK: 20 years ago my family’s 5 year old Mac Performa pizza box died. 5th grader me was pretty sure we’d end up with one of those iconic colorful G3s I’d experienced at the library, so imagine my horror and bemusement when a colossal white XPS tower showed up for Christmas with a note from “Santa” that my dad wrote in the customer note when he placed the order at work. That horror soon died with rogue squadron, and that computer was an unstoppable beast for all of 12 months before the dangers of the early 2000s internet filled it with molasses.

Anyways, right now I feel like I’m repeating history a bit. Turning my back on a colorful iMac that I’m quite keen on and would likely serve me well to buy an expensive Dell tower for the sake of “PeRfORmaNCe GaMiNg”. I also picked up a USB-C 4K monitor to go with it, and the plan is to probably end up with an Apple Silicon MacBook that will also make use of the monitor by the end of the year, so hopefully both systems will coexist nicely. Alternatively, I’ll either/also slap a Mac mini on top of the Dell tower for a Master-Blaster setup, or better yet- put an iMac next to the Dell display and try to dual-screen it. 24” at 4.5K and 27” 4K won’t look terrible together at all, no sir. They definitely won’t give you horrid eye strain and weird ocular confusion effects.
|
# ? Apr 27, 2021 23:42 |
|
I know you already ordered the K, but for anyone else: the K CPUs come with better coolers in the current XPS line, so it's probably worth it.
|
# ? Apr 28, 2021 04:57 |
|
~Coxy posted:I know you already ordered the K but I for anyone else the K CPUs come with better coolers in the current XPS line, so it's probably worth it.

yeah, that was the other part of it. Idk if it’s in this thread or the GPU one, but the base cooler is totally inadequate (like, ‘routinely hits ~100 degrees C and throttles’-inadequate) and people recommend either sourcing the upgraded Dell one or throwing in a Noctua NH-U9S with some 20mm screws. Hopefully this saves me the time + labor of sourcing and mounting a $70 aftermarket cooler (plus thermal paste, etc), even if micro center is like 10 min away.
|
# ? Apr 28, 2021 07:21 |
|
SourKraut posted:There wasn't an option to upsize the PSU? That's the only change I would have suggested, since Dell loves to use proprietary connectors on their loving "ATX" PSUs.

At least they're not using proprietary ATX connectors on their *motherboards* anymore. They used to change the pin-out config so you couldn't safely replace or upgrade the PSU without seeking one out that had the correct wiring, even though a standard ATX PSU would still physically plug in, because *reasons*. PC Power & Cooling sold Dell-wired units, and I nearly killed my P2 450 box trying to plug an Antec unit in, realizing poo poo was awry when I smelled burning plastic before I even hit the power button. Probably saved my PC.
|
# ? Apr 28, 2021 07:32 |
|
Ok Comboomer posted:Ok- XPS 8940 ordered: 3060Ti, 32gb RAM, 2TB SSD, 500W, I went with the 11700k for the above reason.

That's a good choice, should last you a good long while. The PSU should be fine: the 3060Ti pulls 200w at stock, and the 11700K pulls 190w at stock. If you lift the power limits it can chow down as much as 200-260w, but even then you'd still have some allowance in your PSU budget assuming it's anything decent
|
# ? Apr 28, 2021 07:34 |
|
BIG HEADLINE posted:At least they're not using proprietary ATX connectors on their *motherboards* anymore. They used to change the pin-in config so you couldn't easily replace or upgrade the PSU without seeking one out that had the correct connectors, but they made it possible for you to plug in a regular ATX PSU because *reasons*.

yeah, now they just use proprietary connectors too, and the power supply only supplies 12V, with the motherboard providing the other voltages. granted, the power supplies are high quality, but
|
# ? Apr 28, 2021 08:03 |
|
Wild EEPROM posted:yeah now they just use propriatary connectors too and the power supply only supplies 12v with the motherboard providing the other voltages

I'd love to get a 12v-only board and psu for sff builds but it seems to be limited to oem use.
|
# ? Apr 28, 2021 08:42 |
|
Ok Comboomer posted:Eh, I have a feeling a 3060+i7 will be plenty for me for the foreseeable future. My expectations/experiences are relatively low, coming from MacWorld, and this drought plus trends in the last five years of GPUs lead me to expect that we won’t be seeing any major leaps here shaking up the market for a good long while.

Yeah, I think that build will last you a good amount of time. I saw you mentioned the 3060 Ti in the GPU thread, but it should be able to easily handle a couple of 4K monitors for content/media creation.

I remember the colorful G3s fondly, and going to Circuit City or CompUSA (Apple section) to look at them and wishing I could own one. My family's first "modern" PC was a Packard Bell Pentium 133 that couldn't run anything well and that, to this day, I'm amazed didn't burn our house down given the poor build quality. I complained enough about it that my parents ultimately bought me some generic brand beige box from Sam's Club that had a Cyrix MediaGX processor in it that, while slightly better than the Packard Bell, still couldn't run anything well. But hey, "MediaGX" sounds really fancy and cool, right?

I had started to learn how to assemble my own system around this time, but was still apprehensive for some reason, so I ended up getting a "gaming focused" desktop with my first job, an eMachines eMonster 500. While everyone can and should poo poo on eMachines, I still have fond memories of it, and interestingly enough, it can still boot up and load Windows 2000 to this day. I joke that it's probably the only functioning eMachine in the present day.

Around this time, one of my best friends' dad got me interested in a new beta OS called OS X, and so ultimately I saved up and bought a PowerMac G4 Quicksilver and learned what actual good build quality was compared to all the lovely Wintel systems that I'd used before it.

Anyway, FWIW, I'm hoping they do colorful 27" (or probably 30" if the rumors are correct?) iMacs with the M2 or so, because I think they'll end up being really nice. I do have a 2018 Mac Mini with an eGPU connected via TB3 and it's surprisingly good, and the combination can handle any of the games with OS X support that I've been throwing at it. But the system you got from Dell should be pretty great for gaming too!
|
# ? Apr 29, 2021 08:22 |
|
ah poo poo here we go again https://twitter.com/FreeBSDHelp/status/1388280497097252866 how much performance do we lose this time
|
# ? May 1, 2021 18:22 |
repiv posted:ah poo poo here we go again
|
|
# ? May 1, 2021 21:58 |
|
I was hoping someone would do it, and they did, 11900k laptop https://twitter.com/JarrodsTech/status/1387946871906201600?s=20
|
# ? May 2, 2021 04:28 |
|
MaxxBot posted:I was hoping someone would do it, and they did, 11900k laptop Oh Clevo
|
# ? May 2, 2021 04:36 |
|
It's weird that it's easy to slap a desktop cpu in a laptop, but making a new high-wattage power brick is a total non-starter and you have to use 2 off-the-shelf bricks.
|
# ? May 2, 2021 05:20 |
|
Perplx posted:It's weird that it's easy to slap a desktop cpu in a laptop, but making a new high wattage power brick is a total non starter and you have to use 2 off the shelf bricks.

I'm not an EE, but my first guess would be that there are limits to the power you can push through a passively-cooled (and, in fact, sealed-in-plastic) transformer before it starts setting everything on fire.
|
# ? May 2, 2021 05:42 |
|
|
# ? Apr 20, 2024 04:04 |
|
https://m.youtube.com/watch?v=ysvZIcb3XAQ

This was also done for the 10900K
|
# ? May 2, 2021 05:46 |