|
SemiAccurate claims to have good info about Rome/Zen2 from a recent leak. Claims it's pretty exceptional. Anyone know what it is exactly? Paywall and such.
|
# ? Oct 29, 2018 14:30 |
|
|
I've been looking to jump ship and build all AMD for my next rig and I'm wondering what the sweet spot of waiting is. I'd originally planned for say late 2019/sometime in 2020, around whenever Cyberpunk 2077 comes out. Should I be looking at whenever Zen 2 and Navi come out? My ideal goal is to run max settings at 1440p and ~100hz (I'll be looking at a 144hz freesync monitor but in practice 100 fps is plenty imo). I know we're operating on a lot of rumors and speculation about stuff that hasn't been released yet, just trying to come up with a plan.
|
# ? Oct 29, 2018 15:37 |
|
Azuren posted:I've been looking to jump ship and build all AMD for my next rig and I'm wondering what the sweet spot of waiting is. I'd originally planned for say late 2019/sometime in 2020, around whenever Cyberpunk 2077 comes out. Should I be looking at whenever Zen 2 and Navi come out? My ideal goal is to run max settings at 1440p and ~100hz (I'll be looking at a 144hz freesync monitor but in practice 100 fps is plenty imo). I know we're operating on a lot of rumors and speculation about stuff that hasn't been released yet, just trying to come up with a plan.
|
# ? Oct 29, 2018 15:50 |
|
Hey guys, I'm here today with a quick tutorial on how to build a PC for a game. Step 1: Forget about buying a computer until a month or two before the game comes out. Seriously, the only reason to keep track of hardware you aren't going to use right away is because it's fun. Never ever buy poo poo in advance; yeah, there are times when prices have gone up, but odds are that buying hardware you aren't going to use right away is just throwing money away.
|
# ? Oct 29, 2018 16:01 |
|
I wanted to upgrade for like three or four years, but there just weren't any games I genuinely wanted to play on a new rig, so I waited until my components basically looked like they were about to eat it. People thought Black Ops 4 was going to be the next big game, and look what happened. Star Citizen, too. The same might be the case for Cyberpunk 2077; we still don't know whether the game will actually be good. Think of it like buying clothes: don't buy based on what you might wear some other time or when you lose weight, just make up your mind about the situation right now.

Also, you can always buy components piecemeal, like PSUs, cases, fans, and cooling first, so you still get something to play with. And something's always going to go south, which is when you realize you have to spend more money on cables, thermal paste, and all sorts of expenses you didn't account for at first.

I spent time waiting for things like Intel iGPU support for Variable Refresh Rate, which they promised like four years ago, along with other features, and look how much that amounted to. Better to decide based on the available facts, not what might be.

E: And as someone who waited that long to buy, there was never really a great time to buy. I got an eight-core processor with sixteen threads to show for it this year, but the RAM, SSD, and GPU prices sure weren't fun.

ufarn fucked around with this message at 16:31 on Oct 29, 2018 |
# ? Oct 29, 2018 16:20 |
|
Oh yeah, I'm definitely not buying anything ahead of time. I won't buy anything until I'm ready to build, was mostly just trying to get an idea for what will be coming down the pipe around that timeframe.
|
# ? Oct 29, 2018 16:36 |
|
The best time to buy new hardware is tomorrow. This continues to apply tomorrow. Such is life.
|
# ? Oct 29, 2018 17:25 |
|
Anyone think AMD is going to follow Intel's footsteps and do a Ryzen 9 this next generation?
|
# ? Oct 29, 2018 17:39 |
|
I mean, they narrowed the product stack with Zen+, opting not to float a 2800/X product. I don't see what artificially inserting a new product tier between Threadripper and R7 does for AMD, aside from wasting money keeping up with the Intelses. Intel's already in a bad enough spot as it is, with everything below i9 now being non-hyperthreaded parts. There's a saying in basketball: don't bail out a struggling shooter on the opposite team. (By which I mean, don't give them fouls and let them step to the free throw line.) SwissArmyDruid fucked around with this message at 17:50 on Oct 29, 2018 |
# ? Oct 29, 2018 17:47 |
|
Strictly speaking, that's essentially what Threadripper is. The i9 nomenclature is Intel's HEDT range these days. Their last set was Skylake-X, which had an i7 "entry-level" part for that platform, in much the same way that there's an 8C/16T Threadripper part. It's all pretty much workstation parts without the enterprise price tag (and concomitant service guarantees), which works out nicely because enthusiasts are more likely to build and service their own machines.
|
# ? Oct 29, 2018 17:52 |
|
Hmm... you know, you're right. i9s are about as thermally constrained as all get out, what with tech sites pretty much coming to the consensus that solder was done out of necessity rather than as a concession to popular demand. I don't know why I even assumed there would be any more thermal headroom for an -X tier part on top of what's already there. Just the idea of a 200W Intel processor in this day and age is kind of hilarious. The FX-9590 or whatever was bonkers enough as it was, with its 220W TDP. I dunno, marketing is a stupid thing. SwissArmyDruid fucked around with this message at 19:14 on Oct 29, 2018 |
# ? Oct 29, 2018 19:03 |
|
NewFatMike posted:Strictly speaking, that's essentially what Threadripper is. The i9 nomenclature is their HEDT range these days. Their last set was Skylake X which had an i7 "entry" level part for that platform, much in the same way that there's an 8C/16T Threadripper part.
|
# ? Oct 29, 2018 19:08 |
|
Combat Pretzel posted:SemiAccurate claims to have good info about Rome/Zen2 from a recent leak. Claims it's pretty exceptional. Anyone know what it is exactly? Paywall and such. From what I gather, it's just a rehash of the 8+1 core CCX rumors that were floating around a while ago.
|
# ? Oct 29, 2018 21:03 |
|
It's more than that, they're also claiming a server socket change-up, my guess would be for something like SP3+ enabling hexadecachannel (16-channel) memory, while still retaining backwards/forwards compatibility. It would indicate tiny rear end dies though, as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octo-channel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update. Lol, could AMD cheat bandwidth out for a native dual channel APU this way? Like, just MCM on another die which behaves as a 128-bit memory controller to help utilize the traces and feed the iGPU? More or less expensive/practical than just eDRAM, HBM or GDDR6? EmpyreanFlux fucked around with this message at 00:34 on Oct 30, 2018 |
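For scale, some napkin math on those channel counts. This is just peak theoretical bandwidth, assuming DDR4-3200 and the standard 64-bit DDR4 channel; the socket and chipset names above are pure speculation, and real sustained bandwidth is lower.

```python
# Napkin math for the channel-count speculation above: peak theoretical
# bandwidth scales linearly with channel count. Assumes DDR4-3200
# (3200 MT/s) and the standard 64-bit (8-byte) DDR4 channel width.
MT_PER_S = 3200          # mega-transfers per second (DDR4-3200)
BYTES_PER_TRANSFER = 8   # 64-bit channel = 8 bytes per transfer

def peak_gb_s(channels):
    """Peak theoretical bandwidth in GB/s for a given channel count."""
    return channels * MT_PER_S * BYTES_PER_TRANSFER / 1000

for name, ch in [("dual", 2), ("quad", 4), ("octo", 8), ("hexadeca", 16)]:
    print(f"{name:8s} ({ch:2d} ch): {peak_gb_s(ch):6.1f} GB/s")
```

So a hypothetical 16-channel socket lands around 400 GB/s peak, which is GPU-class bandwidth and goes some way toward explaining why the board complexity and power objections below come up.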
# ? Oct 29, 2018 22:35 |
|
Arzachel posted:From what I gather, it's just a rehash of the 8+1 core ccx rumors that were floating a while ago.
|
# ? Oct 29, 2018 23:07 |
|
EmpyreanFlux posted:It's more than that, they're also claiming server socket change up as well, my guess would be for something like SP3+ enabling Hexdecachannel memory, while still retaining backwards/forward compatibility. It would indicate tiny rear end dies though as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octochannel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update. If the Kaby Lake G parts are anything to go by, HBM would probably be the way to go. I'd want to see something stupid with that TR4/SP3 package size, like a socketed GPU on a dual socket board. What would be the use case? No clue. Lower profile for server blades, maybe?
|
# ? Oct 29, 2018 23:09 |
|
.
sincx fucked around with this message at 05:50 on Mar 23, 2021 |
# ? Oct 30, 2018 00:04 |
|
EmpyreanFlux posted:More or less expensive/practical than just eDRAM, HBM or GDDR6? GDDR6 on package or on mobo probably won't work out for the same reasons GDDR5 didn't on AM3. eDRAM has the bandwidth and reasonable power usage but costs too much to be practical.
|
# ? Oct 30, 2018 00:14 |
|
EmpyreanFlux posted:It's more than that, they're also claiming server socket change up as well, my guess would be for something like SP3+ enabling Hexdecachannel memory, while still retaining backwards/forward compatibility. It would indicate tiny rear end dies though as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octochannel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update. Less practical/more expensive. Extra memory channels pull significantly more power and require much more complex motherboard designs.
|
# ? Oct 30, 2018 00:16 |
|
Didn't we figure out that DDR4 has about as much bandwidth as the old eDRAM had? Is it latency that's the issue?
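Rough numbers as a sanity check. The Crystal Well eDRAM figure here is from memory (roughly 50 GB/s in each direction simultaneously), so treat it as approximate; the DDR4 math assumes a 64-bit channel.

```python
# Back-of-envelope: dual-channel DDR4 vs. Intel's Crystal Well eDRAM.
# The ~50 GB/s-per-direction eDRAM figure is from memory -- treat it as
# approximate. DDR4 assumes a 64-bit (8-byte) channel per stick of math.

def ddr4_dual_channel_gb_s(mt_per_s):
    """Peak theoretical bandwidth of dual-channel DDR4 in GB/s,
    shared between reads and writes (and the rest of the system)."""
    return 2 * mt_per_s * 8 / 1000

EDRAM_PER_DIRECTION = 50  # GB/s, read and write simultaneously

for speed in (2400, 3200):
    print(f"DDR4-{speed} dual channel: "
          f"{ddr4_dual_channel_gb_s(speed):.1f} GB/s (shared)")
print(f"eDRAM: ~{EDRAM_PER_DIRECTION} GB/s read + "
      f"~{EDRAM_PER_DIRECTION} GB/s write, on its own bus")
```

On raw throughput, fast dual-channel DDR4 is in the same ballpark, which suggests the eDRAM's advantages were latency and having a bus separate from system memory traffic rather than headline bandwidth.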
|
# ? Oct 30, 2018 00:19 |
|
Just a quick note: I hosed up and said "dual core", not "dual channel" as I intended, but I think everyone got that anyway.

PC LOAD LETTER posted:I think they'd have to go with HBM of some sort for it to make sense to feed an iGPU and give something closer to mid range-ish dGPU performance. Or at least give enough performance over current APU's and Intel iGPU's to stand out anyways.

HBM means either EMIB (which they may not have patents to) or TSV (and that induces a much higher failure rate for finished products), whereas a dedicated companion memory controller to feed the iGPU needs only an MCM, is my understanding. Is MCM vs TSV relatively similar risk in production?

Arzachel posted:Less practical/more expensive. Extra memory channels pull significantly more power and require much more complex motherboard designs.

The point I raised though was that, say, in theory AM4+ X570 boards come with quad channel as a feature for dual-die R7s, again just dumb speculation based on the 8+1 rumor. Hopefully that takes care of the inherent board complexity issue, but how much more power would a dedicated 128-bit IMC pull compared to say HBM, and what's the relative complexity of implementing each, along with failure rates in manufacturing and overall cost? Is it even feasible for a companion memory controller to exist in this fashion? My understanding is a tentative yes, as it wouldn't be that different from chipsets pre-Core2/Athlon64. Sorry, it's a kind of hopelessly large theoretical question and I don't mean to dump it at anyone's feet.
|
# ? Oct 30, 2018 00:53 |
|
Adored has apparently heard some of the same rumors as Charlie Demerjian: https://www.youtube.com/watch?v=K4xctJOa6bQ He now sounds confident that Rome will have eight 8-core CCXs around a central controller chip, for 64 cores per CPU. He also said something about Rome servers having a single NUMA domain per socket, which I guess would mean the memory controller is moved from the CCX to the central chip. Is that feasible?
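As a toy model of why one NUMA node per socket would matter: this is napkin math on the rumor, not a description of any shipped silicon, and the even-interleaving assumption is mine.

```python
# Toy model of why a central memory controller changes the NUMA picture.
# Naples (Zen 1): four dies per socket, two memory channels attached to
# each die, so a thread's memory is "local" only if it sits on the same
# die. The rumored Rome layout hangs every channel off one central chip,
# so all cores see all channels at the same distance: one NUMA node per
# socket. Assumption: memory interleaved evenly, threads not pinned.

def remote_fraction(dies_with_memory):
    """Fraction of random memory accesses that land on a remote die,
    given memory spread evenly across that many memory-owning dies."""
    if dies_with_memory == 1:
        return 0.0
    return (dies_with_memory - 1) / dies_with_memory

print(f"Naples-style, 4 NUMA nodes/socket: {remote_fraction(4):.0%} remote")
print(f"Rumored Rome, 1 node/socket:       {remote_fraction(1):.0%} remote")
```

The trade-off is that the "local" accesses Naples did get were cheap, while a central controller makes every access take one hop; whether the uniformity wins depends on implementation, as discussed below.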
|
# ? Oct 30, 2018 11:29 |
|
Has Adored actually gotten scoops on anything?
|
# ? Oct 30, 2018 12:13 |
|
Drakhoran posted:Adored has apparently heard some of the same rumors as Charlie Demerjian: I've got vague memories of ye olde HyperTransport needing a "directory" service for the memory hierarchy to scale beyond a few chips. The central die could hold the directory service, four InfinityFabric 2 (now silkier!) links, and a handful or two of memory channels. It shouldn't be outrageously big, right? There have been old GPUs with 512-bit RAM interfaces starting with the ATI 2900XT on some prehistoric process node. Although I believe physical interfaces don't scale down that much, or maybe it was the analog parts of them.
|
# ? Oct 30, 2018 14:35 |
|
I am eager for zen 2 news. Or just CPU news in general I guess? All this technical stuff is super interesting to me.
|
# ? Oct 30, 2018 16:56 |
|
OhFunny posted:Has Adored actually gotten scoops on anything? Recently he got some leaks regarding Turing. I don't remember it being that big of a deal, but a scoop it was.
|
# ? Oct 30, 2018 16:59 |
|
Yudo posted:Recently he got some leaks regarding Turing. I don't remember it being that big of a deal, but a scoop it was. Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM.
|
# ? Oct 30, 2018 17:11 |
|
repiv posted:Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM. I thought it was something more substantial and accurate, but I guess not.
|
# ? Oct 30, 2018 17:37 |
|
repiv posted:Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM. He totally nailed the 13-18% gen-on-gen performance gain, "maybe as much as 20% though!" Paul MaudDib fucked around with this message at 19:26 on Oct 30, 2018 |
# ? Oct 30, 2018 19:21 |
|
AMD apparently has new Vega 16 and 20 GPUs for mobile with HBM2: https://www.amd.com/en/graphics/radeon-pro-vega-20-pro-vega-16 They're already in the new MacBooks. Still 14nm, but I'll be interested in seeing how they do. As a more compute-focused architecture, I'm kinda surprised it's not in any mobile workstations.
|
# ? Oct 30, 2018 23:04 |
|
Hi thread. What's the general consensus on those laptop Raven Ridge APUs, and why are there so few (read as "none") high-end SKUs and only one mid-tier? I don't need to buy, but I want to buy. Is there a basic timeline, outside of magic ball stuff, for when another series of APUs will launch (12nm?), or is AMD happy with its position?
|
# ? Oct 30, 2018 23:10 |
|
.
sincx fucked around with this message at 05:50 on Mar 23, 2021 |
# ? Oct 30, 2018 23:20 |
|
A Bad King posted:What's the general consensus on those laptop Raven Ridge APUs, and why are there so few (read as "none") high-end SKUs and only one mid-tier? They're not power-efficient enough for road-warrior battery life, which keeps them out of the high-end ultraportables. And the GPU, while better than Intel's stuff, isn't good enough for high-end gaming laptops. I think they're great as a platform for companion laptops, i.e. for people with a primary desktop. But most people for whom the laptop is a secondary machine are sticking to an under-$1k purchase. (Also, laptops were always the place Intel was most pushy about locking out AMD.)
|
# ? Oct 30, 2018 23:37 |
|
sincx posted:They're about twice as fast as Intel's iGPU but around half the speed of a MX130. I'm guessing that's not fast enough for a high-end SKU. I can't seem to find the MX150 or 130 in anything ~13" with a not-trash IPS screen with greater than 80% sRGB... and Intel never put their GT3e/Iris tech in anything under ~25 watts this cycle. I was hoping for a unicorn... Just want a low-watt, <$1.2k, <3lb Lilliputian machine that does graphics gooder than a UHD 620, with decent build, >5hrs of web surfing, and USB-C... I have a monster desktop at home, but why can't I do work on the airplane tray table? Edit: Guess I'll wait for the next revisions, refurb my Yoga with its rotted-out battery, and hope. Thanks. A Bad King fucked around with this message at 23:44 on Oct 30, 2018 |
# ? Oct 30, 2018 23:39 |
|
sincx posted:They're about twice as fast as Intel's iGPU but around half the speed of a MX150. I'm guessing that's not fast enough for a high-end SKU. Careful about this, there are configurations of the MX150 that are clocked ~500 MHz slower, with the reduction in performance that brings (like ~30% IIRC).
|
# ? Oct 30, 2018 23:56 |
|
EmpyreanFlux posted:HBM means either EMIB (which they may not have patents to) or TSV (and that induces much higher failure rate for finished products), whereas a dedicated companion memory controller to feed the iGPU needs only an MCM, is my understanding.

It's just that the cost and power aren't worthwhile to add a dedicated companion memory controller to the package just to sit between the APU die and the on-package/mobo memory, even if total package power weren't an issue (it's a big one, unfortunately) and package pins weren't an issue (I don't think there are enough pins available to add on-mobo DRAM of any sort and maintain a similar level of I/O to what AM4 currently has; maybe if you scrapped nearly all the PCIe lanes it could work? I don't think they'd do that though). I think if AMD wanted to put some sort of higher-than-system-DRAM bandwidth on package to boost iGPU performance for their APUs, they'd just engineer another memory controller into the APU die itself. They did do an on-die GDDR5 controller for some of the next-to-last or last gen AM3 APUs, I believe, but they never actually added the GDDR5 to the package. It's unknown why.

EmpyreanFlux posted:Is MCM vs TSV relatively similar risk in production?

EmpyreanFlux posted:in theory AM4+ X570 boards come with quad channel as a feature for dual die R7s

Drakhoran posted:He also said something about Rome servers having a single NUMA domain per socket which I guess would mean the memory controller is moved from the CCX to the central chip. Is that feasible?

But that is on paper, so to speak. It's all about trade-offs, and without benches it could really go either way. Good implementation will be critical for that approach working well in the real world.

PC LOAD LETTER fucked around with this message at 00:26 on Oct 31, 2018 |
# ? Oct 31, 2018 00:20 |
|
.
sincx fucked around with this message at 05:50 on Mar 23, 2021 |
# ? Oct 31, 2018 00:22 |
|
sincx posted:If you are willing to format and do a clean install of windows: That one doesn't meet his qualifications; the screen falls far short of the >80% sRGB requirement. (99% of laptops are not gonna meet that list of qualifications regardless of CPU. A pro-quality screen on a sub-3lb machine for under $1200? Good luck.)
|
# ? Oct 31, 2018 00:33 |
|
Klyith posted:
I'm not asking for Adobe, just sRGB, which is more common than you think. Apparently the D has 73% sRGB and poor color gamut. But for ~$550... dang.
|
# ? Oct 31, 2018 00:49 |
|
|
A Bad King posted:I can't seem to find the MX150 or 130 in anything ~13" with a not-trash IPS screen with greater than 80% sRGB... and Intel never places their GT3e/Iris tech in anything under ~25watts this cycle. I was hoping for a unicorn... I think this is another case for the HP Envy x360 13z, but I am growing more and more leery of recommending that by the day. My brother texted me for help the other day, and not only are there STILL no drivers or BIOS updates or anything of any sort on HP's support pages for his 2-in-1, but HP's own self-updating tool tried to download and install an Intel BIOS this past weekend, stopped only by the updater utility not being able to parse the update. I'm starting to think there's corporate drama and shenanigans going on at HP surrounding this product.
|
# ? Oct 31, 2018 04:48 |