|
https://www.youtube.com/watch?v=pCDJERMTL3s TL;DW is that the Athlon is better price/perf than the Pentium 5400, and that despite a similar number of ALUs it still savages Intel HD, and occasionally jumps ahead of the A12-9800 in gaming. 1) Holy poo poo are the older APUs choked to hell on the GPU side. 2) Holy poo poo Intel fix your loving drivers, in Firestrike an HD 630 is on par with Vega 3 but here it gets mangled. 3) AMD shouldn't have locked it, as unlocked it'd be pissing on the Pentium's grave. EmpyreanFlux fucked around with this message at 20:26 on Sep 18, 2018 |
# ? Sep 18, 2018 20:23 |
|
|
i wouldn't put it past them to put up an overclockable 242GX that's 5-8 dollars more expensive than the 240GE
Anime Schoolgirl fucked around with this message at 23:35 on Sep 18, 2018 |
# ? Sep 18, 2018 23:28 |
|
Cygni posted:Didn't see this mentioned, AMD launched more of the Zen+ stack today. 2700E and 2600E (45w limited parts), 2500X (4/8), and 2300X (4/4) So are the E parts OEM-only as well? The article only calls out the X ones as being OEM limited.
|
# ? Sep 19, 2018 17:17 |
|
Munkeymon posted:So are the E parts OEM-only as well? The article only calls out the X ones as being OEM limited. I really hope not. Intel sells boxed T series CPUs, so I'm hoping for parity there.
|
# ? Sep 19, 2018 21:15 |
|
Anime Schoolgirl posted:i wouldn't put it past them to put up an overclockable 242GX that's 5-8 dollars more expensive than the 240GE I could see a 220GE (3.4GHz/3CU; $60), a 240GE (3.5GHz/6CU; $70) and a 260GX (3.6GHz/6CU, unlocked; $80). I mean on paper that 260GX is terrible value so close to the 2200G, but it'd definitely maul anything sub-$100 Intel still, while a 240GE would be an obvious price/perf winner.
|
# ? Sep 19, 2018 21:24 |
|
Sooo, after this upcoming 7nm fabrication release, are AMD/Intel looking to push the shrink even further, or will we have reached the limit of what physics will allow by then? Or will it be an elongated mission over 5 or 10 years to push down to 5 or 3nm because of the increase in difficulty? During that time they could sell a chip on new features or a vast number of cores while they slowly fab down to 5nm or less? I mean, surely the effort required is at the top end of an exponential difficulty curve at the moment?
|
# ? Sep 20, 2018 04:07 |
|
mdxi posted:I really hope not. Intel sells boxed T series CPUs, so I'm hoping for parity there. Aw poo poo. 2700E has a page on AMD's site now. The only part info given is: "OPN Tray: YD270EBHM88AF" Every other retail CPU in the Ryzen line has had a part ending in "BOX", like the 2700X's "OPN PIB: YD270XBGAFBOX"
|
# ? Sep 20, 2018 04:38 |
|
apropos man posted:Sooo, after this upcoming 7nm fabrication release, are AMD/Intel looking to push the shrink even further, or will we have reached the limit of what physics will allow by then? 7nm is going to be around for a looooongg while. Think 7nm++++ By the time they're done we are probably looking at consumer grade parts (NOT even TR) with 16c/32t @ 5.5 GHz for ~<150W though, so it's not all bad.
|
# ? Sep 20, 2018 05:05 |
|
apropos man posted:Sooo, after this upcoming 7nm fabrication release, are AMD/Intel looking to push the shrink even further, or will we have reached the limit of what physics will allow by then? tl;dr: 3nm will probably be the practical limit, mostly due to economics, and for many cost-focused products they won't ever go past 5nm. There are also huge and real physics problems with trying to mass-manufacture chips at less than 3nm that will probably make sub-3nm untenable too. apropos man posted:Or will it be an elongated mission over 5 or 10 years to push down to 5 or 3nm because of the increase in difficulty? Even Intel, for instance, has been rumored to be renaming their improved 7nm processes "5nm" or even "3nm" instead of 7nm+ or 7nm++.
|
# ? Sep 20, 2018 05:29 |
|
Hmm. Yes. Let's give the time taken to beat 7nm an arbitrary length of time. Say 10 years of R&D. That delay could cause a shakeup in how we use our PCs, do ya think? We could all be using PCs with many more cores, or perhaps your average PC lover would end up having a cluster of machines around the house, all working together, since the compute power of one CPU has hit a bottleneck? Maybe we'll see gaming being delivered as a service, so that you only need a basic terminal and the game streams from Steam? Maybe we'd go through a period of PCs becoming extremely optimised for I/O etc, since that is the main area of development left as CPU designs slow to a snail's pace.
|
# ? Sep 20, 2018 05:33 |
|
PC LOAD LETTER posted:
Seriously? How can they announce blatant lies if they're selling 7nm fabbed chips and passing them off as 5 or 3?
|
# ? Sep 20, 2018 05:38 |
|
Well if things continue to follow the current trends you'll see lots more cores at a minimum. How that gets implemented exactly will be where things get interesting (i.e. chiplets, die stacking, etc.). I think the biggest possible shake-up could be if we see real big and powerful FPGAs get integrated into these future CPUs as a standard feature. Along with possibly some sort of "AI" machine learning engine of sorts. You're starting to see some of that being integrated into some of these new phone/mobile SoCs. I think right now they're not very practical since the software to make proper use of them in interesting ways isn't really there yet, but potentially they could become a real big deal. apropos man posted:Seriously? How can they announce blatant lies if they're selling 7nm fabbed chips and passing them off as 5 or 3? PC LOAD LETTER fucked around with this message at 05:41 on Sep 20, 2018 |
# ? Sep 20, 2018 05:39 |
|
Yep, I do agree that a bottleneck could alter the home computer experience. Aren't FPGAs sluggish, though? Because the circuits inside them are adaptable to whatever program you give them, there's a performance penalty due to the complexity compared to a traditional CPU.
|
# ? Sep 20, 2018 05:45 |
|
apropos man posted:Aren't FPGAs sluggish, though? Because the circuits inside them are adaptable to whatever program you give them, there's a performance penalty due to the complexity compared to a traditional CPU. The gotcha there is that the FPGA has to be properly programmed to achieve that level of performance, and doing so is usually not easy; it also won't be good at general-purpose tasks. But you'd have a "traditional" x86 CPU there to handle the general-purpose/legacy stuff in this case, so you wouldn't need it to be good at general-purpose stuff. Programming the FPGA would potentially be handled by the software guys, so when you'd go to load DOOM2026 or whatever, the software would also set up the FPGA to run a given task instead of trying to run it on the CPU or over on the GPU. (edit) Doing that would free up either of those devices to do more work elsewhere, or just to potentially use much less power (i.e. the CPU cores would turn themselves off mostly, or go to a low power state, while most of the heavy lifting would be done more efficiently + faster by the FPGA). At least that is potentially how it'd be done. Whether or not things pan out that way is something that remains to be seen. PC LOAD LETTER fucked around with this message at 05:56 on Sep 20, 2018 |
# ? Sep 20, 2018 05:54 |
|
apropos man posted:Yep, I do agree that a bottleneck could alter the home computer experience. You're getting things a bit twisted around. An FPGA attempting to emulate a full traditional CPU will be slower, because FPGAs usually can't do all that work at a high rate of speed at the same price point. But what you're doing with an FPGA in a situation like this is setting it up to only handle a reduced set of things, and it is then able to perform that restricted set of tasks much faster than a general-purpose CPU of similar cost and power budget. In this situation of integrating a bunch of additional FPGA space with the CPU, you'd have people customizing their setup towards some particular task they do often - say routines to speed up video rendering - while using the typical CPU for other tasks. You might even have it set up to switch between a few particular feature sets as needed, though you can't quite do that on the fly. So there'd need to be some wait time while the FPGA is reprogrammed to switch from a profile for fast video rendering over to a profile for accelerating the crunching of a large dataset for an "AI" thing. As to whether the benefits of including the FPGA for such a use are really worth having it in a future CPU design, it's hard to say. But it's not because an FPGA is always slower or anything.
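To make the profile-switching idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration (class names, timings); real FPGA work goes through vendor toolchains and bitstreams, but the shape of the trade-off - a reconfiguration delay when you swap tasks, then fast runs while a profile is resident - looks like this:

```python
import time

# Toy model of the profile-swapping idea: switching tasks means "reloading a
# bitstream" first, which costs real time, unlike dispatching to a CPU core.
class ToyFPGA:
    def __init__(self):
        self.profile = None

    def load_profile(self, name, reprogram_secs=0.01):
        """Pretend to reconfigure the fabric; a no-op if already loaded."""
        if self.profile != name:
            time.sleep(reprogram_secs)  # stand-in for reconfiguration latency
            self.profile = name

    def run(self, task):
        if self.profile != task:
            raise RuntimeError(f"profile {self.profile!r} can't run {task!r}")
        return f"accelerated {task}"

fpga = ToyFPGA()
fpga.load_profile("video_render")
print(fpga.run("video_render"))   # fast path once the profile is resident
fpga.load_profile("ai_crunch")    # pays the reprogramming cost on switch
print(fpga.run("ai_crunch"))
```

The point of the sketch is just the `load_profile` gate: unlike a context switch on a CPU, the wrong "profile" can't run the task at all until the (slow) reconfiguration step has happened.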
|
# ? Sep 20, 2018 06:04 |
|
Thanks for all the detailed replies. I wonder if we may end up seeing PCs with several FPGAs, perhaps a bank of them. One could be programmed to act as a wireless adaptor, another as a USB chipset, another as a RAID controller, etc etc. That approach could lead to some really tidy little PCs, since there would be a lot of modules on the board with the same dimensions for placement.
|
# ? Sep 20, 2018 06:04 |
|
apropos man posted:One could be programmed to act as a wireless adaptor, another as a USB chipset, another as a RAID controller, etc etc. These things, currently, are the opposite of a general-purpose processor. They are examples of dedicated hardware built for a relatively simple task, which allows them to be small, fast and efficient. They are also very cheap to produce. FPGAs are expensive like a CPU, but can be programmed repeatedly to optimize certain tasks. edit: perhaps the next step is faster communication between an APU and separate graphics hardware. I think this is possible with AMD stuff, but not very practical. LRADIKAL fucked around with this message at 09:08 on Sep 20, 2018 |
# ? Sep 20, 2018 09:05 |
|
I would like to add that none of what has been discussed so far has even scratched the surface of moving to a different material. All of the current projected woes are due to the physical limits of Si or SiGe, there's an entire other bag of problems that may arise if the industry moves to a different substrate, such as InGaAs, or maybe we finally tackle graphene some time within the next decade, or something a little less exotic, like boron-doped diamonds.
|
# ? Sep 20, 2018 09:41 |
|
Is there a paper available for a deeper dive on the benefits to moving to a new material? Like besides allowing smaller node sizes, do they enable better performance in general? Would it be worth it to migrate such techniques back up the stack to 12nm, 28nm and 40/45nm?
|
# ? Sep 20, 2018 10:29 |
|
The general goals of moving to new materials are 1. Making the electrons inside go faster so the transistors switch faster so computers go faster 2. Using fewer electrons to make a transistor switch so that it consumes less power and doesn't get literally hotter than the freaking sun when you pack billions of them together And certain III-V materials can do one or the other, given their crystal structure. Even with our current materials there's a ton of research being done on new transistor "mechanisms", like single-atom transistors or spintronic ones that don't have to worry about pushing electrons at all.
|
# ? Sep 20, 2018 15:54 |
|
Additionally: there's no reason to suppose any of them will work.
|
# ? Sep 21, 2018 01:13 |
|
Yes. That is a BIG problem. Some of them are already in use *now*, but not in the kind of ways that we see silicon currently being used at that kind of resolution and detail. The industry might burn as much money as they have to get to 7nm now, just to get one of those materials working at a node that we would consider enormous by comparison.
|
# ? Sep 21, 2018 02:40 |
|
apropos man posted:Sooo, after this upcoming 7nm fabrication release, are AMD/Intel looking to push the shrink even further, or will we have reached the limit of what physics will allow by then? Continued shrinks might cause longevity issues we haven't encountered much before. Degradation due to use becomes a concern eventually at smaller sizes. That's one thing people usually fail to mention because it's not as cool as tunneling or density related issues. Khorne fucked around with this message at 22:25 on Sep 21, 2018 |
# ? Sep 21, 2018 22:15 |
|
apropos man posted:Maybe we'd go through a period of PCs becoming extremely optimised for I/O etc, since that is the main area of development left as CPU designs slow to a snail's pace. This is already where we've gone in enterprise. Because CPUs haven't been improving all that much in comparison to the compute load humanity demands of the internet, in storage alone we are: gutting protocol overhead (SATA/SAS to NVMe), eliminating the need for storage protocols altogether (3D XPoint DIMMs now sold as "pmem"), and making relatively-secure storage retrieval happen directly from network interface to block device without even going through the CPU (RDMA). Both DX12 and Vulkan are minimizing the involvement of system calls. The CPU is being eliminated as a middleman where possible. Potato Salad fucked around with this message at 02:39 on Sep 22, 2018 |
# ? Sep 22, 2018 02:37 |
|
I have an ASRock AB350M Pro 4 running a Ryzen 5 1400. It currently has a 256gb SSD on the M.2 Ultra port on the board. I’m thinking about taking another 256gb SSD on SATA and making a raid1 array for increased read performance. My two questions are: 1) will the AMD Raid1 implementation actually increase read speeds? A cursory google seems to say “maybe, maybe not” and it seems to be mostly raid controller manufacturer dependent 2) Am I hamstringing myself in any way by trying to do this on the M.2 and SATA bus combined?
|
# ? Sep 22, 2018 20:16 |
|
I wish mainboard manufacturers would go into better detail about the topology of the things on their mainboards. I have three M.2 slots on the X399 Taichi, and from the IO schema of the Threadripper system, I know that one of these is on the X399 chipset with all other devices. The manual however doesn't specify which one :[
|
# ? Sep 22, 2018 20:33 |
|
Jim Silly-Balls posted:I have an ASRock AB350M Pro 4 running a Ryzen 5 1400. What model of SSD do you currently have? The M.2 port can take both SATA and NVMe SSDs. If you aren't using an NVMe drive, I'd look into upgrading to one of those before thinking about RAID. In RAID1, all write operations must complete on all disks before the next operation can occur, so the write speed will be dependent on the slowest drive. If you're mixing an NVMe drive with a SATA drive, you're giving up a lot of write speed.
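To put some rough numbers on the "slowest drive" point, here's a back-of-the-envelope sketch in Python. The MB/s figures are made up for illustration, not measured from these drives:

```python
# Back-of-the-envelope RAID1 throughput model; the MB/s figures are invented.
m2_read_mb_s = 1000    # hypothetical PCIe M.2 SSD
sata_read_mb_s = 550   # hypothetical SATA SSD

# Writes must be committed to every mirror, so the array writes no faster
# than its slowest member.
mirror_write = min(m2_read_mb_s, sata_read_mb_s)

# Reads can be served from either mirror; the theoretical ceiling is the sum,
# though real controllers rarely get anywhere near it.
mirror_read_ceiling = m2_read_mb_s + sata_read_mb_s

print(f"write ceiling: {mirror_write} MB/s")        # limited by the SATA drive
print(f"read ceiling:  {mirror_read_ceiling} MB/s")
```

In other words, mirroring the fast M.2 drive with a SATA drive caps writes at the SATA drive's speed, while the read-side upside depends entirely on how well the controller splits requests.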
|
# ? Sep 22, 2018 21:29 |
|
It’s this, which looks like an SSD? https://www.amazon.com/Samsung-XP941-256GB-AHCI-MZHPU256HCGL/dp/B00J9V53M6
|
# ? Sep 22, 2018 21:33 |
|
I have been meaning to do an effortpost, but you would have a decent use case for StoreMI, which you can get for free. It'll put the drives together and, as it's a tiered storage solution, it'll put your more commonly used junk onto the faster device. The above drive uses PCI Express lanes, so it's faster than SATA (though the XP941 is an AHCI drive, not NVMe). Also 256? At least put 500 on the SATA device. LRADIKAL fucked around with this message at 21:51 on Sep 22, 2018 |
# ? Sep 22, 2018 21:49 |
|
Jim Silly-Balls posted:a raid1 array for increased read performance. Increased read performance of what? If you haven't characterized your workload, that's an impossible question. RAID1 does increase read speed, but not like a stripe set does. IIRC one of the main performance boosts is to random access times, which is much more noticeable on an HDD than an SSD. (If your workload is "a PC running desktop programs and games" then no, this is not worth doing for purely performance reasons.) Also, I'm not even sure AMD RAID supports NVMe on non-X399 chipsets, let alone mixed NVMe and SATA RAID.
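One cheap way to characterize a workload before buying anything is to compare random 4K reads against a sequential pass over the same file. A rough sketch (the scratch-file name is hypothetical; create a file bigger than your RAM first if you want to dodge the page cache):

```python
import os
import random
import time

def read_throughput_mb_s(path, random_access, count=512, block=4096):
    """Time `count` reads of `block` bytes at random or sequential offsets."""
    size = os.path.getsize(path)
    offsets = (
        [random.randrange(0, size - block) for _ in range(count)]
        if random_access
        else [i * block for i in range(count)]
    )
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered to skip Python's cache
        for off in offsets:
            f.seek(off)
            f.read(block)
    return count * block / (time.perf_counter() - start) / 1e6

# SCRATCH is a hypothetical test file; results only mean much if it's larger
# than available RAM (or you drop the OS page cache before each run).
SCRATCH = "scratch.bin"
if __name__ == "__main__" and os.path.exists(SCRATCH):
    print(f"sequential: {read_throughput_mb_s(SCRATCH, False):.0f} MB/s")
    print(f"random 4K:  {read_throughput_mb_s(SCRATCH, True):.0f} MB/s")
```

If the two numbers are close (as on an SSD), RAID1's random-access benefit has little left to add; a big gap between them is the HDD-style profile where mirroring historically helped.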
|
# ? Sep 22, 2018 21:56 |
|
It’s a gaming pc so the workload is loading games, I forgot to mention that. I’ll look into an nvme for the m.2 slot
|
# ? Sep 22, 2018 22:37 |
|
Jim Silly-Balls posted:I have an ASRock AB350M Pro 4 running a Ryzen 5 1400. If your NVMe drive doesn't have enough read performance for whatever it is you're doing, I suspect you're not going to like the price tags on anything that can go faster. You'll need to build from the ground up to go fast if speed is actually important, not kludge mixed drives into a generic desktop PC. What's your workload and where are you seeing bottlenecks? If you just need more space get a single bigger SSD; prices are lower than ever and it's not worth the headache to get RAID playing nicely. edit: if your workload is games then yeah, it doesn't matter; you can save money by getting a SATA SSD instead of NVMe if you want and you probably won't notice a difference. isndl fucked around with this message at 22:40 on Sep 22, 2018 |
# ? Sep 22, 2018 22:37 |
|
LRADIKAL posted:I have been meaning to do an effortpost, but you would have a decent use case for StoreMI which you can get for free. It'll put the drives together and as it's a tiered storage solution, it'll put your more commonly used junk onto the faster device. PrimoCache is better than StoreMI, according to LTT: https://www.youtube.com/watch?v=rWXBo0bb_dU
|
# ? Sep 23, 2018 00:48 |
|
Jim Silly-Balls posted:It’s a gaming pc so the workload is loading games, I forgot to mention that. While your drive isn't an NVMe drive, it's still faster than a SATA SSD. Depending on the game and what needs to be loaded, a faster drive may not actually help. Game data is often stored on your disk compressed (to save space), and when it is read off your drive it will need to be decompressed (by the CPU) before it can be used. It won't matter that the compressed data is read faster if the CPU can't keep up. We see a large reduction in loading times when going to SATA SSDs from SATA HDDs because we can read the compressed game data fast enough for the CPU to be the bottleneck. We don't typically see that reduction continue when moving from SATA SSDs to NVMe SSDs.
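You can see this bottleneck in miniature with Python's zlib: time the decompression on the CPU and compare it with how long an SSD would take just to read the compressed bytes. The drive speeds below are assumed round numbers, and the "asset" data is invented:

```python
import time
import zlib

# Stand-in for packed game assets: repetitive, so it compresses well (invented data).
payload = (b"textures and meshes " * 4096) * 64   # ~5 MB uncompressed
blob = zlib.compress(payload, level=6)

start = time.perf_counter()
out = zlib.decompress(blob)
cpu_secs = time.perf_counter() - start

# Time just to READ the compressed bytes at assumed drive speeds.
sata_secs = len(blob) / 550e6    # ~550 MB/s SATA SSD
nvme_secs = len(blob) / 3000e6   # ~3 GB/s NVMe SSD

print(f"decompress on CPU: {cpu_secs * 1000:.2f} ms")
print(f"read from SATA:    {sata_secs * 1000:.2f} ms")
print(f"read from NVMe:    {nvme_secs * 1000:.2f} ms")
# When the CPU line dominates, a faster drive barely moves total load time.
```

Real games use different codecs and less compressible data than this toy blob, but the comparison is the same: once the decompress time exceeds the read time, the drive stops being the bottleneck.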
|
# ? Sep 23, 2018 01:43 |
|
Hmm, so my main problem is in VR, playing games like Elite Dangerous: it will flash the SteamVR logo every once in a while, which usually means it's trying to load something and couldn't do it in time to avoid rendering issues. I assumed that was a storage speed issue, but if that drive is actually PCIe I have a hard time believing that. The rest of the PC is as follows: Ryzen 5 1400, 8GB DDR4 @ 2128MHz, GTX 1060 6GB. Maybe it's a RAM issue then? 8GB isn't much for a VR rig
|
# ? Sep 23, 2018 15:14 |
|
I would tend to guess that the problems are lack of CPU and RAM, and perhaps some degree of GPU horsepower, not storage speed. E:D is extremely procedural, so even more than an average game "loading" stuff is very, very computationally expensive.
|
# ? Sep 23, 2018 15:32 |
|
E:D has a bunch of frame stalls that are unrelated to loading assets from the HD. Half of those are Scaleform doing dumb poo poo. I suspect you may be memory bound, so adding more may help.
|
# ? Sep 23, 2018 15:39 |
|
A stock 1400 with 2133 memory isn't going to have amazing single-threaded performance either.
|
# ? Sep 23, 2018 15:42 |
|
The Ryzen 5 1400 was a slow CPU when it was new and saddling any Ryzen with 2133 MT/s RAM is going to tank performance, for sure. I would look into overclocking the CPU if you can and getting the cheapest kit of 2933/3200 RAM that you can, period. If overclocking isn't an option you should in theory be able to put a better-performing CPU in there, like a 2600X, with a BIOS update.
|
# ? Sep 23, 2018 16:06 |
|
|
I got this PC because it was super cheap on Facebook marketplace and it’s an upgrade from my old rig so I’m willing to accept it might need some upgrades. I’ll start with 16gb of faster RAM because no matter what I’ll need that
|
# ? Sep 23, 2018 16:34 |