|
When you are using gsync, the vsync toggle in the driver effectively turns into an FPS cap. Enable it and gsync will cap your framerate at the monitor's maximum 144 Hz refresh rate, so you won't experience any tearing even at high framerates. Disable it and the GPU will run past 144 FPS if it can and you will experience tearing (which may be less visible to you at >144 FPS). I say leave the global vsync toggle enabled for most people on gsync monitors, with the possible exception of competitive twitch shooter players. There is little point to going over the monitor's maximum refresh otherwise, and capping comes with some potential power/heat/noise savings beyond just eliminating tearing. Also, set your Windows desktop to 120 Hz and you can enjoy a higher desktop refresh while still having the GPU idle down to standard 100-200 MHz 2D clocks.
|
# ¿ Sep 18, 2016 02:53 |
|
Shrimp or Shrimps posted:But do you want to swap to vsync for input lag concerns if FPS exceeds refresh rate? Wouldn't capping your FPS at, say, 143 frames be a better alternative?
|
# ¿ Sep 18, 2016 13:00 |
|
Space Racist posted:Nothing I’m aware of - as mentioned earlier it’s a clean install from a few days ago so there’s not much cruft yet. All I had was the Nvidia driver package, Precision XOC, and I also ran the Nvidia Inspector to try to force the GPU to downclock with multi monitors. I uninstalled Precision, deleted Inspector after disabling the downclock attempt, and then used DDU to remove and reinstall the Nvidia driver package just in case Inspector left anything behind. In the nvidia control panel, try toggling "debug mode" in the help menu (it forces the card to reference clocks and voltages, even for factory overclocked cards). I know my 1080 stops idling when you connect more than 2 monitors to it; my primary display is 1080p/240 Hz and my secondary is 1200p/60 Hz. If I connect a third display my card goes to a high idle of around 1100 MHz, not the full 1600 MHz base clock but much higher than the default 139 MHz idle. 2x 2160p/60 monitors do require more bandwidth to drive, but the card should still boost when called for even if it isn't idling anymore. You said it's an RMA replacement card? Perhaps examine the BIOS on it and make sure it hasn't been tampered with (like someone disabling boost and undervolting the card in order to maintain peak efficiency as a mining card). Though last time I checked, which was admittedly a LONG time ago, it wasn't possible to flash Pascal cards with a custom BIOS yet...
|
# ¿ Sep 5, 2018 01:00 |
|
Aeka 2.0 posted:I just realized my motherboard has been running my 3000mhz ram at 2100 and I just assumed that's what i bought because my brain is poo poo. Why would it do this when set to auto? Because auto is the JEDEC standard that guarantees compatibility; to actually get the speed the manufacturer claims, you have to switch it from auto to XMP.
|
# ¿ Oct 7, 2018 16:04 |
|
I had a factory overclocked evga 980 that was unstable and would lock up every few hours of gaming, causing the driver to reset and make whatever game I was playing crash to the desktop. RMAed it and the replacement was better but still did it at least once every few days. And it wasn't even happening at the highest frequency/power/load; at the time I was using a 60 Hz 1200p display with adaptive vsync, so quite often the card would crash at 70-90% power but would be fine at 100%. However, I completely eliminated the issue by loading up MSI Afterburner and just dragging the power target and voltage target to the maximum values without applying any further overclock (it boosted to 1.4 GHz out of the box anyway). I loaned the card to a friend and told him to do the same thing; it is still working perfectly fine to this day and never crashes on him. Basically, depending on the silicon lottery, the card may dip into voltages that are too low to be stable at a given boost frequency, and it can be fixed by just globally tuning up the voltage.
|
# ¿ Oct 14, 2018 14:14 |
|
I recently had a Seasonic die on me, a G series 360 W 80+ Gold, at about 4 years of run time (near 24/7; it was powering a pfsense router box that tended to hover at around 40 W at the outlet according to the battery backup it is attached to). Anime Schoolgirl posted:doesn't the 2080ti draw like 600w at very brief moments Probably, but harmlessly. All GPUs and CPUs can cause momentary (less than a millisecond in duration) current spikes well into multiples of the rated TDP; this is what the capacitors are for. The important thing is that the average power over a second or two shouldn't exceed the specifications.
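A toy illustration of that averaging point (all numbers here are hypothetical, not measurements of any particular card): brief transients can triple the instantaneous draw while barely moving the sustained average, which is what the PSU's rating actually has to cover.

```python
# One second of hypothetical power draw sampled at 1 kHz: a card
# averaging ~250 W with a sub-millisecond 600 W transient every 100 ms.
samples_w = [250.0] * 1000
for i in range(0, 1000, 100):
    samples_w[i] = 600.0

peak_w = max(samples_w)                  # 600 W, absorbed by the capacitors
avg_w = sum(samples_w) / len(samples_w)  # ~253.5 W, what the sustained rating must cover
```

The 1 ms spikes land far above the nameplate rating, yet the one-second average only rises by a few watts.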
|
# ¿ Oct 15, 2018 01:12 |
|
I went from a 680, to a 980, to a 1080 (and water cooled the 1080), and at the moment I have very little interest in the 2000 series. The only cards fast enough to really be worth the effort to tear everything down and rebuild start at $1200, and that is before throwing a full cover water block on one. I think I'm going to sit it out till whatever they put out on 7 nm hits; hopefully by then we will see more widespread use of RTX hardware and will have some idea about how it actually performs too.
|
# ¿ Oct 16, 2018 02:39 |
|
Identifying a CPU limit in a game is trivially easy: disable AA modes and crank the resolution down (like take that upscaling setting and move it to negative 50%). If your FPS barely changes at all, you are at a CPU limit. If on the other hand it goes up significantly, then it's a GPU limit and you shouldn't worry about the CPU. Also, I don't know if it is the DRM or what, but Ubisoft does suck at this: Far Cry 5 on a 4.8 GHz i7-7700K/GTX 1080 is totally CPU limited even at 1080p with a heavy AA mode. The base engine they are using is like 10 years old at this point; you would think they would have ironed out some optimizations by now.
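That test can be sketched as a crude heuristic. The 10% cutoff below is my own made-up threshold, not anything official; the idea is just "if dropping the render resolution barely moves FPS, the CPU was the limit":

```python
def likely_bottleneck(fps_at_native, fps_at_low_res, threshold=0.10):
    """Compare framerate at native resolution vs. with resolution cranked
    way down: a big jump means the GPU was the limit, a flat result means
    the CPU is the limit."""
    gain = (fps_at_low_res - fps_at_native) / fps_at_native
    return "CPU" if gain < threshold else "GPU"

print(likely_bottleneck(100, 104))  # barely moved -> CPU limited
print(likely_bottleneck(60, 110))   # big jump -> GPU limited
```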
|
# ¿ Oct 20, 2018 23:58 |
|
TheFluff posted:Really wasn't planning to spend this Sunday afternoon computer janitoring, but alas, I noticed colors were looking wonky, and sure enough the monitor had got completely stuck in YCbCr 422 mode for no apparent reason whatsoever. Attempting to change back to RGB from the Nvidia control panel was impossible - changing away from "use default color settings" to "use NVIDIA color settings" and hitting apply just immediately went back to default. Doing a clean driver reinstall via Geforce Experience changed jack poo poo. Rebooting multiple times changed jack poo poo. Booting into safe mode did solve it while in safe mode, so it was clearly a driver issue. Yeah, I can kind of see why nvidia is still using an expensive FPGA for gsync HDR; there is no point to taping out an ASIC for it in the absence of enough DisplayPort or HDMI bandwidth to drive HDR 4:4:4 at the 144 Hz target refresh rate. With the current 8b/10b encoding used in HDMI and DP, you'd need 60 Gbps to deliver 4k HDR at 144 Hz. If they ever target 4k HDR 240 Hz they would need over 100 Gbps on the cable to pull it off.
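The back-of-the-envelope version of that bandwidth math, as a sketch (the resolution, refresh rate, and 30 bpp for 10-bit-per-channel RGB HDR are my assumed inputs; blanking intervals are ignored here and would push the real figure higher, toward the ~60 Gbps quoted above):

```python
def link_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel):
    """Raw pixel bandwidth plus the 25% line-code overhead of 8b/10b
    encoding (10 bits on the wire for every 8 bits of data)."""
    raw_bits_per_s = width * height * refresh_hz * bits_per_pixel
    return raw_bits_per_s * 10 / 8 / 1e9

# Assumed target: 4K at 144 Hz with 10-bit RGB HDR (30 bpp).
needed = link_bandwidth_gbps(3840, 2160, 144, 30)  # ~44.8 Gbps before blanking
```

Even before blanking overhead, that is well past the ~25.9 Gbps of effective payload DisplayPort 1.4 offers, which is the point of the post.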
|
# ¿ Oct 21, 2018 16:21 |
|
TheFluff posted:Even liquid metal is a poor thermal conductor compared to copper - we're talking like 70 W/mK to somewhere in the 3-400 W/mK range. It's great compared to thermal paste though, which tends to be somewhere around 10 W/mK. More like 5 W/mK; the really good stuff can push to 12 W/mK. I have both my (delidded) CPU and my GPU in the same custom cooling loop and it is not uncommon for the CPU to be 20C warmer than the GPU at load. The GPU consumes nearly 2x the power of the CPU, but its die is dramatically closer to the coolant than the CPU's could ever be. Also, to the person wanting a silent PC: perhaps think about a custom loop cooler? A full cover GPU block requires no additional fans, effectively cools the memory/power delivery as well, and you can also cool the CPU in the same loop with only a single superior copper radiator vs having to use multiple AIOs with their aluminum construction. And if you use compression fittings on flexible tubing it is really hard to screw up; if you are competent enough to install an AIO on a previously air cooled GPU, going full custom loop is likely well within your skill level. Obviously the one big downside is cost: a custom loop can easily end up at 2x to 3x the cost of a couple of AIOs.
|
# ¿ Oct 22, 2018 23:00 |
|
Xerophyte posted:It's more that if my radiator fans are pulling air to the point where the closed R6/S2 front is a significant airflow limitation then my radiator fans are almost certainly too drat loud for me to want to keep the case open. I considered the O11 Dynamic since it looks pretty good but it's about as constrained as air goes. The O11 Air looks ok, the H500M is ugly, and the Conquer is a hideous monstrosity that clashes with absolutely every other piece of furniture I own.
|
# ¿ Oct 27, 2018 12:59 |
|
Paul MaudDib posted:It's kind of sad that sleek contemporary design is on the way out in favor of the blingee LED cases. Actually the side panels on the 750D are interchangeable, so if you order a second/replacement solid panel for it you can turn one into a windowless case. I was tempted to do that on mine but I was already something like $1500 into the build and decided against spending any further for something that was purely cosmetic.
|
# ¿ Oct 29, 2018 00:16 |
|
Zero VGS posted:I have a roll of black gaffer tape for covering up all the various LED at my bedroom desk and it works great with no residue. I guess I can tape across the entire "Nvidia" on the side of the card, but the LEDs will still be lit up and probably leaking light out the side where I can't mask it. I emailed PNY to see if they have any ideas. Are the LEDs part of the shroud, or part of the PCB? The LEDs on my MSI card were part of the shroud, which means I could simply unplug the header that powered them (and eventually dumped the whole thing for a LED free full cover water block).
|
# ¿ Nov 7, 2018 23:27 |
|
Frame limiting makes the game engine wait 7 ms between the start of each frame. So it starts a frame, the CPU does the simulation and feeds the data to the GPU, which renders the frame and then scans it out when it's done. If less than 7 ms has passed since starting the last frame, it waits till 7 ms has passed and then repeats the process. If more than 7 ms has already passed, it starts rendering the next frame immediately. The frame rate will never exceed ~141 FPS, so the GPU will always be able to scan out the next frame to the display as soon as it is done; frames never sit in buffers, and latency is minimized. The thing is, nvidia put quite a bit of effort into their frame pacing, so when you enable double buffered vsync the nvidia driver will lie to the game about when it is ready to start the next frame and make the game wait till roughly 6.94 ms after the start of the last frame before starting the next one. At that point the only difference is that frame limiting does "wait and then render" while vsync does "render and then wait". The amount of additional latency caused by vsync in that scenario is always going to be trivial and less than 1 refresh interval. You will only run into latency trouble if you didn't change the one setting in the nvidia control panel / global 3d settings that is critically important to latency, labeled "Maximum pre-rendered frames". It even says in the description that this is how many frames ahead the driver will let the CPU queue up; this setting is absolutely a big fat high latency buffer any time your CPU is faster than your GPU or display. If this is anything other than 1, you are setting yourself up for significantly increased input latency. You can even test it yourself: disable gsync, set your refresh rate to 60 Hz, force vsync, and then play a game that you could easily break 144 Hz in while adjusting that setting. 
You can even set vsync to half-refresh adaptive, which will let you push the latency of that buffer up to over 133 ms (4 frames ahead at 33.3 ms/frame). Beware that some games also have their own render ahead setting, which can and often will be stacked on top of the one the driver does. Granted, latency flies right out the window when you are using AFR SLI, which requires deeper and therefore higher latency buffers as a basic requirement for the technology to function at all. At a minimum the latency of AFR SLI is going to be double the latency of a single GPU at the same frame rate, or equal to a single GPU at half the frame rate. Indiana_Krom fucked around with this message at 15:57 on Dec 15, 2018 |
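The "wait and then render" loop described above can be sketched like this (the 141 FPS target and the function names are my own for illustration; a real engine would split simulation, render, and scanout, but the ordering is the point):

```python
import time

# Hypothetical cap just under a 144 Hz refresh: ~7.09 ms per frame.
TARGET_FRAME_TIME = 1 / 141

def frame_limited_loop(render_frame, keep_running):
    """'Wait, then render': burn off any leftover frame budget *before*
    starting the next frame, so each finished frame can scan out
    immediately instead of sitting in a buffer."""
    last_start = time.perf_counter()
    while keep_running():
        elapsed = time.perf_counter() - last_start
        if elapsed < TARGET_FRAME_TIME:
            time.sleep(TARGET_FRAME_TIME - elapsed)  # wait...
        last_start = time.perf_counter()
        render_frame()                               # ...then render
```

Double buffered vsync does the same arithmetic but in the other order, "render and then wait", which is why the latency difference between the two is bounded by less than one refresh interval.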
# ¿ Dec 15, 2018 15:52 |
|
Craptacular! posted:I realized this morning that the best feature of 144hz really isn’t 144 FPS, it’s a locked 120 FPS within adaptive sync range and no extraneous bullshit. 240 Hz monitors are even better, pretty much none of that bullshit ever applies because actually exceeding the refresh limit enough for it to matter on one is borderline impossible.
|
# ¿ Dec 16, 2018 00:37 |
|
Combat Pretzel posted:I find it a bit hilarious, that Jensen is chastising "uncompliant" FreeSync monitors for blanking and poo poo like that, when not so long ago, their driver was broken for 3-4 releases, making GSync fail exactly the same way. I've been using a gsync display while always updating my drivers when a new one hit and this never happened to me. Granted this is a sample size of one gsync display and one gpu, maybe I just dodged the bullet?
|
# ¿ Jan 12, 2019 20:25 |
|
Sininu posted:CSGO cap is piece of poo poo, I presume Doto2's is as well. Did it ever occur to you that whatever point in the pipeline you are graphing there might be full of poo poo and not actually even remotely connected to when the frames are physically up on the screen? Basically unless you are graphing an output on FCAT, this means absolutely nothing.
|
# ¿ Jan 15, 2019 23:13 |
|
Is running full blown ray traced lighting in a modern game actually particularly more expensive than running full blown ray traced lighting in a 22 year old game? The differences in how hard it is to rasterize each one might be enormous, but the ray traced lighting shouldn't actually be that different between a modern game and a two decade old game, I would think. They couldn't do this 22 years ago because it was the same amount of work then as it is now, and even a supercomputer of the era fell well short of the level of performance necessary. This Quake 2 thing probably exists because it's a well known open source project that probably wasn't terribly complicated to modify for RTX support. IIRC it has been stated that ray tracing is actually the easier method to implement, because when you use it the computer does all the work of making everything look right. Basically I'm guessing it isn't that it is easier to do on a 22 year old game, it is just that it took 22 years to get hardware fast enough to pull it off, period.
|
# ¿ Jan 20, 2019 21:14 |
|
As much as people like to diss 1080p monitors, I have one of those 24" 1080p @ 240 Hz monitors and it's loving awesome, even though I don't play esports games. Granted, supposedly later this year or early 2020 there should be 1440p/240 Hz monitors on the market; it's just going to take a lot more GPU to break even on FPS compared to 1080p, and hitting 240 Hz at 1080p is already a huge pain in the rear end.
|
# ¿ Jan 22, 2019 03:44 |
|
tehinternet posted:Yeah, I feel like with my 1080 Ti pushing 120hZ 1440 Ultrawide I’m good til at least the next generation or the one after that. With an i5-4670k I’m due for a CPU upgrade today over any GPU one. I already own a 9900K, it's very nice. Today I was shuffling some games off my SSD to free up space, and Steam backup was going super slow (~30 MB/sec) because it is single threaded, so I canceled that and instead fired up 7-Zip and started manually compressing them into "fastest" 7z archives on the backup drive, which pegged all 16 threads and ran at 145 MB/sec. My old quad core couldn't even come close to that kind of throughput outside of "store" (which is not compressing files at all); now 7z is actually significantly faster than my gigabit internet, so it is worth it to keep local backups again.
|
# ¿ Jan 28, 2019 01:43 |
|
Craptacular! posted:I think Asus Strix can even tie the GPU's fans to your case fans so the case will spin up to cool the card down (likely requires an Asus motherboard as well). You can still do it in software with a lot of motherboards by using speedfan to rev up your case fans based on both the CPU and GPU temps. I used to do that before I went full custom water and just slaved everything to the coolant temperature.
|
# ¿ Feb 3, 2019 01:33 |
|
Yeah, the feel of a game at 180+ fps is absolutely a lot smoother and snappier than at 40 fps, even if its a gsync monitor that does it all without tearing or lag.
|
# ¿ Feb 20, 2019 02:03 |
|
PC gaming is not dead, and is in no danger of being replaced by consoles or streaming. For one thing, a good single GPU gaming PC can draw and dissipate 400w or more of power, which is well over double what most consoles can get away with. PCs will always be 2-3x faster than consoles purely because of the higher power envelope they allow. Streaming is always going to have a latency penalty that will limit adoption for several popular game types.
|
# ¿ Feb 20, 2019 14:21 |
|
Gay Retard posted:Things ARE different now, though. I'd rather play an optimized 4K/60FPS game on a next gen console over a PC that can't reach the same performance. I'm not saying PC gaming will die, but the value proposition will get even worse. Thing is, on the next gen console said game will be 4k/30 fps cap, and a same generation PC of the time will be doing the same game with higher resolution textures, details, and AA also at 4k and holding somewhere around 90-110 fps. We could get to the point where PCs and consoles have the same basic architecture/OS/etc, just the PC will have 3x more of every resource clocked significantly higher and with a dramatically higher power limit. Consoles will take over from PCs when they start shipping with 850 watt power supplies and the cooling capacity to dissipate that much heat. Oh yeah, one more reason PCs are going to stick around: Older games, your new Nth generation console that can do 4k/60 unfortunately can't run any of the hundreds of older games you would also like to run at 4k/60. But your new PC can, or even better it can run them at 4k/144+. I can still play DOS/3.5" floppy era games on my modern desktop PC, and almost any game with directx support can be hacked or patched up to 4k/high Hz. Older games look better and run better on newer PCs than they ever did on the consoles of their day. The old question of "But can it run Crysis?" remains relevant even today, and the answer on a gaming PC is more often than not "Yes." these days.
|
# ¿ Feb 20, 2019 17:36 |
|
Edmond Dantes posted:Not sure if this is the correct thread, so apologies in advance. You can do all that, or you can just shut down and swap the card. I've done it several times before and never had an issue beyond occasionally needing a second reboot.
|
# ¿ Feb 23, 2019 14:53 |
|
Edmond Dantes posted:Hey, I got a couple general (dumb) GPU questions: 
1) Adaptive vsync will give you the best compromise between performance and quality, but all types of vsync add latency. It is similar but not exactly the same as capping (vsync is a buffer, capping is a throttle). 
2) Cap it if you want, but note you can't bank up higher frame rates for when things get busy. If anything, running uncapped makes your computer run hotter and therefore slower. 
3) On Windows 7/8.x/10 this is pretty much irrelevant; do whatever works for you. Exclusive full screen might be <1% faster. 
3b) No, but what you are running on the second monitor can/will impact performance, possibly severely. 
4) Varies by game; look up tweaking guides for specific games.
|
# ¿ Mar 4, 2019 03:30 |
|
Phone posted:I need to verify when I get home, but did some more research and apparently nvidia’s drivers freak out and put the card into maximum overdrive and gently caress with the fan profile if you have a high refresh rate monitor and a 60Hz monitor plugged in at the same time. VelociBacon posted:This is old news but yes if you set your high refresh rate monitor to over 120hz on older Nvidia cards while having a 60hz secondary monitor it idles the card quite a bit higher. I had this problem with my 980ti and it was resolved with my 2080ti. Had a 144 Hz and a 60 Hz display together on a 980 and did not have a single issue with this. Currently have a 240 Hz and a 60 Hz together on a 1080 also with no such issue. The only time I know for sure you will have a nvidia card fail to idle down is if you plug three displays into a single card (even if they are all 60 Hz).
|
# ¿ Mar 4, 2019 23:15 |
|
Possible, my combination was 1080p/144 and 1200p/60 which may be close enough to not cause problems.
|
# ¿ Mar 5, 2019 00:43 |
|
DrDork posted:1440p@60 / 3440p@100 / 1440p@60 and I've idled my 1080Ti at ~1500Mhz core since day 1. Though between the zero actual load and it being under an AIO, it's silent regardless. Actual idle on a 1080 Ti should be like 135 MHz or something in that ballpark, but because you have 3 monitors connected it never actually idles down. If you have an iGPU output available, you could always plug one of the 1440p monitors into it and save yourself a few $ in power every time your computer is on.
|
# ¿ Mar 5, 2019 03:33 |
|
Statutory Ape posted:just got around to trying this and no dice Plug whatever monitor doesn't need your main GPU's power into the iGPU if you have one. Nvidia cards do not idle if they have 3 or more displays connected (refresh rate is irrelevant).
|
# ¿ Mar 7, 2019 23:21 |
|
Anti-Hero posted:I swapped out my EVGA 2080 XC with an EVGA 2080Ti Black. The 2080Ti refused to display when installed into the PCIE16x slot; it worked fine on the 8x. A motherboard BIOS update fixed that, this was odd and frustrating. After installing the drivers I ran the Afterburner OC scanner successfully, but when I try to play any games the PC hard reboots. I've tried this both with the stock GPU settings and the overclock. Seems like a PSU issue, likely overcurrent protection, but I didn't think the supply requirements were much different between a 2080 and a 2080Ti? I've swapped back to the 2080 (with an OC) and things are humming along fine. When swapping the video cards I used DDU, clean installs, etc. Reasonable; if your PSU has more than two 8-pin PCIe connectors, I would try moving one to a different cable. If the PSU has separate 12 V rails, it could trip overcurrent protection when both connectors are pulling from the same rail.
|
# ¿ Mar 11, 2019 00:40 |
|
Red_Fred posted:Yeah it's a Dell U2412M. So just 1920x1200 @ 60Hz. The U2412M has DisplayPort, or at least mine does...
|
# ¿ Mar 20, 2019 12:07 |
|
LRADIKAL posted:Please share window location fixes... Dual monitor? Red_Fred posted:Anyway for the 2070 power connections is it fine to use the same cable for both 8 and 6 pin plug? Or should I run a new cable for one of them from my PSU?
|
# ¿ Mar 22, 2019 22:31 |
|
I can easily tell whenever my frame rate drops under 80. 100 FPS looks and feels better, but past 144 is diminishing returns outside of very high motion games. Doom 2016, for instance, looks and plays incredible at its 200 FPS engine cap if you have a display that can keep up; 60 Hz only looks moderately less smooth overall thanks to a really solid motion blur implementation, but the downside is everything is blurred into complete obscurity. You can rapidly turn around in Doom at 200 FPS and still clearly see the imp that was going to attack you from behind, whereas at 60 Hz it's all a blur and it's much more challenging to tell the imp from the rocks behind it. IMO spatial resolution is a problem that has been largely solved, with 1080p on 24", 1440p on 27" and 4k on larger displays. But temporal resolution remains a huge problem: 60 Hz is choppy and makes everything so blurry the resolution becomes irrelevant. I can read text while scrolling on a 240 Hz display fairly easily; on the exact same display at 60 Hz it is a completely unreadable smudge until it stops scrolling. This is an unavoidable problem with sample and hold displays like LCDs. The extra resolution in 4k is completely wasted most of the time you are actually using the display for anything that moves, because the only way to keep the motion "smooth" is to blur the image to the point that even 720p would get the job done. And even if you don't use motion blur, just being a sample and hold display will blur it to hell at 60 Hz. One really good way to see the impact of higher refresh rates and strobing is to look at the chase camera comparisons on Blur Busters and their TestUFO site. Edit: F% page snipe...
|
# ¿ Mar 26, 2019 23:17 |
|
craig588 posted:You just happened to say it, not calling you out at all, but I like that we've gotten to the point that now 60 FPS isn't good enough. Advancement of technology! It isn't actually anything new; I've always hated 60 Hz. There is a reason I kept using CRTs till my last one died from horizontal deflection failure in like 2012: my 19" CRTs did 100 Hz refresh at the 1280x960 resolution I used the most at the time. I was pretty unhappy in the years in between, before 144 Hz + *sync became available in LCDs. Hell, back in the early 2000s the game I was playing competitively looked good enough at 640x480, and I played it there because the particular monitor I was on could refresh at 160 Hz at that resolution, and people wondered how I could react to stuff so fast...
|
# ¿ Mar 27, 2019 01:55 |
|
Sidesaddle Cavalry posted:Sounds like we need to get ourselves some better brains It wouldn't work, because you don't know when the next frame is going to be delivered so you can't know how bright to make the current frame. Even if you used a fixed strobe interval, the apparent brightness of the screen would vary with the framerate, the more FPS the brighter the screen would appear.
|
# ¿ Mar 27, 2019 22:22 |
|
K8.0 posted:That's true for Gsync. You could however do it with Freesync, since Freesync determines the timing for the next frame ahead of time. Unfortunately that's probably quite a ways off.
|
# ¿ Mar 28, 2019 00:48 |
|
Stickman posted:Could the length of the strobe be varied instead of the brightness? Or would that cause unwanted visual effects? A 2.4 ms strobe is roughly equivalent to running your display at 416 FPS as far as motion blur is concerned. The way ULMB and other strobe modes like it work is by essentially resetting your eyes after every frame; it's less controlling what you see and more controlling what you don't see. Sample and hold LCDs induce blur by nature; even if they transitioned from one frame to the next absolutely instantly (0 ms), it would still be blurry at a 60, 120 or even 240 Hz refresh rate. You have to think about how you see something move on an LCD and how the process actually works. When you move the mouse across the screen the cursor isn't really moving, it is just a series of pictures of the cursor in rapid succession where each new image has it in a different position along a line. The problem with sample and hold types is that as your eye follows the cursor "moving", your eye is moving at a constant speed across the screen while the images of the cursor aren't actually moving at all; the blur is the disconnect between how much your eye moves while the cursor is stationary during every frame. ULMB works by only very briefly flashing the image, and then it's black the rest of the time. Due to persistence of vision you only see the flash, and the flash is so short your eye basically isn't moving relative to the length of time the image is actually there, so you don't get the blurring effect.
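The arithmetic behind those figures, as a sketch (the 1000 px/s eye-tracking speed is a made-up example value; the model is just "smear equals how far the eye travels while one static frame stays lit"):

```python
def smear_px(track_speed_px_per_s, persistence_s):
    """Perceived blur width for an eye-tracked object on a display:
    the distance the eye moves while a single static frame is shown."""
    return track_speed_px_per_s * persistence_s

# Object tracked at a hypothetical 1000 px/s:
hold_60hz = smear_px(1000, 1 / 60)    # ~16.7 px smear (full-frame sample and hold)
hold_240hz = smear_px(1000, 1 / 240)  # ~4.2 px
strobe = smear_px(1000, 0.0024)       # 2.4 px: a 2.4 ms strobe behaves like ~416 FPS
```

This is why the strobe length matters far more than the refresh rate for motion clarity: 1/0.0024 s ≈ 416, matching the equivalence quoted above.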
|
# ¿ Mar 28, 2019 02:45 |
|
TheFluff posted:If that rumor about taping out Ampere this week is true (and Ampere is compute only AFAIUI, no consumer parts), then I'd be very surprised if we see the corresponding consumer cards any sooner than a year from now. Summer/fall 2020 seems more likely to me. At the same time, unless 7 nm is so much more expensive than 12 nm for the same number of transistors that it cancels out the area reduction and corresponding yield increase, it would probably not be a good idea to keep on pumping out these enormous, power hungry and defect prone Turing dies.
|
# ¿ Mar 31, 2019 16:47 |
|
Direct3d 14? 4D XPoint memory? April 1st?
|
# ¿ Apr 1, 2019 01:41 |