Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
What do people think about refurbished enterprise-grade SSDs? Newegg has refurb Sandisk CloudSpeed Eco 960 GB for $157. Assuming it's not trashed beyond usability by years of being a database disk or a ZFS cache drive, I'm thinking that would make a really nice Steam disk or Postgres store to toy around with for development stuff at home.

Paul MaudDib fucked around with this message at 02:09 on Sep 17, 2016


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Wacky Delly posted:

I know the 960 Evo is going to be the new hotness, but is there anything really "wrong" with the Intel 600p? MSRP is $60 cheaper for the 512GB and performance seems not as good as the 960's will be but still better than the 850's?

It has kind of a funky lockdown mode once it runs out of spare cells.

quote:

How Intel's consumer SSDs expire once you surpass the endurance threshold is troubling. In an almost over-zealous move to protect user data, Intel instituted a feature on many of its existing SSDs that automatically switches it to a read-only mode once you surpass the endurance threshold. Surprisingly, the read-only state only lasts for a single boot cycle. After reboot, the SSD "locks" itself (which means you cannot access the data) to protect the user from any data loss due to the weakened flash. The operating system typically generates error notifications when an SSD switches into a read-only mode, so most users will restart without being aware that the SSD will be inaccessible upon the next reboot. The process to recover the data is unclear. We reached out to Intel to verify if the 600p also has this feature, but have yet to receive a response.

EDIT: 9/23/2016 Intel clarified the nature of the read-only feature, which is not based upon the endurance limit. All SSDs have spare area that is dedicated to replacing failed cells. The Intel 600p only switches into a read-only mode when the spare area is exhausted. Intel also noted users can copy the data from a read-only SSD by installing it as a secondary drive in another computer. Intel provided an official response outlining the recovery procedure, and we include a more detailed explanation in the link.
http://www.tomshardware.com/reviews/intel-600p-series-ssd-review,4738.html

Seems convenient that it just happened to burn through its spare cells right at the point where the rated endurance ran out (which was rated much lower than it is now, I guess?). Not sure I believe that. But whatever.

Other than that it's fantastic for what it is. It's not 950 Pro fast, but it's way faster than the 850 Evo. I was really tempted back when I was looking at getting more space, but that end-of-life lockdown mode thing really gives me the heebie-jeebies :staredog:
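
If you want to watch how close a drive is getting to that spare-area cliff, the consumption is exposed through SMART. Here's a rough sketch pulling the NVMe health log via smartctl's JSON output (a sketch only, assuming smartmontools 7+ is installed and the drive enumerates as /dev/nvme0; run it as root):

code:

import json
import subprocess

# Ask smartctl for the NVMe health log as JSON.
out = subprocess.run(
    ["smartctl", "-A", "--json", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout

health = json.loads(out)["nvme_smart_health_information_log"]
spare = health["available_spare"]                # % of spare blocks remaining
threshold = health["available_spare_threshold"]  # vendor's panic line
used = health["percentage_used"]                 # vendor's wear estimate, in %

print(f"available spare: {spare}% (threshold {threshold}%), wear used: {used}%")
if spare <= threshold:
    print("spare area nearly exhausted; back up now, read-only mode may be next")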

Paul MaudDib fucked around with this message at 00:35 on Oct 12, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Knifegrab posted:

So this is sort of a SSD thread question and sort of a Haus question:

I am going to be getting a much bigger SSD soon, my main drive is on a relatively small SSD. However I was wondering if there were a way to just make a complete copy of my old drive so I can just replace my old drive with my new one, making the new one the boot drive to replace the old without having to do a complete reformat?

Sure. Lots of programs. Let's go with Macrium Reflect.
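
If you'd rather do it by hand on Linux, a raw block-level copy also works. A minimal sketch, assuming the old drive is /dev/sdX and the new one is /dev/sdY (hypothetical names, so triple-check them, and neither drive mounted; grow the partition afterward with your partitioning tool of choice):

code:

# DANGER: raw device copy; getting SRC/DST backwards destroys your data.
SRC = "/dev/sdX"          # old (smaller) drive, hypothetical name
DST = "/dev/sdY"          # new (larger) drive, hypothetical name
CHUNK = 4 * 1024 * 1024   # copy in 4 MiB chunks

with open(SRC, "rb") as src, open(DST, "r+b") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)

print("done; now grow the last partition into the extra space")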

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
That sounds like one of the legit use-cases for a RAMdisk. Or at least a spinning disk.

It'll still probably be fine, but unnecessary wear is unnecessary wear.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
A couple people have mentioned it in passing but I don't think it got the attention it deserved: Samsung just announced the (NVMe) 960 Evo, 250 GB for $130 and 512 GB for $250. :asoiaf:

Interestingly enough, this one seems to have a similar cutoff to the Intel 600p when it hits its limit? Or maybe Anandtech triggered the read-only mode some other way, but something is going on.

Paul MaudDib fucked around with this message at 00:33 on Nov 17, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

nitsuga posted:

Why is it that the 850 EVO and X400 are the two default options? There's a trove of $60 250 GB drives out there, and some are recommended by other outlets (PNY, SK Hynix, and AMD come to mind). I understand there are some pieces that aren't as desirable (TLC, longevity, performance), but it seems that they are never recommended here.

The 850 Evo is significantly faster than most of the consumer market but not quite as excessive (and expensive) as something like the 850 Pro. It's nice as a boot drive.

Samsung has a reputation for reliability, and some of the other brands have been known for issues in the past (of course the 840 Evo had a not-so-awesome glitch too).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Skandranon posted:

They are MORE a ticking time-bomb than the 850. How much more? Hard to say. For personal use, definitely worth the extra $$$. If I had to buy 10+ for some project, would start considering other options.

I've had some data corruption on my 840 Evo at work even with the firmware update installed and Magician updated. It's impossible to say for sure that it wouldn't have happened with another SSD, since Facilities can't keep the loving power from dropping out overnight. It's not like most other SSDs have good power-loss protection either (except Intel), and an inconveniently timed power drop can corrupt literally any storage system, but most other SSDs are not continually rewriting their data blocks to keep the flash from losing its electrical charge. That seems like a major caveat with the 840 Evo, and I would advise keeping it on a UPS.

Paul MaudDib fucked around with this message at 03:00 on Nov 24, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

BIG HEADLINE posted:

These are Newegg's not-live-yet SSD deals:

SanDisk SSD Plus 240GB Solid State Drive for $59.99
SanDisk Ultra II 960GB Solid State Drive (SSD) $219.99
Samsung 850 EVO 2.5" 1TB Solid State Drive (SSD) $249.99

Obviously the latter one is the :stare: one that everyone's eying. I've a feeling at ~3am tonight and/or tomorrow everyone's going to be :f5:ing the gently caress out of Newegg. I'm pretty sure I won't be able to snag it, but I've already registered my AMEX card for the $25 off 200 promo so hopefully I can score it for $225 instead of $250.

The Sandisk Ultra II has dropped below $200 on Amazon Lightning deals in the past but hasn't lately. Frankly I'd rather have a 960 GB Ultra II for sub-$200 than an 850 Evo for $250, but at those prices the 850 Evo is the obvious star of the pack here.

Are they still running any Visa Checkout deals?

edit: MX300 750 GB for $99 via Newegg eBay, starting 12pm PT

Paul MaudDib fucked around with this message at 03:03 on Nov 24, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

ItBreathes posted:

What are the thread thoughts on the Tom's Hardware recommended SK Hynix drives? They seem to offer 850 Evo levels of performance with 60% of the warranty at 63% of the price currently. Not exactly a value proposition but it seems like a reasonable way to save a few bucks right now if the stats are accurate.

Samsung's real value-add is that they can make everything in the drive start to finish, whereas SK Hynix is presumably just using an off-the-shelf controller. The memory chips are probably fine, though; Hynix chips are usually viewed as some of the best-performing available, at least in the GDDR5 market.

I guess at 63% of the price of an 850 Evo you might as well dive in. But then again, quite a few OEMs have had major issues with early revisions of their SSDs, and you never know what might go wrong (Samsung included).

Going back to the "my time is worth something" argument, if it crashes and you have to rebuild, that time alone wipes out most of your savings even if they give you a warranty replacement that works perfectly. It's probably no less reliable than any other cheapo SSD on the market, and if it's 850 Evo performance that's a good value proposition, but the question is whether you're willing to bet your boot disk on it.

Paul MaudDib fucked around with this message at 09:46 on Nov 24, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

BobHoward posted:

With chips, raw materials are never the problem. Fab (factory) capacity can be. It takes a long time and an investment of several billion USD to bring a single fab online. If it turns out there isn't enough demand to sell what it can make while the fab is relatively new (depreciation on the equipment is fierce), the owner will lose enormous sums of money. So, sometimes the industry is a bit conservative about expanding production capacity.

This huge upfront capital cost of constructing a fab, plus the difficulty of acquiring all the in-house expertise to do it right (or at all), is why there's so few players in NAND flash manufacturing. Which in turn does make it possible for the players involved to pull an OPEC -- that has happened before, in DRAM. They can't do too much of that because once again, the fabs they've got have to run and make significant revenue or they're hosed, but easing off on production to shore up prices or just plain agreeing to fix prices at a higher level? Yup.

This was also due in large part to one of the cyclical collapses of the computer market in the 80s that massively dropped demand for DRAM, and left overseas manufacturers trying to sell it for anything they could get.

The US accused Japanese companies of protectionism and dumping, and slapped a big tariff on DRAM imports. Which led to some computer company getting in major trouble for grinding the printing off the top of chips to try and dodge the tariff (can't find a source but I remember reading about something like that).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
How is the Micron 1100 2 TB for a Steam drive? Normally goes for about $375 (pretty good) but right now you can get one off eBay for $305 with the ESPRING20 coupon...

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kazinsal posted:

Is there any recommended method for cloning an SSD to a larger SSD? I'm replacing my 250 GB 850 EVO with a 1 TB MX500.

Macrium Reflect Free.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Armacham posted:

Not the 2TB but my computer came with a 256GB Micron 1100 as the main drive. It benchmarks almost exactly the same as the 512GB Samsung 850 Evo that's in the same system.

I found a forum thread on it where they suggest that the bigger drives are actually slower than the smaller drives for some reason.

Whatever, I guess at $305 for 2 TB it could have actual rocks inside and it would still be a good deal. As long as it's not gonna be like Corsair Force LS-level terrible it'll be fine for a Steam drive.

I noticed they've been running a lot of eBay bucks promos lately, but man, eBay's Q1 numbers must be supremely lovely if they're giving a blanket "20% off everything" coupon like this.

Paul MaudDib fucked around with this message at 04:17 on Mar 10, 2018

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Game running on juiced-up engine from 1995 isn't optimized for SSD performance!? :chanpop:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Dr. Fishopolis posted:

what the heck does "optimized for ssd performance" mean? what are you talking about?

SSDs read/write a minimum of a 4K page at a time, so you want to structure your data to avoid lots of micro-reads (which is also true on an HDD, but matters less since HDDs naturally tend to fragment anyway). If you are going to do scattered reads anyway (vs. a single data block that you load into memory in one go), you want to be using multiple IO threads, which is very much the opposite of what you do with an HDD.

https://engineering.linkedin.com/blog/2016/05/designing-ssd-friendly-applications-for-better-application-perfo

You can pretty easily see how a single-threaded engine designed for a PC with 32 MB of RAM in 1995 might not be doing the optimal thing with a super-fast SSD on a PC with, say, 4 GB of RAM in 2017. You're really only getting a fraction of the speed your SSD can offer.

(yeah technically it's kinda multithreaded now, they moved the sound stuff to a different thread... which is probably a whopping 1% of the workload. I really doubt any of the loading is multi-threaded)
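
To make the multi-threaded IO point concrete, here's a rough sketch of the two loading styles (a sketch only: the file name and counts are made up, and OS page caching will flatter whichever run goes second, so use a file much bigger than RAM for honest numbers):

code:

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "bigfile.bin"   # hypothetical multi-GB test file
PAGE = 4096            # read in SSD-page-sized chunks
N_READS = 10_000

fd = os.open(PATH, os.O_RDONLY)
size = os.path.getsize(PATH)
offsets = [random.randrange(0, size // PAGE) * PAGE for _ in range(N_READS)]

def read_at(off):
    # os.pread is thread-safe: no shared file position to fight over.
    return os.pread(fd, PAGE, off)

# One outstanding request at a time: how a 1995-era loader behaves.
t0 = time.perf_counter()
for off in offsets:
    read_at(off)
print("serial:  ", time.perf_counter() - t0)

# Many requests in flight: keeps the SSD's internal parallelism busy.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=16) as pool:
    for _ in pool.map(read_at, offsets):
        pass
print("threaded:", time.perf_counter() - t0)

os.close(fd)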

Paul MaudDib fucked around with this message at 17:25 on Mar 17, 2018

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

I got one of these last week, haven't had time to install it yet.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
You paid 960 Evo money for a SATA drive. Nice.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
In a recent L1Techs video, Wendell was talking about some SSD he was using that was being dumped used pretty cheaply. It was PCIe-attached flash but not NVMe (so not bootable), just flash plugged directly into the bus. Anyone remember which episode that was from, or what drive that might be? Is there an advantage to doing that vs. just a 960 Evo?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

numerrik posted:

Pc part picker says the mobo m2 slot shares bandwidth with a sata 6.0 gb/s port and that when the m2 slot is populated one sata port will be disabled, if this is my only hd in the machine will this be a problem?

That's a confusing message (and this often isn't clearly explained in the manuals either), but basically it works like this: if you plug in a SATA M.2 drive, it takes the SATA channel from one of the physical SATA ports (you should have around 6, so this shouldn't be an issue). If you plug in an NVMe M.2 drive, it eats some of your PCIe lanes instead (which may disable one of the PCIe slots on the board, depending on how the board partner laid out the channels). Or, possibility #3, it's an NVMe drive in an actual physical PCIe slot, either a native NVMe PCIe card or an NVMe M.2 drive on an adapter sled, in which case you have to look at how your board lets you map PCIe lanes (typically there is a slot that goes to the PCH, and the PEG lanes can be mapped x16, x8x8, or x8x4x4).

A given board+slot may also support only SATA M.2, only NVMe M.2, or both, so some of these options may not apply at all. There are also M.2 slots that only support Wi-Fi cards, and other oddities.

M.2 is a bit of a mess of a standard, but at least not quite as bad as USB Type-C.
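
On Linux you can at least check what you ended up with after all that negotiation. A small sketch reading the negotiated link out of the standard PCI sysfs attributes (assuming the drive enumerates as nvme0):

code:

from pathlib import Path

# The NVMe controller's PCI device directory, via sysfs.
dev = Path("/sys/class/nvme/nvme0/device")

def attr(name):
    return (dev / name).read_text().strip()

print("negotiated: ", attr("current_link_speed"), "x" + attr("current_link_width"))
print("max capable:", attr("max_link_speed"), "x" + attr("max_link_width"))
# If "negotiated" is narrower or slower than "max capable", the slot or the
# lane sharing described above is costing you bandwidth.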

Paul MaudDib fucked around with this message at 02:07 on Apr 5, 2018

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Potato Salad posted:

^

I adamantly theorize that one of the factors permitting HEDT owners to hold on to 2500k and even Core systems for so long was the massive usability boost provided by the commoditization of SSDs.

The 2500K is not HEDT, but otherwise OK, yes: my Nehalem laptop is still fine except for the power-efficiency bit, largely because I swapped in an SSD in like 2012 or something.

HEDT is something like a hexacore Westmere-EP, a hexacore Sandy Bridge-E, or an octocore Ivy-E, and yes, there is pretty much no reason to abandon those systems unless you know you need to.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
What effect will having only x2 lanes for an NVMe SSD have? Obviously it'll lower throughput to at least some extent, but does it also affect IOPS?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Looks like a lot of people are reporting that those SX6000 drives fail after a while, so I'm aiming a bit higher. Has anyone used the HP EX900/920 series? Or should I just suck it up and buy a 960/970 Evo?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
From what I remember reading in reviews, it's not that it has a 140 GB SLC cache; it's that it can write cells in either SLC mode or QLC mode. The former is faster, the latter has higher capacity. So to start, the drive runs entirely in SLC mode, and then as it fills up it switches more and more over to QLC mode, degrading performance as the drive gets fuller.

128 GB of real SLC would be a lot, like easily $1000+ even at current market prices.
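
Back-of-the-envelope version of how a dynamic SLC cache shrinks as a QLC drive fills (the numbers are illustrative, loosely shaped like the review figures above, not from any datasheet):

code:

# QLC stores 4 bits/cell; the same cells written in SLC mode store 1 bit/cell,
# so every GB of SLC cache consumes 4 GB of advertised QLC capacity.
CAPACITY_GB = 1024   # hypothetical 1 TB QLC drive
MAX_SLC_GB = 140     # empty-drive SLC cache, per the figure above

for used_gb in (0, 256, 512, 768, 960):
    free_gb = CAPACITY_GB - used_gb
    # The cache can only be as big as the free QLC area can back at 4:1.
    slc_gb = min(MAX_SLC_GB, free_gb / 4)
    print(f"{used_gb:>4} GB used -> ~{slc_gb:3.0f} GB SLC cache left")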

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Depending on your board, it may not be capable of natively booting from NVMe, and may instead load an option ROM (oprom). Loading oproms will kill your boot times; this was a common complaint back when NVMe first surfaced, around the Z97 days.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Is the EX920 a reasonable sidegrade to the 960 Evo or should I just pony up the Samsung Tax?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

Considering how much erratas various CPUs and microcontrollers get over their lifetime, I'd rather have mature PCIe 4 infrastructure over lol-first PCIe 5.

You're not wrong in a "would I buy this product today as a consumer" sense, but it actually makes complete sense for Intel in a product-development sense. Intel got hosed extremely hard on PCIe 4.0 because they were so late to the game due to 10nm delays (the first server gen to support it was Ice Lake-SP) that AMD basically became the reference platform all the PCIe 4.0 devices were validated against, and Intel was left in the position of being "the other one" whose "specific quirks needed to be validated" or whatever.

Sapphire Rapids is basically Server Alder Lake and comes next year, so it'll already have been sampling for a while (Google says Q4 2020, presumably to limited audiences/hyperscalers at that point). Server products generally have longer lead times than consumer, so they were likely validated at about the same time; consumer is just a bit quicker to general market release. This is likely also advantageous as a final smoke test: now they have "release ready" PCIe 5.0, OEMs can validate against it, and if there's a big problem they can hop on fixing it on the server platform ASAP.

But anyway, the point is that Intel wanted to jump the gun with PCIe 5 because they got screwed by 10nm delays pushing back their PCIe 4 release, they paid a big price for that, and they absolutely do not want a repeat. Obviously Zen4 comes next year, and Epyc probably comes first / samples out significantly ahead of the general consumer launch; I'd imagine if their Q4 number is accurate then they're probably sampling right now. But Intel is still around a year ahead of AMD on the timeline, and that's exactly what they wanted.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Anyway, apropos of nothing: as a high-end user I really welcome a return to mobos with tons of PCIe muxes/switch chips. One of the things I've really whined about a lot is that, as a power user who likes to do a lot of things with my systems, modern boards kinda loving suck. Even ignoring the problem of air-cooled GPUs getting bigger every single generation and covering all your slots, let's look purely at the CPU configuration. You've got x16/x8x8/x8x4x4 from the CPU itself, you have 4 NVMe lanes, and you have the chipset. Even ignoring non-optimal slot utilization, you've got a GPU and an NVMe drive and then two "other" things that run at decent-ish speeds.

Obviously if you want more stuff you have to lean on the chipset, but that's not really optimal either. For example, my Vive wireless adapter needs a dedicated card that runs at 3.0x1, and the chipset lanes on my board are all 2.0x1. Does it work? Probably, but do I really want to find out if I don't have to? And what if I want to connect something that's x4?

Also, my impression is that the chipset is substantially slower than a CPU-direct lane, and (although I don't know for sure) substantially slower than a dedicated mux as well. I've always heard "connect fast ethernet/Optane SSDs directly to the CPU" as a notional bit of advice, even if the chipset is generally fast enough to support them, because the latency is higher. Meanwhile, I emailed HighPoint Tech and asked about their NVMe HBA cards, and the answer there was "it will perform at whatever the IOPS/latency of the underlying device is". I don't know how true either of those two bits of arcana really are, but notionally it's better to have things not running through the chipset; that is part of why a Thunderbolt eGPU is worse than a direct-connect 3.0x4 GPU as well. All the stops and buffers add latency.

As a power user, I'd really like to have my GPU, my Vive wireless adapter, my Optane PCIe AIC SSD, a 10GbE SFP ethernet adapter, etc., all in one system, in their optimal slots (which more or less means CPU lanes wherever possible, with enough lanes at enough speed to saturate them). I realize the real answer at this point is "buy a HEDT system" and yes, that's the long-term plan, but right now HEDT is an ugly set of compromises all of its own. Skylake-X/Cascade Lake-X sucked, Zen2 Threadripper was overpriced and still underperformed Coffee Lake, and Zen3 Threadripper has been MIA for over a year at this point with no sign of an imminent release (hopefully soon, and hopefully with V-Cache).

Well, with PCIe 5, 16 lanes is actually a lot and you can split that out with muxes and that’s a very acceptable compromise. Take the x8x8 configuration and that can be muxed out to x8x8x8x8, or x8x4x4 could become x8x8x4x4x4x4, and so on. So for the cost of 2 or 3 muxes you have a “pseudo-HEDT” system with 4-6 reasonably fast slots. And with the improvements in chipset, that can be expanded further (dunno if I’d expect to see the “3-mux” configuration in practice).
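
The arithmetic behind "16 lanes is actually a lot" (per-lane figures are the usual post-encoding approximations, per direction):

code:

# Approximate usable bandwidth per lane, in GB/s, after encoding overhead.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

for gen, bw in PER_LANE_GBPS.items():
    print(f"PCIe {gen}.0:  x4 = {4 * bw:5.1f} GB/s   x8 = {8 * bw:5.1f} GB/s   "
          f"x16 = {16 * bw:5.1f} GB/s")

# A PCIe 5.0 x4 slot (~15.8 GB/s) matches a 3.0 x16 slot, which is why
# muxing 16 gen-5 lanes out into lots of slots makes a viable pseudo-HEDT.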

This niche used to be filled by the "supercarrier"-style boards (I think that was an Asus or ASRock brand name?) and it's kinda unfortunate that went away, because it disappeared at the same time the HEDT market became overly expensive and compromised. Right now the most practical solution is really "have a gaming rig, and then have an Epyc system with an ASRock ROMED8 with the 7x PCIe 4.0x16 slots for everything else", and that kinda sucks.


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Rinkles posted:

Could someone try to explain in laymen's terms what's going on here?

A PCIe gen 3 drive (SN570) beating an expensive gen 4 (SN850) in load times specifically



I wouldn't say "beats" here, at least not significantly; that's pretty much on par in general. Still, though, you're right: both are TLC, the SN850 almost certainly has the better controller, and it has DRAM, so the only thing that really leaves for the SN570 is better flash.

Game load times and general Windows/application performance are extremely heavily dominated by latency. 4K Random Read QD1T1 is the test that usually measures that / manifests the difference, but the SN850 still wins there (76 vs. 98 MB/s). Which is what you'd expect from a better controller and all (although of course at QD1T1 there's not a ton for the better controller to get its teeth into).

I concur with TT: it's gotta be better flash, maybe better latency? But then it's very odd that it doesn't show up in QD1T1 either.
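
For what it's worth, QD1T1 random read is easy enough to approximate yourself. A rough Linux-only sketch using O_DIRECT to dodge the page cache (the device path is a placeholder, it needs root, and a large file works too):

code:

import mmap
import os
import random
import time

PATH = "/dev/nvme0n1"   # hypothetical device node
PAGE = 4096
N = 5_000

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, PAGE)   # anonymous map: page-aligned, as O_DIRECT requires

lat = []
for _ in range(N):
    # O_DIRECT also wants the offset aligned to the block size.
    off = random.randrange(0, size // PAGE) * PAGE
    t0 = time.perf_counter()
    os.preadv(fd, [buf], off)
    lat.append(time.perf_counter() - t0)

lat.sort()
avg = sum(lat) / N
print(f"QD1 4K random read: avg {avg * 1e6:.0f} us, "
      f"p99 {lat[int(N * 0.99)] * 1e6:.0f} us, ~{PAGE / avg / 1e6:.0f} MB/s")
os.close(fd)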
