BobHoward
Feb 13, 2012


dud root posted:

Do Samsung SSDs juggle data around when idle for wear leveling purposes? Curious if the cells where my large static data is stored have been written to once, or if the controller is smart enough to eventually move 50Gb.italy.holiday.#blessed.mkv to the most used cells

Afaik this happens two ways. One is that, while active and writing something else, the controller decides that some very static blocks are the ones which ought to be written to this time, because they're falling behind the write-erase cycle count of other blocks. It costs some performance but it has to be done.

The other is in the background as you say, but one fun thing is that sometimes it's for data retention rather than wear leveling. Consumer grade planar NAND flash has a rated retention time of maybe circa a year at the ~20nm nodes, and enterprise grade flash as little as 1 month (yes, worse). SSD controllers combat this by experimentally reading data every once in a while, and moving it if the raw uncorrected bit error rate has gotten too high.
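To make the background scrub concrete, here's a toy sketch in Python. Every name and number in it (the Block object, the error rates, the thresholds) is invented for illustration; real firmware is nothing like this simple.

code:

import random

# Illustrative model only: controllers track something like raw bit error
# rate (RBER) per block and relocate data before it decays past what the
# ECC can correct. All numbers below are made up.
RBER_LIMIT = 1e-3    # pretend ECC correction ceiling
SCRUB_MARGIN = 0.5   # relocate at half the ceiling, well before data loss

class Block:
    def __init__(self, age_days):
        self.age_days = age_days
    def read_rber(self):
        # pretend the error rate grows with time since the data was written
        return 1e-5 * self.age_days * random.uniform(0.5, 1.5)
    def relocate(self):
        self.age_days = 0          # data rewritten to fresh cells

def scrub_pass(blocks):
    for b in blocks:
        if b.read_rber() > RBER_LIMIT * SCRUB_MARGIN:
            b.relocate()

blocks = [Block(age) for age in (1, 30, 180, 400)]
scrub_pass(blocks)
print([b.age_days for b in blocks])   # the long-idle blocks get refreshed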

PSA: do not use SSDs as offline backup devices.

It's not as bad as those numbers might suggest. A friend of mine who designed enterprise SSD chipsets (and told me about this) says manufacturers rate flash for minimum guaranteed retention time at end of life, i.e. after the rated maximum write/erase cycles have been used up. A consumer SSD far from wearout should retain data a lot longer than 1 year if left unpowered. Still, don't count on it; use HDDs for storage that might go unpowered a long time.

(Aging is why enterprise NAND flash has a lower retention rating. NAND manufacturers can spec basically the same memory as either high cycle count / low retention or vice versa. Enterprise SSDs are online storage, powered 24/7 and written to a lot, so it makes sense to choose a different tradeoff than consumer flash.)


BobHoward
Feb 13, 2012


Saukkis posted:

For a long time I've wanted to know: what is the failure mode when SSDs reach the maximum write cycle? I would hope that the available writable space slowly decreases, but the blocks that can't be written to still remain readable? How much variation is there in how many cycles different blocks can survive? Could one block last only 1000 cycles, but the block right next to it manages 5000?

Yes, writable space slowly decreases as blocks go bad. There is indeed a lot of variance in write cycle life, so when flash manufacturers write their specs they have to try to err on the side of caution. This is why that one website which torture tested SSDs found that tons of drives lasted quite a lot longer than the rated lifespan of their flash would suggest.

Arsten posted:

When a block dies, it's lost. You don't get to read what's there anymore. The SSD's firmware, when it detects a failure, will save the data to a reserved block that isn't accessible to anything but the drive. When it runs out of those spare blocks, it uses the non-reserved portion of the drive and the size of the drive decreases. At this point, the file system on top begins to get compromised and data loss is likely.

The model you describe is roughly how hard drives handle bad sectors, but no SSD I am aware of works quite like this. Unlike HDDs the extra capacity is substantial (most MLC consumer drives have 7.3% extra capacity over what the label claims, and most TLC drives even more than that). Also unlike HDDs, none of the extra space is treated as a special pool which goes unused until it's time to replace a bad block.

HDDs work that way because the mapping between host visible sector numbers and physical sector addresses on disk platters is mostly fixed, with a small exception list. This works ok because most sectors will never go bad; the drive firmware only needs to handle on the order of 100-1000 bad blocks that have a special mapping.

SSDs have to do wear leveling, so no host visible block has a fixed flash media location, ever. Anything can be stored anywhere at any time. The point of having extra capacity over the user visible capacity is no longer to handle errors gracefully; it's to provide both an any-to-any host-to-physical address mapping table and the minimum amount of guaranteed free (or quickly free-able) media capacity the wear leveling algorithm needs to avoid choking when the user is using all the user visible capacity. Dealing with bad blocks falls out of that: when a block goes bad and is marked as unusable, the drive just has slightly less free space to work with.

Any competently written SSD firmware should put itself into a read-only disaster recovery mode long before losing so many blocks that the usable media capacity drops even close to the user visible capacity. (And if that event ever did happen, it would be quite surprising if the drive didn't just brick itself, because the firmware is unlikely to handle not having enough media capacity very gracefully.)

BobHoward
Feb 13, 2012


Kaleidoscopic Gaze posted:

How the hell do you "recondition" a drive? SSDs are single-board products; I can't imagine they actually hire someone to run diagnostics, de-solder chips, and fit fresh ones. They probably just run a diagnostic and say "well, it works" and call it a day.

Short of actually replacing the flash chips on the board, there's nothing they can actually do that would have any real effect.

There is actually some evidence that it's possible to partially "recondition" worn flash chips and get more write cycles out of them with a heating cycle.

I doubt they're doing that though. It's probably just running a crappy diagnostic and labeling the drive good if it passes.

BobHoward
Feb 13, 2012


Arsten posted:

I've had several SSDs that exhibited exactly that behavior. I have a 64GB SSD on my desk right now that reports only 48GB available space. You can continue to write to it and it doesn't have a disaster mode. It does, of course, largely depend on the firmware, but conceptually that's how it works. The fluidity of actual block space is true, but it really brings nothing to the discussion of how an SSD handles failing blocks.

Count me surprised. Is it one of the really old early generation consumer SSDs from circa 2010? Some of those were pretty bad and/or weird by modern standards.

The bad block strategy you described has been used with flash storage media, but usually only in older CF/SD cards and USB sticks. It's not good enough for (good) SSDs.

quote:

Also, if it has 7.3% extra capacity over the capacity reported to the operating system, why do you think that's not hidden to the user? Just because it uses them during wear leveling?

I think there is some miscommunication here. What I am saying is that both the OS and user think a SSD stores as much data as it reports it can. However, behind the scenes there is much more raw capacity, and the SSD doesn't split it into one fixed zone that serves as the actual storage and another that is spare. Instead it's one big pool, and over time as the drive processes write commands your data will end up stored anywhere and everywhere in this pool, even if the drive hasn't mapped out any bad blocks yet.

Maybe it's best to give an example of how that might happen in practice. Let's say you take a SSD out of the box, hook it up, and run a program which writes from the first (visible) sector to the last, in order, and then writes to the first visible sector once more. This hypothetical drive has 128 physical sectors (label these P1 through P128, P for physical), but it tells the outside world there are 16 fewer sectors, labeled V(isible)1 through V112. During the initial full drive write, the SSD stores V1 in P1, V2 in P2, and so on, up to V112 in P112. But when the second write to V1 happens, the SSD does not store the new contents of V1 in P1. Instead it writes them to P113 (the first free physical sector), adjusts its mapping table so it knows V1 is now stored in P113, and marks P1 as what I'm going to call a "zombie": a sector which contains stale user data, and is now safe to erase and reuse whenever convenient.

During the first write pass there's actually no requirement that the SSD uses P1 to store V1, and so on. I just described it happening that way for clarity.
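If it helps, here's the same toy drive as runnable Python (0-indexed, so the story's P113 is index 112 here; every name is made up for the example):

code:

# Toy flash translation layer matching the example above: 128 physical
# sectors, 112 visible. Overwrites never go back to the same physical
# sector; the old location just becomes a "zombie" awaiting erasure.

class ToyFTL:
    def __init__(self, physical=128, visible=112):
        self.mapping = {}                  # visible sector -> physical sector
        self.free = list(range(physical))  # erased physical sectors
        self.zombies = []                  # stale copies, erasable later
        self.media = {}                    # physical sector -> data
        self.visible = visible

    def write(self, v, data):
        assert 0 <= v < self.visible
        p = self.free.pop(0)               # always take a fresh sector
        if v in self.mapping:
            self.zombies.append(self.mapping[v])  # old copy goes stale
        self.mapping[v] = p
        self.media[p] = data

    def read(self, v):
        return self.media[self.mapping[v]]

ftl = ToyFTL()
for v in range(112):
    ftl.write(v, f"first pass, sector {v}")
ftl.write(0, "second write to V1")
print(ftl.mapping[0])   # 112, i.e. the story's P113: new data went to a fresh sector
print(ftl.zombies)      # [0], i.e. P1 now holds stale data awaiting erase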

If you are having a "why the hell would they do that, it can't be true" reaction, you probably need to familiarize yourself with some key properties of NAND flash and their ramifications. In particular, the distinction between program (write) and erase, and how that interacts with the block/page hierarchy.

BobHoward
Feb 13, 2012


Early lines of the specs say PCIe but if you read further:

quote:

Please note that it supports SATA-based B Key SSD only. It does NOT support PCI-E based B key & any M key SSD.

It's far too cheap to be anything but a USB to SATA converter, tbh. USB to PCIe bridges don't currently ship in high enough volume for the whole thing to sell for $20.

In general I kinda wonder how useful an enclosure which bridges NVMe to USB mass storage class will be. It should be able to copy data fine, but it's probably safe to assume that out-of-band commands like NVMe SMART are going to be even less well supported than the SATA equivalents are on random USB-SATA bridges. (Which is to say, not well supported at all.)

BobHoward
Feb 13, 2012


BIG HEADLINE posted:

Oh, and be sure you're not buying an MSATA drive. They might look similar but they're incompatible. Similarly, part of the confusion with M.2 is that the slots can be linked to the SATA interface or your PCIe interface, and you can only find out which by double-checking your board's manual. Most SATA-linked M.2 slots are in 1-2 year old laptops. People getting all excited to plug a 960 into those are going to get a rude surprise. Open boxes for everybody!

The spec allows M.2 sockets to support both PCIe and SATA (the two interfaces are assigned different pins on the connector), and most laptop chipsets should have the free SATA and PCIe lanes to hook both interfaces up, so there's no excuse for something ~1 year old not having a dual mode socket.

This being the PC industry, lots of them will be single mode anyways. As you say, hooray open boxes!

BobHoward
Feb 13, 2012


BIG HEADLINE posted:

It also doesn't help that as general purpose storage drives, OSes still need to be optimized for HDDs. SSDs 'choke' on very small sub-4K files, and NVMe drives are no different.

FYI, all popular consumer operating systems (for values of "popular" that include obscure Linux and BSD distros) use file systems that default to 4K clusters or larger, with partitions aligned to 4K boundaries. In these scenarios the SSD never even sees a sub-4K command, regardless of file size.

Also, NVMe's advantages are greatest on random small accesses. People like to focus on the throughput advantage PCIe offers over SATA, but the real excitement (imo) is its ultra low latency / low overhead command queue architecture. SATA command overhead means you need high queue depth to get even close to the rated IOPs of any SATA drive, but you won't find desktop systems generating high QD loads. An NVMe drive should give you lots more IOPs at low QD. Although many operating systems need more tuning to make the most of this, it is a real advantage that is here already, and it matters most on drives that see the most random IOPs (like the boot disk).
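Back-of-envelope math on why command overhead dominates at low queue depth (the latency figures are illustrative guesses, not measurements of any real drive):

code:

# At queue depth 1 there is no pipelining: IOPs is just 1 / per-command
# latency, so the protocol's fixed overhead dominates. Numbers assumed.

def iops(latency_us, queue_depth=1):
    return queue_depth * 1_000_000 / latency_us

sata_latency_us = 100   # flash time + AHCI/SATA command overhead (assumed)
nvme_latency_us = 30    # flash time + much thinner NVMe command path (assumed)

print(f"SATA  QD1: {iops(sata_latency_us):>9,.0f} IOPs")
print(f"NVMe  QD1: {iops(nvme_latency_us):>9,.0f} IOPs")
print(f"SATA QD32: {iops(sata_latency_us, 32):>9,.0f} IOPs (needs a deep queue)")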

BobHoward
Feb 13, 2012


Zadda posted:

Does Hynix make decent NVMe drives? I recently bought a dell xps and it came with a PC300 SK hynix instead of the Samsung listed on the order (MOD,SSDR,512G,NVME,SMSNG,PM951)

More concerned about reliability than speed difference. Never heard about the brand either.

I don't know anything about the reliability of the whole product, but for what it's worth SK Hynix is an actual manufacturer of flash memory, not a weird offbrand shoveling someone else's poo poo. At about 10% market share, they're either the smallest or second smallest NAND flash supplier, depending on whether you count Intel and Micron as separate suppliers or as a single joint venture.

SK Hynix has been around a long time, just not as a very consumer-visible brand. They're also a significant DRAM manufacturer, with a much bigger share of that market than they've captured in flash.

Wrar posted:

SATA will stick around for a while, if only because most chipsets lack the PCI-E lanes to service multiple m.2 devices, and most people don't really benefit from the performance (yet) of NVMe arch.

Intel already has chipsets where a bunch of the SERDES (serializer-deserializer) lanes are flexible, capable of operating in PCIe, SATA, or other modes. SATA and PCIe are similar enough electrically that you can design a single multi-mode SERDES (aka PHY) that supports both, and then it's just a matter of the internal plumbing to selectively connect those SERDES lanes to either a PCIe root complex or SATA AHCI host controller.

You don't get this flexibility on the fly; motherboard designers choose how to allocate SERDES lanes for you. What it means is that support is already there for a market demand driven transition to more PCIe lanes / fewer SATA ports.

BobHoward
Feb 13, 2012


Paul MaudDib posted:

Samsung's real value-add is that they can make everything in the drive start to finish whereas I assume SK Hynix probably is just using an off-the-shelf controller.

I know for a fact that SK Hynix has in house design teams doing enterprise SSD controllers.

IDK about consumer though, there might not be enough value add to bother. On the consumer side, there's not much rocket science in the controller chip (particularly mainstream SATA ones), so lots of fabless design houses are capable of doing a decent job. Firmware quality tends to be more important. That's why even mighty Intel stopped designing consumer SSD controllers in house; they can just take someone else's controller, invest in firmware engineering and validation, and end up with a good result.

BobHoward
Feb 13, 2012


Surprise Giraffe posted:

Is this NAND shortage stuff just a marketing ploy? Unless there's a crunch on raw materials I don't see why they wouldn't produce to demand. Unless they ramped down production like OPEC or something

With chips, raw materials are never the problem. Fab (factory) capacity can be. It takes a long time and an investment of several billion USD to bring a single fab online. If it turns out there isn't enough demand to sell what it can make while the fab is relatively new (depreciation on the equipment is fierce), the owner will lose enormous sums of money. So, sometimes the industry is a bit conservative about expanding production capacity.

This huge upfront capital cost of constructing a fab, plus the difficulty of acquiring all the in-house expertise to do it right (or at all), is why there's so few players in NAND flash manufacturing. Which in turn does make it possible for the players involved to pull an OPEC -- that has happened before, in DRAM. They can't do too much of that because once again, the fabs they've got have to run and make significant revenue or they're hosed, but easing off on production to shore up prices or just plain agreeing to fix prices at a higher level? Yup.

BobHoward
Feb 13, 2012


Combat Pretzel posted:

I take it that the sort of NAND in SD cards is really low endurance? Or is there any reason why there aren't any slow but high capacity SSDs for archival or RAID purposes?

The NAND in SD cards is almost always low-grade TLC (bad endurance), connected to a controller much less sophisticated than a SSD's (i.e. probably not as good at wear leveling or at keeping write amplification low).

High capacity in a SSD means you can easily get performance as a side effect, because so much of SSD performance is about parallelism. Individual flash die (*) aren't very fast on their own. It's not entirely wrong to view a single SSD as a RAID 0 of a bunch of flash die.


* - technically, planes: most NAND flash has two "planes" per die, with each plane able to process commands independent from the other. A single NAND flash "chip" (the lump of plastic you see on a printed circuit board) usually has eight or more die inside, so there's a lot of parallelism to exploit here.
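The RAID 0 analogy in rough numbers; per-plane speed here is a made-up round figure, not a datasheet value:

code:

# Aggregate SSD throughput comes from running many planes in parallel,
# much like striping in RAID 0. One plane alone is slow.
PLANE_MB_S = 40          # assumed program throughput of a single plane
PLANES_PER_DIE = 2
DIES_PER_CHIP = 8

def write_speed_mb_s(chips):
    return chips * DIES_PER_CHIP * PLANES_PER_DIE * PLANE_MB_S

for chips in (1, 4, 8):
    print(chips, "chip(s):", write_speed_mb_s(chips), "MB/s, before other limits")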

BobHoward
Feb 13, 2012


td4guy posted:

Apparently some MSI motherboards come with a gimmicky heatspreader to help lower the temperature of your M.2 NVMe SSD. It turns out they're poorly designed and actually increase the temperatures slightly. Good job, MSI.

http://www.gamersnexus.net/guides/2781-msi-m2-heat-shield-increases-temperatures

You know those oversized RAM heatsinks you see on popular overclocker memory? Same deal. Actually heats the memory up a bit compared to no heatsink at all. Turns out that filling all the space between DIMMs with metal (blocking all possibility of airflow, either forced or convection) is not good, even if there are fins sticking out the top.

BobHoward
Feb 13, 2012


Mr. Crow posted:

I bought a couple of MX300s recently for a server build and I'm seeing weird issues; not sure what to think.

I assume you mean Crucial MX300.

There's a few issues here.

1. MX300 is a consumer SSD. If you're going to do something on that server which will be writing to the SSD a lot, you should use an enterprise grade SSD, not a consumer grade one. (The distinction is that enterprise SSDs are designed for both performance and endurance under heavy random write loads.)

2. That said, a MX300 should be able to do somewhere in the ballpark of 500 MB/s. You're not getting the performance you should be.

3. The particular speed you're hitting (275 MB/s) suggests something is limiting the SATA link to SATA2 3 Gbps mode (raw throughput 300 MB/s, real throughput a bit less than that; see the arithmetic sketch after this list) rather than the full 6 Gbps a SATA3 drive is capable of. This could be that your controller can only do 3 Gbps, or it could be a bad cable. The "loss of connection to device" stuff suggests it's a bad cable.

4. If you fix the SATA issues and still see very narrow spikes in write performance like that you shouldn't freak out about it, IMO.
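Where the 300 MB/s figure in point 3 comes from, as a quick check (plain arithmetic, nothing drive-specific):

code:

# SATA2 runs 3 Gbps on the wire, but 8b/10b encoding spends 10 line bits
# per 8 payload bits, so the payload ceiling is 300 MB/s. Protocol and
# command overhead shave off a bit more, hence ~275 MB/s observed.
line_rate_bps = 3.0e9
payload_mb_s = line_rate_bps * (8 / 10) / 8 / 1e6   # encoding, then bits -> bytes
print(payload_mb_s, "MB/s ceiling")                 # 300.0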

BobHoward
Feb 13, 2012


stevewm posted:

That slot appears to be a "B" keyed slot. Which means it technically could have the following interfaces: PCIe ×2, SATA, USB 2.0 and 3.0, audio, UIM, HSIC, SSIC, I2C and SMBus. However it is not required that all these interfaces are present.

The M.2 spec is really bad in this regard. There are a billion ways to make cards that will not function in a slot with matching keying.

BobHoward
Feb 13, 2012


SlayVus posted:

If it is actually using SMR technology for the platters it's practically useless for when games have to update. Large tracts of data will have to be rewritten if the update increases file size. It would be fine for video files and music.

This may be nitpicky of me, but it doesn't matter whether a write extends the size of a file here. Any write to a shingle requires a read-modify-write of a larger region, unless the entire region is overwritten in one operation. Shingled drives simply cannot overwrite single sectors in place.
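To put a number on how brutal that read-modify-write can be, a toy calculation (band size is an assumption; real SMR drives vary):

code:

# A shingled band can't overwrite one sector in place: the drive has to
# read the whole band, patch it in memory, and rewrite the whole band.
BAND_MB = 256            # assumed shingled band size

def kb_moved(write_kb):
    band_kb = BAND_MB * 1024
    if write_kb >= band_kb:
        return write_kb  # whole band replaced in one go: no read needed
    return 2 * band_kb   # read the full band + write the full band back

amp = kb_moved(4) / 4
print(f"rewriting 4 KB moves {kb_moved(4) // 1024} MB: {amp:,.0f}x amplification")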

That's a characteristic they share with NAND flash, but the costs are much worse because the baseline is spinning-disk speed instead of NAND speed, and unlike SSDs there is no parallelism (only one head can be actively doing anything; Seagate has made some noise about trying to overcome that, but I don't think it's been put into any products).

There are probably tricks they can use to hide shingling to some extent. For example, the Samsung EVO approach: reserve a region of the media that operates in a less-dense-but-faster mode, and use it as a write cache. As long as the data you're writing fits in that cache, the drive can work on moving it to permanent storage in the background. (On a SSHD the NAND could be used as the cache.) Such schemes will always have limitations though, so when you beat on the thing hard enough (such as installing a multi gigabyte update) you'll see the terrible performance sooner or later.

BobHoward
Feb 13, 2012


Krailor posted:

I could see Optane being successful in laptops. It will let OEMs chuck whatever garbage tier 2tb HDD they can get into a laptop and then slap a 16GB Optane SSD in it and you've got "2tb of storage accelerated with Intel Optane SSD technology".

They can then show it booting windows as fast or faster than a laptop with a puny 256GB SSD.

They can (and sometimes do) already do this today with NAND, at a lower cost. Why do you think a more expensive memory technology with only a marginal improvement over NAND (in this application) will do better than, say, Seagate's SSHDs? (Which have been proving for quite some time that 8GB of cache isn't enough to make a HDD perform anything like a SSD, and that isn't likely to markedly improve with Optane or 16GB cache capacity.)

BobHoward
Feb 13, 2012

If you've only got a few days left on the warranty, don't wait a day more to start the RMA process, IMO. I've RMA'd a Samsung SSD before, and while they are one of the few SSD vendors that actually stand behind their products, I remember the process being slightly obtuse. They use an outside contractor to handle SSD warranty service, and actually getting connected to that contractor through Samsung's 800 number was a bit of an adventure. Once you do get through and convince a tech that you need an RMA, they'll want you to jump through a couple more hoops before setting it up.

By the way, what I got back was a remanufactured drive with a label marking it as such, and it was definitely used: SMART showed a significant number of LBAs written. More than the drive I sent them, though not enough that I was pissed off. So, you might not (probably won't) get a like-new drive back. I suspect that any SSD vendor's most common return is not true hardware failure; corrupted internal data structures (the LBA-to-physical mapping tables, etc.) are more likely and can brick SSDs quite easily. When they get these they can send them back out as warranty replacements after little more than a new label, a firmware update, and an ATA Secure Erase to nuke the previous customer's data while also re-initializing all the internal data structures.

BobHoward
Feb 13, 2012


Potato Salad posted:

Stopgap for what use case? Where, for who?

Swear to heaven, the writing quality at Anandtech is in nosedive.

At least it wasn't Ian Cuttress! I have aneurysms every time I try to read his prose.

BobHoward
Feb 13, 2012


Ak Gara posted:

There should be an SSD specialized in capacity and long term storage not transfer speeds/response times.

Consumer SSDs already are that.

BobHoward
Feb 13, 2012


turn left hillary!! noo posted:

SSDs are good for long term storage? Better than HDDs? Somehow I thought they weren't any better.

I was reacting to someone wistfully asking 'why will no one make a SSD optimized for capacity and/or retention over speed?'. Consumer SSDs already are that, about as much as makes sense anyways. You are correct that this still leaves them short of where HDDs are at for offline (no-power) retention time.

BobHoward
Feb 13, 2012


SlayVus posted:

I wonder if you could make it the size of a 3.5" and use super capacitors for data retention. I see that they use them for power loss scenarios, but I wonder if you could use them for that purpose as well.

Supercaps are quite leaky, as a rule. Wouldn't hold any significant charge for more than a few hours, let alone the years you'd need to enhance long term retention.

Rechargeable battery chemistries also tend to have quite a bit of self-discharge. Maybe pack the thing with a ton of lithium primary cells (primary = non-rechargeable, that kind of lithium chemistry can have a shelf life of like 10 years).

BobHoward
Feb 13, 2012

If you have to have a freezer you may as well hook the thing up to a power source.

It's just not practical to try to make SSDs into an archival medium.

BobHoward
Feb 13, 2012


VulgarandStupid posted:

How important is one SSD for your OS and another for your games? I've been hearing more and more that it makes a difference.

It's probably people who saw huge gains from a SSD / HDD setup and falsely attributed much of the goodness to splitting OS and apps across two drives when 99% of it was just getting something off of spinning rust. Then they try SSD / SSD and confirmation bias themselves into believing that they spent their money wisely.

I am skeptical because, in a single SSD system, if your OS is thrashing the disk so hard that it measurably impacts game performance, you have other problems.

BobHoward
Feb 13, 2012


redeyes posted:

I've had quite a few SSDs over the years, from the original Sandforce drives to Samsung 850 Pros and now an Intel 750 NVMe. In all the older SSD cases the wear indicator never really went down (maybe to 98% or 97%, but nothing more). My 750 is now at 93% after 2 years of usage. I guess I'm just saying Intel seems to make their drives actually report wear, whereas other controllers have BS numbers.

Did you assume they're BS because you think you wrote enough to cause more wear, or did you calculate how much life should have been used up based on the total number of LBAs written (usually available through SMART) and the manufacturer's rated write endurance?

I've done the latter with Samsung 840 and 850 Pro drives used in a relatively high write workload and haven't seen anything weird so far.
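For anyone who wants to run the same sanity check, the arithmetic is just this (all values hypothetical; plug in your own from SMART and the spec sheet):

code:

# Compare host bytes written (SMART "Total LBAs Written", attribute 241 on
# many drives) against the manufacturer's rated endurance. Values assumed.
lbas_written = 80_000_000_000   # hypothetical reading from SMART
lba_bytes = 512
rated_tbw = 150                 # hypothetical rated terabytes written

tb_written = lbas_written * lba_bytes / 1e12
pct_used = 100 * tb_written / rated_tbw
print(f"{tb_written:.1f} TB written, ~{pct_used:.0f}% of rated endurance")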

BobHoward
Feb 13, 2012


Harik posted:

Question about the samsung 850 - does it have a severe problem reading trimmed (zero) sectors?

I was trying to clone a 250GB to another disk; it was a nearly fresh install and I was putting it on a 500 instead of a 250. I thought (at the time I started) that a SSD copy would be faster than waiting around for W10 to get its head out of its rear end and do an agonizingly slow install. Instead it took north of 2 hours to clone.

It started at 250MB/sec but pretty quickly dropped down to 25-30MB/s when (I'd assume) it copied the 12+GB of windows 10 trash and started copying empty space.

It's either that or my 840 has sustained write problems, and that's a bad sign.

Neither have anything interesting in SMART.

Is the destination an 850 EVO? It's a somewhat expected behavior if so.

EVOs reduce cost by operating most of their flash memory in TLC (three bits per cell) mode. TLC is inherently slower to write to than the MLC mode (2b/cell) used in Pro series drives. To make up for that, Samsung operates a few gigabytes of every EVO's flash memory in SLC mode (1b/cell, even faster write speed than MLC). The drive stages incoming writes in SLC and slowly spools them out to permanent TLC storage in the background.

For daily desktop use you will never hit the edge case of this design, which is filling up the fast SLC write buffer and then observing a huge slowdown. While the buffer isn't full, the drive is effectively just as fast as a Pro. But when you clone an entire drive...

It's harder to observe this effect in larger EVO drives, because they have more flash chips and therefore have enough internal TLC write parallelism to keep up. For $JOB reasons I have tested the 2TB 850 EVO and can report that it will write at SATA interface speed limits (roughly 500MB/s) indefinitely. However I would not be surprised if the 250 and even 500GB sizes have noticeable slowdown after filling the SLC cache area.
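A toy model of the cache cliff, if it helps to visualize (cache size and TLC speed are assumptions, not Samsung's numbers, and it ignores background draining during the write):

code:

# EVO-style SLC write cache: full interface speed until the cache fills,
# then throughput drops to the TLC backing store's pace.
SLC_CACHE_GB = 3     # assumed cache size for a small EVO
SLC_MB_S = 500       # SATA-limited burst speed
TLC_MB_S = 30        # assumed sustained TLC speed with only a few dies

def write_time_min(total_gb):
    fast_gb = min(total_gb, SLC_CACHE_GB)
    slow_gb = total_gb - fast_gb
    return (fast_gb * 1024 / SLC_MB_S + slow_gb * 1024 / TLC_MB_S) / 60

for gb in (2, 50, 200):
    print(f"{gb:>3} GB written: ~{write_time_min(gb):.1f} min")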

BobHoward
Feb 13, 2012


SinineSiil posted:

Could someone link me the post that explained why RAPID was snake oil? I forgot the details.

I can't link because IDK who wrote the post you're thinking of but am ready and willing to write a fresh new rant just for you.

It adds a disk caching layer to an operating system which already has one, and the built in one is fine. Except from Samsung's perspective. Because if they just relied on the built-in, they couldn't differentiate their product by tuning the caching layer to post super high scores in popular artificial benchmarks.

The real world benefits are dubious. Outside of the benchmarks you aren't going to see much difference, because at the end of the day cache is cache and they're not doing something magically better than the built-in. They're just capturing traces of a handful of programs and making sure those specific access patterns are super accelerated. (I mean I don't know they are doing this but RAPID almost has to be this. Like I said, there's a built-in cache and that one is fine for normal users.)

There's more, though. There is a natural tension between consuming large amounts of memory for a highly aggressive disk cache and letting that memory be used for other purposes. A disk cache needs to be tightly integrated with an OS to get memory-usage tuning right. Because RAPID cannot be that well integrated, I'd expect there are many scenarios where it hurts performance, simply by increasing system-wide memory pressure enough that Windows starts swapping when it otherwise wouldn't. This kind of performance tuning is a hard enough problem that I've seen too-much-cache-leads-to-swapping in Linux even though there wasn't an alien third party caching layer in the picture. It's also something that's incredibly hard to capture in an artificial benchmark.

The silly thing is that Samsung's SSDs are really good. They didn't need to write RAPID, they frequently top benchmarks even without it in the picture. Buy the SSD, plug it in, use it, never install Samsung software unless you need to install a firmware update.


fake edit: looks like someone linked while I was writing my rant but imma gonna post it anyways

BobHoward
Feb 13, 2012


RightClickSaveAs posted:

Is anyone still doing writeups on SSD tech like Anand used to when he was still running the site? He was fantastic at explaining the technology used in SSDs, and 90% of what I know about their workings comes from those articles, but I just realized it's been several years now and I don't know how/if the technology has changed since. Most writeups you can find now are either oversimplified (each cell is like a bucket of water!) or go super technical into the physics behind it all. I miss the intermediate explanations he was so good at giving:
https://www.anandtech.com/show/2829/2
https://www.anandtech.com/show/5067/understanding-tlc-nand

Each cell is pretty much like a bucket of water though...

Sorry. The main development since then is 3D NAND. It hasn't changed the fundamental idea of a flash memory cell -- a FET with a special isolated gate which acts as a charge trap. The state of charge in the floating gate can be read by applying a low voltage, or altered by applying high energy pulses that induce quantum tunneling through the floating gate's insulation. All of that's the same; what's new is the method of building vertical stacks of NAND memory cells on one chip. Or buckets, if you will.

Although the resulting chips take more process steps to manufacture than planar NAND, the overall cost per bit is significantly lower and the density much higher.

Also of note: prior to the development of 3D, planar NAND had been scaled pretty close to the physics-based limit on how small a NAND memory cell can possibly be. That limit was based on the thickness of the oxide insulation around the floating gate. 2x and 1x nm flash process nodes were starting to hit problems with cells being leaky buckets, having less and less useful erase/write cycle lifespan, increased program (write) time, and other issues. 3D offered a nice one-time opportunity to go backwards on feature size to get better floating gate insulation while still moving forward on density; as a result most (maybe all?) 3D NAND on the market to date performs better, retains data longer, and has better write cycle endurance than planar NAND.
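The bits-per-cell squeeze, in one line of arithmetic for the bucket-minded:

code:

# n bits per cell needs 2**n distinguishable charge levels in the same
# cell, so the margin between adjacent levels shrinks geometrically.
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    levels = 2 ** bits
    print(f"{name}: {levels} levels, margin ~1/{levels - 1} of the charge window")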

BobHoward
Feb 13, 2012


Boris Galerkin posted:

This laptop has one M.2 slot but it's used by the wifi card.

Caddys are $35+ from what I'm seeing and also I could probably pick up the cables at a local store today instead of waiting for shipping.

Have you given any thought at all to how much it is going to suck to try to plug one of those cables into the connector all the way at the back of an ODD bay that you cannot conceivably get your fingers inside or see very well into? Or what happens if the cable is then too short to stick out the bay, so now you have to get the SSD onto the other end somehow? And then once everything's connected you have to pack the cable into the bay and shove the SSD in while hoping that you don't manage to lever the bay connector out of its socket. And then you have to worry about it working itself out over time, because the bay connectors are designed to rely on a mechanical latch or screw on the laptop to hold a ODD/caddy in, so they probably don't have detents to really hold the cable in.

It may seem like a proper caddy does very little extra over a cable, but actually it does a whole lot and you will thank yourself for doing this job the right way.

BobHoward
Feb 13, 2012

There are an amazing number of keying and pin assignment options for M.2. What’s posted above barely scratches the surface.

BobHoward
Feb 13, 2012


wargames posted:

I would love a guide to M.2 because it's super confusing.

this is not a guide, just the first thing I found that had a list somewhere in it:

http://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=1-1773702-1NGFFQRG-EN&DocType=DS&DocLang=EN

There are 12 possible keying locations. Many are reserved. Of those not reserved, the standard doesn't require a socket to support all the interfaces defined for that keying option. Some B-key slots will only speak SATA, others may only speak I2C and audio, and others still only PCIe. There is absolutely no guarantee that mechanically compatible means it works. To add to the madness, it's sometimes permitted for a card to have two key slots cut into it so it will work in two different sockets. This is popular on PCIe SSDs, so they can work in 2-lane B-key slots or 4-lane M-key slots.

There is one thing at least: as far as I know, you're safe to try, provided you're not trying to force a card into a socket with incompatible keying. Your SSD might not be visible to the computer because lol you tried to put it in a laptop's dedicated modem slot that has no SATA or PCIe connections, but at least it won't fry.

BobHoward
Feb 13, 2012

There are star citizen fans who have paid fifteen thousand United States dollars to buy a completionist bundle of all (*) the ships (**) so this is actually kinda small stakes for them.

* (not actually all the ships)

** (most ships not yet ‘flyable’ in the not-yet-a-real-game)

BobHoward
Feb 13, 2012


KOTEX GOD OF BLOOD posted:

What's wrong with using DBAN?

DBAN is built around the assumption that when it overwrites a particular SATA Logical Block Address (LBA) several times in a row, it is overwriting the same physical location in the media each time, and that roughly all of the drive’s physical storage is directly addressable.

HDDs actually work that way, more or less. The spare sector pool (and any failed sectors that have been replaced by spares) are not addressable through SATA, but the amount of spare area on a HDD is tiny.

SSDs are different. They have a Flash Translation Layer which makes it difficult to know whether you’ve actually erased anything by “overwriting” it. The FTL completely virtualizes the LBA address space, has a significantly bigger physical address space than virtual (there’s usually a minimum of 7% more storage than the marketed capacity of the drive), and will nearly always be doing things like “oh you want to overwrite LBA 379? Ok I’ll mark the old physical location for erasure at some unknown time in the future, pull an erased sector out of the free list, write the new data there, and update the logical to physical address mapping table so that future reads get the new data”. After that, the old data is no longer addressable through SATA commands, but an attacker capable of disassembling the drive or hacking its firmware to get raw flash memory access can read it.

When you issue a SATA Secure Erase command to a well designed SSD, it bypasses the FTL and tells the drive to directly and immediately erase all physical storage, including blocks which failed and were retired at some time in the past. This level of erasure is actually better than you can do with DBAN on a HDD since there’s no way for DBAN to erase failed, spared out sectors.

(Some SSDs may implement Secure Erase by encrypting all data stored on the drive, and erasing only the decryption key when asked to do a Secure Erase. Same level of erasure, except potentially vulnerable to the encryption cipher getting cracked in the future.)
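The crypto-erase idea is simple enough to sketch. This toy uses a sha256 counter-mode keystream as a stand-in for the drive's real AES engine (illustration only, not real crypto):

code:

import os, hashlib

def keystream(key, n):
    # toy stream cipher: sha256 in counter mode, standing in for AES
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

key = os.urandom(32)                 # internal key, generated at manufacture
plaintext = b"user data living on the flash"
stored = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

key = None   # Secure Erase: forget the key; 'stored' is now noise forever

(On Linux, hdparm's security-erase feature is the usual way to send the actual ATA Secure Erase command to a drive.)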

Another point against using DBAN on SSDs is that by its nature DBAN wants to overwrite several times, and that’s not good for your SSD’s lifespan. A Secure Erase should put much less wear on the drive.

BobHoward
Feb 13, 2012


Star War Sex Parrot posted:

Does DBAN write zeroes or random data? An FTL optimization I'd try would be to only allocate a physical page to an LBA if the value being written is non-zero. In theory then a multiple DBAN pass wouldn't actually affect NAND life unless that process writes non-zero data to the LBAs.

Not sure exactly what DBAN does off the top of my head, and that does seem like a viable optimization. However in that case a zero-writing DBAN wouldn’t be erasing much of anything. If you absolutely have to use something like DBAN to erase an SSD with confidence that the data is gone, it would be best to do many overwrite passes and for those passes to actually be writing to physical storage. (Many = some indeterminate amount, but probably 10 full drive writes or more. You’d need to generate enough write-cycle wear to force wear leveling to recycle even the storage that had an outlier level of wear at the start of the process.)

BobHoward
Feb 13, 2012


BIG HEADLINE posted:

I've a feeling Samsung will always charge more because they're more or less subsidized by Apple, who exclusively uses *their* SSDs.

"Exclusively" has never been true, fyi. Although Samsung always seemed to be their highest volume supplier, Apple shipped buttloads of Toshiba and Sandforce SSDs in various Mac models. In many cases it was the luck of the draw which vendor you got. There used to be people who would buy a new Mac laptop, check out the hardware info app after taking it out of the box, and if they didn't see a Samsung, would then go through the return dance to fish for one. (Apple's OEM Toshiba SSDs were kinda slow, and Apple has a 2 week, no questions asked, 100% money back return policy.)

These days Apple is shipping as close to full custom SSDs as they can get without buying their own NAND flash fabs. Several years ago they bought a fabless SSD controller design house, and over the past two-ish years they've been transitioning Mac SSDs to use them. It would be very surprising if they tied their private label SSD controllers to a single NAND flash vendor.

BobHoward
Feb 13, 2012


nielsm posted:

Does anyone know of research/whitepapers on the phenomenon of SSD NAND cells losing data if not refreshed once in a while? One of my EE friends doesn't believe this is a thing that actually happens.

An individual NAND cell is a bit like DRAM in that the idea is to trap some electrons in a device which amounts to a capacitor. NAND differs from DRAM in that reads disturb the existing state of charge much less, and leakage is close to zero. However, “much less” and “close to” are not the same as perfect. Tell them they should be ashamed as an EE to assume that it is physically possible to build the moral equivalent of a diode whose IV curve is a perfect 90 degree angle consisting of the -V and +I axes.

BobHoward
Feb 13, 2012


SlayVus posted:

SSDs are still sold in units of 1000, not 1024 bytes. So a 1TB SSD is still the same size as a 1TB HDD, which equates to 931.32GiB.

Except that just as with 1TB HDDs, what you actually get is some slightly larger number, the exact value of which varies by manufacturer and drive model. 1TB is storage industry shorthand for “you get at least 1.000e12 bytes”, not “you get exactly”.

As an example I can ssh to some machines equipped with 850 Pro 1TB drives. They show up as 1,024,209,543,168 byte devices, which is 1.024TB or 0.93TiB.
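The unit conversions, for anyone who wants to check the math:

code:

# Decimal (marketing) vs binary units, using the 850 Pro figure above.
bytes_reported = 1_024_209_543_168
print(bytes_reported / 1e12, "TB (decimal)")    # ~1.024
print(bytes_reported / 2**40, "TiB (binary)")   # ~0.93
print(1e12 / 2**30, "GiB per marketing 'TB'")   # ~931.32, the familiar number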

(OP, I can’t find any Samsung Evos to check. I know there’s some on our network somewhere but most of our machines have Pros)

BobHoward
Feb 13, 2012


Pastry Mistakes posted:

I had a friend help me reformat the 840, and during the Win 10 clean install it still showed the rolled-back Nvidia driver as existing. We formatted the drive, so this was more than confusing. After another format from the Win 10 USB bootable, we were still seeing the previous Nvidia drivers (as well as three random other ones that weren't GPU related).

Yeah it's confusing that stuff appeared to persist post format, but that's an extraordinarily unlikely side effect of a failing SSD, so more likely explanations are along the lines of 'your friend didn't really format the thing and the install wasn't truly clean'.

quote:

Long story short my friend takes my 840 and reformats it on another computer.. And somehow turns this thing Raw. Even after writing a new volume, this thing will not show up outside of disk manager.

How hosed is this drive? Is it salvageable or does it sound like it was on its way out?

What does "turns this thing Raw" mean? And are you really sure your friend knows what they're doing? Showing up in disk manager, but not elsewhere, could just mean your friend managed to erase the partition structure (or whatever) without properly setting up new.

Instead of freaking out about things which don't really (imo) seem to be symptoms of a failing disk, get direct information: download Samsung's Magician and use it to report the SSD's health. Alternatively, you could use some other SMART based tool (like Crystal DiskInfo), but Magician's the best choice for a Samsung SSD.

If Magician gives the drive the green light, get a new computer toucher friend who knows how to cleanly reformat a disk.

BobHoward
Feb 13, 2012


Pastry Mistakes posted:

We let the Windows installation perform the format twice, and after we saw the drivers persist after the second reformat, that's when he offered to format the drive.

By Raw I mean it's a completely unallocated disk now. After some research on how to do this poo poo, I bought a USB/SATA connector and plugged it into a family member's computer, opened disk manager, and created a new volume. The drive has a GPT partition table and it's NTFS as well, with 232GB available. In disk management it shows a 450MB recovery partition, a 100MB EFI system partition, and then a 232GB primary partition.
Windows says it's healthy, but when I plug it in to my computer to perform a clean install of windows 10 it can't be detected.

I bought an 860 Evo yesterday out of frustration and out of the box it couldn't be detected either. I popped it in to the other computer and it was also raw, so I did the above steps again.

I installed magician as well, and it can't see the 860 or the 840 so I can't run any checks. Maybe it's due to the usb/sata connector. I've no idea.

Yes, being behind a USB-SATA converter often interferes with SMART reporting tools like Magician. SMART is the ATA feature disk drives (both HDD and SSD) use to report their health. Unfortunately the USB-to-SATA protocol conversion means that drives behind a converter don't appear to be ATA devices to the system. There is a protocol for tunneling SMART through anyway, and most modern USB-SATA converters support it, so with the right reporting tool you can work around this. I know how to do this on Linux; unfortunately I don't know how on Windows.
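Since I mentioned Linux: the trick there is telling smartmontools to tunnel ATA commands through the bridge via SAT (SCSI/ATA Translation). A minimal sketch, assuming the enclosure shows up as /dev/sdb (your device path will differ, and you'll need root):

code:

import subprocess

# smartctl's '-d sat' forces the SCSI/ATA Translation pass-through, which
# most modern USB-SATA bridges support; '-a' dumps all SMART info.
subprocess.run(["smartctl", "-d", "sat", "-a", "/dev/sdb"], check=True)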

Troubleshooting logic: You've now got two drives which sound like they work fine outside your computer, but have identical (I think? Sounds very similar at least) problems when connected to yours. That means your computer's now the prime suspect, not the SSDs. If your computer is a desktop, first thing I'd try is swapping the SATA cable out, and/or connecting the SSD to a different SATA port on your motherboard.

BobHoward
Feb 13, 2012


Shaocaholica posted:

So will all NVME SSDs operate in AHCI mode if necessary? How much faster is AHCI mode on PCIe x4 vs 6G-SATA?

What on earth are you talking about? NVMe doesn't support an AHCI mode. It's not an extension of SATA/AHCI; it's an entirely new standard written by a different standards body.


BobHoward
Feb 13, 2012


Paul MaudDib posted:

SSDs read/write a minimum of a 4k page at a time, so you want to structure your data to avoid lots of micro-reads (which is also true on a HDD, but less so since HDDs naturally tend to fragment anyway).

what

HDDs also read/write a minimum amount of data at a time, and on many modern ones that minimum is 4K (aka "advanced format"), so it's even the same quantum. And it is far more important (not less) to avoid "micro-reads" on a HDD.

quote:

If you are going to do this (vs a single data block that you load into memory in a single go) you want to be using multiple IO threads, which is very much the opposite of what you do with a HDD.

Using multiple IO threads to generate lots of IOPs is about database-like loads. It's not very applicable in most games since few of them look much like "thousands of clients making random requests". (Unless you're talking about the server side of a MMO, but that's not what we're discussing.)

The only thing that has changed much about game loading performance optimization on SSDs is that you don't have to do very much to get good results. On HDDs or optical media, game load optimization meant linearizing data layout. On SSDs that is still pretty much the best way to go, but if a game developer doesn't bother, it won't tank performance horribly.
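If you want to see per-request overhead for yourself, a crude experiment (with the file hot in the page cache this mostly measures per-call overhead rather than flash speed, which is exactly the point about micro-reads):

code:

import os, time

# Many tiny reads vs one big read of the same 64 MB file.
path = "testfile.bin"
size = 64 * 1024 * 1024
with open(path, "wb") as f:
    f.write(os.urandom(size))

def timed(chunk):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return time.perf_counter() - start

print("4 KB reads  :", round(timed(4096), 3), "s")
print("one big read:", round(timed(size), 3), "s")
os.remove(path)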

quote:

You can pretty easily see how a single-threaded engine designed for a PC with 32 MB of RAM in 1995 might not be doing the optimal thing with a super-fast SSD on a PC with, say, 4 GB of RAM in 2017. You're really only getting a fraction of the speed your SSD can offer.

What fishopolis said. Also, 'not optimal for SSD' doesn't translate into a weird long delay with the system showing low load before anything starts happening, which is what the OP complained about. I don't get why you're obsessing over 1995 games not being optimized for SSDs when there is clearly something else going on.
