|
Really late to the party here, but I didn't see this mentioned anywhere in the thread. Running SpinRite level 2 on SSDs can actually recover performance that is lost over time due to fading data. Don't run levels 3, 4, or 5, as they can shorten the lifespan of the drive due to the finite number of writes that can be performed.
|
# ? Aug 6, 2014 21:18 |
|
|
# ? Apr 28, 2024 03:20 |
|
Alereon posted:Like it says in the OP and was just discussed, there is no need to overprovision good SSDs for desktop use if TRIM is working. Just don't fill the drive all the way full and you get the same effect. If TRIM isn't working or on bad SSDs at least 20% overprovisioning is required. Ok cool, thanks for the reassurance. I got a little paranoid there for some reason, haha.
|
# ? Aug 6, 2014 21:20 |
|
ssa3512 posted:Really late to the party here, but I didn't see this mentioned anywhere in the thread. Running SpinRite level 2 on SSDs can actually recover performance that is lost over time due to fading data. Don't run levels 3, 4, or 5, as they can shorten the lifespan of the drive due to the finite number of writes that can be performed. I also hear that dancing the macarena every boot increases your SSD performance, but you have to do the entire motion between power button press and UEFI logo showing up.
|
# ? Aug 6, 2014 21:25 |
|
Hey, I don't know why nobody's brought this up, but you can pray to god to make your SSD's last longer.
|
# ? Aug 6, 2014 21:31 |
|
ssa3512 posted:Really late to the party here, but I didn't see this mentioned anywhere in the thread. Running SpinRite level 2 on SSDs can actually recover performance that is lost over time due to fading data. Don't run levels 3, 4, or 5, as they can shorten the lifespan of the drive due to the finite number of writes that can be performed.
|
# ? Aug 6, 2014 21:53 |
|
Alereon posted:SpinRite is scam software from fake "computer expert" Steve Gibson and completely unable to do anything of value, its effect in this case is the same as running a benchmark on the drive. Really? I've used it to successfully recover a number of hard drives. The efficacy of it on SSDs I have heard secondhand; given that it performed as intended on HDDs, I had no reason to doubt that it worked as stated on SSDs.
|
# ? Aug 6, 2014 22:06 |
|
Alereon posted:a partition 100% of the size of the drive that is 80% full is fine, but a partition that is 80% of the drive and 100% full is not, so you face an additional usable capacity reduction of around 10% of the drive. Most importantly, filling an SSD past 80% doesn't cause the drive to blow up, it just reduces performance and efficiency. quote:For Shadley, the real issue is that the LBA range is not likely to be in any sequence of physical locations on the flash in the SSD if wear leveling is working. "This means that the SSD is still fraught with the issue of how to move data around inside the device to actually free space," said Shadley. DarkJC posted:Run it on C. Most games don't benefit as much from an SSD as other applications do, since they typically just load data in sequential batches.
|
# ? Aug 6, 2014 22:16 |
|
ssa3512 posted:Really? I've used it to successfully recover a number of hard drives. The efficacy of it on SSDs I have heard second hand, given it performed as intended on HDDs I had no reason to doubt that it worked as stated on SSDs. Klyith posted:OK, so as I said 20% is not needed. But filling the partition doesn't cause problems until you're in the high 90s either (and windows yells about it). It's not like you have fragmentation. And I'm not convinced you covered the wear leveling thing: I'm not trying to be a dick here, but you've made posts like this several times in this thread now. If you want to learn more about how something works or why a certain recommendation is the way it is versus something else, just ask and we'll be happy to explain or discuss. It's NOT helpful to post some stupid thing or opinion you found on the Internet like it's true, confuse other readers of the thread, then argue about it. The main reason this thread exists and provides value is because of the near infinite volume of wrong information and bad opinions about SSDs out there. You certainly don't have to agree with me or anyone else in this thread, but if you want to have a discussion then you need to do it in a way that adds value to, not detracts from, this thread.
|
# ? Aug 7, 2014 00:16 |
|
Alereon posted:just ask and we'll be happy to explain or discuss I guess one of the things I am tripping over is that this thread has a rather nebulous target audience. I guess most of the people who'd read SHSC can be trusted to look at their drive capacity. But way back when I worked desktop support, I saw occasions where through accident or malice someone's drive got near-completely filled up, for a week or more at a time, in ways that I'd think would be bad for SSDs. Your own computer is one thing, but would you never overprovision if you installed an SSD for a friend or family member? It also feels kinda schizo that in a thread that rightfully keeps the recommendation list limited to only the most reliable drives, a thing that may enhance reliability in some cases gets tossed.
|
# ? Aug 7, 2014 01:04 |
|
Klyith posted:I guess one of the things I am tripping over is that this thread has a rather nebulous target audience. I guess most of the people who'd read SHSC can be trusted to look at their drive capacity. But way back when I worked desktop support and saw occasions where through accident or malice someone's drive got near-completely filled up, for a week or more at a time, in ways that I'd think would be bad for SSDs. In your own computer is one thing but would you never overprovision if you installed an SSD for a friend or family member? You're correct in that (at least conceptually) TRIM really doesn't do much to solve the wear-leveling issue directly, but the "special sauce" in drives like the EVO is quite good at reducing unnecessary page re-writes to begin with, and TRIM does assist it by helping it keep an up-to-date blockmap of what areas the controller can grab and re-purpose. TRIM + "special sauce" = pretty good wear-leveling, even without you going out of your way to help it. Really, there are two points to over-provisioning: the drive basically needs swap space to let it intelligently manage things when it's almost full and getting hammered with writes, and the drive needs a ready bank of "spare" cells to replace failing ones with. For the first issue, it's all about relative size. When you start talking about 250/500/1TB-sized SSDs, even 5% or 10% free space suddenly becomes a pretty big dumping area for the average user's sub-10GB/day writes, and gives the drive more than enough space to do its thing. Basically, on anything bigger than a 120GB drive, the natural free space you have available to the drive by simply not filling it to 100% (and which is reported to the controller via TRIM) is sufficient for modern GC algorithms. So 20% is overkill.
As for the second part, note that the EVO's "SLC" cache is already factored into the reported sizes (e.g., for the 120GB version, it's not 120GB - Cache = usable, it's actually 128GB - Cache = 120GB = usable). If you trust Samsung, the reason they were able to move from the 840 line (with over-provisioning but no "SLC cache") to the 840 EVO (minimal over-provisioning and "SLC cache") is because their testing determined that the write endurance for TLC was high enough that, in practice, cells were only being retired at a very low rate, and most users would benefit far more from the fast cache than they would by trying to extend the usable life of the drive as long as possible. So they took the space that used to be for over-provisioning and turned it into a fast cache, and the crowd rejoiced. It makes sense if you think about it: a normal home user with a 250GB drive that's 90% full still has 25GB free--and is unlikely to hammer the drive with 25GB of non-sequential writes in a short enough time that the drive won't be able to handle it. I mean, poo poo, most people don't write 25GB in an entire day, let alone in a near continuous stream. In the end, you can lop off 20% of your drive if you really want, and at least in theory you should have a drive that lasts longer than if you don't. But as this thread has harped on again and again, we're already talking a lifespan of enough years that it'll be obsolete far before it dies for a normal user, so hamstringing yourself on usable size to extend its life even further seems kinda pointless--especially since the drives that are most likely to see a benefit from it (the 120s) are also the ones where you'll see the most impact from the loss of 20% usable size, and are the most likely to be replaced in the near future, far before they'd possibly die. I mean, I could underclock my CPU and probably get a few extra years out of it, too, but why would I?
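The "just leave free space" argument above comes down to simple arithmetic. A minimal sketch of it (the 256 GB raw NAND figure is an assumed round number for illustration, not a Samsung spec):

```python
# Toy arithmetic behind "just don't fill the drive": with working TRIM,
# any LBA that was never written or has been trimmed stays unallocated,
# so the controller's scratch pool is simply raw NAND minus data stored.
# A hypothetical 250 GB-class drive with 256 GB of raw NAND (assumed).

RAW_NAND_GB = 256

def effective_spare(used_gb):
    """Scratch space available to the controller, assuming TRIM keeps
    the mapping table in sync with the filesystem's free space."""
    return RAW_NAND_GB - used_gb

# Full-size 250 GB partition kept 80% full vs. a 200 GB partition
# filled completely: the controller sees the same spare pool either way.
full_size_partition = effective_spare(250 * 0.8)  # 56.0 GB
shrunken_partition = effective_spare(200)         # 56 GB
print(full_size_partition, shrunken_partition)
```

With TRIM working, the controller doesn't care whether the unused 56 GB comes from unpartitioned LBAs or from free space inside the filesystem, which is why the partition trick buys nothing on a good drive.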
|
# ? Aug 7, 2014 07:55 |
|
A lot of the details of how wear-leveling has been improved are simply unavailable to us - they're trade secrets. Instead, the best we can look for is evidence of it being a significantly improved problem. I'll be relatively brief: here are two points in favor of "Wear leveling is working well." 1) The Intel DC S3700 This drive has had significant improvements to its performance consistency over previous generations of SSDs, which requires fast and smart management of NAND defragmentation. NAND defragmentation is when wear-leveling occurs, so the better the defragmentation process, the better wear-leveling can perform even with an identical algorithm. AnandTech on the controller and how it differs from the X25 series/Intel 320 series controller Review from AnandTech showcasing extremely high 4K random I/O in enterprise environments, for which the drive is rated to 10 drive writes a day for 5 years. You might note, however, that the drive (like many enterprise drives) is pre-provisioned with a lot of free space - a 200 GB drive has 264 GB of NAND. This is less than the previous generation of Intel enterprise drive, the SSD 710 (200 GB has 320 GB of NAND). Yet the 710 is rated to 500 TB to 1.5 PB of writes on the 200 GB drive; the 200 GB DC S3700's 10 drive writes/day for 5 years works out to 3.65 PB - more than doubling the 710's best case with half the overprovisioning and the same NAND. For more insight into the new controller, you might check it out on a client drive, the DC S3500. Using standard MLC NAND instead of HET MLC and using the same 264 GB of NAND to give 240 GB of user space instead of 200 GB, performance and endurance are lower. The 240 GB drive is rated to a conservative 140 TB written. But you're going from 10,000-30,000 cycle HET MLC to 3,000-5,000 cycle MLC and dropping from 25% to 9% of capacity reserved (or 32%/10% of user space reserved). Dropping an order of magnitude matches expectations.
So a 140 TB guarantee of write endurance on the DC S3500. 583 drive writes. But compare to the old X25-M (G1). That drive, stock 7% spare area, could get 29 TB of write endurance, or 181 drive writes. And that was with 5,000 - 10,000 cycle NAND! Wear leveling improved write endurance by a factor of 3 to 10! But is that 140 TB guarantee necessarily all the drive can take? 2) Consumer SSDs can take a shitload of punishment well beyond their rated endurance Tech Report has been doing a series of articles on SSD endurance. It involved six contenders, including low-endurance drives like the Intel SSD 335 (low-endurance MLC, rated to 22 TB endurance) and Samsung 840 (non-EVO, non-Pro, TLC with DSP, write endurance isn't rated). Data is written sequentially, but with multiple drive fills and some static data per cycle to make sure that wear leveling is tested. As of the latest update, the SSD 335 is kaput... after 750 TB of writes. Good one, Intel-tweaked SandForce controller. They also lost a Kingston HyperX 3K (3K-rated MLC, non-Intel SandForce, incompressible data) at 728 TB. But look at the graph of NAND failures: Everything started failing pretty much at once after 600 TB. That's the kind of graph you'd expect with good wear-leveling, because no one cell was hammered enough to fail early, so to speak. Yeah, it's not perfect - you started getting a cascade of failures over 128 TB rather than the drive just instantly stopping (which the Intel 335 did). But that's pretty loving good wear leveling. That's 2,500 drive writes before 3,000 cycle NAND started failing and 3,033 drive writes before it conked out entirely. That's pretty loving good. And before you go "But that's such big overprovisioning!" No, it's not. It's a 240 GB drive instead of a 256 GB drive because of RAISE - a RAID-like structure of the NAND. Only about 7% of NAND was set aside as spare area. So, spare area to improve performance? Not with modern controllers and client workloads.
Spare area to control write endurance? It works pretty well as-is. Take away TRIM and the story changes, maybe, but with TRIM? Go hog-loving-wild. Factory Factory fucked around with this message at 13:43 on Aug 7, 2014 |
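The endurance figures above survive a back-of-envelope check. Here's the arithmetic spelled out (decimal units; the 160 GB X25-M capacity is inferred from the post's 29 TB / 181 drive-write figures, not stated there):

```python
# Back-of-envelope endurance math for the drives discussed above.
# Decimal units (1 TB = 1000 GB); capacities as given in the posts.

def drive_writes(endurance_tb, capacity_gb):
    """How many full-drive fills a total-bytes-written rating equals."""
    return endurance_tb * 1000 / capacity_gb

# DC S3700 (200 GB): 10 drive writes/day for 5 years, in petabytes.
s3700_pb = 200 * 10 * 365 * 5 / 1e6  # GB written -> PB
print(s3700_pb)                      # 3.65 PB

# DC S3500 (240 GB), rated to 140 TB written.
print(drive_writes(140, 240))        # ~583 drive writes

# X25-M G1 (160 GB, inferred) at 29 TB endurance.
print(drive_writes(29, 160))         # ~181 drive writes
```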
# ? Aug 7, 2014 11:39 |
|
Klyith posted:Okey dokey, asking in the nicest way possible: please point me to resources I can read about how modern controllers use trim and solve the problems that used to be solved with blunt force. I'll take a crack at explaining it from a slightly different angle than DrDork. To the OS, a SSD is a device which can store N blocks, numbered 0 to N-1 (the Logical Block Address or LBA). Internally, SSDs have more than N blocks available to store data -- I'll call this number "T" for the true block count -- and they maintain an arbitrary mapping table between LBA numbers and the true internal flash block numbers. T-N is the factory's overprovision, and all SSDs worth buying have some amount of it, even the EVO (IIRC it's not all SLC cache). But T-N isn't the whole story. When you first take a SSD out of the box (or reset it with an ATA Secure Erase), the mapping table is empty. All physical media blocks are unallocated and none of the LBAs are "real" yet. Mapping and allocation of physical space takes place as LBAs are written to. On a fresh SSD, the entire drive (all T blocks) is effectively overprovisioned space. The OS has to write to every legal LBA at least once, allocating N blocks, before overprovisioned space drops to the nominal T-N blocks. The partitioning trick is based on this behavior. Operating systems should never write to unpartitioned LBAs, so the SSD is never forced to allocate real storage to them. TRIM is, in a sense, the inverse of writing to a LBA that first time. It notifies the SSD that a particular LBA is no longer in use, meaning it's safe to unmap it and deallocate its former physical storage. The idea is that an OS should use TRIM on LBAs whose contents (file data) have been deleted. Without TRIM, writes are a one-way ratchet towards T-N. With TRIM the OS is able to give things back, so that the effective overprovisioning grows and shrinks to match the amount of storage actually in use. 
This is why you're getting told there's no point -- there isn't. If you have a 250GB drive with a 250GB partition and you always have at least 50GB free, with TRIM you should always have about as much effective overprovisioning as a 200GB partition without TRIM. Enterprise poo poo is special because drives designed for enterprise database workloads have to assume that (a) the DB may issue TRIM commands rarely, if ever at all, with nearly every LBA continuously allocated and (b) the DB may hammer on the drive with extremely high IOPS random 4K (or smaller!) writes 24/7, which is the worst case for SSD write amplification. This is why all the enterprise oriented SSDs tend to ship with substantially higher factory overprovisioning. You don't need to emulate that if your workload isn't similar, and you especially don't need to if you aren't going to keep your drive full 100% of the time.
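The LBA-mapping model BobHoward describes can be sketched in a few lines. This toy deliberately ignores pages, erase blocks, and rewrite relocation (a real FTL allocates a new page on every write); it only illustrates the allocate-on-first-write and deallocate-on-TRIM bookkeeping:

```python
# Toy flash translation layer for the N-vs-T model described above:
# T = true physical blocks in the drive, N = LBAs exposed to the OS.

class ToySSD:
    def __init__(self, n_lbas, t_physical):
        self.n = n_lbas
        self.mapping = {}                    # LBA -> physical block
        self.free = set(range(t_physical))   # unallocated physical blocks

    def write(self, lba):
        if lba not in self.mapping:          # first write allocates storage
            self.mapping[lba] = self.free.pop()

    def trim(self, lba):
        phys = self.mapping.pop(lba, None)   # unmap and reclaim the block
        if phys is not None:
            self.free.add(phys)

    @property
    def effective_overprovision(self):
        return len(self.free)

ssd = ToySSD(n_lbas=100, t_physical=110)  # 10 blocks of factory spare

print(ssd.effective_overprovision)  # 110: fresh drive, everything is spare

for lba in range(100):              # OS writes every legal LBA once
    ssd.write(lba)
print(ssd.effective_overprovision)  # 10: the one-way ratchet's floor (T-N)

for lba in range(50):               # delete half the data, with TRIM
    ssd.trim(lba)
print(ssd.effective_overprovision)  # 60: TRIM gives the space back
```

Without the `trim()` calls, the spare pool only ever shrinks toward the factory T-N floor, which is exactly the one-way ratchet in the post.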
|
# ? Aug 7, 2014 12:09 |
|
Hey all - last time I'm going to shill in here - I've decided to dump my personal stash of SSDs that I've been hoarding, I realize now that I'm never going to have a chance to do the projects that I've been wanting to do. 512 Pros and 1TB EVOs (and mounting brackets) here: http://forums.somethingawful.com/showthread.php?threadid=3656440
|
# ? Aug 8, 2014 16:25 |
|
synthetik posted:Hey all - last time I'm going to shill in here - I've decided to dump my personal stash of SSDs that I've been hoarding, I realize now that I'm never going to have a chance to do the projects that I've been wanting to do. You're a monster.
|
# ? Aug 8, 2014 20:54 |
|
BobHoward posted:(b) the DB may hammer on the drive with extremely high IOPS random 4K (or smaller!) writes 24/7, which is the worst case for SSD write amplification. I don't think the "or smaller" is true generally. Databases will only read or write whole data pages, and for most databases this size is some multiple of 4kb by default (Oracle is 4kb if I recall, MSSQL and Postgres are 8kb, and MySQL is 16kb).
|
# ? Aug 8, 2014 21:24 |
|
Apparently Oracle's redo log does a lot of 1 KB - 1.5 KB writes. E: Older versions, anyway.
|
# ? Aug 8, 2014 21:51 |
|
When the 850 Evo comes out, is that something we're going to have to wait on for impressions or is it going to be a known quantity?
|
# ? Aug 10, 2014 03:02 |
|
OK, I feel dumb for asking, as I'm sure it's been brought up a bunch before. I'm working with someone that is getting a 1TB 840 EVO for their MacBook. I know a lot of people then grab TRIM Enabler to patch their OS. He uses this system for work, probably doesn't want to patch system files, and I've read that it causes issues with OS X 10.10 / Yosemite (as in, you *must* disable it before the upgrade, and then it won't work with 10.10 unless you boot up in debug mode to allow modified kext usage). So, I'd like for him to just go without TRIM. Would it be safe to just do a ~5-10% overprovision and partition it to 900-950 GB? I don't think he will try to fill his drive up completely. It's taken him 3 years to get his old 750GB HDD up to 400GB used. But he also wanted 1TB to ensure he'd have the space if needed, and may not understand the idea of wear-leveling, write amplification, and how the lack of TRIM makes overprovisioning necessary. Is that OK?
|
# ? Aug 10, 2014 19:29 |
|
Alereon posted:The above is valid for the SSDs that are recommended in this thread, which have firmware with good block management. For lovely SSDs you definitely do want to overprovision by at least 20% on the partition level, because simply reducing the number of blocks the drive has to manage is a boon to performance and reliability. On the topic of lack of TRIM support, I have a relatively naive question. If I had a 120GB SSD that didn't support TRIM connected to a SATA controller that didn't support TRIM, and I filled that drive to the brim with data once, does that drive now believe that every flash cell is in use for all time?
|
# ? Aug 10, 2014 19:41 |
|
Xenomorph posted:Would it be safe to just do a ~5-10% overprovision and partition it to 900-950 GB? I don't think he will try to fill his drive up completely. It's taken him 3 years to get his old 750GB HDD up to 400GB used. But he also wanted 1TB to ensure he'd have the space if needed, and may not understand the idea of wear-leveling, write amplification, and TRIM making it necessary to overprovision. No, you would need to overprovision the drive by at least 20% if TRIM will not be enabled. This is because without TRIM the drive has no way of knowing that the contents of flash memory have been deleted, so the drive will always be full. Naffer posted:On the topic of lack of TRIM support, I have a relatively naive question. If I had a 120GB SSD that didn't support TRIM connected to a SATA controller that didn't support TRIM, and I filled that drive to the brim with data once, does that drive now believe that every flash cell is in use for all time? Yes*. *until you Secure Erase the drive or wipe it on a system that supports TRIM.
|
# ? Aug 10, 2014 19:42 |
|
Alereon posted:No, you would need to overprovision the drive by at least 20% if TRIM will not be enabled. This is because without TRIM the drive has no way of knowing that the contents of flash memory have been deleted, so the drive will always be full. I thought that despite not being as "aggressive" as a SandForce controller or OS-initiated TRIM, that Samsung's passive garbage collection eventually cleans the drive during idle time.
|
# ? Aug 11, 2014 15:29 |
|
Xenomorph posted:I thought that despite not being as "aggressive" as a SandForce controller or OS-initiated TRIM, that Samsung's passive garbage collection eventually cleans the drive during idle time. Non-TRIM garbage collection is basically just defragmentation, and many modern drive controllers don't give a poo poo about that any more because they just store a lookup table for what "sequential" means in onboard RAM. It's a nice thing as long as you're doing re-writes, and it prevents re-writes from filling up the NAND by themselves, but it's no TRIM replacement. This is because, as Alereon said, without TRIM the drive has no idea what is valid data or not. As soon as you have more user writes than the drive's addressable size, the only way the drive knows a block is free for garbage collection is if the OS says to overwrite that block.
|
# ? Aug 11, 2014 15:36 |
|
Additionally, the quality of the garbage collection comes into play when the drive is able to identify what is garbage, either after being marked as such by TRIM or when overwritten. GC quality is the tradeoff between not allowing garbage to build up to an unacceptable degree and not impacting responsiveness and performance with GC. Early drives performed GC when idle, resulting in the best possible benchmarks, but that meant that when garbage built up during active use the drive would pause to run GC, causing a brief but very noticeable hang. Intel's innovation was to perform GC on an aggressive schedule, causing frequent small blips in performance but preventing garbage from building up, making performance more consistent overall. Sandforce's GC is always running in the background, seeking a balance between being aggressive enough to prevent garbage buildup while not impacting performance.
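The cost tradeoff driving those GC strategies can be shown with a toy victim-selection model. Real controller algorithms are proprietary; this greedy sketch only illustrates why blocks full of stale pages are cheap to reclaim while blocks full of valid data are expensive:

```python
# Toy garbage collection: erase blocks hold pages that are either valid
# or invalid (stale). GC must copy the valid pages elsewhere before
# erasing a block, so the fewer valid pages a victim has, the cheaper
# the collection -- the copies are the write amplification cost.

def collect(blocks):
    """Greedy GC: pick the block with the least live data, relocate its
    valid pages, erase it. Returns (reclaimed_pages, copied_pages)."""
    victim = min(range(len(blocks)), key=lambda i: blocks[i]["valid"])
    b = blocks[victim]
    copied = b["valid"]                 # live pages relocated first
    reclaimed = b["valid"] + b["invalid"]
    b["valid"], b["invalid"] = 0, 0     # block erased, fully free again
    return reclaimed, copied

# Three 64-page blocks. Without TRIM, pages only become "invalid" when
# the OS overwrites their LBA, so garbage accumulates slowly.
blocks = [
    {"valid": 60, "invalid": 4},
    {"valid": 10, "invalid": 54},       # mostly stale: the ideal victim
    {"valid": 40, "invalid": 24},
]

reclaimed, copied = collect(blocks)
print(reclaimed, copied)                # 64 pages freed for only 10 copies
```

When garbage is allowed to build up, every block looks like the first one and GC has to copy a lot of live data per erase, which is the stall the early drives exhibited.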
|
# ? Aug 11, 2014 16:44 |
|
My girlfriend has an old 2008 Macbook (5.1, currently running snow leopard, may upgrade to 10.10). The HDD seems like it's on its last legs, so I was thinking of replacing it with an SSD. What would be a good option here? Is there a cheap and reliable sandforce SSD?
|
# ? Aug 11, 2014 20:59 |
|
nop posted:My girlfriend has an old 2008 Macbook (5.1, currently running snow leopard, may upgrade to 10.10). The HDD seems like it's on it's last legs as I was thinking of replacing it with an SSD. What would be a good option here? Is there a cheap and reliable sandforce SSD? The Intel 530 is pretty cheap and reliable, and SandForce.
|
# ? Aug 12, 2014 20:50 |
|
Is enabling TRIM on Linux as simple as mounting the volume with the discard flag?
|
# ? Aug 13, 2014 07:31 |
|
I have a Sandisk 120GB with a Win7 install on it and I just bought a 250GB Samsung 840 Evo; what do I use to clone over the 120 to the new 250?
|
# ? Aug 13, 2014 10:25 |
|
CactusWeasle posted:I have a Sandisk 120GB with a Win7 install on it and I just bought a 250GB Samsung 840 Evo; what do I use to clone over the 120 to the new 250? Samsung's Data Migration software works fine.
|
# ? Aug 13, 2014 10:51 |
|
SwissCM posted:Samsung's Data Migration software works fine. Worked, thanks! Did have an issue where it failed 4 times first, google suggested disabling Windows Defender, worked after that.
|
# ? Aug 13, 2014 16:03 |
|
I'm not sure if this is the best place to put this... I just got a new laptop and it came with a 256gb SSD. I replaced it right away with a Samsung 850 512gb. I was hoping to clone my GF's 128gb SSD to the 256gb, giving her the larger drive with double the capacity. I did that, but the computer doesn't seem to want to boot from the drive. It sees that it's there, but it says something like "Invalid media. Reconnect cable." Is this something to do with it being a Windows 8 drive, and I cloned Windows 7 to it? Should I completely format the drive first? It's a Dell drive going into a Lenovo, if it helps. Edit: Maybe this will work: http://pcsupport.about.com/od/fixtheproblem/ht/rebuild-bcd-store-windows.htm LifeSizePotato fucked around with this message at 18:08 on Aug 13, 2014 |
# ? Aug 13, 2014 17:19 |
|
FSMC posted:Crucial support sucks, which is enough for me. In the end I had to do credit card charge back. I had an M4 128gb SSD that one day didn't boot and it turned out it was that 5000hr firmware bug. They told me the problem, pointed me towards which version firmware I needed and I was back up and running within 20 minutes. I don't get the Crucial hate. I'm running a lot of M4 128's and MX100's for our office machines and I haven't had one problem.
|
# ? Aug 13, 2014 18:03 |
|
goobernoodles posted:I don't get the Crucial hate. Uh, you had to update the firmware because your SSD didn't know how to work after 200 days.
|
# ? Aug 13, 2014 20:31 |
|
ehnus posted:Is enabling TRIM on Linux as simple as mounting the volume with the discard flag? Usually yes. If it's not supported it will warn in dmesg or syslog but should still mount. If you run into trouble there's a bunch of stuff I'm not mentioning that isn't too hard to figure out, usually.
|
# ? Aug 13, 2014 21:03 |
|
The forums say there are two unread replies I can't see. Bumping to see if they appear! Edit: Yep that worked!
|
# ? Aug 13, 2014 23:35 |
|
I bought an Evo but it didn't come with the cable that all of Samsung's migration guides mention, do I have to do anything besides drop this in and run the software?
|
# ? Aug 14, 2014 18:07 |
|
Aphrodite posted:I bought an Evo but it didn't come with the cable that all of Samsung's migration guides mention, do I have to do anything besides drop this in and run the software? If this is a desktop you can just use any sata cable to plug both drives in. If it is a laptop you would want to get the upgrade kit that comes with the cable.
|
# ? Aug 14, 2014 21:05 |
|
If it's too late to get the upgrade kit, USB to SATA adapters are cheap and plentiful on Newegg and Amazon.
|
# ? Aug 14, 2014 22:04 |
|
Okay, it's a desktop so I just plugged it in. Samsung's data migration software blows though. Let's try Macrium Reflect now.
|
# ? Aug 14, 2014 22:22 |
|
So TechReport looked at the OCZ Arc 100 budget SSD, and OCZ had something to say re: reliability. Excerpted:quote:We can't discuss warranty coverage without mentioning OCZ's somewhat checkered reliability track record. Much of that is well in the past, but even the user reviews for some of the company's earlier Barefoot 3-based drives are filled with complaints about DOA units and premature failures. Happily, though, the tide appears to be turning. The Amazon and Newegg user reviews for OCZ's more recent Vector 150 and Vertex 460 have fewer reports of problems than we've seen in the past. The percentages of one-star and otherwise negative ratings are also lower than for some of OCZ's older SSDs. The full article is full of internal hotlinks. Is it time to rethink OCZ/Toshiba drives? At least, when/if they send it to AnandTech and we see if it's a convincing alternative to an 840 EVO. TR's consistency benchmarks are interesting, but I don't feel that they're as robust as AnandTech's performance consistency benchmarks.
|
# ? Aug 14, 2014 22:41 |
|
|
I have an old Crucial M4 128GB SSD in my main computer right now. I was looking at upgrading to a 256GB Samsung and putting the Crucial in the media PC. I generally use the desktop for gaming - nothing too heavy: Warcraft, Civilization, some shooters. I'm wondering if the 850 is worth the price difference for that use case. A 250GB 840 EVO is $150 CAD right now, and the 850 is $250 CAD. Would there be a noticeable difference in terms of Windows/games with an 850, or should I just save the $100?
|
# ? Aug 14, 2014 23:21 |