Wiggly Wayne DDS
Sep 11, 2010



Potato Salad posted:

guess whose clients almost exclusively use those drives

fffffffff_fffffffffff_fffffffffff
so who signed off on using bitlocker with that setup?


Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib

Potato Salad posted:

guess whose clients almost exclusively use those drives

fffffffff_fffffffffff_fffffffffff

My assumption that TCG Opal/eDrive isn't trustworthy is the reason I'm using software-based Bitlocker instead of hardware-based Bitlocker. With modern CPUs that have AES acceleration, there's barely any performance difference (at least for my usage, your mileage may vary). And I don't have to rely on drive manufacturers implementing encryption correctly.

Lambert fucked around with this message at 14:42 on Nov 6, 2018

Potato Salad
Oct 23, 2014

nobody cares


Wiggly Wayne DDS posted:

so who signed off on using bitlocker with that setup?

Well, not a storage firmware researcher, for starters.


Lambert posted:

My assumption that TCG Opal/eDrive isn't trustworthy is the reason I'm using software-based Bitlocker instead of hardware-based Bitlocker. With modern CPUs that have AES acceleration, there's barely any performance difference (at least for my usage, your mileage may vary). And I don't have to rely on drive manufacturers implementing encryption correctly.

what, if you don't mind sharing, are you using to manage software bitlocker? I've never looked at non hardware bitlocker management

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib

Potato Salad posted:

what, if you don't mind sharing, are you using to manage software bitlocker? I've never looked at non hardware bitlocker management

I'm not an IT professional, so I never had to solve that type of problem for my personal usage. Hope you find a workable solution.

Kairos
Oct 29, 2007

It's like taking a drug. At first it seems you can control it, but before you know it you'll be hooked.

My advice: 'Just say no' to communism.

BIG HEADLINE posted:

Look into the MyDigitalSSD BPX Pro - that drive uses the Phison E12 controller, and at the 960GB SKU, has some pretty eye-watering IOPS scores.

Here's AT's review of the Corsair MP510, which is the same drive: https://www.anandtech.com/show/13438/the-corsair-force-mp510-ssd-review

Unfortunately, as I've lamented a few times - it seems Corsair's in no hurry to get their 960GB out to retail, so the BPX Pro is currently the only Phison E12 drive you can buy.

That's an interesting drive, and I'm definitely considering it as well now. Out of curiosity, if you were wanting to get the MP510 and that's the same thing, why did you decide to get a 970 EVO instead? Because the 970 EVO is (now) cheaper?

(Edit: As it turns out it actually currently isn't; there's a 23% off coupon code 1TBPXP23 on Amazon at the moment that brings the 960GB model down to $200.19.)

Kairos fucked around with this message at 20:25 on Nov 6, 2018

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Potato Salad posted:

guess whose clients almost exclusively use those drives

fffffffff_fffffffffff_fffffffffff

Microsoft's on it. Have a nice nightmare.

https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV180028

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Kairos posted:

That's an interesting drive, and I'm definitely considering it as well now. Out of curiosity, if you were wanting to get the MP510 and that's the same thing, why did you decide to get a 970 EVO instead? Because the 970 EVO is (now) cheaper?

(Edit: As it turns out it actually currently isn't; there's a 23% off coupon code 1TBPXP23 on Amazon at the moment that brings the 960GB model down to $200.19.)

Holy poo poo, thank you for that code. Brought the price of that drive down to $111.14. I don't care about the ugly yellow sticker, either, since I plan on putting an EKWB passive heatsink on the thing (not planning on taking *off* said stickers).

Canceled my 970 EVO order. :)

And yes, it's because it was cheaper - I've spent a rather large amount already on my system and still have to buy the board and GPU, which are two of the most expensive components.

BIG HEADLINE fucked around with this message at 21:06 on Nov 6, 2018

Potato Salad
Oct 23, 2014

nobody cares


I have a pretty good sccm instance, so this will actually not be that hard to fix with new GPOs plus a task sequence

It's literally as simple as changing GPO settings then turning bde off and on again

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Potato Salad posted:

guess whose clients almost exclusively use those drives

fffffffff_fffffffffff_fffffffffff

Do they use Crucial or Samsung? Maybe I missed something while skimming the paper but it sounded like they found no method of decrypting Samsung SATA if using TCG Opal.

I always suspected that fear of / knowledge of bad vendor implementations was why Apple chose to avoid Opal and roll their own FDE. I’m still a bit floored by how bad some of those are. Wish someone would fund that research team to analyze a lot more drives...

Potato Salad
Oct 23, 2014

nobody cares


I have a significant fleet of both, but also watch this space as more disclosures come out of nda

Basically, expect more drives and drivers to be put under the knife

Wiggly Wayne DDS
Sep 11, 2010



given the authors only tested those drives i'm not expecting more info to suddenly drop. unless independent research happened to occur concurrently and you're aware of it but still shocked at this reveal..? no hint of ndas anywhere

sounds like their next project isn't analysing more drives, so someone else will have to expand on this base. should be fun and i'd expect crucial to be the norm

Laslow
Jul 18, 2007
I just had an Inland Pro 240GB die on me after 5 months.

They’re the Microcenter house brand and they’re crazy cheap. I think I spent $39 on it on a lark. I deserved what I got.

I went ahead and got an Intel 545s as a replacement. I’ve got a decade old X25-M and a really old 335, so I feel it’s the safe choice.

Also after the Crucial M4 and 840 Evo debacles, I’m staying the hell away from goon recommendations, lol. Not trying to be insulting; those were really just bad luck.

Atomizer
Jun 24, 2007



Laslow posted:

I just had an Inland Pro 240GB die on me after 5 months.

They’re the Microcenter house brand and they’re crazy cheap. I think I spent $39 on it on a lark. I deserved what I got.

I went ahead and got an Intel 545s as a replacement. I’ve got a decade old X25-M and a really old 335, so I feel it’s the safe choice.

I can totally see this happening, but it's not like other SSDs (or HDDs, for that matter) can't die early, nor does it mean that 100% of that same drive (even I have one or two floating around here somewhere) will die after 5 months. The lesson is that each type of drive has a purpose. If you have a high-performance system, your main PC, etc., it makes more sense to shoot for a higher-end SSD; if you're just replacing an HDD in an older system (like I did last night with an Inland 120 GB and an old laptop that'll be repurposed for a coworker's schoolkid), or want to replace your bulk-storage HDDs with cheap DRAMless SSDs, the Inland and similar drives should suffice. That it died early is a separate topic.

I, too, have an Intel 330, actually, that's working fine and going on 6 years old; given that it's MLC with something like 3k-5k P/E cycles, it's no wonder it's still at ~100% life remaining with only like 14 TBW (out of 700-12k.)
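That kind of TBW figure is easy to sanity-check with back-of-envelope math (all numbers below are illustrative, including the write-amplification guess):

```python
# Back-of-envelope SSD endurance estimate. Figures are illustrative only;
# real drives vary in NAND quality, over-provisioning, and write amplification.
def rated_tbw(capacity_gb: float, pe_cycles: int, write_amplification: float = 1.5) -> float:
    """Approximate total TB that can be written before the NAND wears out."""
    return capacity_gb * pe_cycles / write_amplification / 1000  # GB -> TB

# A 120 GB MLC drive at a (hypothetical) 3000 P/E cycle rating:
print(rated_tbw(120, 3000))  # hundreds of TB, i.e. far beyond typical home use
```

With 14 TBW consumed against a rating in the hundreds-of-TB range, "~100% life remaining" is exactly what you'd expect.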

Laslow posted:

Also after the Crucial M4 and 840 Evo debacles, I’m staying the hell away from goon recommendations, lol. Not trying to be insulting; those were really just bad luck.

Now this is even dumber. Crucial/Micron and Samsung still make the top-end SSDs, and at the time nobody knew there would be issues with those drives when they were new. There were grumblings about the 840 Evo and its small-geometry 2D TLC NAND, but even after the read-performance issues became clear, Samsung basically resolved them with firmware updates. Avoiding today's top-end Samsung or Crucial SSDs, which have none of those performance issues, because of problems in predecessors from several years ago is just idiotic.

Potato Salad
Oct 23, 2014

nobody cares


I like that even affected 840s beat cheap NAND unless you intentionally torture-tested them, and even to this day they don't have endurance issues.

In a fleet that included literally hundreds of 840s, none are dead today, and something like...10? less than 10? had detectable storage performance issues on patch day.

vvv 840 Evo, yes

Potato Salad fucked around with this message at 03:23 on Nov 7, 2018

Atomizer
Jun 24, 2007



That's more of a testament to NAND flash endurance being underestimated than anything else. And I'm assuming you're referring to the 840 Evo, because the 840 was a distinct model from the previous year - the latter being the first mainstream 2D TLC SSD, with a larger process than the former's flash, which was the culprit in the Evo's charge-decay issue.

Laslow
Jul 18, 2007

Potato Salad posted:

In a fleet that included literally hundreds of 840s, none are dead today, and something like...10? less than 10? had detectable storage performance issues on patch day.

vvv 840 Evo, yes

I guess the 840 Evo issues were overblown.

Just curious, what issues did those few drives even have? Were they all 100% fine after the patch?

Potato Salad
Oct 23, 2014

nobody cares


They were all fine postpatch

The issue was slowdown, as in "does a simple powershell perf test find the drive is slower than 300MB/s" or something.

Those 10 (I'm thinking 6, really) drives just needed the refresh tool in Magician to be run manually. Honestly, we could've waited for the background maintenance introduced in that firmware patch to fix it for us, albeit over a few days.

Fixing that problem cost me very little effort and no materials. And they still run great.

Edit 2: I should add that non-Evo devices fail at a rate of around 2%, with (thanks to how counting statistics work in bizarre ways, and at the risk of revealing my fleet size) a margin of +/- 3%
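For the curious, the kind of perf check described above is a few lines in any language. A rough Python sketch (the scratch file and the 300 MB/s cutoff are stand-ins; a freshly written file will mostly read from the OS cache, so a real fleet check has to target data that's been sitting cold on the suspect drive):

```python
import os
import tempfile
import time

def sequential_read_mbps(path: str, block_size: int = 1 << 20) -> float:
    """Time a sequential read of an existing file; return MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return size / (1 << 20) / max(elapsed, 1e-9)

# Demo against a 16 MiB scratch file; a real check would point this at
# months-old data on the drive and flag anything under ~300 MB/s.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(16 << 20))
mbps = sequential_read_mbps(tmp.name)
os.unlink(tmp.name)
print(f"{mbps:.0f} MB/s")
```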

Potato Salad fucked around with this message at 05:01 on Nov 7, 2018

Atomizer
Jun 24, 2007



Laslow posted:

I guess the 840 Evo issues were overblown.

Just curious, what issues did those few drives even have? Were they all 100% fine after the patch?

In brief, they suffered from degraded read performance on data that had been sitting on the drive for a while. It's more detailed than that, but it had to do with the stored charges decaying over time. The firmware/software updates addressed this by re-writing old data, correcting the issue at the long-term expense of endurance, but as we've gone over many times, NAND flash endurance isn't as much of a problem as it's been made out to be.

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS
I have an old Win7 install on an 840Evo that I rarely use nowadays - basically a worst-case situation in terms of charge degradation and subsequent slowdown - and after running the updates a few years ago, speed is still good and drive works fine. No complaints here.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Atomizer posted:

That's more of a testament to NAND flash endurance being underestimated than anything else. And I'm assuming you're referring to the 840 Evo, because the 840 was a distinct model from the previous year - the latter being the first mainstream 2D TLC SSD, with a larger process than the former's flash, which was the culprit in the Evo's charge-decay issue.

Don't worry there will always be dumb whiny cheapskates about how TLC/QLC is killing consumer SSD write endurance making them unsuitable for their multi-million big iron AWS enterprise workloads

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Laslow posted:

I guess the 840 Evo issues were overblown.

Just curious, what issues did those few drives even have? Were they all 100% fine after the patch?

They weren't overblown, people just like to panic over nothing all that significant.

We knew on patch day the bugs would take maybe 20% off the drive lifespan, and the lifespan is ~plenty. You should expect the first drives to die from the bug in... two-ish years?

It does result in ridiculous slowdown and maybe even data loss if you haven't turned on the computer for months to years.

And the bug does cause accumulating damage in all 840 Evo drives, but technology moves on fast, and this type of damage takes several years to kill a drive. In ten to twenty years all 840 Evo drives will be dead.

oohhboy
Jun 8, 2013

by Jeffrey of YOSPOS
The slow down was quite real but in the end the 840 Evo issue was definitely overblown. The fix took next to no effort and the damage to endurance is pretty much nothing compared to how understated the official numbers are. I can't remember the numbers but actual endurance was just crazy high.

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
I was very annoyed with the whole 840 Evo situation because Samsung took forever to release the updated firmware for the mSATA version.

Atomizer
Jun 24, 2007



Lambert posted:

I was very annoyed with the whole 840 Evo situation because Samsung took forever to release the updated firmware for the mSATA version.

Oh really? I wasn't aware that they didn't release firmware updates (and there were at least a couple for the 840 Evo to "fix" the same issue) for all versions at the same time. That's the version I have, actually (1 TB, in a tablet) but at least it's up-to-date now.

For some reason Samsung is basically the only [notable] company still making mSATA SSDs; I was surprised they both made them as recently as the 860 Evo, and that Samsung's still doing it instead of like Adata or someone else that's not a top-tier player. I've been debating getting an 860 Evo to replace the aforementioned 840 Evo "just in case," and I do have a few older laptops that still take mSATA (plus you can just drop them into a 2.5" or USB enclosure and continue using them.)

isndl
May 2, 2012
I WON A CONTEST IN TG AND ALL I GOT WAS THIS CUSTOM TITLE
I've mentioned it before but Samsung never put out firmware fixes for the OEM models of the 840 EVO. There's a bunch of devices out there that have no recourse, for example I believe there's a good number of Surfaces that used them and you can't swap out the SSD for an unaffected drive either.

TITTIEKISSER69
Mar 19, 2005

SAVE THE BEES
PLANT MORE TREES
CLEAN THE SEAS
KISS TITTIESS




I thought the msata version was unaffected?

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib

TITTIEKISSER69 posted:

I thought the msata version was unaffected?

It was affected as well.

Atomizer
Jun 24, 2007



isndl posted:

I've mentioned it before but Samsung never put out firmware fixes for the OEM models of the 840 EVO. There's a bunch of devices out there that have no recourse, for example I believe there's a good number of Surfaces that used them and you can't swap out the SSD for an unaffected drive either.

Ah, well that's super lovely. I've previously bitched about how none of the manufacturers' management software seems to recognize any OEM SSDs, which is really stupid because they're the same drives and it's not like the system builder has their own software for it. The fact that there are OEM versions of the 840 Evo that desperately need the firmware update(s) but can't get them is hosed up.

TITTIEKISSER69 posted:

I thought the msata version was unaffected?

Of course it'd be affected, it's an issue with the NAND flash itself. It'd be the same problem for any other drive that used that specific flash from Samsung.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
The 840 EVO lifespan issue was overblown, but the performance impact was pretty annoying. Given what I paid for it and when, it was probably still a really good value for home use, but it was also a good candidate to move to a secondary drive and get replaced by something else.

GRINDCORE MEGGIDO
Feb 28, 1985


My 840 is still chugging, for steam backups.
I benchmarked it a few months ago, seemed in spec vs reviews.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
The slowdown only happens for unpatched or unused drives.

The slow decay of the cells is impossible to fix and will eat all drives, including mine, in a matter of years. I invested my entire fortune into a top-of-the-line SSD only to hear it will not even outlast me, nevermind be the family heirloom it was supposed to be.

Samsung didn't know how their cell technology worked in the long term, and hosed up on both cell durability and ability to store charge. Fortunately for them, they hosed up in opposite directions, so the vastly higher-than-expected cell durability means the damage from the decay will only be noticeable about ten years after manufacture, assuming the drive is patched and turned on routinely. To get the decay damage to show up sooner, you'd have to be running the drive way above its expected cycles anyway.

It does mean the drives are much worse for static data storage, but eh, when the 840 was new that was a thing you wouldn't do with SSDs anyway.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

endlessmonotony posted:

The slow decay of the cells is impossible to fix and will eat all drives, including mine, in a matter of years. I invested my entire fortune into a top-of-the-line SSD only to hear it will not even outlast me, nevermind be the family heirloom it was supposed to be.

Samsung didn't know how their cell technology worked in the long term, and hosed up on both cell durability and ability to store charge.

You are being far too dramatic about it. Literally all nand flash suffers from fade, since there is no known way to construct a perfect charge trap.

Fade becomes more significant with TLC since now you need to discriminate between eight different levels, not four as with MLC. What Samsung hosed up was failing to make the firmware’s scrubbing algorithm (finds and rewrites faded blocks in the background) sufficiently aggressive to maintain full read performance. Unfortunate, but it was their first attempt at implementing TLC, and you need to understand that scrubbing is A Thing on all SSDs, not just patched 840 EVO.
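A toy sketch of that scrubbing loop, purely illustrative: real firmware tracks per-block read margins and error rates, not a single "charge" number, and the threshold here is made up.

```python
# Toy model of SSD background scrubbing: find faded blocks, rewrite them.
REWRITE_THRESHOLD = 0.6  # hypothetical margin below which a block is refreshed

def scrub(blocks: dict[int, float]) -> list[int]:
    """Rewrite (restore to full charge) any block whose stored charge has
    faded below the threshold; return the block numbers that were touched."""
    rewritten = []
    for addr, charge in blocks.items():
        if charge < REWRITE_THRESHOLD:
            blocks[addr] = 1.0       # rewriting restores full read margin...
            rewritten.append(addr)   # ...at the cost of one more P/E cycle
    return rewritten

drive = {0: 0.95, 1: 0.41, 2: 0.73, 3: 0.55}
print(scrub(drive))  # -> [1, 3]
```

The 840 EVO bug was essentially this loop not running aggressively enough: faded blocks stayed below the margin where reads are fast, so the controller had to retry and error-correct its way through old data.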

SSDs aren’t heirlooms. They have a finite lifespan and are not in any way guaranteed to retain data forever with power turned off. Just the opposite, hiding somewhere in every SSD’s datasheet is a maximum power off retention time. (Enterprise SSDs are typically rated for much worse power off retention times than consumer drives, btw, so don’t make the mistake of thinking enterprise is better than consumer in all ways.)

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
I really regret buying that Swarovski bedazzled 840 Evo now.

Atomizer
Jun 24, 2007



BobHoward posted:

You are being far too dramatic about it. Literally all nand flash suffers from fade, since there is no known way to construct a perfect charge trap.

Fade becomes more significant with TLC since now you need to discriminate between eight different levels, not four as with MLC. What Samsung hosed up was failing to make the firmware’s scrubbing algorithm (finds and rewrites faded blocks in the background) sufficiently aggressive to maintain full read performance. Unfortunate, but it was their first attempt at implementing TLC, and you need to understand that scrubbing is A Thing on all SSDs, not just patched 840 EVO.

SSDs aren’t heirlooms. They have a finite lifespan and are not in any way guaranteed to retain data forever with power turned off. Just the opposite, hiding somewhere in every SSD’s datasheet is a maximum power off retention time. (Enterprise SSDs are typically rated for much worse power off retention times than consumer drives, btw, so don’t make the mistake of thinking enterprise is better than consumer in all ways.)

Thanks for the greater insight into the situation. Why, though, are enterprise SSDs much worse in terms of retention? I'd think they'd be better, especially with the consumer market drifting towards TLC and QLC while the enterprise segment is perhaps the only place you could still find SLC (if that hasn't been completely abandoned by now for MLC.)

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

Atomizer posted:

Thanks for the greater insight into the situation. Why, though, are enterprise SSDs much worse in terms of retention? I'd think they'd be better, especially with the consumer market drifting towards TLC and QLC while the enterprise segment is perhaps the only place you could still find SLC (if that hasn't been completely abandoned by now for MLC.)

They're worse at powered-off data retention. Enterprise SSDs are made to go into either always-on servers or workstations that are on at least 5 days a week.

While consumer drives need to support grandma who turns her laptop on once every 2 weeks to check her emails.

Atomizer
Jun 24, 2007



I meant what, technically, causes the disparity in retention? Enterprise stuff tends to have more features (like power loss protection in SSDs,) than consumer products do, so I'm just surprised that this is the case with data retention and I'm curious why that's the case.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
NAND's power off retention gets degraded with each program/erase cycle. So, in reality, retention time is a downwards sloping curve where the x axis is P/E cycles and the y axis is retention time.

However, the guaranteed ratings are typically just two numbers, without reference to the curve. This means the foundry can (and does) choose to rate the exact same product one of two ways, depending on what market they're selling it into: (numbers are bullshit, for illustration purposes only)

1. 10000 P/E cycle write endurance, 30 day power off retention (the enterprise choice)
2. 3000 P/E cycle write endurance, 365 day power off retention (the consumer choice)

Enterprise NAND and drives mostly go into datacenters, where they're powered 24/7, and are definitely going to be backed up. It makes sense to target higher P/E endurance and lower retention (and to optimize the SSD's firmware for same). Consumer drives see way less write load, and are expected to be powered off quite a bit, so they get the opposite tradeoff.

Also note that flash with 0 P/E cycles has a retention time way better than the rating. The rating has to be valid all the way until the end of P/E lifespan, so it's very pessimistic on brand new flash.
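One way to picture the two-ratings idea: a single (entirely made-up) retention-vs-wear curve, read off at two different endurance limits.

```python
# Illustrative only: a fabricated retention-vs-wear curve showing how one
# flash part can honestly carry two different (endurance, retention) ratings.
def retention_days(pe_cycles_used: int) -> float:
    """Hypothetical power-off retention, decaying with accumulated wear."""
    return 3000 * 0.999 ** pe_cycles_used  # brand new: ~3000 days

# Consumer rating: stop at 3000 cycles, guarantee the retention left there.
# Enterprise rating: push to 10000 cycles, guarantee far less retention.
for label, limit in [("consumer", 3000), ("enterprise", 10000)]:
    print(label, round(retention_days(limit), 1))
```

Same silicon, same curve; the two ratings are just two points on it, chosen for the market the part is sold into.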

Atomizer
Jun 24, 2007



Ah ok, that explains a lot. So it's partially a function of the wear on the drive (or the wear it's expected to incur) and partially a matter of marketing/rating depending on the intended use.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Atomizer posted:

Ah ok, that explains a lot. So it's partially a function of the wear on the drive (or the wear it's expected to incur) and partially a matter of marketing/rating depending on the intended use.

It's also a function of the NAND type. A 100% SLC SSD designed to chew on several TB a day of database ingest logs will still have a much better retention period than a QLC drive, simply because its job of guessing which state the charge trap is in is substantially easier.
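The arithmetic behind that: each extra bit per cell doubles the number of charge states that have to fit inside the same voltage window, so the margin between adjacent states shrinks fast (Python sketch, purely illustrative).

```python
# More bits per cell -> exponentially more charge states in the same
# voltage window -> less room per state before fade causes misreads.
def charge_states(bits_per_cell: int) -> int:
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    window = 1.0 / charge_states(bits)  # fraction of the voltage range per state
    print(f"{name}: {charge_states(bits)} states, {window:.3f} of range each")
```

SLC gets half the voltage range per state; QLC gets a sixteenth, which is why the same amount of charge fade hurts QLC reads so much more.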


Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
https://www.anandtech.com/show/13512/the-crucial-p1-1tb-ssd-review/6

Micron's new "we should really cut our MSRP by like half" QLC NVMe drive.
