BlankSystemDaemon
Mar 13, 2009



SwissArmyDruid posted:

2011-ish is the year I switched off spinning rust and onto an SSD and never looked back. I was one of the lucky motherfuckers that got an OCZ Vertex 3 that did NOT exhibit any of the controller problems that others were having.
I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market?


SwissArmyDruid
Feb 14, 2014

by sebmojo
I might have had enough money to get a small SSD, but not THAT much money, duder. It was only a 120GB model, after all, and I had to get it on deep discount from Newegg during Black Friday.

And then they shipped me $400 of DoA parts, then tried to claim their "Iron Guarantee" didn't count during Black Friday, so I swore a fatwa against ever giving them my business again, but at least I got an SSD out of it.

Who knows if I actually had a good sample, though. XP being XP, in retrospect, I'm not sure I would have been able to tell the difference between a malfunctioning controller and one that wasn't. Besides, all my documents folders and crap were mapped to my old boot spinning rust, now relegated to secondary storage, so nuking and paving was relatively painless.

SwissArmyDruid fucked around with this message at 18:42 on Aug 30, 2019

NewFatMike
Jun 11, 2015

craig588 posted:

This is a derail I didn't mean to cause. I just thought it was funny he went from tapes to lossless and still uses 10 dollar portable headphones. Skipped CDs entirely because they're too fragile. This is the same person that says my 10GB very slow x265 rips might as well be unwatchable. They have source-quality rips of Blu-rays on their NAS because they say they can instantly see the difference.

Am I the weird one for keeping source rips on my NAS? I do it because space is cheap and I don't feel like spending a bunch more time researching and encoding.

5er
Jun 1, 2000

Qapla' to a true warrior! :patriot:

D. Ebdrup posted:

I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market?

I think my subconscious worked very hard to repress memory of sandforce controllers, because I got a pretty strong instinctive revulsion on reading that, and it took me a moment to remember briefly supporting some badly-implemented pcie ssd's about six or seven years ago with my previous employer.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

NewFatMike posted:

Am I the weird one for keeping source rips on my NAS? I do it because space is cheap and I don't feel like spending a bunch more time researching and encoding.

Source quality >50GB BluRay rips? Yeah, you're weird. If you want to encode them yourself, Handbrake et al have a bunch of single-click profiles that are good enough these days that it's real hard to tell the difference between it and source. And if you don't want to encode them yourself, you can take the tactic that many of my friends have: buy the disk to "support the creators" or whatever, then torrent the actual video file for NAS use.

Shipon
Nov 7, 2005

DrDork posted:

Source quality >50GB BluRay rips? Yeah, you're weird. If you want to encode them yourself, Handbrake et al have a bunch of single-click profiles that are good enough these days that it's real hard to tell the difference between it and source. And if you don't want to encode them yourself, you can take the tactic that many of my friends have: buy the disk to "support the creators" or whatever, then torrent the actual video file for NAS use.

I tried doing this, buying an actual bluray and watching it, and the quality on the PS4 was dogshit compared to a torrent copy.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Paul MaudDib posted:

yeah I offered to help my (gainfully employed software-engineer) cousin pick parts for a gaming rig in 2016 and then he went out and built a bulldozer rig without asking me :gonk:

I told him that I was shopping parts for a 5820K rig because it was 6 cores at Haswell performance and he was like "... do you think that stuff really matters!?"

yeah bud, yeah it does :ughh:

"That's why I buy Macs, maaaaan. They just *work*, y'know!?!" :downs:

*exactly one day after the warranty expires*

"Halp my computer box won't work anymore and the Geniuses say my data's gone because of this 'Tee-Two' chip." :saddowns:

Shrimp or Shrimps
Feb 14, 2012


D. Ebdrup posted:

I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market?

Me too, and I'm still using it as a games drive. It only holds 1 game (BF4) lol

craig588
Nov 19, 2005

by Nyc_Tattoo
I got lucky: as I was shopping for my first SSD, the one I ultimately settled on was the one of the three I was looking at that didn't develop problems. AFAIK the guy I gave it to is still using it.

Indiana_Krom
Jun 18, 2007
Net Slacker
I still have my original Intel 160GB SSD (the 320 series, which was the successor to the X25-M but with a more confusing name); these days I use it as a USB drive with one of those USB3 or Type-C to SATA adapters. Handy for brute-forcing large transfers that would be too slow over wifi, and it also makes an excellent source for installing Windows because it's way faster and more durable than the average thumb drive.

Cojawfee
May 31, 2006
I think the US is dumb for not using Celsius
Will this work with DDR4 3200 C16?

craig588
Nov 19, 2005

by Nyc_Tattoo
3,000 nm (or, in the language of the time: 3 micron)

EdEddnEddy
Apr 5, 2012



I got my first SSD to move up from RAID 0 74GB Raptors on a P4 rig, I believe. A 128GB Super Talent, which ended up being somewhat the same as a few other Samsung drives in the pre-EVO days. Took me ages to find a flasher that would flash the latest firmware to allow it to support TRIM.

Drive still works fine to this day somehow.

Upgraded to Raid 0 Plextor M3 Pro's which were the fastest things on the block for a short bit. Those too have continued to work.

I also bit late in the Vertex 3 days on a sale where 120GB ones were like $30 (when they were usually still $80+ normally).

Bought 3 and spread them around as they were a fit. So far no issues years later which is strange...

Now it's mostly all EVOs but I grab the occasional PNY or AData when the sale is just too drat good. As long as it's not the DRAM less versions I am happy.

I have an Intel 320 or something I won in a raffle that was a refurb, but it still seems to work and hasn't hit its death clock yet, at least.

SSDs have cured a lot of slow PC issues over the years for me. With my friends/family, if they want my help with something on the PC, they either follow my advice, or I will not assist if they ignore me and poo poo goes south with their Costco purchase. The Askholes have learned to trust me: while I am not perfect, 99% of the time I save them a lot of pain and misery in the long run as far as hardware purchases go.

Potato Salad
Oct 23, 2014

nobody cares


am I reading there that you have ssds in consumer software raid 0

Potato Salad
Oct 23, 2014

nobody cares


[desire to pontificate increasing]

GutBomb
Jun 15, 2005

Dude?

Potato Salad posted:

[desire to pontificate increasing]

:justpost:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Please Potato Salad don’t hurt em

(Actually do)

Even some of the engineers I work with (in an enterprise storage group!!) talk about wanting to use RAID 0 SSDs as backup “cuz it’s so fast”

Because I want to keep a friendly workplace I just smile and nod.

EdEddnEddy
Apr 5, 2012



Raid 0 in an enterprise environment.... Doesn't seem like it ever has a place. Especially with drive speeds now... Is there one?

And for Backups? Really?

Hey, I will do stupid crazy stuff on my own hardware for non critical data. Come on Potato Salad. I can take it.

I do also use Intel SSD caching for my RAID 0 2TB WD Blacks since spinning rust is just too slow on its own. It's only a game library, so again, nothing really important, but man did it work wonders to stop the boot-up thrashing that happens every restart. Now it's pure silence.

Just.... Disable it before you plan to boot anything else that may see those drives or things will start acting really weird.

EdEddnEddy fucked around with this message at 16:41 on Aug 31, 2019

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
My last workplace had like twelve 2TB SSDs in a RAID 1+0 configuration for a pretty high-throughput system (pushing 6 Gbps of egress per node with 28 cores and 256 GB of RAM).

RAID0 is fine if your software systems can easily handle entire nodes' local storage being inaccessible. Google and friends do similar architectures with distributed file systems, so the primary job of each node is to go as fast and hard as possible with power usage as efficient as possible.
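That "fine as long as whole nodes are disposable" trade-off can be put in rough numbers: a stripe survives only if every member drive does, so loss probability climbs quickly with drive count. A minimal back-of-envelope sketch (the 2% annual failure rate is an illustrative assumption, not a measured figure; it also assumes independent failures, which drives sharing a chassis, PSU, and manufacturing batch don't strictly obey):

```python
# Back-of-envelope RAID0 reliability: the array fails if ANY member fails.

def array_survival(p_drive: float, n_drives: int) -> float:
    """Probability a RAID0 array survives a period in which each drive
    independently survives with probability (1 - p_drive)."""
    return (1.0 - p_drive) ** n_drives

# Illustrative 2% annual failure rate per drive (an assumption, not a spec):
for n in (1, 2, 4, 8):
    p_loss = 1.0 - array_survival(0.02, n)
    print(f"{n} drives: {p_loss:.1%} chance of losing the whole array per year")
```

Which is exactly why it only makes sense where, as above, losing a whole node's storage is a non-event.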

EdEddnEddy
Apr 5, 2012



Yea, I can understand using it in a 1+0 form scaled up to have multiple levels of redundancy, which makes sense. But nobody would do an actual RAID 0 all by its lonesome in an enterprise environment outside of maybe testing throughput or something, right?

For giggles, there is this rig at work that someone was throwing away that had 4 4TB HDD in it as well as a 970EVO as the boot drive.

Yoinked that right away but what was funny was the 4 HDD were in a Windows software Raid 0. :stare:

No idea what they planned to do with it like that, but I will say the throughput of 4 4TB drives was pretty drat high. Hit close to 1000MB/s tinkering with it and some large test files.

BlankSystemDaemon
Mar 13, 2009



How all y'all feel about not using RAID0 is how I feel about using anything other than ZFS. :v:

SwissArmyDruid
Feb 14, 2014

by sebmojo
I'm sure there are gonna be use cases for RAID 0 forever, but really, it's getting harder and harder to saturate storage these days. Just what kind of consumer workload is going to saturate an NVMe link? At PCIe 4.0?

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

EdEddnEddy posted:

Yea I can understand using it in a 1+0 form scaled up to have multiple levels of redundancy which makes sense. But nobody would do an actual Raid 0 all by its lonesome in an enterprise environment outside of maybe testing throughout or something right?

It's the opposite. Enterprise is the only place where it makes sense. At that level, you should be able to pick any random system, destroy it completely, and not suffer any permanent setback. At that point, if you get significantly higher throughput almost all of the time at the cost of having to very occasionally re-do a bit of work, it makes perfect sense.

Just for instance, take a cache or even a search index. They exist as specialized copies of other data. If they're gone, it's work to rebuild them, but no actual data is missing. There might be a performance hit if one goes offline, but more capacity while they're online usually more than outweighs the inconvenience of spinning up a new one.

Zorak of Michigan
Jun 10, 2006

At work, we have many applications running without redundant disks, because there's redundancy elsewhere in the stack. We don't use RAID0, though. We just configure the disks as JBOD and present them to the application. Why end up needing to redistribute or re-create many drives worth of data because one drive failed?

AlternateAccount
Apr 25, 2005
FYGM
It does kinda suck that there's no easy way to pile a few SSDs together for convenience without either drastically increasing your chance of failure (RAID0) or drastically cutting your capacity (RAID10).

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

AlternateAccount posted:

It does kinda suck that there's no easy way to pile a few SSDs together for convenience without either drastically increasing your chance of failure (RAID0) or drastically cutting your capacity (RAID10).

On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.

Laslow
Jul 18, 2007
I have to say, being an early adopter of SSDs allowed me to use my old Q9450 rig comfortably for way longer than I had any right to.

Sure, the difference going to a 1276v3 with a GTX 970 was definitely noticeable, but definitely wasn’t nearly as big as you would expect considering the age gap of the Q9450/GTX 580 3GB it replaced.

Now they were a $300 CPU paired with a $600 GPU, so they were top of the line in their day, but still....

FuturePastNow
May 19, 2014


I can't claim to be an early adopter, but I first used an SSD in the computer I built right when Windows 7 came out. It was an X25-M, which lives on in someone else's laptop. It's been almost 10 years for that one. On a Samsung EVO now.

I recently set up a new (but built out of old parts) computer to do some video capture and storage and used a hard drive for its OS because that's what I had laying around. I didn't expect the difference would be so great just doing basic Windows stuff, but it's really painful.

KS
Jun 10, 2003
Outrageous Lumpwad


Buy.com is dead, I think. That SSD's still running great in an i7-870 PC I gave to a friend and it turns 10 this week. Funny, I remember all the doom and gloom about SSD lifespans early on.

It's amazing you can still buy laptops with spinning disks. Absolutely miserable.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.

I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten.

It seems like if it's supported, spanned should be preferable in most cases. You can't boot from a spanned volume though, while you could boot from hardware RAID.

Paul MaudDib fucked around with this message at 07:47 on Sep 1, 2019
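The layout difference being weighed here can be sketched directly: RAID0 interleaves fixed-size stripes round-robin across every disk, so any large read touches all of them, while a spanned/concatenated volume fills one disk before moving to the next. A minimal illustration of the two block mappings (disk count and per-disk capacity are arbitrary assumptions):

```python
# Map a logical block number to (disk index, block offset on that disk)
# under the two layouts being compared.

def raid0_map(block: int, n_disks: int) -> tuple[int, int]:
    """RAID0: blocks interleave round-robin across all disks."""
    return block % n_disks, block // n_disks

def concat_map(block: int, n_disks: int, blocks_per_disk: int) -> tuple[int, int]:
    """Concatenation/span: disk 0 fills completely before disk 1 is touched,
    so a given file usually lives on a single disk."""
    return block // blocks_per_disk, block % blocks_per_disk

# First 8 logical blocks on a 4-disk setup with 1000 blocks per disk:
for b in range(8):
    print(b, raid0_map(b, 4), concat_map(b, 4, 1000))
```

The mapping also shows why a span grows trivially (append another disk's block range) while a stripe can't be extended without rewriting every block's position.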

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.
I believe they're called CONCAT arrays.

Paul MaudDib posted:

I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten.

It seems like if it's supported, spanned should be preferable in most cases. You can't boot from a spanned volume though, while you could boot from hardware RAID.
A proper implementation of CONCAT will only lose the data on the drive that fails - as opposed to a RAID0, which will take the entire array with it; worse, the MTBF of the array is halved every time you add a disk.
Also, gconcat(8), a.k.a. GEOM CONCAT in FreeBSD, can be booted from just fine, as long as you place the firmware-compatible boot-block on the firmware's first disk (i.e. what the BIOS calls C: and what UEFI calls disk0) - so it's a question of whether Windows 10 still uses NTLDR62 or has been updated to support Storage Spaces/ReFS.

BlankSystemDaemon fucked around with this message at 12:07 on Sep 1, 2019

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

I've been wondering what the merits of RAID0 vs a spanned volume are. Obviously in both cases if you lose a drive you lose the entire array, but I guess in principle RAID reads are aligned and every read hits every disk, while spanned could allow it to service multiple requests on different drives depending on how the data scatters out across the filesystem? And spanned can be easily grown by adding disks while RAID stripes can't really be rewritten.

It seems like if it's supported, spanned should be preferable in most cases. You can't boot from a spanned volume though, while you could boot from hardware RAID.

It really depends on how the spanning system has been implemented. For Storage Spaces with zero redundancy, for example, files are written to a single disk in the array, so compared to RAID0 you get lower performance, and mostly are just getting the convenience of not having to deal with multiple drive letters. But you explicitly do NOT lose the entire array if you lose a disk--just what was on that disk. Other systems allow different trade-offs between speed, redundancy, and size.
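The blast-radius difference can be sketched with a toy model: if each file lives wholly on one disk, a single failure takes only that disk's files, whereas block striping puts a piece of every file on every disk. (The file names and round-robin placement below are made up for illustration; this is not how Storage Spaces actually allocates.)

```python
# Compare what survives one disk failure under two placements:
# whole-file-per-disk (JBOD-style) vs striped-across-all-disks.

def surviving_files(placement: dict[str, set[int]], failed: int) -> list[str]:
    """A file survives if none of its data lived on the failed disk."""
    return sorted(f for f, disks in placement.items() if failed not in disks)

files = ["a.mkv", "b.mkv", "c.mkv", "d.mkv"]
n_disks = 4

per_file = {f: {i % n_disks} for i, f in enumerate(files)}  # one disk per file
striped = {f: set(range(n_disks)) for f in files}           # every file on every disk

print(surviving_files(per_file, failed=2))  # three of four files survive
print(surviving_files(striped, failed=2))   # nothing survives
```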

redeyes
Sep 14, 2002

by Fluffdaddy
If you want a decent logical RAID, meaning you can read each drive individually and get files, StableBit Drivepool is amazing and I still love it. Running mine with 32TB of Hitachi NAS drives.

Dr. Fishopolis
Aug 31, 2004

ROBOT
I switched to unRaid ages ago and never looked back. It's not RAID at all, it's basically JBOD with parity, which is excellent for a shitload of reasons.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, for consumer use, where the performance of even a single SSD is more than sufficient for just about anything, a JBOD-based system makes a lot more sense than RAID0.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

I believe they're called CONCAT arrays.

A proper implementation of CONCAT will only lose the data on the drive that fails - as opposed to a RAID0, which will take the entire array with it; worse, the MTBF of the array is halved every time you add a disk.
Also, gconcat(8), a.k.a. GEOM CONCAT in FreeBSD, can be booted from just fine, as long as you place the firmware-compatible boot-block on the firmware's first disk (i.e. what the BIOS calls C: and what UEFI calls disk0) - so it's a question of whether Windows 10 still uses NTLDR62 or has been updated to support Storage Spaces/ReFS.

Right, you only lose what was on that disk... but files can be scattered across multiple disks at a block level, and likely will be for performance reasons, so you will lose half of every file on average.

Sadly storage spaces cannot be booted like ZFS spans, or so I’ve read.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

Right, you only lose what was on that disk... but files can be scattered across multiple disks at a block level, and likely will be for performance reasons, so you will lose half of every file on average.

Sadly storage spaces cannot be booted like ZFS spans, or so I’ve read.
ZFS can't do SPAN arrays; it stripes data across vdevs, so any pool with more than one vdev automatically becomes RAIDN0, where N is either blank, 1, 5, 6, or 7.

AlternateAccount
Apr 25, 2005
FYGM

DrDork posted:

On Windows, at least, you can use Storage Spaces to do more or less exactly that: JBOD but addressable as a single drive letter.

Is the big difference between Storage Spaces and RAID a file vs. block level thing?

redeyes posted:

If you want a decent logical RAID, meaning you can read each drive individually and get files, StableBit Drivepool is amazing and I still love it. Running mine with 32TB of Hitachi NAS drives.

Oh my gosh, I had totally forgotten about this. It's what we used way back when Windows Home Server came out with a new version that didn't support Drive Extender or whatever. I might still have a license lying around...

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
While I was stress testing my old 4790K system with an 850 Evo prior to selling it, I was reminded of just how fast it booted from hitting the power button to the Windows desktop: 14 secs. My 8700K needed the same 14 secs just to show the POST screen, and another 13 secs to the desktop, despite having an EX920 NVMe OS drive.


Laslow
Jul 18, 2007

Palladium posted:

While I was stress testing my old 4790K system with an 850 Evo prior to selling it, I was reminded of just how fast it booted from hitting the power button to the Windows desktop: 14 secs. My 8700K needed the same 14 secs just to show the POST screen, and another 13 secs to the desktop, despite having an EX920 NVMe OS drive.
The DDR4 adds about 10 seconds for some reason I’m too lazy to look up right now.
