|
raid 6 or suck dicks imo
|
# ? Aug 21, 2017 07:11 |
|
raid 0: dont be a hero
|
# ? Aug 21, 2017 07:12 |
|
btrfs raid 1
|
# ? Aug 21, 2017 09:40 |
|
make your own with this https://www.youtube.com/watch?v=t-L99pUANaA
|
# ? Aug 21, 2017 13:37 |
|
Agile Vector posted: raid 6 or suck dicks imo

checking in on this every few months: is RAID still hosed in BTRFS?
|
# ? Aug 21, 2017 13:40 |
|
it's usable now, don't know why they still have that big red warning on the wiki (maybe because those last scrub fixes are pretty recent) https://btrfs.wiki.kernel.org/index.php/RAID56
|
# ? Aug 21, 2017 14:02 |
|
hrm, yes, it has been 20 days without a major patch fixing serious data loss issues; this is the kind of filesystem track record on which i shall stake my data!
|
# ? Aug 21, 2017 14:25 |
|
Agile Vector posted:raid 6 or suck dicks imo
|
# ? Aug 21, 2017 14:40 |
|
Tankakern posted: make your own with this

well those are kinda cute
|
# ? Aug 21, 2017 15:01 |
|
Jimmy Carter posted: checking in on this every few months: is RAID still hosed in BTRFS?

btrfs is completely hosed, why would you use it? use regular linux mdraid, or a hardware raid controller, and a filesystem that is intended not to lose your data
|
# ? Aug 21, 2017 15:13 |
|
eschaton posted: get an older Sun rackmount server for a few hundred with like 128GB of RAM and multiple 10GE ports

this would be a cool option except that the best sun x86 gear uses 2.5" disks, which are kind of expensive. also oracle's support policy for hobbyists is ... not wonderful. getting solaris 11 is very easy. getting patches for solaris 11 is a pain in the balls
|
# ? Aug 21, 2017 15:14 |
|
zfs
|
# ? Aug 21, 2017 16:06 |
|
Notorious b.s.d. posted: btrfs is completely hosed, why would you use it?

no, btrfs owns, but you have to commit to it, knowing just enough so you understand when you have to use nocow, and when you need to balance it, and such
|
# ? Aug 21, 2017 17:27 |
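a minimal sketch of the upkeep being described, assuming a btrfs filesystem mounted at a hypothetical /data (not a path from this thread). nocow only takes effect on empty files or directories, so it has to be set before any data lands there; periodic balance keeps the allocator from accumulating half-empty chunks.

```shell
# /data is a hypothetical btrfs mount point; adjust to your own.
# chattr +C (nocow) must be applied while the directory is still empty,
# so set it on the directory before creating VM images or databases in it
mkdir -p /data/vm-images
chattr +C /data/vm-images

# compact chunks that are less than half full, for both data and
# metadata; run occasionally, not on a tight schedule
btrfs balance start -dusage=50 -musage=50 /data
```

these are root-only ops commands against a real filesystem, so treat them as a config sketch rather than something to paste blindly.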
|
or you can use zfs
|
# ? Aug 21, 2017 18:05 |
|
i havent thought about file systems at all in years thanks to ntfs. it just works
|
# ? Aug 21, 2017 19:00 |
|
Bloody posted: i havent thought about file systems at all in years thanks to ntfs. it just works

refs is an improvement. zfs is good but showing its age. apfs is shaping up to be good on paper. btrfs is good but uhhhh
|
# ? Aug 21, 2017 19:08 |
|
ntfs is very very good.
|
# ? Aug 21, 2017 19:09 |
|
Shaggar posted:ntfs is very very good.
|
# ? Aug 21, 2017 19:25 |
|
Shaggar posted: ntfs is very very good.

no copy on write. let me just make a small edit to this large file and save... oh wait, the power went out and I lost both the edit and the original
|
# ? Aug 21, 2017 19:55 |
|
|
# ? Aug 21, 2017 20:06 |
|
there are always new and more linux isos. do you really need to store your old ones?
|
# ? Aug 21, 2017 20:47 |
|
i got a qnap ts-251 and it has been very satisfactory
|
# ? Aug 21, 2017 20:48 |
|
Blue Train posted: illmatic and i am both have classics op

sincere thanks for making this the first reply
|
# ? Aug 21, 2017 20:51 |
|
Perplx posted: no copy on write

this never really happens on ntfs. it is metadata journaling, and pretty good at it, because windows crashes a lot.
|
# ? Aug 21, 2017 21:00 |
|
Notorious b.s.d. posted: this would be a cool option except that the best sun x86 gear uses 2.5" disks, which are kind of expensive

I wasn't talking about Sun x86 gear, I was talking about something like a [url=https://www.ebay.com/itm/251984487951]T5240[/url]. don't know why that one is $1100, I've seen them for $400. and then you also don't have to worry about updates because there aren't any!
|
# ? Aug 21, 2017 21:06 |
|
Shaggar posted: ntfs is very very good.

unironically agreeing with shaggar for once. NTFS is actually a well-designed filesystem. of course, that's the design; when it comes to the implementation…
|
# ? Aug 21, 2017 21:07 |
|
eschaton posted: I wasn't talking about Sun x86 gear, I was talking about something like a [url=https://www.ebay.com/itm/251984487951]T5240[/url]

you weren't thinking big enough. sun used x86 for their storage-focused models
|
# ? Aug 23, 2017 14:20 |
|
Notorious b.s.d. posted: you weren't thinking big enough

thumper
|
# ? Aug 23, 2017 14:24 |
|
its synology, op. i got a couple extra 2 bays if you want em
|
# ? Aug 23, 2017 14:29 |
|
code:
also all these drives are from 2013 and not a single one has had to be replaced because zfs just resilvers the occasional bad block while scrubbing each month, unlike garbage hardware raid implementations
|
# ? Aug 23, 2017 18:46 |
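the monthly scrub habit described above is just a cron entry away; a sketch, with `tank` as a hypothetical pool name (nothing in the thread names the actual pool):

```shell
# start a scrub: zfs reads every allocated block, verifies checksums,
# and rewrites any bad copy from redundancy instead of ejecting the disk
zpool scrub tank

# check the result: scan progress, repaired byte count, and per-device
# read/write/checksum error columns
zpool status tank

# a root crontab line to do this on the first of every month at 3am:
# 0 3 1 * * /sbin/zpool scrub tank
```

requires root and a real pool, so this is a config sketch, not a runnable demo.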
|
zfs is so good. I wish oracle wasn't such a poo poo company so we could use it
|
# ? Aug 23, 2017 19:04 |
|
r u ready to WALK posted:
delete everything including your posts
|
# ? Aug 23, 2017 19:13 |
|
r u ready to WALK posted:
lol what kind of poo poo hardware controllers are you using? a perc/LSI would handle that just fine
|
# ? Aug 23, 2017 19:15 |
|
Both PERC and HP Smart Array (at least the old ones) are really bad at handling subsequent disk errors while rebuilding; about the only thing they can deal with is SMART pre-fail warnings. They will kick the redundant drives out of the array on the first write/read error and start rebuilding on the spare, and then proceed to poo poo themselves when they hit another bad block on a different disk, even though the original disk might contain the valid data they need.

ZFS, on the other hand, will not eject a drive from the raid unless it is completely offline and not responding, and since everything is checksummed and copy on write it can still use the partial contents of a drive that might be having hardware issues and verify that the data is good. It is not the fastest in the world but drat good at keeping data integrity.

But yeah Oracle should be forced to give ZFS back to the community at gunpoint
|
# ? Aug 23, 2017 19:49 |
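a sketch of what handling a flaky-but-not-dead drive looks like under that behavior; `tank` and the disk names are placeholders, not details from the thread:

```shell
# show only pools with problems, with per-device error counters,
# instead of silently kicking a disk on the first bad sector
zpool status -x

# if the errors were transient (loose cable, controller hiccup),
# reset the counters rather than throwing the disk away
zpool clear tank

# if the disk really is dying, replace it in place; the resilver reads
# from every remaining source, including the partially-working old disk
zpool replace tank sda sdb
```

root-only ops commands against a real pool; read them as illustration of the error-handling model, not a procedure.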
|
r u ready to WALK posted:But yeah Oracle should be forced to give ZFS back to the community at gunpoint
|
# ? Aug 23, 2017 19:52 |
|
btrfs also has scrub and can silently fix bad block errors just like zfs; use that instead of an out-of-tree module that you know will never be part of mainline linux
|
# ? Aug 23, 2017 19:53 |
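for reference, the btrfs equivalent is a one-liner too; /data is a hypothetical mount point, and self-healing only works with redundant data/metadata profiles (raid1, dup, etc.):

```shell
# walk every block, verify checksums, and rewrite bad copies
# from the surviving mirror
btrfs scrub start /data

# progress, plus counts of corrected and uncorrectable errors
btrfs scrub status /data
```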
|
r u ready to WALK posted: Both PERC and HP Smart Array (at least the old ones) are really bad at handling subsequent disk errors while rebuilding, about the only thing they can deal with is SMART pre-fail warnings. They will kick the redundant drives out of the array on the first write/read error and start rebuilding on the spare, and then proceed to poo poo themselves when they hit another bad block on a different disk even though the original disk might contain the valid data it needs.

I don't know what gen of PERCs you were working with, but they don't jettison on predictive failure unless it's exhausted remappable sectors, or at least coming close to the limit. It doesn't rebuild the array on the first remap; it takes a whole bunch, and you have to ignore the warnings for a while to get to the point you are describing.
|
# ? Aug 23, 2017 20:09 |
|
upload everything to a google drive and use plex cloud
|
# ? Aug 23, 2017 20:54 |
|
wouldn't linux software raid with regular scrubbing be good enough?
|
# ? Aug 23, 2017 21:05 |
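mostly, with one caveat: md only knows about parity/mirror mismatches, not which copy is correct, so it catches latent bad sectors but can't self-heal silent corruption the way checksumming filesystems do. a sketch, assuming a hypothetical array at /dev/md0 (many distros already ship this as a monthly cron job):

```shell
# kick off a full read/compare pass over /dev/md0 (root required)
echo check > /sys/block/md0/md/sync_action

# watch progress
cat /proc/mdstat

# mismatch count after the check; nonzero on raid1 isn't always fatal
# (e.g. swap), but on raid5/6 it deserves attention
cat /sys/block/md0/md/mismatch_cnt
```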
|
get a synology because it's dead simple, or you could roll your own if you want to spend your time at home janitoring computers
|
# ? Aug 23, 2017 22:00 |