|
WindMinstrel posted:All it needs is online capacity expansion and it's superior in every way to mdadm RAID-5. I just did this today at work. One of our Oracle DBs needed more space for its data, so I created a LUN and attached it to the system.
Note: YMMV. We were using ZFS on a single LUN, so we weren't getting all of its integrity features -- we're not going to run ZFS in a RAID5 setup on a high-availability SAN. ZFS's integrity features are really useful when you have a lot of cheap disks. Besides, ZFS would most likely never see the errors anyway, because our SAN would fix them first. And we'll get an email after the manufacturer and the distributor get theirs -- hell, they'll be on the phone with us before we even READ that email. If something is seriously up they'll send out a team to check it out. (<3 Pillar SAN) Anyway... I'm not a ZFS zealot. It has its good sides and its bad sides. People have run into interesting bugs and performance issues, and it's still in its infancy. That said, when it works, it rocks. And open source? Hell yes. It's on FreeBSD now too, so we don't have to use Solaris? HELL YES.
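The commands from the original post didn't survive the archive, but the gist of growing a single-LUN pool looks something like this (pool, dataset, and device names here are hypothetical, and `autoexpand` needs a reasonably recent ZFS):

```shell
# Pool on the freshly attached LUN; names are made up for illustration.
zpool create orapool c2t0d0
zfs create orapool/oradata            # dataset for the DB files

# ...later, after the SAN enlarges the LUN underneath us:
zpool set autoexpand=on orapool       # pick up LUN growth automatically
zpool online -e orapool c2t0d0        # or expand into the new space right now
```

No downtime for the database either way, which is the whole point.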
|
# ¿ Mar 19, 2008 06:07 |
|
FISHMANPET posted:gently caress Oracle. Indeed. According to the recent benchmarks at Moronix, the as-yet-unreleased native ZFS on Linux and the ZFS in OpenIndiana are both much slower than BTRFS. However, BTRFS is a pile of steaming poo poo. They haven't updated their tools in ages (and the tools are the most important part). I don't give a poo poo whether they have a stable filesystem design -- if I can't do a RAID5, and if my kernel panics when I break a mirror, I really don't give a flying gently caress how great the filesystem is, because the features that matter most to me (RAID + healing) are broken or STILL not available. BTRFS activity has taken a major dump ever since the Sun and Oracle crap started. All I care about anymore is that the BSD community keeps hacking away at ZFS (which they've committed to doing). gently caress you Oracle. You ruin everything.
|
# ¿ Nov 24, 2010 16:13 |
|
movax posted:Hey, sanity check on my part: are Hitachi 5K3000s (2TB, 5940rpm SATA3 drives) native 512-byte, 4K emulated as 512, or native 4K? i.e., should I use ashift=12 when making a vdev out of these to add to my existing pool? I've not yet seen a Hitachi that fakes the 4K crap. I could be wrong, but that's what I like about the ones I have. The 2TB ones should claim to be 512 and the 3TB ones should claim to be 4K.
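If you don't trust what the drive reports, you can just force 4K alignment yourself: it costs a little space on true 512-byte drives but avoids the read-modify-write penalty on the liars. A sketch (pool and device names are hypothetical; `-o ashift` at creation time needs a reasonably recent ZFS -- on older FreeBSD you'd fake it with a gnop(8) 4K provider instead):

```shell
# See what the drive actually claims (FreeBSD)
diskinfo -v /dev/ada1 | grep -i sector

# Force 4K-aligned vdevs regardless of what the drive claims
zpool create -o ashift=12 tank mirror ada1 ada2

# Verify: should report ashift: 12
zdb tank | grep ashift
```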
|
# ¿ Aug 13, 2011 03:02 |
|
Goon Matchmaker posted:Can I get a critique of my planned NAS please? WTF is Hitachi CoolSpin? I've been avoiding any drive that isn't straight-up non-green 7200rpm. The Hitachis haven't dropped in price in a year, so I'm still waiting to pick up more -- there's no way I can justify paying the same price for another set of HDs a year later.
|
# ¿ Sep 14, 2011 03:16 |
|
Please remember that dedupe on ZFS is going to require an INSANE amount of memory. You've been warned.
|
# ¿ Dec 30, 2011 00:03 |
|
Nam Taf posted:Out of morbid curiosity, can you put some figures to that? The last figures I saw were ~3GB per 1TB. That's *just* for the dedupe table -- it doesn't include room for the regular ZFS ARC buffering, etc. For comparison, HAMMER on DragonFly lets you get by with around 256MB per TB.
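The ~3GB-per-TB figure falls out of simple arithmetic: every unique block needs an in-core dedup table (DDT) entry of roughly a few hundred bytes, and the block count is your data size divided by the average block size. A back-of-envelope version (the 320-byte entry size and 128K average block are assumptions for illustration -- a real pool will tell you its actual numbers via `zdb -S`):

```shell
pool_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of unique data
avg_block=$((128 * 1024))                   # 128 KiB average block size
entry_bytes=320                             # rough in-core DDT entry size

blocks=$((pool_bytes / avg_block))
ddt_mib=$((blocks * entry_bytes / 1024 / 1024))
echo "DDT needs about ${ddt_mib} MiB of RAM per TiB"   # about 2560 MiB here
```

And it gets much worse with small blocks: at an 8K average (databases, mail spools) the same math works out to roughly 40 GiB of DDT per TiB.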
|
# ¿ Jan 5, 2012 01:58 |
|
FISHMANPET posted:The Hitachi drive I'm looking at in particular has "CoolSpin" technology, but I can't figure out what that means. There are also a lot of reviews reporting early failures on that model, so maybe I'll just skip it and look at the Seagate Green drive. Does Seagate have any of the issues that the WD drives do? I've also been looking for a proper explanation of what "CoolSpin" really is.
|
# ¿ Jan 25, 2012 23:41 |
|
|
thebigcow posted:ZFS might but FreeBSD's implementation didn't the last time I was up to date on such things. If you're running FreeBSD, keep an eye out for zfsd to hit the tree. It will probably make 10.1-RELEASE (I'm going to bother the developer until I see him commit it) and will provide hot-spare functionality plus a notification framework for ZFS events (drive failures, etc.). WIP code is here: http://svnweb.freebsd.org/base/projects/zfsd/
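For context, the manual version of what zfsd would automate looks roughly like this (pool and device names are hypothetical):

```shell
# Designate da4 as a hot spare for the pool
zpool add tank spare da4

# When da1 faults, zfsd would do the equivalent of this for you:
zpool replace tank da1 da4   # spare kicks in, resilver starts
zpool status tank            # watch the resilver progress
```

Until it lands, you're the notification framework: no spare attaches and no email goes out unless you're watching `zpool status` yourself.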
|
# ¿ Jan 7, 2014 03:16 |