|
Hardware: 24-drive-bay case, 4TB drives, all same make/model/etc.

Possible ZFS configurations, arranged by number of disks:

20 disks, raidz2 = 1 zpool, 2 vdevs of 10 disks. 2 can fail per vdev, 4 can fail in total. 32TB per vdev, 64TB total usable, 20% of total drives can fail.

22 disks, raidz3 = 1 zpool, 2 vdevs of 11 disks. 3 can fail per vdev, 6 can fail in total. 32TB per vdev, 64TB total usable, 27.27% of the drives can fail.

24 disks, raidz2 = 1 zpool, 3 vdevs of 8 disks. 2 can fail per vdev, 6 can fail in total. 24TB per vdev, 72TB total usable, 25% of total drives can fail.

orian fucked around with this message at 12:13 on Jul 10, 2014
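A quick sketch to reproduce the numbers above, counting raw TB on 4TB drives (real usable space will be a bit lower after ZFS metadata and partial-stripe overhead). Note "can fail in total" means up to the parity count *per vdev*; 4 failures in one 10-disk raidz2 vdev still loses the pool.

```python
# Rough usable-capacity math for the three candidate layouts.
DRIVE_TB = 4

def layout(vdevs, disks_per_vdev, parity):
    """Return (usable TB, max tolerated failures, % of drives that can fail)."""
    data_disks = disks_per_vdev - parity
    usable = vdevs * data_disks * DRIVE_TB
    max_fail = vdevs * parity  # assumes failures spread evenly across vdevs
    total = vdevs * disks_per_vdev
    return usable, max_fail, round(100 * max_fail / total, 2)

print(layout(2, 10, 2))  # 20 disks, raidz2 -> (64, 4, 20.0)
print(layout(2, 11, 3))  # 22 disks, raidz3 -> (64, 6, 27.27)
print(layout(3, 8, 2))   # 24 disks, raidz2 -> (72, 6, 25.0)
```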
# ? Jul 10, 2014 04:42 |
|
|
I only work with the ZFS appliance but, you're not going to make a cache vdev?
|
# ? Jul 10, 2014 06:01 |
|
pram posted: "I only work with the ZFS appliance but, you're not going to make a cache vdev?"

I figured it wouldn't be necessary as this is for home use: 2-3 machines at once, media, personal files, ESXi homelab datastore. Non-production, basically. Am I wrong in thinking that? Also the server will have 64GB RAM.

orian fucked around with this message at 12:09 on Jul 10, 2014
# ? Jul 10, 2014 11:53 |
|
I'm building something similar, but with fewer drives initially. Based on my research (and someone more experienced will surely jump in here), I agree with you that it's better to put 64+GB of RAM in the machine than deal with dedicated L2ARC or ZIL SSD drives. Also, have you looked at striped mirrored pairs? I was leaning that way for my implementation. A couple of useful links:

https://calomel.org/zfs_raid_speed_capacity.html
http://nex7.blogspot.ca/2013/03/readme1st.html
http://constantin.glez.de/blog/2010/01/home-server-raid-greed-and-why-mirroring-still-best
|
# ? Jul 10, 2014 12:58 |
|
You've picked three entirely valid configs, and you're putting the correct number of disks in each vdev. I'd pick one of the Z2 configs, as I think Z3 is overkill. You should be backing up off-system for all of the events (fire, theft, etc.) that can take out the whole array, rather than worrying about the MTTDL differences of Z2 vs Z3.

usurper posted: "I'm building something similar, but with less drives initially. I agree with you that based on my research (and someone more experienced will surely jump in here) is that it's better to put in 64+GB of ram in the machine than deal with dedicated L2ARC or ZIL SSD drives."

It is definitely better to expand your ARC than use L2ARC, but the ZIL serves a different function and is pretty essential if you want serious write IO out of a ZFS system. It also helps alleviate performance problems from fragmentation at high % utilization.
|
# ? Jul 10, 2014 18:09 |
|
orian posted: "I figured it wouldn't be necessary as this is for home use, 2-3 machines at once, media, personal files, ESXi homelab datastore. Non production, basically. Am I wrong in thinking that? Also the server will have 64GB RAM."

That RAM is fine; it's just that I usually see ZFS configured with vdevs for cache/spare/ZIL. I know the intent log is a big deal for databases. I'd probably just pick #2 and fill in the last 2 disks with a spare vdev.
|
# ? Jul 10, 2014 21:55 |
|
I think I'll settle on the following:

2x 10-disk Z2 zpools
1x 4-disk mirrored zpool dedicated for the ESXi datastore

Thanks for the help everyone!

orian fucked around with this message at 20:38 on Jul 12, 2014
# ? Jul 12, 2014 20:36 |
|
What do you mean by ESXi datastore? ESXi installs on a flash drive.
|
# ? Jul 13, 2014 01:46 |
|
The datastore is where the virtual machines go. Mine is called "datastore1".
|
# ? Jul 13, 2014 04:29 |
|
thebigcow posted: "the datastore is where the machines go. mine is called "datastore1""

That's only if you use local storage, which is dumb.
|
# ? Jul 13, 2014 04:40 |
|
I'd recently learned that the larger the L2ARC device, the more RAM you need, because the header for every record cached on the L2ARC has to be kept in ARC. That was surprising. Either way OP, make sure you're using ashift=12, and you should probably read this: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/
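A rough sense of that L2ARC RAM cost, assuming the ~180 bytes-per-record header size commonly cited for ZFS of this era (the exact figure varies by implementation and version, so treat it as a ballpark):

```python
# Back-of-the-envelope ARC overhead for an L2ARC device: each record
# cached on L2ARC keeps a header in RAM.
HEADER_BYTES = 180  # assumption: ballpark for this era of ZFS

def l2arc_ram_overhead_gb(l2arc_gb, avg_record_kb):
    records = (l2arc_gb * 1024 * 1024) / avg_record_kb
    return records * HEADER_BYTES / (1024 ** 3)

# A 480GB L2ARC full of 8K records (VM datastore) vs 128K records (media):
print(round(l2arc_ram_overhead_gb(480, 8), 1))    # ~10.5 GB of ARC consumed
print(round(l2arc_ram_overhead_gb(480, 128), 2))  # ~0.66 GB
```

The takeaway: small-recordsize workloads make L2ARC far more expensive in RAM than large-file media storage.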
|
# ? Jul 13, 2014 16:06 |
|
Dilbert As gently caress posted: "It's only if you do local storage which is dumb."

Local storage is hardly dumb.
|
# ? Jul 13, 2014 16:07 |
|
Beware that once you point an ESXi server at ZFS through NFS or iSCSI, you'll get generally lovely performance out of the box. The reason is that VMware forces every write to be synchronous to ensure data consistency in case of transient failures. In the enterprise you never notice this, because everything has big battery-backed write caches and responds with "write completed" once the data is in cache. It's murder on a home server with onboard SATA ports, though.

You can tell ZFS to lie to the application, but if your ZFS box crashes you can end up with broken VMs that way. If you want to run a big VMware lab, it might be worth it to get a used RAID controller with cache and play around with sticking the ZIL and L2ARC on it. Or just run a datastore directly on hardware RAID5 for write-heavy vmdk files.
|
# ? Jul 13, 2014 21:26 |
|
error1 posted: "Beware that once you point an ESXi server to zfs through nfs or iscsi you'll get generally lovely performance out of the box. The reason is that vmware forces every write to be synchronous to ensure data consistency in case of transient failures. In the enterprise you never notice this because everything has big battery backed write caches and respond back with a write completed once it's in cache. It's murder on a home server with onboard sata ports though."

Or get a small SSD and put the ZIL on it. 5GB is enough.
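Why 5GB is plenty: the ZIL only has to hold sync writes until the next transaction group commits (the txg interval defaults to around 5 seconds), so a rule-of-thumb sizing is a couple of intervals at your peak sync write rate. A sketch, with the two-interval multiplier as an assumption:

```python
# Rule-of-thumb SLOG sizing: hold a couple of txg intervals of sync writes.
def slog_size_gb(peak_write_mb_s, txg_interval_s=5, intervals=2):
    # assumption: sizing for two txg intervals of sustained sync writes
    return peak_write_mb_s * txg_interval_s * intervals / 1024

# A home box pushing ~250 MB/s of sync writes:
print(round(slog_size_gb(250), 2))  # ~2.44 GB needed; 5GB leaves headroom
```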
|
# ? Jul 14, 2014 07:10 |
|
What happens when an SLOG dies these days? A few versions back on FreeBSD it would destroy a pool if the SLOG died.
|
# ? Jul 14, 2014 17:36 |
|
thebigcow posted: "What happens when an SLOG dies these days? A few versions back on FreeBSD it would destroy a pool if the SLOG died."

Modern ZFS handles it gracefully. You lose those transactions, but the pool keeps functioning.
|
# ? Jul 14, 2014 22:42 |
|
|
If it were me, for home use only where performance didn't matter so much, I would probably do 3x 6+2 raidz2 vdevs. If I wanted it to perform better, I would do 3x 5+2 vdevs, a 2x mirrored SLOG, and an L2ARC device.
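For comparison, the raw capacity trade-off between those two suggestions on 4TB drives in the 24-bay case (the 5+2 layout frees three bays for the mirrored SLOG pair and the L2ARC device):

```python
# Raw usable TB for the suggested raidz2 layouts (4TB drives).
DRIVE_TB = 4

def usable_tb(vdevs, data_disks_per_vdev):
    return vdevs * data_disks_per_vdev * DRIVE_TB

print(usable_tb(3, 6))  # 3x (6+2): 24 drives used, 72 TB usable
print(usable_tb(3, 5))  # 3x (5+2): 21 drives used, 60 TB usable,
                        # leaving 3 bays for mirrored SLOG + L2ARC
```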
|
# ? Jul 15, 2014 01:51 |