 
orian
Jun 2, 2004
penz
Hardware
24 drive bay case
4TB drives, all same make/model/etc.

Possible ZFS configurations, arranged by number of disks (rough command-line sketch after the list)
20 disks
raidz2 = 1 zpool, 2 vdevs of 10 disks, 2 can fail per vdev, 4 can fail in total
32TB per vdev, 64TB total usable, 20% of total drives can fail

22 disks
raidz3 = 1 zpool, 2 vdevs of 11 disks, 3 can fail per vdev, 6 can fail in total
32TB per vdev, 64TB total usable, 27.27% of the drives can fail

24 disks
raidz2 = 1 zpool, 3 vdevs of 8 disks, 2 can fail per vdev, 6 can fail in total
24TB per vdev, 72TB total usable, 25% of total drives can fail
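
For reference, option 1 at the command line would look roughly like this; the device names are made up, so substitute whatever your OS calls the 24 bays.

code:
# Option 1: one pool, two 10-disk raidz2 vdevs (hypothetical device names)
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

# Option 2 is the same idea with two 11-disk raidz3 vdevs, and option 3
# uses three 8-disk raidz2 vdevs.
zpool status tank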

orian fucked around with this message at 12:13 on Jul 10, 2014


pram
Jun 10, 2001
I only work with the ZFS appliance, but you're not going to make a cache vdev?

orian
Jun 2, 2004
penz

pram posted:

I only work with the ZFS appliance, but you're not going to make a cache vdev?

I figured it wouldn't be necessary as this is for home use: 2-3 machines at once, media, personal files, an ESXi homelab datastore. Non-production, basically. Am I wrong in thinking that? Also, the server will have 64GB RAM.

orian fucked around with this message at 12:09 on Jul 10, 2014

usurper
Oct 19, 2003

Sup
I'm building something similar, but with fewer drives initially. I agree with you: based on my research (and someone more experienced will surely jump in here), it's better to put 64+GB of RAM in the machine than to deal with dedicated L2ARC or ZIL SSD drives.

Also, have you looked at striped mirrored pairs? I was leaning this way for my implementation.
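
Striped mirrored pairs are just a pool made of two-disk mirror vdevs, something like the sketch below (device names made up):

code:
# RAID10-style: ZFS stripes across the mirror vdevs
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5
# You can grow it two disks at a time later:
zpool add tank mirror da6 da7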


A couple of useful links:

https://calomel.org/zfs_raid_speed_capacity.html

http://nex7.blogspot.ca/2013/03/readme1st.html

http://constantin.glez.de/blog/2010/01/home-server-raid-greed-and-why-mirroring-still-best

KS
Jun 10, 2003
Outrageous Lumpwad
You've picked three entirely valid configs, and you're putting the correct number of disks in each vdev. I'd pick one of the Z2 configs, as I think Z3 is overkill. You should be backing up off-system for all of the events (fire, theft, etc.) that can take out the array, rather than worrying about the MTTDL differences of Z2 vs Z3.

usurper posted:

I'm building something similar, but with fewer drives initially. I agree with you: based on my research (and someone more experienced will surely jump in here), it's better to put 64+GB of RAM in the machine than to deal with dedicated L2ARC or ZIL SSD drives.

It is definitely better to expand your ARC than to use L2ARC, but the ZIL serves a different function and is pretty essential if you want serious write IO out of a ZFS system. It also helps alleviate performance problems from fragmentation at high % utilization.
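
To make the distinction concrete: the SLOG goes in as a "log" vdev and only absorbs synchronous writes, while L2ARC is a "cache" vdev that extends the read cache. Rough sketch, with made-up device names:

code:
# Separate intent log (SLOG); worth mirroring since it holds in-flight sync writes
zpool add tank log mirror ada0p1 ada1p1
# L2ARC read cache; losing it is harmless, so a single device is fine
zpool add tank cache ada0p2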

pram
Jun 10, 2001

orian posted:

I figured it wouldn't be necessary as this is for home use: 2-3 machines at once, media, personal files, an ESXi homelab datastore. Non-production, basically. Am I wrong in thinking that? Also, the server will have 64GB RAM.

That RAM is fine; it's just that I usually see ZFS configured with vdevs for cache/spare/ZIL. I know the intent log is a big deal for databases.

I'd probably just pick #2 and fill in the last 2 disks with a spare vdev.
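
Something like this, assuming the last two bays show up as da22/da23 (made-up names):

code:
# Add the two leftover disks as hot spares
zpool add tank spare da22 da23
zpool status tank    # the spares get their own section in the output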

orian
Jun 2, 2004
penz
I think I'll settle on the following:

2x 10 disk z2 zpools
1x 4 disk zpool mirror dedicated for ESXi datastore

Thanks for the help everyone!

orian fucked around with this message at 20:38 on Jul 12, 2014

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
What do you mean by ESXi datastore?

ESXi installs on a flash drive.

thebigcow
Jan 3, 2001

Bully!
The datastore is where the virtual machines go. Mine is called "datastore1".

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

thebigcow posted:

The datastore is where the virtual machines go. Mine is called "datastore1".

That's only if you do local storage, which is dumb.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

I recently learned that the larger the L2ARC device you have, the more RAM you need, since the L2ARC headers have to live in ARC. That was surprising.

Either way, OP, make sure you're using ashift=12, and you should probably read this:

http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/
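
How you force ashift=12 depends on the platform; roughly (pool and device names made up):

code:
# ZFS on Linux / OpenZFS: set it at pool creation
zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# Newer FreeBSD: a sysctl sets the minimum ashift instead
sysctl vfs.zfs.min_auto_ashift=12
# Check what the pool actually ended up with
zdb -C tank | grep ashift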

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

Dilbert As FUCK posted:

That's only if you do local storage, which is dumb.

Local storage is hardly dumb.

r u ready to WALK
Sep 29, 2001

Beware that once you point an ESXi server at ZFS over NFS or iSCSI you'll get generally poor performance out of the box. The reason is that VMware forces every write to be synchronous to ensure data consistency in case of transient failures. In the enterprise you never notice this because everything has big battery-backed write caches and responds with a write-completed as soon as the data is in cache. It's murder on a home server with onboard SATA ports, though.

You can tell ZFS to lie to the application, but if your ZFS box crashes you can end up with broken VMs that way. If you want to run a big VMware lab, it might be worth getting a used RAID controller with cache and playing around with sticking the ZIL and L2ARC on it, or just running a datastore directly on hardware RAID5 for write-heavy VMDK files.
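
The "tell ZFS to lie" knob is the sync property on the dataset backing the datastore; the dataset name below is made up:

code:
# Fast but risky: ignore sync requests from NFS/iSCSI clients like ESXi.
# A crash or power loss can leave the VMs inconsistent.
zfs set sync=disabled tank/esxi
# The default behavior, which is safe again once a proper SLOG is in place:
zfs set sync=standard tank/esxi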

Mr Shiny Pants
Nov 12, 2012

r u ready to WALK posted:

Beware that once you point an ESXi server at ZFS over NFS or iSCSI you'll get generally poor performance out of the box. The reason is that VMware forces every write to be synchronous to ensure data consistency in case of transient failures. In the enterprise you never notice this because everything has big battery-backed write caches and responds with a write-completed as soon as the data is in cache. It's murder on a home server with onboard SATA ports, though.

You can tell ZFS to lie to the application, but if your ZFS box crashes you can end up with broken VMs that way. If you want to run a big VMware lab, it might be worth getting a used RAID controller with cache and playing around with sticking the ZIL and L2ARC on it, or just running a datastore directly on hardware RAID5 for write-heavy VMDK files.

Or get a small SSD and put the ZIL on it. 5GB is enough.
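
Roughly like this (partition name made up); the ZIL only ever holds a few seconds' worth of uncommitted sync writes, so a handful of GB really is plenty:

code:
# Put the separate intent log on a small SSD partition
zpool add tank log ada4p1
# Watch the log device absorb sync writes during VM activity
zpool iostat -v tank 5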

thebigcow
Jan 3, 2001

Bully!
What happens when an SLOG dies these days? A few versions back on FreeBSD it would destroy a pool if the SLOG died.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

thebigcow posted:

What happens when an SLOG dies these days? A few versions back on FreeBSD it would destroy a pool if the SLOG died.

Modern ZFS handles it gracefully: you lose any uncommitted sync transactions, but the pool keeps functioning.
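
For completeness, recovering from a dead SLOG looks something like this (device names made up):

code:
# The pool stays online; the log vdev just shows as faulted
zpool status tank
# Either swap in a replacement log device...
zpool replace tank ada4p1 ada5p1
# ...or drop the SLOG entirely and fall back to the in-pool ZIL
zpool remove tank ada4p1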


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
If it were me, for home use only where performance doesn't matter so much, I would probably do 3x 6+2 raidz2 vdevs. If I wanted it to perform better, I would do 3x 5+2 raidz2 vdevs, a 2x mirrored SLOG, and an L2ARC device.
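
The performance-oriented version would look roughly like this (all device names hypothetical):

code:
# 3x (5 data + 2 parity) raidz2, mirrored SLOG, one L2ARC device
zpool create -o ashift=12 tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6 \
    raidz2 da7  da8  da9  da10 da11 da12 da13 \
    raidz2 da14 da15 da16 da17 da18 da19 da20 \
    log mirror ada0p1 ada1p1 \
    cache ada2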
