Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


mpeg4v3 posted:

I'm sure this has been asked dozens of times, but what's generally considered the best OS/distro to go with for ZFS? I'm particularly drawn to freeNAS, as it seems pretty easy to setup and runs so small, but I've seen a few reports that performance suffers on it. I was hoping not to have to devote a large amount of time to setup and configure an entire OS, but if need be (say, if OpenIndiana or whatever is significantly better), I guess I can.

As long as Googling for answers and fiddling with the command line doesn't bug you, OpenIndiana is probably for you. With VirtualBox you can run whatever Windows stuff you want and still get all the cool ZFS features. FreeNAS and Nexenta mess with the userland, which makes VBox fall over.


movax
Aug 30, 2008



Methylethylaldehyde posted:

The ZIL only needs to hold about 5 seconds' worth of writes before they're flushed to the spinning disks. 5 seconds at ~240MB/sec is ~1.2GB, so call it 2GB. The rest you can use as regular cache, which is awesome.

And yeah, I had the intel rebrand of the LSI-1068E card, flashed them with the IT firmware and they work great.


Easiest way I found to shift the data is to go to Best Buy, buy two or three 2TB drives, move your data to them, break the vdev, remake it, copy the data back, then zero the drives and return them.

Hmm, I'll need to buy like 5 2TB drives to serve as a temporary scratchpad, heh. What if I made an entirely new zpool (off-topic: can you rename zpools?), added my new vdevs to that (initially 2 4x2TB RAID-Zs, maybe 4 4x2TB), copied data from old zpool to new zpool, then destroy old zpool? I'm thinking of just replacing all disks with 2TB models and selling/getting rid of the 1.5TB drives.

I assume that it would be sane to stripe all those vdevs?
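For what it's worth, striping across top-level vdevs is simply what ZFS does by default: any pool with more than one vdev stripes writes across all of them, no extra flag needed. A dry-run sketch (zpool is stubbed to echo rather than execute, and the device names are placeholders):

```shell
# Stub zpool so this sketch just echoes the commands it would run.
zpool() { echo "would run: zpool $*"; }

# Listing two raidz groups in one create makes two top-level vdevs;
# ZFS automatically stripes writes across them.
zpool create tank \
    raidz c0d0 c0d1 c0d2 c0d3 \
    raidz c1d0 c1d1 c1d2 c1d3

# Growing the pool later is just adding another vdev:
zpool add tank raidz c2d0 c2d1 c2d2 c2d3
```

On a real system you would drop the stub and use the actual disk device names from `format` or `cfgadm`.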

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

mpeg4v3 posted:

I'm sure this has been asked dozens of times, but what's generally considered the best OS/distro to go with for ZFS?
I am running openindiana, which I am only running because I was running opensolaris, which I was only running for xVM, which no longer works in openindiana. If I had to rebuild today, I would probably go with NexentaStor.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me.

movax
Aug 30, 2008



necrobobsledder posted:

Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me.

Yep, after that it costs $$$. No good for me, I'm already over 8TB. I will probably move to OpenIndiana soon.

adorai
Nov 2, 2002


necrobobsledder posted:

Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me.
12TB used. I believe they intend to up this over time as drives grow.

movax
Aug 30, 2008



adorai posted:

12TB used. I believe they intend to up this over time as drives grow.

What happens when you hit that? Or do they just limit pool creation size to a max usable capacity of 12.000000TB?

wang souffle
Apr 26, 2002


Besides the capacity restriction, are there any other downsides to NexentaStor when compared to OpenIndiana?

Methylethylaldehyde
Oct 23, 2004



wang souffle posted:

Besides the capacity restriction, are there any other downsides to NexentaStor when compared to OpenIndiana?

It uses the Debian userland, so any of the Solaris binaries won't work right on it. VirtualBox will flip out and refuse to work at all on it.

movax
Aug 30, 2008



Hm, thinking about it, wouldn't an 8-drive RAID-Z2 be "safer" than two 4-drive RAID-Zs? If two drives die in one of those RAID-Zs, the whole pool is hosed since that vdev is hosed.

So trading off IOPS for safety?

e: Hacked up a spreadsheet to figure some stuff out. I think since my priority is data safety, I'm going to be willing to sacrifice some IOPS (which hopefully get made up by the SSD). And I think I'd like to have hot-spares, which will avoid lengthy degraded windows, I think?

Basic things noticed so far: as the # of vdevs goes up, RAID-Z3 obviously becomes a poor option, approaching the capacity you'd get from a straight mirror but with lousy IOPS. For 20 2TB drives in 4 RAID-Zx vdevs: mirroring gets me 20TB, -Z3 gets me 16TB (dumb), -Z2 gets me 24TB...at 1/3 the IOPS of a mirror, and -Z gets me 32TB, but I don't want to do -Z (maybe with a hotspare, but -Z2 just seems smart).

e2: where N is the number of drives and M the number of vdevs: with -Z2, when N/M = 4, mirror and -Z2 capacity are identical (logically)...hooray, storage solution finding
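Those capacity figures check out mechanically. A quick sketch of the arithmetic (2TB drives, N drives split evenly into M vdevs with p parity disks each; the function names are mine, not from the spreadsheet):

```shell
# Usable TB for N 2TB drives split into M raidz vdevs, p parity disks per vdev.
raidz_tb()  { awk -v n="$1" -v m="$2" -v p="$3" 'BEGIN { print (n/m - p) * m * 2 }'; }
mirror_tb() { awk -v n="$1" 'BEGIN { print (n/2) * 2 }'; }  # 2-way mirrors

mirror_tb 20       # prints 20  (20 drives mirrored)
raidz_tb 20 4 3    # prints 16  (4x5 raidz3)
raidz_tb 20 4 2    # prints 24  (4x5 raidz2)
raidz_tb 20 4 1    # prints 32  (4x5 raidz1)
# The e2 observation falls out too: with N/M = 4, raidz2 leaves 2 data
# disks per 4-disk vdev -- the same 50% efficiency as a mirror.
```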

movax fucked around with this message at 21:17 on Nov 3, 2010

wang souffle
Apr 26, 2002


Thinking about some type of virtualization to run multiple machines on a new ZFS NAS. Which of these choices would likely work out better to maximize the capabilities of the VMs?

1) ESXi with OpenIndiana and raw data drives passed through
2) OpenIndiana with VirtualBox

ufarn
May 30, 2009


I am trying to replace the horrid native torrent client on my My Book Open World II (something like that), and I am following this guide on how to do it. I'm at step 5, trying to fetch the settings JSON:

wget -O /root/.config/transmission-daemon/settings.json http://wd.mirmana.com/settings.json/

Unfortunately, it yields an error: No such file or directory.

Am I right in assuming that the called URL is probably dead, as this is a guide over a year old?
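One thing worth checking before blaming the URL: wget prints "No such file or directory" when the directory given to `-O` doesn't exist, not when the remote file is gone, so the step likely just needs `/root/.config/transmission-daemon/` created first. A sketch (using a scratch directory and a stand-in for the actual download):

```shell
# On the My Book the real path would be /root/.config/transmission-daemon;
# a scratch dir is used here for illustration.
target="$(mktemp -d)/transmission-daemon"
mkdir -p "$target"                   # wget -O will not create this directory for you
# wget -O "$target/settings.json" http://wd.mirmana.com/settings.json
echo '{}' > "$target/settings.json"  # stand-in for the wget above
test -f "$target/settings.json" && echo "settings.json in place"
```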

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Try the last URL without the trailing /

ufarn
May 30, 2009


Goon Matchmaker posted:

Try the last url without the ending /
The trailing slash was something the forums added automatically in my futile attempt to stop the linkification, so no dice. But nicely spotted, though.

adorai
Nov 2, 2002


movax posted:

For 20 2TB drives, doing 4 RAID-Zx devices, mirroring gets me 20TB, -Z3 gets me 16TB (dumb), -Z2 gets me 24TB...at 1/3 the IOPS of a mirror, -Z gets me 32TB, but I don't want to do Z (maybe with a hotspare, but -Z2 just seems smart).
Do you have to have 4 vdevs? If it were me, I would run two 9-disk raidz2 vdevs with 2 hot spares: 14 disks' worth of usable space, and immediate rebuild onto the hot spares. Do you have exactly 20 disks' worth of controller capacity? If so, you'll need to drop one spare for your SSD.

adorai
Nov 2, 2002


wang souffle posted:

1) ESXi with OpenIndiana and raw data drives passed through
2) OpenIndiana with VirtualBox
You could also run Xen with raw disks passed to the guest.

alo
May 1, 2005




adorai posted:

You could also run Xen with raw disks passed to the guest.

I used to use OpenSolaris and Xen (XVM), but moved to ESXi with an OpenIndiana guest. Xen support has been removed in OpenIndiana, so you'd be stuck with an earlier build of OpenSolaris. Xen/XVM has been discontinued by Oracle, so there's no future path for that configuration.

Looking back, OpenSolaris and Xen had a bunch of annoying problems and it never worked 100%. Each new OpenSolaris build would introduce new problems with the XVM bits. Services would randomly fail to start, or need their startup timeouts increased to keep them from failing. The clock would be inaccurate (sometimes it would work; using ntp just made it worse). Sometimes the changes that needed to be made to grub wouldn't be made... A mess. It worked, just not amazingly well (and I've been running Solaris on x86 since ZFS was added).

Go ESXi; I'm happy that I did (so far, it's only been a week). Plain OpenIndiana with no Xen bits has been running like a champ without any issues for me. My biggest complaint so far is with management: you have to use the Windows-only client, so I have to keep Windows machines around at work and home just for this. You can do some simple tasks via the service console over ssh (and a few things can only be done there), but a lot of the (documented) functionality is reserved for paying customers.

movax
Aug 30, 2008



adorai posted:

Do you have to have 4 vdevs? If it were me, I would run 2 9 disk radz2 vdevs with 2 hot spares. 14 disks worth of usable space, immediate rebuild with hot spares. Do you have exactly 20 disks worth of controller capacity? If so, you'll need to drop 1 spare for your SSD.

I'm not sure yet, still shifting capacities around in my head. I have a 20-bay chassis, so that's the upper limit on drives. 16 can run on HBAs; the rest will be off the mobo. I *think* I want hot-spares, because if I understand properly how they function (which I probably don't), they reduce my risk of data loss even more.

So I could do for instance:
18 drives in 2x9 -Z2 vdevs + 2 hot spares for total capacity of 28TB (~1200 IOPS)
18 drives in 3x6 -Z2 vdevs + 3 hot spares (cram the odd disk into the Norco's odd-bay-out maybe) for 24TB (~2250 IOPS)
20 drives in 4x5 -Z2 vdevs w/ no hot spares for 24TB (~3300 IOPS)
20 drives mirrored, 20TB usable, 10k IOPS

For IOPS I just assigned an arbitrary guess of 500/device, so I could compare I/O performance between configurations. I need to figure out what exactly I'm looking for, I suppose.
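As a cross-check on the spreadsheet numbers: the usual rule of thumb is that a raidz vdev delivers roughly the random IOPS of a single member disk (the whole stripe seeks together), so pool IOPS scale with vdev count rather than drive count, while a pool of mirrors scales closer to the drive count on reads. A sketch assuming 100 random IOPS per 7200rpm disk (a guess, just like the 500/device above):

```shell
# Rules of thumb, not measurements. per_disk is an assumed figure.
per_disk=100
raidz_iops()  { echo $(( $1 * per_disk )); }   # $1 = number of raidz vdevs
mirror_iops() { echo $(( $1 * per_disk )); }   # $1 = number of drives (reads)

raidz_iops 2     # prints 200   (2x9 -Z2)
raidz_iops 3     # prints 300   (3x6 -Z2)
raidz_iops 4     # prints 400   (4x5 -Z2)
mirror_iops 20   # prints 2000  (20 drives mirrored, reads)
```

The relative ordering matches the spreadsheet's: mirrors win on IOPS, and more (smaller) raidz vdevs beat fewer (larger) ones.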

adorai
Nov 2, 2002


alo posted:

I used to use OpenSolaris and Xen (XVM), but moved to ESXi with an OpenIndiana guest. Xen support has been removed in OpenIndiana, so you'd be stuck with an earlier build of OpenSolaris. Xen/XVM has been discontinued by Oracle, so there's no future path for that configuration.
It can still run as a domU. Personally, I look forward to when virtio drivers for KVM are included in openindiana; I'd much prefer to run it as a KVM guest on Linux than under ESXi, because with Linux as the host I can use mdadm and not need any hardware RAID.

movax posted:

IOPS I just assigned an arbitrary guess of 500/device, so I could compare IO performance between configurations. Need to figure out what exactly I am looking for, I suppose.
For a 7+2 raidz2 vdev running 7200rpm disks, you are probably right on with your 600-per-vdev estimate. But you aren't factoring in the cache. Are you building a high-volume VM environment or something? Why are both IOPS and capacity so important to you?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


The biggest thing holding me back from going with ESXi is that I have two drives for my system drive, and no way to hardware-RAID them together under ESXi.

adorai
Nov 2, 2002


FISHMANPET posted:

The biggest thing holding me back from going with ESXi is that I have two drives for my system drive, and no way to hardware RAID them together ESXi.
I searched for a way to host just a vmx file and some rdm vmdk's on a flash drive, but couldn't get it to work. Maybe in 4.2/5.0 whatever comes next ...

necrobobsledder
Mar 21, 2005

I do have to wonder which OS has worse hardware support though - OpenIndiana or ESXi (free). I get this sinking feeling that I shouldn't buy one of those vSphere Essentials licenses for my home system and using a work-provided license might not fly either.

FISHMANPET
Mar 3, 2007



adorai posted:

I searched for a way to host just a vmx file and some rdm vmdk's on a flash drive, but couldn't get it to work. Maybe in 4.2/5.0 whatever comes next ...

Or I could get a supported 2-port RAID card, or just figure out some way to back up all the configs to the second disk nightly.

movax
Aug 30, 2008



adorai posted:

For a 7+2 raidz2 vdev running 7200rpm disks, you are probably right on on your 600 per vdev estimate. But you aren't factoring in the cache. Are you building a high volume VM environment or something? Why are both iops and capacity so important to you?

Nope, not at all. I would say that this machine will spend 75% of its day idle, doing absolutely nothing. Just storing files that I'll be accessing when I'm home from work.

Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s). I guess I want to strike a compromise between capacity & IOPS, but if I had to choose, it would be capacity, as 8GB RAM + 2GB ZIL/58G L2ARC can make up for some "lost" IOPS I think.

How exactly do hot-spares function in a -Z/2/3 environment? As soon as a disk is degraded, it begins rebuilding onto the hot-spare disk? Or does the hot-spare behave like it would in a mirror, and seamlessly failover, leaving you with 2-disk redundancy?

Goon Matchmaker
Oct 23, 2003


necrobobsledder posted:

I do have to wonder which OS has worse hardware support though - OpenIndiana or ESXi (free). I get this sinking feeling that I shouldn't buy one of those vSphere Essentials licenses for my home system and using a work-provided license might not fly either.

ESXi definitely has worse hardware support. If you're using server hardware from a vendor like Dell or HP, you'll be fine, but if you use off the shelf stuff you might not have very good luck with it.

complex
Sep 16, 2003



ufarn posted:

It was something that the forums added automatically in my futile attempt to stop the linkfication, so no dice. But nicely spotted, though.

It is not dead for me. But if you still can't get to it, here are the contents of the file:

code:
{
    "0.0.0.0": "0.0.0.0", 
    "::": "::", 
    "alt-speed-down": 50, 
    "alt-speed-enabled": false, 
    "alt-speed-time-begin": 540, 
    "alt-speed-time-day": 127, 
    "alt-speed-time-enabled": false, 
    "alt-speed-time-end": 1020, 
    "alt-speed-up": 50, 
    "bind-address-ipv4": "0.0.0.0", 
    "bind-address-ipv6": "::", 
    "blocklist-enabled": true, 
    "dht-enabled": true, 
    "download-dir": "\/shares\/internal\/PUBLIC\/Torrent\/work", 
    "download-limit": 2000, 
    "download-limit-enabled": 1, 
    "encryption": 0, 
    "lazy-bitfield-enabled": true, 
    "max-peers-global": 200, 
    "message-level": 2, 
    "open-file-limit": 32, 
    "peer-limit-global": 120, 
    "peer-limit-per-torrent": 30, 
    "peer-port": 51413, 
    "peer-port-random-enabled": 0, 
    "peer-port-random-high": 65535, 
    "peer-port-random-low": 1024, 
    "peer-port-random-on-start": false, 
    "peer-socket-tos": 8, 
    "pex-enabled": true, 
    "port-forwarding-enabled": true, 
    "preallocation": 1, 
    "proxy": "", 
    "proxy-auth-enabled": false, 
    "proxy-auth-password": "", 
    "proxy-auth-username": "", 
    "proxy-enabled": false, 
    "proxy-port": 80, 
    "proxy-type": 0, 
    "ratio-limit": 2.0000, 
    "ratio-limit-enabled": false, 
    "rpc-access-control-list": "+127.0.0.1,+192.168.*.*", 
    "rpc-authentication-required": false, 
    "rpc-bind-address": "0.0.0.0", 
    "rpc-enabled": true, 
    "rpc-password": "", 
    "rpc-port": 9091, 
    "rpc-username": "", 
    "rpc-whitelist": "*.*.*.*", 
    "rpc-whitelist-enabled": true, 
    "speed-limit-down": 2000, 
    "speed-limit-down-enabled": true, 
    "speed-limit-up": 20, 
    "speed-limit-up-enabled": true, 
    "upload-limit": 20, 
    "upload-limit-enabled": 1, 
    "upload-slots-per-torrent": 14
}

wang souffle
Apr 26, 2002


So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue.

movax
Aug 30, 2008



So doing a bit more reading, I guess you can share hot-spares between vdevs, which is cool (so if I have 3 vdevs, I can share 2 hot-spares between them...I'd have to have pretty rotten luck for 3 hot-spares to be needed in a given timeframe).

What I've kinda narrowed it down to:
3x6 -Z2s, 24TB usable (2 hot-spares)
4x5 -Z2s, 24TB usable (no hot-spares)

Pretty sure I want the hot-spares and a neat fit into 20 bays, so 3x6 looks tempting. I read somewhere about not using an even # of disks in a vdev, though...?


Also, if anyone wants to see the spreadsheet I've been using, here: http://dropbox.movax.org/ZFS.xlsx

e: ^^^ Googling around, WD Greens in particular give users a hard time, and the sector-emulation crap on any 4k drive apparently pisses ZFS off. There's a workaround, but I'm not personally willing to risk my data to a "workaround". I'd try to track down non-Green 5400rpm drives. Pretty sure green drives that aren't from WD, are <2TB, and don't have 4k sectors are OK, though.
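An aside for anyone reading along later: the gnop-style workaround was eventually superseded by an explicit `ashift` option, so current OpenZFS lets you force 4k alignment at pool creation (this flag did not exist in 2010-era Solaris builds). A dry-run sketch, stubbed to echo, with placeholder device names:

```shell
zpool() { echo "would run: zpool $*"; }   # stub: echo instead of executing

# ashift=12 forces 4096-byte (2^12) allocation alignment, so 4k-sector
# drives aren't hobbled by their 512-byte emulation layer.
zpool create -o ashift=12 tank raidz2 d0 d1 d2 d3 d4 d5
```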

movax fucked around with this message at 15:09 on Nov 4, 2010

FISHMANPET
Mar 3, 2007



wang souffle posted:

So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue.

I have 1.5TB Samsung Green drives, and I don't think I have any problems.

adorai
Nov 2, 2002


movax posted:

Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s).

How exactly do hot-spares function in a -Z/2/3 environment? As soon as a disk is degraded, it begins rebuilding onto the hot-spare disk? Or does the hot-spare behave like it would in a mirror, and seamlessly failover, leaving you with 2-disk redundancy?
There is something wrong there, I get ~25MBps on a 3+1 raidz1 of green drives. Hot spares sit idle, and begin a rebuild as soon as a disk fails.

wang souffle posted:

So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue.
They won't save you much money, but I am using them successfully in my openindiana box.

movax
Aug 30, 2008



adorai posted:

There is something wrong there, I get ~25MBps on a 3+1 raidz1 of green drives. Hot spares sit idle, and begin a rebuild as soon as a disk fails.

Yeah, I don't know what the deal is; they are the first-gen Seagate 1.5TB drives w/ the firmware patch. I did upgrade/stop under-volting the CPU, which helped boost write performance. Intel NIC + a PowerConnect is halfway decent network infrastructure, I figure, so...

So, if I'm settled on 3x6 -Z2 + 2 hotspares...and want to shift my data over, what can I do? Is it possible to zpool export my current pool, and cobble together a machine with 8 SATA ports across mobo + SATA HBA and successfully zpool import the pool and read all the data off it?

adorai
Nov 2, 2002


movax posted:

So, if I'm settled on 3x6 -Z2 + 2 hotspares...and want to shift my data over, what can I do? Is it possible to zpool export my current pool, and cobble together a machine with 8 SATA ports across mobo + SATA HBA and successfully zpool import the pool and read all the data off it?
You can add vdevs on the fly, so you can just put your 8-disk pool in alongside the first two 6-disk vdevs, copy the data over, then remove your 8 disks and put in the 8 additional 2TB drives.

movax
Aug 30, 2008



adorai posted:

You can add vdevs on the fly, so you can just put your 8 disk pool in with the first two 6 disk vdevs, copy the data over, remove your 8 disks and put in the 8 additional 2tb drives.

Err, can you copy data from vdev to vdev? I thought you'd have to do it pool to pool (so add the first two 6-disk vdevs to the machine as a different pool, copy everything over, kill off the old 8x1.5 vdev/pool, and add another 6-disk vdev at some point in the future)?

I figure it'll be cheaper for me to just add 2x6 for now, and then 1x6 at the beginning of next year or something.

adorai
Nov 2, 2002


movax posted:

Err, can you copy data from vdev to vdev? I thought you'd have to do pool to pool (so add the first 2 6 disk vdevs to the machine in a different pool, copy poo poo over, kill off old 8x1.5 vdev/pool, add another 6 disk vdev at some point in the future)?
That's what I meant, run two pools and just copy from the old pool to the new.
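The two-pool shuffle (and the earlier pool-rename question) can be sketched end to end. `zpool`/`zfs` are stubbed to echo rather than execute, and pool/device names are placeholders; `zfs send -R` of a recursive snapshot is the standard way to carry every dataset, snapshot, and property across:

```shell
zpool() { echo "would run: zpool $*"; }   # stubs: echo the commands
zfs()   { echo "would run: zfs $*"; }

# New pool: start with two 6-disk raidz2 vdevs plus two shared hot spares.
zpool create tank2 \
    raidz2 d0 d1 d2 d3 d4 d5 \
    raidz2 d6 d7 d8 d9 d10 d11 \
    spare s0 s1

# Copy everything over with a recursive snapshot + send/receive.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -d tank2

# Retire the old 8x1.5TB pool, and add the third vdev whenever.
zpool destroy tank
zpool add tank2 raidz2 d12 d13 d14 d15 d16 d17

# Renaming a pool is just exporting it and importing it under a new name.
zpool export tank2
zpool import tank2 tank
```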

Moey
Oct 22, 2010

I LIKE TO MOVE IT


Has anyone used a P55 board for a file server in a production environment?

I'm thinking about moving a chunk of files off an application server (the disk i/o from these files getting accessed is causing issues with this application, I believe).

About 150GB of data.
About 75 users.

Was thinking about having one WD Black drive for the OS, then 4 drives in RAID 10 for the actual files.

Still unsure on drives; I would prefer some WD RE drives, but 4 Velociraptors may work better.

Any thoughts/opinions? I'd rather avoid buying an expensive RAID card...

necrobobsledder
Mar 21, 2005

movax posted:

Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s). I guess I want to strike a compromise between capacity & IOPS, but if I had to choose, it would be capacity, as 8GB RAM + 2GB ZIL/58G L2ARC can make up for some "lost" IOPS I think.
Umm, how many platters per disk? I've got 4x2TB 4k WDs as well as a (I'm feeling lucky) 5x1TB array with mixed platter configs (the latter all on 512b sectors), and I don't get performance anywhere near that poor on either - they're within about 20% of each other, and writes easily sustain 40MBps over a gigabit ethernet connection, in fact. I'm not even using an Intel NIC either - just a crappy Realtek onboard NIC, with similarly crappy NICs on the clients. I was about to order an Intel NIC but wanted to give the Realtek a chance. I'm kind of amazed I'm not experiencing any problems, functional or performance-related.

movax
Aug 30, 2008



necrobobsledder posted:

Umm, how many platters per disk? I've got 4x2TB 4k WDs as well as a (I'm feeling lucky) 5x1TB array with mixed platter configs (the latter all on 512b sectors), and I don't get performance anywhere near that poor on either - they're within about 20% of each other, and writes easily sustain 40MBps over a gigabit ethernet connection, in fact. I'm not even using an Intel NIC either - just a crappy Realtek onboard NIC, with similarly crappy NICs on the clients. I was about to order an Intel NIC but wanted to give the Realtek a chance. I'm kind of amazed I'm not experiencing any problems, functional or performance-related.

Hm, not sure of the exact model # at the moment (server is still powered off at home, on the road atm), but I think they are the ST31500341AS (7200.11); I don't see the # of platters listed in the datasheet from Seagate's website.

How are those 4k-sector drives (with emulation on?) treating you? Do you have that gnop or whatnot workaround active?

dj_pain
Mar 28, 2005



Goon Matchmaker posted:

ESXi definitely has worse hardware support. If you're using server hardware from a vendor like Dell or HP, you'll be fine, but if you use off the shelf stuff you might not have very good luck with it.

Remember http://www.vm-help.com/ is your friend

devmd01
Mar 7, 2006

Elektronik
Supersonik


Moey posted:

About 150gb of data.
About 75 users.
Rather avoid buying an expensive Raid card...

Recipe for disaster.


Moey
Oct 22, 2010



devmd01 posted:

Recipe for disaster.

Yea, that's what was lingering in the back of my mind... I just couldn't bring myself to say it.
