|
mpeg4v3 posted:I'm sure this has been asked dozens of times, but what's generally considered the best OS/distro to go with for ZFS? I'm particularly drawn to FreeNAS, as it seems pretty easy to set up and runs so small, but I've seen a few reports that performance suffers on it. I was hoping not to have to devote a large amount of time to setting up and configuring an entire OS, but if need be (say, if OpenIndiana or whatever is significantly better), I guess I can.

As long as googling answers and messing with the command line doesn't bug you, then OpenIndiana is probably for you. With VirtualBox you can run whatever Windows stuff you want, and still get all the cool ZFS features. FreeNAS and Nexenta mess with the userland, which makes VBox kill itself.
|
# ? Nov 3, 2010 08:58 |
|
Methylethylaldehyde posted:The ZIL only needs to hold like 5 seconds worth of writes before they're flushed to spinning disks. 5 seconds of ~240MB/sec is ~1.2GB. The rest you can use as regular cache, which is awesome.

Hmm, I'll need to buy like 5 2TB drives to serve as a temporary scratchpad, heh. What if I made an entirely new zpool (off-topic: can you rename zpools?), added my new vdevs to that (initially 2 4x2TB RAID-Zs, maybe 4 4x2TB), copied data from the old zpool to the new zpool, then destroyed the old zpool? I'm thinking of just replacing all disks with 2TB models and selling/getting rid of the 1.5TB drives. I assume that it would be sane to stripe all those vdevs?
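On the rename question: as far as I know there's no in-place rename, but you can export a pool and re-import it under a new name. A minimal sketch ("tank"/"bigtank" are placeholder pool names):

```shell
# Rename a pool by exporting it and importing it under a new name.
# Requires a live ZFS system; "tank" and "bigtank" are placeholders.
zpool export tank
zpool import tank bigtank   # the pool formerly named "tank" comes back as "bigtank"
zpool list                  # should now show "bigtank"
```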
|
# ? Nov 3, 2010 14:26 |
|
mpeg4v3 posted:I'm sure this has been asked dozens of times, but what's generally considered the best OS/distro to go with for ZFS?
|
# ? Nov 3, 2010 16:04 |
|
Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me.
|
# ? Nov 3, 2010 16:18 |
|
necrobobsledder posted:Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me. Yep, after that it costs $$$. No good for me, I'm already over 8TB. I will probably move to OpenIndiana soon.
|
# ? Nov 3, 2010 16:33 |
|
necrobobsledder posted:Isn't NexentaStor "free" only up to about 8TB of used space though? It's not a good long term solution if you're living up to the thread title if you ask me.
|
# ? Nov 3, 2010 18:04 |
|
adorai posted:12tb USED. I believe they intend to up this over time, as drives grow. What happens when you hit that? Or do they just limit pool creation size to a max usable capacity of 12.000000TB?
|
# ? Nov 3, 2010 18:20 |
|
Besides the capacity restriction, are there any other downsides to NexentaStor when compared to OpenIndiana?
|
# ? Nov 3, 2010 19:00 |
|
wang souffle posted:Besides the capacity restriction, are there any other downsides to NexentaStor when compared to OpenIndiana?

It uses a Debian userland, so Solaris binaries won't work properly on it. VirtualBox will flip out and refuse to work at all on it.
|
# ? Nov 3, 2010 19:12 |
|
Hm, thinking about it, wouldn't an 8-drive RAID-Z2 be "safer" than 2 4-drive RAID-Zs? If two drives die in one of those RAID-Z vdevs, the whole pool is toast, since losing a vdev loses the pool. So trading off IOPS for safety?

e: Hacked up a spreadsheet to figure some things out. I think since my priority is data safety, I'm going to be willing to sacrifice some IOPS (that hopefully get made up by the SSD). And I think I'd like to have hot spares, which will avoid rebuilds I think? Basic things noticed so far: as the # of vdevs goes up, RAID-Z3 obviously becomes a poor option, as it approaches the same capacity you'd get from a straight mirror, but with much worse IOPS. For 20 2TB drives, doing 4 RAID-Zx vdevs: mirroring gets me 20TB, -Z3 gets me 16TB (dumb), -Z2 gets me 24TB...at 1/3 the IOPS of a mirror, -Z gets me 32TB, but I don't want to do Z (maybe with a hot spare, but -Z2 just seems smart).

e2: where N is the number of drives and M the number of vdevs, with -Z2, when N/M = 4, mirror and -Z2 capacity are identical (logically)...hooray storage solution finding

movax fucked around with this message at 22:17 on Nov 3, 2010 |
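The spreadsheet arithmetic above is easy to sanity-check: for M RAID-Zp vdevs built from N drives total, usable space is M × (N/M − p) × drive size. A quick sketch for the 20 × 2TB case (numbers match the 32/24/16/20TB figures in the post):

```shell
# Usable capacity for 20 x 2TB drives split into 4 vdevs,
# for parity p = 1 (RAID-Z), 2 (-Z2), 3 (-Z3), plus a plain mirror pool.
drives=20; size_tb=2; vdevs=4
per_vdev=$((drives / vdevs))        # 5 disks per vdev
for p in 1 2 3; do
  echo "RAID-Z${p}: $((vdevs * (per_vdev - p) * size_tb)) TB usable"
done
echo "mirror: $((drives / 2 * size_tb)) TB usable"
```

This prints 32, 24, and 16 TB for Z1/Z2/Z3 and 20 TB for the mirror, which is exactly the crossover the e2 note describes: at 4 disks per vdev, -Z2 (n−2 data disks) and a mirror (n/2 data disks) come out identical.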
# ? Nov 3, 2010 21:09 |
|
Thinking about some type of virtualization to run multiple machines on a new ZFS NAS. Which of these choices would likely work out better to maximize the capabilities of the VMs?

1) ESXi with OpenIndiana and raw data drives passed through
2) OpenIndiana with VirtualBox
|
# ? Nov 3, 2010 22:13 |
|
I am trying to replace the horrid native torrent client on my My Book Open World II (something like that), and I am following this guide on how to do it. I'm at the 5th step, trying to fetch the settings JSON:

wget -O /root/.config/transmission-daemon/settings.json http://wd.mirmana.com/settings.json/

Unfortunately, it yields an error: No such file or directory. Am I right in assuming that the called URL is probably dead, since this guide is over a year old?
|
# ? Nov 3, 2010 22:21 |
|
Try the last URL without the trailing /
|
# ? Nov 3, 2010 22:35 |
|
Goon Matchmaker posted:Try the last url without the ending /
|
# ? Nov 3, 2010 22:43 |
|
movax posted:For 20 2TB drives, doing 4 RAID-Zx devices, mirroring gets me 20TB, -Z3 gets me 16TB (dumb), -Z2 gets me 24TB...at 1/3 the IOPS of a mirror, -Z gets me 32TB, but I don't want to do Z (maybe with a hotspare, but -Z2 just seems smart).
|
# ? Nov 4, 2010 00:03 |
|
wang souffle posted:1) ESXi with OpenIndiana and raw data drives passed through
|
# ? Nov 4, 2010 00:04 |
|
adorai posted:You could also run Xen with raw disks passed to the guest.

I used to use OpenSolaris and Xen (XVM), but moved to ESXi with an OpenIndiana guest. Xen support has been removed in OpenIndiana, so you'd be stuck with an earlier build of OpenSolaris. Xen/XVM has been discontinued by Oracle, so there's no future path for that configuration.

Looking back, OpenSolaris and Xen had a bunch of annoying problems and it never worked 100%. Each new OpenSolaris build would introduce new problems with the XVM bits. Services would randomly fail to start, or they'd have to have their startup timeout increased so that they wouldn't fail. The clock would be inaccurate (sometimes it would work; using ntp just made it worse). Sometimes the changes that needed to be made to grub wouldn't be made... A mess; it worked, just not amazingly well (and I've been running Solaris on x86 since ZFS was added).

Go ESXi, I'm happy that I did (so far, it's only been a week). Plain OpenIndiana with no Xen bits has been running like a champ without any issues for me. My biggest complaint so far is with management: you have to use the Windows-only client. I have to keep Windows machines around at work and home just for this. You can do some simple tasks via the service console over ssh (and there are some features that are only available there), but a lot of the (documented) functionality is reserved for paying customers.
|
# ? Nov 4, 2010 00:49 |
|
adorai posted:Do you have to have 4 vdevs? If it were me, I would run 2 9-disk raidz2 vdevs with 2 hot spares. 14 disks worth of usable space, immediate rebuild with hot spares. Do you have exactly 20 disks worth of controller capacity? If so, you'll need to drop 1 spare for your SSD.

I'm not sure yet, still shifting capacities around in my head. I have a 20-bay chassis, so that's the upper limit on drives. 16 can run on HBAs, the rest will be off the mobo. I *think* I want hot spares, because if I understand how they function properly (which I probably don't), they reduce my risk of data loss even more. So I could do, for instance:

18 drives in 2x9 -Z2 vdevs + 2 hot spares for a total capacity of 28TB (~1200 IOPS)
18 drives in 3x6 -Z2 vdevs + 3 hot spares (cram the odd disk into the Norco's odd-bay-out maybe) for 24TB (~2250 IOPS)
20 drives in 4x5 -Z2 vdevs w/ no hot spares for 24TB (~3300 IOPS)
20 drives mirrored, 20TB usable, 10k IOPS

For IOPS I just assigned an arbitrary guess of 500/device, so I could compare IO performance between configurations. Need to figure out what exactly I am looking for, I suppose.
|
# ? Nov 4, 2010 00:53 |
|
alo posted:I used to use OpenSolaris and Xen (XVM), but moved to ESXi with an OpenIndiana guest. Xen support has been removed in OpenIndiana, so you'd be stuck with an earlier build of OpenSolaris. Xen/XVM has been discontinued by Oracle, so there's no future path for that configuration. movax posted:IOPS I just assigned an arbitrary guess of 500/device, so I could compare IO performance between configurations. Need to figure out what exactly I am looking for, I suppose.
|
# ? Nov 4, 2010 01:15 |
|
The biggest thing holding me back from going with ESXi is that I have two drives for my system drive, and no way to hardware-RAID them together under ESXi.
|
# ? Nov 4, 2010 02:03 |
|
FISHMANPET posted:The biggest thing holding me back from going with ESXi is that I have two drives for my system drive, and no way to hardware RAID them together ESXi.
|
# ? Nov 4, 2010 02:29 |
|
I do have to wonder which OS has worse hardware support though - OpenIndiana or ESXi (free). I get this sinking feeling that I shouldn't buy one of those vSphere Essentials licenses for my home system and using a work-provided license might not fly either.
|
# ? Nov 4, 2010 04:03 |
|
adorai posted:I searched for a way to host just a vmx file and some rdm vmdk's on a flash drive, but couldn't get it to work. Maybe in 4.2/5.0 whatever comes next ...

Or I could get a supported 2-port RAID card, or just figure out some way to back up all the configs to the second disk nightly.
|
# ? Nov 4, 2010 04:06 |
|
adorai posted:For a 7+2 raidz2 vdev running 7200rpm disks, you are probably right on with your 600-per-vdev estimate. But you aren't factoring in the cache. Are you building a high-volume VM environment or something? Why are both IOPS and capacity so important to you?

Nope, not at all. I would say that this machine will spend 75% of its day idle, doing absolutely nothing, just storing files that I'll be accessing when I'm home from work. Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s). I guess I want to strike a compromise between capacity & IOPS, but if I had to choose, it would be capacity, as 8GB RAM + 2GB ZIL/58GB L2ARC can make up for some "lost" IOPS I think.

How exactly do hot spares function in a -Z/2/3 environment? As soon as a disk is degraded, does it begin rebuilding onto the hot-spare disk? Or does the hot spare behave like it would in a mirror and seamlessly fail over, leaving you with 2-disk redundancy?
|
# ? Nov 4, 2010 14:09 |
|
necrobobsledder posted:I do have to wonder which OS has worse hardware support though - OpenIndiana or ESXi (free). I get this sinking feeling that I shouldn't buy one of those vSphere Essentials licenses for my home system and using a work-provided license might not fly either. ESXi definitely has worse hardware support. If you're using server hardware from a vendor like Dell or HP, you'll be fine, but if you use off the shelf stuff you might not have very good luck with it.
|
# ? Nov 4, 2010 15:29 |
|
ufarn posted:It was something that the forums added automatically in my futile attempt to stop the linkification, so no dice. But nicely spotted, though.

It is not dead for me. But if you still can't get to it, here are the contents of the file code:
|
# ? Nov 4, 2010 15:29 |
|
So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue.
|
# ? Nov 4, 2010 16:02 |
|
So, doing a bit more reading, I guess you can share hot spares between vdevs, which is cool (so if I have 3 vdevs, I can share 2 hot spares between them...I'd have to have pretty bad luck for 3 hot spares to be needed in a given timeframe). What I've kinda narrowed it down to:

3x6 -Z2s, 24TB usable (2 hot spares)
4x5 -Z2s, 24TB usable (no hot spares)

Pretty sure I want the hot spares and a neat fit into 20 bays, so 3x6 looks tempting. Read somewhere about not using an even # of disks in a vdev though...? Also, if anyone wants to see the spreadsheet I've been using, here: http://dropbox.movax.org/ZFS.xlsx

e: ^^^ Googling around, WD Greens in particular give users a hard time, and the sector-emulation crap on any 4k drive apparently upsets ZFS. There's a workaround, but I'm not personally willing to risk my data to a "workaround". I'd try to track down non-Green 5400rpm drives. Pretty sure green drives that aren't from WD and are <2TB/don't have 4k sectors are OK though

movax fucked around with this message at 16:09 on Nov 4, 2010 |
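For reference, hot spares are declared at the pool level rather than per vdev, which is why they can be shared: any spare can step in for a failed disk in any vdev of the pool. A sketch of the 3x6 -Z2 + 2 spares layout (all device names are placeholders):

```shell
# Hot spares belong to the pool, not to a vdev; ZFS will attach whichever
# spare is available to whichever vdev loses a disk.
# "tank" and the c*t*d* device names are placeholders.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  spare  c3t0d0 c3t1d0

zpool status tank   # the spares appear in their own "spares" section
```

Spares can also be added to an existing pool later with `zpool add tank spare <device>`.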
# ? Nov 4, 2010 16:06 |
|
wang souffle posted:So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue. I have 1.5TB Samsung Green drives, and I don't think I have any problems.
|
# ? Nov 4, 2010 16:06 |
|
movax posted:Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s). wang souffle posted:So what's the final verdict on green drives with ZFS? Is it a known issue that's being worked on because of the 4k sectors that should be resolved in the ZFS implementation in the near future, or is the hardware itself the culprit? Since the server I'm building would be running 24/7, power consumption and noise are actually an issue.
|
# ? Nov 4, 2010 16:12 |
|
adorai posted:There is something wrong there, I get ~25MBps on a 3+1 raidz1 of green drives. Hot spares sit idle, and begin a rebuild as soon as a disk fails.

Yeah, I don't know what's going on; they are the first-gen Seagate 1.5TB drives w/ the firmware patch. I did upgrade the CPU and stop under-volting it, which helped boost write performance. Intel NIC + a PowerConnect is halfway decent network infrastructure, I figure, so...

So, if I'm settled on 3x6 -Z2 + 2 hot spares...and want to shift my data over, what can I do? Is it possible to zpool export my current pool, cobble together a machine with 8 SATA ports across mobo + SATA HBA, and successfully zpool import the pool and read all the data off it?
|
# ? Nov 4, 2010 16:23 |
|
movax posted:So, if I'm settled on 3x6 -Z2 + 2 hotspares...and want to shift my data over, what can I do? Is it possible to zpool export my current pool, and cobble together a machine with 8 SATA ports across mobo + SATA HBA and successfully zpool import the pool and read all the data off it?
|
# ? Nov 4, 2010 20:18 |
|
adorai posted:You can add vdevs on the fly, so you can just put your 8 disk pool in with the first two 6 disk vdevs, copy the data over, remove your 8 disks and put in the 8 additional 2tb drives.

Err, can you copy data from vdev to vdev? I thought you'd have to do it pool to pool (so add the first 2 6-disk vdevs to the machine in a different pool, copy everything over, kill off the old 8x1.5 vdev/pool, add another 6-disk vdev at some point in the future)? I figure it'll be cheaper for me to just add 2x6 for now, and then 1x6 at the beginning of next year or something.
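movax is right that the copy is pool to pool (vdevs aren't individually addressable, and you can't remove a vdev from a pool). The usual way to do the migration is a recursive snapshot plus zfs send/receive. A hedged sketch, with "tank"/"newtank" and the device names as placeholders:

```shell
# Build the new pool alongside the old one, then replicate everything over.
# "tank", "newtank", and the c*t*d* device names are placeholders.
zpool create newtank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

zfs snapshot -r tank@migrate            # recursive snapshot of every dataset
zfs send -R tank@migrate | zfs receive -F -d newtank   # replicate datasets + properties

zpool destroy tank    # only after verifying the copy on newtank!
```

A third 6-disk raidz2 vdev can then be added later with `zpool add newtank raidz2 <six devices>`, exactly as the "add 1x6 next year" plan describes.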
|
# ? Nov 4, 2010 21:38 |
|
movax posted:Err, can you copy data from vdev to vdev? I thought you'd have to do pool to pool (so add the first 2 6 disk vdevs to the machine in a different pool, copy poo poo over, kill off old 8x1.5 vdev/pool, add another 6 disk vdev at some point in the future)?
|
# ? Nov 4, 2010 21:58 |
|
Has anyone used a P55 board for a file server in a production environment? I'm thinking about moving a chunk of files off an application server (the disk I/O from these files getting accessed is causing issues with the application, I believe). About 150GB of data. About 75 users. Was thinking about having 1 WD Black drive for the OS, then 4 drives in RAID 10 for the actual files. Still unsure on drives; I would prefer some WD RE drives, but 4 VelociRaptors may work better. Any thoughts/opinions? I'd rather avoid buying an expensive RAID card...
|
# ? Nov 4, 2010 22:15 |
|
movax posted:Right now, I have just 8 1.5TB 7200rpm disks in RAID-Z2, over GigE. I am somewhat disappointed with write performance (~20MB/s) but satisfied with reads, though I think they could be better (~80MB/s). I guess I want to strike a compromise between capacity & IOPS, but if I had to choose, it would be capacity, as 8GB RAM + 2GB ZIL/58G L2ARC can make up for some "lost" IOPS I think.
|
# ? Nov 4, 2010 23:12 |
|
necrobobsledder posted:Umm, how many platters per disk? I've got 4x2TB 4k WDs as a (I'm feeling lucky) 5x1TB array with mixed platter configs (this one all on 512b sectors) and I don't get performance anywhere near that poor in either - they're within about 20% of each other and writes easily saturate a gigabit ethernet connection at 40MBps, in fact. I'm not even using an Intel NIC either - some crappy Realtek onboard NIC with similarly crappy NICs from the clients. I was about to order an Intel NIC but wanted to give the guy a chance. I'm kind of amazed I'm not experiencing any problems functional or performance-related.

Hm, not sure of the exact model # at the moment (server is still powered off at home, on the road atm), but I think they are the ST31500341AS (7200.11); I don't see the # of platters listed in the datasheet on Seagate's website. How are those 4k-sector drives (with emulation on?) treating you? Do you have that gnop or whatnot workaround active?
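For anyone wondering, the "gnop workaround" mentioned above is a FreeBSD trick: wrap one of the 4k drives in a 4096-byte-sector nop provider before creating the pool, so ZFS picks ashift=12 for the whole vdev instead of trusting the drive's emulated 512-byte sectors. A sketch, assuming FreeBSD and placeholder ada* device names:

```shell
# FreeBSD-only: force ashift=12 at pool creation time via a gnop(8)
# provider that advertises 4096-byte sectors. Device names are placeholders.
gnop create -S 4096 /dev/ada0
zpool create tank raidz2 /dev/ada0.nop /dev/ada1 /dev/ada2 /dev/ada3

# ashift is fixed at vdev creation, so the nop provider can be discarded:
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank      # pool comes back on the raw device, still ashift=12
```

Since ashift is baked in at creation and can't be changed later, this only helps for new pools; it won't fix an existing pool created with ashift=9.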
|
# ? Nov 5, 2010 00:06 |
|
Goon Matchmaker posted:ESXi definitely has worse hardware support. If you're using server hardware from a vendor like Dell or HP, you'll be fine, but if you use off the shelf stuff you might not have very good luck with it. Remember http://www.vm-help.com/ is your friend
|
# ? Nov 5, 2010 00:24 |
|
Moey posted:About 150GB of data.

Recipe for disaster.
|
# ? Nov 5, 2010 01:18 |
|
|
|
devmd01 posted:Recipe for disaster.

Yeah, that's what was lingering in the back of my mind...I just couldn't bring myself to say it.
|
# ? Nov 5, 2010 01:47 |