devilmouse
Mar 26, 2004

It's just like real life.
Anyone have experience with Norco raid enclosures? I'd never heard of them and found this browsing around on newegg: http://www.newegg.com/Product/Product.aspx?Item=N82E16816133007

I realize it's DAS, but it's so very tempting looking.

devilmouse
Mar 26, 2004

It's just like real life.

Vinlaen posted:

Darn. The fact that you can't expand pools/arrays in ZFS is a deal breaker for me as I like to upgrade my storage every few months or every year, etc. :(

You can expand a ZFS POOL with an arbitrary number of disks, but if you're talking about drobo/unraid-like functionality where you can just add a disk to an existing ZFS VDEV (well, unless you're going from non-mirrored to mirrored or striped), then no, you're out of luck.

You can, however, add another vdev to the same pool. So if you had a 4-disk vdev with parity, you could add an entire second 4-disk array to the same pool. Yeah, it means buying disks in batches, but it's an option.

Adding a single disk at a time isn't in the cards for the foreseeable future. Here's an article that talks more about it from the Sun guys: http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
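For what it's worth, growing a pool by whole vdevs looks something like this (just a sketch; the pool name and device names are made up):

```shell
# An existing pool "tank" built from one 4-disk raidz vdev:
#   zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
# You can't bolt a 5th disk onto that vdev, but you can add a whole
# second raidz vdev, and the pool grows by its usable capacity:
zpool add tank raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0

# The pool now stripes across the two raidz vdevs:
zpool status tank
```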

devilmouse
Mar 26, 2004

It's just like real life.

Suniikaa posted:

A constant theme throughout the thread is that ZFS is the second coming and it is glorious, but what are the disadvantages?

After thinking about it, outside of the expansion issue, the biggest downside is that it's only supported under Solaris and FreeBSD. There's a FUSE port too, I guess, but last I looked it was still kind of meh. It's not hard to use per se, but getting your head around the pool management takes some getting used to as well.

devilmouse
Mar 26, 2004

It's just like real life.

wang souffle posted:

I really like the Drobo feature of easily upgradable disks. Does any other solution work like this? Seems like ZFS would be great for everything except for this feature--growing the pool in the future would most likely mean rebuilding it on fresh drives and copying over. Am I missing something?

Edit: Does UnRAID handle this well?

That's the biggest pro in favor of unraid: its expandability. It falls short in most other regards, but it's relatively user-friendly and makes for trivially easy upgrades.

devilmouse
Mar 26, 2004

It's just like real life.
After procrastinating for more than a year, I'm finally putting together my storage machine. I'm either going to run OpenSolaris if I can stomach using Solaris again or I'll just go back to the comforting land of FreeBSD. The machine itself will be our home fileserver, doing the usual lifting: streaming music/movies to a handful of computers around the house, serving as a backup store for the various macs we have lying around, and occasionally serving torrents.

The parts list is, for the most part, pretty standard as far as I can tell from wandering around the AVSforums.

Case: NORCO RPC-4220
CPU: Intel Xeon E3110 3.0GHz LGA 775 65W Dual-Core Processor
Controllers: 2x SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz
System HD: Kingston SSDNow V Series SNV125-S2/30GB 2.5" Internal Solid State Drive (SSD)
RAM: Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) ECC Unbuffered Desktop Memory Model KVR800D2E6K2/4G
MB: SUPERMICRO MBD-X7SBE LGA 775 Intel 3210 ATX
PSU: CORSAIR CMPSU-750TX
DVD: Sony Optiarc Slim Combo Black SATA Model CRX890S-10
DVD cable: BYTECC 18" Sata and Slim Sata Power 7+6pin Cable, for Sata Slim OD
Fans: 3x Noctua NF-S12B FLX 120mm, 2x Noctua NF-R8-1800 80mm
Sata power cables: 5x custom cables from frozencpu.com
Custom 120mm fanboard from some dude named cavediver

Questions in no particular order:
* Overall, any issues with the parts list?
* Unbuffered or buffered RAM? While I know that I want ECC RAM on the off-chance of a freak occurrence, I'm less sure on the question of buffered.
* Is there a better motherboard that I haven't been able to find? The boards with 2 PCIX slots are rare at best and this was the only one I managed to find on newegg.
* ZFS configuration... I'm not sure how to make the best use of the 20 bays in terms of vdevs. I'm not going to be buying all 20 drives at once, so possible options when everything is said and done:
2x 9-disk RAIDZ2 + 2-disk RAIDZ
-or-
2x 8-disk RAIDZ2 + 4-disk RAIDZ
-or-
2x 7-disk RAIDZ2 + 6-disk RAIDZ2

While it's tempting to go to just 2x 10-disk RAIDZ2 vdevs, I don't like the idea of going past the suggested 9-disk limit in ZFS. I'm leaning most strongly towards the last option for the redundancy / expandability, even if it results in the least usable space.
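For the option I'm leaning towards, building it out over time would look roughly like this (device names are placeholders, obviously):

```shell
# Start with the first 7-disk raidz2 vdev:
zpool create tank raidz2 d1 d2 d3 d4 d5 d6 d7

# ...later, as the next batches of drives arrive, grow the pool
# a whole vdev at a time:
zpool add tank raidz2 d8 d9 d10 d11 d12 d13 d14
zpool add tank raidz2 d15 d16 d17 d18 d19 d20
```

Usable space ends up at (7-2)+(7-2)+(6-2) = 14 disks' worth, with any two disks per vdev allowed to fail.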

Any other thoughts?

devilmouse
Mar 26, 2004

It's just like real life.

FISHMANPET posted:

You can do better. Drop the PCI-X card for a pair of AOC-USAS-L8i's. They're PCI Express x8 with 2 SAS ports (each SAS port can be broken out into 4 SATA ports, but I believe the 4220 is a SAS case anyway, so you save some cable). They also work great in OpenSolaris (I have one right now). I think that case might also use molex ports, so no need for the SATA power cables.

Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD.

Doh, good catch on the cable stuff. Turns out it does still use molex power for the drive backplane, as well as SAS connectors.

Did you have any problems making that UIO bracket fit in your case (assuming you're not using a SuperMicro case)? It's a good-looking card otherwise; I wonder why newegg doesn't sell it. Going PCIe saves me a bunch of headaches in finding an older MB with PCI-X slots.

Any suggestions on a motherboard with onboard video and at least 2x PCIe 8x slots, for the Phenoms?

Thanks for the input!

devilmouse
Mar 26, 2004

It's just like real life.
Thanks for the feedback, guys... I've bumped up the RAM to 8 gigs after seeing how much ZFS appreciated the extra memory. I'm glad you pointed out the AOC-USAS-L8i, too, since using PCI-X was making me sad inside.

Now I just have to find a motherboard that supports all this stuff. Oh, Newegg, why must your motherboard filter be so bad?

devilmouse
Mar 26, 2004

It's just like real life.

Falcon2001 posted:

Out of curiosity, can anyone recommend an opensolaris supported motherboard? Most of the ones I can find are out of manufacturing now; has anyone built a system lately and could recommend a base?

Specifically one for running an opensolaris based NAS system.

For Intel: For a small board, SUPERMICRO MBD-X8SIL-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182211 or for a full-size SUPERMICRO MBD-X8SIA-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235

devilmouse
Mar 26, 2004

It's just like real life.
There's also the 4224 (which supports 24 drives). Norco also offers a 120mm fan bracket for $1 + shipping for the 4224, which helps cut down the noise significantly (and if you replace the 80mm rear fans with quieter versions, it makes things quieter still).

devilmouse
Mar 26, 2004

It's just like real life.

wolrah posted:

Is there anything other than WHS which works reliably and can offer both a pool that single arbitrary size drives can be added to as well as the knowledge that a disk failure will only kill the data on the failed disk, not the entire pool?

Unraid? http://www.lime-technology.com/

devilmouse
Mar 26, 2004

It's just like real life.
Does anyone know a place where I could get reasonably priced mini-sas to mini-sas (SFF-8087) cables? Norco has them, but the cable costs $13 and they want another $8 for shipping per cable and I need to buy 8. Newegg has them for $15/each with a less ridiculous shipping cost, but that still seems high.

devilmouse
Mar 26, 2004

It's just like real life.
Has anyone pulled the trigger and tried out Solaris Express 11? I've got a new fileserver box sitting next to me and I'm hemming and hawing over OpenIndiana and full-on Solaris.

It seems it's still too new for any reasonable comparison articles to have been written.

Edit: Ha! Nevermind... apparently Oracle doesn't give access to patches/updates unless you're paying. That makes it an easier choice, I guess!

devilmouse fucked around with this message at 01:47 on Dec 18, 2010

devilmouse
Mar 26, 2004

It's just like real life.

movax posted:

Trying to stay away from the temptation to use a modified ZFS binary to force ashift to use 4K-sector drives.

I've been using 6 Samsung 2TBs in a raidz2 with the recompiled zpool binary that supports ashift=12 for a few weeks now, and everything seems right as rain. Granted, I'm still not moving over to them fully for another few weeks of use, just to be on the safe side, but no complaints (or errors) so far.

devilmouse
Mar 26, 2004

It's just like real life.

Factory Factory posted:

That looks perfect, except that, according to the thread, it has major performance issues with 4-5 disk RAIDZ/Z2 pools. :bang:

A 5-disk raidz vdev/pool will be plenty fast. Not sure where you're seeing that it would have perf problems. The author of that "distro" generally recommends a 5-disk raidz or a 6-disk raidz2 as the best perf/redundancy option.

devilmouse
Mar 26, 2004

It's just like real life.
Odd. I just saw this on the second-to-last page, after the post you're talking about:

sub.mesa posted:

For 4K disks the ideal vdev configuration is:
- mirrors (no issue with 4K sectors)
- 5-disk RAID-Z (or 9-disk)
- 6-disk RAID-Z2 (or 10-disk)

15MB/s writes is horrid. Oof. That's worse than the virtualized performance I got when I briefly tried that. Right now I get anywhere between 350-500MB/s writes on 6x 2TB 4K disks in raidz2 running on OpenIndiana, depending on the type of benchmark (dd, filebench, bonnie++) and test.

If FreeBSD supports your hardware, even with 512 sector emulation, you should hit way higher than 15MB/s.

A 4-disk raidz2 seems like a bizarre setup, though. If you wanted a 4-disk setup with 2-disk redundancy, why not set up a pair of 2-disk mirrored vdevs like this:

pool: tank
vdev1: mirror (disk1, disk2)
vdev2: mirror (disk3, disk4)
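In zpool syntax that layout is a one-liner (disk names hypothetical):

```shell
# Two 2-way mirrors striped together: same 2-disks-of-4 usable
# capacity as a 4-disk raidz2, but faster and easier to grow,
# since you can add more mirror pairs later.
zpool create tank mirror disk1 disk2 mirror disk3 disk4
```

The trade-off vs raidz2 is that the pool only survives two failures if they land in different mirrors, but resilvers are much quicker.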

devilmouse
Mar 26, 2004

It's just like real life.
Capitalize your M?

edit: vvv Foiled! I'll check the syntax when I get home.

devilmouse fucked around with this message at 02:26 on Dec 29, 2010

devilmouse
Mar 26, 2004

It's just like real life.

Combat Pretzel posted:

Anyone of you running OpenSolaris/OpenIndiana in a VM as a file server? What sort of performance are you getting out of it using CIFS?

Anyone?

I might try it today or tomorrow. Right now I'm deciding between running Solaris as the host OS with Virtualbox (I haven't played with 4.0 yet...) providing the VM to other OSs or running ESXi as the host with Solaris for ZFS inside that.

In other news - why would Win Vista have such poo poo read performance from a CIFS share (provided by aforementioned Solaris)? It writes to it over gigabit at 60-80MB/s but only reads at 10MB/s? The internal disk benchmarks are well above saturating gigabit speeds but the poor windows box can't read it for poo poo.

devilmouse
Mar 26, 2004

It's just like real life.

Combat Pretzel posted:

Here's some data from my fiddling:

Interesting stuff. I ended up spending most of yesterday screwing around with it, and my results weren't that far off from yours. ZFS ran at about 1/2 to 2/3rds of the speed when I had Solaris installed as a guest under ESXi. Rather than deal with the hassle of it, I decided to just slap Solaris Express 11 on it and virtualize from VirtualBox. I don't have to run any Windows stuff, thankfully, just a few linux/BSD instances to test stuff for work.

I'm running it on a Xeon X3440 with 8GB of ECC on a Supermicro X8SI6-F, getting 80MB/s reads and 100MB/s writes to the CIFS share, which is plenty. Internal disk benchmarks on a 6x Samsung F4 raidz2 are between 350MB/s and 450MB/s.

Everything seems to be up and running now and I'm just letting it run random break-in tests to make sure everything's fine. The only weird thing that's happened so far is a 3 beep tone in the middle of the night that woke me up. But there was nothing in the system or motherboard logs, so I'm half wondering if I didn't just dream it.

devilmouse
Mar 26, 2004

It's just like real life.
Other than the 10-15% perf boost from using an appropriate ashift of 12, is there any downside to using ashift=9 with 4K drives? My Solaris Express 11 server's been humming along fine without changing the ashift.
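For anyone wondering what their pool got created with, you can check without touching anything (pool name is a placeholder; zdb output format varies a bit between ZFS versions):

```shell
# Dump the cached pool config and pull out the ashift per vdev.
# ashift=9 means 512-byte sectors were assumed at creation;
# ashift=12 means 4K-aligned. It's fixed per vdev at creation time.
zdb -C tank | grep ashift
```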

devilmouse
Mar 26, 2004

It's just like real life.

movax posted:

Nope, just performance. I'm probably going to re-roll with OpenIndiana; how do you like Solaris Express 11?

It's been fine so far. It was a toss-up between going with Solaris + VirtualBox (with FreeBSD and Linux guests) or ESXi as a base, and I ultimately went with Solaris because I didn't have to run Windows as a guest (plus, I hate dealing with pass-through and sharing the datastore between guests).

It's got 6x 2TB Samsung F4s running in a raidz2 and the benchmarks put it around 350-450MB/s with ashift=9. With ashift=12, I got a bit more perf, but didn't really feel like it was worth doing sketchy business with a recompiled zpool binary.

devilmouse
Mar 26, 2004

It's just like real life.
I've been using FreeFileSync on windows and it's been totally capable. It has a service you can run or you can run it manually. http://sourceforge.net/projects/freefilesync/

devilmouse
Mar 26, 2004

It's just like real life.

Lowen SoDium posted:

Is there any kind of virtualization software that you can run on top of FreeNAS?

VirtualBox mostly runs on FreeBSD, so it should run on FreeNAS as well.

http://wiki.freebsd.org/VirtualBox

devilmouse
Mar 26, 2004

It's just like real life.
Anyone using 3TB drives in their Norco 4020/4224? Came across this post and now I'm a bit panicky over adding a few 3TB drives to mine.

http://wsyntax.com/cs/killer-norco-case/

devilmouse
Mar 26, 2004

It's just like real life.

DNova posted:

Came here to post this. What current draw are your 3tb drives rated for? You can easily eliminate the MOSFETs as they said in the article.

Haven't ordered them up yet. I was just looking at my dwindling free space and was considering whether it was time for an upgrade.

devilmouse
Mar 26, 2004

It's just like real life.

FISHMANPET posted:

It was posted here a while ago that the Norco 4224 had some problems with 3TB drives, does anyone know if there was an update on the status of that?

Supposedly, 3TB WD Red drives were fine in the 4224 because their power requirements were low enough not to cause problems. I haven't yet tried though as I've still got 6 bays left in mine to fill.

devilmouse
Mar 26, 2004

It's just like real life.
And just like that I pause on pulling the trigger on 6x 6TB Reds to see how these shake out.

devilmouse
Mar 26, 2004

It's just like real life.

G-Prime posted:

Specifically labelled as archival drives. I wonder if that means they have a lower listed MTBF and aren't designed for a large number of power on hours.

The MTBF on the Seagates is 800k hours vs the Reds' 1M, so yeah, 20% lower, but scarier is the difference in load/unload cycles: 300k for the Seagates vs 600k for the Reds.

http://www.seagate.com/files/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/archive-hdd-dS1834-3-1411us.pdf

vs

http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800002.pdf

devilmouse
Mar 26, 2004

It's just like real life.
What's the current "best" platform for ZFS? It's finally time I upgrade my old x86 Solaris Express install from back in '09 or '10 but I haven't paid much attention to ZFS in the meantime. The only real requirement is that I can run Linux VMs on it.

OpenIndiana? FreeBSD? Is it finally "good enough" on linux?

(My desire to run a full-blown ESXi install is not terribly high because :effort:)

devilmouse
Mar 26, 2004

It's just like real life.
Cool, just wanted to make sure nothing NEW burst on to the scene. I'll probably end up back in the warm embrace of FreeBSD just because that's what I've always been the most comfortable with in terms of *nixes.

Maintaining that Solaris Express install was such an exercise in hilarity whenever I had to apply security updates or whatever. The window between when they made the "Community" version and when they stopped supporting it was so brief, but I fell into it anyway.

devilmouse
Mar 26, 2004

It's just like real life.
Some people will find this exciting! https://github.com/OpenZFS

devilmouse
Mar 26, 2004

It's just like real life.
It's on github as of today.

devilmouse
Mar 26, 2004

It's just like real life.
Is there a good "setting up freenas for the first time" guide out there? I'm currently burning in all the disks and am comfortable with ZFS, but is there an accepted set of best practices (set up weekly scrubs, set up email notifs, here are the alerts and tunables you care about, etc) for freenas specifically?
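For reference, the usual housekeeping people mean boils down to a few recurring jobs (in FreeNAS you'd schedule these through the GUI rather than raw cron, so treat this as the idea rather than the literal setup; pool/device names are placeholders):

```shell
# Weekly: scrub each pool, staggered away from SMART long tests
zpool scrub tank

# Daily-ish: check pool health; prints "all pools are healthy"
# or the details of whatever is degraded
zpool status -x

# Monthly: long SMART self-test on each disk
smartctl -t long /dev/ada0
```

Pair that with email alerts on scrub/SMART failures and you've covered most of what a "best practices" checklist asks for.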

devilmouse
Mar 26, 2004

It's just like real life.
Matt's the main dude behind ZFS, going all the way back to its start.

Namedrop: we also sat next to each other in CS classes. He was much smarter than me :eng99:

devilmouse
Mar 26, 2004

It's just like real life.
Jeff started another company with a few of the Sun guys, iirc. But to answer your original question, yeah, Matt is at Delphix (where the CTO was one of the other Sun Skunkworks guys who did DTrace and a bunch of other stuff). I think most of them left Oracle 10+ years ago.
