BlankSystemDaemon
Mar 13, 2009



Good <time of day>, sirs, madams and other denominations. After building a new awesome system in a new awesome case, I've discovered that by mounting the SSD upside-down in the lowest 2.5"/3.5" bay of my case, I can now mount up to eight 3.5" harddrives, and I've therefore decided to let my computer be not only a workstation but also a filestation for the rest of the house. My current setup consists of a Synology DS210+ with the newest DSM sharing stuff over iSCSI, but I've run into a major problem: diskspace. I've got 2x 2TB WD20EVDS SATA drives running in RAID1 and have just about 148MB left.

Having researched around a bit, I've come to the conclusion that a new 4-bay NAS would cost me as much as an 8-port SATA RAID controller, so I've decided I'd rather buy one of the latter and be able to expand my storage.

What I'm looking for: a hardware RAID PCIe x4 card (or something to that effect) with 8 ports for future expansion, as my motherboard only has two PCIe x16 slots (one with the graphics card, the other empty) and one PCIe x1 slot (in use by my audio card). It should be able to do background initialization and possibly RAID migration, since I'm planning on starting out with RAID5 but might go to RAID6 because I'm pretty paranoid about losing data (even if I have off-site backup, it's rather slow on my measly 10/2Mbps connection). Furthermore, I'd really like it to use something like a mini-SAS connector breaking out to 4x SATA connectors, as that'd make cable management much easier - and it'd look better too.

I've looked at an Adaptec RAID 3805, but I've heard from several people that Adaptec cards have some problems. Furthermore, I haven't found the WD20EVDS on their supported-drives list, so I don't even know if it'll work.

Could you please recommend a good controller that matches (or as close a match as is possible) my requirements?

If it's of any use, my entire system specs are listed here.

BlankSystemDaemon fucked around with this message at 17:49 on Mar 4, 2011


BlankSystemDaemon
Mar 13, 2009



Longinus00 posted:

If you're truly paranoid you'll want to get a backup raid card (losing your raid card a few years down the line and trying to find another is no fun) and some spare hds. You might also have to get "raid approved"/enterprise hds for use with your card, the WD20EVDS is "consumer", but someone who's more knowledgeable will have to chime in on that.

Probably not a bad idea - we'll see what price I can grab a card at. Like I said, I do have off-site backup.


Hok posted:

Cheapest option is to get onto ebay and pick up a Dell Perc6i card, there's heaps for between $100-$150, then just get a couple of SFF-8484 to 4 x SATA Cables.

Having to modify a card by applying stuff to pins, flashing it, and providing cooling and airflow worries me a bit - I have 6 fans mounted in my case, but they're low-rpm (58CFM@1350RPM, running at ~990-1050RPM), and while they keep my computer cool, I'm not sure it's enough to keep a Perc6i cooled down properly (I used to work for Dell and seem to recall it being mentioned that that card gets HOT). And since I'm in Denmark, shipping will undoubtedly be expensive if I get it from eBay or similar.


I'd really just prefer a card that I can buy and get a warranty on.

BlankSystemDaemon
Mar 13, 2009



Synology (DS211+ in my case) with DSM3 is as user-friendly as it can get. I'm very happy with mine, except that I've filled it up and can't easily expand the diskspace because it's a two-bay system with 2x WD20EVDS in RAID1.
The disadvantage of Drobo is that their custom RAID (I think they call it BeyondRAID) cannot easily be moved to another system, as no other computer understands it - so if the unit fails, you'll have to get another Drobo to restore your data.

BlankSystemDaemon
Mar 13, 2009



what is this posted:

Expanding via eSATA to DX5

Yes, but I can't add the harddisks to the volume I already have - I'd have to have another volume, which means another iSCSI target. Thanks for reminding me, though - it's definitely a solution I'll have to look at again. The DX5 is definitely cheaper than the RAID controller I was looking at, and looking at the feature set, I can get everything I need from it except a single volume.


lostleaf posted:

Questions about synology ds411j or ds211j. Are you able to plug in an external drive and copy specific files/folders to and from the nas to the external drive using the web interface? Is the raid5 in synology similar to mdadm where I can plug the disks into a ubuntu machine and recover my data?

Yep, you can. I often do this when I have my camera and just need to grab a few images as my NAS is more easily accessible than the back of my computer.


jromano posted:

NAS for small office.
DSM 3.x for Synology integrates with Active Directory and user/group permissions, and you can back up to an external disk (for off-site storage) easily. Automated backup (to various services) is either there already (I'm not at home to check at the moment) or coming in an update. And you can easily use a Synology to back up any computer to (via Time Machine, Windows Backup or rsync). A DS210+ like mine is actually meant for small offices, so you could get one of those?

BlankSystemDaemon fucked around with this message at 21:26 on Mar 8, 2011

BlankSystemDaemon
Mar 13, 2009



zelah posted:

So I picked up that synology ds411j and I got everything set up how I want it backup/homenetwork wise, but I'm completely lost when it comes to FTP access. I'd like to have a folder where if I need to get a folder to someone I can just send them a link for them to grab it. The tutorials I've seen are all "oh just enable ftp on the synology (okay done) and then just direct it to your host the end". Can anyone recommend a good step by step handholding tutorial on how to go from step 1 to completion on setting that up?

Two options:

1) Set up individual port forwarding (port 21 for FTP) from WAN to LAN (internet to local network) on your router.

2) Connect your NAS to a free port in your router, then set up a DMZ so people outside can reach the router and the router can reach the outside world, while you are able to connect to it as if it's still part of the LAN. That's how I do it, and it works.

Both options are covered in the manual for your router, which - unless it's an arcane jumble of components assembled of its own volition - you should be able to find on the glorious internet. :eng101:

BlankSystemDaemon fucked around with this message at 22:09 on Mar 20, 2011

BlankSystemDaemon
Mar 13, 2009



Finally have my NAS set it up the way I want it, so here's what it is if anyone wants to use it as a recommendation:
  • An HP N36L Microserver with 5.5TB in a zfs zraid1 (click here for details)
  • FreeNAS 8.0.2
  • An HP NC112T NIC (important, as the on-board AMD RS785E/SB820M chipset-based NIC has some problems with the FreeBSD driver)
  • An HP RAC controller (an IPMI solution with thermal monitoring and virtual KVM, meaning I don't need a screen, keyboard or mouse to turn my server on/off and manage it).
With LAGG I get about 1.8Gbps - as near as I can measure it, em0 (the HP NC112T NIC) runs at 1Gbps all the time, while bge0 (the on-board NIC) runs at around 80% efficiency with drops after extended use. Even with that in mind, seeing line-speed (97-98% utilization on 1Gbps cat5e with 9k MTU) over SMB sure is nice, and I'm not even fully utilizing the disks' max I/O, as that's well over 200MBps (benchmarked with dd and zpool iostat).

I'm running out of ports in my switch though, so I had to order a D-Link DGS-1100-16 16 port managed switch. :smith:
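For anyone wanting to replicate the LAGG setup, here's a minimal sketch as a plain-FreeBSD rc.conf fragment (FreeNAS does the same through its GUI). The interface names match mine; the address, netmask and loadbalance protocol are assumptions you'd adapt to your own network:

```shell
# /etc/rc.conf fragment - hypothetical address, adjust to your network
ifconfig_em0="up mtu 9000"
ifconfig_bge0="up mtu 9000"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto loadbalance laggport em0 laggport bge0 192.168.1.10 netmask 255.255.255.0 mtu 9000"
```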


oMa Whackster posted:

:words: about zfs setup
I found that the biggest problem was my NIC (as mentioned above), after I took care of such things as too little memory (zfs is very hungry, I went with 8GB for a 5.5TB zraid1, but I was told a typical guideline is 1GB for every 1TB of diskspace across your disks in the zraid, including parity drives).
What CPU is in the machine? The one in my HP N36L is only 1.3GHz with hyperthreading but does SMB and iSCSI with no performance problems (97-98% utilization on 1Gbps cat5e with 9k MTU, as mentioned earlier).
Also, are you using iSCSI file target or disk target? There's some performance loss on file targets, depending on what you're doing.

I set up FreeNAS on a SuperMicro 2U server for a friend of mine, and it works just fine as iSCSI-attached storage for ESXi (no problems with performance). Have you thought of looking at FreeNAS rather than NexentaStor?


kloa posted:

Do I need to RAID0 these to make it a single disk, or can I get away with leaving them 2x2TB and make it still work the way I want it to?
RAID0 is a really bad idea unless you don't care about data redundancy - if one disk fails, the entire array fails.

kloa posted:

Will this work as-is or do I need to add some conversion/encoding somewhere along the way?
Well, what format are the files in? As far as I remember, XBMC plays most things as long as the hardware it runs on can decode them (if you're playing 1080p, you need more power than for a standard-resolution dvdrip).

kloa posted:

Also, .iso/.rar files should work fine right? Not having to switch out discs would be nice so I can easily beast through TV seasons!
Again, this depends on XBMC, but as far as I recall it doesn't do ISO mounting (this may have changed since I used it). As for rar files, I don't know if it supports on-the-fly decompression; I suspect it doesn't, though. There's not much point in keeping movies/series in rar files anyhow - they're already compressed by the video and audio codecs used.
Since you have the discs, why not rip and convert them to h264/aac? XBMC will play that fine.

BlankSystemDaemon fucked around with this message at 16:32 on Dec 26, 2011

BlankSystemDaemon
Mar 13, 2009



EDIT: poo poo, doublepost. Disregard this.

BlankSystemDaemon
Mar 13, 2009



Steakandchips posted:

More microserver freenas :words:
1: It sounds exactly like the issues I was experiencing with the built-in NIC (what I call the on-board NIC in my previous post, which might've confused you into asking your first question), even down to the speed you're getting (though I saw 80MBps with 9k MTU).
2: None that I've experienced so far, other than that mounting it requires complete removal of the motherboard tray. The driver is already compiled into the kernel.
3: I think any gigabit NIC will do, as long as it uses the em driver (check HARDWARE on the manpage I linked). The only reason I got the HP NC112T (along with the RAC) was that it was sold cheaply at the same place where I bought the Microserver.
4: As mentioned previously, it uses the em driver - works out of the box on anything that even thinks it looks vaguely like BSD in a mirror.

How much memory do you have, and why not run zfs zraid1/2? Surely that's the point of zfs/freenas to begin with.
Make sure to use jumbo frames (under Add New/Edit Interface, for whatever NIC you end up using including bge0, you can find a field called Options where you add "mtu 9000"). Also make sure to set the same MTU on your desktop (Google for how-tos, it's easy). It improves performance quite a lot on large file transfers.
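For reference, the equivalent from a FreeBSD shell (the interface name here is mine - use whatever yours is; FreeNAS just wants "mtu 9000" in that Options field):

```shell
# set 9k jumbo frames on the NIC (switch and desktop must support it too)
ifconfig em0 mtu 9000
# verify it took
ifconfig em0 | grep mtu
```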

Small note: Haven't I seen you on synirc somewhere? If you have questions, just stalk me in #letsplay or #pcgaming.

BlankSystemDaemon fucked around with this message at 18:27 on Dec 26, 2011

BlankSystemDaemon
Mar 13, 2009



That'll do just fine. It's the same NIC, mine is just HP branded.

Steakandchips posted:

I have 8 gigs of non-ECC 1333 ram in the Microserver, and 16 gigs on my desktop.
Well, that's enough then. I was just wondering if you might be running with the default 1GB which would be a bit too little.

Steakandchips posted:

Regarding why I do not run ZFS zraid 1/2
Fair enough. Do you intend to update your BIOS (scroll down to find a section on how to do this if you want to), using the Russian one floating around, in order to get 5 disks in there? The SAS connector only supports 4 disks, and the other SATA port only runs in IDE legacy mode (i.e. no possibility of adding it to a zraid, as that needs AHCI, and it runs at slower speeds).

Steakandchips posted:

Regarding Jumbo frames
I see. Well, it's not like you can't have computers on your network without it - it'll just benefit the computers that are configured for it.

Steakandchips posted:

Thanks very much man, really appreciated. And yes, you have a very keen eye, I am indeed in synIRC and have added letsplay to my channels (I am already in pcgaming and the GWS related ones and I am currently known as Stockingsandchips for the holiday period!)
You're very welcome. I probably should've added that I'm in euroland, so I might be hard to get in touch with at times on account of being in the future.


Also, if anyone needs an easy way to benchmark their disks' write and read speeds respectively:
code:
time dd if=/dev/random of=/path/to/pool/dd.tst bs=1M count=1K
time dd if=/path/to/pool/dd.tst of=/dev/null bs=1M
Depending on your drives and where/what you're doing with your data, you might add conv=sync to the write test as well (read the manpage for more info).

And remember, on FreeBSD, RTFM means read the fine manpages.

EDIT: Wheelchair Stunts is right, /dev/random is better, both because it functions like /dev/urandom on Linux in that it's non-blocking and pseudo-random - giving a better test of compression and caching (especially if you've got log and cache on three separate SSDs: two for a mirrored log, one for cache) - and because on FreeBSD, /dev/urandom is actually just a symlink to /dev/random for the reasons stated before. Thanks for reminding me of this.

BlankSystemDaemon fucked around with this message at 14:26 on Dec 27, 2011

BlankSystemDaemon
Mar 13, 2009



I can't imagine the 0.2GHz difference between the N36L and N40L doing a whole lot for the performance of the microserver.

Obviously Erratic posted:

Possible onboard NIC issues.
Well, your speeds are a lot lower than the speeds I was getting (and you don't describe drops or anything else like Steakandchips and I have both reported?). Furthermore, the Linux drivers for the N36L work just fine as far as I know.
Try sticking FreeNAS on a 2GB USB disk instead of Win2008R2, since I imagine you want zfs in the first place - and it's always possible to buy an Intel 82574L-based NIC, which should solve any NIC issues on FreeBSD at least.

Have you tried doing:
code:
time dd if=/dev/random of=/path/to/pool/dd.tst bs=1M count=1K
Followed by:
code:
time dd if=/path/to/pool/dd.tst of=/dev/null bs=1M
If you do it, you might also want to open a second TTY and do "zpool iostat 1" as that'll give you real-time read and write speeds of each drive in your pool, refreshing every 1 second.


Tangentially related: if any of you don't have an ODD, you can fit two 3.5" disks in the ODD bay and use the eSATA connector on the outside for a 6th disk (which will also be AHCI-enabled by the BIOS update mentioned on the last page). With 6x3TB in ZRAID1, that's +10TB of usable diskspace.
I've also seen a build report where the guy put in an AHCI-enabled SAS connector and put 4 disks in the ODD tray - which would give you +15TB (though the cost of building something like that, with HDD prices being what they are, isn't easy to comprehend). I wish I could remember where I found that link..
I can't remember the exact numbers in the base-10 vs base-2 calculation, so I ballparked them.

EDIT: ↓ Nope, it had pictures and everything - and it was four 3.5" drives (I know 4x 2.5" will fit in an ODD tray, but those drives have less capacity and usually slower RPMs). This was all about maximizing poolsize.

BlankSystemDaemon fucked around with this message at 23:12 on Dec 27, 2011

BlankSystemDaemon
Mar 13, 2009



Steakandchips posted:

Yep, seen that one before. He was laying them on top of each other, horizontally. So four 3.5" fit in there. It was a tight squeeze, I remember that.

E: Found it:

http://hardforum.com/showpost.php?p=1037301406&postcount=311

That the one?
I recall it being on a blog, but I may be wrong. Also, those images aren't working for me at least (they're probably hosted on tinyimg).

Obviously Erratic posted:

Thanks, I haven't tried the DD commands or the iostat command just yet - this was just real-world data transfers (mix between large and small files) and I seemed to be getting that speed pretty reliably.

I'm copying FreeNAS now and will give it a try - I really wanted to use Ubuntu as an all-in-one type server. File Server (ZFS) + SabNZBd + Sickbeard/Couchpotato so was steering clear of FreeNAS for the moment.

I'll report back soon!
The dd commands and iostat will give you real-world speeds on the disks themselves.
How much memory do you have? Remember the golden rule of 1GB for every TB you have (so you'd need 10GB; unfortunately the HP Microserver in all flavors only supports 8GB maximum).
FreeBSD is the way I'd recommend going in that case. Sure, when it comes right down to it, choosing between the various Linux and Unix implementations is a matter of taste, so go with what you decide - just know that FreeNAS is only a tiny part of what FreeBSD can do.
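The golden rule works out as a trivial multiplication; a throwaway shell calculation with example numbers (five 2TB drives, parity included):

```shell
# 1GB of RAM per 1TB of diskspace in the array, parity drives included
drives=5
tb_per_drive=2
echo "$((drives * tb_per_drive)) GB RAM suggested"
```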

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic and Steakandchips posted:

:words: about Microserver.

Yep, that's exactly the issue at hand with the NIC and FreeBSD. If you were to run those dd commands and do the math, you'd probably see +200MBps transfer rates on your disks due to the way zraid1 works (at least that's what I get).

Oh man, that image. That's exactly what I remember.

Well, 1080p isn't more than ~25Mbps (at least not with :files:, bluray disks are usually at 50Mbps, and the highest broadcast 1080p is from DVB-S2 at 35Mbps) - but the problem isn't as much the ability to saturate gigabit, it's that the connection will drop and it's really annoying having file transfers all over the place.

Regarding 9k jumbo frames: any device which is capable of it should be configured to use it - the devices that support it will work just fine, and any other device won't have any problems running at 1500, which is the default.
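To put numbers on the gigabit-saturation point (back-of-envelope, assuming the ~25Mbps per 1080p stream mentioned above):

```shell
# how many ~25Mbps 1080p streams fit in a 1000Mbps link
echo "$((1000 / 25)) simultaneous streams"
```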


BotchedLobotomy posted:

Can the PSU handle all those drives without an issue? Its a little baby one isnt it?
As far as I know, yes - remember that the harddisks spin up staggered, so there isn't a huge drain on the PSU.


Obviously Erratic posted:

Benchmarking zfs with dd
Why would zpool iostat count the parity drive? You won't even be able to use the space on that drive anyway.
Are you only getting 50MBps on the dd transfers on the server itself? Try adding "conv=sync" to the first line.
Do those drives have 4k sectors? If so, you need to account for that when creating your zraid1.
Finally, what drives do you have in your desktop?


Also, I don't get why you guys are satisfied with just getting half speeds.
I demand maximum performance from what I've bought and built. :black101:


Thinking about it... with casemodding (e.g. making the shell of the microserver bigger), I think you could fit up to 10 drives in all, utilizing the internal SATA port and the external eSATA port plus a JBOD SAS controller of some description (I don't think PCIe x1 has enough bandwidth for 4 disks though, so you can't fit 14 disks in it).
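Back-of-envelope on why PCIe x1 is tight (assumed numbers, not measurements: ~250MBps usable on a PCIe 1.x x1 link, ~120MBps sequential per modern 3.5" drive):

```shell
# aggregate sequential demand of 4 drives vs one PCIe 1.x x1 lane (MBps)
demand=$((4 * 120))
lane=250
echo "demand ${demand} MBps vs lane ${lane} MBps"
```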

BlankSystemDaemon fucked around with this message at 10:16 on Dec 28, 2011

BlankSystemDaemon
Mar 13, 2009



There is, but 2.5" drives don't have enough platters/disk density to compete with 3.5" drives, and to fit it you'd need a PCIe x4 4-port JBOD SATA/SAS controller (plus a breakout cable if SAS).

Also, if he's not using the internal bays, why he doesn't just get 4x 3.5"-to-2.5" adapters and a 2x 2.5" drivebay that fits in a 5.25" tray, I don't know.
This guy must've been running FreeNAS 8.0.1-BETA2 or something earlier, as those had problems with istgt (the FreeBSD iSCSI daemon) that were fixed in -BETA3.

BlankSystemDaemon fucked around with this message at 11:44 on Dec 28, 2011

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

Unfortunately I'm running WD Green drives in both the desktop and server. I was able to hit and sustain around 80MB/sec when transferring about 300GB of 1GB files.
I built and tested a pool with ashift=12 for the advanced-format drives but didn't see any difference in performance, so am now using ashift=9.

My dd test on the server itself got 78MB/sec, however I didn't try it with the "conv=sync" option.

What does "diskinfo -v ada0" report for sectorsize? That should tell you whether they're 4k-sector drives or not - I believe they are, though, so just build the pool with ashift=12 if that's the case. It's good practice, and while there may be other factors limiting the speed right now, you definitely should be able to manage +200MBps if the pool is set up properly and everything is running as it should.

Did you run the dd test with ashift=12 or ashift=9?
Please don't do any benchmarking with file transfers from the server to your computer or the other way around, use the two dd commands for every test - that eliminates the NIC/router/switching/lack of jumbo frames/whatever else might be wrong completely.

The conv=sync option just pads short input blocks out to the input block size with NULs (or with spaces when combined with block/unblock - read the manpage).
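If diskinfo does report 4k sectors, here's a sketch of the era-appropriate gnop trick for forcing ashift=12 on FreeBSD 8.x. Pool and device names are made up, and this destroys data on the disks - double-check against the handbook before running it on anything real:

```shell
# report sector size; 4096 means Advanced Format
diskinfo -v ada0 | grep sectorsize
# create a fake 4k-sector provider on top of the first disk
gnop create -S 4096 ada0
# build the pool against the .nop device so zfs picks ashift=12
zpool create storage raidz ada0.nop ada1 ada2 ada3
# drop the gnop shim again; the pool keeps its ashift
zpool export storage
gnop destroy ada0.nop
zpool import storage
# confirm: should print ashift: 12
zdb storage | grep ashift
```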


Here are my speeds with 4x HD204UI 4k sector drives
I'm using /dev/zero because the CPU isn't very fast when it comes to generating random data; I've added the conversion to MBps.
code:
[debdrup@nerd-nas] ~# time dd if=/dev/zero of=/mnt/storage/dd.tst bs=4096 count=1M conv=sync
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 35.596464 secs (120657133 bytes/sec, 115.07 MBps)
[debdrup@nerd-nas] ~# time dd if=/mnt/storage/dd.tst of=/dev/null bs=4096
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 8.440626 secs (508844631 bytes/sec, 485.27 MBps)
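The MBps conversion is just bytes/sec divided by 2^20; a one-liner for anyone who wants to redo it on their own dd output (the figure here is my write test's bytes/sec):

```shell
# dd reports bytes/sec; divide by 1048576 (2^20) for MBps
echo 120657133 | awk '{printf "%.2f MBps\n", $1 / 1048576}'
```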

BlankSystemDaemon fucked around with this message at 12:46 on Dec 28, 2011

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

Those numbers look good, yeah?
Those numbers are just fine - are you still getting slow NIC file transfer speeds between the server and your desktop? If so (and you want to stay with Ubuntu), try upgrading to the latest firmware and drivers for your NIC.

Wicaeed posted:

Let's say I was semi-curious about fooling around with some ZFS storage, but only had an old as bones PowerEdge 2600 w/1TB worth of 10k U320 SCSI Harddrives in it, could I possibly install something that would run ZFS on it?

The RAID controller is a PERC4/Di and the drive controller is an LSI Logic 53C1030, so assuming those controllers can use disks in JBOD (not all PERC controllers can, in which case you just add each disk as a separate single-disk RAID0), it should work.

Corvettefisher posted:

Has Freenas 8 gotten any better? I really like 7, but I need some DRBD support soon and last time I tried 8 isci kept loving up and esxi would not attach it. I know openfiler has it but would like to stay with Freenas unless they are keeping that terrible buggy/slow web interface they put in 8
FreeNAS 8.0.2 is every bit as stable and fast as you could want (even the webui is no longer slow). iSCSI works (the problem you had got fixed in 8.0.1-BETA3, as I recall). The only thing missing is the extendable add-ons FreeNAS 0.7 had, but that's supposedly coming in FreeNAS 8.1 or 8.2, which should be out shortly after New Year's.

BlankSystemDaemon fucked around with this message at 11:00 on Dec 29, 2011

BlankSystemDaemon
Mar 13, 2009



Scuttle_SE posted:

I'm looking into getting the Synology DS411+II, looks like a solid machine with some neat features (iSCSI, rsync-target and stuff).

From what I can read the box comes with 2GB of ram. Is this upgradeable? Would be nice to have 4GB right from the start...

Where did you read that it comes with 2GB? They were lying bastards.
It comes with 1GB of SODIMM DDR2 PC2-6400 800MHz memory, but can be upgraded to 2GB, judging from various posts on Synology's forum. Synology DSM 3.2 is plenty feature-rich (there's almost no difference, if any, between hardware versions except the number of drives they can handle, CPU and memory).

I guess the only way to get more out of it would be to install a Linux or Unix flavor on it (since it's a dualcore amd64-capable Atom D510 at 1.8GHz).
However, if you do that you run into the problem that you really can't run zraid1 on it, as 2GB is too little memory (unless you don't plan on putting very big drives in it): 1GB for every 1TB of diskspace, including parity drives.
There are of course other software raid solutions, but you'd have to google something like UnRAID or whatever else you end up finding.

EDIT: ↓ I'm well aware of it as I have two Synology boxes here at home that I use for network backup.

BlankSystemDaemon fucked around with this message at 15:06 on Dec 29, 2011

BlankSystemDaemon
Mar 13, 2009



Scuttle_SE posted:

Hmm... the CPU is an Atom D525, and according to Intel's website it is capable of 4GB - or am I completely misreading stuff?

Yes, but the motherboard is also a factor in how much memory is supported.

I still don't understand what you need the extra memory for, though.

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

My network speeds over GigE seem to sit at around 80-90MB/sec - the highest I saw it go was about 117MB/sec, but I was able to sustain multiple copies at a time at around 80MB/sec, which is WAAAAAY better than I was getting before!

I'm quite happy with Ubuntu and the ZFS performance so far, though I think I do need to tweak a little more. Any tips on upgrading the firmware & drivers for the N40L's NIC?

Now try using copyhandler or a similar program where you can adjust the buffer size for network transfers.

With regard to drivers - it doesn't look like there are any available on the drivers page, unless you're meant to use the N36L drivers, which I can't really believe on account of the two using separate chipsets, CPUs and possibly different motherboards.
Contact HP through email? I'm sure they can answer a simple query as to whether drivers are available and where.

BlankSystemDaemon
Mar 13, 2009



Obviously Erratic posted:

Has anyone had any experience or done any benchmarking with a ZFS cache drive SSD in the Micro server?
Considering getting one of the Nexus Double Twin brackets to slot another 2TB in and then adding in a smallish SSD but not sure whether I'd be best to use it for cache or for OS (don't have any issues running from USB on mobo yet)
Well, let's distinguish between cache and zil first (because zfs does, so should you). Cache is for reading, while the zfs intent log (referred to as zil or log) is for writing. Cache can run on one device, and it typically doesn't need to be much bigger than 4-8GB depending on how much memory you have. Log devices, on the other hand, need to be a bit bigger, but should also never run without a second device functioning as a mirror (since any writes stored on the log device will be lost if the system loses power, potentially corrupting data). For both, though - go for SLC SSDs, as they tend to be much faster in disk I/O.
Moreover, the HP N36L's CPU, for example, is a tad slow (running at only 1.3GHz, though the HP N40L isn't much faster at 1.4GHz) for tasks which aren't threaded (like SMB/CIFS sharing, and some other tasks).
Really, it'd be easier to help you if I knew specifically what you wanted to improve on your server (provided you've already purchased a separate NIC and are running load-balanced link aggregation; if not, you should look into that first so your ethernet connection doesn't become the next bottleneck - if it isn't already, should you be using FreeBSD with the standard bge driver).
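For reference, attaching the two device types looks like this (pool and device names are made up; note the mirror on the log, per the power-loss warning above):

```shell
# read cache (L2ARC): a single device is fine, losing it is harmless
zpool add tank cache ada4
# intent log (zil/slog): mirror it so a dead SSD can't eat in-flight writes
zpool add tank log mirror ada5 ada6
# check the resulting layout
zpool status tank
```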

Now for a few links for anyone messing about with zfs:
Here is a page discussing best practices for using zfs. More specifically on improving your zfs performance, here is a tuning guide for zfs (keep in mind the "Tuning is evil" section).

As to your question: I have done some zfs benchmarking, and the best speeds I've seen were on 5 Hitachi 15k SAS-connected 450GB drives running in zraid2, with one SSD for cache and two SSDs for zil, over a 10Gb fiber connection (the server was an iSCSI-attached datastore for a VMware ESXi host) - but it varies a lot based on sector size, what data you're moving (and your buffers on said file transfers - for example on Windows the default is 512, but if you're running jumbo frames you want to raise it), whether your network is set up properly (with 9k jumbo frames and load-balanced link aggregation), and whether there's a break in one of the pairs of the cat5e/cat6 cable you're using (I only add this because it was the reason my TV was always slow to connect) - just to mention a few things.

BlankSystemDaemon fucked around with this message at 14:37 on Dec 31, 2011

BlankSystemDaemon
Mar 13, 2009



Just export your config, do a clean install and import your config. Worked just fine for me.

BlankSystemDaemon
Mar 13, 2009



Another option for harddrives is the Samsung F4EG 2TB. I've got 4 of those running in a 4k-sector zraid1, and I'm getting line-speed over SMB on both read and write (~110MBps). Not that I expect this to affect you, but there's an old firmware version that was on drives sold before January 2011 which can experience dataloss if you enable SMART tests on the drives. The silly thing is that the newer firmware doesn't reflect a version change, so you can't easily tell which drives you have - so it's not a bad idea to flash them as soon as you get them (info is here). Luckily, on the microserver you don't need to move the drives around; you can set them in the BIOS with the same result.

For zraid1, there's a golden rule about memory: 1GB for every 1TB of diskspace you have in the array (including parity space).

Also, while guest sharing is fine, if you're not the only one on the network or plan on bringing the server anywhere, it might be worth it to use local user authentication: one account with full read-write-execute permissions, and one anonymous account that only has read access.

What DrDork said about linux experience is true (although as a long-time BSD user, I have to point out that linux is not unix), but you'll be better off if you go read the FreeBSD handbook. At least familiarize yourself with man through the manpages that are available on freebsd.org.


Additionally, while I remember it: buy another NIC. The bge driver that handles the HP ProLiant NC7760/NC7781 embedded NIC in FreeBSD has problems which will cause drops and bad performance (around the speeds DrDork mentioned, so he might want to pay attention too). Anyhow, go to the manpage for the driver and check the HARDWARE section for any NIC you can easily find and buy, and use that. Personally I went with the HP NC112T (503746-B21), but anything based on any of the chipsets mentioned on that manpage will work fully (just ensure it supports 9k jumbo frames, as you'll want that if the rest of your network supports it).

Just noticed that you asked for memory, here are some that work.

EDIT: Fixed links, added more info. drat it, I seem to go over some of these things every time someone buys a microserver.

BlankSystemDaemon fucked around with this message at 10:04 on Feb 5, 2012

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

This is true. Personally, 50MB/s was fast enough that I didn't feel like spending yet another $50 on a NIC. If you do decide to stay with the HP embedded NIC, do some trial runs moving files around in a manner that simulates how you'll actually use it, and see what happens. Stock, mine would give me ~100MB/s for the first few seconds, and then drop hard to ~30 with a lot of stuttering--which would be fine for transferring small files, but I move big ones around a lot. After some tuning I got the drop to settle at ~50, with no stuttering. There are a lot of guides for how to tune ZFS, and other than it being kinda a trial-and-error process that'll eat up an afternoon, it's not hard, or even strictly necessary. Do remember that (contrary to a lot of the guides) there is no reason to ever use vi unless you actually want to--use nano instead (built-in with FreeNAS) and save yourself the headache.
As a general rule regarding ZFS tuning, there's always this to remember:

ZFS Evil Tuning Guide [solarisinternals.com/wiki] posted:

Tuning is often evil and should rarely be done.

First, consider that the default values are set by the people who know the most about the effects of the tuning on the software that they supply. If a better value exists, it should be the default. While alternative values might help a given workload, it could quite possibly degrade some other aspects of performance. Occasionally, catastrophically so.

Over time, tuning recommendations might become stale at best or might lead to performance degradations. Customers are leery of changing a tuning that is in place and the net effect is a worse product than what it could be. Moreover, tuning enabled on a given system might spread to other systems, where it might not be warranted at all.

Yes, you've been able to tune the zpool so that it runs stably at less than half the performance you can easily expect with the hardware you have - but is 50bux really that much? Also, do note that while the NIC I recommended works, it's FAR from the only one that does. You could easily check the manpages/HCL and locate a gigabit pci-ex x1 NIC for perhaps as low as 10bux, certainly not over 20bux.
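For what it's worth, the handful of tunables those guides keep pointing at all live in /boot/loader.conf. A hedged sketch of what that looks like - the values below are placeholders, not recommendations, and as the Evil Tuning Guide says, the defaults are usually right:

```shell
# /boot/loader.conf -- example ZFS tunables (values are placeholders; the right
# numbers depend on your RAM and workload, and the defaults are usually correct)
vfs.zfs.arc_max="4G"            # cap the ARC so transfers don't starve everything else
vfs.zfs.prefetch_disable="1"    # sometimes helps on machines with little memory
```

Change one thing at a time and re-test, or you won't know which knob did what.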

BlankSystemDaemon
Mar 13, 2009



There's also the Ethernet section of the hardware notes for 8.2-RELEASE, which is what FreeNAS 8.0x is based on. FreeNAS 8.1/8.2 will be based on FreeBSD 9.0-RELEASE, but anything that's supported in 8.2-RELEASE is also supported in 9.0-RELEASE.

If you find a specific chipset that you think might work but it's not on the list, simply do a google search on "<chipset> freebsd" - it's very likely someone else has already been wondering the same.
Unfortunately Google's BSD search no longer exists, but just about anything you can wonder has probably already been asked in one form or another.

BlankSystemDaemon
Mar 13, 2009



FISHMANPET posted:

huge image.

That's nice and all, but put timg tags around that poo poo.

BlankSystemDaemon
Mar 13, 2009



Perhaps it would be smart to explain why RAID0 is never a good idea: when you run RAID0, it only takes one of the two drives failing for you to lose the entire array, so with two drives you're roughly halving the mean time between failures. Then, of course, there's the oxymoron of calling it RAID when there's no redundancy involved.
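To put a number on it, a quick back-of-the-envelope sketch (the 97% annual survival rate per drive is a made-up illustrative figure, not a spec):

```shell
# Probability a 2-drive RAID0 array survives the year: both drives must survive,
# so the per-drive probabilities multiply.
p=0.97   # hypothetical chance a single drive survives one year
both=$(awk -v p="$p" 'BEGIN { printf "%.4f", p * p }')
echo "single drive: $p, 2-drive RAID0: $both"
# prints: single drive: 0.97, 2-drive RAID0: 0.9409
```

And it only gets worse with more stripes - the survival probability is p^n, which drops fast.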

Now imagine trying to maintain a hardware RAID setup (ZFS wasn't an option, as it had to be a Windows-based server throughout) with four pci-ex cards with two SAS ports each (plus two SAS-to-SATA breakout cables), and having the headache of worrying about a controller dying (the controllers aren't in production anymore).
Thanks, previous freelance server admin, for setting up this fine nugget of a solution.

BlankSystemDaemon
Mar 13, 2009



Jolan posted:

So I shouldn't need a WOL-capable NAS? What retail NAS solution would you say suits my wants?
WOL is nice, but it's no match for out-of-band management (like Intel's IPMI, HP's iLO or Dell's DRAC - a system that allows you to turn your server on and off, and monitor basic hardware status like temperature, fan rpm and some other stuff depending on the server/solution in question) - and it can be had for servers as small as the HP MicroServer through an expansion card, and is standard on all Supermicro servers (as far as I know, anyway).

FISHMANPET posted:

What makes more sense is something that will spin down the drives, because a NAS appliance that isn't spinning drives isn't going to be using much power.
Well, you have to weigh spin-down against drive wear. Basically, spinning a drive down and up causes more wear than leaving it constantly spinning at a lower rpm (5400rpm drives don't take a lot of power when idle but not spun down, for example). So for a server that's constantly being accessed throughout a typical usage cycle (such as a day), you might have a period at night where the drives could spin down, provided they're not going to be needed, and then have them scheduled to spin up again before work starts.
However, if you're like me and have a server that's basically in use all day (and runs a torrent client at all times, since I use the disks as storage for my workstation, which only has an SSD), spin-down isn't an option.


And now for something completely different: DLNA on a server is down-right amazing. I realize now that the implementation of DLNA on my old TV sucked rear end, because the one on my Philips TV can stream just about all the content I have on my server - and just being able to access any content at a moment's notice with just one remote and no extra devices (Boxee Box or XBMC on an AppleTV) is so much better than I had imagined. If your TV supports DLNA 1.5 and your NAS supports it too, go use it (assuming you have a TV that does MPEG4/h.264 decoding (and not one of the Samsung TVs that require a very specific codec and profile) - but as far as I know most modern ones do).

BlankSystemDaemon fucked around with this message at 16:06 on Feb 29, 2012

BlankSystemDaemon
Mar 13, 2009



FISHMANPET posted:

Yeah, and when it comes down to it, an idle 5400 RPM drive just isn't pulling much juice. I've got my server with 13 hard drives powered off a 400 Watt PSU, and I'm sure it doesn't use nearly that much. Anybody with a kill-a-watt and a Synology type product care to weight in on power usage?
I have a DS210+; 24W is the maximum it reaches during boot and stress, and I remember testing a DS410+ which ran at 35W under stress (can't remember if I tested boot).

BlankSystemDaemon fucked around with this message at 17:16 on Feb 29, 2012

BlankSystemDaemon
Mar 13, 2009



So this is ReFS then?
Well, what harddisks are you getting? I was kinda hoping that I could switch to Windows as a server OS on my HP MicroServer, but if it's getting speeds that bad, it's not really an option.

I should install a trial of vmware workstation, add some scsi or sata disks and try it out myself to get a better feel for it.

BlankSystemDaemon
Mar 13, 2009



Drizzt01 posted:

What do you mean by what hard disks am I getting? I was planning on converting my current homeserver over but after testing it on this system (these are just extra drives) I am rethinking.

If you wanted to just mirror like windows home server does then you are fine as the speeds are pretty much what the disk speed is that you use.
I meant: what harddisks are you using in the system?

Also, I'm currently running a raidz1 on four disks, and I'd be looking to duplicate a setup like that with ReFS if it can do it.
I get 220MBps read and 130MBps write on my HP MicroServer running FreeBSD 9.0-RELEASE with ZFS v28.
However, the problem is the lack of proper library support and network indexing plus file versioning (while shadow copies are supported in ZFS through snapshots, it's not perfect and isn't performing as well as I had hoped).

I'll do the vmware thing though, and report what I find.

Edit: ↓ My apologies, for some reason those images didn't load for me.

BlankSystemDaemon fucked around with this message at 00:02 on Mar 2, 2012

BlankSystemDaemon
Mar 13, 2009



astr0man posted:

Microserver ZFS stuff
A few things:
  • FreeNAS8 on a HP MicroServer involves using the bge(4) driver for the on-board NIC, and unfortunately that specific driver doesn't work with that controller on FreeBSD. So like I've recommended before, get a third-party pci-ex x1 NIC (from Intel, 3com or whatever) and use that. Check the FreeBSD HCL and manpages (start by looking up em(4), the driver for a very good series of Intel gigabit NICs with 9k jumbo frames, auto-sensing and everything you could want).
  • If you have a HP MicroServer, you might also want to invest in an out-of-band management card (HP/retailers have it in stock under the model number 615095-B21) which allows you to monitor fans/temperature along with turning the server on and off, resetting it, and even using a virtual KVM (essentially a java applet which connects to the server and lets you control it from whatever machine you're currently on).
  • If you're looking for auto-expansion on ZFS when replacing drives in a pool, it wasn't implemented until after v15 (don't know when, exactly) - so unless you upgrade your pool in FreeBSD/FreeNAS manually, or wait for FreeNAS 8.1/8.2 which will be built on FreeBSD 9.0-RELEASE, you'll have to export then re-import your pool and let it resilver before you'll see the change in size (more documentation on this can easily be found by googling)
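For reference, a sketch of the commands involved - "tank" is a placeholder pool name, and this obviously needs the actual hardware (and the replaced, resilvered disks) to run:

```shell
# On pools new enough to have the autoexpand property, set it before swapping disks:
zpool set autoexpand=on tank
# On older pools, force ZFS to notice the larger disks with an export/re-import:
zpool export tank
zpool import tank
zpool list tank    # the SIZE column should now reflect the new capacity
```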

BlankSystemDaemon
Mar 13, 2009



Bonobos posted:

I looked at FreeNAS bcs it was brain dead simple to set up but I hear performance isn't so hot.
I wouldn't say that - I get ~200MBps read and 130MBps write, enough to fully saturate the server's NICs with 9k jumbo frames.

Nam Taf posted:

The FreeBSD bge(4) driver for the onboard NIC doesn't support Jumbo Frames or WOL. It's a pain in the arse.
It actually does support 9k jumbo frames - or rather, enabling it does change transfer speeds. However, as I mentioned previously, the FreeBSD bge driver doesn't fully work with that on-board NIC, so it'll drop connections from time to time, making it completely unreliable.
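For anyone following along with an add-in em(4) card instead: enabling jumbo frames is a one-liner in /etc/rc.conf. The em0 device name is an assumption (check ifconfig for yours), and every switch and host on the path needs a 9000-byte MTU too, or you'll get mysterious drops:

```shell
# /etc/rc.conf
ifconfig_em0="DHCP mtu 9000"
```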

BlankSystemDaemon
Mar 13, 2009



I get 240+MBps read and 160+MBps write (and have lagg'd NICs to enable multiple machines to access the NAS at full speed) with my ZFS setup on FreeBSD (I started with FreeNAS, which is an excellent out-of-the-box solution) on a HP MicroServer. It's as cheap as a four-bay NAS solution. Add in a NIC and 8GB memory (both can be had for cheap) and you're good to go, provided you're willing to do a bit more reading than Synology requires.
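In case anyone wants the lagg setup, here's a minimal /etc/rc.conf sketch - it assumes two em(4) ports, a switch that speaks LACP, and the device names and static address are placeholders:

```shell
# /etc/rc.conf -- link aggregation via lagg(4)
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.0.2.10/24"
```

Note that a single TCP stream still only uses one link; the aggregate bandwidth shows up when multiple machines hit the server at once.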

BlankSystemDaemon
Mar 13, 2009



DNova posted:

I just ordered an N40l with 8tb of disk, 8gb of ram, and some extras for $840.40 shipped. Not bad. Do I really need a NIC for it?
Well, it depends. What OS do you plan on installing on it? All BSDs (as far as I know - which includes FreeNAS, if you're going that route, as that's built on FreeBSD) have problems with the bge(4) driver that the onboard NIC uses, which will cause intermittent, very short connection losses and non-optimal speeds. Plus, it doesn't fully support 9k jumbo frames (though it will do 4k jumbo frames). As for a NIC that supports 9k jumbo frames and works flawlessly in BSD, check out the em(4) manpage's HARDWARE section or the FreeBSD 8.2-RELEASE hardware notes.
However, Windows Home Server/Windows Server 2008 R1/2, Solaris(-derivatives), Linux (if you want btrfs (which doesn't have RAID5-like parity features yet), or even zfs-on-linux (although it might not be the perfect option, as I've understood from previous mentions in this thread)) or any other OS should work fine as far as I know.
Incidentally, you'll have just over 5TB of diskspace if you choose to go raidz1.
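The "just over 5TB" figure falls out of the math: four 2TB drives in raidz1 leave three drives' worth of data space, and vendors count in decimal TB (10^12 bytes) while the OS reports binary TiB (2^40 bytes):

```shell
# 3 data drives x 2TB (decimal), reported in TiB
usable=$(awk 'BEGIN { printf "%.2f", (3 * 2e12) / (2 ^ 40) }')
echo "usable: ${usable} TiB"
# prints: usable: 5.46 TiB
```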

teamdest posted:

considering the other specs and purchase price, you should get 8 nics.
Kidding aside, there's only a pci-ex x16 and an x1 port on the motherboard, so it can fit a maximum of two add-in NICs plus the onboard one for LAGG. And I'd much rather use one of those ports for the out-of-band management card.

BlankSystemDaemon fucked around with this message at 11:12 on Mar 13, 2012

BlankSystemDaemon
Mar 13, 2009



DNova posted:

It will be FreeNAS with RAIDZ2; 4tb usable. While throughput is not a big deal, I do want it to have a stable connection, so I'll probably get a NIC it plays well with.

Do you have experience with that remote access card? I'm on the fence about adding that too.


I'm not at all sure about what you're getting at here.
Yeah, I have one myself. It's a good thing, except it has a little bug where it sometimes doesn't let you log in, giving you a 501 error. I have contacted HP about it, but they can't figure out the issue. However, it's intermittent and doesn't happen very often (and it only lasts a minute or two, or you can try another browser).
The benefit of the card is that you can hard reset, power on and off, and even control the machine over vKVM so you don't need a keyboard or screen connected to it - all of which entirely outweighs the small issue of intermittent login problems.

I think he was going for something with the number 8 because you had 8TB of diskspace and 8GB memory.


Mr Chips posted:

I chucked an intel dual-port server NIC in my x16 slot.
Well, that is an option - but I only have two machines on my LAN accessing my server and the onboard NIC + Intel 82574L Chipset-based NIC is plenty for that, plus the price of a dual-port NIC was more than double that of a single-port NIC at the time. Who knows, I might upgrade in the future, once I have faster disks which can keep up with more than two NICs.
Just looked up the price of a HP NIC (HP NC7170 PCI-X Dual Port 1000T Gigabit Server Adapter) based on the Intel PRO/1000 MF Dual Port Server Adapter and it's 50bux. If I have that much leftover by the end of the month, I think I'll know what to do with it.

BlankSystemDaemon fucked around with this message at 16:16 on Mar 13, 2012

BlankSystemDaemon
Mar 13, 2009



Gism0 posted:

Though be careful what you buy, the N40L doesn't have a PCI-X slot.

Oh, right. PCI-EX vs PCI-X. Thanks for catching that. Will have to go back to researching again. :/

BlankSystemDaemon
Mar 13, 2009



IT Guy posted:

You wouldn't have any recommended models for that micro server, do you?
The only difference between the N36L and N40L is a 0.2GHz difference in CPU speed. Since samba sharing doesn't benefit much from extra threads, it's negligible how much difference you'll actually see between the two (assuming you'd see any difference from threading at all, which isn't guaranteed unless the tasks being performed can actually run in parallel) - unless you're using the machine as a virtualization host for guest OS' (something it's supposedly also designed to do).

DNova posted:

Also, for some reason, this piece of poo poo was bundled with it for free when I ordered: http://www.newegg.com/Product/Product.aspx?Item=N82E16859325005
I would love one of those for a thin client to use in the kitchen for recipes and stuff.

BlankSystemDaemon fucked around with this message at 17:50 on Mar 13, 2012

BlankSystemDaemon
Mar 13, 2009



DNova posted:

For $150 you can get a netbook and not deal with some janky software on your computer.
Not with the Danish import tax and VAT I can't. Believe me, I've tried.
So my solution has been to work it into a script in calibre which syncs to my Kindle - almost as good a solution.

BlankSystemDaemon
Mar 13, 2009



IT Guy posted:

I wanted to use that slot for a 60GB SSD ZFS cache I had laying around.
Any particular reason why? Cache and zil do do (:smug:) something on ZFS, but it largely depends on the disks you have and their usage. Why not look at the performance without doing any tweaking first? I'm maxing out my lagg'd NICs over SMB no problem, with read and write around 170MBps, and that's without cache or zil drives and on 4x Samsung HD204UI 4k-sector drives. Honestly, ZFS is good out of the box.

Also, if you want both cache and zil, you want at least 3 SSDs (one around the size of your total physical memory for cache, and two at ~20GB each for the zil in a mirror to avoid potential data loss).
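If you do go that route anyway, the commands are short - pool name "tank" and the ada4-ada6 device names are placeholders for your actual SSDs:

```shell
# L2ARC (read cache) on a single SSD -- losing a cache device is harmless:
zpool add tank cache ada4
# ZIL (log) on a mirrored pair, since losing an unmirrored log can mean data loss:
zpool add tank log mirror ada5 ada6
zpool status tank
```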


As to your question regarding 4k sectors vs 512b sectors: it only matters for drives you might purchase in the future that won't have 512b emulation (which I don't see happening, ever, personally). 4k-sector drives are said to out-perform 512b-sector drives, but it heavily depends on the load in question. If you don't want to worry about it (as you can't mix 512b and 4k sectors in one vdev), just leave it at 512b. If you're ever adding more vdevs, you can have them run 4k sectors just fine.
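For the curious, the usual FreeBSD trick for forcing 4k alignment (ashift=12) on drives that emulate 512b sectors is gnop(8). It only works at vdev-creation time, and the device and pool names below are placeholders:

```shell
# Create a 4k-sector passthrough on one disk; the whole vdev inherits its sector size:
gnop create -S 4096 /dev/ada0
zpool create tank raidz /dev/ada0.nop /dev/ada1 /dev/ada2 /dev/ada3
# After a reboot the .nop device is gone and the pool imports via ada0 normally.
```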


Incidentally, have you read the ZFS Best Practices Guide or the ZFS Evil Tuning Guide? You really should before doing any performance optimization - and do serious testing first, to make sure you actually have a bottleneck or a wrong default configuration to fix.
Sorry, the post became a bit cluttered as I added information while thinking of it.

BlankSystemDaemon fucked around with this message at 21:34 on Mar 26, 2012

BlankSystemDaemon
Mar 13, 2009



It is pretty, and quiet.
HPs pictures don't really do it justice, but it's not like it matters anyway. Put it somewhere out of sight, that way it doesn't matter if it's pretty or not.

With Samsung's support site moved to Seagate, does anyone know where to find the firmware for the Samsung F4EG HD204UI disks (the ones where the new firmware fixes the NCQ/SMART-probe data-loss issue)? I can't seem to locate it no matter what I do, and I can't remember when my drives were purchased, so I need to flash to the newest firmware before I dare enable SMART.

EDIT: ↓ Sweet, thanks. :)

BlankSystemDaemon fucked around with this message at 11:39 on Apr 3, 2012


BlankSystemDaemon
Mar 13, 2009



UndyingShadow posted:

I'm using a N40L with an intel nic on FreeNAS, with 2-tb green drives (1 wd green drive and 5 samsung eco green drives) in a raidz2 array.

My transfer speed when writing is about 60MBs, my transfer speed when reading is about 80MBs. The whole network is gigabit. I'm trying to figure out if this can go faster, ie should I start replacing crappy d-link/trendnet switches, or if this is about expected?
Buy another NIC, the built-in NIC has a problem with the bge driver that FreeBSD uses.
This is like the 10th time I've mentioned this in this thread. Jeez.

EDIT: Just went back through my own posts in this thread and found the manpage for the em(4) driver, which supports single- and dual-port gigabit NICs on one low-profile pci-ex x1 port. Alternatively, there's the Ethernet section of the 8.2-RELEASE hardware list, which covers just about every NIC that FreeBSD 8.2-RELEASE can handle.

BlankSystemDaemon fucked around with this message at 17:34 on Apr 5, 2012
