Galler
Jan 28, 2008


McRib Sandwich posted:

Well, this is roundly disappointing. Finally got NexentaStor running on my ProLiant (broke down and used a SATA CD drive cause I was tired of messing with USB sticks). For some weird reason, even in a RAID 0-equivalent configuration across 4x500GB drives, I can't seem to coax more than about 30MB/s out of the system. This is over CIFS, gigabit, from the server to a Mac OS X client. Maxed out the server with the 8GB ECC RAM that Galler linked to in his howto, so I don't think memory is a constraint here.

Something I noticed during the file transfers is that the disks are rarely getting hit -- the HDD activity light only comes on in spurts. Some of this is expected due to the way ZFS batches its writes, but it makes me wonder where the bottleneck is occurring. It didn't matter whether I configured the disks as RAID-Z, RAID-Z2, or a RAID 0 equivalent; I was always stuck between 20-30MB/s read or write. I know almost no one else here is running NexentaStor, but maybe you've got an idea about what might be going wrong? This has been a pretty frustrating foray so far; I had comparable Nexenta performance in my VM, with 4 virtual disks being read from a laptop drive...

I didn't test performance at all with Nexenta, but I would think it would be similar to Solaris. The only real difference in our setups should be fairly minor software differences (I think).

How big are the files you're transferring? Mine slows down a lot when moving a ton of little files and starts hauling rear end when moving fewer large files.

The 500GB of assorted files I sent to mine averaged around 50MB/s write, and anything I pull off it maxes out my gigabit Ethernet. It got very slow when it started transferring chat logs (shittons of them, many only a few KB each), but as soon as those were out of the way it got much faster.


McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

Galler posted:

I didn't test performance at all with Nexenta, but I would think it would be similar to Solaris. The only real difference in our setups should be fairly minor software differences (I think).

How big are the files you're transferring? Mine slows down a lot when moving a ton of little files and starts hauling rear end when moving fewer large files.

The 500GB of assorted files I sent to mine averaged around 50MB/s write, and anything I pull off it maxes out my gigabit Ethernet. It got very slow when it started transferring chat logs (shittons of them, many only a few KB each), but as soon as those were out of the way it got much faster.

I was running the AJA System Test utility, which appears to create a single monolithic file of whatever size you choose (I tested with 128MB and 512MB). No idea if it's moving small "subfiles" during the test or not. I also used a Finder copy to drag some large TV episodes over to the drive, and that was less conclusive; there seemed to be some zippy periods and some sags during those transfers. I don't know how high-quality the OS X SMB/CIFS implementation is, but even at that, 20MB/s sounds slow to me. Guess I'll have to do some more digging.

teamdest
Jul 1, 2007
Hard drive benchmarking is actually kind of a complicated thing to do, and there are a lot of factors involved. Sending a file from your laptop to your file server is really a pretty poor way to go about it, as anything from the network stack at either end to the files themselves can cause huge variance in performance.

Edit: if you don't know the specifics of whatever system you're using, your test is kind of worthless, since there's no way to know the difference between you setting the test up wrong, running it wrong, or outright running the wrong test, versus an actual problem with the setup you're describing.

Edit 2: First test, get on the system directly at a console and test the direct write speed with something like dd if=/dev/zero of=/filename.blah and see what you get. Take CIFS, gigabit, and OS X out of the equation and just see whether the vdev and filesystem themselves are up to speed.
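
Something along these lines is what I mean (the paths are placeholders, the bs/count values just keep it from filling the pool, and this assumes GNU dd, which Nexenta ships). Keep in mind that /dev/zero compresses to almost nothing if compression is on, so treat the number as a best case:

code:
# Write ~4GB of zeros to a test file on the pool (adjust the path to your mountpoint).
dd if=/dev/zero of=/volumes/tank1/ddtest.bin bs=1M count=4096

# Read it back (this may come largely from cache if the file fits in RAM), then clean up.
dd if=/volumes/tank1/ddtest.bin of=/dev/null bs=1M
rm /volumes/tank1/ddtest.bin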

teamdest fucked around with this message at 10:37 on Aug 7, 2011

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

teamdest posted:

Hard drive benchmarking is actually kind of a complicated thing to do, and there are a lot of factors involved. Sending a file from your laptop to your file server is really a pretty poor way to go about it, as anything from the network stack at either end to the files themselves can cause huge variance in performance.

Edit: if you don't know the specifics of whatever system you're using, your test is kind of worthless, since there's no way to know the difference between you setting the test up wrong, running it wrong, or outright running the wrong test, versus an actual problem with the setup you're describing.

Edit 2: First test, get on the system directly at a console and test the direct write speed with something like dd if=/dev/zero of=/filename.blah and see what you get. Take CIFS, gigabit, and OS X out of the equation and just see whether the vdev and filesystem themselves are up to speed.

I agree. Console commands are my next step, but the Nexenta Management Console seems pretty (intentionally) limited in scope. I need to find a way to drop into a privileged CLI in this thing, but I'm still learning the ropes.

On the plus side, I did get iSCSI working, the hope being that connecting with a block-level device presentation will also be a little closer to "real" than network filesystem abstractions. I'll definitely report back when I have more info in hand, appreciate the feedback.

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

McRib Sandwich posted:

I agree. Console commands are my next step, but the Nexenta Management Console seems pretty (intentionally) limited in scope. I need to find a way to drop into a privileged CLI in this thing, but I'm still learning the ropes.

On the plus side, I did get iSCSI working, the hope being that connecting with a block-level device presentation will also be a little closer to "real" than network filesystem abstractions. I'll definitely report back when I have more info in hand, appreciate the feedback.

A bit more info. Ran this on the ProLiant again, from the Nexenta command line, against a 4x500GB drive zpool configured as a RAID 0 equivalent. Compression was on, and sync (synchronous requests written to stable storage) was enabled. Results:

code:
$ dd if=/dev/zero of=/volumes/tank1/file.out count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 15.8415 seconds, 33.9MB/s
compared against a rough test of raw CPU throughput:

code:
$ dd if=/dev/zero of=/dev/null count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 2.50262 seconds, 215MB/s
34MB/s on the native filesystem still seems really slow to me. These are old drives, so I can't imagine they need to be realigned the way the 4K-sector drives do. Any thoughts?
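
For what it's worth, one quick sanity check on the alignment question is the pool's ashift; a minimal sketch (zdb's output format varies a bit between builds, and tank1 is just this pool's name):

code:
# Dump the cached pool configs and look for the vdev ashift values.
# ashift=9 means 512-byte alignment, ashift=12 means 4K alignment.
zdb | grep ashift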

edit: The 34MB/s speeds are also in line with my timed tests copying large files over to an iSCSI-mounted zvol that I created on top of the same pool.

McRib Sandwich fucked around with this message at 00:13 on Aug 8, 2011

teamdest
Jul 1, 2007

McRib Sandwich posted:

A bit more info. Ran this on the ProLiant again, from the Nexenta command line, against a 4x500GB drive zpool configured as a RAID 0 equivalent. Compression was on, and sync (synchronous requests written to stable storage) was enabled. Results:

code:
$ dd if=/dev/zero of=/volumes/tank1/file.out count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 15.8415 seconds, 33.9MB/s
compared against a rough test of raw CPU throughput:

code:
$ dd if=/dev/zero of=/dev/null count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537MB) copied, 2.50262 seconds, 215MB/s
34MB/s on the native filesystem still seems really slow to me. These are old drives, so I can't imagine they need to be realigned the way the 4K-sector drives do. Any thoughts?

edit: The 34MB/s speeds are also in line with my timed tests copying large files over to an iSCSI-mounted zvol that I created on top of the same pool.

Excellent, so it seems that the issue is somewhere in ZFS or in the drives themselves. Are they already built with data on them, or could you break the array to test the drives individually?

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

teamdest posted:

Excellent, so it seems that the issue is somewhere in ZFS or in the drives themselves. Are they already built with data on them, or could you break the array to test the drives individually?

Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration.

Anyway, I have free rein over these drives and can break them out as needed. What other tests should I run against them?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
What's your CPU usage when you are moving data? If compression is on, I'm guessing that's killing your speeds.
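
If it helps, something like this at the Nexenta console will show it live while a copy is running (prstat is roughly the Solaris equivalent of top; the 5-second interval is arbitrary):

code:
# Print a new report every 5 seconds; watch the CPU column for dd/smbd
# and the load averages on the bottom line.
prstat -c 5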

teamdest
Jul 1, 2007

McRib Sandwich posted:

Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration.

Anyway, I have free rein over these drives and can break them out as needed. What other tests should I run against them?

Well, I would expect your write and read speeds to pick up on a striped array, so that's kind of strange. A mirror, not so much, since it has to write the same data twice. Can you try it locally on a striped array instead of a mirrored stripe? Just trying to eliminate variables. And could I see the settings of the pools you're making? Something like `zfs get all <poolname>` should output them, though I don't know if that's the exact syntax.

Henrik Zetterberg
Dec 7, 2007

Methylethylaldehyde posted:

So windows is being held as the security standard?

Sadly, yes.

Methylethylaldehyde posted:

Also, if you're able to use VLANs, it would be possible to hook the mail server up to the disk store directly on its own VLAN, so it's not possible for anything BUT the mail server to talk to it?

It's just a regular file server, not a mail server.


Anyway, I'll probably just end up throwing a bunch of disks in a RAID array into my desktop PC.

Longinus00
Dec 29, 2005
Ur-Quan

McRib Sandwich posted:

Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration.

Anyway, I have free rein over these drives and can break them out as needed. What other tests should I run against them?

Take one of the drives and format it as a bog-standard UFS drive or something and benchmark it. See whether the bottleneck comes from the hardware or the software. Did you examine your CPU usage when you were benchmarking the drives earlier?
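
Roughly what I have in mind, assuming the spare disk shows up as c0t1d0 (substitute the real device name from the format utility, make sure it already has a slice 0 spanning the disk, and be very sure it's the right disk, because newfs wipes it):

code:
# Build a UFS filesystem on slice 0 of the spare disk and mount it.
newfs /dev/rdsk/c0t1d0s0
mount /dev/dsk/c0t1d0s0 /mnt

# Same style of dd test as before, straight against UFS this time.
dd if=/dev/zero of=/mnt/ddtest.bin bs=1M count=4096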

NickPancakes
Oct 27, 2004

Damnit, somebody get me a tissue.

If I have extra good hardware sitting around, is FreeNAS with RAID-Z still a good route for a homebrew NAS box or are there better options out there?

Hok
Apr 3, 2003

Cog in the Machine
Well, I'm finally going to build myself a decent NAS box. I had 2 HP MicroServers delivered this morning and just ordered 4x3TB drives and 8GB of RAM for one of them; I'll decide what NAS software I'm using in the next day or so, before the drives arrive.

I figure 9TB after RAID should last me a while. Not sure what the next expansion step will be after this one; here's hoping 6TB drives are out before I need to upgrade again.

Just need to make sure that none of the drives in my old desktop/file server box hear about it and decide it's time to finally die; it would just be my luck to have a drive go while I'm copying everything off it.

The other one's going to be my sickbeard/sabnzbd/utorrent box.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

bloodynose posted:

If I have extra good hardware sitting around, is FreeNAS with RAID-Z still a good route for a homebrew NAS box or are there better options out there?

It's a solid choice. Not the fastest, but solid.

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

teamdest posted:

Well, I would expect your write and read speeds to pick up on a striped array, so that's kind of strange. A mirror, not so much, since it has to write the same data twice. Can you try it locally on a striped array instead of a mirrored stripe? Just trying to eliminate variables. And could I see the settings of the pools you're making? Something like `zfs get all <poolname>` should output them, though I don't know if that's the exact syntax.

Sure thing, output is below. I sliced up the pool a few ways this evening, including RAID 10 again, RAID-Z2, and RAID 0. Always hit a ceiling of about 34-35MB/s write using dd, without any other I/O hitting the disks in the pool. Enabling or disabling compression didn't seem to make much difference, either.

I pulled a couple of Samsung drives out of the unit and replaced them with a third WD to see if vendor commonality would make a difference; it doesn't seem to so far. Here's the output from that 3-disk WD array in RAID 0.

code:

root@nexenta:/export/home/admin# zpool status
  pool: syspool
 state: ONLINE
 scan: scrub repaired 0 in 0h2m with 0 errors on Mon Aug  8 23:36:25 2011
config:

        NAME        STATE     READ WRITE CKSUM
        syspool     ONLINE       0     0     0
          c0t5d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank1
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank1       ONLINE       0     0     0
          c0t0d0    ONLINE       0     0     0
          c0t2d0    ONLINE       0     0     0
          c0t3d0    ONLINE       0     0     0

errors: No known data errors

root@nexenta:/export/home/admin# zfs get all tank1

NAME   PROPERTY              VALUE                  SOURCE
tank1  type                  filesystem             -
tank1  creation              Mon Aug  8 23:32 2011  -
tank1  used                  512M                   -
tank1  available             1.34T                  -
tank1  referenced            512M                   -
tank1  compressratio         1.00x                  -
tank1  mounted               yes                    -
tank1  quota                 none                   default
tank1  reservation           none                   default
tank1  recordsize            128K                   default
tank1  mountpoint            /volumes/tank1         local
tank1  sharenfs              off                    default
tank1  checksum              on                     default
tank1  compression           off                    local
tank1  atime                 on                     default
tank1  devices               on                     default
tank1  exec                  on                     default
tank1  setuid                on                     default
tank1  readonly              off                    default
tank1  zoned                 off                    default
tank1  snapdir               hidden                 default
tank1  aclmode               discard                default
tank1  aclinherit            restricted             default
tank1  canmount              on                     default
tank1  xattr                 on                     default
tank1  copies                1                      default
tank1  version               5                      -
tank1  utf8only              off                    -
tank1  normalization         none                   -
tank1  casesensitivity       sensitive              -
tank1  vscan                 off                    default
tank1  nbmand                off                    default
tank1  sharesmb              off                    default
tank1  refquota              none                   default
tank1  refreservation        none                   default
tank1  primarycache          all                    default
tank1  secondarycache        all                    default
tank1  usedbysnapshots       0                      -
tank1  usedbydataset         512M                   -
tank1  usedbychildren        76.5K                  -
tank1  usedbyrefreservation  0                      -
tank1  logbias               latency                default
tank1  dedup                 off                    default
tank1  mlslabel              none                   default
tank1  sync                  standard               default

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=1M

1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 15.5217 seconds, 34.6 MB/s

Running prstat while committing a huge amount of /dev/zero to disk using dd, I was able to get the load average as high as 1.25 over a few minutes' time:

code:

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
  3386 root     3620K 1636K cpu0    10    0   0:00:32  39% dd/1
  2802 root        0K    0K sleep   99  -20   0:00:12 1.7% zpool-tank1/138
  1552 root       22M   20M sleep   59    0   0:00:16 0.2% hosts-check/1
   830 root       42M   40M sleep   59    0   0:00:25 0.2% python2.5/55
   813 root       67M   29M sleep   59    0   0:00:22 0.1% nms/1
  3008 root       22M   20M sleep   44    5   0:00:04 0.1% volume-check/1
  2431 admin    4304K 2952K cpu1    59    0   0:00:00 0.0% prstat/1
   409 root     3620K 2132K sleep   59    0   0:00:03 0.0% dbus-daemon/1
     5 root        0K    0K sleep   99  -20   0:00:19 0.0% zpool-syspool/138
   559 root       35M 4580K sleep   59    0   0:00:00 0.0% nmdtrace/1
  1700 root     6064K 4368K sleep   59    0   0:00:00 0.0% nfsstat.pl/1
  1595 root       22M   20M sleep   59    0   0:00:03 0.0% ses-check/1
   551 www-data   17M 7512K sleep   59    0   0:00:00 0.0% apache2/28
    11 root       12M   11M sleep   59    0   0:00:14 0.0% svc.configd/14
   441 root       11M 7000K sleep   59    0   0:00:07 0.0% smbd/15
   540 root       23M   14M sleep   59    0   0:00:01 0.0% fmd/22
   357 root     2600K 1576K sleep  100    -   0:00:00 0.0% xntpd/1
   240 root     6860K 4408K sleep   59    0   0:00:00 0.0% nscd/34
   370 messageb 3492K 1940K sleep   59    0   0:00:00 0.0% dbus-daemon/1
   913 root       48M 4812K sleep   59    0   0:00:00 0.0% nmc/1
   251 root     3604K 2620K sleep   59    0   0:00:00 0.0% picld/4
Total: 77 processes, 619 lwps, load averages: 1.21, 0.82, 0.46

And for good measure, the output from a few more dd sessions. 3 WD drives in RAID-0:

code:

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=1M
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 15.5217 seconds, 34.6 MB/s

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=1M
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 15.5664 seconds, 34.5 MB/s

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=5M
5242880+0 records in
5242880+0 records out
2684354560 bytes (2.7 GB) copied, 76.7602 seconds, 35.0 MB/s

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=5M
5242880+0 records in
5242880+0 records out
2684354560 bytes (2.7 GB) copied, 76.8237 seconds, 34.9 MB/s

root@nexenta:/export/home/admin# dd if=/dev/zero of=/volumes/tank1/file.out count=5M
5242880+0 records in
5242880+0 records out
2684354560 bytes (2.7 GB) copied, 76.2735 seconds, 35.2 MB/s

If anyone else has ProLiant ZFS benchmarks, it would be great to have some more data points to compare against here.

Galler
Jan 28, 2008


I'll be happy to but I'm really not understanding the dd command/benchmarking process.

If you can give me some basic step by step instructions on doing whatever it is I need to do I will.

e: I go all retarded whenever I get in front of a unix terminal. It's really obnoxious.

Galler fucked around with this message at 05:32 on Aug 9, 2011

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

Galler posted:

I'll be happy to but I'm really not understanding the dd command/benchmarking process.

If you can give me some basic step by step instructions on doing whatever it is I need to do I will.

e: I go all retarded whenever I get in front of a unix terminal. It's really obnoxious.

All the dd command is telling the machine to do is write binary zeros (from the /dev/zero "device") to the output file (hence "of") called file.out on my zpool named tank1. The "count" argument just tells dd how many blocks' worth of zeros to write (512-byte blocks by default, since I didn't specify a block size).

If you're in a place to provide some benchmarks that would be awesome, but the last thing I want to do is encourage you to dive into the command line with a command as powerful as dd if you're not comfortable doing it. If you're not careful, you can nuke things pretty quickly.

Does napp-it provide any sort of benchmarking you could run? If I'm remembering correctly, bonnie++ is part of the tool suite that gets installed with the platform, as an example.
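
If the napp-it button doesn't cooperate, bonnie++ can also be run by hand; a rough sketch (the directory is a placeholder, and the -s size should be at least a couple of times your RAM so the test can't just run out of cache):

code:
# Run bonnie++ as root against a scratch directory on the pool, 16GB test size.
mkdir -p /volumes/tank1/bench
bonnie++ -d /volumes/tank1/bench -s 16g -u root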

doctor_god
Jun 9, 2002

McRib Sandwich posted:

Bunch of dd stuff

Don't forget to take the blocksize into account - by default dd transfers in 512 byte increments, and ZFS doesn't really like that, since writes that are less than a full block require the block to be read, modified, and written back to disk. Your filesystem appears to be using the default 128K recordsize, so you should get much better throughput transferring in increments that are a multiple of 128KB:

code:
ein ~ # dd if=/dev/zero of=/pool/test count=1M
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 98.9068 s, 5.4 MB/s

ein ~ # zfs get recordsize pool
NAME  PROPERTY    VALUE    SOURCE
pool  recordsize  128K     default

ein ~ # dd if=/dev/zero of=/pool/test bs=128K count=4K
4096+0 records in
4096+0 records out
536870912 bytes (537 MB) copied, 1.62324 s, 331 MB/s

ein ~ # dd if=/dev/zero of=/pool/test bs=512K count=1K
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 1.28789 s, 417 MB/s
Edit: This is a 10x2TB RAIDZ-2 pool running on Linux via ZFS-FUSE.

doctor_god fucked around with this message at 05:49 on Aug 9, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Shilling 2: Electric Boogaloo

Not only is my old NAS still ready for five drives of fun, just waiting for you to buy it, but I've put up a Chenbro server chassis that holds 4 hotswap SATA drives and a mini-ITX board for a bunch off retail.

SA Mart

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

doctor_god posted:

Don't forget to take the blocksize into account - by default dd transfers in 512 byte increments, and ZFS doesn't really like that, since writes that are less than a full block require the block to be read, modified, and written back to disk. Your filesystem appears to be using the default 128K recordsize, so you should get much better throughput transferring in increments that are a multiple of 128KB:

code:
ein ~ # dd if=/dev/zero of=/pool/test count=1M
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 98.9068 s, 5.4 MB/s

ein ~ # zfs get recordsize pool
NAME  PROPERTY    VALUE    SOURCE
pool  recordsize  128K     default

ein ~ # dd if=/dev/zero of=/pool/test bs=128K count=4K
4096+0 records in
4096+0 records out
536870912 bytes (537 MB) copied, 1.62324 s, 331 MB/s

ein ~ # dd if=/dev/zero of=/pool/test bs=512K count=1K
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 1.28789 s, 417 MB/s
Edit: This is a 10x2TB RAIDZ-2 pool running on Linux via ZFS-FUSE.

Interesting. Is 128K a reasonable default zpool block size for general use? The end goal here is to present a zvol over iSCSI to my Mac, since Nexenta doesn't come with AFP / netatalk rolled in. As far as I can tell, the default block size on HFS+ is 4K for disks larger than 1GB, and I can't find a straightforward way to change that in Disk Utility. Should I specify a 4K block size for my zpool, then? That seems way small for ZFS stripes, but 128K seems extremely wasteful as an HFS+ block size. Admittedly I'm out of my element when it comes to this sort of tuning, but something about this block size discrepancy doesn't sound quite right to me.

Galler
Jan 28, 2008


Turns out I was just loving up the path to my zfs volume. Command makes sense again.

Anyway did some things. Not really sure what to make of all this.

code:
root@NappITNAS:/Tank/Primary# zfs get recordsize Tank
NAME  PROPERTY    VALUE    SOURCE
Tank  recordsize  128K     default

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out count=1M
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 15.8574 s, 33.9 MB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=128K count=4K
4096+0 records in
4096+0 records out
536870912 bytes (537 MB) copied, 1.00772 s, 533 MB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=128K count=4K
4096+0 records in
4096+0 records out
536870912 bytes (537 MB) copied, 1.08442 s, 495 MB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=512K count=1K
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 1.56316 s, 343 MB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=512K count=1K
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 0.490415 s, 1.1 GB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=128K count=40K
40960+0 records in
40960+0 records out
5368709120 bytes (5.4 GB) copied, 16.2994 s, 329 MB/s

root@NappITNAS:/Tank/Primary# dd if=/dev/zero of=/Tank/Primary/file.out bs=512K count=10K
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 16.4235 s, 327 MB/s
Oh, and yeah, there is a benchmark utility in napp-it, but the last time I tried to use it I couldn't figure out how to get any output from it, and then while trying to find that I accidentally ran it again and it wouldn't stop running until I rebooted. Also, half the reason I got this thing was to learn new stuff that I wouldn't otherwise get exposed to, so I'm fine poking around, especially now that I've got 2 copies of everything on this NAS.

Oh, hey, there's a dd bench thing too.

code:
write 1.024 GB via dd, please wait...
time dd if=/dev/zero of=/Tank/dd.tst bs=1024000 count=1000

1000+0 records in
1000+0 records out

real        2.8
user        0.0
sys         1.2

1.024 GB in 2.8s = 365.71 MB/s Write

read 1.024 GB via dd, please wait...
time dd if=/Tank/dd.tst of=/dev/null bs=1024000

1000+0 records in
1000+0 records out

real        0.7
user        0.0
sys         0.5

1.024 GB in 0.7s = 1462.86 MB/s Read

write 10.24 GB via dd, please wait...
time dd if=/dev/zero of=/Tank/dd.tst bs=1024000 count=10000

10000+0 records in
10000+0 records out

real       32.7
user        0.0
sys        12.8

10.24 GB in 32.7s = 313.15 MB/s Write

read 10.24 GB via dd, please wait...
time dd if=/Tank/dd.tst of=/dev/null bs=1024000

10000+0 records in
10000+0 records out

real       27.0
user        0.0
sys         7.9

10.24 GB in 27s = 379.26 MB/s Read
Running that bonnie thing. I'll check it in the morning as I should have been asleep an hour ago.

Galler fucked around with this message at 07:07 on Aug 9, 2011

doctor_god
Jun 9, 2002

McRib Sandwich posted:

Interesting. Is 128K a reasonable default zpool block size for general use? The end goal here is to present a zvol over iSCSI to my Mac, since Nexenta doesn't come with AFP / netatalk rolled in. As far as I can tell, the default block size on HFS+ is 4K for disks larger than 1GB, and I can't find a straightforward way to change that in Disk Utility. Should I specify a 4K block size for my zpool, then? That seems way small for ZFS stripes, but 128K seems extremely wasteful as an HFS+ block size. Admittedly I'm out of my element when it comes to this sort of tuning, but something about this block size discrepancy doesn't sound quite right to me.

I don't have any direct experience with iSCSI (I'm just sharing my pool via Samba), but I'd expect the defaults to be OK since 128K is evenly divisible by 4K. Most software will probably deal with data in chunks larger than the filesystem blocksize - Windows seems to use 16K chunks when I'm copying data on and off the NAS, and it's still fast enough to saturate GBit Ethernet.
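
One wrinkle for the iSCSI case, as I understand it: a zvol doesn't use the filesystem recordsize at all; it has its own volblocksize property (8K by default on ZFS builds of this vintage, and fixed at creation time). A sketch of creating one, where the name, size, and 64K block size are just example values to tune for your workload:

code:
# Create a 500GB zvol with a 64K block size to export over iSCSI.
# volblocksize can only be set when the zvol is created.
zfs create -V 500g -o volblocksize=64k tank1/iscsivol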

Galler
Jan 28, 2008


code:
NAME 	SIZE 	Bonnie 	Date(y.m.d) 	File 	Seq-Wr-Chr 	%CPU 	Seq-Write
Tank 	7.25T 	start 	2011.08.08 	16G 	34 MB/s 	94 	198 MB/s

%CPU 	Seq-Rewr 	%CPU 	Seq-Rd-Chr 	%CPU 	Seq-Read
65 	116 MB/s 	44 	25 MB/s 	98 	307 MB/s

%CPU 	Rnd Seeks 	%CPU 	Files 	Seq-Create 	Rnd-Create
43 	551.4/s 	5 	16 	15475/s 	15211/s 

quote:

Most important values are: Sequential read and write values (Block) (Seq-Write & Seq-Read)

Bonnie ++ 1.03:

"Bonnie Results
Bonnie runs six tests, three that perform writes, and three that perform reads.
Here's the description provided by the benchmark's author for each of these tests:

Write Tests (Out)

* Per-Character: The file is written using the putc() stdio macro. The loop that does the writing should be small enough to fit into any reasonable I-cache.
The CPU overhead here is that required to do the stdio code plus the OS file space allocation.
* Block: The file is created using write(2). The CPU overhead should be just the OS file space allocation.
* Rewrite: Each Chunk (currently, the size is 16384) of the file is read with read(2), dirtied, and rewritten with write(2), requiring an lseek(2).
Since no space allocation is done, and the I/O is well-localized, this should test the effectiveness of the filesystem cache and the speed of data transfer.

Read Tests (In)

* Per-Character: The file is read using the getc() stdio macro. Once again, the inner loop is small. This should exercise only stdio and sequential input.
* Block: The file is read using read(2). This should be a very pure test of sequential input performance.
* Random Seeks: This test runs SeekProcCount (currently 4) processes in parallel, doing a total of 4000 lseek()s to locations in the file computed using
random() in bsd systems, drand48() on sysV systems. In each case, the block is read with read(2). In 10% of cases, it is dirtied and written back with write(2).
The idea behind the SeekProcCount processes is to make sure there's always a seek queued up."

movax
Aug 30, 2008

Galler posted:

code:
NAME 	SIZE 	Bonnie 	Date(y.m.d) 	File 	Seq-Wr-Chr 	%CPU 	Seq-Write
Tank 	7.25T 	start 	2011.08.08 	16G 	34 MB/s 	94 	198 MB/s

%CPU 	Seq-Rewr 	%CPU 	Seq-Rd-Chr 	%CPU 	Seq-Read
65 	116 MB/s 	44 	25 MB/s 	98 	307 MB/s

%CPU 	Rnd Seeks 	%CPU 	Files 	Seq-Create 	Rnd-Create
43 	551.4/s 	5 	16 	15475/s 	15211/s 

What params did you pass to get this nice output?

Galler
Jan 28, 2008


That's how napp-it handles that bonnie benchmarking thing.

Napp-it's GUI makes it really easy. Just push the benchmark button and wait.

movax
Aug 30, 2008

Galler posted:

That's how napp-it handles that bonnie benchmarking thing.

Napp-it's GUI makes it really easy. Just push the benchmark button and wait.

Ah, of course.

e: got html script working:

movax fucked around with this message at 16:06 on Aug 9, 2011

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich
Feeling a bit guilty here; it feels like I killed this thread with all the talk of benchmarks.

Anyway, I have done still more testing: installed Solaris and Nexenta Core on different disks, put napp-it on top of each, and ran bonnie++ on the system. In both Solaris and Nexenta, bonnie++ said I did better than 135MB/s sequential read and 121MB/s sequential write. I'm still seeing way crappier performance when I actually go to *use* this thing, though, instead of just benchmarking it.

I've noticed a particular behavior when doing copy tests that I thought was normal for ZFS, but now I'm beginning to wonder. When I copy a large file over, I'll see lots of progress in the Mac OS X copy window all at once, then it'll suddenly stall, exactly concurrent with the disks making write noises. I had just figured that ZFS was caching to RAM and then flushing to disk during these periods. Is this expected behavior? The Finder copy progress pretty much halts any time the disks themselves are getting hit, which seems out of sorts to me. :sigh:
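
One thing I'm going to try in order to watch that burst behavior directly, assuming the pool is still named tank1 (the 1-second interval is arbitrary):

code:
# Per-second bandwidth for the pool and each disk; writes should show up
# in bursts as each ZFS transaction group gets flushed out.
zpool iostat -v tank1 1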

Galler
Jan 28, 2008


I just sent a 45.9 GB (49,359,949,824 bytes) file from Win 7 to my MicroServer and it took 9 minutes, so about 85 MB/s I guess. It did slow down a bit once it got to about 8 GB transferred, which I suppose would make sense if ZFS uses RAM as a buffer (8 GB of RAM in the box), but it only slowed by about 10 MB/s as opposed to coming to a screeching halt.

clavicle
Jan 19, 2006

I'd like some help picking parts for my DIY NAS. I've spent the past few days considering a bunch of options like Atom/Zacate and decided that for the availability of parts and cost, it's really much better sticking with mini-ATX for now. I went for parts that are cheap as possible without buying no name/dubious stuff, hence AMD over Intel since i3s by themselves cost twice as much as that AMD proc. I'm in Brazil so these prices might seem a bit scary to you; that's normal, unfortunately. It used to be much worse.

Here's what I have so far:


Athlon II X2 240 Box - R$ 96,96 (61 USD)
Gigabyte GA-880GMA-UD2H - R$ 280,00 (172 USD)
4x Seagate ST2000DL003 (Barracuda Green 2TB 5900 RPM 64MB Cache SATA 6.0Gb/s 3.5) - 4x R$ 183 (112 USD) - R$ 732 (450 USD)
CoolerMaster TM-350-PMSR (350 W) PSU - R$ 80,00 (49 USD)
2x 2 Gb Crucial CT25664BA1339 - 2x R$ 55 - R$ 110 (68 USD) [I know, overkill]
Case: TBD


Total so far: USD 800 (R$ 1300)


Sounds steep -- believe me, I know -- but the HP ProLiant N36L I was looking at to serve as the *basis* of my setup costs 860 USD by itself here.

There are two things I'm torn about: whether I can find a cheaper motherboard that's as good as that Gigabyte one, and what PSU to pick. If someone could lend a hand and look through Boadica (our stuck-in-1999-webdesign version of Newegg) for something that'd be better, I'd appreciate it. Here are the links for AM3 motherboards and PSUs. Even though the site's in Portuguese it's easy to navigate, because there's a dropdown to the right of "Modelos" where everything that's being sold shows up. (You click "Listar" to... list :aaa:)

If you think this setup just sucks altogether, please say so as well. Is AMD fundamentally wrong here? I'm expecting low power consumption during most of the day, as it will be idling.

NextWish
Sep 19, 2002

clavicle posted:

I'd like some help picking parts for my DIY NAS. I've spent the past few days considering a bunch of options like Atom/Zacate and decided that for the availability of parts and cost, it's really much better sticking with mini-ATX for now. I went for parts that are cheap as possible without buying no name/dubious stuff, hence AMD over Intel since i3s by themselves cost twice as much as that AMD proc. I'm in Brazil so these prices might seem a bit scary to you; that's normal, unfortunately. It used to be much worse.

Here's what I have so far:


Athlon II X2 240 Box - R$ 96,96 (61 USD)
Gigabyte GA-880GMA-UD2H - R$ 280,00 (172 USD)
4x Seagate ST2000DL003 (Barracuda Green 2TB 5900 RPM 64MB Cache SATA 6.0Gb/s 3.5) - 4x R$ 183 (112 USD) - R$ 732 (450 USD)
CoolerMaster TM-350-PMSR (350 W) PSU - R$ 80,00 (49 USD)
2x 2 Gb Crucial CT25664BA1339 - 2x R$ 55 - R$ 110 (68 USD) [I know, overkill]
Case: TBD


Total so far: USD 800 (R$ 1300)


Sounds steep -- believe me, I know -- but the HP ProLiant N36L I was looking at to serve as the *basis* of my setup costs 860 USD by itself here.

There are two things I'm torn about: whether I can find a cheaper motherboard that's as good as that Gigabyte one, and what PSU to pick. If someone could lend a hand and look through Boadica (our stuck-in-1999-webdesign version of Newegg) for something that'd be better, I'd appreciate it. Here are the links for AM3 motherboards and PSUs. Even though the site's in Portuguese it's easy to navigate, because there's a dropdown to the right of "Modelos" where everything that's being sold shows up. (You click "Listar" to... list :aaa:)

If you think this setup just sucks altogether, please say so as well. Is AMD fundamentally wrong here? I'm expecting low power consumption during most of the day, as it will be idling.
Jesus - you are getting loving ripped. Just order the N36L from America (110V) or Australia (240V) and use a mail forwarder: http://www.shipito.com/

clavicle
Jan 19, 2006

NextWish posted:

Jesus - you are getting loving ripped. Just order the N36L from America (110V) or Australia (240V) and use a mail forwarder: http://www.shipito.com/

I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes.

NextWish
Sep 19, 2002

clavicle posted:

I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes.
Find a trustworthy goon to send it to you without a commercial invoice inside? (i.e. as a gift)

Or!

Get it shipped to Bolivia, Peru, etc., catch a return flight (or drive), and pick it up.

heeen
May 14, 2005

CAT NEVER STOPS

clavicle posted:

Sounds steep -- believe me, I know -- but the HP ProLiant N36L I was looking at to serve as the *basis* of my setup costs 860 USD by itself here.
drat! The N36L just dropped to eur 155 ($220) in Germany. That is post taxes.

clavicle
Jan 19, 2006

heeen posted:

drat! The N36L just dropped to eur 155 ($220) in Germany. That is post taxes.

I'm considering getting it anyway, but it's a lot more expensive than that on newegg ($300 and $325 versions).

heeen
May 14, 2005

CAT NEVER STOPS

clavicle posted:

I'm expecting low power consumption during most of the day, as it will be idling.
I get 30W when it is idling with the disks spun down and the cpu scaled down.
40W with the drives active, 50W when transferring stuff.

code:
freenas# sysctl dev.cpu.0.freq
dev.cpu.0.freq: 100

freenas# smartctl -n standby -i /dev/ada0 && smartctl -n standby -i /dev/ada1 && smartctl -n standby -i /dev/ada2 && smartctl -n standby -i /dev/ada3
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]

Device is in STANDBY mode, exit(2)
freenas# smartctl -n standby -i /dev/ada0; smartctl -n standby -i /dev/ada1; smartctl -n standby -i /dev/ada2; smartctl -n standby -i /dev/ada3
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]

Device is in STANDBY mode, exit(2)
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]

Device is in STANDBY mode, exit(2)
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]

Device is in STANDBY mode, exit(2)
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p2 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]

Device is in STANDBY mode, exit(2)

LiquidRain
May 21, 2007

Watch the madness!

Howdy folks, looking for a weekend project for my NAS/HTPC!

I have a 2GHz AMD dual-core (socket 939) with 2GB RAM, two 2TB HDDs, and a GeForce 9400. The thing's already built and ready to go. I'm well versed in Linux and looking for tips/suggestions on the best distro to go with. It may also be that the best "distro" is Windows. You let me know. Here are my needs:

1. RAID 1 with hot-swap failover (the drives in the system are in hot-swap cages)
2. SMB sharing
3. XBMC
4. Good sleep or hibernate support (both is ideal, one or the other is acceptable)
5. Bluetooth keyboard/mouse support

I want this to act as little like a Linux box as possible in the end. When I turn on the machine, XBMC should show up. When the box comes back from sleeping and wakes up (via, say, Wake-on-LAN or you know, a power button), XBMC should fire up. When I hit up its web port, I want as NAS-like an interface as possible.

It'd be pretty sweet, too, if I could configure the box to go to sleep when I hit the power button.

I'd still love the ability to shell into the box as a Linux playground for anything I may want (if I do go with Linux), but that's tertiary, as before this I was considering a dedicated NAS box. (I was considering the Synology DS211j for its excellent power management)

What distro, programs, or (better yet!) guide would get me off on the right foot?

LiquidRain fucked around with this message at 22:32 on Aug 11, 2011

IOwnCalculus
Apr 2, 2003





XBMC Live, installed to a hard drive, is just a well-done Ubuntu/XBMC integration. There's nothing about it that should preclude you from setting up an mdraid RAID1 array to install everything to, or SSHing in and installing Samba for SMB/CIFS sharing.

I can't remember the last time I dealt with a Bluetooth keyboard and mouse on Linux, if I ever actually did, and I can't comment on sleep/hibernate. That said, I know I've seen reviews of some of the little remote-style keyboard/mouse combos that have had positive Linux experiences.
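
I haven't done this on XBMC Live specifically, but on a stock Ubuntu base the RAID1-plus-Samba part looks roughly like this (device names are placeholders, and if you want the OS itself on the mirror you'd set that up in the installer instead):

code:
# Install the tools.
sudo apt-get install mdadm samba

# Mirror two disks into /dev/md0 (this destroys whatever is on them).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the mirror, mount it, then share the mountpoint via /etc/samba/smb.conf.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/media
sudo mount /dev/md0 /srv/media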

Tornhelm
Jul 26, 2008

clavicle posted:

I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes.

Even if you pay 60% extra in taxes, it will still probably come out cheaper than building it yourself. The entire box is about the same price as the PSU/CPU/mobo if you get it from Newegg (cheaper if you get it from somewhere else), and it wouldn't add much more to the cost once you factor in the case too. Then all you need is the RAM and HDDs.

Also, see if your country has an HP site - quite a few Australians bought one that way after the main stores ran out of stock, and the price that way might be competitive.

Puddin
Apr 9, 2004
Leave it to Brak
Can anyone recommend a video card to suit the N36L? Picked one up the other day and I'm really impressed with it. Love the slide out motherboard.


Tornhelm
Jul 26, 2008

Puddin posted:

Can anyone recommend a video card to suit the N36L? Picked one up the other day and I'm really impressed with it. Love the slide out motherboard.

Either the ATI 5450 or the Nvidia 210/520 are the most popular options.

https://spreadsheets1.google.com/spreadsheet/ccc?authkey=CImGkocJ&hl=en&key=tHx2pDE7M2SIWsbq0hs352A&hl=en&authkey=CImGkocJ#gid=0

That's from the first post of the OCAU owners thread, and it gives a pretty damned good cross-section of the kinds of things people are using it for and what gear they're running it with. The biggest thing to remember with the MicroServer is that it doesn't come with audio ports, so you will either need to use a separate sound card (possibly USB-based, depending on your OS) or have HDMI audio on your video card and run the sound through that.
