|
McRib Sandwich posted:Well, this is roundly disappointing. Finally got NexentaStor running on my ProLiant (broke down and used a SATA CD drive cause I was tired of messing with USB sticks). For some weird reason, even in a RAID 0-equivalent configuration across 4x500GB drives, I can't seem to coax more than about 30MB/s out of the system. This is over CIFS, gigabit, from the server to a Mac OS X client. Maxed out the server with the 8GB ECC RAM that Galler linked to in his howto, so I don't think memory is a constraint here. I didn't test performance at all with nexenta but I would think it would be similar to solaris. The only real difference in our setups should be fairly minor differences in software (I think). How big are the files you're transferring? Mine slows down a lot when moving a ton of little files and starts hauling rear end when moving fewer large files. The 500GB of assorted files I sent to mine averaged around 50MB/s write, and anything I pull off it maxes out my gigabit ethernet. It got very slow when it started transferring chat logs (shittons of them, many only a few KB each), but as soon as those were out of the way it got much faster.
|
# ? Aug 7, 2011 08:25 |
|
|
Galler posted:I didn't test performance at all with nexenta but I would think it would be similar to solaris. The only real difference in our setups should be fairly minor differences in software (I think). I was running the AJA System Test utility, which appears to create a monolithic filesize of your choosing (I tested with 128MB and 512MB). No idea if it's moving small "subfiles" during the test or not. I used Finder copy to drag over some large TV episodes to the drive and that was less conclusive. There seemed to be some zippy periods and some sags during those transfers. I don't know how high-quality the OS X SMB/CIFS implementation is, but even at that, 20MB/s sounds slow to me. Guess I'll have to do some more digging.
|
# ? Aug 7, 2011 09:40 |
|
Hard drive benchmarking is actually kind of a complicated thing to do, and there's a lot of factors involved. Sending a file from your laptop to your file server is really a pretty poor way to go about it, as anything from the network stack at either end to the files themselves can cause huge variance in performance. Edit: if you don't know the specifics of whatever system you're using, your test is kind of worthless, since there's no way to know the difference between you setting the test up wrong, running it wrong, or outright running the wrong test, versus an actual problem with the setup you're describing. Edit 2: First test, get on the system directly at a console and test the direct write speed with something like dd if=/dev/zero of=/filename.blah and see what you get. Remove the issue of CIFS, gigabit, and OS X and just see if the vdev and filesystem are up to speed. teamdest fucked around with this message at 10:37 on Aug 7, 2011 |
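A fleshed-out version of that first test might look like the sketch below. The pool path is a placeholder (it falls back to /tmp here only so the sketch runs anywhere); point TESTDIR at your pool's mountpoint, and bump the count up for a longer, more representative run.

```shell
# Point TESTDIR at the pool's mountpoint (e.g. /tank) for a real test;
# /tmp is just a stand-in so this sketch works on any box.
TESTDIR=${TESTDIR:-/tmp}
# Write 64MB of zeros; on GNU dd the summary line includes throughput.
# (On Solaris dd, wrap the command in time(1) and divide bytes by seconds.)
dd if=/dev/zero of="$TESTDIR/ddtest.out" bs=1024k count=64
rm -f "$TESTDIR/ddtest.out"   # clean up the scratch file afterwards
```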
# ? Aug 7, 2011 10:33 |
|
teamdest posted:Hard drive benchmarking is actually kind of a complicated thing to do, and there's a lot of factors involved. Sending a file from your laptop to your file server is really a pretty poor way to go about it, as anything from the network stack at either end to the files themselves can cause huge variances in performances. I agree. Console commands are my next step, but the Nexenta Management Console seems pretty (intentionally) limited in scope. I need to find a way to drop into a privileged CLI in this thing, but I'm still learning the ropes. On the plus side, I did get iSCSI working, the hope being that connecting with a block-level device presentation will also be a little closer to "real" than network filesystem abstractions. I'll definitely report back when I have more info in hand, appreciate the feedback.
|
# ? Aug 7, 2011 18:11 |
|
McRib Sandwich posted:I agree. Console commands are my next step, but the Nexenta Management Console seems pretty (intentionally) limited in scope. I need to find a way to drop into a privileged CLI in this thing, but I'm still learning the ropes. A bit more info. Ran this on the ProLiant again, on the Nexenta command line against a 4x500GB drive zpool, configured RAID 0 equivalent. Compression was on, and sync (synchronous requests written to stable storage) was enabled. Results: code:
code:
edit: The 34MB/s speeds are also in line with my timed tests in copying over large files to an iSCSI-mounted zvol that I created on top of the same pool. McRib Sandwich fucked around with this message at 00:13 on Aug 8, 2011 |
# ? Aug 8, 2011 00:10 |
|
McRib Sandwich posted:A bit more info. Ran this on the ProLiant again, on the Nexenta command line against a 4x500GB drive zpool, configured RAID 0 equivalent. Compression on, synchronous requests written to stable storage was enabled. Results: Excellent, so it seems that the issue is somewhere in ZFS or in the drives themselves. Are they already built and have data on them, or could you break the array to test the drives individually?
|
# ? Aug 8, 2011 12:41 |
|
teamdest posted:Excellent, so it seems that the issue is somewhere in ZFS or in the drives themselves. are they already built and have data on them, or could you break the array to test the drives individually? Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool consisting of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration. Anyway, I have free rein over these drives and can break them out as needed. What other tests should I run against them?
|
# ? Aug 8, 2011 16:45 |
|
What's your CPU usage when you are moving data? If compression is on, I'm guessing that's killing your speeds.
|
# ? Aug 8, 2011 16:49 |
|
McRib Sandwich posted:Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration. Well I would expect your write speed and read speed to pick up on a striped array, that's kind of strange. A mirror, not so much, since it has to write the same data twice. Can you try it locally on a striped array instead of a mirrored stripe? Just trying to eliminate variables. And could I see the settings of the pools you're making? Something like `zfs get all <poolname>` should output them, though I don't know if that's the exact syntax.
|
# ? Aug 8, 2011 16:52 |
|
Methylethylaldehyde posted:So windows is being held as the security standard? Sadly, yes. Methylethylaldehyde posted:Also, if you're able to use VLANs, it would be possible to hook the mail server up to the disk store directly on its own VLAN, so it's not possible for anything BUT the mail server to talk to it? It's just a regular file server, not a mail server. Anyway, I'll probably just end up throwing a bunch of disks in a RAID array into my desktop PC.
|
# ? Aug 8, 2011 16:56 |
|
McRib Sandwich posted:Actually, I ended up doing that last night after posting that update. I found that running the same dd command on a pool comprised of a single drive delivered about the same write speed -- 34MB/s. That said, I would've expected increased performance from a RAID 10 zpool of those four drives, but I didn't see any increase in write performance in that configuration. Take one of the drives and format it as a bog-standard UFS drive or something and benchmark it. See if the bottleneck comes from the hardware or the software. Did you examine your CPU usage when you were benchmarking the drives earlier?
|
# ? Aug 8, 2011 17:27 |
|
If I have extra good hardware sitting around, is FreeNAS with RAID-Z still a good route for a homebrew NAS box or are there better options out there?
|
# ? Aug 8, 2011 18:06 |
|
Well I'm finally going to build myself a decent NAS box. I had 2 HP MicroServers delivered this morning, and just ordered 4 x 3TB drives and 8GB of RAM for one of them; I'll decide what NAS software I'm using in the next day or so before the drives arrive. I figure 9TB after RAID should last me a while. Not sure what the next expansion step will be after this one; here's hoping 6TB drives are out before I need to upgrade again. Just need to make sure that none of the drives in my old desktop/file server box hear about it and decide it's time to finally die; it would be just my luck to have a drive go while copying everything off it. The other one's going to be my Sick Beard/SABnzbd/uTorrent box
|
# ? Aug 9, 2011 02:53 |
|
bloodynose posted:If I have extra good hardware sitting around, is FreeNAS with RAID-Z still a good route for a homebrew NAS box or are there better options out there? It's a solid choice. Not the fastest, but solid.
|
# ? Aug 9, 2011 03:10 |
|
teamdest posted:Well I would expect your write speed and read speed to pick up on a striped array, that's kind of strange. A Mirror, not so much, since it has to write the same data twice. Can you try it locally on a striped array instead of a mirrored stripe? Just trying to eliminate variables. And could I see the settings of the pools you're making? something like `zfs get all <poolname>` should output them, though I don't know if that's the exact syntax. Sure thing, output is below. I sliced up the pool a few ways this evening, including RAID 10 again, RAID-Z2, and RAID 0. Always hit a ceiling of about 34-35MB/s write using dd, without any other I/O hitting the disks in the pool. Enabling or disabling compression didn't seem to make much difference, either. I pulled a couple of Samsung drives out of the unit and replaced them with a third WD to see if vendor commonality would make a difference; it doesn't seem to so far. Here's the output from that 3-disk WD array in RAID 0. code:
code:
code:
|
# ? Aug 9, 2011 04:48 |
|
I'll be happy to but I'm really not understanding the dd command/benchmarking process. If you can give me some basic step by step instructions on doing whatever it is I need to do I will. e: I go all retarded whenever I get in front of a unix terminal. It's really obnoxious. Galler fucked around with this message at 05:32 on Aug 9, 2011 |
# ? Aug 9, 2011 05:25 |
|
Galler posted:I'll be happy to but I'm really not understanding the dd command/benchmarking process. All the dd command is telling the machine to do is to write binary zeros (from the /dev/zero "device") to the output file (hence "of") called file.out on my zpool named tank1. The "count" attribute just tells the OS how many blocks worth of zeros to write. If you're in a place to provide some benchmarks that would be awesome, but the last thing I want to do is encourage you to dive into the command line with a command as powerful as dd if you're not comfortable doing it. If you're not careful, you can nuke things pretty quickly. Does napp-it provide any sort of benchmarking you could run? If I'm remembering correctly, bonnie++ is part of the tool suite that gets installed with the platform, as an example.
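For the record, the pieces of that command break down as annotated below. The pool name is swapped for a POOL_MNT variable (defaulting to /tmp only so the sketch runs anywhere); the bs/count values here are illustrative, not the ones from my actual test.

```shell
# POOL_MNT should be the pool's mountpoint (/tank1 in my runs);
# /tmp is only a fallback so this sketch works on any machine.
POOL_MNT=${POOL_MNT:-/tmp}
# if=  input: /dev/zero, an endless stream of zero bytes
# of=  output file on the pool
# bs=  size of each write; count= number of writes (total = bs * count)
dd if=/dev/zero of="$POOL_MNT/file.out" bs=1024k count=16   # 16MB total
rm "$POOL_MNT/file.out"   # delete the scratch file when done
```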
|
# ? Aug 9, 2011 05:36 |
|
McRib Sandwich posted:Bunch of dd stuff Don't forget to take the blocksize into account - by default dd transfers in 512-byte increments, and ZFS doesn't really like that, since writes that are less than a full block require the block to be read, modified, and written back to disk. Your filesystem appears to be using the default 128K recordsize, so you should get much better throughput transferring in increments that are a multiple of 128KB: code:
doctor_god fucked around with this message at 05:49 on Aug 9, 2011 |
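The same point can be sketched as the two runs below: identical data, different write sizes. The /tmp paths are stand-ins so the sketch runs anywhere; on a real ZFS dataset with the default 128K recordsize, the difference between the two should be dramatic.

```shell
# Same 1MB written two ways: many tiny sub-record writes vs. a handful
# of full-record writes. The second avoids the read-modify-write penalty.
dd if=/dev/zero of=/tmp/bs512.out  bs=512  count=2048   # 2048 x 512B = 1MB
dd if=/dev/zero of=/tmp/bs128k.out bs=128k count=8      # 8 x 128K    = 1MB
rm -f /tmp/bs512.out /tmp/bs128k.out
```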
# ? Aug 9, 2011 05:43 |
|
Shilling 2: Electric Boogaloo Not only is my old NAS still ready for five drives of fun, just waiting for you to buy it, but I've put up a Chenbro server chassis that holds 4 hotswap SATA drives and a mini-ITX board for a bunch off retail. SA Mart
|
# ? Aug 9, 2011 06:20 |
|
doctor_god posted:Don't forget to take the blocksize into account - by default dd transfers in 512 byte increments, and ZFS doesn't really like that, since writes that are less than a full block require the block to be read, modified, and written back to disk. Your filesystem appears to be using the default 128K recordsize, so you should get much better throughput transferring in increments that are a multiple of 128KB: Interesting. Is 128K a reasonable default zpool block size for general use? The end goal here is to present a zvol over iSCSI to my Mac, since Nexenta doesn't come with AFP / netatalk rolled in. As far as I can tell, the default block size on HFS+ is 4K for disks larger than 1GB, and I can't find a straightforward way to change that in Disk Utility. Should I specify a 4K block size for my zpool, then? That seems way small for ZFS stripes, but 128K seems extremely wasteful as an HFS+ block size. Admittedly I'm out of my element when it comes to this sort of tuning, but something about this block size discrepancy doesn't sound quite right to me.
|
# ? Aug 9, 2011 06:36 |
|
Turns out I was just loving up the path to my zfs volume. Command makes sense again. Anyway did some things. Not really sure what to make of all this. code:
Oh, hey, there's a dd bench thing too. code:
Galler fucked around with this message at 07:07 on Aug 9, 2011 |
# ? Aug 9, 2011 07:00 |
|
McRib Sandwich posted:Interesting. Is 128K a reasonable default zpool block size for general use? The end goal here is to present a zvol over iSCSI to my Mac, since Nexenta doesn't come with AFP / netatalk rolled in. As far as I can tell, the default block size on HFS+ is 4K for disks larger than 1GB, and I can't find a straightforward way to change that in Disk Utility. Should I specify a 4K block size for my zpool, then? That seems way small for ZFS stripes, but 128K seems extremely wasteful as an HFS+ block size. Admittedly I'm out of my element when it comes to this sort of tuning, but something about this block size discrepancy doesn't sound quite right to me. I don't have any direct experience with iSCSI (I'm just sharing my pool via Samba), but I'd expect the defaults to be OK since 128K is evenly divisible by 4K. Most software will probably deal with data in chunks larger than the filesystem blocksize - Windows seems to use 16K chunks when I'm copying data on and off the NAS, and it's still fast enough to saturate GBit Ethernet.
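The divisibility point checks out with a quick bit of shell arithmetic (numbers from the posts above):

```shell
recordsize=$((128 * 1024))   # ZFS default recordsize, 128K
hfs_block=4096               # HFS+ allocation block on volumes over 1GB
echo $((recordsize % hfs_block))   # prints 0: records are whole multiples
echo $((recordsize / hfs_block))   # prints 32: blocks per ZFS record
```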
|
# ? Aug 9, 2011 07:22 |
|
code:
quote:Most important values are: Sequential read and write values (Block) (Seq-Write & Seq-Read)
|
# ? Aug 9, 2011 13:23 |
|
Galler posted:
What params did you pass to get this nice output?
|
# ? Aug 9, 2011 15:01 |
|
That's how napp-it handles that bonnie benchmarking thing. Napp-it's GUI makes it really easy. Just push the benchmark button and wait.
|
# ? Aug 9, 2011 15:37 |
|
Galler posted:That's how napp-it handles that bonnie benchmarking thing. Ah, of course. e: got html script working: movax fucked around with this message at 16:06 on Aug 9, 2011 |
# ? Aug 9, 2011 15:42 |
|
Feeling a bit guilty here, feels like I killed this thread with all the talk of benchmarks. Anyway, I have done still more testing -- installed Solaris and Nexenta Core on different disks, put napp-it on top of each and ran bonnie++ on the system. In both Solaris and Nexenta, bonnie++ said that I did better than 135MB/s sequential read, and 121MB/s sequential write. I'm still seeing way crappier performance when I actually go to *use* this thing, though, instead of just benchmarking it. I've noticed a particular behavior when doing copy tests that I thought was normal for ZFS, but now I'm beginning to wonder. When I copy a large file over, I'll see lots of progress in the Mac OS X copy window all at once, then it'll suddenly stall. This is exactly concurrent with the disks making write noises. I had just figured that ZFS was caching to RAM and then flushing to disk during these periods. Is this expected behavior? The finder copy progress pretty much halts any time the disks themselves are getting hit, which seems out of sorts to me.
|
# ? Aug 11, 2011 03:39 |
|
I just sent a 45.9 GB (49,359,949,824 bytes) file from win 7 to my microserver and it took 9 minutes. So 85 MB/s I guess. It did slow down a bit once it got to about 8 GB transferred which I suppose would make sense if ZFS uses RAM as a buffer (8 GB RAM), but it only slowed about 10 MB/s as opposed to coming to a screeching halt.
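For what it's worth, the quoted figures work out a touch higher than 85MB/s; a rough check, using the byte count and time from the post:

```shell
bytes=49359949824   # size reported by Windows
secs=$((9 * 60))    # 9 minutes
echo $((bytes / secs / 1000000))   # ~91 in decimal MB/s
echo $((bytes / secs / 1048576))   # ~87 in binary MiB/s
```

Either way, that's a healthy fraction of gigabit line rate.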
|
# ? Aug 11, 2011 04:17 |
|
I'd like some help picking parts for my DIY NAS. I've spent the past few days considering a bunch of options like Atom/Zacate and decided that for the availability of parts and cost, it's really much better sticking with micro-ATX for now. I went for parts that are as cheap as possible without buying no-name/dubious stuff, hence AMD over Intel, since i3s by themselves cost twice as much as that AMD proc. I'm in Brazil so these prices might seem a bit scary to you; that's normal, unfortunately. It used to be much worse. Here's what I have so far: Athlon II X2 240 Box - R$ 96,96 (61 USD) Gigabyte GA-880GMA-UD2H - R$ 280,00 (172 USD) 4x Seagate ST2000DL003 (Barracuda Green 2TB 5900 RPM 64MB Cache SATA 6.0Gb/s 3.5) - 4x R$ 183 (112 USD) - R$ 732 (450 USD) CoolerMaster TM-350-PMSR (350 W) PSU - R$ 80,00 (49 USD) 2x 2 Gb Crucial CT25664BA1339 - 2x R$ 55 - R$ 110 (68 USD) [I know, overkill] Case: TBD Total so far: USD 800 (R$ 1300) Sounds steep -- believe me, I know -- but the HP ProLiant N36L I was looking at to serve as the *basis* of my setup costs 860 USD by itself here. There are two things I'm torn about: whether I can find a cheaper motherboard that's as good as that Gigabyte one, and what PSU to pick. If someone could lend a hand and look through Boadica (our stuck-in-1999-webdesign version of newegg) for something that'd be better, I'd appreciate it. Here's the links for AM3 motherboards and PSUs. Even though the site's in Portuguese it's easy to navigate because there's a dropdown to the right of "Modelos" where everything that's being sold shows up. (you click "Listar" to... list ) If you think this setup just sucks altogether, please say so as well. Is AMD fundamentally wrong here? I'm expecting low power consumption during most of the day, as it will be idling.
|
# ? Aug 11, 2011 04:32 |
|
clavicle posted:I'd like some help picking parts for my DIY NAS. I've spent the past few days considering a bunch of options like Atom/Zacate and decided that for the availability of parts and cost, it's really much better sticking with mini-ATX for now. I went for parts that are cheap as possible without buying no name/dubious stuff, hence AMD over Intel since i3s by themselves cost twice as much as that AMD proc. I'm in Brazil so these prices might seem a bit scary to you; that's normal, unfortunately. It used to be much worse. Jesus - you are getting loving ripped. Just order the N36-L from america (110v) or australia (v240) and use a mail forwarder http://www.shipito.com/
|
# ? Aug 11, 2011 04:54 |
|
NextWish posted:Jesus - you are getting loving ripped. Just order the N36-L from america (110v) or australia (v240) and use a mail forwarder http://www.shipito.com/ I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes.
|
# ? Aug 11, 2011 05:06 |
|
clavicle posted:I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes. Or! Get it shipped to Bolivia, Peru, etc. and catch a return flight (or drive) to pick it up.
|
# ? Aug 11, 2011 06:06 |
|
clavicle posted:Sounds steep -- believe me, I know -- but the HP ProLiant N36L I was looking at to serve as the *basis* of my setup costs 860 USD by itself here. drat! The N36L just dropped to eur 155 ($220) in Germany. That is post taxes.
|
# ? Aug 11, 2011 13:57 |
|
heeen posted:drat! The N36L just dropped to eur 155 ($220) in Germany. That is post taxes. I'm considering getting it anyway, but it's a lot more expensive than that on newegg ($300 and $325 versions).
|
# ? Aug 11, 2011 19:37 |
|
clavicle posted:I'm expecting low power consumption during most of the day, as it will be idling. 40W with the drives active, 50W when transferring stuff. code:
|
# ? Aug 11, 2011 19:49 |
|
Howdy folks, looking for a weekend project for my NAS/HTPC! I have a 2GHz AMD dual-core (socket 939) with 2GB RAM, two 2TB HDDs, and a GeForce 9400. The thing's already built and ready to go. I'm well versed in Linux and looking for tips/suggestions on the best distro to go with. It may also be that the best "distro" is Windows. You let me know. Here are my needs: 1. RAID 1 with hot-swap failover (the drives in the system are in hot-swap cages) 2. SMB sharing 3. XBMC 4. Good sleep or hibernate support (both is ideal, one or the other is acceptable) 5. Bluetooth keyboard/mouse support I want this to act as little like a Linux box as possible in the end. When I turn on the machine, XBMC should show up. When the box comes back from sleeping and wakes up (via, say, Wake-on-LAN or you know, a power button), XBMC should fire up. When I hit up its web port, I want as NAS-like an interface as possible. It'd be pretty sweet, too, if I could configure the box to go to sleep when I hit the power button. I'd still love the ability to shell into the box as a Linux playground for anything I may want (if I do go with Linux), but that's tertiary, as before this I was considering a dedicated NAS box. (I was considering the Synology DS211j for its excellent power management) What distro, programs, or (better yet!) guide would get me off on the right foot? LiquidRain fucked around with this message at 22:32 on Aug 11, 2011 |
# ? Aug 11, 2011 22:30 |
|
XBMC Live, installed to a hard drive, is just a well-done Ubuntu/XBMC integration. There's nothing about it that should preclude you from setting up a mdraid RAID1 array to install everything to, or SSHing in and installing samba for SMB/CIFS. I can't remember the last time I dealt with bluetooth keyboard and mouse on Linux, if I ever actually did, and I can't comment on sleep/hibernate. That said I know I've seen reviews of some of the little remote-style keyboard/mouse combos that have had positive Linux experiences.
|
# ? Aug 12, 2011 00:46 |
|
clavicle posted:I know, but it's like this for almost anything that comes from abroad, we have high taxes and import tariffs. If I use a mail forwarder and it gets stopped by customs I have to pay 60% in taxes. Even if you pay 60% extra in taxes, it will still probably come out cheaper than building it yourself. The entire box itself is about the same price as the PSU/CPU/mobo if you get it from Newegg, cheaper if you get it from somewhere else, and it wouldn't add much more to the cost when you add in the case too. Then all you need is the RAM and HDDs. Also, see if your country has an HP site - quite a few Australians bought one that way after the main stores ran out of stock, and the price that way might be competitive.
|
# ? Aug 12, 2011 09:28 |
|
Can anyone recommend a video card to suit the N36L? Picked one up the other day and I'm really impressed with it. Love the slide out motherboard.
|
# ? Aug 12, 2011 11:40 |
|
|
Puddin posted:Can anyone recommend a video card to suit the N36L? Picked one up the other day and I'm really impressed with it. Love the slide out motherboard. Either the ATI 5450 or the Nvidia 210/520 are the most popular options. https://spreadsheets1.google.com/spreadsheet/ccc?authkey=CImGkocJ&hl=en&key=tHx2pDE7M2SIWsbq0hs352A&hl=en&authkey=CImGkocJ#gid=0 That's from the first post of the OCAU owners thread, and it gives a pretty damned good cross-section of the kinds of things people are using it for and what gear they're running it with. The biggest thing to remember with the MicroServer is that it doesn't come with audio ports, so you will either need to use a separate sound card (possibly USB-based depending on your OS) or have HDMI audio for your video card and run the sound through that.
|
# ? Aug 12, 2011 13:41 |