BnT
Mar 10, 2006

I'm looking to move my torrent client to a dedicated box instead of running my workstation 24/7, oh and I'm out of free space. The Synology/QNAP stuff looks good, if expensive.

What about FreeNAS? Can it handle 300 torrents at 2TB+? Does it support RSS downloading of torrents?

BnT
Mar 10, 2006

Longinus00 posted:

FreeNAS isn't a bittorrent program. You should be able to install either transmission or rtorrent for your bittorrent needs.

I understand that it's not a program but an OS. As an OS/NAS it would meet my needs (storage, streaming). I'm wondering if any goons use FreeNAS as a bittorrent RSS downloader as well? Are any of the available torrent clients stable?

BnT
Mar 10, 2006

Longinus00 posted:

What do you mean by stable?

I'd be looking for a torrent client that could handle seeding a couple hundred torrents without crashing, or needing to be restarted/babysat. Typically only one to five of these are actually active at a time.

As for downloading, even if it's a script that runs periodically on the NAS against my RSS feeds, that'd be fine. As long as I don't have to manually upload a torrent to the NAS every time I want to download something.
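To give a sense of the level of "script" I'd settle for -- something cron'd hourly that feeds a watch directory transmission or rtorrent is already watching (the feed URL and paths here are just placeholders):

code:

#!/bin/sh
# hypothetical feed URL and watch directory -- adjust for your tracker/client
FEED="http://example.com/rss"
WATCH="/tank/torrents/watch"
# grab any .torrent links from the feed and drop new ones into the watch dir
curl -s "$FEED" | grep -o 'http[^"<]*\.torrent' | while read -r url; do
    file="$WATCH/$(basename "$url")"
    [ -f "$file" ] || curl -s -o "$file" "$url"
done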

Basically I'm just looking to deliver and seed TV content on a quiet, low-power, and stable platform without much daily micromanagement.

BnT
Mar 10, 2006

illamint posted:

Have you tried using a utility like iperf to check end-to-end performance independent of the filesystem?

This guy shows how to install iperf for FreeNAS 8.0. Run jperf on your workstation and go to town. This could quickly help you determine whether it's a network issue or not.
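Roughly, the test looks like this (the address is a placeholder for your NAS):

code:

# on the FreeNAS box, once iperf is installed
iperf -s

# on the workstation (jperf is just a GUI wrapper around the same client)
iperf -c 192.168.1.50 -t 30

Something near line rate (~940Mbit/sec on gigabit) means the network is fine and the problem is disk/filesystem; much lower and it's time to look at cabling, switch, or NIC drivers.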

BnT
Mar 10, 2006

tonic posted:

Also, the adapter is currently connected via the PCI-E x1 port, would switching it to the x16 make any difference?

Even the slowest PCIe v1.0 1x card should be given a 250MB/sec bandwidth lane by the motherboard. Assuming you have a PCIe v2.0 board, you should be getting a 500MB/sec lane in the 1x port. I highly doubt moving it would make any difference.

What OS/filesystem are you using? Are there any differences between the drives from the OS's point of view? For instance, does write caching get disabled when the drive is connected to the PCIe card? If not then I'd look at firmware/drivers next, I guess.
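If it happens to be Linux, comparing the two drives is quick (device names are placeholders):

code:

# write-cache setting per drive
hdparm -W /dev/sda
hdparm -W /dev/sdb

# raw sequential read speed, onboard vs PCIe-attached
hdparm -t /dev/sda
hdparm -t /dev/sdb

On Windows it's the drive's Policies tab in Device Manager instead.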

BnT
Mar 10, 2006

Henrik Zetterberg posted:

I'm kind of confused as to why it seems there's 2 unallocated partitions in Disk1, both of seemingly random size.

Is something dicked up, or should I just have some patience and wait the couple days for initialization to finish?

You need a GPT partition table. The default MBR partition table on Windows 7 is limited to 2TB of addressable space. If that doesn't work from the GUI, you'll probably need to look up 'diskpart'. Also, it's probably going to take at least a day to finish initializing, but you should be able to use it in the meantime; it'll just be slow.
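If Disk Management won't do the conversion, the diskpart version is roughly this (the disk number is whatever 'list disk' shows for the new drive) -- note that convert only works on an empty disk and clean wipes it, so only run this against the new drive:

code:

diskpart
list disk
select disk 1
clean
convert gpt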

BnT
Mar 10, 2006

Galler posted:

One thing to note is that while it will run ESXi just fine and you can fill it up with VMs no problem you can't (at least as far as I can tell - if someone has figured out how please let me know) pass the drives directly to the VMs. So depending on your NAS software it may or may not be able to do a software RAID of your drives IF the NAS software is in a VM.

I've read that with the Microserver's SATA controller it is possible to do RDM in VMware, although probably not supported. This would be mapping the drives and not the controller directly into the VM. I don't have one of these microservers yet, but I'll check it out for sure when I find the budget. I'm fairly sure you'd still need to have a local storage disk in order to store the VM definitions and device mappings on, so at the minimum you'd need something like this:

Microserver with 4GB RAM
USB thumbdrive with ESXi installed onto it
One local disk imported into the VMware host as a local storage
Additional disks mapped into the FreeNAS or whatever VM with RDM

This guy has a pretty good guide to making the RDM portion happen. Edit: apparently when making the actual RDM, you should use vmkfstools -r instead of vmkfstools -z or you'll encounter issues with ZFS.
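For reference, the command itself ends up looking roughly like this (the device identifier and datastore paths are placeholders -- yours will be the long naa.* names under /vmfs/devices/disks):

code:

# create a virtual-compatibility RDM pointer file for one physical disk
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk

# then attach disk1-rdm.vmdk to the NAS VM as an existing disk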

I've been curious about this configuration as well, since it really seems that ZFS is best used when it has direct control over the underlying storage.

BnT fucked around with this message at 22:02 on Sep 8, 2011

BnT
Mar 10, 2006

atomjack posted:

Depending on how many ports you want, and what kind of PCI Express slots you have available, I'd recommend either this 4-port PCIe x1 card, or this 2 SAS-port (up to 8 devices) PCIe x8 card.

If anyone is in the market for an 8-port card that supports 3TB drives, the IBM M1015 is an OEM LSI card and can be easily flashed into a nice JBOD/IT mode card. They're usually pretty cheap on eBay. I'll report back when I brick mine!

BnT
Mar 10, 2006

I'm looking for a hand with ZFS sharing. I'm not sure if this is possible, but I'm trying to make some shares like this:

tank/media shared via NFS
tank/media/video shared via SMB
tank/media/audio shared via SMB

Right now I just have tank/media shared via both NFS and SMB and it works fine, but I'd like to have individual video and audio SMB shares to only expose relevant data to applications.

Would I need to make nested datasets/filesystems to make this happen? If so, is it possible to move data instantly from a parent dataset into a nested dataset?
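For what it's worth, here's roughly what I'm picturing if nested datasets are the answer, assuming a Solaris-type box where the shares are just dataset properties (the names are mine):

code:

zfs create tank/media/video
zfs create tank/media/audio
zfs set sharenfs=on tank/media
zfs set sharesmb=name=video tank/media/video
zfs set sharesmb=name=audio tank/media/audio

The part I'm least sure about is the data move -- my understanding is that moving files across dataset boundaries is a real copy, not an instant rename.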

BnT
Mar 10, 2006

Galler posted:

e: I love the gently caress out of my Microserver. I just wish I could afford a decent RAID/controller/JBOD/whatever card that would work in the server and in ESX. I sure as gently caress don't need more storage space; it's just that it would be fun to do it.

I'm very happy with my cheap raid card: IBM m1015, an 8-port SAS card which supports 3TB drives. I paid $70 for a used one on eBay. I flashed it back to an OEM LSI BIOS (see my previous post), and it's playing nice with ESXi and ZFS in passthrough. I'm not using the Microserver though, and not sure if you can get VT-d working on it.

BnT
Mar 10, 2006

adorai posted:

Are you passing through the whole card or just doing RDM for the disks?
ZFS doesn't use the raid functionality; he should be able to import his ZFS array with any card that supports the same functionality he is using to get the disks passed through.

I just pass the whole card into an OpenIndiana guest. Everything just showed up and worked.

Using that storage back in the ESX host seems to work best for me with NFS. Using iSCSI makes the ESX host take forever to boot up while it times out finding the target, where it just shows the NFS storage as being offline until that guest boots up.

BnT
Mar 10, 2006

Cool Matty posted:

I'm thinking some sort of NAS with easy expandability is exactly what they need. Am I wrong? I'm really looking for some good recommendations here, along with what sort of software I could use to automate this process for them.

You mention cheap and terabytes, but not a ballpark on either. Going on that bit, you might want to look at the Synology 1511+, or match a budget to their other products. They have good reviews, lots of development, and would probably fit your requirements.

BnT
Mar 10, 2006

Jolan posted:

Thank you for your reply. I just noticed that the latest DNS-320 firmware update is supposed to have added support for drives up to 3TB, which is another plus. However, quite a few issues seem to remain.

Do you have any other suggestions aside from the DNS-320 that would fit my requirements?

You might want to look at the Synology lineup too. They're in the budget and apparently do the Time Machine thing. They also have a ton of cool features. Disclaimer: I don't know anything about modern Macs.

BnT
Mar 10, 2006

Is there a recommended frequency to scrub a ZFS pool? Do you just cron it at some point? I'm on Openindiana/RAIDZ2 if that matters.
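If it is just cron, I'm assuming something like this in root's crontab (pool name and schedule are only examples):

code:

# scrub the pool every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank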

BnT
Mar 10, 2006

FISHMANPET posted:

:psyduck:

Is that a thing? Is that a thing people do?

I do this and it's amazing. I have the following hardware config:

Xeon E3-1230
Supermicro mATX C202
16GB ECC
4GB USB thumb-drive with ESXi v5
40GB SSD ESX local datastore/host cache on internal SATA
IBM m1015 SAS/SATA card with IT firmware (vt-d passthrough into Guest #1)
- 7 x Hitachi 3TB 5400-RPM drives (RAIDZ2)
- Intel 20GB SLC SSD (ZIL)

Guest #1:
OpenIndiana/Napp-It, 1vCPU, 6GB dedicated RAM, ~16GB vdisk
ZFS RAIDZ2 with ZIL on the SSD
Lots of NFS and SMB exports, including an NFS export for the main ESX datastore

I use NFS for exporting the storage to ESX, as ESX freaks out and takes forever if it boots when an iSCSI target isn't available. NFS is a lot more graceful, and the other guests are just unavailable until Guest #1 starts and the NFS datastore is online. Large NFS reads and writes are absurdly fast (300MB-500MB/sec) for guests, and easily saturate the gig interface for SMB or NFS over the network.
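The wiring for that is roughly the following (dataset name, addresses, and datastore label are placeholders for my setup):

code:

# on the OpenIndiana guest: export a dataset for ESX, with root access for the host
zfs create tank/esx
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/esx

# on the ESXi host: mount it as an NFS datastore
esxcfg-nas -a -o 192.168.1.20 -s /tank/esx zfs-datastore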

I'm using this thing to store and process lots of video and it's perfect for these large, sequential reads and writes. I imagine if you had something with more random I/O (like a database or an Exchange server) it might be a bit of a dog unless you got faster drives.

Lastly, I'm always at or near 100% memory usage on ESXi with 16GB. At some point I might grab another 16GB but I have some bills to pay at the moment, heh.

BnT
Mar 10, 2006

"[oMa posted:

Whackster"]
Is there something obvious I'm missing that I should be looking at? Is moving the ZIL out to it's own SSD going to make a huge difference?

Does your network support jumbo frames? That might increase throughput by a good bit. ZIL will have no impact on reads, only writes. Also, you are on gigabit ethernet, right?
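If both ends are Linux, checking is quick (interface name and address are placeholders, and your switch has to pass jumbo frames too):

code:

# bump the MTU on the NIC
ip link set dev eth0 mtu 9000

# verify jumbos actually survive end to end (8972 = 9000 minus IP/ICMP headers)
ping -M do -s 8972 192.168.1.50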

BnT
Mar 10, 2006

zero0ne posted:

If you see an 8-port SAS card under ~400 bucks, and it is based on an LSI chip (the 1068E I think), it will only support drives up to 2TB per channel. This is an issue with the "outdated" LSI chip most of them are based on.

The exception to this is the IBM m1015 (based on the LSI SAS2008 chip), which supports larger drives. You can grab these for about $70 used/"new pull" or $170 new. You'll have to buy some cables too and flash them, but after all that hassle they're excellent JBOD cards for ZFS installs and reasonably priced too.
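The crossflash is roughly this shape (exact file names come from the LSI firmware package and the guide, and the SAS address is printed on the card's sticker):

code:

rem from a DOS boot stick: wipe the IBM firmware first
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
rem reboot, then flash the LSI 9211-8i IT firmware and restore the SAS address
sas2flsh -o -f 2118it.bin
sas2flsh -o -sasadd 500605bxxxxxxxxx

Skipping the boot ROM (no -b mptsas2.rom) is fine if you don't need to boot from the card, and it shortens POST.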

edit: added a link to the flashing guide I used

BnT fucked around with this message at 16:23 on Jan 7, 2012

BnT
Mar 10, 2006

TerryLennox posted:

The room where the NAS is usually remains at 28C to 30C.

That's 82F to 86F for us yanks. Anything you can do about that? If it's in a server room, can you move it to a lower position? Aside from that, keep the fans clean and make backups.

BnT
Mar 10, 2006

Some Hitachi blog:

quote:

“Hitachi CoolSpin drives deliver a substantial power savings over their desktop-class counterparts, simplifying CE product design and paving the way for a new class of set-top boxes and DVRs that are smaller, quieter and more reliable for consumers,” said Larry Swezey, director, Consumer and Commercial HDD Marketing and Strategy, Hitachi Global Storage Technologies.

I think this is how a marketing guy says "green"? But yeah, basically a motor that uses less power.

I'm using seven of the 3TB Hitachi 5400s with good success. None were DOA, none have failed, and they're performing very well for me in ZFS RAIDZ2 on Solaris.

BnT
Mar 10, 2006

lenitic posted:

I have a mac which is chock full of photos.
...
Another biggish disk to act as a Time Machine destination for the mac's main drive

You should be able to cover the Time Machine with FreeNAS. I have one working with OpenIndiana although it took a bit of reading to get it running.
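If it helps, the piece that took the reading was netatalk; in newer netatalk 3 syntax the relevant bit of afp.conf is roughly this (the path and size cap are mine):

code:

[Time Machine]
path = /tank/timemachine
time machine = yes
vol size limit = 500000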

My only suggestion is to go with a true backup solution for your irreplaceable photos. One copy on your FreeNAS and one on whatever cloud-backup product. You'd only have to spend the backup bandwidth once per picture, and you're very unlikely to lose all your copies if something bad happens.

BnT
Mar 10, 2006

3TB Reds are $139.99 on Amazon today.

BnT
Mar 10, 2006

wang souffle posted:

I'm currently running OpenIndiana on an i3 board with an M1015 and 8x 2TB drives in a raidz2 setup. There are a couple VMs running on top using KVM, but that's been a little unwieldy.

I'd like to move to a setup with VT-d, ECC, and ESXi. I'd pass through the M1015 to an OpenIndiana guest (or other ZFS-friendly OS) to host the array. That would still require the ZFS OS to be running if it was hosting the VM datastore, however. Does anyone have a better suggestion on how to make VMs easier to manage?

What's the preferred chipset for VT-d these days? I read somewhere that AMD includes it on most chipsets but Intel is all over the place.

Is ECC overkill?

Have you considered giving Linux/KVM/ZFSonLinux a shot first? I understand that this doesn't directly address your issues with KVM, although Linux implementations might have some better KVM management options.

I recently moved from an ESXi/OpenIndiana/M1015/raidz2 to CentOS/ZFSonLinux. I'm very pleased with performance, have fully redundant hypervisor storage, and have more SATA ports available to ZFS than I did before. zpool import was a breeze, and VMDK files were converted without too many headaches with qemu-img. The built-in KVM management in CentOS (virt-manager via X11) is fine for my minimal uses and I don't have any Windows VMs for vSphere management anymore. I'd probably recommend some other distro to anyone else, but I've been using RHEL forever for work. I'm using a Supermicro C202 chipset with an Intel E3-1230, had no issues with VT-d, and found the IPMI to be very useful for headless operation.
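For the VMDKs the conversion was just qemu-img per disk, roughly like this (paths and names are examples; raw output also works if you don't care about thin provisioning or snapshots):

code:

# pull the existing pool in on the new OS
zpool import tank

# convert each VMware disk image into something KVM-native
qemu-img convert -f vmdk -O qcow2 guest1.vmdk /var/lib/libvirt/images/guest1.qcow2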

In my view if ZFS isn't overkill, neither is ECC.
