GokieKS
Dec 15, 2012

Mostly Harmless.
This is half NAS related and half home networking related, but I'm curious what configurations people use and what performance they get from their NAS or file servers on Mac OS X (if there are many at all who have such a setup).

I use ZFS on Linux on Ubuntu Server 12.04 LTS on my file server (currently just my last desktop repurposed - a Phenom II X3 720 and 4GB of DDR2, with 6 2TB drives in 2 RAID-Z vdevs; it's due for an update and will be replaced by a Haswell Xeon E3 build with ECC soon), and the one aspect that I've never been satisfied with is file sharing with Macs. Until OS X 10.9 Mavericks, I used NFS (because SMB performance was beyond terrible), and after way too much time spent researching, testing, and "tuning", I was able to get OK performance (peak transfer rates of ~80MB/s over GbE) using async mode and a larger buffer size. This was still not great (by comparison, a Windows client would be able to reach 120MB/s via SMB), but it was usable enough, even if performance wasn't terribly consistent and there were other issues (I could never get user mapping to work, so security is limited to simply allowing connections only from my subnet, and sometimes file renames would just fail, or some files wouldn't appear in Finder).
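For reference, the "tuning" boiled down to something along these lines - placeholder paths, subnet, and buffer sizes here, not my exact config:

code:
# /etc/exports on the Ubuntu server - async trades some safety for speed
/tank/media 192.168.1.0/24(rw,async,no_subtree_check)

# then on the Mac, mount with larger read/write buffers
sudo mount -t nfs -o rsize=65536,wsize=65536 server:/tank/media /Volumes/media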

After upgrading to OS X 10.9, however, I discovered that Mavericks pretty much broke NFS auto-mounting. It did also bring upgraded SMB file sharing, and after a lot more time spent on researching, testing, and tuning (force SMB2, force TCP_NODELAY, set specific send/receive buffers), I'm able to get usable performance out of SMB (peak transfer rates of ~70MB/s), and that's what I'm currently using. It's far from ideal though: in addition to the subpar performance (compared to SMB from a Windows client), there's also always a weird delay when copying files using Finder - as far as I can tell, the transfer progress stays stuck at 0 for the first hundred or so MB of data, and then suddenly catches up. This only seems to happen on transfers between the Mac and the file server - transfers between the Mac and a Windows file share work fine. And it happens with both my Hackintosh (i3 3225/GA-H77N using onboard Realtek GbE) and my 2012 rMBP (using the TB-GbE adapter), so it doesn't seem to be a network-hardware-specific thing on the Mac client side.
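For anyone wanting to try the same thing, the Samba-side tweaks look roughly like this (a sketch of the options I mean, not my exact smb.conf - tune the buffer values to your own network):

code:
# in the [global] section of /etc/samba/smb.conf
max protocol = SMB2
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072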

I have not tried AFP, mostly because I hate the idea of having to set up yet another protocol for just one OS (even though that's what NFS ended up being in my case - at least it would also be usable if I added another *nix box), not to mention it's basically a dead protocol at this point - Time Machine backups are the only real reason to use AFP now, as even Apple is pushing SMB(2) as of 10.9. But if I can't figure out how to get SMB to work better, I might just have to when I rebuild my file server. iSCSI is also not an option, as I need the shares to be available to multiple clients (and I don't want to pay for an iSCSI initiator that somehow Apple still hasn't added to OS X).

Anyway, TL;DR: What do people use with Macs, and what kind of performance do you get out of them?


GokieKS
Dec 15, 2012

Mostly Harmless.

Ninja Rope posted:

I use SMB2 over 802.11n and get ~75mbit to a FreeBSD/ZFS server. Haven't noticed any issues renaming files or getting progress bars to work.

Well, with wireless it's really kind of a moot point, as even on AC you're not realistically going to get better than maybe 300Mbps in a best-case scenario.

spoon daddy posted:

Just get netatalk 3.0, ppa is located at https://launchpad.net/~jofko/+archive/ppa

It is drop dead simple to setup and configure and I get ~100MB/s with afp. It has the added bonus of being able to emulate a time machine share which I use for backing up my mac.

Yeah, netatalk would be what I'd use for AFP. I really would prefer to not use AFP for the reasons I previously mentioned, but at this point everything I've been able to find seems to indicate it's probably my only real choice for getting good performance, so pragmatism will probably win out and I'll set that up when I do my file server refresh.

GokieKS
Dec 15, 2012

Mostly Harmless.

Ninja Rope posted:

Sorry, that was a typo. I get ~75 megabytes per second through wireless, which is comparable to what I get wired from Windows or OSX.

I'm real curious as to how you're getting 75MB/s over 802.11n, which has a maximum theoretical throughput of 450Mbps (56.25MB/s) in 3x3 mode.

Edit: actually, there is 4x4 equipment with a theoretical throughput of 600Mbps, which is exactly 75MB/s, but there's still no realistic way of getting that in real-world usage.

GokieKS fucked around with this message at 23:47 on Jan 19, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

Harik posted:

Speaking of alternates, I'm assuming DELL/HP etc have versions of the same LSI card, what are their part numbers to watch ebay for?

Dell's is the PERC H200, but not sure about HP. It should be noted though that they have two versions (Integrated / Adapter, and Modular) - the former (which is usually just referred to as the H200) is what you want. The Modular version is pretty obviously different and not designed to fit into a PCIe slot, so even if the listing doesn't specify it's still very easy to tell.

E: Here's what looks like a list of cards based on the LSI controllers:

LSI Logic Cards

LSI9240 series use iMR mode (integrated MegaRaid)

LSI9240-4i Supports RAIDs 0, 1, 10, 5, 50 and JBOD
LSI9240-8i Supports RAIDs 0, 1, 10, 5, 50 and JBOD
LSI920x/921x series are LSI HBA / Fusion MPT 2.0 all in IR mode support RAIDs 0, 1, 1e, and 10, IT mode = passthrough only

LSI9200-8e
LSI9201-16e SAS2116 version
LSI9201-16i SAS2116 version
LSI9202-16e Dual SAS2008 controller using PCIe 16x
LSI9210-8i OEM version of LSI9211, vertical SAS ports
LSI9211-4i Horizontal SAS ports
LSI9211-8i Horizontal SAS ports
LSI9212-4i4e 1×4 SAS external port and 4x single internal SAS/ SATA 7-pin connectors

IBM SAS HBAs

IBM ServeRAID M1015 similar to LSI 9240-8i but the ServeRAID M1015 does not support RAID 5 unless you add the ‘Advanced feature key’ to enable it
IBM ServeRAID M1115 newer version of the IBM ServeRAID M1015
IBM 6 Gb Performance Optimized HBA (46M0912) – LSI-9240-8i (SSD enhanced)
IBM 6 Gb SAS Host Bus Adapter (46M0907) – LSI 9212-4i4e – 4x Internal SAS 2/ SATA III and 1x 4 SAS 2 SFF-8088 external connector.

Dell Cards

Dell PERC H200 ships with IT firmware but seems similar to the LSI 9211-8i
Dell Perc H310

Cisco Cards

Cisco UCSC RAID SAS 2008M-8i

Fujitsu Cards

Fujitsu D2607 – Rebranded LSI 9211-8i ?

Oracle (Sun) Cards

SUN SGX-SAS6-EXT-Z (p/n 375-3641) – LSI 9200-8e (external connectors)
SUN SGX-SAS6-INT-Z (p/n 375-3640) – LSI 9211/10-8i (internal connectors)

Intel RAID Cards

Intel RS2WC080 looks identical to the LSI 9240-8i and IBM M1015, but supports RAID 5 and RAID 50 like the 9240-8i.

Intel Proprietary PCIe x4 Cards

Intel RMS2AF040 (Proprietary PCIe 4x)
Intel RMS2AF080 (Proprietary PCIe 4x) As above but 8 port

Hewlett Packard HBAs

HP H220 Host Bus Adapter
HP H221 Host Bus Adapter (2x External SFF-8088 connectors)
HP H222 Host Bus Adapter
HP H220i Host Bus Adapter
HP H210i Host Bus Adapter

Supermicro Proprietary Format UIO Cards

SuperMicro AOC-USAS2-L8iR – 9240-8i spec’d but with 16MB cache and RAID 5 but no RAID 1E has IR firmware (UIO Card!)
SuperMicro AOC-USAS2-L8E – HBA version so most like a 9211-8i has IT firmware (UIO Card!)
SuperMicro AOC-USAS2-L8i – LSI 9240-8i spec’d no RAID 5 but does have RAID 1E has IR Firmware (UIO Card!)

Lenovo RAID Controllers

Lenovo 67Y1460 is a barely re-branded LSI 9240-8i.

GokieKS fucked around with this message at 11:24 on Jan 25, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

Pudgygiant posted:

What's the market for WD blues anyway? Green is for poors, black is for less than SSD speed but more than SSD storage, red is for NAS, RE is for enterprise, but I don't get where blues fit into that.

Blue is just their jack-of-all-trades drive for people who don't really know or care, and the one you're likely to find in OEM machines.

GokieKS
Dec 15, 2012

Mostly Harmless.

pro con posted:

Also, is RaidZ2 completely necessary? I understand the greater risk of array degradation, but I was hoping ZFS' error checking would cover for some of that. I'd rather get away with RaidZ1 for capacity if at all possible.

How important is your data? Or, if you do have a backup solution (and redundancy isn't one), how important is your time? With RAID-Z1, if a second drive in the array (vdev) fails after you've replaced the first failure but are still rebuilding, you're basically screwed. And remember that the bigger the drives, the longer that process takes, so the bigger the risk becomes.

GokieKS
Dec 15, 2012

Mostly Harmless.

DEAD MAN'S SHOE posted:

ZFS Raidz-2 question: can I solve this by recursively offlining each 512B configured drive, reconfiguring it then adding and resilvering? Been getting the warning since I upgraded FreeNAS.

I'm not sure if that will work, but even if it does it seems like an absolutely terrible idea as you would be spending at least 80 hours (assuming that 16h number shown represents the entire zpool) resilvering it 5 times. You really should back up the data (which you should be doing anyway), rebuild the zpool with proper ashift, and then copy it back.
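If there's another pool (or a scratch set of disks) big enough to hold everything, a snapshot plus zfs send/receive is the cleanest way to do that round trip - a rough sketch with made-up pool names:

code:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank
# destroy tank, recreate it with ashift=12, then send the data back the same way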

DEAD MAN'S SHOE posted:

I can take it as read that the drives in question cannot be configured to support the native block size then. Shame.

What? You just have to set the entire zpool to use 4K alignment (which is fine for 512B drives as well) when you create it:

code:
zpool create -o ashift=12 tank raidz2 <DRIVES>
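
And if you want to double-check what an existing pool was created with, the ashift shows up in the cached pool config (rough sketch - exact output varies a bit between ZFS versions):

code:
zdb -C tank | grep ashift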

GokieKS fucked around with this message at 02:56 on Jan 28, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

frunksock posted:

Any recommendations on cases or enclosures for 8+ 3.5" drives? I saw that Silverstone DS380 people on this page have been talking about, which looks really nice, but I might also like something a little less cramped for working in. I don't want a massive tower, but I also don't need one quite that compact.

The Fractal Design Define R4 has space for 8 drives. For more drives than that, you should probably consider a server chassis.

GokieKS
Dec 15, 2012

Mostly Harmless.
If it's Sundial Micro you're referring to, they are fine and have been around for a long time - I bought a Cooler Master case from them back during the ATCS line days (before many of that team left to form SilverStone).

And though the DS380 was first shown at Computex last year, it seems like it wasn't until around CES this year that it was actually ready for market, and so it doesn't look like it's made it to retail channels yet.

GokieKS
Dec 15, 2012

Mostly Harmless.

frunksock posted:

Yeah, that's the one. Okay, thanks. Has anyone actually received one from them, though? Or are they just listing it before they have actual stock?

Someone from this thread got theirs from them.

GokieKS
Dec 15, 2012

Mostly Harmless.

frunksock posted:

Back to the DS380, is there an ECC Mini-ITX motherboard with 9+ SATA and SAS ports that fits? Or an 8+ port SAS card that's either under 6" long or 2.35" tall? Apparently the E3C224D4I-14S won't fit, and I'd rather not get one of those Atom boards.

SuperMicro also makes a Haswell Xeon mITX board with a built-in LSI 2308, the X10SL7-F. No dual GbE NICs though, which is disappointing.

GokieKS
Dec 15, 2012

Mostly Harmless.

necrobobsledder posted:

That board is micro ATX. I have never seen an Intel board with 4 DIMM sockets that mini ITX. I saw there's some Avoton boards with CPUs that have slots for 4 SO-DIMMs though. Supermicro has a board that sorta meets all these requirements... but you will be paying out the nose to the point where you'd rather just build a micro ATX system or drop back to mini ITX with an add-on SAS controller.

http://www.supermicro.com/products/motherboard/atom/x10/a1sai-2550f.cfm

Oh, oops, I completely goofed. Yeah, a standard mITX board just really doesn't have enough space to include a built-in HBA. ASRock also makes an Avoton mITX board with 4 DIMM slots and a ton of extra SATA ports from Marvell controllers (which Silverstone actually specifically recommends for the DS380), and at $370 I guess it's not terrible for what you get, but... it's still Avoton.

frunksock: if I'm reading the DS380 component size restrictions right, you should be able to use a low profile HBA with it. The IBM M1115 that I have here is about 57mm / 2.25 in tall, and I think most of the other common LSI2008 rebrands should be around the same size.

GokieKS fucked around with this message at 02:30 on Feb 5, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.
I'd be much more wary of the SATA ports being off a Marvell controller than of them being 3Gbps.

GokieKS
Dec 15, 2012

Mostly Harmless.
Heh, I was actually just thinking about RAID-Z3. I finally ordered the MB/CPU/RAM for my new file server, as Newegg had/has the SM X10SL7-F and E3 1220v3 as a combo deal (saving about $40), and I'm trying to plan out my build and future expansion to determine what case to get.

The board comes with 6 SATA ports off the Intel controller and 8 off the LSI 2308, and I have an M1115 for another 8, giving me a total of 22 SATA ports. As I don't think I'll be adding any more HBAs, that works out nearly perfect for a NORCO RPC-4020/4220 with 20 hot swap bays (and the other 2 ports can be used for ZIL + L2ARC or an OS drive).

My current file server has 6 x 2TB drives in 2 3-disk RAID-Z vdevs, and while I'm definitely going to move the data off and rebuild it as a higher-redundancy vdev, I don't want to add any additional drives of that size, so it's going to be a 6-disk RAID-Z2. That leaves me with 14 other ports, which means I can do either 2 7-disk RAID-Z3 vdevs, or 2 RAID-Z2s, one with 8 disks and the other with 6. I haven't quite decided which yet.
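In zpool terms either layout is just multiple raidz vdevs in the same pool, e.g. something like this for the RAID-Z3 option (made-up device names - in practice you'd use /dev/disk/by-id paths):

code:
zpool create tank raidz2 d1 d2 d3 d4 d5 d6          # rebuilt 6 x 2TB RAID-Z2 vdev
zpool add tank raidz3 d7 d8 d9 d10 d11 d12 d13      # first 7 x 3TB RAID-Z3 vdev
zpool add tank raidz3 d14 d15 d16 d17 d18 d19 d20   # second one, added down the road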

GokieKS fucked around with this message at 15:41 on Feb 19, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

D. Ebdrup posted:

EDIT: ↓ Just be aware that you should strive to have the same drives in all vdevs in your zpool to avoid causing unnecessary wear on one drive, and that if you'll be slowly adding vdevs to your pool over time, you'll start with some wear on the oldest drives when you'll be adding the newest drives.

Yeah... the two 3-disk RAID-Z1 vdevs I have are somewhat close in age, which is why I was going to rebuild them as a single RAID-Z2 and didn't want to add any more drives. This new file server will be the rebuilt vdev plus either a 7-disk RAID-Z3 or 6-disk RAID-Z2 vdev of 3TB drives, with the last 7/8 drives coming in the future.

GokieKS
Dec 15, 2012

Mostly Harmless.
A random correctable ECC error once in a while is basically normal. If you start getting them much more frequently though, and almost always from the same stick, then I'd probably look to replace it.

GokieKS
Dec 15, 2012

Mostly Harmless.
So, I ordered the Norco RPC-4020. I was actually leaning towards the 4220, but Newegg via eBay has the 4020 with free shipping (but not the 4220, and not either model when ordered from their website), which saves me another ~$20 on top of the $20 price difference and the cost of SAS reverse breakout cables - the total came out about $90 cheaper than the 4220, which is worth the slight increase in annoyance when it comes to cable management.

Also ordered the 3 x 120mm fan bracket (also off eBay, because Norco's web store charges $15 to ship the $11 part), so now I need to figure out what fans to use. Since I'm in a 1-bedroom apartment, I'm going to have to keep the thing in my living room, so I'd like to keep it somewhat quiet. Normally my go-to 120mm fans are the Corsair SP120 Quiet Editions, two of which I use in my FT03 Mini, but I'm not sure those will provide enough airflow. I haven't used them personally, but the SP120 High Performance Editions seem to be well received (it was the winner of Xbit Labs' 120mm fan roundup for faster-RPM fans). Anyone have any experience with using 120mm fans in a Norco RPC-4020/4220/4224 (preferably loaded with drives)? I do expect to eventually use all 20 drive bays, so I'd want something that can handle that.

GokieKS
Dec 15, 2012

Mostly Harmless.
There's just no other place that I can put it when taking into consideration size, power outlet location, and network cabling. And really, noise from the drives shouldn't be that bad - I'm going to be the only one using it and most of the time they'll just be idling. My current file server is an old Phenom II X3 in an even older CoolerMaster WaveMaster case running 7 HDDs, and the fan noise definitely is a much bigger factor than hard drive noise.

So, still looking for fan recommendations if anyone has any!

GokieKS
Dec 15, 2012

Mostly Harmless.

thebigcow posted:

Re-evaluate why you want this thing. Don't be that guy.

I want this thing because I need at least 12 drives' worth of redundant network accessible storage with further expansion options. Short of something absurdly expensive from niche companies like Caselabs, 4U server enclosures are basically my only option.

So what "that guy" are you referring to, exactly? That guy who thinks he knows what other people need better than they do?

GokieKS
Dec 15, 2012

Mostly Harmless.

DNova posted:

That guy who has 20 drives full of pirated animes on a server in his livingroom.

SamDabbers posted:

It's all completely legitimate Linux ISOs okay?!

I was going to guess furry porn, actually. Meticulously cataloged, of course. :haw:

Drats, I've been seen through so easily. :rolleyes:

thebigcow posted:

"That Guy" that buys giant enterprise things to keep in his tiny apartment and regrets the noise and later regrets ever spending money on it. I'm not even going to ask what you need 12+ presumably 4TB drives of storage for.

My apartment is actually quite spacious (especially for a 1BR), and barring some major unforeseen issues I'm also not going to be living in one for that much longer. When I actually do move into a house, it will be completely (re)wired with access jacks in every room connected to a central server/communications room/closet, with a rack for my file server and enterprise-level network equipment (which I need for my job), so no, I really don't think I'm going to regret this purchase at all. And even while it's being used in my apartment, I fully expect to be able to bring the noise under control with some proper fans - which, you know, is where I'm looking for actual suggestions instead of these "you shouldn't buy this thing that you've already bought" responses. :colbert:

And also no, not all 4TB drives. I currently have 6 2TB drives which are near full capacity, and my plan was to add either 7 or 8 (still debating RZ2 vs RZ3) 3 TB drives. If 4TB drives go on sale for a good price, might as well, but 20/26TB (8TB from 6-disk RZ2 + 12TB from 7-disk RZ3 / 18TB from 8-disk RZ2) should have me set for the next few years.

GokieKS
Dec 15, 2012

Mostly Harmless.

alo posted:

I have three scythe 120mm fans each rated for 25cfm and 7.5dBA. Make sure you also replace the 80mm fans on the back of the case. I have two 80mm 30cfm 25dBA fans -- that's the main source of noise on mine.

I had to turn everything off here to notice that there are hard drive seek noises.

I assume these are the 800 RPM GTs? 25 CFM actually seems pretty low - how many drives do you have and what kind of CPU/HDD temps are you seeing? If 3 of those are really sufficient for a full set of drives, then the Corsairs that I'm fond of should definitely be enough.

And yeah, those back 80mms will be replaced too. They should have much less impact on the temperature of the drives though, so I'm not as worried about those as I am with the 120mms.

GokieKS
Dec 15, 2012

Mostly Harmless.

alo posted:

I only have half of my bays full. The fans I have are no longer sold, but here's their newegg page: http://www.newegg.com/Product/Product.aspx?Item=N82E16835185056. I just looked for comparable fans from the same manufacturer and holy poo poo they're selling them for ~40 bucks a fan (on newegg at least)

Anyhow, my "system temp" as reported by ESXi is 32C. My drives are about 33-37C according to SMART.

It seems like all the Scythe fans on Newegg are not actually sold by Newegg but by another vendor. The 800 RPM Gentle Typhoon which I guess would be the most comparable Scythe fan available now is about $20, which is pretty much in line with what most "name brand" quality fans go for.

Anyway, it sounds like I should be good with 3 Corsairs. Thanks for the information!

GokieKS
Dec 15, 2012

Mostly Harmless.
Got in my Norco RPC-4020 and the 120mm fan plane this week, and my file server migration is well underway. A few thoughts:

1. I tested out the stock 6 x 80mm fan configuration just to see how loud it is, and it was hilariously loud. The 3 x 120mm fan plane might well be a must-have for this thing. I discovered that I actually had 3 120mm fans lying around - 2 NZXT fans I had previously bought when trying to make a DIY "air conditioner" (don't ask) and the stock Corsair H60 fan - so I figured I'd use those and see how they work. The Corsair is PWM while the NZXTs are not, but all three are actually very quiet, enough that I didn't order any 120mm fans and won't replace them unless I find they're insufficient as I add more drives. I did remove the 2 rear 80mm fans altogether and ordered 2 Arctic F8 PWMs, which hopefully should be fairly quiet as well.

2. I'd forgotten how terrible the mounting system for Intel's stock HSF is. On the upside, it's quiet enough under low load that I don't need to immediately go out and buy another Cooler Master TX3 (which will fit in a 4U and is carried by my local Micro Center), and I can decide whether to stick with that (not my first choice, as the use of 92mm fans is very limiting), find a 120mm tower HSF that will fit (there are only a few), or get a top-down cooler (a Noctua NH-C14 would probably work great, but is a tad overkill). I also used my old PC&P 610W PSU, and despite it not being modular, the size of the case meant that cabling was really not much of an issue at all, even with 14 SATA cables. So I really don't regret not getting the 4220 at all.



3. Supermicro's IPMI is pretty great... when it works. Which it does not for me in any capacity other than the web GUI, because the tools are Java programs that refuse to work (either at all, or properly) on any platform I tried (OS X, W7 in a VM, WinXP in a VM, W7). It probably requires a specific older version of the JRE, but I'll be damned if I can be bothered to try a bunch of them to find out. The lesson, as always, is: gently caress Java. And I couldn't get Virtual Media to work for the life of me either, whether with my W7 machine or my old file server hosting the share. Along the way of trying to get it to work, I managed to screw up the web GUI too, and had to use a Linux live USB disk and the IPMI configuration utility to reset it.

4. Installed ESXi 5.5 (had to customize it to get proper NIC drivers) before deciding that virtualizing my storage was probably a dumb idea all things considered (it just adds more places where things can get screwed up), so I nuked it and just installed Ubuntu LTS. Installing the necessary packages (Samba, Netatalk, ZFS on Linux) went without a hitch, as did importing my old zpool (roughly the commands sketched at the end of this post). So now it's chugging along doing a full scrub of the two 3x2TB RZ1 vdevs, which should take ~14 hours.

5. Ordered 8 WD Red 3TBs - I was a bit hesitant to get them from Newegg, but they had a 10% off coupon that meant saving over $100 total, and that was hard to pass up. And maybe by ordering that many at once they'll ship them well? I'm not sure if 16GB of RAM will be sufficient once I add those 8 - I wanted to wait for 16GB sticks of ECC UDIMM to become widely available (at non-absurd prices) to bring it up to 48GB total, but who knows when that'll happen, so I might just have to get another two 8GB sticks. Also, still not sure if I want to add an SSD to use as a SLOG/ZIL drive.
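The ZFS side of the migration in 4 (and the possible SLOG in 5) is really just a handful of commands, roughly like this (pool/device names are placeholders):

code:
zpool import tank            # pick up the old pool on the new box
zpool scrub tank             # full scrub of the two RZ1 vdevs
zpool status -v tank         # watch progress / estimated time
zpool add tank log /dev/disk/by-id/<ssd>   # only if I do add a SLOG later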

GokieKS
Dec 15, 2012

Mostly Harmless.

phosdex posted:

Regarding Supermicro IPMI, on my x10sl7 board I have to use Java 6u45 to get the remote console type stuff to work. If you set IE to compatibility mode it will stop giving you java update prompts every time you navigate to the summary page and the remote view page (I don't know how to make Firefox or Chrome stop complaining). Make sure to update the IPMI firmware, the java versions change.

I've never got the virtual media thing working from the web interface. If you use their IPMI View tool that is standalone, the virtual media thing from there has worked for me.

The X10SL7 is what I have as well, and I did upgrade the IPMI firmware (after resetting it) to try to get Virtual Media to work. I didn't try JRE 6 though - with JRE 7, IPMIView just wouldn't start at all, and the Java applet would just kind of hang once I tried to mount Virtual Media. Maybe one day if I care enough I'll muck around with it some more, but for now I can just SSH into the server to do whatever I need.

eddiewalker posted:

I did the same thing. Each drive will come in a bubble suit in an individual cardboard box with all the small boxes in a larger one. I think Newegg is selling a bad batch, just based on how many of the very recent reviewers complain about 50% arriving DOA.

After getting my RMA replacement from a DOA last week, my two Newegg 3tbs seem to be running fine but I grabbed a parity drive from Amazon just in case.

Well crap. Hope I get lucky I guess? :shrug:

GokieKS
Dec 15, 2012

Mostly Harmless.

Don Lapre posted:

It looks like one or more of the clips on your heatsink/fan are rotated incorrectly.

The stock Intel HSF doesn't need the clips to be rotated to be installed - just pushed through. The rotation is to make it easier to prepare to push through or remove. My HSF is installed securely and working fine (CPU at 35C, fan spinning at ~1000RPM).

Also, it's coming off soon anyway.

GokieKS
Dec 15, 2012

Mostly Harmless.
HDDs from Newegg arrived. While my 8 drives were in fact in individual smaller boxes with 1 drive each instead of the styrofoam holders (which probably require buying 12 drives?), they actually seem pretty well packaged, with a fairly rigid air bubble holder that keeps the drive from moving at all within the small box:



Time to test and hope they're all good!

E: Well, one of them was DOA. It makes a high-pitched buzzing noise and isn't detected by the system. The rest were detected properly, and the SMART data looks good. Now running badblocks on them; hopefully I'll only have to RMA the one drive. I might just request a refund and pick up a drive from Micro Center since they have them for $125 this month - paying the sales tax is probably worth not having to wait for the new drive to arrive.
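For anyone doing the same burn-in, it's roughly this per drive - a destructive write test, so only on drives with nothing on them, and the device names are placeholders:

code:
smartctl -a /dev/sdX             # SMART attributes before and after
badblocks -b 4096 -wsv /dev/sdX  # 4-pass write+verify; -b 4096 needed for 3TB drives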

GokieKS fucked around with this message at 20:40 on Mar 6, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

Sickening posted:

Anybody have any experience with EMC's pricing model? We haven't made it to the stage of the itemized quote yet, but I am interested to know where the money is made. The usual suspect is always support, but I am really curious to the pricing of their fast cache disks.

This is the thread you're looking for.

The Gunslinger posted:

My brother needs his fileserver in his living room for whatever reason and already bought everything except the case. What's a decent, small case that'll take a MicroATX board and is relatively quiet? Only needs to take 2-4 hard drives tops.

BitFenix Prodigy M / Phenom MicroATX are probably the smallest mATX cases that will hold 4 3.5" drives without much fuss. If you want something more traditional in appearance, Silverstone TJ-08E and Fractal Design Define Mini might be worth looking at.

As for the Node 804... it does look intriguing, but I was less than impressed with the final product in the case of the Node 304, and honestly this strikes me as a really weird design that tries to be decent both as a storage box and as a normal PC/server, but makes a lot of compromises for each.

GokieKS
Dec 15, 2012

Mostly Harmless.

ch1mp posted:

Quick question: I'm making a cheap freenas box - just to play around and learn with - it will not hold any important data at this point (no ZFS). I am especially interested in the serviio and other video streaming functionality.

I am looking at the cheap haswell's for a cpu. My question is - does better cpu graphic = better streaming? ie - will i see a significant improvement in streaming performance moving up from celeron to something with 4xxx graphics (all other things being equal).

No. For streaming without transcoding, any CPU that you can actually buy new at this point will be fine. For transcoding, the GPU is a non-factor unless some solution that actually takes advantage of QuickSync has finally popped up (which I don't believe it has) - it's purely CPU performance. So a CPU that has a higher-end iGPU will indeed perform better than the Celeron, but not because it has a better GPU.

GokieKS
Dec 15, 2012

Mostly Harmless.

PitViper posted:

edit: Looks like I can, according to the photos on their website. The bottom cage moves back far enough to accommodate two bottom cages, plus the top cage. Now I just have to figure out where to buy an extra drive cage.

The bottom 6xHDD cage is fixed. The space to the back of it is for the PSU.

GokieKS
Dec 15, 2012

Mostly Harmless.
Oh, you were looking at the R2. On that one it actually looks like both the top and bottom cages are the same - the description says:

quote:

Both HDD cages can be removed or repositioned - Top HDD cage can be removed or repositioned for increased airflow whereas bottom HDD cage can be repositioned further into the case to allow for front radiator mount.

Since I'm not seeing anywhere the top HDD cage can be repositioned to except the bottom of the case, I think you should be able to just buy another top HDD cage and use two of them. But it's hard to tell for sure, so you should probably contact Fractal Design to confirm.

GokieKS
Dec 15, 2012

Mostly Harmless.

eightysixed posted:

How does one power so many HDDs?

You can expand one Molex plug into many SATA power plugs. Depending on the drives being used, power supply output can be a concern - drives can require substantially more power during spin-up than they do in use.
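As a rough back-of-envelope with typical 3.5" drive numbers (check the actual data sheets for your drives):

code:
spin-up: ~1.5-2A on the 12V rail per drive  ->  20 drives = roughly 30-40A (360-480W) on 12V alone
idle:    ~4-5W per drive                    ->  20 drives = roughly 80-100W

Staggered spin-up, if the HBA/backplane supports it, takes most of the sting out of that first number.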

GokieKS
Dec 15, 2012

Mostly Harmless.
If you truly want the most SATA ports possible without going into stuff like expanders, SuperMicro offers a MB (X10SL7-F) that has a built-in 8-port LSI HBA, plus the ability to add 2 more (provided they use a PCIe x8 slot, which the 8- and 16-port LSI-based ones do), so you could have a total of 16 + 16 + 8 + 6 = 46 SATA devices. It's what I use in my file server, though in my case just the built-in ports and one 8-port HBA (IBM M1115) are enough for my 4U Norco case with 20 HDD bays (I currently have 14 drives).

GokieKS
Dec 15, 2012

Mostly Harmless.

Falcon2001 posted:

I guess I haven't looked up Linux documentation in a while, but I found FreeBSD to be frighteningly badly documented. Stuff like the default package manager in the latest release pointing to an empty repo because of an almost year-old security issue instead of the nearly identical one that apparently everyone used anyway, etc.

Part of it was also just that FreeNAS itself had big gaps in the documentation that assumed you were already FreeBSD savvy. That's not necessarily a bad thing but it was certainly painful for it to suddenly just drop huge parts of an in-depth tutorial without a link or anything.

Yeah, I don't know what obscure Linux distro you'd have to be comparing it to for FreeBSD to be considered well documented, but it certainly wasn't compared to, say, Ubuntu or Debian, in my experience. My first foray into ZFS was actually on FreeBSD, and it was jarring how much more effort it took to figure out all the random issues you encounter along the way compared to Ubuntu (which is what I now use for my file server).

GokieKS
Dec 15, 2012

Mostly Harmless.

I'm familiar with his posts and I agree that it's a very useful guide, though I'm not sure why you linked it in response to me - I was talking about when I first started using ZFS with FreeBSD 8, a long time ago, and I was referring to BSD, not ZFS.

evol262 posted:

I'd also say that FreeBSD is amazingly well documented, especially in comparison to Ubuntu and Debian. Debian (and Ubuntu) are atrociously documented, due in large part to Debian's "any init/mta/whatever system you want" a la carte philosophy towards system building. I'm not saying that FreeBSD has better support than Debian or SuSE, or that trying to find help for odd problems is easier. It isn't. The Ubuntu forums, Arch wiki, etc are all much better. But the FreeBSD Handbook is amazing documentation, up there with the RHEL(/CentOS) Deployment Guide, Fedora, etc.

I'm a developer for Red Hat, so I'm pretty familiar with their docs. I could have used SuSE's. Other distros are lacking.

When I say "documentation", I mean this, or this (Red Hat access portal is down right now), or this for FreeBSD, which says "here's how to do absolutely everything you may need to do on a system or where to find more resources, all in one place". Almost every question you'd ever have on FreeBSD should be answered in this handbook, or you can get a pointer for where to go next.

PFSense butchers FreeBSD to the point this is all useless. NAS4Free/FreeNAS do not. It's totally applicable.

Ubuntu doesn't do this (they're trying, but it's pretty lackluster right now). Debian isn't anywhere close and probably never will be. Same for Arch. Gentoo was ok in the past, but pretty bad now.

Again, not forums, serverfault questions, or whatever. Actual documentation.

OK, fair enough - if you're looking at documentation as an academic exercise, yeah, the handbook is nice. But for "I ran into this issue and I need to know how to fix it", it's definitely easier with a popular Linux distro, as you said.

GokieKS
Dec 15, 2012

Mostly Harmless.

Krailor posted:

Either USB3 or eSATA would be the fastest, realistic, storage options. I think USB 3 is technically faster but in real world tests they're both pretty close to each other.

However, if you're willing to :homebrew: then mounting an iSCSI drive from a NAS w/ SSD drives via a dedicated 10GBE link would give you the absolute fastest connection. And make your wallet cry.

eSATA is faster than USB 3.0, both theoretically (it's just SATA speed, so up to 6Gbps vs. 5Gbps for USB 3.0) and in practice (it has lower overhead). USB 3.0 is a lot more common and portable though, as you're going to be hard pressed to find 6Gbps eSATA ports on most computers.

And for iSCSI to a NAS to be faster than eSATA, it would have to be multiple drives in some sort of striping RAID configuration.

Basically, what it comes down to is if you want fast external storage for just one machine, use eSATA. If you want fast external storage for multiple machines but only one at a time, use USB 3.0. And if you want fast external storage for multiple machines simultaneously, go with a NAS/file server.
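To put rough numbers on that - both links use 8b/10b encoding, so the usable payload is about 80% of the line rate:

code:
eSATA (6Gbps):   6Gbps x 8/10 = ~600MB/s ceiling, minus a little protocol overhead
USB 3.0 (5Gbps): 5Gbps x 8/10 = ~500MB/s ceiling, and usually noticeably less in practice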

GokieKS
Dec 15, 2012

Mostly Harmless.

infinite99 posted:

I hope you can figure out what's going on because I just picked one up :(

This deal is crazy good if my searching for the individual parts is still relevant. I think the motherboard alone is going for about $300 if it's like the -F version.

It is the -F version, but the MB is going for that much because it's an outdated model that's no longer in production (aimed at people who need it as a replacement part), not because it's competitive with current $300 motherboards. Nobody buying new would pay that much for a MB to use with a Lynnfield Xeon instead of a Haswell Xeon setup.

For $150 though, that definitely is a fantastic deal. I'd have thought long and hard about getting that instead of my X10SL7-F + E3 1220v3 if it had been available.

GokieKS
Dec 15, 2012

Mostly Harmless.

Star War Sex Parrot posted:

Edit: and a handful of LSI 9300-8i (12Gb/ s) but I'd really have no use for that.

Selling them to Goons for cheap seems like a pretty good use. :)

GokieKS
Dec 15, 2012

Mostly Harmless.

Falcon2001 posted:

For reals? Good thing you mentioned that. Also, upon further investigation I'm apparently unlikely to be able to use this for a minecraft server so now I'm debating whether I should try and build out a real server to pull double-duty or just say gently caress it and get an appliance again and then throw some more RAM in my main machine.

Honestly it just keeps coming back to not really needing much more than a straight up NAS. Not running a website or anything intensive means that the difference between a synology box and NAS4Free comes down to geek cred and UI, neither of which are all that crazy important to me.

Why are you unable to use it as a Minecraft server? It certainly shouldn't be due to hardware performance issues - I'm not that familiar with Minecraft server software, but an Ivy Bridge Pentium with 16GB of RAM should be way more than you need.

And are you planning on using more 3.5" HDDs than the device officially supports? If you plan on getting to 6 eventually, you can do 2 vdevs of 3 drives in RAID-Z1 and start with just one. If you're only going to use the 4 that it comes with 3.5" drive bays for, then you should start with all 4 drives right off the bat and go with either 3+1 RAID-Z1 (3 drives' worth of usable space) or striped mirrors (2 drives' worth of usable space, and a better option than a 2+2 RAID-Z2).
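For reference, the two 4-drive layouts look like this at pool creation time (hypothetical device names - use /dev/disk/by-id paths or the NAS GUI in practice):

code:
zpool create tank raidz1 d1 d2 d3 d4         # 3+1 RAID-Z1: 3 drives of usable space
zpool create tank mirror d1 d2 mirror d3 d4  # striped mirrors: 2 drives usable, faster resilvers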

GokieKS
Dec 15, 2012

Mostly Harmless.

Falcon2001 posted:

Question is run vs run effectively, the MC thread was under the impression it would be underpowered (and this is probably true since the whole MC being pretty badly optimized thing). Mostly just don't want to buy it just so I can find out it's not going to work and have to return it.

It would have to be colossally badly optimized to not be able to run on a 2.5GHz Ivy Bridge CPU with 16GB of RAM. And a quick glance at http://minecraft.gamepedia.com/Server/Requirements/Dedicated shows that there is no chance that it won't be enough.


GokieKS
Dec 15, 2012

Mostly Harmless.

infinite99 posted:

Thanks for the explanation! I guess I didn't quite understand vdevs all that well. So let's say I pick up 4 drives to start out and they're all 3TB. I should have 9TB of usable space. If I wanted to use some 2 TB disks from my old server, Could I make a new vdev out of those disks and then create a new pool out of the 3TB disk vdev and the 2TB disks I just added? If I had 3 disks for that second vdev, there'd be 4TB usable which would give me an overall total of 13TB of space?

Am I completely off with how that works?

You can think of each vdev as basically a RAID array in the traditional sense, and a zpool as a collection of vdevs. So you can have a zpool that starts off with a 4-disk RAID-Z1 vdev, then add a 2-disk stripe vdev, then add an 8-disk RAID-Z3 vdev, if you wanted to. You would just have different levels of redundancy for different data depending on which vdev it happened to land on. But since that's generally not a great idea, you usually want to start with the minimum level of redundancy you want for the zpool, and only add vdevs that have the same or a higher level of redundancy down the road.

In your example, you would not need to create a new zpool for both the 3+1 RAID-Z vdev and new 2+1 vdev - you can simply create the new vdev and add it to your existing zpool.

Do note however that replacing an existing vdev's drives is not a simple process. For example, in PitViper's situation, when he wants to replace one entire vdev of 4 2TB drives with larger ones, he would actually have to replace them one at a time and rebuild (resilver) the vdev after each one, 4 times in total.
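A quick sketch of both operations, with made-up pool and device names:

code:
# growing the pool by adding a whole new vdev
zpool add tank raidz1 d5 d6 d7

# growing an existing vdev: replace members one at a time, resilvering in between each
zpool replace tank old_disk new_disk
zpool set autoexpand=on tank   # so the vdev actually grows once every member is bigger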

GokieKS fucked around with this message at 22:29 on Jul 16, 2014
