necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Synthetic Violence posted:

I've been looking at upgrading my NAS from its current configuration to something that will give me a little more space, plus I want to do iSCSI for my HTPC and desktop and have that separate from my main share. How is iSCSI on ZFS? Can I expect decent performance? Or would I be better off just mapping the disks directly?
iSCSI is really only a good idea at home if you don't mind suddenly getting capped by your network rather than your disks, and you have a way to actually isolate all that storage traffic appropriately. I went crazy and had iTunes hooked up to an iSCSI target that gets shared out via CIFS (really, iTunes gives me so much of an admin headache that it almost offsets any interoperability and usability in the Apple ecosystem), and I wouldn't do it unless you've got a pretty solidly built network and can isolate a storage subnet off correctly to avoid physical contention on an already-congested network.
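If you do go that route, the isolation itself can be as simple as a dedicated NIC on its own subnet. A minimal sketch for a FreeBSD/FreeNAS box (the interface name and addressing are made up):

  # /etc/rc.conf on the NAS: dedicated storage NIC, jumbo frames, no gateway
  ifconfig_igb1="inet 10.10.10.1/24 mtu 9000"

Give the initiator a second NIC on that same subnet and point it only at that address, so storage traffic never touches the rest of the LAN.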


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Eh, to run games from your NAS, it might be handier to use iSCSI. Some games like to throw a poo poo fit when you try to run them from an SMB share.

As far as iSCSI on ZFS goes, let it use the default 8KB block size to keep the metadata overhead down, and format NTFS with 8KB clusters. Use jumbo frames. And possibly a direct link to keep the network switch from loving things up. Over here, I max out the connection with iSCSI.
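For reference, the ZFS side of that looks roughly like this (pool and dataset names are made up); the matching cluster size is set when you format the disk from the Windows initiator:

  # zvol to back the iSCSI extent, 8KB volume blocks
  zfs create -V 500G -o volblocksize=8K tank/games-iscsi
  # then on the Windows box, after connecting the target:
  #   format E: /FS:NTFS /A:8192 /Q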

Combat Pretzel fucked around with this message at 23:20 on Dec 15, 2014

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Synthetic Violence posted:

Lenovo TS440 for $299.99
It has the E3-1225v3. For those of you looking to do NAS4Free or FreeNAS.
Do note that it doesn't actually come with any of the drive sleds (just blanks), so you need to budget in for them--they're $16/ea or so. Other 4-in-3 cages might fit, too, but I'm not sure what ones exactly. Still, the CPU alone usually sells for ~$220, plus another $40 or so for the RAM, and you're basically talking $40 for the case/mobo/PSU. Hell of a deal, and probably within spitting distance of the price of whatever low-budget DIY NAS you could put together ($50 mobo, $50 PSU, $50 RAM, $50 CPU, $50 case = $250 and you've got a much less powerful CPU).

BlankSystemDaemon
Mar 13, 2009



That makes me wonder if it's possible to use remote iSCSI boot to have the OS on a RAIDz3 pool with 6 disks (plus ZIL and L2ARC), via 2x PCIe 10GbE SFP+ cards with jumbo frames in both the server and the workstation/gaming rig.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Yeah, should be. I've read some instructions in the past about how to make a PXE image to get Windows to boot from iSCSI. Would be a nice option for easy rollbacks, but 10GBit hardware is expensive as hell.
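One way to do it these days is iPXE rather than a hand-rolled PXE image; a minimal script sketch (the addresses and IQNs are made up):

  #!ipxe
  dhcp
  set initiator-iqn iqn.2014-12.lan.pc:boot
  sanboot iscsi:10.0.0.2::::iqn.2014-12.lan.nas:windows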

Synthetic Violence
Oct 18, 2012

Fuck machine.
Grimey Drawer

necrobobsledder posted:

iSCSI is really only a good idea at home if you don't mind suddenly getting capped by your network rather than your disks, and you have a way to actually isolate all that storage traffic appropriately. I went crazy and had iTunes hooked up to an iSCSI target that gets shared out via CIFS (really, iTunes gives me so much of an admin headache that it almost offsets any interoperability and usability in the Apple ecosystem), and I wouldn't do it unless you've got a pretty solidly built network and can isolate a storage subnet off correctly to avoid physical contention on an already-congested network.


Combat Pretzel posted:

Eh, to run games from your NAS, it might be handier to use iSCSI. Some games like to throw a poo poo fit when you try to run them from an SMB share.

As far as iSCSI on ZFS goes, let it use the default 8KB block size to keep the metadata overhead down, and format NTFS with 8KB clusters. Use jumbo frames. And possibly a direct link to keep the network switch from loving things up. Over here, I max out the connection with iSCSI.

Thanks guys. I figured that network speeds would be the only thing I had to really worry about. I should have that covered though, I spent a whole day rewiring everything so each computer hits a switch port separately now. Before I had 4 computers all sharing the same gigabit link. Plus I bought a 4 port NIC for my NAS and a switch that can do LACP so I should be okay on the bandwidth now. I still have some cables to run to the NAS but performance was reasonable (80 MB/s) with a test share (and I didn't follow Combat Pretzel's recommendations).


DrDork posted:

Do note that it doesn't actually come with any of the drive sleds (just blanks), so you need to budget in for them--they're $16/ea or so. Other 4-in-3 cages might fit, too, but I'm not sure what ones exactly. Still, the CPU alone usually sells for ~$220, plus another $40 or so for the RAM, and you're basically talking $40 for the case/mobo/PSU. Hell of a deal, and probably within spitting distance of the price of whatever low-budget DIY NAS you could put together ($50 mobo, $50 PSU, $50 RAM, $50 CPU, $50 case = $250 and you've got a much less powerful CPU).

Yeah, should have mentioned that. It's a hard price to beat. A case from Supermicro that does the same thing will run you $350-$400. A case. I bought one to build; I'll see how it goes.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I just mentioned the $200-ish Supermicro cases on eBay a while ago that come with full sets of sleds, some even including rails. If you're OK with rackmounts in your house, you should head directly to eBay to take advantage of companies that are unloading obsolete (but still fine for most home use) equipment. CPUs have gotten so drat fast and the DRAM market has stagnated so much that a server from 4 years ago is still extremely capable. Meanwhile, a new low-end server from Dell or HP that's somewhat comparable in market segment would typically run me $1200+. I ain't saving $800+ in electricity over the course of its lifetime in my house at a piddly $.07 / kWh.

I guess by buying an LACP-capable switch you're one of the crazy people then. Welcome to the club; all aboard the crazy home datacenter train!

BlankSystemDaemon
Mar 13, 2009



If you're like me and have a managed switch because you need the IGMP proxy, your switch may well offer link aggregation without LACP. Even the super-cheap D-Link DGS-1100-16 I have supports it.

Speaking of which, be aware that even if you use 802.1AX-2008, a single-connection data transfer will not exceed the speed of a single link in the aggregation group (see the exception below). It does work really well with 802.1s to create fully redundant, load-balancing switched networks.
LACP is just the feature that lets 802.1AX-2008 be configured automatically rather than you having to set it up manually.

The exception to the single-connection limit mentioned above is to pipe the transfer over stdio through netcat or bbcp, with mbuffer on both ends.
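The pipeline being described looks something like this (host and pool names are made up):

  # on the receiving box: listen with mbuffer and feed zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
  # on the sending box: stream a snapshot through mbuffer to the receiver
  zfs send tank/data@nightly | mbuffer -s 128k -m 1G -O receiver:9090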

BlankSystemDaemon fucked around with this message at 17:26 on Dec 16, 2014

Mr Shiny Pants
Nov 12, 2012

Synthetic Violence posted:

Lenovo TS440 for $299.99
It has the E3-1225v3. For those of you looking to do NAS4Free or FreeNAS.

I've been looking at upgrading my NAS from its current configuration to something that will give me a little more space, plus I want to do iSCSI for my HTPC and desktop and have that separate from my main share. How is iSCSI on ZFS? Can I expect decent performance? Or would I be better off just mapping the disks directly?

Also does anyone here have any experience with the new 6TB Reds? Do they work pretty well? Have you had to replace any yet?

I bought this one; anyone know which memory it takes? A supplier told me it takes unregistered ECC, but I'm not sure. :(

Intel's ARK site only mentions that ECC is supported, but not whether it's UDIMM or RDIMM.

EDIT: OK, the Kingston site gives me the following: KTL-TS316ELV/8G

These are bloody expensive compared to a set of 4 * 8GB sticks: 124 euros per stick instead of 350 for four.

These are low-voltage sticks; do I need them, or will regular ones also work?

Mr Shiny Pants fucked around with this message at 19:37 on Dec 16, 2014

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I wanted 16GB with mine so this is the kit I got: http://smile.amazon.com/Crucial-1600MT-PC3-12800-240-Pin-CT2KIT102472BD160B/dp/B008EMA5VU/ref=pd_bxgy_pc_text_y
Which also happens to be the recommended bundle, and is about the cheapest quality memory option there is on Amazon. But even if you don't get that kit, it should at least tell you what to look for.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Synthetic Violence posted:

and I didn't follow Combat Pretzel's recommendations
Are you creating iSCSI extents on files or zvols?

Mr Shiny Pants
Nov 12, 2012

FISHMANPET posted:

I wanted 16GB with mine so this is the kit I got: http://smile.amazon.com/Crucial-1600MT-PC3-12800-240-Pin-CT2KIT102472BD160B/dp/B008EMA5VU/ref=pd_bxgy_pc_text_y
Which also happens to be the recommended bundle, and is about the cheapest quality memory option there is on Amazon. But even if you don't get that kit, it should at least tell you what to look for.

Thanks. This should do the trick: http://www.amazon.com/Kingston-Valu...VR16LE11K4%2F32

Synthetic Violence
Oct 18, 2012

Fuck machine.
Grimey Drawer

Combat Pretzel posted:

Are you creating iSCSI extents on files or zvols?

I did a file. I set it up to make sure I could get everything working properly over my network. It was really more of a test because I'm still lacking a couple of parts to complete everything. After I had it set up I played around with it a bit and got that number. It also confirmed my fears that the storage disk in my HTPC is going bad.

Could I expect more performance out of a zvol vs a file? Right now my plan is to limp along with the drive in my HTPC until after the holidays, when I can get all the necessary resources shipped to put the iSCSI stuff on a separate zpool from everything else.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Synthetic Violence posted:

Could I expect more performance out of a zvol vs a file?
If you use FreeNAS and the new kernel iSCSI target (standard in 9.3; in 9.2.1.x you have to check the experimental target option) with a sparse zvol, you get TRIM support. Not sure how things look on Linux in that regard. If you insist on using files on ZFS instead, be sure to set the dataset's record size to something smaller, like 8KB or 16KB; otherwise, for any small read, ZFS will haul a whole 128KB block into memory, and small random IO might kill performance.
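Roughly, the two options look like this (names and sizes are made up):

  # sparse (thin-provisioned) zvol extent with 8KB volume blocks
  zfs create -s -V 250G -o volblocksize=8K tank/htpc-iscsi
  # or a file extent on a dataset with a smaller record size
  zfs create -o recordsize=16K tank/iscsi-files
  truncate -s 250G /mnt/tank/iscsi-files/htpc.img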

thebigcow
Jan 3, 2001

Bully!
zvols were designed for that sort of thing; what are the benefits of using a file?

Mr Shiny Pants
Nov 12, 2012

thebigcow posted:

zvols were designed for that sort of thing; what are the benefits of using a file?

You can copy it easily between systems? I would also go with a Zvol, but it seems like a benefit.

BlankSystemDaemon
Mar 13, 2009



If you're moving a zvol device extent, it's as easy as moving a file extent, provided the other server is also running ZFS: just use zfs send | zfs receive. Speaking of zvols, they're also quite useful for jails, and especially for bhyve.
I imagine a future where a set of blade servers running FreeBSD with ZFS (plus CARP for HAST) and bhyve can be both a SAN and a virtualization cluster.
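The send/receive bit is literally just (names made up):

  zfs snapshot tank/htpc-iscsi@migrate
  zfs send tank/htpc-iscsi@migrate | ssh otherbox zfs receive -F newtank/htpc-iscsi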

BlankSystemDaemon fucked around with this message at 14:40 on Dec 17, 2014

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There are literally only a handful of companies I know of that would have the money for blades and would consider using ZFS in production. Almost all the ZFS-using organizations I've seen are using whitebox builds and Supermicro servers and such, because they're below the tier where they're able to shell out for blades (and the support / consulting costs of keeping them happy). Building out a poor man's SAN with ZFS is a nice idea in theory, but I basically don't see the mechanics working out for anyone besides Joyent and a couple other providers that are deeply embedded in the Solaris hegemony of operations and instrumentation that would use ZFS as a go-to for a SAN solution. I've seen dozens try to use vSAN (they chose... poorly...), for example, but ZFS yields a lot of cricket chirping.

knox_harrington
Feb 18, 2011

Running no point.

I want to change my boot disk on my N40L to an SSD; can I just clone the existing drive onto the new one and let it take care of itself? I'm running WHS2011.

For that matter should I try any other operating system? I'm fairly happy with WHS but it's not perfect. I use the server for DLNA / plex media serving and for backups of 3 other computers.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

necrobobsledder posted:

ZFS yields a lot of cricket chirping.

My ZFS Hobo-SAN in my garage is working nicely, but gently caress me if I would want anything like it in production. If your budget is "parts and scotch when poo poo inevitably breaks", they're a pretty compelling solution for stuffing a shitton of files into one spot. It also has fairly good iSCSI and SMB performance, I'm able to get 400-500 MB/sec reads and writes off the SSD scratch disk I have mounted via iSCSI

It's gone wonky on me a few times over the 5 or so years I've had it, and twice it required a complete reinstall of the OS to fix the issues. The nice part is that the entire process took like 2 hours; the downside is good loving luck finding anyone who knows anything about Solaris to help fix it.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

knox_harrington posted:

I want to change my boot disk on my N40L to an SSD; can I just clone the existing drive onto the new one and let it take care of itself? I'm running WHS2011.

For that matter should I try any other operating system? I'm fairly happy with WHS but it's not perfect. I use the server for DLNA / plex media serving and for backups of 3 other computers.

I guess you could, but you'd never see a difference. Especially with your use case, it would really be a waste of money, unless you frequently reboot and really care a lot about how fast it comes up.

I don't think you'll find a better OS if you enjoy using WHS's client backup/restore.

Mr Shiny Pants
Nov 12, 2012

Methylethylaldehyde posted:

My ZFS Hobo-SAN in my garage is working nicely, but gently caress me if I would want anything like it in production. If your budget is "parts and scotch when poo poo inevitably breaks", they're a pretty compelling solution for stuffing a shitton of files into one spot. It also has fairly good iSCSI and SMB performance, I'm able to get 400-500 MB/sec reads and writes off the SSD scratch disk I have mounted via iSCSI

It's gone wonky on me a few times over the 5 or so years I've had it, and twice it required a complete reinstall of the OS to fix the issues. The nice part is that the entire process took like 2 hours; the downside is good loving luck finding anyone who knows anything about Solaris to help fix it.

I have it installed on my Microserver and I have literally never had any unplanned down time. This array has moved from Solaris to Linux, has been expanded and changed from a RaidZ mirror to a RaidZ1. It has been the most rock solid RAID config I've ever worked with. Running it on stable hardware I don't see why it should go tits up.


Taking the disks out of a machine, plugging them into any other machine that runs ZFS: do a zpool import and off you go. Not worrying about RAID controller types, firmwares, server brands and all the other stuff is really slick. Hell, if you keep the pool version low enough you can go from Solaris -> FreeBSD -> Linux and back.
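The whole migration really is just this (pool name made up):

  zpool export tank    # on the old box, if it still boots
  zpool import         # on the new box: lists pools found on the attached disks
  zpool import tank    # add -f if the pool wasn't exported cleanly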

Mr Shiny Pants fucked around with this message at 05:43 on Dec 19, 2014

Mr Shiny Pants
Nov 12, 2012
My TS440 should be arriving today; anyone looking for a mini review? The memory is on backorder :( so only 4GB.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Mr Shiny Pants posted:

Running it on stable hardware I don't see why it should go tits up.
This is probably the most common problem. "I put ZFS on my 'gently' used Pentium 3 budget system with a Realtek NIC that I found in a dumpster and it crashed! ZFS sucks!" A lot of people who are drawn to ZFS because it's free are also the same people who try to re-use cut-rate equipment because it is also free, or very low cost. Some people will never comprehend that the difference between a consumer and server motherboard isn't simply the addition of ECC RAM. That said, ZFS does put you in a better position to recover from said crashes than a lot of other filesystems, so as long as you have reasonable expectations, have at.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Mr Shiny Pants posted:

I have it installed on my Microserver and I have literally never had any unplanned down time. This array has moved from Solaris to Linux, has been expanded and changed from a RaidZ mirror to a RaidZ1. It has been the most rock solid RAID config I've ever worked with. Running it on stable hardware I don't see why it should go tits up.


Taking the disks out of a machine, plugging them into any other machine that runs ZFS: do a zpool import and off you go. Not worrying about RAID controller types, firmwares, server brands and all the other stuff is really slick. Hell, if you keep the pool version low enough you can go from Solaris -> FreeBSD -> Linux and back.

That's more or less what I did each time, "Welp, it poo poo itself again, I should probably stop loving with it." zpool import -f fuck_goons, success.

Now that I have it on server hardware with ECC RAM and no longer have twelve 4-year-old WD Greens attached to it, it's much, much happier.

Edit: Most of the issues were in fact caused by those lovely WD Green drives causing the mpt_sas subsystem to hang waiting for IO, which caused the whole system to repeatedly poo poo its pants, which caused me to reinstall thinking it would fix it.

Mr Shiny Pants
Nov 12, 2012

Methylethylaldehyde posted:

That's more or less what I did each time, "Welp, it poo poo itself again, I should probably stop loving with it." zpool import -f fuck_goons, success.

Now that I have it on server hardware with ECC RAM and no longer have twelve 4-year-old WD Greens attached to it, it's much, much happier.

Edit: Most of the issues were in fact caused by those lovely WD Green drives causing the mpt_sas subsystem to hang waiting for IO, which caused the whole system to repeatedly poo poo its pants, which caused me to reinstall thinking it would fix it.

I am running it on Seagate 3TBs :aaa: Seeing the Backblaze blog posts about their reliability...

Seems like it's gonna be a more expensive Christmas than I expected. I just ordered an M1015 SAS adapter, a SAS -> SATA multicable and an Icy Dock 4-bay for my MicroServer N40L. :)

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I still have a brigade of WD Greens and Samsung Spinpoint drives, and aside from disk failures that seem to manifest only when I spin down the machine for some downtime, I haven't had any issues with availability. Granted, I've mucked with my drives to re-enable TLER and such where possible, though the majority of mine can't be tweaked with wdidle anymore. I don't have 10G Ethernet and am trying my damnedest to avoid buying a switch with LACP, so I don't expect to see 400MB/s anything. Hell, I'm not sure if my LSI 1068E is even capable of that. While I do use my NAS for hosting VMs here and there, it's not imperative I have fast disk I/O so much as some random place for shared storage of ISOs (NFS shares for ESXi) and templates.
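(Re the TLER bit: on drives that still honour it, the error-recovery timeout can be set with smartctl's SCT command; the device node here is made up.

  smartctl -l scterc,70,70 /dev/ada1    # 7.0 second read/write error recovery timeout

On a lot of drives the setting doesn't survive a power cycle, so it goes in a startup script.)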

However, this setup is not very far off from what I used to work with at a start-up I was at before - we literally white-boxed some Supermicro machines and one of the other devs' Alienware machine from college for our various servers. AWS EC2 wasn't around yet, the cost for these was pretty bare-bones, and given we didn't push them hard (aside from that poor, poor build server) it worked fine.

Unless I needed a lot of reliable storage for production and wanted to host it locally, I wouldn't use ZFS for production, period. There are some OK use cases I've seen where people were trying to run Cassandra clusters on it to keep the storage reliable without having to buy RAID controllers, but if you're doing anything Big Data on a shoestring budget, I get the impression you're not about to last very long as a company.

dorkanoid
Dec 21, 2004

When do you guys swap drives?

I have a drive that apparently passes SMART, but has Raw_Read_Error_Rate: 60798086 and Seek_Error_Rate: 74216291 (while the other disks in the NAS have 0 on these parameters), I'm trying to figure out if I should run out tomorrow to get a replacement, or if I should gamble it...

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

dorkanoid posted:

When do you guys swap drives?

I have a drive that apparently passes SMART, but has Raw_Read_Error_Rate: 60798086 and Seek_Error_Rate: 74216291 (while the other disks in the NAS have 0 on these parameters), I'm trying to figure out if I should run out tomorrow to get a replacement, or if I should gamble it...

I replace them when they've gone bad. That disk has gone bad. It's time to replace it before poo poo gets hosed up.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

dorkanoid posted:

When do you guys swap drives?

I have a drive that apparently passes SMART, but has Raw_Read_Error_Rate: 60798086 and Seek_Error_Rate: 74216291 (while the other disks in the NAS have 0 on these parameters), I'm trying to figure out if I should run out tomorrow to get a replacement, or if I should gamble it...

When ZFS tells me to.
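(Which, for the non-ZFS crowd, mostly amounts to keeping an eye on this:

  zpool status -x    # prints "all pools are healthy" unless something is degraded or faulted

plus whatever email alerting your NAS OS bolts on top of it.)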

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.

dorkanoid posted:

When do you guys swap drives?

I have a drive that apparently passes SMART, but has Raw_Read_Error_Rate: 60798086 and Seek_Error_Rate: 74216291 (while the other disks in the NAS have 0 on these parameters), I'm trying to figure out if I should run out tomorrow to get a replacement, or if I should gamble it...

Rexxed posted:

I replace them when they've gone bad. That disk has gone bad. It's time to replace it before poo poo gets hosed up.

Yeah, don't worry about the Pass/Fail. Look at the Raw values. As Rexxed said, go get a new one. You're really on borrowed time.
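If you just want the raw values without wading through the whole SMART report, something like this does it (device node made up):

  smartctl -A /dev/ada2 | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|Raw_Read_Error|Seek_Error'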

BlankSystemDaemon
Mar 13, 2009



Regarding ZFS for SANs, I know of several companies that use it in enterprise solutions built with enterprise hardware and budgets. One in particular uses it as the datastore for an ESX cluster over InfiniBand (at ~54Gbps).

dorkanoid
Dec 21, 2004

eightysixed posted:

Yea, don't worry about the Pass/Fail. Look at the Raw values. As Rexxed said, go get a new one. You're on really borrowed time.

Yeah, I ordered two 3TB Reds to replace the last two non-Reds in my NAS.

I could probably take this chance to migrate from Xpenology to FreeNAS...

Sub Rosa
Jun 9, 2010




What problems have you had with Xpenology? I love it.

My Rhythmic Crotch
Jan 13, 2011

The stupidest storage thing I've ever seen :downs:

dorkanoid
Dec 21, 2004

Sub Rosa posted:

What problems have you had with Xpenology? I love it.

No real problems, but the procedure for upgrading is a bit obnoxious.

The guides/downloads at xpenology.nl help, at least :)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
At first glance I expected this to be some ancient stuff, given that he's using 500GB drives (via USB?!), but the videos are from Nov 2014. What the gently caress.

--edit:
So he went with USB3 cards that have dedicated controllers per port to get 5Gbps of bus bandwidth per port? What for? Uncached sequential IO isn't going to fill that bandwidth, and random IO especially not. And by using a SATA-USB bridge, you're eschewing NCQ, which will make random IO even worse. Plus the translation overhead in the chipset. Even cheap SATA cards do 3Gbps; that should be enough per disk. But hey, he's a networking guy and knows servers.

Combat Pretzel fucked around with this message at 20:08 on Dec 20, 2014

IOwnCalculus
Apr 2, 2003





That piece of poo poo cost $4500.

FORTY FIVE HUNDRED DOLLARS IN 2014.



I mean, that thing is going to suck up huge amounts of power, be slower than poo poo, unreliable as poo poo, huge, and expensive.

Oh, and it uses Storage Spaces! :downs:

FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe
Amazon has the TS-431-US on sale for $280 today, normally about $390.

Is this a good deal? I've been considering moving my storage out of my main desktop and getting some real RAID, but is this a decent model? It's the cheapest I've seen for a 4-drive unit lately.


My Rhythmic Crotch
Jan 13, 2011

In which the creator defends his creation

indeed

edit: those are apparently used 2.5" consumer laptop doodie poopy drives

My Rhythmic Crotch fucked around with this message at 20:42 on Dec 20, 2014
