|
Synthetic Violence posted:I've been looking at upgrading my NAS from its current configuration to something that will give me a little more space, plus I want to do iSCSI for my HTPC and Desktop and have that separate from my main share. How is iSCSI on ZFS? Can I expect decent performance? Or would I be better off just mapping the disks directly?
|
# ? Dec 15, 2014 21:17 |
|
Eh, to run games from your NAS, it might come in handier to use iSCSI. Some games like to throw a poo poo fit when you try to run them from an SMB share. As far as iSCSI on ZFS goes, let it use the default 8KB block size to keep the metadata overhead down, and format NTFS with 8KB clusters. Use jumbo frames. And possibly a direct link to keep the network switch from loving things up. Over here, I max out the connection with iSCSI. Combat Pretzel fucked around with this message at 23:20 on Dec 15, 2014 |
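Roughly what that tuning looks like on the ZFS side, as a sketch — the pool name `tank`, the zvol name, the interface, and the sizes are all placeholders:

```shell
# Back the iSCSI extent with a zvol using an 8K block size
# (volblocksize is fixed at creation time, so get it right here)
zfs create -V 200G -o volblocksize=8K tank/iscsi-htpc

# Enable jumbo frames on the storage NIC; the switch and the
# initiator's NIC must be set to the same MTU or things get ugly
ifconfig em0 mtu 9000

# On the Windows initiator, format the new disk with matching 8K clusters:
#   format E: /FS:NTFS /A:8192 /Q
```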
# ? Dec 15, 2014 23:17 |
|
Synthetic Violence posted:Lenovo TS440 for $299.99
|
# ? Dec 16, 2014 09:27 |
That makes me wonder if it's possible to use remote iSCSI boot to put the OS on a RAIDZ3 pool of 6 disks (plus ZIL and L2ARC), via 2x PCIe 10GbE SFP+ cards with jumbo frames in both the server and the workstation/gaming rig.
|
|
# ? Dec 16, 2014 11:36 |
|
Yeah, should be. I've read some instructions in the past about how to make a PXE image to get Windows to boot from iSCSI. Would be a nice option for easy rollbacks, but 10GBit hardware is expensive as hell.
|
# ? Dec 16, 2014 13:20 |
|
necrobobsledder posted:iSCSI is really only a good idea at home if you don't mind getting capped by your network all of a sudden rather than your disks and you have a way to actually isolate all that storage traffic appropriately. I went crazy and had iTunes hooked up to an iSCSI target that gets shared out via CIFS (really, iTunes gives me so much of a headache in admin that it almost offsets any interoperability and usability in the Apple ecosystem) and I wouldn't do it unless you've got a pretty solidly built network and you can isolate a storage subnet off correctly to avoid physical contention on an already-congested network. Combat Pretzel posted:Eh, to run games from your NAS, it might come handier to use iSCSI. Some games like to throw a poo poo fit trying to run them from a SMB share. Thanks guys. I figured that network speeds would be the only thing I had to really worry about. I should have that covered though; I spent a whole day rewiring everything so each computer hits a switch port separately now. Before, I had 4 computers all sharing the same gigabit link. Plus I bought a 4-port NIC for my NAS and a switch that can do LACP, so I should be okay on the bandwidth now. I still have some cables to run to the NAS, but performance was reasonable (80 MB/s) with a test share (and I didn't follow Combat Pretzel's recommendations). DrDork posted:Do note that it doesn't actually come with any of the drive sleds (just blanks), so you need to budget in for them--they're $16/ea or so. Other 4-in-3 cages might fit, too, but I'm not sure what ones exactly. Still, the CPU alone usually sells for ~$220, plus another $40 or so for the RAM, and you're basically talking $40 for the case/mobo/PSU. Hell of a deal, and probably within spitting distance of the price of whatever low-budget DIY NAS you could put together ($50 mobo, $50 PSU, $50 RAM, $50 CPU, $50 case = $250 and you've got a much less powerful CPU). Yeah, should have mentioned that. It's a hard price to beat. 
A case from Supermicro that does the same thing will run you $350-$400. Just the case. I bought one to build with; I'll see how it goes.
|
# ? Dec 16, 2014 15:22 |
|
I just mentioned the $200-ish Supermicro cases on eBay a while ago, with full-on sleds and some even including rails. If you're OK with rackmounts in your house, you should head directly to eBay to take advantage of companies that are unloading obsolete (but still fine for most home use) equipment. CPUs have gotten so drat fast and the DRAM market has stagnated so much that a server from 4 years ago is still extremely capable. Meanwhile, a new low-end server from Dell or HP that's somewhat comparable in market segment would run me $1200+ typically. I ain't saving $800+ in electricity over the course of its lifetime in my house at a piddly $.07 / kWh. I guess by buying a LACP-capable switch you're one of the crazy people then. Welcome to the club, all aboard the crazy home datacenter train!
|
# ? Dec 16, 2014 16:56 |
If you're like me and have a managed switch because you need the IGMP proxy, your switch may offer link aggregation without LACP just fine. Even the super-cheap D-Link DGS-1100-16 I have supports it. Speaking of which, be aware that even if you use 802.1AX-2008, single-connection data transfers do not exceed the speed of a single link in the aggregated group (see the exception below). It works really well with 802.1s to create fully redundant, load-balancing switched networks. LACP is just a feature that lets 802.1AX-2008 be configured automatically rather than you having to set it up manually. The exception to the above-mentioned single-connection limit is to pipe the data through netcat or bbcp, with mbuffer on both sides. BlankSystemDaemon fucked around with this message at 17:26 on Dec 16, 2014 |
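The netcat/mbuffer trick mentioned at the end looks roughly like this for a ZFS replication stream — the hostnames, port, and buffer sizes here are made-up examples, not gospel:

```shell
# Receiving side: listen on TCP 9090, buffer, and feed zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# Sending side: stream a snapshot through mbuffer to the receiver
zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090
```

mbuffer's job is to smooth out the bursty producer/consumer behavior of zfs send/receive, which is what lets the transfer come closer to saturating the links.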
|
# ? Dec 16, 2014 17:06 |
|
Synthetic Violence posted:Lenovo TS440 for $299.99 I bought this one, anyone know which memory it takes? I found out through a supplier that it takes unregistered ECC, but I am not sure. Intel's ARK site only mentions that ECC is supported, not whether it's UDIMM or RDIMM. EDIT: OK, the Kingston site gives me the following: KTL-TS316ELV/8G. These are bloody expensive compared to a set of 4 * 8GB sticks: 124 euros per stick instead of 350 for four sticks. These are low-voltage sticks; do I need them, or do regular ones also work? Mr Shiny Pants fucked around with this message at 19:37 on Dec 16, 2014 |
# ? Dec 16, 2014 19:27 |
|
I wanted 16GB with mine so this is the kit I got: http://smile.amazon.com/Crucial-1600MT-PC3-12800-240-Pin-CT2KIT102472BD160B/dp/B008EMA5VU/ref=pd_bxgy_pc_text_y Which also happens to be the recommended bundle, and also about the cheapest quality memory option there is on Amazon. But even if you don't get that kit, it should at least tell you what to look for.
|
# ? Dec 16, 2014 19:37 |
|
Synthetic Violence posted:and I didn't follow Combat Pretzel's recommendations
|
# ? Dec 16, 2014 19:38 |
|
FISHMANPET posted:I wanted 16GB with mine so this is the kit I got: http://smile.amazon.com/Crucial-1600MT-PC3-12800-240-Pin-CT2KIT102472BD160B/dp/B008EMA5VU/ref=pd_bxgy_pc_text_y Thanks. This should do the trick: http://www.amazon.com/Kingston-Valu...VR16LE11K4%2F32
|
# ? Dec 16, 2014 19:57 |
|
Combat Pretzel posted:Are you creating iSCSI extents on files or zvols? I did a file. I set it up to make sure I could get everything working properly over my network. It was really more of a test, because I'm still lacking a couple of parts to complete everything. After I had it set up I played around with it a bit and got that number. It also confirmed my fears that the storage disk in my HTPC is going bad. Could I expect more performance out of a zvol vs a file? Right now my plan is to limp along with the drive in my HTPC until after the holidays, when I can get all the necessary parts shipped to put the iSCSI stuff on a separate zpool from everything else.
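For reference, the two kinds of extents are created like this (the pool and path names are hypothetical); a zvol bypasses the POSIX file layer, so it typically buys a little performance and lets you pick the volblocksize explicitly:

```shell
# File-backed extent: just a (sparse) file on a normal dataset
truncate -s 100G /mnt/tank/extents/htpc.img

# zvol-backed extent: a block device managed directly by ZFS
# (-s makes it sparse, so it only consumes space as it's written)
zfs create -V 100G -s tank/htpc
```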
|
# ? Dec 16, 2014 21:38 |
|
Synthetic Violence posted:Could I expect more performance out of a zvol vs a file?
|
# ? Dec 16, 2014 22:32 |
|
zvols were designed for that sort of thing; what are the benefits of using a file?
|
# ? Dec 17, 2014 03:54 |
|
thebigcow posted:zvols were designed for that sort of thing, what are the benefits of using a file? You can copy it easily between systems? I would also go with a zvol, but being able to copy the file around seems like a benefit.
|
# ? Dec 17, 2014 07:37 |
If you're moving the zvol device extent, it's as easy as moving a file extent if your other server is also using ZFS, by using zfs send | zfs receive. Speaking of zvols, they're also quite useful for jails, and especially for bhyve. I imagine a future where a set of blade servers with FreeBSD running ZFS (plus CARP and HAST) and bhyve can be both a SAN and a virtualization cluster. BlankSystemDaemon fucked around with this message at 14:40 on Dec 17, 2014 |
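That migration amounts to a snapshot plus a piped send/receive — assuming SSH access and a pool named tank on both machines (all names here are placeholders):

```shell
# Snapshot the zvol, then replicate it to the other box
zfs snapshot tank/vm-disk@migrate
zfs send tank/vm-disk@migrate | ssh otherhost zfs receive tank/vm-disk
```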
|
# ? Dec 17, 2014 14:37 |
|
There are literally a handful of companies I know of that would have the money for blades and consider using ZFS in production. Almost all the ZFS using organizations I've seen are using whitebox builds and Supermicro servers and such because they're below the tier where they're able to shell out for blades (and the support / consulting costs of keeping them happy). Building out a poor man's SAN with ZFS is a nice idea in theory, but basically I don't see the mechanics working out for anyone besides Joyent and a couple other providers that are deeply embedded in the Solaris hegemony of operations and instrumentation that would use ZFS as a go-to for a SAN solution. I've seen dozens try to use vSAN (they chose... poorly...) for example but ZFS yields a lot of cricket chirping.
|
# ? Dec 17, 2014 20:55 |
|
I want to change my boot disk on my N40L to an SSD; can I just clone the existing drive onto the new one and let it take care of itself? I'm running WHS2011. For that matter, should I try any other operating system? I'm fairly happy with WHS but it's not perfect. I use the server for DLNA / Plex media serving and for backups of 3 other computers.
|
# ? Dec 18, 2014 22:35 |
|
necrobobsledder posted:ZFS yields a lot of cricket chirping. My ZFS Hobo-SAN in my garage is working nicely, but gently caress me if I would want anything like it in production. If your budget is "parts and scotch when poo poo inevitably breaks", it's a pretty compelling solution for stuffing a shitton of files into one spot. It also has fairly good iSCSI and SMB performance; I'm able to get 400-500 MB/sec reads and writes off the SSD scratch disk I have mounted via iSCSI. It's gone wonky on me a few times over the 5 or so years I've had it, and twice required a complete reinstall of the OS to fix the issues. The nice part is that the entire process took like 2 hours; the downside is good loving luck finding anyone who knows anything about Solaris to help fix it.
|
# ? Dec 18, 2014 22:48 |
|
knox_harrington posted:I want to change my boot disk on my N40L to a SSD, can I just clone the existing drive onto the new one and let it take care of itself? I'm running WHS2011 I guess you could, but you'd never see a difference. Especially with your use case, it would really be a waste of money, unless you frequently reboot and really care a lot about how fast it comes up. I don't think you'll find a better OS if you enjoy using WHS's client backup/restore.
|
# ? Dec 18, 2014 23:17 |
|
Methylethylaldehyde posted:My ZFS Hobo-SAN in my garage is working nicely, but gently caress me if I would want anything like it in production. If your budget is "parts and scotch when poo poo inevitably breaks", they're a pretty compelling solution for stuffing a shitton of files into one spot. It also has fairly good iSCSI and SMB performance, I'm able to get 400-500 MB/sec reads and writes off the SSD scratch disk I have mounted via iSCSI I have it installed on my Microserver and I have literally never had any unplanned downtime. This array has moved from Solaris to Linux, has been expanded, and has changed from a mirror to a RAIDZ1. It has been the most rock solid RAID config I've ever worked with. Running it on stable hardware, I don't see why it should go tits up. Taking the disks out of a machine and plugging them into any other machine that runs ZFS: do a zpool import and off you go. Not worrying about RAID controller types, firmwares, server brands and all the other stuff is really slick. Hell, if you keep the version number low enough you can go from Solaris -> FreeBSD -> Linux and back. Mr Shiny Pants fucked around with this message at 05:43 on Dec 19, 2014 |
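The portability being described here really is just two commands (the pool name is hypothetical):

```shell
# On the old machine, if it's still alive
zpool export tank

# On the new machine, once the disks are plugged in;
# add -f if the pool was never cleanly exported
zpool import tank
```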
# ? Dec 19, 2014 05:39 |
|
My TS440 should be arriving today, anyone looking for a mini review? The memory is on backorder, so only 4GB for now.
|
# ? Dec 19, 2014 05:45 |
|
Mr Shiny Pants posted:Running it on stable hardware I don't see why it should go tits up.
|
# ? Dec 19, 2014 07:06 |
|
Mr Shiny Pants posted:I have it installed on my Microserver and I have literally never had any unplanned down time. This array has moved from Solaris to Linux, has been expanded and changed from a RaidZ mirror to a RaidZ1. It has been the most rock solid RAID config I've ever worked with. Running it on stable hardware I don't see why it should go tits up. That's more or less what I did each time, "Welp, it poo poo itself again, I should probably stop loving with it." zpool import -f fuck_goons, success. Now that I have it on server hardware with ECC RAM and no longer have 12 four-year-old WD Greens attached to it, it's much much happier. Edit: Most of the issues were in fact caused by those lovely WD Green drives causing the mpt_sas subsystem to hang waiting for IO, which caused the whole system to repeatedly poo poo its pants, which caused me to reinstall thinking it would fix it.
|
# ? Dec 19, 2014 07:13 |
|
Methylethylaldehyde posted:That's more or less what I did each time, "Welp, it poo poo itself again, I should probably stop loving with it." zpool import -f fuck_goons, success. I am running it on Seagate 3TBs. Seeing the Backblaze blog posts about their reliability... Seems like it's gonna be a more expensive Christmas than I expected. I just ordered an M1015 SAS adapter, a SAS -> SATA multi-cable, and an Icy Dock 4-bay for my Microserver N40L.
|
# ? Dec 19, 2014 07:47 |
|
I still have a brigade of WD Greens and Samsung Spinpoint drives and, aside from disk failures that seem to manifest only when I spin down the machine for some downtime, I haven't had any issues with availability. Granted, I've mucked with my drives to re-enable TLER and such when it's possible; for the majority of mine it can't be enabled with wdidle anymore. I don't have 10G Ethernet and am trying my damnedest to avoid buying a switch with LACP, so I don't expect to see 400MB/s anything. Hell, I'm not sure if my LSI 1068E is even capable of that. While I do use my NAS for hosting VMs here and there, it's not imperative I have fast disk I/O so much as some random place for shared storage of ISOs (NFS shares for ESXi) and templates. However, this setup is not very far off from what I used to work with at a start-up I was at before - we literally white-boxed some Supermicro machines and one of the other devs' Alienware machine from college for our various servers. AWS EC2 wasn't around, and the cost for these was pretty bare-bones, and given we didn't push them hard (aside from that poor, poor build server) it worked fine. Unless I needed a lot of reliable storage for production and wanted to host it locally, I wouldn't use ZFS for production, period. There are some OK use cases I've seen where people were trying to run some Cassandra clusters with it to try to keep the storage pretty reliable without having to buy some RAID controllers, but if you're doing anything Big Data and you're on some shoestring budget, I get the impression you're not about to last very long as a company.
|
# ? Dec 19, 2014 19:25 |
|
When do you guys swap drives? I have a drive that apparently passes SMART, but has Raw_Read_Error_Rate: 60798086 and Seek_Error_Rate: 74216291 (while the other disks in the NAS have 0 on these parameters), I'm trying to figure out if I should run out tomorrow to get a replacement, or if I should gamble it...
|
# ? Dec 19, 2014 19:50 |
|
dorkanoid posted:When do you guys swap drives? I replace them when they've gone bad. That disk has gone bad. It's time to replace it before poo poo gets hosed up.
|
# ? Dec 19, 2014 20:10 |
|
dorkanoid posted:When do you guys swap drives? When ZFS tells me to.
|
# ? Dec 19, 2014 22:35 |
|
dorkanoid posted:When do you guys swap drives? Rexxed posted:I replace them when they've gone bad. That disk has gone bad. It's time to replace it before poo poo gets hosed up. Yea, don't worry about the Pass/Fail. Look at the Raw values. As Rexxed said, go get a new one. You're on really borrowed time.
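A quick way to pull just the raw values with smartctl — the device path is an example, and note that some Seagate firmwares pack multiple counters into these raw fields, so compare against the normalized value and the reallocated sector count too:

```shell
# -A prints the vendor attribute table; filter down to the error-rate
# and reallocation attributes worth watching
smartctl -A /dev/ada0 | egrep 'Raw_Read_Error_Rate|Seek_Error_Rate|Reallocated_Sector'
```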
|
# ? Dec 19, 2014 23:03 |
Regarding ZFS for SANs, I know of several companies that use it in enterprise solutions built with enterprise hardware and budgets. One company in particular uses it as the datastore for an ESX cluster over InfiniBand (at ~54Gbps).
|
|
# ? Dec 19, 2014 23:13 |
|
eightysixed posted:Yea, don't worry about the Pass/Fail. Look at the Raw values. As Rexxed said, go get a new one. You're on really borrowed time. Yeah, ordered two 3TB REDs, to replace the two last non-REDs in my NAS. I could probably take this chance to migrate from Xpenology to FreeNAS...
|
# ? Dec 19, 2014 23:17 |
|
What problems have you had with Xpenology? I love it.
|
# ? Dec 20, 2014 16:33 |
|
The stupidest storage thing I've ever seen
|
# ? Dec 20, 2014 19:14 |
|
Sub Rosa posted:What problems have you had with Xpenology? I love it. No real problems, but the procedure for upgrading is a bit obnoxious. The guides/downloads at xpenology.nl help, at least
|
# ? Dec 20, 2014 19:46 |
|
At first glance I expected this to be some ancient stuff, given that he's using 500GB drives (via USB?!), but the videos are from Nov 2014. What the gently caress. --edit: So he went with USB3 cards that have dedicated controllers per port to get 5Gbps bus speed? What for? Uncached sequential IO isn't going to fill that bandwidth. Random IO especially not. And by using a SATA-USB bridge, you're eschewing NCQ, which will make random IO even worse. Plus the translation overhead in the chipset. Even cheap SATA cards do 3Gbps; that should be enough per disk. But hey, he's a networking guy and knows servers. Combat Pretzel fucked around with this message at 20:08 on Dec 20, 2014 |
# ? Dec 20, 2014 19:52 |
|
That piece of poo poo cost $4500. FORTY FIVE HUNDRED DOLLARS IN 2014. I mean, that thing is going to suck up huge amounts of power, be slower than poo poo, unreliable as poo poo, huge, and expensive. Oh, and it uses Storage Spaces!
|
# ? Dec 20, 2014 20:25 |
|
Amazon has the TS-431-US on sale for $280 today, normally about $390. Is this a good deal? I've been considering moving my storage out of my main desktop and getting some real RAID; is this a decent model? It's the cheapest I've seen for a 4-drive unit lately.
|
# ? Dec 20, 2014 20:28 |
|
In which the creator defends his creation, indeed. edit: those are apparently used 2.5" consumer laptop doodie poopy drives My Rhythmic Crotch fucked around with this message at 20:42 on Dec 20, 2014 |
# ? Dec 20, 2014 20:40 |