|
Paul MaudDib posted:Ordered tonight, unsurprisingly they aren't going to ship until after Chinese New Year. Ordered 2x16 GB of DDR4 ECC from Superbiiz as well. Well, I may be able to shove this guy into my rack (will have to measure). 4x3.5" internal, then throw an IcyDock in the 5.25" bay for another 6x2.5" of 7mm drives. Looking at the pics, it will be complete hell to build in though. https://www.newegg.com/Product/Product.aspx?Item=N82E16811147250
|
# ? Feb 16, 2018 20:27 |
|
Oh, I forgot, one of the standard recommendations is this Natex special, normally the 2670 but it now looks like they have 2680v2s as well. They're Sandy Bridge/Ivy Bridge so they do pull a fair amount of power, but you won't find a better deal for a 16C32T or 20C40T system. Note that you will need to populate at least one channel on each of the processors, so you can't run a single stick. https://natex.us/motherboards/s2600cp2j-combo/e5-2670-v1/?sort=priceasc https://natex.us/motherboards/s2600cp2j-combo/e5-2680-v2/?sort=priceasc If you are willing to fart around with engineering samples you might be able to build something that's a little faster for the same money, but YMMV there. https://www.techspot.com/review/1218-affordable-40-thread-xeon-monster-pc/ https://www.techspot.com/review/1155-affordable-dual-xeon-pc/ These are all EEB pattern boards, which are intended for rack servers. Some but not all tower cases support them, but it wouldn't be a problem with a rackmount case.
|
# ? Feb 16, 2018 21:15 |
|
I am trying to shove this into a wall mount network cabinet, so depth is a huge issue. I'll have a max of 16.5" from front to back. EDIT: http://www.supermicro.com/products/chassis/2U/523/SC523L-505B Less internal storage, but ATX. I'll end my derail here. Moey fucked around with this message at 22:08 on Feb 16, 2018 |
# ? Feb 16, 2018 21:42 |
|
There are those old Rackable/SGI SE3016 servers that may work for short-depth racks. The downside is they're slower on the SATA interface than you'd hope, but they should work fine for a home media system, minus the loud fan, which from what I've gathered can be modded fairly easily now. There are accompanying servers that attached to those, or you can look for half-depth/short-depth rackmount mini-ITX chassis options.
|
# ? Feb 17, 2018 00:27 |
|
For anyone going down the FreeNAS 11 Docker route, I wish you luck. I tried for about 4 hours to get it all working, and it just wasn't coming together. Couldn't get RancherOS to sanely mount in an NFS share, eventually ending up in the VM refusing to boot altogether because it hung when trying to mount the share (even though SSHing in and manually mounting it prior to that worked instantly and perfectly). At this point, I've rebuilt most of my old Corral containers as iocage jails, and it's all performing admirably. I'm really just disappointed. I'm a professional sysadmin in my day life, and I couldn't get this whole thing put together, so either I'm far less competent than I think I am, or the state of the ecosystem around Rancher is less than great. Maybe next time around, I'll just launch a straight Linux VM and manage the containers by hand, or maybe 11.2 will make it all Just Work the way that Corral did.
|
# ? Feb 17, 2018 22:01 |
|
G-Prime posted:For anyone going down the FreeNAS 11 Docker route, I wish you luck. I tried for about 4 hours to get it all working, and it just wasn't coming together. Couldn't get RancherOS to sanely mount in an NFS share, eventually ending up in the VM refusing to boot altogether because it hung when trying to mount the share (even though SSHing in and manually mounting it prior to that worked instantly and perfectly). I'm going to sing the praises of what I just did again. Took my ESXi/FreeNAS 9 instance with some zpools and installed Proxmox (Debian + a GUI and a few packages around HA, LXC, and QEMU/KVM) on a mirrored SSD zpool. I made an LXC for Plex (which was previously a VM), an LXC for Docker (again, another VM), and Debian itself just has the ZFS crap; everything else is inside LXCs or VMs for Windows/hackintosh. The whole thing took less than a weekend until I was back where I was, with a new 10-drive zpool + the 6-drive zpool I had originally. Now I have all the memory available to Docker, as opposed to 16 gigs dedicated to FreeNAS (due to passing through the controllers before), and I'm able to spin up a new LXC in seconds. When I later installed an LXC for passing through the USB printer to do AirPrint + Google Cloud Print + Windows printing, I noticed it uses 12 megs of memory for the whole LXC.
|
# ? Feb 17, 2018 22:39 |
|
Yeah, I'm kinda underwhelmed by FreeNAS 11, Corral felt better, I wish they'd just worked the bugs out instead of throwing a tantrum and scrapping everything.
|
# ? Feb 18, 2018 00:51 |
Corral was, by all accounts, jhb's baby - his leaving probably had something to do with it getting poo poo-canned.
|
|
# ? Feb 18, 2018 01:09 |
|
CommieGIR posted:Yeah, I'm kinda underwhelmed by FreeNAS 11, Corral felt better, I wish they'd just worked the bugs out instead of throwing a tantrum and scrapping everything. The upside is that 11 plans on pulling in basically everything that made Corral good. I still don't entirely understand the point in releasing Corral just to immediately EOL it to piece-meal port it over to 11, but whatever. Having Docker in 11 would be super nice, though, and I can't wait for them to do so.
|
# ? Feb 18, 2018 05:02 |
|
DrDork posted:The upside is that 11 plans on pulling in basically everything that made Corral good. I still don't entirely understand the point in releasing Corral just to immediately EOL it to piece-meal port it over to 11, but whatever. Yeah...when. The beta GUI isn't as impressive, 11's features feel like warmed-over 9, and they went from an impressive bhyve implementation with an in-browser console to VNC-only VMs. I hope they catch up soon, because it's kinda losing its appeal.
|
# ? Feb 18, 2018 05:05 |
|
Well, yeah, 11 is basically 9.4 or thereabouts, just with a name-change and a promise of future features. Which makes me wonder why they hated Corral enough to abandon it, because obviously a ton of work had been put into it, and even more work will have to go into back-porting poo poo into 11.
|
# ? Feb 18, 2018 05:37 |
|
The one thing I wish I realized when I was moving my services from Corral to FN11 jails is that which UI you log into determines what kind of jails you get. I used the old UI to build them because it actually works, and now I have to pray I don’t lose them whenever the transition happens
|
# ? Feb 18, 2018 21:08 |
|
They claim they'll have a warden -> iocage conversion built into 11.2, so that should hopefully not be an issue. Personally, I just built mine via the CLI with iocage, knowing that was coming. Kicking myself at this point for having not done this months ago. Deluge was choking really badly under the Docker VM in Corral. I had to allocate 7 cores to the VM just so it wouldn't sputter out trying to handle 400 torrents. Now, in a jail, it's keeping one core at about 30%. Insane difference.
|
# ? Feb 18, 2018 21:13 |
|
Anyone here bought refurbs off of Goharddrive? Looks like they had some 8TB HGST Ultrastar HE8s for $150 on Amazon a couple days ago. Obviously they're used with reset SMART data because that's what they do, but it seems like overall they've still got a positive reputation, and those HGST drives are dead reliable to begin with. Works out to about $90 saved versus buying and shucking Easystores after tax since I need to buy four of them.
|
# ? Feb 19, 2018 19:54 |
|
FWIW, the WD Easystores are $159.99 right now. (You may realize this and be including tax in your $90 estimate.)
|
# ? Feb 19, 2018 20:26 |
|
Yeah, that's what I was comparing based on. The Easystores go on sale for $160 so often that it's not worth buying them for more than that.
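The "about $90 saved" figure works out if you assume roughly 8% sales tax on the shucked Easystores; the tax rate here is my assumption, since the thread never states one:

```python
# Hypothetical sanity check of the "about $90 saved" claim for 4 drives.
# Assumed numbers: $150 refurb HE8 (tax-free in this sketch) versus a
# $159.99 Easystore plus an assumed ~8% sales tax.
refurb_price = 150.00
easystore_price = 159.99
tax_rate = 0.08          # assumed local sales tax, not stated in the thread
drives = 4

easystore_total = easystore_price * (1 + tax_rate) * drives
refurb_total = refurb_price * drives
savings = easystore_total - refurb_total
print(round(savings, 2))  # ≈ 91.16 with these assumptions
```

So "about $90" is right on the money under those assumptions; a higher local tax rate only widens the gap.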
|
# ? Feb 19, 2018 20:45 |
First part of the new server build complete. Here's the parts I used:

Chassis
- Chenbro NR-40700 from here: https://www.ebay.com/i/253419031346?rt=nc . I offered $400 and it was accepted straight away. Shipping to Australia was very expensive though.

System Core
- Natex.us bundle - Intel S2600CP2J motherboard, 2 x Xeon E5-2680v2, 128gb DDR3 RAM. About $1000 US
- Intel RMM4 IPMI module

Storage
- Intel 900P 280gb Optane - for storage VM
- Samsung 830 256gb SSD - secondary datastore. May end up getting something larger and newer depending on how much stuff I put on it.
- SanDisk 16gb Ultra USB drive - ESXi install

Cooling
- Supermicro SNK-P0048AP4 CPU coolers. These are too loud; I'll probably replace them with Noctua units that are more expensive but nowhere near as loud.

Initial build was much easier than I'd thought it would be - there were lots of dire warnings about BIOS settings, bricked motherboards due to flashing, etc., but everything worked well. Assembled, plugged into the case, cleared CMOS, flashed the latest BIOS via the EFI update, all good.

I'm still tossing up what to do drive-wise - at the moment I have two ML10v2 servers running napp-it on OmniOS, each with an 8-drive RAIDZ2 array of Toshiba 3tb SATA drives. I'd like to move to a 16-drive RAIDZ3 with 6tb drives, but may just end up adding another 8-drive Z2 array with 6tb drives and moving the current drives over. This would double my current capacity from approx 30tb to 60.

Noise-wise it's fine. My room with my desktop and the ML10v2s still running sits at about 48db according to some poxy iPhone app, and with the NR40700 running with CPU fans disconnected (they're cool enough passively for now) it's about 53. This will drop when I put it in the rack and put the rack back in the garage, as summer is nearly over.
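As an aside, those two readings imply the chassis on its own is quieter than 53 dB, since decibel levels combine logarithmically rather than adding. A quick sketch, taking the phone-app numbers at face value (which is generous for a phone mic):

```python
import math

def db_subtract(total_db, ambient_db):
    """Estimate a single source's level given the combined and ambient dB.
    Sound power adds linearly, so convert out of dB, subtract, convert back."""
    total_power = 10 ** (total_db / 10)
    ambient_power = 10 ** (ambient_db / 10)
    return 10 * math.log10(total_power - ambient_power)

# Room alone: ~48 dB. Room + NR40700 (CPU fans disconnected): ~53 dB.
server_alone = db_subtract(53, 48)
print(round(server_alone, 1))  # ≈ 51.3 dB for the chassis by itself
```

In other words, a 5 dB rise in the combined reading means the new source is only a couple dB below the total, not 5 dB below the ambient.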
|
|
# ? Feb 19, 2018 21:56 |
|
With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice! I've got a terabyte of OneDrive storage from various promotions over the years and it's super easy to use it as a backup target. When I eventually get around to setting up a Backblaze backup target I can only assume it will be just as easy. But the thing I really like about it is that its resource usage is basically nonexistent. I haven't seen it use more than a couple percent CPU and a couple hundred MB of RAM, and that was during my initial backup.
|
# ? Feb 20, 2018 16:40 |
|
Thermopyle posted:With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice! Duplicacy has issues with large datasets: https://github.com/duplicati/duplicati/issues/2544#issuecomment-366439875
|
# ? Feb 20, 2018 17:42 |
|
CommieGIR posted:Duplicacy has issues with large datasets: Duplicacy != Duplicati
|
# ? Feb 20, 2018 18:17 |
Well, just pulled the trigger on a 4-bay QNAP TS-431P and a WD Red 8TB disk. I think I'm going to set up a RAID level when I buy the other 3 disks to complete the set. Question: Can I start storing stuff on the disk right now, and then when I buy the newer disks set up a RAID array, copy data over to the RAID array, and then format and add the old drive to the array? Or should I basically keep in mind that I need to keep a backup and just redo the whole RAID array when I have all 4? EDIT: The term I was looking for was RAID migration. For QNAP, this article explains how to expand the drive sizes and RAID. Still need to keep a backup, but at first blush, I can expand and upgrade from a single drive to 4 drives and install RAID at the same time. Would probably be an all-day process though, as you need to install each drive slowly as it builds up the RAID array. Chillyrabbit fucked around with this message at 22:48 on Feb 21, 2018 |
|
# ? Feb 20, 2018 19:59 |
|
Thermopyle posted:With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice! I like the sound of all of this. I'm going to need to sort something out as a Crashplan replacement and I think I have reliable enough storage now across two locations that I can probably stop paying for backup services.
|
# ? Feb 20, 2018 20:04 |
|
Thermopyle posted:Duplicacy != Duplicati My mistake, sorry.
|
# ? Feb 20, 2018 20:29 |
|
Thanks everyone for your help. I've gotten to the point where I'm happy and can sleep well at night with my two redundant machines purring away in the basement. Daily snapshots, daily replication, biweekly scrubs, email alerts, UPS monitoring. Full offsite backup is next. Gonna be tricky with 10tb.

EDIT:

Primary
- SUPERMICRO MBD-X10SL7-F-O with 14 SATA ports. IT mode. HTML5 IPMI.
- Xeon E3-1241v3
- 16gb ECC RAM
- 7x4tb HGST NAS

Replicated Backup
- SUPERMICRO MBD-X10SL7-F-O
- Pentium G3258 (which started this whole thing since I had it laying around and it supported ECC)
- 16gb ECC RAM
- 7x4tb WD Red

Both in Rosewill 4U with Kingwin hotswap bays.

Ziploc fucked around with this message at 21:54 on Feb 20, 2018 |
# ? Feb 20, 2018 21:46 |
|
Thermopyle posted:With the upcoming Crashplan changes Are these different from the changes last year where the family plan went away and it was basically Business or nothing?
|
# ? Feb 20, 2018 23:53 |
|
As far as I'm aware, that's it. I had renewed mine not long before the announcement so I've still got some time.
|
# ? Feb 21, 2018 00:08 |
|
Sheep posted:Are these different from the changes last year where the family plan went away and it was basically Business or nothing? No, that's it.
|
# ? Feb 21, 2018 05:08 |
|
If there's no need for HA and storage needs are about 100TB, there is absolutely zero reason to look at something like Ceph instead of just a 12+ drive ZFS (or mdadm) box, right? It looks like you have to throw absolutely huge amounts of resources at Ceph before it begins to approach the performance of a pretty simple zfs setup.
Twerk from Home fucked around with this message at 16:32 on Feb 21, 2018 |
# ? Feb 21, 2018 16:26 |
|
12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do. I'm currently using FreeNAS with an 8x8TB RAIDz2 setup, and while I'm very happy with it, my dream is to find the money to build a Ceph cluster, so that I can have a good level of data protection but still have the flexibility to buy drives as needed.
|
# ? Feb 21, 2018 20:35 |
|
Zorak of Michigan posted:12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do. I want to try Ceph at home one way or another, and am considering recommending it to a group I work with. An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck and mdadm or zfs might actually be faster, given that it's got a lot of CPU power. They need another 100TB of storage soon-ish, and their IT guy mentioned that he wants to look at other options than just using a RAID card. They don't need HA, and don't need the storage to appear to be in a unified pool. What I'm getting at is for the same money, Ceph is just going to be way slower than their current solution, right? I (and their IT guy) are also curious if zfs on linux on the new storage server would outperform the 12-drive wide RAID 6 on a MegaRAID card, in which case they could move everything over to the new box, and re-do the original box's raid as software instead of hardware. I also don't know a ton about Ceph and maybe I'm misunderstanding and it can be cost and performance competitive with linux software raid at the 100TB - 200TB space.
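For rough planning, the parity math is the same ballpark either way; the argument for ZFS here is checksumming, cache devices, and resilver behavior, not capacity. A quick sketch of usable space under a few single-group layouts (raw decimal TB, ignoring TiB conversion, filesystem overhead, and the usual advice to keep pools below ~80% full):

```python
def usable_tb(drives, drive_tb, parity):
    """Raw usable capacity of a single parity group.
    RAID 6 and RAIDZ2 both burn 2 drives' worth of space on parity."""
    return (drives - parity) * drive_tb

# Their current box: 12 x 10 TB in RAID 6 on the LSI card
raid6 = usable_tb(12, 10, 2)     # 100 TB raw usable
# Same 12 drives as a single RAIDZ2 vdev: identical capacity
raidz2 = usable_tb(12, 10, 2)    # 100 TB
# A 12-wide RAIDZ3 trades 10 TB for a third parity drive
raidz3 = usable_tb(12, 10, 3)    # 90 TB
print(raid6, raidz2, raidz3)
```

So a like-for-like second box gets them their ~100 TB either way; the cost delta between hardware RAID and ZFS is the HBA versus the RAID card, not the drives.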
|
# ? Feb 21, 2018 22:34 |
|
Have them look into a professionally built and supported appliance from the people FreeNAS partners with. You can get real big-boy hobo-SANs with 24 drive bays for a few thousand bucks, minus drive costs, and NAS-level drives work pretty well, so no need for the fancy SAS ones. Also make damned sure that they have backups for that poo poo; a 16- or 35-tape autoloader deck off eBay and a new-ish LTO-7/8 drive will go a long way toward making sure that when someone gives it the rm -rf by accident they aren't completely hosed. Even a quarterly full backup and weekly incrementals to the same set of tapes could make the difference between defaulting on their grant obligations due to technical issues and recovering in time to qualify for more grant money. Edit: It also means when one of the lab techs or visiting professors shows up with a crypto infection, you don't end up having half the array trashed before you notice the issue. Methylethylaldehyde fucked around with this message at 01:54 on Feb 22, 2018 |
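To put numbers on the tape suggestion: an LTO-7 cartridge holds 6 TB native (the 15 TB figure on the box assumes the optimistic 2.5:1 compression rating), so a ~100 TB full backup fits in one autoloader magazine. A rough sketch, where the capacity is the published native figure and the dataset size is this lab's ~100 TB array:

```python
import math

LTO7_NATIVE_TB = 6.0    # published native (uncompressed) capacity per cartridge
dataset_tb = 100.0      # roughly the lab's current usable capacity

tapes_per_full = math.ceil(dataset_tb / LTO7_NATIVE_TB)
print(tapes_per_full)   # 17 cartridges per full backup, uncompressed
# A 16-slot autoloader is one tape short for a full; the 35-slot deck
# fits a full plus a quarter's worth of weekly incrementals.
```

Scientific data often compresses poorly, so planning against the native capacity rather than the compressed rating is the safe bet.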
# ? Feb 22, 2018 01:51 |
|
Twerk from Home posted:An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck Which performance metric is it lagging on?
|
# ? Feb 22, 2018 04:19 |
|
Zorak of Michigan posted:Which performance metric is it lagging on? Sustained write is poorer than expected. I don't have specifics off of the top of my head, but the workload deals more with long sustained reads / writes than lots of small I/O.
|
# ? Feb 22, 2018 04:21 |
|
Assuming you've checked the OS' network parameters and aren't getting hosed by something there (this may not be a thing anymore, but I was a UNIX admin in the 90s and bad networking defaults were a fact of life), then yeah, I'd be giving that network card the side eye. Methylethylaldehyde's recommendations sound pretty wise to me.
|
# ? Feb 22, 2018 05:03 |
|
Zorak of Michigan posted:Assuming you've checked the OS' network parameters and aren't getting hosed by something there (this may not be a thing anymore, but I was a UNIX admin in the 90s and bad networking defaults were a fact of life), then yeah, I'd be giving that network card the side eye. Methylethylaldehyde's recommendations sound pretty wise to me. Good point on the network side, super nice storage doesn't do dick if you're using a 3com switch from 2003 to do the core routing. 10GbE from the hobo-SAN to the core switch, 1GbE to the endpoints. If an endpoint is especially fancy, like the virtualization cluster, it too gets a 10GbE uplink. You can get pretty nice switches with multiple 10GbE links for not a whole lot of cash money these days. Also jumbo packets for days.
|
# ? Feb 22, 2018 06:24 |
|
You should try to separate out storage traffic on separate VLANs that use larger MTUs as well. For anything hitting the Internet, 1500 is basically here to stay honestly.
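The payoff from jumbo frames is easy to quantify: per-frame overhead (Ethernet, IP, and TCP headers) is fixed, so a larger MTU means a better payload ratio and fewer packets per second for the same throughput. A sketch using standard header sizes (14-byte Ethernet header + 4-byte FCS, 20-byte IPv4, 20-byte TCP, no options, ignoring preamble and inter-frame gap):

```python
def tcp_payload_efficiency(mtu):
    """Fraction of on-wire bytes that are TCP payload for a full-sized frame."""
    eth_overhead = 14 + 4        # Ethernet header + frame check sequence
    ip_tcp_headers = 20 + 20     # IPv4 + TCP, no options
    payload = mtu - ip_tcp_headers
    wire_bytes = mtu + eth_overhead
    return payload / wire_bytes

print(round(tcp_payload_efficiency(1500), 4))  # ≈ 0.9618
print(round(tcp_payload_efficiency(9000), 4))  # ≈ 0.9936
```

The ~3% efficiency gain is real but modest; the bigger win on a storage VLAN is usually the 6x reduction in frames per second the host and switch have to process.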
|
# ? Feb 22, 2018 17:06 |
|
Twerk from Home posted:I want to try Ceph at home one way or another, and am considering recommending it to a group I work with. Ceph is fun to play with. I would never want to be responsible for a production Ceph cluster though; too many horror stories. Also, performance is not great, and if you need HA, get ready to run two clusters in case you have a problem with your primary. Also, AFAIK Ceph still doesn't tolerate node outages very well: if you lose a node for any reason (say, a reboot), it starts to rebalance right away. ZFS on Linux with some SSDs should outperform Ceph easily. It's also infinitely easier to use. Novo fucked around with this message at 20:12 on Feb 22, 2018 |
# ? Feb 22, 2018 20:09 |
|
One of my colleague's two HDDs just failed in his RAID 1 NAS. How does swapping in a new one work? Also, I believe they are Barracudas, which I am reading are not a good choice, so should I get him to buy either IronWolfs or Reds instead, and in that case, how would that work with getting his stuff onto the new HDDs?
Hulebr00670065006e fucked around with this message at 13:15 on Feb 23, 2018 |
# ? Feb 23, 2018 11:12 |
|
In most cases, if the drive is totally dead, you simply remove the dead drive and slot in the new one. Basic NAS systems will need you to shut the system down to do this. More advanced ones are happy to let you do so while it's running. It should take care of things from there. If the drive is failing, but not entirely dead (like it's throwing errors but more or less still working), there's usually an option in the NAS management software to detach the drive in software before you pull the hardware, which will make the NAS happier. Barracudas are, indeed, less than optimal for NAS use. If he decides to switch to a NAS-specific type of drive, there's no additional work needed--the system doesn't give a gently caress what type of drive you use.
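One thing worth warning him about: with big drives, the rebuild after the swap takes a while, and a two-drive mirror has no redundancy until it finishes. A back-of-envelope estimate, assuming the rebuild sustains 100 MB/s (an assumption; real NAS rebuilds often run slower under concurrent load):

```python
def rebuild_hours(drive_tb, mb_per_s=100):
    """Hours to rewrite a whole drive at an assumed sustained rebuild rate."""
    total_mb = drive_tb * 1_000_000   # decimal TB -> MB
    return total_mb / mb_per_s / 3600

print(round(rebuild_hours(4), 1))   # ≈ 11.1 hours for a 4 TB mirror member
print(round(rebuild_hours(8), 1))   # ≈ 22.2 hours for an 8 TB drive
```

That window is exactly when the surviving drive is under the heaviest sustained read load of its life, which is one more argument for NAS-rated drives and a backup that isn't the NAS itself.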
|
# ? Feb 23, 2018 13:43 |
|
DrDork posted:Barracudas are, indeed, less than optimal for NAS use. If he decided to switch to a NAS-specific type of drive, there's no additional work needed--the system doesn't give a gently caress what type of drive you use. So I bought 8 4tb Barracudas with an LSI MegaRAID (college brokegoon, so no WD Reds). When drives die, can I just replace them with WD Reds, even with different speeds? Will it cap the speeds of the other drives, or does the RAID card just handle all of it on its own?
|
# ? Feb 25, 2018 23:21 |