Moey
Oct 22, 2010

I LIKE TO MOVE IT

Paul MaudDib posted:

Ordered tonight, unsurprisingly they aren't going to ship until after Chinese New Year. Ordered 2x16 GB of DDR4 ECC from Superbiiz as well.

I can't tell you what to buy, but if you think you can't do it on the LGA1151 E3 platform, I'd look for proof that you can do it on the E5 platform with the processor of your choice. The difference between processing 1080p sources and 4K sources can be huge, so pin down exactly how much throughput you need. You could potentially do 4x of an E3 build on an E5 platform... maybe 8x if you really splash out on a $3k+ chip... not really 16x+. You need a realistic estimate of what it takes to process your source files vs what you are willing to spend.

The Skylake-X chips are actually fast as gently caress because they can do AVX-512, which is very fast at video processing. At 4K that stuff does matter, even on x264. With 4K sources, you probably do need serious hardware especially if you expect multiple streams running at the same time.
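
(On the AVX-512 point above: a quick way to confirm a given box actually exposes those instructions is just to read the CPU feature flags. A minimal sketch, assuming Linux and Python 3; nothing here is specific to any particular encoder:)

```python
# Quick check (Linux only) for the AVX-512 feature flags that recent
# x264/ffmpeg builds can take advantage of. Just reads /proc/cpuinfo.
def avx512_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return sorted(fl for fl in flags if fl.startswith("avx512"))
    return []

if __name__ == "__main__":
    found = avx512_flags()
    print("AVX-512 support:", ", ".join(found) if found else "none")
```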

Well, I may be able to shove this guy into my rack (will have to measure). 4x 3.5" internal, then throw an IcyDock in the 5.25" bay for another 6x 2.5" 7mm drives. Looking at the pics, it will be complete hell to build in, though.

https://www.newegg.com/Product/Product.aspx?Item=N82E16811147250

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Oh, I forgot, one of the standard recommendations is this Natex special, normally the E5-2670, but it looks like they now have E5-2680 v2s as well. They're Sandy Bridge/Ivy Bridge respectively, so they do pull a fair amount of power, but you won't find a better deal for a 16C/32T or 20C/40T system. Note that you will need to populate at least one memory channel on each of the processors, so you can't run a single stick.

https://natex.us/motherboards/s2600cp2j-combo/e5-2670-v1/?sort=priceasc

https://natex.us/motherboards/s2600cp2j-combo/e5-2680-v2/?sort=priceasc

If you are willing to fart around with engineering samples you might be able to build something that's a little faster for the same money, but YMMV there.

https://www.techspot.com/review/1218-affordable-40-thread-xeon-monster-pc/

https://www.techspot.com/review/1155-affordable-dual-xeon-pc/

These are all EEB form-factor boards, which are intended for rack servers. Some but not all tower cases support them, and it wouldn't be a problem with a rackmount case.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I am trying to shove this into a wall mount network cabinet, so depth is a huge issue. I'll have a max of 16.5" from front to back.

EDIT:

http://www.supermicro.com/products/chassis/2U/523/SC523L-505B

Less internal storage, but ATX.

I'll end my derail here.

Moey fucked around with this message at 22:08 on Feb 16, 2018

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There are those old Rackable / SGI SE3016 units that may work for short-depth racks. The downside is they're slower on the SATA interface than you'd hope, but they should work fine for home media systems, minus the loud fan, which can be modded fairly easily now from what I've gathered. There are accompanying servers that were attached to those, or you can look for half-depth / short-depth rackmount mini-ITX chassis options.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
For anyone going down the FreeNAS 11 Docker route, I wish you luck. I tried for about 4 hours to get it all working, and it just wasn't coming together. Couldn't get RancherOS to sanely mount an NFS share, eventually ending up with the VM refusing to boot altogether because it hung when trying to mount the share (even though SSHing in and manually mounting it prior to that worked instantly and perfectly).

At this point, I've rebuilt most of my old Corral containers as iocage jails, and it's all performing admirably.

I'm really just disappointed. I'm a professional sysadmin in my day life, and I couldn't get this whole thing put together, so either I'm far less competent than I think I am, or the state of the ecosystem around Rancher is less than great. Maybe next time around, I'll just launch a straight Linux VM and manage the containers by hand, or maybe 11.2 will make it all Just Work the way that Corral did.
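
As a sanity check on the NFS side, something like the sketch below (hypothetical host/export path; assumes a Linux guest with the NFS client utilities installed and root) would at least separate "the export is mountable" from "RancherOS is doing something weird at boot":

```python
import subprocess, tempfile

NFS_EXPORT = "192.168.1.10:/mnt/tank/media"  # placeholder server/export

def try_mount(export):
    mountpoint = tempfile.mkdtemp(prefix="nfs_test_")
    try:
        # Soft mount with a short timeout so a dead export fails fast
        # instead of hanging the way the VM did at boot.
        subprocess.run(["mount", "-t", "nfs", "-o", "soft,timeo=30",
                        export, mountpoint], check=True, timeout=60)
        print("mounted OK at", mountpoint)
        subprocess.run(["umount", mountpoint], check=True)
    except subprocess.CalledProcessError as err:
        print("mount failed:", err)
    except subprocess.TimeoutExpired:
        print("mount hung past 60s, which matches the boot-hang symptom")

if __name__ == "__main__":
    try_mount(NFS_EXPORT)
```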

Hughlander
May 11, 2005

G-Prime posted:

For anyone going down the FreeNAS 11 Docker route, I wish you luck. I tried for about 4 hours to get it all working, and it just wasn't coming together. Couldn't get RancherOS to sanely mount an NFS share, eventually ending up with the VM refusing to boot altogether because it hung when trying to mount the share (even though SSHing in and manually mounting it prior to that worked instantly and perfectly).

At this point, I've rebuilt most of my old Corral containers as iocage jails, and it's all performing admirably.

I'm really just disappointed. I'm a professional sysadmin in my day life, and I couldn't get this whole thing put together, so either I'm far less competent than I think I am, or the state of the ecosystem around Rancher is less than great. Maybe next time around, I'll just launch a straight Linux VM and manage the containers by hand, or maybe 11.2 will make it all Just Work the way that Corral did.

I'm going to sing the praises of what I just did again. I took an ESXi/FreeNAS 9 instance with some zpools and installed Proxmox (Debian + a GUI and a few packages around HA, LXC, and QEMU/KVM) on a mirrored SSD zpool. I made an LXC for Plex (which was previously a VM), an LXC for Docker (again, previously another VM), and Debian itself just handles the ZFS stuff; everything else is inside LXCs or VMs for Windows / Hackintosh. The whole thing took less than a weekend until I was back where I started, with a new 10-drive zpool plus the 6-drive zpool I had originally. Now I have all the memory available to Docker, as opposed to 16 gigs dedicated to FreeNAS (due to passing through the controllers before), and I'm able to spin up a new LXC in seconds. When I later installed an LXC for passing through the USB printer to do AirPrint + Google Cloud Print + Windows printing, I noticed it uses 12 megs of memory for the whole LXC.
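
For anyone wanting to script that kind of setup, a rough sketch of the Proxmox side using its pct CLI is below; the VMID, template name, storage names, and dataset paths are placeholders, not anything from the actual build described above:

```python
# Create a small LXC for Plex on Proxmox and bind-mount a ZFS dataset into it.
import subprocess

VMID = "101"
TEMPLATE = "local:vztmpl/debian-9.0-standard_9.3-1_amd64.tar.gz"  # placeholder

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pct", "create", VMID, TEMPLATE,
     "--hostname", "plex",
     "--memory", "2048",
     "--cores", "2",
     "--rootfs", "local-zfs:8",
     "--net0", "name=eth0,bridge=vmbr0,ip=dhcp"])
# Bind-mount the media dataset from the host into the container.
run(["pct", "set", VMID, "-mp0", "/tank/media,mp=/media"])
run(["pct", "start", VMID])
```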

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Yeah, I'm kinda underwhelmed by FreeNAS 11, Corral felt better, I wish they'd just worked the bugs out instead of throwing a tantrum and scrapping everything.

BlankSystemDaemon
Mar 13, 2009



Corral was, by all accounts, jhb's baby; his leaving probably had something to do with it getting poo poo-canned.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

CommieGIR posted:

Yeah, I'm kinda underwhelmed by FreeNAS 11, Corral felt better, I wish they'd just worked the bugs out instead of throwing a tantrum and scrapping everything.

The upside is that 11 plans on pulling in basically everything that made Corral good. I still don't entirely understand the point of releasing Corral just to immediately EOL it and piecemeal-port it over to 11, but whatever.

Having Docker in 11 would be super nice, though, and I can't wait for them to do so.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

DrDork posted:

The upside is that 11 plans on pulling in basically everything that made Corral good. I still don't entirely understand the point of releasing Corral just to immediately EOL it and piecemeal-port it over to 11, but whatever.

Having Docker in 11 would be super nice, though, and I can't wait for them to do so.

Yeah... when. The beta GUI isn't as impressive, 11's features feel like warmed-over 9, and they went from an impressive bhyve implementation with an in-browser console to VNC-only VMs.

I hope they catch up soon, because it's kinda losing its appeal.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Well, yeah, 11 is basically 9.4 or thereabouts, just with a name-change and a promise of future features. Which makes me wonder why they hated Corral enough to abandon it, because obviously a ton of work had been put into it, and even more work will have to go into back-porting poo poo into 11.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
The one thing I wish I'd realized when I was moving my services from Corral to FN11 jails is that which UI you log into determines what kind of jails you get. I used the old UI to build them because it actually works, and now I have to pray I don't lose them whenever the transition happens.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
They claim they'll have a warden -> iocage conversion built into 11.2, so that should hopefully not be an issue. Personally, I just built mine via the CLI with iocage, knowing that was coming.

Kicking myself at this point for having not done this months ago. Deluge was choking really badly under the Docker VM in Corral. I had to allocate 7 cores to the VM just so it wouldn't sputter out trying to handle 400 torrents. Now, in a jail, it's keeping one core at about 30%. Insane difference.
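
For reference, the CLI route looks roughly like the sketch below; the release, jail name, dataset paths, and package name are placeholders rather than the exact setup described above:

```python
# Create an iocage jail on the FreeNAS host and nullfs-mount a dataset
# into it. Run as root on the host.
import subprocess

JAIL = "deluge"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["iocage", "create", "-n", JAIL, "-r", "11.1-RELEASE",
     "boot=on", "dhcp=on", "bpf=yes", "vnet=on"])
# Expose the downloads dataset inside the jail via nullfs.
run(["iocage", "fstab", "-a", JAIL,
     "/mnt/tank/downloads /downloads nullfs rw 0 0"])
# Package name is a placeholder; install whatever the jail is for.
run(["iocage", "exec", JAIL, "pkg", "install", "-y", "deluge-cli"])
```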

IOwnCalculus
Apr 2, 2003





Anyone here bought refurbs off of Goharddrive? Looks like they had some 8TB HGST Ultrastar HE8s for $150 on Amazon a couple days ago. Obviously they're used with reset SMART data because that's what they do, but it seems like overall they've still got a positive reputation, and those HGST drives are dead reliable to begin with. Works out to about $90 saved versus buying and shucking Easystores after tax since I need to buy four of them.
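
If you do buy them, it's worth at least eyeballing the SMART counters on arrival to see how "reset" they really are; a minimal sketch using smartctl (smartmontools), with a placeholder device path:

```python
# Pull a few SMART attributes and eyeball whether the counters look wiped.
# Needs root and smartmontools installed.
import subprocess

DEVICE = "/dev/sda"  # placeholder
INTERESTING = ("Power_On_Hours", "Reallocated_Sector_Ct",
               "Current_Pending_Sector", "Start_Stop_Count")

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if any(attr in line for attr in INTERESTING):
        print(line.strip())
```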

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

FWIW, the WD EasyStores are $159.99 right now.

(You may realize this and be including tax in your $90 estimate.)

IOwnCalculus
Apr 2, 2003





Yeah, that's what I was comparing based on. The Easystores go on sale for $160 so often that it's not worth buying them for more than that.

Beaucoup Haram
Jun 18, 2005

First part of the new server build complete.

Here are the parts I used:

Chassis: Chenbro NR-40700 from here: https://www.ebay.com/i/253419031346?rt=nc . I offered $400 and it was accepted straight away. Shipping to Australia was very expensive, though.
System core: Natex.us bundle - Intel S2600CP2J motherboard, 2x Xeon E5-2680 v2, 128GB DDR3 RAM. About $1000 US.
- Intel RMM4 IPMI module
Storage: Intel 900P 280GB Optane - for the storage VM
- Samsung 830 256GB SSD - secondary datastore. May end up getting something larger and newer depending on how much stuff I put on it.
- SanDisk 16GB Ultra USB drive - ESXi install
Cooling: Supermicro SNK-P0048AP4 CPU coolers. These are too loud; I'll probably replace them with Noctua units, which are more expensive but nowhere near as loud and cool better.

Initial build was much easier than I'd thought it would be. There were lots of dire warnings about BIOS settings, bricked motherboards due to flashing, etc., but everything worked well. Assembled, plugged into the case, cleared CMOS, flashed the latest BIOS via the EFI update, all good.

I'm still tossing up what to do drive-wise. At the moment I have two ML10v2 servers running napp-it on OmniOS, each with an 8-drive RAIDZ2 array of Toshiba 3TB SATA drives. I'd like to move to a 16-drive RAIDZ3 with 6TB drives, but I may just end up adding another 8-drive Z2 array with 6TB drives and moving the current drives over. This would double my current capacity from approx 30TB to 60TB.

Noise-wise it's fine. My room, with my desktop and the ML10v2s still running, sits at about 48dB according to some poxy iPhone app, and with the NR40700 running with the CPU fans disconnected (they're cool enough passively for now) it's about 53dB. This will drop when I put it in the rack and put the rack back in the garage, as summer is nearly over.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice!

I've got a terabyte of OneDrive storage from various promotions over the years and it's super easy to use it as a backup target. When I eventually get around to setting up a backblaze backup target I can only assume it will be just as easy.

But the thing I really like about it is that its resource usage is basically nonexistent. I haven't seen it use more than a couple percent CPU and a couple hundred MB of RAM, and that was during my initial backup.
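
For anyone curious what scheduling it looks like, a hedged sketch of driving the Duplicacy CLI from a cron-style job is below; the repository path, snapshot ID, and storage URL are placeholders, and the exact URL format for a given backend is something to check against Duplicacy's own docs:

```python
import os, subprocess

REPO = os.path.expanduser("~/")       # directory tree to back up (placeholder)
STORAGE = "one://backups/homedir"     # placeholder URL; format depends on backend
SNAPSHOT_ID = "homedir"

def duplicacy(*args):
    subprocess.run(["duplicacy", *args], cwd=REPO, check=True)

# One-time setup: creates the .duplicacy/ preferences directory in REPO.
if not os.path.isdir(os.path.join(REPO, ".duplicacy")):
    duplicacy("init", SNAPSHOT_ID, STORAGE)

# Regular incremental run; -stats prints the throughput/resource summary.
duplicacy("backup", "-stats")
```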

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thermopyle posted:

With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice!

I've got a terabyte of OneDrive storage from various promotions over the years and it's super easy to use it as a backup target. When I eventually get around to setting up a backblaze backup target I can only assume it will be just as easy.

But the thing I really like about it is that its resource usage is basically nonexistent. I haven't seen it use more than a couple percent CPU and a couple hundred MB of RAM, and that was during my initial backup.

Duplicacy has issues with large datasets:

https://github.com/duplicati/duplicati/issues/2544#issuecomment-366439875

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


Duplicacy != Duplicati

Chillyrabbit
Oct 24, 2012

The only sword wielding rabbit on the internet



Ultra Carp
Well just pulled the trigger on a 4 bay QNAP TS431P and a WD Red 8TB disk.

I think I'm going to set up a RAID level when I buy the other 3 disks to complete the set.

Question: Can I start storing stuff on the disk right now, and then when I buy the newer disks set up a RAID array, copy the data over to the RAID array, and then format and add the old drive to the array?

Or should I basically keep in mind that I need to keep a backup and just redo the whole RAID array when I have all 4?

EDIT: The term I was looking for was RAID migration. For QNAP, this article explains how to expand the drive sizes and RAID level.

Still need to keep a backup, but at first blush I can expand and upgrade from a single drive to 4 drives and set up RAID at the same time. It would probably be an all-day process, though, as you need to add each drive one at a time while it rebuilds the RAID array.

Chillyrabbit fucked around with this message at 22:48 on Feb 21, 2018

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

With the upcoming Crashplan changes, I've been evaluating backup solutions and I've got to say...Duplicacy is pretty nice!

I've got a terabyte of OneDrive storage from various promotions over the years and it's super easy to use it as a backup target. When I eventually get around to setting up a backblaze backup target I can only assume it will be just as easy.

But the thing I really like about it is that its resource usage is basically nonexistent. I haven't seen it use more than a couple percent CPU and a couple hundred MB of RAM, and that was during my initial backup.

I like the sound of all of this. I'm going to need to sort something out as a Crashplan replacement and I think I have reliable enough storage now across two locations that I can probably stop paying for backup services.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thermopyle posted:

Duplicacy != Duplicati

My mistake, sorry.

Ziploc
Sep 19, 2006
MX-5
Thanks everyone for your help. I've gotten to the point where I'm happy and can sleep well at night with my two redundant machines purring away in the basement.

Daily snapshots, daily replication, biweekly scrubs, email alerts, UPS monitoring.

Full offsite backup is next. Gonna be tricky with 10tb.

EDIT:

Primary
SUPERMICRO MBD-X10SL7-F-O with 14 SATA ports. IT mode. HTML5 IPMI.
Xeon E3-1241 v3
16GB ECC RAM
7x 4TB HGST NAS

Replicated Backup
SUPERMICRO MBD-X10SL7-F-O
Pentium G3258 (which started this whole thing since I had it laying around and it supported ECC)
16GB ECC RAM
7x 4TB WD Red

Both in Rosewill 4U cases with Kingwin hot-swap bays.

Ziploc fucked around with this message at 21:54 on Feb 20, 2018

Sheep
Jul 24, 2003

Thermopyle posted:

With the upcoming Crashplan changes

Are these different from the changes last year where the family plan went away and it was basically Business or nothing?

IOwnCalculus
Apr 2, 2003





As far as I'm aware, that's it. I had renewed mine not long before the announcement so I've still got some time.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sheep posted:

Are these different from the changes last year where the family plan went away and it was basically Business or nothing?

No, that's it.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
If there's no need for HA and storage needs are about 100TB, there is absolutely zero reason to look at something like Ceph instead of just a 12+ drive ZFS (or mdadm) box, right? It looks like you have to throw absolutely huge amounts of resources at Ceph before it begins to approach the performance of a pretty simple zfs setup.

Twerk from Home fucked around with this message at 16:32 on Feb 21, 2018

Zorak of Michigan
Jun 10, 2006

12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do.

I'm currently using FreeNAS with an 8x8TB RAIDz2 setup, and while I'm very happy with it, my dream is to find the money to build a Ceph cluster, so that I can have a good level of data protection but still have the flexibility to buy drives as needed.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Zorak of Michigan posted:

12 drives would be awfully small for Ceph, but I'd want to hear more about your environment to make a solid recommendation about what you ought to do.

I'm currently using FreeNAS with an 8x8TB RAIDz2 setup, and while I'm very happy with it, my dream is to find the money to build a Ceph cluster, so that I can have a good level of data protection but still have the flexibility to buy drives as needed.

I want to try Ceph at home one way or another, and am considering recommending it to a group I work with.

An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck and mdadm or zfs might actually be faster, given that it's got a lot of CPU power. They need another 100TB of storage soon-ish, and their IT guy mentioned that he wants to look at other options than just using a RAID card.

They don't need HA, and don't need the storage to appear as one unified pool. What I'm getting at is: for the same money, Ceph is just going to be way slower than their current solution, right? Their IT guy and I are also curious whether ZFS on Linux on the new storage server would outperform the 12-drive-wide RAID 6 on a MegaRAID card, in which case they could move everything over to the new box and redo the original box's RAID as software instead of hardware.

I also don't know a ton about Ceph and maybe I'm misunderstanding and it can be cost and performance competitive with linux software raid at the 100TB - 200TB space.
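
For what it's worth, the ZFS-on-Linux alternative being weighed here is pretty small to express; a minimal sketch with placeholder device names (this would of course wipe them) that mirrors the double-parity layout of the existing RAID 6:

```python
# One 12-wide raidz2 vdev, matching the double parity of the RAID 6 box.
import subprocess

DISKS = [f"/dev/disk/by-id/ata-EXAMPLE-{i}" for i in range(12)]  # placeholders

cmd = ["zpool", "create",
       "-o", "ashift=12",            # 4K-sector alignment
       "-O", "compression=lz4",      # cheap and usually a win
       "-O", "recordsize=1M",        # favors long sequential I/O
       "labdata", "raidz2", *DISKS]
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)
```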

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Have them look into a professionally built and supported appliance from the people FreeNAS partners with. You can get real big-boy hobo-SANs with 24 drive bays for a few thousand bucks, minus drive costs, and NAS-level drives work pretty well, so no need for the fancy SAS ones.

Also make damned sure that they have backups for that poo poo. A 16- or 35-slot tape autoloader deck off eBay and a newish LTO-7/8 drive will go a long way toward making sure that when someone gives it the rm -rf by accident, they aren't completely hosed. Even a quarterly full backup and weekly incrementals to the same set of tapes could make the difference between defaulting on their grant obligations due to technical issues and recovering in time to qualify for more grant money.

Edit: It also means that when one of the lab techs or visiting professors shows up with a crypto infection, you don't end up having half the array trashed before you notice the issue.

Methylethylaldehyde fucked around with this message at 01:54 on Feb 22, 2018

Zorak of Michigan
Jun 10, 2006

Twerk from Home posted:

An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck

Which performance metric is it lagging on?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Zorak of Michigan posted:

Which performance metric is it lagging on?

Sustained write is poorer than expected. I don't have specifics off the top of my head, but the workload deals more with long sustained reads/writes than lots of small I/O.
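
A quick way to put a number on that before blaming the RAID card: a single large sequential-write job with fio and direct I/O, so the page cache doesn't flatter the result. The target path is a placeholder:

```python
# Sequential-write benchmark against the RAID volume. Needs fio installed.
import subprocess

TARGET_DIR = "/mnt/bigarray/fio-test"  # placeholder path on the array

subprocess.run(["fio",
                "--name=seqwrite",
                f"--directory={TARGET_DIR}",
                "--rw=write",          # sequential write
                "--bs=1M",             # big blocks, like the real workload
                "--size=20G",
                "--numjobs=1",
                "--ioengine=libaio",
                "--iodepth=16",
                "--direct=1",          # bypass the page cache
                "--group_reporting"],
               check=True)
```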

Zorak of Michigan
Jun 10, 2006

Assuming you've checked the OS' network parameters and aren't getting hosed by something there (this may not be a thing anymore, but I was a UNIX admin in the 90s and bad networking defaults were a fact of life), then yeah, I'd be giving that network card the side eye. Methylethylaldehyde's recommendations sound pretty wise to me.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Zorak of Michigan posted:

Assuming you've checked the OS' network parameters and aren't getting hosed by something there (this may not be a thing anymore, but I was a UNIX admin in the 90s and bad networking defaults were a fact of life), then yeah, I'd be giving that network card the side eye. Methylethylaldehyde's recommendations sound pretty wise to me.

Good point on the network side; super-nice storage doesn't do dick if you're using a 3Com switch from 2003 to do the core routing. 10GbE from the hobo-SAN to the core switch, 1GbE to the endpoints. If an endpoint is especially fancy, like the virtualization cluster, it too gets a 10GbE uplink. You can get pretty nice switches with multiple 10GbE links for not a whole lot of cash money these days. Also, jumbo packets for days.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You should also split storage traffic onto separate VLANs that use larger MTUs. For anything hitting the Internet, an MTU of 1500 is basically here to stay, honestly.
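
Once the storage VLAN is bumped to jumbo frames, it's worth proving they actually pass end to end; a small sketch with placeholder interface/target (Linux iputils ping):

```python
# 8972 = 9000 minus 20 (IP header) and 8 (ICMP header) bytes;
# -M do forbids fragmentation, so the ping only succeeds if jumbo
# frames make it the whole way. Needs root for the MTU change.
import subprocess

IFACE = "eth1"          # storage-VLAN interface (placeholder)
TARGET = "10.0.10.20"   # NAS address on the storage VLAN (placeholder)

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", "9000"], check=True)
result = subprocess.run(["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET])
print("jumbo frames OK" if result.returncode == 0 else
      "fragmentation needed somewhere along the path")
```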

Novo
May 13, 2003

Stercorem pro cerebro habes
Soiled Meat

Twerk from Home posted:

I want to try Ceph at home one way or another, and am considering recommending it to a group I work with.

An academic lab that I consult with (not as IT, but writing software for them) currently has 1 storage box with 12x10TB hard drives in RAID 6 on an LSI raid card. Write performance is really lovely on it, which makes me think that the RAID card is likely the bottleneck and mdadm or zfs might actually be faster, given that it's got a lot of CPU power. They need another 100TB of storage soon-ish, and their IT guy mentioned that he wants to look at other options than just using a RAID card.

They don't need HA, and don't need the storage to appear as one unified pool. What I'm getting at is: for the same money, Ceph is just going to be way slower than their current solution, right? Their IT guy and I are also curious whether ZFS on Linux on the new storage server would outperform the 12-drive-wide RAID 6 on a MegaRAID card, in which case they could move everything over to the new box and redo the original box's RAID as software instead of hardware.

I also don't know a ton about Ceph and maybe I'm misunderstanding and it can be cost and performance competitive with linux software raid at the 100TB - 200TB space.

Ceph is fun to play with. I would never want to be responsible for a production Ceph cluster, though; too many horror stories. Also, performance is not great, and if you need HA, get ready to run two clusters in case you have a problem with your primary. And AFAIK Ceph still doesn't tolerate node outages very well: if you lose a node for any reason (say, a reboot), it starts to rebalance right away.

ZFS on Linux with some SSD should outperform Ceph easily. It's also infinitely easier to use.

Novo fucked around with this message at 20:12 on Feb 22, 2018
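
On the "ZFS on Linux with some SSD" point, what that usually looks like in practice is sketched below: a mirrored SLOG for sync writes plus an L2ARC cache device. Pool name and device paths are placeholders, and whether either helps depends heavily on the workload:

```python
import subprocess

def zpool(*args):
    cmd = ["zpool", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mirrored SLOG (separate intent log) on two small, power-loss-safe SSDs;
# only matters for sync-heavy writes (NFS, databases, VMs).
zpool("add", "tank", "log", "mirror", "/dev/nvme0n1", "/dev/nvme1n1")
# Single L2ARC read-cache device; losing it is harmless, so no mirror needed.
zpool("add", "tank", "cache", "/dev/sdq")
```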

Hulebr00670065006e
Apr 20, 2010
One of my colleague's two HDDs just failed in his RAID 1 NAS. How does swapping in a new one work? Also, I believe they are Barracudas, which I'm reading are not a good choice, so should I get him to buy either IronWolfs or Reds instead, and in that case, how would that work with getting his stuff onto the new HDDs?

Hulebr00670065006e fucked around with this message at 13:15 on Feb 23, 2018

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
In most cases, if the drive is totally dead, you simply remove the dead drive and slot in the new one. Basic NAS systems will need you to shut the system down to do this. More advanced ones are happy to let you do so while it's running. It should take care of things from there. If the drive is failing, but not entirely dead (like it's throwing errors but more or less still working), there's usually an option in the NAS management software to detach the drive in software before you pull the hardware, which will make the NAS happier.

Barracudas are, indeed, less than optimal for NAS use. If he decides to switch to a NAS-specific type of drive, there's no additional work needed; the system doesn't give a gently caress what type of drive you use.
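
Since a lot of consumer NAS boxes are Linux md RAID underneath, the "detach in software, then swap" step looks roughly like the sketch below at the CLI level; array and device names are placeholders:

```python
import subprocess

ARRAY = "/dev/md0"
FAILING = "/dev/sdb1"   # partition on the dying drive (placeholder)
NEW = "/dev/sdc1"       # partition on the replacement drive (placeholder)

def mdadm(*args):
    cmd = ["mdadm", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def detach_failing():
    # Mark the member failed in software, then pull it from the array.
    mdadm("--manage", ARRAY, "--fail", FAILING)
    mdadm("--manage", ARRAY, "--remove", FAILING)

def add_replacement():
    # After the physical swap and partitioning, this kicks off the rebuild.
    mdadm("--manage", ARRAY, "--add", NEW)

if __name__ == "__main__":
    detach_failing()
    # ...shut down if needed, swap the hardware, partition NEW to match...
    add_replacement()
```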

Ezekial
Jan 10, 2014

DrDork posted:

Barracudas are, indeed, less than optimal for NAS use. If he decides to switch to a NAS-specific type of drive, there's no additional work needed; the system doesn't give a gently caress what type of drive you use.

So I bought 8x 4TB Barracudas with an LSI MegaRAID (college brokegoon, so no WD Reds). When drives die, can I just replace them with WD Reds even with different speeds? Will it cap the speeds of the other drives, or does the RAID card just handle all of it on its own?
