Hughlander
May 11, 2005

Sagacity posted:

Ah, interesting. The reason I ask is because I'm still unsure whether to go for something like Xpenology or using FreeNAS. The latter is utterly spergy about robustness but I'm not sure if I care enough about that yet. :)

I'd also like to run ESXi on that, but I think both Xpenology and FreeNAS are not that happy about being hosted in a VM (unless you don't mind SMART not working and so on).

I'm in the same boat, reading up as much as I can on Xpenology and thinking of following Don Lapre's example machine with 16 gigs and ESXi as a Christmas-break hack project. Anyone have personal experience with Xpenology on ESXi?

Hughlander
May 11, 2005

Sagacity posted:

I'm now leaning towards going for FreeNAS on top of ESXi.

The Supermicro X10SL7-F has an Intel SATA controller onboard and an LSI SAS one. I can use the latter to connect a bunch of SATA disks and port forward this entire controller to the FreeNAS ESXi. Then the Intel controller can host an SSD with the ESXi datastore.

This should be pretty robust, but still give plenty of flexibility. The only thing I have to give up is the Mini-ITX form factor and I'll have to go with MicroATX instead.

Edit: With an Intel Xeon E3-1240v3, which supports VT-d.

When you spec that out, can you share it? It's a bit more than I was originally thinking, but it does seem to be a far nicer setup.

Hughlander
May 11, 2005

kiwid posted:

For any Canadians, the Newegg.ca shell shocker at 1:00pm EST is the 4TB WD Red drives.

http://www.newegg.ca/Special/ShellShocker.aspx?cm_sp=ShellShocker-_-22-236-599-_-12232013_2

edit: $184.99, meh.

3TB Reds are the US daily deal: $124.99 with promo code EMCYTZT5279.

Hughlander
May 11, 2005

So I'm ready to pull the trigger on some Reds for a NAS, but I'm not sure which size for the price: $160 for 4TB Reds vs. $300 for 6TB Reds, or $40/TB vs. $50/TB. I'm waffling between four 4s or four 6s and paying the $80 premium for the extra space. Any long-term thoughts on it?

Hughlander
May 11, 2005

spog posted:

My new NAS has been stuck in an Amazon delivery truck for 3 days while they repeatedly lie about trying to deliver it.

Does that count as 'cloud storage'?

Sorry, that must be my new AWS instance, so yes.

Hughlander
May 11, 2005

Anyone have experience with the Seagate 4TB NAS drives? They're on sale for $15 less than the Reds were last time, so I'm thinking of getting 6 for ZFS/Z2, but I've been looking at the Reds for so long that I have no idea how Seagate is these days.

Hughlander
May 11, 2005

Mea Culpa. I read it on mobile without graphics and the text didn't say anything about Seagate.

Passing till the next sale, I guess. Thanks, guys.

Hughlander
May 11, 2005

Straker posted:

"worth buying" price right now is about $25/TB with a bit of a premium for 5TB and large premium for 6TB drives, so that's about right.

I didn't think anyone really bought external drives to use them externally anyway, it's just stupid bullshit that they work out cheaper than internal drives most of the time, because Seagate is like Microsoft thinking nobody actually puts computers together any more, except twice as stupid and incompetent. That means don't buy WD externals, but Seagate externals should generally still be safe to buy.

Not sure I'd put $25/TB drives into a NAS. WD Reds are $40-48/TB depending on size and sales, and seem to be the standard. I just got 4TB for $160 and feel that's where it normally sits "on sale".

Hughlander
May 11, 2005

Jago posted:

It's not necessary, but it should last through as many writes as FreeNAS will ever put on it, unlike a thumb drive.

edit after thinking:
thewirecutter.com/reviews/the-best-usb-3-0-thumb-drive/ (32 GB fast, as tested)
Just buy a couple of these so you have a ready replacement if necessary.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820167180
SSDs start at 120GB and $64 for reliable ones, so it's probably not worth the cost.

FreeNAS doesn't write to the boot device and recommends you put it in read-only mode.

Hughlander
May 11, 2005

Combat Pretzel posted:

I've put a cheap SSD in to move the system and log files onto it, so that the actual data array can idle when I'm not at home/inactive (work + sleep) or on vacation.

That sounds cool. Is there a guide to follow for that? I have a large unused SSD in my system since I originally thought about a ZIL or L2ARC, only to realize that with my setup it'd be useless.

Hughlander
May 11, 2005

Combat Pretzel posted:

Just stuff it into the NAS, create a separate pool, go to System > Settings > System Dataset and select the new pool. You need to reboot for it to complete properly, otherwise it still creates IO on the previous pool. Other than that, you need to set timeouts on the disks and APM to level 64 for them to spin down. You can do that in the properties of the disks in Storage > View Disks. Select a generous timeout, otherwise they keep spinning up and down in general use, if you don't generate enough IO.

--edit: You can move plugins and everything there, too. Probably even required to do this, if you want the data pool disks to spin down.

When you said move the system, I thought you meant the boot drive to the SSD. I was thinking of doing what you wrote above, but now that I think about it, what I really need to do is automate it for all of the VMs as well, e.g. mount /var/log on every machine from there.
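Something like this in each VM's /etc/fstab is roughly what I'm picturing (just a sketch; "nas" and the dataset path are placeholders for however the SSD pool ends up exported):

# hypothetical NFS export of the SSD pool, one subdirectory per VM
nas:/mnt/ssd/logs/vm01  /var/log  nfs  rw,noatime,vers=3,_netdev  0  0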

Hughlander
May 11, 2005

Is it possible to have FreeNAS serve an iSCSI device to a Windows machine that formats it as NTFS, and then also have FreeNAS mount it read-only?

I want to have Calibre manage ebooks on a Win7 machine that may or may not be on, while at the same time having access to said books over a web server in a FreeNAS jail. Calibre expects direct drive access, and mounting the drive via NFS causes renames to fail.

Hughlander
May 11, 2005

Moey posted:

Or run ESXi on your host, then have a FreeNAS VM and pass your RAID card through. You will still get SMART data and whatnot.

That's what I do, as long as you can do VT-d passthrough.

Hughlander
May 11, 2005

mayodreams posted:

ZFS does own but it can't alleviate the probability of a drive failing during a rebuild of a RAID5 volume, particularly as the size of the individual drives goes up.

RAIDZ2 for life.

You mean RAIDZ2 for the next 2-3 years, until drive sizes increase enough that the probability of multiple drives failing is too high and we do ten years of RAIDZ3!

Hughlander
May 11, 2005

IOwnCalculus posted:

Yeah, they aren't kidding when they say unlimited, and the fact that they make it relatively easy to support NAS-type environments is the icing on the cake.

Just takes forever to upload. I was at "29 days remaining" for 2 months. It took 4 months total to get 4TB done, with speeds starting at 12Mbit and ending at 2.

Hughlander
May 11, 2005

eddiewalker posted:

Is it their incoming speeds that are so limited?

I decided to get serious about offsite after a storm killed the onboard NIC in my N54L (and *everything* else in the house connected to an ethernet cable)

I was planning to park the NAS at my grandparents' house and Crashplan off their sweet, sweet, totally-wasted gigabit upload for a few days. Bad plan?

Given that the estimate sat at the same value for literally weeks, I assumed it was a straight-up calculation of how much is done vs. how much is left, with throttling at the application-protocol level.

Hughlander
May 11, 2005

phosdex posted:

With FreeNAS it sort of doesn't really matter if your thumb drive dies after a couple of months. As long as you make backups of your config, you can install to a new thumb drive, boot from it, import your config, and be back up and running really quickly. I've tested this. I just use some cheap Kingston thumb drives that came in a 4-pack and are actually really terrible performance-wise.

What set of files or dirs counts as the config here?

Hughlander
May 11, 2005

Furism posted:

RE: "proper backup." I think for the actually critical stuff the best bet is still paying for OneDrive, DropBox or any of those. The stuff will be off-site, replicated in various data-centers and all that. If you're worried about the NSA getting a hold of the files, just use TrueCrypt.

(that is, unless you know how to set up a VPS with a SyncThing client or something like that - that comes cheaper, but you have to manage it and stuff).

Best of both worlds: CrashPlan with your own encryption key.

Hughlander
May 11, 2005

LmaoTheKid posted:

So I spun up a test VM of FreeNAS last night. So far so good, but I have one issue: Transmission kind of sucks.

Deluge or rtorrent would be preferred and it looks like I can set up jails for one of them.


I guess my main question is, how does FreeNAS handle custom jails if I upgrade? Are they preserved?

What's wrong with Transmission? I use it in a Docker container that only lets it use a VPN IP, so I've never looked at other server-based clients. If you have the API set up, does the client really matter?
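Roughly, the trick is just pointing the Transmission container at the VPN container's network namespace, something like this (sketch only; the image names are placeholders, not specific images I'm vouching for):

# start a VPN client container first (image name is a placeholder)
docker run -d --name vpn --cap-add=NET_ADMIN --device /dev/net/tun some/openvpn-client
# then share its network stack, so Transmission only ever sees the VPN interface
docker run -d --name transmission --net=container:vpn some/transmission-daemon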

Hughlander
May 11, 2005

LmaoTheKid posted:

No support for multiple watch folders is a deal breaker for me.

E: I'm kind of thinking of going with unRAID.

Which is used for what? Multiple output folders? I never really did a lot of research there and have just been dumping SABnzbd and Transmission into the same output folder that SickRage, CouchPotato, Headphones, and Mylar are watching to post-process from...

Hughlander
May 11, 2005

Way off topic, but I fly internationally a lot and my iPad holds about 80 gigs of synced TV shows I have no other time to watch. Does anyone have a mobile app anywhere near the class of Plex Pass for iOS for that use case? I just want to say "keep the last 5 episodes of these 20 shows unwatched" and have it freshen every time I'm near wifi.

Hughlander
May 11, 2005

This probably isn't the right thread for it so if people want to point me elsewhere do so...

my NAS just lost an SSD that wasn't mirrored (or really used for anything; I was going to use it for L2ARC before realizing how stupid that was). At the same time my Windows 10 desktop lost its non-system rotational drive. As I replaced that with a 2TB drive and restored from CrashPlan, I put the two together and wondered how annoying it'd be to lose the system SSD.

Is there an easy way on Windows 10 to mirror an existing SSD to a partition on a much larger rotational drive, so I could just continue to boot if the SSD died?

Hughlander
May 11, 2005

G-Prime posted:

And realistically, docker containers themselves are leaps and bounds ahead of jails in terms of usability for the average person. Migration of configs might not be fun, but it's totally doable. My biggest worry is no longer having separate IPs for every app I run (which isn't actually a problem, I just got used to muscle memory for logging into my various things).

Just use the --ip option to docker run.
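One caveat: --ip only works on a user-defined network, so in practice it looks something like this (sketch; subnet, address, and image name are made up):

docker network create --subnet=172.25.0.0/16 appnet
docker run -d --net appnet --ip 172.25.0.10 --name someapp your/app-image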

Hughlander
May 11, 2005

The Milkman posted:

Is there a way to invoke zeroconf (or whatever) magicks to give containers a .local address?

I don't know of one. I looked briefly at setting up a Dockerized CUPS for AirPrint and just said gently caress it and did it on the base VM.

Hughlander
May 11, 2005

Thanks Ants posted:

I think I'm going to stick with 9.10 for a few months and give a chance for things to mature a bit more

That's my plan, though I may drop ESXi from my solution. Right now I have a 32-gig Xeon running ESXi: the LSI controller and 16 gigs are passed through to FreeNAS, Plex gets 4 gigs committed, a Docker host running 15 containers gets 12 gigs with 8 committed, and the rest goes to ad hoc Windows / OS X / Linux servers. I could see just doing one boot2docker VM with 24 gigs on FreeNAS 10 and simplifying it.

Hughlander
May 11, 2005

G-Prime posted:

I just bumped mine from 16 to 20, probably going to need to head toward 24 soon. I've determined at this point that with my library size, Plex absolutely dominates my system. Radarr looks like it's hitting things pretty hard too. If I keep Plex running, Deluge just crashes repeatedly. It's brutal. I'm going to have to reassess how I'm going to handle things.

That was why I spec'd out different VMs originally, so I could control the resource allocation. Running Jackett, Sonarr, and Radarr on Linux is a huge sink (all Mono). You can also specify memory and CPU limits per container, but I haven't played much there.
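For the per-container limits it's just flags on docker run, something like this (sketch; the values and image name are made up, and --cpus needs a reasonably recent Docker):

# cap a hypothetical Plex container at 4 GB of RAM and 2 CPUs
docker run -d --name plex --memory=4g --memory-swap=4g --cpus=2 your/plex-image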

Hughlander
May 11, 2005

Matt Zerella posted:

Skip FreeNAS/ECC and go with unRAID because it's The Best and who cares about bit rot at home.

My counterpoint: It's like $40 more for ECC, and until unRAID gets snapshots and multiple parity drives it's not worth considering with 64TB of raw storage.

Hughlander
May 11, 2005

Matt Zerella posted:

You can do multiple parity with unraid. 2 counts right? ;)

Or you can do SnapRAID/OpenMediaVault and get native dockers/kvm?

I dunno, the new FreeNAS is really slick but after playing with it and the mental gymnastics you have to do with ZFS, it just seems kind of silly for home use unless you're doing home lab stuff where you need the features. Just my 2 cents.

I've been enamored with how easy unRAID has been in terms of setting and forgetting, with the only PITA being preclearing drives. And maybe with 64TB of storage that might take forever.

I did a quick Google and it said it only supported one parity drive; if that's wrong, I'll retract that. :) I'd also argue that 64TB is homelab. But the whole setting-and-forgetting thing is also why I'd go FreeNAS with ECC, ZFS RAIDZ2, overlapping snapshots, and CrashPlan. The only time I even think about it is when I need to mount a snapshot 'cuz Calibre shat the bed again.
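For what it's worth, the snapshot part really is about as painless as it gets; something like this is all it takes (pool/dataset/snapshot names here are just made-up examples):

zfs list -t snapshot | grep books                        # find the snapshot
ls /mnt/tank/books/.zfs/snapshot/auto-20170101/          # browse it read-only via the hidden .zfs directory
zfs clone tank/books@auto-20170101 tank/books-restore    # or clone it if you need a writable copy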

Hughlander
May 11, 2005

If you guys use Crashplan listen here...

I run CrashPlan in a Docker container that auto-updates itself, and when it does I usually go a week or two without resetting the JVM memory option back to 4g, so it crashes constantly and takes forever to catch up. This last time it was saying 1.4-2.8 years to catch up, so I got sick of it, poked and prodded, and came across <dataDeDupAutoMaxFileSizeForWan> in the my.service.xml file. It seems to specify the max file size at which it will still try to dedup against the CrashPlan server; it was set to 0, meaning all files, and I set it to 1. My throughput went from 4-600kbps to 30Mbps. I was trying to paste a Datadog graph but imgur kept saying it couldn't take it... I'm uploading 3-4 megabytes a second, and the time to catch up dropped to 23 days; after a day it's now 19 days.
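For reference, the edit is just that one element in my.service.xml, something like this (stop the CrashPlan engine before editing; the rest of the file will obviously differ per install):

<!-- was 0 (dedup every file over the WAN); 1 effectively disables it -->
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>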

Hughlander
May 11, 2005

IOwnCalculus posted:

Which Docker container are you using? This one seems to remember my java mx setting across reboots and updates.

jrcs/crashplan. I considered that one but didn't want the client portion. I may reconsider when this update is done, though...

Hughlander
May 11, 2005

Thermopyle posted:

That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload.

It only becomes an issue when you pass 4-5 TB of data. Below that the native 1G is sufficient.

Hughlander
May 11, 2005

fletcher posted:

I think that's the culprit. Loads of OutOfMemoryError entries in /usr/local/crashplan/log/service.log.0

I thought I had given it 8192m but looking at /usr/local/crashplan/bin/run.conf it must have gotten reverted to 1200m at some point.

Sucks that OOM error wiped out my existing backup that took like 2 years to upload :(

It didn't; that's just how that line is reported. What it's saying is that it got through 10% of syncing blocks with the server. Fix the memory through the java mx command and in 10 hours or so it'll catch up.

Note: AFAIK you need to use the console, not any config file.
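What I mean by the console: open the hidden CrashPlan command window from the desktop client (double-clicking the logo does it in the versions I've used) and enter something like the line below. That's from memory, so double-check the exact syntax for your version:

java mx 8192, restart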

Hughlander
May 11, 2005

I've got 6TB usable left. When I fill it, I'm adding a second group of six 8TB drives.

Hughlander
May 11, 2005

EconOutlines posted:

It could be just a weird thing on my end but I've had some funky issues with Spideroak backup and sync on its own. Their app seems overly clunky and slow as well.

Anyways, being spoiled by Dropbox, I went back to them with Boxcryptor for encryption. I've been able to accumulate almost 30GB over the last 4 years, so it suits my needs.

I think in the packrats thread, suggesting Dropbox and a 30GB backup set isn't a reasonable comparison. My CrashPlan console says 2,531,854 files / 11.4 TB, and I'm probably on the low end around here.

Hughlander
May 11, 2005

DrDork posted:

It's a perfectly reasonable solution for probably 98% of people, the same way that a cheap franken-box is probably a reasonable solution for a lot more people than is a custom built ZFS/Xeon/fiber-channel rack-mount. No reason not to mention rational solutions from time to time.

That said I'm still pricing out fiber lines for my house because gently caress reasonable!

98% of people don't belong in this thread, though. Those 98% wouldn't think of a home NAS to begin with; at most they'd do a WD Book or something.

That does remind me, though: I think in the next year I'm going to add another 6-8 drives and move to a rackmount case. Any thread recommendations for something that will let me hot-swap 15-16 drives?

Hughlander
May 11, 2005

Dr. Poz posted:

I'm by no means putting SpiderOak through its paces with my usage, but I've been using it for a little under two years. The only issues with the software I've ever had have been when I wanted to run it on Arch Linux, and even then I installed it by converting a Debian package to an AUR package, so I didn't exactly take the "happy path." My main reason for using them is their focus on privacy. I have recommended it to friends and coworkers and would happily continue to do so. I've never used Crashplan or any of the other major providers, though, so my experience is limited.

What is the focus on privacy, exactly? CrashPlan encrypts backups by default, has an option to add a second password (separate from your account password) that protects the default key on the server, and finally lets you supply your own key that's never sent to the server if you want maximum security (with obviously no chance of recovery if you lose said key). I'm approaching it from the opposite end: I'm about to hit year 3 of my CrashPlan subscription and haven't used the others.

I may give the free tier a try, but it seems the feature set is less than CrashPlan's for some things that I use at least once a month, e.g.: grab a random file from an offline computer and view it on my phone; browse deleted files from last year and pick a directory to restore because I didn't think I'd want to watch that TV show but changed my mind; etc.

Hughlander
May 11, 2005

Thermopyle posted:

Are the spider oak clients open source? I don't think CrashPlan clients are, so you're just placing your faith in them that they've implemented encryption like they say they have.

Note that I'm not saying this is something worth worrying about, only that it is a plausible and not insane thing to worry about.

Looks like the SpiderOak desktop client isn't open source.
And for CrashPlan, it's Java... I'm sure you could crack open the JAR and see that they're calling the JDK's AES APIs just like you're supposed to. I'm equally sure there's someone out there who has.

Hughlander
May 11, 2005

Saukkis posted:

How are security updates to the libraries handled with jails? If I understood your symlink remark correctly, the jailed application will still use the OS libraries, so that takes care of security updates. But if all the libraries come from the OS, then what's the point of jailing? And if the jail has its own copy of the libraries, does that mean that whenever there's a security update to any of the included libraries, you have to release a security update to your application? I have the same concern with Ubuntu Snap and I wasn't able to figure out how they address it with a cursory look. Docker can be included in this too.

If I were a developer considering Docker, I would have to think hard about whether I want to shoulder that much responsibility. I work as a sysop, and if it comes from the repositories it's our responsibility; the developer doesn't need to care. If there's an update to the kernel, OpenSSL, Java, Python or Apache, Red Hat will send us an email and we'll do what's necessary. The developer only has to worry about the piece of PHP, Ruby or Java they have written.

There's not much difference with Docker. You start your Dockerfile with

FROM ubuntu:latest

and then do a docker pull before building your container. Bam, latest security fixes applied.
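Concretely, that's just something like this (sketch; the image tag and name are whatever your Dockerfile actually uses):

docker pull ubuntu:latest        # refresh the base image
docker build -t myapp .          # rebuild your container on top of it
# or let the build refresh the base image itself:
docker build --pull -t myapp .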

Hughlander
May 11, 2005

insularis posted:

Seriously, do this. I just migrated all my jail stuff from FreeNAS (by which I mean, started over) to a new power-sipping Xeon D-1541 box with a single NVMe drive, running ESXi with the SSD for the VMs, which each use FreeNAS for bulk storage if they need it. Holy poo poo, it is night and day how much better life is. Stability, updating, configuration, management, performance ... all through the roof compared to jails.

I used to be all about the jails. Now I say gently caress jails. My storage server is now a single purpose appliance, just serve storage.

How much did that run you? I'm considering a second box for a Docker host. I want to run Docker on bare metal with the maximum RAM and don't see a good way to do that with my existing FreeNAS setup.

Hughlander
May 11, 2005

I'm currently thinking of installing Backblaze on a Windows Server 2016 VM on the FreeNAS machine and just exposing the volumes over SMB. Any downside to doing that? I had the CrashPlan family account for like 4 years and don't really want to drastically increase the cost of backup, but I do want the convenience of a full restore if needed. B2 was estimated at something like $850 per year just for the NAS.
