|
Sagacity posted:Ah, interesting. The reason I ask is because I'm still unsure whether to go for something like Xpenology or FreeNAS. The latter is utterly spergy about robustness, but I'm not sure I care enough about that yet. I'm in the same boat, reading up as much as I can on Xpenology, and thinking of following Don Lapre's example machine with 16 gigs and ESXi as a Christmas-break hack idea. Anyone have personal experience with Xpenology on ESXi?
|
# ¿ Dec 19, 2013 16:20 |
|
Sagacity posted:I'm now leaning towards going for FreeNAS on top of ESXi. When you spec that out, can you share it? It's a bit more than I was originally thinking, but it does seem to be a far nicer setup.
|
# ¿ Dec 20, 2013 04:15 |
|
kiwid posted:For any Canadians, the Newegg.ca Shell Shocker at 1:00pm EST is the 4TB WD Red drives. 3TB Reds are the US daily deal. $124.99 with promo code EMCYTZT5279
|
# ¿ Dec 23, 2013 19:42 |
|
So ready to pull the trigger on some Reds for a NAS, but not sure which for the price: $160 for 4TB Reds vs $300 for 6TB Reds, or $40/TB vs $50/TB. I'm waffling between 4x4s or 4x6s and paying the $80 premium for the extra space. Any long-term thoughts on it?
|
# ¿ Aug 8, 2014 17:39 |
|
spog posted:My new NAS has been stuck in an Amazon delivery truck for 3 days while they repeatedly lie about trying to deliver it. Sorry, it must be my new AWS instance, so yes.
|
# ¿ Aug 17, 2014 02:00 |
|
Anyone have experience with the Seagate 4TB NAS drives? They're on sale for $15 less than the Reds were on sale last time, so I'm thinking of getting 6 for ZFS RAIDZ2, but I've just been looking at the Reds for so long I have no idea how Seagate is these days.
|
# ¿ Sep 28, 2014 05:07 |
|
Mea culpa. I read it on mobile without graphics and the text didn't say anything about Seagate. Passing till the next sale, I guess. Thanks, guys.
|
# ¿ Sep 28, 2014 06:04 |
|
Straker posted:"worth buying" price right now is about $25/TB with a bit of a premium for 5TB and a large premium for 6TB drives, so that's about right. Not sure I'd put $25/TB drives into a NAS. WD Reds are $40-48/TB depending on size and sales, and seem to be the standard. I just got 4TB for $160 and feel that's where it normally lands "on sale".
|
# ¿ Nov 28, 2014 04:37 |
|
Jago posted:It's not necessary, but it should last through as many writes as FreeNAS will ever put on it, unlike a thumb drive. FreeNAS doesn't write to the boot device and recommends you put it in read-only mode.
|
# ¿ Dec 1, 2014 15:35 |
|
Combat Pretzel posted:I've put a cheap SSD in to move the system and log files onto it, so that the actual data array can idle when I'm not at home/inactive (work + sleep) or on vacation. That sounds cool. Is there a guide to follow for that? I have a large unused SSD in my system, since I originally thought about a ZIL or L2ARC, only to realize that with my setup it'd be useless.
|
# ¿ Dec 1, 2014 22:12 |
|
Combat Pretzel posted:Just stuff it into the NAS, create a separate pool, go to System > Settings > System Dataset and select the new pool. You need to reboot for it to complete properly; otherwise it still creates IO on the previous pool. Other than that, you need to set timeouts on the disks and APM to level 64 for them to spin down. You can do that in the properties of the disks in Storage > View Disks. Select a generous timeout, otherwise they keep spinning up and down in general use if you don't generate enough IO. When you said "move the system" I thought you meant the boot drive to the SSD. I was thinking of doing what you wrote above, but now that I think about it, what I really need to do is automate it for all of the VMs as well, like mounting /var/log on all machines from there.
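For what it's worth, the GUI knobs described above map roughly onto FreeBSD's camcontrol under the hood. A rough sketch of the equivalent commands, assuming a data disk at ada0 (device name and timeout are just examples; FreeNAS normally manages this for you via the GUI):

```shell
# Set APM to level 64 (aggressive power management, allows spin-down)
camcontrol apm ada0 -l 64

# Put the disk into standby after 30 minutes (1800 s) of inactivity
camcontrol standby ada0 -t 1800
```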
|
# ¿ Dec 2, 2014 07:02 |
|
Is it possible to have FreeNAS serve an iSCSI device to a Windows machine that formats it as NTFS, and then also have FreeNAS mount it read-only? I want to have Calibre manage ebooks on a Win7 machine that may or may not be on, while at the same time having access to said books over a web server in a FreeNAS jail. Calibre expects direct drive access, and mounting the drive via NFS causes renames to fail.
|
# ¿ Mar 24, 2015 15:31 |
|
Moey posted:Or run ESXi on your host, then have a FreeNAS VM and pass your RAID card through. You will still get SMART data and whatnot. That's what I do, as long as you can do VT-d passthrough.
|
# ¿ Apr 4, 2015 20:34 |
|
mayodreams posted:ZFS does own, but it can't alleviate the probability of a drive failing during a rebuild of a RAID5 volume, particularly as the size of the individual drives goes up. You mean RAIDZ2 for the next 2-3 years, until drive sizes increase enough that the probability of multiple drives failing is too high and we do ten years of RAIDZ3!
|
# ¿ Apr 23, 2015 07:02 |
|
IOwnCalculus posted:
Just takes forever to upload. I was at 29 days for 2 months. Took 4 months total to get 4TB done, with speeds starting at 12Mbit and ending at 2.
|
# ¿ May 21, 2015 03:50 |
|
eddiewalker posted:Is it their incoming speeds that are so limited? Given the estimate sat at literally weeks at the same value, I assumed it was a straight-up calculation of how much is done vs. how much is left, with throttling at the application protocol level.
|
# ¿ May 21, 2015 15:23 |
|
phosdex posted:With FreeNAS it sort of doesn't really matter if your thumb drive dies after a couple of months. As long as you make backups of your config, you can install to a new thumb drive, boot from it, import your config, and be back up and running really quickly. I've tested this. I use just some cheap Kingston thumb drives that came in a 4-pack and are actually really terrible performance-wise. What set of files or dirs counts as config here?
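If memory serves, the "config" here is a single SQLite database rather than a set of dirs. A sketch, assuming the FreeNAS 9.x path and a hypothetical backup dataset (both worth verifying on your build):

```shell
# FreeNAS 9.x keeps its whole configuration in one SQLite DB; copy it out dated
cp /data/freenas-v1.db /mnt/tank/backups/freenas-config-$(date +%F).db
```

The GUI's System > General > Save Config button exports the same database, which is the supported route.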
|
# ¿ Dec 17, 2015 15:12 |
|
Furism posted:RE: "proper backup." I think for the actually critical stuff the best bet is still paying for OneDrive, Dropbox or any of those. The stuff will be off-site, replicated in various data centers and all that. If you're worried about the NSA getting hold of the files, just use TrueCrypt. Best of both worlds: CrashPlan with your own encryption key.
|
# ¿ May 17, 2016 16:04 |
|
LmaoTheKid posted:So I spun up a test VM of FreeNAS last night. So far so good, but I have one issue: Transmission kind of sucks. What's wrong with Transmission? I use it with a Docker container that only lets it use a VPN IP, so I've never looked at other server-based clients. If you have the API set up, does the client really matter?
|
# ¿ May 22, 2016 23:53 |
|
LmaoTheKid posted:No support for multiple watch folders is a deal breaker for me. Which is used for what? Multiple output folders? I really never did a lot of research there; I've just been dumping SABnzbd and Transmission into the same output folder that SickRage, CouchPotato, Headphones, and Mylar are looking to post-process in...
|
# ¿ May 23, 2016 04:08 |
|
Way off topic, but I fly internationally a lot and my iPad is about 80 gigs of synced TV shows I have no other time to watch. Does anyone have a mobile app anywhere near the class of Plex Pass for iOS for that use case? I just say "keep the last 5 episodes unwatched of these 20 shows" and every time I'm near wifi it gets refreshed.
|
# ¿ Jun 14, 2016 15:00 |
|
This probably isn't the right thread for it, so if people want to point me elsewhere, do so... my NAS just lost an SSD that wasn't mirrored (or really used for anything; I was going to use it for L2ARC before realizing how stupid that was). At the same time, my Windows 10 desktop lost its non-system rotational drive. As I replaced it with a 2TB drive and restored from CrashPlan, I put the two together and wondered how annoying it'd be to lose the system SSD. Is there an easy way on Windows 10 to mirror an existing SSD to a partition of a much larger rotational drive, so I could just continue to boot if the SSD died?
|
# ¿ Feb 7, 2017 00:40 |
|
G-Prime posted:And realistically, docker containers themselves are leaps and bounds ahead of jails in terms of usability for the average person. Migration of configs might not be fun, but it's totally doable. My biggest worry is no longer having separate IPs for every app I run (which isn't actually a problem; I just got used to muscle memory for logging into my various things). Just use the --ip option to docker run.
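One wrinkle worth noting: --ip only takes effect on a user-defined network, not the default bridge. A minimal sketch (subnet, network name, and image are just examples):

```shell
# --ip is ignored on the default bridge, so create a user-defined network first
docker network create --subnet=172.25.0.0/16 appnet

# Pin the container to a fixed address on that network
docker run -d --network=appnet --ip=172.25.0.10 --name=sonarr linuxserver/sonarr
```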
|
# ¿ Mar 18, 2017 05:52 |
|
The Milkman posted:Is there a way to invoke zeroconf (or whatever) magicks to give containers a .local address? I don't know of one. I looked briefly at setting up a Docker CUPS container for AirPrint and just said gently caress it and did it on the base VM.
|
# ¿ Mar 18, 2017 17:14 |
|
Thanks Ants posted:I think I'm going to stick with 9.10 for a few months and give things a chance to mature a bit more. That's my plan, though I may drop ESXi from my solution. Right now I have a 32 gig Xeon running ESXi: pass through the LSI and 16 gigs to FreeNAS, then run 4 gigs committed for Plex, 12 gigs with 8 committed to a Docker host running 15 containers, and then ad hoc Windows / OS X / Linux servers. I could see just doing one boot2docker with 24 gigs on FreeNAS 10 and simplifying it.
|
# ¿ Mar 19, 2017 17:28 |
|
G-Prime posted:I just bumped mine from 16 to 20, probably going to need to head toward 24 soon. I've determined at this point that with my library size, Plex absolutely dominates my system. Radarr looks like it's hitting things pretty hard too. If I keep Plex running, Deluge just crashes repeatedly. It's brutal. I'm going to have to reassess how I'm going to handle things. That was why I specified different VMs originally, so I could control the resource allocation. Using Jackett, Sonarr, and Radarr on Linux is a huge sink (all Mono). You can also specify memory and CPU limits per container, but I haven't played much there.
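The per-container limits mentioned above are just flags on docker run. A sketch (container name, image, and limits are made-up examples; --cpus needs Docker 1.13 or newer):

```shell
# Cap a hypothetical Plex container at 4GB of RAM and 2 CPUs
docker run -d --name=plex --memory=4g --cpus=2 plexinc/pms-docker
```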
|
# ¿ Mar 19, 2017 22:53 |
|
Matt Zerella posted:Skip FreeNAS/ECC and go with unRAID because it's The Best and who cares about bit rot at home. My counterpoint: it's like $40 more for ECC, and until unRAID gets snapshots and multiple parity drives, it's not worth considering for 64TB of raw storage.
|
# ¿ Mar 24, 2017 15:22 |
|
Matt Zerella posted:You can do multiple parity with unRAID. 2 counts, right? I did a quick Google and it said it only supported one parity drive; if that's wrong, I'll retract that. I'd also argue that 64TB is homelab. But the whole setting-and-forgetting is also why I'd go FreeNAS with ECC, ZFS RAIDZ2, overlapping snapshots, and CrashPlan. The only time I even think about it is when I need to mount a snapshot 'cuz Calibre shat the bed again.
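That snapshot workflow looks roughly like this on the command line (pool/dataset/snapshot names are made up; FreeNAS normally schedules periodic snapshots in the GUI):

```shell
# Take a manual snapshot of a hypothetical dataset
zfs snapshot tank/books@before-calibre-cleanup

# List existing snapshots
zfs list -t snapshot

# Browse a snapshot read-only via the hidden .zfs directory,
# no explicit mount needed
ls /mnt/tank/books/.zfs/snapshot/before-calibre-cleanup
```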
|
# ¿ Mar 24, 2017 15:38 |
|
If you guys use CrashPlan, listen here... I run CrashPlan in a Docker container that auto-updates itself, and when it does I usually go a week or two without resetting the VM memory options to 4GB, so it crashes constantly and takes forever to catch up. This last time it was saying 1.4-2.8 years to catch up, so I got sick of it, poked and prodded, and came across <dataDeDupAutoMaxFileSizeForWan> in the my.service.xml file. It seems to specify the max file size before it'll try to dedupe against the CrashPlan server. It's set to 0, meaning all files; I set it to 1. My throughput went from 400-600Kbps to 30Mbps. I was trying to paste a Datadog graph, but imgur kept saying that it couldn't take it... I'm uploading 3-4 megabytes a second, and the time to catch up dropped to 23 days; after a day it's now 19 days.
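For anyone hunting for it, the tweak is a one-line change in my.service.xml (element name as given above; surrounding file structure elided):

```xml
<!-- 0 = attempt dedupe on every file; 1 = skip dedupe for anything over 1 byte -->
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
```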
|
# ¿ Mar 27, 2017 05:24 |
|
IOwnCalculus posted:Which Docker container are you using? This one seems to remember my Java mx setting across reboots and updates. jrcs/crashplan. I considered that one, but didn't want the client portion. I may reconsider it when this update is done, though...
|
# ¿ Mar 27, 2017 14:04 |
|
Thermopyle posted:That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload. It only becomes an issue when you pass 4-5TB of data. Below that, the native 1G is sufficient.
|
# ¿ Mar 27, 2017 19:02 |
|
fletcher posted:I think that's the culprit. Loads of OutOfMemoryError entries in /usr/local/crashplan/log/service.log.0 It didn't; it's just the way that line reports. What it's saying is that it got through 10% of syncing blocks with the server. Fix the memory through the java mx command and in 10 hours or so it'll catch up. Note: AFAIK you need to use the console, not any config file.
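If I recall the console syntax right (worth double-checking against CrashPlan's own docs), bumping the heap from the app's hidden command-line console looks like this; 4096 is just an example value in MB:

```
java mx 4096, restart
```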
|
# ¿ Apr 8, 2017 07:03 |
|
I've got 6TB usable left. When I fill it, I'm adding a second group of 6 8TB drives.
|
# ¿ Apr 10, 2017 01:12 |
|
EconOutlines posted:It could be just a weird thing on my end, but I've had some funky issues with SpiderOak backup and sync on its own. Their app seems overly clunky and slow as well. I think suggesting Dropbox and a 30GB backup set in the packrats thread isn't a reasonable comparison. My CrashPlan console says 2,531,854 files / 11.4 TB, and I'm probably on the low end around here.
|
# ¿ Apr 19, 2017 20:56 |
|
DrDork posted:It's a perfectly reasonable solution for probably 98% of people, the same way that a cheap franken-box is probably a reasonable solution for a lot more people than is a custom-built ZFS/Xeon/fiber-channel rack-mount. No reason not to mention rational solutions from time to time. 98% of people don't belong in this thread, though. The 98% wouldn't think of a home NAS to begin with; at most they'd do a WD Book or something. That does remind me, though: I think in the next year I'm going to add another 6-8 drives and move to a rackmount case. Any thread recommendations for something that will let me hot-swap 15-16 drives?
|
# ¿ Apr 20, 2017 16:38 |
|
Dr. Poz posted:I'm by no means putting SpiderOak through its paces with my usage, but I've been using it for a little under two years. The only issues with the software I've ever had have been when I wanted to run it on Arch Linux, and even then I installed it by converting a Debian package to an AUR, so I didn't exactly take the "happy path." My main reason for using them is their focus on privacy. I have recommended it to friends and coworkers and would happily continue to do so. I've never used CrashPlan or any of the other major providers, though, so my experience is limited. What is the focus on privacy? CrashPlan encrypts the backups by default, has an option to add a second password (not your account password) to access the default key on the server, and finally lets you supply your own key that's never sent to the server if you want the max security (with, obviously, no recovery if you lose said key). I'm approaching it from the opposite end: I'm about to hit year 3 with my CrashPlan subscription and haven't used others. I may give the free tier a try, but it seems the feature set is less than CrashPlan's for some things I use at least once a month. IE: grab a random file from an offline computer and view it on my phone; browse deleted files from last year and choose a directory to restore, since I didn't think I'd want to watch that TV show but changed my mind; etc...
|
# ¿ Apr 20, 2017 19:35 |
|
Thermopyle posted:Are the SpiderOak clients open source? I don't think CrashPlan clients are, so you're just placing your faith in them that they've implemented encryption like they say they have. Looks like the SpiderOak desktop client isn't open source. And for CrashPlan, it's Java... I'm sure you could crack the JAR and see that they're calling the JDK's AES routines just like you're supposed to. I'm equally sure there's someone out there who has.
|
# ¿ Apr 20, 2017 22:27 |
|
Saukkis posted:How are security updates to the libraries handled with jails? If I understood your symlink remark correctly, the jailed application will still use the OS libraries, so that takes care of security updates. But if all the libraries come from the OS, then what's the point of jailing? And if the jail has its own copy of the libraries, does that mean that whenever there is a security update to any of the included libraries, it's necessary to release a security update to your application? I have the same concern with Ubuntu Snap, and I wasn't able to figure out how they address it with a cursory look. Docker can be included in this too. There's not much difference with Docker. You start your Dockerfile with FROM ubuntu:latest and then do a docker pull before building your container. Bam, latest security fixes applied.
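A minimal sketch of that rebuild flow (image tag and app name are just examples):

```shell
# Dockerfile in the current directory starts with:
#   FROM ubuntu:latest

docker pull ubuntu:latest          # fetch the freshest patched base image
docker build --no-cache -t myapp . # rebuild so every layer sits on the new base
```

The point being that the jail-style "stale libraries" problem turns into a rebuild, not a manual patch.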
|
# ¿ May 25, 2017 19:02 |
|
insularis posted:Seriously, do this. I just migrated all my jail stuff from FreeNAS (by which I mean, started over) to a new power-sipping Xeon D-1541 box with a single NVMe drive, running ESXi with the SSD for the VMs, which each use FreeNAS for bulk storage if they need it. Holy poo poo, it is night and day how much better life is. Stability, updating, configuration, management, performance... all through the roof compared to jails. How much did that run you? I'm considering a second box for a Docker host. I want to run Docker on bare metal with the maximum RAM, and I don't see a good way to do that with my existing FreeNAS setup.
|
# ¿ May 26, 2017 00:51 |
|
I'm currently thinking of installing Backblaze on a Windows Server 2016 VM on the FreeNAS machine and just exposing the volumes over SMB. Any downside to doing that? I had the CrashPlan family account for like 4 years and don't really want to drastically increase the cost of backup, but I do want the convenience of a full restore if needed. B2 estimated at something like $850 per year just for the NAS.
|
# ¿ Aug 23, 2017 18:49 |