H110Hawk
Dec 28, 2006

FISHMANPET posted:

Does anyone offer anything like Crashplan's family plan? Backing up to my server isn't a huge deal, but it was nice to throw my mom's and wife's computers on my plan without having to decide if it was worth $50 a year or not to back up a couple gigabytes of data.

This is pretty much me as well. Locally I need something that could preferably run directly on my Synology. Remotely, I love getting that email from Crashplan saying my parents' and in-laws' computers are all checked in and at 100%. I'm not going to cobble together a series of cron jobs, alerting, and bullshit to back up my dad's Windows computer.


FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Yeah the greatest thing in the world is I threw my mom's computer into my Crashplan account and I just don't have to think about it anymore. This was after I spent a couple hours with a USB-SATA adapter pulling files off of old laptops of hers that didn't boot anymore.

Thanks Ants
May 21, 2004

#essereFerrari


If you have a bunch of disk space on a Synology NAS then their Cloud Station Backup product might do the trick:

https://www.synology.com/en-uk/dsm/feature/desktop_backup

From there you can push the data to S3/Azure/[url=https://www.backblaze.com/blog/synology-cloud-backup-guide/]Backblaze[/url] etc.

Proteus Jones
Feb 28, 2013



Personally, I think I'm looking at using Glacier with the Synology Glacier client. (Another option is Google's Coldline.)

Between my Time Machine backups and home lab VMs, I'm looking at close to 3 to 4 TB of data for the initial load. (I also have all my Blu-ray rips for Plex, but I can always re-rip those.) And since this is really only for catastrophic recovery, I'm cool with non-instant access to data. There may be occasional spikes every few months as I reconfigure the lab around different projects, but really only relatively small deltas from week to week.

I'd really like to use B2, but I've heard that using the Synology Cloudsync client is not a good idea for backups, and HyperBackup is not B2 aware.

Are there any other services that would be analogous? I'd really like to avoid any PC/Mac clients and stick with clients that run off the Synology. All local data ends up on the Synology anyhow, so that's where I'd like to run the offsite backups from if possible.

Thanks Ants
May 21, 2004

#essereFerrari


Azure Cool Blob is priced close to Glacier but without the retrieval caveats.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Thanks Ants posted:

Azure Cool Blob is priced close to Glacier but without the retrieval caveats.

I had to look up if that's the actual product name. "Cool Blob" is a lot worse than Glacier or Coldline.

Thanks Ants
May 21, 2004

#essereFerrari


Hughlander
May 11, 2005

I'm currently thinking of installing Backblaze on a Windows Server 2016 VM on the FreeNAS machine and just exposing the volumes over SMB. Any downside to doing that? I had the Crashplan family account for like 4 years and don't really want to drastically increase the cost of backup, but I do want the convenience of a full restore if needed. B2 was estimated at something like $850 per year just for the NAS.

Thanks Ants
May 21, 2004

#essereFerrari


If you can get them to mount and work then you might as well go for it, but https://help.backblaze.com/hc/en-us/articles/217665478-Why-don-t-you-backup-network-drives-NAS-drives-

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
....but you can run SMB native on FreeNAS? Why double layer?

IOwnCalculus
Apr 2, 2003





I think he's talking about going the other direction - run the backblaze client inside a Windows VM that mounts the shared data via SMB. The problem is that most of these apps, intentionally or not, won't back up network shares inside of Windows.

Crashplan on Linux, on the other hand, couldn't tell the difference even if it wanted to.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
If somebody wants to get fancy and experiment, you could try mounting your data via SMB to a drive letter, then creating an NTFS symlink to that on a real drive, then pointing the Backblaze client at THAT to see whether it can resolve across the link without complaining.

lurksion
Mar 21, 2013
Even though it's no longer relevant after next year, something interesting about Crashplan and symlinks: though it refuses to follow symlinks, you can fake it out by setting things up with a normal folder structure and then swapping in a symlink at a higher level.

i.e.
1) set up real folder /backups/folder/ and point Crashplan at it
2) delete /backups/folder/
3) symlink /backups/ (NOT /backups/folder/ )
4) Crashplan happily backs up what is now a symlinked /backups/folder/

At least on Linux.

So Backblaze might be able to be finessed in the same manner if it's uncooperative.
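The steps above can be sketched as a throwaway shell session. All the paths here are made up and Crashplan itself isn't involved; the point is just that step 4 works because anything opening the original path follows the link transparently:

```shell
#!/bin/sh
set -e
WORK=$(mktemp -d)

# 1) set up a real folder structure and (in real life) point Crashplan at it
mkdir -p "$WORK/backups/folder"

# the data actually lives somewhere else entirely
mkdir -p "$WORK/real/folder"
echo "important data" > "$WORK/real/folder/file.txt"

# 2) + 3) delete the real parent, then symlink /backups itself, NOT /backups/folder
rm -rf "$WORK/backups"
ln -s "$WORK/real" "$WORK/backups"

# 4) reads through the original path still resolve, now via the symlink
cat "$WORK/backups/folder/file.txt"   # prints: important data
```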

caberham
Mar 18, 2009

by Smythe
Grimey Drawer
What's a good provider for backing up photos? Amazon Glacier looks good as some sort of absolute long-term disaster-recovery vault, but what if I want some interim service?

Internet Explorer
Jun 1, 2005





caberham posted:

What's a good provider for backing up photos? Amazon Glacier looks good as some sort of absolute long-term disaster-recovery vault, but what if I want some interim service?

Have you looked at Google Photos or Apple Photos?

caberham
Mar 18, 2009

by Smythe
Grimey Drawer
I did, and synchronization between those programs and Lightroom is a major hassle.

Google Photos seems to be alright though

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I want to use Google Photos but I've got 20 years of pictures organized into folders, and the stupid desktop sync program discards information about the folder name. Why can't they automatically create albums from my folders??

Hughlander
May 11, 2005

IOwnCalculus posted:

I think he's talking about going the other direction - run the backblaze client inside a Windows VM that mounts the shared data via SMB. The problem is that most of these apps, intentionally or not, won't back up network shares inside of Windows.

Crashplan on Linux, on the other hand, couldn't tell the difference even if it wanted to.

Correct, and I'm on Crashplan on Linux now, but not sure I want to jump to their small business. I did their family plan and have 8 machines backed up including 2 VPSes in the cloud. I'm just looking at options now. It looks like the change from Crashplan home to Small Business will also lose all backup history which is pretty bullshit IMO. And is the main reason I'd rather jump than take a 5x price increase for 1/10th the service.

Yaoi Gagarin
Feb 20, 2014

Maybe you can make a symbolic link to a share instead of mapping the drive?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

VostokProgram posted:

Maybe you can make a symbolic link to a share instead of mapping the drive?

Another vote for symbolic links just for the fact they were created to aid in Unix migrations and compatibility.

Just be careful about the flags when making those links.
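For the record, the flag distinction being warned about (paths hypothetical; cmd, elevated prompt): a plain `mklink` makes a file symbolic link, `/D` a directory symbolic link, and `/J` a junction. Only symbolic links can target a UNC path; junctions are local-volume only, which is the trap:

```
:: directory symbolic link pointing straight at the share -- a UNC target is allowed
mklink /D C:\NasData \\mynas\volume1\data

:: junction -- a UNC target is NOT allowed here; this is the flag to avoid
mklink /J C:\NasData \\mynas\volume1\data
```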

thetzar
Apr 22, 2001
Fallen Rib
I'm a proud owner of two DS916+s, and I need some advice. My intention is to have these mirror each other, and once they're synced, have one local and the other remote, serving as backups and persistent data access for me.

I am: A photographer and designer with just over 4 terabytes of important data on my Mac Pro. The data 'lives' locally there, and is backed up to a single Time Machine drive and Crashplan (for now, lol).

My plan: to use the Diskstation(s) as a backup target. However, I was thinking that instead of using Time Machine to do this work, I'd rather have the data structured more regularly on the DiskStation, and therefore more easily accessible remotely or from other machines if ever needed. So I'd just set up a regular share on the DS and push sync data to it from the Mac. I would love to have versioning.

So the obvious way to go about this seems to be to set up Cloud Station. I'd make a share on the DS called 'Photography' and push my 'Photography' folder from my local drive up to it using Cloud Station Drive. I'd then use the Cloud Station/Sync functionality to sync the two DiskStations together. HOWEVER, I've heard disparaging things about the capabilities of Cloud Station Drive, especially when it's handling a -lot- of files, which this would be. So I've considered instead using a software package called ChronoSync on the Mac to push updates to the Synology over AFP or SMB.

My questions:

- If I enable Cloud Station on the Photography share, will the DS record versioning on the files within it, even if they're placed there via Chronosync over SMB or AFP, instead of via Cloud Station Drive?

- Am I being stupid, and should I just nut up and use Cloud Station Drive?

- ChronoSync has its own versioning-like "Archive" functionality, but it would seem stupid to use it if the DS is handling that, right?

- Is there anything else I need to know about how any of this would work?

Thank you all!

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
I'm toying with the idea of setting up a Linux box as an iSCSI target and then pointing an ESXi host at it. If I later expand the storage array in the Linux box, will the ESXi host (or other iSCSI client) pick up that there's additional space?

Internet Explorer
Jun 1, 2005





Not automatically but it is easy to do. Look up vmfs expand datastore.

Steakandchips
Apr 30, 2009

So if I have 2 Synology NASes, and I have the exact same stuff on both, on the same LAN, but then I take 1 of them far away and link them together via the internet, what do I need to do to make them continuously synced?

i.e. if I write something on one of them, I want it to be available on the other.

sync should be bi-directional.

What is the keyword I am looking for? I don't think synology cloud-sync does this, as it is a master/client relationship, i.e. 1 directional.

rsync?

Proteus Jones
Feb 28, 2013



Steakandchips posted:

So if I have 2 Synology NASes, and I have the exact same stuff on both, on the same LAN, but then I take 1 of them far away and link them together via the internet, what do I need to do to make them continuously synced?

i.e. if I write something on one of them, I want it to be available on the other.

sync should be bi-directional.

What is the keyword I am looking for? I don't think synology cloud-sync does this, as it is a master/client relationship, i.e. 1 directional.

rsync?

Sorry I misread. (removed all kinds of bad advice). I need to check something real quick and I'll update.

According to the Synology KB, CloudSync will do bidirectional sync. It's a setting in the wizard when you set it up.

quote:

Sync direction: Select whether you want the sync to be Bidirectional, Download local changes only, or Upload local changes only.
https://www.synology.com/en-us/knowledgebase/DSM/help/CloudSync/cloudsync

Proteus Jones fucked around with this message at 16:15 on Aug 26, 2017

Thanks Ants
May 21, 2004

#essereFerrari


Bidirectional sync is infinitely more difficult since it involves conflict resolution.

This package claims to do it https://www.synology.com/en-us/knowledgebase/DSM/help/CloudStationClient/DScloudclient

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Steakandchips posted:

So if I have 2 Synology NASes, and I have the exact same stuff on both, on the same LAN, but then I take 1 of them far away and link them together via the internet, what do I need to do to make them continuously synced?

i.e. if I write something on one of them, I want it to be available on the other.

sync should be bi-directional.

What is the keyword I am looking for? I don't think synology cloud-sync does this, as it is a master/client relationship, i.e. 1 directional.

rsync?

Unison https://www.cis.upenn.edu/~bcpierce/unison/ is built to keep two computers updated with the latest files. It uses the rsync algorithm under the hood for the diffs and transmission handling, but runs it in both directions so each computer updates the other.


Demo directions: https://www.howtoforge.com/tutorial/unison-file-sync-between-two-servers-on-debian-jessie/
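For the two-Synology scenario in question, a minimal Unison profile might look like this (host name and share paths are made up; `prefer` is only one of several conflict-resolution choices):

```
# ~/.unison/nas.prf -- invoke as: unison nas
root = /volume1/shared
root = ssh://far-away-nas//volume1/shared

batch = true       # no interactive prompting
auto = true        # accept the non-conflicting default actions
prefer = newer     # on a real conflict, keep the most recently changed copy
```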

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Just be aware that Unison can spaz out very badly when you do a large number of file changes in a short period, or when you change very large files. One of my coworkers uploaded a multiple gig tar file to one of our boxes and ran it completely out of RAM because Unison decided to try to open and read the whole drat thing.

G-Prime fucked around with this message at 16:44 on Aug 26, 2017

Tamba
Apr 5, 2010

Bittorrent Sync (now called Resilio Sync) should work on Synology as well.

e2: wait... it's not free anymore? Never mind then.

Tamba fucked around with this message at 16:40 on Aug 26, 2017

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

G-Prime posted:

Just be aware that Unison can spaz out very badly when you do a large number of file changes in a short period, or when you change very large files. One of my coworkers uploaded a multiple gig tar file to one of our boxes and ran it completely out of RAM because Unison decided to try to open and read the whole drat thing.

That's strange. That's not how rsync works, and if a program using rsync is not doing block hashing then it's missing one of the main reasons rsync is a great tool.

gently caress unison then.

thetzar
Apr 22, 2001
Fallen Rib
CloudStation ShareSync is probably what you want; it'll keep the data on the two units the same. I'll use either it or HyperBackup to keep my two Diskstations aligned, depending on just how 'offline' I decide I want the offsite one to be.

I'm also trying out Resilio Sync to push my local drives to my local Diskstation. I'm not happy with the performance of Cloud Station Drive on my Mac, so I'm giving Resilio a shot. It's still free if you don't want the advanced features. Trial and error have shown me that as long as CloudStation is enabled for the share you want to use on the Diskstation, it will version the files no matter how they get there.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

EVIL Gibson posted:

That's strange. That's not how rsync works, and if a program using rsync is not doing block hashing then it's missing one of the main reasons rsync is a great tool.

gently caress unison then.

How do you think rsync computes the hashes for each block of a file? It has to read the whole file in, and do the number crunching.

Whenever a program reads data from a file, by default Linux caches it. Applications can take some care to avoid this caching, but last I checked rsync doesn't do any of that. (I did really check into that; we use rsync to move shitloads of data around at work and had some resource utilization issues as a result.)

So it's normal that rsyncing a shitload of stuff will chew through all your RAM. On the destination, too - both ends have to compute hashes. Rsync's original design goal was to minimize the amount of data exchange across slow network links rather than the amount of CPU/memory required to do the sync.

Much of the RAM use is relatively benign, in that it's just non-dirty file cache so Linux can drop it on the floor to free up memory as needed. Unfortunately the kernel's memory management isn't always perfect under memory pressure and this kind of thing can lead to needless swapping rather than the desired result.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
A while ago (probably a few weeks) I got a message to upgrade my ZFS pool on CentOS. I think I asked on here and then upgraded it.

Now that I've reinstalled CentOS (same version, latest) and tried to import my pool, I get this message:

code:
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        org.zfsonlinux:userobj_accounting (User/Group object accounting.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
 config:
I've checked that I'm running the latest version of ZFSonLinux available for my system and that I'm using the same method as last time (kabi kmod instead of DKMS).

So I was prompted to upgrade my pool to include a feature that's not even supported by the current ZFS version??

apropos man fucked around with this message at 21:09 on Aug 26, 2017

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BobHoward posted:

How do you think rsync computes the hashes for each block of a file? It has to read the whole file in, and do the number crunching.



The way it was worded, I thought it was opening files and pulling metadata out, like "this doc file was last opened by so-and-so", which makes even less sense if it's really doing block-by-block hashing and not a god drat entire-file hash.

If it is transferring the entire file each time it changes, it is not using rsync, period.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

I've checked that I'm running the latest version of ZFSonLinux available for my system and that I'm using the same method as last time (kabi kmod instead of DKMS).

So I was prompted to upgrade my pool to include a feature that's not even supported by the current ZFS version??

You were compiling ZFS from source earlier when you were trying to get your system set up, did you happen to compile a version ahead of the package that's available for CentOS?

The first thing I'd try is grabbing the source for 0.7.1 and compiling that. It looks like that feature can't be disabled, but if that fixes it you'll eventually be able to go back to the package version once they update it.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Desuwa posted:

You were compiling ZFS from source earlier when you were trying to get your system set up, did you happen to compile a version ahead of the package that's available for CentOS?

The first thing I'd try is grabbing the source for 0.7.1 and compiling that. It looks like that feature can't be disabled, but if that fixes it you'll eventually be able to go back to the package version once they update it.

I think I messed around with compiling from source, then I got pissed off, reinstalled CentOS, and went with a vanilla installation of ZFS from there. I may be wrong, because I messed around with the whole setup a lot before I got some kind of sanity.

Before I went to bed I rsync'ed the whole (read only) dataset to a spare drive. I think I'll just destroy the pool and recreate it. I'm actually gonna lose my media dataset but I'm not too bothered about my Plex stuff.

I'm tempted to try FreeBSD as the host and use bhyve to run Linux guests, so that I have the superior native ZFS implementation. But I'm a noob to BSD. Gonna spend a few hours on this today. Will FreeBSD/bhyve play nicely running Linux guests?

As an aside, the 960 Evo is very nice but I haven't noticed incredible speed differences from the 850 Evo yet. But then, I've not had the chance to run my VMs properly yet.

e:

actually, I mounted the 850 to copy a 35GB qcow VM image to the 960 last night. I was getting a consistent ~200 MB/s transfer speed using rsync, with full disk encryption on both drives. :waycool:

apropos man fucked around with this message at 08:06 on Aug 27, 2017

Furism
Feb 21, 2006

Live long and headbang
I run NAS4Free but want to move to Linux (probably CentOS) because I'm more used to that OS. But I'm concerned about moving my ZFS volume built by FreeBSD to Linux. Apparently it depends on the zpool version? I tried to get it but it seems unset or something:

code:
remontoire: /mnt# zpool get version pool1
NAME   PROPERTY  VALUE    SOURCE
pool1  version   -        default
It just shows "-" instead of what I hoped would be "5000" (which seems to be the current FreeBSD and Linux zpool version, according to Wikipedia).

How do I figure this out?

Furism
Feb 21, 2006

Live long and headbang
And another question. I'm looking for recommendations for rackable cases for a NAS. Ideally 1U, but 2U would be okay. Should support mini ITX m/b, 4x 3.5" hot-swappable trays, at least 2x2.5" internal drives and a couple of PCI extension cards. Ideally would fit a regular ATX PSU or comes with an appropriate one. The Chenbro RM 14204 seemed like a good fit, any feedback on that brand?

IOwnCalculus
Apr 2, 2003





Furism posted:

I run NAS4Free but want to move to Linux (probably CentOS) because I'm more used to that OS. But I'm concerned about moving my ZFS volume built by FreeBSD to Linux. Apparently it depends on the zpool version? I tried to get it but it seems unset or something:

code:
remontoire: /mnt# zpool get version pool1
NAME   PROPERTY  VALUE    SOURCE
pool1  version   -        default
It just shows "-" instead of what I hoped would be "5000" (which seems to be the current FreeBSD and Linux zpool version, according to Wikipedia).

How do I figure this out?

You need to use feature flags. ZFS moved away from depending on versions a while ago, because version numbers couldn't be trusted between all of the contributors to the ZFS projects.

With that said, I don't think NAS4Free is ahead of FreeNAS on feature flags. There is at least one BSD feature flag that is not supported on ZOL, but it relates to kernel dumps and shouldn't actually be in use on a storage pool. I directly imported a pool from FreeNAS to Ubuntu 16.04 without any trouble beyond making sure to import disks by-id instead of /dev/sdX.
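The by-id import in that last paragraph is worth spelling out, since a pool imported via plain /dev/sdX names can get confused when drive lettering shuffles on reboot. A sketch of the sequence (the pool name "tank" is hypothetical):

```shell
# on the old system: cleanly export the pool first
zpool export tank

# on the new system: scan for importable pools using stable device names...
zpool import -d /dev/disk/by-id

# ...then import; the vdev paths recorded will be the by-id ones
zpool import -d /dev/disk/by-id tank
zpool status tank
```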


apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
It was the "userobj_accounting" flag that I've recently had trouble with. A new install of CentOS couldn't deal with it and I tried Ubuntu (since Ubuntu comes with a ZFS licence) but Ubuntu couldn't deal with it either.

It was suggested during a 'zpool status' command that I could upgrade my pool a few weeks ago.

My solution has been to rsync everything important off my ZFS dataset onto a spare drive, destroy the dataset and create a new one. And never upgrade my zpool again. Progress is bad, mkay.
