The Gunslinger
Jul 24, 2004

Do not forget the face of your father.
Fun Shoe
I'm using Duplicati 2 on my clients (Windows/etc) pointed at a Minio server (Ubuntu VM running on unRAID) and I replicate that offsite once a month.

For my "media" files I just have a shell script that dumps a list of the folders in the shares once in awhile, I don't care if I lose that stuff as its easily replaced later.

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
I noticed an update notification for NZB Hydra, which I run as a Docker container. I don't have to do anything there, right? That's something Docker takes care of automatically?

IOwnCalculus
Apr 2, 2003

Depends on how your container is configured. The ones I use fall into one of three categories.

1) Restart the container and it updates as part of booting up (plex)
2) Use the web interface of the containerized app to update it (sonarr)
3) Grab the latest container and re-deploy it (see the sketch below).
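
For case 3 the usual dance is pull, remove, re-create; settings survive because they live in the mapped config volume. A sketch for NZB Hydra (the linuxserver image name, the 5076 port, and the paths are assumptions, so check your own container's setup):

code:
# Pull the newer image, then re-create the container from it.
# Image name, container name, port, and volume path are placeholders.
docker pull linuxserver/nzbhydra2
docker stop nzbhydra && docker rm nzbhydra
docker run -d --name nzbhydra \
  -p 5076:5076 \
  -v /path/to/config:/config \
  linuxserver/nzbhydra2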

Bonobos
Jan 26, 2004
So assuming I want a basic file server (like FreeNAS, Linux with ZFS, or unRAID), is 8GB of RAM enough for 4x 8TB drives? Would doubling to 16GB make a noticeable difference in performance?

derk
Sep 24, 2004
8 will do you good, but if you can afford the 16 now, go for it, or just wait and upgrade to 16 later.

Hughlander
May 11, 2005

Thermopyle posted:

Sorry, I just meant crashplan-esque solutions, not specifically crashplan.

I tentatively plan on sticking with Crashplan Small Business because I just back up my NAS now, since my PCs back up to my NAS. This makes CP $10/month, which is actually less than the $149/year I was paying for the old Crashplan.

I might switch over to B2 at some point, but I've been happy-ish with Crashplan, and I already have it set up and my restore procedures tested with them.

What are you using to back up to the NAS? I just added a lot more storage to my NAS and only have 6 months left of CrashPlan, so I'm starting to think about what changes I need to make. I really use CrashPlan's restore-from-any-point-in-time, as I run into my own stupidity regularly, like: "Oh hey, my .gitconfig was deleted sometime in the last 3 months and I just noticed that my custom aliases are gone!"

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Hughlander posted:

What are you using to back up to the NAS? I just added a lot more storage to my NAS and only have 6 months left of CrashPlan, so I'm starting to think about what changes I need to make. I really use CrashPlan's restore-from-any-point-in-time, as I run into my own stupidity regularly, like: "Oh hey, my .gitconfig was deleted sometime in the last 3 months and I just noticed that my custom aliases are gone!"

I just recently switched to Windows File History, but I haven't yet done an in-depth analysis of how well backing that folder up to Crashplan works when it comes to restoring, and I'm not super excited about "nesting" my backups like that. Though, it's nice that File History backups aren't completely opaque data blobs to Crashplan because the File History destination isn't some proprietary thing...it's just a mirror of your folder structure with all your files ever. I'm not exactly sure yet how it handles different versions of the same file in this scheme...it doesn't look like it's got the different versions living side by side with incrementing file names, so I need to look into that more.

FWIW, crashplan doesn't even have to get involved in the scenario you described...you just use File History to go back in time.

dox
Mar 4, 2006

sharkytm posted:

...which actively tries to gently caress up NAS installs, or at least makes zero effort to support them. There's not a great solution, sadly.

I struggled with CrashPlan on my Synology for a while, but then set up a CrashPlan docker container and have had no problems since. It works very well and I'd highly recommend it. Let me know if you have any questions.

redeyes
Sep 14, 2002

by Fluffdaddy

quote:

I'm not exactly sure yet how it handles different versions of the same file in this scheme...it doesn't look like it's got the different versions living side by side with incrementing file names, so I need to look into that more.

All it does is add another version of a modified file and append the backup date and time, in UTC or something. You can grab a file directly out of the backup and rename it by removing the appended date if you want.

Ziploc
Sep 19, 2006
MX-5
So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb.

I have a PrimaryFreeNAS box and a BackupFreeNAS box. I set up rsync in the GUI to nightly back up the PrimaryFreeNAS to the BackupFreeNAS.

I used this guide: http://thesolving.com/storage/how-to-sync-two-freenas-storage-using-rsync/

Did some quick tests with smaller files. Everything was working great. Files were moved to Backup, and deleted when they were removed from Primary. Didn't pay attention to speed too much.

Threw ~1TB at it, set rsync to run at 4am, and went to bed.

I noticed that network utilization was ~130Mbit/s, which is fairly miserable. This is transferring video files that are over 40GB each, so it isn't a small-file problem.

I notice people complain about rsync speeds, but none seem to complain about it being this bad. If I use CIFS and drag and drop between the two on my Windows box I get a bit over 50MB/s, which makes sense as the data has to come and go through the Windows machine. You would think a direct rsync between the two machines would be much faster. Both machines can be written to and read from at ~100MB/s from my Windows box, so it isn't a link problem. They're on the same switch.

Any ideas?

Hughlander
May 11, 2005

dox posted:

I struggled with CrashPlan on my Synology for a while, but then set up a CrashPlan docker container and have had no problems since. It works very well and I'd highly recommend it. Let me know if you have any questions.

I used that one, but it had a problem where every time CrashPlan updated you had to jump through GUI hoops to reset the memory usage. I use gfjardim/crashplan, which has a built-in NoVNC server, so you just point a web browser at it for the UI, and it never resets the memory.
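
For anyone trying it, launching the container looks roughly like this. The 8080 web port and the volume paths are assumptions from memory, so verify against the image's docs:

code:
# Rough sketch of running gfjardim/crashplan with its NoVNC web UI.
# Port number and volume paths are assumptions; check the image docs.
docker run -d --name crashplan \
  -p 8080:8080 \
  -v /path/to/config:/config \
  -v /mnt/user:/storage:ro \
  gfjardim/crashplan
# then point a web browser at http://<host>:8080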

Hughlander
May 11, 2005

Ziploc posted:

So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb.

I have a PrimaryFreeNAS box and a BackupFreeNAS box. I set up rsync in the GUI to nightly back up the PrimaryFreeNAS to the BackupFreeNAS.

I used this guide: http://thesolving.com/storage/how-to-sync-two-freenas-storage-using-rsync/

Did some quick tests with smaller files. Everything was working great. Files were moved to Backup, and deleted when they were removed from Primary. Didn't pay attention to speed too much.

Threw ~1TB at it, set rsync to run at 4am, and went to bed.

I noticed that network utilization was ~130Mbit/s, which is fairly miserable. This is transferring video files that are over 40GB each, so it isn't a small-file problem.

I notice people complain about rsync speeds, but none seem to complain about it being this bad. If I use CIFS and drag and drop between the two on my Windows box I get a bit over 50MB/s, which makes sense as the data has to come and go through the Windows machine. You would think a direct rsync between the two machines would be much faster. Both machines can be written to and read from at ~100MB/s from my Windows box, so it isn't a link problem. They're on the same switch.

Any ideas?

One comment: don't use rsync at all. Use zfs send and zfs receive, which transfer snapshots instead. I just used them to move 16TB from one pool to another.
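
The basic shape of it, with pool and dataset names made up for illustration:

code:
# One-time full replication of a snapshot (names are placeholders).
zfs snapshot tank/media@nightly-1
zfs send tank/media@nightly-1 | ssh backupfreenas.local zfs receive -F backup/media

# Later runs send only the blocks changed since the previous snapshot.
zfs snapshot tank/media@nightly-2
zfs send -i tank/media@nightly-1 tank/media@nightly-2 | ssh backupfreenas.local zfs receive backup/media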

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

redeyes posted:

All it does is add another version of a modified file and append the backup date and time, in UTC or something. You can grab a file directly out of the backup and rename it by removing the appended date if you want.

Ahh, so that's not too bad when it comes to Crashplan backing up that folder. I mean, obviously (I guess?) it'd be better to have Crashplan versioning instead, but this seems OK for now.

redeyes
Sep 14, 2002

by Fluffdaddy

Thermopyle posted:

Ahh, so that's not too bad when it comes to Crashplan backing up that folder. I mean, obviously (I guess?) it'd be better to have Crashplan versioning instead, but this seems OK for now.

It actually should work perfectly and easily with whatever online backup system you want. Just the fact that it only adds a few files at a time (based on what you modify) would seem to work great with incremental online stuff.

Ziploc
Sep 19, 2006
MX-5
Hmm. Ok. I read some things.

Mainly this: http://doc.freenas.org/9.10/storage.html#replication-tasks

Sounds like a worthwhile solution. I'll give it a try.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

redeyes posted:

It actually should work perfectly and easily with whatever online backup system you want. Just the fact that it only adds a few files at a time (based on what you modify) would seem to work great with incremental online stuff.

Yeah, definitely. I was just saying that Crashplan already has a file versioning system that you can filter by date/time, and this isn't integrated with that.

Ziploc
Sep 19, 2006
MX-5
Something I haven't been easily able to google:

What happens when a snapshot is created in the middle of a large file being written to the server? Do I just get a snapshot of a half-transferred file?

eames
May 9, 2009

Crossposting from the Intel thread: if you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000-based variant, please make sure you back up your config.

Lots of customers are reporting dead units due to the Intel Atom bug: https://forum.synology.com/enu/viewtopic.php?t=127839 or search Twitter.
Mine started randomly shutting down almost exactly two years after purchase. It looked like a faulty PSU, so I replaced it with a different appliance, and now the Synology unit won't power up at all. The expected RMA turnaround time is over three weeks.

more info: https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/

Internet Explorer
Jun 1, 2005

eames posted:

Crossposting from the Intel thread: if you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000-based variant, please make sure you back up your config.

Lots of customers are reporting dead units due to the Intel Atom bug: https://forum.synology.com/enu/viewtopic.php?t=127839 or search Twitter.
Mine started randomly shutting down almost exactly two years after purchase. It looked like a faulty PSU, so I replaced it with a different appliance, and now the Synology unit won't power up at all. The expected RMA turnaround time is over three weeks.

more info: https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/

God damnit.

Thanks for posting this.

IOwnCalculus
Apr 2, 2003

Hughlander posted:

I used that one, but it had a problem where every time CrashPlan updated you had to jump through GUI hoops to reset the memory usage. I use gfjardim/crashplan, which has a built-in NoVNC server, so you just point a web browser at it for the UI, and it never resets the memory.

Seconding this. I don't know what dark magic this container uses, but it's the only Crashplan solution I've had that doesn't reset the max RAM value every time it updates.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Hughlander posted:

I used that one, but it had a problem where every time CrashPlan updated you had to jump through GUI hoops to reset the memory usage. I use gfjardim/crashplan, which has a built-in NoVNC server, so you just point a web browser at it for the UI, and it never resets the memory.

Well hell, that NoVNC thing is cool. I get tired of setting up an SSH tunnel every time I want to admin Crashplan on my server.

Guess I'll be setting that image up...

Ziploc
Sep 19, 2006
MX-5

eames posted:

Crossposting from the Intel thread: if you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000-based variant, please make sure you back up your config.

Lots of customers are reporting dead units due to the Intel Atom bug: https://forum.synology.com/enu/viewtopic.php?t=127839 or search Twitter.
Mine started randomly shutting down almost exactly two years after purchase. It looked like a faulty PSU, so I replaced it with a different appliance, and now the Synology unit won't power up at all. The expected RMA turnaround time is over three weeks.

more info: https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/

We also had this failure two weeks ago.

Hughlander
May 11, 2005

Thermopyle posted:

Well hell, that NoVNC thing is cool. I get tired of setting up an SSH tunnel every time I want to admin Crashplan on my server.

Guess I'll be setting that image up...

NoVNC is cool in general. I set up a letsencrypt nginx reverse proxy for everything and made/published a NoVNC container that lets you point it at any machine on the network. Something like 6 of my running containers are X+Firefox+NoVNC pointing at some other container/machine.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I have two RS2416RP+ at work holding some camera footage (bought with budget fluff money). Looking forward to those making GBS threads the bed.

Will they do an RMA before they actually die, or do I get to wait to panic?

Internet Explorer
Jun 1, 2005

Moey posted:

I have two RS2416RP+ at work holding some camera footage (bought with budget fluff money). Looking forward to those making GBS threads the bed.

Will they do an RMA before they actually die, or do I get to wait to panic?

Reading the tail end of that thread linked, it looks like they'll do it proactively.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Internet Explorer posted:

Reading the tail end of that thread linked, it looks like they'll do it proactively.

Yeeehaw.

I'll give it a shot and post some results.

eames
May 9, 2009

Perhaps they're just waiting to see how bad the numbers really are before issuing a voluntary recall?

There's a board fix that involves soldering a simple resistor to two header pins on the mainboard. My understanding is that this basically slows down or nearly stops the decay of the chip, but it only works while the CPU isn't damaged yet. Accepting RMAs for working units would allow them to apply that fix and send the units out as refurbished replacements.

This post has pictures of the resistor:

https://forum.synology.com/enu/viewtopic.php?f=106&t=127839&start=660#p505505

AFAIK all 1517+ units shipping now still have the same boards with the same affected CPU stepping (B0), just with the one extra resistor on the board. What a mess; I feel sorry for Synology. It seems like they're not even allowed to talk about it.

eames fucked around with this message at 21:18 on Jan 24, 2018

Zorak of Michigan
Jun 10, 2006

Ziploc posted:

Something I haven't been easily able to google:

What happens when a snapshot is created in the middle of a large file being written to the server? Do I just get a snapshot of a half-transferred file?

My understanding is that yes, this is exactly what happens. Blocks that change after the snapshot are not part of that snapshot, even if they're appended to an existing file.
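
It's easy to see for yourself; the dataset name and mount path here are placeholders:

code:
# Start a long write, snapshot mid-flight, then compare the two copies.
dd if=/dev/urandom of=/mnt/tank/test/bigfile bs=1M count=2000 &
sleep 5
zfs snapshot tank/test@midwrite
wait
ls -l /mnt/tank/test/bigfile /mnt/tank/test/.zfs/snapshot/midwrite/bigfile
# the copy inside the snapshot will be shorter than the finished file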

DizzyBum
Apr 16, 2007

So I've got a bunch of old SCSI HDDs hanging around that I wouldn't mind putting to use for home storage (enough to build a decently-sized RAID), but I'm also aware that SCSI is pretty drat old at this point. Is it even worth trying to find something that these drives will work in, or would I be better off trashing them?

My initial thought is that I shouldn't even bother because I'm going to be in trouble if I run out of spare drives to swap. At the same time, I'm already planning on building something within the next year or two using current technology and this is more-or-less intended as a stopgap so I can move my media off my desktop/gaming PC drives.

Thanks Ants
May 21, 2004

#essereFerrari

Actual SCSI or SAS? What sort of capacities are these disks?

I'm 99% sure you should launch them into the trash.

redeyes
Sep 14, 2002

by Fluffdaddy
I just hammer-smashed some Ultra 320 SCSI 18GB 15k drives. It actually made me sad. Back in the day I wanted some of them.

Ziploc
Sep 19, 2006
MX-5

Ziploc posted:

So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb.

I have a PrimaryFreeNAS box and a BackupFreeNAS box. I set up rsync in the GUI to nightly back up the PrimaryFreeNAS to the BackupFreeNAS.

I used this guide: http://thesolving.com/storage/how-to-sync-two-freenas-storage-using-rsync/

Did some quick tests with smaller files. Everything was working great. Files were moved to Backup, and deleted when they were removed from Primary. Didn't pay attention to speed too much.

Threw ~1TB at it, set rsync to run at 4am, and went to bed.

I noticed that network utilization was ~130Mbit/s, which is fairly miserable. This is transferring video files that are over 40GB each, so it isn't a small-file problem.

I notice people complain about rsync speeds, but none seem to complain about it being this bad. If I use CIFS and drag and drop between the two on my Windows box I get a bit over 50MB/s, which makes sense as the data has to come and go through the Windows machine. You would think a direct rsync between the two machines would be much faster. Both machines can be written to and read from at ~100MB/s from my Windows box, so it isn't a link problem. They're on the same switch.

Any ideas?

Lol. I'm a dumbass. I expanded the graph to include when it actually started, and the rsync speeds were much more respectable. But it turns out rsync didn't like that one of my 40GB files was not finished transferring when rsync started, and it's been stuck on the same loving file all loving day.

zfs snapshot send/receive it is!

SamDabbers
May 26, 2003

If both NASes are on the same LAN, you can increase the speed of your zfs send/recv by piping it through nc instead of ssh, which avoids all the encryption overhead.
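
A sketch of the nc variant; the port and dataset names are placeholders, and the stream is unencrypted, so only do this on a trusted LAN:

code:
# On the receiving box, listen on an arbitrary port:
nc -l 9090 | zfs receive -F backup/media

# On the sending box, pipe the snapshot stream at that port:
zfs send tank/media@nightly-2 | nc backupfreenas.local 9090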

Anime Schoolgirl
Nov 28, 2002

redeyes posted:

I just hammer-smashed some Ultra 320 SCSI 18GB 15k drives. It actually made me sad. Back in the day I wanted some of them.
you didn't disassemble them to get at the toy platters smh

Romulux
Mar 17, 2004

E V O L V E D
Yo when are these fuckin WD Easy Stores gonna go back on sale? Or does anyone have an extra they'd sell me?

I need at least 4TB right now, but I can wait a bit if it means I can get an 8TB for a decent price. I saw that they just went on sale at Best Buy on the 8th of this month, but they're back to regular price now. Anyone know how often they drop them back down? The last drop was last month, but that was for the holidays, so I'm hoping it's sooner rather than later.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.

Anime Schoolgirl posted:

you didn't disassemble them to get at the toy platters smh

And magnets! You can get some crazy powerful magnets out of old server drives

thiazi
Sep 27, 2002

Romulux posted:

Yo when are these fuckin WD Easy Stores gonna go back on sale? Or does anyone have an extra they'd sell me?

I need at least 4TB right now, but I can wait a bit if it means I can get an 8TB for a decent price. I saw that they just went on sale at Best Buy on the 8th of this month, but they're back to regular price now. Anyone know how often they drop them back down? The last drop was last month, but that was for the holidays, so I'm hoping it's sooner rather than later.

I have 2x 6TB recertified Reds for sale in SA-Mart. Not easystore rips, but cheaper than retail if you're interested...

Ziploc
Sep 19, 2006
MX-5
So I think I set up the replication task properly? It's definitely running. So that's nice.

I was hoping to have a single level dataset on my destination.

In my head, my storage managers would end up like this:

Sender
PrimaryVolume
-PrimaryVolume

Receiver
BackupVolume
-BackupVolume
-PrimaryVolume

Instead the receiver looks like this.
BackupVolume
-BackupVolume
--PrimaryVolume
---PrimaryVolume

Is this... normal? I attempted to not have so many child datasets by setting the remote dataset to "BackupVolume/PrimaryVolume".

The documentation doesn't really indicate what the dataset structure should look like when completed, so I really can't tell what the "Remote ZFS Volume/Dataset" setting actually does in the replication task.

EDIT: Oh. The FreeNAS 11 documentation shows an example where you don't identify a dataset, just a volume. Testing that.

Ziploc fucked around with this message at 21:20 on Jan 25, 2018

Ziploc
Sep 19, 2006
MX-5
One quirk I don't quite understand at the moment.

I have two servers with the following hostnames:

primaryfreenas.local
backupfreenas.local

I have primary making a snapshot every night with a 4am to 5am start window.
I have primary doing a replication task to backup with a 3am to 6am start window.

I seem to be getting these errors periodically. This one came shortly after 3am.

"Replication PrimaryVolume -> backupfreenas.local:BackupVolume failed: Failed: ssh: Could not resolve hostname backupfreenas.local: hostname nor servname provided, or not known"

They're sitting on the same LAN. Everything goes back to normal like 10 minutes later, and when it comes time to do the replication, which typically happens just after the snapshot is done, it completes successfully.

I haven't found much about this while googling. Not sure if this is due to the way I have my start windows set up or what.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

Ziploc posted:

One quirk I don't quite understand at the moment.

I have two servers with the following hostnames:

primaryfreenas.local
backupfreenas.local

I have primary making a snapshot every night with a 4am to 5am start window.
I have primary doing a replication task to backup with a 3am to 6am start window.

I seem to be getting these errors periodically. This one came shortly after 3am.

"Replication PrimaryVolume -> backupfreenas.local:BackupVolume failed: Failed: ssh: Could not resolve hostname backupfreenas.local: hostname nor servname provided, or not known"

They're sitting on the same LAN. Everything goes back to normal like 10 minutes later, and when it comes time to do the replication, which typically happens just after the snapshot is done, it completes successfully.

I haven't found much about this while googling. Not sure if this is due to the way I have my start windows set up or what.

To avoid dragging you down into a rabbit hole of DNS, PTR records, and your router to get that fixed, just change the task to use each other's direct IP address.
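
Alternatively, if you'd rather keep the hostnames, pin them in the hosts file on each box so resolution never touches mDNS or DNS. The addresses below are placeholders, and FreeNAS may regenerate /etc/hosts on reboot, so prefer the GUI's host name database field if it offers one:

code:
# Pin both names locally (addresses are placeholders).
echo "192.168.1.10 primaryfreenas.local" >> /etc/hosts
echo "192.168.1.11 backupfreenas.local" >> /etc/hosts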
