Charles
May 9, 2004

zoom-zoom


Toilet Rascal

EVIL Gibson posted:

I didn't know there was a compatibility list. I think that list might just be the products they've actually, physically confirmed to work.

My synology has a random external dock I use as the cache drive for sonarr/radarr and it is not slow at all.

The compatibility list is like a decade old or something.
https://www.synology.com/en-global/...lter_size=-&p=1
Anyway, I found an eSATA enclosure I had (didn't even know it had that port, but yup, it's there); now it at least shows the drive. Yay. I thought USB Mass Storage was a generic thing, especially on USB 3.0 devices. Geez.


Moey
Oct 22, 2010

I LIKE TO MOVE IT


Ended up snagging 4x 12TB drives and a new-to-me 9240-8i on eBay.

My 2TB free was starting to make me nervous.

Now holding off for the new Ryzen CPUs.

D. Ebdrup
Mar 13, 2009



For what it's worth, ZFS recovers remarkably well if you fill the pool to capacity and then remove what you filled it with before deleting anything else. I did that once, and I'm still able to saturate 1000BASE-T over Cat6a with UDP-based NFS.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


Uhhh... is Crashplan too good to be true? I just signed up for the 30-day trial and it's unlimited data for $10/mo. I just downloaded a Docker container on my Unraid, pointed it at the account, and am currently uploading shares to it. Has anyone here actually backed up their whole array, including ISOs? At some point do they come and bother you about using up too much of the 'unlimited' space? Or is there some other drawback, like what happens when you try to get your data back out?

I may just end up keeping my not-easily replaceable items there which is far less space, but wowza, seems too good to be true.

Heners_UK
Jun 1, 2002


I've used CP for about 8 years with one major restore (and that was because restoring was easier than rebuilding the drive array). I like it.

Edit: irreplaceable or difficult to replace stuff only.

Heners_UK fucked around with this message at 21:16 on Oct 15, 2020

H110Hawk
Dec 28, 2006


TraderStav posted:

Uhhh... is Crashplan too good to be true? I just signed up for the 30-day trial and it's unlimited data for $10/mo. I just downloaded a Docker container on my Unraid, pointed it at the account, and am currently uploading shares to it. Has anyone here actually backed up their whole array, including ISOs? At some point do they come and bother you about using up too much of the 'unlimited' space? Or is there some other drawback, like what happens when you try to get your data back out?

I may just end up keeping my not-easily replaceable items there which is far less space, but wowza, seems too good to be true.

I last used it when they had a consumer play, and the drawbacks were that it was a monstrous Java application with super slow upload speeds. Have they fixed at least the second half of that?

IOwnCalculus
Apr 2, 2003





I used Crashplan for years but bailed after the last price increase because:


H110Hawk posted:

I last used it when they had a consumer play, and the drawbacks were that it was a monstrous Java application with super slow upload speeds.

The "business" version was the exact same software - I kept up with it for a while until the price increase finally came through.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

H110Hawk posted:

I last used it when they had a consumer play, and the drawbacks were that it was a monstrous Java application with super slow upload speeds. Have they fixed at least the second half of that?

You can get moderately ok sorta speeds if you increase the threading on it (accessible in the Java app). It's still not great, though. I have symmetric 1Gb FIOS and normally was struggling to top ~150Mbps up.

But it's $10/mo, and when they say unlimited they really mean it. The only other catch is the normal home version won't let you back up network drives or other computers, so if you wanted to do that you gotta jump through some hoops.

H110Hawk
Dec 28, 2006


DrDork posted:

You can get moderately ok sorta speeds if you increase the threading on it (accessible in the Java app). It's still not great, though. I have symmetric 1Gb FIOS and normally was struggling to top ~150Mbps up.

So, "no".

I use Backblaze and it's the same hoops to do a network drive; I just use B2 for my NAS and the desktop client for my remotely supported family.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


DrDork posted:

You can get moderately ok sorta speeds if you increase the threading on it (accessible in the Java app). It's still not great, though. I have symmetric 1Gb FIOS and normally was struggling to top ~150Mbps up.

But it's $10/mo, and when they say unlimited they really mean it. The only other catch is the normal home version won't let you back up network drives or other computers, so if you wanted to do that you gotta jump through some hoops.

Thanks, I adjusted that in the web UI on their website, which seems to have more options than the Docker container on my Unraid. Increased the allowed CPU usage to 90% for both away and present. Is that the threading you're referring to?

If I'm doing my math right, it's currently estimating that I'm uploading about 14.7GB/hour, so that's roughly 4.1MB/s?
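(Sanity-checking that conversion with a one-liner; the 14.7GB/hour figure is the estimate above, taken as decimal GB:)

```shell
# GB/hour to MB/s, decimal units: 14.7 GB/h = 14,700 MB over 3600 s.
awk 'BEGIN { printf "%.2f MB/s\n", 14.7 * 1000 / 3600 }'
```

which comes out closer to 4.1MB/s than to 3.9.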

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


H110Hawk posted:

So, "no".

I use Backblaze and it's the same hoops to do a network drive; I just use B2 for my NAS and the desktop client for my remotely supported family.

Is there a Backblaze plan that's somewhat competitive with Crashplan? I've seen the $5/TB/month plan. If I wait a few days/weeks I can get 10-20TB onto CP (oh god not that acronym).

I guess this is the point of the 30-day trial.
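(Back-of-envelope on that, using the $5/TB/month figure quoted above against CrashPlan's flat $10/month:)

```shell
# Monthly cost at a $5/TB/month metered rate, for 10TB and 20TB.
awk 'BEGIN {
  rate = 5                       # dollars per TB per month
  for (tb = 10; tb <= 20; tb += 10)
    printf "%dTB: $%d/month\n", tb, tb * rate
}'
```

So at 10-20TB the flat $10 plan wins on price by a wide margin.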

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

TraderStav posted:

Uhhh... is Crashplan too good to be true? I just signed up for the 30-day trial and it's unlimited data for $10/mo. I just downloaded a docker on my UnRaid and pointed at the account and am currently uploading shares to it. Has anyone here actually backed up their whole array, including ISOs? At some point do they come and bother you about using up too much of the 'unlimited' space? Or is there some other drawback like what happens if you go get the data out of them?

I may just end up keeping my not-easily replaceable items there which is far less space, but wowza, seems too good to be true.


I've been using them for years. Started with the personal account on a Windows system, got migrated to the business account for the low cost of twice the price when they discontinued the service, migrated that data over to Unraid and successfully adopted the backup into the docker container there so I didn't have to re-upload.

The downsides are the slowish uploads (if your daily change rate is fairly low, it's mostly an issue on the initial seed) and a poo poo-tacular client that just gobbles CPU/RAM when it's working - especially once you get into double-digit TBs of backups. I've not personally had any issues with restores or download speed, although I haven't attempted a restore larger than 300GB or so.

I'm at around 20TB protected currently and never heard a peep out of them - it's just worked. Basically, if you can find versioned backups of anywhere near that much data for $10/month, I've never seen it. I haven't been tracking it, but at least at one point you could get unlimited cloud data with Google business accounts for a similar price point, but the software options to leverage them for backups, when I briefly looked, were a disaster if you care about versioning. Possible that something has changed since then.

One tip: use your own encryption key and not their generic master key. If you don't do it from the start, you have to restart your backup from scratch to change it. Start it off correctly (and back that sucker up in a few places that don't rely on your server, in case the poo poo hits the fan).
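If you'd rather generate the key yourself than trust a client-side generator, any long random string works; a sketch using OpenSSL (the length here is an arbitrary choice, not anything CrashPlan requires):

```shell
# 48 random bytes, base64-encoded (64 characters), as a key
# candidate; redirect to a file and stash copies off the server.
openssl rand -base64 48
```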

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


Fancy_Lad posted:

I've been using them for years. Started with the personal account on a Windows system, got migrated to the business account for the low cost of twice the price when they discontinued the service, migrated that data over to Unraid and successfully adopted the backup into the docker container there so I didn't have to re-upload.

The downsides are the slowish uploads (if your daily change rate is fairly low, it's mostly an issue on the initial seed) and a poo poo-tacular client that just gobbles CPU/RAM when it's working - especially once you get into double-digit TBs of backups. I've not personally had any issues with restores or download speed, although I haven't attempted a restore larger than 300GB or so.

I'm at around 20TB protected currently and never heard a peep out of them - it's just worked. Basically, if you can find versioned backups of anywhere near that much data for $10/month, I've never seen it. I haven't been tracking it, but at least at one point you could get unlimited cloud data with Google business accounts for a similar price point, but the software options to leverage them for backups, when I briefly looked, were a disaster if you care about versioning. Possible that something has changed since then.

One tip: use your own encryption key and not their generic master key. If you don't do it from the start, you have to restart your backup from scratch to change it. Start it off correctly (and back that sucker up in a few places that don't rely on your server, in case the poo poo hits the fan).

Great idea on the encryption. I'm so early in my upload, it's a good time to do that. What's the best place to generate a key? I have 1Password, is it possible in there? Looks like I import a .key file and away I go?

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


TraderStav posted:

Great idea on the encryption. I'm so early in my upload, it's a good time to do that. What's the best place to generate a key? I have 1Password, is it possible in there? Looks like I import a .key file and away I go?



That may have been easier than I thought. I just hit the generate key button, used the randomly generated long string, and saved it in my 1Password. Will find a few other places to store it.

If there's anything I didn't do correctly there, please let me know!

H110Hawk
Dec 28, 2006


TraderStav posted:

Is there a Backblaze plan that's somewhat competitive with Crashplan? I've seen the $5/TB/month plan. If I wait a few days/weeks I can get 10-20TB onto CP (oh god not that acronym).

I guess this is the point of the 30-day trial.

Their desktop client is flat rate; I have 5 licenses for that. B2 is pay-per-byte. I pay around $15/month for B2 but really need to prune out some stuff. I also don't back up my Linux ISOs.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

TraderStav posted:

That may have been easier than I thought. I just hit the generate key button, used the randomly generated long string, and saved it in my 1Password. Will find a few other places to store it.

If there's anything I didn't do correctly there, please let me know!

It's been years since I did it, and I don't think the client offered an option to generate one back then, but I can't imagine that won't work.

Get a few things backed up, log in to Crashplan's website, and perform a recovery on something that's backed up using your new key. As long as the backup opens I'd think you are good to go, but might as well give the web recovery a short test while you are in there!

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!

Grimey Drawer

Fancy_Lad posted:

I've been using them for years. Started with the personal account on a Windows system, got migrated to the business account for the low cost of twice the price when they discontinued the service, migrated that data over to Unraid and successfully adopted the backup into the docker container there so I didn't have to re-upload.

The downsides are the slowish uploads (if your daily change rate is fairly low, it's mostly an issue on the initial seed) and a poo poo-tacular client that just gobbles CPU/RAM when it's working - especially once you get into double-digit TBs of backups. I've not personally had any issues with restores or download speed, although I haven't attempted a restore larger than 300GB or so.

I'm at around 20TB protected currently and never heard a peep out of them - it's just worked. Basically, if you can find versioned backups of anywhere near that much data for $10/month, I've never seen it. I haven't been tracking it, but at least at one point you could get unlimited cloud data with Google business accounts for a similar price point, but the software options to leverage them for backups, when I briefly looked, were a disaster if you care about versioning. Possible that something has changed since then.

One tip: use your own encryption key and not their generic master key. If you don't do it from the start, you have to restart your backup from scratch to change it. Start it off correctly (and back that sucker up in a few places that don't rely on your server, in case the poo poo hits the fan).

Sounds like I will end up migrating there when Google finally cuts me off of unlimited storage for $12/month.

On another topic, I just received a used 3.2 TB Samsung PCIe NVMe drive that appears to be in very good condition. First plan of action is to finally organize my large photo collection. I will be editing on a macOS VM hosted on Unraid, which I will move to the new drive and expand to 2 TB. This task has been nagging at me for years, but it's no fun going through a crapton of photos hosted on a NAS HDD.

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down


Fancy_Lad posted:

It's been years since I did it, and I don't think the client offered an option to generate one back then, but I can't imagine that won't work.

Get a few things backed up, log in to Crashplan's website, and perform a recovery on something that's backed up using your new key. As long as the backup opens I'd think you are good to go, but might as well give the web recovery a short test while you are in there!

It worked! Thank you. It's going to take a month or two to get everything (not including Linux ISOs) backed up, but it'll be worth it to have a disaster plan in place. I'll have backed up my list of Linux ISOs, so I can rebuild that collection separately in that event.

Hughlander
May 11, 2005
Probation
Can't post for 17 hours!


I probably should have asked before I pulled the trigger but...

Anyone have experience with an NVMe L2ARC over large datasets of spinning rust? I'm about to add a 1TB NVMe for 120TB of 5400RPM storage, and I think it should just be a matter of partitioning the NVMe in the ratio of the two pools I care about:
zpool add pool1 cache /dev/nvme-partition-1
zpool add pool2 cache /dev/nvme-partition-2

And then the part I'm not 100% sure of is that I think I want:
zfs set secondarycache=metadata pool1
zfs set secondarycache=metadata pool2

And that will be inherited by all datasets under them. The main use case is that there are millions of files across the pools, and backup software that wants to stat() each and every one of them takes hours to do so.

D. Ebdrup
Mar 13, 2009



L2ARC, like the ARC, ZIL, and SLOG, is pool-wide.
The only device you can add to a pool that isn't pool-wide is an allocation class device, which uses the 'special' keyword, carries a per-dataset property, and serves as a metadata store - which is exactly what you want.
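A minimal sketch of what adding one looks like in OpenZFS; 'tank' and the device paths are placeholders, and you'd want the special vdev mirrored, since losing it loses the pool:

```shell
# Add a mirrored allocation-class ('special') vdev for metadata.
zpool add tank special mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# Optionally also store small file blocks there, per dataset.
zfs set special_small_blocks=32K tank/some-dataset
```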

Another reason to avoid L2ARC for your use case is that you need a shitload of memory: mapping LBAs in memory, which is required for L2ARC, takes between 70 and 330 bytes per LBA depending on the ZFS implementation.
Assuming your drive has ~2000000000 LBAs, that's between 130 and 615GiB of memory used for mapping L2ARC, which will come out of the memory that can be used for ARC (which is much faster in terms of both bandwidth and access time, even compared to NVMe).
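(The arithmetic behind those numbers, taking the 70-330 bytes/LBA range and ~2e9 LBAs from above, with results in GiB:)

```shell
# L2ARC mapping overhead: bytes-per-LBA times LBA count, in GiB.
awk 'BEGIN {
  lbas = 2000000000
  for (b = 70; b <= 330; b += 260)
    printf "%d B/LBA -> %.0f GiB\n", b, lbas * b / (1024 ^ 3)
}'
```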

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Yeah, if this is in production, you're FAR better off just throwing as much memory at the problem as your platform can support. If it's a homelab, 'grats, that's a lot of TBs in your garage.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!


Switchblade Switcharoo

I don't often replace drives in my file server, but I always see write behavior that starts out super slow (~3MB/s) for a good while before the resilver ramps up to full speed (~150MB/s), and I've always wanted to visualize the data.



This is a resilver of a 10TB drive; it took up the end of Saturday, all of Sunday, and then finished at 5am on Monday.

The strangest thing, and the part I'd like to understand, is the start, where it spends about 4-5 hours at 3MB/s before the sudden jump up to full speed.

Below is the code I used to extract the speeds over time. It doesn't stop itself, but it won't write anything once it no longer finds 'scanned' in zpool status.

code:
#/bin/bash

while :
 do
     zpool status 2>&1 | grep scanned | ts |  tee  -a /root/sliver-progress.log
     sleep 900
 done

Hughlander
May 11, 2005
Probation
Can't post for 17 hours!


D. Ebdrup posted:

L2ARC, like the ARC, ZIL, and SLOG, is pool-wide.
The only device you can add to a pool that isn't pool-wide is an allocation class device, which uses the 'special' keyword, carries a per-dataset property, and serves as a metadata store - which is exactly what you want.

Another reason to avoid L2ARC for your use case is that you need a shitload of memory: mapping LBAs in memory, which is required for L2ARC, takes between 70 and 330 bytes per LBA depending on the ZFS implementation.
Assuming your drive has ~2000000000 LBAs, that's between 130 and 615GiB of memory used for mapping L2ARC, which will come out of the memory that can be used for ARC (which is much faster in terms of both bandwidth and access time, even compared to NVMe).

OK, so the secondarycache=metadata isn't considered L2ARC?

Also, I honestly think that, given the near-100% random access of the pools, ARC wouldn't help at all, but having NVMe storage of the equivalent of all the inode data would.

Memory is already maxed out. I could maybe see getting a new motherboard, but I don't really want to.

D. Ebdrup
Mar 13, 2009



Hughlander posted:

OK, so the secondarycache=metadata isn't considered L2ARC?

Also, I honestly think that, given the near-100% random access of the pools, ARC wouldn't help at all, but having NVMe storage of the equivalent of all the inode data would.

Memory is already maxed out. I could maybe see getting a new motherboard, but I don't really want to.

No, secondarycache=metadata does put only metadata in the L2ARC, but every single one of the LBAs still needs to be mapped in order to allocate them.

The thing is, if you're stat()ing files that often, that data is going to be in ARC already, and it won't move to L2ARC unless it's evicted by data that's some combination of more frequently or more recently used, since that's how the ARC is architected.

ARC and L2ARC are exactly the same for the purposes of caching reads; it's just that, because non-volatile flash density has outgrown volatile memory density (and comes in bigger form factors), L2ARC can be bigger - at the cost of taking away memory that could be used by ARC.

So to do what you want, you'd have to figure out the exact amount of metadata you'll be storing on the NVMe drive and then create a partition of that size.

Hughlander
May 11, 2005
Probation
Can't post for 17 hours!


D. Ebdrup posted:

No, secondarycache=metadata does put only metadata in the L2ARC, but every single one of the LBAs still needs to be mapped in order to allocate them.

The thing is, if you're stat()ing files that often, that data is going to be in ARC already, and it won't move to L2ARC unless it's evicted by data that's some combination of more frequently or more recently used, since that's how the ARC is architected.

ARC and L2ARC are exactly the same for the purposes of caching reads; it's just that, because non-volatile flash density has outgrown volatile memory density (and comes in bigger form factors), L2ARC can be bigger - at the cost of taking away memory that could be used by ARC.

So to do what you want, you'd have to figure out the exact amount of metadata you'll be storing on the NVMe drive and then create a partition of that size.

"often" is each of them once every 8-9 hours, among other random reads/writes, and with...
df -i | cut -c 44-51 | awk '{total+=$1;}END{print total;}'
22227479
22 million files, I believe they would be evicted after that many hours.
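(Aside: cutting fixed character columns out of df is fragile across output widths; a sketch that sums the IUsed field by position instead, assuming GNU df's Filesystem/Inodes/IUsed/IFree column order:)

```shell
# Sum the IUsed column ($3) across filesystems, skipping the header.
df -i | awk 'NR > 1 { total += $3 } END { print total }'
```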

Are there any guides for calculating the metadata size needed? I've looked and found https://github.com/openzfs/zfs/issu...mment-495811601 but it doesn't give the command lines that were used to arrive at those numbers.

Thanks for being patient btw, this seems to be a bit of a corner that's hard to shine a light into.

D. Ebdrup
Mar 13, 2009



Hughlander posted:

"often" is each of them once every 8-9 hours, among other random reads/writes, and with...
df -i | cut -c 44-51 | awk '{total+=$1;}END{print total;}'
22227479
22 million files, I believe they would be evicted after that many hours.

Are there any guides for calculating the metadata size needed? I've looked and found https://github.com/openzfs/zfs/issu...mment-495811601 but it doesn't give the command lines that were used to arrive at those numbers.

Thanks for being patient btw, this seems to be a bit of a corner that's hard to shine a light into.

I don't remember exactly, but zdb -b or zdb -O - either used multiple times or with -v to increase verbosity.
Read the fine man(ual) page.

D. Ebdrup fucked around with this message at 21:54 on Oct 19, 2020

bolind
Jun 19, 2005



Pillbug

EVIL Gibson posted:

code:
#/bin/bash

Just a nitpick: this line does nothing; it's treated as a comment. What you probably meant was:

code:
#!/bin/bash
The reason it works anyway is that, without a valid shebang, the invoking shell falls back to running the file as a script itself (with bash or /bin/sh), which here happens to be compatible.
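And while we're at it, a sketch of the same monitor loop with the shebang fixed and a stop condition, so it exits once no 'scanned' line remains (untested against a live resilver; ts is from moreutils, as in the original):

```shell
#!/bin/bash
# Log resilver progress every 15 minutes; exit when zpool status
# no longer reports a 'scanned' line.
while zpool status 2>&1 | grep -q scanned; do
    zpool status 2>&1 | grep scanned | ts | tee -a /root/sliver-progress.log
    sleep 900
done
```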

Anyway. For work, I'm looking for a prosumer NAS with solid throughput. 10Gbit Ethernet preferred, but USB 3.2 or other solutions would be interesting too. It's acceptable for the NAS to be directly connected to a machine, so it doesn't have to be Ethernet.

Here's the kicker: I need somewhere around 100TB after setting up a RAID5 or 6 of some flavor.

I found some offerings from QNAP and Synology, which seem to be the only ones offering in excess of four-disk capacity.


Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!

Grimey Drawer

bolind posted:

Just a nitpick: this line does nothing; it's treated as a comment. What you probably meant was:

code:
#!/bin/bash
The reason it works anyway is that, without a valid shebang, the invoking shell falls back to running the file as a script itself (with bash or /bin/sh), which here happens to be compatible.

Anyway. For work, I'm looking for a prosumer NAS with solid throughput. 10Gbit Ethernet preferred, but USB 3.2 or other solutions would be interesting too. It's acceptable for the NAS to be directly connected to a machine, so it doesn't have to be Ethernet.

Here's the kicker: I need somewhere around 100TB after setting up a RAID5 or 6 of some flavor.

I found some offerings from QNAP and Synology, which seem to be the only ones offering in excess of four-disk capacity.

You could look at the Synology RAID calculator site... I think you can put in all your disks and it will tell you which model supports that much.
