Raymond T. Racing
Jun 11, 2019

Hughlander posted:

I mailed them on Friday. They said they're out of stock, not discontinued, and they'll get more stock at the end of June.

Did they happen to mention fixing the design issues with the new 15 bay? Where you can't actually fit all the drives in the cage because of spacing issues.


Sneeze Party
Apr 26, 2002

These are, by far, the most brilliant photographs that I have ever seen, and you are a GOD AMONG MEN.
Toilet Rascal
Can anybody point me to a decent guide on how to get Grafana up and running on a Synology NAS within Docker? I'm of moderate skill level. I just migrated from a ds215j --> ds218+, and reconfigured my whole media management scheme into Docker and it went pretty smoothly. However, when I look at Grafana guides, I get a little confused... so I guess I'm looking for something as simple and straightforward as possible. If that's possible. Which I don't know if it is.

Hadlock
Nov 9, 2004

Grafana is just the graphing front end. You still need a monitoring back end, like Prometheus, to provide data for the graphs.

Boot up the Grafana container, expose port 3000, and probably assign it some storage so that it can persist the graphs you build between restarts.

You'll need a Prometheus container; I think you need to expose port 9090, and you'll want to give it between 500MB and 4GB of disk to start.

In the Grafana setup UI, add the Prometheus IP:9090 to your list of data sources.

Setting up prom/graf is dead easy if you can open the correct ports. There's no auth out of the box, and all the presets are very sane up until you get to about 1TB worth of data or want to preserve data longer than 30 days.

Edit: try installing it locally and see how it works before you install it on the Synology NAS.

But yeah, it's just two containers. For Grafana, all the config is in the UI; I think you have to get the initial password from the first boot logs. Prometheus will take a config file, but doesn't need one out of the box. Just give each one a small disk and you'll be good. Grafana needs less than 50MB of disk, maybe less than 15MB. Very plug and play.
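A minimal sketch of that two-container setup on the command line (the image names and ports are the stock Docker Hub ones; the /volume1/docker paths are just example Synology-style locations):

```shell
# Prometheus: expose 9090 and give it a small persistent data dir
docker run -d --name prometheus \
  -p 9090:9090 \
  -v /volume1/docker/prometheus:/prometheus \
  prom/prometheus

# Grafana: expose 3000, persist /var/lib/grafana so dashboards survive restarts
docker run -d --name grafana \
  -p 3000:3000 \
  -v /volume1/docker/grafana:/var/lib/grafana \
  grafana/grafana
```

Then browse to http://NAS-IP:3000, log in, and add http://NAS-IP:9090 as a Prometheus data source.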

Hadlock fucked around with this message at 20:15 on May 25, 2020

BlankSystemDaemon
Mar 13, 2009



HalloKitty posted:

I have the drives in my PC suspended fully with elastic, old school style. It works like nothing else, and made an enormous difference over the standard grommet/long screw combo. So quiet.

corgski
Feb 6, 2007

Silly goose, you're here forever.

That has to be the gooniest sex dungeon I've ever seen.

BlankSystemDaemon
Mar 13, 2009



If your OS has an Internet SuperServer, you can use it to spawn a Prometheus data collector when a connection is opened.
Here's how FreeBSD does it for sysctls using inetd.

EDIT: At some point Microsoft coopted the term and its initialism to mean a http daemon, but it's a really useful little thing.

BlankSystemDaemon fucked around with this message at 20:49 on May 25, 2020

Sneeze Party
Apr 26, 2002

These are, by far, the most brilliant photographs that I have ever seen, and you are a GOD AMONG MEN.
Toilet Rascal

Hadlock posted:

Grafana is just the graphing front end. You still need a monitoring back end, like Prometheus, to provide data for the graphs.

Boot up the Grafana container, expose port 3000, and probably assign it some storage so that it can persist the graphs you build between restarts.

You'll need a Prometheus container; I think you need to expose port 9090, and you'll want to give it between 500MB and 4GB of disk to start.

In the Grafana setup UI, add the Prometheus IP:9090 to your list of data sources.

Setting up prom/graf is dead easy if you can open the correct ports. There's no auth out of the box, and all the presets are very sane up until you get to about 1TB worth of data or want to preserve data longer than 30 days.

Edit: try installing it locally and see how it works before you install it on the Synology NAS.

But yeah, it's just two containers. For Grafana, all the config is in the UI; I think you have to get the initial password from the first boot logs. Prometheus will take a config file, but doesn't need one out of the box. Just give each one a small disk and you'll be good. Grafana needs less than 50MB of disk, maybe less than 15MB. Very plug and play.
This is very informative and helpful, thanks. Where I ran into a snag, and it's kind of a minor snag probably, is that when I launch Grafana in Docker, it asks me for a username and password and I was like wtf? I'll try it again with Prometheus as the back-end.

Edit: after realizing that the default login was admin/admin, that was shockingly easy.

Sneeze Party fucked around with this message at 00:02 on May 26, 2020

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sneeze Party posted:

This is very informative and helpful, thanks. Where I ran into a snag, and it's kind of a minor snag probably, is that when I launch Grafana in Docker, it asks me for a username and password and I was like wtf? I'll try it again with Prometheus as the back-end.

Edit: after realizing that the default login was admin/admin, that was shockingly easy.

Yeah, almost all of these self-hostable things with some sort of web UI will start out with a default username/password like that. Usually you just need to hit up the docs to find out what the defaults are.

BlankSystemDaemon
Mar 13, 2009



Good news, everyone!
ZFS in FreeBSD 12-STABLE (i.e. a very likely candidate for being in 12.2-RELEASE) has gained the ability to do allocation classes.
What are allocation classes, you ask? It's the ability to assign a 'special' vdev, which will be used to store metadata, so only the data blocks are written to the regular disks. Does it sound pointless?
Well, how about adding a couple of striped mirrors of NVMe SSDs as this vdev? Now you can set a per-dataset flag for how small files (or, more accurately, files that fit within a given block size, since blocks are variable) have to be before they're stored on the SSDs, so only the big files go on spinning rust.
It's not really a new type of caching for ZFS; instead, think of it as a kind of QoS. Presumably an even more extensive QoS system can also be built on top of it, with which CAM I/O scheduling can be integrated.

For those keeping track: Yes, this is part of the work that's required to make de-clustered RAID work, which Intel was commissioned to prototype by a very large customer of theirs.
(Yes, it's been in what was ZFSonLinux for a few months, but apparently nobody uses that, because nobody's noticed?)
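For reference, the commands involved look roughly like this (a sketch; the pool name `tank`, the device names, and the 32K threshold are made up for illustration):

```shell
# Add a mirrored pair of NVMe devices as a 'special' allocation-class vdev
zpool add tank special mirror /dev/nvd0 /dev/nvd1

# Per dataset: blocks at or below this size also go to the special vdev,
# so small files land on the SSDs and large files stay on spinning rust
zfs set special_small_blocks=32K tank/mydata
```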

BlankSystemDaemon fucked around with this message at 10:15 on May 26, 2020

Yaoi Gagarin
Feb 20, 2014

D. Ebdrup posted:

Good news, everyone!
ZFS in FreeBSD 12-STABLE (i.e. a very likely candidate for being in 12.2-RELEASE) has gained the ability to do allocation classes.
What are allocation classes, you ask? It's the ability to assign a 'special' vdev, which will be used to store metadata, so only the data blocks are written to the regular disks. Does it sound pointless?
Well, how about adding a couple of striped mirrors of NVMe SSDs as this vdev? Now you can set a per-dataset flag for how small files (or, more accurately, files that fit within a given block size, since blocks are variable) have to be before they're stored on the SSDs, so only the big files go on spinning rust.
It's not really a new type of caching for ZFS; instead, think of it as a kind of QoS. Presumably an even more extensive QoS system can also be built on top of it, with which CAM I/O scheduling can be integrated.

For those keeping track: Yes, this is part of the work that's required to make de-clustered RAID work, which Intel was commissioned to prototype by a very large customer of theirs.
Yes, it's been in what was ZFSonLinux for a few months, but apparently nobody uses that, because nobody's noticed?

Funny you should post this now; I was literally just reading up on allocation classes a few minutes ago, after I heard that the next FreeNAS (which will be named TrueNAS Core) would have some kind of "fusion pool" feature.

And yeah, it's really strange that this feature hasn't gotten more exposure. Given how many people on YouTube and elsewhere I see trying to add SLOG devices thinking they're a general-purpose write cache, I would expect people to jump all over this as a magic IOPS booster.

Anyway, maybe having metadata on an NVMe SSD will make `find` blazing fast? It would be worth it for that alone.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

Funny you should post this now; I was literally just reading up on allocation classes a few minutes ago, after I heard that the next FreeNAS (which will be named TrueNAS Core) would have some kind of "fusion pool" feature.

And yeah, it's really strange that this feature hasn't gotten more exposure. Given how many people on YouTube and elsewhere I see trying to add SLOG devices thinking they're a general-purpose write cache, I would expect people to jump all over this as a magic IOPS booster.

Anyway, maybe having metadata on an NVMe SSD will make `find` blazing fast? It would be worth it for that alone.
Metadata on an NVMe SSD alone will be a huge boon, and if that's all you're storing on there, it won't take up much space.
Imagine buying a couple of small (~50GB?) Optane SSDs, putting two partitions on each, and using them for metadata and synchronous writes, so that every block on rust benefits from sequential I/O.

I found the presentation from 2015:
https://www.youtube.com/watch?v=28fKiTWb2oM

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Operations that require access to the blocks, like du, wouldn't be sped up with allocation classes, would they? That would be pretty handy, since I run it frequently on large files.

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

Operations that require access to the blocks, like du, wouldn't be sped up with allocation classes, would they? That would be pretty handy, since I run it frequently on large files.
Well, when you run du on ZFS, it returns the size of the file plus the metadata associated with that file, and the result is also affected by whether the file is compressed or not. So while the metadata retrieval will be faster, it doesn't really matter, because du is a completely inappropriate tool to use.
With ZFS you get no benefit from using folders over datasets of the 'filesystem' type, and conversely, you get a lot of benefit from using those datasets, such as per-filesystem compression ratios, very easy file sharing (at least with NFS, it's just the sharenfs property), and easy backups.

EDIT: Welp, Allan Jude just confirmed my suspicion that stat() calls would get sped up, but even if that's true, it's still not the right tool.
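If the goal is just space accounting, ZFS can answer directly from its own metadata without walking the file tree at all (the dataset name here is an example):

```shell
# Per-dataset usage straight from ZFS, no du-style file walk needed
zfs list -o name,used,avail,refer,compressratio tank/mydata

# Or query individual properties, including pre- and post-compression sizes
zfs get used,logicalused,compressratio tank/mydata
```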

BlankSystemDaemon fucked around with this message at 17:57 on May 26, 2020

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I think I've figured out my cooling issues with the CS381 case:





Blocked off the side vent (you can just barely see the cardboard covering it on the left side) and put two 92mm slim profile fans on the front, with two 120mm exhaust on the rear. Still need to figure out a better way to pass the fan cables through the front panel and a more permanent way to mount the fans (I was thinking small magnets). I like that the front flap is still able to open and close.

Temps are 47-49°C while running badblocks on all the drives, and down in the 35-37°C range when idle. I think that should be sufficient?

I still have about a week left in my return window, but I'm leaning towards keeping this chassis. I just can't find anything else similar to it, despite all the thermal flaws.

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

I think I've figured out my cooling issues with the CS381 case:





Blocked off the side vent (you can just barely see the cardboard covering it on the left side) and put two 92mm slim profile fans on the front, with two 120mm exhaust on the rear. Still need to figure out a better way to pass the fan cables through the front panel and a more permanent way to mount the fans (I was thinking small magnets). I like that the front flap is still able to open and close.

Temps are 47-49°C while running badblocks on all the drives, and down in the 35-37°C range when idle. I think that should be sufficient?

I still have about a week left in my return window, but I'm leaning towards keeping this chassis. I just can't find anything else similar to it, despite all the thermal flaws.
Noctua make excellent silent fans; they don't make fans for moving air. So one thing that'll help is getting fans with high static pressure: they'll be way noisier, but at least they'll push more air.
If nothing else, it'll at least show why having a fan pull air over the disks from directly behind them, via well-placed through-holes on the PCB, is preferable to having it push air through a grate at the front or on the side.

Selachian
Oct 9, 2012

Can someone who knows absolutely nothing about this subject ask for a bit of advice here?

I work in a small university archive. We have a few odds and ends of digital media -- mostly scanned photos and newspapers, some videos. Right now our digital holdings are mostly stored in a bunch of off-the-shelf hard drives in our main storage vault, or on our shared drive on the university computer system. However, we've had a couple of donors offer large (2-3 TB) video collections, we're increasingly dealing with born-digital materials, and we have a shitload of old VHS tapes taking up way too much space that I'd like to digitize at some point. So I think it's time we need our own storage system, something more secure than the pile-o-hard-drives. Which, as far as I can see, would probably mean investing in a RAID drive, but I'm not really sure how to go about picking one.

I'd be glad for any suggestions I can get.

- Reliability and futureproofing are the main concerns. This is an archive, our goal is to keep this stuff forever.

- Retrieval speed isn't really an issue, except insofar as it makes refreshing/transferring to a new drive sometime in the future more of a drag.

- Our budget is currently in the shitter thanks to Covid, but our dean is very supportive of this idea and I think I can pry the cash loose if it's not too pricey.

- We do have an IT department and will probably discuss this with them, but obvs I'd prefer a solution that makes as little work for them as possible. We don't have anyone particularly techy in the archives itself, but we can at least read manuals and follow instructions.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Selachian posted:

Can someone who knows absolutely nothing about this subject ask for a bit of advice here?

So, to help you out here, I think you really need to figure out the answer to three questions:

(1) How much data do you need to store now? (approx--4TB vs 20TB vs 40TB, for example)
(2) How fast do you expect this to grow? The amount and quality of your VHS transfers may make a big difference here.
(3) How much money do you think you could shake loose? $500 vs $5000 would change things noticeably.

Based on the assumption that you pretty much want a set-and-forget system, you're probably going to be looking at something like a Synology or QNAP pre-made device, rather than rolling your own. As a basic price point, you could get a Synology DS1618+ with 6x10TB shucked drives for about $2,000. Dedicate two drives to parity and you'd get 40TB usable space and be able to lose any two drives and still be ok.

That said, RAID-is-not-a-backup. It just makes it harder for you to lose everything--not impossible; someone can always knock it off the shelf, it gets power surged, etc., and the whole thing dies. If you're planning on storing stuff that doesn't exist anywhere else, or would be very hard to get back if your local copy died, you should also plan for that. Not sure what your university's data policies are, but shoving a copy of everything into something like Glacier or a Backblaze account (assuming you have the bandwidth) would be a cost-effective way to get off-site storage. If your policies don't allow it, factor in buying some external drives to have copies stored somewhere else and updated every now and then, at the very least.
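The capacity math behind that, as a quick sketch (the drive count, parity count, and per-drive size are the ones from the example above):

```python
# Usable space in an N-drive array with P drives' worth of parity
# (RAID-6 / SHR-2 style): (N - P) * per-drive size.
def usable_capacity_tb(drives: int, parity: int, drive_tb: float) -> float:
    return (drives - parity) * drive_tb

# 6x10TB with two-drive redundancy, as in the DS1618+ example:
print(usable_capacity_tb(6, 2, 10))  # 40 usable TB; any two drives can fail
```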

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Doing some future planning. What can I use as an assumption for disk usage per day stored per cam of really high quality security cam footage? I don't mean anything insane, just enough to be useful if I'd actually need it.

I.e: 1 camera, 7 days stored is 1TB or whatever.

My nest cams ain't cutting it.

Wizard of the Deep
Sep 25, 2005

Another productive workday

Selachian posted:

University archive stuff



Archival storage like what you're describing is really a little out of the scope of this thread. But only a little. A lot of the discussions here are closer to passion and learning projects, which the storage needs you're considering absolutely aren't. Here, the stakes are as low as the budget.

You'll need to have your IT team involved from the beginning. For robust, dependable storage, they need to have a seat at the table from the start. Commercial/industrial storage at scale isn't cheap, and it doesn't look like most of the discussions above.

To be clear, you absolutely could build a frankenbox with shucked 12TB WDs. But realize that doing so will bring dishonor on your whole family, on you, and on your cow.

Talk with your IT team(s). Tell them you need big, slow storage. Let them come back with prices, and realize that those prices are probably some of the best available to you. You'll need to handle the conversion (probably with interns/grad students/work study victims), but leave the storage to the team built for dealing with it.

It may make sense to have a smallish NAS to serve as a local "landing pad", before it gets shuffled off to the big arrays. That'd be something like a QNAP or a Synology sitting in your building, before pushing it to the server room. That's the scale this thread can help you with, but your options may be limited by available vendors and procurement requirements.

You wouldn't expect the IT folks to know how handle archiving a literal ton of old manuscripts, would you? In the same way, let the IT team(s) do what they're trained (heh) and paid (heh) to do.

If you absolutely know a retrieval time measured in days is acceptable, consider Amazon's Glacier or Azure Cold Storage tiers. With those, you can move all the headache of hardware straight to the guys who have the biggest scale at the expense of getting the data back quickly.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

TraderStav posted:

Doing some future planning. What can I use as an assumption for disk usage per day stored per cam of really high quality security cam footage? I don't mean anything insane, just enough to be useful if I'd actually need it.

I.e: 1 camera, 7 days stored is 1TB or whatever.

My nest cams ain't cutting it.

Depends entirely on what sort of resolution, framerate, and compression you're using. 720p is probably a decent compromise between "is this useful at all" and file size. 10fps is fine if you don't need much motion clarity, but you'll need 20+ if you want it to be smooth. Compression is wildly different depending on codec--for example, H.264 takes roughly half the space that MPEG-4 video files do.

There are various calculators out there to give you some ballpark ideas. Seagate also has a list: https://www.seagate.com/solutions/surveillance/how-much-video-surveillance-storage-is-enough/
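To put rough numbers on it, a sketch of the bitrate math (the ~2 Mbps figure for 720p H.264 is an illustrative guess, not a measurement):

```python
# Storage per camera per day from an average bitrate, recording 24/7.
def gb_per_day(bitrate_mbps: float, hours: float = 24.0) -> float:
    megabits = bitrate_mbps * hours * 3600
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes (decimal)

print(gb_per_day(2.0))  # 720p H.264 at ~2 Mbps: 21.6 GB/day/camera
print(gb_per_day(4.0))  # MPEG-4 at roughly double the bitrate: 43.2 GB/day
```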

What's wrong with the Nest, anyhow? The hardware specs on it are actually very solid for a security camera.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Wizard of the Deep posted:

Talk with your IT team(s). Tell them you need big, slow storage. Let them come back with prices, and realize that those prices are probably some of the best available to you. You'll need to handle the conversion (probably with interns/grad students/work study victims), but leave the storage to the team built for dealing with it.

While you're absolutely right, having dealt with some educational institutions in the past, asking for a $50,000+ solution plus maintenance usually takes either an internal sponsor with a bunch of pull, or someone with grant money. Absent that, it's the old standby of making things work on shoestring budgets.

But, yeah. If $50k+ wouldn't get you laughed out of the room, everything Wizard said is correct and The Right Way to do it.

Wizard of the Deep
Sep 25, 2005

Another productive workday

DrDork posted:

While you're absolutely right, having dealt with some educational institutions in the past, asking for a $50,000+ solution plus maintenance usually takes either an internal sponsor with a bunch of pull, or someone with grant money. Absent that, it's the old standby of making things work on shoestring budgets.

But, yeah. If $50k+ wouldn't get you laughed out of the room, everything Wizard said is correct and The Right Way to do it.

You're absolutely right. My (poorly-explained) point was to get some cost scope first, then sell it to the internal sponsor/grantor. Starting with the budget instead of the realistic costs will almost certainly result in a budget that's far too small, because consumer storage costs have poisoned the well for dependability at scale.

The other piece of advice is "make friends with at least one person in the IT department", where you can spitball things without having full meetings and agendas and project managers. Y'know, so you can actually get things done.

Wizard of the Deep fucked around with this message at 01:37 on May 27, 2020

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

DrDork posted:

Depends entirely on what sort of resolution, framerate, and compression you're using. 720p is probably a decent compromise between "is this useful at all" and file size. 10fps is fine if you don't need much motion clarity, but you'll need 20+ if you want it to be smooth. Compression is wildly different depending on codec--for example, H.264 takes roughly half the space that MPEG-4 video files do.

There are various calculators out there to give you some ballpark ideas. Seagate also has a list: https://www.seagate.com/solutions/surveillance/how-much-video-surveillance-storage-is-enough/

What's wrong with the Nest, anyhow? The hardware specs on it are actually very solid for a security camera.

Thanks, will check it out. To be fair to Nest, it's about 30% Nest and 70% me wanting to futz around with a new project. I've always liked the idea of having everything be self sufficient and still record when my internet goes down. Now that I recently got 500/50 internet I can reasonably talk back to the mothership to access my footage remotely.

The nest is mostly fine and I probably won't end up replacing them now that I think this through a bit more. I wish I could get right to the file to scrub the video back and forth. My wife got a surprise gift left on our front bistro table and it took forever for us to get close to figuring out who it was. Go back 15 seconds, load... etc. then the quality still wasn't great with the frame that loaded.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Wizard of the Deep posted:

You're absolutely right. My (poorly-explained) point was to get some cost scope first, then sell it to the internal sponsor/grantor. Starting with the budget instead of the realistic costs will almost certainly result in a budget that's far too small, because consumer storage costs have poisoned the well for dependability at scale.

That's a fair point. And there is something to be said about coming to the budget meeting with "Well we could do it The Pro Way for $100k, or we could do it A Good Way for $50k" and having that be far more successful than just starting with the $50k line up front. But presumably he knows how to weasel money out of his uni by now (or has a Director who does).

Wizard of the Deep posted:

The other piece of advice is "make friends with at least one person in the IT department", where you can spitball things without having full meetings and agendas and project managers. Y'know, so you can actually get things done.

Yeah, you're absolutely right about this, regardless of what option he ends up going with.

bizwank
Oct 4, 2002

TraderStav posted:

The nest is mostly fine and I probably won't end up replacing them now that I think this through a bit more. I wish I could get right to the file to scrub the video back and forth. My wife got a surprise gift left on our front bistro table and it took forever for us to get close to figuring out who it was. Go back 15 seconds, load... etc. then the quality still wasn't great with the frame that loaded.
If you didn't know this already, you can draw a new activity zone right where the thing happened, and it will retroactively mark any motion in that zone for the whole video history. Then it's just a matter of rewinding and checking each dot of that zone's color.

Selachian
Oct 9, 2012

Thanks for the input. $50K is... well beyond what we could possibly swing, but I take the point about keeping Tech Services involved from the start.

As for Amazon, while cloud backups are great for supplemental security, as an archivist I am biased toward having a local copy that I can see, handle, and control myself, as opposed to having the materials off on someone else's computers who knows where, and at the mercy of someone else's policies. Yeah, it may seem overly paranoid to worry about Amazon going out of business, but a century ago people would've said the same about Woolworth's.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

You might want to have it on both the cloud and local

Selachian
Oct 9, 2012

taqueso posted:

You might want to have it on both the cloud and local

Yeah, that's the ideal, plus possibly LTO tapes stored off-site for really historically important stuff. Right now, though, my focus is on just acquiring the hardware so we can set up a real system for preserving digital media in-house.

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
What do you use to convert analog media to digital? Specifically something with a composite out?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

8-bit Miniboss posted:

I have the U-NAS where the motherboard fits on the side and that was indeed a tiny fit.


Before the LSI card was installed.

How are the drive temperatures in the U-NAS 8 bay units?

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY :filez:

fletcher posted:

How are the drive temperatures in the U-NAS 8 bay units?

The Noctuas do a good enough job. Before I moved to a new apartment, I had it sitting in the living room and it was averaging 35-36°C. Post-move, the box is in my bedroom for the time being, until I can get it into a more permanent location near my networking gear, and it's closer to 40°C now due to warmer temps in SoCal. One of my drives is a bit higher at the moment because I'm running a BTRFS scrub.

Edit: This is with 4 drives currently. I was going to get some more soon after I got the case, but family stuff. :negative:

8-bit Miniboss fucked around with this message at 07:43 on May 27, 2020

Crunchy Black
Oct 24, 2017

by Athanatos

MODS!?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

TraderStav posted:

Thanks, will check it out. To be fair to Nest, it's about 30% Nest and 70% me wanting to futz around with a new project. I've always liked the idea of having everything be self sufficient and still record when my internet goes down. Now that I recently got 500/50 internet I can reasonably talk back to the mothership to access my footage remotely.

The nest is mostly fine and I probably won't end up replacing them now that I think this through a bit more. I wish I could get right to the file to scrub the video back and forth. My wife got a surprise gift left on our front bistro table and it took forever for us to get close to figuring out who it was. Go back 15 seconds, load... etc. then the quality still wasn't great with the frame that loaded.

Scrubbing in UniFi Protect is the smoothest I've ever seen in any CCTV system; there's no subscription, and you host it yourself.

Selachian
Oct 9, 2012

Charles posted:

What do you use to convert analog media to digital? Specifically something with a composite out?

Haven't decided yet. Right now, the whole VHS conversion project is "maybe next year, when our budget is hopefully back to normal and we can hire student assistants to do the grunt work again."

Even without that, though, the new dean is very enthusiastic about producing more digital materials and has invested in a heavy-duty book/document scanner for us to share with the library, so I'd like to show we're getting with the program.

BlankSystemDaemon
Mar 13, 2009



Wizard of the Deep posted:

Archival storage like what you're describing is really a little out of the scope of this thread.
This is absolutely untrue. There is an enterprise storage thread, but that sort of gear is absolutely outside a university archive's budget, so this thread is a fine place to ask.
All of your other advice is fine, though. :)
I especially agree that asking the IT department after figuring out the scope and proper budgeting is the right thing to do.

Heck, most of my storage that I've been posting about qualifies as archival storage, and the thread is called 'Packrats Unite' - what are we, if not archiving stuff we think is important?

It's a home-grown image from SA, although that's just an archive thread.

Wizard of the Deep
Sep 25, 2005

Another productive workday

D. Ebdrup posted:

This is absolutely untrue, although there is an enterprise storage thread, but for a university that's absolutely outside of the budget, so this thread is a fine place to ask.

The distinction I was drawing is that this thread is for hacky-rear end poo poo that's fine for home use and for building a lab out of spare commercial parts. It's not for long-term archives that are expected to be managed by teams on an ongoing basis.

I'm really just building up the necessary supporting documentation so that seven years from now, when Selachian comes back in tears because there's no way to recover lost files from some jigsaw puzzle of an archive, I can say "I told you so".

Froist
Jun 6, 2004

It turned into a bit of a messy saga, but I got my minimal-cost storage upgrade on my N40L (from a few pages ago) up and running.

I bought the 4x2.5" mount and only found out afterwards that the external drives I had (Samsung M3s) aren't shuckable and have a proprietary connector inside, so that's winging its way back to Amazon now. In the end I got Xpenology with DSM 6.17 running using the onboard NIC, 4x2tb internal drives, and 3x2tb externals mounted inside and connected up via a USB3 hub/PCIe card, with DSM tricked into thinking they're internal drives. I know this is far from ideal from a performance perspective, but the server sits in my loft connected via powerline adapters so the whole thing isn't setting speed records anyway. And I still have a full mirror of the data onto a (new) external drive.

Having said that, network performance on Xpenology seems pretty bad compared to what I'm used to from my old Ubuntu + ZFS setup. Copying to an SMB share I get around 3mb/s transfer rate, and copying from the same share I get around the same rate but CPU usage jumps to 50%. Having seen this I tried AFP and got about the same results but without the CPU usage spike. I tried sticking my old OS drive back in there (i.e. exactly the same network conditions) and got 10mb/s both ways.

While the "internal USB drives" setup is a bit messy I'm pretty sure that isn't the bottleneck - running a 'dd' test directly on the Xpenology install shows 271mb/s write and 477mb/s read performance from the volume. With hindsight I probably should have run benchmarks like this before copying 6TB back to the box, lesson learned I guess.
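For anyone wanting to run the same kind of sequential-write spot check the post describes (it used `dd` directly on the box), here's a rough Python equivalent; the 64 MiB default size and the temp-file location are arbitrary choices for illustration, not anything from the post:

```python
import os
import tempfile
import time

def sequential_write_mb_s(size_mb=64, block_kb=1024):
    """Rough sequential-write benchmark, similar in spirit to
    `dd if=/dev/zero of=testfile bs=1M count=N`."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.monotonic()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # without fsync you mostly measure the page cache
        elapsed = max(time.monotonic() - start, 1e-9)
        return size_mb / elapsed
    finally:
        os.remove(path)
```

As with `dd`, the number only reflects local volume throughput, so it's useful for ruling the disks out as a bottleneck (as the post did) but says nothing about the network path.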

I like having the nice DSM interface, but if there isn't a quick fix for this I'm tempted to switch to Unraid or FreeNAS or something before I get too much set up on here.

tl;dr: Any ideas why network throughput would be less than half my previous setup having moved to Xpenology?

Edit: I booted Xpenology back up to try iPerf after taking some benchmarks on my old install, and everything seemed to be performing similarly. So I tried some file copies and those had improved too, without the CPU spike. Not sure what the problem was, but it appears to be solved.

Leaving the post here so you can still laugh at my hideous, slow setup. But at least it's no worse than it was before :)

Froist fucked around with this message at 15:00 on May 27, 2020

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Selachian posted:

Yeah, that's the ideal, plus possibly LTO tapes stored off-site for really historically important stuff. Right now, though, my focus is on just acquiring the hardware so we can set up a real system for preserving digital media in-house.

I brought up Glacier because it's reasonably cheap, especially at the sizes I'm guessing you're going to need in the near term. At $0.004/GB/mo, you're talking $160/mo for 40TB. Though if you ever needed to pull the whole thing back down, that'd be another $3,600, so not something to do unless you actually needed it--it's no replacement for a local copy you can work with on a day to day basis. Still, as a DR option, it's not bad.

And while I agree with you about Amazon in 100 years being an open question, I'd also note that the probability of catastrophic loss of your local system is assuredly higher than the probability of AWS going out of business before you retire.
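The arithmetic in the post is easy to sanity-check. A minimal sketch, using the $0.004/GB/mo storage rate quoted above and assuming roughly $0.09/GB for internet egress (the egress rate is my assumption, inferred from the ~$3,600 figure; check current AWS pricing before relying on it):

```python
GLACIER_STORAGE_PER_GB_MONTH = 0.004  # rate quoted in the post
EGRESS_PER_GB = 0.09                  # assumed internet-egress rate, not from the post

def glacier_costs(tb):
    """Return (monthly storage cost, one-off full-restore egress cost) in USD."""
    gb = tb * 1000  # decimal TB, as cloud pricing uses
    return gb * GLACIER_STORAGE_PER_GB_MONTH, gb * EGRESS_PER_GB

monthly, restore = glacier_costs(40)
# matches the post's figures: $160/mo to store 40TB, ~$3,600 to pull it all back
```

Retrieval-request fees and restore-speed tiers would add to the egress number, which is part of why Glacier works better as a disaster-recovery copy than as working storage.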

BlankSystemDaemon
Mar 13, 2009



Wizard of the Deep posted:

The distinction I was drawing is this thread is for hacky-rear end poo poo that's fine for home stuff and building a lab out of spare commercial parts. It's not for long-term archive that's expected to be managed by teams on an on-going basis.

I'm really just building up the necessary supporting documentation so that seven years from now, when Selachian comes back in tears because there's no way to recover lost files from some jigsaw puzzle of an archive, I can say "I told you so".
It's kind of ironic though, because barring catastrophic hardware failure, enterprise storage that isn't ZFS is more likely to lose data than one of the setups that many people in here use.
ZFS has also existed in FreeBSD for double those seven years, so hopefully Selachian isn't coming back because the data got ate and wasn't sufficiently available to out-live catastrophic hardware failure.

DrDork posted:

I brought up Glacier because it's reasonably cheap, especially at the sizes I'm guessing you're going to need in the near term. At $0.004/GB/mo, you're talking $160/mo for 40TB. Though if you ever needed to pull the whole thing back down, that'd be another $3,600, so not something to do unless you actually needed it--it's no replacement for a local copy you can work with on a day to day basis. Still, as a DR option, it's not bad.

And while I agree with you about Amazon in 100 years being an open question, I'd also note that the probability of catastrophic loss of your local system is assuredly higher than the probability of AWS going out of business before you retire.
There is one problem with every cloud solution: data egress, i.e. getting your data back out, is either very, very expensive or very, very slow - or, more usually, both.
The fastest way to recover your data from offline storage, that can't be affected by the usual problems that disks are prone to, is still and likely always will be tape.

BlankSystemDaemon fucked around with this message at 14:52 on May 27, 2020


fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

8-bit Miniboss posted:

The Noctuas do a good enough job. Before I moved to a new apartment, I had it sitting in the living room and it was averaging 35/36c. Post move the box is in my bedroom for the time being until I can get it into a more permanent location near my networking gear, and it's closer to 40 now due to warmer temps in SoCal. One of my drives is a bit higher now because I'm doing a BTRFS scrub at the moment.

Edit: This is with 4 drives currently. I was going to get some more soon after I got the case, but family stuff. :negative:

Thanks for the info!

I think I'm gonna give up on trying to get an 8 bay compact NAS without super loud fans and just get a Node 804. I liked having the Mini SAS HD to Mini SAS HD cabling to keep the mess down, and the hot swap bays were certainly nice, but it seems like none of the compact NAS chassis handle cooling very well.
