apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

EVIL Gibson posted:

What are you running because I got a guide that certainly helped me out. Also what type of card?

No GPU, just the Skylake 6100 to do transcoding. CentOS 7 headless host running CentOS 7 VM with emby. Gigabyte X150-ECC motherboard with 16GB unbuffered ECC DDR4.

Your reply makes me think that I should just set it to "Intel Quicksync (experimental)" in the server settings?

Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?


evol262
Nov 30, 2010
#!/usr/bin/perl

apropos man posted:

I've went with Emby as my video player. I'm interested in compiling my own ffmpeg, as mentioned upthread, and using it as the custom transcoder.

I'm scared to compile anything at the moment because I'm running a Skylake i3 6100 in my server, and that's one of the series of CPUs affected by the recent data corruption scare when doing compilation :sadpeanut:

Guess I'll wait for some microcode updates.

You can either disable hyperthreading or wait for a microcode update to CentOS (probably in August, though maybe sooner).

Given that it took a year and a half for this to get patched, it's not as easy to run into, or as severe, as you might think anyway. You're probably safe just recompiling. I mean, what's the worst that happens? A segfault?


apropos man posted:

Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?
Impossible to say without knowing your cache flush settings.

Sir Bobert Fishbone
Jan 16, 2006

Beebort
Not strictly NAS-related, but I guess it's about time to get around to replacing this desktop storage hard drive...

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

apropos man posted:

No GPU, just the Skylake 6100 to do transcoding. CentOS 7 headless host running CentOS 7 VM with emby. Gigabyte X150-ECC motherboard with 16GB unbuffered ECC DDR4.

Your reply makes me think that I should just set it to "Intel Quicksync (experimental)" in the server settings?

Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?

First, check whether your ffmpeg doesn't already have Quick Sync enabled:


code:

$ ffmpeg -codecs | grep qsv
 DEV.LS h264        H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
                    (decoders: h264 h264_qsv )
                    (encoders: h264_qsv )
 DEV.L. mpeg2video  MPEG-2 video
                    (decoders: mpeg2video mpegvideo mpeg2_qsv )
                    (encoders: mpeg2video mpeg2_qsv )
 D.V.L. vc1         SMPTE VC-1
                    (decoders: vc1 vc1_qsv )

Then you can try encoding a file by itself and see if it works properly. You should see h264_qsv being used in the output.


code:

$ ffmpeg -y -i test.mp4 -vcodec h264_qsv -acodec copy -b:v 8000K out.mp4

Here is a guide on how to install the prerequisite libraries and compile a version of ffmpeg on CentOS. I'm not familiar with your VM setup, but QSV may not be available in a sandboxed environment.

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/quicksync-video-ffmpeg-install-valid.pdf
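For reference, the build in that PDF boils down to something like this sketch (it assumes the Intel Media SDK / libmfx is already installed; the exact package names and paths come from the PDF, not from me):

```shell
# sketch only -- see the Intel PDF above for the full prerequisite setup
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
# --enable-libmfx is the flag that pulls Quick Sync (QSV) support in
./configure --enable-nonfree --enable-libmfx
make -j"$(nproc)"
sudo make install
```

After that, the `ffmpeg -codecs | grep qsv` check above should show the `_qsv` encoders.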

EVIL Gibson fucked around with this message at 20:46 on Jun 28, 2017

EssOEss
Oct 23, 2006
128-bit approved
This may sound like a silly question but why are you guys encoding video on your NAS systems? Is it a case of overcoming format incompatibilities or what?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

EssOEss posted:

This may sound like a silly question but why are you guys encoding video on your NAS systems? Is it a case of overcoming format incompatibilities or what?

For me, I have mkv files and that format cannot be streamed as easily as h264. Subtitles, forget about it.

Transcoding to easier-to-render formats for the end client saves bandwidth and avoids network saturation, especially if you're supporting multiple streams.

My phone doesn't need 4K resolution and 5.1 sound. 720p with stereo is good enough, but I sure don't want to pre-encode every single movie and show to 720p (which Plex allows you to do by "optimizing").

IOwnCalculus
Apr 2, 2003





EVIL Gibson posted:

For me, I have mkv files and that format cannot be streamed as easily as h264. Subtitles, forget about it.

You seem to be mixing up container and video codec :v:

But with that said... one of the whole benefits of running Plex or anything like it, is that it will figure out the best format for whatever device / connection you're using to connect to it. As EVIL said, there's no point in trying to shove a 4K stream into a non-4K device, since it probably doesn't have the processing power or bandwidth to handle it. Better to let a nice beefy server transcode it on the fly. That way you don't need to keep around multiple copies for every single piece of content.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
All of the reasons already mentioned, plus I have a Chromecast stuck in the bedroom TV which doesn't support a certain audio codec, so if I'm transcoding on-the-fly I know the audio will work every time, without having to SSH into my server from my phone, run a bash script to transcode the audio, and wait 5 minutes until I can play the transcoded version.
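That audio-fixing script is essentially an audio-only transcode (filenames here are placeholders; the video is stream-copied, so only the audio gets re-encoded):

```shell
# keep the video stream untouched (-c:v copy), re-encode only the
# audio to AAC, which the Chromecast can play natively
ffmpeg -i input.mkv -c:v copy -c:a aac -b:a 192k output.mkv
```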

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

IOwnCalculus posted:

You seem to be mixing up container and video codec :v:

But with that said... one of the whole benefits of running Plex or anything like it, is that it will figure out the best format for whatever device / connection you're using to connect to it. As EVIL said, there's no point in trying to shove a 4K stream into a non-4K device, since it probably doesn't have the processing power or bandwidth to handle it. Better to let a nice beefy server transcode it on the fly. That way you don't need to keep around multiple copies for every single piece of content.

I mean Matroska video with whatever you want inside, versus mp4, haha.

I am really getting familiar with the differences and why tools like mp4_automator exist 😀

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

evol262 posted:

You can either disable hyperthreading or wait for a microcode update to CentOS (probably in August, though maybe sooner).

Given that it took a year and a half for this to get patched, it's not as easy to run into, or as severe, as you might think anyway. You're probably safe just recompiling. I mean, what's the worst that happens? A segfault?

Impossible to say without knowing your cache flush settings.

I tried to compile it earlier tonight and couldn't quite pull it off. Kept running into dependency problems as I could only find guides for older versions of CentOS and the Intel suite isn't exactly well explained.

Oh well, I'll just stick to the default ffmpeg instead of the bleeding edge version. My CPU is running really nice for my use case anyhow. It's great for an i3.

The ZFS thing finished shortly after I hit 'post'. It must have taken about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I'm doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 minutes prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected, I suppose.

redeyes
Sep 14, 2002

by Fluffdaddy

Sir Bobert Fishbone posted:

Not strictly NAS-related, but I guess it's about time to get around to replacing this desktop storage hard drive...



It is exceedingly rare for HDs to do that. Which model/brand?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

apropos man posted:

I tried to compile it earlier tonight and couldn't quite pull it off. Kept running into dependency problems as I could only find guides for older versions of CentOS and the Intel suite isn't exactly well explained.

Oh well, I'll just stick to the default ffmpeg instead of the bleeding edge version. My CPU is running really nice for my use case anyhow. It's great for an i3.

The ZFS thing finished shortly after I hit 'post'. It must have taken about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I'm doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 minutes prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected, I suppose.

Try that command to list codecs, because I know ffmpeg started auto-building nvenc (Nvidia) support into the main package for Ubuntu.

If you see it there, all you'll have to do is install the library files or SDK from Intel, if that.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

redeyes posted:

It is exceedingly rare for HDs to do that. Which model/brand?

Not the same guy, but sometime a week or two ago my 4-year-old 1TB WD Blue decided it had to reallocate sectors each time data was written to it. I replaced it, but it had gotten up to 1401 reallocated sectors. It didn't run out of sectors to reallocate to before I swapped it out, and all the data was fine, but seeing "1300ish reallocated sectors" one day was a little scary.

MagusDraco fucked around with this message at 01:32 on Jun 29, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

EVIL Gibson posted:

Try that command to list codecs, because I know ffmpeg started auto-building nvenc (Nvidia) support into the main package for Ubuntu.

If you see it there, all you'll have to do is install the library files or SDK from Intel, if that.

OK. Cheers. I may have another bash at it tomorrow, cos it's 01:30 and the eyes are closing :tootzzz:

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

EVIL Gibson posted:

I mean Matroska video with whatever you want inside, versus mp4, haha.

I am really getting familiar with the differences and why tools like mp4_automator exist 😀

Matroska and MP4 are just containers; your videos are probably all h.264, and converting between containers shouldn't involve an encoding step at all. If MP4 is easier for your software to handle, you can just convert everything without any changes to the video or audio.
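That kind of container swap (a remux) is just a stream copy into the new container, so it takes seconds rather than hours (filenames are placeholders):

```shell
# copy every stream as-is from the mkv into an mp4; nothing is re-encoded
# (mp4 may reject some subtitle tracks -- add -sn to drop them if so)
ffmpeg -i input.mkv -c copy output.mp4
```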

apropos man posted:

The ZFS thing finished shortly after I hit 'post'. Must have took about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I''m doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 mins prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected I suppose.

That's not normal for ZFS; I imagine something else was preventing you from rebooting. Maybe another process was accessing the disks, which is why they were spinning. ZFS doesn't do any automatic scrubs on shutdown. It does synchronize the cache, but unless something is wrong that should only take seconds at worst, even if it needs to spin up the drives.
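Next time it happens, you can check whether it's actually the pool that's busy before blaming ZFS (pool name here is a placeholder):

```shell
# per-device I/O stats for the pool, refreshed every second
zpool iostat -v tank 1
# and confirm whether a scrub/resilver is actually running
zpool status tank
```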

Desuwa fucked around with this message at 01:51 on Jun 29, 2017

Sir Bobert Fishbone
Jan 16, 2006

Beebort

redeyes posted:

It is exceedingly rare for HDs to do that. Which model/brand?

Western Digital Black (WD6401AALS).

It gave me just about 5.5 years, so I guess I don't have too many complaints here. Got a new drive coming in on Friday.

redeyes
Sep 14, 2002

by Fluffdaddy

Sir Bobert Fishbone posted:

Western Digital Black (WD6401AALS).

It gave me just about 5.5 years, so I guess I don't have too many complaints here. Got a new drive coming in on Friday.

Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with?

quote:

Not the same guy but I had my 4 year old 1TB WD Blue decide it had to reallocate sectors each time data was written to it sometime a week or two ago. Replaced it but it had gotten up to 1401 reallocated sectors. It didn't run out of sectors to reallocate to before I swapped it out and all the data was fine but seeing "1300ish reallocated sectors" one day was a little scary.

I've had a bunch of blues die but they were readable. Good deal. Seagate on the other hand, when they fail, they fail hard.

redeyes fucked around with this message at 01:55 on Jun 29, 2017

Sir Bobert Fishbone
Jan 16, 2006

Beebort

redeyes posted:

Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with?


I've had a bunch of blues die but they were readable. Good deal. Seagate on the other hand, when they fail, they fail hard.

I've got a Toshiba MG03ACA300 on order...I've a NAS to back up most of my critical files in case this turns out not to be reliable, but I'm pretty excited to have more space than 640 gigs to play around with.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

redeyes posted:

Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with?


I've had a bunch of blues die but they were readable. Good deal. Seagate on the other hand, when they fail, they fail hard.

Replaced my 1TB Blue with a 4TB WD Black (WD4004FZWX, manufacture date was April 2017). It's just desktop space/steam game overrun if the ssd is full but seems to be doing fine.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
My current WD 1 TB Black, which I've had as my desktop data drive since mid-2009, is one I thought I was getting ripped off paying $150 for at the time. It is a real trooper, and the day it dies will be a day of sadness for my longest-ever-living drive :911:

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬



redeyes posted:

Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with?


I've had a bunch of blues die but they were readable. Good deal. Seagate on the other hand, when they fail, they fail hard.

I've never recovered anything off about 10tb of dead Seagate drives, and that's enough to stop me ever buying anything from that lot ever again.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Desuwa posted:

That's not normal for ZFS; I imagine something else was preventing you from rebooting. Maybe another process was accessing the disks, which is why they were spinning. ZFS doesn't do any automatic scrubs on shutdown. It does synchronize the cache, but unless something is wrong that should only take seconds at worst, even if it needs to spin up the drives.

I noticed it doing something similar during a time when it should have been idle. This was a couple of hours after it happened during the shutdown process. The blue LED on the top of my Fractal R5 case was going like the clappers for 2 or 3 minutes, apropos of nothing.

Next time I'll quickly log in and run htop to see what's going on. This is an ideal case for something like grafana but last time I tried to set that up I had a pain in the arse getting all of the constituent parts working with each other.

Just when you think you're nearly finished, there's more stuff to add :shudder:

Moey
Oct 22, 2010

I LIKE TO MOVE IT

EL BROMANCE posted:

I've never recovered anything off about 10tb of dead Seagate drives, and that's enough to stop me ever buying anything from that lot ever again.

What was your "recovery" process?

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬



Nothing particularly out there, just stuff that had worked in the past, even if only partially. I remember Disk Warrior was one of the software choices, and my friend had an expensive drive duplicator at his workplace that he'd had great results with before. I remember spending a day or two on the first drive that died with no luck, then deciding the lost data wasn't the biggest deal. The next step would've been drive board swapouts. Anecdotal evidence for sure, but we probably had about equal numbers of Seagates and WDs, and the WDs were just so much more reliable for us; with so many other people complaining about the Seagates too, that was enough for me.

IOwnCalculus
Apr 2, 2003





apropos man posted:

I noticed it doing something similar during a time when it should have been idle. This was a couple of hours after it happened during the shutdown process. The blue LED on the top of my Fractal R5 case was going like the clappers for 2 or 3 minutes, apropos of nothing.

Next time I'll quickly log in and run htop to see what's going on. This is an ideal case for something like grafana but last time I tried to set that up I had a pain in the arse getting all of the constituent parts working with each other.

Just when you think you're nearly finished, there's more stuff to add :shudder:

Try netdata? Super easy setup to at least have an hour or so of info.

redeyes
Sep 14, 2002

by Fluffdaddy

Moey posted:

What was your "recovery" process?

As long as the controller allows access to the disk even if it is damaged, you have a chance. Seagates just tend to go boom and the controllers lock up and they never go back online.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

IOwnCalculus posted:

Try netdata? Super easy setup to at least have an hour or so of info.

Hey! The demo of that looks very good! I'm gonna use it.

Next problem:
I used to have a systemd timer that ran a script to do a long S.M.A.R.T. test on each of my storage drives once a month.

I'd have:
'smartctl --test=long /dev/sda' on the first Saturday of the month and 'smartctl --test=long /dev/sdb' on the first Sunday.

Can I still do this now that they are both in a ZFS mirror pair or will it wreck something? The test takes two or three hours per drive.

I can just set the pool offline while the test runs, yes?

e: or better still, just detach whichever drive is under test and then reattach it?
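For what it's worth, the "first Saturday" schedule translates directly into a systemd calendar spec (unit names and device path here are hypothetical):

```ini
# /etc/systemd/system/smart-long-sda.timer
[Unit]
Description=Monthly long SMART test on /dev/sda

[Timer]
# a Saturday that falls on day 1-7 of the month = the first Saturday
OnCalendar=Sat *-*-01..07 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/smart-long-sda.service
[Unit]
Description=Long SMART test on /dev/sda

[Service]
Type=oneshot
ExecStart=/usr/sbin/smartctl --test=long /dev/sda
```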

hifi
Jul 25, 2012

apropos man posted:

Hey! The demo of that looks very good! I'm gonna use it.

Next problem:
I used to have a systemd timer that ran a script to do a long S.M.A.R.T. test on each of my storage drives once a month.

I'd have:
'smartctl --test=long /dev/sda' on the first Saturday of the month and 'smartctl --test=long /dev/sdb' on the first Sunday.

Can I still do this now that they are both in a ZFS mirror pair or will it wreck something? The test takes two or three hours per drive.

I can just set the pool offline while the test runs, yes?

e: or better still, just detach whichever drive is under test and then reattach it?

You can read about it in the smartctl man page, but that test is normally run online; it's the -C option that forces it into captive (offline) mode.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

hifi posted:

You can read about it in the smartctl man page, but that test is normally run online; it's the -C option that forces it into captive (offline) mode.

I'm gonna adapt my bash script to run 'zpool offline <poolname> <diskname>' before SMART testing. It's probably a better method than I had before. I used to have the test running in the middle of the night, but there still would have been a small amount of I/O during testing. If I offline the drive then there's gonna be nothing hitting it.

I'll prepend the test with 'sleep 600', just in case it does a large write operation when offlining.

IOwnCalculus
Apr 2, 2003





I think you're way overthinking it, people schedule SMART tests on FreeNAS without offlining their zpool all the time.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
Just run it online, automating taking healthy disks offline for a test that can be run online is just asking for something to go wrong.

SMART tests are good to run, but far more important is setting up regular scrubs and giving your system some way to inform you when there are checksum errors or drive failures. I run weekly scrubs and my machine emails me every day, whether it's healthy or not.
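A minimal version of that setup (pool name and address are placeholders) is just a weekly scheduled scrub plus a status mail:

```shell
#!/bin/sh
# run weekly from cron or a systemd timer
POOL=tank                          # placeholder pool name
zpool scrub "$POOL"                # scrub runs in the background
# -x prints a one-line "healthy" unless something is wrong; mail it either way
# (check status again later for the scrub's actual results)
zpool status -x "$POOL" | mail -s "zfs status: $(hostname)" you@example.com
```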

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Desuwa posted:

Just run it online, automating taking healthy disks offline for a test that can be run online is just asking for something to go wrong.

SMART tests are good to run, but far more important is setting up regular scrubs and giving your system some way to inform you when there are checksum errors or drive failures. I run weekly scrubs and my machine emails me every day, whether it's healthy or not.

Ah. Cool. I didn't know if it was OK to have SMART running on a busy drive. I'll have a look at scrubs too. Cheers.

Ziploc
Sep 19, 2006
MX-5
I'm finally going to have some time next week to put attention on my FreeNAS Corral server.

I read this: https://forums.freenas.org/index.php?resources/faq-migrating-from-freenas-corral-to-freenas-9-10-11.36/

And it sounds like I can make a fresh install of 11 on a USB key and have it pick up my ZFS arrangement from before. I literally only installed Corral, set up a pool and threw some data on it. Sounds like I'm good to go?

IOwnCalculus
Apr 2, 2003





Yes, that zpool will be easily importable by anything supporting ZFS. Don't forget to export it before you shut it down (like I always forget to do).
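The dance, with 'tank' as a stand-in pool name, is just:

```shell
zpool export tank     # cleanly hand the pool off before the reinstall
# ...fresh install of FreeNAS 11...
zpool import tank     # normal case after a clean export
zpool import -f tank  # if you forgot to export, force the import
```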

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Last week when I reinstalled my host three or four times my zpool imported absolutely fine. Even on the one occasion when I'd forgotten to export it beforehand.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, exporting a pool is recommended, but chances are pretty good that you're not going to hurt it if you forget to. I mean, one shouldn't tempt fate intentionally, but ZFS is pretty durable.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
I think during my 'why do you keep halting you piece of poo poo?' phase, I had reinstalled without exporting the pool like 5-6 times, and went through like 4 distros. The issue ended up being the crappy WD green drives I had. Turns out .308 fixes WD Greens quite nicely.

IOwnCalculus
Apr 2, 2003





I've never had a problem with import -f, it's just annoying to go:

"Alright, fresh install time."
"Hey, why the gently caress isn't my pool already..."
"Oh, right."

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yeah. I'm a bit electronically sympathetic like that. "I know this is probably gonna import OK but I'm a dick for forgetting to export it".


Generic Monk
Oct 31, 2011

apropos man posted:

Thanks. I've just spent another couple of hours looking at 9p sharing. I think I did this at the very start, before heading towards nfs, then smb, then zfs.

I feel like I've gone round in a week-long circle and come back to the idea of just mounting the two different datasets as an nfs share for each guest.

Gonna finally commit to something tonight that actually works!



Home server with some torrents and a few personal documents on it. :wtf:

holy gently caress ain't no kill like overkill



granted my use case is pretty much the same (just add a few backups) and i'm running zfs with freenas, but if i had to go through even close to the amount of hassle you are i'd be selling my kidney for a synology just to end the janitoring
