|
EVIL Gibson posted:What are you running because I got a guide that certainly helped me out. Also what type of card? No GPU, just the Skylake 6100 to do transcoding. CentOS 7 headless host running CentOS 7 VM with emby. Gigabyte X150-ECC motherboard with 16GB unbuffered ECC DDR4. Your reply makes me think that I should just set it to "Intel Quicksync (experimental)" in the server settings? Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?
|
# ? Jun 28, 2017 19:08 |
|
|
apropos man posted:I went with Emby as my video player. I'm interested in compiling my own ffmpeg, as mentioned upthread, and using it as the custom transcoder. You can either disable hyperthreading or wait for a microcode update to CentOS (probably in August, though maybe sooner). Given that it took a year and a half for a patch for this, it's neither as easy to trigger nor as severe as you might think anyway. You're probably safe just recompiling. I mean, what's the worst that happens? Segfault? apropos man posted:Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?
|
# ? Jun 28, 2017 19:43 |
|
Not strictly NAS-related, but I guess it's about time to get around to replacing this desktop storage hard drive...
|
# ? Jun 28, 2017 20:23 |
|
apropos man posted:No GPU, just the Skylake 6100 to do transcoding. CentOS 7 headless host running CentOS 7 VM with emby. Gigabyte X150-ECC motherboard with 16GB unbuffered ECC DDR4. First, check whether your ffmpeg build doesn't already have Quick Sync enabled.
Then you can try encoding a file by itself and see if it works properly; you should see h264_qsv being used.
Here is a guide on how to install the prerequisite libraries and compile a version of ffmpeg on CentOS. I am not familiar with your VM setup, but QSV may not be available in a sandboxed environment. https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/quicksync-video-ffmpeg-install-valid.pdf EVIL Gibson fucked around with this message at 20:46 on Jun 28, 2017 |
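The original code blocks didn't survive the quote; a minimal sketch of the two checks (assuming a stock ffmpeg on $PATH, and input.mkv as a placeholder filename) would be:

```shell
# Step 1: see whether this ffmpeg build was compiled with Quick Sync.
# If it was, h264_qsv (and possibly hevc_qsv) shows up in the encoder list.
ffmpeg -hide_banner -encoders | grep qsv

# Step 2: run a short test encode through the QSV encoder.
# -t 60 limits the test to the first minute; audio is passed through.
ffmpeg -i input.mkv -t 60 -c:v h264_qsv -b:v 4M -c:a copy qsv_test.mp4
```

If step 1 prints nothing, the build has no QSV support and you're back to compiling your own per the Intel PDF.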
# ? Jun 28, 2017 20:42 |
|
This may sound like a silly question but why are you guys encoding video on your NAS systems? Is it a case of overcoming format incompatibilities or what?
|
# ? Jun 28, 2017 21:37 |
|
EssOEss posted:This may sound like a silly question but why are you guys encoding video on your NAS systems? Is it a case of overcoming format incompatibilities or what? For me, I have mkv files and that format cannot be streamed as easily as h264. Subtitles, forget about it. Transcoding to easier-to-render formats for the end client saves bandwidth and avoids network saturation, especially if you are supporting multiple streams. My phone does not need 4K resolution and 5.1 sound. 720p with stereo is good enough, but I sure don't want to pre-encode every single movie and show to 720p (which Plex allows you to do by "optimizing")
|
# ? Jun 28, 2017 21:43 |
|
EVIL Gibson posted:For me, I have mkv files and that format cannot be streamed as easily as h264. Subtitles, forget about it. You seem to be mixing up container and video codec. But with that said... one of the whole benefits of running Plex or anything like it is that it will figure out the best format for whatever device / connection you're using to connect to it. As EVIL said, there's no point in trying to shove a 4K stream into a non-4K device, since it probably doesn't have the processing power or bandwidth to handle it. Better to let a nice beefy server transcode it on the fly. That way you don't need to keep around multiple copies of every single piece of content.
|
# ? Jun 28, 2017 22:09 |
|
All of the reasons already mentioned, plus I have a Chromecast stuck in the bedroom TV which doesn't have the license for a certain audio codec, so if I'm transcoding on-the-fly I know the audio will work every time, without having to SSH into my server from my phone and run a bash script to transcode the audio and wait 5 minutes until I can play the transcoded version.
|
# ? Jun 28, 2017 22:50 |
|
IOwnCalculus posted:You seem to be mixing up container and video codec I means maktroka video and whatever you want in mp4 haha. I am really getting familiar with the differences and why tools like mp4automater exists 😀
|
# ? Jun 28, 2017 23:28 |
|
evol262 posted:You can either disable hyperthreading or wait for a microcode update to CentOS (probably in August, though maybe sooner). I tried to compile it earlier tonight and couldn't quite pull it off. Kept running into dependency problems, as I could only find guides for older versions of CentOS and the Intel suite isn't exactly well explained. Oh well, I'll just stick to the default ffmpeg instead of the bleeding-edge version. My CPU is running really nicely for my use case anyhow. It's great for an i3. The ZFS thing finished shortly after I hit 'post'. Must have taken about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I'm doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 mins prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected I suppose.
|
# ? Jun 29, 2017 01:24 |
|
Sir Bobert Fishbone posted:Not strictly NAS-related, but I guess it's about time to get around to replacing this desktop storage hard drive... It is exceedingly rare for HDs to do that. Which model/brand?
|
# ? Jun 29, 2017 01:27 |
|
apropos man posted:I tried to compile it earlier tonight and couldn't quite pull it off. Kept running into dependency problems as I could only find guides for older versions of CentOS and the Intel suite isn't exactly well explained. Try that command to list codecs, because I know FFmpeg started auto-building nvenc (Nvidia) into the main distro for Ubuntu. If you see it there, all you'll have to do is install the library files or SDK from Intel, if that.
|
# ? Jun 29, 2017 01:27 |
|
redeyes posted:It is exceedingly rare for HDs to do that. Which model/brand? Not the same guy but I had my 4 year old 1TB WD Blue decide it had to reallocate sectors each time data was written to it sometime a week or two ago. Replaced it but it had gotten up to 1401 reallocated sectors. It didn't run out of sectors to reallocate to before I swapped it out and all the data was fine but seeing "1300ish reallocated sectors" one day was a little scary. MagusDraco fucked around with this message at 01:32 on Jun 29, 2017 |
# ? Jun 29, 2017 01:30 |
|
EVIL Gibson posted:Try that command to list codecs because I know FFMpeg started auto building nvenc (Nvidia) into the main distro for Ubuntu. OK. Cheers. I may have another bash at it tomorrow, cos it's 01:30 and the eyes are closing
|
# ? Jun 29, 2017 01:30 |
|
EVIL Gibson posted:I means maktroka video and whatever you want in mp4 haha. Matroska and MP4 are just containers, your videos are probably all h.264 and converting between containers shouldn't involve an encoding step at all. If MP4 is easier for your software to handle you can just convert everything without any changes to the video or audio. apropos man posted:The ZFS thing finished shortly after I hit 'post'. Must have took about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I''m doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 mins prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected I suppose. That's not normal for ZFS, I imagine it was something else preventing you from rebooting. Maybe another process accessing the disks which is why they were spinning. ZFS doesn't do any automatic scrubs on shutdown, it does synchronize the cache but unless something is wrong that should only take seconds at worst, if it needs to spin up the drives. Desuwa fucked around with this message at 01:51 on Jun 29, 2017 |
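For what it's worth, the container-only conversion Desuwa describes is a one-liner (assuming ffmpeg is installed, and an input.mkv whose streams MP4 can actually hold, e.g. h.264 video with AAC or AC3 audio):

```shell
# -c copy remuxes: the compressed video and audio streams are copied
# bit-for-bit into the new container, so no re-encoding happens and
# the whole thing runs at roughly disk speed.
ffmpeg -i input.mkv -c copy output.mp4
```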
# ? Jun 29, 2017 01:47 |
|
redeyes posted:It is exceedingly rare for HDs to do that. Which model/brand? Western Digital Black (WD6401AALS). It gave me just about 5.5 years, so I guess I don't have too many complaints here. Got a new drive coming in on Friday.
|
# ? Jun 29, 2017 01:50 |
|
Sir Bobert Fishbone posted:Western Digital Black (WD6401AALS). Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with? quote:Not the same guy but I had my 4 year old 1TB WD Blue decide it had to reallocate sectors each time data was written to it sometime a week or two ago. Replaced it but it had gotten up to 1401 reallocated sectors. It didn't run out of sectors to reallocate to before I swapped it out and all the data was fine but seeing "1300ish reallocated sectors" one day was a little scary. I've had a bunch of blues die but they were readable. Good deal. Seagate on the other hand, when they fail, they fail hard. redeyes fucked around with this message at 01:55 on Jun 29, 2017 |
# ? Jun 29, 2017 01:53 |
|
redeyes posted:Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with? I've got a Toshiba MG03ACA300 on order...I've a NAS to back up most of my critical files in case this turns out not to be reliable, but I'm pretty excited to have more space than 640 gigs to play around with.
|
# ? Jun 29, 2017 02:00 |
|
redeyes posted:Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with? Replaced my 1TB Blue with a 4TB WD Black (WD4004FZWX, manufacture date was April 2017). It's just desktop space/steam game overrun if the ssd is full but seems to be doing fine.
|
# ? Jun 29, 2017 02:40 |
|
My current WD 1TB Black, which I've had as my desktop data drive since mid-2009, is one I thought I was getting ripped off paying $150 for at the time. It is a real trooper, and the day it dies will be a day of sadness for my longest-ever-living drive
|
# ? Jun 29, 2017 04:46 |
|
redeyes posted:Good job WD in this case. The Blacks usually do well in failure cases. What'd you replace it with? I've never recovered anything off about 10tb of dead Seagate drives, and that's enough to stop me ever buying anything from that lot ever again.
|
# ? Jun 29, 2017 04:47 |
|
Desuwa posted:That's not normal for ZFS, I imagine it was something else preventing you from rebooting. Maybe another process accessing the disks which is why they were spinning. ZFS doesn't do any automatic scrubs on shutdown, it does synchronize the cache but unless something is wrong that should only take seconds at worst, if it needs to spin up the drives. I noticed it doing something similar during a time when it should have been idle. This was a couple of hours after it happened during the shutdown process. The blue LED on the top of my Fractal R5 case was going like the clappers for 2 or 3 minutes, apropos of nothing. Next time I'll quickly log in and run htop to see what's going on. This is an ideal case for something like grafana but last time I tried to set that up I had a pain in the arse getting all of the constituent parts working with each other. Just when you think you're nearly finished, there's more stuff to add
|
# ? Jun 29, 2017 08:33 |
|
EL BROMANCE posted:I've never recovered anything off about 10tb of dead Seagate drives, and that's enough to stop me ever buying anything from that lot ever again. What was your "recovery" process?
|
# ? Jun 29, 2017 08:49 |
|
Nothing particularly out there, just stuff that had worked in the past even if just partially. I remember Disk Warrior was one of the software choices, and my friend had an expensive drive duplicator in his work place that he'd had great results with before. I remember spending a day or two on the first drive that died with no luck and then deciding the data lost wasn't the biggest deal. The next step would've been drive board swapouts. Anecdotal evidence for sure, but we probably had about equal Seagates and WDs and the WDs were just so much more reliable for us, and so many other people were complaining about the Seagates that was enough for me.
|
# ? Jun 29, 2017 13:14 |
|
apropos man posted:I noticed it doing something similar during a time when it should have been idle. This was a couple of hours after it happened during the shutdown process. The blue LED on the top of my Fractal R5 case was going like the clappers for 2 or 3 minutes, apropos of nothing. Try netdata? Super easy setup to at least have an hour or so of info.
|
# ? Jun 29, 2017 13:35 |
|
Moey posted:What was your "recovery" process? As long as the controller allows access to the disk even if it is damaged, you have a chance. Seagates just tend to go boom and the controllers lock up and they never go back online.
|
# ? Jun 29, 2017 13:59 |
|
IOwnCalculus posted:Try netdata? Super easy setup to at least have an hour or so of info. Hey! The demo of that looks very good! I'm gonna use it. Next problem: I used to have a systemd timer that ran a script to do a long S.M.A.R.T. test on each of my storage drives once a month. I'd have 'smartctl --test=long /dev/sda' on the first Saturday of the month and 'smartctl --test=long /dev/sdb' on the first Sunday. Can I still do this now that they are both in a ZFS mirror pair, or will it wreck something? The test takes two or three hours per drive. I can just set the pool offline while the test runs, yes? e: or better still, just detach whichever drive is under test and then reattach it?
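For reference, the manual equivalents of what that timer script runs (device names are examples; the long test runs inside the drive's firmware, so the command itself returns immediately):

```shell
# Kick off the long self-test; the drive runs it internally in the
# background, and normal I/O can continue, just more slowly.
smartctl --test=long /dev/sda

# Later, check progress and the self-test result log:
smartctl -l selftest /dev/sda

# A systemd OnCalendar expression for "first Saturday of the month":
#   OnCalendar=Sat *-*-01..07 02:00:00
```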
|
# ? Jun 29, 2017 14:27 |
|
apropos man posted:Hey! The demo of that looks very good! I'm gonna use it. you can read about it in the smartctl man page, but that test is normally run online, with the -C option forcing it into captive (offline) mode.
|
# ? Jun 29, 2017 18:21 |
|
hifi posted:you can read about it in the smartctl man page, but that test is normally run online, with the -C option forcing it into captive (offline) mode. I'm gonna adapt my bash script to run 'zpool offline <poolname> <diskname>' before SMART testing. It's probably a better method than I had before. I used to have the test running in the middle of the night, but there still would have been a small amount of i/o during testing. If I offline the drive then there's gonna be nothing hitting it. I'll prepend the test with 'sleep 600', just in case it does a large write operation when offlining.
|
# ? Jun 29, 2017 23:07 |
|
I think you're way overthinking it, people schedule SMART tests on FreeNAS without offlining their zpool all the time.
|
# ? Jun 29, 2017 23:09 |
|
Just run it online, automating taking healthy disks offline for a test that can be run online is just asking for something to go wrong. SMART tests are good to run, but far more important is setting up regular scrubs and giving your system some way to inform you when there are checksum errors or drive failures. I run weekly scrubs and my machine emails me every day, whether it's healthy or not.
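A sketch of the scrub setup Desuwa describes, assuming a pool named tank (the pool name and schedule are placeholders):

```shell
# Start a scrub; it walks every allocated block in the pool and
# verifies checksums, repairing from the mirror where possible.
zpool scrub tank

# See progress, plus read/write/checksum error counts per device:
zpool status -v tank

# A weekly scrub from root's crontab, e.g. every Sunday at 03:00:
#   0 3 * * 0 /usr/sbin/zpool scrub tank
```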
|
# ? Jun 29, 2017 23:14 |
|
Desuwa posted:Just run it online, automating taking healthy disks offline for a test that can be run online is just asking for something to go wrong. Ah. Cool. I didn't know if it was OK to have SMART running on a busy drive. I'll have a look at scrubs too. Cheers.
|
# ? Jun 29, 2017 23:26 |
|
I'm finally going to have some time next week to put attention on my FreeNAS Corral server. I read this: https://forums.freenas.org/index.php?resources/faq-migrating-from-freenas-corral-to-freenas-9-10-11.36/ And it sounds like I can make a fresh install of 11 on a USB key and have it pick up my ZFS arrangement from before. I literally only installed Corral, set up a pool and threw some data on it. Sounds like I'm good to go?
|
# ? Jun 30, 2017 15:56 |
|
Yes, that zpool will be easily importable by anything supporting ZFS. Don't forget to export it before you shut it down (like I always forget to do).
|
# ? Jun 30, 2017 17:39 |
|
Last week when I reinstalled my host three or four times my zpool imported absolutely fine. Even on the one occasion when I'd forgotten to export it beforehand.
|
# ? Jun 30, 2017 20:11 |
|
Yeah, exporting a pool is recommended, but chances are pretty good that you're not going to hurt it if you forget to. I mean, one shouldn't tempt fate intentionally, but ZFS is pretty durable.
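The export/import dance being described, again assuming a pool named tank:

```shell
# Before wiping the OS: export cleanly, which flushes state and marks
# the pool as no longer in use by this host.
zpool export tank

# On the fresh install: list pools visible on the attached disks,
# then import by name.
zpool import
zpool import tank

# If you forgot to export, the pool still looks "in use" by the old
# install, so force the import:
zpool import -f tank
```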
|
# ? Jun 30, 2017 21:14 |
|
I think during my 'why do you keep halting you piece of poo poo?' phase, I had reinstalled without exporting the pool like 5-6 times, and went through like 4 distros. The issue ended up being the crappy WD green drives I had. Turns out .308 fixes WD Greens quite nicely.
|
# ? Jun 30, 2017 21:36 |
|
I've never had a problem with import -f, it's just annoying to go: "Alright, fresh install time." "Hey, why the gently caress isn't my pool already..." "Oh, right."
|
# ? Jun 30, 2017 23:39 |
|
Yeah. I'm a bit electronically sympathetic like that. "I know this is probably gonna import OK but I'm a dick for forgetting to export it".
|
# ? Jul 1, 2017 00:16 |
|
|
apropos man posted:Thanks. I've just spent another couple of hours looking at 9p sharing. I think I did this at the very start, before heading towards nfs, then smb, then zfs. holy gently caress ain't no kill like overkill granted my use case is pretty much the same (just add a few backups) and i'm running zfs with freenas, but if i had to go through even close to the amount of hassle you are i'd be selling my kidney for a synology just to end the janitoring
|
# ? Jul 1, 2017 01:30 |