EVIL Gibson
Mar 23, 2001

I am running Ubuntu 16.04 and I need to image my boot drive onto a new 256 GB SSD.

The BIG issue is that the boot drive is a 256 GB USB drive, and I know there are some issues when you image between device types.

The USB drive has been configured to keep as much caching off of it as possible, but I am still worried about the finite write cycles of USB flash. An SSD would be better, which is the reason I am doing this.

What would everyone advise as a starting point?

Some things I am thinking of and need feedback on:

1) Which utility to use? rsync (I am still not good at understanding its huge number of options and have only used it for simple directory moves) or plain dd? There are two ext4 partitions on the USB disk: one for boot and one for everything else. I am worried about dd hitting an error on the USB drive and dropping to slower-than-slow speeds; the alternative is ddrescue, which skips bad blocks until it has finished moving the rest of the good blocks.

2) It is probably safer to export the ZFS drives and recreate the zpool, but I would love to avoid that since I would have to reconfigure Plex.

3) grub is going to be a pain to redo.

4) dd can't copy from a bigger drive to a smaller one. I need to find out how many blocks the SSD has to see whether I can dd at all (see the sketch below).
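For the size check and the clone itself, this is roughly what I have in mind (just a sketch; /dev/sdb as the USB source and /dev/sdc as the SSD are made-up names):

code:

# sizes in 512-byte sectors; the destination must be at least as big
blockdev --getsz /dev/sdb
blockdev --getsz /dev/sdc

# clone with ddrescue; the mapfile lets you resume and retry bad blocks later
ddrescue -f /dev/sdb /dev/sdc rescue.map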

EVIL Gibson fucked around with this message at 20:40 on Jan 28, 2017


EVIL Gibson
Mar 23, 2001


eames posted:

multiple hosted servers and recording them directly to ACD "for fun".

I'll admit I don't use Amazon Cloud Drive, so I didn't know that acronym and thought you meant he was recording to the photo manager ACDSee.

For an actual NAS question: I am trying to move away from SMB and go to NFS. I see a lot of tutorials on setting up different permissions for different IPs/IP ranges, but I am trying to find a way to have read-only access for anonymous users while keeping read/write for an authenticated user.

There are no really good guides for doing this through ZFS, except for those long zfs set sharenfs="lots of characters" commands (an example of the kind of line I mean is below).


For those who have done it before: is creating the share outside of ZFS easier, or is it better that ZFS brings the shares up as soon as the mounts are ready?
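For reference, the kind of one-liner I mean (a sketch using Solaris-style share_nfs access lists and made-up addresses; ZFS on Linux hands the string to exportfs, so the exact syntax may differ):

code:

# read-only for the whole LAN, read/write for one trusted host
zfs set sharenfs="ro=@192.168.1.0/24,rw=@192.168.1.50/32" tank/media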

EVIL Gibson
Mar 23, 2001


GokieKS posted:

Back when I used NFS (before Apple completely hosed up NFS auto-mounting in one of the recent versions, which prompted me to just give up and move to netatalk/AFP instead of NFS, since slow SMB performance on Mac clients was the only reason I used NFS, and even that required a lot of tuning), I used the "zfs set sharenfs" command.

Despite the syntax of one giant line being kind of obtuse, it's not that bad - it's just a bunch of share_nfs attributes separated by commas.

Before all that though, why are you moving from SMB to NFS? Unless your clients are predominantly Linux machines, SMB (+AFP if you have Macs) is almost certainly going to be an easier (and probably better) option.

I have SMB up and running. It's great. The reason I would like to move to NFS is that I always hear it transmits more efficiently without needing hacks like increasing the MTU on my machine. That might be complete garbage, but I would like to take full advantage of the ARC.

EVIL Gibson
Mar 23, 2001


apropos man posted:

Because I had difficulty accessing shares from one, so I also installed the other while I worked out what was going to be the best option. I usually wouldn't have both of them in an everyday situation either. I wish Emby were as slick as Plex, though. It's just not quite there in terms of UX but has the advantage of being a bit more specific with watch folders.

There's also the fact that you can replace Emby's version of ffmpeg with one you built yourself to enable hardware decoding and encoding.

Plex used to use stock ffmpeg before they went closed source, but they have been teasing hardware support for the past year or two.

Emby does it properly, prioritizing the GPU first and then the CPU.

EVIL Gibson
Mar 23, 2001


evol262 posted:

I'm gonna assume you mean decoding here. Accel has been there for desktops, and it's practically required for a number of DEs.

GPUs are actually pretty bad at encoding. It's useful for offload, but GPUs have no branch prediction, which is incredibly important for modern codecs. There's a use case for it, but it's not nearly as extreme as you'd think.

Uh, they do really well. Not sure what you are looking at to make that case.

Also, are you confusing branch prediction with matrix transformations?

EVIL Gibson
Mar 23, 2001


evol262 posted:

No, I'm not.

I made the distinction because older versions of NVENC and VCE were bad, GPU encoding isn't great on Intel, consumer "gaming" GPUs often have hardware encoder support which barely beats (or is worse than) CPU encoding (see: GTX 970/980), and GPU encoding is really, really bad at dealing with anything that has artifacting to mitigate or isn't streaming directly to h264/h265.

GPU encoding is trickier than it looks. And, for Plex, 2 cores of a modern i5/i7 can encode and stream an h264 file on the fly. No GPU necessary.

So I assume you are running the preview version where you can turn on HW transcoding, because the current release of Plex does not support it?

Do you specifically configure your Plex to use only 2 cores? Do you optimize your videos for easier streaming (AC3 to AAC, ASS subs to mov_text, MKV to h264???)? No test I run gets on-the-fly transcoding numbers anywhere near what I get using NVENC. (The kind of comparison I keep running is sketched below.)
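For what it's worth, this is roughly the comparison I mean (a sketch; the file name, bitrate, and presets are arbitrary):

code:

# software encode on the CPU
$ ffmpeg -i test.mkv -c:v libx264 -preset veryfast -b:v 8000k -c:a copy out_x264.mp4

# hardware encode via NVENC
$ ffmpeg -i test.mkv -c:v h264_nvenc -preset fast -b:v 8000k -c:a copy out_nvenc.mp4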

EVIL Gibson fucked around with this message at 02:55 on Jun 28, 2017

EVIL Gibson
Mar 23, 2001


apropos man posted:

I've gone with Emby as my video player. I'm interested in compiling my own ffmpeg, as mentioned upthread, and using it as the custom transcoder.

I'm scared to compile anything at the moment because I'm running a Skylake i3 6100 in my server, and that's one of the series of CPUs affected by the recent data-corruption-during-compilation scare :sadpeanut:

Guess I'll wait for some microcode updates.

What are you running because I got a guide that certainly helped me out. Also what type of card?

EVIL Gibson
Mar 23, 2001


apropos man posted:

No GPU, just the Skylake 6100 to do transcoding. CentOS 7 headless host running CentOS 7 VM with emby. Gigabyte X150-ECC motherboard with 16GB unbuffered ECC DDR4.

Your reply makes me think that I should just set it to "Intel Quicksync (experimental)" in the server settings?

Also, I've been copying a load of videos from a spare drive into my ZFS mirror pair. Then I went to bed for an hour and I just sent the "shutdown -r 0" command to my host. It's doing some heavy writing and no signs of the reboot yet. Been going for about 4 minutes. Is this normal for ZFS? Is it performing a final scrub/cache sync before reboot?

First, check whether your ffmpeg already has Quick Sync enabled:


code:

$ ffmpeg -codecs | grep qsv
DEV.LS h264        H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
                   (decoders: h264 h264_qsv) (encoders: h264_qsv)
DEV.L. mpeg2video  MPEG-2 video
                   (decoders: mpeg2video mpegvideo mpeg2_qsv) (encoders: mpeg2video mpeg2_qsv)
D.V.L. vc1         SMPTE VC-1
                   (decoders: vc1 vc1_qsv)

Then try encoding a file by itself and see whether it works properly. You should see h264_qsv show up in the stream mapping output.


code:

$ ffmpeg -y -i test.mp4 -vcodec h264_qsv -acodec copy -b:v 8000K out.mp4

Here is a guide on installing the prerequisite libraries and compiling a version of ffmpeg on CentOS. I am not familiar with your VM setup, but QSV may not be available in a virtualized environment.

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/quicksync-video-ffmpeg-install-valid.pdf

EVIL Gibson fucked around with this message at 20:46 on Jun 28, 2017

EVIL Gibson
Mar 23, 2001


EssOEss posted:

This may sound like a silly question but why are you guys encoding video on your NAS systems? Is it a case of overcoming format incompatibilities or what?

For me, I have MKV files, and that format cannot be streamed as easily as h264. Subtitles? Forget about it.

Transcoding to easier-to-render formats for the end client saves bandwidth and reduces network saturation, especially if you are supporting multiple streams.

My phone does not need 4K resolution and 5.1 sound. 720p with stereo is good enough, but I sure don't want to pre-encode every single movie and show down to 720p (which Plex lets you do via "optimizing"). A sketch of that kind of downscale is below.
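A one-off downscale of that sort might look like this (a sketch; the filenames and bitrate are arbitrary):

code:

# 720p, stereo audio, h264 in an MP4 container for easy streaming
$ ffmpeg -i movie.mkv -vf scale=-2:720 -c:v libx264 -b:v 4000k -c:a aac -ac 2 out.mp4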

EVIL Gibson
Mar 23, 2001


IOwnCalculus posted:

You seem to be mixing up container and video codec :v:

But with that said... one of the whole benefits of running Plex or anything like it, is that it will figure out the best format for whatever device / connection you're using to connect to it. As EVIL said, there's no point in trying to shove a 4K stream into a non-4K device, since it probably doesn't have the processing power or bandwidth to handle it. Better to let a nice beefy server transcode it on the fly. That way you don't need to keep around multiple copies for every single piece of content.

I mean Matroska for video and whatever-you-want in MP4, haha.

I am really getting familiar with the differences and why tools like mp4_automator exist 😀

EVIL Gibson
Mar 23, 2001


apropos man posted:

I tried to compile it earlier tonight and couldn't quite pull it off. Kept running into dependency problems as I could only find guides for older versions of CentOS and the Intel suite isn't exactly well explained.

Oh well, I'll just stick to the default ffmpeg instead of the bleeding edge version. My CPU is running really nice for my use case anyhow. It's great for an i3.

The ZFS thing finished shortly after I hit 'post'. Must have taken about 4 minutes to sync the cache to disk before rebooting. I'll bear that in mind and reboot less often when I'm doing final checks to see if systemd services are enabling properly. It probably took longer than normal because I'd completed an rsync job of a couple of hundred GB about 30 mins prior to rebooting. I know that ZFS likes to keep things cached in RAM, so this is to be expected I suppose.

Try that command to list codecs, because I know FFmpeg started building NVENC (Nvidia) support into the main Ubuntu packages.

If you see it there, all you'll have to do is install the library files or SDK from Intel, if that.

EVIL Gibson
Mar 23, 2001

Does anyone else order the same size and model of drive from different stores so you're pulling from different batches?

My biggest worry is that if one drive starts going, the other drives might start going as well at the most critical time: resilvering.

I just remember Seagate drives getting the click of death at a certain read/write count for some damn reason.

EVIL Gibson
Mar 23, 2001


BobHoward posted:

Assuming you're talking about what I think you're talking about, that last one wasn't a QC or manufacturing defect, it was a firmware bug.

Then it was the best-kept secret, because I and several others had no clue what was going on in 2006 or so.

Tech support sites were awful to look at back then. Download pages were no better, since they all looked like GeoCities pages with miles and miles of manufacturer firmware files, and you considered yourself lucky if there was also a link to the tool to flash the firmware, hah.

EVIL Gibson fucked around with this message at 20:42 on Jul 3, 2017

EVIL Gibson
Mar 23, 2001


DrDork posted:

There was a time when they were shucking external drives because it was notably cheaper than buying normal internal drives, but I think that was back when they were mostly using 1.5TB drives. Pretty sure everything 2TB and up is a "normal" procurement.

What is relevant is, as you note, that most of (but not all of) the HGSTs are either data-center (though not their NAS-branded) or enterprise drives, which you'd expect to be more reliable. Likewise, all the WD drives are some form of Red.

The Seagates, meanwhile, are mostly generic Barracuda internal drives, with no NAS- or data-center specific ones that I could see.

So that generic Seagates are actually turning in failure rates on par with Reds is interesting. HGST is the clear reliability winner, with everyone else being more or less equal, and depending heavily on specific model number.

Basically, if you can put up with the chatter a HGST puts out, get them for your NAS.

But that's why you build out a network: so you can stick the drives in a dark, cool, but not humid place.

EVIL Gibson
Mar 23, 2001


SamDabbers posted:

It was designed on Solaris and has more and better-integrated features on it and Illumos distros, but napp-it has basic support for Debian and Ubuntu.

Dang, their page has the best FAQ, written as if I were back in the year 2000 reading the description off a cheap PC component's box. Here are some hot takes:


napp-it bitching FAQ posted:

No bitrot
As it was known that storage has a silent error/bitrot problem (data corruption by chance) that become a problem with larger capacities, they build in real data checksum protection end to end, from disk to disk driver to detect these corruptions and repair them from redundancy either on access or an online check of all files called scrubbing.

No Write holes
Next problem ZFS should be solve was data corruption due a crash during a write. Older systems first modify data on a write and update then affected metadata, On a crash it can happen that data is updated but metadata not resulting in a currupted filesystem. If this happens on a raid you may additionally find a corrupted raid too. [....] This is achieved with a write behaviour where you never update old data but always write it newly. [....]

EVIL Gibson fucked around with this message at 20:33 on Jul 10, 2017

EVIL Gibson
Mar 23, 2001

I remember when I ran a Windows home file server... thing (I forget the name; it let you build a RAID-0 bastard where you could throw in drives of any size and it made sure there were at least two copies of every file, on two different disks). The one thing I really appreciated was the hard drive visualizer I used. I think it was built into Windows, but whatever it was, it let you build a rough model of your PC with hard drive images linked to each drive's details. If a disk failed, you didn't have to play twenty rounds of disk-pulling to figure out which disk was which.

Is there a command-line program that can do this with the best (the best) ASCII art, and maybe even run commands right there to offline a disk while still showing you the PC's contents?
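Short of ASCII art, the closest thing I know is matching zpool members to physical drives by serial number (a sketch; the pool and device names are made up):

code:

# which member failed
zpool status -v tank

# map kernel names to stable ids, then read the serial off the suspect
ls -l /dev/disk/by-id/
smartctl -i /dev/sda | grep -i serial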

EVIL Gibson
Mar 23, 2001


IOwnCalculus posted:

Reverse proxying is what I'm going to set up on mine.

Is there a good reverse proxy guide? I will have to set that up for Emby soon. Some of the ones I'm finding that look good are from six years ago, and I want to make sure all options are explored.

EVIL Gibson
Mar 23, 2001


Paul MaudDib posted:


As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual.

But yeah hardware-accelerated encoding is a pretty substantial tradeoff in quality or bitrate. The hardware just isn't as good as x264 and x265, its merit is how fast it runs. Speaking from the testing I've done on video game captures, at low bitrates you see quality improvement all the way down to veryslow with x264 at least (haven't played with x265 much, it's just too damned slow for day-to-day usage with current processors).

Nvidia locks it to 2 streams on my GT 730. 4 sounds like a joke if that's true; it's been like 4 years.

Now here's a hot tip: AMD doesn't have a lock. You can encode as much as you want.

And encoding is something I need to do, as I mentioned before. I would use Quick Sync on Intel, but my Xeon doesn't have it. It is getting old having to pre-download stuff to watch on my phone with Plex, and their work on the transcoder amounts to a worse version of ffmpeg that you can't recompile (the loopholes they jump through to avoid showing their code on GitHub are hilarious).

EVIL Gibson
Mar 23, 2001


VostokProgram posted:

Maybe you can make a symbolic link to a share instead of mapping the drive?

Another vote for symbolic links, if only for the fact that they were created to aid Unix migrations and compatibility.

Just be careful about the flags and the argument order when making those links (see below).
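The argument order is the classic trip-up; a minimal reminder (paths are made up):

code:

# ln -s TARGET LINK_NAME -- the real location comes first
ln -s /mnt/nas/media /home/me/media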

EVIL Gibson
Mar 23, 2001


Steakandchips posted:

So if I have 2 synology nases, and I have the exact same stuff on both, on the same LAN, but then, i take 1 of them far away, and link them together via the internet, what do I need to do make them continuously synced?

i.e. if I write something on one of them, I want it to be available on the other.

sync should be bi-directional.

What is the keyword I am looking for? I don't think synology cloud-sync does this, as it is a master/client relationship, i.e. 1 directional.

rsync?

Unison (https://www.cis.upenn.edu/~bcpierce/unison/) is built to keep two computers up to date with each other's latest files. It uses the rsync algorithm under the hood for the diffs and transfer handling, but applies it in both directions, so each machine updates the other.


Demo directions: https://www.howtoforge.com/tutorial/unison-file-sync-between-two-servers-on-debian-jessie/
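Basic usage is pleasantly short (a sketch; the hostname and paths are made up):

code:

# two-way sync between a local tree and the same tree on another box
unison /data ssh://nas.local//data -batch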

EVIL Gibson
Mar 23, 2001


G-Prime posted:

Just be aware that Unison can spaz out very badly when you do a large number of file changes in a short period, or when you change very large files. One of my coworkers uploaded a multiple gig tar file to one of our boxes and ran it completely out of RAM because Unison decided to try to open and read the whole drat thing.

That's strange. That's not how rsync works, and if a program built on rsync isn't doing block-by-block hashing, it's throwing away the main thing that makes rsync a great tool.

To hell with Unison, then.

EVIL Gibson
Mar 23, 2001


BobHoward posted:

How do you think rsync computes the hashes for each block of a file? It has to read the whole file in, and do the number crunching.



The way it was worded, I thought it was opening the file and pulling out metadata, like "this doc file was last opened by so-and-so", which makes even less sense if it is really doing block-by-block hashing and not one goddamn whole-file hash.

If it is transferring the entire file every time it changes, it is not using rsync, period. (You can see the difference with the flags below.)
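One caveat worth knowing: rsync only uses the block-level delta algorithm by default when talking over the network; for local or mounted paths you have to ask for it. A sketch:

code:

# force the delta algorithm even on local/mounted paths
rsync -av --inplace --no-whole-file /src/bigfile /dst/bigfile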

EVIL Gibson
Mar 23, 2001


BobHoward posted:

I think you might be a bit confused about how rsync works? (Everything I've already described)


I brought up rsync doing block-by-block hashing several times, and someone already said the original post could be read as the program doing something weird to the files.

EVIL Gibson
Mar 23, 2001


Paul MaudDib posted:

I was wondering the same thing. A proper shutdown should have everything flushed, is it necessary to explicitly export it or is a shutdown enough synchronization?

edit: maybe just a shutdown wouldn't flush ZIL/SLOG?

Export always flushes.

Also, it looks like it is possible to convert drives referenced by disk assignment (/dev/sdX) to drive IDs when the zpool is re-imported; see the sketch below.

Here is a thread, but make sure you read it all, because the asker messes up first but then figures out exactly what you need to do.

https://ubuntuforums.org/archive/index.php/t-2087726.html
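The short version, if I'm reading that thread right (the pool name is made up):

code:

# re-import the pool using stable /dev/disk/by-id names
zpool export tank
zpool import -d /dev/disk/by-id tank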

EVIL Gibson
Mar 23, 2001


Takes No Damage posted:

Not sure if there's a better thread to ask this:

Anyone have recommendations for data recovery services? My sister has an external HD that gives the click of death when she plugs it in, has some financial data and sentimental photos she really wants to recover.

The amount of money you're probably willing to spend will get you a company running the same stuff you could already do yourself, maybe freezing the drive and hoping the heads start reading before it thaws.

Now, if we're talking about taking it apart platter by platter and scanning each one in, we're easily into the thousands and thousands.

EVIL Gibson
Mar 23, 2001


kloa posted:

Never tried them, but Noctua makes 1U size fans :v:

Delta also makes 1U fans

:smugmrgw:

EVIL Gibson
Mar 23, 2001


Furism posted:

I'm trying to export my ZFS pool so I can import it into another OS later on. But FreeBSD is giving me poo poo:

code:
 remontoire: ~# zpool export pool1
cannot unmount '/mnt/pool1/revelation': Device busy
I stopped all services but SSH and it still says the device is busy. I can't umount it either. How can I figure out what's making the pool busy?

Try "zpool export -f pool1"

EVIL Gibson
Mar 23, 2001


Sniep posted:

Anyone know on Synology why now and again I'll go to the admin page UI and it will just be dog slow + constantly pop up empty boxes with just an "OK" button?

like this: [screenshot of the empty "OK" popups]

Joke answer: don't worry, it's just saying everything is OK and wanted you to know.

(Is there a process explorer or something similar so you can see what might be causing that?)

EVIL Gibson
Mar 23, 2001


bobfather posted:

Quick question for those in the know:

I have a 5 disk RAIDZ2 that I think I actually want to convert to a 6 disk RAID10.

I can’t build the RAID10 with all 6 disks because I need 1 of the disks to hold the data. Here’s my plan:

1. Backup the Z2 to one (or two) extra drives
2. In FreeNAS, kill my 5-drive pool and use 4 disks to make a RAID10
3. Move my data to the RAID10

But then here’s the crucial question:

4. Can I then add 2 more disks to my RAID10?

I want to be able to have a RAID10 composed of 3 vdevs with 2 disks in each. Pretty much all I’d have to do is add the disks as a third vdev, right?

What you can do is create the vdev with two real drives and one stand-in; ZFS won't let you create a vdev without all drives present at pool creation.

I was in the same situation. One solution is to create a fake drive that reports whatever size you need but is actually a sparse file: it starts at zero bytes and grows as it gets written to.

Get the two real disks lined up, create the fake HDD, and build the pool with all three. You'll see the fake drive start to fill up; offline it immediately so nothing more is written to it. The pool will report as degraded. Then, when you are ready, use the replace command to swap in your real third drive and wait until ZFS resilvers and marks the pool as good. (The whole dance is sketched below.)

All this time you will be one failure away from destruction, so backup, backup, backup.
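Roughly like this (a sketch; the pool name, size, and device paths are all made up):

code:

# sparse stand-in that claims to be 4 TB but takes no space up front
truncate -s 4T /tmp/fake.img

# build the vdev with two real disks plus the stand-in
zpool create tank raidz /dev/disk/by-id/diskA /dev/disk/by-id/diskB /tmp/fake.img

# offline the stand-in right away so nothing more gets written to it
zpool offline tank /tmp/fake.img

# later, resilver onto the real third disk
zpool replace tank /tmp/fake.img /dev/disk/by-id/diskC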

EVIL Gibson
Mar 23, 2001

I have to sync up some files, and I don't know where they might already live on the file server.

I used a tool that did this for me before, but I forget whether it was some odd rsync flag or what.

Imagine this scenario:

code:
On PC:
Pics/NOTPORN/2015/Hot_Porn001.jpg [md5 hash for 001]
Pics/NOTPORN/2015/Hot_Porn002.jpg [md5 hash for 002]
Pics/NOTPORN/2015/Hot_Porn004.jpg [md5 hash for 004]

On File Server (after getting hashes of all of /store/pics):
/store/pics/Trip_To_GrandCanyon/ACTUALLYPORN/Hot_Porn001.jpg [md5 hash for 001]
/store/pics/Trip_To_GrandTetons/JUSTMOREPORN/Hot_Porn002.jpg [md5 hash for 002]
So what should happen is: the comparer gets the hashes of all the files and tells me I don't need to move Porn001 and Porn002 over from the PC, because it found them somewhere under the directory I pointed it at (and all of its recursive subdirectories). It goes through everything, finds that I did not have Porn004 anywhere, and reports that no match was found. (A sketch of doing this by hand is below.)

After I move 004 over to a new directory for "Trip_To_Appalachians", I will delete everything from the source.
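If no ready-made tool turns up, hashing both sides and diffing gets there (a sketch; assumes no spaces in the file names):

code:

# on the file server: hash everything under /store/pics
find /store/pics -type f -print0 | xargs -0 md5sum | awk '{print $1}' | sort -u > server.md5

# on the PC: hash the local tree
find Pics -type f -print0 | xargs -0 md5sum | sort > pc.md5

# print PC files whose hash appears nowhere on the server
awk 'NR==FNR {seen[$1]; next} !($1 in seen) {print $2}' server.md5 pc.md5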

EVIL Gibson
Mar 23, 2001


eames posted:

Yes, I did this passthrough with Linux KVM and Plex. The encoding quality is visibly worse than software and not worth it because the processors can do much better at relatively low load. IMO Quicksync only makes sense for underpowered Celeron/Atom NAS boxes but I don’t know how those will fare when h.265 becomes standard.
My box came with Intel AMT and required no licenses to use.

I use hardware encoding for live transcoding down to a lower format over severely restricted bandwidth (my data plan).

Since Plex incorporated it into the current release, I have started seeing fewer dropouts and complaints about quality loss through the Plex mobile app.

EVIL Gibson
Mar 23, 2001


The Milkman posted:

So, I finally migrated off my rapidly decaying Corral install over the weekend. I was trying to hold off until 11.1 for the Docker support. But predictably, 11.1 is late, and probably won't even have it anyway if I'm reading the tea leaves right. Between plugins/a couple hand rolled jails I have my essential services running again.

Two questions:

Is there extra upkeep I need to do for each jail, or does it get updated with the rest of the system? I only used plugins back on 9.x so I never really learned much about actual jails, also aside from Emby none of it was exposed publicly.

Aside from media stuff, what else do y'all run on your servers? Probably gonna up a Certbot jail so I can HTTPS up my Emby (and NextCloud whenever I get around to setting that up), but I'm wondering if there's any other little services that would make life easier that I'm not thinking of.

If you haven't yet: schedule your ZFS scrub to run weekly (a cron sketch is at the end of this post).

I installed a service that connects to my Google Drive from the command line, so I can upload/process stuff on my server remotely via the Drive.

A script to download 1080p directly from services I subscribe to, via youtube-dl (not just YouTube; check out its supported-sites list).

Let me know how that Emby thing works out for you. I want to host it behind a reverse SSL proxy that only accepts connections presenting the matching certs on the port I've created.
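For the scrub scheduling mentioned above, a cron entry does it (the pool name "tank" is made up):

code:

# /etc/cron.d/zfs-scrub -- scrub every Sunday at 3am
0 3 * * 0  root  /sbin/zpool scrub tank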

EVIL Gibson fucked around with this message at 01:24 on Nov 22, 2017

EVIL Gibson
Mar 23, 2001


Photex posted:

I've been using unraid for a little over a year, it just works is the easiest way to sell it to people. There is a ton you can do with it though.

Now what happens when you explain to them that they also need a proper backup system?

"Don't I already have that now?"

EVIL Gibson
Mar 23, 2001

As someone who originally set up my file server to boot from USB: don't. Even though I understood I had to move all caching off the USB stick to somewhere else (in my case, a RAM drive), upgrades were a pain, because the finite write cycles of USB flash always worried me. I know SSDs have the same issue, but on a vastly different scale, and they have SMART to warn me shortly before one is going to die. USB SMART (or the equivalent I was looking at) is very limited in what it can pull.

EVIL Gibson
Mar 23, 2001


kloa posted:

Probably a silly question, but I have a RAID1 setup with a Synology DS212j and they are formatted as ext4.

The Synology was probably bitchin' fast back when I bought it in 2010, but is way too slow these days. Can I just put these 2 drives into a faster machine and not lose the RAID? I'm not using the Hybrid-RAID or whatever Synology wanted to default to, so I'm hoping I can just swap these to a faster machine with unRAID or something on it and not have to rebuild anything :ohdear:

Here's a guide on pulling the files off a Synology NAS drive in Ubuntu (or really any Linux, since they mostly ship the same tools, with a few exceptions):

https://forum.synology.com/enu/viewtopic.php?t=51393

RAID1 disks are just full-on mirrors with a bit of metadata; see the sketch below.
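The gist of the guide, as I understand it (a sketch; the md device number and which partition holds the data vary by model):

code:

# assemble the mirror's data partition as a plain Linux md device, then mount read-only
mdadm --assemble --run /dev/md0 /dev/sdb3
mount -o ro /dev/md0 /mnt/syno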

EVIL Gibson
Mar 23, 2001


Twerk from Home posted:

You can also just rip the disks straight to an MKV with MakeMKV, and then Plex (or any media player at all) can handle it just fine.

I encode to MP4 because some of the people who watch my stuff have Apple TVs and other Apple products, and it seems to be a better experience for them (live FF/RW instead of a black screen, for one example).

EVIL Gibson
Mar 23, 2001


Matt Zerella posted:

Sure! But quicksync doesn't limit you to two streams.

I am in the awkward position that my Xeon came with no built-in Quick Sync, so... GPU it is.

EVIL Gibson
Mar 23, 2001


Lowen SoDium posted:

I currently have a Ubuntu file server with 3x 3TB WD REDs in RAIDZ. Disk are just over 5 years old and the array is full. Looking to replace them all. Data will be copied from old array to new array.

Would you rather have more drives in a higher RAIDZ mode or fewer larger drives in a lower RAIDZ mode?

For example:

5x 6TB WD Reds in RAIDZ2 for 18TB of usable storage with 2 parity

or

3x 10TB WD Reds in RAIDZ for 20TB of usable storage with 1 parity

Cost for each option is close enough that it's not factor for me. The extra 2TB isn't a big concern for me either. I am more concerned about reliability and data loss issues.

You have to consider that rebuilding requires lots of reads and writes across all drives.

With 1 parity drive, you have to hope to hell and high water that no other drive goes offline or bad while rebuilding. That happens (seemingly) more often if your RAID is made of hard drives from the same batch.

With 2 parity drives, one more drive can go bad during the heavy reads/writes without stopping the rebuild.

EVIL Gibson
Mar 23, 2001


IOwnCalculus posted:

Counterpoint: ZFS does not poo poo the bed when this happens. I had this occur once during a rebuild and it marked a few files as corrupt, which let me easily restore them from backup.

Really, I'm just safety conscious: I build for the worst case possible, which always seems to happen, lol.


EVIL Gibson
Mar 23, 2001


Falcon2001 posted:

How often should I consider proactively replacing disks? My NAS runs 24x7 but doesn't see a lot of day to day use. I'm running 4x Western Digital WD40EFRX 4 TB WD Red, all purchased almost exactly 4 years ago.

To be honest, just keep a fifth drive running as a hot spare (if you can), and check SMART, or let ZFS tell you when a drive is running degraded (it notices lots of bad reads/writes and steadily stops using the drive). Two quick checks are sketched below.
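The two checks I'd put on a schedule (the device name is made up):

code:

# overall SMART verdict for one drive
smartctl -H /dev/sda

# report only pools that have problems
zpool status -x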
