Raymond T. Racing
Jun 11, 2019

priznat posted:

Nice. No gotchas with recognizing the drives on a new system? As long as they are on a sata controller supported by the OS they should be fine?

As long as Unraid sees the same GUIDs, it'll be just fine. I'd recommend turning off array auto-start before you shut down for the hardware change, just so you can confirm it recognized everything properly before starting the array back up.

Raymond T. Racing
Jun 11, 2019

Atomizer posted:

That's a typical problem. They revised the SATA power delivery in the specs, so newer drives have that 3.3 V line remapped like Kitty mentioned to use as a power/reset switch (for remote power cycling of drives in data centers.) This means older PSUs still deliver power over the 3.3 V line, which keeps the drive in the "off" state. The drives and PSUs are both technically "fine," they're just incompatible because they're built to different versions of the SATA spec.

FYI 3.5" drives use 12 V for the main spindle motor and 5 V for the electronics and everything else, while 2.5" drives just use 5 V. That's why you can power 2.5" drives in an external enclosure off a single USB connection (which has provided 5 V since the original version) but 3.5" drives need an external power supply for 12 V. 3.3 V isn't used for drives nowadays, it's mainly for things like RAM, some external flash cards (remember dual-voltage flash media?) etc.

The dumbdumbs who made that modification to SATA used pull-high rather than pull-low for remote reboot, and figured "well, it's only designed for enterprise drives, so it shouldn't be an issue". Since part of the economies of scale of the shucks is that they use more enterprise-targeted drives, you end up with the issue of some drives being held off and never spinning up.

Also if you're going to just tape over the 3.3v pins, use kapton tape, not electrical tape or masking tape or scotch tape.

Raymond T. Racing fucked around with this message at 00:07 on Sep 3, 2019

Raymond T. Racing
Jun 11, 2019

H110Hawk posted:

Just about to buy one and realized I should read the manual and make sure it will actually shutdown my Synology. Glad I did because gently caress this noise:
13. MUTE: This icon appears whenever the UPS is in silent mode. The alarm does not beep during silent mode until the battery reaches low capacity.

The search continues for a UPS which will never beep ever.

According to the manual (and confirmed by me never having heard it happen), the CyberPower BRG1500AVRLCD will only beep if there's a problem, and it doesn't consider low battery a problem. Does that fit the bill?

Raymond T. Racing
Jun 11, 2019

THF13 posted:

I built the "anniversary" build following a guide on serverbuilds.net and have been extremely happy with it. The motherboard for that one isn't available anymore but they have other builds worth a look.
  • NAS Killer 4.0: Variety of options for a NAS, ranging from 2-10 Plex streams and 6-15 HDD capacity, $175-$600.
  • Lego: Larger dual-CPU build with lots of expandability options; considered an "in progress" build, so it doesn't have as much info/options.
  • Hardware transcoding: Not a build exactly, but the idea is to use a cheap small box just for Plex/Emby with hardware transcoding, separate from your NAS.
  • DAS: Attach 15 bays to an existing PC/server.
anniversary bro

As for case recommendations: the Rosewill hotswap is a massive waste of money, and I'm of the opinion that the 15-bay non-hotswap Rosewill is a much better use of less money.

Raymond T. Racing
Jun 11, 2019

sockpuppetclock posted:

restic seems like it's for making encrypted backup repos & snapshots, which is useful, but I need the data slightly more accessible.

I really dunno what I'm doing but I looked into rsync on windows via cygwin to the qnap's rsync daemon.
The problem is the data I tested it with just kept spewing this error for every file like so:
code:
rsync: set_acl: sys_acl_set_file(txt, ACL_TYPE_ACCESS): Operation not supported (95)
rsync: set_acl: sys_acl_set_file(txt, ACL_TYPE_DEFAULT): Operation not supported (95)
rsync: set_acl: sys_acl_set_file(txt/.1.txt.2DxWMg, ACL_TYPE_ACCESS): Operation not supported (95)
rsync: set_acl: sys_acl_set_file(txt/.2.txt.cs0Ljy, ACL_TYPE_ACCESS): Operation not supported (95)
And when I grab the test data back from the qnap the file permissions are basically a mess.

If someone understands the issue I would appreciate a solution. Otherwise I'll just settle for backing up with restic...

so to confirm, cygwin rsync copies to qnap, then you use SMB to get it back from the qnap?

if so you're confusing the hell out of file ownership because of that

Raymond T. Racing
Jun 11, 2019

Paul MaudDib posted:

do I remember properly that there is some gotcha about serving NFS and Samba of the same files at the same time? possibly with or without ZFS, I don't recall

edit: possibly locking, if the same file is accessed at the same time

cygwin user account interactions with windows are awful

https://cygwin.com/cygwin-ug-net/ntsec.html

Raymond T. Racing
Jun 11, 2019

sockpuppetclock posted:

I just used rsync and scp
code:
rsync -avhPHAx --password-file=/rpass /cygdrive/c/testinput/ username@192.168.0.16::backup/qtest/
scp -r admin@192.168.0.16:/share/backup/qtest/ /cygdrive/c/testoutput/

yeah, the permissions probably got super mangled because cygwin kept the SID as written, and Windows didn't parse it back in as a SID but instead treated it literally
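For what it's worth, here's a sketch of the kind of invocation I'd try to sidestep the ACL errors: drop the -A flag so rsync stops trying to push Windows-derived ACLs at a daemon that doesn't support them, and let the QNAP apply its own defaults (paths and module name copied from your post; tweak to taste):
code:
# same transfer minus ACL preservation; --no-perms/--no-owner/--no-group
# override the -p/-o/-g implied by -a, so the QNAP applies its own defaults
rsync -avhPH --no-perms --no-owner --no-group \
    --password-file=/rpass \
    /cygdrive/c/testinput/ username@192.168.0.16::backup/qtest/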

Raymond T. Racing
Jun 11, 2019

Unoriginality posted:

I'm going to be building myself a new NAS in the near future, since 12tb drives are apparently stupidly cheap right now. I haven't started properly shopping cases/boards/etc yet, but my suspicion is that I'm going to want to put 8-12 drives in it. Anyone have a variety of case they're particularly fond of for such things? Ease of working on it being the main concern.

I like the Rosewill 4U rackmount-style cases; the RSV-L4500 fits 15 drives plus pretty much any motherboard you could reasonably obtain.

Raymond T. Racing
Jun 11, 2019

Heners_UK posted:

Can it push to array on high cache usage? I thought it could only be scheduled.

Write-through only works if your minimum free space is set properly per share. If the minimum free space is set to 0 KB (as it is by default for a new share), write-through won't work, and Unraid will blindly try to fit files into whatever sliver of space is left on the cache.

Raymond T. Racing
Jun 11, 2019

Having network shares show up in the "Network" pane requires SMBv1 to be enabled. Mapping shares as a network drive or navigating directly to \\hostname works without enabling SMBv1 on Windows.

6.8 will use the Western Digital (iirc) protocol to have the server show back up in the Network pane.


No clue why everyone decided to interpret it as "unraid only uses SMBv1"
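For reference, a hedged example of the mapped-drive route from a command prompt — "tower" and "media" here are placeholder server/share names:
code:
:: map the share as Z: without needing SMBv1 discovery; * prompts for the password
net use Z: \\tower\media * /user:tower\yourusername /persistent:yes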

Raymond T. Racing
Jun 11, 2019

Henrik Zetterberg posted:

I can't get this to work on my son's Win10 computer for the loving life of me, but it works just fine on mine. Both are updated fully and I don't think it's firewall bullshit.

Does the network location load but just fail to authenticate, or does it not even load?

If it won't authenticate, try making sure no shares are mapped, search for "credential manager", delete everything related to your server's authentication, then log out/in and try again.
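If it helps, that same cleanup can be done from a command prompt — a rough sketch, assuming the server shows up in the list under a name like "tower":
code:
:: see what's stored; look for entries mentioning your server
cmdkey /list
:: delete the stale entry (use the exact target name from the list output)
cmdkey /delete:tower
:: also tear down any half-open SMB connections before retrying
net use * /delete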

Raymond T. Racing
Jun 11, 2019

The common thread seems to be that Windows doesn't entirely like unauthenticated access to shares. Does your son have a user account in Unraid to access shares?

Raymond T. Racing
Jun 11, 2019

also even though it sounds silly, make sure there's no credentials in credential manager

the auth flow for SMB in Windows is dumb as hell, and I think a bad username/password or an already-instantiated connection with a different username/password causes the new connection to freak out before it even loads shares

Raymond T. Racing
Jun 11, 2019

wolrah posted:

My local Samba server shows up just fine in my Network pane on Windows 10 with SMB1 disabled entirely at both ends (Samba is actually set to use only the Win7 and later variant of SMB2 because there will never again be a Vista machine on my LAN), so this is definitely not true. According to Samba as long as nmbd is set up properly it should browse normally.

I was going off Matt Zerella's post at the end of page 552 and the responses from other users like That Works, who also had to enable SMB1 on their Windows machines to access their Unraid machines.

It definitely is; now I'm almost considering installing Unraid in a VM myself just to verify one way or another.

If it supports SMB3 but still allows connections from SMB1 that's not the most secure configuration in the world but it's a reasonable default for a commercial product where compatibility without configuration is desirable to some users.

If it requires that clients have SMB1 enabled to access the current stable version, something is horribly wrong with their priorities and it'd make me wonder what else they have that badly wrong.
IIRC there's a WD discovery protocol that also populates the network pane, which Unraid didn't have until 6.8.

It's WS-Discovery, not WD.

I can't speak to the specifics of your setup, but it's possible that service is running on your samba server?
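One way to check, assuming a systemd-based distro with the common wsdd package installed (the service name varies by distro):
code:
# is a WS-Discovery responder running alongside Samba?
systemctl status wsdd
# or just look for the process
ps aux | grep -i wsdd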

Raymond T. Racing fucked around with this message at 23:05 on Dec 10, 2019

Raymond T. Racing
Jun 11, 2019

Scotch tape works okay, but generally you'd want to be using kapton tape for taping pins.

Using quality Molex-to-SATA adapters also works, as does making your own custom PSU cables (check with a multimeter before plugging in hard drives).

Raymond T. Racing
Jun 11, 2019

CopperHound posted:

I don't know how it compares to kapton tape, but I have a bunch of this stuff from taping up bike wheels, and a small sliver of it works great for masking off the 3.3v pins. It also doesn't leave behind residue.

https://www.amazon.com/72-Yds-Coating-Masking-Temperature/dp/B00CKGIBYE

it looks like kapton tape is pretty much comparable to this stuff

Raymond T. Racing
Jun 11, 2019

Henrik Zetterberg posted:

Was there a guide to shucking earlier in the thread? Is that what all the taping connectors chat was about? 14TB is tempting.

tl;didn't write one

buy an easystore or mybook, rip it open (if you live in the US they have to honor the warranty even if you take the drive out of the shell), then connect it. The catch: most PSUs aren't built to the latest server SATA specification. The SATA forum decided to add a super neat feature for SATA drives in servers: if the 3.3v pin is held high (a pin that was completely unused before they decided to do this), the drive does a full power cycle, exactly as if you'd unplugged and replugged the cables, assuming nothing physically broke. For whatever reason, WD uses the server-specced white label Reds in these enclosures, which listen on the 3.3v pin for power being applied. Problem is, any PSU that isn't up to date on that SATA spec is always sending 3.3v over that pin, so the drives are held off and never spin up.

Taping works well enough, but generally the easier way is to remove 3.3v from the cables somehow. Using a safe (I cannot emphasize this enough) Molex-to-SATA power adapter gets rid of 3.3v, since Molex doesn't have a 3.3v rail. You can also use extension cables designed for splitting a single SATA power connector into 4 and just rip out the 3.3v wire.

https://www.youtube.com/watch?v=b6VCQ64DkfM

Raymond T. Racing
Jun 11, 2019

THF13 posted:

You can build a NAS cheaper than a similarly performing Synology. This guide gives a lot of options. https://forums.serverbuilds.net/t/guide-nas-killer-4-0-fast-quiet-power-efficient-and-flexible-starting-at-125/667

The benefits of doing this are mostly that you can give yourself extra hard drive bays to allow for future expansion. If you need more Plex streams, the same site has a good guide on offloading Plex to a cheap ~$100 prebuilt system with a modern version of Intel Quick Sync. https://forums.serverbuilds.net/t/guide-hardware-transcoding-the-jdm-way-quicksync-and-nvenc/1408

As someone who's become a bit disillusioned with these builds (I'm switching my anniversary 1 out for an anniversary 2 due to USB issues): while the builds themselves are great, you don't really need hardware transcoding when you have stupid amounts of threads.

Raymond T. Racing
Jun 11, 2019

DrDork posted:

To answer your question on USB-A to USB-C adapters, yeah, they're just dumb little wire converters. However, they don't all have the pins to support the highest data speeds. The one you linked notes a max of 480Mbps (which means it's probably really USB 2.0 internally and meant more to charge lovely cell phones with than anything else), so I'd skip that and go for something that explicitly supports 10Gbps to ensure you can get the most out of your system.

For USB 3.0 vs USB 3.1, it's unlikely to make much of a difference in this regard. USB 3.0 should support 5Gbps, or ~625MBps, which is more than 5x HDDs are likely to be able to saturate (figure they might be able to do ~120MBps each max, so you're looking at ~600MBps tops). USB 3.1 might up that to 10Gbps, but it doesn't say if it's implementing the 10Gb or the 5Gb option, so I'd suspect it's probably the same 5Gbps just using the USB-C connector. The little blurb on Amazon claims it's "up to 20% faster" but who knows how that'd actually play out, since if it were USB 3.1 gen 2 you'd expect them to be crowing about it and claiming much higher top speeds. :iiam:

As for how data gets transferred, unfortunately external dock/bays like that are dumb bunnies and are basically a USB switching chip connected to a bunch of SATA connectors. What that means is that they're not smart enough to keep data traffic internal to the dock: data going from dock drive A to dock drive B will take a trip from drive A down the cable to your PC and then back up the cable to drive B. This effectively means you'll get half the transfer speed you'd get going from PC drive X to dock drive B or whatnot, since you're moving the data across the wire twice.

So while those C-female to A-male adapters do technically exist, I'd strongly advise against purchasing any. That specific adapter configuration is considered a SHALL NOT by the USB forum, and while they're dumb when it comes to naming generations, they have very good reasons for calling that one a SHALL NOT. USB-A ports have all been designed with the assumption that they're the host port, not the slave, so power could never flow toward the host from either another host or a power source. A C-female to A-male adapter removes that physical lockout: it lets you plug a host into a USB-A host (which at best will just not work), and more dangerously lets you plug a crappy USB-C power adapter into the USB-A port of a computer. That port was never designed to handle power flowing into it, and since doing so used to be a physical impossibility, over-current/voltage protection on the inbound path is minimal.

Raymond T. Racing
Jun 11, 2019

DrDork posted:

While you are correct on this, there's also zero reason power should be flowing up the line from the USB split chip that's on the dock/expander in the first place. But, yeah, don't use it to plug a power brick / charger in with.

Really more than anything, it breaks the USB-A host/slave assumption, and if used improperly could cause bad things.

They're a thing that doesn't add any positives for me, and just adds scare factor.

Raymond T. Racing
Jun 11, 2019

fatman1683 posted:

It's been awhile since I've bought hardware on ebay, I'm not sure I'd want to roll those dice necessarily, but one of the refurb places might have some v3 CPUs with a warranty of sorts. I'll look into it, thanks.

e:


Yeah, noted. At the time I wanted to get whatever was current in the hopes of keeping it for as long as possible, and prices on Haswell hadn't started to drop yet. It's worked well enough for my purposes so far, but I definitely need to revisit it for this next round of upgrades.

Other than issues endemic to the hardware across the entire fleet (I've purchased an LGA1366 NAS setup, an LGA2011 setup because I was chasing the hotness, and then a different LGA2011 board because of those endemic issues across, I believe, every single motherboard of that SKU), I've had zero issues whatsoever with any eBay hardware.

It's a dice roll, yes, but I'd bet it goes bad far less often than you think.

Raymond T. Racing
Jun 11, 2019

THF13 posted:

Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.
-Take out the front fans entirely
-Reverse the interior fan wall, replace the fans with quieter ones
-Replace the back 80mm fans
-Use desktop-style CPU coolers instead of typical low-profile server heatsinks
-Don't run the fans at full speed

I haven't tried this firsthand, but it's supposedly extremely quiet if not silent. You can't transplant a Dell server mobo into it; their anniversary build 2 guide has various (mostly Supermicro) boards that should all work. https://forums.serverbuilds.net/t/guide-anniversary-2-0-snafu-server-needs-a-friggin-upgrade/1075

The problem with anniversary 2 (as someone who was a former SB shill but ended up getting banned for speaking out against JDM's prickly behavior) is that all of the reasonably priced Supermicro 2011 boards available now are narrow ILM, so there are pretty much no desktop coolers that fit. Anni2 is basically only Supermicro boards, so you can either get a server-specced narrow ILM air cooler that's noisy as all hell, get a much more expensive narrow ILM air cooler running a Noctua fan, or just say gently caress it and get a couple of Asetek AIOs and the narrow ILM bracket they sell separately.

edit: to clarify the above position: JDM makes a decent build, but he's kind of a prick in literally every other situation, plus I wasn't a fan of the massive conflict of interest in creating a guide and then using eBay affiliate links.

Raymond T. Racing fucked around with this message at 06:24 on Jan 29, 2020

Raymond T. Racing
Jun 11, 2019

Elem7 posted:

I'm setting up a new file server to replace an older one I've had for around 7 years that's showing signs I shouldn't be trusting it anymore.

I'm really looking for something that I can just place in a corner and not worry about for the most part, not a constant project, so I was leaning towards just using UNRAID but I'm afraid it can't meet both of the 2 criteria I was hoping to meet with this new server.

1. Two-disk failure redundancy, while retaining the ability for individual data drives to be readable when moved to another PC (in case some event renders the system inoperable but the HDDs are intact)
2. Automatic detection and remediation of bit rot

My thinking was UNRAID with BTRFS on the array would meet the above criteria but I'm getting mixed messages online about whether or not BTRFS is in a state at this point where it can be trusted to remain stable over a period of years.

Any other option that provides both? I'm not really concerned with array performance beyond native speed of a single HDD for this system. ZFS of course handles number 2 but as far as I was aware not number 1.

Okay, so this is my relatively uninformed answer:

bit rot is a non-issue on any filesystem made in the last 10 years for consumer workloads. I've never once heard anyone panic about bit rot on desktop platforms, but throw out the mere idea that your stuff could in theory magically flip bits and suddenly all logical thinking goes out the window: people start chasing bitrot-proof filesystems and just make their lives harder.

Raymond T. Racing
Jun 11, 2019

Elem7 posted:

As far as I know it's only a non-issue in ZFS, BTRFS, ReFS, and whatever Apple's latest and greatest FS is that I can't remember off the top of my head. Am I really super concerned that it's going to make a huge difference in what are mostly media files? No, but if we're talking about a 30+ TB filesystem that may be around for 10 years, it's certain I'll suffer from it eventually, and it could hit data more sensitive than a video file, where a flipped bit just results in a single-frame artifact.

If there wasn't any good way around it I wouldn't worry about it but it seems silly not to mitigate it if I can without a lot of effort.

The recommended array filesystem in Unraid is XFS. Your hard drives already have ECC. You're significantly more likely to have a memory issue cause bad data to be written than to have a cosmic ray flip a bit on the hard drive.

Raymond T. Racing
Jun 11, 2019

Smashing Link posted:

Does anyone have pihole running on an Unraid system? Docker vs. a VM? Spaceinvaderone's video (https://www.youtube.com/watch?v=2VnQxxn00jU&t=144s) is from 2018 and some comments refer to unraid not being able to route DNS through an IP within the Unraid system. Has anyone gotten this working?

From a reliability standpoint, it's a pretty awful idea to have a host use one of its own containers as its upstream DNS source. There are way too many ways for things to break and force you to set Unraid's DNS back to a real external resolver just to get yourself back up and running.

Raymond T. Racing
Jun 11, 2019

Constellation I posted:

No issue whatsoever. Pretty much plug and play. Basically set your DNS at the router level pointed to the docker as primary, then set the secondary to a proper public DNS like Google's as a failover.

To clarify: doing it this way means not all of your DNS queries will go to your Pihole. There's no way to set DNS server priority in DHCP, so clients just pick whichever one they like for a given query.

Raymond T. Racing
Jun 11, 2019

Thermopyle posted:

What's your concern here? It sounds like you're concerned about the extra layer the container inserts into the process instead of running DNS directly on the source.

If that's the case, it's far from "pretty awful" as containers are very reliable. I mean, yeah it's an extra layer that may not be needed but there are literally dozens of layers of abstractions between here and there anyway. Let's not overstate the risks.

What I mean is that if you're using the Pihole container as the DNS resolver for Unraid itself, you end up in dependency hell: updating the Pihole container requires shutting down DNS resolution, but pulling the update requires working DNS resolution.

To me it just seems like an overcomplication, and it makes more sense to either run Pihole on Unraid but not have Unraid itself use it for DNS, or run Pihole on a separate physical device and let Unraid use that as its resolver.
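As a rough sketch of that "run it, but don't depend on it" setup — pihole/pihole is the official image, but the ports, timezone, and appdata path here are purely illustrative:
code:
# Pihole answering DNS for the LAN...
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
  -e TZ=America/New_York \
  -v /mnt/user/appdata/pihole:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole
# ...while the host itself keeps pointing at a real external resolver,
# so pulling a container update never depends on the container being up
grep nameserver /etc/resolv.conf   # should show e.g. 1.1.1.1, not 127.0.0.1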

Raymond T. Racing
Jun 11, 2019

That Works posted:

Oh my god this literally happened this week to me.

Still not sure wtf is up with that season of TGP

tvdb has a stick up their bum about episode numbering, so things that look up episodes always get it wrong. Basically, tvdb considers the opening of the season to be two separate episodes, even though they aired back to back and, according to TV guides, are one combo episode ("chapter xx, xx+1"), so it always breaks.

Raymond T. Racing
Jun 11, 2019

IMO for home stuff, while FreeNAS/TrueNAS is technically better with ZFS, there's a lot to be said for Unraid's JBOD mechanic.

Raymond T. Racing
Jun 11, 2019

Woah nelly, I'm seeing a lot of misinformation about what Unraid's cache does.

The cache is for writes only, never reads (unless you're reading something that's still sitting on the cache because the mover hasn't yet run and shuffled it off to the spinning rust). Once something has been moved from the cache to the spinning rust, it realistically never goes back onto the cache (there's no smart access-based file moving or anything like that).

IMO you're never going to have a situation where flash fails completely and suddenly without warning, so I run my cache in RAID 0, but I also have a higher risk tolerance than others.

Raymond T. Racing
Jun 11, 2019

TraderStav posted:

As I do not have enough free space initially to transfer all my data at once, is there any downside to having one drive in the array, filling it up, adding the drive that I just copied from to the array (now I have 2 drives in the array) and then moving to the next drive of my source data? Probably will have 3-4 total drives at the end of it. But wasn't sure how it balanced the data over time if a lot was initially loaded onto one drive. Will it 'spread' the data over to the other drives as they're added over time?

You're fine. As long as you keep the default high-water allocation method, it'll try to spread things out as much as possible (roughly: with 8 TB disks it fills drive 1 until 4 TB is free, then drive 2 until 4 TB is free, then the water mark halves and it goes around again).

With that in mind, however, spreading only happens during writes, so stuff already written to drive 1 won't get unbunched onto drives 2-n (unless you use a plugin to scatter a share).

Raymond T. Racing
Jun 11, 2019

TraderStav posted:

I'm off and running! Set up my shares, sub directory settings, disabled the cache, and started copying. Boy I wish I had 10gb in my LAN, my A-M folder for Linux Distros is going to take 7 hours alone.

I had previously separated my Plex libraries by A-M and N-Z to make them easier to navigate due to volume. Do you guys do something similar or throw them all in same folder and have another way to not be overwhelmed in the list?

I'm at the critical opportunity to redesign file structures so taking it!

How often do you actually navigate the raw file structure?

IMO it's a waste of time to do that level of micromanagement

Raymond T. Racing
Jun 11, 2019

TraderStav posted:

What I mean was that my Plex library was separated into the halves of the alphabet, not just the files themselves. Navigating that with a ton of files can be onerous.

Just seems like a lot of added effort for nothing really gained. Use search or the alphabet selector on the right.

Raymond T. Racing
Jun 11, 2019

Besides, allegedly you can send them back bare and they'll send a replacement easystore. The only real sticking point is if you send one back in the wrong shell, because then they think you're pulling a fast one on them and it turns into a nightmare. Either send them back in the matched enclosure, or send them back bare.

Raymond T. Racing
Jun 11, 2019

Toshimo posted:

I guess my question then is: Is the MTBF on the cheap drives noticeably less?

It's worth a small premium to me not to have to RMA/rebuild every 6 months.

They're exactly the same as WD Reds minus some differences in SATA implementation (the 3.3v pin needs to be removed/covered), a different label, and a shorter warranty than the Reds get.

Raymond T. Racing
Jun 11, 2019

Toshimo posted:

I have a dozen or more computer touching projects that I would get more personal satisfaction from than dicking around with a file server on the regular.

Also, the amount of additional moving parts going from an appliance to a full tower means that I can't just stick it somewhere in a corner with enough airflow; I've got to make sure that I can pop a monitor/keyboard/mouse on it to troubleshoot if it becomes unresponsive.

This is a premium I am willing to pay at this point, yes.

I don't have a monitor/keyboard/mouse connected to my 15-bay Rosewill build (once I get it reconnected in my new living arrangements, thanks corona); server-grade hardware has IPMI. I've used a monitor/kb/m exactly once on this setup, and that was just for initially bootstrapping the IPMI.

Raymond T. Racing
Jun 11, 2019

Crunchy Black posted:

It still shocks me that people are willing to pay 1k for a Synology and be neutered from the start when you can get the 12 bay Rosewill.

Are y'all seriously so space constrained in your goon caves that an extra (1.75*3") of height is killing you? idgi

Although sadly it looks like Rosewill may have given up on the storage cases. I don't see either the R4000 or the L4500 for sale on Newegg anymore, which is a shame because they were stupidly cost-efficient for storage density.

Raymond T. Racing
Jun 11, 2019

Crunchy Black posted:

Yeah maybe its a personal thing but I'm tired of not having IPMI on stuff and I will never put anything new in the rack that doesn't have it, hence my reticence about Synology stuff.

To throw more fuel on the fire, FreeNAS just works if you've installed and configured it...once :D

Which reminds me, I need to get a new coin cell battery; the one in my Unraid build is dead, so it keeps forgetting BIOS settings on power-up. It's also got two liquid AIOs, though, so hopefully they're not covering the CMOS battery.

Raymond T. Racing
Jun 11, 2019

Granite Octopus posted:

I'm in need of some replacement drives for my 4-bay Synology. I originally bought some NAS-specific drives for it. Money is tight right now, and external hard drives are fully half the price of the cheapest NAS-specific drives for the same capacity.

Is it a terrible idea to shuck drives these days? While the data is important to me, it is RAIDed, plus I have a local backup in the form of another external disk and a remote backup via Backblaze, so even if a shucked drive lasts half as long I don't really mind. Speed isn't necessarily important either, since it's mostly streaming large movie/TV files.

shuck is the way to go

Raymond T. Racing
Jun 11, 2019

IOwnCalculus posted:

Shuck away. I was only paranoid about them until they stopped allowing "but you opened it" as a reason to void warranties.

Which is the very important note:

Either keep all your shells (and note which one went with which drive), or don't bother keeping any. WD apparently gets touchy but will accept a bare shucked drive; if you screw up and put a drive back in the wrong shell, though, it turns into a massive back-and-forth to get them to RMA it, because they'll claim you're trying to defraud them.
