|
priznat posted:Nice. No gotchas with recognizing the drives on a new system? As long as they are on a sata controller supported by the OS they should be fine? As long as Unraid sees the same GUIDs, it'll be just fine. I'd recommend turning off array auto-start before shutting the build down to change hardware, just so you can confirm it recognized everything properly before starting the array back up
|
# ¿ Aug 31, 2019 01:30 |
|
Atomizer posted:That's a typical problem. They revised the SATA power delivery in the specs, so newer drives have that 3.3 V line remapped like Kitty mentioned to use as a power/reset switch (for remote power cycling of drives in data centers). This means older PSUs still drive the 3.3 V line, which holds the drive in the "off" state. The drives and PSUs are both technically "fine"; they're just incompatible because they're built to different versions of the SATA spec. The dumbdumbs who made that modification to SATA used pull-high rather than pull-low for remote reboot, figuring "well, it's only designed for enterprise drives, so it shouldn't be an issue." Since part of the economies of scale of the shucks is that they use more enterprise-targeted drives, you end up with some drives constantly rebooting. Also, if you're going to just tape over the 3.3v pins, use kapton tape, not electrical tape or masking tape or scotch tape. Raymond T. Racing fucked around with this message at 00:07 on Sep 3, 2019
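To make the pull-high gotcha concrete, here's a toy model of the interaction (my sketch, not anything from a spec document; the 2.1 V assert threshold is illustrative):

```python
# Hypothetical sketch of the SATA 3.3 "Power Disable" (PWDIS) interaction.
# On a PWDIS drive, pin 3 is no longer a 3.3 V supply input: holding it
# high asserts power-disable, and a high-to-low transition acts as a reset.
LEGACY_PSU_PIN3 = 3.3   # older PSUs hold pin 3 at 3.3 V permanently
PWDIS_PSU_PIN3 = 0.0    # spec-compliant supplies leave it low

def drive_spins_up(pin3_volts: float, drive_has_pwdis: bool) -> bool:
    """A PWDIS drive stays held in power-disable while pin 3 is high."""
    if drive_has_pwdis and pin3_volts >= 2.1:  # threshold is illustrative
        return False
    return True

# A shucked white-label drive (PWDIS) on an old PSU never spins up:
assert not drive_spins_up(LEGACY_PSU_PIN3, drive_has_pwdis=True)
# The same drive on a newer PSU, or with pin 3 taped over, is fine:
assert drive_spins_up(PWDIS_PSU_PIN3, drive_has_pwdis=True)
# A pre-PWDIS drive doesn't care either way:
assert drive_spins_up(LEGACY_PSU_PIN3, drive_has_pwdis=False)
```

Had they gone pull-low instead, the "legacy PSU holds the line high" case would have been harmless, which is the whole complaint.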
# ¿ Sep 2, 2019 04:33 |
|
H110Hawk posted:Just about to buy one and realized I should read the manual and make sure it will actually shut down my Synology. Glad I did because gently caress this noise: According to the manual (and confirmed by my never having heard it happen), the CyberPower BRG1500AVRLCD will only beep if there's a problem, and it doesn't consider low battery a problem. Does that fit the bill?
|
# ¿ Sep 6, 2019 02:25 |
|
THF13 posted:I built the "anniversary" build following a guide on serverbuilds.net and have been extremely happy with it. The motherboard for that one isn't available anymore but they have other builds worth a look. As for case recommendations: the Rosewill hotswap is a massive waste of money, and in my opinion the 15-bay non-hotswap Rosewill is a much better use of less money.
|
# ¿ Oct 21, 2019 00:26 |
|
sockpuppetclock posted:restic seems like it's for making encrypted backup repos & snapshots, which is useful, but I need the data slightly more accessible. so to confirm, cygwin rsync copies to qnap, then you use SMB to get it back from the qnap? if so you're confusing the hell out of file ownership because of that
|
# ¿ Oct 23, 2019 03:27 |
|
Paul MaudDib posted:do I remember properly that there is some gotcha about serving NFS and Samba of the same files at the same time? possibly with or without ZFS, I don't recall Cygwin's user account interactions with Windows are awful: https://cygwin.com/cygwin-ug-net/ntsec.html
|
# ¿ Oct 23, 2019 04:02 |
|
sockpuppetclock posted:I just used rsync and scp yeah the permissions probably got super mangled because cygwin kept the SID as written, and windows didn't parse it back in as an SID but instead treated it literally
|
# ¿ Oct 23, 2019 04:29 |
|
Unoriginality posted:I'm going to be building myself a new NAS in the near future, since 12tb drives are apparently stupidly cheap right now. I haven't started properly shopping cases/boards/etc yet, but my suspicion is that I'm going to want to put 8-12 drives in it. Anyone have a variety of case they're particularly fond of for such things? Ease of working on it being the main concern. I like the Rosewill 4U rackmount style cases, the RSV-L4500 can fit 15 drives in it plus pretty much any motherboard you could reasonably obtain fits in it.
|
# ¿ Nov 9, 2019 03:59 |
|
Heners_UK posted:Can it push to array on high cache usage? I thought it could only be scheduled. Write-through only works if your minimum free space is set properly per share. If minimum free space is left at 0 KB (the default for a new share), write-through won't work, and Unraid will blindly try to fit files into whatever sliver of space is left on the cache.
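A toy model of the decision being described (illustrative only, not Unraid's actual code):

```python
# Why "minimum free space = 0" breaks cache overflow: the share only
# falls through to the array when cache free space drops BELOW the
# configured minimum, which can never happen with a minimum of zero.
def write_target(cache_free_bytes: int, min_free_bytes: int) -> str:
    if cache_free_bytes < min_free_bytes:
        return "array"   # overflow: bypass the cache for this write
    return "cache"       # otherwise the write still aims at the cache

# With min free = 0, a nearly full cache still "qualifies":
assert write_target(cache_free_bytes=1024, min_free_bytes=0) == "cache"
# Set min free to, say, ~2x your largest file and overflow works:
assert write_target(cache_free_bytes=1024, min_free_bytes=20 * 2**30) == "array"
```

The usual advice is to set minimum free space to a bit more than the largest file you expect to write to that share.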
|
# ¿ Dec 2, 2019 01:55 |
|
Having network shares show in the "Network" pane requires SMBv1 to be enabled. Mapping shares as a network drive or navigating directly to \\hostname will work without enabling SMBv1 on Windows. 6.8 will use the WS-Discovery protocol to have the server show back up in the Network pane. No clue why everyone decided to interpret that as "unraid only uses SMBv1"
|
# ¿ Dec 9, 2019 19:38 |
|
Henrik Zetterberg posted:I can't get this to work on my son's Win10 computer for the loving life of me, but it works just fine on mine. Both are updated fully and I don't think it's firewall bullshit. Does the network location load but refuse to authenticate, or does it not load at all? If it won't authenticate, make sure no shares are mapped, search for "Credential Manager", delete everything related to your server's authentication, then log out/in and try again
|
# ¿ Dec 9, 2019 21:18 |
|
The common thread seems to be that Windows doesn't entirely like unauthenticated access to shares. Does your son have a user account in Unraid to access the shares with?
|
# ¿ Dec 9, 2019 23:14 |
|
also, even though it sounds silly, make sure there are no credentials in Credential Manager. The auth flow for SMB in Windows is dumb as hell, and I think a bad username/password, or an already-instantiated connection with a different username/password, causes the new connection to freak out before it even loads shares
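A quick way to do that cleanup from a cmd prompt instead of clicking through the GUI (a sketch; "MYNAS" is a placeholder for your server's hostname):

```shell
REM "MYNAS" is a placeholder hostname. Run from a regular cmd prompt.
net use * /delete /y            & REM tear down any existing SMB sessions
cmdkey /list                    & REM see which credentials are stored
cmdkey /delete:MYNAS            & REM remove the stale entry for the server
```

Then log out/in and reconnect so Windows builds the session fresh with the credentials you actually want.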
|
# ¿ Dec 9, 2019 23:16 |
|
wolrah posted:My local Samba server shows up just fine in my Network pane on Windows 10 with SMB1 disabled entirely at both ends (Samba is actually set to use only the Win7 and later variant of SMB2 because there will never again be a Vista machine on my LAN), so this is definitely not true. According to Samba as long as nmbd is set up properly it should browse normally. it's WS-Discovery, not WD. I can't speak to the specifics of your setup, but is it possible that service is running on your Samba server? Raymond T. Racing fucked around with this message at 23:05 on Dec 10, 2019
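For anyone wanting the same behavior on a vanilla Samba box: Samba itself doesn't ship a WS-Discovery responder, so the usual approach (a sketch assuming a Debian-style distro; the package and unit names may differ on yours) is to run the separate wsdd daemon alongside it:

```shell
# Assumes a Debian-ish system; the wsdd package/unit name may vary by distro.
sudo apt install wsdd            # WS-Discovery responder for Samba hosts
sudo systemctl enable --now wsdd
# Samba can then stay SMB2+-only in smb.conf and still appear in the
# Windows Network pane:
#   [global]
#   server min protocol = SMB2_10
```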
# ¿ Dec 10, 2019 23:00 |
|
Scotch tape works okay, but generally you'd want to use kapton tape for taping pins. Quality Molex-to-SATA adapters also work, as does making your own custom PSU cables (check them with a multimeter before plugging in hard drives)
|
# ¿ Dec 13, 2019 18:59 |
|
CopperHound posted:I don't know how it compares to kapton tape, but I have a bunch of this stuff from taping up bike wheels, and a small sliver of it works great for masking off the 3.3v pins. It also doesn't leave behind residue. it looks like kapton tape is pretty much comparable to this stuff
|
# ¿ Dec 13, 2019 21:39 |
|
Henrik Zetterberg posted:Was there a guide to shucking earlier in the thread? Is that what all the taping connectors chat was about? 14TB is tempting. tl;didn't write one: buy an Easystore or MyBook, rip it open (if you live in the US they have to honor the warranty even if you take the drive out of the shell), then connect it. Most PSUs aren't built to the newest SATA power spec. The SATA forum decided to add a super neat feature for drives in servers: if the 3.3v pin is held high (that pin was completely unused before this change), the drive does a full reboot, exactly as if you'd unplugged and replugged the cables, assuming nothing physically broke. For whatever reason, WD uses server-specced white-label Reds in these enclosures, which listen on the 3.3v pins for power being applied. Problem is, PSUs that predate that SATA spec revision always send 3.3v over those pins, so the drives are always power cycling. Taping works well enough, but generally the easier fix is to remove 3.3v from the cables somehow. A safe (i cannot emphasize this enough) Molex to SATA power adapter gets rid of 3.3v, since Molex doesn't have a 3.3v rail. You can also use extension cables designed for expanding a single SATA power connector into 4 and just rip out the 3.3v wire, which also solves the problem. https://www.youtube.com/watch?v=b6VCQ64DkfM
|
# ¿ Dec 15, 2019 04:26 |
|
THF13 posted:You can build a NAS cheaper than a similarly performing Synology. This guide gives a lot of options. https://forums.serverbuilds.net/t/guide-nas-killer-4-0-fast-quiet-power-efficient-and-flexible-starting-at-125/667 As someone who's become a bit disillusioned with these builds (I'm switching my Anniversary 1 out for an Anniversary 2 due to USB issues): the builds themselves are great, but you don't really need hardware transcoding when you have stupid amounts of threads
|
# ¿ Dec 21, 2019 20:43 |
|
DrDork posted:To answer your question on USB-A to USB-C adapters, yeah, they're just dumb little wire converters. However, they don't all have the pins to support the highest data speeds. The one you linked notes a max of 480Mbps (which means it's probably really USB 2.0 internally and meant more to charge lovely cell phones with than anything else), so I'd skip that and go for something that explicitly supports 10Gbps to ensure you can get the most out of your system. So while those C-female-to-A-male adapters do technically exist, I'd strongly advise against purchasing any. That specific adapter configuration is considered a SHALL NOT by the USB forum, and while they're dumb when it comes to naming generations, they have very good reasons for calling this one a SHALL NOT. USB-A ports have all been designed with the assumption that they're the host port, not the slave, and that power could never flow toward the host from either another host or a power source. A C-female-to-A-male adapter removes that physical lockout: it lets you plug a host into a USB-A host (which at best will just not work), and, more dangerously, lets you plug a crappy USB-C power adapter into the USB-A port of a computer, a port never designed to handle inbound power. Since that was previously a physical impossibility, overcurrent/overvoltage protection on the inbound path is minimal.
|
# ¿ Jan 3, 2020 19:36 |
|
DrDork posted:While you are correct on this, there's also zero reason power should be flowing up the line from the USB split chip that's on the dock/expander in the first place. But, yeah, don't use it to plug a power brick / charger in with. Really more than anything, it breaks the USB-A host/slave assumption, and if used improperly could cause bad things. They're a thing that doesn't add any positives for me, and just adds scare factor.
|
# ¿ Jan 3, 2020 22:35 |
|
fatman1683 posted:It's been awhile since I've bought hardware on ebay, I'm not sure I'd want to roll those dice necessarily, but one of the refurb places might have some v3 CPUs with a warranty of sorts. I'll look into it, thanks. Other than issues endemic to a SKU across the entire fleet, I've had zero problems whatsoever with any ebay hardware. I've purchased an LGA1366 NAS setup, an LGA2011 setup because I was chasing the hotness, and then a different LGA2011 board because of the aforementioned endemic issues with, I believe, every single motherboard of that SKU. It's a dice roll, yes, but I'd bet it goes bad far less often than you think.
|
# ¿ Jan 8, 2020 03:34 |
|
THF13 posted:Shilling for serverbuilds.net based builds for like the 5th time in this thread, they have a few builds that use a 4u Rosewill RSV-L4500 with a few modifications to make it actually quiet. The problem with Anniversary 2 (as someone who was a former SB shill but ended up getting banned for speaking out against JDM's prickish behavior) is that all of the reasonably priced Supermicro 2011 boards available now are narrow ILM, so there are pretty much no desktop coolers that fit. Anni 2 is basically only Supermicro boards, so you can either get a server-specced narrow ILM air cooler that's noisy as all hell, get a much more expensive narrow ILM air cooler running a Noctua fan, or just say gently caress it and get a couple of Asetek AIOs and the narrow ILM bracket they sell separately edit: to clarify the above position, JDM makes a decent build, but he's kind of a prick in literally every other situation, plus there's the massive conflict of interest in creating a guide and then using ebay affiliate links, which i wasn't a fan of Raymond T. Racing fucked around with this message at 06:24 on Jan 29, 2020
# ¿ Jan 29, 2020 05:53 |
|
Elem7 posted:I'm setting up a new file server to replace an older one I've had for around 7 years that's showing signs I shouldn't be trusting it anymore. Okay, so this is my relatively uninformed answer: bit rot is a non-issue on any file system made in the last 10 years for consumer workloads. I've never once heard anyone panic about bitrot on desktop platforms, but throw out the mere idea that your stuff could in theory magically flip bits and suddenly all logical thinking goes out the window, and people start chasing bitrot-proof filesystems and just making their lives harder.
|
# ¿ Jan 30, 2020 01:01 |
|
Elem7 posted:As far as I know it's only a non-issue in ZFS, BTRFS, REFS and whatever Apple's latest and greatest FS is that I can't remember off the top of my head. Am I really super concerned that it's going to make a huge difference in what are mostly media files? No, but if talking about a 30+ TB file system that may be around for 10 years it's certainly the case that I'll suffer from it and it's not impossible in data more sensitive than a video file where it just results in a single frame artifact. The recommended array filesystem in Unraid is XFS. Your hard drives already do ECC internally on every sector. You're significantly more likely to have a memory issue cause bad data to be written than to have a cosmic ray flip a bit on the hard drive.
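To put rough numbers on that: vendor datasheets typically quote an unrecoverable read error (URE) rate of less than 1 per 10^14 bits read for desktop drives and less than 1 per 10^15 for enterprise drives. A back-of-envelope sketch (these are spec ceilings, not measured bitrot rates, and they say nothing about RAM-induced corruption):

```python
# Worst-case expected UREs for a full sequential read of an array,
# using the datasheet rate as an upper bound.
def expected_ures(terabytes_read: float, ure_rate_per_bit: float) -> float:
    bits = terabytes_read * 1e12 * 8
    return bits * ure_rate_per_bit

# Reading a full 30 TB array once, at the worst-case desktop spec:
assert round(expected_ures(30, 1e-14), 2) == 2.4
# Same read on enterprise-class drives:
assert round(expected_ures(30, 1e-15), 2) == 0.24
```

Even at the worst-case spec that's a couple of sector-read failures per full-array read, which the drive reports as read errors rather than silently returning flipped bits.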
|
# ¿ Jan 30, 2020 01:47 |
|
Smashing Link posted:Does anyone have pihole running on an Unraid system? Docker vs. a VM? Spaceinvaderone's video (https://www.youtube.com/watch?v=2VnQxxn00jU&t=144s) is from 2018 and some comments refer to unraid not being able to route DNS through an IP within the Unraid system. Has anyone gotten this working? From a reliability standpoint, it's a pretty awful idea to have a host use a container as its upstream DNS source. There are way too many ways for things to break and force you to set Unraid's DNS back to a real external resolver just to get yourself back up and running.
|
# ¿ Feb 10, 2020 03:08 |
|
Constellation I posted:No issue whatsoever. Pretty much plug and play. Basically set your DNS at the router level pointed to the docker as primary, then set the secondary to a proper public DNS like Google's as a failover. To clarify: doing it this way means not all of your DNS queries will go to your pihole. There's no way to set DNS server priority in DHCP, so clients will just pick either one for any given query.
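If you do want every query to land on the pihole, the alternative (a sketch assuming a dnsmasq-based router; the address is a placeholder) is to advertise only the pihole over DHCP and configure the public failover as upstreams inside the pihole itself:

```shell
# /etc/dnsmasq.conf on the router -- 192.168.1.10 is a placeholder pihole IP.
# Advertise ONLY the pihole; put Google/Cloudflare in the pihole's own
# upstream list instead of handing them to clients as a "secondary".
dhcp-option=option:dns-server,192.168.1.10
```

The tradeoff is that if the pihole goes down, clients lose DNS entirely until you fix it or re-point DHCP.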
|
# ¿ Feb 10, 2020 17:16 |
|
Thermopyle posted:Whats your concern here? It sounds like you're concerned about the extra layer the container inserts into the process instead of running DNS directly on the source. What I mean is that if you're using the Pihole container as the DNS resolver for Unraid itself, you end up in dependency hell: updating the Pihole container requires shutting down DNS resolution, but pulling the update requires DNS resolution. To me it just seems like an overcomplication, and it makes more sense to either run Pihole on Unraid but not have Unraid use it for DNS resolution, or run Pihole on a separate physical device and let Unraid use that as its resolver.
|
# ¿ Feb 10, 2020 21:10 |
|
That Works posted:Oh my god this literally happened this week to me. tvdb has a stick up their bum with episode numbering, so things that look for episodes always get it wrong. basically tvdb considers the opening of the season to be two separate episodes even though they're aired back to back and, according to TV guides, are a combo episode "chapter xx, xx+1", so it always breaks
|
# ¿ Feb 26, 2020 19:01 |
|
IMO for home stuff, while FreeNAS/TrueNAS is technically better with ZFS, there's a lot to be said for Unraid's JBOD mechanic.
|
# ¿ Mar 19, 2020 01:00 |
|
Woah nelly, I'm seeing a lot of misinformation on what Unraid's cache does. The cache is for writing only, never for reads (unless you're reading something that landed on the cache and the mover hasn't yet shifted it to the spinning rust in the array). Once something is moved from the cache to spinning rust, it realistically never goes back onto the cache (there's no smart file-access tiering or anything like that). IMO you're never going to have a situation where flash fails completely and suddenly without warning, so I run my cache in RAID0, but I also have a higher risk tolerance than others.
|
# ¿ Mar 20, 2020 22:47 |
|
TraderStav posted:As I do not have enough free space initially to transfer all my data at once, is there any downside to having one drive in the array, filling it up, adding the drive that I just copied from to the array (now I have 2 drives in the array) and then moving to the next drive of my source data? Probably will have 3-4 total drives at the end of it. But wasn't sure how it balanced the data over time if a lot was initially loaded onto one drive. Will it 'spread' the data over to the other drives as they're added over time? you're fine. As long as you keep the default high-water allocation, it'll try to spread things out as much as possible. With that in mind, though, spreading only happens during writes, so stuff written to drive 1 won't get unbunched onto drives 2-n (unless you use a plugin to scatter a share)
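My understanding of the high-water allocator, as a hedged sketch (not the real implementation): the water mark starts at half the largest disk's size and halves whenever every disk has been filled down to it, and each write goes to the first disk still above the mark.

```python
# Toy model of "high water" allocation; returns the index of the disk
# that would receive the next write.
def pick_disk(free_bytes: list[int], largest_disk_bytes: int) -> int:
    mark = largest_disk_bytes // 2
    while mark > 0:
        for i, free in enumerate(free_bytes):
            if free > mark:
                return i          # fill this disk down toward the mark
        mark //= 2                # everyone is below: lower the water line
    return max(range(len(free_bytes)), key=lambda i: free_bytes[i])

# Two fresh 10 TB disks: disk 0 takes writes until it's half full,
# then disk 1 gets its turn.
TB = 10**12
assert pick_disk([10 * TB, 10 * TB], 10 * TB) == 0
assert pick_disk([5 * TB, 10 * TB], 10 * TB) == 1
```

This is why the pattern the poster describes (fill disk 1, add disk 2, keep copying) works out fine: new writes naturally shift to the emptier disks.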
|
# ¿ Mar 21, 2020 19:11 |
|
TraderStav posted:I'm off and running! Set up my shares, sub directory settings, disabled the cache, and started copying. Boy I wish I had 10gb in my LAN, my A-M folder for Linux Distros is going to take 7 hours alone. How often do you actually navigate the raw file structure? IMO it's a waste of time to do that level of micromanagement
|
# ¿ Mar 21, 2020 20:04 |
|
TraderStav posted:What I mean was that my Plex library was separated into the halves of the alphabet, not just the files themselves. Navigating that with a ton of files can be onerous. Just seems like a lot of added effort for nothing really gained. Search or the alphabet selector on the right
|
# ¿ Mar 21, 2020 20:50 |
|
Besides, allegedly you can send them back bare and they'll send a replacement easystore. The only real sticking point is if you send back one in the wrong shell, as then they think you're pulling a fast one over on them and it turns into a nightmare. Either send them back in the matched enclosure, or send them back bare.
|
# ¿ Apr 5, 2020 02:15 |
|
Toshimo posted:I guess my question then is: Is the MTBF on the cheap drives noticeably less? They're exactly the same as WD Reds minus some differences in SATA implementation (3.3v pin needs to be removed/covered), a different label, and not having the Red warranty length.
|
# ¿ Apr 5, 2020 02:52 |
|
Toshimo posted:I have a dozen or more computer touching projects that I would get more personal satisfaction from than dicking around with a file server on the regular. I don't have a monitor/keyboard/mouse connected to my 15-bay Rosewill build (once I get it reconnected in my new living arrangements, thanks corona); server-grade hardware has IPMI. I've used a monitor/kb/m exactly once on this setup, and that was just to bootstrap IPMI initially.
|
# ¿ Apr 5, 2020 05:58 |
|
Crunchy Black posted:It still shocks me that people are willing to pay 1k for a Synology and be neutered from the start when you can get the 12 bay Rosewill. Although sadly it looks like Rosewill may have given up on the storage cases; I don't see either the R4000 or the L4500 for sale on Newegg anymore, which is a shame because they were stupidly cost-efficient for storage density.
|
# ¿ Apr 5, 2020 06:02 |
|
Crunchy Black posted:Yeah maybe it's a personal thing but I'm tired of not having IPMI on stuff and I will never put anything new in the rack that doesn't have it, hence my reticence about Synology stuff. Which reminds me, I need to get a new coin cell battery; the one in my Unraid build is dead, so it keeps forgetting BIOS settings on power-up. It's also got two liquid AIOs, though, so hopefully neither is covering the CMOS battery
|
# ¿ Apr 5, 2020 06:12 |
|
Granite Octopus posted:I'm in need of some replacement drives for my 4-bay Synology. I originally bought some NAS-specific drives for it. Money is tight right now, and external hard drives are fully half the price of the cheapest NAS-specific drives for the same capacity. shuck is the way to go
|
# ¿ Apr 14, 2020 02:59 |
|
IOwnCalculus posted:Shuck away. I was only paranoid about them until they stopped allowing "but you opened it" as a reason to void warranties. Which leads to the very important note: either keep all your shells (and note which one went with which drive), or don't keep any. WD apparently gets touchy but will accept a bare shucked drive; if you screw up and put a drive back in the wrong shell, though, it turns into a massive back-and-forth to get them to RMA it, since they'll claim you're trying to defraud them.
|
# ¿ Apr 14, 2020 06:18 |