|
Had something strange pop up on my home server that I haven't seen before. I noticed that webmin was showing two of my raid disks with errors. At first I thought it was hddtemp being strange, but I decided to check smartctl and see what it was seeing, and sure enough two disks show errors. There are no errors in the counters, rather ABRT errors in the log. What's also strange is that it's two disks in sequence, one very new, one nearly a year old. Here's the SMART output from both disks:code:
|
# ¿ Aug 28, 2014 19:40 |
|
Wonder why all the drives didn't log that error. Guess smartmon gave up after two drives. Go figure. I've gone ahead and replaced the cables to those drives just to be safe, based on my own research into the error. I'm not that up to date on the standards SMART uses, so I didn't think it could be a bogus command. Thanks for the info!
|
# ¿ Aug 28, 2014 21:30 |
|
BobHoward posted:By any chance are all the rest of your drives something other than WD30EFRX firmware version 80.00A80? My guess is that it's an issue specific to that drive model and/or firmware rev. Note that smartctl reported "Not in smartctl database" for both of them. The database is a list of known quirks, capabilities, parameter interpretation methods, and so forth. It's not unheard of for the generic fallback support to have a few minor issues (which is why the database exists). code:
code:
|
# ¿ Aug 29, 2014 12:21 |
|
great, i've got nine of the wd red 3TB drives between two servers. how about those low failure rates on the hitachis tho? did not expect that.
|
# ¿ Sep 25, 2014 13:14 |
|
Should we tell him or let him find out the hard way?
|
# ¿ Jun 21, 2015 21:23 |
|
ufarn posted:Bummer. There has to be some way of migrating. Yep. Copy your data to an external source like a backup hard drive and then migrate your disks over, then copy your data over to the new NAS. That's about the only way this is going to work.
|
# ¿ Jul 22, 2015 17:44 |
|
ufarn posted:Can you do this from the drives alone, or do they need to be connected to/in the NAS? If they're in raid0 they must both remain in the NAS at all times during the data copy. I don't know if you can hook the external up to the myworld or not, so you'll have to look into that. (Also be aware of what file system the myworld formats the external as for the data transfer; this could make the difference in whether you can hook the external directly to the synology, as I'm not sure what formats the synology can mount.) Alternatively, you could hook the external up to your PC and copy the data through the PC that way. Once you've copied everything over, verify everything is there, remove the disks from the myworld, place them in the synology, perform the initialization/format procedure in the synology, and then you could possibly hook the external up directly to the synology and copy the data to the new system. Speeds may vary but it should work. Anyone correct me if I've got this wrong.
|
# ¿ Jul 22, 2015 19:00 |
|
NihilCredo posted:I've had the realisation that, since I plan to leave my file server off most of the time and only WakeOnLAN it as necessary - meaning idle power consumption is not really an issue - it might not make much economic sense to build a separate machine from my desktop... hardware-wise, that is. You really don't want to run FreeNAS in a VM unless you're doing hardware passthrough of a drive controller or the drives themselves. FreeNAS (ZFS in particular) wants full control of the disks. Virtual disk performance is also awful on top of that, and there can be serious data errors (so I've heard; when I ran FreeNAS as a test in ESXi, the disk write performance was very poor).

If your dataset isn't changing much, i.e. a large static media collection, you could look into something like Snapraid. Just send the torrents to a scratch disk before they get loaded into the media collection. There's also Flexraid, but I don't know much about it and I believe there's a cost associated with it.

Have you considered a NAS appliance like a Synology or QNAP? I've used both and they're great devices. They're also very low power for the most part, and compact, so they don't take up much space. Using your desktop as a server has some drawbacks; I used to do it years ago and ran into all sorts of limitations and bugs (they've probably long been fixed by now), for instance running out of memory for file shares under Windows 7. That one required a registry hack to fix.

Also, if you want a local backup of your data, a NAS is handy (but remember to have an offsite solution too). Keeping your local backups on the same machine is asking for trouble: say the power supply goes tits up and takes all the hard drives with it, there go your backups. A NAS appliance like a Synology also comes with great tech support. Anyway, just some jumbled thoughts.
|
# ¿ Nov 2, 2015 15:02 |
|
Seagate 3TB NAS drives http://www.newegg.com/Product/Produ...-_-22178392-S0A $89.99 with code ESCEHHF22 through 11:59 PDT tonight.
|
# ¿ Mar 31, 2016 13:10 |
|
Correct. It will essentially create a mirror of the first drive. You're adding redundancy at that point, not capacity.
|
# ¿ Apr 5, 2016 13:01 |
|
uhhhhahhhhohahhh posted:drat, thought it only used part of the drive for parity, not the full one. Looks like I'll have to buy 2 more then, down to 300gb!! Personally this is how I do things. I just run openvpn on my server and port forward to the openvpn server. I do have some services forwarded through a reverse proxy but on random high ports and via SSL, and password protected. This is maybe two services tho. The VPN is nice because it allows for more access without the port forwarding. More access is nice.
|
# ¿ Apr 5, 2016 16:48 |
|
Furism posted:I have an older Synology 4 slots NAS that's been acting up recently. It takes about 20-30 presses on the power button to turn it on and I don't take this as a good sign. It's 4 or 5 years old so not under warranty anymore. I was thinking to replace it before something more important breaks (like the CF card or whatever they store the OS on - happened to me in the past, on a Synology as well). Qnap might not be so bad for what you're looking for. I've got one at work that I use for various purposes. It's similar to the Synology but doesn't cost as much. I've got a four bay unit. It does support SSH login and has an rsync client/daemon. I think I paid about $220 for it on sale. It does have an ARM processor in it but it really performs very well. I'm currently running a raid 6 array on it and there is no real latency and I get near wire speeds with it. This is a TS-431+. As always YMMV. But I think they're worth looking into.
|
# ¿ Apr 14, 2016 23:36 |
|
KinkyJohn posted:What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?
Red: Storage drives, equipped with TLER for RAID.
Green: Low power drives, parks heads aggressively, which limits lifespan.
Blue: Standard consumer drive. 1 or 2 year warranty.
Purple: Video recording drive. You wouldn't use this in a typical storage situation.
Black: High performance drive. 5 year warranty.
|
# ¿ Apr 19, 2016 14:32 |
|
I'm fairly certain that the 3TB Seagates were limited to a specific model/run. The ones made after the Thailand flooding if I'm remembering correctly. Those drives were absolute poo poo and failed at extremely high rates. This is responsible for the high failure rates seen on the Backblaze graphs. The data has simply been skewed by a bad production run. Remove those drives and we'd see a more realistic view of Seagate's current 3TB drives.
|
# ¿ Apr 20, 2016 18:59 |
|
EconOutlines posted:I'm going to be cross posting with the main PC thread a bit but I thought I'd get my ducks in a row here first. I'd like to re-purpose my soon-to-be 5 year old PC as a Plex NAS when I upgrade my main one. You're not really limited to commercial hardware (if by that you mean enterprise) on Linux. I run a CentOS 6 home built NAS with MDADM RAID6 on off the shelf hardware. For your situation where you have different sized disks, you're probably best off using a Linux distro with Snapraid. I've used Snapraid in my set up before and it's pretty good, handles everything from the command line or cronjobs. Basically set your largest disks as parity and put your data on the remaining disks. If you lose a disk you only lose the data on the one disk, and with Snapraid, you can get that data back by replacing the disk and running a command to rebuild the disk. It also only spins up the needed disks for your operations as opposed to a traditional RAID where all disks are spinning. It's also free unlike unRaid or other solutions. It also runs on Windows in the event you want to go with a Windows OS. It's what I'd recommend for this situation.
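To give a feel for the Snapraid setup described above, here's a minimal snapraid.conf sketch for a mixed-size layout; every path here is a placeholder, not anyone's actual disks:

```shell
# snapraid.conf sketch -- all paths are examples.
parity /mnt/parity1/snapraid.parity      # the largest disk holds parity
content /var/snapraid/snapraid.content   # content file on the OS disk
content /mnt/disk1/snapraid.content      # keep a second copy on a data disk
data d1 /mnt/disk1/                      # data disks can be mixed sizes,
data d2 /mnt/disk2/                      # as long as none exceeds parity
```

With that in place, `snapraid sync` writes parity and `snapraid status` reports the array state.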
|
# ¿ Apr 28, 2016 12:49 |
|
Skandranon posted:All of this is good, but for your setup, I'd suggest getting another 4tb drive so you can have 2 parity drives with Snapraid. After that, you can gradually swap out the remaining 3tb & 2tb drives as they die with 4tb ones as you feel makes sense. Good call on the secondary parity. It has been a while since I really had gotten into Snapraid, so its multiple parity disk capabilities were somewhat forgotten. Can have up to what...6 now?
|
# ¿ Apr 29, 2016 12:37 |
|
Guni posted:Hi all-knowledgeable NAS goons! I've asked a few questions in this thread before and got some great advice (which I ultimately have never used due to various reasons). But I have a new set of questions so I can confirm what I'm thinking. I'm about to go balls to the wall with my mini-ITX build and remove my 3.5" bay; so that won't be an option to consider.
1) The NAS will transfer data around your network at the speed of your internal network connections. So if you've got gigabit ethernet you can expect up to gigabit ethernet speeds. Your network speed is not limited by your internet speed; that only matters when you're transferring data over the internet.
2) Best to connect everything through a switch. If your modem/router has a built-in switch, make sure it's at least gigabit. You can transfer data over wifi, but it's best to have the NAS plugged in via wired ethernet.
3) RAID is NOT backup. RAID is for minimizing downtime. Anything you sync to your NAS is a committed change: if you screw up a document or picture and sync it to the NAS, that change is forever and you don't get your old data back. You should consider a backup solution such as an external hard drive or a cloud backup service. Personally I use a second NAS server, 2 external drives, and amazon cloud for backup. Total overkill for my linux ISOs, but I'd like to make sure I don't lose anything I've been collecting over the years. Yes, you can use your NAS to back up your PCs and other devices, but as far as it being the only target for your data goes, have a working, tested backup.
4) Synology and QNap both make excellent 2-bay units. I run a pair of the old Synology 212js at work and they're decent, but don't expect rocketing speeds out of them. These are also older models, so I don't know what the newer ones can do. I've also got a 4 bay Qnap that I picked up for about $220 on Woot that has dual gigabit ports and I can get great speeds out of it.
Currently running 4x3TB WD Red Pros in Raid6. That's 6TB of usable space with 2 disk redundancy. Again, RAID is NOT backup.
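Point 1 in rough numbers, for anyone wondering where the ceiling is:

```shell
# Gigabit ethernet moves 1000 megabits per second; 8 bits per byte:
echo $(( 1000 / 8 ))   # theoretical ceiling of 125 MByte/s
# After TCP/IP and SMB overhead, real transfers usually land around
# 100-115 MByte/s -- still far above almost any home internet link.
```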
|
# ¿ Jun 29, 2016 12:12 |
|
Cockmaster posted:Does this mean I'd be all right with a single-drive NAS plus an external hard drive to back up anything I couldn't easily replace (plus Google Drive for important documents)? If you're going with the DS216 model you've got room for two disks, so you're best off using Synology Hybrid RAID or RAID1 plus an external drive and cloud backup. This gives you the best possible protection against downtime and data loss. Remember, with hard drives it's not a matter of 'if' a drive will fail, it's a matter of 'when'. Just remember that any data that isn't backed up off site is vulnerable to loss from fire, flood, plague of locusts, etc. Make sure you have a regularly scheduled backup.
|
# ¿ Jul 4, 2016 14:49 |
|
Skandranon posted:I've been having issues with Crashplan as well. About at the 1.5tb uploaded part and it struggles to upload anything more. Haven't really looked further into it as I'm frustrated enough with it and would rather be doing other things. I was pretty excited about Amazon Drive until I saw it has a 2gb filesize cap. I'm using Amazon Cloud Drive and I've got files as large as 14GB stored there. I'm using rclone to get them up to the site tho, so I don't know where this 2GB limitation is coming from.
|
# ¿ Aug 24, 2016 10:41 |
|
Lichy posted:I have an old toshiba laptop that I want to try and use as a local area network drive through my router. I was thinking of installing Fedora server or something like that on it. The goal is for it to be useable as network drive for file transfer between different laptops in the house. It need not be connected to the Internet, however that might work. Any guides on how to set up something similar? You'd hook the laptop up to the network and set up Samba to share files to the Windows clients, and NFS to the Linux clients. Check out the documentation for Fedora for how to set something like this up.
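To make the Samba/NFS split concrete, here's roughly what the two config fragments look like; the share name, path, user, and subnet are all placeholders to adapt:

```shell
# /etc/samba/smb.conf fragment for the Windows laptops (placeholders):
[shared]
   path = /srv/share
   read only = no
   valid users = youruser

# /etc/exports line for the Linux laptops (subnet is a placeholder):
# /srv/share 192.168.1.0/24(rw,sync)
```

After editing, restart smb/nmb (and run `exportfs -ra` for NFS) so the shares pick up the changes.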
|
# ¿ Sep 9, 2016 16:19 |
|
Going to have to second DrDork here. The backblaze data is skewed heavily on Seagates due to the batch of bad 3TB drives that hit the market after the flooding in Thailand. Huge failure rate on a specific model of drives. I've been using some Seagate 5TB externals for about a year and they are rock solid, so I'd definitely say their build quality is fine. Currently experimenting with Toshiba desktop 4TB drives in a backup NAS that I built over the weekend. Since it's just a backup host I'm not too concerned about the system, but I'm going to be paying attention to the drive conditions and seeing how they perform overall. I dumped about 7TB of data on them over about 2.5 days averaging 800Mb/s so they do perform very well. These are desktop class drives. My go-to drives for NAS are usually WD Reds, but I'd definitely take a stab at the Seagate NAS drives in the future should I decide to upgrade or replace existing drives. No reason not to. Generally they are a few bucks cheaper than the Reds and that can add up when you're buying multiple drives for a project.
|
# ¿ Dec 1, 2016 18:43 |
|
Mr. Crow posted:I'm new to the NAS world and working on building my first system (technically still deciding what I want to do). Plan on setting up a server with ESXi and running multiple VMs, including a NAS. Been looking a lot at ZFS as this thread and most NAS blogs seem to have a hard-on for it; but it kind of seems like overkill for a home media server. I don't like the inflexibility and general requirements it has, at least from a home use scenario. So I thought I'd try out the mergerFS and snapraid set up in a VM on my main server. Obviously this isn't going to perform as well as it would on bare metal, but it does give a pretty good idea as to how it would function. I built a Debian 8.7.1 setup on a 20GB drive because I didn't plan to put much on the main system. Afterward I created six 100GB SCSI drives (KVM, Virtio) and added them to the VM. I partitioned them and formatted them as XFS. I mounted them as follows: code:
Next step was to install mergerfs and snapraid. Neither of these are in the Debian repos, so I had to grab the deb package for mergerfs for Jessie 64bit for my VM and download the snapraid 11.0 source code. I installed make and gcc via apt-get so I could compile and install snapraid, and used dpkg -i to install mergerfs. As for the mergerfs mount line, I placed it in /etc/rc.local because, for whatever reason (probably my misunderstanding of something), I could not get it to work in /etc/fstab, so I used this command. code:
code:
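For reference, a typical mergerfs pool mount for a six-disk layout like this looks roughly like the following; the paths and option set are illustrative, taken from common mergerfs examples of that era rather than this exact build:

```shell
# rc.local form: pool six XFS disks into /mnt/storage (paths assumed).
mergerfs -o defaults,allow_other,use_ino,category.create=epmfs \
    /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5:/mnt/disk6 \
    /mnt/storage

# Roughly equivalent /etc/fstab line:
# /mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino,category.create=epmfs  0 0
```

`category.create=epmfs` ("existing path, most free space") is what keeps the per-disk directory layout consistent as files are written.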
Next was to install Samba, so apt-get install samba. I'm not going to get really in depth here, as the provided configuration file contains plenty of examples for setting up a share. You'll want to set your share up to point to '/mnt/storage/folder', and make sure the user connecting has the right permissions on the folder. Setting them from the /mnt/storage directory will apply the permissions across all of the disks/folders.

Next is to load some data. Copying data from a Windows 10 Pro machine to this server, I could peak at 100MB/s or higher. One thing I did notice, which I felt was extremely strange, is that when I overwrote a file with the same data, the rate dropped to 3-5MB/s. Overall, the speed was quite acceptable for a virtual machine. I copied 100GB of data to the virtual server via rsync/scp and it hovered around 60MB/s between the host server and the VM. The source of the data and the target VM disk were on the same RAID6, so I think the overall speed there was acceptable.

Next step was to run snapraid sync to write out the parity data. Probably due to the source/target disks all being on one array, things ran a little slow. Also this VM only has 2 cores and 2GB of RAM allocated to it, so it isn't a powerhouse. I think it took about 30 minutes to sync up 100GB of data. Here's what it looks like now: code:
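The parity step above boils down to two snapraid commands; the scrub percentage here is a common choice, not something from this build:

```shell
# Write out parity for everything loaded so far:
snapraid sync

# Periodically re-verify a slice of the array against parity
# (10% per run is a common cadence for a nightly/weekly cron job):
snapraid scrub -p 10
```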
|
# ¿ Feb 18, 2017 23:32 |
|
Twerk from Home posted:How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I should not even consider ext4 on large volumes and look at xfs instead. https://www.unix-ninja.com/p/Formatting_Ext4_volumes_beyond_the_16TB_limit It can be done with the latest tools. It looks like it's a 32bit limitation of the tools that come with most distros and using a 64bit version of the tools seems to take care of the issue. Personally I just use XFS rather than loving with it.
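The fix from that link comes down to enabling 64-bit block addressing at format time; the device path below is an example:

```shell
# Older e2fsprogs default to 32-bit block addressing, which caps a
# filesystem at 16TB. Recent releases enable 64bit by default, but it
# can be forced explicitly:
mkfs.ext4 -O 64bit /dev/md0
```

Note this has to be set when the filesystem is created; converting an existing 32-bit ext4 volume in place is a separate, riskier operation.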
|
# ¿ Apr 24, 2017 15:40 |
|
jawbroken posted:1 parity drive is fine for drives that large.
|
# ¿ Jul 19, 2017 12:08 |
|
fletcher posted:Anybody else using restic? I'm looking for a linux CLI client to backup my NAS to B2. It looks pretty nice, and it has a lot of activity on github. I'm using rclone with B2 and it is really good. Not what you're asking about, but I figured I'd offer an alternative. I uploaded 8TB on my gigabit connection in about three days, with the speed manually throttled to 700Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent transfers and 64 checkers on my uploads, and set retention on the B2 side to a single version since I have versioned backups at home. Figured it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me.
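Roughly what that invocation looks like; the remote name, bucket, and source path are assumptions, and note that rclone's `--bwlimit` takes bytes, so 700 Mbit/s works out to 87.5 MByte/s:

```shell
# 32 parallel transfers, 64 checkers, throttled to ~700 Mbit/s
# (700 / 8 = 87.5 MByte/s). "b2remote" and the bucket are placeholders.
rclone sync /mnt/nas b2remote:my-bucket \
    --transfers 32 --checkers 64 --bwlimit 87.5M
```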
|
# ¿ Aug 29, 2017 14:08 |
|
Greatest Living Man posted:Is anyone here familiar with setting up OpenVPN on FreeNAS? I can now connect to my VPN from an outside computer, but I can't access any intranet sites (like 192.168.1.232, my freeNAS WebUI). I know it has something to do with routing but I'm not really sure where to go from here. You need a route to point your ovpn block back to the server, or else your router won't know what to do with the traffic. Should be as simple as adding a static route in your router. As far as pushing all your traffic out the default gateway, you need a specific statement in the server config to do this:
push "redirect-gateway def1"
should push traffic out the gateway. I've tested this extensively with clients in places like the UK, France, and China.
edit: you'll also need DNS in there as well.
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
-N Nulldevice fucked around with this message at 13:06 on Oct 9, 2017 |
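Pulled together, the server-side directives from this post form one server.conf fragment (the DNS servers are the ones named above; swap in your own resolvers if you prefer):

```shell
# OpenVPN server.conf fragment:
push "redirect-gateway def1"       # send all client traffic through the VPN
push "dhcp-option DNS 8.8.8.8"     # hand clients a resolver, otherwise
push "dhcp-option DNS 8.8.4.4"     # name lookups break once redirected
```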
# ¿ Oct 9, 2017 12:55 |
|
Greatest Living Man posted:I added a static route to my router with the host: 192.168.1.254 (my openvpn jail IP address) netmask: 255.255.255.0 gateway: 192.168.1.1 (router IP) metric: 2 and type: WAN. Is this the correct way of thinking about it or should I be creating a static route with the IP that my openVPN assigns? (10.8.0.6) You should have a route for 10.8.0.0/whatever pointing to the OpenVPN server. This will allow your server to talk to the other hosts on your network.
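On a Linux-based router, the route being described would look like this; 10.8.0.0/24 matches OpenVPN's default server subnet and 192.168.1.254 is the jail IP from the post:

```shell
# Send traffic destined for the VPN tunnel subnet to the OpenVPN host:
ip route add 10.8.0.0/24 via 192.168.1.254
```

Most consumer router UIs express the same thing as destination 10.8.0.0, netmask 255.255.255.0, gateway 192.168.1.254.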
|
# ¿ Oct 10, 2017 00:32 |
|
SlowBloke posted:Hi, I've just finished setting up my new NAS(a QNAP 1253bu) i wanted to ask in case there are any qnap users here: What if you run out of room in one of your media partitions for a particular type of media? Kinda leaves you hosed. Just stick with one large volume and use folders and permissions to manage everything. Less chance of poo poo going boom. (oh, and backups, always have backups. raid is not backup.)
|
# ¿ Oct 12, 2017 11:16 |
|
Incessant Excess posted:Couldn't you just re-format your disk and then copy the files back from the NAS? Well the cryptolocker will likely use your own saved network credentials to also gently caress up your backups. That's pretty common these days, so the use of a network share isn't all that safe.
|
# ¿ Jan 22, 2018 18:30 |
|
Farmer Crack-rear end posted:Set the network share you're backing up to to be non-writable from your normal credentials, and configure your backups to run under a separate set of credentials. What I was referring to is that it can access all of your saved network credentials. If you've saved any network credentials in Windows, it's highly likely a competent cryptolocker will be able to use them; everything is stored in the same place. You can see this in Credential Manager in the control panel, where locations and credentials are stored. The malware will simply mount the share using the stored credentials and wipe out the backups if possible. Things like File History would be easily wiped out.

The way I've gotten around this is a little different. I share out my directories to my server, and the server mounts each directory using automount, does an rsync diff of the home directory and anything else important (the directory is also read-only as a share), and keeps it on the server. All of my NAS shares are read-only with one exception, which is just scratch space/a drop-off location. All of my download work is done directly on the server, using CentOS as a base and various programs to handle the downloads (rtorrent/rutorrent/nzbget); then I log into the server and use custom scripts to manage downloaded content.

I have a second server that is used as a backup target, which has no shares. Everything is loaded via ssh/rsync nightly with 12 days worth of backups on auto rotation. I also have on-demand mounted external drives. The final backup is Backblaze B2/rclone for catastrophic failures. I put a lot of thought into 'what if', probably to the level of extreme paranoia. Even with all of this I make no assertion that I'm bulletproof; anything can happen. I think I'm pretty well protected as is, but I'm always looking for ways to improve the situation.
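The nightly pull described above can be sketched like this; the host names, paths, and rotation labels are placeholders for the real setup:

```shell
# Run from the backup server's cron. The NAS share is read-only to
# clients, so ransomware on a desktop can't reach these copies.
# --link-dest hard-links unchanged files against the previous night,
# so 12 rotated days stay cheap on disk.
rsync -a --delete \
    --link-dest=/backups/home.1 \
    backupuser@nas:/export/home/ /backups/home.0/
```

Rotating the numbered directories (home.11 dropped, home.10 to home.11, and so on) before each run gives the 12-day window.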
|
# ¿ Jan 23, 2018 12:32 |
|
Richard M Nixon posted:I'm sorry in advance for being lovely with research, but I'm having a hard time sorting through very out-of-date info. Snapraid is pretty easy to set up and get going, honestly. The configuration file is very well explained. You can sync up the parity as many times a day as you want; when I used it I did a sync nightly. The advantage of this is that if you delete a file by accident, as long as you didn't sync right afterward, you can undelete the file. Snapraid supports up to six parity disks (maybe more now, I haven't checked), so it could conceivably withstand a six disk failure. The only restriction is that the parity disks must be as big as, or larger than, your data disks.

I created a prototype server using Snapraid and MergerFS to create a disk pool with parity. To make the pool consistent I used the same directory structure on every data disk that was going into the Merger pool. Once combined, all the data appears in one location. You can set Merger to only fill a disk to a certain percentage before moving to the next disk in the pool. Performance seemed pretty decent even in a VM (I think bare metal would do even better), and functionality was good. I created a samba share pointing to the merged directories as needed and had no trouble. I did this using Debian 8. I believe all of the packages are in apt or can be added pretty easily.

As far as setup difficulty, I'd rate it as relatively easy to do if you're patient and pay attention to detail. It is also very easy to add disks to the system: just make sure they're smaller than or equal in size to the parity disks and add them to snapraid.conf and the mergerfs mount command. Here's some output from the system. Disk space first (small VM so disks are small, but it gets the point across); you'll notice that parity is as large as the largest disk space consumed by data: code:
code:
code:
code:
code:
code:
I found this build on the internet somewhere in a couple of places and decided to try it out. It's pretty solid.
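The accidental-delete recovery mentioned above maps to snapraid's fix command with a filter; the path here is just an example relative to the array root:

```shell
# Restore a file deleted since the last sync, rebuilt from parity
# plus the remaining data disks:
snapraid fix -f home/photos/lost.jpg
```

This only works for files that existed at the last sync, which is why a nightly (rather than continuous) sync gives you an undo window.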
|
# ¿ Mar 26, 2018 12:26 |
|
Harik posted:Why are people transcoding in 2018? I haven't had a media box in years that can't play anything i throw at it. Phone watching, tablet watching. There are all sorts of devices that people may want to use that require transcoding. Not every device can playback content as-is.
|
# ¿ Aug 30, 2018 16:21 |
|
eightysixed posted:What's a good/the recommended processor/mobo combo with the most SATA ports available? Nothing heavy duty, but I'm finally going to migrate to unRAID and ditch Xpenology. I have 4x4TB's brand new, never opened and my old 5x1TB from the Xpen box. Well, really, how many SATA ports do you ideally want? The pickings get slim the higher you go; the most common number of ports is 6. I looked at 10-port boards and there wasn't really anything that wasn't either all gamered out or over $500. As far as a processor goes, an i3 will be adequate for your needs unless you plan on doing a lot of virtual machines or containers. What I would probably do is go for an 8th gen i3, as it has four physical cores, and a decently priced 6 to 8 port board.
https://www.newegg.com/Product/Product.aspx?item=N82E16813144162 - $68 MSI motherboard with 6 SATA ports and a 16x PCIe slot if you want to drop in a SAS controller to add more drives. Supports 8th gen processors.
https://www.newegg.com/Product/Product.aspx?Item=N82E16819117822&cm_re=core_i3-_-19-117-822-_-Product - 4c/4t Core i3 8100 CPU (300 series, compatible with the motherboard)
https://www.newegg.com/Product/Product.aspx?Item=N82E16820148983 - 8GB DDR4 RAM
-or-
https://www.newegg.com/Product/Product.aspx?item=N82E16820148985 - 16GB DDR4 RAM
|
# ¿ Sep 3, 2018 15:45 |
|
Tamba posted:I have an old 160 GB SSD that I don't use anymore, and my FreeNAS server has an empty SATA port. I'll let the experts explain the whys but from my experience unless you're running an enterprise system there is almost no reason to run an L2ARC on a home system. It'll pretty much never see use. You have to be exhausting tons of ARC to get to that point. Basically it serves no benefit.
|
# ¿ Sep 28, 2018 18:22 |
|
Discussion Quorum posted:I am going to piggyback off the poster above asking about DIY. I think the only thing that would hold you back on the Q-Nap is the 8GB memory limitation. I haven't owned a Q-Nap in a long time so I don't know about B2 integration, but I'm guessing you looked into this. I would venture to say the Q-Nap may suit your needs, but read further for my opinions. I would weigh the cost of the Q-Nap hardware (diskless) against going DIY and see which gets you more horsepower. For DIY you could go with an 8th gen Celeron (2c/2t) or Pentium (2c/4t) and 16GB DDR4 RAM to handle the dockers. Get a board with sufficient SATA ports or grab an HBA from ebay for less than $50, and install the OS of your choice on a drive connected to the motherboard. For the living room I'd find a case that has a low noise profile. Anyway I hope my rambling helps a little.
|
# ¿ Apr 19, 2019 14:44 |
|
meinstein posted:I'm new to this and I'm looking to build something to run Open Media Vault - I think? It looks like that's the way I should be going. This looks like a good starter build for an OMV box. You won't need any hardware raid for it as you will be using mdadm for software raid. if you're going to run plex you might need a beefier cpu tho.
|
# ¿ Apr 29, 2019 15:15 |
|
Brain Issues posted:Anybody else have shucked 14TBs yet? In the Synology units parity is striped across all drives, there are no dedicated parity drives as there are in systems such as unraid or snapraid or a raid4 system. There may be another reason why the drives are showing activity. How long have the drives been in the system? Synology uses mdadm and btrfs(optionally) to build arrays, so if the drives are showing activity it may still be building the array. You could try using the console and checking /proc/mdstat (cat /proc/mdstat) and seeing if any of the arrays are still being built.
|
# ¿ Apr 22, 2020 13:31 |
|
Brain Issues posted:I put the two 14tb drives in about 2 months ago, and converted from SHR-1 to SHR-2. The conversion took 3 weeks to finish building the array. It's just the underlying technology. You're seeing it right there in the output of the mdstat. Each array is showing either raid6 or raid1, not raid4 (dedicated parity). Synology doesn't give you the option to use a dedicated parity drive. Which disks are serving up/downloading the torrents?
|
# ¿ Apr 22, 2020 13:51 |
|
Brain Issues posted:How can I tell? They're all part of 1 volume. Hmmm, iostat, if it's available in the shell, could tell you which array is servicing the program for your torrents. Unfortunately my Synology is at my folks' place 550 miles away so I can't experiment. I haven't heard anything about the 14TB drives being SMR at this time, so I wouldn't worry about it just yet. I know my 1019+ with 5x12TB is always doing something (I can hear the disk activity), but it's low level stuff, similar to or lower than your 14TB drives. I just don't worry about it. When I retrieve the unit I'm going to see what the activity is.
|
# ¿ Apr 22, 2020 14:39 |
|
Sneeze Party posted:I have two Seagate 8GB Ironwolf drives in my 2-bay Synology NAS. I think they're SMR. Does that mean if one of them fails, I won't be able to rebuild the array? Ironwolf drives are all CMR. efb
|
# ¿ Apr 26, 2020 17:29 |