|
MMD3 posted:if it's really as easy as taking a screwdriver/dremel to the case then I guess I'm good w/ that. I think my most recent shuck took me 5 minutes because I was on a work zoom call and didn't have access to any tools. I was stuck doing it with a few credit cards like a schmuck instead of my usual flathead screwdriver and the sound of plastic tabs snapping.
|
# ¿ Sep 18, 2020 18:59 |
|
Krakkles posted:Two interesting (well, to me ... lol, sorry!) developments in my drive replacement in an HP N40L running FreeNAS: I would kick it back to them. I wouldn't risk the mystery given the bad sensor reading.
|
# ¿ Oct 5, 2020 00:13 |
|
I have things cleverly named "Nas" and "nucaduck" and "ducks" (desktop) and "colorshredder" (color laser printer). Everything else is whatever came out of the box.
|
# ¿ Oct 8, 2020 16:51 |
|
TraderStav posted:Uhhh... is Crashplan too good to be true? I just signed up for the 30-day trial and it's unlimited data for $10/mo. I just downloaded a docker on my UnRaid and pointed at the account and am currently uploading shares to it. Has anyone here actually backed up their whole array, including ISOs? At some point do they come and bother you about using up too much of the 'unlimited' space? Or is there some other drawback like what happens if you go get the data out of them? I last used it when they had a consumer play and the drawbacks were it was a monstrous java application with super slow upload speeds. Have they fixed at least the second half of that?
|
# ¿ Oct 15, 2020 22:23 |
|
DrDork posted:You can get moderately ok sorta speeds if you increase the threading on it (accessible in the Java app). It's still not great, though. I have symmetric 1Gb FIOS and was still struggling to top ~150Mbps up. So, "no". I use Backblaze and it's the same hoops for a network drive: I use B2 for my NAS and the desktop client for the family machines I support remotely.
|
# ¿ Oct 15, 2020 23:05 |
|
TraderStav posted:Is there a backblaze plan that's somewhat competitive to the Crashplan? I've seen the $5/TB/Month plan. If I wait a few days/weeks I can get 10-20TB onto CP (oh god not that acronym). Their desktop client is flat rate; I have 5 licenses for that. B2 is pay per byte. I pay around $15/month for B2 but really need to prune out some stuff. I also don't back up my Linux ISOs.
|
# ¿ Oct 16, 2020 00:32 |
|
SwissArmyDruid posted:Does anyone know of a NAS that might only use like, a pair or trio of 2.5" drives? Trying to fit everything into a 4-inch-wide wallbox is harder than I thought it would be. Spinners or SSDs? If they're SSDs just command-strip those suckers in place. HDDs you need to be wary of the breather hole, and make sure there's a little more airflow.
|
# ¿ Oct 26, 2020 18:30 |
|
mobby_6kl posted:Shiiiiiit. That's less than what a 4tb drive costs here. You shuck 'em, aka crack open the hard exterior for the juicy disk interior. Then you throw away everything that isn't the hard drive.
|
# ¿ Oct 29, 2020 19:50 |
|
Gyshall posted:I just got an empty Synology 8 bay NAS and I'm curious what the best drive is to get for these. I'm not really constrained by budget... My current NAS is about 8tb total and I'd like to increase that capacity significantly. Scroll up and get your shuck game on. Toss in a cheap ssd as a read cache. Unless you have absurd performance needs 5400 rpm shucked disks are going to be fine.
|
# ¿ Nov 1, 2020 18:12 |
|
KOTEX GOD OF BLOOD posted:I have an external HDD I'd like to slap into a new Synology unit. It's an HFS drive and I'm wondering if there's a way to put it in the Synology without reformatting it and having to backup/restore the data on it. Maybe I could convert the partition to btrfs? You're going to have to reformat it at some point. This article suggests you can just mount it, copy it, then erase it. https://www.ryanbosinger.com/blog/2019/04/14/synology-nas-external-usb-drive-formatted-hfs-read-and-write.html
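A rough sketch of that mount-copy-wipe route, assuming SSH access to the Synology and that the hfsplus kernel module is available; the device name and target share path here are hypothetical, so check `fdisk -l` first:

```shell
# Hypothetical device and paths -- verify before running.
mkdir -p /mnt/hfsdrive
mount -t hfsplus -o ro /dev/sdq1 /mnt/hfsdrive   # read-only; Linux HFS+ write support is shaky
rsync -a --progress /mnt/hfsdrive/ /volume1/restored/
umount /mnt/hfsdrive
```

Once the copy is verified, the drive can be wiped and added to the pool normally.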
|
# ¿ Nov 9, 2020 20:00 |
|
KOTEX GOD OF BLOOD posted:Sure, if I have another 10tb+ drive handy, which I don't. Then no. You need a buffer to get the disk in there, especially as hfs+.
|
# ¿ Nov 10, 2020 02:41 |
|
tuyop posted:How frequently are you all encountering drive failures? I think I’ve lost two drives in my adult life and neither of them failed suddenly and led to data loss. In home use? 3. In commercial use? Literally thousands. I am 38. It's basically the lottery for home users, across 1000 people with 1 disk each you will have 1 failure a year. Now you as a nerd have 10 disks in your house between laptops, desktops, and your NAS. Suddenly your odds of being struck by random failure go up. Now you use the disks beyond the far side of the bathtub curve, some will last 10 years but most will not, suddenly the age adjusted rate of failure is 1 in 100 per year across 1000 disks. Every year a disk runs past the edge of the bathtub curve buys you more and more lottery tickets per year. Now commercial use has a whole city of hard drives in a few hundred square feet. By year 3 I would be seeing 1 a day per few thousand disks, give or take. Year 4 it's 1 per thousand probably. (I just made this up to illustrate the point. Actual rates are different than this.)
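The lottery-ticket math above is easy to sketch; the failure rates here are the post's illustrative numbers, not real AFR data:

```python
# Probability of at least one drive failure per year among n disks,
# given a per-disk annual failure rate p (illustrative numbers only).
def p_any_failure(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

# one disk at the young-drive rate (1 in 1000 per year)
print(p_any_failure(1, 0.001))    # 0.001
# a nerd's 10 disks at the same rate -- ~10x the tickets
print(p_any_failure(10, 0.001))   # ~0.00996
# 10 aging disks past the bathtub curve (1 in 100 per year)
print(p_any_failure(10, 0.01))    # ~0.0956
```

More disks and older disks both multiply your odds, which is the whole point of the lottery analogy.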
|
# ¿ Nov 24, 2020 19:04 |
|
What you want isn't there, and I understand what you're trying to do. I tried to do it myself. If you are super gung ho to try this and understand how `dd` works and have another server and don't mind hating your life if it fails, give it a go. Report back. Coming from Netapp in a previous life, I wanted this command: http://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-cmpr-900%2Fstorage__disk__replace.html
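If you do go the gung-ho route, the core of it is a raw clone with `dd`. This is only a sketch with hypothetical device names; it copies the partition table and everything else verbatim, so triple-check `if=` and `of=` before running:

```shell
# Raw clone of the old disk onto an equal-or-larger replacement.
# conv=noerror,sync keeps going past read errors (padding with zeros).
dd if=/dev/sd_old of=/dev/sd_new bs=64M conv=noerror,sync status=progress
sync
```

Getting the devices backwards destroys the source, which is why the report-back-if-you-survive caveat applies.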
|
# ¿ Dec 1, 2020 02:43 |
|
mobby_6kl posted:That said before I move too much crap on them, would it make sense to add a bootable SSD? I have a 128GB one sitting all useless in an old PC so it might make sense for the OS, cache, or to let the other drives spin down when not used (which is 99% of the time) Add a cache drive, fine. Don't pretend the synology won't spend all drat day spinning those things up and down though, just leave them all on. I didn't even realize there was a way for it to have the "OS" on a dedicated disk - it sounds like a terrible idea. If you only have 4 bays you're going to be limited to 16TB usable @ 3x8TB disks. I wouldn't bother with a cache disk. Add ram if you can.
|
# ¿ Dec 1, 2020 16:02 |
|
Smashing Link posted:Wondering if anyone has set up a VM within Gsuite to serve a Plex library also hosted by Gsuite, encrypted and mounted with rclone? It's taken me about 8 hours over the past 2 days but I learned a lot and now in theory I can share a library of an arbitrarily large size with excellent speed to anywhere for $12/month (Gsuite) and some VM hosting costs, offset by the $300 free credit over the first 3 months. I am counting on the fact that Google doesn't scan mounted, decrypted folders within their customers' VMs, of course. How much do you pay for egress bandwidth? That is generally where cloud providers bend you over the barrel.
|
# ¿ Dec 6, 2020 21:34 |
|
Smashing Link posted:I'm trying to figure that out...from what I can tell it's $0.11/GiB. Not sure how fast that's going to add up without any users other than myself but have $300/3 months to gather data. Make sure you watch it like a hawk. You can blow through bandwidth very easily. It shouldn't be too bad though looking at the math that's like 113 viewing days at 1gb/hr right?
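Back of the envelope on that, using the post's figures ($0.11/GiB egress, roughly 1 GiB per viewing hour, $300 of credit):

```python
# Egress-budget estimate from the figures in the post (illustrative only).
credit_usd = 300.0
egress_usd_per_gib = 0.11
stream_gib_per_hour = 1.0

hours = credit_usd / (egress_usd_per_gib * stream_gib_per_hour)
days = hours / 24
print(days)  # roughly 113-114 viewing days
```

Real streams vary a lot with bitrate, so treat the 1 GiB/hr figure as the soft assumption it is.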
|
# ¿ Dec 6, 2020 22:26 |
|
TraderStav posted:Has anyone experienced SMART providing different temperatures for the same drive depending on the host PC? I'm running UnRaid to my JBOD tower (where the hard drive sits) and it is constantly throwing high temperature errors by UnRaid (46-50-ish). However, I temporarily had my UnRaid hosted by a completely different desktop connected to the same JBOD but never once saw the temps over 35 and never got a temp error. That drive had spent some time in the first case and received the same high temp errors so it's almost as if that specific motherboard is reading the SMART higher for that drive alone. Is one controlling the fan speed... somehow? Are they running different versions of SMART ("smartmontools")? Are they running different kernel/OS versions? Same interface card to the JBOD? You should be able to dump out the raw sensor info... somewhere.
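To dump the raw sensor data on both hosts for comparison, smartmontools is the usual tool (the device path here is hypothetical):

```shell
# Raw SMART attribute table -- temperature is attribute 194 on most drives
smartctl -A /dev/sdX
# Full dump including vendor-specific logs, useful for diffing between hosts
smartctl -x /dev/sdX
```

If the raw value of attribute 194 differs between the two hosts for the same drive, something in the path (controller firmware, driver) is mangling it rather than the drive itself.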
|
# ¿ Dec 9, 2020 20:13 |
|
TraderStav posted:They're booting off of the exact same USB stick, so all of the settings and programs should be the same. Same card for the JBOD (LSI) also. When I switch machines I unplug the JBOD, pull out the LSI card and the Cache drive (nVME on a PCI card) and then drop them in the other. Plug in the USB and boot, so very little differences. Yeah, that's an odd one. I would email them a list of the part makes and models with a description of the problem and see what they say.
|
# ¿ Dec 9, 2020 21:28 |
|
Ruggan posted:Just got a synology, getting started with all this backup stuff. Looking for advice on remote backup options. B2 integrates nicely and supports versioning, which helps with accidental deletion and cryptolockering. It does not help with corruption without a huge amount of work on your part.
|
# ¿ Dec 13, 2020 02:08 |
|
H2SO4 posted:From what I can tell, BackBlaze B2 and Wasabi both have similar pricing structures but Wasabi doesn't charge for egress traffic. The fine print on that is that if you download more from them in a month than total data you've got stored with them then you're "not a good fit" - meaning if you have 100TB stored with them in total but you download more than 100TB in a single month then that's a no-no. Wasabi will let you pay for bandwidth, just like B2. At least their reps tell me that on the commercial side. You just need to have an estimate up front and it helps if you're hooked up to aws us-east-1. For a consumer though downloading from Wasabi or B2 should be a once or never thing, fingers crossed.
|
# ¿ Dec 13, 2020 18:21 |
|
Most dedupe on a robust CoW-snapshot-capable filesystem is an engineering and implementation mistake on the user side. Change my mind. For a home user where it's mostly media files I simply can't see a huge benefit. You aren't shipping storage to 1000 users who then make 1000 copies of the same file because they download it to their desktop, which is then mapped to a remote file share.
|
# ¿ Dec 14, 2020 16:20 |
|
Constellation I posted:Obligatory gently caress ipcamtalk and gently caress fenderman. Do NOT go to that website. Well I mean this just makes me want to go, care to elaborate?
|
# ¿ Dec 20, 2020 21:38 |
|
Jamus posted:I’ve been thinking about using a pair of intel optane SSDs for write caching on my synology, with the theory being that they’re meant for caching and have much higher endurance (esp. when full) than regular SSDs. Don't do it. I don't care about quality of flash, I mostly wouldn't trust the synology itself. I use an SSD for a read cache which I like to pretend helps. It has like a 50% hit rate for my workload.
|
# ¿ Dec 22, 2020 22:24 |
|
Head Bee Guy posted:Is it inadvisable to get some refurbished lenovo tower for ~$200 on newegg and just drop a couple WD Reds in there to use for unraid? I mean, I wouldn't advise paying full rate for those WD Reds when for the same price you can self insure some shucked disks that hold twice the data.
|
# ¿ Dec 26, 2020 05:12 |
|
Martytoof posted:In a Synology system, is the SHR configuration stored on the constituent disks or the Synology itself? Stored on the disks. I've done it. (same model synology though.)
|
# ¿ Dec 27, 2020 23:30 |
|
Disable swap. Add ram.
|
# ¿ Jan 1, 2021 17:21 |
|
Swap is for cowards. Especially on a dedicated NAS device.
|
# ¿ Jan 1, 2021 18:41 |
|
BlankSystemDaemon posted:Decommissioning hard drives shouldn't involve the idea of DBANing, secure erasing, or anything else, because it turns out that with the equivalent of a scanning electron microscope but for electromagnetism, you can read data off disks that have had the data overwritten more than 7 times, even if it's entirely random data that's been written. This is not a thing for a home user. If you're storing state secrets they should be encrypted, end of story. If you're worried about identity theft then a single pass of zeros is all you need. SSDs provide a mechanism for this as well. Physical destruction is a last resort. If you want a quick way to recycle it, hit the circuit board with the claw end of a hammer and be done with it.
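For the single-pass-of-zeros route and the SSD built-in erase, something like the following works (hypothetical device name, obviously destructive, so double-check the target):

```shell
# Spinning disk: one pass of zeros is plenty for home use
dd if=/dev/zero of=/dev/sdX bs=1M status=progress

# SSD: use the drive's own ATA secure-erase mechanism instead of overwriting
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```

Overwriting an SSD from userspace doesn't reliably hit every flash cell because of wear leveling, which is why the firmware-level erase is the right tool there.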
|
# ¿ Jan 1, 2021 19:10 |
|
This is edging into under-the-hood technicalities. When you page in/out you could be moving pages to various different devices, one of which is swap. I believe this is also how you make new copies of pages in your local NUMA node, and I assume it's how Intel handles their persistent-memory DIMMs, confusingly named Optane.
|
# ¿ Jan 2, 2021 18:20 |
|
IOwnCalculus posted:Cloudflare offers their CDN / caching services for free on personal accounts as well. It's incredibly limited compared to their enterprise product. I believe it's a loss leader to get you in the door, it worked for me. That being said, I'm less than impressed with the enterprise product but I've also been seriously spoiled by Akamai in the past.
|
# ¿ Jan 4, 2021 00:33 |
|
GreenBuckanneer posted:drives in raid 0 This is super dumb. You're going to lose 100% of your files every time a disk dies. Either do it as a JBOD or do it as SHR/SHR2. As described, you might as well just slap an unshucked 10TB disk onto your computer over USB.
|
# ¿ Jan 10, 2021 17:59 |
|
GreenBuckanneer posted:Well I mean the viable alternative is raid 10 but then that's half the storage space. I'd probably go for raid 5 because it's unlikely for more than 1 drive to fail at a time; although I have seen entire arrays go down at once, it's uncommon enough in raid 5 that businesses have sued the supplier over it. For a person like me it's fine, I'd be backing up anything anyways Yeah, it's RAID 0 that makes my eyes twitch. You get only downsides and zero upside. JBOD is at least the same odds of a single disk dying with 1/4 the impact. I would do SHR1 and call it a day. You're shucking disks, right?
|
# ¿ Jan 10, 2021 18:28 |
|
GreenBuckanneer posted:I've done that before, it seems like it's a little cheaper, at $170 for an external (5400/5900rpm) vs $218 7200rpm for 10TB. Bigger drives than that seem too much per gigabyte I think. Voids the warranty, but when they're this cheap does it really matter for watching movies lol Definitely shuck. 7200rpm isn't buying you anything here. Also if you're going to do it without any resiliency I wouldn't bother spending a premium on a Synology. If you do SHR1 then sure.
|
# ¿ Jan 10, 2021 20:27 |
|
GreenBuckanneer posted:I suppose I could just build a miniatx box/server but As long as it's a decision where you have all the available information spend your money as you see fit. I love my synology.
|
# ¿ Jan 12, 2021 06:48 |
|
EC posted:It's a shame the DS1520+ only has 5 bays. I was looking at the Ds1821+ but it seems to have the same basic specs as the DS1621+ just with two more drive bays. If you're getting into the territory where you have more users transcoding than space and network you shouldn't run plex on the NAS. Ironically the ram is unlikely to have much impact on plex workloads unless your plex ram usage itself for the database or whatever balloons up. Using it as block cache on large media files isn't going to save you unless you have small files (1gb/hr) and very high hot spotting (lots of users watching the same few things.)
|
# ¿ Jan 15, 2021 17:24 |
|
Enos Cabell posted:Should never have posted this, tremendous self own incoming: If this resulted in data loss that's a critical bug. Unraid should be storing a copy of the current disk metadata setup on every disk. Restoring the boot drive from backup should not destroy your array.
|
# ¿ Jan 16, 2021 17:28 |
|
Enos Cabell posted:- Restored backup from when server had 6 8tb drives, 1 parity and 5 data. In the past year I added 2 12tb drives, one of which became the new parity and the old 8tb parity drive was cleaned and added to the array as a data drive. Basically the backup should not be caring about the drive configuration as it knew it. It should be reading a copy off each disk and comparing a monotonic/vector-based version counter to the UUIDs of the disks. If they disagree it should make you make a decision. The backup of unraid itself should be the metadata of last resort. If it cannot reconcile itself it should print out the label information so that you can compare it to the disks. (serial+model are exposed to the OS and on the sticker.) If you lose data this should be a P1 bug to Unraid. Losing lovely flash boot media is probably the most common form of failure for unraid.
|
# ¿ Jan 16, 2021 19:16 |
|
BlankSystemDaemon posted:I really wish I had some source code to read, because I keep getting different mutually-exclusive descriptions, and some which are not technically possible the way people describe them. This was bothering me too: knowing Unraid works at the file level yet has parity that can rebuild arbitrary drives, how do they do the chunking to know what the parity value is without a block boundary like raid4/5/6? Google got me a description but not code. The answer is apparently that they compute parity at the device level. It says they do it bit by bit, and that's why they have to do zeroing. https://wiki.unraid.net/Parity
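A toy version of that device-level parity, just to show the mechanics (simplified illustration, not Unraid's actual code): parity is the XOR of every disk at each byte position, so any one missing disk is the XOR of all the survivors plus parity.

```python
# Toy device-level XOR parity, Unraid-style (illustrative only).
from functools import reduce

disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]

# Parity disk: XOR of every data disk at each position.
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

# "Lose" disk 1 and rebuild it from parity plus the surviving disks.
survivors = [disks[0], disks[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == disks[1]
print(rebuilt.hex())  # 102030
```

This is also why Unraid zeroes a disk before adding it as data: all-zero bytes don't change an XOR, so parity stays valid without a full recompute.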
|
# ¿ Jan 18, 2021 05:16 |
|
Hed posted:After a few years of working fine, I got these from my FreeNAS security output yesterday. Cable, disk, or motherboard port. Those are your options. Probably your disk if you haven't been screwing around inside the case lately.
|
# ¿ Jan 18, 2021 21:21 |
|
Dat disk is ded. Do a forward RMA with the vendor using that output. Or just say your pool failed it out.
|
# ¿ Jan 19, 2021 04:41 |