|
I jumpstarted my NAS build with two of the 5TB Toshibas pulled from my desktop. They've been running fine for the last few years alongside four 5TB Reds. I did put them in the outside-edge slots of my case so they'd maybe get more room to cool, but I don't think there's a meaningful difference.
|
# ? Sep 20, 2018 20:12 |
|
|
|
If anyone was looking to buy a premade "server" to use as a NAS, seems like a pretty good deal. https://arstechnica.com/staff/2018/09/dealmaster-take-25-percent-off-a-bunch-of-popular-nintendo-switch-games/ Dell PowerEdge T30 Mini Tower Server - Intel Xeon E3-1225v5 for $319.99 at Dell (use code: 319T30EX - list price $737).
|
# ? Sep 20, 2018 20:24 |
|
Internet Explorer posted:If anyone was looking to buy a premade "server" to use as a NAS, seems like a pretty good deal. Only thing about those is they have terrible 3.5" capacity... two drives by default and I guess you could put another 3 in the 5.25" drive cages with an adapter and a SATA card.
|
# ? Sep 20, 2018 20:28 |
|
On that note, I’ll recommend against the Lenovo TS440 if you’re planning to start small and expand. It’s been discontinued and so have most of the parts, including the HDD expansion kit that gives you 4 more hot-swap slots. It's entirely too big for just 4 drives, which has me switching up my hardware now (previous posts).
|
# ? Sep 20, 2018 20:33 |
|
Paul MaudDib posted:Only thing about those is they have terrible 3.5" capacity... two drives by default and I guess you could put another 3 in the 5.25" drive cages with an adapter and a SATA card. Sorry, I just saw this and thought folks had recommended them in the past - "Consolidate data and media files with six internal hard drive bays supporting large storage capacity"
|
# ? Sep 20, 2018 20:45 |
|
Internet Explorer posted:Sorry, I just saw this and thought folks had recommended them in the past - Oh, huh, maybe you need an expansion kit or something? I only see two drives in the pics.
|
# ? Sep 20, 2018 20:49 |
|
I think the secret is that you can remove the 5.25" slim optical drive and replace it with two 2.5" drives, plus mount two 3.5" drives up top in addition to the two below, for a total of six. At least, I have the T20 and it works like that if I recall correctly. It's a great box for the price if you just want to host VMs or whatever, but the proprietary power supply and mediocre airflow make it not the best NAS in my experience unless you plan to have no more than two 3.5" drives. I had to wedge in an additional fan (there are no additional fan mounts) to keep my RAID5 cool, and I have only the exact number of SATA power connectors needed for one per drive bay, so there's not much choice in how the drives are cabled. Eletriarnation fucked around with this message at 21:00 on Sep 20, 2018 |
# ? Sep 20, 2018 20:58 |
|
I've got one of those T30's here I'm setting up for a client. It's got two 3.5" bays on the bottom with the blue plastic sleds, a folded-down sheet-metal-style third one, and two 5.25" bays that include 3.5" blue plastic sleds inside of them, so they can fit smaller drives. Essentially this means it's got spots to put 5 disks by default, and the DVD-ROM on top is a slim model. Unlike a lot of Dell's consumer line it even has spare SATA cables running to each area, but it only has four on-board SATA connectors. It's also got a nonstandard power setup: the 4-pin for the CPU is normal, but board power is a nonstandard 8-pin. The drives have a weird on-board-power-to-SATA-power connector setup instead of coming directly from the PSU. It'll be a decent cheap box but may leave a little to be desired for expansion or upgrades. We ordered it a couple of weeks ago from Dell small business for $319. They originally said it would arrive on the 19th but it showed up on the 11th.
|
# ? Sep 20, 2018 21:07 |
|
Yup, looks identical to the T20. That third bay on top made of sheet metal doesn't totally count, because the front panel cable runs through that space and as a result I wasn't able to fit a regular drive far enough in to secure it well. I used it with a 2.5"->3.5" adapter to mount an SSD. The two above it with the plastic sleds work fine, but if both are filled and constantly spinning they will run pretty hot without additional airflow. Adapters are out there to connect ATX power supplies to that nonstandard 8-pin, but it is an additional thing to consider for anyone thinking of connecting a large GPU, since there are no PCIe power connectors on the stock 280W supply. It said not to put more than 25W in the top PCIe slot, but I think that's probably not a real concern; I had no issues running an RX 460 in it.
|
# ? Sep 20, 2018 21:32 |
|
8-bit Miniboss posted:On that note, I’ll recommend against the Lenovo TS440 if you’re planning to start small and expand. It’s been discontinued and most of the parts are as well including the HDD Expansion kit that gives you 4 more hot swap slots. It's a shame too, they're well-built. I've got the expansion cage, cable, and power board in mine though. It took like 3 months of eBay saved searches to find them.
|
# ? Sep 21, 2018 15:24 |
|
I finally got the new unRAID box set up last night. 5x4TB (1 as parity) and an SSD cache drive. Brand new drives, brand new i3, blah blah. The disks are blank. It's building parity at the non-lightning-fast speed of 35-60MB/sec and is going to take 28 hours. Why is it taking that long to build the parity drive off of blank disks? Even so, why is it going so slow? Is this normal, and what I should be expecting from read/write as I go forward? Seems a bit odd...
|
# ? Sep 21, 2018 20:31 |
|
eightysixed posted:I finally got the new unRAID box set up last night. 5x4TB (1 as parity) and an SSD cache drive. Brand new drives, brand new i3, blah blah. The disks are blank. It's building parity at the non-lightning-fast speed of 35-60MB/sec and is going to take 28 hours. Why is it taking that long to build the parity drive off of blank disks? Parity computation is constant time per stripe. No matter the data ("blank" doesn't count), you must compute the parity if you're doing a full initialization. As for the speed, you're limited by the one destination disk if you're doing consolidated parity. I assume you're doing distributed parity? (DDDDD+DDDDD+DDDDD+DDDDD+PPPPP vs DDDDP + DDDPD + DDPDD + DPDDD + PDDDD)
|
# ? Sep 21, 2018 21:11 |
|
H110Hawk posted:I assume you're doing distributed parity? (DDDDD+DDDDD+DDDDD+DDDDD+PPPPP vs DDDDP + DDDPD + DDPDD + DPDDD + PDDDD) What? I have no idea. However unRAID builds parity, I guess.
|
# ? Sep 21, 2018 21:34 |
|
eightysixed posted:
Yeah, just ignore that part. Unless unRAID does anything special, parity computation is constant time. As for why you're not getting better write speed, I don't know; you should be able to get 100MBytes/s. If it hasn't tuned the Linux MD rebuild speed sysctls, that's potentially why.
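For reference, these are the stock Linux md rebuild-speed knobs (names from mainline Linux; whether unRAID exposes or already tunes them is an assumption on my part). Values are KB/s per device, and the mainline defaults are 1000 / 200000:

```
# /etc/sysctl.conf fragment - md rebuild speed floor and ceiling
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
```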
|
# ? Sep 21, 2018 21:44 |
I was hoping for QuickAssist to include some sort of hardware acceleration for parity calculations, but from a presentation at OpenZFS Developer Summit where an Intel guy covered the QAT additions to ZFS on Linux, it doesn't sound like it's included yet? One can hope. Speaking of the OpenZFS Developer Summit, here's the video I mentioned in a playlist that has all of the videos: https://www.youtube.com/watch?v=4zWTU_hnGp0 I wish I could figure out how to link the actual playlist, but SA insists on turning the link into an embedded video. BlankSystemDaemon fucked around with this message at 21:57 on Sep 21, 2018 |
|
# ? Sep 21, 2018 21:53 |
|
Unraid parity is done bit by bit using an XOR calculation, I think.

DDD = P
000 = 0
001 = 1
010 = 1
011 = 0
100 = 1
101 = 0
110 = 0
111 = 1

By having the 2 remaining disk values and the parity value, it can infer what the missing value was. Tldr: It does some math bit by bit across all drives to calculate the parity value. It takes a while.
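A minimal sketch of that XOR scheme in Python (the function names and byte-level framing are mine, not unRAID's):

```python
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes of every data block to get the parity block."""
    return [reduce(lambda a, b: a ^ b, column) for column in zip(*blocks)]

def rebuild(survivors, parity_block):
    """Recover a lost block: XOR the surviving blocks with the parity block."""
    return parity(survivors + [parity_block])

disks = [[0b000], [0b001], [0b110]]  # three data "disks", one byte each
p = parity(disks)                    # [0b111]
lost = disks.pop(1)                  # pretend disk 1 died
assert rebuild(disks, p) == lost     # survivors XOR parity gives it back
```

The same property is why a build or rebuild has to touch every drive: each parity byte needs the matching byte from every data disk, which is where the hours go.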
|
# ? Sep 21, 2018 22:26 |
|
Yeahp, I got the explanation from a friend who is way more in sync (no pun intended) about this than I am. Having said that, when I got home from work it had finally finished, but unRAID was yelling about heat errors (47C) on the disks. That doesn't seem too bad to me. There are 4 fans in my Fractal case, and the A/C stays on 73/74 all the time. We should be good, right? It was likely just because the array was getting pounded; idle seems like 42C. edit: After looking into it further, HGST NAS drives tend to run a bit warmer than others, so I guess I'm in the clear. eightysixed fucked around with this message at 00:53 on Sep 22, 2018 |
# ? Sep 22, 2018 00:40 |
|
eightysixed posted:Yeahp, I got the explanation from a friend who is way more in sync (no pun intended) about this than I am. As you've noted, these drives do run at a higher temperature, and most NAS drives are rated for an operating range of 0 to 65C. If you are worried about cooling, a couple of fans blowing in and a couple blowing out should be fine. If there is a fan to blow or suck air past the drives, it will help.
|
# ? Sep 22, 2018 00:59 |
|
eightysixed posted:Yeahp, I got the explanation from a friend who is way more in sync (no pun intended) about this than I am. You should be looking at the environmental specs, google "<disk model> specification" and find the sheet for your disk. 60C is allegedly the high point for operating temperature, but that is... toasty. I'm normally one for "if it's in the spec sheet it's good" but I have a feeling you're going to regret that sooner rather than later. I'm also normally talking about enterprise disks in a massive enterprise deployment, not your precious anime stash. For reference, my WD Reds are running at 35C on the hottest disk, ambient is 73 as well from A/C. 4 fans in the case means little to nothing if none of them are pulling air over your disks. Take a look at how the air is actually flowing in your case. A piece of ribbon, a tissue, or toilet paper can be used to detect airflow. Just don't let dusty tissue/toilet paper get ingested into your fans. https://www.hgst.com/sites/default/files/resources/DS_NAS_spec.pdf
|
# ? Sep 22, 2018 01:00 |
dexefiend posted:Unraid parity is done bit by bit using an XOR calculation, I think.
|
|
# ? Sep 22, 2018 11:09 |
|
I’m eyeballing an Epyc 3000 series for a NAS build now, any news on when boards are showing up?
|
# ? Sep 22, 2018 17:30 |
priznat posted:I’m eyeballing an Epyc 3000 series for a NAS build now, any news on when boards are showing up? We may have luck with IBASE or AsrockRack, but I wouldn't hold my breath, because so far the latter managed to gently caress up the perfectly reasonable Denverton chip, which is capable of delivering both 10G PHY and SATA connectivity, by adding Realtek / other unproven-manufacturer NICs and Marvell disk controllers. Similarly, they've released no boards with QAT, so far as I know.
|
|
# ? Sep 22, 2018 17:44 |
|
I have 5 TB of media on my main PC which I'd like to stick on a NAS. Initial capacity of 10TB would be cool and expandable to 20TB would be p. cool. I'm comfortable either building it or buying it. It doesn't need to be Enterprise Level or anything it's just for home office use and serving media over some protocol Kodi can read. I'm a Pro-Tier Systems Administrator but I know gently caress All about hardware these days. Can y'all point me in the correct direction? What's the good and cool poo poo these days?
|
# ? Sep 23, 2018 16:45 |
|
Junkiebev posted:I have 5 TB of media on my main PC which I'd like to stick on a NAS. Initial capacity of 10TB would be cool and expandable to 20TB would be p. cool. I'm comfortable either building it or buying it. It doesn't need to be Enterprise Level or anything it's just for home office use and serving media over some protocol Kodi can read. I am having trouble finding my recent post to emptyquote so: Buy the biggest synology you can afford.
|
# ? Sep 23, 2018 17:16 |
|
H110Hawk posted:I am having trouble finding my recent post to emptyquote so: Buy the biggest synology you can afford. What disks are good? Presumably everybody saying shucking means to remove disks from externals?
|
# ? Sep 23, 2018 17:20 |
|
Also: how loud are they/how much power do they eat?
|
# ? Sep 23, 2018 17:27 |
|
Junkiebev posted:Also: how loud are they/how much power do they eat? They aren't and they don't. Atom cpu means very little juice drawn.
|
# ? Sep 23, 2018 17:58 |
|
I'd like to add some new disks to my QNAP NAS: two Red drives, bringing it to six (plus another two for another RAID set). Doing so would bring me to eight cumulative disks, which seems to be the maximum supported configuration for Red drives. I am not eager to splash out for new Reds only to have to swap them for Red Pros afterwards. Another thing I'd like some guidance on is 4Kn: is there any point in seeking out native 4K instead of 512e like all the NAS drives I've managed to find?
|
# ? Sep 23, 2018 19:12 |
|
Just a friendly tip to the FreeNAS users. Script an mtime-based removal of your .recycle folder contents if you export those over SMB (I set mine to 7 days, as snapshots overlap the functionality). Yay for reclaiming 3TB of space. My array has been running for over 2.5 years and I never looked into how those folders had grown... I think I assumed they would automatically be cleaned after x days. Nope, it's for you to set up manually with a script.
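A sketch of that cleanup as a shell function (the /mnt/tank path is a made-up example; FreeNAS drops .recycle at the root of each share when the recycle bin option is enabled):

```shell
# Delete files older than N days (default 7) from a .recycle directory.
# Path and retention below are examples, not FreeNAS defaults.
purge_recycle() {
    find "$1" -type f -mtime "+${2:-7}" -delete
}

# e.g. from a nightly cron job:
# purge_recycle /mnt/tank/media/.recycle 7
```

Empty directories are left behind; add `-type d -empty -delete` in a second pass if that bothers you.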
|
# ? Sep 23, 2018 21:31 |
|
Junkiebev posted:Also: how loud are they/how much power do they eat? If you are referring to NAS drives they are quiet and most of the current specifications have a power use of around 8W. It's possible to have a very power efficient NAS running 24/7.
|
# ? Sep 23, 2018 22:55 |
|
unRAID 6 looking OK to upgrade? I don't see any horrible bug reports yet on Reddit but figured I would check here too.
|
# ? Sep 24, 2018 10:54 |
|
The Gunslinger posted:unRAID 6 looking OK to upgrade? I don't see any horrible bug reports yet on Reddit but figured I would check here too. Works fine on my two boxes.
|
# ? Sep 24, 2018 16:03 |
|
eightysixed posted:Works fine on my two boxes Same, working great here with a mix of a and containers.
|
# ? Sep 24, 2018 16:55 |
|
I think the update messed up my unifi controller container but after some fiddling and a container update all is well again. Not even sure what went wrong.
|
# ? Sep 24, 2018 18:51 |
|
priznat posted:I think the update messed up my unifi controller container but after some fiddling and a container update all is well again. Not even sure what went wrong. I've had all kinds of issues with the unifi docker in the past. I switched it to host and all that shot went away. I'm considering switching to a minimal Debian install and ditching the docker altogether.
|
# ? Sep 24, 2018 21:44 |
|
Matt Zerella posted:I've had all kinds of issues with the unifi docker in the past. I switched it to host and all that shot went away. I'm considering switching to a minimal Debian install and ditching the docker altogether. If it craps out again I’ll do that with a VM too. Seems fine now at least. My two APs just went to “Adopting” to “Disconnected” and back again, but they were actually working fine to connect to wirelessly.
|
# ? Sep 24, 2018 21:48 |
|
priznat posted:If it craps out again I’ll do that with a VM too. Seems fine now at least. My two APs just went from “Adopting” to “Disconnected” and back again, but they were actually working fine to connect to wirelessly. I’ve had tons of those issues. I think there are fixes for the USG in 4.4.29; the UAP I cannot remember if that’s fixed. Going to ditch the whole container setup for UniFi on unRAID when the Cloud Key Gen 2 Plus is out, as any time I do a backup of the appdata folder it stops all the containers and it’s a crapshoot whether all my devices come back.
|
# ? Sep 24, 2018 22:05 |
|
Viktor posted:I’ve had tons of those issues I think there’s fixes for the usg in 4.4.29. The uap I cannot remember if that’s fixed. Going to ditch the whole container setup for UniFi on unraid when the cloud key gen 2 plus is out as any time I do a backup of the app data folder it stops all the containers and it’s a crapshoot if all my devices come back. Cloud key sounds like an interesting option too!
|
# ? Sep 24, 2018 22:36 |
|
I've been running the Ports version in a jail. It lags behind the official release by a fair amount, but otherwise it's not too bad once you actually get it running. But yeah, the second it breaks I'm just ponying up for a cloud key.
|
# ? Sep 24, 2018 23:30 |
|
|
Personally I just don't think it's really meant for docker because of how it handles networking. It'll take 10 minutes to spin up a Debian machine and get the stuff transferred over. The new cloud key just seems excessive to me for home use unless you have a decent amount of equipment.
|
# ? Sep 24, 2018 23:52 |