|
thebigcow posted:RAID-Z2 is not RAID6. RAID-Z2 will give you the read and write performance of the slowest disk in the vdev. That's basically what the RAID6 settings on that calc say, no write increase (meaning, same as 1x disk, assuming all disks the same)
|
# ? Nov 29, 2016 20:18 |
|
|
|
Thwomp posted:It depends what exactly you're planning on doing with them. A QNAP/Synology should be able to handle one 1080p transcoded stream at a time (or multiple 720p or lower concurrent streams). That might be okay now, but will family/friends/significant others also be interested in utilizing this box? Then you'll quickly run into CPU capacity issues. On the other hand, if it's just you, QNAP/Synology make it easy to set up what you're looking for. Good luck finding a mini-ITX 1155 board at a reasonable price. You should most likely sell your stuff and buy some newer, lower-end, lower-power-consumption stuff.
|
# ? Nov 29, 2016 20:48 |
|
Question about RAM... I'm building a FreeNAS box based around a Supermicro MBD-X11SSM-F. Getting ECC/unbuffered RAM up here in Canada is apparently difficult, or at least in 2-stick kits. If I'm aiming for 2x16GB, is it okay buying 2 separate sticks and not matched pairs? It's all Kingston... https://ca.pcpartpicker.com/products/memory/#t=14&E=1 NCIX has pretty much all Crucial on back order, and almost nothing from Samsung. http://www.ncix.com/category/ddr4-server-ram-08-1518.htm#CRUCIAL TECHNOLOGY
|
# ? Nov 29, 2016 22:30 |
|
Most motherboards are pretty lax about RAM pairings these days. In general it will simply run them at whatever the slowest stick's timings are, so you might be leaving a little performance on the table, but it should at least work. If you just mean "buy 2x single sticks of the same make and model" vice buying a single pack that specifically identifies itself as a matched pair, don't worry about it; you'll be fine. And if not, hey, at least return shipping on a RAM stick is cheap.
|
# ? Nov 29, 2016 22:42 |
|
Anyone found a good appliance to run pfsense? I have a ProLiant N40L I'm not using anymore, would that be fast enough to get 150Mb/s throughput?
|
# ? Nov 29, 2016 23:25 |
|
Armagnac posted:Thanks for the Raid Calc, Helpful to get a sense of what to expect. I wish it had RAIDZ2 configurations as I was planning to go ZFS with this build. https://www.servethehome.com/RAID/index.php
|
# ? Nov 29, 2016 23:26 |
|
Skandranon posted:That's basically what the RAID6 settings on that calc say, no write increase (meaning, same as 1x disk, assuming all disks the same) Just looked here: https://calomel.org/zfs_raid_speed_capacity.html With 6 drives, this dude's benchmarks say he's getting really good performance. 6x 4TB, 3 striped mirrors, 11.3 TB, w=389MB/s , rw=60MB/s , r=655MB/s 6x 4TB, raidz2 (raid6), 15.0 TB, w=429MB/s , rw=71MB/s , r=488MB/s Dude is getting better write performance on raidz2 than on raid10. Does this make sense?
|
# ? Nov 29, 2016 23:38 |
|
Helpful for size, not so much for expected performance.
|
# ? Nov 29, 2016 23:39 |
|
Armagnac posted:Just looked here: If the CPU computing two parity blocks isn't the limiting factor, then yes. RAID10 striped across three mirrored pairs means you have the write performance of three drives, so a long sequential write might look like: ABC DEF GHI JKL RAIDZ2 across six drives will generate two parity blocks for each four blocks, so writing those same 12 blocks might look like: ABCD12 3EFGH4 56IJKL I would expect the CPU usage on the RAID10 to be effectively zero, but a bit higher on the RAIDZ2. Probably not high enough to be of any concern with modern systems, though. Conversely, the read performance reflects the fact that a healthy RAID10 can read from all six drives at a time, while a RAIDZ2 can only really make use of 4/6 for normal reads. IOwnCalculus fucked around with this message at 23:58 on Nov 29, 2016 |
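To put rough numbers on the layout argument above, here's a back-of-envelope sketch in Python. All disk speeds are hypothetical, and this ignores exactly the things (CPU for parity, record size, caching) that make real benchmarks like calomel's differ:

```python
# Back-of-envelope model of the two layouts described above. Assumes
# every disk sustains the same sequential speed, which is a
# simplification: real numbers also reflect CPU, record size, caching.

def mirrors_write(n_disks, disk_mbps):
    """Striped mirrors: each mirrored pair absorbs one disk's worth of writes."""
    return (n_disks // 2) * disk_mbps

def raidz2_write(n_disks, disk_mbps):
    """RAIDZ2: every full-width stripe spends 2 of n disks on parity."""
    return (n_disks - 2) * disk_mbps

def mirrors_read(n_disks, disk_mbps):
    """A healthy set of mirrors can read from every disk at once."""
    return n_disks * disk_mbps

def raidz2_read(n_disks, disk_mbps):
    """Normal reads only pull the data portion of each stripe."""
    return (n_disks - 2) * disk_mbps

# Six hypothetical 150 MB/s disks:
print(mirrors_write(6, 150), raidz2_write(6, 150))  # 450 600
print(mirrors_read(6, 150), raidz2_read(6, 150))    # 900 600
```

Same shape as the calomel results: raidz2 ahead on sequential writes, mirrors ahead on reads.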
# ? Nov 29, 2016 23:53 |
|
So it looks like when I get to 6-8 drives I'll be getting the drive performance I actually want. More research is revealing that getting all the 10GbE components together is *actually* the pain point in this venture. Hmm... to save me from buying a 10GbE switch, could I get a 2-port 10GbE card and just attach it directly to each computer, making an ad-hoc network with the NAS server I made? The plan is to connect one port to the PC and the other to an old Mac Pro, both through PCIe NIC cards. I remember doing something similar with an old Xserve RAID & gigabit ethernet back in the day...
|
# ? Nov 30, 2016 18:27 |
|
Armagnac posted:More research is revealing to me that getting all the 10gigE components together is *actually* the pain point in this venture. There's a reason this was the first thing I brought up: 10GbE is not cheap.
|
# ? Nov 30, 2016 18:48 |
|
http://forums.somethingawful.com/showthread.php?threadid=3799779 10GbE switch for $300 from Ubiquiti. Then you need NICs; I'm running an Intel X520-T2 for my desktop and SAN, and debating throwing fiber into one of my VM hosts (Mellanox is pretty cheap on eBay for a card). Still not cheap, mind you, but it's doable for reasonable-ish pricing right now: $300 switch, $30 for a pair of Mellanox cards, add in twinax and you're good to go.
|
# ? Nov 30, 2016 19:32 |
|
DrDork posted:Most motherboards are pretty lax about RAM pairings these days. In general it will simply run them at whatever the slowest stick's timings are, so you might be leaving a little performance on the table, but it should at least work. If you just mean "buy 2x single sticks of the same make and model" vice buying a single pack that specifically identifies itself as a matched pair, don't worry about it; you'll be fine. Thanks for the reassurance, that opens up a lot more options.
|
# ? Nov 30, 2016 20:19 |
|
If you're looking at PCIe NICs on OS X, I wouldn't bet on Apple's driver support for 10GbE NICs (various manufacturers) being all that great, and at a quick glance anything non-Intel that explicitly touts OS X compatibility goes for $200+ per NIC. There are SFP+ ports on various cards and motherboards that could handle direct cable connections, but you'll need a total of four 10GbE SFP+ ports. With a number of Xeon-D motherboards supporting dual SFP+ ports, you can keep the cost of the links centered on the two client NICs while effectively futureproofing your NAS, if you dump at least $600+ into the NAS (which is exactly why it's my current migration path, of course). None of this considers the kinds of interconnects that most of us who have worked with this professionally would typically build (usually in datacenter environments): load balancing / high availability / failover setups with multiple connections from one client to the NAS, or via different physical switches.
|
# ? Dec 1, 2016 00:32 |
|
FreeNAS just reported an unreadable sector on another drive after a scrub. I'm getting tight on storage, so I was planning on replacing all my 2TB drives in my RAIDZ2 setup with 4TB drives, and expanding the zpool to effectively double my storage. With that in mind, I've been using WD Reds for storage, and they're a little pricier than, say, a Seagate NAS drive. Which is the better choice?
|
# ? Dec 1, 2016 18:11 |
|
PerrineClostermann posted:With that in mind, I've been using WD Reds for storage, and they're a little pricier than, say, a Seagate NAS drive. Which is the better choice? There has been little empirical evidence for which NAS-specific drive is better. Some people base their decisions off the Backblaze report from a while back that had Seagate drives as less reliable than WD, and HGST as more reliable; but those were desktop drives and who knows if it's as applicable to NAS-specific ones. I generally would just buy whichever WD/Seagate/HGST drives have the best deal going on them whenever it is you're looking to buy.
|
# ? Dec 1, 2016 18:35 |
|
The not-Seagate is usually the best choice but there's no harm in getting mismatched disks in case one company has a bad run of disks of one model. In my nas I have a Hitachi, Toshiba, Seagate, and 2 WD reds.
|
# ? Dec 1, 2016 18:36 |
|
Going to have to second DrDork here. The Backblaze data is skewed heavily on Seagates due to the batch of bad 3TB drives that hit the market after the flooding in Thailand. Huge failure rate on a specific model of drives. I've been using some Seagate 5TB externals for about a year and they are rock solid, so I'd definitely say their build quality is fine. Currently experimenting with Toshiba desktop 4TB drives in a backup NAS that I built over the weekend. Since it's just a backup host I'm not too concerned about the system, but I'm going to be paying attention to the drive conditions and seeing how they perform overall. I dumped about 7TB of data on them over about 2.5 days, averaging 800Mb/s, so they do perform very well. These are desktop-class drives. My go-to drive for NAS builds is usually the WD Red, but I'd definitely take a stab at the Seagate NAS drives in the future should I decide to upgrade or replace existing drives. No reason not to. Generally they are a few bucks cheaper than the Reds, and that can add up when you're buying multiple drives for a project.
|
# ? Dec 1, 2016 18:43 |
|
Counterpoint: I see roughly 4x as many Seagates fail as other brands. I go through about 30 drives a month, give or take a few. Most are from 500-1000GB.
redeyes fucked around with this message at 03:28 on Dec 2, 2016 |
# ? Dec 1, 2016 19:44 |
|
My just purely anecdotal experience with seagate has not been good since the '90s so I don't even give them a chance anymore.
|
# ? Dec 1, 2016 20:44 |
|
As a counter point anecdote to the anecdote, I still have a 600GB Seagate that's 7+ years old and is just fine, as well as a small collection of other drives, none of which have ever given me issue. I've even got a few 2TB Seagate NAS drives which have been fine for 2+ years now. YMMV. Either way, buy from somewhere with a good return policy (like Amazon) in case you get a DOA/infant mortality drive, regardless of brand.
|
# ? Dec 1, 2016 23:47 |
|
I did a lot of research and ended up buying WD Reds because they are the quietest, lowest-power drives available. They are also a little more expensive/slower (so if speed is really important to you, maybe something else). The cheapest way to get the 6TB and 8TB Reds right now is in the My Book Duo enclosures (only the Duo; the single enclosures don't necessarily have Red drives). The 4TB drives are fairly cheap on their own, I think. Here is the best single page I found to get an idea of noise on all the bigger drives: https://us.hardware.info/reviews/6763/14/seagate-enterprise-capacity-10tb-review-a-new-high-noise-levels
|
# ? Dec 2, 2016 02:12 |
|
DrDork posted:There has been little empirical evidence for which NAS-specific drive is better. Some people base their decisions off the Backblaze report from a while back that had Seagate drives as less reliable than WD, and HGST as more reliable; but those were desktop drives and who knows if it's as applicable to NAS-specific ones. Nulldevice posted:Going to have to second DrDork here. The Backblaze data is skewed heavily on Seagates due to the batch of bad 3TB drives that hit the market after the flooding in Thailand. Huge failure rate on a specific model of drives. I've been using some Seagate 5TB externals for about a year and they are rock solid, so I'd definitely say their build quality is fine. Currently experimenting with Toshiba desktop 4TB drives in a backup NAS that I built over the weekend. Since it's just a backup host I'm not too concerned about the system, but I'm going to be paying attention to the drive conditions and seeing how they perform overall. I dumped about 7TB of data on them over about 2.5 days, averaging 800Mb/s, so they do perform very well. These are desktop-class drives. My go-to drive for NAS builds is usually the WD Red, but I'd definitely take a stab at the Seagate NAS drives in the future should I decide to upgrade or replace existing drives. No reason not to. Generally they are a few bucks cheaper than the Reds, and that can add up when you're buying multiple drives for a project. It's not Seagate's first time to the poo poo rodeo. Before their 3TB drives it was the 750GB, 1TB, 1.5TB (and possibly 2TB?) based on their 7200.11 frame that had up to a 40% failure rate. And that was pre-flood. From what I remember the 7200.10 had problems too; might even have been others before that. Armchair quarterbacks nitpicking the results of the people who are actually doing science is a time-honored tradition, so knock yourself out, but that's by far the largest data set available.
And what it shows is that Seagate drives were not just a little more likely to fail: they failed at 5-10x the rate of other drives on the market, up to 40% on some families of drives. Regardless of any nitpicks you can make about the test, the reality is that every single other drive on the market took it just fine and Seagate is an enormous outlier. Adding my personal anecdote here: I've probably had 25 drives over the last 15 years, and the only three to fail prematurely have all been Seagates. I swore off Seagate after the last one and I haven't had another drive fail since, out of about N=8. Their newer drives are supposedly doing better. If Seagate proves they can put out reliable drives for a solid period of time they will eventually re-earn some of the trust they've lost. After all, HGST makes some of the most reliable drives on the market today, and they used to call them DeathStars. It's gonna take another 5 years or so before I'll trust them though. Paul MaudDib fucked around with this message at 02:53 on Dec 2, 2016 |
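For reference, the number everyone is arguing about is the annualized failure rate: failures per drive-year of service, which is how Backblaze normalizes across fleets of different sizes. A minimal sketch with completely made-up fleet numbers:

```python
# Annualized failure rate as Backblaze reports it: failures per
# drive-year of service, as a percentage. The fleet numbers below
# are made up purely to show the arithmetic.

def annualized_failure_rate(failures, drive_days):
    """AFR (%) = failures / drive-years * 100."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Same failure count looks very different at different fleet sizes:
print(annualized_failure_rate(120, 1_000 * 365))   # 12.0
print(annualized_failure_rate(120, 10_000 * 365))  # 1.2
```

This is also why single-user anecdotes (N=8, N=20) can't settle the question either way: the drive-years just aren't there.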
# ? Dec 2, 2016 02:36 |
|
Paul MaudDib posted:It's gonna take another 5 years or so before I'll trust them though. My 8tb Seagate SMR drives have not murdered me or my family yet.
|
# ? Dec 2, 2016 17:35 |
|
Moey posted:My 8tb Seagate SMR drives have not murdered me or my family yet. Speak for yourself
|
# ? Dec 2, 2016 17:42 |
|
with my extremely small sample size of fewer than 20, seagate is at 100% failure.
|
# ? Dec 3, 2016 09:32 |
|
How do you guys manage your files on iOS? I use ssh to log into my NAS and move files between virtual drives, because using FTP etc. copies the files to my device and then copies them back to the destination. I just want to make sure I'm not missing something.
|
# ? Dec 5, 2016 02:50 |
That or OwnCloud is one of the few methods that doesn't involve iTunes or copying data multiple times, if I recall correctly.
|
|
# ? Dec 5, 2016 11:12 |
|
Any recommended USB 3.0 RAID enclosures/brands/deals for local storage? Also, what's the recommended RAID configuration for serving Plex and occasional video storage/editing?
|
# ? Dec 5, 2016 16:24 |
|
Man_of_Teflon posted:Also, what's the recommended RAID configuration for serving Plex and occasional video storage/editing? Whatever gives you the redundancy that you feel comfortable with. Plex isn't exactly a demanding application unless you've got a half-dozen clients each watching something different simultaneously. RAID5 with 1-disk redundancy for arrays of 4 or fewer drives is usually considered a good trade-off of space vs redundancy.
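The space-vs-redundancy arithmetic is simple enough to sketch. Disk counts and sizes here are hypothetical, and real usable space comes in lower once filesystem overhead and TB-vs-TiB marketing are accounted for:

```python
# Rough usable capacity for the common layouts, ignoring filesystem
# overhead (metadata, TB-vs-TiB marketing, etc. all eat into these).

def usable_tb(n_disks, disk_tb, layout):
    if layout == "raid5":    # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if layout == "raid6":    # two disks' worth of parity (raidz2 is similar)
        return (n_disks - 2) * disk_tb
    if layout == "raid10":   # half the disks mirror the other half
        return (n_disks // 2) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(4, 4, "raid5"))   # 12
print(usable_tb(6, 4, "raid6"))   # 16
print(usable_tb(6, 4, "raid10"))  # 12
```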
|
# ? Dec 5, 2016 16:42 |
|
D. Ebdrup posted:That or OwnCloud are one of only a few methods that don't involve iTunes or copying data multiple times, if I recall correctly. I've found that I can view my whole drive system as one big partition if I log in as admin, I just have to try it from the outside through the VPN. My problem is, having given up Android for iOS, I got used to torrenting things, and use my NAS as the torrent engine to do it, and would love a way to put things in their correct folders while VPN'ing in, rather than letting my Downloads folder grow bigger over the months and then finally doing something about it when I'm home. I guess at worst I can keep using ssh and midnight commander, but it sure is inelegant.
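For what it's worth, the ssh approach can be scripted so the move happens entirely on the NAS and nothing round-trips through the iOS device. A minimal sketch; the hostname and paths here are made up, and this assumes key-based ssh auth is already set up:

```python
# Sketch of doing the move server-side over ssh, so nothing round-trips
# through the client. The hostname and paths below are made up.
import shlex
import subprocess

def remote_move_cmd(host, src, dst):
    """Build the ssh argv that runs `mv` on the NAS itself."""
    remote = "mv {} {}".format(shlex.quote(src), shlex.quote(dst))
    return ["ssh", host, remote]

def remote_move(host, src, dst):
    """Execute the move; raises CalledProcessError if mv fails."""
    return subprocess.run(remote_move_cmd(host, src, dst), check=True)

if __name__ == "__main__":
    print(remote_move_cmd("admin@nas.local",
                          "/volume1/Downloads/file.mkv",
                          "/volume1/Movies/file.mkv"))
```

Since the paths are quoted before being handed to the remote shell, filenames with spaces survive the trip; the move itself is a rename on the NAS, not a copy.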
|
# ? Dec 5, 2016 18:26 |
|
Has anyone had the issue where upgrading to macOS from El Cap borks discovery of Synology's Apple File Services? I can consistently connect by AFP://[ip], but my server disappears from the ribbon on the left of my Finder windows. Seems like this is a known issue per a Google search, but I was wondering if any of you had any good workarounds. It's not that connecting by IP is a huge problem, but it's inelegant and I want my Mac and Synology to "just work" together.
|
# ? Dec 5, 2016 21:56 |
Apple themselves appear to have given up on AFP.
|
|
# ? Dec 6, 2016 12:38 |
|
That is sad. Networking our PowerMacs to play Marathon was a cherished childhood memory. RIP.
|
# ? Dec 6, 2016 15:53 |
|
That article is from 2013, and over time we've seen Apple drop anything that's not iPhones and iPads (see: divestment from their monitors and the AirPort / Time Capsule products), while their ideas of "innovation" on the Mac platform continue to be derived from UX related to touch-based interaction models or the usual "smaller, thinner" approach (this applies even to the Mac Pro, hell). Until I start programming and managing infrastructure like they do on Westworld professionally, that's a no-go for me and I'll go back to where I came from... Linux on a laptop, ugh.
|
# ? Dec 6, 2016 17:13 |
|
Smashing Link posted:That is sad. Networking our PowerMacs to play Marathon was a cherished childhood memory. RIP. I think you're thinking about AppleTalk It'll be completely dead on macOS when they roll out the new filesystem, as it drops AFP support. SMB is the way forward, and has been for several years.
|
# ? Dec 6, 2016 18:13 |
|
NeuralSpark posted:I think you're thinking about AppleTalk I think AFP developed out of AppleTalk, could be wrong.
|
# ? Dec 6, 2016 20:26 |
|
I still use AFP on my two macs to connect to my Synology. It's always worked better than SMB in my experience.
|
# ? Dec 8, 2016 18:20 |
|
AppleTalk was the networking protocol before TCP/IP took over in the late '90s. AFP was a file access/sharing protocol that sat on top of AppleTalk (and was later reworked to run over TCP/IP).
|
# ? Dec 8, 2016 20:29 |
|
|
|
Is the OP up to date? What is the recommended buy-a-box these days? Looking at potentially buying or building something in the next several months, trying to gauge if it's worth it to just buy something as I'm lazy.
|
# ? Dec 13, 2016 20:49 |