qutius
Apr 2, 2003
NO PARTIES
I ran a little Intel SS4200E from 2009 or so to just a couple months ago. Celeron Processor 420 @ 1.6 GHz and 2GB of RAM and four 2TB drives. Slowest drat system around, but she ran like a little tank all those years without a hiccup. LOVE that I finally got around to upgrading, but appreciate the recent chatter on old and slow systems!


admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
Is there any reason why this well-priced eBay item wouldn't work for 40GbE between two ConnectX-3 cards? It says "Infiniband" and is also maybe too good of a deal

Slash
Apr 7, 2011

I need to replace my ReadyNas NV+ (Solaris based). Its transfer rate is too slow and one of the HDDs is failing, so I think I'd rather bite the bullet now than replace the HDD.

Is this a good option? Am I missing something on the market?
Synology Diskstation DS416 with 4x WD RED HDD

https://www.amazon.co.uk/Synology-D...iskstation&th=1

£680 for 8TB
£760 for 12TB

I realise it would probably be a little cheaper to buy the HDDs separately.

e: further details. It'll be mainly used for hosting Linux ISOs to stream to my Kodi setup over 100MBit LAN. Probably don't need to do transcoding or hosting of VMs etc.. on it, but possibly could use these features in the future to rationalise/consolidate my setup.

Slash fucked around with this message at 10:30 on Oct 6, 2017

Big Nubbins
Jun 1, 2004

bobfather posted:

Maybe, but if you get the Ryzen 1600x and pair it with one of the Asus boards capable of supporting ECC (most of them, according to my cursory research) you’re like 90% there.

As much as the price point of AMD/Ryzen is attractive for a screaming HTNAS, ECC support doesn't seem to be quite there yet and I'm having trouble deciding if this is a deal killer. ASUS and ASRock seem to be the only ones producing AM4 boards with something resembling ECC support (as opposed to merely supporting ECC-capable RAM in non-ECC mode). An article from March did a pretty thorough report on the state of integration of ECC, and the most promising result was that Ubuntu logged an uncorrectable error (after the author hosed with the timings) without halting.

The other side of this is that ASUS and ASRock aren't really testing ECC RAM in their motherboards for inclusion in their QVL. They probably used whatever was lying around the test bench from 5 years ago, because I checked nearly all the QVLs across both manufacturers' current lineup and the 1 or 2 or 3 models they list can't actually be found for sale anywhere. The article I mentioned above had their luck with RAM using the Micron D9TBH chips, which are supported all over in the Intel world, so it paints a pretty good picture of how :effort: motherboard manufacturers are about ongoing development and testing with regard to ECC.

Ezekial
Jan 10, 2014
So I'm doing a RAID6 build with 8 drives at 4TB, so 24TB usable doing the math in my head. My question is on drives. I'm currently a student, so I can get a WD discount at the WD store on WD Reds for 119.99. Do I go for those, or do I do 4TB white labels for 95, which have higher speeds? Doing an LSI RAID card btw. Additional question: if I went with whites and decide to phase them out, could I phase into Reds when I get a real job with real income? My only worry is that they run at different speeds. Does that cause instability? Or will it just cap the white drives at 5400 or whatever the hell Reds run at?
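A sanity check on the capacity arithmetic: RAID6 spends two drives' worth of space on parity, so usable capacity is (n − 2) × drive size.

```shell
# RAID6 usable capacity: two drives' worth goes to parity
awk -v n=8 -v size=4 'BEGIN { printf "%d TB usable\n", (n - 2) * size }'
# -> 24 TB usable
```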

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
If you don't need it right now, you can always wait and try to catch Reds on sale. That said, going with white-labels, even if they're nominally faster, is unlikely to result in performance increases that you can actually notice. With multi-disk arrays, you're already going to have sequential reads far faster than you're going to be able to utilize over normal networks (and by normal, I mean you didn't dive into 40Gb/IB connections like some other people in this thread). Random read/writes aren't likely to be terribly faster, either.

So the real question becomes "am I willing to risk the potential for higher disk failure rates and worse warranties to save $20?" I don't know your financial situation, but either direction is probably ok.

As for replacing them, yeah, it's a bit of a pain, since under most systems you're stuck with doing it one drive at a time (and risking UREs on each go--though as long as you're not using RAID software dumb enough to ditch the entire array on a single URE that shouldn't be a big issue). But it is an option.

Running at different speeds is immaterial. The RAID controller does not give a poo poo about how a given disk works at a low level--it just divvies out requests to the drives and waits for them to finish. If that means that a request to a White completes in 20ms and it takes a Red 25ms, it just means that the White might get 5ms of idle time before the controller comes back with more requests. In fact, some Super Serious Business people intentionally utilize different drives in the same array as a method to lower the chance of multiple concurrent drive failures.

Violator
May 15, 2003


Does anyone use dual disk redundancy? I've always been paranoid that the rebuild after a disk failure would be a heavy enough operation that it could cause a second disk to crap out.

I've run Drobos for the past 10 years set with dual disk and been really happy with them, but I'm starting to think about building out a custom box that has a lot more space and capabilities and so I'm rethinking every aspect of my storage solution.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Violator posted:

Does anyone use dual disk redundancy? I've always been paranoid that the rebuild after a disk failure would be a heavy enough operation that it could cause a second disk to crap out.

I've run Drobos for the past 10 years set with dual disk and been really happy with them, but I'm starting to think about building out a custom box that has a lot more space and capabilities and so I'm rethinking every aspect of my storage solution.

It's pretty common to recommend in this thread to use more than one disk for redundancy for that very reason.

BlankSystemDaemon
Mar 13, 2009



So, apparently the FreeBSD Foundation and Delphix are sponsoring Matt Ahrens to work on raidz expansion for OpenZFS, plus there's an absolute ton of really awesome things being presented at the OpenZFS 2017 DevSummit.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

D. Ebdrup posted:

So, apparently the FreeBSD Foundation and Delphix are sponsoring Matt Ahrens to work on raidz expansion for OpenZFS.

About time. These OSS people need to give me stuff faster.

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

It's pretty common to recommend in this thread to use more than one disk for redundancy for that very reason.

Either that or use a system that won't poo poo the entire array for a single URE during a rebuild, and have good backups.

BlankSystemDaemon
Mar 13, 2009



Thermopyle posted:

About time. These OSS people need to give me stuff faster.
Sorry, I :ninja:'d my post after you posted. OpenZFS getting these kinds of features is especially important, now that Solaris is dead.

IOwnCalculus posted:

have good backups.
This cannot be emphasized enough, nor can the point that backups should be verifiable and tested - a backup isn't one unless you know it can be restored.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
Quick question for those in the know:

I have a 5 disk RAIDZ2 that I think I actually want to convert to a 6 disk RAID10.

I can’t build the RAID10 with all 6 disks because I need 1 of the disks to hold the data. Here’s my plan:

1. Backup the Z2 to one (or two) extra drives
2. In FreeNAS, kill my 5-drive pool and use 4 disks to make a RAID10
3. Move my data to the RAID10

But then here’s the crucial question:

4. Can I then add 2 more disks to my RAID10?

I want to be able to have a RAID10 composed of 3 vdevs with 2 disks in each. Pretty much all I’d have to do is add the disks as a third vdev, right?

bobfather fucked around with this message at 22:54 on Oct 6, 2017

Star War Sex Parrot
Oct 2, 2003

IOwnCalculus posted:

Either that or use a system that won't poo poo the entire array for a single URE during a rebuild
On the topic of UREs I'll quote one of your older posts...

IOwnCalculus posted:

If you care about data loss, a single drive never has been a good idea at any scale. However, the real concern with unrecoverable read errors is during a RAID rebuild, since that's pretty much the only time you would ever read the whole drive start to finish. Three or four large drives combined are now large enough that when you do a full read of all of them to rebuild a failed drive, you're reading enough data to expect at least one URE during that process just based on the manufacturer-published URE rates.
We've reached a point where we don't need "three or four large drives combined" anymore. For example, WD Red's expected read error rate is 1 bit in 12.5TB which is actually getting close to the capacity of the product line. They're at 10TB right now and I'm curious if they'll keep that spec for the 12TB Red or just quote the 12TB Gold number which pushes the error rate out an order of magnitude.

If you really care about every bit of your data, dual parity is required at this point. I personally don't covet my data enough to stress about UREs, but it's worth noting as drive capacities increase. As always...

IOwnCalculus posted:

have good backups.
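Star War Sex Parrot's numbers fall straight out of the spec sheet: 1 error per 10^14 bits is one per 12.5TB read, so a full end-to-end pass over a big drive already expects a sizable fraction of an error. A quick check (drive size in TB is a parameter; this treats the published rate as a hard average, which it isn't quite):

```shell
# Expected UREs from one end-to-end read of a tb-terabyte drive,
# at the quoted rate of 1 error per 1e14 bits read
awk -v tb=10 'BEGIN { bits = tb * 1e12 * 8; printf "%.2f expected UREs\n", bits * 1e-14 }'
# -> 0.80 expected UREs
```

Scale tb up to the summed capacity of a rebuild's surviving disks and the "expect at least one URE during a rebuild" claim follows.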

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Usenet is my backup.

BlankSystemDaemon
Mar 13, 2009



bobfather posted:

Quick question for those in the know:
EDIT: ↓ Use EVIL Gibson's gzero method (short for geom zero device; at least I think that's what he's getting at) with one caveat: since you don't get any advantage from striping vdevs except more IOPS, I would recommend making a RAIDz2 with 6 devices instead - that way you can lose two disks (or one while you're replacing another, once drives start needing to be replaced).

It's worth noting that geom zero devices don't actually grow at all, data gets written to /dev/null - but it's useful for migrating from UFS or in situations like yours.

BlankSystemDaemon fucked around with this message at 23:13 on Oct 6, 2017

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

bobfather posted:

Quick question for those in the know:

I have a 5 disk RAIDZ2 that I think I actually want to convert to a 6 disk RAID10.

I can’t build the RAID10 with all 6 disks because I need 1 of the disks to hold the data. Here’s my plan:

1. Backup the Z2 to one (or two) extra drives
2. In FreeNAS, kill my 5-drive pool and use 4 disks to make a RAID10
3. Move my data to the RAID10

But then here’s the crucial question:

4. Can I then add 2 more disks to my RAID10?

I want to be able to have a RAID10 composed of 3 vdevs with 2 disks in each. Pretty much all I’d have to do is add the disks as a third vdev, right?

What you can do is create the pool with the last mirror vdev made of one real drive and one fake stand-in, since ZFS won't let you create a vdev without all drives present at pool creation.

I was in the same situation. One solution is to create a fake drive mount that reports as whatever size you need it to be but is actually a file that starts at no size but will grow in size as you write to it.

Get the two disks lined up and then create the fake hdd. You will add them to the pool successfully and you'll see the fake drive starting to fill up; bring down the pool immediately and remove the fake mount. The pool will report as degraded. Then when you are ready, use the replace command to put in your real third drive and wait until zfs repopulates and marks the pool as good.

All this time, you will be one failure from destruction so backup backup backup.
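The fake-drive trick can be sketched with a sparse file; the device names da0-da5 and the pool name here are placeholders, and this is the shape of the idea rather than a tested recipe:

```shell
truncate -s 4T /tmp/fake.img          # sparse file: ~0 bytes on disk, grows only as written
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 /tmp/fake.img
zpool offline tank /tmp/fake.img      # degrade that vdev before real data piles onto the file
rm /tmp/fake.img
# ...restore your data, then once the real sixth disk arrives:
zpool replace tank /tmp/fake.img da5  # resilver onto the real drive
```

Between the offline and the end of that resilver the last vdev has zero redundancy, hence the backup backup backup.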

Internet Explorer
Jun 1, 2005





Thermopyle posted:

Usenet is my backup.

What's your RTO on that restore?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
After I've taken a ZFS snapshot - is there a combination of rsync options that will let me move files around without incurring the cost of total data moved, i.e. just the metadata cost of filesystem changes?

Mr Shiny Pants
Nov 12, 2012

D. Ebdrup posted:

So, apparently the FreeBSD Foundation and Delphix are sponsoring Matt Ahrens to work on raidz expansion for OpenZFS, plus there's an absolute ton of really awesome things being presented at the OpenZFS 2017 DevSummit.

Seeing that DSSD has been bought, maybe they could entice Bonwick to do some ZFS work again.

Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

After I've taken a ZFS snapshot - is there a combination of rsync options that will let me move files around without incurring the cost of total data moved, i.e. just the metadata cost of filesystem changes?

That's what ZFS send and receive is for.
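A minimal sketch of that workflow (pool and dataset names made up). The incremental form only ships blocks changed since the last common snapshot, which is the metadata-cost behaviour rsync can't give you:

```shell
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs receive backup/data             # full initial copy
# later: only blocks changed since @monday cross the wire
zfs snapshot tank/data@tuesday
zfs send -i @monday tank/data@tuesday | zfs receive backup/data
```

Because the incremental stream is computed from snapshot block pointers rather than file hashing, moves and renames within the dataset travel as metadata, not re-copied data.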

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

EVIL Gibson posted:

More advice

Thanks for the advice!

But help me understand one thing. I have 5 disks right now in RAIDZ2. I want to go 6 disks in a 2x2x2 RAID10.

Couldn’t I just:

1. Backup
2. Manually degrade two disks (i.e., remove them from the RAIDZ2 through FreeNAS - at this point the Z2 would be severely compromised but still function fine, though probably a lot slower)
3. Mirror those two disks into a vdev
4. Restore onto new vdev
5. Dissolve the RAIDZ2 and create a vdev of 2 more disks and stripe it with my original vdev
6. Create yet another vdev of 2 more disks and stripe it with those vdevs

Shouldn’t this work fine for creating a 2x2x2 RAID10?

Also with 6 disks, a 2x2x2 RAID 10 is nice versus a 3x3 RAID10 because it’s easier to expand (only have to upgrade 2 drives at a time, rather than 3), it will have more space, and it will have better read/write performance, but losing any individual vdev totally will kill the whole array, correct?

Whereas the 3x3 RAID10 will offer 1/3 less space overall, be tougher to expand, but be better at resisting failures (since you can technically lose 2 drives from each vdev and still be fine). Is my pro/con thinking accurate here?

bobfather fucked around with this message at 15:23 on Oct 7, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

So, apparently the FreeBSD Foundation and Delphix are sponsoring Matt Ahrens to work on raidz expansion for OpenZFS, plus there's an absolute ton of really awesome things being presented at the OpenZFS 2017 DevSummit.
I hope this involves a better solution than the permanent remapping table the last time that topic got any attention.

--edit: Ooh, ZSTD compression.
--edit2: lol Oracle giving a keynote. Better be about releasing their ZFS bits developed since, otherwise they can gently caress off.

Combat Pretzel fucked around with this message at 15:34 on Oct 7, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Combat Pretzel posted:

I hope this involves a better solution than the permanent remapping table the last time that topic got any attention.

--edit: Ooh, ZSTD compression.
--edit2: lol Oracle giving a keynote. Better be about releasing their ZFS bits developed since, otherwise they can gently caress off.

Oracle can gently caress off anyway.

BlankSystemDaemon
Mar 13, 2009



bobfather posted:

Thanks for the advice!

But help me understand one thing. I have 5 disks right now in RAIDZ2. I want to go 6 disks in a 2x2x2 RAID10.

Couldn’t I just:

1. Backup
2. Manually degrade two disks (i.e., remove them from the RAIDZ2 through FreeNAS - at this point the Z2 would be severely compromised but still function fine, though probably a lot slower)
3. Mirror those two disks into a vdev
4. Restore onto new vdev
5. Dissolve the RAIDZ2 and create a vdev of 2 more disks and stripe it with my original vdev
6. Create yet another vdev of 2 more disks and stripe it with those vdevs

Shouldn’t this work fine for creating a 2x2x2 RAID10?

Also with 6 disks, a 2x2x2 RAID 10 is nice versus a 3x3 RAID10 because it’s easier to expand (only have to upgrade 2 drives at a time, rather than 3), it will have more space, and it will have better read/write performance, but losing any individual vdev totally will kill the whole array, correct?

Whereas the 3x3 RAID10 will offer 1/3 less space overall, be tougher to expand, but be better at resisting failures (since you can technically lose 2 drives from each vdev and still be fine). Is my pro/con thinking accurate here?
You've fundamentally misunderstood how striped mirrors work. If you lose two drives in the same vdev, you lose data. Striped mirrors exist so that IOPS are spread across the vdevs, improving IOPS roughly linearly with vdev count. The other advantage striped mirrors have is that resilvering is a bit faster (depending on a lot of factors), since all ZFS needs to do is copy data from one drive to another - but that only matters if your CPU isn't fast enough to do the raidz calculations, which isn't very likely given the speed of CPUs vs the speed of disks.
If you want your data to stay available even if you lose two disks at the same time, you need RAIDz2 (similar to RAID6).
ZFS also offers RAIDz3, which has no parallel in traditional RAID setups to my knowledge, but tolerates up to three simultaneous disk failures without losing data.
Moreover, ZFS cannot rebalance itself once an additional vdev gets added to an existing pool - so if you add a 4th mirror when you already have 3, only newly written data gets striped across all 4 vdevs.

Combat Pretzel posted:

lol Oracle giving a keynote. Better be about releasing their ZFS bits developed since, otherwise they can gently caress off.
Holy poo poo, they are actually doing a keynote on ZFS at OpenZFS? I just can't see Larry Ellison doing anything good for anyone ever, but I also have to admit that the fact that they recently handed over Java does seem to suggest.... something, although I don't know what.
Been looking around a bit, and it seems the only advantage Oracle ZFS has over OpenZFS is encryption, but only because OpenZFS has a policy of not letting code be upstreamed until something has been tested in production for at least a year.
So I'm not really sure what the point of the keynote will be.

BlankSystemDaemon fucked around with this message at 17:45 on Oct 7, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Internet Explorer posted:

What's your RTO on that restore?

Whatever 15MB/s (my max download speed) is for however many TB I have. Certainly faster than I'm going to get from Crashplan or wherever.

The thing is, there's going to be some subset of stuff that isn't available anymore because of DMCA stuff. I'm willing to make that tradeoff to save me the thousands of dollars it would cost to do a full on-site backup.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

D. Ebdrup posted:

If you want that your data should be available even if you lose two disks at the same time, you need to use RAIDz2 (similar to RAID6).

I think you misunderstood him. He's comparing creating three mirrored vdevs of two disks each to two mirrored vdevs of three disks each. If he goes the three disks per mirrored vdev route he can lose two disks from either vdev (in the best case he could lose four disks without losing data, two from each vdev), which is what he wants, but in exchange he uses six disks to only get two disks of usable space.


If you have good backups one slightly risky option is to create a two or three way stripe first, move the data onto it, then trash the raidz2 pool and add the remaining drives as mirrors for each drive already in the pool. It depends how costly restoring from backup would be and how much you care about getting maximum IOPS for your existing data.
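That stripe-then-attach path, sketched with placeholder device names; `zpool attach` (not `add`) is the command that turns a lone disk into a mirror, whereas `add` would widen the stripe instead:

```shell
zpool create tank da0 da1 da2   # three-disk stripe: zero redundancy until the attaches finish
# ...move the data over from backup and verify it...
zpool attach tank da0 da3       # each attach converts one single-disk vdev into a 2-way mirror
zpool attach tank da1 da4
zpool attach tank da2 da5
zpool status tank               # wait for all three resilvers to complete
```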

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Desuwa posted:

I think you misunderstood him. He's comparing creating three mirrored vdevs of two disks each to two mirrored vdevs of three disks each. If he goes the three disks per mirrored vdev route he can lose two disks from either vdev (in the best case he could lose four disks without losing data, two from each vdev), which is what he wants, but in exchange he uses six disks to only get two disks of usable space.


If you have good backups one slightly risky option is to create a two or three way stripe first, move the data onto it, then trash the raidz2 pool and add the remaining drives as mirrors for each drive already in the pool. It depends how costly restoring from backup would be and how much you care about getting maximum IOPS for your existing data.

Thanks for this. Thank you too, Ebdrup. I especially find the last thing you wrote about ZFS being unable to rebalance itself to be the biggest hurdle.

If I'm not mistaken, that means if I build a 2 disk vdev that mirrors, then copy all my data to it, when I stripe in additional mirrors the majority of the original data will be on that original 2 disk vdev, and only new data added to the pool will get distributed amongst the new vdevs I striped in after the fact. That seems like a huge problem for reliability.

I think I'm going to just stick with my 5 disk RAIDZ2, but I just had one last thought: does FreeNAS allow you to recopy data already on a pool back to itself, in order to facilitate the recopied data being evenly distributed amongst all the disks in the pool?

My pool is nowhere near full (less than 2 tb of data on a pool that can take 5.5-6 tb), so it would be totally possible to recopy or clone the data twice on the same pool. I just want all the data to be evenly distributed amongst all the striped vdevs to maximize against losses.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

bobfather posted:

If I'm not mistaken, that means if I build a 2 disk vdev that mirrors, then copy all my data to it, when I stripe in additional mirrors the majority of the original data will be on that original 2 disk vdev, and only new data added to the pool will get distributed amongst the new vdevs I striped in after the fact. That seems like a huge problem for reliability.

It doesn't have any effect on reliability. If you lose a single vdev you lose the pool, it doesn't matter how much data is present on any particular vdev. It will affect performance, to some extent, but you have to be doing something particularly intensive for that to matter.

bobfather posted:

My pool is nowhere near full (less than 2 tb of data on a pool that can take 5.5-6 tb), so it would be totally possible to recopy or clone the data twice on the same pool. I just want all the data to be evenly distributed amongst all the striped vdevs to maximize against losses.

I think here you're confusing disks with vdevs. A single disk can be a vdev but the individual disks in a raidz2 vdev aren't vdevs themselves. You don't have to worry about striping or distribution inside a raidz vdev. If your raidz2 vdev loses three drives it is dead and so is the pool containing it, the remaining drives cannot be read to get "some" of the data back.


Copying data back onto a pool can be useful for making sure it's written sequentially but it's probably not something worth worrying about.

BlankSystemDaemon
Mar 13, 2009



A vdev is one of the virtual device types that a pool can consist of: mirror (≥ 2 disks), raidz1 (≥ 3 disks), raidz2 (≥ 4 disks), raidz3 (≥ 5 disks), cache (l2arc), or log (slog). You can have multiples of each type of virtual device, and they can be basically any block device your OS supports (although certain block devices are better suited to certain things).
What are you trying to achieve by doing two striped 3-way mirrors, though? The only advantage of multiple vdevs in a zpool is the IOPS increase, but your use-case doesn't appear to be running an enterprise database. Anyone with such a use-case will also be able to afford SSDs (which have orders of magnitude higher IOPS), and to have hot spares available so the resilver doesn't have to wait for a disk to be replaced.
If you lose 3 disks in a three-way mirror you will lose data, whereas with RAIDz3 you can lose 3 disks and, provided you mount the faulted pool read-only, still at least have a chance of recovering data before another disk fails or you get a URE and the file gets marked as corrupted (unlike some forms of RAID, which can fault on a single URE).

About reliability: forget that term - it's hugely misleading unless you're doing multipath dual-controller across multiple machines with hast and carp. Think instead of data availability, and remember to look up a few posts to where people wrote about RAID not being a reason not to have backups.

Speaking of pools being full, there's a persistent rumour about ZFS basically making GBS threads itself speed-wise at 80% capacity? My pool is at 95-96% and is still saturating 1Gbps SMB (TCP) traffic, so it isn't true when you treat your storage like WORM storage and ensure that everything you write is as sequential as possible (with the built-in ZIL to smooth out any small inconsistencies) - not that that's surprising, but it just goes to show that yet another rumour about ZFS is more complicated than it looks.

BlankSystemDaemon fucked around with this message at 22:49 on Oct 7, 2017

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
I have to sync up some files when I don't know where they might already be on the file server.

I used a tool that did exactly this before, but forgot if it was some odd rsync flag or whatever.

Imagine this scenario

code:
On PC:
Pics/NOTPORN/2015/Hot_Porn001.jpg [md5 hash for 001]
Pics/NOTPORN/2015/Hot_Porn002.jpg [md5 hash for 002]
Pics/NOTPORN/2015/Hot_Porn004.jpg [md5 hash for 004]

On File Server (after getting hashes of all of /store/pics):
/store/pics/Trip_To_GrandCanyon/ACTUALLYPORN/Hot_Porn001.jpg [md5 hash for 001]
/store/pics/Trip_To_GrandTetons/JUSTMOREPORN/Hot_Porn002.jpg [md5 hash for 002]
So what should happen is the comparer should get the hashes of all files and let me know that I don't need to move Hot_Porn001 and Hot_Porn002 over from the PC, because it found them somewhere in the directory (and all of its recursive subdirectories) I pointed it at. It goes through everything, finds that I don't have Hot_Porn004 anywhere, and reports that it did not find a match.

After I move 004 over to a new directory for "Trip_To_Appalacians" I will delete everything from the source.
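Absent the original tool's name, the comparison described here can be sketched with stock find + md5sum; the function name, paths, and demo files below are all made up for illustration:

```shell
# find_missing PC_DIR SERVER_DIR: list files under PC_DIR whose content
# (by md5) appears nowhere under SERVER_DIR, whatever the filename or path
find_missing() {
    find "$2" -type f -exec md5sum {} + | awk '{print $1}' | sort -u > /tmp/server.md5
    find "$1" -type f -exec md5sum {} + | while read -r hash path; do
        grep -qx "$hash" /tmp/server.md5 || echo "no match on server: $path"
    done
}

# Tiny demo: 001 exists on the server under a different name and path, 004 doesn't
rm -rf /tmp/demo
mkdir -p /tmp/demo/pc /tmp/demo/server/nested
printf 001 > /tmp/demo/pc/Hot_Porn001.jpg
printf 004 > /tmp/demo/pc/Hot_Porn004.jpg
printf 001 > /tmp/demo/server/nested/renamed.jpg
find_missing /tmp/demo/pc /tmp/demo/server
# -> no match on server: /tmp/demo/pc/Hot_Porn004.jpg
```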

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
Thanks for sorting me out fellas. After I wrote that last post I realized that my thoughts about how data are distributed in a pool were wrong. Thanks for setting me straight.

As far as why I want to move from RAIDZ2 to RAID10, it's mostly for the ease of upgrading the pool later, in terms of the number of disks that have to be replaced to expand its total size and how long it takes to resilver each disk. RAIDZ is objectively worse in those two aspects compared to mirrored vdevs.

Greatest Living Man
Jul 22, 2005

ask President Obama
Is anyone here familiar with setting up OpenVPN on FreeNAS? I can now connect to my VPN from an outside computer, but I can't access any intranet sites (like 192.168.1.232, my freeNAS WebUI). I know it has something to do with routing but I'm not really sure where to go from here.

code:
push "route 192.168.1.0 255.255.255.0"
doesn't make my VPN client see anything on my intranet, and I don't think traffic is actually being routed through. I've tried this on my phone with 4G as well so I'm pretty sure it's not a weird LAN incompatibility.

Mr Shiny Pants
Nov 12, 2012
Not really an answer, but I've set up OpenVPN on pfSense; it has a really nice GUI for it, plus client packages. If you have some spare CPU, storage and RAM I would take a look at that.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Mr Shiny Pants posted:

Not really an answer, but I've setup OpenVPN on pfSense, it has really nice GUI for it and client packages. If you have some spare CPU, storage and RAM I would take a look at that.

Dead simple too. Install the client export package and run through the built-in wizard and you'll be up and running in 5 minutes.

Nulldevice
Jun 17, 2006
Toilet Rascal

Greatest Living Man posted:

Is anyone here familiar with setting up OpenVPN on FreeNAS? I can now connect to my VPN from an outside computer, but I can't access any intranet sites (like 192.168.1.232, my freeNAS WebUI). I know it has something to do with routing but I'm not really sure where to go from here.

code:
push "route 192.168.1.0 255.255.255.0"
doesn't make my VPN client see anything on my intranet, and I don't think traffic is actually being routed through. I've tried this on my phone with 4G as well so I'm pretty sure it's not a weird LAN incompatibility.

You need a route to point your ovpn block back to the server or else your router won't know what to do with the traffic. Should be as simple as adding a static route in your router. As far as pushing all your traffic out the default gateway you need a specific statement in the server config to do this. push "redirect-gateway def1" should push traffic out the gateway. I've tested this extensively with clients in places like the UK, France, and China.

edit: you'll also need DNS in there as well.
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

-N

Nulldevice fucked around with this message at 13:06 on Oct 9, 2017

Greatest Living Man
Jul 22, 2005

ask President Obama

Nulldevice posted:

You need a route to point your ovpn block back to the server or else your router won't know what to do with the traffic. Should be as simple as adding a static route in your router. As far as pushing all your traffic out the default gateway you need a specific statement in the server config to do this. push "redirect-gateway def1" should push traffic out the gateway. I've tested this extensively with clients in places like the UK, France, and China.

edit: you'll also need DNS in there as well.
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

-N

I added a static route to my router with the host: 192.168.1.254 (my openvpn jail IP address) netmask: 255.255.255.0 gateway: 192.168.1.1 (router IP) metric: 2 and type: WAN. Is this the correct way of thinking about it or should I be creating a static route with the IP that my openVPN assigns? (10.8.0.6)
Not getting any sense of an inter/intranet connection currently.

Nulldevice
Jun 17, 2006
Toilet Rascal

Greatest Living Man posted:

I added a static route to my router with the host: 192.168.1.254 (my openvpn jail IP address) netmask: 255.255.255.0 gateway: 192.168.1.1 (router IP) metric: 2 and type: WAN. Is this the correct way of thinking about it or should I be creating a static route with the IP that my openVPN assigns? (10.8.0.6)
Not getting any sense of an inter/intranet connection currently.

You should have a route for 10.8.0.0/whatever pointing to the OpenVPN server. This will allow your server to talk to the other hosts on your network.
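Pulling the exchange together, the server side ends up looking something like this (addresses taken from the posts above; the 10.8.0.0 route belongs on the router, not in the OpenVPN config):

```shell
# server.conf in the jail at 192.168.1.254
push "route 192.168.1.0 255.255.255.0"   # clients learn the LAN route
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
# push "redirect-gateway def1"           # only if ALL client traffic should tunnel

# on the router: static route so LAN hosts can answer tunnel clients
#   destination 10.8.0.0  netmask 255.255.255.0  gateway 192.168.1.254
```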

some kinda jackal
Feb 25, 2003

 
 
I'm in a position to throw probably 128GB or more memory in a Dell R710 I plan to use as a FreeNAS machine. It'll serve eight 2tb drives in some kind of arrangement I haven't thought about yet. Is there really any advantage to maxing out the memory or is there a point at which it'll just be pointless without a lot more storage to serve?

I don't plan on having the freenas machine doing anything but serving storage so I can't see running many services.


TTerrible
Jul 15, 2005
No fear of not having the RAM to turn dedup on. I can't think of anything else beyond that.
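For dedup specifically, a widely quoted ballpark is roughly 5GB of RAM per TB of deduplicated pool data - heavily dependent on record size and DDT entry count, so treat it as an order-of-magnitude guide. For the eight 2TB drives above (taking raw capacity):

```shell
# rough dedup-table RAM estimate, using the common ~5 GB per TB rule of thumb
awk -v tb=16 -v gb_per_tb=5 'BEGIN { printf "%d GB of RAM for the DDT\n", tb * gb_per_tb }'
# -> 80 GB of RAM for the DDT
```

Which is why 128GB is about the first RAM size where turning dedup on stops being scary.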
