Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

If you have ZFS and use it via some "managed" system a la FreeNAS, you can set up automatic snapshots to contain fat finger deletes to some degree. You can set up mixed schedules of periods and retention, too. Say keep one snapshot per hour over the last three hours, then one per day over the last three days and then say one every week over the last four weeks.

I have a retarded matryoshka doll setup of Windows file shares being stored on iSCSI LUNs published via COMSTAR, which is using ZFS pools as the data store. But it does allow me to set up a poo poo simple set of VSS scripts to contain a lot of that accidental deletion stuff.

I just put together a 10-disk RAID-Z2 pool made of the 8TB Ironwolf drives. The latest Backblaze drive survey showed pretty good numbers for the 4+TB Seagates, so I figured why not? That, and my old pools were getting kinda super full.

phosdex
Dec 16, 2005

If you're using Samba shares you can also just enable the recycle bin. I do that, plus a cron script that runs once a day and deletes anything in there older than 7 days.
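
Roughly what that looks like, if anyone wants to copy it - the share path, recycle directory and retention window here are made up, so adjust to taste:

code:
# In the share's section of smb.conf (Samba's recycle VFS module):
#   vfs objects = recycle
#   recycle:repository = .recycle
#   recycle:keeptree = yes
# Daily cron cleanup of anything older than 7 days:
0 4 * * * find /tank/share/.recycle -type f -mtime +7 -delete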

Phosphine
May 30, 2011

WHY, JUDY?! WHY?!
🤰🐰🆚🥪🦊

Paul MaudDib posted:

So if you have 5 drives in your array, you have a 0.95^5 = 77% chance of your array failing.

23%. 77% chance of surviving.
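
For anyone who wants to check the arithmetic themselves (assuming each drive independently has a 95% chance of surviving the period in question):

code:
$ echo '0.95^5' | bc -l
.77378093750000000000
$ echo '1 - 0.95^5' | bc -l
.22621906250000000000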

BlankSystemDaemon
Mar 13, 2009



people posted:

:words: about the pros and cons of RAID5/6
The one issue with hardware RAID5/6 that good software RAID solutions solve is the write hole - they're not susceptible to it (in fact, that might be part of the definition of a good software RAID solution).

Combat Pretzel posted:

If you have ZFS and use it via some "managed" system a la FreeNAS, you can set up automatic snapshots to contain fat finger deletes to some degree. You can set up mixed schedules of periods and retention, too. Say keep one snapshot per hour over the last three hours, then one per day over the last three days and then say one every week over the last four weeks.
To be fair to everything not-FreeNAS, this is just a cronjob, so basically everything that can run ZFS drat well better also implement cron or it ain't worth using in the first place.
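
To illustrate, a sketch of the sort of thing I mean - the dataset name and schedule are made up, and the rotation trick only keeps a fixed rolling window:

code:
# crontab on any ZFS-capable system: rolling 24 hourly and 7 daily snapshots of tank/data
0 * * * * /sbin/zfs destroy tank/data@hourly-$(date +\%H) 2>/dev/null; /sbin/zfs snapshot tank/data@hourly-$(date +\%H)
0 0 * * * /sbin/zfs destroy tank/data@daily-$(date +\%u) 2>/dev/null; /sbin/zfs snapshot tank/data@daily-$(date +\%u)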

Methylethylaldehyde posted:

I have a retarded matryoshka doll setup of Windows file shares being stored on iSCSI LUNs published via COMSTAR, which is using ZFS pools as the data store.
Doesn't OpenSolaris or whatever derivative of it you're using (Illumos?) support kernel iSCSI targets? FreeBSD has that, and I thought it was the last Unix-like to implement it.

Thermopyle posted:

ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4JY3KC
Did it default to using this drive identifier for you? I've been considering taking the time to boot a FreeBSD live CD (well, USB flash disk) and renaming all my .eli devices for my encrypted root-on-zpool to something more useful, and that naming scheme is basically perfect because it identifies how the disks are connected, what make and model they are, and what looks like their serial numbers - so if you've got as much OCD as me and label your disks with their SN on the outward-facing side, you can quickly identify which drive to pull based on zpool status.
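
As a sketch of the labelling I mean (the disk name, partition index and label text below are all invented):

code:
# put a meaningful GPT label on the GELI-backed partition
gpart modify -i 2 -l wd-red-5tb-SN12345 ada3
# once geli attaches it, the provider shows up as /dev/gpt/wd-red-5tb-SN12345.eli,
# which is what zpool status will print, assuming the pool was built on the gpt/ paths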

Tevruden
Aug 12, 2004

D. Ebdrup posted:

Did it default to using this drive identifier for you? I've been considering taking the time to boot a FreeBSD live CD (well, USB flash disk) and renaming all my .eli devices for my encrypted root-on-zpool to something more useful, and that naming scheme is basically perfect because it identifies how the disks are connected, what make and model they are, and what looks like their serial numbers - so if you've got as much OCD as me and label your disks with their SN on the outward-facing side, you can quickly identify which drive to pull based on zpool status.

Looks like Linux's /dev/disk/by-id:

code:
tevruden@wolfgang:~$ ls -1 /dev/disk/by-id/ata*
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304893H
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304893H-part1
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304904E
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304904E-part1
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT-part1
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT-part9

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Phosphine posted:

23%. 77% chance of surviving.

Whoops, fixed.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Another problem is resilver time. You can easily be looking at days+ to rebuild a high-capacity RAID array if you lose a drive. If this is a business-critical system, ask yourself what you would do in the meantime. Do you have alternatives, or are you losing money? That will drive your design here too.

The tl;dr is: for a simple use case I would forget about RAID. You should probably go with either LVM or ZFS, with either one or two volume groups/pools. Then you create one big volume in each volume group, so you have either one big spanned volume across all four drives, or two volumes each spanned across two of the four drives. Then you serve them with Samba and clone them across every night using rsync. Use FreeNAS.
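
The nightly clone part is just a one-liner in cron - something like this sketch (the mountpoints are assumptions):

code:
# 02:00 mirror of pool1 onto pool2; --delete keeps the copy an exact clone
0 2 * * * rsync -aH --delete /mnt/pool1/ /mnt/pool2/ >> /var/log/nightly-rsync.log 2>&1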

You will still benefit from ZFS's data integrity checks in this situation, and it gives you a lot of future options for expansion/performance increases/etc. You just won't be able to use some of the features like snapshotting unless you increase your disk space first, but that will just be a matter of plugging in more drives; you won't have to dump and rebuild like you would to change your underlying filesystem.

Either way I think you would get more for your money by building your own machine (again, I like this build), but of course that takes time. You could really do worse than buying a ThinkServer TS140 or something similar, too. They are nice little boxes.

I still do recommend ECC RAM if at all feasible, regardless of what hardware you get. It's not a must-have but it's pretty high on the importance list. Note that your motherboard/CPU also need to support it, and different machines use different types (registered/unregistered/fully-buffered/load-reduced/etc).

You also want at least 8 or 16 GB of RAM in this, especially if you are going with ZFS.

Paul MaudDib fucked around with this message at 16:20 on Jan 24, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

D. Ebdrup posted:

Did it default to using this drive identifier for you?

This guy is right:


Tevruden posted:

Looks like linux's /dev/disk/by-id:

code:
tevruden@wolfgang:~$ ls -1 /dev/disk/by-id/ata*
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304893H
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304893H-part1
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304904E
/dev/disk/by-id/ata-Samsung_SSD_850_PRO_128GB_S24ZNXAH304904E-part1
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT-part1
/dev/disk/by-id/ata-ST4000DM000-2AE166_WCD00BZT-part9

The two machines I have zfs pools on are running ZFS-On-Linux and both seem to default to using /dev/disk/by-id.
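
And if a ZoL pool was originally put together with the sdX names, you can flip it over to by-id by re-importing - roughly like this (the pool name is an assumption):

code:
zpool export tank
zpool import -d /dev/disk/by-id tank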

Paul MaudDib posted:

Another problem is resilver time. You can easily be looking at days+ to rebuild a high-capacity RAID array if you lose a drive.

For reference, I just resilvered a RAIDZ1 pool with four 2 TB drives in it (which has a capacity of 5.17TB) and it took 17 hours.

The real lovely thing is that after it finished, I did a reboot and it started the resilvering all over again... currently trying to figure out what's up with that.

Thermopyle fucked around with this message at 17:08 on Jan 24, 2017

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

D. Ebdrup posted:

Doesn't OpenSolaris or whatever derivative of it you're using (Illumos?) support kernel iSCSI targets? FreeBSD has that, and I thought it was the last Unix-like to implement it.


I'm using OmniOS, and COMSTAR is the kernel-level iSCSI implementation, or at least the management frontend for it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

To be fair to everything not-FreeNAS, this is just a cronjob, so basically everything that can run ZFS drat well better also implement cron or it ain't worth using in the first place.
The cron job becomes mildly complex when it comes to properly tracking these snapshots and making sure the oldest/right ones get deleted. I'd rather use a "battle-proven" interface for it instead of jerry-rigging cron jobs or hoping that third-party scripts function properly.

Mr. Crow
May 22, 2008

Snap City mayor for life
What's my best option for backups if I'm running my NAS as a VM under a hypervisor? My understanding is that this rules out ZFS (technically doable, but from my research there seem to be too many gotchas)?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The only real gotcha is that if you pass raw or file-backed devices through, there's possibly no way for ZFS to force flushes (maybe your hypervisor of choice passes flushes requested on the virtual disk down to the real one in the case of raw devices - might be worthwhile to find out), which a big part of its data integrity is based on. If you can pass through an HBA via PCIe passthrough, then the situation is different.

IOwnCalculus
Apr 2, 2003





Mr. Crow posted:

What's my best option for backups if I'm running my NAS as a VM under a hypervisor? My understanding is that this rules out ZFS (technically doable, but from my research there seem to be too many gotchas)?

The absolute best case here is to run a hypervisor / hardware setup that works with Intel VT-d or AMD IOMMU, and directly pass the hardware controller(s) with the NAS disks attached through to the NAS VM. The NAS VM then gets bare-metal access to the drives, and you could physically remove them and plug them into a physical box with the same OS, no problem.
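
If you're not sure whether your board/CPU combo actually has that enabled, a quick sanity check from a Linux environment looks something like this (generic check, not tied to any particular hypervisor):

code:
# DMAR (Intel VT-d) or AMD-Vi messages mean the IOMMU was found and enabled
dmesg | grep -Ei 'dmar|amd-vi|iommu'
# populated IOMMU groups are another good sign
ls /sys/kernel/iommu_groups/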

suddenlyissoon
Feb 17, 2002

Don't be sad that I am gone.
How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with six 4TB drives in SHR2 and I'm about 80% full. I'm also running another six USB drives of varying sizes as backups. At my current pace, I expect I'll run out of space within the year.

Do I buy a new Synology-type unit and use it only for storage, mount the drives under Xpenology and go from there? Do I try to find a SAS card that Xpenology accepts and move everything into a new case with new hard drives? Or do I just blow everything away and start over with new drives under FreeNAS?

BlankSystemDaemon
Mar 13, 2009



Thermopyle posted:

The two machines I have zfs pools on are running ZFS-On-Linux and both seem to default to using /dev/disk/by-id.
Makes sense - Linux's bare /dev/<disk> names aren't persistent.

Methylethylaldehyde posted:

I'm using OmniOS, and COMSTAR is the kernel-level iSCSI implementation, or at least the management frontend for it.
Ah - I'm used to COMSTAR being used to refer to the userland implementation, but I guess it's not specific to that and can apply equally to the kernel level, since it's just the front-end for controlling it whether it's in userland or the kernel.

Combat Pretzel posted:

The cron job becomes mildly complex when it comes to properly tracking these snapshots and making sure the oldest/right ones get deleted. I'd rather use a "battle-proven" interface for it instead of jerry-rigging cron jobs or hoping that third-party scripts function properly.
For what it's worth, it's not exactly jury-rigging when it comes to FreeBSD just because you have to use a command line. It's fine that FreeNAS exists, because there are people who wouldn't otherwise get the huge benefits it offers, but I've found that some of its ways of simplifying things make it difficult to accomplish what Unix-likes that adhere to the UNIX philosophy are good at: namely, letting you attempt to be a genius by creating smart solutions. All of that aside, there are plenty of places where you can find people to help you verify that it'll work like you think it will before you deploy it to production (and aside from that, there are even ways to test it).

One such smart solution that I can recall off-hand, because I heard about it recently, is when Allan Jude described how he moved from a UFS FreeBSD installation to a ZFS installation with no more than one reboot and about a reboot's worth of downtime - it went something like this:
1) Add disks, partition to add bootcode, swap, and GELI partitions for root on an encrypted zpool, as well as a gzero device (unfortunately it's one of a handful of things not documented in a manpage, but basically it's a +2EB device that writes to /dev/null and returns 0 when read from, which can be used as a stand-in for the existing disk with the UFS partition so long as it isn't read from)
2) Set up a zpool on the GELI partitions along with the various datasets and install FreeBSD into it
3) Sync all files over to the zpool with rsync, then do it again to get the changed files, then go to single-user mode to prevent file changes and rsync a third time (roughly as sketched below the list)
4) Reboot into the zpool, wipe the UFS disk, add bootcode and swap to the remaining disk, set up a GELI partition, then replace the gzero device with the remaining disk.
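
Step 3 in shell terms looks roughly like this (the mountpoints are made up, and the exact rsync flags are my guess at what he used):

code:
rsync -aAXH /old-ufs/ /mnt/newpool/    # first full pass while the system is live
rsync -aAXH /old-ufs/ /mnt/newpool/    # second pass catches files changed meanwhile
shutdown now                           # drop to single-user mode so files stop changing
rsync -aAXH /old-ufs/ /mnt/newpool/    # final pass against quiescent data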

Mind you, he's not only the guy who wrote the bootcode for FreeBSD's bootloader to support booting from both MBR and GPT to a zpool and an encrypted zpool (written in x86 assembly, that is), he's also written several books on FreeBSD and ZFS with Michael W. Lucas, so it's not surprising that he comes up with stuff this genius.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

suddenlyissoon posted:

How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with six 4TB drives in SHR2 and I'm about 80% full. I'm also running another six USB drives of varying sizes as backups. At my current pace, I expect I'll run out of space within the year.

Do I buy a new Synology-type unit and use it only for storage, mount the drives under Xpenology and go from there? Do I try to find a SAS card that Xpenology accepts and move everything into a new case with new hard drives? Or do I just blow everything away and start over with new drives under FreeNAS?

If you have the ability to add additional drives to your existing system through some sort of SAS HBA, that would certainly be the easiest solution. XPenology supports most of the popular LSI cards you find on eBay, and there are plenty of guides around the net on how to flash them to IT mode.

Another option is to gradually upgrade your current drives to larger ones by replacing one drive at a time and letting XPenology resilver the new drive. This isn't as cost-effective, since you're then left with perfectly good 4TB drives that you can't use.

Another option is to build a new XPenology system and set it up as an expansion unit for your main system.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

suddenlyissoon posted:

How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with six 4TB drives in SHR2 and I'm about 80% full. I'm also running another six USB drives of varying sizes as backups. At my current pace, I expect I'll run out of space within the year.

Do I buy a new Synology-type unit and use it only for storage, mount the drives under Xpenology and go from there? Do I try to find a SAS card that Xpenology accepts and move everything into a new case with new hard drives? Or do I just blow everything away and start over with new drives under FreeNAS?

Well, what do you want out of your upgrade? Do you actually need RAID performance? If so, there's no way around it, you're buying more drives or you're buying a new set of larger drives one way or another. There's no free lunch here.

You could also use LVM/ZFS, set up a volume group on a single new drive, transfer a drive's worth of crap over, nuke the old drive, grow the volume group, rinse/repeat. That would let you get off RAID (i.e. get more out of your existing drives) without nuking what you've got, and you can grow your volume group in the future as you add disks or upgrade your old disks.
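
One iteration of that loop in LVM terms, as a sketch (all device names, the VG/LV names, and ext4 are assumptions):

code:
pvcreate /dev/sdb                       # new drive becomes a physical volume
vgcreate media /dev/sdb
lvcreate -l 100%FREE -n data media
mkfs.ext4 /dev/media/data
mount /dev/media/data /mnt/media
rsync -aH /mnt/old-disk/ /mnt/media/    # move one drive's worth of stuff over
# wipe the old drive, then fold it in and grow:
pvcreate /dev/sdc
vgextend media /dev/sdc
lvextend -l +100%FREE /dev/media/data
resize2fs /dev/media/data               # grow ext4 online to fill the new space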

And again, you can get RAID-like performance out of ZFS in various ways, like a cache SSD or RAID-style striping (LVM can do striping too). A cache SSD still uses a SATA/SAS port (unless you have M.2/mSATA slots available?), but you get more out of your existing HDDs, so it would balance out.

Paul MaudDib fucked around with this message at 03:02 on Jan 25, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

And again, you can get RAID-like performance out of ZFS in various ways, like a cache SSD or RAID-style striping (LVM can do striping too). A cache SSD still uses a SATA/SAS port (unless you have M.2/mSATA slots available?), but you get more out of your existing HDDs, so it would balance out.

Frankly, assuming you're restricted to GigE networking like most sane people, you don't even have to worry about performance: even a simple RAIDZ1 with 3+ drives will handily saturate your network without any need to resort to things like caching. And RAID-anything for random access is already going to be restricted by single-drive access speeds for virtually any practical application, so you're not losing on that front either. Things are a little different if you're talking enterprise-level multi-user setups, but for a home Plex server it's absolutely irrelevant.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm thinking about getting something like this USB3/eSATA 4 bay drive enclosure to use with maybe snapraid or one of those types of solutions. I'm just planning on storing re-obtainable media files on it.

Anything wrong with that enclosure or those types of enclosures? Any recommendations for a different one?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

D. Ebdrup posted:


Ah - I'm used to COMSTAR being used to refer to the userland implementation, but I guess it's not specific to that and can apply equally to the kernel level, since it's just the front-end for controlling it whether it's in userland or the kernel.


I use Napp-it to manage my hobo SAN; it makes it a ton less annoying to actually do anything. I no longer have to google a blog post on how to configure a target view - the little menus let me set one up nice and quick.

Tevruden
Aug 12, 2004

Thermopyle posted:

I'm thinking about getting something like this USB3/eSATA 4 bay drive enclosure to use with maybe snapraid or one of those types of solutions. I'm just planning on storing re-obtainable media files on it.

Anything wrong with that enclosure or those types of enclosures? Any recommendations for a different one?

I've had two of those. One was fine; the other had errors that made btrfs checksums fail and eventually ate the entire array - but I bought both of them in 2014.

Edit: the door is a little weird and has a tendency to fall off when you open it.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
I am running Ubuntu 16.04 and I need to image my boot drive to a new 256GB SSD.

The BIG issue is that the boot drive is a 256GB USB drive, and I know there are some issues when you image between device types.

The USB drive has been configured to keep as much cache activity off of it as possible, but I am still worried about the finite writes on USB flash. An SSD would be better, which is the reason I am doing this.

What would everyone advise as a starting point?

Some things I am thinking of and need some feedback on:

1) Which utility to use? rsync (I'm still not good at understanding its huge number of options - I've only used it for simple directory moves) or simple dd? There are two partitions on the USB disk, both ext4: one for boot and one for everything else. I'm worried about dd finding an error on the USB drive and reducing the speed to slower than slow; the option with dd is to use ddrescue, which ignores bad blocks until it's done moving the rest of the good blocks (rough sketch below the list).

2) It's probably safer to export the ZFS drives and redo the zpool, but I would love it if I didn't have to do this, since I would have to reconfigure Plex.

3) grub is going to be poo poo to redo.

4) dd can't copy from a bigger drive to a smaller one. I need to find out the number of blocks the SSD has to see if I can even dd it.
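
For point 1, if you go block-level, the two approaches look roughly like this (the device names are placeholders - triple-check them, and with both drives at 256GB the size mismatch shouldn't bite):

code:
# ddrescue copies the easy blocks first and logs problem areas to a mapfile
ddrescue -f /dev/sdX /dev/sdY usb-to-ssd.map
# plain dd alternative: conv=noerror,sync pads unreadable blocks instead of aborting
dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync status=progress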

EVIL Gibson fucked around with this message at 20:40 on Jan 28, 2017

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Just use Clonezilla. It can do the whole deal for you, no big deal.

Mr Shiny Pants
Nov 12, 2012

EVIL Gibson posted:


2) It's probably safer to export the ZFS drives and redo the zpool, but I would love it if I didn't have to do this, since I would have to reconfigure Plex.


Just zpool export the pool and zpool import it on the new system. This works a treat.

Actuarial Fables
Jul 29, 2014

Taco Defender
Are there any issues with adding drives that have different RPM into a software RAID? I'm not worried about performance all too much, I just don't want my data to get lost.

BlankSystemDaemon
Mar 13, 2009



As far as I know, there isn't really any way to get screwed over and lose data by mixing spindle speeds, block sizes (512 vs 512e vs 4096), LBA counts, or cache sizes, but it will affect the performance of the RAID - probably more than you expect. It's only when you add/replace drives in an array that you have to be careful about the number of LBAs.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Actuarial Fables posted:

Are there any issues with adding drives that have different RPM into a software RAID? I'm not worried about performance all too much, I just don't want my data to get lost.

No, you'll be fine. At worst, your RAID will perform to the level of the slowest drive you have in the array. So in that sense you're potentially wasting the extra performance of the faster drive, but it's not gonna hurt anything.

Actuarial Fables
Jul 29, 2014

Taco Defender
Thanks! I'm planning on getting two Seagate and two WD NAS drives to start with, and the Seagate ones run at 5900 RPM, so I got a little confused.

borkencode
Nov 10, 2004
Backblaze published its 2016 hard drive reliability stats https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

borkencode posted:

Backblaze published its 2016 hard drive reliability stats https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/

Well, that does seem to support the idea that Seagate, on the whole, ain't bad, but has particular model numbers which are just dumpster fires. 13.5% failure is, uh, not good.

Outside that, I think it shows what a lot of people have been saying: outside a few problematic models, HGST/Toshiba/Seagate/WD are all pretty close over the long run, with failure rates low enough that other factors can be taken into consideration when selecting drives.

poverty goat
Feb 15, 2004



I've used freenas on my old pc for a few years now, and it is good, but there's a lot you can't do in BSD and much of what you can do you have to build from source in a jail and gently caress that noise. Installing or upgrading anything from ports is such a goddamn waste of time it makes me want to admin BSD for a living so I can get paid to sit here and press enter every minute or two while every dependency on earth compiles from source

Anyway, I was looking at the flood of cheap old EOL'd intel workstations on ebay, got carried away and made a lowball offer on an HP Z400 w/ a beefy 6 core xeon and 24 gigs of ECC, which I won. It's got 4 pcie slots (2x8 and 2x16) and it can accommodate 5 hard drives with an adapter for the 3x5.25" bays (a lot of comparable machines don't have an oldschool 3-stack of 5.25" bays and would be hard pressed to accommodate 5 drives). It also supports VT-D, which lets me pass through devices directly to VMs. So now I've got a freenas VM running in there under esxi 6.5 with my old reflashed Dell H100 SAS card passed through to it, I've imported my zpools and everything I used to do in jails or plugins are running alongside in linux, as god intended, installed in seconds with binaries from a goddamned package manager. And I've still got a lot of headroom for more VM nonsense. I may even be able to pass a GPU through to something and run the media center end of things from the same box.

I've never done anything in esxi and the whole setup was faster and easier than getting deluge compiled and working from ports in a jail. It's even using less power than the old machine. I wish I'd done this a while ago.

My next post, presumably, will be about loving it up and losing my zpools. Stay tuned!!

poverty goat fucked around with this message at 16:22 on Feb 4, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

Well, that does seem to support the idea that Seagate, on the whole, ain't bad, but has particular model numbers which are just dumpster fires. 13.5% failure is, uh, not good.

Outside that, I think it shows what a lot of people have been saying: outside a few problematic models, HGST/Toshiba/Seagate/WD are all pretty close over the long run, with failure rates low enough that other factors can be taken into consideration when selecting drives.

Maybe you shouldn't buy the brand that has problematic models every 2-3 years though. I mean, if you cherrypick the numbers you can get whatever results you want. Out of my hard drives that haven't failed, 100% are still working, let's see you beat that!

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

poverty goat posted:

there's a lot you can't do in BSD

how sure are you about this? That doesn't seem right, are you sure we're talking about the same thing, the devil's OS, right?

quote:

Anyway, I was looking at the flood of cheap old EOL'd intel workstations on ebay, got carried away and made a lowball offer on an HP Z400 w/ a beefy 6 core xeon and 24 gigs of ECC, which I won. It's got 4 pcie slots (2x8 and 2x16) and it can accommodate 5 hard drives with an adapter for the 3x5.25" bays (a lot of comparable machines don't have an oldschool 3-stack of 5.25" bays and would be hard pressed to accommodate 5 drives). It also supports VT-D, which lets me pass through devices directly to VMs.

I have pretty much that machine right now as my Lubuntu working machine, but with 16 GB and 4 cores. Good luck with the power supply though, mine wouldn't handle even half of its rated load and the 24-pin connector is non-standard.

quote:

So now I've got a freenas VM running in there under esxi 6.5 with my old reflashed Dell H100 SAS card passed through to it, I've imported my zpools and everything I used to do in jails or plugins are running alongside in linux, as god intended, installed in seconds with binaries from a goddamned package manager. And I've still got a lot of headroom for more VM nonsense. I may even be able to pass a GPU through to something and run the media center end of things from the same box.

I've never done anything in esxi and the whole setup was faster and easier than getting deluge compiled and working from ports in a jail. It's even using less power than the old machine. I wish I'd done this a while ago.

My next post, presumably, will be about loving it up and losing my zpools. Stay tuned!!

Good luck and don't forget to hail satan, friend.

[but really good luck, and make sure everything flushes through as quickly as possible]

Paul MaudDib fucked around with this message at 16:34 on Feb 4, 2017

poverty goat
Feb 15, 2004



Paul MaudDib posted:

Good luck with the power supply though, mine wouldn't handle even half of its rated load and the 24-pin connector is non-standard.

I'm aware. I've looked into how to adapt a standard ATX PSU for it, if needed, and I'm pretty sure this is just the thing if you trust dhgate

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

Maybe you shouldn't buy the brand that has problematic models every 2-3 years though. I mean, if you cherrypick the numbers you can get whatever results you want. Out of my hard drives that haven't failed, 100% are still working, let's see you beat that!

Outside of the problematic model, though, Seagate has better reliability than WD, so arguably if you stay away from the drives that have 1-2 stars on Amazon/Newegg, you should come out on top!

The point is that super-unreliable drives are easy to identify for the most part, and everything else is within a standard deviation of each other, so it makes more sense to buy based off other factors (price, spindle speed, noise, warranty, etc) than lock yourself into one manufacturer in the mistaken assumption that their product is head and shoulders better than the others.

DrDork fucked around with this message at 16:44 on Feb 4, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

poverty goat posted:

I'm aware. I've looked into how to adapt a standard ATX PSU for it, if needed, and I'm pretty sure this is just the thing if you trust dhgate

Assuming China isn't going to burn my house down, that looks like exactly what I need. Ugh, I've been waiting for someone to do that for years.

Off topic though, what's up with DHGate's HTTPS? HTTPS Everywhere is really upset with it for some reason.

BlankSystemDaemon
Mar 13, 2009



poverty goat posted:

I've used freenas on my old pc for a few years now, and it is good, but there's a lot you can't do in BSD and much of what you can do you have to build from source in a jail and gently caress that noise. Installing or upgrading anything from ports is such a goddamn waste of time it makes me want to admin BSD for a living so I can get paid to sit here and press enter every minute or two while every dependency on earth compiles from source

Anyway, I was looking at the flood of cheap old EOL'd intel workstations on ebay, got carried away and made a lowball offer on an HP Z400 w/ a beefy 6 core xeon and 24 gigs of ECC, which I won. It's got 4 pcie slots (2x8 and 2x16) and it can accommodate 5 hard drives with an adapter for the 3x5.25" bays (a lot of comparable machines don't have an oldschool 3-stack of 5.25" bays and would be hard pressed to accommodate 5 drives). It also supports VT-D, which lets me pass through devices directly to VMs. So now I've got a freenas VM running in there under esxi 6.5 with my old reflashed Dell H100 SAS card passed through to it, I've imported my zpools and everything I used to do in jails or plugins are running alongside in linux, as god intended, installed in seconds with binaries from a goddamned package manager. And I've still got a lot of headroom for more VM nonsense. I may even be able to pass a GPU through to something and run the media center end of things from the same box.

I've never done anything in esxi and the whole setup was faster and easier than getting deluge compiled and working from ports in a jail. It's even using less power than the old machine. I wish I'd done this a while ago.

My next post, presumably, will be about loving it up and losing my zpools. Stay tuned!!
FreeNAS is an appliance NAS OS based on FreeBSD (appliance in this context means it does one thing and does it well - similar to how pfSense is an appliance router/firewall OS based on FreeBSD), and can't be compared to real FreeBSD. But aside from that, I think you might've misunderstood the point of jails on FreeBSD, what ports are for, and what packages are for.
Jails (which are a type of container, though not the first and certainly not the last) exist, as the title of the original paper hints, to confine root. You can completely ignore jails unless you want to: 1) isolate something for security purposes (say, if you have an httpd listening on port 80 and you don't want an exploit against it to affect your entire system), 2) confine something to get a clean build environment for poudriere (more on this later), or 3) take an in-production setup and deploy it to a test environment so that you can test something without affecting production.
Ports are a collection of Makefiles (think recipes) that describe where to get the original source and what to do with it in order to build all manner of software not included in the base system - you can build almost any POSIX-compatible software on FreeBSD. Most ports come with many build options, but the defaults are generally rather conservative.
Packages are built from ports via poudriere by FreeBSD, and are made available from pkg.freebsd.org (which itself uses SRV records to ensure you use as local a package repo as possible, plus it allows for failover). Out of the over 26,000 ports in the tree, only 11 failed to build for the last quarterly build for amd64/x86-64 (the one you're on, unless you changed the default behaviour). Additionally, packages built by FreeBSD for pkg.freebsd.org always use the default values for ports (which is part of, but not the whole reason, why ports have conservative default build options). Interestingly, ports and packages were the inspiration for Gentoo's emerge system and Debian's package system, respectively.
This means that if you're happy with the default options for packages, you can simply use pkg to grab them - and if you want custom packages, you can do what FreeBSD does and set up poudriere (DigitalOcean has a rather nice article on how), which can then regularly check whether any given port has been updated, build it for you with whatever options you want, inform you if it fails, and can even be used to build packages for platforms which aren't tier 1 and therefore aren't guaranteed to have all packages built for them (such as ARM and MIPS, something I personally do).
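
For reference, the poudriere setup is only a handful of commands - something like this sketch (the jail name, FreeBSD version and package list are all just examples):

code:
poudriere jail -c -j 110amd64 -v 11.0-RELEASE   # build jail for a given release
poudriere ports -c -p default                   # check out a ports tree
echo 'www/nginx' > /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j 110amd64 -p default -f /usr/local/etc/poudriere.d/pkglist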


You can do both of what FreeNAS and pfSense do by using FreeBSD, with much less complexity, more flexibility, and faster (since ipfw is considerably faster than pf nowadays), plus a lot of other things like encrypted root on a zpool with up-to-triple parity or n-disk mirroring (depending on whether you need storage space or IOPS), and boot environments that you can create in a few milliseconds, which means that if your system somehow breaks because of an update, you can be right back to a working system with a single reboot (and your user files weren't touched, only system files), as well as jails plus an actual hypervisor (bhyve, like what ESXi offers), both of which can leverage ZFS with all the cloning, templating, quick deployment and resilience that it offers. Also, as an aside, it's fairly simple to run a Linux userland on top of the FreeBSD kernel in a jail.
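
Boot environments in practice are about as simple as it gets - a sketch using the beadm utility (the BE name is made up):

code:
beadm create pre-update            # snapshot the current system into a new BE
freebsd-update fetch install       # or pkg upgrade, or whatever risky change
# if it breaks, point the loader back at the old BE and reboot:
beadm activate pre-update && shutdown -r now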

What you've done is scrap approximately 20 years of computing advancement by using a hypervisor to completely segment your server's resources in such a fashion that you can almost never regain them, on top of which you're then running multiple types of containers - that's effectively what happened back in the '90s when x86 computers couldn't yet do proper SMP.

Containers on bare metal, where you don't segment your compute and storage, are something the industry is JUST NOW coming around to (after Amazon introduced the idea of segmented compute and storage with EC2 and S3) - and something Solaris/Illumos and FreeBSD have been doing for ages.

Mind you, I'm not knocking virtualisation - it's wonderful for certain things; but the major downsides are almost never outweighed by its upsides.

BlankSystemDaemon fucked around with this message at 19:00 on Feb 4, 2017

poverty goat
Feb 15, 2004



D. Ebdrup posted:

FreeNAS is an appliance NAS OS based on FreeBSD (appliance in this context means it does one thing and does it well - similar to how pfSense is an appliance router/firewall OS based on FreeBSD), and can't be compared to real FreeBSD. But aside from that, I think you might've misunderstood the point of jails on FreeBSD, what ports are for, and what packages are for.

You're right, I'm cursing BSD when I should be cursing FreeNAS. It forces you to use jails to install anything, which is sensible, but I ended up using ports for everything because pkg was broken in jails for a while. I generally had a bad experience all around with plugins and Linux-compatibility-layer stuff under FreeNAS, not for lack of trying. e: and obviously I was pushing the bounds of what FreeNAS is designed for at this point, and I have no interest in giving up any of the convenience and quality-of-life features in FreeNAS to go full BSD

It's a home server; everything I wish to back up is backed up to CrashPlan, and nothing of IRL importance depends on any of it. I don't want a second server running 24/7 in my home. Also I want an honest-to-god Linux VM, and a VM with a GPU and HDMI and a pony. I'm sure it's not optimal, but this is the only way to fit it all into one box, and I think my chances of catastrophic failure are within acceptable bounds.

poverty goat fucked around with this message at 19:21 on Feb 4, 2017

pgroce
Oct 24, 2002
FreeNAS 10 (still in beta, IIRC) is moving away from jails in favor of Docker containers running in a Linux VM (via bhyve).

It's not available yet (as a stable release anyway) but when you get sick of dealing with VM pass through, it will probably be ready to deploy. So you have that to look forward to.

I'm waiting with bated breath, personally. Sure, I can make anything build on BSD, but I'd rather reuse someone else's work configuring the build and deployment process, and if I don't like a decision they made, Dockerfiles are easy to tweak and images are easy to build.

Meanwhile, I'd rather run my NAS on bare metal, so I'm deploying new services on my Windows machine via Docker/Hyper-V, with plans to move them over when 10 drops. (After a few months for it to get a good shakedown cruise, of course. :) )

Pryor on Fire
May 14, 2013

they don't know all alien abduction experiences can be explained by people thinking saving private ryan was a documentary

I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason.

It's loving awesome, never change.
