Mr Crucial
Oct 28, 2005
What's new pussycat?

Pweller posted:

Can you elaborate on your issues with FlexRAID? And does anyone else have opinions?

...

FlexRAID seems to be a popular choice that supports all of the above, but their wiki is confusing to me and data is spotty and outdated around the net. I understand that the current implementation uses data snapshots that need to be committed, but I'm assuming I can automate this process every night. I could tolerate losing the last day or two's changes so this would be fine for me. I would be using a separate drive for the OS without any of this jiggery applied to it.

FlexRAID lets you specify data drives and one or more parity drives. The parity drive(s) have to be at least as big as your biggest data drive, so if you have 3x 1TB drives and 2x 2TB drives, you have to sacrifice one of the 2TB drives. You can schedule array updates; I have mine set to hourly on my NAS. The v2.0 release has a real-time array, but it's very new and not recommended for anything other than test data.
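For intuition, single-parity snapshot RAID of this sort boils down to XOR parity computed across the data drives at update time, which is also why the parity drive has to match the biggest data drive: every byte of the biggest drive needs a parity byte. A toy sketch in Python (drive contents are made up; FlexRAID's actual on-disk format is its own thing):

```python
from functools import reduce
from itertools import zip_longest
from operator import xor

def xor_parity(drives):
    # XOR corresponding bytes across all data drives; shorter drives are
    # padded with zeros, so parity is as long as the biggest drive.
    return bytes(reduce(xor, col) for col in zip_longest(*drives, fillvalue=0))

def reconstruct(surviving, parity):
    # XORing the parity with every surviving drive recovers the lost one
    # (padded with trailing zeros up to the parity length).
    return xor_parity(surviving + [parity])

drives = [b"1TB drive A", b"1TB drive B", b"2TB drive, twice the data"]
parity = xor_parity(drives)
assert len(parity) == max(len(d) for d in drives)  # parity >= biggest drive

lost = drives[0]
recovered = reconstruct(drives[1:], parity)
assert recovered[:len(lost)] == lost
```

The asserts illustrate both points from the post: the parity "drive" is as long as the largest data drive, and any single data drive can be rebuilt from the rest plus parity.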

I have a number of issues with FlexRaid:

- Firstly, it's written by one guy and has been in perpetual beta: the 1.x releases were all time-limited betas, and the 2.0 releases are all 'previews', which I think are also time-limited.
- The author seems to get sidetracked into additional modules outside the core product, such as a WHS plugin. Nice enough, but when the core project is still a buggy beta, you worry about his priorities.
- The 1.x configuration interfaces were poo poo; you had to edit text files to get things set up properly. v2.0 has a bizarre Windows-esque web interface, which is an improvement, but not by much. The drive pooling in particular is finicky if you need to split folders across several drives.
- It's really, really buggy. Through various releases I haven't been able to get the drive pooling to work properly, despite logging bugs and posting on the FlexRAID forums. So I have to use Drive Bender for the pooling and just use FlexRAID for the parity.

FlexRaid really does have potential but at the moment it's very hard to recommend. Give it a year unless you're brave.


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Aren't Sun/Oracle's Thumpers / Thors capable of handling everything to do with ZFS? They should be sufficiently cheap nowadays to run as a testbed for ZFS POCs. To me it's somewhat of a small miracle to get Solaris and ZFS working on consumer hardware, but I think it's worth it over messing with LVM.

Hok
Apr 3, 2003

Cog in the Machine

FISHMANPET posted:

In a pinch the PowerVaults can do it, you just have to use the onboard controller to turn each disk into a single-disk array, but that's a big kludgy hack and I wouldn't recommend it except as a last resort.

I've got a hunch you could take an MD1000/1200, hook it up to a SAS6e card, and it would give you direct access to a bunch of disks for ZFS to do its magic. I'd have to test it to be sure. The 6Gb SAS card should work about the same.

I don't think we've got an MD1000 available to test at work, although I'm pretty sure there's one in the training lab. I'm curious enough to give it a try next time the lab's not being used, but I'm not sure that's going to be anytime soon; they're booked out for at least the next six weeks, and I'm not interested enough to hang about after hours to do it.

Gwaihir
Dec 8, 2009
Hair Elf
I've set up a Nexenta box on a Dell R710 with a Perc5/e card running a PowerVault MD1000 without issue. Dell's controllers are indeed weird like that (I guess it's an LSI thing?), but I was able to pool up all the drives across the internal Perc6/i and the Perc5/e external SAS controller no problem. Like FISHMANPET posted, you just have to make individual "RAID-0" arrays out of each disk, which is how those controllers end up doing JBOD mode. I also tested it with Vertex 2 SSDs for L2ARC, which also worked fine. The rest of my drives are 300 gig 15k RPM SAS, internally and in the MD1000. I don't have that machine up at the moment (it's my generalized "random poo poo" test box, so it gets reformatted pretty frequently), but I could probably get it back up and running to bench from the MD1000.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

necrobobsledder posted:

Aren't Sun/Oracle's Thumpers / Thors capable of handling everything to do with ZFS? They should be sufficiently cheap nowadays to run as a testbed for ZFS POCs. To me it's somewhat of a small miracle to get Solaris and ZFS working on consumer hardware, but I think it's worth it over messing with LVM.

Not really. It works pretty well on most Intel and AMD chipsets. Get an Intel NIC and you're good to go.

Corb3t
Jun 7, 2003

Can someone please explain the advantages/disadvantages of using ZFS / RAID Z instead of something like RAID 5? RAID Z/ZFS doesn't require identical hard drives, correct?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
It's ideal to use identical drives per vdev (i.e. RAIDZ array, mirror array, single drive), but you can use multiple vdevs per pool, and there are specific multi-vdev setups in which you want different drives in different vdevs, like an SSD as a write cache drive to absorb random I/O.

RAIDZ's benefits over RAID5 for our purposes ITT:
- More robust error checking. If a drive is silently corrupting data, this will be noted on the fly, the data restored via parity reconstruction, and the drive marked as faulty.
- Built-in deduplication to shrink the amount of redundant data stored (e.g. in backups, accidental multiple copies of movies, etc.)
- Requires no special drive controllers and can easily be moved from computer to computer
- It's very simple to set up multiple-drive parity, e.g. RAIDZ2, RAIDZ3, per vdev, whereas this often requires more robust fakeRAID software or more expensive controllers with standard RAID implementations
- ZFS combines the RAID system and filesystem, so there is reduced messing with drivers and technical RAID fiddle-faddle as long as ZFS is supported

ZFS is not perfect, however:
- It's slow. RAID5 is faster because it is less computationally intensive, even when ZFS is matched with adequate processing power.
- It will get fragmented as hell as it fills up and things are deleted, and there are no defrag tools.
- It limits your choice of operating system, as fairly few support ZFS
- Oracle has said little about ZFS's future and not much about Solaris, and seems on the edge of abandoning it to the small community of open source developers who stick it in BSD and do F/OSS forks of Solaris (like OpenIndiana). ZFS can't easily be natively ported to Linux due to licensing incompatibilities.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Factory Factory posted:

It's ideal to use identical drives per vdev (i.e. RAIDZ array, mirror array, single drive), but you can use multiple vdevs per pool, and there are specific multi-vdev setups in which you want different drives in different vdevs, like an SSD as a write cache drive to absorb random I/O.

RAIDZ's benefits over RAID5 for our purposes ITT:
- More robust error checking. If a drive is silently corrupting data, this will be noted on the fly, the data restored via parity reconstruction, and the drive marked as faulty.
- Built-in deduplication to shrink the amount of redundant data stored (e.g. in backups, accidental multiple copies of movies, etc.)
- Requires no special drive controllers and can easily be moved from computer to computer
- It's very simple to set up multiple-drive parity, e.g. RAIDZ2, RAIDZ3, per vdev, whereas this often requires more robust fakeRAID software or more expensive controllers with standard RAID implementations
- ZFS combines the RAID system and filesystem, so there is reduced messing with drivers and technical RAID fiddle-faddle as long as ZFS is supported

ZFS is not perfect, however:
- It's slow. RAID5 is faster because it is less computationally intensive, even when ZFS is matched with adequate processing power.
- It will get fragmented as hell as it fills up and things are deleted, and there are no defrag tools.
- It limits your choice of operating system, as fairly few support ZFS
- Oracle has said little about ZFS's future and not much about Solaris, and seems on the edge of abandoning it to the small community of open source developers who stick it in BSD and do F/OSS forks of Solaris (like OpenIndiana). ZFS can't easily be natively ported to Linux due to licensing incompatibilities.

Since ZFS is copy-on-write, it closes the RAID 5 write hole.

To update a stripe in RAID 5, the array has to touch multiple disks. If the power dies before all the disks have been updated, you'll have garbage data written to the array, and the system won't know it. ZFS allocates new blocks each time it writes, so at worst you lose your last write, but your data stays consistent.
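A toy model of that difference (pure illustration; nothing like real on-disk formats): RAID 5 updates stripes in place, so a crash between the data write and the parity write leaves the stripe silently inconsistent, while copy-on-write puts the new blocks elsewhere and flips a pointer last, so a crash just leaves you on the old, consistent version.

```python
def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

# RAID 5 style: update in place, crash between data write and parity write.
stripe = {"data": [3, 5], "parity": parity([3, 5])}
stripe["data"][0] = 9                 # new data hits the disk...
# ...power dies here, before the parity block is rewritten.
assert parity(stripe["data"]) != stripe["parity"]   # silently inconsistent

# Copy-on-write style: write new blocks elsewhere, flip the pointer last.
versions = {0: {"data": [3, 5], "parity": parity([3, 5])}}
current = 0                           # the "live version" pointer
versions[1] = {"data": [9, 5], "parity": parity([9, 5])}
# ...power dies here, before `current = 1` runs: the last write is lost,
# but the live version is still internally consistent.
live = versions[current]
assert parity(live["data"]) == live["parity"]
```

The pointer flip is the atomic step; everything before it is invisible to the live filesystem, which is exactly the "at worst you lose your last write" behavior described above.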

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

Gwaihir posted:

I've set up a Nexenta box on a Dell R710 with a Perc5/e card running a PowerVault MD1000 without issue. Dell's controllers are indeed weird like that (I guess it's an LSI thing?), but I was able to pool up all the drives across the internal Perc6/i and the Perc5/e external SAS controller no problem. Like FISHMANPET posted, you just have to make individual "RAID-0" arrays out of each disk, which is how those controllers end up doing JBOD mode. I also tested it with Vertex 2 SSDs for L2ARC, which also worked fine. The rest of my drives are 300 gig 15k RPM SAS, internally and in the MD1000. I don't have that machine up at the moment (it's my generalized "random poo poo" test box, so it gets reformatted pretty frequently), but I could probably get it back up and running to bench from the MD1000.

Holy crap, this is almost exactly the setup I'm trying to get off the ground -- don't go anywhere! Seriously though, there are apparently remarkably few people running Nexenta on server hardware like this, as opposed to ZFS directly on OS/OI. I'd love to hear as much as you can tell us about what you love and hate about the platform.

Specifically, one of the touted features of Nexenta is its ability to map disks to physical slots so that Nexenta can flash an LED on a failed drive when it goes bad -- this is crucial for my intended deployment. Have you had any luck getting this mapping to work on the MD1000 (especially with the RAID-0 kludge)?

Are you worried about drive portability in case your MD1000 fails? With the RAID-0 workaround, I could see that potentially being an issue if you needed to move the physical disks to another machine for recovery purposes.

Are you doing anything like presenting the Nexenta shares to VMware? Impressions of iSCSI / NFS support in Nexenta?

Viktor
Nov 12, 2005

Got some Synology RS411s in the shop. They are pretty nice little units, barring the possibility of the previously mentioned implosion. It's basically a boxed Linux NAS with software RAID.

You have to set up the unit with a CD/software and download the latest DSM release to install on the device. Once the initial DSM is installed, it's managed via a fantastic HTML interface. The unit won't do much without a hard drive in it, as it partitions the disks with a swap partition and root straight off the bat. As you can see in this example with two 250GB disks, it grabs 2.5GB for the root partition (md0) and 2GB for swap (md1). The rest is left over to partition into any sort of "volume" (md2). You can also notice that the swap/root RAIDs show hdc/hdd as "removed" until you physically install them in the system.

code:
TEST-RS411-1> cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid1 sda3[0] sdb3[1]
      239420032 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [4/2] [UU__]
      
md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [4/2] [UU__]

The system looks like a Slackware Linux distro running on an ARM processor (Feroceon 88FR131). It's got a SeaSonic 250W PSU with a custom motherboard and drive backplane. There are three Sunon MagLev fans that power down shortly after the disks power off, making the idle system very quiet. The interesting thing is that the motherboard is totally removable on thumbscrews, allowing it to be swapped out at any time; I'm guessing for ease of setup and to change the board for different models.



I pitted it against another NAS box that I've been playing around with: a FreeNAS 0.7.2 ZFS setup on a Dell PowerEdge. The PE is a 2950 with 4x 500GB drives slung off a PERC5, with four RAID 0s to pass the physical disks down to the OS. It's a quad-core 2.33GHz with 4GB of RAM, so a pretty beefy box. ZFS was set up as a raidz1. The Synology is straight out of the box with some desktop Seagate Barracuda 7200.12 250GB drives just to test with. I have them both connected to a gigabit switch, with a LAGG set up on the PowerEdge and a single connection on the Synology. An Ubuntu 10.04 LTS Xen virtual machine was used for the quick test of:
code:
sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw

PowerEdge/FreeNAS - 113.17 Requests/sec executed
RS411 RAID1 - 169.87 Requests/sec executed
It easily pushes 42MB/s on sequential writes and around 65MB/s on reads

It's a terrific little NAS, barring the earlier reported deaths, but I hate that they didn't throw in fixed rails.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
FreeNAS/ZFS question:

If I am making my NAS into one big share, and primarily one user account reads/writes and another user account only reads the same big share, then I don't need to deal with the whole datasets thing?

Also, I plan on manually turning off the NAS every night when I can to save power (it looks like FreeNAS is smart enough to do a safe/software shutdown when I press the power button on my NAS), so I take it I don't need to enable the power saver daemon and such?

When I first set up the NAS earlier tonight, the performance was utter poo poo and there was a ten second delay on editing/renaming/creating folders or files. Now later in the night it seems to be working 100% better. The only thing I possibly changed was frantically turning off that power saving thing and also setting the Advanced Power Management option on each drive in my pool to disabled instead of 1 (minimum power usage with spin-down).

Would those have affected why the device was acting like poo poo when writing to files? Were the drives auto-spinning down after 1 second or something? Or did the vpool or whatever just take a few hours to get up and running, which now accounts for why the thing is running so much better?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
Another FreeNAS/ZFS quick question:

As I said in the previous post, I have one single ZFS pooled drive in RAID-Z, and I'm curious how to share different subfolders to different users with different permissions.

I'm only used to setting up shares on Win Servers, where you set up the shares per folder. I want to basically set up a main share where most of the data is stored, where user A has read/write, and user B only has read. Then a second share (using the same pool) where user B has read/write also. In WinServ I would just create two folders in the drive, and set permissions per folder, but have them both on the same drive so that they share the same drive space.

How does one do this in FreeNAS? I can only really find beginner tutorials on how to set up shares per pool.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

jeeves posted:

Another FreeNAS/ZFS quick question:

As I said in the previous post, I have one single zfs pooled drive in Raid-Z, and I am curious how to share different sub-folders to different users with different permissions?

I'm only used to setting up shares on Win Servers, where you set up the shares per folder. I want to basically set up a main share where most of the data is stored, where user A has read/write, and user B only has read. Then a second share (using the same pool) where user B has read/write also. In WinServ I would just create two folders in the drive, and set permissions per folder, but have them both on the same drive so that they share the same drive space.

How does one do this in FreeNAS? I can only really find beginner tutorials on how to set up shares per pool.

Start with this: http://www.lagesse.org/freenas-tutorial-for-windows-users-part-two-configuration/

If you feel like you're missing something fundamental, here's a bunch of :words: about Unix systems like FreeBSD and ZFS and how they work a bit differently from Windows drives. Filesystems first, then shares.

First thing to note is that ZFS pools are a logical unit the same way something with a drive letter is a logical unit on a Windows PC. Shares and ZFS are separate things, just like shares and NTFS volumes are separate in Windows. Unix filesystems are a layer more abstracted than Windows ones.

In Disk Management on Windows, you mount a volume by assigning a drive letter or hooking the drive to an NTFS path on another drive's letter. In Unix, mounting drives is most like the latter, except rather than hooking the path to another drive letter, you hook the path to the root filesystem, /.

All parts of the root filesystem may be assigned arbitrarily to any physical drive or partition. If you're booting ZFS from a liveUSB, then / is actually assigned to a RAM disk whose contents are loaded from the stick, which contains the OS image and only a few mounted folders for persistent storage, like /log and /var (this is configurable).

Once you've created an additional volume you want to mount, like a ZFS pool, you can either assign a part of the root file system to it (like /usr or /home), or you can mount it within the root filesystem, usually as a sub-path of /mnt or /mount or /media (depending on the particulars of the *nix system in question). So our "tank" zpool would mount as /mnt/tank, and if you mentally replaced "/mnt/tank" with "D:", you wouldn't be far off in terms of understanding what's going on.

Where it gets crazy is that you can then redirect other portions of the filesystem to refer to subfolders in /mnt/tank. A popular and useful one for FreeNAS is to assign your users' home directories to folders in the pool, so that /home/Joe would be a link to /mnt/tank/Users/Joe, for example. It's very much like filesystem junctions/hard links in NTFS, but much more common and casually done.

Share stuff starts here

After all this ^^^^ poo poo is done and properly configured in FreeNAS, setting up shares works pretty much exactly the same way, just with a clunkier interface. You have to enable the CIFS/SMB service, and you have to create your user accounts locally unless you have an auth box. Any administrators have to be part of the "wheel" group (don't ask me).

That just sets network-level permission, however. You will also need to give filesystem-level permission to read and write to the pool. Unix file permissions can be a pain in the rear end, so here's the least complex way:

Say we have your User A and User B, matched with folders A and B. You want A to have full access to A, everyone to have access to B, and B to have read-only access to A.

Add both A and B to the group "users" via user management. Then, using root/superuser at the command line, you will run the following commands:

code:
chown -Rv A:users A
chmod -Rv 755 A
chown -Rv A:users B
chmod -Rv 775 B

The first command changes folder A to be owned by user A and associated with the group "users". The second command gives A full permission to the folder, and gives members of the "users" group read-only access. It also gives everyone else read-only access.

The third command does the same for folder B: owned by A, associated with "users". The fourth command gives A full permission to the folder, and also gives full permission to members of the "users" group. It also gives everyone else read-only access.
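If the octal modes read like line noise: each digit is an rwx triple for owner, group, and everyone else, so 755 is rwxr-xr-x and 775 is rwxrwxr-x. Python's `stat` module will spell a mode out for you:

```python
import stat

# 0o40000 is the "this is a directory" bit; the low three digits are the
# owner/group/other permission triples from the chmod commands above.
print(stat.filemode(0o40755))  # drwxr-xr-x: owner full, group and world read-only
print(stat.filemode(0o40775))  # drwxrwxr-x: owner and group full, world read-only
```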

And that's sufficient to do what you want. When your shares are set up with the proper security settings, I/O will execute as the local users, and it will only proceed if both network authentication and file/folder permissions match up. If you screw up the network security, you will find that B can't sneak a write into A. If you screw up the file permissions, you'll get a lot of "Access denied" errors.

---

Of course, I don't have a FreeNAS box in front of me, so I can't give you an exact walkthrough. But you should be able to figure it out. Also, full disclosure: I eventually settled on a Windows Server box for my NAS needs, because gently caress.

Factory Factory fucked around with this message at 10:51 on Jul 13, 2011

Gwaihir
Dec 8, 2009
Hair Elf

McRib Sandwich posted:

Holy crap, this is almost exactly the setup I'm trying to get off the ground -- don't go anywhere! Seriously though, there are apparently remarkably few people running Nexenta on server hardware like this, as opposed to ZFS directly on OS/OI. I'd love to hear as much as you can tell us about what you love and hate about the platform.

Specifically, one of the touted features of Nexenta is its ability to map disks to physical slots so that Nexenta can flash an LED on a failed drive when it goes bad -- this is crucial for my intended deployment. Have you had any luck getting this mapping to work on the MD1000 (especially with the RAID-0 kludge)?

Are you worried about drive portability in case your MD1000 fails? With the RAID-0 workaround, I could see that potentially being an issue if you needed to move the physical disks to another machine for recovery purposes.

Are you doing anything like presenting the Nexenta shares to VMware? Impressions of iSCSI / NFS support in Nexenta?

Resurrecting that box is next on my list of things to do (it's currently testing our Server 2008 image and Hyper-V setup), because I do intend to use it to run VMs off of. Unfortunately (for you at least), for now they'll be Hyper-V based VMs, since I'm stuck with using whatever comes down to us in the "official" image; I work for a state-level SSA office, so we're somewhat constrained in what we can try to pull with respect to our server infrastructure and such. On the other hand, I'm not worried about drive portability at all: we're using uniform hardware across all of our Windows servers, so it's always going to be MD1000s with the same controllers. They can swap drives around between them and recover the config that the drive was using without much issue. I thiiiink (not certain about this, but I'm going to add it to my things to test now) that in single-drive RAID-0 mode the controller really is just acting as a JBOD HBA, so I should be able to just pop a single drive into another machine without a RAID controller at all and read it.

I'll have to let you know about the drive notification issue when I get the box back up and running. Last time I was testing with it, I was mainly checking how well I could get AD integration and authentication working for serving shares to users on Windows boxes (it worked fine after a few hiccups; case sensitivity apparently DOES matter when putting in things like domain controller names or LDAP strings!). It also has super easy to configure email alerting about all sorts of conditions relating to server and drive health; however, I did not get to checking the actual drive LED notifications. I'm 95% certain that the drives were listed off in their enclosure connection order, due to the way they were passed to ZFS by the RAID controller, but I'd have to check for sure.

Aside from the boneheaded AD thing, though, the entire platform worked really, really well for the time I was messing with it. In terms of setup, pool config, NIC teaming, etc., it's a nice tool to deal with. I'll have an update about how it works as a VM parking place for the Hyper-V cluster I'm working on at some point down the road.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

Factory Factory posted:

FreeNAS words

I understand unix permissions, I just figured that FreeNAS's own web interface would put that poo poo into its GUI instead of forcing you to SSH into the machine and set permissions via command line or poo poo, gently caress.

Also, I want to make two shares off of one volume. I have the volume set up correctly and stuff, and I made a test share and stuff copied to it just fine. It's just everyone on my network could connect to that share WITHOUT A PASSWORD, and have full r/w access to it, which is definitely what I do not want.

I can play around with permissions on the folders themselves to see if that will fix it, but my main problem is that FreeNAS only seems to want to share an entire volume as a share, and not sub folders of that volume.

Example:


There is no place to put in /Share1/ after that volume path, as I have Share1 and Share2 as two subfolders in that volume. I am fine with setting permissions manually on those share folders via command line, but it seems to only want to make me share the whole thing.

Ugh. I know exactly how to do this in a totally professional sense via Windows Servers, but this simple stuff that is so complicated on unix is pissing me off. I really would like to use FreeNAS as I can use the internal USB port of my NAS for it, or else I'd have to buy another SATA card to plug in yet another HD to install Windows on.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Is there a reason you don't want to make multiple file systems and share each out individually? It's the ZFS way.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

FISHMANPET posted:

Is there a reason you don't want to make multiple file systems and share each out individually? It's the ZFS way.

I was hoping that I could have two shares share the same disk space, for easily moving stuff uploaded to one share over to the other, since one share is only temporary. I don't want to permanently allocate space to that temporary share if it's not in use.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

jeeves posted:

I was hoping that I could have two shares share the same diskspace. For easily moving stuff uploaded to one share to another, as one share is only temporary. I don't want to permanently allocate space to that temporary share if it is not in use.

That's not how ZFS works. A ZFS file system is fluid, and the file systems will all draw their free space from the pool.

The only disadvantage to having multiple file systems is that a move operation between them requires the data to actually be copied to the new file system, whereas a move within one file system is a nearly instant operation, as just a few pointers need to be rewritten.
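That's the same reason `mv` is instant within one filesystem but slow across two: rename only rewrites metadata, and it can't cross a filesystem boundary. Roughly what `mv` (and Python's `shutil.move`) do under the hood, as a sketch:

```python
import errno
import os
import shutil

def move(src, dst):
    try:
        os.rename(src, dst)         # same filesystem: just pointer rewrites
    except OSError as e:
        if e.errno != errno.EXDEV:  # EXDEV = "invalid cross-device link"
            raise
        shutil.copy2(src, dst)      # different filesystem: copy all the bytes...
        os.remove(src)              # ...then delete the original
```

Two ZFS filesystems count as two filesystems for this purpose even though they draw from the same pool, which is why the cross-dataset move falls into the slow copy branch.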

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!
I think a big hurdle with all of this, for a lot of us, is confidence. There's a lot of documentation and support out there for BSD, Solaris, and FreeNAS, but honestly? We're not making the jump for life. We're loving with this once to get it set up and then leaving it alone. I really don't want to wake up one day and poo poo my pants because I don't know how to fix what just went wrong and possibly ate my array.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

PopeOnARope posted:

I think a big hurdle for all of this with us is confidence. There's a lot of documentation and support out there for BSD, Solaris, and FreeNAS, but honestly? We're not making the jump for life. We're loving with this once to get it set up and then leaving it alone. I really don't want to wake up one day, and poo poo my pants because I don't know how to fix what just went wrong and possibly ate my array.

I like ZFS better than a Windows software RAID-5, but honestly everything else has been a loving nightmare to set up. I am tempted to run a cord from my ProLiant MicroServer's remaining SATA port (an eSATA port) back into the machine from the back panel, hang a cheap 32GB SSD off of it, and just put WinServ 2008 on that.

I finally figured out how to make mount points within a volume instead of using the entire volume; that is apparently what the ZFS datasets feature is for. I think I reliably set up permissions, and it seems to test out okay on my Windows machine.

Then I go to my girlfriend's Mac and tell it to map to the NAS, and it connects without even asking for a password and gives her full read/write permissions. What the gently caress? I know I have guest accounts turned off, and there is nothing in her keychain remembering some sort of previous password to it or something.

Edit: Connecting her Mac to it via SMB connects her with full admin rights, without even asking for a login. Nothing but Windows Sharing is turned on in FreeNAS.

jeeves fucked around with this message at 04:03 on Jul 14, 2011

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

Gwaihir posted:

Resurrecting that box is next on my list of things to do (it's currently testing our Server 2008 image and Hyper-V setup), because I do intend to use it to run VMs off of. Unfortunately (for you at least), for now they'll be Hyper-V based VMs, since I'm stuck with using whatever comes down to us in the "official" image; I work for a state-level SSA office, so we're somewhat constrained in what we can try to pull with respect to our server infrastructure and such. On the other hand, I'm not worried about drive portability at all: we're using uniform hardware across all of our Windows servers, so it's always going to be MD1000s with the same controllers. They can swap drives around between them and recover the config that the drive was using without much issue. I thiiiink (not certain about this, but I'm going to add it to my things to test now) that in single-drive RAID-0 mode the controller really is just acting as a JBOD HBA, so I should be able to just pop a single drive into another machine without a RAID controller at all and read it.

I'll have to let you know about the drive notification issue when I get the box back up and running. Last time I was testing with it, I was mainly checking how well I could get AD integration and authentication working for serving shares to users on Windows boxes (it worked fine after a few hiccups; case sensitivity apparently DOES matter when putting in things like domain controller names or LDAP strings!). It also has super easy to configure email alerting about all sorts of conditions relating to server and drive health; however, I did not get to checking the actual drive LED notifications. I'm 95% certain that the drives were listed off in their enclosure connection order, due to the way they were passed to ZFS by the RAID controller, but I'd have to check for sure.

Aside from the boneheaded AD thing, though, the entire platform worked really, really well for the time I was messing with it. In terms of setup, pool config, NIC teaming, etc., it's a nice tool to deal with. I'll have an update about how it works as a VM parking place for the Hyper-V cluster I'm working on at some point down the road.

Wow, this is great, thanks for the details. Definitely looking forward to what your findings are as you resurrect this machine!

Corb3t
Jun 7, 2003

It's only been mentioned a couple of times in this thread, but has anyone tried out Amahi?

movax
Aug 30, 2008

In case any masochists are still using Solaris out there: I got my CPU errors/faults cleared after spending some time learning how to use fmadm/fmdump/etc. Sun's old site got nuked, so I'm going to try to re-collect all the manpages for the common admin facilities and write up some common tasks.

It's actually a very powerful platform, and is amazingly flexible... if you're someone who maintains Solaris boxes for a living. I don't have the time to learn every single goddamned management system there is. (Hint: almost every component of the system has a corresponding *adm, *dump and *info command; it's pretty cool.)

Also got to dump poll-mode power management, since I am now on Nehalem (i7-930). Next step is installing VirtualBox and making a nice beefy Linux VM to move some services to. My future plans include adding 6 more drives to finish this machine off: Hitachi 5K3000s. I have 3 so far, sitting around since I bought them from the 'egg. Anyone have their Hitachis prematurely die due to Newegg shipping?

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug

Corb3t posted:

It's only been mentioned a couple of times in this thread, but has anyone tried out Amahi?

Not all of Amahi, but the drive-pooling bit, greyhole, I have been running for quite some time now. Works really well.

It works basically like the drive pooling in WHSv1; you copy files to the shares, greyhole moves them to pool-disks and creates a symlink in the original folder.
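For anyone curious what that looks like under the hood, here's a minimal Python sketch of the same move-then-symlink trick. This is not greyhole's actual code; the paths and the pick-the-emptiest-disk heuristic are just illustrative:

```python
import os
import shutil

def pool_file(share_path: str, pool_disks: list[str]) -> str:
    """Move a file from the share onto the pool disk with the most free
    space, then leave a symlink behind in its place (greyhole-style)."""
    # Pick the pool disk with the most free bytes.
    target_disk = max(pool_disks, key=lambda d: shutil.disk_usage(d).free)
    target_path = os.path.join(target_disk, os.path.basename(share_path))
    shutil.move(share_path, target_path)
    # The original name now resolves to the pooled copy.
    os.symlink(target_path, share_path)
    return target_path
```

Clients browsing the share never notice; they just follow the symlink.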

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
Ugh it's been a day and I still can't figure this out.

Connect from my PC: all authentication via accounts work, and permissions per accounts work.

Connect from a Mac: it logs into the NAS with no authentication required, with full r/w access.

What the gently caress, FreeNAS?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

jeeves posted:

Ugh it's been a day and I still can't figure this out.

Connect from my PC: all authentication via accounts work, and permissions per accounts work.

Connect from a Mac: it logs into the NAS with no authentication required, with full r/w access.

What the gently caress, FreeNAS?

Are any of your users members of the "wheel" (administrators) group? They might have superuser permissions over the whole filesystem.
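One quick way to audit that is to list everyone who is effectively in a group, either as an explicit member or via their primary GID. A generic Python sketch (the admin group is `wheel` on FreeBSD/FreeNAS; the group name here is just a parameter):

```python
import grp
import pwd

def users_in_group(group_name: str) -> set[str]:
    """Return every user effectively in a group: explicit members
    plus users whose primary GID matches the group's GID."""
    group = grp.getgrnam(group_name)
    members = set(group.gr_mem)  # explicit membership
    for user in pwd.getpwall():
        if user.pw_gid == group.gr_gid:  # primary-group membership
            members.add(user.pw_name)
    return members
```

Run `users_in_group("wheel")` on the box; anyone listed there is a superuser candidate.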

Corb3t
Jun 7, 2003

I was looking at the Synology DS1511+, but a lot of goons have had issues with their Synology NAS boxes.

Can someone recommend another 5-bay NAS that is $800 or so?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

Factory Factory posted:

Are any of your users members of the "wheel" (administrators) group? They might have superuser permissions over the whole filesystem.

No, I made sure to set my members to non-wheel.

Actually, the pool itself (not the dataset) may still be set to wheel, and it may be detecting that Macs are logging in via wheel or some poo poo and giving them permissions over everything. Let me check.

Dear god this whole fiasco of just getting permissions to work on FreeNAS has left such a bad taste in my mouth. If there was a legitimate ZFS client for Windows I would just install Win2008 and never look back.


Edit - So even after changing the volume permissions to be only the 'myuser' user and 'myuser' group, same with the dataset, Macs can still log in without authentication and have full r/w access to the share.

You know what? gently caress it. gently caress you FreeNAS. You had your chance, but gently caress you. ZFS may close the RAID-5 write hole, but letting Macs have full unauthenticated access is an even bigger write hole, I would think. I'm just going to spend another $50 on a 32GB SSD, put Win2008 on it, and plug that SSD into the eSATA port on the back, snaking the cord back into the machine.

jeeves fucked around with this message at 22:09 on Jul 14, 2011

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

jeeves posted:

No, I made sure to set my members to non-wheel.

Actually, the pool itself (not the dataset) may still be set to wheel, and it may be detecting that Macs are logging in via wheel or some poo poo and giving them permissions over everything. Let me check.

Dear god this whole fiasco of just getting permissions to work on FreeNAS has left such a bad taste in my mouth. If there was a legitimate ZFS client for Windows I would just install Win2008 and never look back.


Edit - So even after changing the volume permissions to be only the 'myuser' user and 'myuser' group, same with the dataset, Macs can still log in without authentication and have full r/w access to the share.

You know what? gently caress it. gently caress you FreeNAS. You had your chance, but gently caress you. ZFS may close the RAID-5 write hole, but letting Macs have full unauthenticated access is an even bigger write hole, I would think. I'm just going to spend another $50 on a 32GB SSD, put Win2008 on it, and plug that SSD into the eSATA port on the back, snaking the cord back into the machine.

Try out mdadm plus LVM. I've been writing an article covering how to do this in detail. If you're interested I can share what I've got so far.
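Not the article, but the rough shape of it. A hedged Python sketch that just assembles the usual command sequence for RAID-5 under mdadm with LVM on top; device, VG, and LV names are placeholders, and these commands destroy whatever is on the disks, so dry-run by default:

```python
import subprocess

def build_raid5_lvm_cmds(disks, md="/dev/md0", vg="storage", lv="data"):
    """Command sequence: mdadm RAID-5 array -> LVM PV -> VG -> LV -> fs.
    All device/VG/LV names are illustrative placeholders."""
    return [
        ["mdadm", "--create", md, "--level=5",
         f"--raid-devices={len(disks)}", *disks],      # software RAID-5
        ["pvcreate", md],                              # array becomes an LVM PV
        ["vgcreate", vg, md],                          # volume group on the array
        ["lvcreate", "-l", "100%FREE", "-n", lv, vg],  # one big logical volume
        ["mkfs.ext4", f"/dev/{vg}/{lv}"],              # filesystem on the LV
    ]

def run(cmds, dry_run=True):
    """Print each command; only execute when dry_run is False (destructive!)."""
    for cmd in cmds:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)
```

The LVM layer is what lets you grow or reshape the volume later without touching the filesystem layout.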

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

jeeves posted:

You know what? gently caress it. gently caress you FreeNAS. You had your chance, but gently caress you. ZFS may close the RAID-5 write hole, but letting Macs have full unauthenticated access is an even bigger write hole, I would think. I'm just going to spend another $50 on a 32GB SSD, put Win2008 on it, and plug that SSD into the eSATA port on the back, snaking the cord back into the machine.

And that is why I did exactly that, except with a 64 GB SSD.

VVVV

I also hosed Ubuntu more than I ever hosed all Solaris/BSD variants combined, simply because I was more comfortable yet didn't know what I was doing.

Factory Factory fucked around with this message at 22:55 on Jul 14, 2011

Pweller
Jan 25, 2006

Whatever whateva.
I've thought long and hard and I'm pussying out of trying FlexRAID/RAID/ZFS. Too afraid of messing up the administration and losing everything. That and it requires extra hard drives.

Planning to rebuild my failing DIY NAS machine (cheap mobo capacitors are puffing up and strange things are starting to happen), and simply schedule a nightly file backup/mirror onto a large USB drive, with some sort of checksum checks. I think I'm going to get two enclosures to swap in and out of the configuration every week or two. This seems as simple, frugal, and low-maintenance as it gets for reasonably reliable backups.
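A checksum verification pass for a mirror like that can be as simple as hashing both trees and diffing. A Python sketch (illustrative only; for 2TB you'd want chunked reads rather than slurping whole files into memory):

```python
import hashlib
import os

def checksum_tree(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            # Reads the whole file at once; fine for a sketch, use
            # chunked hashlib updates for multi-GB files.
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

def verify_mirror(source: str, backup: str) -> list[str]:
    """Return relative paths that are missing from or differ in the backup."""
    src, dst = checksum_tree(source), checksum_tree(backup)
    return sorted(p for p, h in src.items() if dst.get(p) != h)
```

Cron that nightly after the copy and email yourself the list; an empty list means the mirror verified.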

Springing for a new, quality power supply and a decent motherboard. Leaving my Windows comfort zone for Ubuntu Desktop with OpenSSH and a VNC server, and I should be set. Going to forgo the massive HTPC headaches I've had in the past with connecting PC to TV and streaming to an Xbox 360 (a time sink resulting in horrible media management, if it ever even works, since I prefer to manage media by folder rather than metadata). Simply buying a wee WDTV Live box to replace the noisy and large XBMC Xbox attached to a wifi bridge in the living room.

I'm currently dealing with ~2TB from 10 years of photos, music, movies, and random files, which I could see being a lot less data than some goons have, particularly those of you focused on work implementations, but there's my solution for the time being. Tired of fiddling around constantly managing tech.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

Thermopyle posted:

Try out mdadm plus LVM. I've been writing an article covering how to do this in detail. If you're interested I can share what I've got so far.

I'm just going to go with Windows like Factory Factory did. My profession is Windows tech support and server maintenance, etc., so I was trying to expand my knowledge by playing around with ZFS (especially since ZFS seems better than RAID-5), but this was just unduly pissing me the gently caress off.

I feel like a car mechanic who spends all day fixing cars only to come home to a lovely car he can't figure out. I just want something that works, and if it breaks I'll have a better chance of fixing it with Windows.

movax
Aug 30, 2008

It took me a good many hours over Christmas break, but I finally got perms working the way I wanted under Solaris using ACLs. They are surprisingly powerful; if you are deathly afraid of the CLI, you can set all directories to initially give full perms to anonymous, and then use Windows Explorer to set permissions.

Highly effective; it lets me have my own personal user, a media user that I hand out to my roommates/visitors (that gets read access to everything and write access to my 'upload' filesystem), and various other accounts for other stuff.
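For reference, the kind of commands involved: a hypothetical Python helper that just assembles Solaris-style `chmod A+` ACE invocations for that read-mostly/upload-writable split. User names and dataset paths are made up, and you should check the ACE syntax against your Solaris docs before running anything:

```python
def solaris_acl_cmds(media_path: str, media_users: list[str],
                     upload_path: str) -> list[list[str]]:
    """Build chmod A+ invocations granting read on the media tree and
    read/write on the upload filesystem. Names/paths are placeholders."""
    cmds = []
    for user in media_users:
        # Read + traverse on the media tree.
        cmds.append(["chmod", f"A+user:{user}:read_data/execute:allow",
                     media_path])
        # Write access only on the upload filesystem.
        cmds.append(["chmod",
                     f"A+user:{user}:read_data/write_data/execute:allow",
                     upload_path])
    return cmds
```

Feed the result to your shell (or `subprocess.run`) once you've sanity-checked it against `ls -V` output.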

As is par for the course with Solaris though, the documentation was scattered around the world in various pieces, requiring the sacrifice of many a woodland creature and the breastmilk of a Japanese virgin.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
FreeNAS update: apparently, according to their own support forums, it is a bug with Apple and not FreeNAS. If you connect from 10.7 to any SMB share right now it will give you full read/write access. THANKS APPLE.


Actually, I have an SMB share on my winserver at home, I'll make sure it is actually Apple and not just some FreeNAS tard talking out of his rear end.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

jeeves posted:

FreeNAS update: apparently, according to their own support forums, it is a bug with Apple and not FreeNAS. If you connect from 10.7 to any SMB share right now it will give you full read/write access. THANKS APPLE.


Actually, I have an SMB share on my winserver at home, I'll make sure it is actually Apple and not just some FreeNAS tard talking out of his rear end.

I don't know WTF I'm talking about, but it sure seems like a design flaw if clients (Apple devices) can just ignore your access controls.

movax
Aug 30, 2008

jeeves posted:

FreeNAS update: apparently, according to their own support forums, it is a bug with Apple and not FreeNAS. If you connect from 10.7 to any SMB share right now it will give you full read/write access. THANKS APPLE.


Actually, I have an SMB share on my winserver at home, I'll make sure it is actually Apple and not just some FreeNAS tard talking out of his rear end.

What? This seems obscenely wrong; a bug in Apple software should not magically turn its SMB client into the loving Neo of SMB, able to hack into anything at will. I'm on 10.6.8 and I can't connect to my server's shares without proper authentication. Definitely check this out on your Windows share...

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

movax posted:

What? This seems obscenely wrong; a bug in Apple software should not magically turn its SMB client into the loving Neo of SMB, able to hack into anything at will. I'm on 10.6.8 and I can't connect to my server's shares without proper authentication. Definitely check this out on your Windows share...

I think the issue here might be authenticating and receiving more permissions than should be granted. Maybe it's some kind of mix-up with identically-named/passworded accounts on different systems that aren't part of a domain?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
I've been using SMB shares to my girlfriend's Mac as a ghetto HTPC for years now; it's only recently I have been trying to make a NAS to consolidate media. But yeah, the knee-jerk response from someone on FreeNAS's own forum was "oh, it's Apple's fault with their unreleased, untested 10.7!"

So I am going to go home and see whether her Mac has been able to gently caress with my Win2003 box without my noticing in the 2 weeks I've had 10.7 installed on her machine (this would be easy to recognize: I can just see if her Mac poo poo up my server with tons of hidden Mac .DS_Store files), or whether the FreeNAS people are full of poo poo.

I'm guessing the latter. I'm pretty close to just going with Win2008 instead of this poo poo, but I keep banging my head against this wall as I really like the benefits of ZFS over RAID-5 (and not having to drop more money on another HD to install Win2008 to).

Edit - Remoted in, and yeah, my Win2003 box doesn't have any of the .DS_Store files that Mac machines poo poo up any network space they have write access to (my Temp share that the Mac DOES have write access to has this ever-present hidden file, last modified earlier today when I tested, so it's not like they ever fixed that with 10.7 anyhow). This is definitely a problem with FreeNAS not properly asking her Mac for authentication, and not just a blanket loving bug with Mac 10.7 being Neo (laff) on all SMB shares.

In summation: ugh gently caress FreeNAS

jeeves fucked around with this message at 02:36 on Jul 15, 2011

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

jeeves posted:

I've been using SMB shares to my girlfriend's Mac as a ghetto HTPC for years now; it's only recently I have been trying to make a NAS to consolidate media. But yeah, the knee-jerk response from someone on FreeNAS's own forum was "oh, it's Apple's fault with their unreleased, untested 10.7!"

So I am going to go home and see whether her Mac has been able to gently caress with my Win2003 box without my noticing in the 2 weeks I've had 10.7 installed on her machine (this would be easy to recognize: I can just see if her Mac poo poo up my server with tons of hidden Mac .DS_Store files), or whether the FreeNAS people are full of poo poo.

I'm guessing the latter. I'm pretty close to just going with Win2008 instead of this poo poo, but I keep banging my head against this wall as I really like the benefits of ZFS over RAID-5 (and not having to drop more money on another HD to install Win2008 to).

Edit - Remoted in, and yeah, my Win2003 box doesn't have any of the .DS_Store files that Mac machines poo poo up any network space they have write access to (my Temp share that the Mac DOES have write access to has this ever-present hidden file, last modified earlier today when I tested, so it's not like they ever fixed that with 10.7 anyhow). This is definitely a problem with FreeNAS not properly asking her Mac for authentication, and not just a blanket loving bug with Mac 10.7 being Neo (laff) on all SMB shares.

In summation: ugh gently caress FreeNAS

Been following your issues here, just wanted to mention a couple things:

The .DS_Store files that OS X leaves on remote shares can actually be disabled in the OS. No idea why this isn't the default for non-AFP volumes, but hey, Apple. Here's the article on disabling that:

http://support.apple.com/kb/HT1629
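And if a share is already littered with them, a quick sweep like this hypothetical Python snippet will clean them out (point it at the share root, and test on a copy first):

```python
import os

def remove_ds_store(share_root: str) -> int:
    """Delete every .DS_Store file under a share; return the count removed."""
    removed = 0
    for dirpath, _dirs, files in os.walk(share_root):
        for name in files:
            if name == ".DS_Store":
                os.remove(os.path.join(dirpath, name))
                removed += 1
    return removed
```

Schedule it alongside your backups so the litter doesn't creep back while Macs are writing to the share.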

Also, if you want to run ZFS but don't want to put up with the bullshit that you're getting from FreeNAS, you may want to consider NexentaStor (or the Nexenta Core Platform), which I've been trying to gather info on. The NexentaStor Community Edition is free, allows stores of up to 18TB of raw disk, comes with a web GUI, and is pretty darn nice for what it does.

http://www.nexentastor.org/projects/site/wiki/CommunityEdition

If you do end up checking it out, share your experiences here; it'd be nice to hear more folks talking about the product (the last mentions before recently were dozens of pages ago in this thread).


teamdest
Jul 1, 2007
I flat-out don't understand how a client-side bug on a completely unrelated platform turns into root access on a Unix-based file server. That's an outright lie.


edit: I was running NexentaStor Community for a while, but there were a LOT of authentication issues between Linux, Windows, and Nexenta, so I wound up saying "gently caress it" and going back to my old standby of Debian/mdadm/XFS.
