Gism0 posted:I've got an Asus P8Z77-I Deluxe motherboard (Intel Z77 chipset) with 4 SATA ports and 2 eSATA ports. You should be able to use six; you'll just need to get an eSATA -> SATA cable and feed it back into the case. It's not the most elegant solution, but it should work. Just double-check the manual first - I had an Intel DH67CF that I thought I would be able to use five SATA devices with, but it turned out the eSATA port was mutually exclusive with one of the normal SATA ports, and it wasn't possible to use both at the same time. Guess they thought they could save some money by not wiring an extra SATA port off the PCH.

# Sep 27, 2012 10:33

Mantle posted:I'm having trouble with permissions management on my nas4free box. Is there no way in the GUI to create/manage folders in my zfs datasets? Doing this locally (i.e. if they are logged into the server directly) would involve changing the umask. Doing this in OS X is easy enough (see here), but whether it carries across to the fileserver depends on which protocol you are using. Doing it in Windows is not something I'm familiar with, sorry. Alternatively, if you're using CIFS/SMB (which I assume you are, at least for Windows), you can set masks on file/directory creation directly. If you set these to 006, you should get the desired permissions. Note that this will make ALL users create files with those permissions. The easiest workaround, if you only want SOME users to share files with their group, is to set the primary group of the users who shouldn't be sharing to something unique (most Linux systems give each user a group with an identical name for exactly this purpose). It's not the nicest solution, but it would work. Documentation on SMB/CIFS is [url=http://wiki.nas4free.org/doku.php?id=documentation:setup_and_user_guide:services_cifs_smb_samba]here[/url].
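A minimal sketch of what the umask approach above actually does, assuming a Linux shell (the 006 value is the one from the post):

```shell
# With umask 006, newly created files/dirs lose read/write for "other"
# but keep group read/write -- which is what group sharing needs.
umask 006
touch shared_file        # created as 666 & ~006 = 660 (rw-rw----)
mkdir shared_dir         # created as 777 & ~006 = 771 (rwxrwx--x)
stat -c '%a' shared_file # prints 660
stat -c '%a' shared_dir  # prints 771
```

The Samba-side creation masks mentioned above are the `create mask` / `directory mask` share options in smb.conf; setting those to something like 0660/0770 has a similar effect for files created over CIFS.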

# Sep 29, 2012 03:33

FISHMANPET posted:ZFS stores all of its metadata on the drives; you just have to run "zpool import" and it will scan all the drives and figure out what's on them. As far as I remember, mdadm stores its RAID settings in a superblock on every member drive, so mdadm --assemble --scan should just make everything magically work again. Just don't ever use --build or --create unless you intend to wipe things.
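For reference, the recovery commands being described look roughly like this. This is a sketch, not something to paste blindly: "tank", /dev/md0, and the member device list are placeholders, and everything here needs root:

```shell
# ZFS: scan attached disks for importable pools, then import by name.
zpool import            # with no argument, lists pools found on the drives
zpool import tank       # imports the pool named "tank" (placeholder name)

# mdadm: read the superblocks on the member drives and reassemble
# whatever arrays they describe.
mdadm --assemble --scan

# Never run --create against an existing array unless you mean to start
# over -- it writes fresh superblocks over the old ones:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]
```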

# Feb 27, 2013 09:02