alo
May 1, 2005


Does anyone have a suggestion for a low-power motherboard/CPU combination with plenty of PCI slots for loading up with drives? An overclocked Core 2 Duo isn't exactly a requirement for simple RAID5/raidz.

I could get away with limited PCI slots if there were numerous SATA ports. The idea is mostly to load up a machine with 7+ drives without the overhead of a modern power-hungry machine.

Bonus points if it runs Solaris.


alo
May 1, 2005


vanjalolz posted:

Can anyone provide a link with some info on resizing raid-z arrays?

I vaguely remember reading somewhere that you can upgrade drives, you just can't add them.

Possibly related question, how does raidz work with different sized drives?

You can replace the drives, one by one. The size will stay the same until the last drive is replaced. You're right in that you can't change a 4 disk array into a 5 disk array. Try it out with files before you do it with disks.
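If you want to see the behavior for yourself first, you can build a throwaway pool out of files (a quick sketch for Solaris; the sizes and paths are just placeholders):

code:
# make four 128MB backing files and build a raidz pool from them
mkfile 128m /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
zpool create testpool raidz /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4

# "replace a drive" with a larger one -- the pool size won't grow
# until every member has been swapped
mkfile 256m /tmp/d5
zpool replace testpool /tmp/d1 /tmp/d5
zpool status testpool
zpool destroy testpool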

With different sized drives, you'll run into the same situation as any other raid5 implementation. It'll take the size of the smallest disk.

alo
May 1, 2005


Sock on a Fish posted:

Is there some secret to setting sharenfs and sharesmb properties for ZFS on OpenSolaris? At work I set up a Solaris box and had all of my NFS shares configured in minutes, but trying to do the same at home on OpenSolaris is giving me problems. I'll do something like this:

code:
pfexec zfs create -o sharenfs=rw=192.168.0.0/24 rpool/share
And then I'll get an auth error when I try to mount from a machine in that subnet. The nfs server logs are useless in helping to diagnose the issue. I've verified that the nfs server is running and that filesystem permissions are such that I should be able to read rpool/share.

Does it mount with just sharenfs=on? If so, you might just need to add an @ before the network you specified (as in sharenfs=rw=@192.168.0.0/24). I use host names myself, but I've seen the @ syntax mentioned while figuring out NFS on OpenSolaris.
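Something like this should isolate it (untested on your exact setup, but the syntax matches what I've seen):

code:
# first rule out the access list: share it wide open
pfexec zfs set sharenfs=on rpool/share

# then try the network form with the @ prefix
pfexec zfs set sharenfs=rw=@192.168.0.0/24 rpool/share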

alo
May 1, 2005


IOwnCalculus posted:

I wouldn't trust that, though; I've had Ubuntu randomly reassign drive mappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.

Look at using the links in /dev/disk/by-id/, which will be consistent across reboots (and generally contain the drive's unique serial number, which makes it easier to identify a failed drive).
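For example (the id below is made up; yours will carry the real model and serial):

code:
ls -l /dev/disk/by-id/
# ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1234567 -> ../../sdb
# use these paths when building the array so a reboot can't shuffle them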

alo
May 1, 2005


FISHMANPET posted:

I made the decision last night at 3AM when I couldn't sleep to do this in the future. I just moved, so everything is complete chaos (though the first thing the lady had me set up was the server so we could watch some teev :3:), but I'm glad to hear it went well.

Correct me if I'm wrong, but isn't the XVM stuff removed from OpenIndiana since Oracle is no longer supporting it?

See also: http://opensolaris.org/jive/thread.jspa?threadID=134657

I have the same setup and will be moving to ESXi with a separate storage server in the near future. I'm not too crazy about having two machines where I used to have one. (Unless anyone has experience with passing drives directly to a VM in ESXi and the performance implications.)

alo
May 1, 2005


adorai posted:

You could also run Xen with raw disks passed to the guest.

I used to use OpenSolaris and Xen (XVM), but moved to ESXi with an OpenIndiana guest. Xen support has been removed in OpenIndiana, so you'd be stuck with an earlier build of OpenSolaris. Xen/XVM has been discontinued by Oracle, so there's no future path for that configuration.

Looking back, OpenSolaris and Xen had a bunch of annoying problems and it never worked 100%. Each new OpenSolaris build would introduce new problems with the XVM bits. Services would randomly fail to start, or need their startup timeouts increased to keep them from failing. The clock would be inaccurate (sometimes it would work; using ntp just made it worse). Sometimes the changes that needed to be made to grub wouldn't be made... A mess. It worked, just not amazingly well (and I've been running Solaris on x86 since ZFS was added).

Go ESXi; I'm happy that I did (so far, it's only been a week). Plain OpenIndiana with no Xen bits has been running like a champ without any issues for me. My biggest complaint so far is management: you have to use the Windows-only client, so I have to keep Windows machines around at work and home just for this. You can do some simple tasks via the service console over ssh (and some things can only be done there), but a lot of the (documented) functionality is reserved for paying customers.

alo
May 1, 2005


BnT posted:

Would I need to make nested datasets/filesystems to make this happen? If so, is it possible to move data instantly from a parent dataset into a nested dataset?

You'll have to create sub-datasets and move the data manually. I'd mv the audio and video directories to audio.old and video.old before creating the datasets, or ZFS might mount empty directories over the existing video and audio directories.
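Roughly like this (a sketch assuming a pool called tank with the parent dataset mounted at /tank/media; the names are made up):

code:
# move the originals aside so the new mounts don't cover them
mv /tank/media/video /tank/media/video.old
mv /tank/media/audio /tank/media/audio.old

zfs create tank/media/video
zfs create tank/media/audio

# note: mv across datasets is a real copy, not an instant rename
mv /tank/media/video.old/* /tank/media/video/
mv /tank/media/audio.old/* /tank/media/audio/
rmdir /tank/media/video.old /tank/media/audio.old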

alo
May 1, 2005


That Intel/LSI card you linked won't support anything larger than 2TB. Try a newer LSI card, or go for the IBM M1015 that everyone grabs from eBay and re-flashes.

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118182
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

As far as ESXi goes, the most important thing is whether your motherboard and CPU combination supports PCI passthrough in ESXi. Look for VT-d (Intel) or IOMMU (AMD).
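One quick sanity check before committing is to boot a Linux live CD and look for the IOMMU in dmesg (this only confirms the hardware/BIOS side, not what ESXi will actually allow):

code:
# Intel VT-d shows up as DMAR, AMD's IOMMU as AMD-Vi
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi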

That said, I have the older LSI card since I wasn't going to buy >2TB drives at the time and I'll probably get a new controller when I buy larger drives in the future.

alo
May 1, 2005


Delta-Wye posted:

It seems like something is missing :( Are there any other techniques/tools I can use to make sure the copy was good? It seems like rsync should have done the job, but I can't shake the suspicion that some files are missing based on the partition sizes.

You can try running something like this (on Solaris):
code:
gfind /tank/doc/ -type f -print0 | gxargs -0 md5sum | sort > oldnas.txt
Then run the same thing on your new NAS (note that I'm using the GNU versions of find and xargs here, hence the g prefixes) and diff the two resulting files (or md5sum them).
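On the new box that would look something like this (if it's Linux, plain find and xargs are already the GNU versions; adjust the path to wherever the copy lives):

code:
find /tank/doc/ -type f -print0 | xargs -0 md5sum | sort > newnas.txt
diff oldnas.txt newnas.txt && echo "everything matches"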

alo
May 1, 2005


Paul MaudDib posted:

What's the cheapest way to get a server rack? I've got some servers that I picked up cheap, but no rack. I've got a total of 5U of servers, 1x1U and 2x2U, so I probably want like 7U for a bit of airflow and expandability. Can I buy the rails and make one myself, or is there something cheap enough to make it not worth it? Craigslist, maybe?

Option 1: the ubiquitous "lack rack" (not very sturdy, though). It's really just like stacking your servers in a pile and putting a table over them...
Option 2: there are some decent* half-racks out there for $300ish.
Option 3: a surplus/craigslist 42U rack (luck dependent; hope you've got room for it).
Option 4: terrible two-post "audio" racks. There are usually a whole lot of these in the right sizes (5U to 20U), but they'll be two-post, round hole, aka poo poo.

Option 2 or 3 is much more palatable if you have a whole bunch of extra space. Also, since you've already got 5U of stuff, think about a 1U switch, plus maybe a 2U UPS?

* decent meaning four post, square holes.

alo
May 1, 2005


A Norco with drives (although drive noise varies) and 120mm fans is actually pretty quiet. My desktop is much louder.

alo
May 1, 2005


I have three Scythe 120mm fans, each rated for 25 CFM and 7.5 dBA. Make sure you also replace the 80mm fans on the back of the case. I have two 80mm 30 CFM 25 dBA fans -- they're the main source of noise on mine.

I had to turn everything off here to notice that there are hard drive seek noises.

alo
May 1, 2005


GokieKS posted:

I assume these are the 800 RPM GTs? 25 CFM actually seems pretty low - how many drives do you have and what kind of CPU/HDD temps are you seeing? If 3 of those is really sufficient for a full set of drives, then the Corsairs that I'm fond of should definitely be enough.

And yeah, those back 80mms will be replaced too. They should have much less impact on the temperature of the drives though, so I'm not as worried about those as I am with the 120mms.

I only have half of my bays full. The fans I have are no longer sold, but here's their newegg page: http://www.newegg.com/Product/Product.aspx?Item=N82E16835185056. I just looked for comparable fans from the same manufacturer and holy poo poo, they're selling them for ~40 bucks a fan (on newegg, at least).

Anyhow, my "system temp" as reported by ESXi is 32C. My drives are about 33-37C according to SMART.
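If anyone wants to compare, smartmontools will report drive temperatures (the device name will vary):

code:
# Temperature_Celsius is attribute 194 on most drives
smartctl -A /dev/sda | grep -i temperature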

alo
May 1, 2005


Recently, I replaced a drive in an older ZFS pool -- it wasn't very smooth, though. The zpool was created with 512-byte sector drives (ashift=9), but the replacement disk I purchased was a newer 4KB sector drive. ZFS wouldn't let me replace the drive, so I had to back up, destroy, recreate, and restore the zpool (which now has ashift=12 set, with four 512-byte drives and one 4KB drive). Annoying, but since it was an older machine with 1TB drives I use for scratch storage in my office lab, it wasn't really a big deal (only 2.5TB of data, mostly test VMs I could blow away).
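If you want to know what you're dealing with before buying a replacement, zdb will show the pool's ashift (pool name is a placeholder):

code:
zdb -C tank | grep ashift
# ashift: 9  -> 512-byte sectors
# ashift: 12 -> 4KB sectors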

Now at home, I've got almost the same situation (although the disks are OK right now). I have a zpool with 8 disks in raidz2, but in this case, there's no easy way to back up the data (about 7.5TB). I already have the important data backed up in multiple locations, but the rest of the data is media that I wouldn't want to spend the time re-downloading.

Online cloud backup seems like it would cost money (not a problem) and take 2 months to upload (holy poo poo). I'm actually considering tape, since I already have an LTO3 drive sitting around at my office (media would be cheaper than hard drives, plus the backup/restore time is much better than 2-4 months).

Any thoughts? I actually think tape is going to win this one -- anything I should know before I start?
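For reference, the naive plan I'm leaning toward is just streaming zfs send at the tape drive (untested with my LTO3; dataset names are made up, and /dev/rmt/0n is the standard Solaris no-rewind tape device):

code:
# snapshot, then stream it to the no-rewind tape device
zfs snapshot tank/media@totape
zfs send tank/media@totape | dd of=/dev/rmt/0n bs=262144

# restore is the reverse
dd if=/dev/rmt/0n bs=262144 | zfs receive tank/media-restored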

alo
May 1, 2005


thebigcow posted:

Do they still have the thing where you can set a jumper on the drive to have it present itself as a 512b sector drive?

I think the jumper thing is only for correct alignment on older operating systems. I just went through the same thing (see a few posts up), and the solution was to completely rebuild my zpool.

I guess it's a great opportunity to dump Solaris 11.

alo
May 1, 2005


While the word "Kerberos" is on the page: has anyone had any luck getting NFSv4 working with Kerberos and Active Directory (Solaris NFS server, mixed clients, mostly Linux)?

alo
May 1, 2005


Well, a drive in my 8-drive raidz2 just started throwing errors. I purchased them (a mix of 2TB drives at... $80 a pop) a month before the Thailand floods, so they've all had a good life.

Just bought a (larger) replacement. It looks like drive prices have decreased, but not at the rate they did from 2000 to 2010 -- I can still buy a 2TB drive for 80 bucks.

alo
May 1, 2005


BlankSystemDaemon posted:

Yeah, there's a strange phenomenon where the price per TB is anything but linear despite the fact that you'd think the smaller drives wouldn't be selling at all with the much bigger drives as relatively cheap as they are.

I'll be replacing each of these with 14TB drives. I don't expect to get >10 years of power-on time again, but we'll see. Looks like I'll be paying between 250 and 320 dollars per drive now.

Also thinking about replacing the enclosure with something a bit more robust (moving the drives to a SAS/SATA JBOD sounds nice). Back when I put the system together, the Norco 4224 was the popular enclosure. It looks like in the last 10 years a lot more old JBODs have appeared on the used market that might be better suited for home storage (or at least not suck like the Norcos). I don't need a netapp-disk-shelf-sized JBOD, but is there anything in the 12-16 drive range that's not terrible? Sound isn't an issue since this is in a dedicated room in my basement (in a nice and cozy rack).

alo
May 1, 2005


I ended up getting the DS4246. Got it in, racked it up, and swapped over all of my drives. It's a big improvement over my old Norco, whose backplane was dropping drives every few days (probably just old and brittle).

The drive carriers are great, much better than the Norco (and Supermicro stuff too), and actually better than most of the Dell/HP stuff I've worked with. I'm mostly referring to insertion: with some drive sleds you can make contact before you close the latch, or you miss the catch when inserting and the drive isn't fully seated but is still powered on.

In addition, since it's an actual SAS JBOD, I can query it to get the drive locations.
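On Linux that's sg3_utils territory, something like this (the sg device name is whatever the enclosure's SES processor enumerates as on your system):

code:
# find the enclosure's sg device
sg_map -i

# dump slot/bay status from the SES processor
sg_ses --page=aes /dev/sg4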


alo
May 1, 2005


Less Fat Luke posted:

How's the noise? Did you swap any of the PSU fans or leave it as is?

I'm seeing a lot of those and other DAS arrays on eBay thanks to Chia coin cratering (lol)

The noise is ok. It’s loud on first boot, but then the fans slow down to the quieter side of enterprise gear. I have a basement so noise isn’t really much of an issue.

Of course my wife was in the next room the other night and said “is that sound the new [disk shelf]” so perhaps I’m not the best judge of noise.
