TheNothingNew
Nov 10, 2008


Fitret posted:

I am indeed planning on doing hardware RAID.

Hey, lookit that. I open this thread to ask a question, and someone's building a setup with the same motherboard. Cool.

I don't know how it affects your plans, but this board appears to have two controllers. See link below, pop the third pic open.
http://www.newegg.com/Product/Produ...N82E16813128359

The two purple SATA connections are on their own controller, and support RAID 0/1/JBOD. The six yellow SATA connections are run off the south bridge, and support RAID 0/1/5/10/JBOD. It looks like you cannot link the two.

I'm using DDR2-800, seems fine, although I should say that I'm not actually up and running yet. Check Gigabyte's website; I'm pretty sure it says it supports DDR2-800.

edit: oh yeah, whole freaking list of supported DDR2 800 memory chips, and this is just what they've bothered to test:
http://www.gigabyte.us/Products/Mot...?ProductID=2916

Oh, and setting up RAID is damned easy. You turn on the RAID controller you want in the BIOS, save/quit, then enter RAID and pick your RAID type and drives. And make sure you pick your boot order in the BIOS settings, you don't want your OS on the RAID array.

Question for the thread: how long, at a guess, does it take to set up the initial sync for a RAID array? RAID 5 (5+1), with 1TB drives, ~4.5 TB useable. 5400 rpm drives, 64kb stripe.
It's just that there isn't a countdown or anything, and this has been running for the last six hours. Still says "initialize" under status.

TheNothingNew fucked around with this message at 06:37 on Aug 27, 2009


Fitret
Mar 25, 2003

We are rolling for the King of All Cosmos!

TheNothingNew posted:

Hey, lookit that. I open this thread to ask a question, and someone's building a setup with the same motherboard. Cool.

I don't know how it affects your plans, but this board appears to have two controllers. See link below, pop the third pic open.
http://www.newegg.com/Product/Produ...N82E16813128359

The two purple SATA connections are on their own controller, and support RAID 0/1/JBOD. The six yellow SATA connections are run off the south bridge, and support RAID 0/1/5/10/JBOD. It looks like you cannot link the two.

I'm using DDR2-800, seems fine, although I should say that I'm not actually up and running yet. Check Gigabyte's website; I'm pretty sure it says it supports DDR2-800.

edit: oh yeah, whole freaking list of supported DDR2 800 memory chips, and this is just what they've bothered to test:
http://www.gigabyte.us/Products/Mot...?ProductID=2916

Oh, and setting up RAID is damned easy. You turn on the RAID controller you want in the BIOS, save/quit, then enter RAID and pick your RAID type and drives. And make sure you pick your boot order in the BIOS settings, you don't want your OS on the RAID array.

Question for the thread: how long, at a guess, does it take to set up the initial sync for a RAID array? RAID 5 (5+1), with 1TB drives, ~4.5 TB useable. 5400 rpm drives, 64kb stripe.
It's just that there isn't a countdown or anything, and this has been running for the last six hours. Still says "initialize" under status.

Wow, thanks for the heads up! Doesn't affect my board choice, though. As you saw, I've got 1 slot for a boot drive anyway, so I was only planning on 7 usable drives. I can live with 6.

TheNothingNew
Nov 10, 2008


Fitret posted:

Wow, thanks for the heads up! Doesn't affect my board choice, though. As you saw, I've got 1 slot for a boot drive anyway, so I was only planning on 7 usable drives. I can live with 6.

Remember that JBOD means Just a Bunch Of Disks - you could have your six in a RAID of some sort, then set the other two as JBOD, say your boot drive and some other random drive.

Good choice of board. I've got this one for my NAS, and the vanilla one for my desktop.

8 hours and counting on this volume creation. I'm going to bed, maybe it'll be done when I wake up. At least it's not generating any real heat.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


FreeNAS is installed on my Acer box now (it makes a nice little noise after booting) but I think my real problem is the software does not recognize my NIC. The lights on the Ethernet jack don't light up, the router claims to get no ethernet signal, and I certainly can't get into the web interface.

Any advice on how to get this working proper? The NIC is "Gigabit Ethernet Marvell Yukon 88E8071"

TheNothingNew
Nov 10, 2008


After 24 hours I gave up. Using software RAID through FreeNAS instead. ETA = 6 hours. Booyah.


tehschulman posted:

FreeNAS is installed on my Acer box now (it makes a nice little noise after booting) but I think my real problem is the software does not recognize my NIC. The lights on the Ethernet jack don't light up, the router claims to get no ethernet signal, and I certainly can't get into the web interface.

Any advice on how to get this working proper? The NIC is "Gigabit Ethernet Marvell Yukon 88E8071"

I'd say you don't have the right drivers, but since you aren't getting any light when you plug the cable in, I'm guessing dead NIC. You can buy a cheap PCI drop-in card, I think.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


Yeah, I think I've traced down the issue: FreeNAS (FreeBSD) doesn't have the drivers for my Ethernet chipset (there is definitely no connection between the box and router, and I've checked every other aspect). There are drivers I can download from a few sites, but I've never patched or tinkered with the BSD kernel before, and it seems a little easier to just install WHS again for the time being. Buying a more generic NIC doesn't sound like a bad plan though. Just hoping it will work someday.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


Went to Staples and bought a PCI NIC, turns out my box just has a PCIe slot or somesuch, but I went back and bought a USB to Ethernet dongle.

Now my router can actually see the file server box, it resolves an IP with DHCP and everything, BUT I STILL CAN'T loving ACCESS THE BOX VIA FREENAS WEBGUI. I've plugged the drive into my external enclosure and run the server through my Thinkpad, remotely logged into it with my eeePC without a problem. I set some services to run, for all I know FreeNAS is running just fine in the file server, but why the gently caress can I still not access it???


If you'd like to buy an Acer Aspire Easystore H340 there is one for sale on SA-Mart now! I think in my situation with the features I desire in a server, building my own box would be a better solution right now. The box is brand new, please PM or post in the thread if you'd like to get in touch. Thanks!

Dotcom Jillionaire fucked around with this message at 05:06 on Aug 30, 2009

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

KennyG posted:

After about 3 straight weeks of time on a Core2 Quad 9550 I was able to run 37,764,768,000,000 simulations on 'virtual drives' in various RAID configurations to try and put some concrete probability numbers on RAID configurations using standard available hard drives. It wasn't the most efficient method, but I feel it's statistically valid and pretty accurate given the real world anomalies in play.

Sun introduced triple-parity RAID-Z in one of their latest Nevada builds. Care to simulate that?

Henrik Zetterberg
Dec 7, 2007




Looking to pick up a pair of 32MB cache 1TB drives (also considering 1.5TB if cheap enough) to slap into my unRAID and have a couple questions.

My machine is:
AMD Athlon 2800+
Gigabyte 7N400-L... something or other.
1.5gb RAM
No PCIe slots, only PCI.
No SATA ports on the board, only IDE and whatever SATA cards on the PCI bus.

- Is there a general preference between Seagate and WD? Just from a quick glance, it looks like WD has a 5-year warranty compared to Seagate's 3. Prices seem pretty similar. OP seems down the middle.
- Will the WD Black drives make much of a speed difference over the Greens? It's only a $10 difference on Newegg, so if I go WD, I'll probably just go with Black unless Greens give me significantly more power savings.
- Is there a suggested PCI SATA card? 4-port would be best.

Henrik Zetterberg fucked around with this message at 07:22 on Sep 1, 2009

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good


Get the Black drives from WD for unRAID. Their 5-year warranty (vs. 3) makes me more confident in buying their drives. Also, in unRAID the faster the parity drive, the better the array performs, so if you can, I'd drop one of those in as the parity drive too. The Black drives are close to the fastest consumer-grade hard drives you can get without dabbling in Raptors and crazy 10k RPM disks.

I have two 1TB Black drives in my Unraid, one of which is parity and I get consistently higher transfer and parity check rates than others who post on the unraid forums. Just my two cents anyway.

In terms of a SATA card, I'd personally check the Unraid forums to confirm as you want to make sure whatever card you get works in unraid. Most all do, but what a bitch it would be if yours didn't.

Remember, your fastest SATA ports will be the onboard ones though, so if you plan to drop a parity drive onto one (which I would recommend), it would speed things up considerably.

deong
Jun 13, 2001

I'll see you in heck!


So I just picked up an Intel SS4200-E along w/ another 1TB drive. Looking around the web, it looks like the default OS is a bit lacking. I'm thinking of putting OpenSolaris on it for ZFS integration, but was wondering if anyone has had experience with the box? It looks like the DOM/PATA is inaccessible w/ the Linux distros I've seen mentioned, but serving from a USB stick works fine.

Mostly wondering if anyone else has used the box, and has had experience w/ installing an OS to it?

frunksock
Feb 21, 2002



TheNothingNew posted:

Question for the thread: how long, at a guess, does it take to set up the initial sync for a RAID array? RAID 5 (5+1), with 1TB drives, ~4.5 TB useable. 5400 rpm drives, 64kb stripe.
It's just that there isn't a countdown or anything, and this has been running for the last six hours. Still says "initialize" under status.

I just made a 3+1 RAID-Z array with 1TB drives. It took, I dunno, 100ms? No amount of time long enough that I could really perceive it. At this point, for a NAS, I'd need to have a pretty drat good reason to not use Solaris.
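If anyone wants to see for themselves without spare disks, ZFS will happily build a pool on plain files (pool name and paths here are arbitrary; needs ZFS installed and root):

```shell
# File-backed vdevs: a way to try a 3+1 RAID-Z without spare disks.
for i in 1 2 3 4; do
  truncate -s 128M /tmp/vdev$i     # 'mkfile 128m' on Solaris
done
zpool create demo raidz /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4
zpool status demo                  # ONLINE immediately, no initialize pass
zpool destroy demo                 # tear the toy pool back down
```

The pool shows ONLINE as soon as the create returns, which is why there's no hours-long sync step like hardware RAID 5.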

TheNothingNew
Nov 10, 2008


frunksock posted:

I just made a 3+1 RAID-Z array with 1TB drives. It took, I dunno, 100ms? No amount of time long enough that I could really perceive it. At this point, for a NAS, I'd need to have a pretty drat good reason to not use Solaris.

I do have a good reason, and that reason is: tard-proof. Or fairly so. I went with freenas, software raid 5, took a couple of hours? Don't recall.
But yeah, I don't have the Unix chops to take a run at a Solaris NAS box just yet.

Ethereal
Mar 8, 2003


FreeNAS 0.7 has ZFS built into it. That makes it super tard proof.

TheNothingNew
Nov 10, 2008


Ethereal posted:

FreeNAS 0.7 has ZFS built into it. That makes it super tard proof.

Ah. Um, indeed.
I'd love to state that I have a decent understanding of RAID 5 and how it works, and so was more comfortable with that idea, which is true, but the real factor is that I got stuck in "must have raid 5" mode and forgot that I had other options.

Ah well, I'm happy enough with it.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!


Holy gently caress.

The holiest of grails. 67TB for $7867. Including custom-designed (plans open sourced) case.

Although I'd use supermicro SATA cards.

frogbs
May 5, 2004
Well well well

deimos posted:

Holy gently caress.

The holiest of grails. 67TB for $7867. Including custom-designed (plans open sourced) case.

Although I'd use supermicro SATA cards.

That is insane... I'd love to read more about the software side of everything though, how they manage all that storage without going crazy.

IOwnCalculus
Apr 2, 2003





Ha, a few of my coworkers and I discussed that today. Some of the highlights:

*I feel sorry for any customer whose data ends up on the three backplanes that are sharing one PCI controller and not one of the other six backplanes that split the load across three PCIe controllers. That thing is going to rape the PCI bus.
*Only one boot drive, and nonredundant power supplies? I hope they have all customers redundant across multiple boxes.
*How do you swap drives in that thing readily? The amount of downtime has got to be painful. You've got to take it offline, power down both PSUs, wait for everything to spin down, then unrack a nearly-100lb server (over 70lb of drives alone!). Pop it open, locate the failed drive (how will they know which one failed at this point? Match serial numbers?), swap it, button it all up. Re-rack it, power it on (two steps), start the rebuild (mdadm is not exactly fast), and hope none of the other drives decided to use the spindown as an excuse to die.
*Why not low-speed green drives? Clearly (see bus concern above) performance is a nonissue. Low-power drives would have made this a lot easier to power and cool.

It's really a pretty damned neat idea and very clever, but I feel like they're going to regret some of their choices in a few years when drives start popping left and right.

MrMoo
Sep 14, 2000



IOwnCalculus posted:

*Only one boot drive, and nonredundant power supplies? I hope they have all customers redundant across multiple boxes.

Already covered in the article, much like Google they have replication and de-duplication occurring at a higher level.

IOwnCalculus posted:

*How do you swap drives in that thing readily? The amount of downtime has got to be painful.

Much like Google they don't, it's too expensive in time when your engineers should be racking up new servers instead.

IOwnCalculus posted:

*Why not low-speed green drives? Clearly (see bus concern above) performance is a nonissue. Low-power drives would have made this a lot easier to power and cool.

Presumably they also carry a "green" premium that makes it cost inefficient and too slow on response time. Google's DIY designed PSUs suggest that green devices are good but I presume 5,400 rpm disks would simply be too slow?

H110Hawk
Dec 28, 2006


IOwnCalculus posted:

*(how will they know which one failed at this point? Match serial numbers?)

Since they build them all alike, they know the mapping exactly of how Linux will enumerate the disks. They probably just have a cheat sheet that shows exactly which slot holds /dev/sdh or whatever.

eames
May 9, 2009



It is a neat idea but I wouldn’t want to be a customer there.

What happens when their fancy $38 80GB PATA Boot drive bites the dust?
They clearly have no RAID1 or tape backup in place for such an event.

Cheap is nice and all, but maybe they should have invested the $30 for their power switch in a second boot drive for $38 instead.

e: I’d also like to know why they chose 1.5 TB 7200.11 Seagate Barracudas. A quick google search turned up lots of nice stories such as this and this.
Those problems were most likely fixed already but still… yikes.

eames fucked around with this message at 16:05 on Sep 4, 2009

IOwnCalculus
Apr 2, 2003





H110Hawk posted:

Since they build them all alike, they know the mapping exactly of how Linux will enumerate the disks. They probably just have a cheat sheet that shows exactly which slot holds /dev/sdh or whatever.

I wouldn't trust that, though; I've had Ubuntu randomly reassign drive mappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.

Don't get me wrong, I like what they've done, but I think they haven't made enough concessions to long-term maintenance and reliability; it seems like for a relatively marginal increase in cost (especially compared to even the next-cheapest solution) they could make things considerably more reliable.

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!


I don't know if this is the thread to ask in, but I'm looking at the Iomega ix4-200 and Drobo Pros as a backup iSCSI device to run virtual machines from, with two ESXi servers in a small office (25 workstations, 2 servers). The servers have good internal storage (RAID 5 15K SAS arrays), so I definitely wouldn't want to just use the device, but it would be nice to be able to boot a backup copy of one server or the other in a pinch should an entire server go down. I don't expect any sort of automatic failover, just the ability to recover more quickly than restoring from USB. Does anyone have any experience with this type of setup?


Links:
These: iomega ix4-200r
http://go.iomega.com/en-us/products...verviewItem_tab
These: iomega ix4-200d
http://go.iomega.com/en-us/products..._tab%20onclick=
And the Drobo Pro's don't really need linking.

frunksock
Feb 21, 2002



moep posted:

It is a neat idea but I wouldn’t want to be a customer there.

What happens when their fancy $38 80GB PATA Boot drive bites the dust?
They clearly have no RAID1 or tape backup in place for such an event.

Cheap is nice and all, but maybe they should have invested the $30 for their power switch in a second boot drive for $38 instead.

e: I’d also like to know why they chose 1.5 TB 7200.11 Seagate Barracudas. A quick google search turned up lots of nice stories such as this and this.
Those problems were most likely fixed already but still… yikes.

It doesn't matter. Say your web site requires 10PB of online storage (think storing photos or videos, or in this case, online backups). Now, you also need to back that data up. You can back up to tape, but that's not cheap, and doing your restores can become very tricky. If you're dealing with photos, you might have 35,000,000 1.5MB JPEGs per 50TB. At that scale, things like 'find' and 'ls' and 'fsck' just kind of stop working. Depending on how you decided to back up your data to tape, a restore of a single user's images might be easy, but a restore of an entire filesystem nearly impossible, or vice versa. Or a single day's images, etc. And tape isn't that much cheaper than consumer-level SATA disk. So, you decide to just keep multiple copies of all the data online, and you build redundancy and integrity into your application layer. At this point, you don't really care if a single 50TB unit goes offline, just like, at a smaller scale, you don't care if a single 300GB drive in your RAID goes offline.

Conceptually, think of doing something like RAID5 or RAID1, but instead of striping and writing parity across individual disks, or mirroring across individual disks, you have your app layer doing that across multiple storage bricks. We are used to the idea of expecting a single drive to fail; they are used to the idea of expecting a single 50TB unit to fail.
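Conceptually it can be as dumb as this toy sketch (all paths and names made up): the app layer writes every object to several bricks, and reads fall through to whichever brick still has a copy.

```shell
# Toy app-layer redundancy: mirror each object across N brick directories.
put_object() {
  src=$1; shift                     # file to store, then the brick dirs
  for brick in "$@"; do
    mkdir -p "$brick"
    cp "$src" "$brick/$(basename "$src")"
  done
}

# Read an object back from the first brick that still holds it.
get_object() {
  name=$1; shift
  for brick in "$@"; do
    [ -f "$brick/$name" ] && { cat "$brick/$name"; return 0; }
  done
  return 1                          # every replica lost
}
```

Lose a whole brick and get_object just falls through to the next one; that's "RAID across bricks" in a few lines.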

People are comparing this to something like Sun's X4540, but it's kind of misguided since Sun engineered the X4540 to be reliable and perform well as a single unit. These guys don't care because they only need the complete system to be reliable and to perform well, not a single unit.

Eternity
Mar 13, 2008


A user is getting an "already connected to that location" error when trying to map another drive with the same login.

He already has one drive mapped on the same server.

The Windows share is configured on the NAS server.

H110Hawk
Dec 28, 2006


IOwnCalculus posted:

I wouldn't trust that, though; I've had Ubuntu randomly reassign drive mappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.

Except that once you've popped out the disk you can validate the serial number. It should give you a pretty good first guess. This is one of the reasons we like Solaris for our backup servers. It maps a drive out pretty much the same way every time. You can even have Linux remap the disks every time it boots up and update the database if something changes.

quote:

Don't get me wrong, I like what they've done, but I think they haven't made enough concessions to long-term maintenance and reliability; it seems like for a relatively marginal increase in cost (especially compared to even the next-cheapest solution) they could make things considerably more reliable.

They make most of their money on that marginal increase in cost. I imagine they just have some dude go around to all the devices with a dead disk once a day, or even every few days, power them down, swap the disk, and power them back up. For that price no one is expecting 100% reliability. Just tell people "yeah, your poo poo might go down between 10am and 1pm daily for drive repair and other maintenance." You even have the advantage that people will want to do their backups overnight, so you can work during the day.

Having done our storage system where it has to scale past a handful of servers, the cheaper way almost always adds up to the better way. Disks don't fail that often under backup use. One guy, $15/hour, full health benefits. 30 minutes per server per failed disk. You have to beat $15 for "increased reliability". You probably get at most 1.2-1.5 failed disks per rack per day. I imagine any increase in cost would be at a minimum $100 per server. 10 servers in a rack, $1000/rack, $10,000 total. That is 666 drive swaps. You're hoping to bring failure from 1.5 to 1.2 reliably? 1.5 to 0.5? Or just reduce the time spent per disk from 30 minutes to 20 or 15?
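Sanity-checking my own numbers there, with every input a rough guess from the post ($15 per swap is roughly an hour of loaded labor time):

```shell
# All inputs are the rough guesses above, nothing measured.
cost_per_swap=15                 # $ of labor per drive swap
premium=$((100 * 10 * 10))       # $100/server x 10 servers/rack x 10 racks
echo "\$$premium buys $((premium / cost_per_swap)) drive swaps"
```

That prints "$10000 buys 666 drive swaps", which is where the 666 figure comes from.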

Now, with that math aside, if you're replicating this data 2 ways you can just let one machine wholesale fail before replacing it. Pull the whole unit once it gets past what you consider a safe failure threshold, put a new one in the rack, let it copy the data over automatically, and repair the server offline all at once. This would probably let you cut time per disk from 30 to 20 minutes, since there isn't a long spindown/spinup time per disk replaced, and no clock ticking. Their threshold for total failure is likely a lot higher than you might imagine if they worked in this manner.

What do you propose the next cheapest solution to be that would beat their cost and margin?

alo
May 1, 2005




IOwnCalculus posted:

I wouldn't trust that, though; I've had Ubuntu randomly reassign drive mappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.

Look at using the links in /dev/disk/by-id/, which will be consistent across reboots (and generally contain the unique serial number of the drive, which makes it easier to identify a failed one).
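Something like this (the directory is parameterized so you can dry-run it against a fake tree; on a live box you'd point it at /dev/disk/by-id):

```shell
list_disks() {
  # Map stable by-id names to the kernel /dev/sdX nodes they point at.
  dir=${1:-/dev/disk/by-id}
  for link in "$dir"/ata-*; do
    [ -e "$link" ] || continue
    printf '%s -> %s\n' "$(basename "$link")" "$(readlink -f "$link")"
  done
}
```

Then hand the by-id path straight to mdadm (e.g. mdadm /dev/md0 --add /dev/disk/by-id/ata-&lt;model&gt;_&lt;serial&gt;, placeholder name) and it stays meaningful no matter how the kernel shuffles sdX.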

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


H110Hawk posted:

Having done our storage system where it has to scale past a handful of servers, the cheaper way almost always adds up to the better way. Disks don't fail that often under backup use. One guy, $15/hour, full health benefits. 30 minutes per server per failed disk. You have to beat $15 for "increased reliability". You probably get at most 1.2-1.5 failed disks per rack per day. I imagine any increase in cost would be at a minimum $100 per server. 10 servers in a rack, $1000/rack, $10,000 total. That is 666 drive swaps. You're hoping to bring failure from 1.5 to 1.2 reliably? 1.5 to 0.5? Or just reduce the time spent per disk from 30 minutes to 20 or 15?

What do you propose the next cheapest solution to be that would beat their cost and margin?

For the cost, they can replicate data across 2 or 3 different racks and not give a poo poo if an entire rack burns down. And yeah, unless you're going to be paying some poor Chinese boy to change tapes in the 500 LTO-4 drives you'd be using for nightly backups, it makes way more sense to just say gently caress it and throw together a new rack and let the data replicate over for added parity.

I kinda want to see the block diagram for the higher level de-dupe and replication. I shudder to think what the block/chunk database looks like for the de-dupe function.

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!


Methylethylaldehyde posted:

For the cost, they can replicate data across 2 or 3 different racks and not give a poo poo if an entire rack burns down. And yeah, unless you're going to be paying some poor Chinese boy to change tapes in the 500 LTO-4 drives you'd be using for nightly backups, it makes way more sense to just say gently caress it and throw together a new rack and let the data replicate over for added parity.

I kinda want to see the block diagram for the higher level de-dupe and replication. I shudder to think what the block/chunk database looks like for the de-dupe function.

Did I miss something, or do they only have the single motherboard NIC per unit? Replicating 67TB over that might take a while.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


bob arctor posted:

Did I miss something, or do they only have the single motherboard NIC per unit? Replicating 67TB over that might take a while.

Say you replicate your data twice in a RAID1 setup, so now you have 3 copies of the data across 3 different bricks. What are the chances that 2 bricks will fail completely in the time it takes to replicate across that gigabit link? It takes 10 days to completely replicate the data over a single GigE link. The chances of 3 RAID6s failing within 10 days of each other is so retardedly low as to approach zero outside of acts of god.
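The 10-day figure checks out, assuming ~80 MB/s sustained over GigE (wire speed is ~125 MB/s; 80 is my guess at real-world throughput):

```shell
# Days to push one 67TB brick through a single GigE link at an
# assumed sustained 80 MB/s.
awk 'BEGIN {
  bytes = 67 * 10^12
  rate  = 80 * 10^6            # bytes/sec, assumed sustained throughput
  printf "%.1f days to replicate\n", bytes / rate / 86400
}'
```

That prints "9.7 days to replicate", i.e. call it 10.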

Hell, I'm betting like 5% of their RAID6s are running either partially or completely degraded. And they don't give a poo poo.

oblomov
Jun 20, 2002

Meh... #overrated

Methylethylaldehyde posted:

Say you replicate your data twice in a RAID1 setup, so now you have 3 copies of the data across 3 different bricks. What are the chances that 2 bricks will fail completely in the time it takes to replicate across that gigabit link? It takes 10 days to completely replicate the data over a single GigE link. The chances of 3 RAID6s failing within 10 days of each other is so retardedly low as to approach zero outside of acts of god.

Hell, I'm betting like 5% of their RAID6s are running either partially or completely degraded. And they don't give a poo poo.

Yeap, that's the great part of the setup, I'm thinking. They are going Google for storage hardware. No idea what their block size is for replication, but say it's 256MB; they can replicate a shitload of those and stripe it through the datacenter in 3-4 places if they want.

It's a pretty clever design for a particular function, meant to drive down storage cost. It's not meant to be a self-enclosed, super-redundant system. If that's the goal, might as well buy a Sun or EqualLogic box.

oblomov
Jun 20, 2002

Meh... #overrated

There's a pretty interesting NAS that just came out from Iomega/EMC: the Iomega StorCenter ix4-200d.

http://www.iomega.com/about/prrelea...82709_200d.html

Apparently it can act as iSCSI, and it's on the HCL for MS Win 2008 AND VMware ESX(i)! Also supports NFS with VMware. Plus it integrates with AD, provides remote replication, and serves CIFS (and NFS). It also has 2x 1GbE ports and a poo poo load of other capabilities (print support, bluetooth, etc...).

The cost is very reasonable considering the functionality, at $600ish for the 2TB (4x500GB) model.

DLCinferno
Feb 22, 2003

Happy

So I built an OpenSolaris server and I'm absolutely loving ZFS so far. I was running mdadm on Ubuntu before (have had great success with OS software RAID) and it's been an interesting venture moving to ZFS. Want to share a file system with a Windows machine? Single command: zfs set sharesmb=on pool/media. Done! No Samba configuration; it's all automatic, and the share is mounted and ready after each reboot without any config editing. Couldn't be easier.

I'd be happy to go into more details on the setup if anyone is interested (including some challenges getting rtorrent working on Solaris), but what I'm really interested in is some feedback on disk spindown.

Basically, what I'd like to know is what people think about MTBF and its relationship to disk spin up/down. Right now I've got the 4 drives in my array set to spin down after 4 hours of inactivity, which reduces the power consumption from 82 watts to 54 watts. However, I've also been reading that each spinup event is roughly equivalent to 10 hours of activity on the disk. The electricity gains are negligible (about $22 a year if all disks were always spun down), so I guess I'm asking if it is harder on the disk to leave them spinning 24x7 or have them spin up at least once a day for at least 4 hours (I usually turn on music/video when I get home from work).
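For reference, the arithmetic behind that ~$22/year (the $0.09/kWh rate is my assumption; plug in your own):

```shell
# Yearly cost of the 28W difference between spun-up (82W) and spun-down (54W).
awk 'BEGIN {
  watts = 82 - 54
  kwh   = watts * 8760 / 1000      # kWh over a year of 24x7
  printf "%.0f kWh/yr = $%.2f/yr at $0.09/kWh\n", kwh, kwh * 0.09
}'
```

That prints "245 kWh/yr = $22.08/yr at $0.09/kWh", so yeah, the savings are basically noise.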

I use the server as a torrent box so it is definitely 24x7, but the torrents are served from the system drive so the array isn't used unless I'm hitting the music/video shares. Should I set the spindown time to something much longer like 24 hours? What I'd like is to maximize the disk lifetimes. I've got a massive Chieftec case with 3 92mm side fans pointed directly onto the drives so the temps are pretty good no matter how hard they are working.

Thoughts?

w_hat
Jul 8, 2003


I'd love to hear about the rtorrent troubles. I eventually got an older version working but my scripts never worked correctly so I'm running XP on virtualbox for utorrent.

eames
May 9, 2009



w_hat posted:

I'd love to hear about the rtorrent troubles. I eventually got an older version working but my scripts never worked correctly so I'm running XP on virtualbox for utorrent.

+1

I too ended up using these packages based on 0.8.2, but they are over a year old (0.8.5 is stable).

movax
Aug 30, 2008



DLCinferno posted:

Basically, what I'd like to know is what people think about MTBF and its relationship to disk spin up/down. Right now I've got the 4 drives in my array set to spin down after 4 hours of inactivity, which reduces the power consumption from 82 watts to 54 watts. However, I've also been reading that each spinup event is roughly equivalent to 10 hours of activity on the disk. The electricity gains are negligible (about $22 a year if all disks were always spun down), so I guess I'm asking if it is harder on the disk to leave them spinning 24x7 or have them spin up at least once a day for at least 4 hours (I usually turn on music/video when I get home from work).

How do you have your drives set to spin down? I'm running OpenSolaris Nevada (the new/beta branch thing), and I haven't figured that out yet...

Farmer Crack-Ass
Jan 2, 2001

~this is me posting irl~


Hey guys, I'm looking at snagging three of these EX-H34 hotswap cages from Lian-Li. (They're going into a P80.)

I managed to hunt down photos of the back of the cage's three-drive sibling and it looks like Lian-Li has the whole cage powered by one molex connector. I can't find photos of the back of the four-drive cage I want, but I imagine it could be the same.


My question: can one molex connector safely power four hard drives, plus a 120mm fan?
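For what it's worth, a rough budget check says probably yes. Every number below is a ballpark assumption, not from a datasheet: molex pins are often quoted at around 11 A each, a 3.5" drive surges roughly 2 A on the 12 V rail at spin-up, and a 120 mm fan draws maybe a quarter amp:

```python
# Rough molex budget check for 4 drives + one 120 mm fan.
# All figures are ballpark assumptions -- check your drive's datasheet:
#   - molex pin rating: ~11 A per pin (one 12 V pin, one 5 V pin)
#   - 3.5" drive spin-up surge: ~2.0 A @ 12 V
#   - 3.5" drive running:       ~0.45 A @ 12 V, ~0.7 A @ 5 V
#   - 120 mm fan:               ~0.25 A @ 12 V

PIN_RATING_A = 11.0
N_DRIVES = 4

surge_12v = N_DRIVES * 2.0 + 0.25    # worst case: all 4 spin up at once
running_12v = N_DRIVES * 0.45 + 0.25
running_5v = N_DRIVES * 0.7

print(round(surge_12v, 2))    # 8.25 A on the 12 V pin, under ~11 A
print(round(running_12v, 2))  # 2.05 A running
print(round(running_5v, 2))   # 2.8 A on the 5 V pin

assert surge_12v < PIN_RATING_A
```

The worst case is all four drives spinning up simultaneously, and even that stays under the pin rating with some margin; just make sure the cage doesn't use staggered spin-up figures that assume otherwise.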

DLCinferno
Feb 22, 2003

Happy

moep posted:

+1

I too ended up using these packages based on 0.8.2 but they are over a year old (0.8.5 is stable).

Yeah, I used those first, but wanted the new version. I found this great link that documented the process but it was also for 0.8.2, but I cribbed some of it for 0.8.5, especially the use of gmake.

http://www.jcmartin.org/compiling-r...on-opensolaris/

I also found this bug ticket filed for libtorrent:

http://libtorrent.rakshasa.no/ticket/1003

it had a link to two patches, one for libtorrent:

http://xnet.fi/software/memory_chunk.patch

...and the other for rtorrent:

http://xnet.fi/software/scgi.patch

The patch made it so that I could compile the newest version of libtorrent, but rtorrent still wouldn't compile. I think I may have screwed up some system setting (not really sure), but at that point I called one of my friends who has a lot more experience, and he was finally able to get it to compile. I'm not sure exactly what he did, but I'll see if he still remembers.

There's also some discussion about the problem on the Solaris mailing list here:

http://mail-index4.netbsd.org/pkgsr.../msg031938.html

...it looks like they fixed it for 0.8.2 so maybe once the update makes it into release then rtorrent will be able to compile normally.


movax posted:

How do you have your drives set to spin down? I'm running OpenSolaris Nevada (the new/beta branch thing), and I haven't figured that out yet...

I added the following lines to /etc/power.conf and then ran /usr/sbin/pmconfig to reload the new config.

It's probably obvious: the four array drives spin down after 120 minutes of inactivity, and the system itself stays always on.
code:
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@1,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@2,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@3,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@4,0    120m
system-threshold        always-on
The one downside is that ZFS spins the drives back up sequentially when the array comes back online, and it takes about 9 seconds per drive, so when you hit your array you have to wait for a while. I did run across a script that spins them up concurrently, but I can't find it right now.
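In case anyone wants to roll their own in the meantime, the concurrent version is simple enough to sketch in Python. The device paths below are made up; the idea is just that one read against each raw device in parallel makes all the disks spin up together instead of one after another:

```python
# Sketch: spin up several disks concurrently by reading one block
# from each raw device in parallel. Device paths are examples only;
# substitute your own (e.g. from /dev/rdsk on Solaris).
import threading

def read_one_block(path, block_size=512):
    """Read a single block; on a spun-down disk this forces a spinup."""
    try:
        with open(path, "rb") as f:
            f.read(block_size)
    except OSError as e:
        print(f"{path}: {e}")

def spin_up_all(paths):
    """Kick off one read per device at the same time, then wait."""
    threads = [threading.Thread(target=read_one_block, args=(p,))
               for p in paths]
    for t in threads:
        t.start()
    for t in threads:  # wait until every disk has responded
        t.join()

if __name__ == "__main__":
    # hypothetical device paths -- replace with your own
    spin_up_all(["/dev/rdsk/c8t1d0s0", "/dev/rdsk/c8t2d0s0",
                 "/dev/rdsk/c8t3d0s0", "/dev/rdsk/c8t4d0s0"])
```

You'd fire this off right before hitting the array (or from a wrapper script), so the sequential 9-second waits overlap instead of adding up.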

eames
May 9, 2009



DLCinferno posted:

Yeah, I used those first, but wanted the new version. I found this great link that documented the process but it was also for 0.8.2, but I cribbed some of it for 0.8.5, especially the use of gmake.

http://www.jcmartin.org/compiling-r...on-opensolaris/

I also found this bug ticket filed for libtorrent:

http://libtorrent.rakshasa.no/ticket/1003

it had a link to two patches, one for libtorrent:

http://xnet.fi/software/memory_chunk.patch

...and the other for rtorrent:

http://xnet.fi/software/scgi.patch

The patch made it so that I could compile the newest version of libtorrent, but rtorrent still wouldn't compile. I think I may have screwed up some system setting (not really sure), but at that point I called one of my friends who has a lot more experience, and he was finally able to get it to compile. I'm not sure exactly what he did, but I'll see if he still remembers.

There's also some discussion about the problem on the Solaris mailing list here:

http://mail-index4.netbsd.org/pkgsr.../msg031938.html

...it looks like they fixed it for 0.8.2 so maybe once the update makes it into release then rtorrent will be able to compile normally.

Thanks for that. I managed to get libtorrent-0.12.5 compiled, but like you I'm stuck on rtorrent-0.8.5. I'll just wait for the packages to get updated someday; until then I'll settle for 0.8.2.

(I think the reason for the compiler errors is that the required patches listed in the guide (rtorrent-01-solaris.diff, rtorrent-02-event-ports.diff, rtorrent-03-curl-event.diff, rtorrent-04-sunpro.diff and rtorrent-05-sunpro-crash.diff) are for 0.8.2 only and incompatible with 0.8.5.)

That is really the only thing I dislike about OpenSolaris: up-to-date packages are sparse, and it takes hours of patching and compiling to get basic stuff working that would be a single command on a Linux distribution.


DLCinferno
Feb 22, 2003

Happy

moep posted:

Thanks for that. I managed to get libtorrent-0.12.5 compiled but like you I’m stuck on rtorrent-0.8.5. I’ll just wait for the packages to get updated someday, until then I’ll have to settle with 0.8.2.

(I think the reason for the compiler errors is that the required patches listed in the guide (rtorrent-01-solaris.diff, rtorrent-02-event-ports.diff, rtorrent-03-curl-event.diff, rtorrent-04-sunpro.diff and rtorrent-05-sunpro-crash.diff) are for 0.8.2 only and incompatible with 0.8.5.)

That is really the only thing I dislike about OpenSolaris: up-to-date packages are sparse, and it takes hours of patching and compiling to get basic stuff working that would be a single command on a Linux distribution.

I haven't been able to get ahold of my friend, but I should have been more clear...I didn't install the patches from the guide, although I did use the newest versions of the packages.
