rage-saq
Mar 21, 2001

That's so ninja...

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

If you want file presentation instead of block presentation from your unit, the new HP Extreme Data Storage 9100 is pretty badass. Really low cost per GB, very fast NFS and CIFS, all managed through one console. It's also extremely dense thanks to their fancy new disk shelves, which hold 82 LFF disks in a 5U shelf. DreamWorks just purchased a few petabytes of it.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

You could do a Coraid EtherDrive setup... 5x 24TB shelves would work... and come out to way less than $100k, and it supposedly has unlimited expandability.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?
Sun's Amber Road system looks pretty nifty:

http://www.sun.com/storage/disk_systems/unified_storage/

You can probably save some money if you go with their plain disk arrays instead. At work I recently put together a J4400 array, which is 24TB raw (24x 1TB); after setting up ZFS with RAIDZ2 + 2 hot spares, it comes to about 19TB usable. You can daisy-chain up to 8 J4400's (192 disks) together, so you can expand to roughly 150TB. One J4400 was about $20k after two host cards, dual SAS HBA's, and gold support, plus you will need a server to hook it up to. I would find a decent box and load it up with a boatload of memory for the ZFS ARC cache.

http://www.sun.com/storage/disk_systems/expansion/4400/
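
For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope (just a sketch; the tray and disk counts come from the post above, and real usable space lands a bit below nominal once ZFS metadata and TB-vs-TiB accounting kick in, which is where the ~19TB per tray comes from):

code:
# Nominal usable capacity for daisy-chained J4400 trays, assuming each
# tray is configured as one RAIDZ2 vdev plus two hot spares (per the
# post above).
DISK_TB = 1            # 1TB SATA disks
DISKS_PER_TRAY = 24
PARITY = 2             # RAIDZ2 spends two disks per vdev on parity
SPARES = 2             # hot spares per tray

def usable_per_tray():
    return (DISKS_PER_TRAY - SPARES - PARITY) * DISK_TB

for trays in (1, 8):
    print(f"{trays} tray(s): {trays * DISKS_PER_TRAY} disks raw, "
          f"~{trays * usable_per_tray()}TB nominal usable")
# 1 tray(s): 24 disks raw, ~20TB nominal usable
# 8 tray(s): 192 disks raw, ~160TB nominal usable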

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America
Fun Shoe
Hmm, interesting stuff, thanks for the comments. This was more of a hypothetical. Another group is on the cusp of picking up some EqualLogic gear (their big 48-drive nodes) for this purpose, but it seemed like overkill for the task. Figured I'd lend a hand and see whether there were alternatives, short of build-it-yourself solutions, that could save the company some cash. I'll shop these around.

What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS

bmoyles posted:

What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.
It's kinda weird, right? On paper it sounds like an awesome product that should have taken off... but it didn't... yet you don't see widespread reports of problems either. I mean, the industry is all wet in the pants about FCoE, while AoE is more or less the same thing and has been available for years.

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America
Fun Shoe
Yeah, I haven't been able to find any good data on them at all. The price is pretty awesome, and the concept makes sense, but I'm not going to take the plunge if they won't let me play with the boxes first...

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bluecobra posted:

Sun's Amber Road system looks pretty nifty:

http://www.sun.com/storage/disk_systems/unified_storage/

You can probably save some money if you go with their plain disk arrays instead. At work I recently put together a J4400 array, which is 24TB raw (24x 1TB); after setting up ZFS with RAIDZ2 + 2 hot spares, it comes to about 19TB usable. You can daisy-chain up to 8 J4400's (192 disks) together, so you can expand to roughly 150TB. One J4400 was about $20k after two host cards, dual SAS HBA's, and gold support, plus you will need a server to hook it up to. I would find a decent box and load it up with a boatload of memory for the ZFS ARC cache.

http://www.sun.com/storage/disk_systems/expansion/4400/
You can also use an X4600 as an interface to a bunch of Thumpers (up to 6, at 48x 1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native in-kernel ZFS CIFS sharing.

brent78
Jun 23, 2004

I killed your cat, you druggie bitch.

bmoyles posted:

What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.

I have four Coraid SR1521's, each populated with 15x 500GB SATA. I was excited when I got them a couple years ago: basically 15 hot-swap drive bays, an N+2 power supply configuration, and 2x 1GbE network connectivity, all for $5k each (not including the drives, which were another $2k). I started off doing some Xen virtualization, since AoE wasn't (and still isn't) supported by VMware. I enabled jumbo frames and configured the drives as a single RAID-10 with one hot spare. I was never able to achieve anywhere near the published numbers. With a moderate amount of disk I/O the shelf would start to lag badly. The fact that it has zero cache really hurts the performance. If they would slap in 2GB of cache and decent management and alerting tools, it would be killer. I ended up buying an EqualLogic shelf and consolidated the VMs from all four SR1521's onto it, and still have IOPS to spare. Keep in mind, though, that the EQ box has 16x 300GB 15k SCSI. I will sell the SR1521's to anyone who wants them for a song. Coraid sells an HBA that lets you use them in VMware now.

complex
Sep 16, 2003

Misogynist posted:

You can also use an X4600 as an interface to a bunch of Thumpers (up to 6, at 48x 1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native in-kernel ZFS CIFS sharing.

You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

complex posted:

You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
I don't have anything handy, but it's something a Sun VAR recommended to us for low-cost cluster storage. I don't actually have any idea how it works.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Misogynist posted:

You can also use an X4600 as an interface to a bunch of Thumpers (up to 6, at 48x 1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native in-kernel ZFS CIFS sharing.
Are you sure you don't mean hooking up a bunch of J4500 arrays to the X4600? The J4500 looks very much like a Thumper:

[J4500 product photo]

Though I don't see why you would need something like an X4600 when you can get an X4440 instead. With the J4500 array, you daisy-chain each expansion tray to the next instead of needing a dedicated external SAS port on the server for each tray. One cool thing is that the SAS HBA's support MPxIO, so you can be connected to both host cards in the tray.

H110Hawk
Dec 28, 2006

bmoyles posted:

What's the scoop on Coraid, btw? I tried looking into them a few years back, but they didn't do demo units for some reason so I passed.

They suck balls, for lack of a more elegant way of phrasing it. Their "device" is just Plan 9 with an AoE stack on it. They're slow, high-latency, and bug-prone. We bought 400-500TB worth of them a few years back, using 750GB disks, and regretted it every second of the way.

xarph
Jun 18, 2001


bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

HDS (the piece that used to be Archivas) makes a widget called HCAP that's designed for long-term data archiving. http://www.hds.com/products/storage-systems/content-archive-platform/index.html If you have any really specific questions about what it can do, I can get them answered.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

EMC Centera

http://www.emc.com/products/detail/hardware/centera.htm

Works with hundreds of apps including Symantec Enterprise Vault, DiskXtender, etc.

Also replicates to a second Centera so you don't have to back it up.

If you go with the Parity model you're looking at 97TB usable in a full rack.

EscapeHere
Jan 16, 2005
Are there any knowledgeable goons here who would care to comment on an HDS AMS 2500 vs. an IBM DS5300? We're looking at around 100TB with a mix of FC/SAS and SATA drives, but we will need to expand that to 200TB+ over the next couple of years. HDS are claiming their new internal SAS architecture is all the rage; IBM are basically saying HDS are full of poo poo and their stuff is way cooler. The IBM kit is about 10-15% more expensive but supposedly faster, or so they tell me. On the other hand, we've been using HDS for many years with no major problems; it has all worked very well and their support has been excellent. While we have plenty of IBM servers, we have never used or bought anything from their storage range.

For this project the "fast" drives (FC or SAS) would be used for VMware for Exchange, AD, etc., while the SATA disks would be used for archiving medical records. Can anyone give any advice or reasons why it might (or might not) be worth spending the extra $s on the IBM?

Maneki Neko
Oct 27, 2000

EscapeHere posted:

Are there any knowledgeable goons here who would care to comment on an HDS AMS 2500 vs. an IBM DS5300? We're looking at around 100TB with a mix of FC/SAS and SATA drives, but we will need to expand that to 200TB+ over the next couple of years. HDS are claiming their new internal SAS architecture is all the rage; IBM are basically saying HDS are full of poo poo and their stuff is way cooler. The IBM kit is about 10-15% more expensive but supposedly faster, or so they tell me. On the other hand, we've been using HDS for many years with no major problems; it has all worked very well and their support has been excellent. While we have plenty of IBM servers, we have never used or bought anything from their storage range.

For this project the "fast" drives (FC or SAS) would be used for VMware for Exchange, AD, etc., while the SATA disks would be used for archiving medical records. Can anyone give any advice or reasons why it might (or might not) be worth spending the extra $s on the IBM?

I've never been terribly happy with the IBM storage I've used in the past (we have an older DS4000 series that I'm in the process of retiring), although I haven't used the DS5300. Performance was OK, and the hardware itself was fairly reliable (with the exception of cache batteries dying every 5 or 6 months, which requires you to pull the controller and disassemble it with a screwdriver), but support and management of the hardware were a huge pain in the rear end.

By and large, though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers on a piece of equipment and IBM having no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that have to be manually tracked and aren't released regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business; dealing with them is terrible.

xarph
Jun 18, 2001


EscapeHere posted:

For this project the "fast" drives (FC or SAS) would be used for VMware for Exchange, AD, etc., while the SATA disks would be used for archiving medical records. Can anyone give any advice or reasons why it might (or might not) be worth spending the extra $s on the IBM?

What you get in performance for that 10-15% bump is going to be erased by the utter pain in the rear end that is managing IBM kit. IBM gear is great if your company is already an IBM dynasty.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Maneki Neko posted:

By and large, though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers on a piece of equipment and IBM having no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that have to be manually tracked and aren't released regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business; dealing with them is terrible.
I'm not that familiar with their SAN gear (we're just now installing our DS4800), but doesn't IBM Director usually do a pretty bang-up job of handling all the firmware bullshit?

BonoMan
Feb 20, 2002

Jade Ear Joe
A good rack-mountable NAS with 4 to 8 TB and under 2 grand?

KoeK
May 15, 2003
We dont die we multiply

BonoMan posted:

A good rack-mountable NAS with 4 to 8 TB and under 2 grand?

If you do not need any performance at all, you could look at the Netgear RNR4410-100EUS; it should run between $1700 and $2500. It is rack-mountable and has 4x 1TB disks. The drawbacks are the absolutely terrible speed, the annoying web interface, and the crappy build quality.

If you want something with a bit more performance, you could look at the HP X1400 NAS (AP787A); the 4TB model costs around 6 grand. It is still not an absolute speed monster, but it is several times faster than the Netgear.

If you can support it yourself, you could build something from Newegg and run FreeNAS, but I have no idea what components to choose.

Mierdaan
Sep 14, 2004

Pillbug

KoeK posted:

If you do not need any performance at all, you could look at the Netgear RNR4410-100EUS; it should run between $1700 and $2500. It is rack-mountable and has 4x 1TB disks. The drawbacks are the absolutely terrible speed, the annoying web interface, and the crappy build quality.

I have one of these, I think, from when they were made by Infrant; mine only has 4x 500GB drives. It is absolutely terrible and I wouldn't recommend it to anyone.

KoeK
May 15, 2003
We dont die we multiply

Mierdaan posted:

I have one of these, I think, from when they were made by Infrant; mine only has 4x 500GB drives. It is absolutely terrible and I wouldn't recommend it to anyone.

I have a client who didn't want to listen to my advice and took the 2TB ReadyNAS. And yes, it sucks, but what do you expect for 2 grand?

BonoMan
Feb 20, 2002

Jade Ear Joe
Sweet, thanks for the recommends. It doesn't have to be ultra fast, as it will only be used to pull graphic stills, not video.

BonoMan
Feb 20, 2002

Jade Ear Joe
What about QNAP? I haven't heard crappy things about them, and we're looking at this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16822107023

Which seems decent enough and has 8 bays which is nice.

Any thoughts?

optikalus
Apr 17, 2008

BonoMan posted:

What about QNAP? I haven't heard crappy things about them, and we're looking at this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16822107023

Which seems decent enough and has 8 bays which is nice.

Any thoughts?

It looks like a cheaper version of the Adaptec SnapAppliance, but the SnapAppliance has a decent OS and is proven reliable (ours has a few years' uptime). I wouldn't use it for anything but archive, though.

BonoMan
Feb 20, 2002

Jade Ear Joe

optikalus posted:

It looks like a cheaper version of the Adaptec SnapAppliance, but the SnapAppliance has a decent OS and is proven reliable (ours has a few years' uptime). I wouldn't use it for anything but archive, though.

Huh. Well, $2500 is our budget, and I can't find good pricing info on the SnapAppliance anywhere. We're looking at 6TB of storage.

optikalus
Apr 17, 2008

BonoMan posted:

Huh. Well, $2500 is our budget, and I can't find good pricing info on the SnapAppliance anywhere. We're looking at 6TB of storage.

Well, $2100 + tax and shipping doesn't leave you much room for drives, and I'd heavily recommend RAID6 for SATA drives, so 8x 1TB drives (two of the eight go to parity, leaving exactly the 6TB usable you're after).

BonoMan
Feb 20, 2002

Jade Ear Joe

optikalus posted:

Well, $2100 + tax and shipping doesn't leave you much room for drives, and I'd heavily recommend RAID6 for SATA drives, so 8x 1TB drives (two of the eight go to parity, leaving exactly the 6TB usable you're after).

Yeah, there might be a little bit of flux; we'll have to see. What's pricing like on SnapAppliances? I realize that's kind of a vague question, but any ideas?

Thanks for the advice!

optikalus
Apr 17, 2008

BonoMan posted:

Yeah, there might be a little bit of flux; we'll have to see. What's pricing like on SnapAppliances? I realize that's kind of a vague question, but any ideas?

Thanks for the advice!

Looks like Adaptec sold it to Overland, and I can't find any current pricing. I remember them running about $5k for an 8TB box. At that price, you might as well look at Hitachi as well.

BonoMan
Feb 20, 2002

Jade Ear Joe

optikalus posted:

Looks like Adaptec sold it to Overland, and I can't find any current pricing. I remember them running about $5k for an 8TB box. At that price, you might as well look at Hitachi as well.

At that price I'm just gonna stab myself in the eye.

So we have $1500 budgeted for a firewall, but we only need a very simple one (it's a simple simple simple network). So we're thinking maybe we can get a simple firewall for $400-700 and use the rest for drives? Any firewall ideas?

echo465
Jun 3, 2007
I like ice cream

BonoMan posted:

At that price I'm just gonna stab myself in the eye.

So we have $1500 budgeted for a firewall, but we only need a very simple one (it's a simple simple simple network). So we're thinking maybe we can get a simple firewall for $400-700 and use the rest for drives? Any firewall ideas?

Linksys WRT54G family?

Basic PC running a Linux firewall distribution (IPCop, Smoothwall, etc.)? A former employer of mine ran 4 or 5 sites using IPCop and cable modem connections, the largest being maybe 100 office workers.

KoeK
May 15, 2003
We dont die we multiply

BonoMan posted:

At that price I'm just gonna stab myself in the eye.

So we have $1500 budgeted for a firewall, but we only need a very simple one (it's a simple simple simple network). So we're thinking maybe we can get a simple firewall for $400-700 and use the rest for drives? Any firewall ideas?

Depending on what kind of requirements you have, a Juniper SSG-5 (should be between $500 and $600) or a Cisco ASA 5505 are both simple firewalls with comparable features. I'd go for the Juniper, but that is personal preference :)

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

BonoMan posted:

A good rack-mountable NAS with 4 to 8 TB and under 2 grand?
You can do this if you roll your own with a 3U Supermicro case, 1.5TB drives, a decent Intel motherboard/processor, and OpenSolaris so you can use ZFS. Once you get OpenSolaris installed, it is pretty trivial to make a ZFS pool, and you can do something like a RAIDZ2, which is similar to RAID 6 in redundancy. You can then share out the ZFS pool you just created to Windows hosts as a CIFS share.
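
The whole thing really is only a handful of commands once the OS is installed. A minimal sketch, wrapped in Python just so the steps can be commented in one place (the c1t0d0-style device names are placeholders; list yours with the format utility, and the in-kernel CIFS packages, SUNWsmbs and SUNWsmbskr, need to be installed first):

code:
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Eight hypothetical data disks -- substitute your real device names.
disks = [f"c1t{n}d0" for n in range(8)]

# One RAIDZ2 vdev: any two disks can fail without losing the pool.
run(["zpool", "create", "tank", "raidz2", *disks])

# A filesystem for the share, exported via the native kernel CIFS server.
run(["zfs", "create", "tank/share"])
run(["zfs", "set", "sharesmb=on", "tank/share"])

# Make sure the SMB service (and its dependencies) are running.
run(["svcadm", "enable", "-r", "smb/server"])

After that, the share (named tank_share by default) should just show up from Windows.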

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Bluecobra posted:

You can do this if you roll your own with a 3U Supermicro case, 1.5TB drives, a decent Intel motherboard/processor, and OpenSolaris so you can use ZFS. Once you get OpenSolaris installed, it is pretty trivial to make a ZFS pool, and you can do something like a RAIDZ2, which is similar to RAID 6 in redundancy. You can then share out the ZFS pool you just created to Windows hosts as a CIFS share.
Just note that if you take this route and expect AD integration, you had better be very familiar with LDAP and Kerberos (or at least know enough to troubleshoot when the tutorial you're following misses a step), because it's not much more straightforward than it is in Samba. OpenSolaris is an amazing OS for Unix/Linuxy people, but Sun bet the storage farm on the 7000 series' secret sauce, not the OpenSolaris CLI.
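
(For what it's worth, the happy-path domain join itself is a single command; it's everything underneath that bites. A sketch, with the domain and admin user obviously placeholders, and assuming DNS resolves your DCs, the clocks are in sync, and /etc/krb5/krb5.conf names the right realm:)

code:
import subprocess

# Happy-path AD join for the OpenSolaris kernel CIFS server. Every
# prerequisite named above is a separate way for this one command
# to fail, which is where the LDAP/Kerberos spelunking starts.
subprocess.run(["smbadm", "join", "-u", "Administrator", "example.com"],
               check=True)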

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Misogynist posted:

Just note that if you take this route and expect AD integration, you had better be very familiar with LDAP and Kerberos (or at least know enough to troubleshoot when the tutorial you're following misses a step), because it's not much more straightforward than it is in Samba. OpenSolaris is an amazing OS for Unix/Linuxy people, but Sun bet the storage farm on the 7000 series' secret sauce, not the OpenSolaris CLI.
Well, I suppose if you're getting stuck on that, you can easily install Sun's Samba package and use SWAT to create shares instead.

BonoMan
Feb 20, 2002

Jade Ear Joe
What's the general process for replacing the drives in a NAS with newer ones? Say you have an 8-bay NAS and all the bays are taken up, and then 3 years down the line you want to replace the drives. How does that happen without just copying everything over to a duplicate NAS or whatever?

H110Hawk
Dec 28, 2006

BonoMan posted:

What's the general process for replacing the drives in a NAS with newer ones? Say you have an 8-bay NAS and all the bays are taken up, and then 3 years down the line you want to replace the drives. How does that happen without just copying everything over to a duplicate NAS or whatever?

Typically, copy-and-replace is how it's done. With the way disk sizes grow, you can likely do some sideline magic: take half of your new disks, make a quick software array on your current computer, and copy the data over. Yank all of the old disks from the NAS, make a software array from the remaining new disks, and copy the data again. Finally, put all the new disks, including the ones from the temporary array, into the NAS, build it, and copy the data one last time.

Some controllers allow one-at-a-time disk swaps/rebuilds. Once you have rebuilt onto the larger disks, the controller will automatically (or with some button pressing) expand the raw device to the larger size. Once you've done that, you have to grow the overlying filesystem somehow. If it's a "black box" device, you are at the mercy of the device; if it's Linux/Windows, you are at the mercy of the filesystem grow tools (NTFS can grow with tools a la Partition Magic; other common filesystems have similar utilities: http://www.google.com/search?q=ext3+grow+filesystem ). Sometimes OSes don't take kindly to directly-attached raw block devices changing size while booted, so I would suggest mounting your filesystem read-only for the rebuild and grow.

(In reality you just restore from backups, right? :laugh: )
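
Joking aside, here's roughly what the one-at-a-time swap-and-grow path looks like on a plain Linux md array. A sketch only: the device names are placeholders, a hardware controller has its own equivalent knobs, and it assumes the filesystem (ext3/ext4 here) sits directly on the array:

code:
import subprocess, time

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

ARRAY = "/dev/md0"
# One (old, new) pair per bay, handled strictly one at a time --
# pulling a second disk mid-rebuild is how arrays die.
SWAPS = [("/dev/sdb1", "/dev/sdf1")]

for old, new in SWAPS:
    run(["mdadm", ARRAY, "--fail", old, "--remove", old])
    input(f"Physically swap {old} for {new}, then press Enter...")
    run(["mdadm", ARRAY, "--add", new])
    # Block until the rebuild onto the new disk finishes.
    while b"recovery" in open("/proc/mdstat", "rb").read():
        time.sleep(60)

# With every member replaced by a bigger disk, grow the array to the
# new size, then grow the filesystem on top of it (NTFS needs the
# third-party tools mentioned above instead of resize2fs).
run(["mdadm", "--grow", ARRAY, "--size=max"])
run(["resize2fs", ARRAY])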

BonoMan
Feb 20, 2002

Jade Ear Joe

H110Hawk posted:

Typically, copy-and-replace is how it's done. With the way disk sizes grow, you can likely do some sideline magic: take half of your new disks, make a quick software array on your current computer, and copy the data over. Yank all of the old disks from the NAS, make a software array from the remaining new disks, and copy the data again. Finally, put all the new disks, including the ones from the temporary array, into the NAS, build it, and copy the data one last time.

Some controllers allow one-at-a-time disk swaps/rebuilds. Once you have rebuilt onto the larger disks, the controller will automatically (or with some button pressing) expand the raw device to the larger size. Once you've done that, you have to grow the overlying filesystem somehow. If it's a "black box" device, you are at the mercy of the device; if it's Linux/Windows, you are at the mercy of the filesystem grow tools (NTFS can grow with tools a la Partition Magic; other common filesystems have similar utilities: http://www.google.com/search?q=ext3+grow+filesystem ). Sometimes OSes don't take kindly to directly-attached raw block devices changing size while booted, so I would suggest mounting your filesystem read-only for the rebuild and grow.

(In reality you just restore from backups, right? :laugh: )


We do have an LTO2 system layin' around, so I guess we could use that.

lilbean
Oct 2, 2003

Alright, I feel like a cheap bastard for asking this, but here goes. We have an X4540 at work and it's awesome. It came loaded with 250 gigabyte Seagate SATA drives, and I'd like to upgrade one of the vdevs (6 drives) plus a hot spare to 1 terabyte drives. My Sun vendor (unsurprisingly) wants $850 for the Sun-branded Seagate ES.2 1TB drives. I can get the same drives from CDW for about $250. These are Canadian prices, by the way.

I also have Sun J4200 disk arrays here that have been running since January or so, and a few of them have had their 250GB drives replaced with the cheaper Seagate Barracuda 7200.11 drives (after upgrading their firmware, fortunately) and they're working fine. Only one has failed, with block errors, and I haven't had any strange RAID dropouts, caching issues, or other odd problems that could be attributed to non-enterprise firmware.

So the real question is: can I cheap out and use the 7200.11/7200.12 drives in the X4540 without any issues? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support, since we've confirmed that issues not caused by the third-party disks are still covered.

H110Hawk
Dec 28, 2006

lilbean posted:

So the real question is: can I cheap out and use the 7200.11/7200.12 drives in the X4540 without any issues? They're literally half the cost of the ES.2 disks. Also, I'm not worried about support, since we've confirmed that issues not caused by the third-party disks are still covered.

You should be fine. The hardest part of the operation is breaking the Loctite.
