ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread

rage-saq posted:

Just get another LeftHand node; that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform, and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about.

What their marketing doesn't tell you is that to increase capacity/iops you can't "just add another NSM to the management group" when they discontinue the models you purchased a couple months prior and can't source another one anywhere. Of course you can purchase one of the new models but due to the way their network RAID works it'll perform no better than the old modules and won't take advantage of any extra drive space despite being almost twice the price. Their "thanks for your money, now gently caress off" sales approach didn't impress me too much. I'll no doubt get a quote for their new machines for the sake of completeness but they're as far removed from my first choice as they can be.

I did take a look at the Sun 7xxx machines and have a local guy calling me back this afternoon so we'll see what happens there. Also, EMC's AX4 quote was half the price of the NX4 (NFS would be nice but not $20K nice).
NetApp still don't seem too bothered about taking our money, maybe their phones are broken. :haw:

Nukelear v.2
Jun 25, 2004
My optional title text

Insane Clown Pussy posted:

So, calling around trying to find a replacement for our old Lefthand boxes. I need something in the 2-4 TB range for 2-3 ESX hosts running a typical 6-8 server Windows environment (files, exchange, sql, sharepoint etc) I spoke with someone from EMC last week and just got quoted for a dual blade NX4 with 15 x 450GB SAS drives and CIFS/NFS/iSCSI for $40+K.

I have a setup similar to your description running on Dell MD3000i's. Dual controller, 15x450GB SAS 15k RPM drives. Was a bit over $13k.

The Sun 7110s do look fairly nice; I'm not very familiar with their kit. Are they basically PCs, or do they have any sort of dual-controller redundancy?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
AFAIK all of the 7000 series use ZFS, which, if you're as :swoon: over ZFS as I am, is awesome.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nukelear v.2 posted:

I have a setup similar to your description running on Dell MD3000i's. Dual controller, 15x450GB SAS 15k RPM drives. Was a bit over $13k.

The Sun 7110s do look fairly nice; I'm not very familiar with their kit. Are they basically PCs, or do they have any sort of dual-controller redundancy?
You need to move up to the 7300 series for HA with shared storage.

conntrack
Aug 8, 2003

by angerbeet
For "cheap" SATA FC storage we recently got two Hitachi AMS2100 systems. Besides having the wonky HDS style of management software i like the AMS2100 series fine.

We will get a free HDP (thin prov) license next time we buy a new batch of drives so performance should (might) be the same as a really wide stripe when we have the boxes filled with 2TB spindles.

Sun quoted us a higher price for ONE 7000 series system with 24 spindles than what we paid for two 2100 systems with active-active controllers and 30 1TB spindles in each box.

HDS is looking better and better these days. HP can die in a fire, buying anything from them is a hassle and the sales people are all retards.

Serfer
Mar 10, 2003

The piss tape is real



This might not be the right crowd, but any idea how something like a Netapp box has a pool of drives which are connected to the two head units? They can assign which drives go to which in software as well. The only solutions I can come up with involve a single point of failure (eg, using a controller and serving up the drives each as their own LUN to the heads).

complex
Sep 16, 2003

You're asking how it works? Basically there are two FC-AL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head could access it if the other went down. Picture a disk that has two connectors on the back of it, connected to two different controllers. As long as each controller agrees on who is doing the work, everything is fine because they won't step on each other.

Imagine a street of 6 houses. They are numbered 1, 2, 3, 4, 5, and 6. They are served by one mailman and everything is happy.

Now, you could add a second 'label' to the houses. Label them A, B, C, D, E, and F. House 1 could also be called House A, House E could also be called House 5.

To split the work, the postal service adds a second mail carrier: Alice serves houses 1, 2, and 3, while Bob serves houses D, E, and F. If either Alice or Bob is sick (i.e. a storage controller fails), the other simply picks up their half of the mail route.

Check out http://www.docstoc.com/docs/23803079/Netapp-basic-concepts-quickstart-guide, particularly from page 40 and on, for more details (and without ridiculous analogies).

Serfer
Mar 10, 2003

The piss tape is real



complex posted:

You're asking how it works? Basically there are two FC-AL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head could access it if the other went down. Picture a disk that has two connectors on the back of it, connected to two different controllers. As long as each controller agrees on who is doing the work, everything is fine because they won't step on each other.

That's essentially what I figured they were doing. Multipathing, but instead running the second path to the other controller. It seemed to me though, that whatever they were using for that became a new single point of failure.

H110Hawk
Dec 28, 2006

Serfer posted:

This might not be the right crowd, but any idea how something like a Netapp box has a pool of drives which are connected to the two head units? They can assign which drives go to which in software as well. The only solutions I can come up with involve a single point of failure (eg, using a controller and serving up the drives each as their own LUN to the heads).

Via magic, faeries, pixie dust, and most importantly lots of money.

This blog has a really great picture to illustrate the setup:

http://netapp-blog.blogspot.com/2009/08/netapp-activeactive-vs-activepassive.html

The closest thing to a single point of failure is the cluster interconnect used for NVRAM mirroring. However, if the interconnect fails, your cluster continues to serve data in its current, non-fault-tolerant state, but will not transition to a new state. This means that if filer A is currently active for both A and B, it will continue to be so after a cluster link failure. If filer A and filer B are each serving their own data, they will never fail over to each other automatically.

The filers maintain some state information on the disks themselves, in a few reserved blocks for filer A and filer B respectively, so they can make educated guesses about the other filer's state. There are VERY dire warnings and consequences to acting upon a filer when it cannot sense its neighbor.

Never disagree with what a RAID setup thinks about your array without very good reason. (This is just unsolicited advice. It's the most concise way I train people in using storage systems, as it is what every action boils down to on a fileserver.)
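If you want to see the moving parts, the takeover/giveback dance on a 7-mode pair is driven by a handful of cf commands; this is from memory, so double-check against your ONTAP docs before touching anything:
code:
cf status           # is the partner alive, and is takeover enabled?
cf takeover         # planned takeover of the partner's disks and identity
cf giveback         # hand everything back once the partner is healthy again
cf forcetakeover    # the "dire warnings" case: takeover when the partner's
                    # state can't be confirmed -- read the docs twice first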

Serfer
Mar 10, 2003

The piss tape is real



H110Hawk posted:

Via magic, faeries, pixie dust, and most importantly lots of money.
Well, I guess I won't be able to build something similar...

namaste friends
Sep 18, 2004

by Smythe

Serfer posted:

This might not be the right crowd, but any idea how something like a Netapp box has a pool of drives which are connected to the two head units? They can assign which drives go to which in software as well. The only solutions I can come up with involve a single point of failure (eg, using a controller and serving up the drives each as their own LUN to the heads).

If you're asking what I think you're asking, it works like this: with ONTAP 7 and above, the disks contain metadata, written at the RAID level, that assigns each disk to its respective controller. This is called software disk ownership.

Generally, in NetApp clusters described as "active/active", each node owns some portion of the disks. During normal operation, I/O is written to NVRAM and mirrored to the partner node through the cluster interconnect. Once a node's NVRAM is full, the data is then written to the disks that node owns.

In the event of a failover, the node which has "taken over" its partner's disks can continue serving data and writing data to its partner's disks, all thanks to the magic of software disk ownership.

In the old days of hardware disk ownership, each node in the cluster owned the disks plugged into a specific HBA port/loop on the filer. With software disk ownership, it doesn't matter which loop is plugged in where, as ownership depends only on the RAID metadata written to the disk.

I've seen systems where disk ownership is scattered all over the stacks of disk trays.
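For anyone curious what software disk ownership looks like from the console, it's roughly this (7-mode syntax from memory, so verify against your docs):
code:
disk show -v                    # list disks and their current owners
disk show -n                    # list unowned disks
disk assign 0a.16 -o filerA     # hand a specific disk to filerA
options disk.auto_assign off    # stop the filer from grabbing new disks on its own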

namaste friends
Sep 18, 2004

by Smythe

H110Hawk posted:

Via magic, faeries, pixie dust, and most importantly lots of money.

This blog has a really great picture to illustrate the setup:

http://netapp-blog.blogspot.com/2009/08/netapp-activeactive-vs-activepassive.html

The closest thing to a single point of failure is the cluster interconnect used for NVRAM mirroring. However, if the interconnect fails, your cluster continues to serve data in its current, non-fault-tolerant state, but will not transition to a new state. This means that if filer A is currently active for both A and B, it will continue to be so after a cluster link failure. If filer A and filer B are each serving their own data, they will never fail over to each other automatically.

The filers maintain some state information on the disks themselves, in a few reserved blocks for filer A and filer B respectively, so they can make educated guesses about the other filer's state. There are VERY dire warnings and consequences to acting upon a filer when it cannot sense its neighbor.

Never disagree with what a RAID setup thinks about your array without very good reason. (This is just unsolicited advice. It's the most concise way I train people in using storage systems, as it is what every action boils down to on a fileserver.)

Just wanted to add that all of NetApp's current product line, except for the 6000 series, has the cluster interconnect on a circuit-board backplane.

Serfer
Mar 10, 2003

The piss tape is real



Cultural Imperial posted:

If you're asking what I think you're asking, it works like this: with ONTAP 7 and above, the disks contain metadata, written at the RAID level, that assigns each disk to its respective controller. This is called software disk ownership.
Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.

lilbean
Oct 2, 2003

Serfer posted:

Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.
You may be better off looking at just building a couple of systems and using something like DRBD to mirror a slab of disks.
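If it helps, a DRBD resource for that is only a few lines; the hostnames, devices and addresses below are made up, and you'd still want heartbeat/pacemaker plus an iSCSI or NFS target on whichever node is primary:
code:
resource r0 {
    protocol C;                   # synchronous replication
    on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;      # the local slab of disks (an md or LVM device works too)
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}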

namaste friends
Sep 18, 2004

by Smythe

Serfer posted:

Well, I was trying to figure out how they had drives connected to both systems really. I wanted to build something similar and use something like Nexenta, but it's looking more like these things are specialized and not available to people who want to roll their own.

Hmm sorry I can't help you there.

Serfer
Mar 10, 2003

The piss tape is real



lilbean posted:

You may be better off looking at just building a couple of systems and using something like DRBD to mirror a slab of disks.
Yeah, that's pretty much where it's going. 100% overhead. But still cheaper that way than buying Netapp or EMC. Which reminds me, despite calling a half dozen times, and having a conference with Sun, they never sent me a quote despite promising that it would get there in x<7 days every time I called.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Serfer posted:

Yeah, that's pretty much where it's going. 100% overhead. But still cheaper that way than buying Netapp or EMC. Which reminds me, despite calling a half dozen times, and having a conference with Sun, they never sent me a quote despite promising that it would get there in x<7 days every time I called.

I can get a quote from our Sun vendor in a few hours. You've gotta find yourself a local vendor. They're always cheaper than anything we can get directly from Sun or CDWG, although we save money being an EDU.

Serfer
Mar 10, 2003

The piss tape is real



FISHMANPET posted:

I can get a quote from our Sun vendor in a few hours. You've gotta find yourself a local vendor. They're always cheaper than anything we can get directly from Sun or CDWG, although we save money being an EDU.
Well, it's probably too late now. We've already purchased one Netapp for a test setup, and I've got it all running, but we still have a little time before it needs to be rolled out to every other location. So I'll see I guess.

namaste friends
Sep 18, 2004

by Smythe

Serfer posted:

Yeah, that's pretty much where it's going. 100% overhead. But still cheaper that way than buying Netapp or EMC. Which reminds me, despite calling a half dozen times, and having a conference with Sun, they never sent me a quote despite promising that it would get there in x<7 days every time I called.

I would imagine Sun staff are busy trying to find new jobs right now.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

rage-saq posted:

Just get another LeftHand node; that way you can use your existing LeftHand stuff for secondary storage / backups / etc. It's a good platform, and if your infrastructure is right it will perform more than adequately for the kind of environment you are talking about.

This is what I would probably do. The new G2's just came out, and we got some really impressive pricing on them.

frunksock
Feb 21, 2002

Not sure if this is the right thread, but I want to talk about Linux md for a bit. I've worked with ZFS and VxVM and with high-end enterprise arrays (DMX, etc.), but not much with Linux md on cheap 1U/2Us with SATA disks.

My understanding is that it's commonly recommended to disable the write-back cache on SATA disks to protect from corruption and/or data loss in the event of a power failure or crash, when using software RAID without a battery-backed RAID controller. I understand that this risk exists even on a single, non-RAID drive, but that it's multiplied in software RAID configurations, especially so for RAID5/RAID6. Here are some things I am not 100% clear on:

RAID1 / RAID10: Does doing RAID1 or RAID10 pose any increased risk of data corruption or data loss due to power failure and pending writes (writes ACKed and cached by the disks, but uncommitted)? If so, how does this work?

Barriers: Does using ext3 or XFS barriers afford the same amount of protection from this situation as disabling the write cache entirely (again, say, for RAID10)? I also understand that barriers do not work with Linux md RAID5/6 .. what about RAID10?

Disabling the disks' write cache: I know how to do this using hdparm, but I also know that it is not a persistent change. If the machine reboots, the disks will come back up with the write-cache re-enabled. Worse, if there's a disk reset, they will come back up with the write-cache re-enabled (making the idea of doing it with a startup script inadequate). In RHEL4, there used to be an /etc/sysconfig/harddisks, but this no longer exists in RHEL5. What is the current method of persistently disabling the write cache on SATA disks? Is there a kernel option?
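The closest thing I've come up with so far for the last one is a udev rule that re-runs hdparm whenever a sd* device is (re)added, which should cover reboots and hot-(re)plugs; I'm not certain it catches a bare link reset, so treat this as a sketch:
code:
# one-off, not persistent:
hdparm -W 0 /dev/sda

# e.g. /etc/udev/rules.d/60-disable-write-cache.rules (name/path just an example):
# re-applies the setting every time the kernel (re)adds an sd* block device
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/sbin/hdparm -W 0 /dev/%k"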

lilbean
Oct 2, 2003

I would just mitigate most of the risk by using a UPS and - if you can afford it - systems with redundant power. I mean a motherboard can still blow and take down the whole system immediately, but most drives follow the flush and sync commands enough to not worry that much.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
another loving random filesystem-goes-read-only-because-you-touched-your-san waste of an afternoon.

I have four identically configured servers, each with a dualport hba, each port going to a separate fc-switch, each fc-switch linking to a separate controller on the storage array. All four configured identically with their own individual 100GB LUN using dm-multipath/centos5.3.

I created two new LUNs and assigned them to two other and completely unrelated hosts to the four mentioned above.

*ONE* of the four blades immediately detects a path failure, then it recovers, then detects a path failure on the other link, then it recovers, then detects a failure on the first path again, and says it recovers, but somewhere in here ext3 flips its poo poo and remounts the filesystem readonly.

Now, if I try to remount it, it says it can't because the block device is write protected. However, multipath -ll says it's [rw].
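Here's roughly what I'm staring at to work out whether ext3 just aborted its journal during the path flap or the dm device itself got flagged read-only (the LUN alias is made up):
code:
dmesg | grep -iE 'ext3|journal|path'   # look for the journal abort around the path flap
multipath -ll                          # does the features line include queue_if_no_path?
blockdev --getro /dev/mapper/mylun     # 1 = the block device itself is flagged read-only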

frunksock
Feb 21, 2002

lilbean posted:

I would just mitigate most of the risk by using a UPS and - if you can afford it - systems with redundant power. I mean a motherboard can still blow and take down the whole system immediately, but most drives follow the flush and sync commands enough to not worry that much.

The colo these servers are in has issues often enough that that's not enough for me. I'd also want to understand what's what even if I had bulletproof systems and datacenters.

Halo_4am
Sep 25, 2003

Code Zombie
Did a search and saw people knocking Fujitsu drives, but nothing on their hardware. Anybody have any experience with the Eternus DX60/DX80 lines? The spec sheet makes these units look pretty awesome for the range... reviews that don't read like PR campaigns are hard to come by, though.

optikalus
Apr 17, 2008

StabbinHobo posted:

another loving random filesystem-goes-read-only-because-you-touched-your-san waste of an afternoon.

I have four identically configured servers, each with a dualport hba, each port going to a separate fc-switch, each fc-switch linking to a separate controller on the storage array. All four configured identically with their own individual 100GB LUN using dm-multipath/centos5.3.

I created two new LUNs and assigned them to two other and completely unrelated hosts to the four mentioned above.

*ONE* of the four blades immediately detects a path failure, then it recovers, then detects a path failure on the other link, then it recovers, then detects a failure on the first path again, and says it recovers, but somewhere in here ext3 flips its poo poo and remounts the filesystem readonly.

Now, if I try to remount it, it says it can't because the block device is write protected. However, multipath -ll says it's [rw].

I had this *exact* same problem when my colocation provider plugged both power supplies of one of my servers into the same (overloaded) PDU. When the PDU finally tripped, ext3 lost its mind and everything went readonly. However, it also incorrectly claimed that it was r/w, but could not remount rw. I had to reboot single user, then run fsck against the partition (2TB!), and finally was able to mount it again.
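For reference, the recovery boiled down to roughly this (device name is made up, and I had to do it from single user):
code:
umount /dev/mapper/mylun            # or boot single-user if it refuses
fsck.ext3 -f /dev/mapper/mylun      # replay/repair the journal; can take a while on 2TB
mount /dev/mapper/mylun /mnt/data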

brent78
Jun 23, 2004

I killed your cat, you druggie bitch.
My iSCSI / multipath notes for CentOS 5.4, using an EqualLogic PS5000XV and 6000XV. Hopefully they will be of some help to someone.

Configure two NICs on the iSCSI network (for me that was eth2 and eth3), then run a discovery
code:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface1 --op=new
grep -i hwaddr /etc/sysconfig/network-scripts/ifcfg-eth2
grep -i hwaddr /etc/sysconfig/network-scripts/ifcfg-eth3
iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:15:17:29:C5:8E
iscsiadm -m iface -I iface1 --op=update -n iface.hwaddress -v 00:15:17:29:C5:8F
iscsiadm -m discovery -t st -p 172.16.1.240 --interface=iface0 --interface=iface1
After discovery, let's log in.
code:
iscsiadm --mode node --targetname iqn.2001-05.com.equallogic:0-8a0906-c014ea803-495000004df498e6-lun01 --login
/etc/multipath.conf - get the WWIDs by running 'multipath -ll'
code:
defaults {
        user_friendly_names yes
}

multipaths {

        multipath {
                wwid                    36090a03880ead4837699a44f00009042
                alias                   eqlogic-lun0
        }

        multipath {
                wwid                    36090a03880ea14c0e698f44d00005049
                alias                   eqlogic-lun1
        }

}

devices {
        device {
                vendor "EQLOGIC"
                product "100E-00"
                path_grouping_policy multibus
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                features "1 queue_if_no_path"
                path_checker readsector0
                failback immediate
                path_selector "round-robin 0"
                rr_min_io 128
                rr_weight priorities
        }
}
/etc/sysctl.conf
code:
net.core.rmem_default = 65536
net.core.rmem_max = 2097152
net.core.wmem_default = 65536
net.core.wmem_max = 262144
net.ipv4.tcp_mem = 98304 131072 196608
net.ipv4.tcp_window_scaling = 1
Restart multipath...
code:
/etc/init.d/multipathd restart
multipath -v2
multipath -ll

eqlogic-lun0 (36090a03880ea94e8e698244e00009091) dm-6 EQLOGIC,100E-00
[size=185G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 12:0:0:0 sdg 8:96  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 11:0:0:0 sdh 8:112 [active][ready]
eqlogic-lun1 (36090a03880ea14c0e698f44d00005049) dm-3 EQLOGIC,100E-00
[size=180G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 9:0:0:0  sde 8:64  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 10:0:0:0 sdf 8:80  [active][ready]
etc....
I may have missed something as I grabbed most of this from my .bash_history. This should get anyone 95% of the way there. I've had no problems achieving 200 MB/s+ with this setup (multiple target LUNs)

EoRaptor
Sep 13, 2003



Okay, so take two of this:

So, I really got my budget today for a new NAS/SAN. I have $30k Cdn to spend on a device that needs to do the following (in no particular order):

Minimum 2TB
NFS, maybe iSCSI
Data De-duplication, and support for backing up deduped data (NDMP?).
Snapshots
Multi-path I/O (nice to have, but not critical)
Expandability, both additional I/O and additional disks.
AD/LDAP integration for user permissions

Planned usage for the device is to host, via NFS or iSCSI, several virtual machines running on ESXi and about 1.5TB of data via NFS for our design department (OS X support).

Future plans include expansion to hold video editing data with dedicated connectivity to video editors (additional network runs) and expanded storage to accommodate this. Additional virtual machines are also possible.

At the current time, there are no plans to put any major database on the SAN, though a few virtual machines might have small databases (SQL express and the like), but they aren't heavily loaded. The largest virtual machine is an Exchange server with about 50 users (SBS 2003)

I'm completely supplier-neutral; however, included in the price will need to be any installation costs (running additional power, a new rack for it at least, I think) and at least one proper network router (core switch), because all I have right now are some Dell 2x24s. I'm in Toronto, so vendor/reseller recommendations are also accepted.

If you think I missed anything in the feature set that is a must-have or would benefit me, please mention it. This will be my first SAN purchase, so I have lots to learn.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

EoRaptor posted:

Okay, so take two of this:

So, I really got my budget today for a new NAS/SAN. I have $30k Cdn to spend on a device that needs to do the following (in no particular order):

Minimum 2TB
NFS, maybe iSCSI
Data De-duplication, and support for backing up deduped data (NDMP?).
Snapshots
Multi-path I/O (nice to have, but not critical)
Expandability, both additional I/O and additional disks.
AD/LDAP integration for user permissions

Planned usage for the device is to host, via NFS or iSCSI, several virtual machines running on ESXi and about 1.5TB of data via NFS for our design department (OS X support).

Future plans include expansion to hold video editing data with dedicated connectivity to video editors (additional network runs) and expanded storage to accommodate this. Additional virtual machines are also possible.

At the current time, there are no plans to put any major database on the SAN, though a few virtual machines might have small databases (SQL express and the like), but they aren't heavily loaded. The largest virtual machine is an Exchange server with about 50 users (SBS 2003)

I'm completely supplier-neutral; however, included in the price will need to be any installation costs (running additional power, a new rack for it at least, I think) and at least one proper network router (core switch), because all I have right now are some Dell 2x24s. I'm in Toronto, so vendor/reseller recommendations are also accepted.

If you think I missed anything in the feature set that is a must-have or would benefit me, please mention it. This will be my first SAN purchase, so I have lots to learn.

poo poo, I think that budget might get you pretty close to a low-end Thumper, and (not that I know anything about enterprise storage) I think it would do most of what you need.

Bonus points for getting 24T for your budget.

Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

FISHMANPET posted:

poo poo, I think that budget might get you pretty close to a low-end Thumper, and (not that I know anything about enterprise storage) I think it would do most of what you need.

Bonus points for getting 24T for your budget.

Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
You can just do what I did and buy a J4400 array. It is 24TB raw and you just have to do the ZFS yourself. I basically have a poor man's open storage system. You can daisy chain several more arrays to it for expandability. It cost us about $20K for the array with two external SAS cards and gold support. The J4200 should be half the cost since it just has 12 drive bays.
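The "do the ZFS yourself" part is pretty painless. Roughly what it looks like on the host (device names are made up, and the raidz2 layout is just one sensible option):
code:
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
    spare  c1t12d0
zfs create tank/vmware
zfs set sharenfs=on tank/vmware     # export it over NFS for the ESX hosts
zpool status tank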

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Fake Edit: Oh poo poo, looks like they got rid of the 500GB model on their site, so the 1TB disk model is $50k US.
That's the retail price, but you could probably get Sun to cut at least 30% off of that on a quote if they like you. But honestly, a Thumper seems like mega overkill for a requirement of only 2 TB. I'd consider an X4275 instead.

EoRaptor
Sep 13, 2003



Misogynist posted:

That's the retail price, but you could probably get Sun to cut at least 30% off of that on a quote if they like you. But honestly, a Thumper seems like mega overkill for a requirement of only 2 TB. I'd consider an X4275 instead.

I have absolutely no problem with more space, as long as the feature set is met within budget, which the sun boxes all seem to do.

Which leads to two questions:

I know zfs does de-duplication, but can this de-duped data be backed up, or am I still working with the full set?

How is the management of the boxes? I'm okay with command line stuff, but other people will need to pick up slack from me if I'm not around, so a management interface that's not horrible is a must.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

EoRaptor posted:

I have absolutely no problem with more space, as long as the feature set is met within budget, which the sun boxes all seem to do.

Which leads to two questions:

I know zfs does de-duplication, but can this de-duped data be backed up, or am I still working with the full set?

How is the management of the boxes? I'm okay with command line stuff, but other people will need to pick up slack from me if I'm not around, so a management interface that's not horrible is a must.
I would assume that a backup program like NetBackup would ignore the ZFS de-duplication and back up all the files. With ZFS you can create a snapshot, and then use the "zfs send" command to send that snapshot to another host (more info here). It looks like they added a dedup option to zfs send, though this is pretty new. In fact, I am pretty sure that you will need to be on the developer build of OpenSolaris to even get ZFS de-duplication, so if this is for your enterprise, I would just err on the side of caution.

For management, you are stuck with the command line for everything. If you want a pretty web interface with good analytics, then check out the Sun Storage 7000 systems. I think I read somewhere that they did add de-duplication.
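The snapshot/send workflow is just a couple of commands; the dataset and host names here are made up:
code:
zfs snapshot tank/design@nightly
zfs send tank/design@nightly | ssh backuphost zfs receive backup/design

# later, only ship the delta between two snapshots:
zfs snapshot tank/design@nightly2
zfs send -i tank/design@nightly tank/design@nightly2 | ssh backuphost zfs receive backup/design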

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Bluecobra posted:

I would assume that a backup program like NetBackup would ignore the ZFS de-duplication and back up all the files. With ZFS you can create a snapshot, and then use the "zfs send" command to send that snapshot to another host (more info here). It looks like they added a dedup option to zfs send, though this is pretty new. In fact, I am pretty sure that you will need to be on the developer build of OpenSolaris to even get ZFS de-duplication, so if this is for your enterprise, I would just err on the side of caution.

For management, you are stuck with the command line for everything. If you want a pretty web interface with good analytics, then check out the Sun Storage 7000 systems. I think I read somewhere that they did add de-duplication.

The 7000 series is just ZFS + Fishworks, but Fishworks does add some cost. I would say give Sun a serious look. The X4275 is a 2U box that holds 12 3.5" drives. Get that, throw Solaris on it with some drives, and go nuts. There should be something of equivalent size in your budget in the 7000 series.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
If HA is not a requirement, a 7100 series from Sun with 2TB raw can be had for around $10k.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

EoRaptor posted:

Minimum 2TB
NFS, maybe iSCSI
Data De-duplication, and support for backing up deduped data (NDMP?).
Snapshots
Multi-path I/O (nice to have, but not critical)
Expandability, both additional I/O and additional disks.
AD/LDAP integration for user permissions
With the deduplication need, I'd look into the NetApp FAS2040 (w/ DS4243 tray if you need >12 spindles), which is the only 2000-series worth looking at (otherwise go to the FAS3140, but that's closer to $70K with a DS4243 tray). FAS3140 does have a lot more CPU horsepower and expansion slots, however.

The EMC AX4-5i is down in your price range, but, AFAIK, (still) doesn't have thin provisioning or deduplication.

As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

EnergizerFellow posted:

As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.

That is true. Solaris 10U8 (the most recent) has ZFS version 15 as the max. My OpenSolaris box running the latest devel build goes up to 22. Dedup is in 21. The latest release version of OpenSolaris comes out in a few days and will have dedup, so you can do that if you're comfortable, but otherwise you won't have dedup.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

EnergizerFellow posted:

As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.
It will be, and it will be an in-place upgrade. The NetApp isn't going to cut it: NFS is a $5k add-on on a 2050, and I can't imagine it's much cheaper on a 2040.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

adorai posted:

It will be, and it will be an in-place upgrade. The NetApp isn't going to cut it: NFS is a $5k add-on on a 2050, and I can't imagine it's much cheaper on a 2040.

Yeah, I just upgraded my Thumper from v10 to v15 (somehow Jumpstart installs ZFS as v10 instead of the latest).
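For anyone else doing that, the upgrade is a one-liner per pool. Note that it's one-way, so don't do it if you might need to roll the OS back to an older release:
code:
zpool upgrade -v        # list the on-disk versions this build supports
zpool get version tank  # what the pool is at now
zpool upgrade tank      # bring it up to the latest supported version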

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys
Price of entry for clustering Sun storage hardware?
