complex
Sep 16, 2003

R-Type posted:

Do you know what would be awesome? A good, clear MS iSCSI Initiator MPIO configuration guide for setting up both HA and path aggregation. I'm embarrassed to admit that I don't understand how to clearly establish MPIO across multiple ports between a W2K8 box and an iSCSI SAN like Openfiler. It seems like MS only wants to use one NIC regardless of how many are installed (in my case, the iSCSI LAN is connected to an Intel 1000 PL PCIe dual-port NIC).

I assume you have installed the MPIO Multipathing Support for iSCSI thinger. http://technet.microsoft.com/en-us/library/cc725907.aspx
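
If it helps, here is a rough sketch of how I'd do it on a W2K8 box (I'm not a Windows guy either, so double-check against that Technet page; if mpclaim.exe isn't present on your build, the MPIO control panel's "Discover Multi-Paths" tab with "Add support for iSCSI devices" checked does the same claim):

pre:
rem install the MPIO feature, then claim every iSCSI-attached device for MPIO (this forces a reboot)
ServerManagerCmd -install Multipath-IO
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

After that, log on to the target once per NIC in the iSCSI Initiator control panel, using the Advanced button to pin each session to a specific source IP/adapter, then set the load balance policy (failover vs. round robin) on the resulting multipath disk.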

complex
Sep 16, 2003

This is for the 50GB backup offer, I presume?

complex
Sep 16, 2003

What kind of performance hit will deduplication incur?

Say I have 26 servers, A through Z. They all have 72GB drives now, but only ~10GB is used on each of them, and assume that ~3GB of that is exactly the same base OS image.

Deduplication will obviously save us a lot of space. I've seen the NetApp demo videos and it sounds awesome. But people are now telling me that performance will suffer. Still others say that all your deduped blocks will probably be sitting in cache or on SSD anyway, so performance actually increases.

I can see both sides of it: if I am just reading the same block all the time (say, a shared object in Linux or a DLL in Windows) and that block is deduped, then I'll be winning. But let's say I modify that block: the storage array then has to split it back out and start keeping a second copy of it, and managing that slows the array down.

Thoughts?

complex
Sep 16, 2003

Hmm. We could not get our x4500 anywhere near wirespeed using iSCSI. Wondering now if the x4540 is that much better...

Anyone see my question on the last page about deduplication?

complex
Sep 16, 2003

Misogynist posted:

What iSCSI target stack were you using, out of curiosity?

Windows Server, not sure what version, 2003? I'm not a Windows guy.

complex
Sep 16, 2003

Doh. Sorry. Target, not initiator.

Solaris 10 U4

complex
Sep 16, 2003

Misogynist posted:

You can also use an x4600 as an interface to a bunch of Thumpers (up to 6 at 48x1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native ZFS kernel CIFS sharing.

You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.

complex
Sep 16, 2003

Anyone well versed with EMC Symmetrix arrays? We have only a single admin, and he isn't very good...

Whenever he presents a LUN (or multiple LUNs to the same machine) it comes along with a 1MB LUN. He says this is required for the EMC and we should just ignore it, but I'm not so sure.

pre:
c2t5006048AD5F04751d0: configured with capacity of 0.94MB
c3t5006048AD5F0475Ed0: configured with capacity of 0.94MB


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c2t5006048AD5F04751d0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006048ad5f04751,0
       2. c3t5006048AD5F0475Ed0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0/ssd@w5006048ad5f0475e,0
There are two here because we're using multiple paths.

What is this 1MB LUN, and do we need it? If not, what can I tell our SAN admin in order to stop this madness?

complex
Sep 16, 2003

A-ha, thanks. They seem to be "Gatekeeper LUNs"; that provided the requisite Google juice. Now to convince the SAN guy that we don't need them.

As for the multipathing, yes, this was before I did stmsboot -e. It is a turn-up of a new box. Also, I trimmed the real disks (4x500GB LUNs) from that earlier output. After enabling multipathing:

pre:
bash-3.00# format
Searching for disks...done

c2t5006048AD5F04751d0: configured with capacity of 0.94MB
c3t5006048AD5F0475Ed0: configured with capacity of 0.94MB


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c2t5006048AD5F04751d0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006048ad5f04751,0
       2. c3t5006048AD5F0475Ed0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0/ssd@w5006048ad5f0475e,0
       3. c4t60060480000190300445533030383833d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383833
       4. c4t60060480000190300445533030383635d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383635
       5. c4t60060480000190300445533030383437d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383437
       6. c4t60060480000190300445533030384131d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030384131
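
For anyone else turning up a box like this, the MPxIO enablement was just the standard stmsboot steps, roughly (the -D fp flag limits it to the FC HBAs; plain stmsboot -e does the same thing globally if -D isn't there on your update):

pre:
# enable MPxIO on the fibre channel ports; stmsboot will prompt for the reboot it needs
stmsboot -D fp -e
# after the reboot, show the mapping from the old per-path device names to the new scsi_vhci names
stmsboot -L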

complex
Sep 16, 2003

I don't know if there is any harm in having them there. It's just annoying to sort through. I guess something bad could happen if we tried to write to one, or format it.

From the bit of reading I just did, it sounds like we don't want to dedicate a gatekeeper device, so it will instead just use one of the normal data LUNs for Symmetrix communication (which we of course will never do).

complex
Sep 16, 2003

Don't know if you've found this in searching already, but: http://powerwindows.wordpress.com/2009/02/21/maximum-lun-partition-disk-volume-size-for-windows-servers/

I don't know if that helps answer your question, though.

What is the target, the device presenting the iSCSI LUN?

complex
Sep 16, 2003

For the x4540 you should use raidz2 groups of 6 disks for optimal performance. This is because the x4540 has 6 SATA controllers. See http://blogs.sun.com/timthomas/entry/recipe_for_a_zfs_raid and/or http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#How_to_Set_Up_ZFS_on_an_x4500_System for details.
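
A sketch of what that layout looks like when you build the pool (device names here are illustrative; the point is one disk from each of the six controllers per raidz2 vdev, with the boot disks left out and a couple of hot spares reserved):

pre:
# 6-disk raidz2 vdevs, each one spanning all six SATA controllers
zpool create tank \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
  raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
  raidz2 c0t3d0 c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0
# ...continue the pattern for the remaining rows of disks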

Comparing the x4540 and the CX3 head-to-head based purely on spindle count is not a good comparison. The CX3 has a bunch of cache that can really help performance.

complex
Sep 16, 2003

You're asking how it works? Basically there are two FC-AL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head can access it if the other goes down. Picture a disk that has two connectors on the back of it, connected to two different controllers. As long as the controllers agree on who is doing the work, everything is fine because they won't step on each other.

Imagine a street of 6 houses. They are numbered 1, 2, 3, 4, 5, and 6. They are served by one mailman and everything is happy.

Now, you could add a second 'label' to the houses. Label them A, B, C, D, E, and F. House 1 could also be called House A, and House E could also be called House 5.

To split the work, the postal service adds a second mail carrier: Alice serves houses 1, 2, and 3, while Bob serves houses D, E, and F. If either Alice or Bob gets sick (i.e. a storage controller fails), the other simply picks up that half of the mail route.

Check out http://www.docstoc.com/docs/23803079/Netapp-basic-concepts-quickstart-guide, particularly from page 40 and on, for more details (and without ridiculous analogies).

complex
Sep 16, 2003

If you have a single VM in a datastore, you may gain from defragmenting your guest. See http://vpivot.com/2010/04/14/windows-guest-defragmentation-take-two/ for some hard data.

However, as you add the I/O of multiple VMs to a datastore, you are effectively turning the I/O stream into random I/O, from the array's point of view. Any gains previously made will be muted.

complex fucked around with this message at 02:53 on Sep 4, 2010

complex
Sep 16, 2003

Anyone have any thoughts on NetApp's new offerings? The FAS6200, and in particular ONTAP 8.0.1. I'm thinking of going to 8 just for the larger aggregates.

complex
Sep 16, 2003

What does your IOPS profile look like, read vs. write? How about just pure total IOPS? NFS, iSCSI, or FC block? At peak load, what does your cache age look like?

We have a FAS3140 with 7 full DS14MK4s and 2 full DS4243s serving block storage to a vSphere 4.1 installation, and I have looked at a lot of performance numbers.

We are running 7.3.3.
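
If you want a quick way to eyeball those numbers on a 7.3 box, sysstat from the CLI is usually enough (a sketch; one-second samples, and the Cache age / Cache hit columns on the right are the ones I watch):

pre:
# total ops, net/disk throughput, cache age and cache hit, sampled every second
filer> sysstat -x 1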

complex
Sep 16, 2003

ferrit posted:

I'll have to check this out - thanks for the link. I believe there is some way that you can actually simulate what a PAM module would do for you if you had it by running some advanced commands, but I can't find it on the NOW site right now.

We've done this: PCS. Because we are dealing with very large files in VMware, Predictive Cache Statistics indicated that an increase in FlexCache/PAM would not significantly increase cache hit ratios for us, and thus would not be worth it. Instead we decided to simply add spindles.

complex
Sep 16, 2003

Anyone have any experience with 3PAR gear? We're looking for alternatives to our NetApp. We use Fibre Channel.

complex
Sep 16, 2003

How do you guys do IOPS sizing when looking at a new array? Is a ballpark figure of "150 IOPS per 15K RPM disk, times the number of disks" enough for a rough number? Of course, that would be the low watermark, and any caching would just be gravy on top.
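
For reference, the back-of-envelope math I've been doing looks roughly like this (the per-disk figures and RAID penalties are rules of thumb, not vendor numbers):

pre:
# raw spindle IOPS, then the front-end IOPS they can sustain once the RAID write penalty is applied
DISKS=24; IOPS_PER_DISK=150        # ~150 for 15K, ~75 for 7.2K SATA
READ_PCT=70; WRITE_PENALTY=2       # RAID 10 penalty; ~4 for RAID 5, ~6 for RAID 6
RAW=$((DISKS * IOPS_PER_DISK))
FRONTEND=$((RAW * 100 / (READ_PCT + (100 - READ_PCT) * WRITE_PENALTY)))
echo "raw: $RAW  front-end at ${READ_PCT}% reads: $FRONTEND"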

complex
Sep 16, 2003

This is for VMware, so I have the burden (luxury?) of just assuming all I/O is random.

complex
Sep 16, 2003

Misogynist posted:

This is where virtualization admins get really lazy, and it really bothers me.

Suppose my array does sub-LUN tiering and can do post-provisioning migration between LUNs?

They say laziness is a desirable trait in sysadmins.

complex
Sep 16, 2003

First, before allowing anything to happen automatically, some systems will let you run in a sort of "recommendation" mode, saying "I think this change will be beneficial".

Also, if your tiering system does not take time-of-day changes (or weekly/monthly, whatever) into account, then of course it won't be able to adapt to cyclic events like you describe.

complex
Sep 16, 2003

Who knows things about fibre channel switches? We have Cisco MDS 9216i and 9506 now, but I'd like to investigate Brocade switches.

What kind of features are indicated by a "director"-level switch, like an MDS? Do I have to step into Brocade's DCX line, or could I get by with a Brocade 300? As far as I know we don't do any ISL Trunking.

complex
Sep 16, 2003

Looking for anyone with experience with HP's P4800 BladeSystem SAN.

complex
Sep 16, 2003

ghostinmyshell posted:

Is that a LeftHand? I, too, have been searching the world for an actual HP engineer for my LeftHand problems for the last few days.

It uses LeftHand's SAN/IQ software platform, yes, but it pairs it with a BladeSystem C7000 chassis and MSA 600 shelves (or drawers, really).

complex
Sep 16, 2003

Looks like the FAS2020 is in HA mode, with 7 drives assigned to the head you are logged into and the other 7 assigned to the other head. This means the disks are in two separate aggregates. The 7 disks in head B are not "redundancy" for the disks in head A.

You could change the ownership of the 7 drives in head B to head A. Then head A would have all the drives; you could stick them in one big RAID-DP group and have 138GB x 11 = ~1.5TB of storage, with one parity, one dparity, and one spare. Head B would be idle until a failover occurred.
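
Mechanically, the reassignment is just software disk ownership commands, something like this (a from-memory sketch with made-up disk names; check the 7-mode docs before doing it on a live pair):

pre:
headB> disk assign 0b.23 -s unowned -f    # release a disk from head B, repeated per disk
headA> disk assign all                    # head A claims everything left unowned
headA> aggr add aggr0 7                   # grow head A's aggregate by the reclaimed disks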

complex
Sep 16, 2003

No, you already have high availability. If a head fails, the other head, the partner, will take over and there will be no interruption in service.

Your current setup is like having two baskets with 6 eggs each, and you use both baskets at the same time. You want to change to a setup where you run one basket with 12 eggs and the other basket is empty.

But in both situations you are running in HA, and if basket 1 failed, basket 2 would be there to pick up all the eggs, with nothing going down.

complex
Sep 16, 2003

madsushi posted:

You will need to leave 3 disks attached to the partner, as it will need its own aggregate/root volume to run, and that requires at least 3 disks.

Good point. I forgot this is a small 2020, and if there are no other disks to host the root volume you can't do what I said and move all the disks to head A.

Best practice is to keep the heads balanced anyway, so you should probably follow adorai's advice.

complex
Sep 16, 2003

http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf has a lot of details. I thought I had a flash drive from EMC World that has data sheets on every product, but I can't find it here in my office.

Is anyone excited about the new 3PAR P10000? The V400 and V800 submodels look awesome, though there are no SAS drives and no 10Gbit out of the gate. Peer Motion looks sweet, though.

complex
Sep 16, 2003

Oh. I see, you want DOCUMENTATION. All of that is behind Powerlink. I'm looking at VNX System Operations, 114 pages of mind-numbing goodness. This is just one of what look to be ~180 horse-choking PDFs.

complex
Sep 16, 2003

Internet Explorer posted:

So does anyone have any feedback on the VNX line yet? While I have been pretty happy with our EqualLogic SANs, I think the new VNX SANs are more where we need to be. Their scalability seems much more granular and I think you can do a lot more tweaking to get the performance that you need. We are specifically looking at the 5300 with unified storage.

http://henriwithani.wordpress.com/ has a good ongoing series reviewing a VNXe 3300. Unisphere is the same as on the bigger VNX.

complex
Sep 16, 2003

We have 2 x VNX 5700 on the way. Super excited.

complex
Sep 16, 2003

Ask your HP rep about the T400 deal available through the end of the year. Some ridiculous pricing is available on a true Tier 1 array.

complex
Sep 16, 2003

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof-of-concept test, so I'm not terribly concerned about ongoing performance right now.

I have FreeNAS, OpenFiler, NexentaStor CE, and a ReadyNAS 2100 in my lab at work. I think if you're learning iSCSI basics it is actually good to try a bunch of different ones. You'll see how each one does volumes and LUNs, mapping, network ACLs, etc.

All of them work great with ESXi 5.
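
Once your target is serving a LUN, wiring it into ESXi 5 is only a couple of esxcli calls (the adapter name and portal IP below are made up; check yours with esxcli iscsi adapter list):

pre:
# enable the software iSCSI initiator, add the target's discovery portal, and rescan
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba33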

complex
Sep 16, 2003

There is a new version of TR-3749 available at http://www.netapp.com/us/library/technical-reports/tr-3749.html

complex
Sep 16, 2003

Sorry, in the enterprise world that is pretty cheap.

I've seen a CS240 with 24TB raw, 32TB usable for under $40K. Now that is serious GB/dollar in addition to IOPS/dollar.

complex
Sep 16, 2003

NippleFloss posted:

The CS240 is 24TB raw and 16TB usable. I believe you're thinking of the CS260, though you have the raw and usable numbers flipped. Those are reasonable prices, but they aren't knocking it out of the park on the GB/dollar axis.

No, I meant CS240. Instead of usable I should have said "effective". http://www.nimblestorage.com/products/nimble-cs-series-family/

I have a CS240 and the compression on VMDKs exceeds the estimated 50% compression. Your compression ratio may be different.

complex
Sep 16, 2003

ghostinmyshell posted:

If you are in the DIY mood, buy one of these http://www.supermicro.com/products/nfo/sbb.cfm and slap Nexenta/OpenFiler on it with some minor GUI changes. Ta-da, you are now qualified to sell super low-end NAS/SAN systems, apparently.

That's the platform Nimble is built on.

complex
Sep 16, 2003

I've heard that there is a special budget inside EMC dedicated to "competitive situations". If a sales guy is trying to unseat NetApp, he/she can access this budget and get hardware costs super low. They did for us, anyway.

complex
Sep 16, 2003

wyoak posted:

Is Nimble able to scale out with multiple arrays, or would each array be managed separately? It sounds like they have plans for expansion shelves, have those come out yet?

Not today. In the past they have mentioned the possibility: http://www.theregister.co.uk/2011/11/22/nimble_scale_out/

I have it from an authoritative source that Nimble is definitely working on scale-out arrays. Expect them soon.
