|
R-Type posted:
Do you know what would be awesome? A good, clear MS iSCSI Initiator MPIO configuration guide for setting up both HA and path aggregation. I'm embarrassed to admit that I don't understand how to clearly establish MPIO between multiple ports between a W2k8 box and an iSCSI SAN like Openfiler. It seems like MS will only want to use one NIC regardless of how many are installed (in my case, the iSCSI LAN is connected to an Intel 1000 PL PCIe dual-port NIC).

I assume you have installed the MPIO Multipathing Support for iSCSI thinger: http://technet.microsoft.com/en-us/library/cc725907.aspx
|
# ¿ Aug 30, 2008 17:26 |
|
This is for the 50GB backup offer, I presume?
|
# ¿ Sep 17, 2008 21:05 |
|
What kind of performance hit will deduplication incur? Say I have 26 servers, A through Z. They all have 72GB drives now, but they use ~10GB on each of them, and assume that ~3GB of that is exactly the same base OS image. Deduplication will obviously save us a lot of space. I've seen the NetApp demo videos and it sounds awesome. But people are now telling me that performance will suffer. Still others say that all your duped blocks will probably be sitting in cache or on SSD anyway, so performance actually increases. I can see both sides of it: if I am just reading the same block all the time (say, a shared object in Linux or a DLL in Windows), then if that block is deduped I'll be winning. But let's say I modify that block; then the storage array has to pull that block out and start keeping a second copy of it, and managing that slows the array down. Thoughts?
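A toy sketch of the bookkeeping involved, just to illustrate the trade-off (this is not how any real array implements it): identical blocks share one refcounted physical copy, so the 26-servers case collapses to a single stored block, and modifying a shared block is exactly the moment the array has to allocate and start tracking a second one.

```python
import hashlib

class DedupStore:
    """Toy block store: identical blocks share one refcounted physical copy."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> (payload, refcount)
        self.volumes = {}   # (volume, lba) -> fingerprint

    def write(self, volume, lba, data):
        # Drop the reference to whatever block this address pointed at before.
        old = self.volumes.get((volume, lba))
        if old is not None:
            payload, refs = self.blocks[old]
            if refs == 1:
                del self.blocks[old]
            else:
                self.blocks[old] = (payload, refs - 1)
        # Identical data dedupes onto one shared physical block.
        fp = hashlib.sha256(data).hexdigest()
        payload, refs = self.blocks.get(fp, (data, 0))
        self.blocks[fp] = (payload, refs + 1)
        self.volumes[(volume, lba)] = fp

    def physical_blocks(self):
        return len(self.blocks)

store = DedupStore()
# 26 servers write the identical base-OS block: one physical copy total.
for server in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
    store.write(server, 0, b"base OS image block, same everywhere")
# Server A patches its copy: the shared block stays for the other 25,
# and the array now has to store and track a second block.
store.write("A", 0, b"server A's modified block")
print(store.physical_blocks())  # 2
```

The reference-count updates on the shared-write path are the "managing" overhead in question; whether it hurts in practice depends on whether those fingerprint lookups hit cache.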
|
# ¿ Mar 6, 2009 02:28 |
|
Hmm. We could not get our x4500 anywhere near wirespeed using iSCSI. Wondering now if the x4540 is that much better... Anyone see my question on the last page about deduplication?
|
# ¿ Mar 9, 2009 13:12 |
|
Misogynist posted:
What iSCSI target stack were you using, out of curiosity?

Windows Server, not sure what version, 2003? I'm not a Windows guy.
|
# ¿ Mar 9, 2009 16:17 |
|
Doh. Sorry. Target, not initiator. Solaris 10 U4
|
# ¿ Mar 9, 2009 16:31 |
|
Misogynist posted:
You can also use an x4600 as an interface to a bunch of Thumpers (up to 6 at 48x1TB each). They tend not to advertise this functionality much. If you're going to do this, I recommend OpenSolaris/SXCE over Solaris 10 because of the substantial improvements in native ZFS kernel CIFS sharing.

You've got to post a link detailing the "x4600 as a master to multiple x4500s" setup. I've never heard of that.
|
# ¿ Jun 16, 2009 18:54 |
|
Anyone well versed with EMC Symmetrix arrays? We have only a single admin, and he isn't very good... Whenever he presents a LUN (or multiple LUNs to the same machine) it comes along with a 1MB LUN. He says this is required for the EMC and we should just ignore it, but I'm not so sure.

pre:
c2t5006048AD5F04751d0: configured with capacity of 0.94MB
c3t5006048AD5F0475Ed0: configured with capacity of 0.94MB

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c2t5006048AD5F04751d0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006048ad5f04751,0
       2. c3t5006048AD5F0475Ed0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0/ssd@w5006048ad5f0475e,0

What is this 1MB LUN, and do we need it? If not, what can I tell our SAN admin in order to stop this madness?
|
# ¿ Jul 24, 2009 20:36 |
|
A-ha, thanks. They seem to be "Gatekeeper LUNs"; that provided the requisite Google juice. Now to convince the SAN guy that we don't need them. As for the multipathing, yes, this was before I did stmsboot -e. It is a turn-up of a new box. Also, I trimmed the real disks, 4x500GB LUNs, from the earlier output. After enabling multipathing:

pre:
bash-3.00# format
Searching for disks...done

c2t5006048AD5F04751d0: configured with capacity of 0.94MB
c3t5006048AD5F0475Ed0: configured with capacity of 0.94MB

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c2t5006048AD5F04751d0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w5006048ad5f04751,0
       2. c3t5006048AD5F0475Ed0 <EMC-SYMMETRIX-5771 cyl 1 alt 2 hd 15 sec 128>
          /pci@7c0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0/ssd@w5006048ad5f0475e,0
       3. c4t60060480000190300445533030383833d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383833
       4. c4t60060480000190300445533030383635d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383635
       5. c4t60060480000190300445533030383437d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030383437
       6. c4t60060480000190300445533030384131d0 <EMC-SYMMETRIX-5771 cyl 65533 alt 2 hd 60 sec 272>
          /scsi_vhci/ssd@g60060480000190300445533030384131
|
# ¿ Jul 25, 2009 02:14 |
|
I don't know if there is any harm in having them there. It's just annoying to sort through. I guess something bad could happen if we tried to write to one, or format it. From the bit of reading I just did, it sounds like we don't want to dedicate a gatekeeper device, so the array will instead just use one of the normal data LUNs for Symmetrix communication (which we of course will never do).
|
# ¿ Jul 25, 2009 02:58 |
|
Don't know if you've found this in searching already, but: http://powerwindows.wordpress.com/2009/02/21/maximum-lun-partition-disk-volume-size-for-windows-servers/ I don't know if that answers your question, though. What is the target, i.e. the device presenting the iSCSI LUN?
|
# ¿ Oct 30, 2009 19:19 |
|
For the x4540 you should use raidz2 groups of 6 disks for optimal performance. This is because the x4540 has 6 SATA controllers. See http://blogs.sun.com/timthomas/entry/recipe_for_a_zfs_raid and/or http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#How_to_Set_Up_ZFS_on_an_x4500_System for details. Comparing the x4540 and the CX3 head-to-head based purely on spindle count is not a good comparison. The CX3 has a bunch of cache that can really help performance.
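For instance, a pool laid out along those lines might look like the sketch below. The cXtYdZ device names are illustrative only; the actual controller/target numbering varies per box, so check format(1M) output first.

```shell
# Illustrative only -- substitute the real device names for your box.
# Each 6-disk raidz2 group takes one disk from each of the six SATA
# controllers (c0..c5), so losing a controller costs any group at most
# one disk, and each group can still survive a second disk failure.
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
  raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0
```

Keep adding 6-disk groups the same way for the remaining rows of disks, leaving out the boot disks and any hot spares.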
|
# ¿ Jan 13, 2010 20:11 |
|
You're asking how it works? Basically there are two FCAL loops, with a controller at the 'head' of each loop. In a clustered configuration each disk actually has two different addresses, so each head can access it if the other goes down. Picture a disk that has two connectors on the back of it, connected to two different controllers. As long as each controller agrees on who is doing the work, everything is fine because they won't step on each other. Imagine a street of 6 houses. They are numbered 1, 2, 3, 4, 5, and 6. They are served by one mailman and everything is happy. Now, you could add a second 'label' to the houses. Label them A, B, C, D, E, and F. House 1 could also be called House A, and House E could also be called House 5. To split the work the postal service adds a second mailperson; Alice serves houses 1, 2, and 3 while Bob serves houses D, E, and F. If either Alice or Bob were sick (i.e. a storage controller failed), the other would simply pick up that half of the mail route too. Check out http://www.docstoc.com/docs/23803079/Netapp-basic-concepts-quickstart-guide, particularly from page 40 on, for more details (and without ridiculous analogies).
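The mail-route scheme above, as a toy sketch (controller and disk names are made up, and real arrays obviously do this in firmware, not Python): every disk is reachable through either controller, and a surviving controller services its partner's disks via their second address.

```python
# Normal operation: each controller owns half the disks.
ownership = {
    "ctrl_a": {"1", "2", "3"},
    "ctrl_b": {"4", "5", "6"},
}

def serving_controller(disk, failed=None):
    """Return which controller currently services a disk."""
    for ctrl, owned in ownership.items():
        if disk in owned and ctrl != failed:
            return ctrl
    # The owner is down: the partner takes over via the disk's
    # second address on the other loop.
    (partner,) = [c for c in ownership if c != failed]
    return partner

print(serving_controller("5"))                    # ctrl_b
print(serving_controller("5", failed="ctrl_b"))   # ctrl_a
```

The important property is the one the analogy was getting at: ownership is just an agreement about who does the work, so failover is a relabeling exercise rather than any physical rewiring.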
|
# ¿ Mar 6, 2010 10:19 |
|
If you have a single VM in a datastore, you may gain from defragmenting your guest. See http://vpivot.com/2010/04/14/windows-guest-defragmentation-take-two/ for some hard data. However, as you add the I/O of multiple VMs to a datastore, you are effectively turning the I/O stream into random I/O, from the array's point of view. Any gains previously made will be muted. complex fucked around with this message at 02:53 on Sep 4, 2010 |
# ¿ Sep 4, 2010 01:59 |
|
Anyone have any thoughts on NetApp's new offerings? The FAS6200, but in particular ONTAP 8.0.1. I'm thinking of going to 8 just for the larger aggregates.
|
# ¿ Nov 11, 2010 15:58 |
|
What does your IOPS profile look like, read vs. write? How about just pure total IOPS? NFS, iSCSI, or FC? At peak load what does cache age look like? We have a FAS3140 with 7 full DS14MK4s and 2 full DS4243s emitting block storage to a vSphere 4.1 installation, and I have looked at a lot of performance numbers. We are running 7.3.3.
|
# ¿ Nov 18, 2010 16:37 |
|
ferrit posted:
I'll have to check this out - thanks for the link. I believe there is some way that you can actually simulate what a PAM module would do for you if you had it by running some advanced commands, but I can't find it on the NOW site right now.

We've done this; it's called PCS (Predictive Cache Statistics). Because we are dealing with very large files in VMware, PCS indicated that an increase in FlexCache/PAM would not significantly increase cache hit ratios for us, and thus would not be worth it. Instead we decided to simply increase spindles.
|
# ¿ Nov 18, 2010 20:01 |
|
Anyone have any experience with 3PAR gear? We're looking for alternatives to our NetApp. We use Fibre Channel.
|
# ¿ Dec 10, 2010 19:15 |
|
How do you guys do IOPS sizing when looking at a new array? Is a ballpark number of "150 IOPS per 15K RPM disk, times the number of disks" enough for a rough number? Of course, that would be the low watermark, while any caching would just be gravy on top.
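That kind of back-of-the-envelope sizing can be sketched like this. The 150 IOPS/disk figure and the write penalty are rules of thumb, not vendor specs: the penalty is roughly 2 backend writes per frontend write for RAID 10, higher for parity RAID.

```python
def backend_iops(frontend_read, frontend_write, write_penalty):
    """Backend IOPS the spindles must absorb for a given frontend load."""
    return frontend_read + frontend_write * write_penalty

def disks_needed(frontend_read, frontend_write, write_penalty=2,
                 iops_per_disk=150):
    """Spindle count needed to serve the load with zero cache help."""
    backend = backend_iops(frontend_read, frontend_write, write_penalty)
    # Round up to whole spindles.
    return -(-backend // iops_per_disk)

# e.g. 3000 read + 1000 write frontend IOPS on RAID 10 (penalty 2):
# (3000 + 1000*2) / 150 = 33.3 -> 34 disks, before any cache gravy.
print(disks_needed(3000, 1000, write_penalty=2))  # 34
```

As the post says, this is the low watermark: cache hits and sequential workloads only move the answer down from there.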
|
# ¿ Feb 23, 2011 22:32 |
|
This is for VMware, so I have the burden (luxury?) of just assuming all IO is random.
|
# ¿ Feb 24, 2011 15:09 |
|
Misogynist posted:
This is where virtualization admins get really lazy, and it really bothers me.

Suppose my array does sub-LUN tiering and can do post-provisioning migration between LUNs? They say laziness is a desirable trait in sysadmins.
|
# ¿ Feb 28, 2011 02:52 |
|
First, before allowing anything to happen automatically, some systems will allow you to run in sort of a "recommendation" mode, saying "I think this change will be beneficial". Also, if your tiering system does not take time-of-day changes (or weekly/monthly, whatever) into account, then of course it won't be able to adapt to cyclic events like you describe.
|
# ¿ Mar 3, 2011 15:35 |
|
Who knows things about Fibre Channel switches? We have Cisco MDS 9216i and 9506 now, but I'd like to investigate Brocade switches. What kind of features are indicated by a "director" level switch, like an MDS? Do I have to step into Brocade's DCX line, or could I get by with a Brocade 300? As far as I know we don't do any ISL Trunking.
|
# ¿ Apr 22, 2011 18:27 |
|
Looking for anyone with experience with HP's P4800 BladeSystem SAN.
|
# ¿ Jul 6, 2011 13:48 |
|
ghostinmyshell posted:
Is that a Lefthand? I, too, have been searching the world for an actual HP engineer for my Lefthand problems for the last few days.

It uses LeftHand's SAN/iQ software platform, yes, but it pairs it with a BladeSystem c7000 chassis and MSA 600 shelves (or drawers, really).
|
# ¿ Jul 12, 2011 03:44 |
|
Looks like the FAS2020 is in HA mode: 7 drives are assigned to the head you are logged into, and the other 7 are assigned to the other head. This means the disks are in two separate aggregates. The 7 disks in head B are not "redundancy" for the disks in head A. You could change the ownership of the 7 drives in head B to head A. Then head A would have all the drives; you could stick them in one big RAID-DP group and have 138GB x 11 = ~1.5TB of storage, with one parity, one dparity, and one spare. Head B would be idle until a failover occurred.
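The arithmetic behind that ~1.5TB, spelled out (138GB is the right-sized capacity of these drives, per the numbers above):

```python
# 14 drives, all reassigned to head A, in one RAID-DP group with a spare.
total_disks = 14
parity = 1       # RAID-DP row parity
dparity = 1      # RAID-DP diagonal parity
spare = 1        # hot spare

data_disks = total_disks - parity - dparity - spare
usable_gb = data_disks * 138     # right-sized GB per drive

print(data_disks, usable_gb)     # 11 1518 -> ~1.5TB
```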
|
# ¿ Jul 28, 2011 18:55 |
|
No, you already have high availability. If a head fails, the other head, the partner, will take over and there will be no interruption in service. Your current setup is like having two baskets with 6 eggs each, and you use both baskets at the same time. You want to change to a setup where you run one basket with 12 eggs and the other basket is empty. But in both situations you are running in HA, and if basket 1 failed, basket 2 would be there to pick up all the eggs, with nothing going down.
|
# ¿ Jul 28, 2011 19:08 |
|
madsushi posted:
You will need to leave 3 disks attached to the partner, as it will need its own aggregate/root volume to run, and that requires at least 3 disks.

Good point. I forgot this is a small 2020, and if there are no other disks to host the root volume you can't do what I said and move all the disks to head A. Best practice is to keep the heads balanced anyway, so you should probably follow adorai's advice.
|
# ¿ Jul 29, 2011 16:43 |
|
http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf has a lot of details. I thought I had a flash drive from EMC World that has data sheets on every product, but I can't find it here in my office. Is anyone excited about the new 3PAR P10000? The V400 and V800 submodels look awesome, though there are no SAS drives and no 10Gbit out of the gate. Peer Motion looks sweet, though.
|
# ¿ Aug 25, 2011 21:49 |
|
Oh, I see, you want DOCUMENTATION. All of that is behind Powerlink. I'm looking at VNX System Operations, 114 pages of mind-numbing goodness. This is just one of what look to be ~180 horse-choking PDFs.
|
# ¿ Aug 26, 2011 04:56 |
|
Internet Explorer posted:
So does anyone have any feedback on the VNX line yet? While I have been pretty happy with our EqualLogic SANs, I think the new VNX SANs are more where we need to be. Their scalability seems more granular and I think you can do a lot more tweaking to get the performance that you need. We are specifically looking at the 5300 with unified storage.

http://henriwithani.wordpress.com/ has a good ongoing series reviewing a VNXe 3300. Unisphere is the same as on the bigger VNX.
|
# ¿ Sep 8, 2011 03:39 |
|
We have 2 x VNX 5700 on the way. Super excited.
|
# ¿ Oct 20, 2011 14:38 |
|
Ask your HP rep about the T400 deal available through the end of the year. Some ridiculous pricing is available on a true Tier 1 array.
|
# ¿ Nov 19, 2011 18:41 |
|
Martytoof posted:
This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox, as far as the operating system is concerned? I'm looking at something like OpenFiler. This would literally just be a small proof of concept test, so I'm not terribly concerned about ongoing performance right now.

I have FreeNAS, OpenFiler, NexentaStor CE, and a ReadyNAS 2100 in my lab at work. I think if you're learning iSCSI basics it is actually good to try a bunch of different ones. You'll see how each one does volumes and LUNs, mapping, network ACLs, etc. All of them work great with ESXi 5.
|
# ¿ Jan 8, 2012 03:56 |
|
There is a new version of TR-3749 available at http://www.netapp.com/us/library/technical-reports/tr-3749.html
|
# ¿ Jan 24, 2012 17:12 |
|
Sorry, in the enterprise world that is pretty cheap. I've seen a CS240 with 24TB raw, 32TB usable for under $40K. Now that is serious GB/dollar in addition to IOPS/dollar.
|
# ¿ Mar 22, 2012 16:07 |
|
NippleFloss posted:
The CS240 is 24TB raw and 16TB usable. I believe you're thinking of the CS260, though you have the raw and usable numbers flipped. Those are reasonable prices, but they aren't knocking it out of the park on the GB/dollar axis.

No, I meant the CS240. Instead of "usable" I should have said "effective": http://www.nimblestorage.com/products/nimble-cs-series-family/ I have a CS240, and the compression on VMDKs exceeds the estimated 50% compression. Your compression ratio may be different.
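For clarity, "effective" here just means usable capacity scaled by the expected compression ratio. The arithmetic below is illustrative, not Nimble's exact accounting: 50% compression is a 2:1 ratio, which is how 16TB usable gets advertised as 32TB effective.

```python
def effective_tb(usable_tb, compression_ratio):
    """Capacity as seen by hosts once compression is factored in."""
    return usable_tb * compression_ratio

# 16TB usable at the estimated 50% compression (2:1 ratio):
print(effective_tb(16, 2.0))  # 32.0
```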
|
# ¿ Mar 22, 2012 20:18 |
|
ghostinmyshell posted:
If you are in the DIY mood, buy one of these http://www.supermicro.com/products/nfo/sbb.cfm and slap nexenta/openfiler on it with some minor GUI changes. Tada, you are now qualified to sell super low end NAS/SAN systems apparently.

That's the platform Nimble is built on.
|
# ¿ Mar 23, 2012 03:31 |
|
I've heard that there is a special budget inside EMC dedicated to "competitive situations". If a sales guy is trying to unseat NetApp, he/she can access this budget and get hardware costs super low. It did for us, anyway.
|
# ¿ May 11, 2012 16:28 |
|
wyoak posted:
Is Nimble able to scale out with multiple arrays, or would each array be managed separately? It sounds like they have plans for expansion shelves; have those come out yet?

Not today. In the past they have mentioned the possibility: http://www.theregister.co.uk/2011/11/22/nimble_scale_out/ I have it from an authoritative source that Nimble is definitely working on scale-out arrays. Expect them soon.
|
# ¿ May 11, 2012 20:59 |