Syano
Jul 13, 2005


EMC Clariion checking in.

I run a single 15-drive DAE populated with 143GB 15k RPM U320 SCSI drives over an iSCSI backplane.

I've had to become intimately familiar with this beast, along with wonderful technologies such as jumbo frames, chimney offload/TOE, and RAID levels, here recently, so I'm more than happy to answer any questions I can.

Also, any discussion of enterprise storage is incomplete without involving RAID 3. It still has a place where most of your writes will be sequential (hey there, big database). We use it for several databases, and it outperforms RAID 5 without the storage overhead of RAID 1+0.
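For a rough sense of the capacity tradeoff being described, here is a back-of-envelope sketch; the drive counts and sizes are illustrative, not anyone's exact config:

```python
# Rough usable-capacity math for the RAID levels mentioned above.
# Hot spares are ignored; numbers are per single RAID group.

def usable_capacity(drives: int, size_gb: int, level: str) -> int:
    """Return usable GB for one RAID group of identical drives."""
    if level in ("raid3", "raid5"):      # one drive's worth of parity
        return (drives - 1) * size_gb
    if level == "raid6":                 # two drives' worth of parity
        return (drives - 2) * size_gb
    if level in ("raid10", "raid1+0"):   # everything is mirrored
        return (drives // 2) * size_gb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid3", "raid5", "raid10"):
    print(level, usable_capacity(15, 143, level))
```

RAID 3 and RAID 5 both give up one drive's worth of capacity to parity, while 1+0 roughly halves raw capacity — which is the overhead difference the post is pointing at.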


Syano
Jul 13, 2005


Richard Noggin posted:

Very nice, thanks!

We're going to be moving our server lineup to ESX 3i, and we're trying to determine whether we can get by with DAS or if we need a SAN. We're a small company, but we have a couple DB-centric apps that run. Here's the setup:

1 SBS 2003 box, serving ~5 users
1 Web based management system with a SQL backend
1 Ticketing system with a SQL backend
1 Server 2k3 box as a backup DC

We'd create a new VM for hosting SQL for both the management and ticketing systems, so the load would go down on those VMs but another one would be picking up the slack.

The new server would be a DL360 G5 with dual quad Xeons with 16GB RAM. Our current switch is a POS Linksys 16 port managed GB job - it's fine for what we're doing now, but I'm not sure about iSCSI traffic. Our storage and IO needs aren't going to change a whole lot in the next few years.

The DAS we're looking at is a MSA60 with ~900GB usable space (12 146GB SAS drives in RAID 10). For a SAN, we'd be looking at the MSA 2000 series running on iSCSI, probably with a similar drive setup.

Any thoughts about what would be the best bang for the buck? The SAN alone is $15k+, while DAS + the server comes in at $11k.

edit: we're an HP shop.

If all else were equal, the best bang for your buck will be the SAN. Direct-attached storage will never have the flexibility of the SAN, nor the growth potential (saying this with only minimal knowledge of the solution you are looking at).

Syano
Jul 13, 2005


OK, I need some help from some iSCSI gurus, and I figured this was the place to ask. I have an EMC CX3-10 array and a file server running Win2k3 attached to it, running PowerPath 5.2 and the iSCSI initiator. This server loses its shares every time it gets rebooted. No problem, right? The LanmanServer service must just be coming up before the iSCSI targets are reconnected. Well, that is the problem: I have already gone through the process of making the LanmanServer service dependent upon the iSCSI and PowerPath services, but the shares are still down after a reboot and I have to reshare the folders.

I have gone through Dell support and still have an active ticket with those guys but I thought maybe there was something I was missing that one of you guys could throw my way.

Syano
Jul 13, 2005


I think I solved my own issue, so I figured I would post in case anyone would benefit. It appears that with the PowerPath 5.2 software, the iSCSI targets show as "PowerPath Device" in the registry instead of "iSCSI Device", so they cannot be persistently bound with the iSCSI initiator. There is a registry fix available that basically involves adding a DWORD value. Of course I can't test it right now, because I can't just reboot a file server willy-nilly whenever I want to.
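The underlying idea — making the file-sharing service wait for the storage services — lives in the service's DependOnService registry value on Windows. As a toy illustration of why that ordering matters, here is the start-order constraint modeled in Python; "PowerPath" is a stand-in service name, and MSiSCSI is the Microsoft iSCSI Initiator service:

```python
from graphlib import TopologicalSorter

# Toy model of Windows service start ordering: a service may start only
# after everything in its DependOnService list has started.
deps = {
    "LanmanServer": {"MSiSCSI", "PowerPath"},  # the dependency the poster added
    "MSiSCSI": set(),
    "PowerPath": set(),
}

start_order = list(TopologicalSorter(deps).static_order())
print(start_order)  # both storage services come before LanmanServer
```

This only works, of course, if the targets are actually reconnected by the time those services report "started" — which is exactly what the persistent-binding registry fix above is about.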

Syano
Jul 13, 2005


OK, we are about to jump into the world of SAN storage, and we have our sights set on an EqualLogic array. I have some questions, though, that will help me understand how this all works. First, what is the best way to provision the storage? Do we just take the entire set of drives, make it one big RAID set, and divvy up LUNs from there to hand off to the servers, or should we divide the array into multiple RAID sets and build LUNs based off those?

Next, I am having a difficult time wrapping my head around the idea of a snapshot. Is it really as awesome as I think it is? Because I am thinking that if I were able to move all my storage onto the array, then I would be able to use snapshots, and eventually replication, to replace my current backup solution. Are snapshots really that awesome, or do I have an incorrect vision of what they do?
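A conceptual sketch of how array snapshots typically work (copy-on-write) may help frame the answers below. This is a toy model, not any vendor's implementation:

```python
# Toy copy-on-write snapshot: a snapshot records nothing at creation time;
# original blocks are preserved only when they are first overwritten.

class Volume:
    def __init__(self, nblocks: int):
        self.blocks = {i: b"\x00" for i in range(nblocks)}
        self.snapshots = []

    def snapshot(self) -> dict:
        snap = {}                 # starts empty: near-instant, no space used
        self.snapshots.append(snap)
        return snap

    def write(self, block: int, data: bytes):
        for snap in self.snapshots:
            snap.setdefault(block, self.blocks[block])  # preserve old data once
        self.blocks[block] = data

    def read_snapshot(self, snap: dict, block: int) -> bytes:
        return snap.get(block, self.blocks[block])  # fall through to live data

vol = Volume(4)
vol.write(0, b"v1")
snap = vol.snapshot()
vol.write(0, b"v2")
print(vol.read_snapshot(snap, 0))  # b'v1' -- snapshot sees pre-change data
```

Note the catch the thread raises next: the snapshot shares every unchanged block with the live volume, so if the array itself dies, the snapshots die with it — which is why snapshots alone can't replace offline, offsite backups.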

Syano
Jul 13, 2005


Misogynist posted:

You know the old adage of how RAID isn't backup?

It's still RAID. I remember a story here about some guy with a big-rear end BlueArc NAS that was replicating to another head. The firmware hit a bug and imploded the filesystem, including snapshots. It then replicated the write to the other head, which imploded the filesystem on the replica.

This is probably less of a concern when your snapshots happen at the volume level instead of the filesystem level, but there's still plenty of disaster scenarios to consider without even getting into the possibilities of malicious administrators/intruders or natural disasters. You really need to keep offline, offsite backups.

Roger that. Good info to remember.

What about provisioning? Is it usually worth it to split the disks into separate RAID groups, or just build one RAID set from all the available disks? Or is this something where you need to know more about your IO load to make a decision?

Syano
Jul 13, 2005


Erwin posted:

Can anybody give me an idea as to whether these speeds are to be expected? The application that the server is for has been installed, and it's hanging whenever you do anything that involves reading files from the SAN.

Is there any particular reason jumbo frames are off? We pulled a CX3-10 array off a Cisco and put it on a Dell, and the performance was absolutely abysmal until we turned jumbo frames on. I didn't realize how much of a difference the two switches would make until I saw it with my own two eyes. Not sure if the ProCurve is your culprit, but it's worth a shot if you can turn jumbo frames on.
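For a rough idea of what jumbo frames buy on the wire, here is a back-of-envelope payload-efficiency calculation. The header sizes are the standard Ethernet/IP/TCP ones; TCP options and the iSCSI PDU header (amortized over many frames) are ignored:

```python
# Back-of-envelope payload efficiency for iSCSI over TCP/IP at two MTUs.

ETH, IP, TCP = 14, 20, 20             # header bytes

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP - TCP          # MTU covers the IP packet, not Ethernet
    wire = mtu + ETH + 4 + 8 + 12     # + FCS, preamble/SFD, inter-frame gap
    return payload / wire

print(f"MTU 1500: {payload_efficiency(1500):.1%}")
print(f"MTU 9000: {payload_efficiency(9000):.1%}")
```

That works out to roughly 95% vs 99% payload efficiency, so the raw bandwidth gain is modest; the bigger practical win is about 6x fewer frames — and therefore far fewer interrupts and TCP segments to process — per megabyte moved, which is where the "abysmal until jumbo frames" behavior usually comes from.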

Syano
Jul 13, 2005


I am looking for something entry-level. We have about 14 servers we want to virtualize, and we need some flexibility in the device so we are able to grow with it, but my vendors keep coming back with these out-of-the-ballpark-priced solutions. What do you guys suggest for a good entry-level unit?

Syano
Jul 13, 2005


Welp! I mainly just wanted some pointers to some entry-level SAN units, but since you are willing to help to THAT extent, I will get my performance numbers together and move on over to the virtualization thread when I get a moment.

Syano
Jul 13, 2005


I've got an MD3200i being delivered soon. How do you like the thing?

Syano
Jul 13, 2005


Echidna posted:

I'd be interested in how your experiences compare to mine. What Virtualization platform are you using ?

I would be interested in hearing also. We are going to be doing P2V with Hyper-V on about 5 boxes over the next month or so. Looking forward to finally centralizing our storage.

Syano
Jul 13, 2005


I know this has to be an elementary question, but I just wanted to throw it out there anyway to confirm with first-hand experience. We have our new MD3200i humming along just nice, and we have the snapshot feature. Before I start playing around, though, I assume snapshotting a LUN that supports transactional workloads like SQL or Exchange is probably a bad idea, right?

Syano
Jul 13, 2005


So talk to me a bit more in depth about hardware-based snapshots, then. I am guessing, based on what you guys are saying, that my best use of them as a backup tool would be to take a snapshot, mount that snapshot on a backup server, run my backup out of band from the snapshot, then delete the snapshot when that is done so as not to incur a ton of overhead? Does that sound about right?
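That cycle can be sketched as a simple orchestration, with the delete step guaranteed even when the backup fails. The snapshot/mount/backup steps here are hypothetical stand-ins for whatever CLI or API the array actually exposes:

```python
from contextlib import contextmanager

@contextmanager
def array_snapshot(lun: str):
    """Pretend to ask the array for a snapshot; always clean it up."""
    snap_id = f"{lun}-snap"          # stand-in for a real array call
    print(f"created {snap_id}")
    try:
        yield snap_id
    finally:
        print(f"deleted {snap_id}")  # runs even if the backup blows up

def run_backup(lun: str):
    with array_snapshot(lun) as snap:
        print(f"mounting {snap} on backup server")
        print(f"backing up from {snap}")

run_backup("sql-data")
```

Wrapping the snapshot in a context manager keeps the copy-on-write overhead alive only for the duration of the backup run, which is exactly the "delete it when done" discipline described above.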

Syano
Jul 13, 2005


shablamoid posted:

Would it make a difference if the system is a Windows 2008 R2 box with 2 MD1000s attached, or would this generally apply to all systems?

I think what they are getting at is that it does not matter what the hardware is; it matters what the workload is. What workload are you running on that box connected to those 2 MD1000s?

Syano
Jul 13, 2005


That post scares the hell out of me. We are virtualizing our entire network at the moment, all on top of a Dell MD3200i. Now when I say our entire network, I am really just talking 15-ish servers, give or take. But hearing a story of a SAN going tits up even with 'redundant' options gives me nightmares.

Syano
Jul 13, 2005


Misogynist posted:

SANs are as redundant as you design them to be. If you put your backups in the same shelf as your production data, you're very likely to be hosed over by your own ignorance at some point no matter how good your hardware is.


Well, lucky for me, I spin my backups to a direct-attached disk array on my backup server. Plus, each VM host actually has enough local storage to run a VM or two on its own without the SAN if it had to. Still scares me to death, though.

EDIT: You know what is funny? Since we are actively building this environment, I have been keeping my eyes glued to this thread. One of the things I saw a couple pages ago was someone mentioning how a network card could go bad. It made me stop and completely redesign my hosts to have 3 NICs minimum, all connected to different layer 2 devices.

Syano fucked around with this message at 12:45 on Sep 21, 2010

Syano
Jul 13, 2005


Well, what I have is 3 dual-port NICs, so a total of 6 gigabit ports: 4 for iSCSI and 2 for guest network traffic. I have been keeping a close eye on network usage, and so far it doesn't appear bandwidth is going to be a problem. I could be wrong though :ohdear:

Syano
Jul 13, 2005


Is there anyone that competes price-wise with Dell and their PowerVault 3xxx series of iSCSI SANs? I would love to get a competitor to show me what they have, but I have yet to find anyone that can compete on price.

Syano
Jul 13, 2005


Snap. I had no idea Netgear sold iSCSI stuff. It looks pretty feature-rich too. I wonder if I could overcome my innate fear of putting production data on a Netgear box.

Syano
Jul 13, 2005


skipdogg posted:

What config are you looking at? If you beat them up enough you might be able to get an HP MSA/P2000 unit for about the same price.

Nothing fancy. Right now I have a PowerVault 3200i with 3.2TB raw storage on 15k RPM SAS disks. The main feature I would really like that I don't currently have is some sort of replication.

Syano
Jul 13, 2005


lilbean posted:

Speaking of Dell, anybody actually use the MD3200 product? It *looks* decent, and we need a successor to the Sun 2530 series - does it fit the bill?

I have one in production at the moment. For an entry-level SAN, I'm not sure it can be beat. The one big feature I wish it had is replication, but other than that it's been a great unit.

Syano
Jul 13, 2005


I've actually got the 3200i, the one with the 3.5in disks. Regardless...

This particular array is being used to host Hyper-V VMs, though we do not have any shared volumes on it. I set the thing up as one huge RAID 6 array with 2 hot spares. I have 3 hosts currently connected to it: a PowerEdge R410 and 2 PowerEdge 2950s. I am not using any NIC teaming at all. I am using 2 Catalyst gigabit switches for the fabric and connecting through them. I am presenting a single volume of storage to each host at the moment, so that means I have one controller running 1 volume and the other running 2.

I wonder if your array is freaking out because you do not have all the controller ports connected, or because there is a mismatch in the number of pathways between the host and each controller.

Syano
Jul 13, 2005


What do you guys think of the HP 4000 G2 series arrays? A vendor is trying to close some business before week's end, and he has offered me a really good deal on a 4300 7.2TB array. I wasn't even completely sure I was going to buy that particular array, but the price certainly is right. I just want to make sure I'm not buying some horrible kit.

Syano
Jul 13, 2005


What limitations would you say it has?

For reference, our only array in production at the moment is a Dell PowerVault 3200i.

Syano
Jul 13, 2005


szlevi posted:

I don't run VMware so I cannot help you there - I rather spend my money on better/safer hardware, better backup etc things instead of giving it to EMC for things that are free in Hyper-V or (some) even in XenServer.

I don't even understand what this means?

Syano
Jul 13, 2005


For someone who works with HP's LeftHand products, could you answer a question for me? Since the kit, when using network RAID, abstracts away the fact that the data is being stored across multiple units, does it allow you to build mirrored or RAID 5 LUNs or anything like that? Or do you basically just set up network RAID, carve out a LUN, and let the kit decide what to do with it? Or did I just completely not make any sense?

Syano
Jul 13, 2005


Number19 posted:

Different volumes can have different levels set on them, so you can prioritize your availability on a volume by volume basis.

Oh, that's pretty killer. Cool, this is what I needed to know. Now to make a purchase!

Syano
Jul 13, 2005


Looks to me like he said the same thing 9 times in a row... though I didn't read any of it, because it's annoying as hell...

Syano
Jul 13, 2005


szlevi posted:

Too bad - you might even have learned something from them at the end..

I have learned something. I've learned that you think you know a lot about SANs, and that you are a TERRIBLE poster. iSCSI only used by SMBs? Seriously?

Syano
Jul 13, 2005


It was a tough decision for us too. VMware really made the choice between the Advanced and Midsize acceleration kits a tough one. What kills you is that by stepping up to the Midsize kit, you lose the 3-host limit, which is awesome, but then you get saddled with the 6-core-per-socket limit.

Syano
Jul 13, 2005


I am super interested in EMC's new VNXe line. It's like babby's first SAN. But in all seriousness, it is the first kit we have seen that can compete price-wise with the Dell PowerVault stuff.

EDIT: Just went through a customer presentation on the kits. Man, these are nice. Dedupe, NFS/CIFS/iSCSI, WORM, all for under 10k? Yes please.

Syano fucked around with this message at 17:31 on Jan 20, 2011

Syano
Jul 13, 2005


Yeah, I was more or less salivating over the price point, even though I knew it wasn't going to be for us. I wouldn't dare order without dual controllers and PSUs.

I was about a week from pulling the trigger on an HP 4000 starter SAN for a new project before news of the VNXe dropped on Tuesday. My VAR was quoting me somewhere in the 30s for the HP kit. I should be able to beat that with a comparable VNXe config, plus get a ton more features. The only thing I think I might be missing out on is network RAID. I guess it's really all going to depend on what software licenses I choose on top of the hardware.

Syano
Jul 13, 2005


Intraveinous posted:

I didn't bother with it, but I heard something about them stuffing a bunch of hook^H^H^H^H dancers in a Mini Cooper, and jumping a motorcycle over a bunch of Symmetrix cabinets... Sounds like a lot of fluff with not much substance to me.

That said, having not watched it, I don't know if there was any substance, but if I'm going to take my time to watch a webcast, I want to know what the product is, what it does, and how it does it. If I wanna watch stunts, I'll do that another time.


The most significant announcement for me was what I was just talking about a few posts up: the VNXe line. What you are looking at is an SMB or ROBO kit with extremely rich features for the price point. It isn't for the enterprise, because the software layer really tries to abstract some of the more granular control away from the user, but they actually market that as a selling point for the SMB. The price point is pretty amazing: you can actually price a unit with non-redundant controllers for under 10k.

Syano
Jul 13, 2005


Moey posted:

RAID 1 is just mirrored. So if you have 2 drives that are 1TB each, they will mirror each other, giving you 1TB of storage and the ability to survive one drive dying.

It's a bit more complicated than that. LeftHand units do something they call network RAID, which means your LUN is mirrored across units and not just disks.
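As a rough sketch of what that two-layer redundancy costs in capacity — node and drive counts here are illustrative assumptions, not a real LeftHand config:

```python
# Usable capacity when "network RAID 1" mirrors a LUN across two nodes
# that each run local RAID 5 internally.

def node_usable(drives: int, size_gb: int) -> int:
    """Usable GB inside one node running local RAID 5."""
    return (drives - 1) * size_gb

def mirrored_usable(drives: int, size_gb: int) -> int:
    """With network RAID 1, each node holds a full copy of the data."""
    return node_usable(drives, size_gb)

raw = 2 * 8 * 450                      # two nodes, 8 x 450GB drives each
print(f"raw: {raw} GB, usable: {mirrored_usable(8, 450)} GB")
# Two redundancy layers: local parity inside each node, plus a full
# mirror across nodes -- the LUN survives a disk OR a whole unit.
```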

Syano
Jul 13, 2005


demonachizer posted:

No, I know what RAID 1 is, but I am just not sure I have heard of a SAN that uses that as its RAID solution, especially since there are going to be shitloads of disks involved. Normally you hear of RAID 10. We acted confused, then they seemed unsure of themselves, so I was just wondering if that is a configuration LeftHand has available.

Read my post above. They may have been talking about network RAID 1, especially if they are selling you a dual-unit kit.

Syano
Jul 13, 2005


Microsoft Clustering Services on my storage controllers? BARF

Syano
Jul 13, 2005


Seconding the VNXe series from EMC. NFS, iSCSI, and dedupe all come with the unit, and you can add replication fairly cheaply.

Syano
Jul 13, 2005


Timdogg posted:

Also, Dell is trying to push their "iSCSI Optimized" switches, but I have limited experience with their switches and am hesitant to jump in now. Anyone recommend them?

The only thing their iSCSI optimization does is prioritize iSCSI traffic. If you only have iSCSI traffic on that network (like you should), then their optimization is completely irrelevant.

Syano
Jul 13, 2005


OK, so we just got our first HP LeftHand kit in this Friday, and this is the first of its type my guys or I have ever played with. I just assumed before shipping that the three network ports on the back of each unit would be two ports for the storage network and one port for the management network. As we were racking everything on Friday, though, it turned out that the third port is actually for the HP iLO, and the management traffic goes over the storage network ports? Is that correct?


Syano
Jul 13, 2005


This may be better suited to the virtualization thread, but whatev, I will give it a go here: we are setting up a XenServer environment on top of HP LeftHand storage. Reading through the best-practice guide published by HP, they recommend having 1 VM per 1 volume per 1 storage repository. Is there any reason why?
