szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


I'd be interested to know those P4000 limitations too.

I've been lurking here for a while but now I'm getting ready to close on a $100k+ solution and I'm curious what others might say about my offers: Dell vs HP vs IBM...
The original idea was to keep my fast (~1.4GB/s r/w over 4x FC4) ~20TB storage as tier1 and create one slow but large tier2 (preferably SATA/iSCSI) and one very fast but small tier0 (utilizing some SSD/memory-based solution.)

My RFP called for at least 40-50TB *usable* space (after all RAID, snapshots etc accounted for), preferably iSCSI storage for tier2 and 2x 640GB Fusion-io Duo cards or comparable solutions for tier0 and the following two MS clusters:
- one dual-node Server 2008 R2 Ent failover cluster (CSV) running Hyper-V (dual 12-core Opterons w/ 64GB memory per node)
- one dual-node Storage Server 2008 R2 Ent running all file shares (CIFS), namespaces (DFS-N), everything shared off all storage tiers while, of course, taking care of necessary data tiering (probably involving FCI + my scripting kung-fu unless vendor supplies something useful.)
Backend should be all 10GbE (both iSCSI and regular data) and completely redundant (dual-dual cards, dual switches etc.)
RFP also calls for a backup option with a smaller (eg 24-slot) tape library w/ LTO5 and preferably Commvault - I passionately hate BackupExec - but it'll probably end up as a simple money question, ie the effect on the bottom line.
All this should come from one vendor - we don't have a large team but we do a lot of high-end 3D work as well as 2D and I don't want to deal with any finger-pointing if something doesn't work as promised. (OEM'd gear is OK, of course, as long as it's covered by the regular support.)

+ one added wish: I'd love to have the option at a later point to hook up a few stations from my 3D group via 10GBASE-T.
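To sanity-check the 40-50TB *usable* requirement against raw capacity quotes, here's a back-of-envelope sketch; the RAID, hot-spare, and snapshot-reserve percentages are illustrative assumptions of mine, not vendor figures:

```python
# Rough raw-to-usable capacity estimate for the RFP's 40-50TB target.
# Overhead fractions are illustrative assumptions, not vendor numbers.

def usable_tb(raw_tb, raid_overhead=0.20, snapshot_reserve=0.25, spare_frac=0.04):
    """Raw capacity -> usable after hot spares, RAID parity, and snapshot reserve."""
    after_spares = raw_tb * (1 - spare_frac)
    after_raid = after_spares * (1 - raid_overhead)
    return after_raid * (1 - snapshot_reserve)

# How much raw capacity does ~45TB usable imply under these assumptions?
for raw in (70, 80, 90, 100):
    print(f"{raw}TB raw -> {usable_tb(raw):.1f}TB usable")
```

Under these assumptions roughly 80TB raw lands in the middle of the 40-50TB usable window; the real overhead of course depends on the RAID level and snapshot policy each vendor quotes.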

IBM submitted a very good solution with certain basics still missing/being worked out: it's built on the new Storwize V7000, which is 0% Storwize but 70% SVC (the software that runs on the box) + 15% DS8xxx (EasyTier) + 5% XIV (nice GUI) + 10% DS5xxx (smaller-sized but still controllers + enclosures storage.) Biggest missing item is 10GbE iSCSI: supposedly the chipset is there - or so I was told - but it needs to be validated and then a firmware upgrade can enable it...?
The huge advantage here is the SVC which can virtualize all my existing and future storage and EasyTier which would also solve all my data-tiering woes.
This is also the downside: absolutely ridiculous license fees, and I'm not entirely convinced about this extra layer between my storage and OSes... does anyone know more about this SVC thing - how does it deal with block sizes etc?
Another downside is that they are not only almost 50% more expensive but the included storage space is by far the smallest, one-fourth to one-fifth of any competing offer.


HP obviously offers their (post-Lefthand, so it's 100% HP) StorageWorks scale-out P4500 G2 w/ DL385 G7s and their new X3000-series NAS heads, but they have absolutely no data tiering as far as I know, not even within their P4000 boxes (the offer comes with 5 boxes.) This won't be a problem for now but could be if I have to add a new system in 2-3 years' time (the quoted 120TB raw should hold out for 2-3 years in normal circumstances.) OTOH they offer decent sync replication, which could make a difference if I have to build out a new floor in the next building... they also have the virtual VSA, which scales up to 10TB, but I'm reluctant to think of it as anything more than a DR solution or a dev platform for my R&D guys.
HP offers great edge switches - I could get two 5406s w/ modules for around $10-12k - no 10GBASE-T whatsoever, though.

Dell offers their EQL PS6510E 'Sumo' unit w/ R715s and NX3000-line NASes but so far their pricing is horrible. Last week I almost dropped them but at the last moment they came back promising to match HP - I only gave them a hint on one item as to what kind of discounts to aim for - so I'm assuming they will. Even if they do, they will still lack sync replication, they only offer data tiering within EQL, and even async replication only works to another EQL unit (though it could be any unit incl. the cheapest PS4000E.) They're also behind HP in just now readying their NX NAS line with Storage Server 2008 R2 Ent... did I mention they still do not offer remote VM snapshotting for Hyper-V, unlike HP?
On the other hand EQL boxes look much sturdier than P4000 ones and it's a single box w/ 48 spindles... that's higher individual IOPS and no need to waste bandwidth and storage space on replication (though they are pretty much even at ~7xTB usable if I count ~30% replication overhead on HP.)
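The usable-space tradeoff here (per-node RAID plus cross-node replication on the scale-out side vs. local RAID only on a single chassis) can be sketched like this; the overhead fractions are my assumptions for illustration, not vendor quotes:

```python
# Usable-space comparison: scale-out nodes that replicate across boxes
# vs. a single large array. Overhead fractions are assumptions.

def scale_out_usable(raw_tb, raid_overhead=0.20, network_repl=0.30):
    """Local RAID per node first, then cross-node replication overhead."""
    return raw_tb * (1 - raid_overhead) * (1 - network_repl)

def single_box_usable(raw_tb, raid_overhead=0.20):
    """One chassis: only local RAID overhead, no network replication."""
    return raw_tb * (1 - raid_overhead)

print(f"{scale_out_usable(120):.1f}TB usable from 120TB raw across replicating nodes")
print(f"{single_box_usable(96):.1f}TB usable from one 48-spindle box of 2TB drives")
```

With ~30% replication overhead the two quotes end up in the same ~70TB ballpark, which is the "pretty much even" observation above.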
Also they just bought Compellent and they are readying their Ocarina-enabled firmware - or perhaps an appliance (remember their purchase of Exanet?) - which means pretty good future upgrades for their entire storage line.
Dell can also throw in a KACE box which would make me soften up on the lateness of new NAS heads - messy return and exchange when new one is out - and they seem to be willing to sell and support Commvault without buying a dedicated backup box.
They have a pretty cool 24-port RJ45 10GbE switch but no real competitor to a pair of 5406s + modules.

I must add that I tested a Dell PS6010XV demo unit for two weeks and I was impressed by the simplicity - I'm also testing a VSA in Hyper-V and while, as I understand, SAN/iQ 9.0 introduced a lot of positive changes it still seems much more convoluted/complex to manage than SANHQ from Dell, even when it comes to basics (how the hell do you configure an SSL mail account for reporting?)

I welcome comments from all admins and architects, with or without experience etc - thanks!

Note: it's going to be cross-posted in another server-related forum.

szlevi fucked around with this message at 22:55 on Jan 1, 2011


szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Nomex posted:

You should take a look at the 3Par T series arrays. They were recently purchased by HP, so they would still fit into your single vendor requirement. You can mix SSD, FC and SATA disks, they can be equipped to do both FC and iSCSI and they do autonomic storage tiering.

True but I already have frame-based storage, my DDN S2A9xx-series system, and I'm trying to avoid lining up another one. Also I don't think they would sell it at the same price they sell their 120TB scale-out P4500... :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


three posted:

Equallogic also works this way.

Well, it's one RAID mode per box, true, but you can switch off the autopilot and map out your LUNs/volumes manually IIRC - I don't think P4000-series boxes allow anything like that...?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Nebulis01 posted:

Not the P4000 series. I wish it did, but apparently the P6x00 series does.

AFAIK all EQL boxes, regardless of generation or model numbers, run exactly the same firmware, same features, same everything - are you sure you are not in some manual mode?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Nebulis01 posted:

Yes, the box I'm talking about is the EqualLogic PS4000X (sorry about the make/model confusion)

It's limited to 2 members in a group, 256 Volumes, 2048 Snapshots, 128 Snapshots/volume, 32 volumes for replication, and 128 replicas per volume. The PS6x00 series has a substantial increase in all of those metrics.

Correct, but these are all artificial scaling limits imposed by Dell, as three correctly noted above - there's no difference at the RAID/LUN level etc. One RAID type per box, and volumes are handled internally (unless you turn this auto mode off manually.)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

Has anyone here had any experience with HP's VSA(virtual storage appliance)? We're probably going to move forward with a basic P4300 7.2TB SAN. The guy who is the admin for the local vo-tech school has the same unit, but has two and replicates data from the active to the passive each night at a DR site. I'd love to do that but it was hard enough to convince everyone on one SAN, 2 would be close to impossible.

He uses three because you need at least 3 nodes for HA - in his case the VSA is the acting FOM, taking care of the quorum (rerouting all volumes to one node when the other one goes down.)
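The quorum rule behind the three-node requirement is simple majority voting; a minimal sketch (the function name is mine, not anything from SAN/iQ):

```python
# Why a third voter (the FOM) matters: the cluster keeps serving
# volumes only while a strict majority of voters is reachable.

def has_quorum(alive_voters, total_voters):
    return alive_voters > total_voters / 2

# Two storage nodes only: lose one, and the survivor is not a majority.
print(has_quorum(1, 2))  # False - volumes would go offline

# Two nodes plus a FOM tie-breaker: lose a node, 2 of 3 still vote.
print(has_quorum(2, 3))  # True - the surviving node keeps serving
```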

quote:

VSA and an HP server with a bunch of storage seems like it would be a cheaper option. The active SAN will host VMWare data stores and half a dozen iScsi LUNS for non VMWare HA cluster resources.

Not sure how important HA is for you but again, you cannot get HA working with two nodes - you will need a third node, at least with a virtual FOM running.

quote:

I'm not looking too much at performance for a DR site, because in the case of a disaster, we're only looking to bring certain services/servers online, not everything.

Based on my limited testing of the VSA it should work fine for you - it's a fully working P4000-series node in virtual form, except it's limited in terms of capacity (10TB per VSA, I believe) and sizing/striping (2TB per volume / X disks per RAID5) etc.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

Sorry, I wasn't clear with my post. He has two units all together. One active, and one that has the data replicated to it once a day. It's not a true HA but rather an offsite copy of the LUNs(volumes in Lefthand-ese I believe) in the event of a true disaster or data loss. This is also accepting that we'd lose whatever data was created or changed between the catastrophic data loss and the last replication.

I see - FYI it's not only losing data but staying offline as well if you don't have HA...

quote:

The piece I really need to dig into is how I would then make that DR site live.

Errr... it's called HA, you might have heard of it somewhere... ;)

quote:

I'm thinking a third VMWare host would be at the DR site and we'd bring the most critical virtual machines online. I work at a K-12 school district so downtime doesn't mean millions of dollars in business being lost every second of downtime. Automatic fail over is not a concern. However, having a relatively straight forward and documented plan to cut over to a DR site is what I'm looking for, even if that cut over takes a couple of hours to get going.

It's not just the hour but the PITA of re-linking everything, only to link back once the primary site is up again - again, I think your best bet is to set up another VM running FOM (it's free AFAIR) so you can have HA between your P4300 and the remote VSA... I'm not an actual P4000-series user (yet) so make sure you talk to someone from HP about this setup.

quote:

Thanks for the info though. I wouldn't expect any single volume to be larger than 2TB or combined, be more than 10TB, seeing as we're only going with 7.2TB starter kit anyway on the primary SAN.

To be sure check my facts on ITRC, I'm relying on vague memories...

quote:

Just because, here's a crappy visio diagram of what I'm thinking of:



What's really sad is that Visio 2010 still sports the same god-awful graphics...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


adorai posted:

HA and DR are completely different concepts.

Exactly, that's why I said HA - apparently you got confused by the name "DR site"...

...or perhaps missed the most important part of his post, namely

The piece I really need to dig into is how I would then make that DR site live.

quote:

If you want an easy, drop-in, push one button for DR solution in a VMware environment with all VMDKs you would probably use SRM. It's like $3k a proc.

Did you even read his post? I doubt it - he has no money and he wants to make the DR site live, not recover from it...

...you know, HA vs DR.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


three posted:

It's Per-VM now, I believe.

That's the killer part - he barely has money for one VSA ($3k or less.)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

Thanks for the feedback. Manual intervention is fine assuming it's something that can be properly documented and done by a trained person, again, even if it takes an hour or two to get up and running.

There would have to be a real disaster like the building burning down or being pulled into the gamma quadrant through a wormhole for me to go to the redundant box, which is why I'm looking at VSA as a cheaper alternative over a second P4300 which may never have to be put into production.

That's why I am saying that your P4300 + VSA + (free) FOM gives you remote HA, without even a push of a button. I don't run VMware so I cannot help you there - I'd rather spend my money on better/safer hardware, better backup and the like instead of giving it to EMC for things that are free in Hyper-V or (some) even in XenServer.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Misogynist posted:

Sorry, but SRM features aren't free in any product, unless you think all SRM does is send a "power up" command to a pile of VMs at another site.

Err,

1. FOM is exactly for that, right, not to manage anything....
2. ...maybe you're confusing it with P4000 CMC?
3. Wait, that's free too...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


adorai posted:

he wants to know how to initiate a dr failover, making his dr site live in the event of a disaster.

Which is exactly HA, with a remote node, right.
DR would be if he would recover from it, as in Disaster Recovery.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Mausi posted:

Pray tell, sir goon, of these wondrous free availability features of Hyper-V which do not exist mostly in Xen and certainly in VMware?

They exist but not for free. :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

After reading it in that context, it makes sense now.

The local inside rep at HP refuses to call me back despite leaving multiple messages. I think it would be worthwhile looking to an outside source to help me out with this, especially if I can't get anything from HP.

HP does this all the time; I now only work with the channel - they're better at literally everything, from pushing down prices to getting back to me in time and with what I wanted (instead of something else.)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

I dropped an email to a vmware rep who has been helpful in the past. I'll give my CDWG rep a shout too. It's funny you mention that because I've read people say to stay away from CDW in these cases. However, I'm not in the market for a huge rear end fibre channel san spending a quarter of a million dollars which might have been the context of those posts.

Thanks for suggestion.

Try Insight or PC Mall, they both work great for me.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Syano posted:

Oh thats pretty killer. Cool this is what I needed to know. Now to make a purchase!

Yeah, that's the cool part - the downside is you can lose a LOT of storage space as well as bandwidth mirroring things between boxes...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


qutius posted:

MPIO protects against more than just a controller failing and the partner taking over disk ownership, which is more of a high availability thing.

MPIO will protect against any failure in the fabric - target/initiator port, cable, switch, switch port, etc. Some of these failures would be protected against by HA too, but MPIO is needed at the driver level too.

But maybe I'm splitting hairs...

Well, MPIO can be configured several ways... generally speaking it's typically used not for redundancy but for better bandwidth utilization over multiple ports - and if all your ports are on a single dual- or quad-port eth card then it cannot protect you from a card failure anyway.
Also the vendor's DSM typically provides LUN/volume location awareness as well - on a larger network that makes a difference.
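The two uses of MPIO being argued about here can be sketched as two path-selection policies; this is illustrative only - real vendor DSMs implement this in the driver stack, not in application code:

```python
# Two MPIO-style path-selection policies: round-robin for bandwidth
# aggregation vs. failover-only for pure redundancy. Illustrative sketch.
from itertools import cycle

def round_robin(paths):
    """Spread successive I/Os across all healthy paths for throughput."""
    healthy = cycle([p for p in paths if p["up"]])
    return lambda: next(healthy)["name"]

def failover_only(paths):
    """Always use the first healthy path; the rest are cold standbys."""
    return lambda: next(p["name"] for p in paths if p["up"])

# Both ports here sit on one physical card: if that card dies, every
# path goes down at once - no policy saves you, which is the point above.
paths = [{"name": "nic0-p1", "up": True}, {"name": "nic0-p2", "up": True}]
pick = round_robin(paths)
print([pick() for _ in range(4)])   # alternates nic0-p1, nic0-p2, ...
print(failover_only(paths)())       # always nic0-p1 while it is up
```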

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

I started to spec up a server to run the VSA software on. We're a Dell server shop, so I figured I'd stick with them. I've asked for a quote for an R710 with embedded ESXi 4.1, the free one. I'm looking at doing a simple RAID5 SATA 7.2k array. Since this would be DR and only the most critical services would go live, I'm guessing using SATA isn't a horrible choice in this case. No oracle or mssql except for a small financial package using mssql, which has 10 users at the most at any one time. GroupWise(no laughing) and all of our Novell file servers would be brought online too. 32GB of RAM. Anyone see anything completely wrong with that hardware setup?

take a look at those sweet-priced R715s - they come with 12-core Opterons, for the same price you can get 24 core in a node... and R815s are only 2-3G away and up to 2.2GHz they come with 3rd and 4th CPU included for free, making it 48 cores total per node. :)

quote:

Also, our Dell rep has a bad habit of ignoring what you write in an email. I gave him the equote# and asked to change the support options and to NOT give me the promotion pricing he mentioned. So, he gives me a quote with the wrong support option and with the promotion pricing.

Try a channel partner, seriously - Dell often acts weird until a reseller shows up.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Xenomorph posted:

Pretty basic I guess, but I just ordered a Dell NX3000 NAS, loaded with 2TB drives.

If you can wait, new NX models, sporting the new Storage Server 2008 R2 etc, are coming in 2-3 weeks.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Misogynist posted:

Can you stop spewing vendor acronyms and say something?

One expects certain knowledge from a party engaged in a discussion/argument about the #2 most popular iSCSI SAN system, in a topic called Enterprise Storage Megathread...


...but hey, ask and you shall receive:

FOM: Failover Manager

P4000 CMC: HP StorageWorks SAN/iQ 9.0 Centralized Management Console

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Syano posted:

Looks to me like he said something 9 times in a row... though I didnt read any of it because its annoying as hell...

Too bad - you might even have learned something from them in the end...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


1000101 posted:

I picked this post because MPIO typically is for redundancy in conjunction with distributing load over a lot of front end ports on an array. Even if you're using iSCSI with a quad port gigabit card and you lose half your switches you'll still be able to get traffic to your storage. Every IP storage design we've ever done has involved a pair of physical switches just to provide redundancy.

You're right - hey, even I designed our system the same way - but I think you're forgetting the fact that these iSCSI boxes are the low-end SAN systems, mainly bought by SMBs; you can argue about it but almost every time I talked to someone about MPIO they said the only reason they use it is the higher throughput and no, they didn't have a second switch...

quote:

I have no idea what FOM is; but does it also send the correct instructions to VMware to attach the volumes? Will it also allow you to automatically re-IP your virtual machines? Does it handle orchestration with external dependencies as well as reporting/event notification? Does it also provide network fencing and set up things like a split off 3rd copy to avoid interrupting production processing during DR testing? Does it handle priorities and virtual machine sequencing? Does it integrate with DRS and DPM?

Well, that's the whole point: some of it I'd think you do at the SAN level - eg detecting that you need to redirect everything to the remote synced box - and some you will do in your virtual environment (cleaning up after the failback etc.) I don't run VMware on Lefthand so I'm not the best one to argue the details, but I've read about it enough in the past few months to know what it's supposed to do...

quote:

HA is typically referred to for localized failures. i.e. one of my server's motherboards just died on me and I want to bring my applications up from that failure quickly.

Well, that's kind of a moot point to argue when Lefthand's big selling point is the remote sync option (well, up to ~5ms latency between sites) - they do support HA over remote links (and so does almost every bigger SAN vendor, if I remember correctly.)

quote:

When we talk in terms of DR, we typically speak in one of two things:

1. Someone just caused a major data loss
2. My data center is a smoking hole in the ground

HA does NOT protect you against #1 and you're still going to tape. That said, in the event of #1 you're not going to necessarily do a site failover (unless you're like one of my customers who had a major security breach.)

Correct, but HA has never been a protection against data corruption - I have never argued that. Data corruption or loss is where your carefully designed snapshot rotation should come in: you recover almost immediately.
OTOH, if I think about it, a lagged async remote option might even be enough for corruption issues... ;)

quote:

In the event of #2; we're talking about moving all of the business processes and applications to some other location which goes far above and beyond typical HA.

Not anymore. Almost every SAN vendor offers some sort of remote sync with failover - I'd consider these HA, but it's true that the lines are blurring more and more.

quote:

Which features that are worth it are you talking about? I guess you could say a centralized management console (which I believe actually is still free with Xen, as well as live migration.)

The very feature we're talking about here: failover clustering. :)

quote:

Also, for the love of whatever's important to you, consolidate your posts into one. There's no reason to reply to 6 different people separately.

I think it's more polite, especially when I wrote about different things to each person - this way they don't need to hunt down my reply to them...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


three posted:

:what: Plenty of large businesses use iSCSI.

Um, sure... your point is? :allears:

quote:

I think you need to learn to stop when you're wrong.

I think you need to learn to read first before you post nonsensical replies...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Mausi posted:

I suspect he's only talking about HP Lefthand.

Correct.

quote:

Well I hope he is, because that huge a generalisation would be pretty loving stupid.

Well, talking about generalisation after someone cited empirical evidence IS pretty fuckin stupid, y'know.

quote:

And he'd still be wrong, but whatever.

Even if I put aside the fact that you're not making any argument - trolling? - I'm still all ears how someone's experience can be wrong... :allears:

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Syano posted:

I have learned something. Ive learned that you think you know a lot about SANs and that you are a TERRIBLE poster. iSCSI only used by SMBs? Seriously?


See, I told you: read the posts before you make any more embarrassingly stupid posts - you didn't and now you really look like an idiot with this post...

...someone has issues, it seems. :raise:

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


InferiorWang posted:

Unfortunately, the VMWare bundle we're going to go with only allows licenses for up to 6 cores per host.

EMC will rip you off at every turn, that's for sure - I didn't even include them in my RFP list, and I got pretty nasty replies from their people in another forum when I said their pricing is absolutely ridiculous at around $100-200k, even if it is as complete as VMware when it comes to virtualization.
Of course he came back arguing the usual TCO mantra, but unless they give me a written statement that 3-5 years from now, when they push me for a forklift upgrade, I will get all the licenses transferred, I will never consider them at around $150k, that's for sure.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Mausi posted:

Well unless your definition of SMB scales up to state government then your assertion draws from limited experience. Lefthand kit is used, in my experience, in both state government of some countries as well as enterprises, albeit outside the core datacentre.

I'm not following you - I said SMB; what does my SMB experience have to do with gov...?

quote:

I'm not certain about EMC, but it's basically a constitutional guarantee from VMware that, if you buy regular licensing and it is in support for a defined (and very generous) window around the release of a new version of ESX, you will be given the gratis upgrade. This happened with 2.x to 3, and happened again with 3.x to 4. There is absolutely no indication internally that this will change from 4.x to 5.
My experience with licensing from EMC is that they will drop their pants on the purchase price, then make it all back on support & maintenance.

EMC will rip you off with the "pay-as-you-go" nonsense vs all-inclusive and cheaper iSCSI licenses, that was my point - let alone having to repurchase all of them when you bring in a new generation of boxes (well, at least EQL does allow mixing and matching all generations.)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


FISHMANPET posted:

Is being a really lovely poster a bannable offense? Because I'm loving tired of szlevi making GBS threads up this thread with his inability to copy/paste.

It seems some think the way to show how tough they are is to attack someone - let me reply in the same manner...


...do you really think anyone gives a sh!t about your whining? Report me for properly using the quote button or not, but stop whining, kid.

(USER WAS PUT ON PROBATION FOR THIS POST)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


1000101 posted:

This is longer than I intended it to be.


Maybe in the low end market where sales guys let customers make bad decisions this is true.

I have nothing to do with sales. I've worked for the same company for years, in a very specialized market (high-end medical visualization/3D) but I do help SMBs from time to time (typically as a favor, not as a paid consultant, mind you.)
Almost every time the first thing I suggest is to make switches and network connections redundant - because in almost every case they are not. Again, it's just empirical evidence but a very common problem nowadays, I think.

quote:

I've been hard pressed to find a lot of small businesses that actually need to have more than ~1gbp/s of bandwidth to the storage. I've been to plenty of shops running 1500+ user exchange databases over a single gigabit iSCSI link with a second link strictly for failover.

No offense but that's the typical problem with all "regular" storage architects: they can only think in Exchange/OLTP/SQL/CRM/SAP terms.

Let's step back for a sec: you know what most SMBs say? The "network is slow" - which in reality means they are not happy with file I/O speeds. Your 1Gb/s is 125MB/s theoretical - which is practically nothing when you have 10-20 people working with files, especially when they use some older CIFS server (eg Windows 2003 or R2).
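The arithmetic behind that claim, as a quick sketch - the 80% protocol-efficiency figure is my assumption; real CIFS efficiency on 2003-era servers was often worse:

```python
# Shared file-server link math behind the "network is slow" complaint.
# The protocol efficiency factor is a rough assumption.

def per_user_mbs(link_gbps, users, efficiency=0.8):
    """Rough MB/s per user on a shared file-server uplink."""
    link_mbs = link_gbps * 1000 / 8   # 1 Gb/s is 125 MB/s theoretical
    return link_mbs * efficiency / users

print(f"GbE, 15 users:   {per_user_mbs(1, 15):.1f} MB/s each")
print(f"10GbE, 15 users: {per_user_mbs(10, 15):.1f} MB/s each")
```

Single-digit MB/s per user on GbE is exactly the range where people start blaming "the network" when they open large files.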

quote:

What you're seeing is not the norm in any sane organization.

What you claim as "sane" organization has nothing to do with any average SMB, that's for sure.

quote:

I guess the exception is in dev/QA environments. I'll give you that!

Funny, our R&D dept is a very quirky environment, that's for sure - in the next couple of months I have to upgrade them to 10GbE to make sure they can develop/validate tools for a workflow requiring 600-700MB/s sustained speed (essentially all high-rez volumetric dataset jobs.)

quote:

Redirecting data to a remotely synced up site doesn't provide you everything. If I move 2000 or 20 virtual machines from one physical location to another physical location then there are good odds I have a shitload more work to do than just moving the systems.

True but that's what these vendors promise when they market their sync replication and failover.

quote:

The parts you do at the SAN level would be getting the data offsite. Once the data is at the new site you have a host of questions to answer. Stuff like:

Am I using spanned VLANs? If not how am I changing my IP addresses?
How are my users going to access the servers now?
Since I only have so much disk performance at my DR site, how do I prioritize what applications I bring online first?
What about data that I'm not replicating that needs to be restored from tape/VTL?
Do I need to procure additional hardware?
...

What about testing all of this without impacting production OR replication?

This is synchronous replication and can really only happen within about 60-75 miles. Hardly appropriate for a good portion of disaster recovery scenarios people typically plan for (hurricanes, earthquakes, extended power failures.)

Yes I can use SRDF/S or VPLEX and move my whole datacenter about an hour's drive away. Is this sufficient for disaster recovery planning and execution? Probably not if, say, hurricane Katrina comes and blows through your town and knocks out power in a couple hundred mile radius.

I'm not sure why you are asking me, but here's how I understand their claims: your site A goes down but your FOM redirects everything to your site B, sync'd all the time, without any of your VMs or file shares noticing (besides being a bit slower.) This works in tandem with your hypervisor and its capabilities, of course.
Heck, even Equallogic supports failover, albeit it won't be completely sync'd (they only do async box-to-box replication.)
Am I missing something?

quote:

Are you a sales guy?

Not at all.

quote:

I said that there are a lot more components to DR than just "failing over" as a generic catch-all.

Depending on async replication to protect you against data corruption issues is insane. "Oh poo poo I've got about 30 seconds to pull the plug before it goes to the remote site!" There's no "might even" about it. Depending on that is a good way to cause a major data loss.

No one does this.

I disagree.

quote:

Also snapshots shouldn't be the only component of your disaster recovery plan.

Never said that - I said it's good against a crazy sysadmin or idiotic user, that's it.

Will cont, have to pick up my daughter. :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Xenomorph posted:

Any more info on this? At first I didn't think it would be a problem, but after using 2008 R2 on our Domain Controller, using regular 2008 feels a little "off" (basically going from Win7 back to Vista).

Besides the slightly improved interface, what advantages does Storage Server 2008 R2 offer? SMB 2.1? How much better is that than 2.0?

I'm not even familiar with the "Storage Server" product. I saw something to enable Single Instance Store (de-duplication) on the drive, which I'm guessing isn't in the regular Server products.
I'm tempted to just wipe Storage Server 2008 and install Server 2008 R2. We get Windows licenses cheap, and I'm trying to figure out if we'd be happier with the overall improvements in Server 2008 R2 compared to the NAS features we may not use in Storage Server 2008.

Sorry for the late reply, I was busy receiving and building my new cabinet, full of new systems etc. :)

Yes, WSS 2008 R2 is in the pipe, I know it for a fact, but it seems it won't be out before the end of Q1 - why, don't ask me; Dell is rather tight-lipped about it for some mysterious reason (it's already out, there's nothing secret about it - HP introduced their G2 X3000-line last November.)
I have a theory though: to date Dell has bought (storage companies) EqualLogic, Exanet, Ocarina and recently Compellent - how many can you point out in Dell's current portfolio? Right, only EqualLogic (the Compellent acquisition is still under way and they will need another year to fully integrate it into some unified storage lineup; they are still selling Compellent-sourced systems regardless of Dell's listing them as theirs - you cannot configure them, no info up there etc.)

Ocarina's deduping is coming, we know that; they told us it's going to take up to a year before it shows up - couldn't it fit in the EQL firmware space? Can't the controller run it? - but they are totally silent about the Exanet IP they bought now more than a year ago... it was a scale-out, clustered NAS solution, exactly the product Dell is sorely missing (à la SONAS, X9000, VNX/Isilon etc) and also a product that would certainly eat into the market share of NX units running Storage Server 2008 R2 clusters.
Coincidence?
I doubt it but time will tell.

As for WSS2008R2: the new SMB 2.1 is a lot faster, SIS is there, yes, FCI is better (you might know it from Server 2008 R2), it includes iSCSI Target v3.3 and, like every previous Storage Server edition, it includes unlimited CALs right out of the box (key selling points in many cases.)
If you're like me, planning to run two in a cluster, then it's important to remember that R2 clustering is a lot easier now - and you still get all your Server 2008 R2 features.

Licensing aside, I'd never wipe Storage Server 2008 R2 to install Server 2008 R2, just like I wouldn't hesitate for a second to wipe Storage Server 2003 R2 and install Server 2008 R2...

...but Storage Server 2008 vs Server 2008R2? Tough call... are they both Enterprise and is SS2008 x64?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


paperchaseguy posted:

As a very rough rule of thumb, I use 120 IOPS/10k disk, 180 IOPS/15k disk, 60 IOPS/5k SATA. But yes, any major vendor will help you size it if you can collect some iostat data or give some good projections.

Same here for rough estimates w/ 80 for SATA 7.2k or 100 for SAS 7.2k added...
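To show how those per-disk numbers turn into a spindle count, here's a quick back-of-the-envelope sketch in Python. The per-disk IOPS figures are the rule-of-thumb values from the posts above; the RAID write penalties (2 for RAID10, 4 for RAID5, 6 for RAID6) are the usual textbook values, not anything vendor-blessed:

```python
# Rough spindle-count sizing: backend IOPS = reads + writes * RAID penalty,
# then divide by the per-disk rule-of-thumb figure and round up.
import math

IOPS_PER_DISK = {"sata_5k": 60, "sata_7k": 80, "sas_7k": 100,
                 "sas_10k": 120, "sas_15k": 180}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def disks_needed(read_iops, write_iops, disk_type, raid_level):
    backend = read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]
    return math.ceil(backend / IOPS_PER_DISK[disk_type])

# e.g. a 3000 read + 1000 write IOPS workload on 15k SAS in RAID10:
print(disks_needed(3000, 1000, "sas_15k", "raid10"))  # 28 spindles
```

Any major vendor's sizing tool does essentially this, just fed with real iostat data instead of guesses.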

szlevi fucked around with this message at 21:59 on Feb 25, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Nebulis01 posted:

WSS2008 is available in x86 or x64. WSS2008R2 is available only on x64.

Yes, I know that - I'm asking what they have up and running over there, WSS2008 x86 or x64...

quote:

Unless you really need the iSCSI or De-duplication features,

...or FCI to make policy-based automated data tiering or to have unlimited licensing etc etc...

quote:

Server 2008R2 would serve you quite well.

Right, except I'm having a problem figuring out how much advantage Server 2008 R2 gives you (sans the somewhat better SMB). :)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Moving data at the sub-LUN vs the file-system level is a good question. Personally I'd go with the file-system level instead of sub-LUN, because sub-LUN tiering has no clue about the data it's moving - but FS-level tiering at least doubles the latency, and that's not always tolerable. Recently I met with F5 and saw a demo of their appliance-based ARX platform; I really liked the granularity and the added latency wasn't bad at all, but the cost is just crazy, I don't see it ever making it into the mainstream... it was much more expensive than EMC's DiskXtender, which is already in the ripoff-level price range.
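For the curious: the file-level approach is basically scriptable with a dumb age-based sweep. Here's a minimal Python sketch of the idea - the paths, the 90-day threshold and the move-without-leaving-a-stub behavior are all made-up simplifications on my part; real FCI/ARX setups are policy-driven and leave stubs or links behind:

```python
# Minimal age-based file tiering sweep: walk tier1, move anything not
# accessed within the age limit to the same relative path on tier2.
import os
import shutil
import time

AGE_LIMIT = 90 * 24 * 3600  # demote files untouched for 90 days (arbitrary)

def demote_cold_files(tier1_root, tier2_root, age_limit=AGE_LIMIT):
    now = time.time()
    moved = []
    for dirpath, _dirnames, filenames in os.walk(tier1_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if now - os.stat(src).st_atime > age_limit:
                rel = os.path.relpath(src, tier1_root)
                dst = os.path.join(tier2_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # a real setup would leave a stub/symlink here
                moved.append(rel)
    return moved
```

Obviously this ignores open handles, ACL preservation and redirecting clients to the new location - that's exactly the part the expensive appliances charge for.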

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


ZombieReagan posted:

I just got a quote earlier today for a FAS3210 with 2x24 600GB 15K RPM SAS drives, 1x24 2TB SATA, in HA, with NFS,CIFS,iSCSI,FC and 2 10GbE cards for $110K. That's without SnapMirror though, but you don't really need that unless you've got another filer to mirror to. It's less than 25TB usable in SAS disks, but it gives you an idea.

Not exactly a high-end unit, but it's not bad.

It depends. I don't know NetApp, but my 48x2TB EQL PS6510E (~42TB RAID10/70TB RAID6) was around $80k in Jan (end of last Q at Dell) as part of a package, and I bet you can get it w/ the FS7500 HA NAS cluster under $100k...
I'd say if you know how to play your cards (VAR etc) at the end of this quarter (in a week or two), you can even get a PS6010XV (perhaps even XVS) + PS6010E + FS7500 setup for around $100k or so...
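If you wonder where that ~42TB RAID10 figure comes from, it roughly falls out of the drive count, hot spares, mirroring overhead and the TB-vs-TiB conversion. The spare count and overhead below are my assumptions, not EQL's exact layout; the RAID6 figure additionally depends on the array's parity layout and firmware reserve, so I'm only sketching RAID10:

```python
# Back-of-the-envelope usable capacity for a 48 x 2TB RAID10 array.
TB = 1000**4      # drives are sold in decimal terabytes
TiB = 1024**4     # the OS reports binary terabytes

drives, spares, drive_size = 48, 2, 2 * TB
data_drives = (drives - spares) // 2        # RAID10 mirroring halves capacity
usable_tib = data_drives * drive_size / TiB
print(round(usable_tib))                    # ~42
```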

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Drighton posted:

Yet more problems! I think we've isolated the issue to the Broadcom BCM57711 cards. While using the software initiators in either Microsoft or vSphere we can achieve 700MB/s up to 1.2GB/s. But when we try using the Offload Engine our speeds drop to 10MB/s up to 100MB/s with the latest driver update.

This is on a simpler environment, with 1 switch and 1 controller, but the symptoms are consistent on all 10 of these cards. We've confirmed our SAN configuration is correct with their techs, and we've stumped the VMware support guys - they are doing further research. Dell is now doing their required troubleshooting before we can get replacements, and I've even hit up Broadcom for support (no reply yet).

Does anything stand out to anyone here? The last troubleshooting step I can try is to load Windows directly on one of these machines and test the performance that way. I believe this is also the only way to update the firmware on these cards (which I've found on Dell's website, but not Broadcom's :confused: ).

We've also looked into the Intel cards - is my understanding correct that they do not have an equivalent iSCSI offload engine? From what I've read it looks like they just reduce the impact the software initiator has on the processors.

E: It never fails. When I post about it, I get the answer. Flow Control in ESXi is hidden very well. Throw the command to enable it: instant fix.

Ah, don't even start me with BCM57711...

Broadcom IS JUNK. Seriously, I run dozens of BCM57711 cards in my servers, two per server, and ANY OFFLOAD BREAKS SOMETHING - different things with different drivers, but they all screw up your speed and/or connectivity (which is beyond ridiculous if you think about it: what's the point of offloading?)

I used to buy only BCM-based stuff to match my switches and onboard NICs, but NEVER AGAIN; at least twice we spent weeks figuring out these issues and in the end it was always a junk Broadcom driver/HBA issue...

...never use iSOE and also avoid TOE on the BCM57711; stick to all software-based iSCSI connections - BCM's offload is pure, oozing sh!t.

FYI, I recently bought some dual-port Intel ones for the price of a BCM5710 (the single-port version) and they work like they're supposed to. From now on it's Intel for me.

PS: did I mention the firmware update last summer that wiped out all settings on all adapters...? That was 'fun' too, thanks to Broadcom.

szlevi fucked around with this message at 16:21 on Oct 14, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


three posted:

We just got a Compellent SAN and have been tinkering with it. Compellent's knowledge base is pretty good, but are there any other good resources?

What kind of config?

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


three posted:

Technically, Compellent controllers are built using SuperMicro hardware (for now).

Correct, and I was told that it is likely to stay this way until next summer, when their new controllers are due.
That being said, there's not much difference between using a Dell or an SM server; failure rates are pretty much the same (they are all made in the same country, heh) and as long as you get CoPilot premium support it's a moot point anyway. BTW I was also told that Dell will use CoPilot as the support model across its entire storage business (just as they are busy unifying web-based management around EqualLogic's web UI standards.)

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


ehzorg posted:

Hey good storage peoples, advise me please.

I've recently been hired as the first and only dedicated IT support person in a medical R&D firm of about 100 employees in Switzerland. The current corporate mass storage needs are met by about a hundred external USB hard drives, mostly chained up to everybody's workstations by USB hubs. Oh, and they've got a ~4TB Linux-based fileserver managed by externally contracted IT. It's a loving mess. As the IT department, I'm expected to make it all better.

I'd like to push for a centralized storage solution. The problem is that I'm pretty far out of my experience envelope here. I'll explain our situation, usage characteristics, and desired features - if you could point me in the right direction, I'd be eternally grateful.

We're going to be storing mostly simulation results - no databases or virtual machines. These are huge files created and accessed by typically not more than 10 people at a time over 1Gb ethernet via SMB (so, high bandwidth - low IOPS... right?) We currently have at least 30TB of poo-poo that needs storing, but the ability to grow to 100TB+ within the next 3-5 years is probably needed. Would be nice to also have this data backed up to another location in house, but right now any kind of fault-tolerance is better than what we've got.

Budget concerns.... well, since up until now management hasn't been convinced about the necessity of upgrading past 2TB USB external hard drives, this may be a concern. I'm pretty sure I can get 15k eurobux approved, possibly 25k if I can make a solid argument and pretty powerpoint slides. More than that is unlikely to be approved.

What should I be looking at? Gigantic prosumer level NAS devices (Synology / QNAP)? Small business level storage (NetApp / Dell)?

Forget all the consumer crap.
Give a call to Dell and ask about their new (came out around May-June) NX3500 clustered NAS, it should be within your budget: http://www.dell.com/ch/unternehmen/p/powervault-nx3500/pd

AFAIK this is two Windows Storage Server 2008 R2 Enterprise nodes in an HA cluster, sharing local SAS storage and giving you unlimited client access licenses (Storage Server.) There used to be another NX3500 in a Storage Server 2008 R2 flavor but apparently Dell stopped offering it... anyhow, this NX3500 utilizes Dell's scalable file system formerly known as Exanet FS. This thing scales up to ~500TB or so.

Make sure your quote comes with everything redundant (power supplies etc) plus proper same-day support, possibly 4-hour, for at least 3 years (5 years would be even better.)

As for growing to 100TB+ in a few years... well, it would be better to start with an EqualLogic SAN + clustered NAS, but that's unlikely to fit in your budget.

Not sure how it is in Europe, but here in the US I usually work with a reseller because they get a 50% or greater discount off list price.

Just my $0.02...

szlevi fucked around with this message at 18:53 on Dec 18, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Vanilla posted:


The EMC and Dell relationship is dead. Not surprised they did that with the quote; they did it to all their Clariion customers to get them to move over to their platforms... and mostly failed. The latest IDC figures have EMC going up and Dell going down, trying hard to stay out of the 'others' category despite their best efforts.

Ehh, I just read a month or so ago that in disk storage systems Dell is right behind the EMC-IBM-HP triumvirate, ahead of NetApp. Also, last time I checked Dell pretty much ruled the fastest-growing segment, the iSCSI storage market, with EqualLogic.
If you strictly mean higher-end storage then yes, Dell lost its EMC sales but got Compellent, though obviously it will take a few years until they start generating revenue similar to what Dell|EMC was making.

(FYI I have nothing to do with Dell, I'm just an EQL user who also likes Compellent's approach.)

szlevi fucked around with this message at 15:20 on Dec 18, 2011

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]


Bluecobra posted:

I disagree. My definition of a whitebox is to use your own case or a barebones kit from a place like Newegg to make a desktop or server. If Compellent chose to use a standard Dell/HP/IBM server instead, it would be miles ahead in build quality. Have you ever had the pleasure of racking Compellent gear? They have to be some of the worst rail kits ever, thanks to Supermicro. The disk enclosure rail kits don't even come assembled. It took me about 30 minutes to get just one controller rack mounted. Compare that with every other major vendor, where racking shouldn't take more than a couple of minutes thanks to rail kits that snap into the square holes.

This is nonsense. What do rails have to do with the quality/longevity of server parts?

FYI I like my Dell servers, but out of the 4-5 I bought back in January and installed by March there is NOT ONE that hasn't given me at least one part failure.
My EqualLogic box went through SIX CONTROLLERS (the unit has two) in its first 4 weeks of use... since August we've had to replace 6 drives total (the unit has 48 drives), and we've even had a double-disk failure (RAID6, I praise your name.)
Yeah, I know: a bad batch on the controllers, the box has 2 hot spares anyway and support is top-notch, we always get a replacement drive within 2-3 hours - but still.
And it's not just Dell: my new $30k+ 8212zl HP ProCurve, full of v2 modules, came with ~100 gigabit ports where the aluminum around the RJ45 port openings on the modules was already coming off the chassis (gigabit-only; the 8-port 10Gb modules are solid.)

Generally speaking, my experience is that overall part quality has gone downhill, and not just because we now use SATA in enterprise units (though the Constellation is sold as enterprise SATA) but because literally everything is made in China and further east, from the cheapest materials, by the lowest-paid labor available.
Sure, we all buy premium support for all our server room gear, and Dell's ProSupport now actually matches IBM's (both being well ahead of HP), but it's a recurring adrenaline shot every time I check my phone when I hear the sound of a new message arriving in my work email...
