Mausi
Apr 11, 2006



conntrack posted:

If you keep three months of one snap each day, writing one block will result in 90 writes per block? I'm sure that database will be fast for updates.

Edit: I guess that depends on how smart the software is. Classical snaps would turn to poo poo.

Something like a NetApp maintains a hash table of where every logical block is physically located for any given level of the snapshot. True copy-on-write (strictly, redirect-on-write) simply allocates an unwritten block, writes the data there, and updates the hash table to point at the new data; reads also go through the hash table.
Very minor performance overhead, and brilliant for all sorts of things.
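
To make that concrete, here's a toy sketch of the idea (Python, purely illustrative - the class and structures are invented for the example, not NetApp's actual implementation): snapshotting is close to free because all you ever copy is the pointer table, never the data blocks.

```python
# Toy model of a block-pointer table with redirect-on-write snapshots.
# Illustration only - not any vendor's real on-disk layout.
class PointerTableVolume:
    def __init__(self):
        self.physical = []    # pool of physical blocks (append-only here)
        self.table = {}       # logical block number -> index into self.physical
        self.snapshots = {}   # snapshot name -> frozen copy of the table

    def write(self, lbn, data):
        # Redirect-on-write: land the data in a fresh block, repoint the live
        # table. Snapshots keep their old pointers, so nothing is copied.
        self.physical.append(data)
        self.table[lbn] = len(self.physical) - 1

    def read(self, lbn, snapshot=None):
        table = self.snapshots[snapshot] if snapshot else self.table
        return self.physical[table[lbn]]

    def take_snapshot(self, name):
        # A snapshot is just a copy of the (small) pointer table.
        self.snapshots[name] = dict(self.table)

vol = PointerTableVolume()
vol.write(0, b"v1")
vol.take_snapshot("daily-001")
vol.write(0, b"v2")                        # one new block, one pointer update
assert vol.read(0) == b"v2"
assert vol.read(0, "daily-001") == b"v1"   # the snapshot still sees the old data
```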


Mausi
Apr 11, 2006



paperchaseguy posted:

XIV does redirect on write. It's taking a snapshot not of the LUN blocks, but of the LUN's block pointers. Most block storage does copy-on-first-write, which creates much more load.
Your statement about 'most block storage' is out of date - this is precisely how every SAN I currently work with operates, and it's what I meant by the hash table (just a hash vs. pointer terminology difference).

Mausi
Apr 11, 2006



Do you want a backup somewhere that you overwrite each week or so, or do you want to do things properly and keep a rolling set of backups so you can go back a day, week or month as required?
Do you want to take it off site (presumably yes) and how much speed is required?

Basically, what you mean by 'backup' determines which solution you should be looking at.

Mausi
Apr 11, 2006



Tsaven Nava posted:

My goal is to be able to recover from a catastrophic server failure quickly, with minimized downtime or loss of data. It has to be reliable, easy to manage and use. And it needs to be cheap, to fit within a budget that doesn't exist.

Quick and dirty solution:
1) Get a small NAS device, cheap as chips, probably with huge SATA disks in it.
2) Clone the OS disks of your servers using VMware Converter (or something similar) onto the NAS. If it's a domain controller, grab the System State as well just to be safe.
3) Back up your data disks onto the NAS using a bit of software capable of change-only updates (a rough sketch of the idea is below)
4) Take the NAS home with you, or put it somewhere else safe.

That's basic server DR. You can restore the actual servers as virtual machines on a single replacement box in a data centre or anywhere else. If you're talking real DR, then you'll also need the workstation images if your users are allowed to save important data there (which they shouldn't, but it's your business, not mine).
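
If you want a feel for what 'change-only updates' means in step 3, here's a rough sketch - the paths are hypothetical, and in practice you'd reach for something like robocopy or rsync rather than rolling your own:

```python
# Copy a file tree to the NAS share, but only files whose size or mtime differ
# from the copy already there. Hypothetical paths; illustration only.
import shutil
from pathlib import Path

SOURCE = Path(r"D:\data")             # hypothetical data disk
DEST = Path(r"\\nas01\backups\data")  # hypothetical NAS share

def changed(src: Path, dst: Path) -> bool:
    if not dst.exists():
        return True
    s, d = src.stat(), dst.stat()
    return s.st_size != d.st_size or s.st_mtime > d.st_mtime

def backup(source: Path, dest: Path) -> None:
    for src in source.rglob("*"):
        if src.is_file():
            dst = dest / src.relative_to(source)
            if changed(src, dst):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)  # copy2 keeps mtime for the next comparison

if __name__ == "__main__":
    backup(SOURCE, DEST)
```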

Tsaven Nava posted:

And I want a pony. That flies and shoots laser beams. (I figure as long as I'm asking for the impossible, I might as well go all out)
Get a WoW account.

Mausi
Apr 11, 2006



Cultural Imperial posted:

Is anyone out there looking at 10GbE?

Sure, simple answer:
Most people do not need 10GbE beyond their core network infrastructure, if then.

Give us a little bit of description of where and how you are considering implementing it, and you'll probably get a less general answer.

Mausi
Apr 11, 2006



This customer has implemented a shiny new V-Max behind SVC for their VMware environment :ohdear:

I like SVC
I like V-Max
SVC + V-Max :pseudo:

Mausi
Apr 11, 2006



Cultural Imperial posted:

Is that 3 10GbE interfaces? I'm curious about how people are using 10GbE with esx.

Given that remote access cards only support standard ethernet, I'm going to guess that he's running 2x 10GbE for data, and 1x 100Mb ethernet for the remote access.

I've not seen anyone plan for more than 2x 10GbE into a single server.

Tell you what though, 10GbE upsets the VMware health check utility - it starts complaining that all your services are on a single switch.

Mausi
Apr 11, 2006



HorusTheAvenger posted:

Whoa, hold on. On most operating systems, if you have multiple NICs configured with different IPs on the same subnet without using some form of link aggregation, you end up with ARP flux.

http://www.google.ca/search?q=arp+flux

If you're getting this, which traffic goes out which interface can be pretty random and unpredictable.

If this is the Equallogic box he's talking about, they're designed to work like this - he mentioned the Equallogic MPIO driver, so that's probably the case.
If it's a random Windows box on the other end of the iSCSI link, then it would be more of a concern.
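
For what it's worth, if it were a Linux initiator at the other end, the usual mitigation is to pin the per-interface ARP behaviour. A minimal sketch of that, assuming hypothetical interface names and root on a Linux box:

```python
# Make each NIC answer ARP only for addresses configured on itself, and source
# ARP requests from the matching address - the standard ARP flux mitigation.
from pathlib import Path

IFACES = ["eth2", "eth3"]  # hypothetical iSCSI-facing NICs
SETTINGS = {
    "arp_ignore": "1",    # reply only if the target IP lives on the receiving NIC
    "arp_announce": "2",  # always use the best local source address in ARP requests
}

def pin_arp(iface: str) -> None:
    for key, value in SETTINGS.items():
        Path(f"/proc/sys/net/ipv4/conf/{iface}/{key}").write_text(value)

if __name__ == "__main__":
    for iface in IFACES:
        pin_arp(iface)
```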

Mausi
Apr 11, 2006



three posted:

He came really close to recommending we dump our 8+ Equallogic units and repurchase all NetApp devices.

If this is the case, he's not an independent consultant.
A proper consultant will never tell you to dump what you have and start again, but your average engineer will certainly steer you towards what he knows if he wants ongoing work.

TBH I've had issues with the built-in Broadcom IIs that a lot of vendors are favouring for onboard gigabit these days. I think you'd do quite well with a dedicated iSCSI controller.

Mausi
Apr 11, 2006



Everyone has been talking about iSCSI throughput on devices other than Equallogic - does anyone know what the throughput overhead of their load balancing algorithm is?

I'm not saying the consultant is right, but it could be inherent to the Equallogic array.

Mausi
Apr 11, 2006



The last 3 banks I've worked with all have stupendously expensive support contracts on hardware that it's not economical to replace for their SAN-attached servers.

When it comes to the VMware kit, the path I've seen taken most often is planned replacement somewhere between the 3 and 5 year mark.

Mausi
Apr 11, 2006



mrbucket posted:

EMC tech says "welp its your network" and sends me off.

Your 'EMC' Tech is probably a douchenoodle from a partner company - don't accept his teflon response unless he gives you proof it's the network.

In these situations you have a few options:
Go back to the sales department and complain about the substandard service
Escalate directly with the engineer
Call EMC support directly, complain about the service, get a real engineer

Mausi
Apr 11, 2006



ghostinmyshell posted:

Anyone have experience with FileTek's StorHouse product? They're winning bids by coming in the cheapest and taking the decision makers out to Ruth's Chris.

Never heard of it personally, but my trick with this sort of management-driven buying is to do a Google search for references to the product on my client's major vendor sites.
'Filetek Storhouse' has precisely zero hits on Microsoft.com, VMware.com and Oracle.com.
Feel free to continue searching.

On briefly reading their site, the product sounds like a poor man's SVC - is it supposed to be a live storage virtualisation layer, or some kind of archiving service? If it's meant to be a live layer, and the major vendors have never heard of it, you can be drat sure they won't support it.

Mausi
Apr 11, 2006



Misogynist posted:

I don't get that impression at all -- it looks like a standard database-backed transparent HSM product and I don't imagine you would use it to host databases, virtual machines or the like.

So when they talk about storage virtualisation for ODBC they just mean external blobs, files and backups?

Mausi
Apr 11, 2006



Forgive my ignorance on this topic, but could someone point me to an explanation of how NFS compares to direct block access in terms of performance?

How does an Oracle server fare using NFS for its data storage?



Thanks, found http://media.netapp.com/documents/tr-3496.pdf which is interesting reading, if a few years old.


Mausi
Apr 11, 2006



szlevi posted:

I'd rather spend my money on better/safer hardware, better backups, etc., instead of giving it to EMC for things that are free in Hyper-V or (some of them) even in XenServer.

Pray tell, sir goon, of these wondrous free availability features of Hyper-V which mostly do not exist in Xen and certainly not in VMware?


1000101 is on the money; once you've got either the live data or the image-level backups replicated to the DR site, you have to break the replication, bring them into a bootable configuration, and power the drat things up.
The only real way to do this reliably is to directly reverse the process which got them there in the first place, but all too often budgets don't allow for that. The best advice I can give you is: EVERY time you deviate from your production kit at the DR site, be drat sure you know the impact.
Also, cover your rear end when it comes to your boss taking on the risk that it all doesn't work - be clear and concise about exactly how reliable this system they've paid for actually is.

Mausi
Apr 11, 2006



Why Copy/Paste when there's a perfectly good quote button :downs:

Speaking of DR vs HA and explaining it to management-level cluelessness, I have a meeting this week where several very important people want to tell me that implementing VMware SRM will break their business continuity protocols (which are based on tape restore at a remote site) :downsgun:

Mausi
Apr 11, 2006



I suspect he's only talking about HP Lefthand. Well I hope he is, because that huge a generalisation would be pretty loving stupid. And he'd still be wrong, but whatever.

Mausi
Apr 11, 2006



InferiorWang posted:

Unfortunately, the VMWare bundle we're going to go with only allows licenses for up to 6 cores per host.
The core limitation is a bit of a red herring, as the vast majority of x86 virtualisation workloads run into memory ceilings long before CPU; the notable exception is stuff like 32-bit Terminal Services.

szlevi posted:

Even if I put aside the fact that you're not making any argument - trolling? - I'm still all ears how someone's experience can be wrong... :allears:
Well, unless your definition of SMB scales up to state government, your assertion draws on limited experience. LeftHand kit is used, in my experience, both in the state governments of some countries and in enterprises, albeit outside the core datacentre.

szlevi posted:

Of course he came back arguing the usual TCO mantra, but unless they give me a written statement that 3-5 years from now, when they push me for a forklift upgrade, I will get all the licenses transferred, I will never consider them at around $150k, that's for sure.
I'm not certain about EMC, but it's basically a constitutional guarantee from VMware that, if you buy regular licensing and it is in support for a defined (and very generous) window around the release of a new version of ESX, you will be given the gratis upgrade. This happened with 2.x to 3, and happened again with 3.x to 4. There is absolutely no indication internally that this will change from 4.x to 5.
My experience with licensing from EMC is that they will drop their pants on the purchase price, then make it all back on support & maintenance.


Mausi
Apr 11, 2006



I don't want to turn this into the virtualisation love-in anyway, so here's the megathread for that if you haven't seen it yet.

http://forums.somethingawful.com/showthread.php?threadid=2930836

But 4 cores / 32 GB is fine for a consolidation footprint; depending on the workloads you're going after, it's usually somewhere between 4 GB and 16 GB per core, so 8 GB per core sits comfortably in that range.

Mausi
Apr 11, 2006



three posted:

To relate this back to storage: storage is the #1 bottleneck people run into with VDI. We've been using Equallogic units, and we plan to add more units to our group to increase IOPS/Capacity as needed. (Currently our users are on the same SAN(s) as our server virtualization, and this is why I want to move them to their own group.)

There is a joke amongst the VMware PSO guys who deal with VDI that you can spot a VDI noob by the way storage isn't the first section of their design.

Mausi
Apr 11, 2006



Vanilla posted:

Hey guys, came across someone asking for 'server offload backup solutions'. Never heard this terminology before? Are they talking about clones for backup??
I'm not sure how well this translates to the wider storage community, but where I work in virtualisation that terminology basically means the 'processing' of the backup is done by a server other than the backup source, without impacting the CPU/memory etc. of that source server. This implies that there isn't a local agent using the CPU to work out differentials or compression or anything like that, so usually the only impact is:
Local storage: disk I/O and network I/O
SAN storage: effective disk I/O on the SAN

In virtualisation terms, this means that the hypervisor doesn't have any involvement in backup processing, and by extension isn't dedicating resources to backup which could be used by another virtual guest. The short explanation is that another server connects to the SAN, tells the hypervisor that it's taking a copy of the disk, and lets it know when it's finished (it's more complicated than that, but I won't bore you).
It doesn't have to be LUN-level copying from a SAN, but that's a common solution.
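
As a rough illustration of that sequence - everything here is an invented stand-in, not any vendor's actual API:

```python
# Off-host ("server offload") backup flow: the backup server drives the copy,
# the hypervisor only creates and releases the snapshot.
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    vm: str
    disks: dict = field(default_factory=dict)  # disk id -> list of blocks

class FakeHypervisor:
    """Stand-in for a hypervisor snapshot API."""
    def __init__(self, vms):
        self.vms = vms  # vm name -> {disk id: list of blocks}

    def create_snapshot(self, vm, quiesce=True):
        # Freeze a point-in-time view of the guest's disks (simulated by copying).
        return Snapshot(vm, {d: list(b) for d, b in self.vms[vm].items()})

    def remove_snapshot(self, snap):
        pass  # the hypervisor collapses the snapshot once we're done

def offhost_backup(hypervisor, vm, target: dict):
    snap = hypervisor.create_snapshot(vm, quiesce=True)  # 1. quiesce + snapshot
    try:
        # 2. The backup server reads the snapshotted blocks itself (SAN path);
        #    the host spends no CPU or memory on differentials or compression.
        for disk_id, blocks in snap.disks.items():
            target[disk_id] = list(blocks)
    finally:
        hypervisor.remove_snapshot(snap)  # 3. tell the hypervisor we're finished

hv = FakeHypervisor({"fileserver01": {"disk0": [b"a", b"b", b"c"]}})
backup_store = {}
offhost_backup(hv, "fileserver01", backup_store)
print(backup_store)
```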

Mausi
Apr 11, 2006



EnergizerFellow posted:

Absolute storage throughput is very rarely an issue outside of backup media servers and streaming media concentrators.
These days I find the CPU needed to process the deduplication is the bottleneck, rather than raw storage throughput.

This is of course TSM, the slow learner of all solutions.

Mausi
Apr 11, 2006



Syano posted:

I've become more curious now. This Xen deployment isn't going to get very large, only 2 hosts and maybe 20 total VMs. However, having a separate storage repository for each VM is sort of annoying. I think it's time to do some googling.
You can put multiple guest VMs on a single LUN shared between multiple XenServer hosts - even Hyper-V can do this now. Don't worry about which Linux FS it uses; it'll pick the right one during configuration.

The better question is why you're using XenServer at all? About the only reason I can think of is that you got a free set of licenses from somewhere.

Mausi
Apr 11, 2006



conntrack posted:

Edit: I guess I nerd-raged about this before? It fills me with a hatred more powerful than a thousand suns to have to deal with lovely portals.

And yet your management will continue to buy EMC because it's 'cheaper'.

Mausi
Apr 11, 2006



I've got EMC coming in on Friday to convince me to use their SANs for a new environment; I've also got Dell and NetApp lined up.

We're doing a split environment, Virtual Desktop and Server both on vSphere 5, but Desktop is going to be XenDesktop (because management say so), and Server is pretty boring as far as storage requirements go, beyond part of it being under vCloud control for Development. There'll be about 4000 concurrent desktop users (Win7, probably 30 GB disks before dedup and thin provisioning carve it down) and about 250 TB of Production server data, of which only about 25% is busy (estimated from logs).
Server will be async replicated to a DR site for VMware SRM for about 50% of the capacity; the rest will be handled by backup and Data Domain replication (probably - it's early days in design land as yet).
I don't much care about copper or fibre; it's a new DC, so I can cable and switch it up how I want.

I may have to operate server/desktop from two separate SANs due to logistics, but I'm not certain yet. I want to get the VDI masters on SSD, either via caching or Tier 0 LUNs (I'd prefer automated management), to cut back on the number of spindles needed to handle a 3-hour login window (pan-EU datacentre); also, XenDesktop tends to REALLY like NFS for MCS.
I would prefer the server disks to be self-tiering as well, but I'm otherwise not particularly fussy as long as I can tie into SRM/Commvault and do thin provisioning at a LUN level.
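
Rough numbers for context (the dedup ratio below is my own assumption, not a vendor figure):

```python
# Back-of-envelope capacity arithmetic for the environment described above.
desktops = 4000
gb_per_desktop = 30                             # Win7 image before dedup / thin prov
raw_vdi_tb = desktops * gb_per_desktop / 1000   # ~120 TB logical
assumed_dedup_factor = 0.2                      # assume ~5:1 reduction on clones (guess)
vdi_tb_on_disk = raw_vdi_tb * assumed_dedup_factor

server_tb = 250
busy_server_tb = server_tb * 0.25               # ~25% busy, estimated from logs
srm_replicated_tb = server_tb * 0.50            # ~50% async-replicated for SRM

print(f"VDI: {raw_vdi_tb:.0f} TB logical, ~{vdi_tb_on_disk:.0f} TB on disk if dedup holds")
print(f"Server: {busy_server_tb:.1f} TB busy, {srm_replicated_tb:.0f} TB replicated for SRM")
```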

So questions I guess:
Is NetApp still king of NFS if it turns out that they won't use thin VMFS for XenDesktop? Who else competes now?
Where in the range am I looking here? It seems small-enterprise to me, but I'm a little out of touch on storage tech lately. I don't have a budget yet; what I'm seeking is a recommendation of arrays which should support these requirements, so I can bully the right vendors for pricing like dogs in a pit.
Is it possible to use a single SAN to operate the split environment intelligently?

If someone can tell me something like "X array will do it all for you but it's expensive, probably try Y or Z or a combination of Desktop on A and Server on B" then if you're ever in London I'll take you to Hawkesmoor for a steak.

If any of this thinking is out of date or stupid please hit me for it, I'm just a VMware nerd with enough knowledge to be dangerous at the moment.

Mausi
Apr 11, 2006



Thanks :)
There's another guy 'doing' VDI while I take care of the broad VMware infrastructure - he's the one who mentioned MCS so I'll check whether PVS is what will be going in.

I appreciate the NetApp info - the comments about caching reads for VDI mesh with what I currently know, so it's good to hear I'm not too far off current best practice.

If anyone has some choice info on the EMC or Dell side of things I'd be very glad to hear it :) From what I've read so far, EMC are likely to try and pitch a VMAX and then fall back to a VNX with SSD cache; no idea what Dell will bring to the table, but if it's an Equallogic I'm going to giggle.

Mausi
Apr 11, 2006



Thanks for the info, will read those now. :) PM me your details when you're going to be down here in London, I'll definitely shout you a beer.

Given that we're designing and delivering to a certain level of maturity rather than having them grow into it organically, self-managing systems like auto storage tiering and PowerPath are ideal, as long as we can tie them into the central alerting system, which currently looks to be M$ System Centre and VMware OpsMgr 5.

Vanilla posted:

Dell will really only pitch Compellent...and well....all I have to say is 4Gb cache (mentioned this last page).
I'll make up a new mandatory requirement that no 32-bit systems can be purchased, as the CTO considers it a key indicator of an outmoded technology, or some poo poo ;)

Mausi
Apr 11, 2006



Dell turned up today; pitched their next-gen blades against HP's current gen, and barely came up even for my requirements. Then they pitched Compellent against a VNX55xx / FAS32xx, and it was more roadmap than features I can have now, plus "oh look, we did tiering first". I suspect they'll be cheaper, but less flexible/performant under a broad range of conditions.
They faffed when I asked how they deal with bursty traffic quicker than their 24-hour post-process tiering ("have to ask technical about that"). Ho-hum. They do have a point about being able to relocate hot blocks down to small sizes, and traditionally they're pretty good on 3+ year costs.

To be fair to them, they did better than EMC when I asked about the current state of Unisphere integration for the various legacy Celerra/Clariion components for replication and whatnot.
Both of them are doing better than NetApp and 3PAR, who aren't doing anything to get out of the "that's nice, but you're too drat expensive" bucket, despite industrial-strength hints.

Mausi
Apr 11, 2006



Bitch Stewie posted:

When we did our last SAN refresh, EQL weren't the right fit for us and neither were NetApp, but when the NetApp quote came, the software licenses cost more than the hardware, which was pretty scary.
This is true of all the enterprise vendors; NetApp are just honest about it (so are Compellent, IIRC).

When it comes down to it, the metal you run the SAN on isn't that expensive (although some is certainly better quality); what costs is software and hardware development, support teams, and loving salespeople.


Mausi
Apr 11, 2006



I had a director at my desk today asking why I haven't started migrating to the 100 TB of Hitachi he'd bought to replace our ageing 3PAR frames.
I asked him where the management tooling was to allow us to troubleshoot performance issues.
His response was that the American office had been using it for two months.
I responded by opening an email about their VSP outage that's still open between VMware and Hitachi.
He advised he was going to yell at engineering about getting the management suite set up.
Gold.
