sanchez
Feb 26, 2003
VMware Essentials Plus is what, $5k for 3 hosts? That's nothing compared to what storage, servers, and especially application licensing and staff cost.


bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


evil_bunnY posted:

I've said this before, but the first time your virtual environment saves your bacon, or lets you fix/upgrade hardware during business hours instead of when you could be boozing, you'll stop thinking about the license costs.

The people who have to fix it when they could be boozing aren't the ones who get to write the checks, unfortunately.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aniki posted:

Some of VMware's licensing packages make the cost more palatable for small to medium sized businesses, but the cost of licensing does nullify a lot of the benefits in certain situations.
What I've used of Hyper-V in Server 2008 R2 hasn't wowed me, but Windows 8 might make me change my mind. It's been my opinion for quite some time that OS-independent virtualization like VMware probably wouldn't stick around for long, and that it's only a matter of time before the Windows guys and Linux guys each run their own native virtualization stacks in most environments. I'm curious what the virtualization landscape will look like in a few years, but right now I still wouldn't touch Hyper-V with a 2km single-mode pole. The weird little gotchas like lack of promiscuous mode support for a VM and lack of support for link teaming (fixed in Server 8) are painful to most organizations doing serious IT.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


To be fair, though, it's hard to justify the extra expense when your primary use of virtualization is running a dozen or so load-balanced IIS7 servers with another handful of servers running supporting Windows services on the back end. In cases like this, you can take down an entire host without impacting application performance or uptime. Live failover or migration isn't something that's going to gain you much.

I agree, though, that it will be interesting to see how this plays out. If MS grabs a lot of businesses on the low end of the growth curve and improves Hyper-V greatly before those businesses need a more advanced feature set, it's going to be a hard sell to get them to migrate to something completely different.

Docjowles
Apr 9, 2009

sanchez posted:

VMware Essentials Plus is what, $5k for 3 hosts? That's nothing compared to what storage, servers, and especially application licensing and staff cost.

Essentials Plus is a really, really good deal for small-medium business (aka, me). And an awesome marketing move by VMware. I'm now pretty well hooked on their impressive feature set.

Serfer
Mar 10, 2003

The piss tape is real



Setting up some new ESXi 5 storage, and I guess I don't get how iSCSI is supposed to be set up in 5. Now that only one network card can be assigned to an iSCSI-attached VMkernel interface, if I have two network cards and two iSCSI VLANs, do I need to set up four VMkernel interfaces (one for each VLAN, per NIC)? That seems rather clunky.
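For reference, the port binding I'm talking about looks roughly like this from the shell; the port group, vmk, and vmhba names here are just placeholders from my own setup:

code:

# one VMkernel port per NIC per VLAN, each bound to the software iSCSI adapter
esxcfg-vmknic -a -i 10.0.10.21 -n 255.255.255.0 iSCSI-VLAN10-vmnic2
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
# confirm what's bound
esxcli iscsi networkportal list --adapter vmhba33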

My Rhythmic Crotch
Jan 13, 2011

I have a pretty basic question...

All I really want to do is have Linux guests on an OS X host. I would like the guests to have their own IP addresses (bridged networking, I guess) and to be able to access services running on the guests from the host and from other machines on the network. Currently I'm trying VirtualBox, and X11 forwarding, port forwarding, etc. just aren't working for me. I have spent a while troubleshooting and feel that it's probably an issue with my host hardware and/or the VirtualBox drivers.

What's the next step up from VirtualBox? Preferably still free.

My Rhythmic Crotch fucked around with this message at 11:08 on Mar 30, 2012

Kachunkachunk
Jun 6, 2011
Bridged? Well, in that case each VM gets an IP from your router/gateway, which presumably hands out IP addresses via DHCP.
Or you assigned static IPs. Confirm that you can ping from one box to another first (even from the physical host running VirtualBox).
Port forwarding and the like is for routers/gateways; you won't need it within your own network unless you're crossing networks.

Here's an example:
Desktop running Player, Workstation, or VirtualBox: 192.168.0.2
Router: 192.168.0.1
Random desktop in your house: 192.168.0.3
VM 1: 192.168.0.200 (HTTP)
VM 2: 192.168.0.201 (Ventrilo, Samba)
VM 3: 192.168.0.202 (SQL)
Your friend, across the Internet: 24.x.x.x

Everything in 192.168.0.x can talk to everything else if you're really using bridged networking in the hypervisor/app you're using. The guests/VMs don't need any special arrangements (just keep in mind that firewalls and the like may stop stuff from working).
The only place you need port forwarding is at the router level (so 192.168.0.1), and only if you need 24.x.x.x to be able to reach a service in your network.
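If you want to sanity-check that a VM really is bridged (and to which host adapter), you can do it from a terminal. A rough sketch for VirtualBox, with the VM and adapter names as examples only:

code:

# attach NIC 1 in bridged mode to the host's en0 (eth0 or similar on Linux)
VBoxManage modifyvm "VM 1" --nic1 bridged --bridgeadapter1 en0
# confirm what each NIC is attached to
VBoxManage showvminfo "VM 1" | grep NIC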

Anyway, I've heard VirtualBox has a more responsive desktop experience, but VMware Player could be an alternative for you.


Edit: More specific examples added. If you wanted to reach the HTTP server, you literally open up 192.168.0.200 in your browser. Your friend needs to enter the public IP address your modem has, and your gateway/router needs to be forwarding that port (80 by default) to 192.168.0.200. That's a NAT function. If there's a separate firewall to think about, you also have to permit the port (80 by default).

Kachunkachunk fucked around with this message at 13:34 on Mar 30, 2012

My Rhythmic Crotch
Jan 13, 2011

Hi Kachunkachunk, the problem doesn't seem to be so simple. I can verify that (for example) Apache is running within the guest, and I can ping the guest from the host, but I can't load a page hosted by the guest from the host. VirtualBox does have the concept of "port forwarding," but only in NAT mode, not bridged mode. So it would seem that I need some combination of NAT and bridged network adapters, but I just haven't found a combination that works, even though others report that it does.
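For reference, the NAT-mode forwarding I mean is the kind you define like this; the VM name and ports are just examples:

code:

# NAT mode only: forward host port 8080 to guest port 80
VBoxManage modifyvm "linux-guest" --natpf1 "web,tcp,,8080,,80"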

KS
Jun 10, 2003
Outrageous Lumpwad
You probably just have a firewall blocking it or something. Bridged mode is what you should be using.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
From what I can find (mainly the VMware Setup for Failover Clustering and Microsoft Cluster Service guide), iSCSI is not supported as a protocol for the clustered disks. It says that in the 5.0, 4.1, and 4.0 guides. However, I've gotten it working in 4.0 by following the instructions in the 4.0 guide, just with an iSCSI disk instead of FC.

Specifically where it says it doesn't support iSCSI:

VMware posted:

The following environments and functions are not supported for MSCS setups with this release of vSphere:
* Clustering on iSCSI, FCoE, and NFS disks

So, any insight on what "not supported" means? Does that mean VMware tech support won't support the configuration, even though it works perfectly fine?

Docjowles
Apr 9, 2009

Probably. I know we used to have a terrible app that was really, really memory-hungry, but the vendor only offered support when running on a 32-bit OS, JVM, and database engine :downs: It worked fine in 64-bit, but they hadn't done QA and training on it, I guess. We ended up running it on an unsupported 64-bit OS because in the "supported" environment the piece of crap would go OOM and crash several times a week.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Would I see a significant performance difference on Bulldozer processors running 5.0 rather than 4.1U2?

Kachunkachunk
Jun 6, 2011

My Rhythmic Crotch posted:

Hi Kachunkachunk, the problem doesn't seem to be so simple. I can verify that (for example) Apache is running within the guest, and I can ping the guest from the host, but I can't load a page hosted by the guest from the host. VirtualBox does have the concept of "port forwarding," but only in NAT mode, not bridged mode. So it would seem that I need some combination of NAT and bridged network adapters, but I just haven't found a combination that works, even though others report that it does.
It sounds like networking is fine, then. It could be a firewall in the guest, or perhaps Apache is not listening on all addresses. What guest OS is it?
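A couple of quick things to check from inside the guest, assuming it's a typical Linux box:

code:

# is Apache bound to all addresses (0.0.0.0:80) or only 127.0.0.1?
netstat -tlnp | grep :80
# any firewall rules that could be dropping the traffic?
sudo iptables -L -n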

FISHMANPET posted:

From what I can find (mainly the VMware Setup for Failover Clustering and Microsoft Cluster Service guide), iSCSI is not supported as a protocol for the clustered disks. It says that in the 5.0, 4.1, and 4.0 guides. However, I've gotten it working in 4.0 by following the instructions in the 4.0 guide, just with an iSCSI disk instead of FC.

Specifically where it says it doesn't support iSCSI:


So, any insight on what "not supported" means? Does that mean VMware tech support won't support the configuration, even though it works perfectly fine?
Basically, from my understanding, VMware and Microsoft have not fully validated/tested this particular use case. Just Fibre Channel. Though I honestly thought hardware iSCSI was supported by now; I haven't checked in a long time. Anyway, VMware Support gives best-effort help at the least. They just can't file bugs or guarantee anything for your MSCS VMs if something goes awry, and if the configuration seems related to the issue, they will ask you to revert/change it.

adorai posted:

Would I see a significant performance difference on Bulldozer processors running 5.0 rather than 4.1U2?
I haven't heard anything about Bulldozers gaining or losing performance between versions, really. Was there anything in particular that made you wonder (such as existing performance hits on 4.x for any reason at all)?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Kachunkachunk posted:

I haven't heard anything about Bulldozers gaining or losing performance between versions, really. Was there anything in particular that made you wonder (such as existing performance hits on 4.x for any reason at all)?
We got two demo servers, and they seem to be performing well, except for a few applications that need single-threaded performance. The Bulldozer procs are 2.1GHz compared to our Nehalem procs at 2.9GHz, and one application in particular is taking 3x as long to run. I could stomach 50% longer, not 300%.

Kachunkachunk
Jun 6, 2011
Is it heavily dependent on L3 cache performance? The more cores a processor has, the more threads (vCPUs) end up sharing that cache.
I remember some desktop/productivity apps performing terribly because of this in some environments, prompting people to tweak DRS rules.

evil_bunnY
Apr 2, 2003

adorai posted:

We got two demo servers, and they seem to be performing well, except for a few applications that need single-threaded performance.
Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Moey posted:

What kind of performance increase in processing power should I expect from hyperthreading? This is for a home whitebox lab setup.

All my Google searches hint at around a 20% performance increase.


additional info:

I have the option of getting a chip real cheap, but options are limited (i5-2500 or i7-2700K). I really think I would rather lose the 20% in CPU performance and gain VT-d so I can properly pass through disks and such to VMs.

edit:
new chips
i5-2500 $100
i7-2700K $180

edit 2:
Looks like if I spend a little more I can have my cake and eat it too. The E3-1230 supports VT-d and HT.

edit 3:
Cheaping out with the i5-2500 and using the savings for a nice SSD to run VMs from.

For the heavy processing jobs I do, I expect a 30% increase from the hyperthreads. That's generally borne out, but a 20% increase when running VMs sounds realistic under heavy load.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.
You'd figure they would have learned something from Sun's experiences with the T2/T3 chips.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

evil_bunnY posted:

Bulldozer single thread performance is horrible, but I wasn't expecting it to be this bad.
I agree. We were expecting a bit of trouble, but nothing like this. In aggregate it's great, and total CPU usage (as a percentage) is lower; it's just these single-threaded applications that peg the CPU for an hour or two or more that are suffering.

Kachunkachunk
Jun 6, 2011

My Rhythmic Crotch posted:

Hi Kachunkachunk, the problem doesn't seem to be so simple. I can verify that (for example) Apache is running within the guest, and I can ping the guest from the host, but I can't load a page hosted by the guest from the host. VirtualBox does have the concept of "port forwarding," but only in NAT mode, not bridged mode. So it would seem that I need some combination of NAT and bridged network adapters, but I just haven't found a combination that works, even though others report that it does.
Hey again. I was looking at a problem VM in my environment (I can SSH to everything else with port redirection over the WAN, but not to this specific VM, despite good rules).
Turns out it's because the SSH server in this guest is replying over the wrong WAN interface (the VM has a VPN as its default gateway), so I can't SSH to it. Maybe you have a different default gateway? Then again, if you have a flat network and aren't routing/NATing anything, that's probably not your issue.
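Easy to rule out on a Linux guest, at least; the gateway address below is just an example of what you'd want to see on a flat LAN:

code:

# the default route should point at your LAN gateway, not a VPN tunnel
ip route show
# e.g. expected: default via 192.168.0.1 dev eth0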

Noghri_ViR
Oct 19, 2001

Your party has died.
Please press [ENTER] to continue to the
Las Vegas Bowl
So I've got a little VMware install of 3 hosts and 1 vCenter. I'm doing the upgrade from 4.1 to 5.0 today, ran into a little hiccup, and got the following error message:



Now after googling I found the following VMware KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006664

Which told me my datastores were not mounted identically. So I ran the SQL query at the bottom of the article and got this:


So am I right in thinking that the trailing slash on the .18 host is the one causing the issue, since they are all mounted via IP address?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Noghri_ViR posted:

So I've got a little VMware install of 3 hosts and 1 vCenter. I'm doing the upgrade from 4.1 to 5.0 today, ran into a little hiccup, and got the following error message:



Now after googling I found the following VMware KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006664

Which told me my datastores were not mounted identically. So I ran the SQL query at the bottom of the article and got this:


So am I right in thinking that the trailing slash on the .18 host is the one causing the issue, since they are all mounted via IP address?

Yes. You can also check the volumes by browsing to /vmfs/ via an SSH session and looking at the volume UUID. That's definitely your problem. I had the same issue where one was mounted with uppercase letters and one was mounted with lowercase.
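Something like this from an SSH session on each host, comparing the output side by side (paths here are the stock ones):

code:

# datastore names are symlinks to their UUIDs
ls -l /vmfs/volumes/
# or, on 5.0, list mount paths and UUIDs directly
esxcli storage filesystem list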

Noghri_ViR
Oct 19, 2001

Your party has died.
Please press [ENTER] to continue to the
Las Vegas Bowl

madsushi posted:

Yes. You can also check the volumes by browsing to /vmfs/ via an SSH session and looking at the volume UUID. That's definitely your problem. I had the same issue where one was mounted with uppercase letters and one was mounted with lowercase.

Thanks! After I migrated everything off that host, unmounted, and then re-added the datastore, everything worked. This upgrade went pretty drat smooth, and I'm outa here to get a beer.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

So, any insight on what "not supported" means? Does that mean VMware tech support won't support the configuration, even though it works perfectly fine?
Hypothetically, you can get it working "perfectly fine" using VMDKs with physical bus sharing involved, too. In practice, it dramatically increases the probability that you will experience cluster disk timeouts and failover events under load.

The main concern when using iSCSI is that most people opt to use software HBAs. Even with TOE and other hardware acceleration turned on, the CPU usage of iSCSI is significantly higher than with Fibre Channel. In 99% of production use cases, you'll never notice this CPU usage. However, if you are pegging the CPU on your system at 100% utilization, or otherwise jamming up the scheduling queue (e.g. by scheduling a VM that uses every core on the box), you run a much higher risk of introducing random I/O timeouts and errors into your stack than if you used Fibre Channel. If these stack up and you cause a number of cluster failovers in quick succession, you risk major data corruption on your cluster disks.

Should it be supported in the year 2012 when most new production boxes are shipping with 16-20 cores? Probably. But that's the original rationale.

Do note that Microsoft and VMware do support iSCSI when it's used from the guest OS's own initiator; the guest can't observe an I/O timeout while it is descheduled, since its clock stops along with it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

The main concern when using iSCSI is that most people opt to use software HBAs. Even with TOE and other hardware acceleration turned on, the CPU usage of iSCSI is significantly higher than with Fibre Channel. In 99% of production use cases, you'll never notice this CPU usage. However, if you are pegging the CPU on your system at 100% utilization, or otherwise jamming up the scheduling queue (e.g. by scheduling a VM that uses every core on the box), you run a much higher risk of introducing random I/O timeouts and errors into your stack than if you used Fibre Channel.
If you are running your host CPU that high, you have other things to worry about. The outrageous levels of CPU power you find today over, say, 2008 levels make iSCSI CPU overhead a complete non-issue in well over 99% of real-world production installs. The CPU requirements of iSCSI shouldn't even enter the mind of someone doing a deployment these days. When a high-end VMware server was 2x 2-core Xeons with 8GB of RAM, yes, but no longer.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

If you are running your host CPU that high, you have other things to worry about. The outrageous levels of CPU power you find today over, say, 2008 levels make iSCSI CPU overhead a complete non-issue in well over 99% of real-world production installs. The CPU requirements of iSCSI shouldn't even enter the mind of someone doing a deployment these days. When a high-end VMware server was 2x 2-core Xeons with 8GB of RAM, yes, but no longer.
"Other things to worry about" typically aren't as severe or permanent as catastrophic data corruption, but your point is well taken otherwise. It's a corner case to be sure.

sanchez
Feb 26, 2003
Are there any goons managing virtual desktop (VMware View) deployments? I've been asked to investigate because CLOUD!, and I'm struggling to see where they would meet our clients' needs better than a standard terminal server and/or a Windows 7 laptop.

I'm sure deployment/provisioning is much faster, so there are savings there, but the average end user only goes through that experience every 3-5 years. The rest of the time, we touch their desktop to help with application-related issues, which would exist regardless of platform.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.
I haven't heard of MS turning down support for iSCSI clustering in VMware if you pass the cluster-ready check. Sure, the tech will mumble at you, but they will help.

EoRaptor
Sep 13, 2003

by Fluffdaddy

sanchez posted:

Are there any goons managing virtual desktop (VMware View) deployments? I've been asked to investigate because CLOUD!, and I'm struggling to see where they would meet our clients' needs better than a standard terminal server and/or a Windows 7 laptop.

I'm sure deployment/provisioning is much faster, so there are savings there, but the average end user only goes through that experience every 3-5 years. The rest of the time, we touch their desktop to help with application-related issues, which would exist regardless of platform.

A lot of terminal server issues (printing!) simply disappear, which can be very nice. You also get access to a suite of configuration and provisioning tools that let you micromanage everybody's access level, allocated CPU and memory, and which applications are deployed (Thinstall) to the system.

VMware has a traveling roadshow with a hands-on lab that lets you play around with the whole thing; I'd ask a VMware rep when the next one near you is.

It is a high-priced option, especially with the new version 5 licensing, but it does offer a lot. The biggest single drawback is the reliance on PCoIP, which is not as optimized as RemoteFX.

Wizzle
Jun 7, 2004

Most
Parochial
Poster


I know I'm a little late to the party here, but no one is really talking about Citrix XenServer other than a few complaints.

*Bias Warning* - I am a Citrix partner and am XenServer certified.

XenServer's best selling point over VMware is the price. The free edition gives you clustering, unlimited CPU and RAM support, and shared storage. The only downside is that you have to remember to re-license it annually or you can't start your VMs.

The next version up is $1000 (MSRP), though your local Citrix partner can get it to you for less. This adds HA and support for heterogeneous CPUs in your cluster. It's about the same as VMware Standard Edition at less than a third of the cost.

The full version - all features - is $5000 (MSRP) and comes with a built-in DR solution that, when paired with a compatible SAN, can snap and replicate your VMs to a DR site. I think the only SAN that works with it right now is NetApp, but EqualLogic support is coming soon.

As far as the stability issues are concerned, I can offer a few suggestions.

1. Make sure you don't implement XenServer on any hardware that isn't on the XenServer HCL. Doing so can cause all sorts of strangeness. The good news is you can download the driver development virtual appliance and compile your own drivers... at your own risk.

2. Install all publicly available hotfixes. Since XenServer is built on Linux, it potentially inherits a ton of bugs from things baked into the base CentOS distribution that aren't even used by XenServer.

3. Learn the command line. The xe commands are very powerful, and if you know any Linux scripting language, the possibilities are nearly endless.
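For a taste of what I mean, here's a rough sketch that snapshots every VM in the pool before patching; the label is just an example:

code:

# snapshot all VMs (dom0 excluded) with a dated label
for vm in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
    xe vm-snapshot vm=$vm new-name-label="pre-patch-$(date +%Y%m%d)"
done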

Please PM me if anyone wants more information about XenServer or really anything Citrix related. It's pretty much what I spend 8-5 working on in one capacity or another.


Edit: And XenDesktop, the VDI offering from Citrix, is pretty amazing. It works with any of the 3 major hypervisors and uses far less bandwidth than PCoIP or RemoteFX. Remember, the ICA protocol is what Citrix is really famous for. The only advantage to VDI on XenServer is the use of IntelliCache, which caches desktop boot images on the server's local SSD to eliminate boot-storm problems. Otherwise, you're best off with Hyper-V due to Microsoft's little tricks for Windows 7 optimizations on Hyper-V hosts.

Wizzle fucked around with this message at 08:05 on Apr 3, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
Anyone with experience using vCloud Director? I have some architectural questions.

We're aiming for two datacenters. Datacenter A will have the production environment. It will be SAN replicated to Datacenter B for DR and we will use SRM for failover.

Separately, we spawn dev environments off of prod. It is currently a manual process. I know vCloud Director can snapshot production and stand up copies for development in datacenter A. My question is, can it do the same based off of the replicated data in datacenter B?

We think it'd be cool to have the DR cluster do double duty as the dev cluster, but we can't get a straight answer on whether vCloud Director can automatically stand up environments based on the replicated data rather than the live production VMs.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
VCD doesn't quite behave the way you're expecting it to. You could add your production VMs to VCD, but it won't exactly be as straightforward as "snapshotting your production servers into dev."

You'd need to take a writable snapshot of your replicated datastores and present it to your DR cluster. Then you could add those VMs to the vCloud Director catalog to be provisioned/managed.

That said, using your DR cluster as a dev site is a good idea.

Pantology
Jan 16, 2006

Dinosaur Gum

KS posted:

Anyone with experience using vCloud Director? I have some architectural questions.

We're aiming for two datacenters. Datacenter A will have the production environment. It will be SAN replicated to Datacenter B for DR and we will use SRM for failover.

This in itself is trickier than you may be expecting. Have you read this yet? http://www.vmware.com/resources/techresources/10254

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So as I read through Mastering vSphere 5 and VMware vSphere Design, I'm mentally planning my department's virtualization build-out (and my boss is listening to me on this, so I can't gently caress it up), and I decided to look for 10Gb switches.

:swoon: http://www.dell.com/us/enterprise/p/managed-10gigabit-ethernet-switches :swoon:

Looks like Cisco doesn't even have an equivalent, so I'm guessing nobody does yet. And we can get it for only $10k!

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

So as I read through Mastering vSphere 5 and VMware vSphere Design, I'm mentally planning my department's virtualization build-out (and my boss is listening to me on this, so I can't gently caress it up), and I decided to look for 10Gb switches.

:swoon: http://www.dell.com/us/enterprise/p/managed-10gigabit-ethernet-switches :swoon:

Looks like Cisco doesn't even have an equivalent, so I'm guessing nobody does yet. And we can get it for only $10k!

How much data are you moving again?

Not saying that a 24-port 10Gb switch isn't awesome, but how many can you get at that price? It might be more feasible to order some <12-port 10Gb switches as a backbone for iSCSI/NFS and vMotion, and run everything else on gig or aggregated gig.

Dilbert As FUCK fucked around with this message at 04:37 on Apr 6, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Corvettefisher posted:

How much data are you moving again?

Not saying that a 24-port 10Gb switch isn't awesome, but how many can you get at that price? It might be more feasible to order some <12-port 10Gb switches as a backbone for iSCSI/NFS and vMotion, and run everything else on gig or aggregated gig.

University pricing, so pretty much as many as I want. And right now we're probably a year out from anything (that's when the richest department's infrastructure goes out of contract), though when I actually think about it, that's not a lot of time.

What are some other 10Gb switches? I'm not really sure which is better, fiber or copper, but for short runs (100 m) Cat 6a can do 10Gb, so it seems like an easier proposition. Looking at everything Cisco has, most everything just has a couple of SFP ports.

As for actual data, I have no idea yet, but I'm guessing we'd get 10Gb more because we can than for any other reason. It's a fancy buzzword we can throw at decision makers, and it means we don't need a ton of ports and cabling to make everything fully redundant.

And if we didn't get those, we'd probably get 3750s because we're pretty dumb, so it's not like we'd save a ton of money doing something else.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

University pricing, so pretty much as many as I want. And right now we're probably a year out from anything (that's when the richest department's infrastructure goes out of contract), though when I actually think about it, that's not a lot of time.

What are some other 10Gb switches? I'm not really sure which is better, fiber or copper, but for short runs (100 m) Cat 6a can do 10Gb, so it seems like an easier proposition. Looking at everything Cisco has, most everything just has a couple of SFP ports.

As for actual data, I have no idea yet, but I'm guessing we'd get 10Gb more because we can than for any other reason. It's a fancy buzzword we can throw at decision makers, and it means we don't need a ton of ports and cabling to make everything fully redundant.

And if we didn't get those, we'd probably get 3750s because we're pretty dumb, so it's not like we'd save a ton of money doing something else.

Well, depending on your environment and what throughput you need, you may want to look at Fibre Channel. It may provide better performance for things that need high I/O and low latency, like DB servers and front-end web servers (that get constantly hit). It is a bit of a bandwidth hit at 8Gb/s, but your latency is practically nonexistent if done right. 10Gb would be great for things like vMotion, FT, or backup lines to the datastores; use gig => 10Gig for web/other traffic.

10Gb is a great investment and will give you a lot of growing room that you might want, but you might also waste money buying capacity you never fully utilize.

If you set VM affinities right (things like running the spam filter server on the same machine as the email server), you will cut a lot of traffic out. The same goes for other servers that have to access each other, like servers that hit a DB server or whatnot.

Just my 2c, but depending on your environment I would do it like this:

High-I/O servers/DB servers => Fibre Channel
High-usage, mission-critical servers => 10GbE
Other servers and web (access) => 1GbE
Run affinities to keep servers that are related to each other on the same host

'Cause to utilize those 10Gb switches, you're going to need 10Gb network cards.


E: You might also want to look into SSD caching for some of the servers; it can really prove useful.

Dilbert As FUCK fucked around with this message at 05:07 on Apr 6, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
We'll most likely be getting a Compellent SAN with an SSD tier, so that's taken care of (I assume that's what you mean by SSD caching?).

I did some messing around with pricing, and I don't think the value is there for Fibre Channel. An 8Gb FC card is $700; a dual-port 10Gb card is $600. Add in the cost of FC switches and it starts to not look very great. Though FCoE could be useful, assuming it uses the same physical connections on the SAN as regular 10Gb.

I also think that, politically, anything FC would be a tough sell to the rest of the staff. We just recently started using iSCSI for some storage; we've got a simple array that's a couple of years old, and that's pretty much it. Even then we're basically using it as direct-attached SCSI. So change is tough.

We're working on that overall, but in a case like this I think it's nice to point out a few areas that won't change (or at least that we're not buying hardware that forces a change). If everything is loaded up with 10Gb, it's a lot easier to switch to FCoE later, after we've eased in by saying we'll start with iSCSI.

It looks like the Intel X520 and X540 (the cards I'd be using) support FCoE, though I'm not sure about switching FCoE. Guess I need to dig into that awful EMC storage book again.

E: Looks like the 8024 does... something with FCoE:

quote:

FCoE FIP Snooping (FCoE Transit), single-hop
E2: I guess so, this sounds useful, right?

quote:

PowerConnect 8000 Series switches support Fibre Channel over Ethernet (FCoE) connectivity with the FCoE Initialization Protocol (FIP) snooping feature. This allows converged Ethernet and FCoE data streams to pass seamlessly to the top of rack.

Our network admin is still scared of link aggregation, so no chance of any help in that department. Just another thing I gotta figure out myself.

FISHMANPET fucked around with this message at 05:51 on Apr 6, 2012


complex
Sep 16, 2003

FISHMANPET posted:

So as I read through Mastering vSphere 5 and VMware vSphere Design, I'm mentally planning my department's virtualization build-out (and my boss is listening to me on this, so I can't gently caress it up), and I decided to look for 10Gb switches.

:swoon: http://www.dell.com/us/enterprise/p/managed-10gigabit-ethernet-switches :swoon:

Looks like Cisco doesn't even have an equivalent, so I'm guessing nobody does yet. And we can get it for only $10k!

These are not in Dell's Force10 line (http://www.dell.com/us/enterprise/p/force10-networking), so I have to assume they are rebranded from some other company.

I could be wrong, but based on Dell's history and the design it looks to be a rebranded Brocade/Foundry.
