BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

We're moving from an active-standby cluster config to an active-active with cross-replication of the storage. UNFORTUNATELY, our current active-standby configuration was done on the cheap and I am concerned I am going to have conflicts on the storage side of things.

We started out with a single active cluster with NFS datastores on a NetApp unit, with zero DR site or plan. We were then given an unfunded edict to come up with a DR plan that can get us running at a different site within a day. A small amount of money was allocated for a baby-garbage-SATA-only NetApp to act as the SnapMirror destination, and I dusted off some old retired ESXi hosts to run in stand-alone mode over there if we need it. SnapMirror copies of the volumes get shipped to the DR NetApp unit every night, and the standalone ESXi hosts have an identical storage network IP mapping to the production site, so the GUIDs of the datastores all match up and I am able to just click on the VMX file, import it, and have a server back up and running in a few minutes.
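
For what it's worth, that manual import step can be scripted. Here's a rough pyVmomi sketch (the host name, datastore path, and credentials are placeholders, not anything from our environment) that registers a replicated VMX on a standalone DR host and powers it on:

```python
# Rough sketch only: register a replicated .vmx on a standalone ESXi host and power it on.
# The host name, credentials, and datastore path below are placeholders.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def wait_for(task):
    """Poll a vSphere task until it completes, raising on failure."""
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(1)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error
    return task.info.result


ctx = ssl._create_unverified_context()  # DR host with a self-signed cert
si = SmartConnect(host="dr-esxi-01.example.local", user="root",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]   # standalone host: single datacenter
    compute = datacenter.hostFolder.childEntity[0]   # its ComputeResource
    pool = compute.resourcePool

    # Register the VM straight off the snapmirrored NFS datastore...
    vm = wait_for(datacenter.vmFolder.RegisterVM_Task(
        path="[dr_nfs_datastore] someserver/someserver.vmx",
        asTemplate=False,
        pool=pool))

    # ...then power it on. If ESXi raises the "moved or copied?" question,
    # it has to be answered (vm.AnswerVM) before the power-on completes.
    wait_for(vm.PowerOnVM_Task())
finally:
    Disconnect(si)
```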

But now the money has come through to license the other side as a cluster with another vSphere instance in linked mode and to upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes, on two different clusters at completely different sites, backed by completely different NetApp units, and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different, or will the GUIDs continue to match up and potentially cause problems?

Nebulis01
Dec 30, 2003
Technical Support Ninny

EuphrosyneD posted:

Newbie question about automatic virtual machine activation in Hyper-V ahead.

Can I build the machines I want to activate on Windows 8.1's Hyper-V ahead of time, then migrate them to another Hyper-V host based on WS2012R2 Datacenter, and expect the activation to work?

Or do the guests have to be built on the host they'll activate on?

You can build them on Client Hyper-V and replicate or export/import them; it works just fine, and they will activate when the Windows license service runs its next license check (or on boot).
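
If you want to sanity-check a guest after the import, AVMA just needs the edition-specific AVMA client key installed in the guest. A hedged sketch of that check, wrapping the built-in slmgr.vbs from Python inside the guest (the key string is a placeholder; look up the real AVMA key for your guest edition in Microsoft's docs):

```python
# Sketch: install the AVMA client key and show license status inside the guest.
# The key below is a placeholder; use the edition-specific AVMA key from Microsoft's
# Automatic Virtual Machine Activation documentation.
import subprocess

AVMA_KEY = "<AVMA key for the guest's edition>"  # placeholder, not a real key


def slmgr(*args):
    """Run the built-in Windows Software Licensing Management Tool (slmgr.vbs)."""
    return subprocess.run(
        ["cscript", "//nologo", r"C:\Windows\System32\slmgr.vbs", *args],
        capture_output=True, text=True, check=False)


slmgr("/ipk", AVMA_KEY)       # install the AVMA client key if the image wasn't built with it
print(slmgr("/dlv").stdout)   # detailed license info; an AVMA guest should show VM activation details
```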

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

BangersInMyKnickers posted:

But now the money has come through to license the other side as a cluster with another vSphere instance in linked mode and to upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes, on two different clusters at completely different sites, backed by completely different NetApp units, and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different, or will the GUIDs continue to match up and potentially cause problems?

Even in linked mode, each vCenter still interacts with its objects as separate entities. NetApp VSC is not supported in linked mode though, so be mindful of that. That said, there was no need to have matching IP schemes for the storage network at both sites for DR to work.

along the way
Jan 18, 2009
I want to set up a home lab for VMware and I'm thinking of buying a small (120-240GB) OS drive in combination with either a 1TB SSD or a 2-4TB HDD.

Will there be that much of a performance hit going with HDD over SSD?

This is strictly for learning, so I don't need it to be screaming fast but I don't want to severely gimp my setup either.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
My home lab is a few old AMD PCs that I turn on as needed and an OpenIndiana NAS with 8x 1TB drives. It's plenty fast for what I do.

Erwin
Feb 17, 2006

along the way posted:

I want to set up a home lab for VMware and I'm thinking of buying a small (120-240GB) OS drive in combination with either a 1TB SSD or a 2-4TB HDD.

Will there be that much of a performance hit going with HDD over SSD?

This is strictly for learning, so I don't need it to be screaming fast but I don't want to severely gimp my setup either.

ESXi and not Workstation I assume? Install ESXi on a USB stick, use the small SSD as a fast tier, and the spinning disk as a slow tier. That'll get you thinking about vmdk layouts in general.

Kachunkachunk
Jun 6, 2011

BangersInMyKnickers posted:

But now the money has come through to license the other side as a cluster with another vSphere instance in linked mode and to upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes, on two different clusters at completely different sites, backed by completely different NetApp units, and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different, or will the GUIDs continue to match up and potentially cause problems?
Honestly, I don't have a good feeling about it. Since your GUIDs are not globally unique anymore, I think things will be a lot less deterministic than they should be.
Linked Mode and VC in general are products I'm less familiar with... but while I presume separate VCs will get away with this just fine, I think inventory service or search functionality could possibly break.

I'm inclined to say you can test/entertain it, but the moment things behave strangely, you would probably then want to start to plan for reconfiguration of the DR site's NFS mounts for a GUID change (unmount, unshare, rename the mountpoint or re-IP, then remount). The DR site should re-register VMs fairly painlessly, but indeed in some cases you have to re-configure the VM (RDMs had a gotcha, looking back at the ESX/ESXi 3.x days).

Even block device replication doesn't amount to non-unique GUIDs (naa/eui/t10 IDs), so I think a lot of products could misbehave.

Edit: You could probably just add a trailing slash for the mountpoint and it'll result in a GUID change (or remove the trailing slash, whichever makes sense).
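
To illustrate the idea (this is just a sketch, not ESXi's actual hashing scheme): the NFS datastore identifier is derived from the NAS host plus the exported path, so changing either one gives you a new GUID.

```python
# Illustration only: ESXi derives an NFS datastore UUID from the NFS server plus the mount path.
# The real algorithm is internal to ESXi; this hash just demonstrates why re-IPing the filer or
# changing the export path (even a trailing slash) produces a different datastore GUID.
import hashlib


def fake_nfs_datastore_id(nfs_host: str, export_path: str) -> str:
    digest = hashlib.sha1(f"{nfs_host}:{export_path}".encode()).hexdigest()
    return f"{digest[:8]}-{digest[8:16]}"


print(fake_nfs_datastore_id("10.0.50.10", "/vol/vm_datastore"))    # production filer
print(fake_nfs_datastore_id("10.0.60.10", "/vol/vm_datastore"))    # DR filer, different IP -> new ID
print(fake_nfs_datastore_id("10.0.50.10", "/vol/vm_datastore/"))   # same filer, trailing slash -> new ID
```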

Kachunkachunk fucked around with this message at 03:59 on Apr 2, 2015

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Erwin posted:

ESXi and not Workstation I assume? Install ESXi on a USB stick, use the small SSD as a fast tier, and the spinning disk as a slow tier. That'll get you thinking about vmdk layouts in general.

This is exactly what I am doing. Works great. I run a mix of "production" home crap on it, as well as spinning up and tearing down labs.

Kachunkachunk
Jun 6, 2011
I had a USB stick poo poo the bed and the ESXi 5.5 server kept running for several more months without me even noticing its boot device had dropped off. It's a bit irritating having USB sticks go bad, so I ended up going back to a hard drive I had lying around. The local storage space is used for replication of VMs off my NAS so there's some sort of NAS failure tolerance (with respect to data loss, not uptime).

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I bought a bad style of drive that would crap out. Noticed when VMware tools couldn't mount.

Ended up getting some tiny mostly metal thumb drives to replace them. If I have issues, I may move to internal SD card booting.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Kachunkachunk posted:

Honestly, I don't have a good feeling about it. Since your GUIDs are not globally unique anymore, I think things will be a lot less deterministic than they should be.
Linked Mode and VC in general are products I'm less familiar with... but while I presume separate VCs will get away with this just fine, I think inventory service or search functionality could possibly break.

I'm inclined to say you can test/entertain it, but the moment things behave strangely, you would probably then want to start to plan for reconfiguration of the DR site's NFS mounts for a GUID change (unmount, unshare, rename the mountpoint or re-IP, then remount). The DR site should re-register VMs fairly painlessly, but indeed in some cases you have to re-configure the VM (RDMs had a gotcha, looking back at the ESX/ESXi 3.x days).

Even block device replication doesn't amount to non-unique GUIDs (naa/eui/t10 IDs), so I think a lot of products could misbehave.

Edit: You could probably just add a trailing slash for the mountpoint and it'll result in a GUID change (or remove the trailing slash, whichever makes sense).

Great, thanks for the input, guys. I guess I will just bite the bullet and re-work the storage network at the DR site before I need the additional host capacity any more than I do right now. It's not like that problem will ever get better.

TeMpLaR
Jan 13, 2001

"Not A Crook"

BangersInMyKnickers posted:

Great, thanks for the input, guys. I guess I will just bite the bullet and re-work the storage network at the DR site before I need the additional host capacity any more than I do right now. It's not like that problem will ever get better.

If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

TeMpLaR posted:

If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR.

Yeah, that is going to be my recommendation. Gotta spend that money.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

BangersInMyKnickers posted:

Yeah, that is going to be my recommendation. Gotta spend that money.

Zerto is a really good replication tool as well; it's a little easier to manage than SRM and doesn't require storage integration.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Zerto is really awesome, and you should definitely look at it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

TeMpLaR posted:

If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR.
It's SO drat expensive now though. We had it for a while, but the maintenance was outrageous. We can do almost everything that SRM did in PowerShell.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

1000101 posted:

There's a lot of stuff to take into consideration here.

For example, storage. If you're not using something like EMC VPLEX then that means the VM is going to be accessing storage back in the original site and all of that traffic is going over your WAN. You could attempt to storage vmotion it to a local datastore at the remote site but again that's going to chew up a lot of bandwidth.

On the network side, not only do you need to make sure you've stretched the VLAN (using OTV, VXLAN or some other means) you need to think about how traffic flows to and from the VM. For example, if the default gateway is back in the other site and I'm trying to route to another subnet, that traffic has to go back to Seattle over the WAN link. You basically end up with this traffic tromboning effect. OTV (and VXLAN via NSX) can address some of this by making sure there's always a "local" default gateway no matter where the VM lives. You still need to consider who's talking to the VM or if the VM has traffic coming into it from the internet. For example a web server behind an F5 load balancer in Seattle that gets VMotioned over to Portland is still going to be accessed via Seattle (via your data center interconnect). A lot of these problems can be addressed with things like LISP but that is still pretty new.
Currently, our two offices are 192.168.0.0/24 and 192.168.1.0/24, with 20Mbps and 12Mbps internet/MPLS. I got approval to implement 50Mbps fiber internet "backed up" with 100Mbps microwave (burstable to 1Gbps) in Seattle, as well as 1Gbps layer 2 fiber between the two offices. I'll probably get a coax connection in Portland as a backup, but I'm planning on resubnetting both networks into a 172.16.x.x/16 network with DHCP breaking sections up. Portland internet will go over the 1Gbps pipe and out over the microwave, all while saving at least 10k a year, probably more once I deal with the voice side of things. I had no idea the CFO was actually going to bite on this. I don't really need the 2nd Sophos we just bought since I can just plug the fiber hand-off directly into the switches... but I suppose I can keep it to be able to fire up an IPsec tunnel in the event the fiber between the offices goes down.

e: Currently, there are 3 hosts and a SAN in Seattle and one server in Portland. Besides one server that's comparatively new in Seattle, all of the hardware is in excess of 6 years old. I'm hoping to do a full refresh of the hosts, core switches and SAN in Seattle this year, then relegate the existing equipment to Portland to act as a DR site. My main concern is the hardware age creating compatibility issues. Two hosts are technically only supported up to 4.1 and we're running 5.1, but who knows if they'll work with anything newer.

goobernoodles fucked around with this message at 03:59 on Apr 3, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

goobernoodles posted:

Currently, our two offices are 192.168.0.0/24 and 192.168.1.0/24, with 20Mbps and 12Mbps internet/MPLS. I got approval to implement 50Mbps fiber internet "backed up" with 100Mbps microwave (burstable to 1Gbps) in Seattle, as well as 1Gbps layer 2 fiber between the two offices. I'll probably get a coax connection in Portland as a backup, but I'm planning on resubnetting both networks into a 172.16.x.x/16 network with DHCP breaking sections up. Portland internet will go over the 1Gbps pipe and out over the microwave, all while saving at least 10k a year, probably more once I deal with the voice side of things. I had no idea the CFO was actually going to bite on this. I don't really need the 2nd Sophos we just bought since I can just plug the fiber hand-off directly into the switches... but I suppose I can keep it to be able to fire up an IPsec tunnel in the event the fiber between the offices goes down.

e: Currently, there are 3 hosts and a SAN in Seattle and one server in Portland. Besides one server that's comparatively new in Seattle, all of the hardware is in excess of 6 years old. I'm hoping to do a full refresh of the hosts, core switches and SAN in Seattle this year, then relegate the existing equipment to Portland to act as a DR site. My main concern is the hardware age creating compatibility issues. Two hosts are technically only supported up to 4.1 and we're running 5.1, but who knows if they'll work with anything newer.

Wide area vmotion/svmotion is of limited utility without something like vplex providing transparent storage level movement between the two sites. You're unlikely to be able to move more than one or two VMs at a time so a full site move would take a very long time and it's not useful at all in a DR scenario when your primary site goes down unexpectedly because the VM storage still lives at one site or the other and it can't be moved when that site burns down.

You could do hypervisor or storage level replication for DR, but then what's the benefit of moving live VM to the same site that already has a replicated copy of the data?

Layer 2 adjacency is still nice to have for DR purposes, but I'm wondering what cross site vmotion buys you in this scenario.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

goobernoodles posted:

Currently, our two offices are 192.168.0.0/24 and 192.168.1.0/24, with 20Mbps and 12Mbps internet/MPLS. I got approval to implement 50Mbps fiber internet "backed up" with 100Mbps microwave (burstable to 1Gbps) in Seattle, as well as 1Gbps layer 2 fiber between the two offices. I'll probably get a coax connection in Portland as a backup, but I'm planning on resubnetting both networks into a 172.16.x.x/16 network with DHCP breaking sections up. Portland internet will go over the 1Gbps pipe and out over the microwave, all while saving at least 10k a year, probably more once I deal with the voice side of things. I had no idea the CFO was actually going to bite on this. I don't really need the 2nd Sophos we just bought since I can just plug the fiber hand-off directly into the switches... but I suppose I can keep it to be able to fire up an IPsec tunnel in the event the fiber between the offices goes down.

e: Currently, there are 3 hosts and a SAN in Seattle and one server in Portland. Besides one server that's comparatively new in Seattle, all of the hardware is in excess of 6 years old. I'm hoping to do a full refresh of the hosts, core switches and SAN in Seattle this year, then relegate the existing equipment to Portland to act as a DR site. My main concern is the hardware age creating compatibility issues. Two hosts are technically only supported up to 4.1 and we're running 5.1, but who knows if they'll work with anything newer.

You probably won't have everything in place that you'd want for proper vMotion between sites, but you certainly have other options!

Also, make sure you subnet out that 172.16.0.0/16 network into smaller networks. Don't just have one big subnet with everything in it or you'll regret it later. Ideally you should be routing between all sites over a point-to-point link to prevent L2 issues from impacting everyone. This is true even if you happen to want to stretch L2 between sites (you should be doing some kind of L2 VPN for this, so you'll probably have to buy an ASR 1000 for each site and run OTV or something).

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

NippleFloss posted:

Layer 2 adjacency is still nice to have for DR purposes, but I'm wondering what cross site vmotion buys you in this scenario.
I'm not thinking of using it for DR purposes. Our Veeam backups reside on Quantum DXi4601's that cross replicate from SEA<>POR. I would just fire up backups in the event of a disaster. What I'm hoping to use it for is to be able to vmotion a handful of core servers - primary DC, exchange, and a few application servers before maintenance that would otherwise take these servers down. I've been slowly cleaning up this hodgepodge for a while, so I don't think it will be as frequent of a requirement moving forward. That said, it would give me the freedom to shut physical servers down at a site, and physically clean things up with minimal impact. I could make a lot more progress if I didn't have to wait until off hours (and motivating myself to work on a saturday etc) to do tasks that aren't critical, but should be done.

The opinion of my MS 70-410 class instructor is to just do one big 172.16.0.0/16 subnet and that subnetting is for "people who want to show how cool they are." I literally just wrapped my head around subnetting for the first time this week, so... networking definitely is not my strong suit. That said, it seemed like the only reason he said this was because of the auto-populating subnet address in some cases. Seems like that's a pretty trivial reason not to segregate a network, especially assuming DHCP is set up correctly.

KS
Jun 10, 2003
Outrageous Lumpwad

goobernoodles posted:

The opinion of my MS 70-410 class instructor is to just do one big 172.16.0.0/16 subnet and that subnetting is for "people who want to show how cool they are."
That is loving stupid. Question everything else you're "learning" from him. It's actually critically important in an environment where L2 is spanned across sites, because you want to be very careful and deliberate about traffic going across your WAN.

goobernoodles posted:

What I'm hoping to use it for is to be able to vmotion a handful of core servers - primary DC, exchange, and a few application servers before maintenance that would otherwise take these servers down.

Sorry, but so is this. DCs are highly available -- just stand up another one in the other site. Use a DAG for Exchange. Active/active clusters are pretty complex undertakings: you need spanned L2 networks, and you need a redundant highly available WAN since you're running storage traffic over it. Candidly, you don't sound qualified to design this stuff if you're questioning the need to subnet.

goobernoodles posted:

That said, it would give me the freedom to shut physical servers down at a site, and physically clean things up with minimal impact.
A single site cluster gives you this as long as you have N+1 capacity. It's kinda the whole point.

KS fucked around with this message at 18:05 on Apr 3, 2015

Moey
Oct 22, 2010

I LIKE TO MOVE IT

KS posted:

That is loving stupid. Question everything else you're "learning" from him.

This.

Also how big is this company? Are you the only IT person?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

"Subnetting is for people trying to look cool" may be the dumbest IT related statement I've ever heard, and I once had a director of IT infrastructure ask how much room in the budget we needed to purchase all of the virtual switches and virtual nics we would need to connect in our virtual servers.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Subnets are a crutch :smuggo:

Sometimes the number of people in IT who teach, spread, or hold an understanding of completely wrong information and misinformation boggles my mind.

Right now I'm working on cleaning up a 100-host VMware datacenter that does not have multipathing enabled on any host or datastore. Fun times. The company was under the impression from the previous guy that they were in a completely fault tolerant/high availability scenario.

Mr Shiny Pants
Nov 12, 2012

KS posted:

That is loving stupid. Question everything else you're "learning" from him. It's actually critically important in an environment where L2 is spanned across sites, because you want to be very careful and deliberate about traffic going across your WAN.


Sorry, but so is this. DCs are highly available -- just stand up another one in the other site. Use a DAG for Exchange. Active/active clusters are pretty complex undertakings: you need spanned L2 networks, and you need a redundant highly available WAN since you're running storage traffic over it. Candidly, you don't sound qualified to design this stuff if you're questioning the need to subnet.

A single site cluster gives you this as long as you have N+1 capacity. It's kinda the whole point.

If the WAN is just a leg in the network, an extension of the normal network, why would you need to worry about traffic going over it? You want it as transparent as possible. Granted you would put servers that need services from each other close together.

You can have affinity with servers located close to where your users are so as to not make the WAN a bottleneck.

We have a 100Mbit link to another DC that also runs the same IP space, so in the event of a failover we can just power on servers without needing to worry about reassigning IP addresses or DNS entries.

If the connection is reliable (get at least two, from different providers), I don't see the need to split up your network and make things harder for yourself in the event of a failover.

Mr Shiny Pants fucked around with this message at 18:22 on Apr 3, 2015

Erwin
Feb 17, 2006

NippleFloss posted:

"Subnetting is for people trying to look cool" may be the dumbest IT related statement I've ever heard, and I once had a director of IT infrastructure ask how much room in the budget we needed to purchase all of the virtual switches and virtual nics we would need to connect in our virtual servers.

You say this as if IT companies aren't known for charging for software features.

Mr Shiny Pants
Nov 12, 2012

Erwin posted:

You say this as if IT companies aren't known for charging for software features.

Cisco Nexus VM anyone?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Erwin posted:

You say this as if IT companies aren't known for charging for software features.
As though there aren't commercially-available virtual switches like the Nexus 1000V for VMware or a dozen different vendor network plugins for OpenStack

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
The best example is how IBM charges you to enable processor cores on already purchased and installed processors on the iSeries devices (or whatever they call the power series now).

Richard Noggin
Jun 6, 2005
Redneck By Default
People still buy IBM?

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


adorai posted:

The best example is how IBM charges you to enable processor cores on already purchased and installed processors on the iSeries devices (or whatever they call the power series now).

On the larger systems this is a common option. You have Capacity on Demand, where you can temporarily unlock more cores for performance testing. You save money, and it's a good deal if you're expecting growth or sudden demand.

Curious, are there any x86 providers that offer the same?

Richard Noggin posted:

People still buy IBM?

IBM has essentially sold most of its x86 hardware to Lenovo. On the other hand, IBM Power Systems is their special baby and appears to hold a lot of potential.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

As though there aren't commercially-available virtual switches like the Nexus 1000V for VMware or a dozen different vendor network plugins for OpenStack

This was back when the standard vSwitch was the only option, and the question wasn't about licensing; it was literally about whether we needed to purchase NICs to install in virtual servers to connect them to virtual switches. Like, as if there were an actual virtual NIC card that came in a box and we needed 200 of them for our virtual servers.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


NippleFloss posted:

This was back when the standard vSwitch was the only option, and the question wasn't about licensing; it was literally about whether we needed to purchase NICs to install in virtual servers to connect them to virtual switches. Like, as if there were an actual virtual NIC card that came in a box and we needed 200 of them for our virtual servers.

I'm totally confused, what is going on here?

Richard Noggin
Jun 6, 2005
Redneck By Default

Tab8715 posted:

On the larger systems this is a common option. You have Capacity on Demand, where you can temporarily unlock more cores for performance testing. You save money, and it's a good deal if you're expecting growth or sudden demand.

But...you've already purchased the cores!

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Moey posted:

This.

Also how big is this company? Are you the only IT person?
Construction GC with ~195 employees, but a good chunk of those are field guys. Roughly 125 users with computers, ~24 VMs, and a 3-host cluster in Seattle with a SAN. Single host in Portland running on local storage with no spare capacity. They're under one vCenter server. I'm going to add some drives to a spare server and move it down to PDX ASAP to give me some capacity to start doing needed server migrations and to finally replace all of the 2003 servers that should have been gone a long time ago. Full refresh in SEA as soon as I can convince the president to pull the trigger.

Yes, I'm the only IT person. I was hired as the "help desk coordinator" back in 2010, and my then-boss quit and went off on a job-hopping spree for 3 years. He kindly recommended I take on his role, management agreed, and I got left with an out-of-date, crumbling environment without much experience in a lot of the areas I now support. I have no degree and 9 years of experience. I was left with a dumpster fire of an environment and of course I've made some mistakes due to lack of knowledge and experience. But compared to a year ago or three years ago, the differences are night and day.

KS posted:

That is loving stupid. Question everything else you're "learning" from him. It's actually critically important in an environment where L2 is spanned across sites, because you want to be very careful and deliberate about traffic going across your WAN.
Well, that was my impression as well, which is why I posted it. My initial impression was to use /23 or /24 subnets to separate servers, wired users, and wireless at each site, as well as VPN users. I was going to bounce some ideas off of a network engineer friend to see what his thoughts were. Probably better than asking on here. Christ.

KS posted:

Sorry, but so is this. DCs are highly available -- just stand up another one in the other site. Use a DAG for Exchange. Active/active clusters are pretty complex undertakings: you need spanned L2 networks, and you need a redundant highly available WAN since you're running storage traffic over it. Candidly, you don't sound qualified to design this stuff if you're questioning the need to subnet.

You're a real peach.

KS posted:

A single site cluster gives you this as long as you have N+1 capacity. It's kinda the whole point.
Yeah, except when that cluster has a clusterfuck of a cabling situation that I need to fix. The rack was installed so close to the wall I don't ever want to gently caress with cabling when there's I/O.

I don't have a massive environment or strict requirements for uptime. I was just simply exploring the possibility of running servers from Seattle in Portland in the event I need to take the servers down for whatever reason. poo poo, they don't even NEED to be on during the transfer.

goobernoodles fucked around with this message at 21:22 on Apr 3, 2015

Moey
Oct 22, 2010

I LIKE TO MOVE IT

goobernoodles posted:

My initial impression was to use /23 or /24 subnets to separate servers, wired users, and wireless at each site, as well as VPN users. I was going to bounce some ideas off of a network engineer friend to see what his thoughts were. Probably better than asking on here.

You will get some good sound advice from most folks on here. As for your network segments, I am doing a very similar setup (which is common). Each of my sites has different subnets for different roles. Production wired, production wireless, servers, iSCSI, vMotion, DMZ, guest wireless.... VPN sites get their own network segments as well as a segment for client VPNs.
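
If it helps to visualize the carve-up, here's a quick sketch using Python's ipaddress module to split a 172.16.0.0/16 into per-site /24s for those roles (the role list and site order are just examples, not a recommended addressing plan):

```python
# Sketch: carve 172.16.0.0/16 into per-site /24s, one per role. The roles and sites
# are illustrative, not a recommended addressing plan.
import ipaddress

supernet = ipaddress.ip_network("172.16.0.0/16")
roles = ["wired", "wireless", "servers", "iSCSI", "vMotion", "DMZ", "guest wifi", "client VPN"]
sites = ["Seattle", "Portland"]

subnets = supernet.subnets(new_prefix=24)  # generator yielding 256 /24s in order

plan = {}
for site in sites:
    for role in roles:
        plan[(site, role)] = next(subnets)

for (site, role), net in plan.items():
    print(f"{site:9} {role:11} {net}")
```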

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

goobernoodles posted:

I don't have a massive environment or strict requirements for uptime. I was just simply exploring the possibility of running servers from Seattle in Portland in the event I need to take the servers down for whatever reason. poo poo, they don't even NEED to be on during the transfer.

This is a good thing to know. With this, I would consider using something like Zerto and having it change the IP/update DNS for you during these types of events. It's going to be a lot cheaper to set up, and a lot more reliable, than a stretched layer 2 environment. Some services, like AD, may have built-in functionality to provide multi-site availability, so it's worth using that to further simplify your infrastructure.

Another option, if you're looking to build up your network chops, is to make the remote site the layer 2 "equivalent." I did something like this for a customer last year for their DR plan because we had ~1000 systems and weren't allowed to change IPs on failover. The problem was the DR site was 2600 miles away. What we ultimately decided on was to stop advertising the server networks in OSPF from the primary site and start advertising them from the secondary site, where we'd spin up a copy of the servers. You probably won't have quite the same equipment budget, but it is possible to get something similar online.

Stealthgerbil
Dec 16, 2004


Dealing with a DPC latency issue with Hyper-V running on Server 2012 and it's driving me crazy. From my research, it's a problem with the network adapter handling traffic. Does anyone have any resources on how to properly set up Hyper-V and my guest machines to reduce the DPC latency as much as possible?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

If my reading of the release notes is right, NetApp VSC 6.0 now supports linked mode configurations for vSphere, with some other caveats that might apply to you.

Pile Of Garbage
May 28, 2007



BangersInMyKnickers posted:

If my reading of the release notes is right, NetApp VSC 6.0 now supports linked mode configurations for vSphere, with some other caveats that might apply to you.

I can confirm that it supports linked mode (we had a project that was held up by NetApp lagging on releasing the new version of VSC).

  • Reply