|
We're moving from an active-standby cluster config to active-active with cross-replication of the storage. UNFORTUNATELY, our current active-standby configuration was done on the cheap and I am concerned I am going to have conflicts on the storage side of things. We started out with a single active cluster with NFS datastores on a NetApp unit, with zero DR site or plan. We were then given an unfunded edict to come up with a DR plan that can get us running at a different site within a day. A small amount of money was allocated for a baby-garbage-SATA-only NetApp to act as the snapmirror destination and I dusted off some old retired ESXi hosts to run in stand-alone mode over there if we need it. Snapmirrors of the volumes get shipped to the DR NetApp unit every night, and the standalone ESXi hosts have the same storage network IP mapping as the production site, so the GUIDs of the datastores all match up and I am able to just click on the VMX file, import, and have a server back up and running in a few minutes. But now the money came through to license the other side as a cluster with another vSphere instance in linked mode and upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes, on two different clusters at completely different sites, backed by completely different NetApp units, and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different, or will the GUIDs continue to match up and potentially cause problems?
|
# ? Apr 1, 2015 17:05 |
|
|
EuphrosyneD posted:Newbie question about automatic virtual machine activation in Hyper-V ahead. You can build them on Client Hyper-V and replicate or import/export; it works just fine and they will activate when the Windows license service runs its next license check (or on boot)
|
# ? Apr 1, 2015 21:31 |
|
BangersInMyKnickers posted:But now the money came through to license the other side as a cluster with another vSphere instance in linked mode and upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes on two different clusters at completely different sites and being backed by completely different NetApp units and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different or will the GUIDs continue to match up and potentially cause problems? Even in linked mode each vCenter still interacts with its objects as separate entities. NetApp VSC is not supported in linked mode though, so be mindful of that. That said, there was no need to have matching IP schemes for the storage network at both sides for DR to work.
|
# ? Apr 1, 2015 22:53 |
|
I want to set up a home lab for VMware and I'm thinking of buying a small (120-240GB) OS drive in combination with either a 1TB SSD or a 2-4TB HDD. Will there be that much of a performance hit going with HDD over SSD? This is strictly for learning, so I don't need it to be screaming fast, but I don't want to severely gimp my setup either.
|
# ? Apr 1, 2015 23:14 |
|
My home lab is a few old AMD PCs that I turn on as needed and an openindiana NAS with 8x 1TB drives. It's plenty fast for what I do.
|
# ? Apr 1, 2015 23:29 |
|
along the way posted:I want to setup a home lab for VMWare and I'm thinking of buying a small (120-240GB) OS drive in combination with either a 1TB SSD or 2-4TB HDD. ESXi and not Workstation I assume? Install ESXi on a USB stick, use the small SSD as a fast tier, and the spinning disk as a slow tier. That'll get you thinking about vmdk layouts in general.
|
# ? Apr 1, 2015 23:32 |
|
BangersInMyKnickers posted:But now the money came through to license the other side as a cluster with another vSphere instance in linked mode and upgrade the NetApps to matching pairs, and I don't know how vSphere will handle it. Both sides will see what I assume they will believe is the same storage array and volumes on two different clusters at completely different sites and being backed by completely different NetApp units and I could see that completely loving up vCenter, but I'm not sure. Does assigning my second cluster to its own site in the console force the system to understand that the datastores on each end are completely different or will the GUIDs continue to match up and potentially cause problems? Linked Mode and VC in general are products I'm less familiar with... but while I presume separate VCs will get away with this just fine, I think inventory service or search functionality could possibly break. I'm inclined to say you can test/entertain it, but the moment things behave strangely, you would probably want to plan to reconfigure the DR site's NFS mounts to force a GUID change (unmount, unshare, rename the mountpoint or re-IP, then remount). The DR site should re-register VMs fairly painlessly, but in some cases you do have to re-configure the VM (RDMs had a gotcha, looking back at the ESX/ESXi 3.x days). Even block device replication doesn't produce non-unique GUIDs (the replicated LUN gets its own naa/eui/t10 ID), so a lot of products assume uniqueness and I think could misbehave. Edit: You could probably just add a trailing slash to the mountpoint and it'll result in a GUID change (or remove the trailing slash, whichever makes sense). Kachunkachunk fucked around with this message at 03:59 on Apr 2, 2015 |
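To see why the trailing-slash trick works: the NFS datastore GUID is derived from the mount parameters (server address plus export path), so any change to the path string yields a different GUID. A toy sketch of the principle in Python; the MD5 here is purely illustrative and is not vSphere's actual hash, and the server/path values are made up:

```python
import hashlib

def fake_datastore_guid(server: str, export_path: str) -> str:
    """Illustrative only: derive an ID from the server/path pair."""
    digest = hashlib.md5(f"{server}:{export_path}".encode()).hexdigest()
    return digest[:16]

# Same export, mounted with and without a trailing slash:
with_slash = fake_datastore_guid("10.0.0.50", "/vol/vm_datastore/")
without_slash = fake_datastore_guid("10.0.0.50", "/vol/vm_datastore")
print(with_slash != without_slash)  # True: the IDs no longer collide
```

Since both sites currently mount the same storage IP and path, their GUIDs collide; changing either component on the DR side breaks the collision.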
# ? Apr 1, 2015 23:36 |
|
Erwin posted:ESXi and not Workstation I assume? Install ESXi on a USB stick, use the small SSD as a fast tier, and the spinning disk as a slow tier. That'll get you thinking about vmdk layouts in general. This is exactly what I am doing. Works great. I run a mix of "production" home crap, as well as spin up and tear down labs on it.
|
# ? Apr 1, 2015 23:47 |
|
I had a USB stick poo poo the bed and the ESXi 5.5 server kept running for several more months without me even noticing its boot device had dropped off. It's a bit irritating having USB sticks go bad, so I ended up going back to a hard drive I had lying around. The local storage space is used for replication of VMs off my NAS so there's some sort of NAS failure tolerance (with respect to data loss, not uptime).
|
# ? Apr 2, 2015 04:02 |
|
I bought a bad style of drive that would crap out. Noticed when VMware tools couldn't mount. Ended up getting some tiny mostly metal thumb drives to replace them. If I have issues, I may move to internal SD card booting.
|
# ? Apr 2, 2015 04:30 |
|
Kachunkachunk posted:Honestly, I don't have a good feeling about it. Since your GUIDs are not globally unique anymore, then I think things will be a lot less deterministic than they should be. Great, thanks for the input guys. I guess I will just bite the bullet and re-work the storage network at the DR site before I need the additional host capacity any more than I do right now. It's not like that problem will ever get better.
|
# ? Apr 2, 2015 15:03 |
|
BangersInMyKnickers posted:Great, thanks for the input guys. I'll guess I will just bite the bullet and re-work the storage network at the DR site before I need the additional host capacity any more than I do right now. Its not like that problem will get better, ever. If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR.
|
# ? Apr 2, 2015 17:19 |
|
TeMpLaR posted:If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR. Yeah, that is going to be my recommendation. Gotta spend that money.
|
# ? Apr 2, 2015 18:30 |
|
BangersInMyKnickers posted:Yeah, that is going to be my recommendation. Gotta spend that money. Zerto is a really good replication tool as well that is a little bit easier to manage than SRM and that doesn't require storage integration.
|
# ? Apr 2, 2015 20:16 |
|
Zerto is really awesome, and you should definitely look at it.
|
# ? Apr 2, 2015 22:09 |
|
TeMpLaR posted:If you are willing to take a few minutes of downtime it sounds like SRM will do what you are looking for. Check it out, I use it in Linked Mode on all NetApp for DR.
|
# ? Apr 3, 2015 01:24 |
|
1000101 posted:There's a lot of stuff to take into consideration here. e: Currently, there's 3 hosts and a SAN in Seattle and one server in Portland. Besides one server that's comparatively new in Seattle, all of the hardware is in excess of 6 years old. I'm hoping to do a full refresh of the hosts, core switches and SAN in Seattle this year, then relegate the existing equipment to Portland to act as a DR site. My main concern is the hardware's age creating compatibility issues. Two hosts are technically only supported up to 4.1 and we're running 5.1, but who knows if they'll work with anything newer. goobernoodles fucked around with this message at 03:59 on Apr 3, 2015 |
# ? Apr 3, 2015 03:48 |
|
goobernoodles posted:Currently, our two offices are 192.168.0.0/24 and 192.168.1.0/24. 20Mbps and 12Mbps internet/mpls. I got approval to implement 50Mbps fiber internet "backed up" with 100Mbps microwave that is burstable to 1Gbps in Seattle as well as 1Gbps layer 2 fiber between the two offices. I'll probably get a coax connection in Portland as a backup, but I'm planning on resubnetting both networks into a 172.16.x.x/16 network with DHCP breaking sections up. Portland internet will go over the 1Gbps pipe, out over the microwave. While saving at least 10k a year, probably more once I deal with the voice side of things. I had no idea the CFO was actually going to bite on this. I don't really need the 2nd Sophos we just bought since I can just plug the fiber hand-off directly into the switches... but I suppose I can keep it to be able to fire up an ipsec tunnel in the event the fiber between the offices goes down. Wide area vmotion/svmotion is of limited utility without something like vplex providing transparent storage level movement between the two sites. You're unlikely to be able to move more than one or two VMs at a time, so a full site move would take a very long time, and it's not useful at all in a DR scenario when your primary site goes down unexpectedly, because the VM storage still lives at one site or the other and can't be moved after that site burns down. You could do hypervisor or storage level replication for DR, but then what's the benefit of moving a live VM to the same site that already has a replicated copy of the data? Layer 2 adjacency is still nice to have for DR purposes, but I'm wondering what cross site vmotion buys you in this scenario.
|
# ? Apr 3, 2015 05:29 |
|
goobernoodles posted:Currently, our two offices are 192.168.0.0/24 and 192.168.1.0/24. 20Mbps and 12Mbps internet/mpls. I got approval to implement 50Mbps fiber internet "backed up" with 100Mbps microwave that is burstable to 1Gbps in Seattle as well as 1Gbps layer 2 fiber between the two offices. I'll probably get a coax connection in Portland as a backup, but I'm planning on resubnetting both networks into a 172.16.x.x/16 network with DHCP breaking sections up. Portland internet will go over the 1Gbps pipe, out over the microwave. While saving at least 10k a year, probably more once I deal with the voice side of things. I had no idea the CFO was actually going to bite on this. I don't really need the 2nd Sophos we just bought since I can just plug the fiber hand-off directly into the switches... but I suppose I can keep it to be able to fire up an ipsec tunnel in the event the fiber between the offices goes down. You probably won't have everything in place to do proper vMotion between sites, but you certainly have other options! Also, make sure you subnet out that 172.16.0.0/16 network into smaller networks. Don't just have one big subnet with everything in it or you'll regret it later. Ideally you should be routing between all sites over a point to point link to prevent l2 issues from impacting everyone. This is true even if you happen to want to stretch l2 between sites (you should be doing some kind of l2 VPN for this so you'll probably have to buy an ASR 1000 for each site and run OTV or something.)
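On the subnetting point, Python's stdlib ipaddress module is a quick way to sanity-check a carve-up of 172.16.0.0/16 before committing DHCP scopes; the role assignments below are just hypothetical examples:

```python
import ipaddress

site = ipaddress.ip_network("172.16.0.0/16")

# Split into per-role /24s instead of one flat broadcast domain.
subnets = list(site.subnets(new_prefix=24))
print(len(subnets))       # 256 available /24s
servers = subnets[0]      # 172.16.0.0/24 (hypothetical: servers)
wired = subnets[1]        # 172.16.1.0/24 (hypothetical: wired users)

# Verify a host address lands in the scope you expect.
print(ipaddress.ip_address("172.16.1.77") in wired)    # True
print(ipaddress.ip_address("172.16.1.77") in servers)  # False
```

The same membership check catches overlapping scopes before a misconfigured DHCP server does.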
|
# ? Apr 3, 2015 05:45 |
|
NippleFloss posted:Layer 2 adjacency is still nice to have for DR purposes, but I'm wondering what cross site vmotion buys you in this scenario. The opinion of my MS 70-410 class instructor is to just do one big 172.16.0.0/16 subnet and that subnetting is for "people who want to show how cool they are." I literally just wrapped my head around subnetting for the first time this week, so... networking definitely is not my strong suit. That said, it seemed like the only reason he said this was because of the auto-populating subnet address in some cases. Seems like that's a pretty trivial reason not to segregate a network, especially assuming DHCP is set up correctly.
|
# ? Apr 3, 2015 15:51 |
|
goobernoodles posted:The opinion of my MS 70-410 class instructor is to just do one big 172.16.0.0/16 subnet and that subnetting is for "people who want to show how cool they are." goobernoodles posted:What I'm hoping to use it for is to be able to vmotion a handful of core servers - primary DC, exchange, and a few application servers before maintenance that would otherwise take these servers down. Sorry, but so is this. DCs are highly available -- just stand up another one in the other site. Use a DAG for Exchange. Active/active clusters are pretty complex undertakings: you need spanned L2 networks, and you need a redundant highly available WAN since you're running storage traffic over it. Candidly, you don't sound qualified to design this stuff if you're questioning the need to subnet. goobernoodles posted:That said, it would give me the freedom to shut physical servers down at a site, and physically clean things up with minimal impact. KS fucked around with this message at 18:05 on Apr 3, 2015 |
# ? Apr 3, 2015 17:52 |
|
KS posted:That is loving stupid. Question everything else you're "learning" from him. This. Also how big is this company? Are you the only IT person?
|
# ? Apr 3, 2015 17:55 |
|
"Subnetting is for people trying to look cool" may be the dumbest IT related statement I've ever heard, and I once had a director of IT infrastructure ask how much room in the budget we needed to purchase all of the virtual switches and virtual nics we would need to connect in our virtual servers.
|
# ? Apr 3, 2015 18:11 |
|
Subnets are a crutch Sometimes the number of people in IT who teach, spread, or sincerely believe completely wrong information boggles my mind. Right now I'm working on cleaning up a 100-host VMware datacenter that does not have multipathing enabled on any host or datastore. Fun times. The company was under the impression from the previous guy that they were in a completely fault-tolerant/high-availability scenario.
|
# ? Apr 3, 2015 18:15 |
|
KS posted:That is loving stupid. Question everything else you're "learning" from him. It's actually critically important in an environment where L2 is spanned across sites, because you want to be very careful and deliberate about traffic going across your WAN. If the WAN is just a leg in the network, an extension of the normal network, why would you need to worry about traffic going over it? You want it as transparent as possible. Granted, you would put servers that need services from each other close together, and you can have affinity with servers located close to where your users are so as to not make the WAN a bottleneck. We have a 100 Mbit link to another DC that also runs the same IP space, so in the event of a failover we can just power on servers without needing to worry about reassigning IP addresses or DNS entries. If the connection is reliable (get at least two, from different providers), I don't see the need to split up your network and make things harder for yourself in the event of a failover. Mr Shiny Pants fucked around with this message at 18:22 on Apr 3, 2015 |
# ? Apr 3, 2015 18:16 |
|
NippleFloss posted:"Subnetting is for people trying to look cool" may be the dumbest IT related statement I've ever heard, and I once had a director of IT infrastructure ask how much room in the budget we needed to purchase all of the virtual switches and virtual nics we would need to connect in our virtual servers. You say this as if IT companies aren't known for charging for software features.
|
# ? Apr 3, 2015 18:42 |
|
Erwin posted:You say this as if IT companies aren't known for charging for software features. Cisco Nexus VM anyone?
|
# ? Apr 3, 2015 18:45 |
|
Erwin posted:You say this as if IT companies aren't known for charging for software features. As though there aren't commercially-available virtual switches like the Nexus 1000V for VMware or a dozen different vendor network plugins for OpenStack
|
# ? Apr 3, 2015 18:47 |
|
The best example is how IBM charges you to enable processor cores on already purchased and installed processors on the iSeries devices (or whatever they call the power series now).
|
# ? Apr 3, 2015 19:59 |
|
People still buy IBM?
|
# ? Apr 3, 2015 20:35 |
|
adorai posted:The best example is how IBM charges you to enable processor cores on already purchased and installed processors on the iSeries devices (or whatever they call the power series now). On the larger systems this is a common option. You have Capacity on Demand, where you can temporarily unlock more cores for performance testing. You save money, and it's a good deal if you're expecting growth or sudden demand. Curious, are there any x86 providers that offer the same? Richard Noggin posted:People still buy IBM? IBM has essentially sold most of their x86 hardware to Lenovo. On the other hand, IBM Power Systems is their special baby and appears to hold a lot of potential.
|
# ? Apr 3, 2015 20:41 |
|
Misogynist posted:As though there aren't commercially-available virtual switches like the Nexus 1000V for VMware or a dozen different vendor network plugins for OpenStack This was back when standard vSwitch was the only option and also the question wasn't about licensing, it was literally about whether we needed to purchase nics to install in virtual servers to connect to virtual switches. Like, as if there were an actual virtual nic card that came in a box and we needed 200 of them for our virtual servers.
|
# ? Apr 3, 2015 20:45 |
|
NippleFloss posted:This was back when standard vSwitch was the only option and also the question wasn't about licensing, it was literally about whether we needed to purchase nics to install in virtual servers to connect to virtual switches. Like, as if there were an actual virtual nic card that came in a box and we needed 200 of them for our virtual servers. I'm totally confused, what is going on?
|
# ? Apr 3, 2015 20:46 |
|
Tab8715 posted:On the larger systems this is a common option. You have a Capacity on Demand where you can temporarily unlock more cores for performance testing. You save money and it's a good deal if you're expecting growth or sudden demand. But...you've already purchased the cores!
|
# ? Apr 3, 2015 21:00 |
|
Moey posted:This. Yes, I'm the only IT person. I was hired as the "help desk coordinator" back in 2010 and my then boss quit and went off on a job hopping spree for 3 years. He kindly recommended I take on his role, management agreed, I got left with an out of date, crumbling environment and without much experience in a lot of the areas I now support. I have no degree and 9 years experience. I was left with a dumpster fire of an environment and of course I've made some mistakes due to lack of knowledge and experience. But compared to a year ago or three years ago, the differences are night and day. KS posted:That is loving stupid. Question everything else you're "learning" from him. It's actually critically important in an environment where L2 is spanned across sites, because you want to be very careful and deliberate about traffic going across your WAN. KS posted:Sorry, but so is this. DCs are highly available -- just stand up another one in the other site. Use a DAG for Exchange. Active/active clusters are pretty complex undertakings: you need spanned L2 networks, and you need a redundant highly available WAN since you're running storage traffic over it. Candidly, you don't sound qualified to design this stuff if you're questioning the need to subnet. KS posted:A single site cluster gives you this as long as you have N+1 capacity. It's kinda the whole point. I don't have a massive environment or strict requirements for uptime. I was just simply exploring the possibility of running servers from Seattle in Portland in the event I need to take the servers down for whatever reason. poo poo, they don't even NEED to be on during the transfer. goobernoodles fucked around with this message at 21:22 on Apr 3, 2015 |
# ? Apr 3, 2015 21:19 |
|
goobernoodles posted:My initial impression was to use a /23 or /24 subnet mask, to separate servers, wired users, wireless of each site as well as vpn users. I was going to bounce some ideas off of a network engineer friend to see what his thoughts were. Probably better than asking on here. You will get some good sound advice from most folks on here. As for your network segments, I am doing a very similar setup (which is common). Each of my sites has different subnets for different roles. Production wired, production wireless, servers, iSCSI, vMotion, DMZ, guest wireless.... VPN sites get their own network segments as well as a segment for client VPNs.
|
# ? Apr 3, 2015 22:25 |
|
goobernoodles posted:I don't have a massive environment or strict requirements for uptime. I was just simply exploring the possibility of running servers from Seattle in Portland in the event I need to take the servers down for whatever reason. poo poo, they don't even NEED to be on during the transfer. This is a good thing to know. With this I would consider using something like Zerto and have it change the IP/update DNS for you during these types of events. It's going to be a lot cheaper to set up and a lot more reliable than a stretched layer 2 environment. Some services, like AD, have built-in functionality to provide multi-site availability, so it's worth using those to further simplify your infrastructure. Another option, if you're looking to build up your network chops, is to make the remote site the layer 2 "equivalent." I did something like this for a customer last year for the DR plan because we had ~1000 systems and we weren't allowed to change the IP on failover. The problem was the DR site was 2600 miles away. What we ultimately decided on was to stop advertising server networks in OSPF from the primary site and start advertising from the secondary site where we'd spin up a copy of the servers. You probably won't have quite the same equipment budget but it is possible to get something similar online.
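The route-flip described above can be sketched in Cisco-style OSPF config; all prefixes and process numbers here are hypothetical, and your platform's syntax may differ:

```
! Primary site router: normally originates the server prefix
router ospf 1
 network 172.16.10.0 0.0.0.255 area 0

! Failover: withdraw it at the primary...
router ospf 1
 no network 172.16.10.0 0.0.0.255 area 0

! ...then add the identical network statement on the DR site's router,
! so the same /24 becomes reachable at the DR site without re-IPing VMs.
```

The VMs keep their addresses; only the routing table's idea of where that /24 lives changes.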
|
# ? Apr 5, 2015 18:54 |
|
Dealing with a DPC latency issue with Hyper-V running on Server 2012 and it's driving me crazy. From my research, it is a problem with the network adapter handling traffic. Does anyone have any resources on how to properly set up Hyper-V and my guest machines to reduce DPC latency as much as possible?
|
# ? Apr 8, 2015 15:00 |
|
If my reading of the release notes is right, NetApp VSC 6.0 now supports linked mode configurations for vSphere. With some other caveats that might apply to you.
|
# ? Apr 9, 2015 19:52 |
|
|
BangersInMyKnickers posted:If my reading of the release notes is right, NetApp VSC 6.0 now supports linked mode configurations for vSphere. With some other caveats that might apply to you. I can confirm that it supports linked mode (We had a project which was held up by NetApp lagging on releasing the new version of VSC).
|
# ? Apr 9, 2015 19:55 |