|
Wicaeed posted:Can anyone tell me, on a brand new ESX host, what the following script outputs? vpxa, by default, is verbose. So technically you're not out of compliance here. Chances are you are logging a lot of events for other reasons and that needs to be looked at. Do you see log spew from vmkernel or syslog? Are you using EMC RecoverPoint for VMs? That poo poo's kdriver is chatty, almost to a fault.
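Not the script the poster meant, but for anyone wanting to check the vpxa log level themselves, a minimal sketch that parses it out of /etc/vmware/vpxa/vpxa.cfg (the XML layout assumed here is from memory and can vary by ESXi version):

```python
import xml.etree.ElementTree as ET

def vpxa_log_level(cfg_text):
    """Return the configured vpxa log level from vpxa.cfg XML.

    The <config><log><level> path is an assumption based on common
    ESXi layouts; verify against your host's actual file.
    """
    root = ET.fromstring(cfg_text)
    node = root.find("./log/level")
    return node.text if node is not None else None

# On a host you would read the real file:
# with open("/etc/vmware/vpxa/vpxa.cfg") as f:
#     print(vpxa_log_level(f.read()))

sample = "<config><log><level>verbose</level></log></config>"
print(vpxa_log_level(sample))  # verbose
```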
|
# ? Mar 6, 2018 09:12 |
|
hey guys, i'm trying to understand some best practice stuff with distributed switches - i'm using standard vswitches right now. in my current environment i have something like this:

vswitch0 -> vmnic0, vmnic1
management vmk, default vm network
connected to a couple of switch ports vlaned into my dmz from the switch side

vswitch1 -> vmnic2, vmnic3
storage vmk1, storage vmk2
connected to a couple of switch ports vlaned into my storage vlans from the switch side

vswitch2 -> vmnic4, vmnic5, vmnic6
bunch of portgroups segregated by vlan, vmotion vmk
connected to three switch ports trunking

so my question really, is there a best practice setup for networking the hosts? should i use the lacp features of a distributed switch to put all of the ports in aggregation and run everything as a single trunk link and separate out all my port groups as vlans, including my management and storage? these are all 1gb links, btw
|
# ? Mar 7, 2018 01:12 |
|
What's everyones favorite way of backing up a vcsa database. Preferably free and lovely.
|
# ? Mar 7, 2018 01:17 |
|
Methanar posted:What's everyones favorite way of backing up a vcsa database. Preferably free and lovely. Is the built in backup functionality not sufficient?
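For reference, the built-in backup in VCSA 6.5 is driven by the appliance REST API. A minimal sketch of building the job request body for POST /rest/appliance/recovery/backup/job (the endpoint path and field names are from memory and should be checked against your appliance's API explorer):

```python
import json

def build_backup_job(location, user, password, parts=("common",)):
    """Build the request body for the VCSA file-based backup job.

    Field names follow the 6.5 appliance API as I recall it; treat
    them as assumptions and verify against the API docs.
    """
    return {
        "piece": {
            "location_type": "SCP",      # also FTP/FTPS/HTTP/HTTPS
            "location": location,
            "location_user": user,
            "location_password": password,
            # "common" = config/inventory; "seat" = stats, events, tasks
            "parts": list(parts),
        }
    }

body = build_backup_job("backuphost:/backups/vcsa", "backup", "s3cret")
print(json.dumps(body, indent=2))
```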
|
# ? Mar 7, 2018 01:24 |
|
sudo rm -rf posted:hey guys, i'm trying to understand some best practice stuff with distributed switches - i'm using standard vswitches right now. Link aggregation (with LACP) is of limited value for vmkernel traffic, particularly storage traffic, at least in terms of bandwidth aggregation. Distributed switches have some pretty painful failure modes as well, so I don’t like using them for infrastructure traffic. There’s no right or wrong answer but my preference is to keep storage and management on standard vswitches with load balancing based on originating port ID and use dvswitches for VM traffic and maybe vmotion. Vmotion benefits from dedicated ports. Depending on your IO requirements storage may as well. If you do NSX there are some additional considerations.
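If it helps, the originating-port-ID load balancing mentioned above can be set per standard vswitch with esxcli. A sketch that emits the commands for a list of switches (the exact flag spellings are an assumption; check `esxcli network vswitch standard policy failover set --help` on your host):

```python
def teaming_commands(vswitches, policy="portid"):
    """Emit esxcli commands to set the load-balancing policy on each
    standard vswitch. Valid policies include portid, iphash, mac, and
    explicit -- flag names are assumptions, verify on your host.
    """
    return [
        f"esxcli network vswitch standard policy failover set "
        f"--vswitch-name={sw} --load-balancing={policy}"
        for sw in vswitches
    ]

for cmd in teaming_commands(["vSwitch0", "vSwitch1"]):
    print(cmd)
```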
|
# ? Mar 7, 2018 01:38 |
|
Methanar posted:What's everyones favorite way of backing up a vcsa database. Preferably free and lovely. I just captured it with my VM container backups in RapidRecovery
|
# ? Mar 7, 2018 17:38 |
|
It's been a few years since I've had to wrestle with LACP on VMware but its big selling point was providing faster failover for an NFS storage fabric, since NFSv3 can't do multipath IO and VMware's NFSv4 stack didn't support that feature at the time. I was hoping that it would also aggregate traffic, but VMware does something wonky where the NFS initiator binds to a specific member of the LACP aggregate instead of the logical interface, which stops it from aggregating traffic even if you're pointed at multiple target storage IPs/ports/whatever. This was on 6.5u0/1 I believe, so maybe that story has changed. LACP will default to slow PDUs/long timeouts, which is Bad Bad Bad, and at the time the only way to configure that was to run a script at host startup to change the vdSwitch to run fast PDUs/short timeouts. The LACP implementation was really half-assed and it was clear they put much more work into Etherchannel.
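The startup hack looked roughly like this -- a fragment for /etc/rc.local.d/local.sh generated per uplink. The esxcli namespace and flag names here are from memory of 6.x and are assumptions; verify with `esxcli network vswitch dvs vmware lacp timeout set --help`:

```python
def lacp_fast_timeout_script(lag_id, uplinks):
    """Build the local.sh lines that flip LACP to fast (short) timeouts
    for each uplink in a LAG. The esxcli flag names are assumptions
    from the 6.x era and may differ on your build.
    """
    lines = [
        "#!/bin/sh",
        "# re-applied at boot: the vdSwitch resets LACP to slow PDUs",
    ]
    for nic in uplinks:
        lines.append(
            f"esxcli network vswitch dvs vmware lacp timeout set "
            f"--lag-id {lag_id} --nic-name {nic} --timeout 1"
        )
    return "\n".join(lines)

print(lacp_fast_timeout_script(1, ["vmnic2", "vmnic3"]))
```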
|
# ? Mar 7, 2018 17:50 |
|
We currently run 2 discrete data centers off a single 6.5 VCenter server with one SSO site/domain. We are going to move to 2 VCenter servers. I *think* what I want to do is use the below topology with 1 VCenter server and external PSC at each DC and use Enhanced Link Mode between the servers to get manageability of the whole infrastructure from either server. Is that reasonable? Will I need to make a new SSO domain (site? 2 sites?) and migrate our VMs to the new one, or can I just join the new VCenter servers to the existing domain? Can I get a sanity check or suggestions on things to google? I'm efforting pretty hard at $NewJob with a way more complicated environment and different technologies than I'm used to.
|
# ? Mar 11, 2018 17:21 |
|
Happiness Commando posted:We currently run 2 discrete data centers off a single 6.5 VCenter server with one SSO site/domain. We are going to move to 2 VCenter servers. I *think* what I want to do is use the below topology with 1 VCenter server and external PSC at each DC and use Enhanced Link Mode between the servers to get manageability of the whole infrastructure from either server. Is that reasonable? Will I need to make a new SSO domain (site? 2 sites?) and migrate our VMs to the new one, or can I just join the new VCenter servers to the existing domain? You’d want a PSC and a VCenter server at each DC, with the two PSCs replicating between each other. Each DC would be a separate site in the same SSO domain. You can migrate your existing VCenter to an external PSC, then deploy the second PSC, link it to the first (but as a new site), then deploy the second VCenter and point it at your second PSC. https://kb.vmware.com/s/article/2148924 The one downside is moving hosts from one VCenter to another will require disconnecting them from the old VCenter and connecting them to the new, meaning you’ll lose all of the historical data, resource pool config, etc. And also if your hosts have distributed switches you’ll need to move to standard and then recreate the DVS on the new VCenter.
|
# ? Mar 11, 2018 19:03 |
|
Does anyone have any experience running macOS in a VM on Linux, preferably with virt-manager/qemu? There are a couple of macOS only apps that I'd love to be able to use on my Linux laptop.
|
# ? Mar 13, 2018 12:24 |
|
Boris Galerkin posted:Does anyone have any experience running macOS in a VM on Linux, preferably with virt-manager/qemu? There are a couple of macOS only apps that I'd love to be able to use on my Linux laptop. It works. There is some bullshit with creating install media, and you need something like enochboot, but otherwise it works fine.
|
# ? Mar 13, 2018 18:09 |
|
I've got two offline bundles in VUM that my hosts can't download for some reason. Other packages are downloading and installing fine, I can't reupload the same bundle to VUM, and connecting to the host to download and install the package manually works without issue, including to the point where VUM recognizes that the package is installed. Is there a way I can clear the bad file out of VUM so it redownloads, or am I stuck manually installing it on my hosts?
|
# ? Mar 13, 2018 20:13 |
|
YOLOsubmarine posted:You’d want a PSC and a VCenter server at each DC, with the two PSCs replicating between each other. Each DC would be a separate site in the same SSO domain. Is that supported without load balancers?
|
# ? Mar 14, 2018 00:50 |
|
Happiness Commando posted:Is that supported without load balancers? Yes. In this case each PSC would only be serving its local VCenter so you wouldn’t want a load balancer in front of them. If you wanted PSC redundancy you’d do two PSCs at each site with a load balancer in front of each and all of them replicating.
|
# ? Mar 14, 2018 01:41 |
|
YOLOsubmarine posted:If you wanted PSC redundancy you’d do two PSCs at each site with a load balancer in front of each and all of them replicating. I want redundancy without load balancers 😐 Repointing VCSA to a PSC in a different site isn't supported in 6.5 - so if we lost a PSC, I would have to stand up a new linked PSC and then repoint my hosts to it. It seems like sites are just administrative groupings, what does separating them into two sites actually get me? Our DCs have roughly 15 ms ping latency between them, so we are almost at the recommended 10 ms intra-site figure...
|
# ? Mar 14, 2018 15:24 |
|
Happiness Commando posted:I want redundancy without load balancers 😐 You could have two PSCs in a site replicating in the same SSO domain, but one could have no vCenters attached to it as a backup. It's not perfect, but gives you some local site redundancy for the PSC. It's also easy enough to create a new PSC and get it replicating with your other site if there is a PSC failure.
|
# ? Mar 14, 2018 18:01 |
|
Happiness Commando posted:I want redundancy without load balancers 😐 What problem are you trying to solve here? If you want actual PSC redundancy then you need a load balancer. If you don’t want a load balancer then it sounds like you don’t need that level of redundancy. Can your environment survive for a few minutes while a PSC reboots due to HA? Probably so, and in that case that’s likely all the protection you need. If you need VCenter available ALL the time then you need a load balancer for your PSCs and VCenter HA. As to one site or two, it’s just a question of whether you want all of your eggs in one basket or split between two. If the main site that hosts VCenter goes down you now cannot manage the second site. If that’s not a major issue then a single site is fine.
|
# ? Mar 14, 2018 18:41 |
|
Heads up for anyone using VMware with Windows Server 2008 R2 or Windows 7 guests, our TAM has advised us that there's a known issue with some recently released Microsoft patches which affects vmxnet3 adapters, causing them to lose their IP configuration:

quote:VMware has received reports that some recent Microsoft patches released appear to cause loss of IP address for Windows (2008 R2 and 7) virtual machines with vmxnet3 adapters. As a result, please find below the currently identified MS Patches, Community Threads and VMware Blog Workaround:
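VMware's published workaround was a script that captures the guest's static IP config before patching and reapplies it if the vmxnet3 NIC comes back blank. This sketches the shape of that idea by emitting the netsh commands (the interface name and values are hypothetical, and this is not VMware's actual script -- see the blog post the TAM links for that):

```python
def reapply_ip_commands(ifname, ip, mask, gateway, dns_servers):
    """Emit netsh commands to restore a static IPv4 config on a Windows
    guest whose vmxnet3 adapter lost its settings. Values would be
    captured from the guest before patching.
    """
    cmds = [
        f'netsh interface ipv4 set address name="{ifname}" '
        f"static {ip} {mask} {gateway}",
    ]
    for i, dns in enumerate(dns_servers, start=1):
        cmds.append(
            f'netsh interface ipv4 add dnsserver name="{ifname}" '
            f"address={dns} index={i}"
        )
    return cmds

for c in reapply_ip_commands("Ethernet0", "10.0.0.10", "255.255.255.0",
                             "10.0.0.1", ["10.0.0.53", "10.0.1.53"]):
    print(c)
```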
|
# ? Mar 15, 2018 08:34 |
|
cheese-cube posted:Heads up for anyone using VMware with Windows Server 2008 R2 or Windows 7 guests, our TAM has advised us that there's a known issue with some recently released Microsoft patches which affects vmxnet3 adapters causing them to lose their IP configuration: This smells like the exact same bug that we saw with the 2008R2 update rollup.
|
# ? Mar 15, 2018 19:43 |
|
YOLOsubmarine posted:What problem are you trying to solve here?
|
# ? Mar 16, 2018 02:00 |
|
Vcsa availability isn't really a problem anymore. If you have true six sigma uptime requirements and the seven figure budget to back it, then sure, play games with multiple PSCs, but...... This doesn't seem worth your time, given the actual risk of outage and the impact and duration of that outage. Potato Salad fucked around with this message at 08:43 on Mar 16, 2018 |
# ? Mar 16, 2018 08:41 |
|
How do I disable guest not heartbeating failure response on one specific VM in VMware? I don't want this VM to be reset if it fails a heartbeat. I don't see this option in VM overrides? kiwid fucked around with this message at 16:16 on Mar 19, 2018 |
# ? Mar 19, 2018 16:08 |
|
kiwid posted:How do I disable guest not heartbeating failure response on one specific VM in VMware? It’s controlled by the “VM Monitoring” setting in the overrides.
|
# ? Mar 19, 2018 18:13 |
|
YOLOsubmarine posted:It’s controlled by the “VM Monitoring” setting in the overrides. That's what I thought but even with it disabled, it still shows a failure response as "reset". Maybe it always shows that though regardless?
|
# ? Mar 19, 2018 20:29 |
|
Yeah, that pane reports wildly incorrect info in my experience.
|
# ? Mar 20, 2018 13:31 |
|
Powercli
|
# ? Mar 20, 2018 15:12 |
|
Another question. What is the best way of attaching storage to your Veeam server? My predecessor attached our 60TB QNAP to the VMware hosts via iSCSI and then, on the Veeam server, created and attached a 60TB .vmdk hard drive. Is this the best way, or should the QNAP have been connected directly to the Veeam server via Microsoft's iSCSI initiator rather than going through VMware?
|
# ? Mar 20, 2018 17:05 |
|
I mean considering you might want to be able to restore VMs when your VMware environment is hosed, and the Qnap device sounds like it can just present an SMB share, I'd have done that.
|
# ? Mar 20, 2018 17:16 |
|
Oh yikes. Just present an SMB share directly to Veeam.
|
# ? Mar 20, 2018 18:58 |
|
Look at what I just found... I guess this is another indicator that it shouldn't be a .vmdk. Edit: oh gently caress, this is going to take weeks to delete I'm sure. kiwid fucked around with this message at 19:18 on Mar 20, 2018 |
# ? Mar 20, 2018 19:06 |
|
jesus christ
|
# ? Mar 20, 2018 22:13 |
|
Can you export an NFS share from the Qnap box? I'd be inclined to add that to your host and then just copy the contents of the VMDK out using the CLI and then trash it.
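A sketch of that copy-out, run from inside the Veeam server where the 60TB disk is mounted, moving the repository files onto the new share. Paths are hypothetical and this is not Veeam-aware; it just skips already-copied files of the same size so a multi-day job can be re-run after interruptions:

```python
import shutil
from pathlib import Path

def copy_repo(src, dst):
    """Copy backup repository files from the old VMDK-backed volume to
    the new share, preserving layout. Files that already exist at the
    destination with the same size are skipped, so the job is safe to
    re-run if it gets interrupted partway through.
    """
    src, dst = Path(src), Path(dst)
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists() and target.stat().st_size == f.stat().st_size:
            continue  # already copied on a previous run
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        copied.append(target)
    return copied

# Example (hypothetical mount points):
# copy_repo("/mnt/old-vmdk-repo", "/mnt/qnap-share")
```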
|
# ? Mar 20, 2018 22:20 |
|
Holy mother of God. Storage is cheap these days.
|
# ? Mar 20, 2018 22:36 |
|
Thanks Ants posted:Can you export an NFS share from the Qnap box? I'd be inclined to add that to your host and then just copy the contents of the VMDK out using the CLI and then trash it.
|
# ? Mar 20, 2018 22:37 |
|
Sorry but may I share your predicament on slack right now, that's hilarious
|
# ? Mar 20, 2018 22:38 |
|
Real talk, my advice would be to architect this so a Windows Server 2016 machine controls the block-level filesystem of your Veeam backup repositories. I use MS's software iSCSI initiator to connect to storage appliance targets. I've found that Win2016's dedupe and compression is more aware of the underlying filesystem of what's being backed up. This saves, in my environment, an additional 15% on my Potato Salad fucked around with this message at 22:51 on Mar 20, 2018 |
# ? Mar 20, 2018 22:44 |
|
I don't know, when you're playing with data in multiple tiers that don't have any knowledge of each other I start getting uncomfortable. Have you done any testing of disabling Veeam's compression/dedupe and just relying on WS2016? I'd be interested to see the difference there.
|
# ? Mar 20, 2018 23:00 |
|
All this aside, if you do a Windows or CIFS repo, consider not joining it to the domain and using local credentials used only by Veeam, because when you find out your production servers and file-based backups both got encrypted by malware, that is not a fun day at all.
|
# ? Mar 20, 2018 23:17 |
|
H2SO4 posted:I don't know, when you're playing with data in multiple tiers that don't have any knowledge of each other I start getting uncomfortable. Have you done any testing of disabling Veeam's compression/dedupe and just relying on WS2016? I'd be interested to see the difference there. I actually have! You want Veeam to do dedupe and at least fast compression first. Veeam's dedupe with CBT is a game changer regarding just how fast incremental and reverse incremental "full" backup and restore can be. You want Veeam to be aware of that. OS level storage optimization on your repo isn't necessary, just extra benefit if you need it. You're doing copy jobs between tiers, right? Potato Salad fucked around with this message at 12:29 on Mar 21, 2018 |
# ? Mar 21, 2018 12:23 |
|
Potato Salad posted:Sorry but may I share your predicament on slack right now, that's hilarious Who mine? Go for it!
|
# ? Mar 21, 2018 13:23 |