BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy?


kiwid
Sep 30, 2013

BangersInMyKnickers posted:

Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy?

If I remember correctly from my Nimble course, they said that MSSQL in a vmdk wasn't supported and to instead use Windows iSCSI to connect to the volume which is what prompted me to ask about file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles.

Internet Explorer
Jun 1, 2005





BangersInMyKnickers posted:

Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy?

I haven't heard of this and couldn't find any evidence of it in a quick glance, but that doesn't mean a whole lot as I try to talk to Microsoft as little as possible. Are you sure that this isn't specifically VMDKs on NFS?

kiwid posted:

If I remember correctly from my Nimble course, they said that MSSQL in a vmdk wasn't supported and to instead use Windows iSCSI to connect to the volume which is what prompted me to ask about file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles.



Every vendor will tell you that their way is the best way to do it. I am all for following best practices, but at some point knowing when not to follow best practices is where the value is.

Unless you are seriously pushing your hardware or have very specific design considerations, I would run SQL / Exchange / File Servers in a VMDK. The added flexibility and maintainability is well worth it. Being able to use snapshots and have them be consistent, being able to use storage vMotion, plus a bunch of other factors.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

MSSQL is supported when using VMDKs. The main reason you might split your SQL or other application volumes out onto a separate Nimble volume and mount via iSCSI is if you're doing failover clustering, or if you want to use array level snapshots for consistent backup and replication using Nimble's tools. Those tools will work better if the LUN is presented directly to the host, rather than as one VMDK on a VMFS volume. Since you're using VEEAM for backup you should certainly just keep all of your data in VMDKs and let VEEAM handle it, because VEEAM can't do anything with a guest attached iSCSI LUN.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Is there any reason I shouldn't enable MPIO on a 2012R2 VMware guest?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

anthonypants posted:

Is there any reason I shouldn't enable MPIO on a 2012R2 VMware guest?

Unless you are doing iSCSI to a LUN, I don't know why you would do that. And even that is not really useful because of the bandwidth internally on the host.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

mayodreams posted:

Unless you are doing iSCSI to a LUN, I don't know why you would do that. And even that is not really useful because of the bandwidth internally on the host.
Yeah, it's iSCSI to our storage appliance. I'm trying to wrench out performance for this disk because our SQL devs are complaining about performance. If I really need MPIO, I guess I'd do MPIO to the host and then expose that LUN to Windows or something?

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


I would still do MPIO in guest if you were doing iSCSI at the guest level. You may not need redundancy in adapters at the guest level due to host redundancy, but what about the other end? If your storage appliance has 4 storage links to the network, you are going to want your guest to be able to use all of them in the case of adapter or link failure on the storage side.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Yea, if you're doing iSCSI in Windows you should enable MPIO, yes.
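
(For reference, a minimal sketch of turning that on in a 2012 R2 guest with the in-box Microsoft DSM; cmdlet names are from memory so double-check them, and you still need a second session/path in the iSCSI initiator before MPIO has anything to balance:)

code:
# Install the MPIO feature (needs a reboot)
Install-WindowsFeature Multipath-IO

# Have the Microsoft DSM claim iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optional: default new devices to round-robin across paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Sanity check: list MPIO-claimed disks and their path counts
mpclaim -s -d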

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

anthonypants posted:

Yeah, it's iSCSI to our storage appliance. I'm trying to wrench out performance for this disk because our SQL devs are complaining about performance. If I really need MPIO, I guess I'd do MPIO to the host and then expose that LUN to Windows or something?

MPIO is going to help with throughput, not ops/latency. MPIO is great and all but I don't think it's going to help anything unless the bottleneck is somehow at one of your NAS controller heads and this can split load between them.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

BangersInMyKnickers posted:

The issues with the X710's are driver related and numerous, both in ESXi and any other linux distro you care to name. General stability issues that cause the whole nic driver stack to reinitialize and make your connections rapidly flap. It might be resolved with the 1.2.1 native driver (as opposed to the emulated vmklinux drivers I am using) but I can't validate that because the native driver has some kind of arp bug that keeps it from communicating with NetApp hardware on the network. I would say avoid them like the plague and stick with the X520's if they meet spec. My problem is I have to have quad port 10gig for my blade deployment so I'm trying to push Dell to replace them with QLogic 57840S's, which are probably worth the extra money if you want a quad 10gigE NIC that actually works. I have a trial unit in the mail at the moment and should know by Tue or Wed if that sorts things out.

e: This is with the latest NIC firmware as well, does not resolve the issue.

Known Issues:
- LLDP packets containing PFC and ETS info coming from Intel X710 ports cause packet loss with certain bond configurations in Linux.

I think this might be the source of what I am dealing with, lacp logs are flooded with PFC/ETS errors until the whole driver crashes and re-inits.

Got the Qlogic 57840S in, immediately cleared up the flood of garbage in the lacp logs, acting stable and not flapping under load like the X710. Can't get it to work with the native driver but that isn't a deal breaker. Big drawback seems to be that it only has ~40gig of rated throughput on a quad port full duplex card, so don't use this thing if you're expecting to push near 50% of your total fabric throughput. As far as I know the X710 is rated for full saturation 80gig load and seems to be better hardware, but the drivers make it worthless. Here's hoping that fixes are coming soon.

Methanar
Sep 26, 2013

by the sex ghost
gently caress x710


evol262
Nov 30, 2010
#!/usr/bin/perl
It's ok. Their Linux drivers are trash, too.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

BangersInMyKnickers posted:

MPIO is going to help with throughput, not ops/latency. MPIO is great and all but I don't think it's going to help anything unless the bottleneck is somehow at one of your NAS controller heads and this can split load between them.
Well the SQL guys do a nightly restore from our production database to a not-production server and people can do work against the non-production server, but as the database grows that restore is taking forever. So throughput is a definite issue, but I was able to do some research into it today and I think the main reason is that the backups are stored on a DataDomain, and trying to un-dedupe a SQL database is bad. So we can either pay for a license for "DDBoost" to get around this (lol) or we can host the data somewhere else. I am sure we will do neither!!!! and my boss will bitch about how the SQL guys are upset about every little thing

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

anthonypants posted:

Well the SQL guys do a nightly restore from our production database to a not-production server and people can do work against the non-production server, but as the database grows that restore is taking forever. So throughput is a definite issue, but I was able to do some research into it today and I think the main reason is that the backups are stored on a DataDomain, and trying to un-dedupe a SQL database is bad. So we can either pay for a license for "DDBoost" to get around this (lol) or we can host the data somewhere else. I am sure we will do neither!!!! and my boss will bitch about how the SQL guys are upset about every little thing

What is your primary storage? And where are the backups being restored to?

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE

Internet Explorer posted:

I haven't heard of this and couldn't find any evidence of it in a quick glance, but that doesn't mean a whole lot as I try to talk to Microsoft as little as possible. Are you sure that this isn't specifically VMDKs on NFS?


Every vendor will tell you that their way is the best way to do it. I am all for following best practices, but at some point knowing when not to follow best practices is where the value is.

Unless you are seriously pushing your hardware or have very specific design considerations, I would run SQL / Exchange / File Servers in a VMDK. The added flexibility and maintainability is well worth it. Being able to use snapshots and have them be consistent, being able to use storage vMotion, plus a bunch of other factors.

In a previous life I created a 20TB clustered file server and a 15k-mailbox Exchange 2013 cluster across two data centers, all on Nimble, and I didn't run any of it in a vmdk. We still used snapshots and they were consistent when initiated from the Nimble.

In a file server cluster or an exchange DAG you shouldn't be using storage vmotion at all. There are very valid reasons not to use vmdk's.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

big money big clit posted:

What is your primary storage? And where are the backups being restored to?
The restore happens to a 2012R2 box on a VMware host and the disks on that are NFS shares attached to the ESXi from our VNXe3300s, or maybe they're 3200s. I can't remember off the top of my head which is which. This new refresh process they're trying tonight is from that same DataDomain to an iSCSI LUN attached straight to the VM, because I think that might be slightly more performant. And also the current SQL data volume was partitioned with a 4K block size, and I wanted to get a new volume made regardless. I don't think it's going to be a huge increase, but I ran CrystalDiskMark in the middle of the day after I attached the LUN, and it went from this

[CrystalDiskMark screenshot: old volume]

to this

[CrystalDiskMark screenshot: new iSCSI LUN]

I don't really know how good this is (and I'm guessing the 4k performance sucks on the new disk because it's using 64k blocks) but most of the numbers did go up, so ???
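
(Side note on the 4K-vs-64K bit: the allocation unit size is quick to confirm from inside the guest; drive letters below are placeholders:)

code:
# "Bytes Per Cluster" in the output is the NTFS allocation unit size of the existing volume
fsutil fsinfo ntfsinfo D:

# Formatting the new SQL data volume with 64K allocation units (destructive!)
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536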

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

anthonypants posted:

The restore happens to a 2012R2 box on a VMware host and the disks on that are NFS shares attached to the ESXi from our VNXe3300s, or maybe they're 3200s. I can't remember off the top of my head which is which. This new refresh process they're trying tonight is from that same DataDomain to an iSCSI LUN attached straight to the VM, because I think that might be slightly more performant. And also the current SQL data volume was partitioned with a 4K block size, and I wanted to get a new volume made regardless. I don't think it's going to be a huge increase, but I ran CrystalDiskMark in the middle of the day after I attached the LUN, and it went from this

I don't really know how good this is (and I'm guessing the 4k performance sucks on the new disk because it's using 64k blocks) but most of the numbers did go up, so ???

What is your network connectivity on the storage network? Those numbers look like a single 1Gb connection between the Data Domain and the ESXi host. Also, are you using vmxnet3 on your Windows VM?

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

mayodreams posted:

What is your network connectivity on the storage network? Those numbers look like a single 1Gb connection between the Data Domain and the ESXi host. Also, are you using vmxnet3 on your Windows VM?
Right now, this VM is in a cluster of machines, and all network traffic (excluding VMware management traffic) is on a pair of 10Gb NICs. This is how all of our hosts are configured. The VM used to be on a single host, but that had a 1Gb NIC so it was moved to the cluster. The second image is from the guest OS to the VNXe, and the guest has a 1Gb NIC, so I guess that's correct. And yes, we're using VMXNET3. It's probably the one good sane decision made in this entire mess.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

The best way to do this sort of developer refresh is through array based snapshots and clones. It might be worth investigating that option rather than moving a bunch of traffic off of the network onto the data domain, and then back over the network off of the data domain, every day. You could have a network bottleneck if you've got a misconfiguration or a 1Gb link somewhere in the path, but likely your bottleneck will be disk on either the DD side or the VNX side.

Also, VMXNET3 presents as a 10GbE adapter so if your uplinks are 10GbE then the guest should have that available to communicate with the LUN.

YOLOsubmarine fucked around with this message at 20:01 on Jan 10, 2017
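
(That's easy to confirm from inside the guest; adapter naming will vary:)

code:
# VMXNET3 should report a 10 Gbps link regardless of the physical uplink speed
Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed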

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

big money big clit posted:

The best way to do this sort of developer refresh is through array based snapshots and clones. It might be worth investigating that option rather than moving a bunch of traffic off of the network onto the data domain, and then back over the network off of the data domain, every day. You could have a network bottleneck if you've got a misconfiguration or a 1Gb link somewhere in the path, but likely your bottleneck will be disk on either the DD side or the VNX side.

Also, VMXNET3 presents as a 10GbE adapter so if your uplinks are 10GbE then the guest should have that available to communicate with the LUN.
Everything I've read says not to back up/restore SQL databases to a filesystem-based deduplication product like DataDomain. It turns out we have a license for a product called "DDBoost" which offloads that dedupe/un-dedupe to the SQL box or to other boxes. We're not using it.

Oh, hey, you're right, the guest does see a 10Gb interface. I really wouldn't be surprised if there were a 1Gb link somewhere in this clusterfuck, so I guess I'll have to dig into that.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

BOOST would improve your backup performance if you were network bound, but if you're disk bound it won't. It won't do anything for restores either way. If you're network bound you could also compress the backups before sending them over the wire. There's no reason that writing SQL backups to deduplicating storage performs worse than any other method. It's just a disk target; its backup speed is going to be wholly governed by its ingest rate.
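
(If you did want to try compressing before it hits the wire, it's a single clause on the backup; the server, database, and path below are made up, and compression requires a SQL Server edition that supports it:)

code:
sqlcmd -S PRODSQL01 -Q "BACKUP DATABASE [ProdDB] TO DISK = N'\\datadomain\backups\ProdDB.bak' WITH COMPRESSION, STATS = 10"

(Caveat: compressed backups won't dedupe nearly as well once they land on the DataDomain.)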

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I've done some searching but I haven't found anything good. I have a 4-bay RAID array in JBOD mode (connects via USB or eSATA), and a Windows host. I'd like to present the drives to an Ubuntu Server VM, and have it properly recognize a LVM group/partitions. I'm not super experienced with Virtualbox but I've played around a little bit, anyone have a cliffnotes on how I can present "raw" drives like that? The "raw drive" stuff I saw sounded like it would break compatibility with regular Linux systems pretty badly.

evol262
Nov 30, 2010
#!/usr/bin/perl

Paul MaudDib posted:

I've done some searching but I haven't found anything good. I have a 4-bay RAID array in JBOD mode (connects via USB or eSATA), and a Windows host. I'd like to present the drives to an Ubuntu Server VM, and have it properly recognize a LVM group/partitions. I'm not super experienced with Virtualbox but I've played around a little bit, anyone have a cliffnotes on how I can present "raw" drives like that? The "raw drive" stuff I saw sounded like it would break compatibility with regular Linux systems pretty badly.

LVM keeps metadata on the disks. Device naming doesn't matter. As long as they can be passed through (one by one with an eSATA expander or the whole thing via USB), LVM will assemble fine.

If these are backed by mdadm, you'll need to do a little more work, but not much.
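
(For the VirtualBox end of it, the usual approach is a raw-disk pointer VMDK per member disk; the paths, VM name, and controller name below are placeholders, and you want an elevated prompt with the VM powered off:)

code:
# Create a pointer VMDK that maps straight to the physical disk (no data is copied)
VBoxManage internalcommands createrawvmdk -filename "C:\VMs\jbod-disk1.vmdk" -rawdisk \\.\PhysicalDrive1

# Attach it to the Ubuntu Server VM's SATA controller
VBoxManage storageattach "UbuntuServer" --storagectl "SATA" --port 1 --type hdd --medium "C:\VMs\jbod-disk1.vmdk"

Repeat per disk, then a vgscan / vgchange -ay inside the guest should bring the LVM group up, as described above.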

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

evol262 posted:

LVM keeps metadata on the disks. Device naming doesn't matter. As long as they can be passed through (one by one with an eSATA expander or the whole thing via USB), LVM will assemble fine.

If these are backed by mdadm, you'll need to do a little more work, but not much.

I'm a derp who totally missed the "USB-to-ATA" adapter in my USB device list :downs:

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
I've got a four node ESXi 6 VSAN cluster in my lab and I'm going to be installing 10Gb NICs. Is it really as easy as just dropping a host in maintenance mode, installing the card, then migrating the DVS uplinks to the 10Gb interfaces one by one? Is it best to add the 10Gb interface to the DVS uplinks and then remove the 1Gb uplinks afterwards as a separate task? Should I disable HA host monitoring during the migration?

Basically I'm taking bets on how bad I'm going to make my cluster poo poo itself. Should be entertaining.

H2SO4 fucked around with this message at 11:56 on Jan 17, 2017

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Maintenance mode, install latest VIB for the new nic, shut down host, add nic (Not Intel X710s), bring up host still in maintenance mode, migrate vmKernels to new interfaces, pull out of maintenance mode to validate it's working, maint mode and shutdown to pull old nic, rinse and repeat for all the hosts. If it's in maintenance mode then HA monitoring is disabled for the system so I wouldn't worry about messing with that. If something gets messed up and you lose network paths it's going to be immediately apparent. You might want to put DRS in manual mode so you can control which VMs move over to the first host when it comes out of maint mode while you are validating things since it sounds like you don't have test/QA hardware.
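
(If you'd rather script the uplink swap than click through it, a rough PowerCLI sketch of the same steps; the host, switch, and vmnic names are placeholders, the parameter spellings are from memory so verify against your PowerCLI version, and move one uplink at a time so a live path always remains:)

code:
$vmhost = Get-VMHost "esx01.lab.local"
$vds    = Get-VDSwitch "dvSwitch0"

# Into maintenance mode before touching hardware
Set-VMHost -VMHost $vmhost -State Maintenance

# After the 10Gb card is installed and the host is back up:
# add the new uplink to the DVS, verify vmkernel traffic, then pull the old 1Gb uplink
$newNic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic4
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $newNic -Confirm:$false

Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0 |
    Remove-VDSwitchPhysicalNetworkAdapter -Confirm:$false

Set-VMHost -VMHost $vmhost -State Connected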

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
Thanks for the sanity check. This is my lab so even if I manage to burn it all to the ground I'll just be annoyed and not crying.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Can you take a snapshot of a single vmdk? On VMware 5.5?

Internet Explorer
Jun 1, 2005





If you mark the other disks as independent before you take the snap you can do it via the GUI. I'd be surprised if you couldn't take a snap of an individual vmdk using PowerCLI, but I'm not 100% on that.
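
(Roughly what that looks like in PowerCLI; the VM and disk names are placeholders, the parameter spellings are from memory, and the VM generally needs to be powered off to flip disk modes:)

code:
$vm = Get-VM "SomeVM"

# Mark every disk except the one you care about as independent
# (independent disks are excluded from snapshots)
Get-HardDisk -VM $vm | Where-Object { $_.Name -ne "Hard disk 2" } |
    Set-HardDisk -Persistence IndependentPersistent -Confirm:$false

# This snapshot will now only cover "Hard disk 2"
New-Snapshot -VM $vm -Name "pre-change"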

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Been putting these Qlogic replacement nics through their paces and they seem to be holding up, but one of the things I noticed was the out-of-box Rx buffer size was 512KB with a max of 4MB. I'm seeing some FIFO Rx errors, so I'm playing around with the idea of increasing the buffer size, but I'm not sure if there is a drawback similar to running deep queue depths on your storage adapters, where latency can snowball on you. Maybe only increment it up to 1024KB?

code:
 esxcli network nic stats get -n vmnic0

NIC statistics for vmnic0
   Packets received: 293900719219
   Packets sent: 1688347185
   Bytes received: 1669266739098
   Bytes sent: 1863377034903
   Receive packets dropped: 0
   Transmit packets dropped: 0
   Multicast packets received: 962616
   Broadcast packets received: 0
   Multicast packets sent: 0
   Broadcast packets sent: 0
   Total receive errors: 53143
   Receive length errors: 0
   Receive over errors: 0
   Receive CRC errors: 0
   Receive frame errors: 0
   Receive FIFO errors: 53143
   Receive missed errors: 0
   Total transmit errors: 0
   Transmit aborted errors: 0
   Transmit carrier errors: 0
   Transmit FIFO errors: 0
   Transmit heartbeat errors: 0
   Transmit window errors: 0



ethtool -g vmnic0

Ring parameters for vmnic0:
Pre-set maximums:
RX:             4078
RX Mini:        0
RX Jumbo:       0
TX:             4078
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             4078
The X710's default to 512KB for both Tx and Rx buffers and I've never seen them throwing FIFO error counts, though they were so busy making GBS threads the bed under any kind of load that they may have never gotten the opportunity to get to that point.
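
(For what it's worth, the ethtool figures above are ring entries rather than KB. If you do experiment, the RX ring can be bumped per-NIC from the ESXi shell; the change typically doesn't survive a reboot unless you script it, and a deeper ring mostly trades a little buffering latency for fewer drops:)

code:
# Current vs maximum ring sizes (entries, not KB)
ethtool -g vmnic0

# Try a larger RX ring, then watch whether the FIFO error counter keeps climbing
ethtool -G vmnic0 rx 1024
esxcli network nic stats get -n vmnic0 | grep -i fifo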

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected to my wifi, which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS.

I may have messed up something a few months ago when I was trying to cudgel the virtual network into working as a bridge over wifi, but I can't find anything else that I did then that I haven't undone. I was running a DNS server on one of the guests with all the other guests pointed at that so I didn't notice.

Can anyone think of something that might be causing this?

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

The Nards Pan posted:

I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected to my wifi, which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS.

I may have messed up something a few months ago when I was trying to cudgel the virtual network into working as a bridge over wifi, but I can't find anything else that I did then that I haven't undone. I was running a DNS server on one of the guests with all the other guests pointed at that so I didn't notice.

Can anyone think of something that might be causing this?

Are you trying to NAT FROM and TO the same network? Does the NAT virtual network you're using use the same IP scope as the "main" or primary network?

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
No, my host is on the 192.168.1.1 network. Both /24s.

Edit: There does seem to be some difference between the nslookup results as shown below. Unfortunately I'm not smart enough (yet) to know what it means - this is driving me up the loving wall:

[screenshots: nslookup output from 192.168.100.1 vs 8.8.8.8]

E2: may be a red herring. The host gets the shorter answer when doing nslookup but can connect just fine. ugh.

BallerBallerDillz fucked around with this message at 05:39 on Jan 20, 2017

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
You're not blocking tcp DNS queries somehow are you? Large DNS responses sometimes require TCP.
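
(Quick way to test that from a guest, using the dnsmasq address from the post above and a known-good resolver:)

code:
# Compare UDP vs TCP lookups through the libvirt dnsmasq, and against 8.8.8.8
dig www.microsoft.com @192.168.100.1
dig +tcp www.microsoft.com @192.168.100.1
dig +tcp www.microsoft.com @8.8.8.8

If the +tcp query through 192.168.100.1 hangs or fails while the others answer, TCP/53 is being dropped somewhere between the guest and dnsmasq.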

Thanks Ants
May 21, 2004

#essereFerrari


Has anybody had an issue browsing NFS datastores in the ESXi 6.5 web client? Using SSH works fine, vCenter also works perfectly.

evol262
Nov 30, 2010
#!/usr/bin/perl

The Nards Pan posted:

I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected to my wifi, which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS.

I may have messed up something a few months ago when I was trying to cudgel the virtual network into working as a bridge over wifi, but I can't find anything else that I did then that I haven't undone. I was running a DNS server on one of the guests with all the other guests pointed at that so I didn't notice.

Can anyone think of something that might be causing this?

Try putting virbr0 in promiscuous mode and tcpdumping it.
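
(Something like this on the host, assuming the default libvirt bridge name:)

code:
# Put the NAT bridge in promiscuous mode and watch the guests' DNS traffic
ip link set virbr0 promisc on
tcpdump -i virbr0 -nn port 53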

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Thanks Ants posted:

Has anybody had an issue browsing NFS datastores in the ESXi 6.5 web client? Using SSH works fine, vCenter also works perfectly.

Been working for me in both Flash and HTML5 versions. Keep in mind that while you are accessing the page view through vCenter, querying the actual contents of the datastore is done by your browser initiating a direct connection over the NFC protocol to one of the hosts in that cluster, so firewall rules or untrusted certs could be shooting it down.

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo

theperminator posted:

You're not blocking tcp DNS queries somehow are you? Large DNS responses sometimes require TCP.

I don't think so. I've tried flushing IPtables on the host, and I've disabled SELinux and firewalld on the guests for troubleshooting with no joy. It happens when the host is connected to different networks, so I don't think it's being blocked upstream anywhere.


evol262 posted:

Try putting virbr0 in promiscuous mode and tcpdumping it.

Someone in IRC suggested that last night. I got the following output just attempting to load https://www.microsoft.com:

host kvm dnsmasq:
http://pastebin.com/6K7QN4wE

google dns:
http://pastebin.com/WqxWwGwF

Nothing jumped out at me, obviously the one that worked has a lot more data. I see 802.1d in the first one - could it be because the bridge has STP on?

I appreciate the ideas, but I'll stop hijacking the thread with my troubleshooting - if nothing obvious jumps out at you more experienced folks I'll likely just save my qcow2 files and blow away the host and reinstall, it's set up for learning; there's nothing critical on here.


Thanks Ants
May 21, 2004

#essereFerrari


BangersInMyKnickers posted:

Been working for me in both Flash and HTML5 versions. Keep in mind that while you are accessing the page view through vCenter, querying the actual contents of the datastore is done by your browser initiating a direct connection over the NFC protocol to one of the hosts in that cluster, so firewall rules or untrusted certs could be shooting it down.

That makes a lot of sense since the CA isn't in place yet so all the certs are untrusted self-signed ones. It works perfectly in vCenter, but the VM I was moving was the vCenter one, so I had to do it on the host.

I had no issues moving it via the terminal and registering it in the inventory.
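
(For anyone else stuck doing that without a working vCenter, the terminal route is basically copy-then-register from the ESXi shell; the datastore and VM names below are placeholders, and the VM needs to be powered off and unregistered first:)

code:
# Copy the VM folder to the other NFS datastore
cp -r /vmfs/volumes/nfs-old/vcenter /vmfs/volumes/nfs-new/

# Register the copied VMX so it shows up in the host inventory
vim-cmd solo/registervm /vmfs/volumes/nfs-new/vcenter/vcenter.vmx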
