|
Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy?
|
# ? Jan 9, 2017 15:40 |
|
BangersInMyKnickers posted:Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy? If I remember correctly from my Nimble course, they said that MSSQL in a vmdk wasn't supported and to instead use Windows iSCSI to connect to the volume which is what prompted me to ask about file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles.
|
# ? Jan 9, 2017 17:08 |
|
BangersInMyKnickers posted:Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy? I haven't heard of this and couldn't find any evidence of it in a quick glance, but that doesn't mean a whole lot as I try to talk to Microsoft as little as possible. Are you sure that this isn't specifically VMDKs on NFS? kiwid posted:If I remember correctly from my Nimble course, they said that MSSQL in a vmdk wasn't supported and to instead use Windows iSCSI to connect to the volume which is what prompted me to ask about file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles. Every vendor will tell you that their way is the best way to do it. I am all for following best practices, but at some point knowing when not to follow best practices is where the value is. Unless you are seriously pushing your hardware or have very specific design considerations, I would run SQL / Exchange / File Servers in a VMDK. The added flexibility and maintainability are well worth it: being able to use snapshots and have them be consistent, being able to use storage vMotion, plus a bunch of other factors.
|
# ? Jan 9, 2017 17:29 |
|
MSSQL is supported when using VMDKs. The main reason you might split your SQL or other application volumes out onto a separate Nimble volume and mount via iSCSI is if you're doing failover clustering, or if you want to use array level snapshots for consistent backup and replication using Nimble's tools. Those tools will work better if the LUN is presented directly to the host, rather than as one VMDK on a VMFS volume. Since you're using VEEAM for backup you should certainly just keep all of your data in VMDKs and let VEEAM handle it, because VEEAM can't do anything with a guest attached iSCSI LUN.
|
# ? Jan 9, 2017 18:38 |
|
Is there any reason I shouldn't enable MPIO on a 2012R2 VMware guest?
|
# ? Jan 9, 2017 22:04 |
|
anthonypants posted:Is there any reason I shouldn't enable MPIO on a 2012R2 VMware guest? Unless you are doing iSCSI to a LUN, I don't know why you would do that. And even that is not really useful because of the bandwidth internally on the host.
|
# ? Jan 9, 2017 22:14 |
|
mayodreams posted:Unless you are doing iSCSI to a LUN, I don't know why you would do that. And even that is not really useful because of the bandwidth internally on the host.
|
# ? Jan 9, 2017 22:24 |
|
I would still do MPIO in guest if you were doing iSCSI at the guest level. You may not need redundancy in adapters at the guest level due to host redundancy, but what about the other end? If your storage appliance has 4 storage links to the network, you are going to want your guest to be able to use all of them in the case of adapter or link failure on the storage side.
|
# ? Jan 9, 2017 22:31 |
|
Yeah, if you're doing iSCSI in Windows you should enable MPIO.
|
# ? Jan 9, 2017 22:55 |
|
anthonypants posted:Yeah, it's iSCSI to our storage appliance. I'm trying to wrench out performance for this disk because our SQL devs are complaining about performance. If I really need MPIO, I guess I'd do MPIO to the host and then expose that LUN to Windows or something? MPIO is going to help with throughput, not ops/latency. MPIO is great and all, but I don't think it's going to help anything unless the bottleneck is somehow at one of your NAS controller heads and it can split load between them.
|
# ? Jan 9, 2017 23:34 |
|
BangersInMyKnickers posted:The issues with the X710's are driver related and numerous, both in ESXi and any other Linux distro you care to name. General stability issues that cause the whole NIC driver stack to reinitialize and your connections to rapidly flap. It might be resolved with the 1.2.1 native driver (as opposed to the emulated vmklinux drivers I am using) but I can't validate that because the native driver has some kind of ARP bug that keeps it from communicating with NetApp hardware on the network. I would say avoid them like the plague and stick with the X520's if they meet spec. My problem is I have to have quad port 10gig for my blade deployment so I'm trying to push Dell to replace them with QLogic 57840S's, which is probably worth the extra money if you want a quad 10GigE NIC that actually works. I have a trial unit in the mail at the moment and should know by Tue or Wed if that sorts things out. Got the QLogic 57840S in, immediately cleared up the flood of garbage in the LACP logs, acting stable and not flapping under load like the X710. Can't get it to work with the native driver but that isn't a deal breaker. Big drawback seems to be that it only has ~40gig of rated throughput on a quad port full duplex card, so don't use this thing if you're expecting to push near 50% of your total fabric throughput. As far as I know the X710 is rated for full saturation 80gig load and seems to be better hardware, but the drivers make it worthless. Here's hoping that fixes are coming soon.
|
# ? Jan 9, 2017 23:39 |
|
gently caress x710
|
# ? Jan 10, 2017 00:29 |
|
It's ok. Their Linux drivers are trash, too.
|
# ? Jan 10, 2017 01:04 |
|
BangersInMyKnickers posted:MPIO is going to help with throughput, not ops/latency. MPIO is great and all but I don't think its going to help anything unless the bottleneck is somehow at one of your NAS controller heads and this can split load between them.
|
# ? Jan 10, 2017 02:25 |
|
anthonypants posted:Well the SQL guys do a nightly restore from our production database to a not-production server and people can do work against the non-production server, but as the database grows that restore is taking forever. So throughput is a definite issue, but I was able to do some research into it today and I think the main reason is that the backups are stored on a DataDomain, and trying to un-dedupe a SQL database is bad. So we can either pay for a license for "DDBoost" to get around this (lol) or we can host the data somewhere else. I am sure we will do neither!!!! and my boss will bitch about how the SQL guys are upset about every little thing What is your primary storage? And where are the backups being restored to?
|
# ? Jan 10, 2017 06:51 |
|
Internet Explorer posted:I haven't heard of this and couldn't find any evidence of it in a quick glance, but that doesn't mean a whole lot as I try to talk to Microsoft as little as possible. Are you sure that this isn't specifically VMDKs on NFS? In a previous life I created a 20TB clustered file server and a 15k-mailbox Exchange 2013 cluster across two data centers, all on Nimble, and I didn't run any of it in VMDKs. We still used snapshots and they were consistent when you initiate the snapshot from the Nimble. In a file server cluster or an Exchange DAG you shouldn't be using storage vMotion at all. There are very valid reasons not to use VMDKs.
|
# ? Jan 10, 2017 07:53 |
|
big money big clit posted:What is your primary storage? And where are the backups being restored to? [before/after CrystalDiskMark screenshots removed] I don't really know how good this is (and I'm guessing the 4k performance sucks on the new disk because it's using 64k blocks) but most of the numbers did go up, so ???
|
# ? Jan 10, 2017 08:15 |
|
anthonypants posted:The restore happens to a 2012R2 box on a VMware host and the disks on that are NFS shares attached to the ESXi from our VNXe3300s, or maybe they're 3200s. I can't remember off the top of my head which is which. This new refresh process they're trying tonight is from that same DataDomain to an iSCSI LUN attached straight to the VM, because I think that might be slightly more performant. And also the current SQL data volume was partitioned with a 4K block size, and I wanted to get a new volume made regardless. I don't think it's going to be a huge increase, but I ran CrystalDiskMark in the middle of the day after I attached the LUN, and it went from this What is your network connectivity on the storage network? Those numbers look like a single 1Gb connection between the Data Domain and the ESXi host. Also, are you using vmxnet3 on your Windows VM?
|
# ? Jan 10, 2017 17:53 |
|
mayodreams posted:What is your network connectivity on the storage network? Those number looks like a single 1g connection between the Data Domain and the ESXi host. Also, are you using vmxnet3 on your Windows VM?
|
# ? Jan 10, 2017 19:35 |
|
The best way to do this sort of developer refresh is through array based snapshots and clones. It might be worth investigating that option rather than pushing a bunch of traffic over the network onto the Data Domain, and then pulling it back over the network off of the Data Domain, every day. You could have a network bottleneck if you've got a misconfiguration or a 1GbE link somewhere in the path, but more likely your bottleneck will be disk on either the DD side or the VNX side. Also, VMXNET3 presents as a 10GbE adapter so if your uplinks are 10GbE then the guest should have that available to communicate with the LUN. YOLOsubmarine fucked around with this message at 20:01 on Jan 10, 2017 |
# ? Jan 10, 2017 19:59 |
|
big money big clit posted:The best way to do this sort of developer refresh is through array based snapshots and clones. It might be worth investigating that option rather than moving a bunch of traffic off of the network onto the data domain, and then back over the network off of the data domain, every day. You could have a network bottleneck if you've got a mis-configuration or a 1GB link somewhere in the path, but likely your bottleneck will be disk on either the DD side or the VNX side. Oh, hey, you're right, the guest does see a 10Gb interface. I really wouldn't be surprised if there were a 1Gb link somewhere in this clusterfuck, so I guess I'll have to dig into that.
|
# ? Jan 10, 2017 20:29 |
|
BOOST would improve your backup performance if you were network bound, but if you're disk bound it won't. It won't do anything for restores either way. If you're network bound you could also compress the backups before sending over the wire. There's no reason that writing SQL backups to deduplicating storage performs worse than any other method. It's just a disk target; its backup speed is going to be wholly governed by its ingest rate.
|
# ? Jan 11, 2017 02:36 |
|
I've done some searching but I haven't found anything good. I have a 4-bay RAID array in JBOD mode (connects via USB or eSATA), and a Windows host. I'd like to present the drives to an Ubuntu Server VM, and have it properly recognize a LVM group/partitions. I'm not super experienced with Virtualbox but I've played around a little bit, anyone have a cliffnotes on how I can present "raw" drives like that? The "raw drive" stuff I saw sounded like it would break compatibility with regular Linux systems pretty badly.
|
# ? Jan 11, 2017 04:43 |
|
Paul MaudDib posted:I've done some searching but I haven't found anything good. I have a 4-bay RAID array in JBOD mode (connects via USB or eSATA), and a Windows host. I'd like to present the drives to an Ubuntu Server VM, and have it properly recognize a LVM group/partitions. I'm not super experienced with Virtualbox but I've played around a little bit, anyone have a cliffnotes on how I can present "raw" drives like that? The "raw drive" stuff I saw sounded like it would break compatibility with regular Linux systems pretty badly. LVM keeps metadata on the disks. Device naming doesn't matter. As long as they can be passed through (one by one with an eSATA expander or the whole thing via USB), LVM will assemble fine. If these are backed by mdadm, you'll need to do a little more work, but not much.
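A minimal sketch of the reassembly described above, assuming the passed-through disks show up in the guest as ordinary block devices (actual device names will vary):

```shell
# Scan the newly attached disks for LVM metadata and activate what's found.
pvscan                        # locate physical volumes on the passed-through disks
vgscan                        # discover the volume group(s) they belong to
vgchange -ay                  # activate every logical volume in those groups
# List the activated volumes; strip padding so names are easy to script against.
lvs --noheadings -o lv_name | tr -d ' '
```

If the array were mdadm-backed, you'd run `mdadm --assemble --scan` first and then activate LVM on top of the assembled md device.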
|
# ? Jan 11, 2017 16:14 |
|
evol262 posted:LVM keeps metadata on the disks. Device naming doesn't matter. As long as they can be passed through (one by one with an eSATA expander or the whole thing via USB), LVM will assemble fine. I'm a derp who totally missed the "USB-to-ATA" adapter in my USB device list
|
# ? Jan 12, 2017 04:39 |
|
I've got a four node ESXi 6 VSAN cluster in my lab and I'm going to be installing 10Gb NICs. Is it really as easy as just dropping a host in maintenance mode, installing the card, then migrating the DVS uplinks to the 10Gb interfaces one by one? Is it best to add the 10Gb interface to the DVS uplinks and then remove the 1Gb uplinks afterwards as a separate task? Should I disable HA host monitoring during the migration? Basically I'm taking bets on how bad I'm going to make my cluster poo poo itself. Should be entertaining. H2SO4 fucked around with this message at 11:56 on Jan 17, 2017 |
# ? Jan 17, 2017 11:52 |
|
Maintenance mode, install latest VIB for the new NIC, shut down host, add NIC (not Intel X710s), bring up host still in maintenance mode, migrate vmkernels to new interfaces, pull out of maintenance mode to validate it's working, maintenance mode and shutdown to pull old NIC, rinse and repeat for all the hosts. If it's in maintenance mode then HA monitoring is disabled for the system, so I wouldn't worry about messing with that. If something gets messed up and you lose network paths it's going to be immediately apparent. You might want to put DRS in manual mode so you can control which VMs move over to the first host when it comes out of maintenance mode while you are validating things, since it sounds like you don't have test/QA hardware.
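For reference, the per-host loop above looks roughly like this from the ESXi shell (the datastore path and bundle filename are placeholders, not the real QLogic file name):

```shell
# Enter maintenance mode before touching anything.
esxcli system maintenanceMode set --enable true
# Install the driver VIB for the incoming NIC from an offline bundle
# (path and filename are placeholders).
esxcli software vib install -d /vmfs/volumes/datastore1/qlogic-offline-bundle.zip
# ...shut down, physically swap the card, boot back up...
# Confirm the new vmnics enumerate before migrating vmkernel ports to them.
esxcli network nic list | awk '{print $1}'
# Only exit maintenance mode once the new uplinks check out.
esxcli system maintenanceMode set --enable false
```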
|
# ? Jan 17, 2017 17:32 |
|
Thanks for the sanity check. This is my lab so even if I manage to burn it all to the ground I'll just be annoyed and not crying.
|
# ? Jan 17, 2017 19:29 |
|
Can you take a snapshot of a single vmdk? On VMware 5.5?
|
# ? Jan 17, 2017 21:44 |
|
If you mark the other disks as independent before you take the snap you can do it via the GUI. I'd be surprised if you couldn't take a snap of an individual vmdk using PowerCLI, but I'm not 100% on that.
|
# ? Jan 17, 2017 22:27 |
|
Been putting these QLogic replacement NICs through their paces and they seem to be holding up, but one of the things I noticed was the out-of-box Rx buffer size was 512KB with a max of 4MB. Some amount of FIFO Rx errors, so I'm playing around with the idea of increasing the buffer size, but I'm not sure if there is a drawback similar to running deep queue depths on your storage adapters where latency can snowball on you. Maybe only increment it up to 1024KB?
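A sketch of the incremental approach, assuming a box where the card shows up as eth0 and the driver exposes ring sizes through ethtool (the interface name is an assumption):

```shell
# Show current vs. maximum Rx ring sizes.
ethtool -g eth0
# Step the Rx ring up gradually instead of jumping to the maximum: a deeper
# ring absorbs bursts, but everything sitting in it is added latency, so the
# same snowball concern as deep storage queue depths applies.
ethtool -G eth0 rx 1024
# After running under load, check whether the FIFO drops have stopped climbing.
ethtool -S eth0 | awk -F': ' '/rx_fifo_errors/ {print $2}'
```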
|
# ? Jan 18, 2017 18:09 |
|
I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected to my wifi which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS. I may have messed up something a few months ago when I was trying to cudgel the virtual network into working as a bridge over wifi, but I can't find anything else that I did then that I haven't undone. I was running a DNS server on one of the guests with all the other guests pointed at that so I didn't notice. Can anyone think of something that might be causing this?
|
# ? Jan 19, 2017 19:58 |
|
The Nards Pan posted:I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected my wifi which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS. Are you trying to NAT FROM and TO the same network? The NAT Virtual network you're using uses the same IP scope as the "main" or primary network?
|
# ? Jan 19, 2017 21:33 |
|
No, my host is on the 192.168.1.1 network. Both /24s. Edit: There does seem to be some difference between the nslookup results as shown below. Unfortunately I'm not smart enough (yet) to know what it means - this is driving me up the loving wall: [nslookup output removed] E2: may be a red herring. The host gets the shorter answer when doing nslookup but can connect just fine. ugh. BallerBallerDillz fucked around with this message at 05:39 on Jan 20, 2017 |
# ? Jan 19, 2017 21:40 |
|
You're not blocking tcp DNS queries somehow are you? Large DNS responses sometimes require TCP.
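One way to check, assuming dig is available on a guest and 192.168.100.1 is the dnsmasq address from the earlier posts: a large answer comes back over UDP with the TC (truncated) flag set, and the resolver is then supposed to retry over TCP.

```shell
# Ask over UDP and look at the header flags; "tc" means the answer was
# truncated and a TCP retry is expected.
dig @192.168.100.1 microsoft.com +noall +comments | grep 'flags:'
# Force the same query over TCP; if this times out while the UDP query
# answered, something in the path is dropping TCP/53.
dig @192.168.100.1 microsoft.com +tcp +short
```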
|
# ? Jan 20, 2017 08:04 |
|
Has anybody had an issue browsing NFS datastores in the ESXi 6.5 web client? Using SSH works fine, vCenter also works perfectly.
|
# ? Jan 20, 2017 12:53 |
|
The Nards Pan posted:I'm having a strange problem with my home qemu/KVM lab running on Ubuntu desktop 17.04 (although this issue was around in 16.04 and 16.10 too). Since I'm using my laptop as the host and it's often connected my wifi which doesn't support bridged connections, I have it set up with a NAT virtual network which also provides DHCP for the guests. If I let my guests use the DNS address that the DHCP provides, which is the network address of the NAT network (192.168.100.1), some websites won't connect while others work fine. I get a response from nslookup for the sites that don't work, but they immediately return a server not found page if I try to access them through a web browser from the guest. If I manually set the DNS to Google's everything works just fine. I can't seem to find a pattern of what works and what doesn't either - microsoft.com and mozilla.com are no good, google.com and somethingawful.com work fine. I get the same results from nslookup using 192.168.100.1 or 8.8.8.8 as DNS on my guest and the same result from nslookup on the host using my ISP DNS. Try putting virbr0 in promiscuous mode and tcpdumping it.
|
# ? Jan 20, 2017 12:57 |
|
Thanks Ants posted:Has anybody had an issue browsing NFS datastores in the ESXi 6.5 web client? Using SSH works fine, vCenter also works perfectly. Been working for me in both Flash and HTML5 versions. Keep in mind that while you are accessing the page view through vCenter, querying the actual contents of the datastore is being done by your browser initiating a direct connection over the NFC protocol to one of the hosts in that cluster, so firewall rules or untrusted certs could be shooting it down.
|
# ? Jan 20, 2017 17:27 |
|
theperminator posted:You're not blocking tcp DNS queries somehow are you? Large DNS responses sometimes require TCP. I don't think so. I've tried flushing IPtables on the host, and I've disabled SELinux and firewalld on the guests for troubleshooting with no joy. It happens when the host is connected to different networks, so I don't think it's being blocked upstream anywhere. evol262 posted:Try putting virbr0 in promiscuous mode and tcpdumping it. Someone in IRC suggested that last night. I got the following output just attempting to load https://www.microsoft.com: host kvm dnsmasq: http://pastebin.com/6K7QN4wE google dns: http://pastebin.com/WqxWwGwF Nothing jumped out at me, obviously the one that worked has a lot more data. I see 802.1d in the first one - could it be because the bridge has STP on? I appreciate the ideas, but I'll stop hijacking the thread with my troubleshooting - if nothing obvious jumps out at you more experienced folks I'll likely just save my qcow2 files and blow away the host and reinstall, it's set up for learning; there's nothing critical on here.
|
# ? Jan 20, 2017 17:35 |
|
BangersInMyKnickers posted:Been working for me in both Flash and HTML5 versions. Keep in mind that while you are accessing the page view through vCenter, querying the actual contents of the datastore is being done by your browser initiating a direct connection to one of the hosts on the NFC protocol in that cluster so firewall rules or untrusted certs could be shooting it down. That makes a lot of sense since the CA isn't in place yet so all the certs are untrusted self-signed ones. It works perfectly in vCenter, but the VM I was moving was the vCenter one, so I had to do it on the host. I had no issues moving it via the terminal and registering it in the inventory.
|
# ? Jan 20, 2017 20:30 |