kiwid
Sep 30, 2013

I haven't really messed with VMware for over a year. Anyway, last night I decided to get my home lab up and running again and got everything installed. I set up FreeNAS and exposed a ZFS volume over iSCSI. I connected to my VMware host, added the iSCSI software adapter, and set up the networking, etc. I then added the iSCSI discovery IP and it found the static paths just fine. 3 to be exact, which is correct since I have 3 NICs on both the host and the FreeNAS box. So, VMware successfully sees my FreeNAS LUN and the paths, but when I go into the storage section and try to create a datastore, VMware gets hung up on loading "Current disk layout".
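For reference, the software-iSCSI setup described above boils down to roughly these steps from the ESXi shell (a sketch; the adapter name vmhba33 and the target address are placeholders, not taken from my actual setup):

code:
# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# point dynamic (send targets) discovery at the FreeNAS portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260
# rescan so the LUN and its paths show up
esxcli storage core adapter rescan --all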

[screenshot (not my image): the "Create Datastore" wizard stuck loading "Current disk layout"]
Basically it gets stuck on the above but doesn't show any of the device information; it just says "loading" forever.

What could be wrong?

edit: And this happens on multiple VMware hosts.


kiwid
Sep 30, 2013

Wicaeed posted:

Is LACP usage still only officially supported with the vSphere Enterprise license?

It's been a while since I fudged around with it, but I recall that the free (and lower-licensed) tiers never really worked "right" or something.

Or maybe I'm just misremembering.

You need the Enterprise Plus license.

kiwid
Sep 30, 2013

I'm having trouble getting nested ESXi hosts to work across two networks, one being the labeled VM network (with the management port), and the second being a labeled iSCSI network on a different vSwitch. Here is a screenshot of the current setup:

[screenshot: networking setup on the physical host]

10.10.10.10 is the physical/top-level host and the screenshot shows its networking setup.

vSwitch0 has the labeled VM network and the management port and uses both physical NICs in a team.
vSwitch1 has the labeled iSCSI network, with a port in the iSCSI subnet and no physical NICs attached (they aren't needed, since all of this traffic should stay inside the vSwitches). A rough esxcli sketch of that layout follows.
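Recreating that second, uplink-less vSwitch from the ESXi shell would look roughly like this (a sketch; the port group name matches the description above, and the vmk IP is a placeholder):

code:
# vSwitch with no uplinks; traffic stays internal to the host
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="iSCSI Network" --vswitch-name=vSwitch1
# VMkernel port in the iSCSI subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="iSCSI Network"
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.20.10 --netmask=255.255.255.0 --type=static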

The 4 VMs are two nested ESXi hosts, one vCenter VM, and one FreeNAS box serving iSCSI. Each VM has two NICs, one in the labeled VM network and one in the labeled iSCSI network.

The vCenter VM doesn't really need to be in the iSCSI network, but I have one NIC in there for troubleshooting and pinging other VMs. The vCenter VM can ping the FreeNAS box and vice versa.

The problem is that I don't know how to set up the nested ESXi host networking. Here is a screenshot of how I envisioned it should look (.13 is esxi1 and .14 is esxi2):

[screenshot: the intended networking setup on the nested ESXi hosts]

Like I said, the vCenter and FreeNAS VMs can ping each other on either subnet, so why am I having problems with the ESXi VMs?

edit: I can ping the management port on the nested ESXi VMs, just not the iSCSI port.

edit2: Solved. Apparently I need to enable promiscuous mode on the physical host's vSwitches, though I don't really know why. I saw it in this article: http://www.vladan.fr/nested-esxi-6-in-a-lab/ and it appears to have solved my networking woes. If someone wants to explain why this is required, I'm all ears.
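(The usual explanation: the outer vSwitch only knows the MAC addresses of the nested ESXi VMs' own vNICs, so frames addressed to the MACs of guests running inside them get dropped unless the vSwitch forwards everything. Promiscuous mode, usually together with forged transmits, does exactly that. A sketch of the setting from the physical host's shell, using the vSwitch name from the setup above:)

code:
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-forged-transmits=true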


kiwid
Sep 30, 2013

What are your guys' opinions on Nutanix?

kiwid
Sep 30, 2013

Martytoof posted:

I've got a VCSA6 server at home that I'm deploying some test VMs on.

I need to get access to this from work for reasons, just so I can tweak VMs, etc.

I've forwarded https://mydynds.url.com:44481 to https://vcsa.localdomain:443, and while the initial connection works fine (I get the screen prompting me to click here to access the Web UI), once I click through it tries to redirect me to https://vcsa.localdomain:443/something/something, which I obviously won't be able to resolve externally.

Is there any way to tell VCSA to not force a URL redirect and just use the URL I came in on?

Yes, I realize that this is ridiculously insecure, but I don't really have the cycles to set up a VPN to home right now and I was hoping this would be a quick workaround. This isn't a long-term solution, but I'd like to get this working today, if it's at all possible. So far my googling hasn't come up with much.

What happens if you just hit the IP address instead?

kiwid
Sep 30, 2013

Is it recommended that we have at least one physical domain controller?

Also, do you guys virtualize vCenter or does that sit on a physical box as well?

kiwid
Sep 30, 2013

The environment I currently work in is pretty rough when it comes to hardware. We have a ton of "beige box" desktops running as servers, all due to having had literally a zero-dollar budget for the past 7 years. Luckily, the company was bought out and we have new owners/management and have been approved to purchase some brand new equipment.

The idea here is to purchase a Nimble SAN and a couple of beefy servers and virtualize everything. I'm pretty good with VMware, but only from a home lab environment. I've never really touched a Nimble SAN (or any enterprise storage device) before, but I will be the one in charge of purchasing this equipment and setting it up, and I'm confident I can do so (I've been on a Nimble course).

The one problem I have, though, is that I don't understand how backups work now. Before, we just used software like BackupAssist or Backup Exec that would use a traditional file-level backup technique and back up to either tape or NAS. The tapes would be taken offsite and the NAS would be synced to an offsite NAS.

In a virtualized world, I have no understanding of backups and was hoping someone could explain this to me. We'll be using VMware Essentials Plus and Veeam Essentials Plus. Originally I planned on backing up to NAS with Veeam, but how does Veeam work? Does Veeam back up at the file level, or does it back up snapshots, or something else? Also, my CDW rep told me that instead of backing up to a NAS, I should get a second Nimble and use it as a DR site. Would this eliminate the need for backups?

kiwid
Sep 30, 2013

So if we do go the backup-to-QNAP-NAS route, how does the QNAP hook up to the network? Do you guys put it on your core switches via iSCSI and then have the Veeam VM attach to the QNAP, or does the hypervisor attach to the QNAP and you expose VMDKs to it?

kiwid
Sep 30, 2013

We're about to order our brand new infrastructure and I was wondering if you guys could take a quick look and see if this all looks right (from a networking perspective).

Assume all SFP+ links are 10G.

edit: the VMware teamed links are going to be in failover mode rather than load-balanced teaming.


kiwid
Sep 30, 2013

cheese-cube posted:

Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective.

Also I'd be wary about using the Cisco SMB switches as they aren't exactly scalable (Also they've got weird specs, apparently they can only do jumbo frames on 10/100/1000 ports).

They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a boatload of money on the switches if we didn't have to. We aren't too worried about scalability since for the foreseeable future we're just going to be running VMware Essentials Plus, which limits us to 3 hosts anyway. However, I didn't know about the jumbo frame limitation, so I'll have to look into that.

kiwid
Sep 30, 2013

cheese-cube posted:

How is the storage going to be presented? iSCSI or NFS?

iSCSI for both the Nimble and the QNAP.

edit: Do you have any sources for the jumbo frame limitation I can look at?


kiwid
Sep 30, 2013

cheese-cube posted:

Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design.

Re jumbo frames, I just noticed it when skimming this datasheet, which has the following line: http://www.cisco.com/c/en/us/products/collateral/switches/small-business-500-series-stackable-managed-switches/c78-695646_data_sheet.html

Just thought it was weird how they specifically mention only 10/100/1000. Maybe the hardware can't switch 9K frames at 10G or something. Maybe ask over in the Cisco thread.

Great, thanks for your help. Yes, it's through a VAR, and they say it's solid as they have other SMBs running a similar setup.

edit: I started raising these questions with our VAR so they're setting us up with calls to Cisco professionals so we'll see where it goes.


kiwid
Sep 30, 2013

For a simple 10G copper switch (16-24 port) for the SAN fabric, what should I be looking at price-wise?

I'm getting quotes from CDW for about $3500-$4000 per unit. Does this sound relatively normal/average?

kiwid
Sep 30, 2013

KS posted:

There's nothing simple about storage switching. Some no-frills switches will handle it fine, but others like the SG500 series will fall over because of small buffers.

What are you buying?

CDW's "Networking professionals" recommended these: http://www.cisco.com/c/en/us/support/switches/sg550xg-24t-24-port-10gbase-t-stackable-switch/model.html or these:
http://www.cisco.com/c/en/us/support/switches/sg550xg-24f-24-port-ten-gigabit-switch/model.html depending on if we go copper or sfp+

Keep in mind this is a Nimble CS300 with two hosts.

I was also looking at Dell N4000 series but they're a couple grand more for each unit.

kiwid
Sep 30, 2013

NippleFloss posted:

Nexus 3524x is the cheapest 10g option from Cisco that I'd recommend for storage traffic.

What about non-Cisco?

We're cheap as hell, so really I'm looking for the cheapest thing that will work well.

edit: Our CDW rep is advising against the Cisco SG switches as well now after our latest conversation. He agrees the Cisco Nexus is the way to go. Also getting them to look into the Dell N4032F.

My boss is going to lose it when she sees me coming back with $7,000-per-switch quotes, though.


kiwid
Sep 30, 2013

We just modernized our infrastructure from physical to virtual. We now have an actual SAN too :D

I'm about to migrate our file server and I was wondering if I should create the 2TB volume as a .vmdk, or if I should create a volume on the Nimble SAN itself and attach to it through the Windows iSCSI initiator?

kiwid
Sep 30, 2013

Thanks Ants posted:

How are you backing up?

Veeam onto a huge QNAP.

kiwid
Sep 30, 2013

BangersInMyKnickers posted:

Isn't MS still claiming MSSQL under VMDKs is "unsupported" and telling you to either use iSCSI or VHD mounts, or did they finally drop that idiocy?

If I remember correctly from my Nimble course, they said that MSSQL in a VMDK wasn't supported and to instead use Windows iSCSI to connect to the volume, which is what prompted me to ask about the file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles.

kiwid
Sep 30, 2013

How do I disable the "guest not heartbeating" failure response on one specific VM in VMware?

[screenshot: vSphere HA VM Monitoring failure-response settings]

I don't want this VM to be reset if it fails a heartbeat. I don't see this option in VM Overrides?


kiwid
Sep 30, 2013

YOLOsubmarine posted:

It’s controlled by the “VM Monitoring” setting in the overrides.

That's what I thought, but even with it disabled it still shows the failure response as "Reset". Maybe it always shows that regardless?

kiwid
Sep 30, 2013

Another question. What is the best way of attaching storage to your Veeam server? My predecessor attached our 60TB QNAP to the VMware hosts via iSCSI and then, on the Veeam server, created and attached a 60TB .vmdk hard drive. Is this the best way, or should the QNAP have been directly connected to the Veeam server via Microsoft's iSCSI initiator rather than going through VMware?

kiwid
Sep 30, 2013

Look at what I just found...

[screenshot: a huge, long-forgotten snapshot on the VM]

I guess this is another indicator that it shouldn't be a .vmdk.

Edit: oh hell, this is going to take weeks to delete, I'm sure.


kiwid
Sep 30, 2013

Potato Salad posted:

Sorry, but may I share your predicament on Slack right now? That's hilarious.

Who, mine? Go for it!

kiwid
Sep 30, 2013

Potato Salad posted:

Real talk, my advice would be to architect this so a Windows Server 2016 machine controls the block-level filesystem of your Veeam backup repositories. I use MS's software iSCSI initiator to connect to the storage appliance targets.

I've found that Win2016's dedupe and compression is more aware of the underlying filesystem of what's being backed up. This saves, in my environment, an additional 15% on my final on-prem tier beyond Veeam's own dedupe and compression.

I think this is what I'll probably do. I don't think I can do an SMB share without reconfiguring some stuff, because we already have the iSCSI on the core 10-gig SFP+ switches.

kiwid
Sep 30, 2013

cheese-cube posted:

Hey kiwid if you decide to try and delete the snapshot instead of writing off the entire VM and deploying a new one then please keep us updated with the progress. Maybe we can start a pool for how long it will take?

It only took 5 hours to complete, surprisingly. We do have 10 gig, though. Writing off the VM crossed my mind.

kiwid
Sep 30, 2013

Sorry for such a dumb question, but for some reason someone created additional VMkernel ports for management on one of our hosts, so I cleaned it up, but now the other hosts have this error:

[screenshot: cluster warning about the removed management networks]

Where do I fix this so these hosts don't care about these networks?

edit: never mind, it was under the cluster settings.

kiwid
Sep 30, 2013

Question: I've been tasked to clone a virtual machine running some old crappy software so that they can mess around on the clone without affecting the production machine. We don't have a dev environment/network, so this machine would run alongside the other one on the same network, just with a different IP and hostname.

How do I actually achieve this in vCenter 6.0? I know there is a clone button, but I've been reading that I need to edit the .vmx file after cloning and change the UUID, and other people have said I need to sysprep the machine after the clone.

Can anyone clarify the exact process of cloning a VM and bringing it up on the same network?

edit: Also we have Veeam B&R 9.5u3, can I skip the whole vCenter thing and bring a copy up with this instead?
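For reference, the fully manual route (no vCenter clone button, no Veeam) would look roughly like this from the ESXi shell; the datastore, folder, and file names here are placeholders:

code:
# full copy of the virtual disk
vmkfstools -i /vmfs/volumes/datastore1/prodvm/prodvm.vmdk /vmfs/volumes/datastore1/clonevm/clonevm.vmdk
# copy the .vmx alongside it, edit displayName and the disk path, then register it
vim-cmd solo/registervm /vmfs/volumes/datastore1/clonevm/clonevm.vmx
When the copy is first powered on, ESXi asks whether the VM was moved or copied; answering "I Copied It" is what gets you a fresh UUID and MAC address, which covers the .vmx-editing concern above.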


kiwid
Sep 30, 2013

Thanks Ants posted:

Veeam can replicate the VM for you; use that to do this. You can re-IP the box as part of the replication task as well.

If it's domain joined then bring it up with the network disconnected the first time the replica boots, remove it from the domain, reboot, reconnect the virtual network and then bind to the domain with the new hostname.

Doesn't replication keep the two VMs in sync? I want them to be completely independent.


kiwid
Sep 30, 2013

Thanks Ants posted:

You can use the replication job to clone the VM as a one-time thing, you don't need to keep it in sync.

I see. Couldn't I just use a VM Copy job then, or is that not the same thing?

kiwid
Sep 30, 2013

I need to reboot one of my storage arrays for maintenance. What is the proper procedure for rebooting an iSCSI target attached to a VMware cluster?

I have a cluster of two ESXi hosts with a bunch of VMs. Two of these VMs use a datastore that is presented via iSCSI to both hosts.

Can I just shut down the two VMs using this datastore and then reboot the storage, or do I have to unmount or detach the storage first?
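For what it's worth, the unmount/detach steps mentioned above look roughly like this from the ESXi shell on each host (a sketch; the datastore label and device identifier are placeholders):

code:
# cleanly unmount the VMFS datastore (fails if anything on it is still in use)
esxcli storage filesystem unmount --volume-label=QNAP-Backups
# detach the backing device so the host doesn't treat the reboot as an all-paths-down event
esxcli storage core device set --state=off --device=naa.60014055665566556655665566556655
Reverse the two steps (--state=on, then mount the filesystem again) once the array is back up.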

kiwid
Sep 30, 2013

Internet Explorer posted:

You can put a datastore in maintenance mode just like a host. That would be the most cautious way of doing it. That being said, if your storage device is set up properly and has dual controllers, it should update one controller at a time and not cause any sort of issues. That does require everything to be set up properly, though, and is one of those things I like to test the first time I'm doing it in an environment.

Our main storage array is a proper array with dual controllers, but this storage unit I need to reboot is just a huge QNAP for local backups and doesn't have dual controllers. Only two VMs use it, but it's attached to the hosts and presented to the VMs via a VMFS datastore rather than the VMs attaching directly to it at the OS level. I didn't know I could put the datastore into maintenance mode, so I guess I'll just do that. Thanks.

kiwid
Sep 30, 2013

adorai posted:

How long does the reboot take? If your disk timeout value is high enough in the guest, you can just reboot the array. It is not recommended, but is possibly your best option.

Between 5-10 minutes when doing a firmware update.

I ended up just shutting down the two VMs and rebooting it. I forgot to put it in maintenance mode but everything worked out, so whatever.

kiwid
Sep 30, 2013

Anyone know why I can't add a VM to the same port group that a VMkernel port is on, using free ESXi 6.7?

I know in our vCenter 6.7 I can do this.


edit: err maybe I can't. I seem to be confused.


kiwid
Sep 30, 2013

SlowBloke posted:

There's nothing stopping you from creating a new port group with the same VLAN ID and a different name from the problematic one, and associating that VM with it.

That's what I ended up doing; I just thought you were able to put a VM network and a VMkernel port in the same port group.
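For anyone who finds this later, the workaround is roughly this from the ESXi shell (a sketch; the port group name and VLAN ID are made up for the example):

code:
esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VM Network 2" --vlan-id=20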

kiwid
Sep 30, 2013

We buy ours through CDW. We buy everything through CDW: Veeam, VMware, Office 365, etc.

kiwid
Sep 30, 2013

Can someone help me here?

Our vCenter server is saying we have two hosts with a health issue with the i40e network driver, detailed here: https://kb.vmware.com/s/article/2126909, and that we should switch to using the native driver.

I see that the native driver is installed:
code:
[root@bart:~] esxcli software vib list | grep i40
net-i40e                       1.3.45-1OEM.550.0.0.1331820          Intel            VMwareCertified   2018-11-29
i40en                          1.8.1.9-2vmw.670.3.73.14320388       VMW              VMwareCertified   2019-09-03
So I went to check which NICs are using this driver, and none of them are:
code:
[root@bart:~] esxcli network nic list
Name          PCI Device    Driver    Admin Status  Link Status  Speed  Duplex   MTU  Description
------------  ------------  --------  ------------  -----------  -----  ------  ----  -------------------------------------------------------
vmnic0        0000:02:00.0  ntg3      Up            Down             0  Half    1500  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet
vmnic1        0000:02:00.1  ntg3      Up            Down             0  Half    1500  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet
vmnic1000402  0000:08:00.0  nmlx4_en  Up            Up           10000  Full    1500  Mellanox Technologies MT27520 Family
vmnic1000502  0000:05:00.0  nmlx4_en  Up            Up           10000  Full    1500  Mellanox Technologies MT27520 Family
vmnic2        0000:02:00.2  ntg3      Up            Down             0  Half    1500  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet
vmnic3        0000:02:00.3  ntg3      Up            Down             0  Half    1500  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet
vmnic4        0000:08:00.0  nmlx4_en  Up            Up           10000  Full    9000  Mellanox Technologies MT27520 Family
vmnic5        0000:05:00.0  nmlx4_en  Up            Up           10000  Full    9000  Mellanox Technologies MT27520 Family
So what gives? Is it just complaining because that old i40e driver is installed but not in use?


kiwid
Sep 30, 2013

Cool, that's what I figured.

Doing a "esxcli software vib remove --vibname=net-i40e" seems to have fixed the issue.
