|
I haven't really hosed with VMware for over a year. Anyway, last night I decided to get my home lab up and running again and got everything installed. I set up FreeNAS and exposed a ZFS volume over iSCSI. I connected to my VMware host, added the iSCSI software adapter, and set up the networking, etc. I then added the iSCSI discovery IP and it found the static paths just fine. Three, to be exact, which is correct since I have three NICs on both the host and the FreeNAS box. So VMware successfully sees my FreeNAS LUN and the paths, but when I go into the storage section and try to create a datastore, VMware gets hung up on loading "Current disk layout". (not my image) Basically it gets stuck on the above but doesn't show any of the device information, it just says "loading" forever. What could be wrong? edit: And this happens on multiple VMware hosts.
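edit2: in case anyone wants to suggest things, here's the kind of poking I can do from the host shell. This is just a sketch of checks, not a fix — the naa device ID below is a placeholder for whatever your LUN shows up as:

```shell
# Confirm the software iSCSI adapter and the discovered paths
esxcli iscsi adapter list
esxcli storage core path list

# Force a rescan in case the device state is stale
esxcli storage core adapter rescan --all

# Check whether the LUN itself looks healthy (naa ID is a placeholder)
esxcli storage core device list -d naa.XXXXXXXXXXXXXXXX

# Try reading the partition table directly; if this hangs too,
# the problem is the device/target, not the datastore wizard
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX
```

If `partedUtil` hangs the same way the wizard does, the host can see the target but reads against the LUN are stalling, which points at the storage side rather than vSphere.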
|
# ¿ Jun 26, 2015 13:56 |
|
|
|
Wicaeed posted:Is LACP usage still only officially supported with the vSphere Enterprise license? You need the Enterprise Plus license.
|
# ¿ Jul 7, 2015 17:12 |
|
I'm having trouble getting nested ESXi hosts to work in two networks, one being the labeled VM network (with the management port), and the second being a labeled iSCSI network on a different vSwitch. Here is a screenshot of the current setup: 10.10.10.10 is the physical/top-level host and the screenshot shows the networking setup. vSwitch0 has the labeled VM network and the management port and uses both physical NICs in teaming. vSwitch1 has the labeled iSCSI network and also has a port in the iSCSI subnet and uses no physical NICs (because they aren't needed, as all the networking should stay within the vSwitches). The four VMs are two nested ESXi hosts, one vCenter VM, and one FreeNAS with iSCSI. Each VM has two NICs, one in the labeled VM network and one in the labeled iSCSI network. The vCenter VM doesn't really need to be in the iSCSI network, but I have one NIC in there for troubleshooting and pinging other VMs. The vCenter VM can ping the FreeNAS and vice versa. The problem is that I don't know how to set up the nested ESXi host networking. Here is a screenshot of how I envisioned it looking (.13 is esxi1 and .14 is esxi2): Like I said, the vCenter and FreeNAS VMs can ping each other on either subnet, so why am I having problems with the ESXi VMs? edit: I can ping the management port on the nested ESXi VMs, just not the iSCSI port. edit2: Solved. Apparently I need to enable promiscuous mode on the physical host's vSwitches, though I don't really know why. I saw it in this article: http://www.vladan.fr/nested-esxi-6-in-a-lab/ and it appears to have solved my networking woes. If someone wants to explain why this is required, I'm all ears. kiwid fucked around with this message at 03:33 on Jul 9, 2015 |
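edit3: my loose understanding of the why, for anyone who finds this later: a standard vSwitch only delivers frames addressed to the MACs that vSphere itself assigned to the vNICs attached to it, and a nested ESXi host hides its own VMkernel (and nested guest) MACs behind its outer vNIC, so frames to those inner MACs get silently dropped until the outer vSwitch is allowed to be promiscuous. A sketch of the equivalent esxcli on the physical host — the vSwitch name is from my lab, substitute your own:

```shell
# Deliver frames for MACs the vSwitch didn't assign itself,
# so traffic reaches the nested hosts' inner VMkernel MACs
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch1 --allow-promiscuous=true

# Usually needed too: let the nested hosts transmit frames
# sourced from their inner MAC addresses
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch1 --allow-forged-transmits=true
```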
# ¿ Jul 9, 2015 01:56 |
|
What are your guys' opinions on Nutanix?
|
# ¿ Jul 11, 2015 02:33 |
|
Martytoof posted:I've got a VCSA6 server at home that I'm deploying some test VMs on. What happens if you just hit the IP address instead?
|
# ¿ Jul 13, 2015 17:17 |
|
Is it recommended that we have at least one physical domain controller? Also, do you guys virtualize vCenter or does that sit on a physical box as well?
|
# ¿ Jul 14, 2015 15:19 |
|
The environment I currently work in is super lovely when it comes to hardware. We have a tonne of "beige box" desktops running as servers, all due to having a literally zero-dollar budget for the past 7 years. Luckily, the company was bought out, and with new owners/management we've been approved to purchase some brand new equipment. The idea here is to purchase a Nimble SAN and a couple of beefy servers and virtualize everything. I'm pretty good with VMware, but only from a home lab environment. I've never really touched a Nimble SAN (or any enterprise storage device) before, but I will be the one in charge of purchasing this equipment and setting it up, and I'm confident I can do so (I've been on a Nimble course). The one problem I have, though, is that I don't understand how backups work now. Before, we just used software like BackupAssist or Backup Exec that would use a traditional file-level backup technique and back up to either tape or NAS. The tapes would be taken offsite and the NAS would be synced to an offsite NAS. In a virtualized world I have no understanding of backups, and I was hoping someone could explain it to me. We'll be using VMware Essentials Plus and Veeam Essentials Plus. Originally I planned on backing up to NAS with Veeam, but how does Veeam work? Does Veeam back up at the file level, or does it back up snapshots or something? Also, my CDW rep told me that instead of backing up to a NAS, I should get a second Nimble and use it as a DR site. Would that eliminate the need for backups?
|
# ¿ Dec 17, 2015 20:33 |
|
So if we do go the backup-to-QNAP-NAS route, how does the QNAP hook up to the network? Do you guys put it on your core switches via iSCSI and have the Veeam VM attach to the QNAP, or does the hypervisor attach to the QNAP and you expose VMDKs to it?
|
# ¿ Dec 18, 2015 21:11 |
|
We're about to order our brand new infrastructure and I was wondering if you guys could take a look at it real quick and see if this all looks right (from a networking perspective). Assume all SFP+ links are 10G. edit: the VMware teaming links are going to be in failover mode rather than teaming. kiwid fucked around with this message at 16:10 on Jan 7, 2016 |
# ¿ Jan 7, 2016 16:07 |
|
cheese-cube posted:Are the two switches on the core side in the same stack or standalone? If they're standalone then you're going to have to provide a lot more detail about the Layer 2/3 configuration if you want anyone to validate it from a networking perspective. They are standalone and we'd be doing round-robin MPIO with them. Do you recommend something else? We didn't want to spend a gently caress load of money on the switches if we didn't have to. We aren't too worried about scalability, since for the foreseeable future we're just going to be running VMware Essentials Plus, which limits us to three hosts anyway. However, I didn't know about the jumbo frame limitation, so I'll have to look into that.
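edit: for my own notes while I look into jumbo frames — my understanding is the MTU has to be raised end to end (switch ports, vSwitch, and VMkernel ports), or frames get dropped somewhere in the middle. A sketch of the host side; the vSwitch/vmk names and the target IP are guesses from my own setup:

```shell
# Raise the MTU on the iSCSI vSwitch and its VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Validate end to end: 8972 = 9000 minus 20 (IP) and 8 (ICMP) bytes
# of headers; -d forbids fragmentation so a mismatch fails loudly
vmkping -d -s 8972 192.168.50.10
```

If the vmkping fails at 8972 but works at the default size, some hop in between (switch port, array interface) hasn't had its MTU raised.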
|
# ¿ Jan 7, 2016 16:46 |
|
cheese-cube posted:How is the storage going to be presented? iSCSI or NFS? iSCSI, both with the Nimble and the QNAP. edit: Do you have any sources for the jumbo frame limitation I can look at? kiwid fucked around with this message at 16:58 on Jan 7, 2016 |
# ¿ Jan 7, 2016 16:56 |
|
cheese-cube posted:Hmm it should work I guess with iSCSI provided that each target/initiator IP address is tied to a single interface. Are you purchasing via a VAR? It would be a good idea to get one of their sales engineers or whatnot to eyeball the design. Great, thanks for your help. Yes it's through a VAR and they say it's solid as they have other SMBs running a similar setup. edit: I started raising these questions with our VAR so they're setting us up with calls to Cisco professionals so we'll see where it goes. kiwid fucked around with this message at 17:30 on Jan 7, 2016 |
# ¿ Jan 7, 2016 17:16 |
|
For a simple 10G copper switch (16-24 ports) for the SAN fabric, what should I be looking at price-wise? I'm getting quotes from CDW for about $3,500-$4,000 per unit. Does this sound relatively normal/average?
|
# ¿ Jan 14, 2016 17:19 |
|
KS posted:There's nothing simple about storage switching. Some no-frills switches will handle it fine, but others like the SG500 series will poo poo the bed because of small buffers. CDW's "networking professionals" recommended these: http://www.cisco.com/c/en/us/support/switches/sg550xg-24t-24-port-10gbase-t-stackable-switch/model.html or these: http://www.cisco.com/c/en/us/support/switches/sg550xg-24f-24-port-ten-gigabit-switch/model.html depending on whether we go copper or SFP+. Keep in mind this is a Nimble CS300 with two hosts. I was also looking at the Dell N4000 series, but they're a couple grand more per unit.
|
# ¿ Jan 15, 2016 02:10 |
|
NippleFloss posted:Nexus 3524x is the cheapest 10g option from Cisco that I'd recommend for storage traffic. What about non-Cisco? We're cheap as gently caress, so really I'm looking for the cheapest thing that will work well. edit: Our CDW rep is now advising against the Cisco SG switches as well after our latest conversation. He agrees the Cisco Nexus is the way to go. Also getting them to look into the Dell N4032F. My boss is going to poo poo her pants when she sees me coming back with $7,000/switch quotes though. kiwid fucked around with this message at 16:00 on Jan 15, 2016 |
# ¿ Jan 15, 2016 15:44 |
|
We just modernized our infrastructure from physical to virtual. We now have an actual SAN, too. I'm about to migrate our file server and I was wondering if I should create the 2TB volume as a .vmdk, or if I should create a volume on the Nimble SAN itself and attach to it through the Windows iSCSI initiator?
|
# ¿ Jan 9, 2017 14:18 |
|
Thanks Ants posted:How are you backing up? Veeam onto a huge QNAP.
|
# ¿ Jan 9, 2017 14:35 |
|
BangersInMyKnickers posted:Isn't MS still claiming MSSQL under vmdk's is "unsupported" and tells you to either use iSCSI or vhd mounts, or did they finally drop that idiocy? If I remember correctly from my Nimble course, they said that MSSQL in a .vmdk wasn't supported and to instead use the Windows iSCSI initiator to connect to the volume, which is what prompted me to ask about the file server as well. I know Nimble does some special things to volumes dedicated to different performance profiles.
|
# ¿ Jan 9, 2017 17:08 |
|
How do I disable the "guest not heartbeating" failure response on one specific VM in VMware? I don't want this VM to be reset if it fails a heartbeat. I don't see this option in VM Overrides. kiwid fucked around with this message at 16:16 on Mar 19, 2018 |
# ¿ Mar 19, 2018 16:08 |
|
YOLOsubmarine posted:It’s controlled by the “VM Monitoring” setting in the overrides. That's what I thought, but even with it disabled it still shows a failure response of "reset". Maybe it always shows that regardless?
|
# ¿ Mar 19, 2018 20:29 |
|
Another question: what is the best way of attaching storage to your Veeam server? My predecessor attached our 60TB QNAP to the VMware hosts via iSCSI and then created and attached a 60TB .vmdk hard drive on the Veeam server. Is this the best way, or should the QNAP have been connected directly to the Veeam server via Microsoft's iSCSI initiator rather than going through VMware?
|
# ¿ Mar 20, 2018 17:05 |
|
Look at what I just found... I guess this is another indicator that it shouldn't be a .vmdk. Edit: oh gently caress, this is going to take weeks to delete, I'm sure. kiwid fucked around with this message at 19:18 on Mar 20, 2018 |
# ¿ Mar 20, 2018 19:06 |
|
Potato Salad posted:Sorry but may I share your predicament on slack right now, that's hilarious Who, mine? Go for it!
|
# ¿ Mar 21, 2018 13:23 |
|
Potato Salad posted:Real talk, my advice would be to architect this so a Windows Server 2016 machine controls the block-level filesystem of your Veeam backup repositories. I use MS's software iSCSI initiators to storage appliance targets. I think this is what I'll probably do. I don't think I can do an SMB share without reconfiguring some poo poo because we already have the iSCSI on the core 10-gig SFP+ switches.
|
# ¿ Mar 21, 2018 13:30 |
|
cheese-cube posted:Hey kiwid if you decide to try and delete the snapshot instead of writing off the entire VM and deploying a new one then please keep us updated with the progress. Maybe we can start a pool for how long it will take? It only took 5 hours to complete, surprisingly. We do have 10gig though. Writing off the VM crossed my mind.
|
# ¿ Mar 21, 2018 19:23 |
|
Sorry for such a dumb question, but for some reason someone created additional VMkernel ports for management on one of our hosts, so I cleaned it up, but now the other hosts have the error: Where do I fix this so these hosts don't care about these networks? edit: nevermind, it was under the cluster settings.
|
# ¿ Apr 4, 2018 15:18 |
|
Question: I've been tasked with cloning a virtual machine running some old lovely software so that they can mess around on the clone without affecting the production machine. We don't have a dev environment/network, so this machine would run alongside the other one on the same network, just with a different IP and hostname. How do I actually achieve this in vCenter 6.0? I know there is a clone button, but I've been reading that I need to edit the .vmx file after cloning and change the UUID, and other people have said I need to sysprep the machine after the clone. Can anyone clarify the exact process of cloning a VM and bringing it up on the same network? edit: Also, we have Veeam B&R 9.5u3; can I skip the whole vCenter thing and bring a copy up with that instead? kiwid fucked around with this message at 22:00 on Jul 12, 2018 |
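edit2: for posterity, my understanding after reading around: the vCenter clone button already generates a new BIOS UUID and MAC addresses for the clone, so the .vmx-editing advice only applies if you copy the VM's files around by hand. If you do copy by hand, ESXi prompts "did you move or copy this VM?" at first power-on, and answering "I Copied It" regenerates the UUID/MACs for you. Sysprep is a separate Windows concern (duplicate SID/AD machine account), not a vSphere one. A rough sketch of the by-hand route — the datastore and VM names below are made up:

```shell
# Copy the VM's config by hand (paths/names made up for illustration)
mkdir /vmfs/volumes/datastore1/oldapp-clone
cp /vmfs/volumes/datastore1/oldapp/oldapp.vmx \
   /vmfs/volumes/datastore1/oldapp-clone/

# Clone the disk with vmkfstools rather than cp so descriptor and
# extents are handled properly; -d thin keeps the copy thin
vmkfstools -i /vmfs/volumes/datastore1/oldapp/oldapp.vmdk \
   -d thin /vmfs/volumes/datastore1/oldapp-clone/oldapp.vmdk

# Register the copy with the host; power it on and answer
# "I Copied It" so a new UUID and MACs are generated
vim-cmd solo/registervm /vmfs/volumes/datastore1/oldapp-clone/oldapp.vmx
```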
# ¿ Jul 12, 2018 21:57 |
|
Thanks Ants posted:Veeam can replicate the VM for you, use that to do this. You can re-IP the box as part of the replication task as well. Doesn't replication keep the two VMs in sync? I want them to be completely independent. kiwid fucked around with this message at 22:21 on Jul 12, 2018 |
# ¿ Jul 12, 2018 22:19 |
|
Thanks Ants posted:You can use the replication job to clone the VM as a one-time thing, you don't need to keep it in sync. I see. Couldn't I just use a VM Copy job then or is that not the same thing?
|
# ¿ Jul 12, 2018 22:22 |
|
I need to reboot one of my storage arrays for maintenance. What is the proper procedure for rebooting an iSCSI target attached to a VMware cluster? I have a cluster of two ESXi hosts with a bunch of VMs. Two of these VMs use a datastore that is presented via iSCSI to both hosts. Can I just shut down the two VMs using this datastore and then reboot the storage, or do I have to unmount or detach the storage?
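edit: for reference, the by-the-book detach sequence I found, sketched with esxcli — the datastore label and device ID below are placeholders for my own:

```shell
# After shutting down the VMs on it, unmount the datastore on each
# host (label is a placeholder)
esxcli storage filesystem list
esxcli storage filesystem unmount -l BackupQNAP

# Detach the backing device so the hosts stop probing dead paths
# while the array reboots (naa ID is a placeholder)
esxcli storage core device set -d naa.XXXXXXXXXXXXXXXX --state=off

# Once the array is back up: reattach, rescan, and remount
esxcli storage core device set -d naa.XXXXXXXXXXXXXXXX --state=on
esxcli storage core adapter rescan --all
esxcli storage filesystem mount -l BackupQNAP
```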
|
# ¿ Apr 23, 2019 15:35 |
|
Internet Explorer posted:You can put a datastore in maintenance mode just like a host. That would be the most cautious way of doing it. That being said, if your storage device is set up properly and has dual controllers, it should update one controller at a time and not cause any sort of issues. That does require everything to be set up properly, though, and is one of those things I like to test the first time I'm doing it in an environment. Our main storage array is a proper array with dual controllers, but the unit I need to reboot is just a huge QNAP for local backups and doesn't have dual controllers. Only two VMs use it, but it's attached to the hosts and presented to the VMs as a VMFS datastore rather than the VMs attaching to it directly at the OS level. I didn't know I could put the datastore into maintenance mode, so I guess I'll just do that. Thanks.
|
# ¿ Apr 23, 2019 15:52 |
|
adorai posted:How long does the reboot take? If your disk timeout value is high enough in the guest, you can just reboot the array. It is not recommended, but is possibly your best option. Between 5-10 minutes when doing a firmware update. I ended up just shutting down the two VMs and rebooting it. I forgot to put it in maintenance mode but everything worked out so gently caress it.
|
# ¿ Apr 24, 2019 15:02 |
|
I know in our vCenter 6.7 I can do this. edit: err maybe I can't. I seem to be confused. kiwid fucked around with this message at 16:12 on Sep 5, 2019 |
# ¿ Sep 5, 2019 15:59 |
|
SlowBloke posted:There is nothing stopping you from creating a new port group with the same VLAN ID and a different name from the problematic one and associating that VM That's what I ended up doing; I just thought you were able to have a VM network and a VMkernel port in the same port group.
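For anyone searching later, a sketch of that workaround with esxcli — a second port group on the same vSwitch with the same VLAN ID, so the VM network and the VMkernel port each get their own. The names and VLAN ID here are made up:

```shell
# Create a second port group on the same vSwitch
# (port group/vSwitch names and VLAN ID are made up)
esxcli network vswitch standard portgroup add \
    --portgroup-name="VM Network 2" --vswitch-name=vSwitch0

# Tag it with the same VLAN ID as the original port group
esxcli network vswitch standard portgroup set \
    --portgroup-name="VM Network 2" --vlan-id=20
```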
|
# ¿ Sep 6, 2019 14:43 |
|
We buy ours through CDW. We buy everything through CDW, Veeam, VMware, Office365, etc.
|
# ¿ Sep 23, 2019 19:32 |
|
Can someone help me here? Our vCenter server is saying we have two hosts with a health issue with the i40e network driver, detailed here: https://kb.vmware.com/s/article/2126909 and that we should switch to using the native driver. I see that the native driver is installed: code:
code:
|
# ¿ Sep 25, 2019 22:09 |
|
|
|
Cool, that's what I figured. Doing an "esxcli software vib remove --vibname=net-i40e" seems to have fixed the issue.
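edit: for anyone else landing on that KB, a sketch of the checks I'd use to confirm which driver each NIC actually loaded after removing the legacy vib — the vmnic number below is a placeholder:

```shell
# Confirm the legacy net-i40e vib is gone and only the
# native i40en driver remains
esxcli software vib list | grep -i i40e

# Show the uplinks and which driver a given one loaded
# (vmnic number is a placeholder)
esxcli network nic list
esxcli network nic get -n vmnic4
```

A reboot may be needed after removing the vib before the NICs bind to the native driver.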
|
# ¿ Sep 26, 2019 14:47 |