Actuarial Fables
Jul 29, 2014

Taco Defender
When you created your Windows VM, ESXi attached a virtual monitor to it. That monitor never physically displays anything, but because it's attached, Windows does its part and keeps composing frames for it. Both the ESXi console and VNC Viewer read the framebuffer backing that fake monitor to build the image you see. Since the fake monitor is never detached from the OS, Windows keeps rendering to it even after you close VNC Viewer, so your script can still watch for changes.

RDP doesn't use any pre-existing framebuffers. As H2SO4 said, it creates its own virtual devices when you initiate the session, then removes them once you end the session. When the RDP monitor is removed, Windows stops creating video for that video device, so your script can't run anymore.


Actuarial Fables
Jul 29, 2014

Taco Defender
I'm in the process of re-configuring my home lab and have a networking question. My (planned so far) setup is:

VM Host1: CentOS (maybe) KVM on AMD 1700X/64GB, one 1Gb Ethernet port
VM Host2: Windows Server 2016 Hyper-V on Intel 4790K/16GB, one 1Gb Ethernet port
Storage/VHD Store: FreeNAS on Intel E3-1220 v3/32GB, six 4TB drives in RAID10, two 1Gb Ethernet ports

All connected to an 8-port managed switch

Because the storage server is the only one with multiple network interfaces, I wouldn't be able to use iSCSI multipathing on the two hosts, correct? I do have the ability to use link aggregation between the storage server and the switch, so if multipathing is out, I assume it would probably be best to enable a LAG.

Actuarial Fables
Jul 29, 2014

Taco Defender

TheFace posted:

Why even attempt to use iSCSI?

I want to try it out! :)

quote:

You can technically use MPIO with a single NIC because the storage end has two NICs (so two paths, Server NIC to Storage 1, and Server NIC to storage 2). I just don't think the juice is worth the squeeze when you can just go file instead of block.

Makes sense. I was reading up on aggregating links in FreeNAS and noticed that they recommended leaving the links separate for iSCSI if multipathing was desired. I doubt I would notice any performance difference either way, but it's a lab so hey.

Actuarial Fables
Jul 29, 2014

Taco Defender
If you're looking for something high-level that's already configured for use, then Proxmox is pretty good. Web GUI management.

https://www.proxmox.com/en/proxmox-ve

Actuarial Fables
Jul 29, 2014

Taco Defender
Having fun trying to get PCIe passthrough of a GPU working. Mostly writing this down for my own benefit, but if anyone has a suggestion I'd be happy to hear it.

Host OS: Proxmox (Debian with KVM/QEMU)
Guest OS: Windows 10 Pro 1903
CPU: AMD Ryzen 1700x
Motherboard: Gigabyte AB350M-Gaming 3, BIOS version F42a/AGESA 1.0.0.3 ABB
GPU1 in PCIe slot 1: Nvidia GTX 1080 - passthrough card
GPU2 in PCIe slot 2: Nvidia GTX 745 - display adapter for host system

Virtualization extensions and IOMMU are enabled in the BIOS. The 1080's video and audio functions are in their own IOMMU group, and the vfio-pci kernel driver is bound to them instead of the nvidia/nouveau driver (the gist of the modprobe.d side is sketched below the links). I was running into an issue described here where the PCIe device would throw an error when trying to pass it through; recompiling the kernel with the patched PCI settings got me closer to my goal, as Windows detected the PCIe device after the patch, albeit with error Code 43.

After changing the VM settings to hide the hypervisor from the card, I got the Windows 10 guest to detect the PCIe device and correctly identify it as a GTX 1080, but after a minute the guest became unresponsive and locked up. After rebooting the guest, the VM fails to load Windows until I remove the passthrough device. Rebooting the host doesn't change anything, so I'm now thinking it's a VM configuration issue and not a host not-letting-go-of-the-PCIe-device issue.

lspci -v output https://pastebin.com/0DCCmdMV
/etc/default/grub https://pastebin.com/1wva1z7k
/etc/modprobe.d files https://pastebin.com/HzTh9xfd
iommu groups https://pastebin.com/vnY8yJUA
Guest VM settings https://pastebin.com/9jMxYgVf
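
For reference, the vfio-pci binding in those modprobe.d files boils down to something like the sketch below. The 10de:1b80/10de:10f0 IDs are the usual GTX 1080 video/audio functions, assumed here rather than copied from the pastebin.
code:
# /etc/modprobe.d/vfio.conf - hand the 1080's video and audio functions to vfio-pci
options vfio-pci ids=10de:1b80,10de:10f0
# make sure vfio-pci claims the card before the display drivers can
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci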

e. Deleted the guest VM and recreated it, using a CPU type of "host". Same situation - Windows is fine until it auto-installs the video driver, then it locks up and can't boot back up without removing the device.

This time, however, I used one of my two brain cells and made a snapshot before adding the PCIe device.

e2. The monitor attached to the 1080 never detects a signal during the entire sequence: Windows starting up - Windows detecting the PCIe device - Windows locking up.
e3. Disabled automatic driver installation in the guest, but it's still automatically picking up the GPU. Task Manager shows the CPU pegged at ~100%, but because of the unresponsiveness it doesn't show individual process percentages. Attempting to disable the GPU through Device Manager just makes Device Manager hang.
e4. Attached a virtual display adapter while also attaching the PCIe device. Windows wasn't able to automatically install the driver for the PCIe device, so I manually installed new drivers to see what would happen. Rebooted, and Windows bluescreened when trying to load as expected, but it gave me a new item to search for: nvlddmkm.sys. Found a post stating that if you're using the q35 chipset you may need to set "kernel_irqchip=on" in the machine config. Seems to be stable for now, though Proxmox does not like having that setting in the config file.
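
As far as I can tell, the workaround boils down to handing that option straight to QEMU from the VM config, roughly like this sketch (<vmid> is a placeholder, and Proxmox may grumble about manual args):
code:
# /etc/pve/qemu-server/<vmid>.conf - pass kernel_irqchip through to the q35 machine type
args: -machine type=q35,kernel_irqchip=on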

Woo

Actuarial Fables fucked around with this message at 03:58 on Sep 17, 2019

Actuarial Fables
Jul 29, 2014

Taco Defender
I'm trying to get iSCSI multipath working the way I would like on my Linux host (Proxmox), one host to one storage device. In Windows you can configure Round Robin with Subset to create a primary group and a standby group should the primary fail entirely - how would one create a similar setup under Linux? I've been able to create a primary group with 4 paths and that works fine, but I can't figure out how to add a "don't use this unless everything else has failed" path.

Actuarial Fables
Jul 29, 2014

Taco Defender

Pile Of Garbage posted:

I'm not familiar with proxmox but is there a specific reason why you need iSCSI Multipath and can't just rely on link aggregation?

I'm mostly just trying to do stupid things in my lab so that I can understand things better.

BangersInMyKnickers posted:

Could probably pull off the same without the need for the extra lun layer with NFS4 MPIO support.

That was going to be my next project once I finally wrap my head around iscsi multipath configuration.

Actuarial Fables
Jul 29, 2014

Taco Defender

Pile Of Garbage posted:

Nice, I can get on-board with that (And explains why I've got such expensive poo poo in my home network). What SAN are you using and does it present multiple target IPs?



(it does present multiple target IPs)

My goal is to have the four paths connected through the Lab switch be the active group, load balancing between themselves, and also include the Admin path as a failover path that is only used if all the Lab paths go down (like if I unplugged the lab switch or something). I've been able to get the four lab paths working as a multipath group (or I did until I broke it yesterday), so now I'm trying to figure out how to get the failover path configured. I figure that if Windows has the kind of config I'd like (Round Robin with Subset), it should be possible to make something similar under Linux.

e. From what I'm able to gather, I need to set the grouping policy to be based on priority, then set a lower priority on the Admin path. This should create two separate path groups. Not sure if I can have a group with just one path, but I suppose I'll find out.
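
For my own notes, the multipath.conf stanza I'm aiming for would look something like this sketch - the WWID and device names are placeholders, and I haven't verified the weightedpath arguments yet:
code:
# /etc/multipath.conf - group paths by priority: four Lab paths active, Admin path on standby
multipaths {
    multipath {
        wwid                 <iscsi-lun-wwid>
        path_grouping_policy group_by_prio
        prio                 weightedpath
        prio_args            "devname sd[b-e] 50 sdf 1"
        failback             immediate
    }
}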

Actuarial Fables fucked around with this message at 02:40 on Jan 14, 2020

Actuarial Fables
Jul 29, 2014

Taco Defender
I was fighting with GPU passthrough again with kvm/qemu, figured I'd share in case anyone else runs into issues.

Setup:
OS: Proxmox 6.1 (Debian)
CPU: Ryzen 2700x
Motherboard: Asus X470-Pro Prime, 1.0.0.4 patch B firmware
Host GPU: Nvidia Quadro K600
Guest GPU: Nvidia GTX 1080

Kernel cli params, vfio, and lspci stuff: https://pastebin.com/fgXuk1X1

Everything works flawlessly as long as no display is attached to the guest GPU while the host boots. Attaching a monitor afterwards doesn't mess anything up and the guest VM is as happy as can be, but that manual intervention isn't desirable.

Booting the host with a monitor attached to the guest GPU prevents passthrough because the framebuffer claims some memory, even with the "video=vesafb:off,efifb:off" kernel parameters supposedly disabling it. Without the parameters there is a lot more log output; with them there are only a few lines. Either way the GPU cannot be passed through. I was able to solve this previously by selecting which GPU the motherboard initializes and eventually hands over to the OS, but the Asus board doesn't have that option.

Trying to start the guest VM will output this message in dmesg
code:
[   87.889480] vfio-pci 0000:09:00.0: BAR 1: can't reserve [mem 0xe0000000-0xefffffff 64bit pref]
Looking at /proc/iomem...
code:
c0000000-fec2ffff : PCI Bus 0000:00
  ...
  e0000000-f1ffffff : PCI Bus 0000:09
    e0000000-efffffff : 0000:09:00.0
      e0000000-e02fffff : efifb
    f0000000-f1ffffff : 0000:09:00.0
The solution was to remove vesafb:off from the kernel parameters, leaving video=efifb:off :doh:
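
For reference, the relevant line in /etc/default/grub ends up looking something like this (a sketch - the IOMMU flags are the usual AMD passthrough ones and may differ slightly from the pastebin above):
code:
# /etc/default/grub - IOMMU enabled, only the efifb parameter kept
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt video=efifb:off"
# then run update-grub and reboot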

Actuarial Fables
Jul 29, 2014

Taco Defender

movax posted:

So I've given up on ESXi and trying out Proxmox now.

One thing I can't figure out... in ESXi, you spawned vmkernel NICs for access to management interface. How does that work on Proxmox? I can't seem to find an obvious place to control where the management interface listens.

It is a DeskMini 310 w/ single NIC I'm finally going to colo, so in my ideal world, I'd like to:

Untagged <--> eno1 <--> vmbr0 <---> pfSense VM WAN Interface
VLAN 10 <--> eno1.10 <--> vmbr0.10 <--> Proxmox management interface

And then vmbr1 is Proxmox, pfSense LAN, TrueNAS and my Linux VM. This will stop me from getting locked out of Proxmox if pfSense is down / let me use it for initial setup at home.

Essentially, any IP address you configure under Networking is a management interface.

The documentation has an example of what you're trying to do with the NIC, under Host System Administration > Network Configuration > 802.1q. I don't think you can make this kind of configuration through the GUI, so you'll have to edit /etc/network/interfaces manually.

code:
Example: Use VLAN 10 for the Proxmox VE management IP with VLAN aware Linux bridge

auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.10
iface vmbr0.10 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

Actuarial Fables fucked around with this message at 09:44 on Jan 12, 2022

Actuarial Fables
Jul 29, 2014

Taco Defender

Ok so posting at 3am apparently means I don't read your full question.

https://pve.proxmox.com/pve-docs/pveproxy.8.html

You can configure which IP address the management service binds to (default is all) and which addresses it allows/denies in /etc/default/pveproxy.
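
A minimal example of that file (addresses are made up for illustration):
code:
# /etc/default/pveproxy - restrict the web UI/API
LISTEN_IP="10.10.10.2"
ALLOW_FROM="10.10.10.0/24"
DENY_FROM="all"
POLICY="allow"
# apply with: systemctl restart pveproxy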

Actuarial Fables
Jul 29, 2014

Taco Defender

Subjunctive posted:

I’m sort of itching to try Proxmox vGPU stuff on my desktop, because 3950X+3090 should be able to handle more than one game at a time, but I might wait a few months until this machine isn’t my daily driver any more.

Does that work with consumer cards now? I've got a 3700X and a 1080 in one of my hosts, could be fun to get that configured.

Actuarial Fables
Jul 29, 2014

Taco Defender

Subjunctive posted:

I’m told it works up to Turing, yeah.

Thanks for bringing this up. I was able to get my GTX1080 set up as a vgpu-capable device and my gaming VM now has a nice """Quadro P5000""" installed.


Actuarial Fables
Jul 29, 2014

Taco Defender

Mr Shiny Pants posted:

What did you use? Is this different from the patched vGPU stuff?

https://gitlab.com/polloloco/vgpu-proxmox/-/tree/master
https://www.michaelstinkerings.org/using-vgpu-unlock-with-proxmox-7/

Patching the driver so consumer cards can act as vGPU-capable devices is the host portion. Spoofing the PCI device so the VM sees a workstation card instead of a GRID vGPU is the guest portion; otherwise performance would degrade as the vGPU gets more and more angry that it can't reach a licensing server.

The guides have you set the vendor and device IDs in the profile_override.toml file, but I had to set them in the VM config file. If you're using the GUI, there are fields for those values when adding a PCI device.
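
For reference, the end result in the VM config looks something like this sketch (the mdev type and IDs here are placeholders for illustration, not the exact P5000 values):
code:
# /etc/pve/qemu-server/<vmid>.conf - mediated vGPU with spoofed PCI IDs
hostpci0: 0000:09:00.0,mdev=nvidia-48,vendor-id=0x10de,device-id=0x1bb0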
