Saukkis
May 16, 2003

There has been some discussion about this in other threads. It's doable, but not as simple as you would hope.

PC Building/Upgrading/Parts-picking Megathread

Police Automaton posted:

I did some research into this as I'm building exactly this kind of computer, and nowadays it works a lot better than it did just a few years ago. If you do it correctly, you'll basically get native performance in Windows for all practical gaming purposes.

Your CPU has to be IOMMU-capable; you'll probably go with Intel, so that means a VT-d-capable CPU. You really need to look into whether the motherboard implements it correctly: the implementation on some boards is apparently borked even when it's theoretically supported, and there are lists of tested boards around if you search long enough. Also, you'd probably want to avoid Nvidia cards, as Nvidia seems to dislike people using their non-Quadro cards for this purpose. (There are workarounds, but they apparently come at a performance loss; take that with a grain of salt, I'm not sure how current that information is.)

I went with a 5820K because the six cores make everything easier and a bit more future-proof, but a recent i5 is probably enough too, seeing as you can overcommit CPUs and plenty of games don't really tax modern CPUs in any way whatsoever. (An i7 does have a slight advantage here, though.) Be aware that you need plenty of RAM; my latest knowledge on the topic is that you can't do lazy allocation if you want to do passthrough.

Many people will not recommend AMD cards for Linux gaming, and they're probably right about that, but if you're already set on building a system like this, you probably also know that gaming on Linux is basically a fool's errand anyway and will only want to use the AMD card for Windows gaming. You either need two dedicated graphics cards (one for the VM, one for the host) or a CPU with an integrated GPU plus a dedicated card (you'd use the integrated GPU for the Linux host). Be aware that it is currently apparently impossible to wrestle control of the passed-through card back to Linux without a reboot, so dynamically switching cards is sadly not that easy. It is theoretically also possible to run the host without a graphics card, but I don't think that's very practical for most usage scenarios.
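As a quick sanity check for those IOMMU prerequisites, something like this works on most distros (the exact dmesg strings vary by platform, so treat it as a sketch rather than a definitive test):
code:
# Check the CPU virtualization flags (vmx = Intel, svm = AMD):
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

# Check that the kernel actually enabled the IOMMU at boot
# (DMAR is the Intel table, AMD-Vi the AMD equivalent):
dmesg | grep -i -e DMAR -e IOMMU

# If nothing shows up, add intel_iommu=on (or amd_iommu=on) to the
# kernel command line, e.g. in /etc/default/grub, and rebuild grub.cfg.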

The Linux Questions Thread, SCO non-infringing version

evol262 posted:

Basically, passthrough works like this:

If your chipset and CPU support an IOMMU (VT-d on Intel, AMD-Vi on AMD), your PCI devices are partitioned into an arbitrary number of logical groups, which hopefully bear some relation to the actual slot layout, but don't always.

You can pass through IOMMU groups to guests, along with all the associated PCI IDs. On one of my test systems:
code:
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:14.0
/sys/kernel/iommu_groups/5/devices/0000:00:16.0
/sys/kernel/iommu_groups/6/devices/0000:00:1a.0
/sys/kernel/iommu_groups/7/devices/0000:00:1b.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.5
/sys/kernel/iommu_groups/8/devices/0000:00:1c.6
/sys/kernel/iommu_groups/8/devices/0000:02:00.0
/sys/kernel/iommu_groups/8/devices/0000:03:00.0
/sys/kernel/iommu_groups/8/devices/0000:04:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
If you snip off the "0000:" bits (some systems can have more than one IOMMU controller, but I've never seen one), these match directly to PCI IDs:
code:
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
00:16.0 Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
00:1c.5 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 6 (rev d0)
00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
02:00.0 Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
00:1f.0 ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
Paired up, these give me:
code:
 IOMMU Group |  PCI ID  | Device 
           0 | 00:00.0  | Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
           1 | 00:01.0  | PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
           1 | 01:00.0  | VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
           1 | 01:00.1  | Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
           2 | 00:02.0  | VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
           3 | 00:03.0  | Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
           4 | 00:14.0  | USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
           5 | 00:16.0  | Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
           6 | 00:1a.0  | USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
           7 | 00:1b.0  | Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
           8 | 00:1c.0  | PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
           8 | 00:1c.5  | PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 6 (rev d0)
           8 | 00:1c.6  | PCI bridge: Intel Corporation 82801 PCI Bridge (rev d0)
           8 | 02:00.0  | Network controller: Qualcomm Atheros AR93xx Wireless Network Adapter (rev 01)
           8 | 03:00.0  | Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
           8 | 04:00.0  | PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
           9 | 00:1d.0  | USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
          10 | 00:1f.0  | ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
          10 | 00:1f.2  | SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
          10 | 00:1f.3  | SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
Group 8 would be a mess to pass through, for example, because you can't just pass this device or that device easily; the guest would get them all. If the GPU were on that bus you'd be stuck, because someone did something dumb when designing the groups. Group 1 is just the GPU, which is extremely convenient.
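For reference, a short shell loop assembled from the sysfs layout above prints the same group-to-device pairing on any box; the lspci call is stock pciutils, the rest is just path mangling:
code:
#!/bin/bash
# Print every IOMMU group with the lspci description of each device in it.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci accepts the full domain:bus:dev.func ID, so no snipping needed
        echo "    $(lspci -nns "${dev##*/}")"
    done
done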

So you pass through a raw device. The "hardware support" is 100% a matter of getting drivers in the guest that support that device ID, and then it's like running it natively.

The problem is that Nvidia eats a huge bag of dicks. Their driver checks and asks: "is this machine using any hardware that we know only exists on VMs? How about virtual CPUs? Are VMware or Hyper-V guest additions installed? Or...?"

If any of those things are true, their driver just says "nope, not gonna do it, you should buy a Quadro or Tesla", because fuck people with consumer cards.


If you're running ZFS, your options are basically KVM or Xen. Or hoping your hardware is on the ESXi HCL, installing it, and having IOMMU support (see above) to pass a card through to a VM which handles ZFS, or doing raw disk mapping, which is terrible in all sorts of ways.

The usual considerations for small scale virt are:
  • "What do we use at work or what do I want to get familiar with professionally?" VMware is a good pick here
  • "Do I want my hypervisor to run Windows so I don't need to learn anything really different?" Hyper-V is a good choice here (or if you use it at work)
  • "Do I have multiple systems and I want them to be highly available?" Hyper-V and VMware are bad choices here unless you're going to :files: them or you're rich. KVM and Xen do this for free pretty well.
  • "Do I care about nested virt and passthrough?" Hyper-V is a bad choice here

KVM with virt-manager is probably a very easy, very capable solution for what you want. But if you have more considerations than just "run some VMs and silo stuff", it doesn't hurt to bring it up. Note that you can almost certainly enable nested virt and virtualize VMware if you want to play with it.
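If you do go the KVM route, getting a first guest running doesn't even need the GUI; a minimal virt-install sketch, where the VM name, ISO path, and sizes are all placeholders:
code:
# Create and boot a guest from an install ISO.
# --memory is in MB, --disk size= is in GB; values here are made up.
virt-install \
    --name testvm \
    --memory 8192 \
    --vcpus 4 \
    --disk size=60 \
    --cdrom ./install.iso \
    --os-variant generic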


Saukkis
May 16, 2003


Hi Jinx posted:

I'm building a workstation/gaming rig and would love some input on the virtualization aspect.

I obviously need Windows (Games, Outlook, Visual Studio), and I really want to use ZFS for storage. So I have two options:

Maybe make your life easier: separate the storage into its own computer and stick it in a closet. Also forget about SLI; I believe it's usually more trouble than it's worth. Just buy a more powerful GPU.

Saukkis
May 16, 2003

Does anyone have opinions on running Docker containers as non-privileged users?

At work our biggest need for Docker is on research groups' computation servers and the software they require. For example, a group may have an Ubuntu 14.04 server and need to install software that is designed for Debian 8, and other software that is distributed as RPMs. With apt-pinning and alien it's probably possible to make that work, but I feel it would end up a mess over time. Running the different pieces of software inside containers tailored for them seems a much cleaner solution.

But a big stumbling block is how to allow non-root users to do this without giving them root through Docker loopholes, even if the users are semi-trusted. The solution I have come up with is a shell script that users are allowed to run with sudo, and which starts the containers with specific whitelisted options. But how do you write a script that doesn't have major holes in it, and would the users still be able to escape the containers with elevated privileges?
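Roughly what I have in mind, as a sketch; the image whitelist is obviously site-specific, and whether this closes every hole is exactly the question:
code:
#!/bin/bash
# Sudo wrapper sketch: users may start containers only from whitelisted
# images, and every security-relevant option is pinned by the script.
set -euo pipefail

ALLOWED_IMAGES="ubuntu:14.04 debian:8"    # site-specific whitelist

image="${1:?usage: run-container IMAGE}"
allowed=""
for ok in $ALLOWED_IMAGES; do
    [ "$image" = "$ok" ] && allowed=1
done
[ -n "$allowed" ] || { echo "image not allowed: $image" >&2; exit 1; }

# Run as the invoking user, drop all capabilities, and forbid privilege
# escalation inside the container. No user-supplied -v or --privileged.
exec docker run --rm -it \
    --user "${SUDO_UID}:${SUDO_GID}" \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    "$image" /bin/bash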

Saukkis
May 16, 2003


Vulture Culture posted:

Previously coming from research/academia myself: what's the material business risk in a research group getting root access to their own server?

Truth be told, it gives the server administrators indigestion when people aren't Doing It Right™. Since we would want a wrapper anyway, so the users don't need to know all the weirdo parameters, the wrapper might also hinder them from doing anything silly and maybe even neuter a convenient attack vector a bit.

But yes, we do operate largely on the assumption that no one would bother to come after these groups with a targeted attack. It's more a question of whether this provides any actual benefit, or is just security theater.


Saukkis
May 16, 2003


nicky_glasses posted:

Anyone doing any vmware automation not in Powershell? The vmware SDK documentation is painfully obtuse and picking any bindings outside of PS is very difficult as there are no books or tutorials as far as I can tell for Python or even others, except for Java which I don't care to learn.

We use the Python bindings at work to create new virtual machines, but that work has mostly been done by my coworkers, so I don't know them that well.

Saukkis
May 16, 2003


anthonypants posted:

Dehumanize yourself and face to Remote Desktop Licensing

Would per-device CALs be simple enough in this situation, since it sounds like there is a specific set of computers that would connect to it?

Saukkis
May 16, 2003

In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker's reading is that a full power-off is necessary for the VM to get access to the new CPU flags and for the Spectre mitigations to become effective. It would be most convenient to schedule the startup and then, during the normal update cycle, power the VMs off instead of rebooting them, without having to go start them up manually.

Saukkis
May 16, 2003


cheese-cube posted:

Isn't virtualbox free or did I just miss the joke?

VirtualBox is open source software, but the Extension Pack is closed source and commercial. You need to purchase a license unless you are only using it for personal, non-business use. I was helping a coworker install VBox on his computer, and as a hazing I prompted him to read the license agreement. That's when we learned that we can't use that shit at work.

Saukkis
May 16, 2003


Magnetic North posted:

I still have Windows 7, but this is all coming to mind because I want to reformat this 7-year-old PC. I was hoping to save money and not be annoyed by all the spy shit and live services in Windows 10. I'm not in IT but I am in the tech sector, so you're probably right that being able to say I've used Hyper-V would be better than VirtualBox. I guess $200 isn't that much for a multi-year investment, especially since I am either doing it now or in two years' time, and it will last many years.

I guess the choice is between useful skills and a few extra bux. Thank you very much for the advice.

What exactly are you planning to do with that $200? Buying a full-price Win 10 license isn't necessary.

Saukkis
May 16, 2003

We have just run into a situation with our VMware cluster where starting an ESXi host causes an All Paths Down for about a minute, beginning just before the host loads vmw_vaaip_netapp. A coworker suspects it happens while the host mounts the datastores and briefly needs a lock on them. Yesterday was the first time we've rebooted any of the hosts since the L1TF patches, and we haven't experienced anything like this before. The affected cluster is running 6.0 and is connected to three NetApp SANs (FAS3220, FAS8020, FAS8200).

Our zoning doesn't follow the single-initiator, single-target best practice; I've been told implementing it would take quite a lot of effort. I've seen the practice mentioned in many documents, but I haven't found any explanation of what effects skipping it can have. Could this be related to our current issue? We haven't had this kind of problem before, but there have been frequent cases where rebooting one NetApp node causes "stuttering" or some paths going down, even for datastores on the other NetApps.

Saukkis
May 16, 2003


YOLOsubmarine posted:

Do you have zones with multiple initiators?

Single initiator zones are the bare minimum recommendation for VMware. If you have Brocade or Cisco switches then you can use peer zoning or smart zoning to easily create what are effectively single initiator/single target zones.

We switched to single-initiator zoning and that solved this issue; hopefully it also helps with the other glitches we've experienced.


Another question: is there any useful alternative to beacon probing if having three NICs isn't an option? We use HPE ProLiant BL460c blade servers for our ESXi hosts, but on the latest Gen10 servers four NIC ports is the maximum with an FC card, and we have preferred to keep guest and management VLANs on separate NIC pairs. Is the practical option just to use beacon probing with two NIC ports and accept "shotgunning", or would that cause other issues? Until now we've only used "link status", since the servers only have four connections and our c7000 chassis aren't equipped to provide more. But we've had one case where "link status" wasn't enough to detect a malfunction.

I would think it should be possible to connect every port to a monitoring VLAN and randomly "ping" from every port to the other hosts' ports to find any malfunctioning link, but VMware doesn't have such functionality.

Saukkis
May 16, 2003

How do you downsize a vCenter 6.7 appliance?

We upgraded from a 6.5 appliance and the only available deployment size was X-Large, which left us with a far too big appliance. I suspect the reason is that we had originally installed the appliance with the Medium deployment size but X-Large storage.

Simply reducing vCPUs and RAM isn't supported. I think our options are either to install a new Medium appliance and run the upgrade from the current appliance to transfer the config, or to install a new appliance and restore the config from backup. But I'm not quite sure whether that process would work; I only have vague recollections from previous vCenter deployments. If I install the new appliance using the temporary IP we used in the upgrade, will the upgrade transfer or backup restore do everything required to put the new appliance in place of the current one?

Saukkis
May 16, 2003


TheFace posted:

Where are you seeing that reducing vCPU & RAM isn't supported? VCSA allows for hot removal of vCPU, and you can reduce RAM by shutting it down and then changing it?

Though the backup and restore to a new deployment definitely works (I've used it, though not to downsize but it does let you pick a different size on the deployment if you want).

We asked VMware support about this, and their view was: "from the issue description I understand that you are looking for a way to decrease the numbre of CPU's assigned for VCSA .

There is no supported/recommended way to do this as this is VCSA I dont recommend to test on it, instead you cna have a another DUMMY VCSA appliance deployed with large and then try to reduce it to less number of CPU.

But this is not recommended."

My coworker tried to google the issue and found articles about upsizing that described what configuration changes should be made inside the appliance after increasing vCPUs and RAM; I think he mentioned at least the Java memory sizing. So if the deployment size sets various configurations inside the appliance, I can understand why downsizing would be neither tested nor supported: too complicated to figure out what needs fixing.

Saukkis
May 16, 2003


TheFace posted:

I guess that makes sense, though I've seen people change the amount of RAM and CPU on the VCSA and never run into a problem. YMMV obviously.

If you're overly concerned, definitely do the backup, deploy a new VCSA, and restore.

Yeah, we too suspect it would work. But this vCenter will be for our major VMware environment, so we not only want to be sure it works, we also want it to be in a state completely supported by VMware. Fortunately this vCenter isn't in production yet, so we are leaning towards rebuilding it from scratch. But we are still considering trying the restore option to learn how it works. We have another, production 6.5 appliance with the same Medium deployment/X-Large storage issue, and we need to figure out a way around it.

This also raises the question: did we do something wrong? I would expect Medium deployment with X-Large storage to be a common configuration for 6.5 appliances, and I would think the 6.7 upgrade should handle it just fine. Could there have been some other issue preventing the smaller deployment sizes?

Saukkis
May 16, 2003


Saukkis posted:

Yeah, we too suspect it would work. But this vCenter will be for our major VMware environment, so we not only want to be sure it works, we also want it to be in a state completely supported by VMware. Fortunately this vCenter isn't in production yet, so we are leaning towards rebuilding it from scratch. But we are still considering trying the restore option to learn how it works. We have another, production 6.5 appliance with the same Medium deployment/X-Large storage issue, and we need to figure out a way around it.

We tried the restore, but were unsuccessful. If we choose the Restore option from the VCSA installer, it checks the backup for the appliance size, sets it to X-Large, and downsizing isn't possible. We then tried installing a new Medium-sized appliance and using the Restore option from the appliance configuration. There we were stopped by this error:

quote:

Validation Result
Error: Metadata and system validation failed.
Error: Failed to retrieve appliance storage list.

This might be because it thinks the restore would not fit in the new appliance.

Saukkis
May 16, 2003

Another fun mess: an Oracle DB cluster on vSAN. Really tight schedule, a need to limit core count, very little experience with vSAN. The original plan was for three 16-core nodes, until we realized it should really be four nodes. So if we drop to 12-core nodes, we can choose either pathetically weak processors, unnecessarily powerful and expensive CPUs, or a single 12-core CPU per node.

The single 12-core CPU looks like the most practical option, but a one-CPU VMware node feels like a bad idea. Will it be able to handle all the PCIe devices we need? At least Gartner considers single-socket servers a nifty trick for budget-conscious CIOs.

Saukkis
May 16, 2003


CommieGIR posted:

So Dell has dual SD Cards for hosting the hypervisor in places where you are doing remote logging.

Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on a RAID1 SAS. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.

Our VDI guys have also used the dual-SD setup on HPE boxes and it was likewise problematic, but I don't remember the details.

Saukkis
May 16, 2003


Arishtat posted:

It's worth noting that Dell has replaced the bootable SD card option with either a single or dual NVMe drive which is way better from both a reliability and a performance standpoint. I assume that HPE is doing the same on their Proliants.

M.2 seems to be the best choice for any use where hot-swapping isn't necessary. When I rebuilt my home server I seriously considered using a pair of 16 GB Intel Optane drives for the OS, because they were the cheapest NVMe drives around and the OS doesn't need more space than that. What finally stopped me was that I couldn't verify from the motherboard manual whether those M.2 slots would eat any of the SATA ports.

Saukkis
May 16, 2003

How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits gone. And I haven't found other benefits that would be worth any extra hassle with PXE or CD-image boots.

Saukkis
May 16, 2003

What is the most convenient way to automatically power on a VM after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a PowerShell script that checks every hour which servers have a maintenance window, waits for them to power off, and starts them right after. The script also fixes the guest OS type and other things, so at least it's not completely useless work.

For a long time I've wished that scheduled tasks had an "after power off" trigger. I have never used scheduled tasks, and that's probably the only feature that would get me to use them.

Saukkis
May 16, 2003


SlowBloke posted:

Why not use tags? You could mark which vm has a certain service window, which vm has been shut down already, etc...

I'm not sure that would help with the scheduling. In the simplest case I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful for deciding whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they lack a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance: a VM starts running 'yum update' at 12:01, but we don't know whether it finishes at 12:03 or 12:23.

I would think this is a common desire; everyone occasionally needs to power a VM back on soon after it powers off: EVC upgrades, Meltdown/Spectre, E1000-to-VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power-off for our VMs, but we didn't yet have a script for powering them back on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list, while being careful not to start any of the VMs that were supposed to stay powered off.

Saukkis
May 16, 2003


ihafarm posted:

Why are you shutting down vms after updates, or is this also related to moving them into the evc cluster?

We decommissioned our oldest hosts, which let us upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting VMs down. This round we also have a large number of RHEL 8 servers that are listed as RHEL 7 in VMware, which we can finally fix after upgrading to 6.7.

SlowBloke posted:

What I mean is to run a powercli script on a always on host every hour with a mapping of service window tag to time to execute along with a done/todo tag. Get-vm * -> filter on todo tag -> check which vm has a service window tag that is coherent with the script execution time -> gracefully shut down vm that are in the service window -> wait 2 minutes -> power up every shut down vm that has the service window tag -> update done tag on vms that have been executed on.

Scheduled tasks haven’t got that much intelligence to do what you are asking.

Oh yeah, that's basically what we do. Every time the hour changes, the script checks a website listing the servers that use that window, checks which of them are VMs, waits for them to power off, does any planned operations, and then starts them up within seconds. But it feels cumbersome compared to KVM's autostart setting.
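For anyone without the PowerCLI habit, the same wait-then-start loop is only a few lines with VMware's govc CLI; a sketch, assuming GOVC_URL and credentials are exported and the VM names are passed as arguments:
code:
#!/bin/bash
# For each named VM, wait for the guest-initiated power-off to finish,
# then power the VM straight back on so it picks up the new CPU flags.
# Assumes GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are set in the environment.
for vm in "$@"; do
    (
        until govc vm.info "$vm" | grep -q poweredOff; do
            sleep 30
        done
        govc vm.power -on "$vm"
    ) &
done
wait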

Saukkis
May 16, 2003

It sounds like you would have a better chance trying to help your mouse use, maybe through some of the alternative mouse options. Windows Ease of Access also provides an option to control the mouse with the keypad, though I don't know if that would be precise and convenient enough. Or even eye control.


Saukkis
May 16, 2003


diremonk posted:

Here are the results of an ls -a on the host. Strange that it says there is no .vmx~ file. Sorry it's kind of small; I remoted into my work desktop to grab the screenshot.



I might try copying or moving all the files to another directory; it sounds like there is something screwy with the directory.

Saukkis
May 16, 2003


Magnetic North posted:

I'm getting disk size warnings on my Ubuntu 18.04.6 VirtualBox VM, where it says it is running out of space. I have no idea what is using all the space, since I literally only use this for social media and only occasionally, but whatever. I didn't imagine this would be an issue, since it's using a 10-gig dynamically allocated VDI, so I figured it'd grab more space when it needed it. I imagine that Ubuntu doesn't know it can do this, so it's throwing up the warning.

Basically, do I need to do anything? Or will VirtualBox make the drive bigger without my intervention?

That 10 GB is the maximum size set for the VDI; "dynamically allocated" only means the file on the host grows on demand up to that cap, never past it. And 10 gigs isn't much once you include browser caches and everything else. Run 'df -h' to check which mount is running out of space, and then 'du -hx /MOUNT | egrep "G\s"' to see what is using the space.
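If the 10 GB cap itself turns out to be the limit, the VDI can also be grown from the host side; the path below is a placeholder for wherever the disk file lives, and the partition inside the guest still has to be expanded separately afterwards (e.g. with gparted from a live CD):
code:
# Grow the virtual disk to 20 GB; --resize takes the new size in MB.
VBoxManage modifymedium disk "$HOME/VirtualBox VMs/ubuntu/ubuntu.vdi" --resize 20480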

Saukkis
May 16, 2003

Why would you subject yourself to Oracle's licensing extortion if you don't absolutely have to? I didn't choose Oracle, but I was one of the people who had to design, purchase, and build a new VMware cluster after Oracle sent their goons for a visit. I had better things to do.

Saukkis
May 16, 2003

One thing that looked suspect to me was the CIDR for vmbr0 in the second picture. I'm not familiar with Proxmox, but I would have assumed it should be 192.168.69.13/24.

It may be useful if you post the output from 'ip address' and 'ip route show'.

Saukkis
May 16, 2003


PBCrunch posted:

Keeping the VM from booting until the Zoneminder instance is responding on its assigned port?

That's probably the most exact method: use an ExecStartPre script that waits until Zoneminder is responsive. It's best to also increase the start timeout for the Home Assistant service.
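Something along these lines, where the URL and timeout are placeholders; the script blocks the unit's startup until Zoneminder answers:
code:
#!/bin/bash
# /usr/local/bin/wait-for-zoneminder -- poll until Zoneminder responds.
# Hooked into the Home Assistant unit with a drop-in like:
#   [Service]
#   ExecStartPre=/usr/local/bin/wait-for-zoneminder
#   TimeoutStartSec=600
until curl -sf http://zoneminder.example/zm/ >/dev/null; do
    sleep 5
done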

Saukkis
May 16, 2003

What is everyone's view on Veeam and Russia? We are testing image backup systems and we dismissed Veeam because of its Russian origins, but maybe it is separated enough nowadays? It is owned by the venture capital firm Insight Partners from New York, and in top management there seems to be only the R&D chief who, judging by the name, might be from Russia.

Saukkis
May 16, 2003


namlosh posted:

Anyone know if there’s a way to have a windows vm with dual monitor support? I figure maybe it’s possible using GPU passthrough but is that the only way?
e: should have specified that the host would have two monitors… and I’d like to use the vm using both of them. Sort of a desktop replacement thing for work

I used to do that with VirtualBox running on Linux. It was just two separate windows I could position wherever I wanted. It should also be possible with KVM and virt-manager, but I didn't get it to work the last time I tried.

Saukkis
May 16, 2003


in a well actually posted:

Luckily we've got pricing locked in for 2.5 more years (I am sure broadcom will do their best to invalidate this), but lol at going to an ibm product for price relief

Admittedly IBM's Red Hat purchase didn't seem to cause any pricing catastrophe, and the academic licensing still exists, so at least it's better than Broadcom.


Saukkis
May 16, 2003

Does it really matter if the upgrades take a long time? That's just how things work with multinode clusters, where the upgrades can and should be done one piece at a time. Last month I upgraded a few ESXi clusters and it took days, but it was just several steps of clicking a few buttons and then going off to do something else, while all the services kept working the whole time. Just set an alarm for half an hour ahead so you remember to check whether you can proceed to the next step.

Are there any non-trivial clusters that aren't just collections of black boxes? Even if one were open source, it would be complicated enough that only a few people understand how it works. A breakroom axiom at work is that when you build a fancy high-availability cluster, you just get new and unexpected ways for everything to break down. But I have to admit I am really surprised by how well VMware has worked for the past decade that I've been dealing with it. It has been more reliable than many of the simple clusters I've had to deal with during that time.

Thankfully, the intraweb cluster where both nodes had to be started at exactly the same time was before my time. I've heard the story of how two guys stood in front of turned-off servers yelling "1, 2, 3, START!" and hoping they timed it right. At least nowadays the simple clusters work pretty well, but it wasn't long ago that I still had a cluster where the service IP moved between the nodes right as planned; it just forgot to tell the rest of the world where the IP was now.
