|
Moey posted:Look at HP Elitedesk minis on eBay, up to quad core, pancake sized, can fill em with ram. Spend a little more on a larger SSD. I'll check one of those out, thanks. I should have mentioned that what I'm using right now is an old Sandy Bridge laptop. I was hoping it would have an mSATA slot like a ThinkPad of similar vintage, so I'd be able to put two drives in it, but it doesn't. It has 6GB of RAM, and I could take it up to 16GB, but the dual-core i5 is my main problem right now: the CPU is only loaded maybe 10% of the time, but when it is, I'm sitting there waiting on it. So I wanted something quad-core with more RAM (though I probably won't put 32GB in it any time soon).
|
# ? Sep 10, 2019 14:02 |
|
Bob Morales posted:What’s the cheapest way to get 4+ cores, 32GB memory, and 2 but hopefully 3 drives in something to run VMs? I assume you're excluding used, or the answer is probably a refurb minitower with a 2nd-4th gen i5/i7 and 4x8GB DDR3. I'd guess that's around $200-250.
|
# ? Sep 10, 2019 14:06 |
|
The Dell T20 used to be great for this thanks to the Xeon and ECC, but it was only super cheap back then because Intel had to move a lot of Haswell CPUs. These days I'd either buy a refurbished ThinkPad (a built-in emergency keyboard and screen is nice, idle power consumption is near zero, and a refurb wouldn't bother me as much for a remote machine), or for heavier workloads I'd ask the Build a PC thread for a budget Ryzen recommendation. That'd also sidestep the whole Spectre/Meltdown security-and-performance mess, gives you the option of ECC, and leaves an upgrade path to at least one more CPU refresh. You also get a lot of SATA ports, full-size PCIe slots for 10GbE down the road, etc. Just don't buy a Mac mini like I did, blindly expecting macOS to have decent hypervisors.
|
# ? Sep 10, 2019 15:33 |
|
Eletriarnation posted:I assume you're excluding used, or the answer is probably a refurb minitower with a 2nd-4th gen i5/i7 and 4x8GB DDR3. I'd guess that's around $200-250. I have a pile of DDR3 4GB DIMMs too
|
# ? Sep 10, 2019 15:44 |
|
Bob Morales posted:I'll check one of those out, thanks. I have a Lenovo TS440 with a 1245 CPU. It has 4 cores/8 threads and 32GB RAM, and holds 4 3.5" drives and 4 SSDs on its SAS card. Well recommended. Mr Shiny Pants fucked around with this message at 16:58 on Sep 10, 2019 |
# ? Sep 10, 2019 16:50 |
|
You don't need much to play with Docker and k8s. They are extremely lightweight. Don't spend a bunch of money on something with a lot of cores and memory and SSDs that you likely won't use unless you've got some relatively large project you want to test. I'll also suggest just setting up an AWS or Azure account and doing it with their IaaS services. You can also test their container platforms while you're at it.
|
# ? Sep 10, 2019 17:36 |
|
Bob Morales posted:I'll check one of those out, thanks. mSATA is pretty rare at this point. Those EliteDesks have room for both a 2.5" SSD and an M.2 SSD from what I see looking at the HP hardware reference guide. This seems to fit your needs, with room for a little expansion as well. https://www.ebay.com/itm/HP-Elitede...woAAOSwA9tdEoRv Going from a G1 to a G2 would jump you from a max of 16GB to 32GB of RAM. Moey fucked around with this message at 19:49 on Sep 10, 2019 |
# ? Sep 10, 2019 19:38 |
|
YOLOsubmarine posted:You don't need much to play with docker and k8s. They are extremely lightweight. Don't spend a bunch of money on something with a lot of cores and memory and ssds that you likely won't use unless you've got some relatively large project you want to test.
|
# ? Sep 12, 2019 12:24 |
|
Having fun trying to get PCIe passthrough of a GPU working. Mostly writing this down for my own benefit, but if anyone has a suggestion I'd be happy to hear it.

Host OS: Proxmox (Debian-based, KVM/QEMU)
Guest OS: Windows 10 Pro 1903
CPU: AMD Ryzen 1700X
Motherboard: Gigabyte AB350M-Gaming 3, BIOS version F42a/AGESA 1.0.0.3 ABB
GPU1 in PCIe slot 1: Nvidia GTX 1080 - passthrough card
GPU2 in PCIe slot 2: Nvidia GTX 745 - display adapter for the host system

Virtualization extensions and IOMMU are enabled in the BIOS. The 1080's video and audio functions are in their own IOMMU group, and the vfio-pci kernel driver is loaded instead of the nvidia/nouveau driver. I was running into an issue described here where the PCIe device would throw an error when trying to pass it through; recompiling the kernel with the patched PCI settings got me closer, as Windows detected the PCIe device after the patch, albeit with error code 43. After changing the VM settings to hide the hypervisor from the card, the Windows 10 guest detected the PCIe device and correctly identified it as a GTX 1080, but after a minute the guest became unresponsive and locked up. After rebooting the guest, the VM fails to load Windows until I remove the passthrough device. Rebooting the host doesn't change anything, so I'm now thinking it's a VM configuration issue and not a host-not-letting-go-of-the-PCIe-device issue.

lspci -v output https://pastebin.com/0DCCmdMV
/etc/default/grub https://pastebin.com/1wva1z7k
/etc/modprobe.d files https://pastebin.com/HzTh9xfd
iommu groups https://pastebin.com/vnY8yJUA
Guest VM settings https://pastebin.com/9jMxYgVf

e. Deleted the guest VM and recreated it, using a CPU type of "host". Same situation - Windows is fine until it auto-installs the video driver, then it locks up, and it can't boot back up without removing the device. This time, however, I used one of my two brain cells and made a snapshot before adding the PCIe device.

e2. The monitor I have attached to the 1080 never detects a signal during the entire process of Windows starting up - Windows detecting the PCIe device - Windows locking up.

e3. Disabled automatic driver installation in the guest, but it's still automatically picking up the GPU. Task Manager shows the CPU pegged at ~100%, but because of the unresponsiveness it doesn't show individual process percentages. Attempting to disable the GPU through Device Manager just makes Device Manager hang.

e4. Attached a virtual display adapter while also attaching the PCIe device. Windows wasn't able to automatically install the driver for the PCIe device, so I manually installed new drivers to see what would happen. Rebooted; Windows bluescreens when trying to load, as expected, but it gave me a new item to search for: nvlddmkm.sys. Found a post stating that if you're using the q35 chipset you may need to set "kernel_irqchip=on" in the machine config. Seems to be stable for now, though Proxmox does not like having that setting in the config file. Woo Actuarial Fables fucked around with this message at 03:58 on Sep 17, 2019 |
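For anyone chasing the same nvlddmkm.sys lockup: the kernel_irqchip workaround lands in the VM's config file. A minimal sketch of what that might look like, assuming a hypothetical VM ID of 100 and a made-up PCI address of 08:00 for the 1080 (get the real one from lspci):

```
# /etc/pve/qemu-server/100.conf (illustrative fragment; VM ID and PCI address are assumptions)
machine: q35
cpu: host,hidden=1             # hide the hypervisor from the Nvidia driver (code 43 workaround)
args: -machine type=q35,kernel_irqchip=on
hostpci0: 08:00,pcie=1         # no function suffix = pass through both video and HDMI audio
```

The args line is raw QEMU, which is why Proxmox's tooling grumbles about it; it works, but it's invisible to the GUI.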
# ? Sep 16, 2019 23:35 |
|
I had to run the latest AGESA on my TR 1950X, otherwise I'd need to compile my own kernel because of some PCIe reset issues. Might be worth checking that you are running the latest BIOS.
|
# ? Sep 17, 2019 07:57 |
|
I started messing around with Hyper-V on my Win10 machine yesterday, mostly successfully. I learned that the default virtual switch is poo poo. I was able to successfully install CentOS 7 to a VM; it gets online and behaves well. I set it up as a gen 2 image and have enhanced session mode enabled; the thing I haven't been able to figure out is how to get audio passthrough working. I've installed the hyperv-daemons RPM in CentOS 7, but I'm not sure if or what more I need to do with them. I'm really not sure if this is a Hyper-V question or a CentOS 7 question.
|
# ? Sep 20, 2019 17:48 |
|
Just FYI: If you notice sluggishness while running Hyper-V on your host, you may need to disable hyperthreading on your machine.
|
# ? Sep 20, 2019 18:18 |
|
CommieGIR posted:Just FYI: If you notice sluggishness while running Hyper-V on your host, you may need to disable hyperthreading on your machine. Is that still an issue on the latest Win10/Server 2019 builds?
|
# ? Sep 22, 2019 10:14 |
|
Pile Of Garbage posted:Is that still an issue on the latest Win10/Server 2019 builds? In Windows 10 at least
|
# ? Sep 22, 2019 14:41 |
|
So I'm about to buy some VMware licenses, do you guys just buy them directly from VMware, or is there a cheaper reseller option?
TooLShack fucked around with this message at 17:05 on Sep 23, 2019 |
# ? Sep 23, 2019 16:55 |
|
TooLShack posted:So I'm about to buy some VMware licenses, do you guys just buy them directly from VMware, or is there a cheaper reseller option? EU/US? Commercial/Academic/Government?
|
# ? Sep 23, 2019 18:10 |
|
US Commercial.
|
# ? Sep 23, 2019 18:28 |
|
I just go through our normal VAR/reseller who I do most of our purchasing through. Discounts are pretty standard from what I can tell, nothing groundbreaking tho. We only have like 20 sockets of vSphere Standard and like 350 concurrent licenses for Horizon Advanced. Moey fucked around with this message at 19:34 on Sep 23, 2019 |
# ? Sep 23, 2019 18:45 |
|
We buy ours through CDW. We buy everything through CDW: Veeam, VMware, Office 365, etc.
|
# ? Sep 23, 2019 19:32 |
|
TooLShack posted:So I'm about to buy some VMware licenses, do you guys just buy them directly from VMware, or is there a cheaper reseller option? var. even an ela usually goes through a var
|
# ? Sep 23, 2019 20:22 |
|
I'm sure this is a rhetorical question, but have any of you Goons had to troubleshoot slow disk I/O before? A few months ago I snagged an old Dell R610 from work to upgrade my home lab, replacing the old OptiPlex 390 I was using before. I installed Proxmox 5.x and promptly spun up some Ubuntu 18.04 guest VMs. There's a very noticeable disk I/O performance issue on both the hypervisor and the guest VMs. Write speeds are piss-poor at best and make the whole system unbearable to use. Executing the dd command listed below as a baseline test gives ~20MB/s writes on the hypervisor and ~10MB/s writes within the guest VMs. I've yet to check what my read speeds are like, but I suspect they'll be below average as well. Beyond spinning up guest VMs and hoping they work as they should, I'm pretty green at virtualization administration. Could anyone give me some advice on where I should begin running the problem down? Is it more likely that I'm having a hardware issue (i.e. RAID controller or bad drives), or is it more likely that I have some poo poo settings within my hypervisor? For what it's worth, the R610's specs are 1x X5660 CPU @ 2.80GHz, 24GB of memory, and 2x 146GB Dell SAS drives behind a PERC 6/i controller. code:
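(The dd command itself didn't survive the quoting above, so as a stand-in: a typical sequential-write baseline of the sort described, with the output path and sizes as placeholder assumptions.)

```shell
# Hypothetical baseline write test (not the poster's exact command).
# conv=fdatasync forces a flush before dd reports a speed, so the number
# reflects the storage rather than the page cache. Path/sizes are placeholders.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=100 conv=fdatasync
```

Run it once on the hypervisor and once inside a guest; if the guest is several times slower, look at the virtual disk settings (cache mode, virtio vs. emulated controller) before blaming the physical array.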
|
# ? Sep 24, 2019 03:03 |
|
Are we talking 10k SAS or some 7.2k NL-SAS drives? I have seen similar behavior when an array is degraded and constantly trying to rebuild itself, so it might be worth checking health and logs in the Lifecycle Controller. Check the RAID group to make sure the backup battery is healthy and that read-ahead/write-back caching is enabled. Overly large stripe sizes could be a problem as well; I wouldn't go above 64k unless you have a good reason to and know your I/O profile. Where is the hypervisor installed? If you're booting it off the same RAID group as the guests, you may be dealing with a situation where the background log writes and other disk I/O from the hypervisor itself are leaving the guests starved for ops. This would be extra apparent on slower NL-SAS drives. I know VMware can be extremely chatty with its default logging levels, which is why it's usually nice to run it off an embedded SD card to keep it from thrashing the heads.
|
# ? Sep 24, 2019 03:19 |
|
Diametunim posted:I'm sure this is a rhetorical question, but have any of you Goons had to troubleshoot slow disk I/O before? A few months ago I snagged an old Dell R610 from work to upgrade my home lab, replacing the old OptiPlex 390 I was using before. I installed Proxmox 5.x and promptly spun up some Ubuntu 18.04 guest VMs. There's a very noticeable disk I/O performance issue on both the hypervisor and the guest VMs. Write speeds are piss-poor at best and make the whole system unbearable to use. Executing the dd command listed below as a baseline test gives ~20MB/s writes on the hypervisor and ~10MB/s writes within the guest VMs. I've yet to check what my read speeds are like, but I suspect they'll be below average as well. ZFS on Linux kernel 5? It has some performance issues.
|
# ? Sep 24, 2019 05:04 |
|
Troubleshooting IO is basically what virtualization is all about.
|
# ? Sep 24, 2019 05:28 |
|
Heads up if anyone is running ESXi 6.x on VMAX AF/PowerMax (code family 5978): there's an issue where XCOPY primitives are moving datastore headers instead of the actual VMs when sVmotioning lol. Pretty bad one, we had a bunch of datastores blow up and long nights restoring from backup. EMC KB 537000
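Until you're on a fixed code level, the usual stopgap for a misbehaving XCOPY primitive is turning it off host-wide so Storage vMotion falls back to the software data mover. A sketch of the relevant host advanced setting (I can't verify the KB's exact guidance here, so check it before flipping anything):

```
# vSphere host advanced setting (Host > Configure > Advanced System Settings)
# 0 disables the VAAI XCOPY / hardware-accelerated move primitive;
# svMotion falls back to the host's software copy path.
DataMover.HardwareAcceleratedMove = 0
```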
|
# ? Sep 25, 2019 02:51 |
|
Can someone help me here? Our vCenter server is saying we have two hosts with a health issue with the i40e network driver detailed here: https://kb.vmware.com/s/article/2126909 and that we should switch to using the native driver. I see that the native driver is installed: code:
code:
|
# ? Sep 25, 2019 22:09 |
|
Looks like. You seem to have no Intel NICs at all (nor any 40GbE NICs), so the driver's just sitting there. The non-native ixgbe and i40e drivers have had some historical issues so VMware really really really doesn't want anyone using them outside of a few specific circumstances that I can't remember the details of
|
# ? Sep 26, 2019 02:07 |
|
Cool, that's what I figured. Doing a "esxcli software vib remove --vibname=net-i40e" seems to have fixed the issue.
|
# ? Sep 26, 2019 14:47 |
|
Anyone seen the balloon driver kill networking on a VM when it activates? I've been chasing down an issue where Availability Group members suffer a node eviction from the cluster for a minute or two, and today I finally managed to align balloon memory load with the exact moment it happens. The VM doesn't hang as far as I can tell, as it's still logging events in the event log; it just loses connectivity to the other node members until the balloon driver finishes its thing, then the node has no trouble talking to the other nodes and the cluster becomes healthy again. Most of the time this has hit our secondary AG members, so we don't suffer a failover. It makes sense, really: the secondary nodes don't have a ton of active memory since they aren't seeing app load, so they have the memory to give up when the host needs it, but I've never seen this disrupt networking like that before. This has happened on multiple VMs, both 2012 R2 and 2016.
|
# ? Oct 10, 2019 02:42 |
|
bull3964 posted:Anyone see the balloon driver kill networking on a VM when it activates? Real talk: I'm not sure what you're doing where the application is large and important enough to need clustering, but unimportant enough where you can play games with the performance by randomly evicting cache. Unless you're seriously budget-constrained, it probably makes sense to disable ballooning on these VMs.
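If you do disable ballooning on those VMs, it's an advanced configuration parameter rather than a checkbox. A sketch of the .vmx entry (set it with the VM powered off; the value caps balloon reclamation in MB, so 0 effectively turns it off for that VM):

```
# .vmx fragment: cap balloon reclamation at 0 MB for this VM
sched.mem.maxmemctl = "0"
```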
|
# ? Oct 12, 2019 17:43 |
|
I think I actually figured it out on Thursday. Between bad configuration and a bad VMware Tools version, RSS isn't enabled on any of these machines. I see a spike in CPU (MHz) right when these events happen, so I think the lack of RSS is disrupting connectivity during the CPU spike of the reclamation. Most of this has happened on secondary nodes of the AG that aren't servicing application load, so killing cache isn't that big a concern, because any cache they might have from the last time they were live is likely invalid.
|
# ? Oct 12, 2019 18:18 |
|
Oh boy, I have some 2008 R2 domain controllers to replace within 50 days. A few of them are VMs on Hyper-V 2012 R2, which is the focus of this internet forums post. Apparently, Server 2019 VMs are not supported on Hyper-V 2012 R2. https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn792027(v=ws.11) (article is dated 01/10/2017; Server 2019 was released October 2018, so I'm hoping the article is just outdated). So if I want a Microsoft-supported solution, I either have to make my new DCs Win2016 or upgrade my Hyper-V boxes first. I don't want to do either. I spun up some Server 2019 VMs and they run just fine on Hyper-V 2012 R2, as both Gen1 and Gen2 machines. So the real issue is being comfortable running DCs in an "unsupported" fashion. No idea how to get more comfortable than my simple spin-them-up test... Any ideas?
|
# ? Nov 19, 2019 00:21 |
|
Ehhh, either accept running unsupported or upgrade your hosts.
|
# ? Nov 19, 2019 00:25 |
|
Tell management that you have to upgrade the host, upgrade the host.
|
# ? Nov 19, 2019 08:19 |
|
SEKCobra posted:Tell management that you have to upgrade the host, upgrade the host. Yeah, do this, you're not going to go back and upgrade the host later, might as well do it now.
|
# ? Nov 19, 2019 09:16 |
|
In-place upgrades from 2008 R2 to 2012 R2 went very smoothly the few times I had to run them. Since you can snapshot, it might be a viable low-effort route to buy time and stay inside the support matrix until you can get the hypervisor upgraded.
|
# ? Nov 19, 2019 14:45 |
|
Thanks for the input, goons. There's some resistance to upgrading the Hyper-V hosts because of scope creep, but I'll share the recommendation and see what happens.
|
# ? Nov 20, 2019 00:23 |
|
Alfajor posted:Thanks for input goons. Upgrading systems to maintain vendor support is not scope creep.
|
# ? Nov 20, 2019 00:51 |
|
BangersInMyKnickers posted:In-place upgrades from 2008r2 to 2012r2 went very smoothly the few times I had to run them. Since you can snapshot, might be a viable low-effort route to buy time and stay inside the support matrix until you can get the hypervisor upgraded NEVER do in-place upgrades on domain controllers; all the FSMO roles will get hosed up. Upgrade the 2012 R2 VM host (if you cannot allow downtime, there is this option: https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade) and then make new DCs with a modern OS.
|
# ? Nov 20, 2019 19:00 |
|
For sure. This ain't NT4, DCs should be cattle and not pets.
|
# ? Nov 20, 2019 19:02 |