|
Oh, a server rack under the stairs? Watch that Linus vid for a tutorial
|
# ? Jan 16, 2018 01:08 |
|
ESXi runs pretty well on desktop hardware (just watch out for NIC and storage controller driver support). Quiet and power-efficient.
|
# ? Jan 16, 2018 01:47 |
|
If you're looking to do a nested virtual lab, the Supermicro Xeon D motherboard/SoC is the best bang for the buck. You can get 8-, 12-, or 16-core models depending on how much you want to spend, and the motherboards support dual 10GBASE-T, dual gigabit, and a dedicated IPMI network management port. They'll take up to 128GB of ECC memory or 64GB of non-ECC. They're mini-ITX boards, so they fit in a mini tower case. Pick the right case and they can be very quiet, and they don't consume a ton of power.
|
# ? Jan 16, 2018 01:56 |
|
YOLOsubmarine posted:If you're looking to do a nested virtual lab the Supermicro Xeon D motherboard/SoC is the best bang for the buck. I love mine and how quiet it is, but pricewise it's a much bigger hit than a Dell server off eBay.
|
# ? Jan 16, 2018 02:04 |
|
Do Supermicro boards still have backdoors in them?
|
# ? Jan 16, 2018 02:07 |
|
fordan posted:I love mine and how quiet it is, but pricewise it's a much bigger hit than a Dell server off eBay. Sure, but it's also not a big, loud, electricity-sucking heat engine. Convenience ain't cheap!
|
# ? Jan 16, 2018 03:53 |
|
jre posted:Curious why you've gone for a rack-mount server for a home lab? What technologies are you wanting to learn? Well, the plan is to get a rack eventually and it seemed like the best bang for the buck. I'm open to suggestions though. As for technologies I want to learn at this point, just Docker and Kubernetes for starters. I want to build up my networking and server admin skills. There are some apps I wrote that are used daily, and I want to run them in VMs; right now it's all on one VPS. Other things I plan to run/play with: pfSense (never really did anything with firewalls or networking), Plex (media is on a NAS), Guacamole, and Blue Iris.
|
# ? Jan 16, 2018 04:03 |
|
bsaber posted:Well the plan is to get a rack eventually and it seemed like the best bang for the buck. I’m open to suggestions though.
|
# ? Jan 16, 2018 04:54 |
|
Happiness Commando posted:The plan is a 2016 Hyper-V host with 2016 RDSH (among others) inside of it.
|
# ? Jan 16, 2018 05:02 |
|
BangersInMyKnickers posted:RemoteFX will work with standard terminal services and multiple concurrent user sessions, no idea what that is about. vGPU hardware acceleration is one user per guest VM. I promise I'm not trying to be dumb, but I can't make these two statements mesh. I'm seeing RemoteFX and vGPU being used interchangeably, so it reads like you're saying that RemoteFX will work for concurrent users and will only work for one user. I know I just need to get some hardware and play with it, but I can't do that yet.
|
# ? Jan 16, 2018 15:54 |
|
RemoteFX is just a bolt-on feature set to RDP that enables advanced functionality like H.264 encoding, device passthrough, and some other stuff. The encoding can be done either in software on the CPU or on a dedicated GPU encoder if available. vGPU is the virtualization of a graphics card and the allocation of its resources to a guest VM; this is what gets you Direct3D support inside an RDP session. You need RemoteFX to use vGPU because standard RDP video encoding can't handle it.
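To make that concrete, the host-side setup on a 2016 Hyper-V host is a couple of cmdlets from the Hyper-V PowerShell module; this is just a sketch, and the GPU name, VM name, and settings below are made up, not anything from this thread:

```powershell
# List GPUs the host considers RemoteFX-capable, then enable one
Get-VMRemoteFXPhysicalVideoAdapter
Enable-VMRemoteFXPhysicalVideoAdapter -Name "NVIDIA Tesla M60"

# Attach a RemoteFX 3D adapter to a guest VM (one adapter per VM)
Add-VMRemoteFx3dVideoAdapter -VMName "RDSH01"
Set-VMRemoteFx3dVideoAdapter -VMName "RDSH01" -MonitorCount 2 -MaximumResolution "1920x1200"
```

The guest sees the RemoteFX adapter as its display device; the concurrent-session slicing then happens inside that guest's RDSH role.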
|
# ? Jan 16, 2018 16:35 |
|
I really wish this stuff was documented better. So vGPU is, appropriately, a virtual GPU that gets passed to the guest RDSH, which then slices it up using RemoteFX so that concurrent sessions can use it? So vGPU is implemented by adding an adapter to a VM in the Hyper-V console, and then RemoteFX is some RDS role that gets added to the guest RDSH. Yes?
|
# ? Jan 16, 2018 16:41 |
|
I'll try to throw this into a table to break it down:
|
# ? Jan 16, 2018 16:54 |
|
OK guys, I guess it's time for me to pack it up. I'm an imposter.
|
# ? Jan 16, 2018 18:44 |
|
Get approval for some AWS or Azure instances with video cards and test it, you'll figure it out.
|
# ? Jan 16, 2018 20:18 |
|
Potato Salad posted:Dude, don't bother doing nvme passthrough I just figured it would be "simpler" to pass the PCI device through to the guest VM, since I've had good luck doing it with LSI controllers and Intel NICs. I changed passthru.map to add the VID/DID of the Samsung SSD and set its reset method to d3d0; the VM still hard-locked, unfortunately. But I could restart the VM with no issues this time, instead of having to restart the host. So maybe now the problem is with Fedora 27 and not my VM configuration?
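For anyone else fighting this, the passthru.map tweak looks something like the below. The device ID here is my guess for a Samsung SM961/PM961-class NVMe controller, not necessarily the poster's drive; check yours with `lspci -nn` on the ESXi host first:

```
# /etc/vmware/passthru.map on the ESXi host
# vendor-id  device-id  reset-method  fptShareable
144d  a804  d3d0  default
```

The reset method is the thing being changed; d3d0 forces a power-state reset instead of the default, which some NVMe devices handle better when the VM reboots.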
|
# ? Jan 16, 2018 20:53 |
|
Happiness Commando posted:I really wish this stuff was documented better. So vGPU is appropriately a virtual GPU that gets passed to the guest RDSH, which then slices it up using RemoteFX so that concurrent sessions can use it? Really simply, assigning a GPU enables it. Enterprisey GPUs support SR-IOV and can be sliced up by presenting themselves multiple times on the PCIe bus. You'll need an enterprisey GPU to use vGPU anyway.
|
# ? Jan 16, 2018 21:01 |
|
I ran 4 concurrent seats of 3D CAD/Autodesk sessions with 512MB of VRAM each on a cheapo FirePro W4100. People overestimate what their actual 3D workload is, and you can get very good density and value with a true vGPU implementation instead of this passthrough/partitioning nonsense that VMware has been pushing.
|
# ? Jan 16, 2018 21:29 |
|
If I want a lightweight VM whose only purpose is to manage a couple of Docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's a single-digit number of containers?
|
# ? Feb 2, 2018 11:16 |
|
I mean, we're mincing over megs of overhead, right?
|
# ? Feb 2, 2018 12:00 |
|
mike12345 posted:If I want a lightweight vm whose only purpose is to manage a couple of docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's about a single-digit amount of containers? CoreOS is dead because Red Hat just bought them out.
|
# ? Feb 2, 2018 12:09 |
|
Potato Salad posted:I mean, we're mincing over megs of overhead, right? Probably? I mean, the Debian install works and all, but I figured why stick with a full-blown distro when all it does is babysit containers. Boris Galerkin posted:CoreOS is dead because Red Hat just bought them out. Ah ok.
|
# ? Feb 2, 2018 12:11 |
|
mike12345 posted:If I want a lightweight vm whose only purpose is to manage a couple of docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's about a single-digit amount of containers? Depending on what your hypervisor is, I'll toss in my hat for what I just did. I had ESXi and FreeNAS with an LSI controller passed through, plus some Ubuntu VMs for various things, including one acting as the Docker container VM. I didn't like that the FreeNAS memory couldn't be overcommitted, and I didn't like having to figure out the limits on what was left for the Docker VM vs. the others. So I dropped ESXi and FreeNAS for Debian + Proxmox + ZFS on Linux. Now there's no passthrough, since the hypervisor is the ZFS server. Then I created an LXC container to run Docker and was able to allocate the full system memory to it; it takes 12MB of RAM on startup. I'm finding that I use LXC more than Docker for anything that isn't a straight hub-published container.
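If anyone wants to replicate the Docker-in-LXC part, the relevant bits of the Proxmox container config look roughly like this; the VMID, hostname, storage name, and sizes are made up, and `nesting` plus `keyctl` are the features that let Docker run inside the container:

```
# /etc/pve/lxc/101.conf (hypothetical) -- unprivileged LXC container hosting Docker
arch: amd64
cores: 4
features: keyctl=1,nesting=1
hostname: docker01
memory: 32768
ostype: debian
rootfs: local-zfs:subvol-101-disk-0,size=32G
swap: 0
```

Because LXC memory is just a cgroup limit, you can hand the container most of the host's RAM without the hard reservation a FreeNAS VM would have needed.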
|
# ? Feb 2, 2018 14:36 |
|
Hughlander posted:Depending on what your hypervisor is, I'll toss in my hat for what I just did. I had ESXi and FreeNAS with an LSI controller passed through, plus some Ubuntu VMs for various things, including one acting as the Docker container VM. I didn't like that the FreeNAS memory couldn't be overcommitted, and I didn't like having to figure out the limits on what was left for the Docker VM vs. the others. So I dropped ESXi and FreeNAS for Debian + Proxmox + ZFS on Linux. Now there's no passthrough, since the hypervisor is the ZFS server. Then I created an LXC container to run Docker and was able to allocate the full system memory to it; it takes 12MB of RAM on startup. I'm finding that I use LXC more than Docker for anything that isn't a straight hub-published container. This is also how I run my server: it's just a regular Debian server running LXC for GitLab and KVM for some Windows VMs. The only problem is that ZFS can get a bit RAM-hungry, but with 32GB of RAM that's not really an issue.
|
# ? Feb 2, 2018 15:10 |
|
I also do something similar, only with vanilla FreeBSD + jails + bhyve. Just about everything I run has a port and works fine in a jail, and I use bhyve VMs for the few services that don't. OS containers > hardware virtualization |
# ? Feb 2, 2018 18:33 |
|
You have servers with less than 64GB of RAM?
|
# ? Feb 2, 2018 18:36 |
|
Wibla posted:You have servers with less than 64gb ram? Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell.
|
# ? Feb 2, 2018 18:52 |
|
bobfather posted:Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell. My home lab is also built out of $100 NUCs.
|
# ? Feb 2, 2018 19:53 |
|
Vulture Culture posted:My home lab is also built out of $100 NUCs. That’s a lie unless you stole them.
|
# ? Feb 2, 2018 19:56 |
|
freeasinbeer posted:That’s a lie unless you stole them.
|
# ? Feb 2, 2018 20:18 |
|
Wibla posted:You have servers with less than 64gb ram? Blame Intel and their processor segmentation; the E3-1245 doesn't take any more.
|
# ? Feb 2, 2018 20:39 |
|
I mean, even then, RAM and SSD/HDD are at least $100 on top.
|
# ? Feb 2, 2018 20:41 |
|
Vulture Culture posted:My home lab is also built out of $100 NUCs. Don't be daft. You can get a preowned E5-2650 v2 or E5-1650 v2 system for $300 or less these days with 8GB of RAM. The cheapest eBay DDR3 for a Z420 or similar starts at ~$60 per 8GB DIMM, so $420 more to bring yourself up to 64GB. Or you could buy an E3-1225 barebones new for $300; Dell sells these regularly. God help you if you get one, because it takes DDR4, and that starts at ~$80 per 8GB DIMM on eBay, so $560 to get to 64GB.
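The DIMM math above checks out; as a quick sanity check (the per-DIMM prices are just the eBay ballparks from the post, not quotes):

```shell
# Going from 8GB installed to 64GB in 8GB DIMMs means 7 more sticks
echo "DDR3 upgrade: \$$(( (64 - 8) / 8 * 60 ))"   # 7 x $60 = $420
echo "DDR4 upgrade: \$$(( (64 - 8) / 8 * 80 ))"   # 7 x $80 = $560
```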
|
# ? Feb 2, 2018 22:48 |
|
I don’t really know which thread is best for this, but the question is in the context of our XenServer hypervisors, so I guess I’ll start here. Can I get some real talk about NUMA? We are building out a new cluster of hypervisors and I want to make sure we have all our ducks in a row. I mentioned looking at NUMA settings, and my coworker responded that it’s only relevant to HPC setups and not something you should be messing with unless you have a good reason. NUMA support is currently off, and he says that is best. Is that accurate? My understanding is that all modern server CPU architectures (these chips are Broadwell-EP) are built for NUMA, and if you aren’t enabling those features in your BIOS and OS, your system just sees a faked SMP setup where all memory appears equally accessible and performant, whereas behind the scenes it could be running tasks on a CPU that’s remote from the memory it’s addressing and performing suboptimally. Enabling NUMA exposes the additional info about which node a job is running on and allows the OS to make smarter scheduling decisions. Am I talking out of my rear end here? I’ve spent all afternoon reading blog posts about NUMA and I feel like I now know less than when I started.
|
# ? Feb 2, 2018 23:57 |
|
My understanding is that you’re correct, https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html
|
# ? Feb 3, 2018 00:18 |
|
From my limited knowledge: You're not talking out of your rear end, but I'm not sure your coworker is wrong per se. Unless you really need every ounce of performance, it's likely not necessary and may add some complexity. It's been a while since I touched XenServer, but at first glance it looks like it had fairly spotty support where you had to pin a VM to a host because if it migrated then it wouldn't update the memory state and things like that. Seems to still be a problem - https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3059892-automatic-numa-optimization That being said, I don't think leaving it off is the right call. Simply turning NUMA support on, not assigning more vCPU than you have physical cores (or memory) in a NUMA node, and understanding that if you live migrate a guest it will run less efficiently until a reboot, seem like a minor inconvenience in most cases. But I'd be really interested to hear what others think.
|
# ? Feb 3, 2018 00:19 |
|
Yeah, the punchline to all of this is that XenServer 7.x’s dom0 kernel isn’t compiled with NUMA support anyway, so it doesn’t loving matter. I’d still love for anyone who has a deeper understanding of this to weigh in. There is a lot of info out there on what NUMA is academically, but not much in the way of concrete takeaways about which settings you can tune and why. And a bunch of cargo-culting advice from when NUMA meant HPC clusters with CPUs communicating over a network instead of multiple sockets in one box.
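For anyone following along, here's a quick check you can run from any Linux shell (dom0 or otherwise) to see what topology the kernel is actually being shown; the node directories under /sys are the giveaway:

```shell
# How many NUMA nodes does the kernel see?
lscpu | grep -i "numa"

# One nodeN directory per exposed node. Only node0 means the BIOS is
# interleaving memory (or NUMA support is off); node1 and up means a
# real multi-node topology is visible to the scheduler.
ls -d /sys/devices/system/node/node*
```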
|
# ? Feb 3, 2018 01:28 |
|
bobfather posted:Don’t be daft. You can get a preowned E5-2650 V2 or 1650 V2 system for $300 or less these days with 8gb of RAM.
|
# ? Feb 3, 2018 01:56 |
|
bobfather posted:Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell. Have you looked at used HP G7 or G8 servers? For home lab use they're just fine, but they are noisier than a NUC. G7s are really cheap now, and G8s are following suit.
|
# ? Feb 3, 2018 02:44 |
|
Wibla posted:Have you looked at used hp G7 or G8 servers? For home lab use they're just fine, but they are noisier than a nuc. G7 is really cheap now, and G8 is following suit. I picked up a Lenovo S30 with 48GB of ECC recently, and it works a treat without being noisy. Not conventionally rackable though.
|
# ? Feb 3, 2018 02:46 |