mewse
May 2, 2006

Oh server rack under the stairs? Watch that Linus vid for a tutorial

Moey
Oct 22, 2010

I LIKE TO MOVE IT
ESXi runs pretty well on desktop hardware (just watch out for NIC and storage controller drivers). Quiet and power efficient.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

If you're looking to do a nested virtual lab the Supermicro Xeon D motherboard/SoC is the best bang for the buck. You can get 8, 12, or 16 core models depending on how much you want to spend, and the motherboards support dual 10Gbase-T, dual 1Gbase-T and a dedicated IPMI network management port. They'll support up to 128GB of ECC memory or 64GB of non-ECC. They're mini-ITX boards so they can fit in a mini tower case. Get the right case and they can be very quiet and they don't consume a ton of power.

fordan
Mar 9, 2009

Clue: Zero

YOLOsubmarine posted:

If you're looking to do a nested virtual lab the Supermicro Xeon D motherboard/SoC is the best bang for the buck.

I love mine and how quiet it is, but pricewise it's a much bigger hit than a Dell server off eBay.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Do Supermicro boards still have backdoors in them?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

fordan posted:

I love mine and how quiet it is, but pricewise it's a much bigger hit than a Dell server off eBay.

Sure, but it's also not a big, loud, electricity-sucking heat engine. Convenience ain't cheap!

bsaber
Jul 27, 2007

jre posted:

Curious why you've gone for a rack-mount server for a home lab? What technologies are you wanting to learn?

Well the plan is to get a rack eventually and it seemed like the best bang for the buck. I’m open to suggestions though.

As for technologies I want to learn at this point in time, just Docker and Kubernetes for starters. Want to learn more networking and server admin skills.

There are some apps that I wrote that are used daily, and I want to run them in VMs. Right now it's on one VPS. Other things I plan to run/play with: pfSense (never really did anything with firewalls or networking), Plex (media is on a NAS), Guacamole, and Blue Iris.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bsaber posted:

Well the plan is to get a rack eventually and it seemed like the best bang for the buck. I’m open to suggestions though.

As for technologies I want to learn at this point in time, just Docker and Kubernetes for starters. Want to learn more networking and server admin skills.

There are some apps that I wrote that are used daily, and I want to run them in VMs. Right now it's on one VPS. Other things I plan to run/play with: pfSense (never really did anything with firewalls or networking), Plex (media is on a NAS), Guacamole, and Blue Iris.

You can pick this up with $80 worth of Raspberry Pis for 5% of the power consumption and 0% of the noise.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Happiness Commando posted:

The plan is a 2016 Hyper-V host with 2016 RDSH (among others) inside of it.

All the MS documentation I can find says RemoteFX only works for one concurrent login. We are going to use Gen 1 VMs because Datto backup units can't export images of gen 2 VMs.

MS indicates that gen 1 RDSH servers are unsupported.

Am I missing something here? It does look like I could use DDA to pass the whole video card to the RDSH guest, though, so that would work, I think?

(I have no hardware to test this on)

RemoteFX will work with standard terminal services and multiple concurrent user sessions, no idea what that is about. vGPU hardware acceleration is one user per guest VM. If you're doing some kind of GPU passthrough then it would be 1 GPU per user but I would not recommend that. I believe the vGPU stuff requires Gen 2 so get your backups sorted. You should be able to run the VM images off any standard SMB3 file server so backups are usually trivial in that scenario.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

BangersInMyKnickers posted:

RemoteFX will work with standard terminal services and multiple concurrent user sessions, no idea what that is about. vGPU hardware acceleration is one user per guest VM.

I promise I'm not trying to be dumb, but I can't make these two statements mesh. I'm seeing RemoteFX and vGPU being used interchangeably, so it reads like you're saying that RemoteFX will work for concurrent users and will only work for one user.

I know I just need to get some hardware and play with it, but I can't do that yet.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

RemoteFX is just a bolt-on feature set to RDP that enables advanced functionality like H.264 encoding, device passthrough, and some other stuff. The encoding can either be done in software on the CPU or on a dedicated GPU encoder if available.

vGPU is the virtualization of a graphics card and allocation of its resources to a guest VM. This is what gets you Direct3D support inside an RDP session. You need RemoteFX to use vGPU because standard RDP video encoding cannot handle it.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

I really wish this stuff was documented better. So vGPU is appropriately a virtual GPU that gets passed to the guest RDSH, which then slices it up using RemoteFX so that concurrent sessions can use it?

So vGPU is implemented by adding an adapter to a VM in the HyperV console and then RemoteFX is some RDS role that gets added to the guest RDSH. Yes?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I'll try to throw this into a table to break it down:

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

OK guys, I guess it's time for me to pack it up. I'm an imposter.

Thanks Ants
May 21, 2004

#essereFerrari


Get approval for some AWS or Azure instances with video cards and test it, you'll figure it out.

movax
Aug 30, 2008

Potato Salad posted:

Dude, don't bother doing NVMe passthrough

The ESXi NVMe driver is great. Discover the drive from the host, put VMFS on it.

E: at the very least, do a raw device mapping. PCIe passthrough isn't going to save you any performance

I just figured it would be "simpler" to pass the PCI device through to the guest VM, since I've had good luck doing it with the LSI controllers and Intel NICs. I changed passthru.map to add the VID/DID of the Samsung SSD and changed its reset type to d3d0; the VM still hard-locked, unfortunately. But I could restart the VM with no issues this time, instead of having to restart the host. So maybe now the problem is with Fedora 27, and not my VM configuration?
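
For anyone else following along: the passthru.map change described above looks roughly like this (144d is Samsung's PCI vendor ID; the a804 device ID is only an example, so pull the real vendor/device pair for your drive from lspci -n on the host before trusting it):

    # /etc/vmware/passthru.map
    # format: vendor-id device-id resetMethod fptShareable
    # reset methods: flr, d3d0, link, bridge, default
    144d  a804  d3d0  false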

evol262
Nov 30, 2010
#!/usr/bin/perl

Happiness Commando posted:

I really wish this stuff was documented better. So vGPU is appropriately a virtual GPU that gets passed to the guest RDSH, which then slices it up using RemoteFX so that concurrent sessions can use it?

So vGPU is implemented by adding an adapter to a VM in the HyperV console and then RemoteFX is some RDS role that gets added to the guest RDSH. Yes?

Really simply, assigning a GPU enables it. Enterprisey GPUs present SR-IOV and can be sliced up by presenting themselves multiple times on the PCIe bus. You'll need an enterprisey GPU to use vGPU anyway.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I ran 4 concurrent seats with 3D CAD/Autodesk sessions at 512MB of vRAM each on a cheapo FirePro W4100. People overestimate what their actual 3D workload is, and you can get very good density and value with a true vGPU implementation instead of the passthrough/partitioning nonsense that VMware has been pushing.
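
To put rough numbers on that density point (the 2GB figure for the W4100 is from memory, so double-check your card): four 512MB slices fill a 2GB card exactly. A trivial Python sketch of the math:

    def vgpu_seats(card_vram_mb: int, vram_per_seat_mb: int) -> int:
        """How many vGPU seats fit on one card, counting VRAM only
        (ignores encoder sessions and 3D engine contention)."""
        return card_vram_mb // vram_per_seat_mb

    # FirePro W4100 (2048 MB) carved into 512 MB slices -> 4 seats, as described above
    print(vgpu_seats(card_vram_mb=2048, vram_per_seat_mb=512))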

mike12345
Jul 14, 2008

"Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries."





If I want a lightweight vm whose only purpose is to manage a couple of docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's about a single-digit amount of containers?

Potato Salad
Oct 23, 2014

nobody cares


I mean, we're quibbling over megs of overhead, right?

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

mike12345 posted:

If I want a lightweight VM whose only purpose is to manage a couple of Docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's only a single-digit number of containers?

CoreOS is dead because Red Hat just bought them out.

mike12345
Jul 14, 2008

"Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries."





Potato Salad posted:

I mean, we're quibbling over megs of overhead, right?

Probably? I mean, that Debian install works and all, but I thought: why stick with the full-blown distro when all it does is babysit containers?

Boris Galerkin posted:

CoreOS is dead because Red Hat just bought them out.

Ah ok.

Hughlander
May 11, 2005

mike12345 posted:

If I want a lightweight VM whose only purpose is to manage a couple of Docker containers, is RancherOS the way to go? CoreOS? Or just stick with Debian as long as it's only a single-digit number of containers?

Depending on what your hypervisor is, I'll toss my hat in for what I just did. I had ESXi and FreeNAS with an LSI controller passed through, then some Ubuntu VMs for various things, including one being a Docker container VM. I didn't like that the FreeNAS memory couldn't be overcommitted, and didn't like having to figure out what limits that left for the Docker VM vs the others. So I dropped ESXi and FreeNAS for Debian + Proxmox + ZFS on Linux. Now there's no passthrough, as the hypervisor is the ZFS server. Then I created an LXC container to run Docker and was able to allocate the full system memory to it. It takes 12MB of RAM on startup. I'm finding that I use LXC more than Docker for anything that isn't a straight Hub-published container.

Mr Shiny Pants
Nov 12, 2012

Hughlander posted:

Depending on what your hypervisor is, I'll toss my hat in for what I just did. I had ESXi and FreeNAS with an LSI controller passed through, then some Ubuntu VMs for various things, including one being a Docker container VM. I didn't like that the FreeNAS memory couldn't be overcommitted, and didn't like having to figure out what limits that left for the Docker VM vs the others. So I dropped ESXi and FreeNAS for Debian + Proxmox + ZFS on Linux. Now there's no passthrough, as the hypervisor is the ZFS server. Then I created an LXC container to run Docker and was able to allocate the full system memory to it. It takes 12MB of RAM on startup. I'm finding that I use LXC more than Docker for anything that isn't a straight Hub-published container.

This is also how I run my server, it is just a regular Debian server running LXC for Gitlab and KVM for some Windows VMs. The only problem is that ZFS might get a bit RAM hungry, but with 32GB RAM this is not really an issue.

SamDabbers
May 26, 2003

I also do something similar, only with vanilla FreeBSD + Jails + bhyve. Just about everything I run has a Port and works fine in a Jail, and I use bhyve VMs for the few services that don't.

OS containers > hardware virtualization

SamDabbers fucked around with this message at 18:35 on Feb 2, 2018

Wibla
Feb 16, 2011

You have servers with less than 64GB of RAM? :psyduck:

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Wibla posted:

You have servers with less than 64GB of RAM? :psyduck:

Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bobfather posted:

Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell.

My home lab is also built out of $100 NUCs.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Vulture Culture posted:

My home lab is also built out of $100 NUCs.

That’s a lie unless you stole them.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

freeasinbeer posted:

That's a lie unless you stole them.

The NUC6CAYH is a little more expensive now due to scarcity, but I was paying about $115 when they came out. MSRP on ARK still shows $129.00.

Mr Shiny Pants
Nov 12, 2012

Wibla posted:

You have servers with less than 64GB of RAM? :psyduck:

Blame Intel for their processor segmentation; the 1245 doesn't take any more. :)

freeasinbeer
Mar 26, 2015

by Fluffdaddy
I mean, even then RAM and an SSD/HDD are at least $100 on top.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Vulture Culture posted:

My home lab is also built out of $100 NUCs.

Don't be daft. You can get a preowned E5-2650 V2 or 1650 V2 system for $300 or less these days with 8GB of RAM.

Cheapest eBay DDR3 for a Z420 or similar starts at ~$60 per 8GB DIMM, so $420 additional to bring yourself up to 64GB.

Or you could buy an E3-1225 barebones new for $300. Dell sells these regularly. God help you if you get one, because it takes DDR4, and that starts at ~$80 per 8GB DIMM on eBay, so $560 to get to 64GB.
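
For what it's worth, the arithmetic above assumes you keep one 8GB DIMM and add seven more, i.e. a board with eight DIMM slots. A quick Python sanity check using the prices from the post:

    import math

    def upgrade_cost(target_gb, installed_gb, dimm_gb, price_per_dimm):
        """DIMMs and dollars needed to reach target_gb, keeping what's already installed."""
        dimms_needed = math.ceil((target_gb - installed_gb) / dimm_gb)
        return dimms_needed, dimms_needed * price_per_dimm

    # DDR3 case: 8GB already in the Z420, ~$60 per 8GB DIMM
    print(upgrade_cost(64, 8, 8, 60))   # (7, 420) -> the $420 figure
    # DDR4 case: ~$80 per 8GB DIMM (same assumption of one 8GB DIMM already present)
    print(upgrade_cost(64, 8, 8, 80))   # (7, 560) -> the $560 figure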

Docjowles
Apr 9, 2009

I don’t really know which thread is best for this, but the question is in the context of our XenServer hypervisors so I guess I’ll start here.

Can I get some real talk about NUMA? We are building out a new cluster of hypervisors and I want to make sure we have all our ducks in a row. I mentioned looking at NUMA settings and my coworker responded that it’s only relevant to HPC setups and not something you should be messing with unless you have a good reason. NUMA support is currently off and he says that is best.

Is that accurate? My understanding is that all modern server CPU architectures (these chips are Broadwell-EP) are built for NUMA, and if you aren't enabling those features in your BIOS and OS, your system is just seeing a faked SMP setup where all memory appears equally accessible and performant, whereas behind the scenes it could be running tasks on a CPU that's remote from the memory it's addressing and performing suboptimally. Enabling NUMA exposes the additional info about which node a job is running on and allows the OS to make smarter scheduling decisions.

Am I talking out of my rear end here? I’ve spent all afternoon reading blog posts about NUMA and I feel like I now know less than when I started.

Thanks Ants
May 21, 2004

#essereFerrari


My understanding is that you’re correct, https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html

Internet Explorer
Jun 1, 2005

From my limited knowledge:

You're not talking out of your rear end, but I'm not sure your coworker is wrong per se. Unless you really need every ounce of performance, it's likely not necessary and may add some complexity. It's been a while since I touched XenServer, but at first glance it looks like it had fairly spotty support where you had to pin a VM to a host because if it migrated then it wouldn't update the memory state and things like that. Seems to still be a problem - https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3059892-automatic-numa-optimization

That being said, I don't think leaving it off is the right call. Simply turning NUMA support on, not assigning more vCPU than you have physical cores (or memory) in a NUMA node, and understanding that if you live migrate a guest it will run less efficiently until a reboot, seem like a minor inconvenience in most cases.

But I'd be really interested to hear what others think.
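
To make that sizing rule concrete, here's a rough Python sketch of the check (the host numbers in the example are made up, so plug in your own cores and memory per socket):

    def fits_in_one_numa_node(vm_vcpus, vm_mem_gb, cores_per_socket, mem_per_socket_gb):
        """True if the VM can live entirely on one socket and its local memory,
        so the scheduler never has to hand it remote (cross-node) memory."""
        return vm_vcpus <= cores_per_socket and vm_mem_gb <= mem_per_socket_gb

    # Made-up dual-socket Broadwell-EP host: 12 cores and 128 GB per socket
    print(fits_in_one_numa_node(vm_vcpus=8, vm_mem_gb=96, cores_per_socket=12, mem_per_socket_gb=128))    # True
    print(fits_in_one_numa_node(vm_vcpus=16, vm_mem_gb=160, cores_per_socket=12, mem_per_socket_gb=128))  # False: spans nodes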

Docjowles
Apr 9, 2009

Yeah, the punchline to all of this is that XenServer 7.x’s dom0 kernel isn’t compiled with NUMA support anyway so it doesn’t loving matter :pseudo:

I’d still love for anyone who has a deeper understanding of this to weigh in. There is a lot of info out there as to what NUMA is academically, but not much for concrete takeaways in terms of what settings you can tune and why. And a bunch of cargo culting advice from when it referred to HPC clusters with CPUs communicating over a network instead of multiple sockets in one box.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

bobfather posted:

Don't be daft. You can get a preowned E5-2650 V2 or 1650 V2 system for $300 or less these days with 8GB of RAM.

Cheapest eBay DDR3 for a Z420 or similar starts at ~$60 per 8GB DIMM, so $420 additional to bring yourself up to 64GB.

Or you could buy an E3-1225 barebones new for $300. Dell sells these regularly. God help you if you get one, because it takes DDR4, and that starts at ~$80 per 8GB DIMM on eBay, so $560 to get to 64GB.

Exactly how many NUCs have slots for eight DIMMs?

Wibla
Feb 16, 2011

bobfather posted:

Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell.

Have you looked at used HP G7 or G8 servers? For home lab use they're just fine, but they are noisier than a NUC. G7 is really cheap now, and G8 is following suit.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Wibla posted:

Have you looked at used HP G7 or G8 servers? For home lab use they're just fine, but they are noisier than a NUC. G7 is really cheap now, and G8 is following suit.

I picked up a Lenovo S30 with 48GB of ECC recently, and it works a treat without being noisy. Not conventionally rackable though.
