Eletriarnation

Ciaphas posted:

At home last weekend I tried to install Linux Mint in an Oracle VirtualBox VM, as a place for loving around with programming stuff (trying out .NET Core and VS Code at the time) without dual-booting out of Windows. Gotta play my games and all. Problem was the Linux Mint desktop kept turning into a glitchy, godawful mess with 3D accel on, and was dogshit slow with it off, and I didn't have a lot of time to mess around right then so I haven't gotten back to it.

Anyway, the question is, what do people here like as far as free Windows-based virtualization hosts, and guest Linux OSes for said? Is Oracle VirtualBox the only free game in town? (I know there's ESXi on the bare metal side.)

VMware Workstation has a free-for-personal-use counterpart known as VMware Player. As far as distros go, CentOS (which is the same packages as RHEL without the Red Hat branding) seems to be the overall business favorite, with Fedora being the more cutting-edge, desktop-oriented relative of that, and Ubuntu seems to be the consumer favorite. Both families are pretty beginner-friendly as far as Linux goes. I would probably give Ubuntu a small edge from a pure "how hard will I have to look around to figure out how to get things working" perspective, but if you intend to actually work with Linux at some point you're more likely to see a RHEL-like in production, and it's really not that different.

Eletriarnation fucked around with this message at 04:18 on Dec 16, 2016

Eletriarnation

SEKCobra posted:

Both Virtualization and/or NASing work fine on their own.

I am very much a novice as far as professional virtualization goes, so I might be reaching too far with this statement, but I think the key is that you don't usually have the same device doing both of these in serious enterprise setups - you have your SAN for storage, and your compute nodes may have a bit of local storage, but they're not intended to serve as network storage for other nodes themselves. You can definitely do it with Linux/KVM because you're using a full-featured OS with its own optional NAS features as a hypervisor, but something like Hyper-V or ESXi is fundamentally not meant as an OS for a shared storage device.
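
To make that concrete: because a KVM host is just Linux, you can bolt NAS duty onto it with a couple of commands. Rough sketch below, assuming a Debian/Ubuntu-style host; the export path and subnet are made-up examples.

sudo apt install nfs-kernel-server
echo '/srv/vmstore 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra            # re-read /etc/exports and apply the new export
showmount -e localhost       # confirm the share is actually being offered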

Eletriarnation

Depends on what kind of CPU you want, but you might find it more cost-effective to get a cheap home-office server like a PowerEdge T30 or the Lenovo equivalent (TS140-150?) than to actually build something around a server-chipset motherboard.

I really doubt you're going to run into compatibility issues on whatever random desktop you want to run this on, especially if you're OK popping in a NIC if the one you have isn't supported.

Eletriarnation

KVM is an alternative to VirtualBox as a platform for hosting virtual machines on Linux. The main differences are that KVM is built into the Linux kernel and is Linux-exclusive, whereas VirtualBox has versions available on multiple platforms (its base is open source, though Oracle's extension pack is proprietary).

Virt-manager is just a frontend for KVM (via libvirt) that is readily available, well supported, and straightforward to use in a normal desktop environment. There are other frontends, like the CLI tools or Wok/Kimchi (a web interface), that you could use instead, just like you can use either VMware's Windows client or the vCenter web client to manage an ESXi host.
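
If you want to see what the CLI route looks like, it's roughly this (virt-install and virsh come from the libvirt tooling; the VM name, ISO path, and sizes here are just example values):

virt-install --name win10-office --memory 4096 --vcpus 2 \
  --disk size=60 --cdrom ~/isos/Win10.iso --os-variant win10
virsh list --all            # show every defined guest
virsh start win10-office    # boot it again later without touching the GUI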

If you're just looking to run a single Windows install to have access to MS Office, it probably doesn't matter much which you use.

Eletriarnation

Assuming you didn't pick a networking setup that prevents the guest from communicating directly with the host, the easiest solution for Windows guests in my experience is probably to just install WinSCP on the guest and copy files to the host over SCP. Slightly more work, with a similar effect, would be setting up an FTP server on the host like vsftpd. If you want a remote folder that can be used directly as a filesystem, like a local folder, you want to set up Samba or NFS (which may be a bit wonky with Windows, but should work).
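
If you do go the Samba route, it's not much work on the host side. Rough sketch, assuming Debian/Ubuntu package names; the share name, path, and username are just examples:

sudo apt install samba
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[vmshare]
   path = /srv/vmshare
   read only = no
   valid users = myuser
EOF
sudo smbpasswd -a myuser          # Samba keeps its own password database
sudo systemctl restart smbd
# then map \\<host IP>\vmshare as a network drive inside the Windows guest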

I don't recall ever setting up filesystem passthrough with KVM/QEMU, only with VirtualBox on Windows hosts, so I can't say much about it.

Eletriarnation

Mr. Crow posted:

Is "Linux gaming" at near native really as simple as running a windows VM and doing GPU passthrough? My buddy says that's all he's doing and gets like 97% performance of native, I have a hard time believing that doesn't run into weird driver issues or something?

You don't get driver issues because Linux does not interact with the card being used. Windows sees it just as if it were a physical machine with a physical GPU.

I had the best success setting it up on Proxmox, but with the RX 460 I used as a test card I did have a strange issue: if I shut down the VM after starting it, trying to start it again before rebooting the host caused a kernel panic. The host booted back up in under a minute, though, and performance inside the VM seemed fine.
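
For anyone curious, the passthrough setup on Proxmox is roughly this (the VM ID and PCI address are placeholders, and the IOMMU flag shown is for an Intel host):

# 1. enable the IOMMU on the host kernel command line, e.g. intel_iommu=on
# 2. find the GPU's PCI address
lspci -nn | grep -i vga
# 3. hand that device to the VM with Proxmox's qm tool (pcie=1 assumes a q35 machine type)
qm set 100 -hostpci0 01:00,pcie=1

After that, Windows loads its own driver for the card as if it were bare metal.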

Eletriarnation

Bob Morales posted:

What’s the cheapest way to get 4+ cores, 32GB of memory, and 2 (but hopefully 3) drives in something to run VMs?

I’m thinking of a Lenovo P50, or another similar laptop?

I assume you're excluding used hardware, or else the answer is probably a refurb minitower with a 2nd- to 4th-gen i5/i7 and 4x8GB of DDR3. I'd guess that's around $200-250.

Eletriarnation

I only manage my own home server and a few low-importance systems used inside test environments at work, so I'll freely admit I'm not an expert, but systemd seems pretty functional to me and I've never been motivated to try to replace it. I looked at the ihatesystemd stuff and there's some "I don't think like the developers and don't like how they handled this edge case," some "this very complicated software undergoing continual development occasionally has BUGS!", and some "this very complicated software sometimes forces you to do things in complicated ways!" As someone who has tested closed-source enterprise software for several years, I don't find any of this notable, let alone crippling.
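
For what it's worth, the day-to-day experience is mostly writing little unit files like this, which I find hard to get angry about (the service name and binary path are made-up examples):

cat <<'EOF' | sudo tee /etc/systemd/system/myapp.service
[Unit]
Description=Example app
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service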

If systemd is so bad, why is it incredibly popular - just inertia and network effect? Seems like you would at the very least have popular forks out there, if not ground-up replacements.

Eletriarnation fucked around with this message at 21:05 on Apr 10, 2021

Eletriarnation

BlankSystemDaemon posted:

Well, systemd is popular for a few reasons.
Partially it's because RedHat are well-positioned to affect a lot of other distributions, since a lot of distributions take cues from how RedHat does things, if they don't directly consume most of what's being offered and just alter a few bits and bobs.
Secondly, because a lot of the non-RedHat-affected distributions adopted systemd early on, pretty much through unilateral decisions by a few people (Debian was one of the few places where there was any substantive debate, and even then it was still decided to adopt it - and even more things are based on Debian than are based on RedHat stuff).

There's also a very substantial userbase of Linux users who'll happily act like Windows users and just reboot when they encounter problems instead of root-causing them, so when they do hit a problem they won't know what the cause is (although this doesn't mean systemd is always at fault, naturally).
This, ironically, is probably also the userbase that's most convinced by the "it boots faster" argument, although that one has arguably been shown to be largely untrue for newer versions, if it ever was faster.

Someone more clever than me can probably also comment on how something doesn't have to be good to be popular.

I think a lot of it comes down to the fact that it's a compound thing for the people who don't like systemd; ie. it's not just systemd itself, it's also Lennart Poettering, RedHat, and a bunch of other factors.

To add to the other link, there's also an (archived) list of articles critical of systemd as well as some (archived) arguments against systemd.

EDIT: For reference, I'm not saying that none of the things systemd does is good - however, everything good that it does was done better and earlier by launchd (introduced in Mac OS X and still in macOS; an on-demand, unit-structured startup handler that replaced BSD init - it only starts things up when there's actual demand for them, using sockets not unlike how inetd launches daemons on demand, and it also handles units in a way that'd be familiar to a systemd user) and SMF (the Solaris Service Management Facility, one of the only startup systems that has managed to produce a dependency graph that can't get you into a failed state).
Had I the money for it, I wouldn't mind paying someone to make something for FreeBSD out of BSD init, rc, devd, daemon, and all the utilities that're already in base and could be used to achieve all of this, without making a monolithic beast that's consuming seemingly all of the Linux userland.

I appreciate the insight here. I guess some of it just opens up more questions though - Red Hat, being responsible for countless actual paid-support production systems, surely had good reasons for choosing systemd over... whatever they were using before, right? If launchd and SMF are better and older, why did systemd choose a different path for what it's doing and why did everyone (in Linux-land, at least) go along with that? I don't expect you to educate me on the history of all this, but it feels like there's more going on here than the Linux userbase not knowing a good init system from a bad one.

BlankSystemDaemon posted:

As a personal anecdote, the last time I was running a Linux distribution on my entire network, it managed to stop booting properly, and when I checked things with zfs-diff(8) there were no changes related to systemd or its configuration - nor had anything else changed, only a couple of files of userdata.
That was pretty much the last straw, for me.

If nothing changed related to systemd, what made you think that the problem was caused by it?

Eletriarnation fucked around with this message at 00:06 on Apr 11, 2021

Eletriarnation

Sure, I know that they couldn't just wholesale lift closed source software - what I'm saying is that if Poettering set out to emulate those examples knowing how they work, it doesn't really make sense that what he developed would just be worse for no reason instead of working in the same way as the examples. At the very least, we have to claim that he didn't know what he was doing or that he made bad decisions on how to make necessary-for-Linux changes instead of just replicating what was in front of him.

I don't even know the specifics of how they are different because I've never gone into the weeds that far on OSX or used BSD at all, so I'm not going to speculate on why those decisions were made - if you want to say that he made a mistake because he has bad opinions on how things should work, OK. But... like you say, he was in the right place at the right time to solve a problem that needed solving. Not only was his employer who paid for the solution happy to use it, but most of the rest of the Linux world was too.

Even if I believe that Red Hat created a deliberately flawed solution to sell more support contracts (or, for the less conspiracy-minded, spent a while developing something lovely and fell victim to the sunk cost fallacy), for me it doesn't really pass the laugh test that the rest of the open source world would say "well, ok", get in line, and stay there for ten years instead of forking or starting over from scratch.

Eletriarnation fucked around with this message at 14:58 on Apr 11, 2021

Eletriarnation

Yeah, as I understand it Proxmox is fundamentally Debian with KVM, LXC, various storage/networking features, and a nice web interface for managing them. It can do a lot out of the box, but you can also just install whatever you want with apt if there are features you need in the base OS rather than in a container/VM.

There appear to be a few straightforward methods to migrate a VM guest from Hyper-V: https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#HyperV
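
The gist of the simplest approach (not necessarily exactly what that page describes) is just converting the disk image and attaching it to a Proxmox VM - roughly like this, where the VM ID, file names, and the "local-lvm" storage name are example values:

qemu-img convert -O qcow2 guest.vhdx guest.qcow2    # convert the Hyper-V disk image
qm importdisk 101 guest.qcow2 local-lvm             # attach it to an existing Proxmox VM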

Installation is very easy and it runs well even on pretty old hardware, so I'd recommend giving it a try if you have a spare machine.

However, it doesn't have a desktop environment by default, so if you're looking for a desktop Linux distro that can also host VMs, I'd just get Ubuntu/Fedora and run virt-manager to manage KVM.

Eletriarnation fucked around with this message at 17:11 on Jan 9, 2024
