Since my forum-searching ability is either not good enough, it hasn't been asked, or I'm just plain stupid: I'm planning on building a home server with ESXi and want to run FreeNAS and OpenELEC virtualized. Here's my question: can I use VT-d in ESXi to pass the graphics card directly through to OpenELEC (as FreeNAS obviously doesn't need it)? The reason I'm not just installing FreeBSD directly and putting XBMC on a MythTV backend connected to an HDHomeRun is that I want to play around with virtualization more than I've done in the past (VMware Player/VMware Fusion/Hyper-V in Windows 8 Pro). I just don't fancy going out and buying new hardware (I do plan on buying new hardware eventually, as my current HP N36L-based NAS is getting too small) only to learn that I can't do this.
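For anyone finding this from search: once a PCI device has been toggled for passthrough on the ESXi host, the VM-side wiring ends up as a few lines in the VM's .vmx file. This is a hedged sketch - the PCI address is a placeholder for whatever your host reports, and the 64-bit MMIO keys are only needed for devices with large BARs:

```
# Hypothetical .vmx fragment for VT-d/VMDirectPath GPU passthrough.
# The PCI address below is a placeholder; use the one your host lists
# for the actual GPU after enabling passthrough on it.
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:01:00.0"
# GPUs with large BARs may also need 64-bit MMIO mappings:
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```

The key names here follow VMware's passthrough convention, but double-check against your ESXi version's documentation before relying on them.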
|
|
# ¿ Dec 10, 2012 22:12 |
|
|
# ¿ Apr 26, 2024 23:29 |
Well, as I've understood it, View 5.2 and whatever else you guys are talking about is about making the GPU accessible as a shared resource to all the guest OSes - what I want to do is just point it at one guest, as FreeNAS doesn't need it at all. I wish I had some lab hardware to test this on, but unfortunately I don't. Also, it turns out that I have a friend who has a setup quite like the one I'll have, so when I get hold of him I'll have to ask what setup he's running and how it's configured. I'll return with more information when I have it.
|
|
# ¿ Dec 11, 2012 09:57 |
I noticed that the USB 3.0 PCIe daughterboard I've got plugged into my Windows 10 VM through VMDirectPath I/O wasn't picking up the USB 2.0 device I plugged in, and when I additionally passed through a USB 2.0 controller via VMDirectPath I/O, something went absolutely apeshit: whenever I'd start something with audio, it wouldn't play back for the first minute, then stutter wildly. I'm thinking DPC latency issues caused by lack of MSI-X or something else along those lines, but I haven't really got any solution other than sticking to USB 3.0, so my question is: shouldn't the USB 3.0 controller be capable of picking up USB 2.0 devices, so that I don't need a separate USB 2.0 controller?
|
|
# ¿ Mar 24, 2021 15:39 |
CommieGIR posted:Yes, USB 3.0 is fully backwards compatible. Sounds like there was an interrupt issue. However, I can play X4 perfectly fine (albeit only at 10% better FPS than my former workstation, but that's entirely expected, since I was both CPU- and GPU-bound there). Anyway, I'm pretty sure I just need to test a bit more with the HDMI-CEC adapter, as I just had my HOTAS connected to play X4, and that uses USB 2.0 too. Also, from the Linux thread: SamDabbers posted:I look forward to your posts about bhyve It's not using libucl, so it may be refactored again before long; I need to ask jhb@ about it. It'll likely be ready by 13.1, and will probably be in 12.3 as well, given the MFC. Also, even if 13.0-RELEASE isn't out yet (a bug was found, and OpenSSL just announced a high-severity vulnerability has been fixed), the release notes are worth reading through for things like VirtIO V1 (read: Q35 chipset support), 9pfs, snapshots (not live snapshots, yet), as well as a PCI HDAudio device and COM ports 3 and 4. The VNC protocol support has also received a bump, so you can now attach macOS' Screen Sharing feature to it. Oh, and in case you have more memory than should be allowed, you can now use 57-bit virtual addressing and five-level nested page tables. BlankSystemDaemon fucked around with this message at 02:41 on Mar 25, 2021 |
|
# ¿ Mar 25, 2021 02:34 |
BlankSystemDaemon posted:In most exciting news, bhyve now has proper UCL-like configuration management. This makes it possible to map UCL-compatible files onto those OIDs, and it seems to me that it's a pretty excellent GSoC project in case anyone's looking for something to do over the summer. Send me a PM or an email to debdrup@freebsd.org if you're interested.
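For the curious, the OID-style configuration surfaces as flat key=value settings that bhyve can read from a file. A minimal sketch - the key names follow bhyve_config(5), but the VM name, device slots, and paths here are all made up:

```
# Hypothetical bhyve_config(5)-style file, loaded with: bhyve -k vm.conf
name=testvm
cpus=2
memory.size=2G
pci.0.0.0.device=hostbridge
pci.0.4.0.device=virtio-blk
pci.0.4.0.path=/vm/testvm/disk.img
pci.0.31.0.device=lpc
lpc.com1.path=stdio
```

The nice part of mapping everything onto OIDs like this is that any setting can also be overridden on the command line with -o, so a UCL front-end only has to flatten a config tree into these dotted keys.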
|
|
# ¿ Mar 30, 2021 11:01 |
Gaming in Windows on ESXi was a fun adventure, but it doesn't really work out. A combination of a slightly janky but memory-hungry game (X4) and ESXi's EFI boot process handling >4GB PCI BARs poorly means that the game tends to freeze when saving. It involved at least two PSODs, and a lot of frustration.
|
|
# ¿ Apr 4, 2021 01:42 |
SamDabbers posted:Try KVM with a relatively recent kernel and qemu. I have had good results passing through an Nvidia card and USB3 controller to a Windows VM for this use case under Fedora. I can pass through the GPU and USB3 daughterboard just fine, that's not the problem. The problem is all the ancillary issues that crop up.
|
|
# ¿ Apr 4, 2021 11:13 |
SysV init is not the only way to do things, nor am I especially fond of it, and I'm certainly not recommending anyone go back to it. The suggestion was to not use ESXi and instead use KVM, which means Linux, and unless I go out of my way, it also means systemd et al - which is fine except when something inevitably breaks (because it always does, Murphy makes sure of it), I'll want to root-cause it (because that's just the kind of geek I am), and I've been there before, where I end up having to use something other than Linux to do this, which means that little things like log files won't be available to me, because of systemd. Nowhere was there mention of me running Windows as a bare-metal hypervisor; Windows is just the de-facto gaming platform, because even if Linux gaming exists, I don't own X4 on Steam (and any other games I play aren't available on Steam for Linux), and even if I did, I wouldn't know where to start vis-a-vis finding some Linux appliance with Steam that actually works reliably - which probably also involves systemd et al, so we're back to square one. Well, with the exception that at least Windows has dtrace. And speaking of Windows as a bare-metal hypervisor, I wouldn't touch that with a ten-foot pole, because even such a simple thing as IOMMU is an absolute loving pain in the neck to set up and requires messing around in PowerShell. BlankSystemDaemon fucked around with this message at 16:08 on Apr 9, 2021 |
|
# ¿ Apr 9, 2021 16:06 |
The question I want answered is: why are you rebooting your systems often enough that the speed of the reboot matters?
|
|
# ¿ Apr 10, 2021 13:17 |
Well, systemd is popular for a few reasons. Partially it's because RedHat are well-positioned to affect a lot of other distributions, since a lot of distributions take cues from how RedHat does things, if they don't directly consume most of what's being offered and just alter a few bits and bobs. Secondly, a lot of the non-RedHat-affected distributions adopted systemd early on, pretty much through unilateral decisions by a few people (Debian was one of the few places where there was any substantive debate, and even then it was still decided to adopt it - and even more things are based on Debian than on RedHat stuff). There's also a very substantial userbase of Linux users who'll happily act like Windows users and just reboot when they encounter problems instead of root-causing them, so when they do encounter problems, they won't know what the cause is (although this doesn't mean systemd is always at fault, naturally). This, ironically, is probably also the userbase that's most convinced by the "it boots faster" argument, although that one has arguably been shown to be largely untrue for newer versions, if it ever was true. Someone more clever than me can probably also comment on how something doesn't have to be good to be popular. I think a lot of it comes down to the fact that it's a compound thing for the people who don't like systemd; i.e. it's not just systemd itself, it's also Lennart Poettering, RedHat, and a bunch of other factors. To add to the other link, there's also an (archived) list of articles critical of systemd as well as some (archived) arguments against systemd.
EDIT: For reference, I'm not saying that none of the things systemd does is good - however, everything good that it does has been done better, and before it, by launchd (introduced in Mac OS X, still in macOS; it's an on-demand, unit-structured startup handler that replaced BSD init - it only starts things up when there's an actual demand for them, by holding sockets not unlike how inetd launches daemons on demand, and it also handles units in a way that'd be familiar to a systemd user) and SMF (the Service Management Facility from Solaris, one of the only startup systems that've managed to produce a dependency graph that can't get you into a failed state). Had I the money for it, I wouldn't mind paying someone to make something for FreeBSD out of BSD init, rc, devd, daemon, and all the utilities that're already in base and could be used to achieve all of this, without making a monolithic beast that's consuming seemingly all of the Linux userland. BlankSystemDaemon fucked around with this message at 23:02 on Apr 10, 2021 |
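To make the launchd comparison concrete, here's roughly what on-demand socket activation looks like there - a hedged sketch, with the label, program path, and port all made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- launchd holds the listening socket itself and only spawns the
         daemon when a client actually connects, inetd-style. -->
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/mydaemon</string>
    </array>
    <key>Sockets</key>
    <dict>
        <key>Listeners</key>
        <dict>
            <key>SockServiceName</key>
            <string>12345</string>
        </dict>
    </dict>
</dict>
</plist>
```

The structure follows launchd.plist(5); consult it for the real key semantics before copying any of this.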
|
# ¿ Apr 10, 2021 22:47 |
If it's objectively better, how can it get into a state where the system is unbootable without the configuration being touched by the user? Let me guess: "it works for me" - which is the exact response a lot of people get when they report real bugs and Lennart doesn't wanna bother with something.
|
|
# ¿ Apr 10, 2021 23:25 |
jaegerx posted:Elaborate As a personal anecdote, the last time I was running a Linux distribution on my entire network, it managed to stop booting properly, and when I checked things with zfs-diff(8), there were no changes related to systemd or its configuration - nor had anything else changed, only a couple of files of userdata. That was pretty much the last straw for me. BlankSystemDaemon fucked around with this message at 23:34 on Apr 10, 2021 |
|
# ¿ Apr 10, 2021 23:28 |
Eletriarnation posted:I appreciate the insight here. I guess some of it just opens up more questions though - Red Hat, being responsible for countless actual paid-support production systems, surely had good reasons for choosing systemd over... whatever they were using before, right? If launchd and SMF are better and older, why did systemd choose a different path for what it's doing and why did everyone (in Linux-land, at least) go along with that? I don't expect you to educate me on the history of all this, but it feels like there's more going on here than the Linux userbase not knowing a good init system from a bad one. RedHat was looking to replace SysV init, because it needed replacing, and Lennart Poettering was in the right place at the right time. It also doesn't hurt that RedHat makes money selling support contracts, and that a non-zero number of companies moved to paid support because of systemd. Eletriarnation posted:If nothing changed related to systemd, what made you think that the problem was caused by it? Mr. Crow posted:So systemd was the cause of this how? Or did you just assume it was at fault? If it'd been FreeBSD, I'd have run rcorder /etc/rc.d/* /usr/local/etc/rc.d/* a whole bunch of times and looked for differences, but systemd-analyze only prints the last successful boot; it doesn't simulate a new one. The system in question had ECC memory, which I checked by using a known-bad DIMM. I replaced the PSU, and did a bunch of other testing, including moving it to a completely different system (virtualized on known-good hardware), none of which isolated the problem. BlankSystemDaemon fucked around with this message at 10:02 on Apr 11, 2021 |
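As an aside, rcorder(8) is at heart a topological sort over the PROVIDE/REQUIRE lines in each rc script, so the idea is easy to demonstrate with plain tsort(1) and a toy dependency list (the service names here are illustrative, not a real boot graph):

```shell
# Each input line is "dependency dependent"; tsort prints one valid
# start order for the whole graph.
printf '%s\n' \
    'FILESYSTEMS NETWORKING' \
    'NETWORKING mountcritremote' \
    'mountcritremote LOGIN' | tsort
```

Since this toy graph is a simple chain, the order is unique: FILESYSTEMS, NETWORKING, mountcritremote, LOGIN. rcorder does the same thing, except it extracts the graph edges from special comments in the scripts themselves.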
|
# ¿ Apr 11, 2021 09:57 |
Eletriarnation posted:Sure, I know that they couldn't just wholesale lift closed source software - what I'm saying is that if Poettering set out to emulate those examples knowing how they work, it doesn't really make sense that what he developed would just be worse for no reason instead of working in the same way as the examples. At the very least, we have to claim that he didn't know what he was doing or that he made bad decisions on how to make necessary-for-Linux changes instead of just replicating what was in front of him. The point I'm obviously failing to make is that if systemd is still as full of holes as it obviously is, and if it's still seeing scope creep that doesn't fix the issues and adds new ones, he's not solved anything. I'm not saying they did it deliberately; I'm not into conspiracy theories. I'm saying they unintentionally benefited from him being in a particular place at a particular time. Whether it's right is debatable. EDIT: Also, don't be so quick to assume RedHat won't replace systemd - they're already replacing pulseaudio with PipeWire, despite the fact that that was written by Lennart Poettering too. BlankSystemDaemon fucked around with this message at 15:30 on Apr 11, 2021 |
|
# ¿ Apr 11, 2021 15:28 |
Anywho, I think enough has been said to exhaust this. The battle lines have been drawn pretty clearly on this topic for a long time, and there's no changing anyone's mind - so let's just have a drink and shitpost about something else.
|
|
# ¿ Apr 11, 2021 18:32 |
What overhead? I'd love to see some numbers that prove which one has more overhead, because unless you're looking at OccamBSD (just as an example, because it's what I know; it's FreeBSD stripped to its absolute minimum to only run bhyve and nothing else (not even networking)), I'd be surprised if there's a measurable difference on comparable workloads.
|
|
# ¿ May 6, 2021 19:26 |
IOMMU passthrough in Hyper-V is embarrassingly bad.
|
|
# ¿ May 18, 2021 15:18 |
Perplx posted:Microsoft is working on using Linux as the root partition for Hyper-V https://www.theregister.com/2021/02/17/linux_as_root_partition_on_hyper_v/. Eventually Azure will be running Linux on bare metal like every other cloud provider.
|
|
# ¿ May 31, 2021 22:26 |
The question that prompted this got me thinking: I do wish there was an abstraction framework that could multiplex several hypervisors on the same hardware, when they all use hardware-accelerated virtualization (i.e. VMENTER/VMEXIT with SLAT). As it is, when you've got one type-2 hypervisor running, you can only launch additional instances of that hypervisor. You can't, as an example, run bhyve, Xen, and VirtualBox interchangeably, nor can you live-migrate between them. In that sense it seems like there's still a lot of work to be done, but I think there's very little hope for an open standard for either the abstraction or the API for live migration. BlankSystemDaemon fucked around with this message at 07:44 on Jun 30, 2021 |
|
# ¿ Jun 30, 2021 07:41 |
You're thinking of Windows Defender Application Guard, which makes use of Hyper-V to sandbox quite a few things, including the Edge browser - it also underpins Windows Sandbox, I think.
|
|
# ¿ Jul 10, 2021 07:49 |
Multi-seat configurations (invented for IRIX by SGI, if memory serves) aren't natively supported in Windows, but there are third-party solutions that make it possible.
|
|
# ¿ Jul 16, 2021 09:16 |
SlowBloke posted:It’s supported natively up to windows server 2016 as multipoint server, it was EoL on 2019 due to the introduction of azure desktops and windows 10 enterprise multi-session(formerly EVD). The biggest hurdle is that the 3D support might not be the best for games.
|
|
# ¿ Jul 16, 2021 19:06 |
You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper.
|
|
# ¿ Aug 19, 2021 17:45 |
Clark Nova posted:I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIE slot with no luck It idles at ~160W, and while it's not exactly quiet, a closed single wooden door between me and it is enough to muffle it so I can't hear it when sitting 2m from the door. Mind you, this is only achievable because of the HP-branded SAS controllers and NIC, as otherwise iLO will automatically turn the fans up to 40%. BlankSystemDaemon fucked around with this message at 21:42 on Aug 19, 2021 |
|
# ¿ Aug 19, 2021 18:26 |
CommieGIR posted:My main NAS is an R720 with 2 x 12 core Xeons and 128GB of RAM + Spinning disks and under load maybe hits ~350-400 watts.
|
|
# ¿ Aug 19, 2021 21:43 |
CommieGIR posted:Nah. I just have personal issues.
|
|
# ¿ Aug 20, 2021 10:25 |
Pile Of Garbage posted:I assume most servers these days come with M.2 NVMe SSD sockets so should be simple enough to use those instead whilst still avoiding the need for a SATA/SAS drive. ESXi is designed to load its OS into memory and run entirely from there, only writing to disk when the configuration is changed and saved, so there's hardly any point in using NVMe. The marginal boot-time gains aren't going to mean anything, because any company that's serious about nines will be doing live migration. SlowBloke posted:Well the writing was on the wall, hopefully they will not start requiring stupidly big local disks to run.
|
|
# ¿ Oct 4, 2021 13:47 |
Number19 posted:Lol how does something like that make it to production? Of course, that's not something that's unique to VMware, because there's no such thing as too many tests - even if you have unit tests and integration tests for everything, there are combinations of edge-cases that can only be hit by fuzzing, and even then you're not guaranteed 100% coverage.
|
|
# ¿ Oct 23, 2021 13:50 |
CommieGIR posted:Man, say what you want about Xen, but I've never had an update "break" the hypervisors or vms yet.
|
|
# ¿ Oct 23, 2021 23:06 |
I'd like to point out that NFS is supported in Windows nowadays - so maybe that's worth looking into? Mr. Crow posted:Are you trying to make it accessible to the host? If you're just trying to do a full passthrough, you'll want to use virtio. This is how I'm doing it on one of my machines but there are a dozen ways to do it. With the added "benefit" that the block device you're suggesting adds even more buffering between the data and the disks, on top of what Linux already does on its own - which risks data loss.
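To put the virtio suggestion in concrete terms: a whole-disk passthrough under libvirt/KVM looks roughly like this hedged sketch (the device path is a placeholder). Note cache='none', which bypasses the host page cache and so addresses exactly the extra-buffering worry:

```xml
<!-- Hypothetical libvirt <disk> fragment: hand a whole block device to
     the guest via virtio, with host-side caching disabled so writes
     aren't held in the host's page cache on the way to the platter. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE-SERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Addressing disks by /dev/disk/by-id rather than /dev/sdX also keeps the mapping stable across reboots, which matters once more than one disk is involved.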
|
|
# ¿ Nov 4, 2021 13:06 |
Slack3r posted:Hmm.. I forgot about NFS. I have a SAMBA share setup from sdc1 as ext4 fs. All my test Window instances can see that share just fine and are mapped to it. Last time I messed with NFS, I was able to gain root access on a university BSD server in the mid 90s. lol. Was able to mount / on my linux box via dialup. Since I was root locally.... boom done. Good times. I'm still wary of NFS. More than likely, you had access as the 'nobody' user, since that's historically the user that's been used for that - the default mapping of root to UID -2 means that you get errors when trying to mount as root. NFS has changed considerably since then, because NFSv4 no longer sends UIDs/GIDs over the wire and instead uses usernames (well, unless you set vfs.nfsd.enable_stringtouid (or its equivalent on not-FreeBSD), which exists to fix interoperability with NFSv4 implementations that still use UIDs/GIDs over the wire). Also, Rick Macklem, who documented the root-UID mapping described above, is still the maintainer of FreeBSD's NFS implementation to this day. BlankSystemDaemon fucked around with this message at 15:14 on Nov 5, 2021 |
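For reference, the FreeBSD side of all this lives in /etc/exports plus one sysctl. A hedged sketch - the paths and network are made up:

```
# Hypothetical FreeBSD /etc/exports sketch: a single NFSv4 root, with
# a client's root squashed to nobody (the historical UID -2 mapping).
V4: /export -sec=sys
/export/data -maproot=nobody -network 192.0.2.0 -mask 255.255.255.0

# Only if you must interoperate with a v4 peer that still puts raw
# UIDs/GIDs on the wire (see the post above):
#   sysctl vfs.nfsd.enable_stringtouid=1
```

See exports(5) for the full grammar; the V4: line declares the NFSv4 root and doesn't itself export anything.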
|
# ¿ Nov 5, 2021 15:10 |
GreenBuckanneer posted:How is the performance of setting up a linux distro as the main system, firing up some sort of virtualbox on that, installing windows 10 on that, and then giving it as much resources as possible, and gaming on the VM?
|
|
# ¿ Nov 5, 2021 19:33 |
Combat Pretzel posted:I'm setting up my NAS with TrueNAS Scale, which uses Kubernetes. BlankSystemDaemon posted:kubernetes is made for hyperscalers that need massive scale-out orchestration Methanar posted:It's incredible we've progressed from containers being good because they're lightweight and portable to just fuckin shipping entire single-node kubernetes stacks as a distribution mechanism. BlankSystemDaemon fucked around with this message at 01:12 on Dec 9, 2021 |
|
# ¿ Dec 9, 2021 01:07 |
Keito posted:Lightweight compared to running a whole VM, you mean. Everything is relative. Service jails on FreeBSD, where you have a single binary executable and its configuration file(s) in a container, have been a thing since 1999.
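A service jail of the sort described is only a few lines of jail.conf(5). Everything below - name, paths, address, daemon - is hypothetical, just to show the shape:

```
# Hypothetical /etc/jail.conf entry: one daemon binary, its config
# file(s), and nothing else inside the jail's root directory.
mydaemon {
    path = "/jails/mydaemon";
    host.hostname = "mydaemon.example.org";
    ip4.addr = "192.0.2.10";
    exec.start = "/usr/local/sbin/mydaemon";
    mount.devfs;
    persist;
}
```

Started with `jail -c mydaemon`, this gives you roughly what a single-binary OCI container gives you: a chrooted filesystem, its own address, and a constrained process tree - which is the sense in which it's been "a thing since 1999".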
|
|
# ¿ Dec 9, 2021 01:25 |
Mr Shiny Pants posted:It's a poor man's Erlang.
|
|
# ¿ Dec 16, 2021 09:00 |
Mr Shiny Pants posted:Clustering physical machines and running microservices on top of them is Erlang in a nutshell. Erlang is just one VM (BEAM) that can run actors (microservices that pass messages) with supervisors (if you use OTP). You need to program in Erlang but from a technical standpoint they are roughly comparable at what they accomplish. Erlang is elegant though (like really elegant), Kubernetes is not IMHO.
|
|
# ¿ Jan 12, 2022 15:25 |
Bob Morales posted:It's like Microsoft buying HoTMaiL all over again But wait, does this mean Facebook is going to have an apparent face turn like Microsoft did, before doing a slow heel turn like Microsoft is doing?
|
|
# ¿ Jan 12, 2022 22:17 |
The only legitimate reason to restart a computer nowadays is when the kernel binary executable has been updated.
|
|
# ¿ Feb 5, 2022 19:34 |
Broadcom is so bad that they managed to infect Avago: Avago sold off its networking business to Intel, acquired Broadcom, and then renamed itself Broadcom.
|
|
# ¿ May 26, 2022 12:04 |
I don't know Cohesity, but if a backup system doesn't let you practice restoring the data easily and programmatically, so that you can assure yourself that 1) you can do it when the poo poo hits the fan and 2) that it works and isn't just a theoretical backup, then I'm not sure it's worth paying for. If it does both, it's better than a lot of other software you could spend money/time on.
|
|
# ¿ May 29, 2022 23:11 |