|
Is anyone else here running Ovirt? I'm having a bitch of a time getting it set up. I only have a single host to work with which makes parts of it a bit difficult since you can't turn off some of the HA stuff with the hosted engine and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as nfs shares and then mount them on the host to satisfy the network storage requirement of the hosted engine but I have all kinds of problems with storage disconnecting from the host and being unable to reactivate. I've done most of it on Cent7 but tried oVirt node last night unsuccessfully, it wouldn't even initialize vdsm properly and systemd kept masking lvm2-lvmetad despite my best effort to unmask and enable it. In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my Perc6i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual nas enclosure instead and pulled most my local storage to that. Mostly I'm tired and frustrated and just wanted to piss and moan about it - should I just give up on oVirt on a single host? What would be a better option? I already have experience with VMware and Hyper-V and wanted to try something new. Proxmox? Xenserver?
|
# ? Sep 26, 2017 17:37 |
|
|
|
In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted controller) with a NAS of some sort (preferably NFS). Outside of that config, you rapidly start running into weird issues and corner cases that someone hasn't programmed for.
|
# ? Sep 26, 2017 19:40 |
|
unknown posted:In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted controller) with a NAS of some sort (preferably NFS).

It can do self-hosted pretty easily, but it's definitely more common to see a multiple-node deployment.

The Nards Pan posted:Is anyone else here running oVirt? I'm having a bitch of a time getting it set up. I only have a single host to work with, which makes parts of it a bit difficult since you can't turn off some of the HA stuff with the hosted engine, and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as NFS shares and then mount them on the host to satisfy the network storage requirement of the hosted engine, but I have all kinds of problems with storage disconnecting from the host and being unable to reactivate.

vdsm expects to be able to reconfigure the network, and this probably disconnects you from the NFS storage. Then you need to wait for sanlock to recover. I suspect that if you look at /var/log/sanlock/sanlock.log, you'll see that it's busy trying to acquire a lock after it disconnects, which can take minutes. Use Gluster. Or, honestly, just use oVirt Live, which is designed for an "all in one" scenario.

The Nards Pan posted:I've done most of it on Cent7 but tried oVirt Node last night unsuccessfully; it wouldn't even initialize vdsm properly, and systemd kept masking lvm2-lvmetad despite my best efforts to unmask and enable it.

oVirt Node runs "vdsm-tool configure --force" as part of startup, so it should definitely be configured. If for whatever reason your clock was behind and you didn't configure ntp/chrony at installation, the vdsm certificate may have a "Not before:" date in the future. There's currently a patch up to resolve this, but it probably won't merge until oVirt 4.1.7, and it's a bit of an edge case anyway. If it isn't that, have you sent a mail to users@ovirt.org? vdsm should definitely work out of the box.

The Nards Pan posted:In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my Perc6i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual NAS enclosure instead and pulled most of my local storage over to that.

Use Gluster for a single host. Though, honestly, if you just want a single host, use plain KVM with Kimchi or some other frontend. There's no point to running most 'enterprise' virtualization on a single host, unless you want to go down the rabbit hole of creating nested VMs for labbing (which also works fine).
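If you want to sanity-check the clock theory before blaming the hardware, something like this on the host will show it (assuming the default vdsm PKI path; adjust if your install keeps the cert elsewhere):

code:
# is sanlock still trying to (re)acquire its lockspace after the storage blip?
sanlock client status

# a "notBefore" date in the future here means the clock was behind when the cert was generated
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates

# after fixing the clock/NTP, redo what oVirt Node does at startup and bounce vdsm
vdsm-tool configure --force
systemctl restart vdsmd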
|
# ? Sep 26, 2017 20:26 |
|
Wow, thank you for all that. You were actually right about the time thing causing strange errors, including lvm2-lvmetad being masked. I had updated my iDRAC firmware recently and the time was off, which was causing the hosted-engine installation on oVirt Node to fail early on. I had configured NTP, but for some reason the network wasn't coming up correctly either. Fixing both of those let the install continue.

Although I'd read about Gluster, I've never tinkered with it. I tried quickly setting up a volume with my current Node install, now that I was able to actually get to the option to select storage - unfortunately it wouldn't let me install to the volume because it's not using replica 3. From what I understand, that means it's not set up in a Gluster cluster with at least 3 hosts. Obviously I understand why you'd want this for any production VM environment, but I can't really lab that unless I set up dummy 5GB volumes on CentOS VMs on my laptop, which seems kinda silly.

I appreciate the advice. I think I'm going to play with a different hypervisor for a while. I've used plenty of pure KVM and virt-manager on an Ubuntu jump box, but I wanted to lab with some enterprise virtualization solutions now that I have an actual host, and it seems like I still need some dedicated network storage first. I'll check out oVirt Live too. For now I'm ready to be done with this and actually deploy some machines, so I'll probably drop back to Hyper-V Server just so I can keep doing something with Windows. Apparently Hyper-V Server 2016 handles nested virtualization pretty well, so maybe I'll build my oVirt/Gluster cluster in there.
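If I circle back to it, this is roughly the single-brick lab volume I was trying to create (volume name, hostname, and brick path are placeholders, and from what I've read oVirt may still refuse a replica-1 volume for the hosted engine without extra config):

code:
# one-host, one-brick Gluster volume for lab use only -- no redundancy at all
mkdir -p /gluster/brick1/vmstore
gluster volume create vmstore ovirt-node.lab:/gluster/brick1/vmstore force   # 'force' because the brick lives on the OS disk
gluster volume set vmstore group virt              # apply the virt tuning profile oVirt expects
gluster volume set vmstore storage.owner-uid 36    # vdsm
gluster volume set vmstore storage.owner-gid 36    # kvm
gluster volume start vmstore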
|
# ? Sep 30, 2017 09:49 |
|
Oracle Cloud Migration Bombshell #1: My boss, who had advocated against this move, or at least for letting our department evaluate Oracle Cloud as a hosting solution before committing to it, was terminated with prejudice today. He was greeted at the door by security, who collected his things and walked him out. While not 100% the cause of his firing, I feel that his opposition to the move was on a long list of "issues" that management had with him. And by issues I mean completely valid complaints about how our supposedly security-focused company chooses to implement things.

Wicaeed fucked around with this message at 21:54 on Oct 5, 2017 |
# ? Oct 5, 2017 21:52 |
|
So you're looking, right?
|
# ? Oct 5, 2017 22:13 |
|
Make sure you have his contact info. Maybe you can ride his coattails on out of that trainwreck.
|
# ? Oct 5, 2017 22:27 |
|
For whatever it's worth, I've been told by people inside the company that Dyn is basically taking over Oracle's corporate culture on the services side, in sort of the same way that Pixar got paid a lot of money to take over Disney's whole animation division. Hopefully it turns out better than it's been.
|
# ? Oct 6, 2017 02:01 |
|
Or they'll jump on the serverless hype train and push it as the solution for everything
|
# ? Oct 6, 2017 02:58 |
|
evol262 posted:Or they'll jump on the serverless hype train and push it as the solution for everything

FaaS is babby nonsense but serverless is gonna eat the world. RedShift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins. (Like everything forward-looking that I post, hedge this with "in 10-15 years".)

Vulture Culture fucked around with this message at 14:08 on Oct 6, 2017 |
# ? Oct 6, 2017 14:06 |
|
Athena is unbelievably, hilariously expensive to actually use. Redshift isn't serverless. EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.
|
# ? Oct 6, 2017 15:01 |
|
Vulture Culture posted:FaaS is babby nonsense but serverless is gonna eat the world. RedShift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins.

I don't disagree with this at all. I was primarily referring to AWS Lambda, Serverless (TM), and other FaaS providers. I'm all about PaaS, and FaaS fits a use case that PaaS and/or container-oriented microservices don't, but the sudden surge in press is hilarious. "Everyone can be a developer now!", like it's any easier than it already was with basic containers. Providing an endpoint which does nothing but spin up a tiny unikernel and execute my code is neat, and infinitely more practical than any other options for some stuff, but not the end-all-be-all it's touted as.

Athena/Redshift/BigQuery are different beasts entirely, though I'd honestly expect them to be (eventually) consumed by something more like Firebase.
|
# ? Oct 6, 2017 17:01 |
|
jre posted:Athena is unbelievably, hilariously expensive to actually use.

jre posted:Redshift isn't serverless.

jre posted:EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.

Vulture Culture fucked around with this message at 18:00 on Oct 6, 2017 |
# ? Oct 6, 2017 17:54 |
|
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2151061

Thank the lord. A client on vSAN also uses Veeam with CBT enabled, and two or three times a week I've had to fix file issues with a clone job + delete the old VM. My ticket on this has been open for a while. Let me tell you, there's nothing quite like typing "rm -rf /vol/path/to/vsan/[thumbprint]" into a production ESXi host. No, pinkie finger, stay the gently caress away from the Enter key.

Potato Salad fucked around with this message at 16:04 on Oct 9, 2017 |
# ? Oct 9, 2017 16:00 |
|
For VMware, I've got a VM that uses a ton of compute resources on the weekends, and almost none during the week. To perform the calculations in a reasonable amount of time, it needs a ton of CPUs, but obviously this ends up leading to wait time issues if there are other VMs on the host. My tentative solution is to give a CPU reservation to the VM to make sure it always has those CPUs available and other VMs can wait. First: Is there a better solution for this? I'd feel dumb if there was some new feature that was designed for this. Second: I'm currently writing a script to see if I can set the reservation to 100% on weekends, 0% during the week. Is there an easy way to do this so I don't reinvent the wheel?
|
# ? Oct 10, 2017 19:01 |
|
Give it the bunch of cores it needs, but reduce its CPU shares, either manually on the VM or by dumping it in a low-share resource pool, so the other VMs can win if they need to. A reservation is just going to lock up all those cores on your host 24/7 and be a big waste. It'll probably still be fast enough on the weekends unless the other VMs are very CPU-busy, and it will keep the big VM from causing excessive wait on the other VMs during the week. Alternately, get comfortable with the PowerShell API and schedule some jobs to either shutdown/re-allocate/restart the VM with a new core count, or dynamically adjust the share count on a schedule.
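If you go the scheduled route, a very rough PowerCLI sketch (vCenter address and VM name are placeholders; run it as a scheduled task on a management box with PowerCLI installed):

code:
# flip CPU shares for the batch VM: High on weekends, Low during the week
Connect-VIServer -Server vcenter.example.local   # will prompt for creds unless you've stored them
$vm = Get-VM -Name 'weekend-cruncher'
$level = if ((Get-Date).DayOfWeek -in 'Saturday','Sunday') { 'High' } else { 'Low' }
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -CpuSharesLevel $level
Disconnect-VIServer -Confirm:$false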
|
# ? Oct 10, 2017 19:25 |
|
This is also an ideal cloud workload. I haven't looked but I'd also suspect there has to be some sort of batch job scheduler for VMware that is used for big data.
|
# ? Oct 11, 2017 00:24 |
|
Posted in the GPU thread, but I feel I should ask here too. I know it's been posted before about people doing something similar, but I was wondering how difficult it would be, and what hardware might be needed, to host a Linux or Windows server running a single game for 10-12 clients through RDP. The game is RealFlight 7.5, but the requirements haven't changed much. The clients are literally the lowest-end All-In-Ones that can just barely run the game now, and in certain scenes and in multiplayer they fall on their face. I was hoping to scale this setup by keeping the AIOs and having a single server do all the heavy lifting, something I can just copy for future sites.

One user in the GPU thread mentioned running Hyper-V with a login screen, VM spinup, and hosting up to 20 CS 1.6 clients, which should be easily more than I need and sounds like exactly what I need. However, all of the searching I've done turns up mainly RemoteFX stuff from 2012, give or take. The other obstacle I see is getting the game to work with the dongle and R/C remote it uses. Could the game input be set to detect what the remote client has connected, to keep that working as it should? I might have to reach out to the RealFlight guys to see if they have any server-hosted setup experience already.
|
# ? Oct 12, 2017 01:51 |
|
EdEddnEddy posted:Posted in the GPU thread, but I feel I should ask here too.

Dehumanize yourself and face to Remote Desktop Licensing
|
# ? Oct 12, 2017 01:56 |
|
anthonypants posted:Dehumanize yourself and face to Remote Desktop Licensing Hate to empty quote but RDS licensing is hell.
|
# ? Oct 12, 2017 02:05 |
|
Da Mott Man posted:Hate to empty quote but RDS licensing is hell. It was you I quoted in the GPU thread. Can you expand on your CS 1.6 Lan Party setup? That sounded literally like my exact need, but I am still unsure if it is possible.
|
# ? Oct 12, 2017 02:09 |
|
EdEddnEddy posted:It was you I quoted in the GPU thread. Can you expand on your CS 1.6 Lan Party setup? That sounded literally like my exact need, but I am still unsure if it is possible. It was a Supermicro x9drg-hf with 2x RX470 GPUs installed as the virtualization host. Remote Desktop Services with remotefx, on WS2012R2. Clients were W8.1Ent. EDIT: Other infrastructure for RDS was borrowed from other servers on the network, SQL Server, AD, disks hosted with SOFS over infiniband, etc... Da Mott Man fucked around with this message at 02:27 on Oct 12, 2017 |
# ? Oct 12, 2017 02:22 |
|
anthonypants posted:Dehumanize yourself and face to Remote Desktop Licensing Would per-device CALs be simple enough in this situation, since it sounds like there is a specific set of computers that would connect to it?
|
# ? Oct 12, 2017 23:15 |
|
Yep. Either you license all the devices that will connect or all the users. Whichever is lower.
|
# ? Oct 13, 2017 04:40 |
|
An update to Flash broke the vSphere client (it tries to load but it crashes Flash; e.g. you get the error "Shockwave Flash has crashed" in Chrome), and the workaround is to just use the old version of Flash player. Incidentally, Adobe admits that the older version of Flash player is known to have exploits in the wild, which is why Flash got updated in the first place. Also per Adobe, there should be a beta version of the Flash update next week.
|
# ? Oct 18, 2017 00:29 |
|
anthonypants posted:An update to Flash broke the vSphere client (it tries to load but it crashes Flash; e.g. you get the error "Shockwave Flash has crashed" in Chrome), and the workaround is to just use the old version of Flash player. Incidentally, Adobe admits that the older version of Flash player is known to have exploits in the wild, which is why Flash got updated in the first place. Well that explains why I've been having nothing but headaches with one of the companies I consult with. Thanks for posting this! Thankfully I have another jump box over there with an older version of flash.
|
# ? Oct 18, 2017 18:43 |
|
TheFace posted:Well that explains why I've been having nothing but headaches with one of the companies I consult with. Thanks for posting this! Thankfully I have another jump box over there with an older version of flash.
|
# ? Oct 23, 2017 22:22 |
|
Here's a neat paper/project wherein they make VMs with resource usage on par with (or better than!) containers. Basically it sounds useful where isolation is important. It was a pretty interesting read, particularly where they dove into where the bottlenecks in current VM creation, booting, etc. actually lie.
|
# ? Nov 2, 2017 19:40 |
|
Anyone else can't get to the VMware HOL (https://labs.hol.vmware.com/)? I keep getting SSL errors, with Chrome and Firefox
|
# ? Nov 3, 2017 21:35 |
|
Borked here too.

Edit: https://www.htbridge.com/ssl/?id=UB8xDJ3A

There's no HTTPS, so turn off HTTPS Everywhere or any equivalent.

Thanks Ants fucked around with this message at 21:54 on Nov 3, 2017 |
# ? Nov 3, 2017 21:52 |
|
Thanks Ants posted:turn off HTTPS Everywhere

Good call, this was it. Thanks Thanks Ants!
|
# ? Nov 3, 2017 22:18 |
|
Alfajor posted:Good call, this was it. Thanks Thanks Ants! Thants
|
# ? Nov 4, 2017 05:58 |
|
Thermopyle posted:Here's a neat paper/project wherein they make VMs with resource usage on par with (or better than!) containers.

quote:Despite its compelling performance, LightVM is still not as easy to use as containers. Container users can rely on a large ecosystem of tools and support to run unmodified existing applications.

Like... yeah. That's kind of a big caveat. They did good work to optimize Xen, and basically remade an even-more-minimal-ContainerLinux (nee CoreOS), but practical development of any system to run on their LightVM is still gonna be hampered by the inadequate tooling.

With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.

They also only compared with Docker without digging into profiling Docker itself. I couldn't even find any description of the Docker image they used, so it's not clear how long Docker would take to unpack it and set up the CoW system, which could have contributed significantly to the boot time. It'd probably be fairer to also compare against rkt, runc, and systemd-nspawn.

Still, I'm glad I took a look at the paper, lots of interesting stuff about hypervisor internals I was unaware of.
|
# ? Nov 5, 2017 07:57 |
|
minato posted:Like... yeah. That's kind of a big caveat. They did good work to optimize Xen, and basically remade an even-more-minimal-ContainerLinux (nee CoreOS), but practical development of any system to run on their LightVM is still gonna be hampered by the inadequate tooling.

minato posted:With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.

minato posted:They also only compared with Docker without digging into profiling Docker itself. I couldn't even find any description of the Docker image they used, so it's not clear how long Docker would take to unpack it and set up the CoW system, which could have contributed significantly to the boot time. It'd probably be fairer to also compare against rkt, runc, and systemd-nspawn.

I'm basically taking this as "in 2 years, we expect that there will be a container-like solution which leverages virtualization for more isolation"
|
# ? Nov 5, 2017 14:48 |
|
minato posted:With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.

They mimicked container workloads. Logging and telemetry in containers is handled from the host level.

That said, you're correct about the difficulty this presents in debugging or even getting an interactive shell, whereas with a container it's as simple as injecting a new process into the namespace.
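For comparison, getting a shell into a running container needs nothing baked into the image (container name and PID below are just illustrative):

code:
# exec into an existing container's namespaces
docker exec -it my-container sh
# or join the namespaces directly, without the docker CLI
nsenter --target 12345 --mount --uts --ipc --net --pid sh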
|
# ? Nov 5, 2017 15:07 |
|
evol262 posted:They mimicked container workloads. Logging and telemetry in containers is handled from the host level. This is just more tooling missing.

Yeah, which I think underscores my point about how impractical unikernels are. If the VM image is optimized for a single process, then it basically precludes adding those support tools & sidecars which will limit the types of applications this is good for.

They acknowledge this, and their solution is to use a minimal Linux. But this doesn't completely solve the issue either, because adding those support tools to the VM image will bloat it and negate many of the boot-time perf improvements they got by stripping it down. I think they acknowledge this too, which is why they focused on such narrow workload types. Perhaps the title should have been "My VM boots faster Lighter (and is Safer) than your Container, as long as you don't need any sidecars lol".

evol262 posted:I'm basically taking this as "in 2 years, we expect that there will be a container-like solution which leverages virtualization for more isolation"

This sounds about right. As I see it, isolation is on a spectrum. Containers start from "no isolation" and can easily choose to add "cladding" onto themselves to achieve the level of isolation they're happy with. VMs are on the far right of the spectrum and don't have any choice, which is both their value-add (for security) and a curse (for lack of resource sharing). So perhaps advancements could be made to allow VMs to move a little further left down the spectrum.

Looking much further ahead to the future, serverless/lambda is likely to be much bigger, and so this tech could be useful for multi-tenant public cloud providers who want fast-booting VMs with high security.
|
# ? Nov 5, 2017 18:57 |
|
Yeah, I don't think anyone should read that and think "oh, let's deploy this". Like most academic papers, the subject is not ready for prime time.
|
# ? Nov 5, 2017 19:26 |
|
minato posted:Yeah, which I think underscores my point about how impractical unikernels are. If the VM image is optimized for a single process, then it basically precludes adding those support tools & sidecars which will limit the types of applications this is good for.

As noted, this is already the case for containers. There's a single entrypoint, and best practice is that this maps to a single process (instead of some forking script).

minato posted:They acknowledge this, and their solution is to use a minimal Linux. But this doesn't completely solve the issue either, because adding those support tools to the VM image will bloat it and negate many of the boot-time perf improvements they got by stripping it down.

I'm not understanding why "support tools" are needed here. Presumably, if this lands in a project, those support tools will live in the project framework, other lightweight VMs, or similar. See Clear Containers, kubevirt, and a variety of other projects doing stuff similar to this right now.

minato posted:I think they acknowledge this too, which is why they focused on such narrow workload types. Perhaps the title should have been "My VM boots faster Lighter (and is Safer) than your Container, as long as you don't need any sidecars lol".

I mean, yes, but the "sidecars" would also boot quicker and be more secure. Plus containers need exactly the same use case anyway.

minato posted:This sounds about right. As I see it, isolation is on a spectrum. Containers start from "no isolation" and can easily choose to add "cladding" onto themselves to achieve the level of isolation they're happy with. VMs are on the far right of the spectrum and don't have any choice, which is both their value-add (for security) and a curse (for lack of resource sharing). So perhaps advancements could be made to allow VMs to move a little further left down the spectrum.

minato posted:Looking much further ahead to the future, serverless/lambda is likely to be much bigger and so this tech could be useful for multi-tenant public cloud providers who want fast booting VMs with high security.

Serverless is ok, but doesn't fundamentally differ from PaaS in a lot of ways, and the amount of times a business says "well, I really need this API, but I can't be bothered containerizing it and I'm ok with a high turnaround time" is low. FaaS fits for a very narrow range of cases compared to "how do I lock down something like a container while retaining almost all of the benefits."

Not trying to be argumentative, but it seems that you're reading this and saying, "yeah, but a unikernel can't replace my varnish+nginx machine with VMware tools". You're right. It can't. But that's not what they're gunning for. The question is "can this replace my nginx container without worrying that a vulnerability which allows remote code execution will let an attacker break out of the namespace and see my other poo poo". And it can do that.
|
# ? Nov 5, 2017 22:23 |
|
evol262 posted:I'm not understanding why "support tools" are needed here. Presumably, if this lands in a project, those support tools will live in the project framework, other lightweight VMs, or similar. See Clear Containers, kubevirt, and a variety of other projects doing stuff similar to this right now.

quote:I'm firmly meh on serverless having an impact for 99% of us. This tech could be/is useful for providers who want the speed and single-mindedness of containers with the security of VMs. That's where it goes.

As I see it, modern apps need to be (**waves hands mysteriously**) "cloud native", and combined with the whole DevOps principle of "you wrote it, you help run it", there's been a push to get Software Engineers to understand this massive ever-growing stack of IaaS/PaaS technology (previously the domain of Ops) just to get their feature out the door. But in my experience SWEs don't want to know all that stuff, they just want to push a button and get their feature into prod. They're actively resistant to learning about how the sausage is made. They just want a magic "PaaS 2.0" where they click a button and get a deployment pipeline, telemetry, logs, alerts, & reliability. They don't want to know anything about configuration, auto-scaling, backups, availability-zones, security, load-balancing or service meshes; that's just an opaque implementation detail to them. And I can see their point. I see the handwavey concept of serverless as quite appealing to them because it's a step closer to that utopia, even if there are still very restrictive caveats and Ops will never be able to hide every detail from them. So I see a strong incentive to develop serverless in that direction.
|
# ? Nov 6, 2017 00:00 |
|
|
|
minato posted:But in my experience SWEs don't want to know all that stuff, they just want to push a button and get their feature into prod. They're actively resistant to learning about how the sausage is made. They just want a magic "PaaS 2.0" where they click a button and get a deployment pipeline, telemetry, logs, alerts, & reliability. They don't want to know anything about configuration, auto-scaling, backups, availability-zones, security, load-balancing or service meshes; that's just an opaque implementation detail to them. And I can see their point. This is exactly why serverless or whatever it evolves into is going to be big.
|
# ? Nov 6, 2017 00:37 |