The Nards Pan
Jun 10, 2009

Goodbye Galaxy!


Unlockable Ben

Is anyone else here running oVirt? I'm having a bitch of a time getting it set up. I only have a single host to work with, which makes parts of it a bit difficult, since you can't turn off some of the HA stuff with the hosted engine and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as NFS shares and then mount them on the host to satisfy the hosted engine's network storage requirement, but I have all kinds of problems with storage disconnecting from the host and refusing to reactivate.
I've done most of it on CentOS 7, but I tried oVirt Node last night unsuccessfully: it wouldn't even initialize vdsm properly, and systemd kept masking lvm2-lvmetad despite my best efforts to unmask and enable it.
In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my PERC 6/i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual NAS enclosure instead and moved most of my local storage to that.
Mostly I'm tired and frustrated and just wanted to piss and moan about it - should I just give up on oVirt on a single host? What would be a better option? I already have experience with VMware and Hyper-V and wanted to try something new. Proxmox? XenServer?

unknown
Nov 16, 2002
Ain't got no stinking title yet!

In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted controller) with a NAS of some sort (preferably NFS).

Outside of that config, you rapidly start running into weird issues and corner cases that nobody has programmed for.

evol262
Nov 30, 2010
#!/usr/bin/perl

unknown posted:

In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted controller) with a NAS of some sort (preferably NFS).

Outside of that config, you rapidly start running into weird issues and corner cases that nobody has programmed for.

It can do self-hosted pretty easily, but it's definitely more common to see a multiple-node deployment.


The Nards Pan posted:

Is anyone else here running oVirt? I'm having a bitch of a time getting it set up. I only have a single host to work with, which makes parts of it a bit difficult, since you can't turn off some of the HA stuff with the hosted engine and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as NFS shares and then mount them on the host to satisfy the hosted engine's network storage requirement, but I have all kinds of problems with storage disconnecting from the host and refusing to reactivate.
ovirt-hosted-engine-ha doesn't actually care about the "ha" bits on single-host deployments. You'd honestly be better off using gluster for local storage than trying to export NFS back to the same host; gluster works seamlessly for this.

vdsm expects to be able to reconfigure the network, and that probably disconnects you from the NFS storage. Then you have to wait for sanlock to recover. I suspect that if you look at /var/log/sanlock/sanlock.log, you'll see it's busy trying to acquire a lock after the disconnect, which can take minutes. Use gluster.
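If you want to try the gluster route, a single-host volume for hosted-engine storage is only a handful of commands. A rough sketch, assuming CentOS 7 - the volume name and brick path are just examples, and the uid/gid 36 bits are vdsm/kvm ownership:

code:
# One-host gluster volume for hosted-engine storage (sketch only).
yum install -y glusterfs-server
systemctl enable glusterd
systemctl start glusterd

mkdir -p /gluster/engine/brick

# Single brick, no replication. 'force' is needed because the brick
# sits on the root filesystem in this sketch.
gluster volume create engine $(hostname -f):/gluster/engine/brick force
gluster volume start engine

# The 'virt' group applies the options oVirt expects (sharding etc.),
# and vdsm (uid 36) / kvm (gid 36) need to own the volume.
gluster volume set engine group virt
gluster volume set engine storage.owner-uid 36
gluster volume set engine storage.owner-gid 36

# NB: hosted-engine may insist on replica 3 by default; vdsm has a
# knob for that (allowed_replica_counts).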

Or, honestly, just use oVirt Live, which is designed for an "all in one" scenario.

The Nards Pan posted:

I've done most of it on CentOS 7, but I tried oVirt Node last night unsuccessfully: it wouldn't even initialize vdsm properly, and systemd kept masking lvm2-lvmetad despite my best efforts to unmask and enable it.
vdsm doesn't behave with lvmetad anyway. This isn't specific to oVirt Node. See this bug. The whole thing got whacked because vdsm (for historical reasons) creates a VG for storage, and lvmetad can do really weird things with presented FC/iSCSI LUNs that actually have guests on them. Plus, systems with 3000+ LUNs zoned to the server (don't ask me why) made lvmetad take forever to start up, which is also a problem on bare CentOS/RHEL/Fedora.
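The configurator handles this for you, but if you want to see (or hand-replicate) roughly what it enforces for lvm, this is the gist, not the literal implementation:

code:
# Tell lvm not to use the daemon...
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf

# ...and keep systemd from socket-activating it behind your back.
# Masking only the service isn't enough; mask the socket too.
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket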

oVirt Node runs "vdsm-tool configure --force" as part of startup, so it should definitely be configured. If for whatever reason your clock was behind and you didn't configure ntp/chrony at installation, the vdsm certificate may have a "Not before:" date in the future. There's currently a patch up to resolve this, but it probably won't merge until oVirt 4.1.7, and it's a bit of an edge case anyway. If it isn't that, have you sent a mail to users@ovirt.org? vdsm should definitely work out of the box.
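Checking for the clock/cert mismatch is quick - the cert path below is the usual vdsm location, so treat it as a best guess:

code:
# Compare the host clock against the vdsm cert validity window.
date
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates

# If notBefore is in the future, force a time sync and redeploy:
chronyc makestep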

The Nards Pan posted:

In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my PERC 6/i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual NAS enclosure instead and moved most of my local storage to that.
Mostly I'm tired and frustrated and just wanted to piss and moan about it - should I just give up on oVirt on a single host? What would be a better option? I already have experience with VMware and Hyper-V and wanted to try something new. Proxmox? XenServer?

Use gluster for a single host. Though, honestly, if you just want a single host, use plain KVM with Kimchi or some other frontend. There's no point in running most 'enterprise' virtualization on a single host, unless you want to go down the rabbit hole of creating nested VMs for labbing (which also works fine).
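For scale, the plain-KVM route on CentOS 7 is about this much work - the VM name, sizes, and ISO path are placeholders:

code:
yum install -y qemu-kvm libvirt virt-install
systemctl enable libvirtd
systemctl start libvirtd

# One disposable lab VM; adjust everything to taste.
virt-install \
  --name lab-vm1 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40 \
  --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso \
  --os-variant centos7.0 \
  --network network=default \
  --graphics vnc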

The Nards Pan
Jun 10, 2009

Goodbye Galaxy!


Unlockable Ben

Wow, thank you for all that.

You were actually right about the time thing causing the strange errors with lvm2-lvmetad being masked. I had updated my iDRAC firmware recently and the time was off, which was causing the hosted-engine installation on oVirt Node to fail early. I had configured NTP, but for some reason the network wasn't coming up correctly either. Fixing both of those let the install continue.

Although I'd read about Gluster, I've never tinkered with it. I tried quickly setting up a volume with my current Node install, now that I could actually get to the option to select storage - unfortunately it wouldn't let me install to the volume because it's not using replica 3. From what I understand, that means it's not set up in a Gluster cluster with at least 3 hosts. Obviously I understand why you'd want that for any production VM environment, but I can't really lab it unless I set up dummy 5GB volumes on CentOS VMs on my laptop, which seems kinda silly.
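(Digging around after the fact, it sounds like the replica 3 check can be relaxed for a lab. Supposedly it's something like this in /etc/vdsm/vdsm.conf - I haven't tried it yet, I'm just going off mailing list posts:)

code:
# Untested: allegedly vdsm can be told to accept non-replica-3
# gluster volumes with this stanza, followed by a vdsmd restart.
[gluster]
allowed_replica_counts = 1,3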

I appreciate the advice; I think I'm going to play with a different hypervisor for a while. I've used plenty of pure KVM and virt-manager on an Ubuntu jump box, but I wanted to lab with some enterprise virtualization solutions now that I have an actual host. It seems like I still need dedicated network storage first, though. I'll check out oVirt Live too.

For now I'm ready to be done with this and actually deploy some machines, so I'll probably drop back to Hyper-V Server just so I can keep doing something with Windows. Apparently Hyper-V Server 2016 handles nested virtualization pretty well, so maybe I'll build my oVirt/gluster cluster in there.
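(From what I've read, nested virt on 2016 is just a per-VM switch from PowerShell, with the VM powered off - the VM name here is a placeholder:)

code:
Set-VMProcessor -VMName "ovirt-node-1" -ExposeVirtualizationExtensions $true

# Nested guests also need MAC spoofing on the vNIC for networking:
Set-VMNetworkAdapter -VMName "ovirt-node-1" -MacAddressSpoofing On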

Wicaeed
Feb 8, 2005


Oracle Cloud Migration Bombshell #1:

My boss, who had advocated against this move (or at least for letting our department evaluate Oracle Cloud as a hosting solution before the move), was terminated with prejudice today. He was greeted at the door by security, who collected his things and walked him out.

While it wasn't 100% the cause of his firing, I feel his opposition to the move was one item on a long list of "issues" management had with him. And by issues I mean completely valid complaints about how our supposedly security-focused company chooses to implement things.

Wicaeed fucked around with this message at Oct 5, 2017 around 20:54

Thanks Ants
May 21, 2004

Bless you, ants. Blants.




Fun Shoe

So you're looking, right?

Dr. Arbitrary
Mar 15, 2006

You're trying to say that you like DOS better then me, right?



Bleak Gremlin

Make sure you have his contact info. Maybe you can ride his coattails on out of that trainwreck.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


For whatever it's worth, I've been told by people inside the company that Dyn is basically taking over Oracle's corporate culture on the services side, in sort of the same way that Pixar got paid a lot of money to take over Disney's whole animation division. Hopefully it turns out better than it's been.

evol262
Nov 30, 2010
#!/usr/bin/perl

Or they'll jump on the serverless hype train and push it as the solution for everything

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


evol262 posted:

Or they'll jump on the serverless hype train and push it as the solution for everything
FaaS is babby nonsense, but serverless is gonna eat the world. Redshift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins.

(Like everything forward-looking that I post, hedge this with "in 10-15 years".)

Vulture Culture fucked around with this message at Oct 6, 2017 around 13:08

jre
Sep 2, 2011

To the cloud ?





Athena is unbelievably, hilariously expensive to actually use.
Redshift isn't serverless.
EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.

evol262
Nov 30, 2010
#!/usr/bin/perl

Vulture Culture posted:

FaaS is babby nonsense, but serverless is gonna eat the world. Redshift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins.

(Like everything forward-looking that I post, hedge this with "in 10-15 years".)

I don't disagree with this at all. I was primarily referring to AWS Lambda, Serverless (TM), and other FaaS providers.

I'm all about PaaS, and FaaS fits a use case that PaaS and/or container-oriented microservices don't, but the sudden surge in press is hilarious. "Everyone can be a developer now!", like it's any easier than it already was with basic containers.

Providing an endpoint which does nothing but spin up a tiny unikernel and execute my code is neat, and infinitely more practical than any other options for some stuff, but not the end-all-be-all it's touted as.

Athena/Redshift/BigQuery are different beasts entirely, though I'd honestly expect them to be (eventually) consumed by something more like Firebase.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


jre posted:

Athena is unbelievably, hilariously expensive to actually use.
Most new cloud stuff is until it's fully commoditized. I'm not talking about what's the best option for businesses today.

jre posted:

Redshift isn't serverless.
Technically true, but this difference is pretty much completely insignificant if you're using Spectrum, and at some point that argument is just semantics over "what even is capacity management".

jre posted:

EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.
I agree that it is not serverless, which is why I never claimed that EMR is serverless!

Vulture Culture fucked around with this message at Oct 6, 2017 around 17:00

Potato Salad
Oct 23, 2014




Tortured By Flan

https://kb.vmware.com/selfservice/m...ernalId=2151061

Thank the lord. A client on vSAN also uses Veeam with CBT enabled, and two or three times a week I've had to fix file issues with a clone job plus deleting the old VM. My ticket on this has been open for a while.

Let me tell you, there's nothing quite like typing "rm -rf /vol/path/to/vsan/[thumbprint]" into a production esxi host. No, pinkie finger, stay the gently caress away from the Enter key.

Potato Salad fucked around with this message at Oct 9, 2017 around 15:04

Dr. Arbitrary
Mar 15, 2006

You're trying to say that you like DOS better then me, right?



Bleak Gremlin

For VMware, I've got a VM that uses a ton of compute resources on the weekends, and almost none during the week.

To perform the calculations in a reasonable amount of time, it needs a ton of CPUs, but obviously this ends up leading to wait time issues if there are other VMs on the host.

My tentative solution is to give a CPU reservation to the VM to make sure it always has those CPUs available and other VMs can wait.

First: Is there a better solution for this? I'd feel dumb if there was some new feature that was designed for this.

Second: I'm currently writing a script to see if I can set the reservation to 100% on weekends, 0% during the week. Is there an easy way to do this so I don't reinvent the wheel?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Give it the full set of cores, but reduce its CPU shares, either manually on the VM or by dumping it in a low-share resource pool, so the other VMs can win if they need to. A reservation is just going to lock up all those cores on your host 24/7 and be a big waste. It'll probably still be fast enough on the weekends unless the other VMs are very CPU-busy, and the low shares will keep it from causing excessive wait on the other VMs during the week. Alternatively, get comfortable with the PowerShell API and schedule some jobs to either shutdown/reallocate/restart the VM with a new core count or dynamically adjust the share count on a schedule; there's a sketch of the latter below.
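A rough PowerCLI sketch of the scheduled version - the server and VM names are placeholders; run this from a weekday scheduled task and an inverted copy (High shares, or your reservation) on Friday evening:

code:
# Weekday variant: low shares, no reservation, other VMs win.
Connect-VIServer -Server vcenter.example.local
$vm = Get-VM -Name "weekend-cruncher"
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -CpuSharesLevel Low -CpuReservationMhz 0
# Weekend variant flips these, e.g. -CpuSharesLevel High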

Punkbob
Mar 26, 2015


This is also an ideal cloud workload. I haven't looked, but I'd suspect there has to be some sort of batch job scheduler for VMware that's used for big data.

EdEddnEddy
Apr 5, 2012



Posted in the GPU thread but I also feel I should ask here.

I know it's been posted before about people doing similar things, but I was wondering how difficult it would be, and what hardware might be needed, to have either a Linux or Windows server host a single game running on 10-12 clients through RDP. The game is RealFlight 7.5, and its requirements haven't changed much.

The clients are literally the lowest-end all-in-ones that can only just run the game now, and in certain scenes and in multiplayer they fall on their faces. I was hoping to scale this setup by keeping the AIOs and having a single server do all the heavy lifting, which I could then copy for future sites.

One user in the GPU thread mentioned running Hyper-V with a login screen and VM spinup, hosting up to 20 CS 1.6 clients, which is easily more capacity than I need and sounds like exactly the setup I'm after. However, all my searching seems to turn up mainly RemoteFX stuff from around 2012.

The other obstacle, I think, is getting the game to work with the dongle and R/C remote it uses. Could the game's input be set to detect what the remote client has connected, so that keeps working as it should? I might have to reach out to the RealFlight guys to see if they already have any experience with server-hosted setups.

anthonypants
May 6, 2007



Dinosaur Gum

EdEddnEddy posted:

Posted in the GPU thread but I also feel I should ask here.

I know it's been posted before about people doing similar things, but I was wondering how difficult it would be, and what hardware might be needed, to have either a Linux or Windows server host a single game running on 10-12 clients through RDP. The game is RealFlight 7.5, and its requirements haven't changed much.

The clients are literally the lowest-end all-in-ones that can only just run the game now, and in certain scenes and in multiplayer they fall on their faces. I was hoping to scale this setup by keeping the AIOs and having a single server do all the heavy lifting, which I could then copy for future sites.

One user in the GPU thread mentioned running Hyper-V with a login screen and VM spinup, hosting up to 20 CS 1.6 clients, which is easily more capacity than I need and sounds like exactly the setup I'm after. However, all my searching seems to turn up mainly RemoteFX stuff from around 2012.

The other obstacle, I think, is getting the game to work with the dongle and R/C remote it uses. Could the game's input be set to detect what the remote client has connected, so that keeps working as it should? I might have to reach out to the RealFlight guys to see if they already have any experience with server-hosted setups.
Dehumanize yourself and face to Remote Desktop Licensing

Da Mott Man
Aug 3, 2012

Merry Christmas!

Ho! Ho! Oh, no!

anthonypants posted:

Dehumanize yourself and face to Remote Desktop Licensing

Hate to empty quote but RDS licensing is hell.

EdEddnEddy
Apr 5, 2012



Da Mott Man posted:

Hate to empty quote but RDS licensing is hell.

It was you I quoted in the GPU thread. Can you expand on your CS 1.6 LAN party setup? It sounded like exactly what I need, but I'm still unsure whether it's possible.

Da Mott Man
Aug 3, 2012

Merry Christmas!

Ho! Ho! Oh, no!

EdEddnEddy posted:

It was you I quoted in the GPU thread. Can you expand on your CS 1.6 LAN party setup? It sounded like exactly what I need, but I'm still unsure whether it's possible.

It was a Supermicro X9DRG-HF with 2x RX 470 GPUs installed as the virtualization host, running Remote Desktop Services with RemoteFX on WS2012R2. Clients were Windows 8.1 Enterprise.

EDIT: Other infrastructure for RDS was borrowed from other servers on the network: SQL Server, AD, disks hosted with SOFS over InfiniBand, etc.
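If anyone wants to reproduce it, the RemoteFX wiring is only a couple of cmdlets - the VM name is a placeholder, and the client VMs have to be powered off:

code:
# Host side: make the GPUs available to RemoteFX (WS2012R2).
Get-VMRemoteFXPhysicalVideoAdapter | Enable-VMRemoteFXPhysicalVideoAdapter

# Then per client VM, add the 3D adapter:
Add-VMRemoteFx3dVideoAdapter -VMName "client01"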

Da Mott Man fucked around with this message at Oct 12, 2017 around 01:27

Saukkis
May 16, 2003



anthonypants posted:

Dehumanize yourself and face to Remote Desktop Licensing

Would per-device CALs be simple enough in this situation, since it sounds like there is a specific set of computers that would connect to it?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Yep. Either you license all the devices that will connect or all the users. Whichever is lower.

anthonypants
May 6, 2007



Dinosaur Gum

An update to Flash broke the vSphere client (it tries to load, but it crashes Flash; in Chrome you get the error "Shockwave Flash has crashed"), and the workaround is to just use the old version of Flash Player. Incidentally, Adobe admits the older version of Flash Player has exploits known to be in the wild, which is why Flash got updated in the first place.

Also per Adobe, there should be a beta version of the Flash update next week.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful


anthonypants posted:

An update to Flash broke the vSphere client (it tries to load, but it crashes Flash; in Chrome you get the error "Shockwave Flash has crashed"), and the workaround is to just use the old version of Flash Player. Incidentally, Adobe admits the older version of Flash Player has exploits known to be in the wild, which is why Flash got updated in the first place.

Also per Adobe, there should be a beta version of the Flash update next week.

Well, that explains why I've been having nothing but headaches at one of the companies I consult for. Thanks for posting this! Thankfully I have another jump box over there with an older version of Flash.
