|
dont change my name posted:it has to look cool. This is the best response I've ever read.
|
# ? Feb 6, 2014 06:18 |
|
|
Dilbert As gently caress posted:Are you taking the 5.5 or 5.1? I'm taking 5.1; thanks for the resources, you are the best!
|
# ? Feb 6, 2014 06:46 |
|
dont change my name posted:What's a good libvirt front end for KVM? Something nice and easy with lots of GUI and charts and things. oVirt?
|
# ? Feb 6, 2014 14:28 |
|
Mr Shiny Pants posted:oVirt? Archipel looks cooler.
|
# ? Feb 6, 2014 15:08 |
|
Has anyone tried upgrading to 5.1U2 yet? I've tried a couple of times from 5.1 in our dev environment and it is hosing up Update Manager/Web Client/Auto Deploy. Support had us go back and reset the admin password, as it looked like it had expired, but that didn't help much. My inclination is that SSO is getting hosed, because right after upgrading SSO the Web Client comes back with "Signature Validation Failed" when logging in with either a domain ID or the System-Domain admin ID.
|
# ? Feb 6, 2014 16:18 |
|
I am trying to use VirtualBox with a couple of Logitech devices: I have a mouse and a K400 keyboard (the one with the built-in touchpad), and both devices use a Unifying receiver. I want to have one receiver used by the host PC and the other used by the virtual machine. I know I need to set up a USB filter, but I cannot figure out how to set it up properly to have control of both PCs. I'm sure this would be pretty much a non-issue if I had two different receivers, but as it is I simply cannot work out how to make this work. Is there a way?
|
# ? Feb 6, 2014 17:03 |
|
Crotch Fruit posted:I am trying to use VirtualBox with a couple of Logitech devices: I have a mouse and a K400 keyboard (the one with the built-in touchpad), and both devices use a Unifying receiver. I want to have one receiver used by the host PC and the other used by the virtual machine. I know I need to set up a USB filter, but I cannot figure out how to set it up properly to have control of both PCs. I'm sure this would be pretty much a non-issue if I had two different receivers, but as it is I simply cannot work out how to make this work. Is there a way? The point of filtering devices is so the host OS doesn't see it. You can't bind the same USB device to two systems at once, sorry.
|
# ? Feb 6, 2014 18:44 |
|
evol262 posted:The point of filtering devices is so the host OS doesn't see it. You can't bind the same USB device to two systems at once, sorry. OK, can I bind just one receiver to the VM and keep the other, identical receiver on the host?
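(If it helps anyone later: a sketch of how this can be done from the command line. The VM name, filter name, and serial below are placeholders, and it assumes both receivers actually report distinct serial numbers; they share Logitech's usual Unifying vendor/product ID 046d:c52b, so the serial is the only field that can tell them apart.)

```shell
# List host USB devices and note each receiver's serial number:
VBoxManage list usbhost

# Filter exactly one receiver into the VM; the other stays on the host.
# Verify the IDs against the list output before trusting 046d:c52b.
VBoxManage usbfilter add 0 --target "MyVM" --name "unifying-guest" \
  --vendorid 046d --productid c52b --serialnumber "PLACEHOLDER_SERIAL"
```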
|
# ? Feb 6, 2014 19:09 |
|
Can anyone help with an issue on a 5.0 host? A blade dropped out of vSphere a few days ago. I’ve tried the normal restart of the management agents from the console and SSH, but the host refuses to connect again. Connecting directly to the host with the vSphere client also fails. All the VMs on the blade are still working as normal; I just can’t get the blade back into the cluster to vMotion everything off. A number of the VMs are very important production servers that can’t currently be shut down. I’ve logged a number of calls with HP, and their current solution is to reboot the blade, which helps neither with getting the VMs off the blade nor with finding the cause of the original issue.
|
# ? Feb 6, 2014 22:20 |
|
wibble posted:I’ve logged a number of calls with HP http://kb.vmware.com/kb/1010705 There can be lots of reasons that the management agent won't start up, from simple things like being out of log space to unresolvable issues like socket bugs.
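The usual first steps from that KB, for anyone who lands here later (run at the ESXi 5.x console or over SSH; a sketch, not a guaranteed fix):

```shell
# Restart the management daemons individually...
/etc/init.d/hostd restart    # the agent the vSphere client talks to
/etc/init.d/vpxa restart     # the vCenter agent
# ...or restart every management agent at once:
services.sh restart

# And check for a full log/scratch partition first, since that's one of
# the "simple things" that stops hostd from coming back:
vdf -h
```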
|
# ? Feb 6, 2014 22:40 |
|
evol262 posted:Archipel looks cooler. Anyone tried it?
|
# ? Feb 6, 2014 23:25 |
|
wibble posted:Can anyone help with an issue with a 5.0 host? A blade dropped out of vsphere a few days ago. That's funny - I had the exact same issue with a 5.0 host, the exact same support experience, and the exact same resolution. I never was able to get those VMs off to other hosts, so I just had to take a late night outage to reboot that host. Real goddamn annoying. If you figure out anything clever I would love to know about it.
|
# ? Feb 6, 2014 23:34 |
|
evol262 posted:Archipel looks cooler. Comedy option: Openstack Horizon (maybe it's not literal poo poo in Havana, we are still on Grizzly)
|
# ? Feb 6, 2014 23:35 |
|
Mausi posted:If you can get on the console via SSH they should be getting you to run vm-support and giving them the logs to analyze. The management agent starts up and runs. It seems it's something else.
|
# ? Feb 6, 2014 23:43 |
|
In my experience you're not going to get those VMs moved. You need to start the process for an emergency maintenance window so you can gracefully shut those VMs down and move them to another host while you figure out what's up with the affected physical host.
|
# ? Feb 7, 2014 00:01 |
|
luminalflux posted:Anyone tried it? Docjowles posted:Comedy option: Openstack Horizon (maybe it's not literal poo poo in Havana, we are still on Grizzly) But yeah. It sucks less in Havana. And even less in icehouse. But the better horizon gets, the more you need to muck with heat and a bunch of network namespace poo poo just to get something that's not an appropriate replacement for most KVM deployments. But it's 2014 and people put loving everything on AWS and treat it like a Linode VPS, so...
|
# ? Feb 7, 2014 02:34 |
|
wibble posted:The management agent starts up and runs. It seems it's something else. I had the same thing happen in our 5.0 cluster a couple of times. The issue was traced to a bug in 5.0 triggered when we deleted old datastores (FC or iSCSI). The device wouldn't be completely removed and would show 'dead or error' on a non-existent LUN. At that point it was a time bomb: after a few of these accumulated and we rescanned the affected HBAs that housed them, the vmkernel would go bananas. No SSH access or anything - we had to reboot the host entirely. We use Kaseya for general monitoring, so it was a matter of logging into each box that resided there via RDP and performing a shutdown, waiting a bit, then bouncing the host from the iLO. All of a sudden the dead devices were gone and everything was peachy. Upgrading to 5.1 fixes it, as they reworked things in that release to remove the device properly. I still run a PowerCLI script across the cluster periodically to see if I find anything, just in case...
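Not the poster's actual PowerCLI script, but a rough host-side equivalent of that periodic check, sketched with esxcli (field names as on ESXi 5.x; eyeball the output rather than trusting the grep):

```shell
# Look for devices the vmkernel still tracks but considers dead:
esxcli storage core device list | grep -B 12 -i "Status: dead"
```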
|
# ? Feb 7, 2014 04:36 |
|
evol262 posted:Only comedy because neutron/quantum is so painful. I do find it hilarious when people come into #openstack to complain because their environment went down and they lost their nonpersistent VMs they were using like they were on ESX, though. Cattle and pets will never stop being the best analogy ever. My main gripes with Horizon (again, in Grizzly) are that it is slow as gently caress once you get past a handful of instances in a project and a lot of important management features just aren't exposed (looking at you, flavor management). At this point I do all management from the CLI. I wouldn't be totally surprised to find we have done something wrong and a simple change would speed things up dramatically, but as we have it configured it can take minutes just to pull up the "Instances" page before I can click one and connect to the console. In fact I'd love to find that we did something dumb so we can fix it. We basically punted on quantum/neutron and use it as a glorified DHCP relay into our existing network infrastructure since we already paid vendors a bunch of money for robust and highly available gear. We're very much not a greenfield deployment; OpenStack is supplementing and slowly replacing a ton of physical servers, but they all need to play nice for the foreseeable future. To your point, we aren't trying to reproduce AWS 1:1 and create a perfect cloud, we just want a big bunch of compute nodes where we can spin up a shitload of VMs and start pulling poorly utilized, ancient physical hosts out of racks. 90% of those servers are definitely cattle (I love that analogy myself); a few are pets and it makes me nervous putting them on OpenStack, but we have other redundancies like load balancers or multiple DNS entries to mitigate that. All important data lives on highly available NFS shares, anyway.
After all that, the TL;DR may be that we should have gone with one of the other KVM management tools you mentioned vv But long-term we do want autoscaling and other ~cloud~ features, so I think it will eventually pay off.
|
# ? Feb 7, 2014 05:58 |
|
Reading you guys I feel like a spoiled little rear end in a top hat with my VMware Ent+ ELA. Makes me kinda jealous in another way that you're getting to hack around with maturing tech though, I kinda miss that.
|
# ? Feb 7, 2014 09:59 |
|
Mausi posted:Reading you guys I feel like a spoiled little rear end in a top hat with my VMware Ent+ ELA. Docjowles posted:My main gripes with Horizon (again, in Grizzly) are that it is slow as gently caress once you get past a handful of instances in a project and a lot of important management features just aren't exposed (looking at you, flavor management). At this point I do all management from the CLI. I wouldn't be totally surprised to find we have done something wrong and a simple change would speed things up dramatically, but as we have it configured it can take minutes just to pull up the "Instances" page before I can click one and connect to the console. In fact I'd love to find that we did something dumb so we can fix it. This is probably one of two things: nova-network not running (horizon will wait 60 seconds to time out every time if it's not), or more likely the default ajax_queue_limit is absurdly low. It works fine when you have your teams broken out so they have 30 instances per project. It falls over when it's thousands. Fortunately, it's Django, and this is configurable. Docjowles posted:We basically punted on quantum/neutron and use it as a glorified DHCP relay into our existing network infrastructure since we already paid vendors a bunch of money for robust and highly available gear. I'd probably recommend OpenStack instead of oVirt if you have lots of cattle anyway (even though oVirt does autoscale hosts and VMs). There are some very large RHEV deployments out there, but the community around OpenStack is much more active.
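For anyone tuning this: the knob lives in Horizon's local_settings.py (the path varies by distro; /etc/openstack-dashboard/local_settings.py is common, and the 100 below is just an illustrative value, not a recommendation):

```python
# Grizzly-era Horizon: ajax_queue_limit caps how many row-update AJAX
# requests the table views fire concurrently; the shipped default is small.
HORIZON_CONFIG = {
    'ajax_queue_limit': 100,      # default is 10; raise for projects with many instances
    'ajax_poll_interval': 2500,   # ms between row status polls
}
```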
|
# ? Feb 7, 2014 15:05 |
|
evol262 posted:This is probably one of two things: Hmm. We don't appear to even have the nova-network package installed. But it's certainly not stalling for 60 seconds; pulling up Project -> Instances takes about 10 seconds, which is roughly twice as long as a CLI "nova list" takes. I tried increasing ajax_queue_limit from 10 to 500 and that didn't make an appreciable difference. Logging into another node with the old value takes just as long. Unless that needs to be changed on every control node before it helps? Or by "absurdly low" do you mean I should crank it up to 5000 or something? Docjowles fucked around with this message at 18:14 on Feb 7, 2014 |
# ? Feb 7, 2014 18:02 |
|
Docjowles posted:Hmm. We don't appear to even have the nova-network package installed. But it's certainly not stalling for 60 seconds; pulling up Project -> Instances takes about 10 seconds. Which is roughly twice as long as a CLI "nova list" takes. I tried increasing ajax_queue_limit from 10 to 500 and that didn't make an appreciable difference. Logging into another node with the old value takes just as long. Unless that needs to be changed on every control node before it helps? Or by "absurdly low" you mean I should crank it up to 5000 or something It shouldn't need to be changed on every node, no. The nova CLI shouldn't take 5 seconds to return, either. Did you upgrade from Essex or Folsom? You may be missing indexes in MySQL. Debug output from nova-api should show you what queries it's running.
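Both checks sketched out, assuming the stock MySQL backend on the controller and default log paths (adjust names to your deployment):

```shell
# See what indexes the hot tables actually have:
mysql nova -e "SHOW INDEX FROM instances;"

# Turn on debug logging (in /etc/nova/nova.conf):
#   [DEFAULT]
#   debug=True
# ...then watch what SQL nova-api runs while loading the Instances page:
tail -f /var/log/nova/nova-api.log
```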
|
# ? Feb 7, 2014 19:35 |
|
Thanks, I'll check that out. I don't believe we've done any upgrades, started out fresh on Grizzly.
|
# ? Feb 7, 2014 19:40 |
|
Is there some unspoken rule about putting a host that's running no VMs into maintenance mode before rebooting it via vSphere? Someone got on my case because I rebooted one. Yes, I realize that in a production environment maintenance mode should be used to ensure VMs are evacuated, but this host wasn't running any VMs at the time.
|
# ? Feb 7, 2014 22:17 |
|
Was it in a DRS cluster? If not, sounds like someone wanted to be high and mighty.
|
# ? Feb 7, 2014 22:29 |
|
Erwin posted:Was it in a DRS cluster? If not, sounds like someone wanted to be high and mighty. Nope. Thinking back I guess he might have been stressed because his VCDX defense is Monday.
|
# ? Feb 7, 2014 22:32 |
|
It's always good to develop safe habits. Better to spend an extra 30 seconds "unnecessarily" putting a host into maintenance mode than to space out and power-cycle a host running production VMs because that's what you're used to.
|
# ? Feb 7, 2014 22:53 |
|
I also don't want to get alarmed about a host unreachable for no reason.
|
# ? Feb 7, 2014 22:55 |
|
tell him to stop being such a pussy.
|
# ? Feb 8, 2014 00:17 |
|
evol262 posted:I also don't want to get alarmed about a host unreachable for no reason. Alarms aren't configured to send emails; the server was having data issues, corrupting things and even somehow making the vSSes mix traffic. Backstory: the HVAC had been out for a week and everything was flurbing up. My mindset in a situation like that is "get the HW down as soon as possible", as the VMs were already off.
|
# ? Feb 8, 2014 01:03 |
|
I really want to get my feet wet with some Ubuntu/KVM systems. I have set up a DevStack lab and a vanilla Ubuntu KVM server. Is the current offering for GUI KVM management really this bad? Virtual Machine Manager is really quite clunky, and Proxmox... I am comfortable with the CLI, but I am spoiled by ESXi and vSphere. Is there anything out there that is even half as nice as VirtualBox, as far as GUI management of KVM goes?
|
# ? Feb 8, 2014 01:30 |
|
Argh I wanted to upgrade these ESX clusters from home but they must be from the 2.5 or 3.x days - the boot partition is like 50MB and the 5.0 installer wants to see 333MB or so. I guess I'm going back to work in the morning to install via CD.
|
# ? Feb 8, 2014 01:47 |
|
lifenomad posted:I really want to get my feet wet with some Ubuntu/KVM systems. I have set up a DevStack lab and a vanilla Ubuntu KVM server. I mean this in the best way, but never use DevStack unless it's in a VM. It litters poo poo everywhere. Use packstack unless you're trying to run OpenStack on OpenStack in a disposable environment. Beyond which, OpenStack is not really a KVM frontend and is not an ESX competitor. It's a different animal. You're looking at oVirt (won't run on Ubuntu because vdsm won't run on Ubuntu), Archipel, Ganeti, or Kimchi. Or plain old virsh/virt-manager/boxes, which work fine. Why do you want to "get your feet wet"? What do you want to do with it? Suggestions are easier if you have some criteria. vSphere is fine as a "default" choice, but you'll have to explicate your complaints and needs to get suggestions. And it may not be on Ubuntu
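If plain virsh/virt-manager sounds like the place to start, getting it going on Ubuntu of that era is roughly this (package names as shipped on 12.04/14.04; the guest name is a placeholder):

```shell
# Install KVM plus the libvirt CLI and GUI frontends:
sudo apt-get install qemu-kvm libvirt-bin virt-manager virt-viewer

# CLI management:
virsh list --all          # show all defined guests
virsh start myguest       # placeholder guest name

# GUI; can also manage a remote host over SSH:
virt-manager --connect qemu+ssh://user@kvmhost/system
```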
|
# ? Feb 8, 2014 04:14 |
|
Basically just to explore and play with that hypervisor. I have a bunch of experience with ESXi and I am completely comfortable with it. I just like to explore other options, and push my comfort zones.
|
# ? Feb 8, 2014 20:59 |
|
Perfect day for VMware's KB to be down. gently caress.
|
# ? Feb 8, 2014 22:01 |
|
lifenomad posted:Basically just to explore and play with that hypervisor. I have a bunch of experience with ESXi and I am completely comfortable with it. I just like to explore other options, and push my comfort zones. Honestly, "that hypervisor" is 99% libvirt for most uses, and no frontend will do what you want. It's not VMware. Like everything else to learn, pick a use case. Do that, and solutions can be recommended.
|
# ? Feb 8, 2014 22:25 |
|
lifenomad posted:Basically just to explore and play with that hypervisor. I have a bunch of experience with ESXi and I am completely comfortable with it. I just like to explore other options, and push my comfort zones. ESXi is one of those easy-to-pick-up, hard-to-master things evol262 posted:Honestly, "that hypervisor" is 99% libvirt for most uses, and no frontend will do what you want. It's not VMware. Like everything else to learn, pick a use case. Do that, and solutions can be recommended. Well, you need to look at the business requirements; ESXi is great common knowledge that doesn't require the same knowledge transfer as libvirt. Sure, some things out there best VMware, but when thinking about the business's needs you sometimes have to think more along the lines of continuity. Think I should retake my DCD. Any idea what a 23 y/o DCD is worth? Getting like 102 at my next job, but I wonder sometimes. Dilbert As FUCK fucked around with this message at 07:21 on Feb 9, 2014 |
# ? Feb 9, 2014 07:18 |
|
Dilbert As gently caress posted:Think I should retake my DCD. Any idea what a 23 y/o DCD is worth? Getting like 102 at my next job, but I wonder sometimes. You need the DCD in order to get the VCDX, right? What do you think a 23 y/o VCDX is worth?
|
# ? Feb 9, 2014 07:40 |
|
Dr. Arbitrary posted:You need the DCD in order to get the VCDX, right? What do you think a 23 y/o VCDX is worth? Unfortunately your age will be a factor, but a smart hiring manager will recognise the value of a dedicated SME regardless. For reference, I'm 32 with no DCD and work as the SME for one of the largest banks.
|
# ? Feb 9, 2014 11:37 |
|
|
Dilbert As gently caress posted:Think I should reshoot my DCD. Any Idea what a 23y/o DCD is worth?
In most cases, about $400 more than a 23 y/o with VCP. At a partner where there are VMware competencies that require DCD, maybe a little more. An end-user sort of organization won't be as familiar with it, and as such is unlikely to value it any higher.
|
# ? Feb 9, 2014 15:13 |