Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


big money big clit posted:

Or you could just keep a few very basic and bare bones templates and use puppet to configure them on creation for their intended role, rather than keeping a hundred special snowflake images that all need to be converted to VM and turned on and updated and turned off and converted back to templates.
You don't do that manually at all if you're using a tool like Packer, that's the entire point.


big money big clit
Oct 19, 2004

Breaux, Breaux, you seen a defense around here anywhere!?


Vulture Culture posted:

You don't do that manually at all if you're using a tool like Packer, that's the entire point.

And my point is that the actual problem is that they have a few hundred "half-baked" images sitting around on disk and if they solved that problem they wouldn't need a solution like Packer to manage it. There are places where managing that many images makes sense but I'm guessing that a VMware/Windows centric environment isn't usually one of them.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


big money big clit posted:

And my point is that the actual problem is that they have a few hundred "half-baked" images sitting around on disk and if they solved that problem they wouldn't need a solution like Packer to manage it. There are places where managing that many images makes sense but I'm guessing that a VMware/Windows centric environment isn't usually one of them.
Even if you're talking about managing an entire infrastructure of pets -- and mutable infrastructures are pets no matter how much you pretend they aren't -- the two tools aren't mutually exclusive. Idempotent CM tools like Puppet and Chef are really good at converging systems from state A and state B to the same state C, but they can often take a long time to run, especially when you're dealing with, say, Microsoft software that has a ton of service packs and updates that require reboots. If you produce a lot of systems of the same type, a better approach is to have your continuous integration system (your infrastructure management code has integration tests, right?) run Packer to generate new base images when your server role is updated, have Packer apply your Puppet code, and then your turnaround on deploying a new server instance with a complex application is down to a couple of minutes, not a day and a half. If you have people manually creating these images all the time, that's a waste, but what the gently caress are you even doing running these automation tools if you have people pushing the button and just staring at the output until it's done?
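
To make that concrete: the Packer template for this kind of pipeline can be tiny. Something like this, if you're starting from an existing minimal VMX template (the builder type, paths, and credentials here are just illustrative, and the manifest is whatever your role uses):

{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "templates/centos7-minimal/centos7-minimal.vmx",
    "ssh_username": "packer",
    "ssh_password": "packer",
    "shutdown_command": "sudo shutdown -P now"
  }],
  "provisioners": [
    {"type": "shell", "inline": ["sudo yum -y update"]},
    {"type": "puppet-masterless", "manifest_file": "manifests/app_server.pp"}
  ]
}

Your CI job just runs "packer build app-server.json" whenever the role's code changes and publishes the result as the new template.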

Then, let's talk about immutable infrastructures. Every environment has that one person who tests changes ad-hoc in production, then forgets to back them out or put them into configuration management, so your prod configs drift. If you're regularly replacing your server instances doing blue/green deployments -- things like Packer and Docker make this really easy -- this is a problem that doesn't stick around very long. Your horizontally-scaled infrastructures will always be more robust and consistent blue/green deploying from regularly-updated golden master images than from a complicated series of arbitrary state transitions that may or may not actually converge to the same result.

How much of this applies to anyone actually running VMware is totally up in the air.

Vulture Culture fucked around with this message at Jun 1, 2017 around 15:05

Alfajor
Jun 10, 2005

The delicious snack cake.

Whoa, hey guys, sorry for posting a question and disappearing for a few days.
First of all, thanks for the responses.

I've looked at Packer, but I don't have much decision-making power right now to say "let's do this!". There are a handful of other people on my team, and I only want to rock the boat if I'm sure of what I'm doing, and I'm not quite there yet with configuration management (Packer, Puppet, or anything else). As some of you pointed out, this is the problem that needs fixing, and I think we're going in that direction - just very slowly.

After reading all the posts and thinking about this, I'm feeling that the best move would be to hire some sort of consultant to get us from all the VM templates we have now to the OS-plus-deployed-app approach that I know is the modern way of doing things. My boss has mentioned that this is something he's been wanting to do, since no one currently has the skills to take us to the next level, but we're all pretty good at the current way of doing things.

We'll see how it goes; I think I have a better-informed point of view now.

evol262
Nov 30, 2010
#!/usr/bin/perl

I'm not sure why you'd need a consultant.

It sounds complex, but it's very simple in practice. You could pick up Ansible in an afternoon.

Packer is also simple.

Use some downtime and play with Vagrant plus the Ansible provisioner. Figure out how to build one working application image. Demonstrate it. Then translate it to Packer.
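
A first pass can be as small as this (box and playbook names are just placeholders):

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"
  end
end

"vagrant up" builds and provisions a throwaway VM; once the playbook works there, you point Packer's Ansible provisioner at the same playbook.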

The hard part is autoscaling your apps to make those images worthwhile, but that's something it sounds like you're doing anyway.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Is it documented "Best Practice" from someone, somewhere, that if you are running a very large environment with dozens of hosts and thousands of guests, you should separate them into clusters by "large" and "small" vCPU allocations on the guests (4+ vCPUs being large and everything else small, in this case)? I can't imagine that having a real performance benefit over throwing them all together in one massive cluster, since relaxed co-scheduling was introduced and DRS should be optimizing the workload based on real VM resource requests. It stinks of a huge waste of money on underutilized hardware.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


BangersInMyKnickers posted:

Is it documented "Best Practice" from someone, somewhere, that if you are running a very large environment with dozens of hosts and thousands of guests, you should separate them into clusters by "large" and "small" vCPU allocations on the guests (4+ vCPUs being large and everything else small, in this case)? I can't imagine that having a real performance benefit over throwing them all together in one massive cluster, since relaxed co-scheduling was introduced and DRS should be optimizing the workload based on real VM resource requests. It stinks of a huge waste of money on underutilized hardware.
This doesn't sound right to me. Based on the way relaxed co-scheduling works, if you're oversubscribing CPU in bursts, scheduling big VMs alongside small VMs seems like a much smarter way to avoid co-stops than scheduling your huge VMs together. Then again, it depends a lot on your hardware and how many cores you have available per NUMA node. (VMkernel generally prefers to not relocate a running VM to a non-home NUMA node unless it's started to lag a lot.)
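
Before anyone re-architects clusters over this, it's worth actually measuring. esxtop on a busy host will tell you whether co-stops are real (the threshold here is the rule of thumb I remember, so verify it against current docs):

esxtop        # press 'c' for the CPU view
              # %CSTP is co-stop time; if your wide VMs sit above ~3%
              # sustained, the scheduler really is struggling to run them

If %CSTP is near zero across the board, the large/small cluster split is solving a problem you don't have.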

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Vulture Culture posted:

This doesn't sound right to me. Based on the way relaxed co-scheduling works, if you're oversubscribing CPU in bursts, scheduling big VMs alongside small VMs seems like a much smarter way to avoid co-stops than scheduling your huge VMs together. Then again, it depends a lot on your hardware and how many cores you have available per NUMA node. (VMkernel generally prefers to not relocate a running VM to a non-home NUMA node unless it's started to lag a lot.)

Yeah, that's what I thought. There's a lot of head shaking going on when I look at how they built this thing.

bigmandan
Sep 11, 2001

lol internet

College Slice

We're (finally) getting ready to upgrade our VMware stuff, and in order to do that I need to get vShield removed. The only things left to remove are the vShield Endpoint and the vShield Manager. While the documentation specifies needing to reboot when uninstalling vShield App (which is not installed), I'm not sure if this also applies to uninstalling Endpoint. Anyone have experience with this? I want to make sure I plan our maintenance window long enough in case I need to evacuate VMs to other hosts and do reboots.

evil_bunnY
Apr 2, 2003



Vulture Culture posted:

Even if you're talking about managing an entire infrastructure of pets -- and mutable infrastructures are pets no matter how much you pretend they aren't -- the two tools aren't mutually exclusive. Idempotent CM tools like Puppet and Chef are really good at converging systems from state A and state B to the same state C, but they can often take a long time to run, especially when you're dealing with, say, Microsoft software that has a ton of service packs and updates that require reboots. If you produce a lot of systems of the same type, a better approach is to have your continuous integration system (your infrastructure management code has integration tests, right?) run Packer to generate new base images when your server role is updated, have Packer apply your Puppet code, and then your turnaround on deploying a new server instance with a complex application is down to a couple of minutes, not a day and a half. If you have people manually creating these images all the time, that's a waste, but what the gently caress are you even doing running these automation tools if you have people pushing the button and just staring at the output until it's done?

Then, let's talk about immutable infrastructures. Every environment has that one person who tests changes ad-hoc in production, then forgets to back them out or put them into configuration management, so your prod configs drift. If you're regularly replacing your server instances doing blue/green deployments -- things like Packer and Docker make this really easy -- this is a problem that doesn't stick around very long. Your horizontally-scaled infrastructures will always be more robust and consistent blue/green deploying from regularly-updated golden master images than from a complicated series of arbitrary state transitions that may or may not actually converge to the same result.

How much of this applies to anyone actually running VMware is totally up in the air.
I love that tingly "I'm not as smart as I think I am" feeling I get when you post.

Methanar
Sep 26, 2013

There will be no mercy


evil_bunnY posted:

I love that tingly "I'm not as smart as I think I am" feeling I get when you post.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


evil_bunnY posted:

I love that tingly "I'm not as smart as I think I am" feeling I get when you post.
I don't consider myself all that smart, I just have exposure to a different set of problems than a lot of people in the industry are dealing with. Like, if you don't happen to operate a 24x7 public-facing service with highly variable, bursty traffic, why would you ever care about the tooling that helps you solve auto-scaling and large-scale software deployment?

If you look through my post history like a creepy stalker, you'll see I was the guy arguing against golden master images as a waste of time at my last job. I literally held the exact position I just argued against, because a day to kick over a new Windows app server for some random department was totally not a big deal and almost never held anybody up.

Mr Shiny Pants
Nov 12, 2012


The golden rule in IT: "it depends".

wolrah
May 8, 2006
what?


Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?

cliffy
Apr 12, 2002



wolrah posted:

Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?

A good hypervisor should let you pick what model the CPU looks like to the guest. You can with qemu/kvm at least.

The guest will probably complain if you just pass through the host CPU id info.
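
With libvirt it's a couple of lines in the domain XML (virsh edit guestname) - the model here is just an example, pick something old enough for Windows 7:

<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Westmere</model>
</cpu>

Plain qemu is the same idea: -cpu Westmere instead of -cpu host.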

evil_bunnY
Apr 2, 2003



wolrah posted:

Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?
Prob depends on EVC level

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Can someone explain how to set up a virtual bridge on CentOS in a simple way? All I'm trying to do is switch a VM (or create a new one) from NAT addressing on 192.168.122.X to sharing my LAN on 192.168.1.X. It's a simple thing that I'm trying to achieve and the implementation is baffling! I'm not sure how to distinguish a virtual network from a virtual interface.

I've had it working once in Ubuntu and once in CentOS but I'm getting frustrated with it now. Probably this heat.

I'm doing everything from the terminal, btw.

EDIT:

I think I got it. Created /etc/sysconfig/network-scripts/ifcfg-bridge0 containing parts of my ifcfg-eth0 file:

DEVICE="bridge0"
ONBOOT="yes"
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.6
NETMASK=255.255.255.0

EDIT2:
It wasn't actually that complicated in the end.

Just remove BOOTPROTO= from /etc/sysconfig/network-scripts/ifcfg-eth0 and add BRIDGE=br0

Create /etc/sysconfig/network-scripts/ifcfg-br0 and add this to it:
DEVICE=br0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Bridge

Then restart networking and br0 now gets the IP instead of eth0. Then specify the bridge with virt-install.
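
For reference, the virt-install part is just one flag; everything else here is placeholder sizing and the ISO path is wherever yours lives:

virt-install \
  --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso \
  --network bridge=br0 \
  --graphics vnc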

Source: https://www.linux-kvm.org/page/Networking#Public_Bridge

apropos man fucked around with this message at Jun 18, 2017 around 14:50

gallop w/a boner
Aug 16, 2002



What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs?

I am aware that encryption at rest probably does not have a lot of real-world infosec value, however it's a client requirement.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


gallop w/a boner posted:

What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs?

I am aware that encryption at rest probably does not have a lot of real-world infosec value, however it's a client requirement.
6.5 does FDE natively, and can finally encrypt vMotion traffic. I recommend that if you're in a highly regulated environment, since there's not much value in encrypting your storage network access while still dumping the contents of memory over the network every time DRS rebalances.

There are a lot of different approaches you can take on 5.5, though. Bitlocker is one of them, and probably the easiest and most portable if you're only encrypting a very small number of VMs. Many storage arrays support full-disk encryption that might satisfy those requirements. Another option might be to set up a Linux server/cluster exporting NFS or iSCSI on top of dm-crypt and use that as your datastore.
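
The dm-crypt route is less exotic than it sounds. Roughly (device, mount point, and export options are illustrative, and you'd want a keyfile at boot rather than typing passphrases):

cryptsetup luksFormat /dev/sdb           # one-time: encrypt the backing device
cryptsetup luksOpen /dev/sdb vmstore     # unlock it, creating /dev/mapper/vmstore
mkfs.xfs /dev/mapper/vmstore
mount /dev/mapper/vmstore /export/vmstore
echo '/export/vmstore 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

Then you add the export as an NFS datastore in vSphere like any other.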

Maneki Neko
Oct 27, 2000



gallop w/a boner posted:

What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs?

I am aware that encryption at rest probably does not have a lot of real-world infosec value, however it's a client requirement.

Bitlocker on the VMware side is technically not supported (by Microsoft) for boot volumes, although it is by VMware. For 5.5 environments we've generally just pushed encryption at rest down to the storage layer.

big money big clit
Oct 19, 2004

Breaux, Breaux, you seen a defense around here anywhere!?


In addition to the options mentioned above, which are the best ones, there's also HyTrust, which does per-VM encryption (caveat emptor: we've got a customer that's had a lot of issues with it), and Gemalto, which can do at-rest encryption of various types of data by sitting in the data storage path.

I'd try to get to 6.5 if your hardware allows, and use the native encryption. It's going to be by far the easiest and least problematic solution.

Tev
Aug 13, 2008


Also, if you do go to 6.5 for easier encryption and don't have key management in place, HyTrust offers a free one. You'll need some sort of external key management server for 6.5's native encryption, since it doesn't have anything built in.


anthonypants
May 6, 2007



Dinosaur Gum

I got a self-audit letter from VMware and I don't think this process has been updated from the vCenter/vSphere 5 era.
