|
big money big clit posted:Or you could just keep a few very basic and bare bones templates and use puppet to configure them on creation for their intended role, rather than keeping a hundred special snowflake images that all need to be converted to VM and turned on and updated and turned off and converted back to templates.
|
# ? May 28, 2017 06:35 |
|
Vulture Culture posted:You don't do that manually at all if you're using a tool like Packer, that's the entire point.

And my point is that the actual problem is that they have a few hundred "half-baked" images sitting around on disk, and if they solved that problem they wouldn't need a solution like Packer to manage it. There are places where managing that many images makes sense, but I'm guessing that a VMware/Windows-centric environment isn't usually one of them.
|
# ? May 28, 2017 06:44 |
|
big money big clit posted:And my point is that the actual problem is that they have a few hundred "half-baked" images sitting around on disk and if they solved that problem they wouldn't need a solution like Packer to manage it. There are places where managing that many images makes sense but I'm guessing that a VMware/Windows centric environment isn't usually one of them.

Then let's talk about immutable infrastructures. Every environment has that one person who tests changes ad-hoc in production, then forgets to back them out or put them into configuration management, so your prod configs drift. If you're regularly replacing your server instances doing blue/green deployments -- things like Packer and Docker make this really easy -- this is a problem that doesn't stick around very long.

Your horizontally-scaled infrastructures will always be more robust and consistent blue/green deploying from regularly-updated golden master images than from a complicated series of arbitrary state transitions that may or may not actually converge to the same result. How much of this applies to anyone actually running VMware is totally up in the air.

Vulture Culture fucked around with this message at 16:05 on Jun 1, 2017 |
# ? Jun 1, 2017 15:59 |
|
Whoa, hey guys, sorry for posting a question and disappearing for a few days. First of all, thanks for the responses. I've looked at Packer, but I don't have much decision-making power right now to say "let's do this!" There are a handful of other people on my team, and I only want to rock the boat if I'm sure of what I'm doing, and I'm not quite there with configuration management (Packer, Puppet, or anything else). As some of you pointed out, this is the problem that needs fixing, and I think we're going in that direction - just very slowly. After reading all the posts and thinking about this, I feel the best move would be to hire some sort of consultant to get us from all the VM templates we have now to the OS+deploy-app approach that I know is the modern way of doing things. My boss has mentioned this is something he's been wanting to do, since no one currently has the skills to take us to the next level, but we're all pretty good at the current way of doing things. We'll see how it goes; I think I now have a better-informed point of view.
|
# ? Jun 2, 2017 14:12 |
|
I'm not sure why you'd need a consultant. It sounds complex, but it's very simple in practice. You could pick up Ansible in an afternoon. Packer is also simple. Use some downtime and play with Vagrant plus the Ansible provisioner. Figure out how to build one working application image. Demonstrate it. Then translate it to Packer. The hard part is autoscaling your apps to make those images worthwhile, but it sounds like that's something you're doing anyway.
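To make that concrete, the Vagrantfile wiring in the Ansible provisioner is only a few lines. This is just a sketch - the box name and playbook path are placeholders, not anything from this thread:

```ruby
# Sketch: "centos/7" and "playbook.yml" are placeholder names.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # Run an Ansible playbook against the VM on "vagrant up"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```

Once the playbook builds a working app on the Vagrant box, you point Packer's ansible provisioner at the same playbook to get your image.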
|
# ? Jun 2, 2017 15:25 |
|
Is it documented "best practice" from someone, somewhere, that if you are running a very large environment with dozens of hosts and thousands of guests, you should be separating them into clusters by "large" and "small" guest vCPU allocations - 4+ vCPUs being large and everything else small, in this case? I can't imagine that having a real performance benefit over throwing them all together in one massive cluster since relaxed co-scheduling was introduced, and DRS should be optimizing the workload based on real VM resource requests. It stinks of a huge waste of money on underutilized hardware.
|
# ? Jun 2, 2017 18:25 |
|
BangersInMyKnickers posted:Is it documented "Best Practice" from someone, somewhere, that if you are running a very large environment with dozens of hosts and thousands of guests that you should be separating them in to clusters with "large" and "small" vCPU allocations on the guest. 4+ being large and everything else small in this case.

This doesn't sound right to me. Based on the way relaxed co-scheduling works, if you're oversubscribing CPU in bursts, scheduling big VMs alongside small VMs seems like a much smarter way to avoid co-stops than scheduling your huge VMs together. Then again, it depends a lot on your hardware and how many cores you have available per NUMA node. (VMkernel generally prefers to not relocate a running VM to a non-home NUMA node unless it's started to lag a lot.)
|
# ? Jun 2, 2017 21:19 |
|
Vulture Culture posted:This doesn't sound right to me. Based on the way relaxed co-scheduling works, if you're oversubscribing CPU in bursts, scheduling big VMs alongside small VMs seems like a much smarter way to avoid co-stops than scheduling your huge VMs together. Then again, it depends a lot on your hardware and how many cores you have available per NUMA node. (VMkernel generally prefers to not relocate a running VM to a non-home NUMA node unless it's started to lag a lot.)

Yeah, that's what I thought. There's a lot of head shaking going on when I look at how they built this thing.
|
# ? Jun 2, 2017 21:42 |
|
We're (finally) getting ready to upgrade our VMware stuff, and in order to do that I need to get vShield removed. The only thing that's left is removing the vShield Endpoint and the vShield Manager. While the documentation specifies needing to reboot when uninstalling vShield App (which is not installed), I'm not sure if this also applies to uninstalling Endpoint as well. Anyone have experience with this? I want to make sure I plan our maintenance window long enough in case I need to evacuate VMs to other hosts and do reboots.
|
# ? Jun 7, 2017 17:40 |
|
Vulture Culture posted:Even if you're talking about managing an entire infrastructure of pets -- and mutable infrastructures are pets no matter how much you pretend they aren't -- the two tools aren't mutually exclusive.

Idempotent CM tools like Puppet and Chef are really good at converging systems from state A and state B to the same state C, but they can often take a long time to run, especially when you're dealing with, say, Microsoft software that has a ton of service packs and updates that require reboots. If you produce a lot of systems of the same type, a better approach is to have your continuous integration system (your infrastructure management code has integration tests, right?) run Packer to generate new base images when your server role is updated, and have Packer apply your Puppet code. Then your turnaround on deploying a new server instance with a complex application is down to a couple of minutes, not a day and a half.

If you have people manually creating these images all the time, that's a waste, but what the gently caress are you even doing running these automation tools if you have people pushing the button and just staring at the output until it's done?
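As a rough sketch of that pipeline, a 2017-era Packer JSON template can run your Puppet code with the puppet-masterless provisioner. The builder fields below are placeholders, not a working build - fill in your own ISO, credentials, and manifest paths:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "PLACEHOLDER_ISO_URL",
      "iso_checksum_type": "sha256",
      "iso_checksum": "PLACEHOLDER_CHECKSUM",
      "ssh_username": "packer",
      "shutdown_command": "sudo shutdown -P now"
    }
  ],
  "provisioners": [
    {
      "type": "puppet-masterless",
      "manifest_file": "manifests/appserver.pp",
      "module_paths": ["modules"]
    }
  ]
}
```

CI then just runs `packer build` whenever the role's Puppet code changes, and new deploys clone from the resulting image instead of converging from scratch.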
|
# ? Jun 8, 2017 22:44 |
|
evil_bunnY posted:I love that tingly "I'm not as smart as I think I am" I get when you post.
|
# ? Jun 8, 2017 23:22 |
|
evil_bunnY posted:I love that tingly "I'm not as smart as I think I am" I get when you post.

If you look through my post history like a creepy stalker, I was the guy arguing against golden master images as a waste of time at my last job. I literally held the exact position I just argued with, because a day to kick over a new Windows app server for some random department was totally not a big deal and almost never held anybody up.
|
# ? Jun 9, 2017 06:33 |
|
The golden rule in IT: "it depends".
|
# ? Jun 9, 2017 09:30 |
|
Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?
|
# ? Jun 14, 2017 19:53 |
|
wolrah posted:Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?

A good hypervisor should let you pick what CPU model the guest sees. You can with qemu/kvm, at least. The guest will probably complain if you just pass through the host's CPUID info.
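For example, with libvirt you can pin the guest's CPU model in the domain XML so Windows 7 sees something older than Kaby Lake. The model choice here is just an illustration:

```xml
<!-- Domain XML fragment: present a fixed, older CPU model to the guest
     instead of passing through the host's Kaby Lake/Ryzen CPUID. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>SandyBridge</model>
</cpu>
```

The bare qemu equivalent is passing `-cpu SandyBridge` instead of `-cpu host` on the command line.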
|
# ? Jun 15, 2017 20:58 |
|
wolrah posted:Do the restrictions on running Windows 7 on Kaby Lake/Ryzen systems apply to VMs? I know it complains and won't let you run Windows Update if you install natively, but does a VM make it happy or does it pass through enough of the CPUID info that Windows decides the system's too new?
|
# ? Jun 15, 2017 21:09 |
|
Can someone explain how to set up a virtual bridge on CentOS in a simple way? All I'm trying to do is switch a VM (or create a new one) from NAT addressing on 192.168.122.X to sharing my LAN on 192.168.1.X. It's a simple thing that I'm trying to achieve and the implementation is baffling! I'm not sure how to distinguish a virtual network from a virtual interface. I've had it working once in Ubuntu and once in CentOS but I'm getting frustrated with it now. Probably this heat. I'm doing everything from the terminal, btw.

EDIT: I think I got it. Created /etc/sysconfig/network-scripts/ifcfg-bridge0 containing parts of my ifcfg-eth0 file:

DEVICE="bridge0"
ONBOOT="yes"
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.6
NETMASK=255.255.255.0

EDIT2: It wasn't actually that complicated in the end. Just remove BOOTPROTO= from /etc/sysconfig/network-scripts/ifcfg-eth0 and add BRIDGE=br0. Create /etc/sysconfig/network-scripts/ifcfg-br0 and add this to it:

DEVICE=br0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Bridge

Then restart networking and br0 now gets the IP instead of eth0. Then specify the bridge with virt-install. Source: https://www.linux-kvm.org/page/Networking#Public_Bridge

apropos man fucked around with this message at 15:50 on Jun 18, 2017 |
# ? Jun 18, 2017 13:24 |
|
What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs? I am aware that encryption at rest probably does not have a lot of real-world infosec value, however it's a client requirement.
|
# ? Jun 20, 2017 11:47 |
|
gallop w/a boner posted:What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs?

There are a lot of different approaches you can take on 5.5, though. That's one of them, and probably the easiest and most portable if you're only encrypting a very small number of VMs. Many storage arrays support full-disk encryption that might satisfy those requirements. Another option might be to set up a Linux server/cluster with NFS or iSCSI on dm-crypt and use that as your datastore.
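The dm-crypt route is roughly this on the Linux box. Sketch only - the device name, mapper name, and export subnet are assumptions, not anything from your environment:

```shell
# Sketch: /dev/sdb, "vmstore", and the subnet are placeholders.
cryptsetup luksFormat /dev/sdb               # encrypt the backing device
cryptsetup open /dev/sdb vmstore             # unlock it as /dev/mapper/vmstore
mkfs.xfs /dev/mapper/vmstore
mkdir -p /export/vmstore
mount /dev/mapper/vmstore /export/vmstore
echo '/export/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra                                 # ESXi mounts this as an NFS datastore
```

The catch is operational: someone (or a key server) has to unlock the volume after every reboot of the storage box, or your datastore doesn't come back up.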
|
# ? Jun 20, 2017 15:56 |
|
gallop w/a boner posted:What is the best approach to encrypting VMs at rest? Environment is vSphere 5.5, guests are all Windows Server. Are people using Bitlocker on guest VMs?

Bitlocker on the VMware side is technically not supported by Microsoft for boot volumes, although it is by VMware. For 5.5 environments we've generally just pushed encryption at rest down to the storage layer.
|
# ? Jun 20, 2017 17:07 |
|
In addition to the options mentioned above, which are the best ones, there's also HyTrust, which does per-VM encryption (caveat emptor: we've got a customer that's had a lot of issues with it), and Gemalto, which can do at-rest encryption of various types of data by sitting in the data storage path. I'd try to get to 6.5, if your hardware allows, and use the native encryption. It's going to be by far the easiest, least problematic solution.
|
# ? Jun 20, 2017 19:44 |
|
Also, if you do go to 6.5 for easier encryption and don't have key management in place, HyTrust offers a free one. You'll need some type of key management for 6.5, since it doesn't have anything built in.
|
# ? Jun 21, 2017 12:50 |
|
I got a self-audit letter from VMware and I don't think this process has been updated from the vCenter/vSphere 5 era.
|
# ? Jun 27, 2017 18:18 |
|
Stupid newbie question: If I've got my .vmx stored in a certain folder, and other associated files (ex: vmdk) seem to be saving there, are there any other tweaks I should make so that all of the files to run the VM stay in that folder? For context, I have two ways I back up. I occasionally (once a month or so) back up to an external HD, but for day-to-day I use SpiderOak for my more important files. Ideally I'd like it so that even if my laptop broke or was stolen, I could just install SpiderOak on the new one, and in my SpiderOak folder would be the subdirectory with this VM.
|
# ? Jul 2, 2017 15:48 |
|
maskenfreiheit posted:Stupid newbie question: If I've got my .vmx stored in a certain folder, and other associated files (ex: vmdk) seem to be saving there, are there any other tweaks I should make so that all of the files to run the VM stay in that folder?

What program are you talking about?
|
# ? Jul 5, 2017 03:43 |
|
Vulture Culture posted:What program are you talking about?

VMware on OS X
|
# ? Jul 5, 2017 05:06 |
|
All the files related to the VM should stay in that folder unless you tell them not to. Unless something is different in Fusion from ESXi.
|
# ? Jul 5, 2017 06:56 |
|
If I want to back up a KVM VM for use on a different host, dumping its XML configuration and copying its .img file is all I should need, right?

Also, is there any good web UI replacement for virt-manager? I know of Proxmox, but my naive understanding is that it's not just a drop-in replacement for virt-manager. This is just for my home server and managing a half-dozen KVM VMs...

Thermopyle fucked around with this message at 17:05 on Jul 5, 2017 |
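For a cold copy, the whole operation is basically this - assuming the VM is named "myvm" and its disk lives in the default image path, both of which are placeholders here:

```shell
# Sketch: "myvm" and the paths are placeholders.
virsh shutdown myvm                          # cold copy is safest for a raw image
virsh dumpxml myvm > myvm.xml                # the VM's definition
cp /var/lib/libvirt/images/myvm.img /backup/myvm.img

# On the new host:
virsh define myvm.xml                        # edit disk paths in the XML first if they differ
virsh start myvm
```

If the disk path differs on the new host, fix it in the XML before running `virsh define`.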
# ? Jul 5, 2017 17:02 |
|
Thermopyle posted:also, is there any good webui replacement for virt-manager? I know of proxmox, but my naive understanding is that it's not just a drop-in replacement for virt-manager. This is just for my home server and managing a half-dozen KVM vms...

https://github.com/kimchi-project/kimchi
|
# ? Jul 5, 2017 17:45 |
|
My colleague and I were hired at a small IT company at the same time as all the old techs quit. Documentation of old jobs is often non-existent, and 70-75% of tickets have the resolution "Fixed it" and nothing more.

So, one of our customers is a large printing company that has merged with another company. The other company previously had a corporate IT department. When they merged, a new server was set up to handle prepress jobs: 50 GB+ files with rendered data that are used and sometimes re-used (therefore storage is vital). The server was set up as a Hyper-V host running 2012 with a 1 TB drive for the guest. The guest was set up with a 600 GB disk for print-job storage (E:).

Now, one of the old technicians apparently created a snapshot right before my colleague and I were hired. This has created a differencing disk that, when full, will grow beyond the physical space on the host. I don't recognize this "differencing disk" effect; snapshots I've done have not resulted in one of these that I know of. The biggest problem is that right now I cannot see how much free space I have. The host has been showing 41 GB for a few months; I've been on sick leave and none of my colleagues have bothered looking at this during that time.

Drives as the guest sees them:

* Can I somehow merge these two VHDs into one 900 GB+ file, then shrink the space for the guest?
* Is it possible to do this live?
* Is it possible to guesstimate how long the merge will take?

I talked to the supplier of the prepress software and they told me they recommend 1-2 TB of disk for the client; 600 GB is too small, and they didn't understand why it was set up this way. I told the customer that the easiest way would be to just upgrade to larger drives and expand the array, and that would solve it, but they do not want to spend money (unless it's for MacBooks and iPhones).

* I have a possible plan B, and that's an older HP StorageWorx server (unknown version), but that will require them to purchase two Fibre Channel cards to get the transfer speed they need between the host and the StorageWorx. I found a couple of 4 Gb FC cards - do you expect that to be fast enough, or do I need to buy 10 Gb?
|
# ? Jul 7, 2017 20:14 |
|
What's the timestamp on the last full backup of that VM?
|
# ? Jul 8, 2017 06:48 |
|
H2SO4 posted:What's the timestamp on the last full backup of that VM?

The last full what? (Would a full backup of it remove the differencing-disk part? I could probably do a backup to the StorageWorx server.) This host is located at the client's, and there are no backups at all configured for anything there. This is something that was agreed before I started; I have no idea why anyone would run something "business critical" and not do backups.

To add to the insanity, they repurposed the StorageWorx server with 2 TB drives, totalling something like ~17 TB of usable space. This has neither backups nor monitoring configured.

You know what, it's Saturday evening and I realize more and more that I am slowly going insane here. YOTJ is still a thing, right?..
|
# ? Jul 8, 2017 21:27 |
|
Right, a Hyper-V snapshot at its core just creates a new differencing disk (and a copy of the VM metadata at the time of the snapshot). The post-snapshot differencing disk will store the changes between the snapshot state and the current state.

From what I understand, your issue is that you do not know how fast this disk will grow. This is unavoidable - after all, it is just a file containing the differences between an existing disk and the current state. So it is a question of how fast the differences get created. If every write is new data, it is just a matter of counting up the writes to get a rough estimate.

But all in all, I bet you don't care about how fast it grows. You want to eliminate the danger of running out of space on the host. I would recommend you simply delete the snapshot. This will merge all the differences into the original 600 GB fixed-size base disk and get rid of the immediate threat. I believe this cannot be done live (it will pause the VM), but my memory is hazy.

Naturally, this whole setup stinks and I would recommend you run some education sessions about the risks involved. If the customer does not get the meaning of backups, a few harmless questions about their failure-recovery process might shed light on why they do not value their data, or perhaps it will jog them into taking it seriously.

EssOEss fucked around with this message at 22:04 on Jul 8, 2017 |
# ? Jul 8, 2017 22:01 |
|
underlig posted:The last full what?

Sorry, that was mean. I knew the answer to this question.

To merge the disks, you shut down the machine and delete the snapshot. Hyper-V will then merge the disks. There are a lot of things that could go wrong, though; I wouldn't do poo poo without a full backup of that VM somewhere else if it's indeed mission critical. I'd merge over the weekend (if they're only a M-F shop) because you don't know how long a merge operation is going to take on that bad boy, especially if it's on a single disk. Hopefully you have enough space to merge the disks, or else things get really fun. Your saving grace on this one is that it looks like the original disk is a fixed size and not dynamic, so you shouldn't run into any space problems since the space is already allocated. Good luck!
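In PowerShell the whole operation is a couple of cmdlets. The VM name here is made up - substitute your guest's:

```powershell
# Sketch: "PrepressGuest" is a placeholder VM name.
Get-VMSnapshot -VMName "PrepressGuest"                       # see what snapshots exist first
Stop-VM -Name "PrepressGuest"
Get-VMSnapshot -VMName "PrepressGuest" | Remove-VMSnapshot   # deleting triggers the merge
Get-VM -Name "PrepressGuest"                                 # Status column shows merge progress
```

Don't power the VM back on until the merge finishes, or you'll be waiting on it all over again.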
|
# ? Jul 10, 2017 16:14 |
|
I'm giving up on virtualizing my old PC, there just isn't a solution that will let me use the shitton of HDDs in there as well as have Windows VMs that can use USB devices exclusively.
|
# ? Jul 12, 2017 20:19 |
|
SEKCobra posted:I'm giving up on virtualizing my old PC, there just isn't a solution that will let me use the shitton of HDDs in there as well as have windows VMs that can use USB devices exclusively.

Passing through HDDs and USB devices is straightforward and reasonably performant in pretty much every modern hypervisor.
|
# ? Jul 12, 2017 20:51 |
|
SEKCobra posted:I'm giving up on virtualizing my old PC, there just isn't a solution that will let me use the shitton of HDDs in there as well as have windows VMs that can use USB devices exclusively.

What in the world are you trying to do?
|
# ? Jul 12, 2017 21:09 |
|
Hyper-V still does not do USB, does it?
|
# ? Jul 12, 2017 21:11 |
|
EssOEss posted:Hyper-V still does not do USB, does it?

I saw options for passthrough on the 2016 release preview.
|
# ? Jul 12, 2017 21:12 |
|
anthonypants posted:What in the world are you trying to do

Run a NAS on the same machine as a hypervisor. But yeah, Hyper-V can't even do USB passthrough, and everyone else can't do the NAS part.
|
# ? Jul 13, 2017 07:45 |