|
Shumagorath posted:I saw the vSphere web client for the first time today. In ESXi 6 the web client uses HTML5. It still sucks, but it's not quite as bad.
|
# ¿ Nov 4, 2015 00:13 |
|
Tab8715 posted:Is the fat vSphere client going away? The client was deprecated in 5.1. It can't use any of the new features in 5.5 and beyond, nor certain vCenter-level features like distributed switches. I don't think there's an actual kill date for it though.
|
# ¿ Nov 4, 2015 00:33 |
|
servers are cattle, not pets.
|
# ¿ Dec 14, 2015 04:03 |
|
I can't remediate hosts with VUM from the web client; instead I need to use the deprecated fat client. Are you serious, VMware?
|
# ¿ Dec 30, 2015 06:50 |
|
So how does JBOD affect disk IO? I've got a server I'm playing with and 5 drives with ~150GB each. I'm going to install two Exchange servers, two DCs, and maybe two other low-impact things on ESXi. In total I might use 350GB of storage, and I'm concerned about disk I/O. I see a few options open to me:

1) RAID 5. I don't need any redundancy for what I'm doing, but it's an option, and I'm sure I'd still have enough storage after the overhead. My disk I/O concerns are more read than write anyway.

2) JBOD. This is what I'm leaning towards right now, but I'm not sure how well the load would be distributed across the disks. If I have a single disk eating all of the Exchange IOPS I'm probably gonna have a bad time.

3) I could make two RAID 0 arrays and one regular disk. This could work but could be cutting it pretty close to my storage requirements.
|
# ¿ Jan 23, 2016 06:25 |
|
This only needs to last a few months as a test environment, so I'm not worried about a disk failing right now. I'm demoing an on-prem Exchange to O365 migration plus a few other Azure-y things. My plan was to run ESXi off of an old USB stick. I do have a hardware RAID controller that I was planning on using to aggregate my disks together, which gives me all of my RAID/JBOD options. My main question was: when I turn all my disks into a single logical volume with JBOD, does it just fill up the first disk before moving onto the second, with zero concern for trying to distribute IO? I'm going to assume it does, or that it's RAID-card dependent. I guess I could just not bother aggregating my disks at all and have 5 different drives available to ESXi. That might be the simplest and best option for me here.
|
# ¿ Jan 23, 2016 21:39 |
|
Potato Salad posted:What sort of claim does MS have on VMware IP? Probably something about VMware benefiting from their hypervisors running MS-produced OSes.
|
# ¿ Feb 15, 2016 23:48 |
|
Why not just do a physical server if you're resorting to using workstation?
|
# ¿ Feb 24, 2016 03:11 |
|
I've got a weird issue with vCenter not properly handling AD accounts. My vCenter server is connected to AD with integrated Windows authentication and my domain account is set to be a vCenter administrator. It works great and I can log in and do whatever, but after restarting vCenter I lose this capability and am told my account, along with every other AD account, does not have permission to log in. That's weird, so I logged in with the vsphere.local\administrator account and checked that everything is still correctly set, and it is. If I give it a nudge and remove and re-add a specific domain account as being permitted to log into vCenter, all domain accounts regain their permissions. What is going on?
|
# ¿ Mar 3, 2016 21:00 |
|
parid posted:I had a very similar problem in my old job. Were you having permissions errors powering on VMs? Inventory view in the web client without any VMs listed? If I try to log in through the fat client I'm told I do not have permission. The web client actually lets me log in but, like you, the inventory is empty.
|
# ¿ Mar 6, 2016 18:04 |
|
1000101 posted:Show me your sso config.
|
# ¿ Mar 6, 2016 21:15 |
|
When I was doing a DC upgrade I wrongly assumed that because all the client machines were using DHCP for IP addressing, they must also be using it to pull their DNS. So when I changed the DNS server scope option to point at what would be the new DC, nobody except my own computer and the 6 or so machines that I had personally set up got the message to use the new DNS server.
|
# ¿ Apr 5, 2016 00:36 |
|
You can take a look at the quote I was given last year when I was involved in purchasing new hardware for the VRTX. This was dual PERC RAID cards, 17(?) 1TB drives, and 8 gigabit NICs in the tower chassis. Keep in mind this is from when the Canadian funny money was worth 75% of the American dollar. I have nothing to compare it to, but I thought it was really easy to work with and get set up for shared storage: you can do either iSCSI or NFS and just run it over a few 1Gb NICs bonded together if you really need to. Methanar fucked around with this message at 03:46 on May 12, 2016 |
# ¿ May 12, 2016 03:43 |
|
Code is self-documenting
|
# ¿ Sep 19, 2016 18:04 |
|
Has anyone ever had ridiculous 350-500ms write latencies when using hybrid vSAN? Read speeds are fine, writing to the SSD cache is fine, and the write latencies under the vSAN disk deep-dive view show around 3-10ms, which is what you'd expect. That makes me think the problem is between the vSAN client tab and the vSAN disks, i.e. the network fabric. But that doesn't make much sense either, because clearly the data is getting to the servers just fine if the SSDs are writing with almost no latency. The hosts are connected together with 40G links, with vCenter connected by 1G. The M5210 RAID card we're using isn't certified for vSAN but is supposedly supported in RAID 0 mode, whatever that means. We interpreted it as meaning each drive had to be in a single-drive RAID 0 volume. Exposing single-drive JBOD didn't let VMware recognize the disks, but the RAID 0s were recognized, which sort of makes sense. http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=vsanio&productid=36877 quote:VSAN RAID 0 mode is only supported when the controller is in MegaRAID mode. Am I just doing something dumb and obviously wrong?
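One thing worth ruling out on the fabric side is the vSAN vmkernel path itself; a quick check from one host to a peer, with the vmknic name and peer IP as placeholders for whatever your vSAN network actually uses: code:
# confirm which vmkernel interface is carrying vSAN traffic on this host
esxcli vsan network list

# then verify the path to a peer host with don't-fragment set
# (-s 8972 assumes jumbo frames; use -s 1472 for a standard 1500 MTU)
vmkping -I vmk2 -d -s 8972 192.168.50.12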
|
# ¿ Sep 25, 2016 02:25 |
|
At gigabit speed that would take 18641351 hours to fill, or 776722 days, or 2128 years.
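For anyone checking the math, the conversions line up (quick shell sanity check, integer division; it doesn't restate whatever capacity was being filled): code:
hours=18641351
echo "$(( hours / 24 )) days"        # 776722
echo "$(( hours / 24 / 365 )) years" # 2128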
|
# ¿ Dec 3, 2016 18:51 |
|
Maneki Neko posted:We had nothing but sadness and purple screens with the x710 drivers and VMware, it was bad enough we said gently caress it and switched our fleet back to the x520s Seconding sadness with x710 and VMware.
|
# ¿ Dec 5, 2016 22:54 |
|
evol262 posted:I'm not familiar with HPE's offerings these days, but that's not 16gb of memory, right? Because that's hilariously underpowered as soon as you start putting VMs doing "real" work on it. 16GB is likely included in the base model. You buy the rest separately.
|
# ¿ Dec 6, 2016 06:23 |
|
theperminator posted:Anyone have much experience with VSAN in production environments? Make absolutely sure any RAID/passthrough cards you have are explicitly listed by VMware as compatible with vSAN.
|
# ¿ Dec 13, 2016 02:37 |
|
gently caress x710
|
# ¿ Jan 10, 2017 00:29 |
|
It probably wouldn't be too bad to engineer something to do what you want. You could write a fairly simple web server that listens for connecting clients and assigns each client a DB from a list when pinged. Make a base worker VMDK that contains an rc.local script to:

1) Reach out to the web server and get a DB assigned
2) Dump the DB and run the scripts
3) Upload the processed data somewhere
4) Ping the controller again with some kind of exit code

(That worker script is roughly sketched below.) When the controller gets a good exit code it can do something like this to repeat until the controller runs out of DBs to assign: Trigger warning: Hyper V code:
code:
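#!/bin/sh
# Rough sketch of the worker rc.local described in steps 1-4 above, not the
# Hyper-V controller snippet referenced just before this. The controller URL,
# dump command, and upload target are all placeholders.

CONTROLLER="http://controller.example.local:8080"

# 1) reach out to the web server and get a DB assigned
DB=$(curl -fsS "$CONTROLLER/assign?host=$(hostname)")

# 2) dump the assigned DB and run the processing scripts against it
mysqldump "$DB" > "/tmp/$DB.sql"
/opt/scripts/process.sh "/tmp/$DB.sql"
RC=$?

# 3) upload the processed data somewhere
scp "/tmp/$DB.processed" user@results.example.local:/incoming/

# 4) ping the controller with the exit code so it can recycle this worker
curl -fsS "$CONTROLLER/done?host=$(hostname)&db=$DB&rc=$RC"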
|
# ¿ Apr 19, 2017 07:53 |
|
Run ESXi in Workstation. ESXi will like the virtual Intel NIC.
|
# ¿ May 8, 2017 06:31 |
|
evil_bunnY posted:I love that tingly "I'm not as smart as I think I am" I get when you post.
|
# ¿ Jun 8, 2017 23:22 |
|
I'm 99% sure really bad things happen if you try to treat an HT core as if it were a physical core, i.e. don't assign a VM 4 cores on a 2-physical-core host.
|
# ¿ Jul 19, 2017 01:13 |
|
Have you looked into container orchestration platforms like Kubernetes or Docker Swarm? Those are super interesting and should be demoable on any VPS for cheap. With Jenkins or another build pipeline you can build your code right into Docker images that you can feed into a container orchestrator to be pushed around (rough sketch at the end of this post). It's less than friendly to get started, but the benefits for any sort of micro-service/distributed system are immediately obvious. https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

For CM I've been using Ansible and Chef, and both have their use cases. Ansible is okay for ad-hoc stuff like pushing code, checking statuses, and other human-intervention tasks, but I'd never dream of using it exclusively as a CM (protip: figure out dynamic inventory and feed your Chef inventory into Ansible). Chef is great because it fully enables you to drop into arbitrary code at any moment if you need to do something fancy or need special logic tailored to your environment.
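For the build-pipeline piece mentioned above, the basic loop is just tagging an image with the commit and pushing it somewhere the orchestrator can pull from; registry, image, and deployment names here are made up: code:
# build an image tagged with the current commit and push it to a registry
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.local/myapp:"$TAG" .
docker push registry.example.local/myapp:"$TAG"

# then roll the new tag out through the orchestrator (Kubernetes shown here)
kubectl set image deployment/myapp myapp=registry.example.local/myapp:"$TAG"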
|
# ¿ Jul 24, 2017 01:24 |
|
Eletriarnation posted:Depends on what kind of CPU you want but you might find it more cost effective to get a cheap home-office server like a PowerEdge T30 than to actually build something around a server-chipset motherboard. Does VMware still blacklist Realtek NICs? That might be something to watch out for.
|
# ¿ Aug 5, 2017 06:12 |
|
H2SO4 posted:KUBERNETES WAS AN INSIDE JOB Actual lol
|
# ¿ Sep 12, 2017 03:29 |
|
Wibla posted:My G7 with 2x X5675, 120GB RAM and 2 SSD + 6 SAS 10k drives uses around 155-160W on average, and it's not noisy as such - but it varies fan speeds continually due to how the sensors are set up, so it's not a box I'd want in a living space. I get where you're coming from though. What could you possibly need that at home for?
|
# ¿ Feb 3, 2018 19:26 |
|
stevewm posted:I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old. Support/warranty on them expiring soon. I've used a Dell VRTX before and I agree, it was pretty nifty as an all-in-one box. If I was somehow in a situation where I needed an in-office server presence again, I'd definitely consider it. Multiply whatever concerns you have about Lenovo support by a factor of 10. I will never be complicit in purchasing Lenovo hardware again. I don't know anything about your storage requirements, but have you considered an external ZFS NAS that you mount rather than direct-attaching disks? Methanar fucked around with this message at 22:16 on Feb 14, 2018 |
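If you go the NAS route, mounting an NFS export as a datastore is about as simple as shared storage gets; hostname, export path, and datastore name below are placeholders: code:
# mount an NFS export from the NAS as a datastore on each ESXi host
esxcli storage nfs add --host=nas.example.local --share=/tank/vmstore --volume-name=nas-vmstore

# confirm it showed up
esxcli storage nfs list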
# ¿ Feb 14, 2018 22:12 |
|
What's everyone's favorite way of backing up a VCSA database? Preferably free and lovely.
|
# ¿ Mar 7, 2018 01:17 |
|
Thermopyle posted:I'm tired so I'm sure I'm just doing something stupid or forgetting something obvious here, but all of a sudden I can't remember how to handle docker ports. No. -p 8080:12492 says to NAT any traffic destined for 8080 on your host to :12492 in your container. That means two things would be trying to listen on 8080 on your host.
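In docker run terms the publish flag is HOST:CONTAINER, so for example (image names are made up): code:
# host port 8080 is NATed to port 12492 inside the first container
docker run -d -p 8080:12492 some-image

# a second publish on host port 8080 collides, regardless of the container port
docker run -d -p 8080:9000 some-other-image   # fails: port is already allocated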
|
# ¿ Jun 29, 2018 20:17 |
|
Inside of a container, do you still need to run public-facing services like HAProxy or nginx as unprivileged users, or is it fine to run them as a normal Ubuntu user? Chroot directives are extraneous as well, right?
Methanar fucked around with this message at 08:19 on Jul 3, 2018 |
# ¿ Jul 3, 2018 08:17 |
|
Question Friend posted:Would making a virtual Linux box inside Windows allow you to use NordVPN on the programs inside the box while keeping your normal Windows connections raw? If you set it up properly, sure
|
# ¿ Aug 3, 2018 23:55 |
|
Agrikk posted:Is there a way I can wipe a disk from within ESXi? I recently did this. I thought it was extremely frustrating too. Run lsblk to figure out which labels refer to which storage backend, then delete the unnecessary partitions. code:
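# A sketch of the sort of thing involved, assuming a Linux shell as the lsblk
# step implies. Destructive: /dev/sdX is a placeholder, triple-check the device.

# figure out which block device is the disk you actually want to wipe
lsblk -o NAME,SIZE,MODEL,SERIAL

# clear filesystem/RAID/partition-table signatures so it shows up as a blank disk
wipefs -a /dev/sdX

# or just zero the start of the disk, which takes the partition table with it
dd if=/dev/zero of=/dev/sdX bs=1M count=100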
Methanar fucked around with this message at 05:22 on Mar 22, 2019 |
# ¿ Mar 22, 2019 05:19 |
|
lol if you pay for software
|
# ¿ Mar 22, 2019 21:41 |
|
Internet Explorer posted:they don't call it "One Rich rear end in a top hat Called Larry Ellison" for nothing https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2307s
|
# ¿ Mar 22, 2019 23:06 |
|
Why do you want a personal hardware lab in your living room
|
# ¿ Jun 16, 2019 23:49 |
|
Hughlander posted:Related. Is there any free nested virtualization for Android emulators? Few years back someone charged an arm and a leg to run them on AWS to the point where it wasn’t worth it for one of the top 3 app developers to even consider using. Protip: run Android emulators on ARM instances. Doing QEMU emulation from x86 is extremely slow. Methanar fucked around with this message at 05:47 on Aug 17, 2019 |
# ¿ Aug 17, 2019 03:04 |
|
Martytoof posted:Ehh, I'm reticent to ask this in here because I feel like I'm one good google search away but somehow it's eluding me. Terraform is what you want for creating infrastructure on a cloud platform of some kind, whether it's AWS, vSphere or whatever: things like VMs, attaching disks to VMs, and networking VMs. It looks like a Terraform provider does exist for direct interaction with ESXi without vCenter if that's your thing: https://github.com/josenk/terraform-provider-esxi. Terraform isn't great when it comes to k8s manifests, though; try kustomize for that. Your post is a bit ambiguous as to whether you want to be maintaining workloads on k8s or on VM infra. Don't be afraid to pick the right tool for the job: if you want to use Vagrant for your local testing and development and then separately use Terraform to create your AWS/vSphere infrastructure, you can. You don't need to try to make a single tool do everything. You can try the devops thread too: https://forums.somethingawful.com/showthread.php?threadid=3695559
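For what it's worth, the day-to-day Terraform loop is the same regardless of which provider you land on: code:
terraform init    # pull down the provider plugins (vsphere, aws, the esxi one, etc.)
terraform plan    # preview what would be created or changed
terraform apply   # actually create the VMs, disks, and networks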
|
# ¿ Feb 15, 2020 02:14 |
|
X710 is great because it puts ESXi closer to its natural state: PSOD. Also, I once discovered that my non-LACP bonded interfaces had been flapping several times a second for like a year on one of our databases. Also sometimes the cards would just not be recognized as plugged in at all, even by the BMC, until you restarted 10 times. gently caress datacenters Methanar fucked around with this message at 00:53 on May 7, 2020 |
# ¿ May 7, 2020 00:44 |