Methanar
Sep 26, 2013

by the sex ghost

Shumagorath posted:

I saw the vSphere web client for the first time today.

Is there a single good reason I shouldn't scrap our planned ESXi and convince everyone to go with Windows Server? Having separate management clients that are either deprecated but functional or featureful and utter trash (seriously, Adobe Flex??) isn't something I want to have my name on.

In ESXi 6 the web client uses HTML5.

It still sucks but it's not quite as bad.


Methanar
Sep 26, 2013

by the sex ghost

Tab8715 posted:

Is the fat vSphere client going away?

Either way, VMware's virtualization portfolio is incredibly strong. It's not just that they have a product that does what customers want; it's been around for a while, integrates a ton of stuff, and is incredibly polished.

The fat client was deprecated in 5.1. It can't use any of the new features in 5.5 and beyond, or certain vCenter-level features like distributed switches.

I don't think there's an actual kill date for it though.

Methanar
Sep 26, 2013

by the sex ghost
servers are cattle, not pets.

:colbert:

Methanar
Sep 26, 2013

by the sex ghost
I can't remediate hosts with VUM from the web client; instead I need to use the deprecated fat client. Are you serious, VMware?

Methanar
Sep 26, 2013

by the sex ghost
So how does JBOD affect disk IO?

I've got a server I'm playing with and 5 drives with ~150GB each. I'm going to install two Exchange servers, two DCs, and maybe two other low-impact things on ESXi. In total I might use 350GB of storage.

I'm concerned about disk I/O. I see a few options open to me.

1) RAID 5. I don't need any redundancy for what I'm doing but it's an option. I'm sure I'd still have enough storage after the parity overhead. My disk I/O concerns are more read than write anyway.

2) JBOD. This is what I'm leaning towards right now, but I'm not sure how well the load would be distributed across the disks. If I have a single disk eating all of the Exchange IOPS I'm probably gonna have a bad time.

3) I could make two RAID 0 arrays and one regular disk. This could work but could be cutting it pretty close to my storage requirements.

Methanar
Sep 26, 2013

by the sex ghost
This only needs to last a few months as a test environment, so I'm not worried about a disk failing right now. I'm demoing an on-prem Exchange to O365 migration plus a few other Azure-y things.

My plan was to run ESXi off of an old USB stick. I do have a hardware RAID controller that I was planning on using to aggregate my disks together, which gives me all of my RAID/JBOD options. My main question is: when I turn all my disks into a single logical volume with JBOD, does it just fill up the first disk before moving onto the second with zero concern for trying to distribute IO? I'm going to assume it does, or that it's RAID-card dependent.

I guess I could just not bother aggregating my disks at all and just have 5 different drives available to ESXi. That might just be the simplest and best option for me here.

Methanar
Sep 26, 2013

by the sex ghost

Potato Salad posted:

What sort of claim does MS have on VMware IP?

Probably something about VMware benefiting from their hypervisors running MS-produced OSes.

Methanar
Sep 26, 2013

by the sex ghost
Why not just do a physical server if you're resorting to using Workstation?

Methanar
Sep 26, 2013

by the sex ghost
I've got a weird issue with vCenter not properly handling AD accounts.

My vCenter server is connected to AD with Integrated Windows Authentication and my domain account is set to be a vCenter administrator. It works great, I can log in and do whatever, but after restarting vCenter I lose this capability and am told my account, along with all other AD accounts, does not have permission to log in.

That's weird, so I logged in with the vsphere.local\administrator account and checked that everything is still correctly set, and it is. If I give it a nudge and remove and re-add a specific domain account as being permitted to log into vCenter, all domain accounts regain their permissions.

What is going on?

Methanar
Sep 26, 2013

by the sex ghost

parid posted:

I had a very similar problem in my old job. Were you having permission errors powering on VMs? Inventory view in the web client without any VMs listed?

Our issue seemed to start after our vCenter 6 upgrade. The case is still open with VMware. They don't have the best people on it and it's moving very, very slowly. The best clue, last time I checked in, is that it had something to do with the inventory service. I'd encourage you to open a case. Maybe if more of us report it, they might devote the support/engineering resources to figure it out.

If I try to log in through the fat client I'm told I do not have permission.

The web client actually lets me log in but, like you, the inventory will be empty.

Methanar
Sep 26, 2013

by the sex ghost

1000101 posted:

Show me your sso config.

Also, stop using domain admin to do your day job. Create an AD group, drop admin users in it, then assign it rights via SSO.



Methanar
Sep 26, 2013

by the sex ghost
When I was doing a DC upgrade I wrongly assumed that because all the client machines were using DHCP for IP addressing, they must also be using it to pull their DNS servers.

So when I changed the DNS server scope option to point at what would be the new DC, nobody except my own computer and the six or so machines that I had personally set up got the message to use the new DNS server.

:downs:

Methanar
Sep 26, 2013

by the sex ghost
You can take a look at the quote I was given last year when I was involved in purchasing new hardware for the VRTX. This was dual PERC RAID cards, 17(?) 1TB drives, and 8 gigabit NICs in the tower chassis. Keep in mind this is from when the Canadian funny money was 75% of what the American dollar was. I have nothing to compare it to, but I thought it was really easy to work with and get set up for shared storage: you can do either iSCSI or NFS and just run it over a few 1Gb NICs bonded together if you really need to.

Methanar fucked around with this message at 03:46 on May 12, 2016

Methanar
Sep 26, 2013

by the sex ghost
Code is self documenting :colbert:

Methanar
Sep 26, 2013

by the sex ghost
Has anyone ever had ridiculous 350-500ms write latencies when using hybrid vSAN?



Read speeds are fine, writing to the SSD cache is fine, and the vSAN disk/deep dive view shows write latencies of around 3-10ms, which is what you'd expect.

That makes me think the problem is between the vSAN client tab and the vSAN disk, i.e. the network fabric. But that doesn't make much sense either, because clearly the data is getting to the servers just fine if the SSDs can write with next to no latency. The hosts are connected together with 40G links, with vCenter connected by 1G.

The M5210 RAID card we're using isn't certified for vSAN but is supposedly supported in RAID 0 mode, whatever that means. We interpreted it as meaning each drive had to be in a single-drive RAID 0 volume. Exposing single-drive JBOD didn't allow VMware to recognize the disks, but the RAID 0s were recognized, which sort of makes sense.

http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=vsanio&productid=36877

quote:

VSAN RAID 0 mode is only supported when the controller is in MegaRAID mode.

Am I just doing something dumb and obviously wrong?

Methanar
Sep 26, 2013

by the sex ghost
At gigabit speed that would take 18,641,351 hours to fill, or 776,722 days, or 2,128 years.
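For anyone checking the math, here's the back-of-envelope version (a quick sketch; the capacity being filled isn't quoted here, so it's back-solved from the hours):

code:
# Back-of-envelope check of the figures above. 1 Gb/s moves at most ~125 MB/s,
# and the capacity is back-solved from the quoted fill time.
hours = 18_641_351
seconds = hours * 3600
implied_bytes = seconds * 125e6                        # bytes moved at ~125 MB/s

print(f"{hours / 24:,.0f} days")                       # ~776,723 days
print(f"{hours / 24 / 365:,.0f} years")                # ~2,128 years
print(f"~{implied_bytes / 1e18:.1f} EB being filled")  # ~8.4 EB implied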

Methanar
Sep 26, 2013

by the sex ghost

Maneki Neko posted:

We had nothing but sadness and purple screens with the X710 drivers and VMware. It was bad enough we said gently caress it and switched our fleet back to the X520s :(

Seconding sadness with the X710 and VMware.

Methanar
Sep 26, 2013

by the sex ghost

evol262 posted:

I'm not familiar with HPE's offerings these days, but that's not 16GB of memory, right? Because that's hilariously underpowered as soon as you start putting VMs doing "real" work on it.

16GB is likely included in the base model. You buy the rest separately.

Methanar
Sep 26, 2013

by the sex ghost

theperminator posted:

Anyone have much experience with VSAN in production environments?

I'm wondering what kind of issues I can expect to run into with it, any gotchas etc?

Make absolutely sure any RAID/passthrough cards you have are explicitly listed by VMware as compatible with vSAN.

Methanar
Sep 26, 2013

by the sex ghost
gently caress x710


Methanar
Sep 26, 2013

by the sex ghost
It probably wouldn't be too bad to engineer something to do what you want.

You could write a fairly simple web server that listens for connecting clients and assigns the client a DB from a list when pinged.
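Something in this direction, roughly (a minimal standard-library Python sketch; the /assign and /done endpoints, the port, and the DB names are all made up for illustration):

code:
# Minimal controller sketch: hands out the next DB on /assign and records
# completions on /done/<db>. Endpoints, port, and DB names are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

dbs = ["db01", "db02", "db03"]   # databases still waiting to be processed
done = []                        # databases a worker has finished

class Controller(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/assign":
            body = dbs.pop(0) if dbs else "NONE"      # next DB, or NONE when the list is empty
        elif self.path.startswith("/done/"):
            done.append(self.path.split("/", 2)[2])   # a worker reported a good exit code
            body = "OK"
        else:
            body = "unknown endpoint"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("0.0.0.0", 8000), Controller).serve_forever()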

Make a base worker VMDK that contains an rc.local script (roughly sketched after this list) to
1) Reach out to the webserver and get a DB assigned
2) Dump the DB and run the scripts
3) Upload the processed data somewhere
4) Ping the controller again with some kind of exit code.
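The worker side could be as dumb as something like this, kicked off from rc.local (the controller hostname, processing script path, and endpoints are assumptions to match the sketch above):

code:
# Worker-side sketch: ask the controller for a DB, process it, report back.
# The controller hostname, processing script, and endpoints are invented.
import subprocess
import urllib.request

CONTROLLER = "http://controller:8000"

db = urllib.request.urlopen(f"{CONTROLLER}/assign").read().decode()   # 1) get a DB assigned
if db != "NONE":
    rc = subprocess.call(["/opt/worker/process_db.sh", db])           # 2) dump the DB and run the scripts, 3) upload
    urllib.request.urlopen(f"{CONTROLLER}/done/{db}")                 # 4) ping the controller back (the exit code could ride along as a query param)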

When the controller gets a good exit code it can do something like this to recreate the worker, repeating until the controller runs out of DBs to assign:

Trigger warning: Hyper V
code:
$parentpath = "E:\VMs\DBWorker.vhdx" 
$path = "E:\VMs\" 
$vmname = DBWorker

#Tear down the previous worker VM and its differencing disk
Get-VM $vmname | Stop-VM -Force
Get-VM $vmname | Remove-VM -Force
Remove-Item -Path "$path\$vmname Disk 0.vhdx" -Force

#create a VHDX – differencing format 
$vhdpath = "$path\$vmname Disk 0.vhdx" 
New-VHD -ParentPath $parentpath -Differencing -Path $vhdpath 

#Create the VM 
New-VM -VHDPath "$vhdpath" -Name $vmname -Path "$path" -SwitchName "HasInternet" -Generation 2 

#Configure Dynamic Memory 
Set-VMMemory -VMName $vmname -DynamicMemoryEnabled $True -MaximumBytes 2GB -MinimumBytes 1GB -StartupBytes 1GB

#Start the VM 
Start-VM $vmname 
If you have a lot of power locally, you could parallelize it fairly well by wrapping the above block in something like this:

code:
foreach ($i in 1..20) { 
  $suffix = $i.ToString("000") 
  $vmname = "DBWorker-$suffix" 
  #...then run the teardown/recreate block above with this $vmname
}

Methanar
Sep 26, 2013

by the sex ghost
Run ESXi in Workstation. :kheldragar:


ESXi will like the virtual Intel NIC.

Methanar
Sep 26, 2013

by the sex ghost

evil_bunnY posted:

I love that tingly "I'm not as smart as I think I am" feeling I get when you post.

Methanar
Sep 26, 2013

by the sex ghost
I'm 99% sure really bad things happen if you try to treat an HT core as if it were a physical core.

I.e., don't assign a VM 4 vCPUs on a 2-physical-core host.

Methanar
Sep 26, 2013

by the sex ghost
Have you looked into container orchestration platforms like Kubernetes or Docker Swarm? Those are super interesting and should be demoable on any VPS for cheap. With Jenkins or another build pipeline you can build your code right into Docker images that you can feed into a container orchestrator to be pushed around. It's less than friendly to get started, but the benefits for any sort of micro-service/distributed system are immediately obvious.

https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

For CM I've been using Ansible and Chef and both have their use cases. Ansible is okay for ad-hoc stuff like pushing code, checking statuses, and other human-intervention tasks, but I'd never dream of using it exclusively as a CM (Protip: figure out dynamic inventory and feed your Chef inventory into Ansible; there's a rough sketch below). Chef is great because it fully enables you to drop into arbitrary code at any moment if you need to do something fancy or need special logic tailored to your environment.
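For the dynamic inventory protip: the script just has to be an executable that answers --list with JSON. Here's a rough sketch that pulls hosts out of Chef via knife (the group name, search query, and attribute path are assumptions about your setup, and the knife JSON shape varies a bit by version):

code:
#!/usr/bin/env python3
# Rough sketch of an Ansible dynamic inventory script backed by Chef.
# Assumes knife is configured on the control machine.
import json
import subprocess
import sys

def chef_nodes():
    out = subprocess.check_output(["knife", "search", "node", "*:*", "-F", "json"])
    return json.loads(out).get("rows", [])

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        hosts = [n["automatic"]["fqdn"] for n in chef_nodes()
                 if n.get("automatic", {}).get("fqdn")]
        print(json.dumps({"chef_nodes": {"hosts": hosts}, "_meta": {"hostvars": {}}}))
    else:
        print(json.dumps({}))   # --host <name>: no per-host vars in this sketch
Make it executable and point Ansible at it with -i, e.g. ansible -i chef_inventory.py all -m ping.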

Methanar
Sep 26, 2013

by the sex ghost

Eletriarnation posted:

Depends on what kind of CPU you want, but you might find it more cost-effective to get a cheap home-office server like a PowerEdge T30 than to actually build something around a server-chipset motherboard.

I really doubt you're going to run into compatibility issues with whatever random desktop you want to run on, especially if you're OK popping in a NIC if the one you have isn't supported.

Does VMware still blacklist Realtek NICs? That might be something to watch out for.

Methanar
Sep 26, 2013

by the sex ghost

H2SO4 posted:

KUBERNETES WAS AN INSIDE JOB

Actual lol

Methanar
Sep 26, 2013

by the sex ghost

Wibla posted:

My G7 with 2x X5675, 120GB RAM and 2 SSD + 6 SAS 10k drives uses around 155-160W on average, and it's not noisy as such - but it varies fan speeds continually due to how the sensors are set up, so it's not a box I'd want in a living space. I get where you're coming from though.

What could you possibly need that at home for?

Methanar
Sep 26, 2013

by the sex ghost

stevewm posted:

I've been working on figuring out a solution for our infrastructure at work. A small smattering of servers... 8 Windows VMs and 4 Linux VMs. Our computing power needs are not likely to change soon, but our storage usage is constantly growing. The existing servers are all Dell, 6+ years old. Support/warranty on them is expiring soon.

So I'm looking at setting up a 2 or 3 node Hyper-V cluster to consolidate everything, though I think 2 nodes would be enough. At the very least to get some redundancy, as we basically have none right now.

Did a DPACK/Live Optics run for 24 hours; with all our servers included and the one Hyper-V host we had, it results in 1196 IOPS @ 95%... 4756 peak (caused by an end-of-day process that runs at 11PM).

Have gotten a few solutions quoted by a few vendors and have narrowed it down to a few choices.

2 node Starwind VSAN appliance - most expensive option by far
2x Lenovo SR630 servers paired with DS4200 iSCSI SAN - cheapest option
2 node Dell VRTX - just barely more expensive than the Lenovo option.
3 node Scale Computing HC cluster - in-between Lenovo and Dell price wise.

Starwind was out due to sheer pricing alone. Way more than anything else.

The Lenovo looks promising... Concerned about their support though.

While I was impressed by the Scale Computing cluster, it is sold as a fixed appliance and not easy to upgrade. Need more storage? You can't just slap more drives in, you have to add another entire node. They use SuperMicro hardware running a highly customized KVM hypervisor. They also need 3 nodes minimum, which increases the MS licensing.

I have been leaning towards the Dell VRTX, which is basically a chassis with blade servers and shared storage in a single box with redundant everything (except for the networking backplane, which is easy to work around). Easy to upgrade with either additional blades or storage. Single management point, easier configuration... From a company whose support I am familiar and satisfied with. Am I crazy for going this route?

I've used a Dell VRTX before and I agree, it was pretty nifty as an all-in-one box. If I were somehow in a situation where I needed an in-office server presence again, I'd definitely consider it.

Multiply whatever concerns you have about Lenovo support by a factor of 10. I will never be complicit in purchasing Lenovo hardware again.

I don't know anything about your storage requirements, but have you considered an external ZFS NAS that you mount rather than direct-attached disks?

Methanar fucked around with this message at 22:16 on Feb 14, 2018

Methanar
Sep 26, 2013

by the sex ghost
What's everyone's favorite way of backing up a VCSA database? Preferably free and lovely.

Methanar
Sep 26, 2013

by the sex ghost

Thermopyle posted:

I'm tired so I'm sure I'm just doing something stupid or forgetting something obvious here, but all of a sudden I can't remember how to handle docker ports.

I'm trying to run this docker container, like so:

code:
sudo docker run \
	-d \
	--name="smartthings-mqtt-bridge" \
	-v /opt/mqtt-bridge:`pwd` \
	-p 8080:12492 \
	stjohnjohnson/smartthings-mqtt-bridge
When I do that I get:

code:
docker: Error response from daemon: driver failed programming external connectivity on endpoint 
smartthings-mqtt-bridge (d259cfdf707fb30d910b5cfbaa7135ca90b45de2d5fb2d58f2b80b51e3d74368): 
Error starting userland proxy: listen tcp 0.0.0.0:8080: bind: address already in use.
I do have another service on 8080, but shouldn't my -p 8080:12492 mean that it's not attempting to use 8080 on my host?

No.

-p 8080:12492 says to NAT any traffic destined for port 8080 on your host to :12492 in your container, so two things would be trying to listen on 8080 on your host. Publish on a different free host port instead, e.g. -p 18080:12492.
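If it helps to see the mapping direction spelled out, here's the same container started through the Docker SDK for Python (a hedged sketch: 18080 is just an arbitrary free host port, and the volume mount from your command is left out):

code:
# Sketch using the Docker SDK for Python (pip install docker). In the ports
# dict the keys are CONTAINER ports and the values are HOST ports, which is
# the reverse order of the CLI's -p HOST:CONTAINER.
import docker

client = docker.from_env()
container = client.containers.run(
    "stjohnjohnson/smartthings-mqtt-bridge",
    name="smartthings-mqtt-bridge",
    detach=True,
    ports={"12492/tcp": 18080},   # container 12492 published on host 18080, leaving 8080 alone
)
print(container.short_id)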

Methanar
Sep 26, 2013

by the sex ghost
Inside of a container, do you still need to run public-facing services like HAProxy or nginx as unprivileged users, or is it fine to run them as a normal Ubuntu user? Chroot directives are extraneous as well, right?

Methanar fucked around with this message at 08:19 on Jul 3, 2018

Methanar
Sep 26, 2013

by the sex ghost

Question Friend posted:

Would making a virtual Linux box inside Windows allow you to use NordVPN on the programs inside the box while keeping your normal Windows connections raw?

If you set it up properly, sure

Methanar
Sep 26, 2013

by the sex ghost

Agrikk posted:

Is there a way I can wipe a disk from within ESXi?

I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation so there are old partitions on it.

I recently did this. I thought it was extremely frustrating too.

List /dev/disks to figure out which labels refer to which storage backend. Then you'll want to delete the unnecessary partitions.

code:
[root@esxi:/dev/disks] ls
mpx.vmhba32:C0:T0:L0
mpx.vmhba32:C0:T0:L0:1
mpx.vmhba32:C0:T0:L0:5
mpx.vmhba32:C0:T0:L0:6
mpx.vmhba32:C0:T0:L0:7
mpx.vmhba32:C0:T0:L0:8
mpx.vmhba32:C0:T0:L0:9
naa.6848f690ef0b72001f6320c182e69de9
naa.6848f690ef0b72001f6320c182e69de9:1
naa.6848f690ef0b720023fe71f648ee744e
naa.6848f690ef0b720023fe71f648ee744e:1
naa.6848f690ef0b720023fe71f648ee744e:2
naa.6848f690ef0b720023fe71f648ee744e:3
naa.6848f690ef0b720023fe71f648ee744e:4

[root@esxi:/dev/disks] rm naa.6848f690ef0b720023fe71f648ee744e:1 naa.6848f690ef0b720023fe71f648ee744e:2 naa.6848f690ef0b720023fe71f648ee744e:3 naa.6848f690ef0b720023fe71f648ee744e:4

[root@esxi:/dev/disks] partedUtil mklabel /dev/disks/naa.6848f690ef0b720023fe71f648ee744e msdos

[root@esxi:/dev/disks] vmkfstools -C vmfs3 -b 8m -S datastore1 /vmfs/devices/disks/naa.6848f690ef0b720023fe71f648ee744e:1

Methanar fucked around with this message at 05:22 on Mar 22, 2019

Methanar
Sep 26, 2013

by the sex ghost
lol if you pay for software

Methanar
Sep 26, 2013

by the sex ghost

Internet Explorer posted:

they don't call it "One Rich rear end in a top hat Called Larry Ellison" for nothing

https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2307s

Methanar
Sep 26, 2013

by the sex ghost
Why do you want a personal hardware lab in your living room?

Methanar
Sep 26, 2013

by the sex ghost

Hughlander posted:

Related: is there any free nested virtualization for Android emulators? A few years back someone charged an arm and a leg to run them on AWS, to the point where it wasn't worth it for one of the top 3 app developers to even consider using.

Protip: run Android emulators on ARM instances. Emulating ARM with QEMU on x86 is extremely slow.

Methanar fucked around with this message at 05:47 on Aug 17, 2019

Methanar
Sep 26, 2013

by the sex ghost

Martytoof posted:

Ehh, I'm reticent to ask this in here because I feel like I'm one good google search away but somehow it's eluding me.

I'm struggling to decide on a toolchain to move from hand-provisioning lab VMs and move to an infrastructure-as-code model. I've been doing a lot of work with ansible to configure servers after they're provisioned, but I haven't ever been able to close the loop and ditch the ESXi GUI.

In my mind I'm polling to see if there's any obvious advice to be gleaned here. Ideally I'd like to figure out a toolchain where I have a concept, say a k8s cluster, I code the hardware as one spec, the OS/app config as a second spec, and I'm two commands away from spinning up a whole set of working servers. In the back of my mind I'm also looking to do the same for all the infrastructure that ESXi is providing me: vSwitches, the state of the vhost itself, etc.

I've looked into vagrant and it seems plausible but I've read it's more for spinning up development environments than maintaining infrastructure. Terraform seems like the other obvious choice. Am I anywhere near "warm" on this? Are there any additional tools I should be investigating? I'm asking from a place of ignorance because I've always worked at places where infrastructure, development, and configuration management are all very monolithic and I've never really been exposed to other concepts.

I'm happy to keep talking it through if I'm not asking the right questions, but hopefully you guys can guess what I'm seeing in my mind's eye. If there's a better place to ask this then I apologize and I'm all ears. I poked the virtualization thread because it's immediately relevant to my specific homelab, but ideally the concepts carry over to the cloud and are platform-agnostic for the most part.

Terraform is what you want for creating infrastructure on a cloud platform of some kind, whether it's AWS, vSphere, or whatever: things like creating VMs, attaching disks to VMs, and networking VMs. It looks like a Terraform provider does exist for direct interaction with ESXi without vCenter, if that's your thing: https://github.com/josenk/terraform-provider-esxi

Terraform isn't great when it comes to k8s manifests, though; try kustomize for that. Your post is a bit ambiguous as to whether you want to be maintaining workloads on k8s or on VM infra.


Don't be afraid to pick the right tool for the job: if you want to use Vagrant for your local testing and development and then separately use Terraform to create your AWS/vSphere infrastructure, you can. You don't need to try to make a single tool do everything.



You can try the DevOps thread too:
https://forums.somethingawful.com/showthread.php?threadid=3695559


Methanar
Sep 26, 2013

by the sex ghost
The X710 is great because it puts ESXi closer to its natural state: PSOD.


Also, I once discovered that my non-LACP bonded interfaces had been flapping several times a second for like a year on one of our databases.

Also, sometimes the cards would just not be recognized as plugged in at all, even by the BMC, until you restarted 10 times.

gently caress datacenters

Methanar fucked around with this message at 00:53 on May 7, 2020

  • Reply