|
Syano posted:I know we go over this from time to time, but would some of you guys mind posting what you use for your home lab setups? I am going to start putting together a lab since I am finally managing an environment bigger than 3 hosts and using a bit more than live migration and would really appreciate some ideas on what you all are using. Thanks There's an ongoing thread over at HardOCP where people have been posting their home setups over the years. You should check out the more recent posts to see what folks are running. But if you want to know what goons use, maybe it'd be better to start a dedicated thread for that? I dunno what the policy on "list your gear" threads is around these parts.
|
# ¿ Jun 21, 2013 21:24 |
|
bull3964 posted:Still though, you should probably use drives with a limited error recovery mode (TLER for Western Digital drives) or they are going to be a PITA to use under any sort of RAID-like system, as the first time one of them goes into heroic recovery mode it will likely hose the volume. So this. Oh so this. A thousand times this. I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks the hard way. A drive failed, and then the array failed to rebuild after I swapped out the dead drive. The vendor-verified solution was to tear down the array, rebuild it, and restore the data from backup (after testing each individual drive for health first, of course). Don't be me. Buy TLER/CCTL drives for your hardware RAID solution. Note that TLER/CCTL adds no benefit on systems using software RAID.
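As an aside, if you're wondering whether a drive you already own supports time-limited error recovery, smartmontools can query and set it where the drive allows. A minimal sketch from a Linux shell (/dev/sda is a placeholder, and plenty of consumer drives simply refuse the command):

code:
smartctl -l scterc /dev/sda         # show current SCT error-recovery timeouts, if supported
smartctl -l scterc,70,70 /dev/sda   # cap read/write recovery at 7.0 seconds, TLER-style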
|
# ¿ Jun 25, 2013 23:17 |
|
FISHMANPET posted:ZFS supremacy. That, like the rest of us, you enjoy playing with expensive toys and getting to build things instead of maintaining things.
|
# ¿ Jun 25, 2013 23:26 |
|
Frozen-Solid posted:I've successfully deployed vCenter and vSphere Data Protection... You throw that word out of your vocabulary right now.
|
# ¿ Jun 27, 2013 23:53 |
|
Kachunkachunk posted:
Using iSCSI for desktop use? What a great idea! I multibox 5 instances of WoW across two PCs, with one PC as my master and the other running four slaves, and even a RAID-0 array gets clobbered during a 4x map refresh. I might have to look into an iSCSI setup with LUNs mapped to my overbuilt file server, which has I/O to spare. Well hello, new Monday night project! Agrikk fucked around with this message at 18:42 on Jul 5, 2013 |
# ¿ Jul 5, 2013 18:39 |
|
evol262 posted:You probably shouldn't, unless you want to set up one LUN per client or your clients are Windows 2k8r2/2012 in a cluster and you make the LUN a cluster shared volume. Nah. I was thinking I could re-enable the iSCSI Target software installed on my file server and create four LUNs mounted on my desktop. Yes, I could simply set up a file share and have all the clients point there. But then it wouldn't be overbuilt, unnecessary and fun at all, would it? What could possibly go wrong?
|
# ¿ Jul 5, 2013 20:08 |
|
Goon Matchmaker posted:Well. Deep Security PSOD'd two of our ESXi hosts, taking down our entire production cluster. I've had more problems with AV poo poo in production environments than problems from viruses, to the point where the last production environment I ran with AV was eight years ago. It was a Symantec installation, and we had a virus come in using a SEP exploit as the vector. Any server that had SEP installed rolled over and died while the AV-free ones hummed right along. To recover from the outbreak we had to uninstall SEP, and we never bothered installing another AV suite. I haven't had an outbreak since, in that environment or any other. But count the times Trend or Symantec has toasted a SAN or a VM in our lower environments? loving meh.
|
# ¿ Jul 19, 2013 23:59 |
|
Tequila25 posted:I didn't even think about flash drives. Do you even need a hard drive in the host? I guess maybe for logs? A lot of newer servers have an internal USB connection right on the motherboard. Pick up an 8GB USB stick for a few bucks each and never worry about log files or hard drives. Also, I once had a dev ESX 3 server go for over three months with a dead RAID controller. It booted, and then the controller died, killing the local disk volume. It ran just fine until someone physically sat at the console and saw all the SCSI errors scrolling by. Hard drives are overrated. Oh yeah, but proper monitoring isn't...
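One caveat with USB-stick boot: ESXi keeps its logs on a ramdisk, so they evaporate on reboot. A minimal sketch of pointing them at a remote syslog collector instead, from the ESXi shell (the loghost address is a placeholder):

code:
esxcli system syslog config set --loghost='tcp://192.168.1.10:514'       # ship logs to a remote collector
esxcli system syslog reload                                              # apply the new config
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true   # open the outbound syslog port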
|
# ¿ Aug 9, 2013 00:52 |
|
skipdogg posted:Quick networking question.... I'm out of my depth here. My lab is similar to your setup in that each of my hosts has four NICs that I want to set up using multiple pathways: 1 LAN NIC, 1 vMotion NIC and two iSCSI NICs that I have configured for round-robin access to help distribute the load. Each connection type is on a dedicated VLAN, and the vMotion and iSCSI VLANs are non-routable. Here's a pic of my current config from one of my hosts: I have each NIC in its own vSwitch. On my iSCSI target I grant access to each NIC path and to each volume group (or whatever your target calls them), so each storage volume will then appear twice in vSphere. Then in the iSCSI initiator properties you add the iSCSI NICs you previously identified. I think the default path policy is failover-only, but in my case I have configured the connections as round-robin: right-click a storage volume and select Manage Paths. In this case I have four connections to a volume group because my iSCSI target has two active/active heads. Two heads times two paths to each head = four paths. In your case, you will have two NICs on your SAN times four NICs on your ESX host = eight paths. Agrikk fucked around with this message at 18:05 on Aug 14, 2013 |
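If you'd rather script the path policy than right-click every volume, esxcli can do the same thing from the ESXi shell. A minimal sketch (the naa.xxxx device ID is a placeholder for whatever the device list reports):

code:
esxcli storage nmp device list                                    # list devices and their current path selection policy
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR  # switch a device to round-robin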
# ¿ Aug 14, 2013 17:56 |
|
skipdogg posted:drat man, thanks for taking the time to reply with a very informative post. Much appreciated. Glad to help. Multipathing can be a little tricky and it took me a lot of trial and error to get it working, so I'm happy to help anyone else avoid that pain. Also, I have not done any bonding or anything special on my switch. From what I've read you get the best performance if you let ESXi handle the load balancing/port grouping and avoid creating a LAG on your switch. All you need to do on the switch side is configure your VLANs and set the maximum packet size (MTU) to 9000 for jumbo frames. Agrikk fucked around with this message at 18:08 on Aug 14, 2013 |
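Worth noting that jumbo frames only help if they're enabled end to end: physical switch, vSwitch, and vmkernel port all at 9000. A minimal sketch for the ESXi side (vSwitch1, vmk1 and the target IP are placeholders for your own iSCSI bits):

code:
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000   # raise the vSwitch MTU
esxcli network ip interface set --interface-name=vmk1 --mtu=9000         # raise the vmkernel port MTU
vmkping -d -s 8972 192.168.50.10   # verify: don't-fragment ping at 8972 bytes (9000 minus IP/ICMP headers)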
# ¿ Aug 14, 2013 18:05 |
|
sanchez posted:I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical NIC by making all other NICs unused in the properties for each vmkernel. "Tidier" is in the eye of the beholder. While I agree that a single vSwitch for all iSCSI adapters looks tidier, I prefer a unique vSwitch for each iSCSI connection so that I don't have unused adapters sitting in my configuration, which makes the configuration itself tidier, IMO. This way I avoid the potential mistake of "Wait, why is this adapter unused in this vSwitch? I should turn it back on." when it's four in the morning and I'm feeling dingy after some issue that has kept me up all night. But whatev'. I don't think it makes any performance difference at all.
|
# ¿ Aug 15, 2013 02:36 |
|
I just found out that the infrastructure team at the company where I am contracting has created "units" of hardware that they offer in their IaaS plan. This doesn't seem so bad on its face, but with the basic unit being 1 core and 2GB of RAM, it has caused some strange and horrible configurations to appear on the VMs people are requesting. I was just on a VM configured with 64GB of RAM. And 32 CPUs. When I asked about it, they said it was a SQL server requiring 64GB of RAM, and since they're paying for the cores, why not add the CPUs as well? Another machine was configured with 20GB of RAM, and of course five cores, because they paid for 'em. Five cores? These two examples seem really wrong to me, but I can't find any supporting documentation other than "provision your CPU cores properly". Can anyone explain why these two configurations are okay or horribly broken, and what their impact would be?
|
# ¿ Aug 23, 2013 21:37 |
|
Hah hah hah! I just found out that each host in this IaaS effort is configured with 32 cores and 128GB of RAM, and they derived the cost per unit from the purchase price of each host. So, in the case of the aforementioned 32-core VM, per policy it will now be the only VM running on its host, which will leave 64GB of RAM unused. Holy poo poo, I just don't know. Docjowles posted:TL;DR assigning unused CPU cores to a VM actually makes performance go DOWN. You should be giving everything 1 vCPU by default unless they can prove that their application scales positively with more. That's what I thought general wisdom said. Interestingly, the Team Foundation Server at my last job ran smooth as silk with 4 cores on Server 2003 R2. It ran like a turd on anything more or anything less. Agrikk fucked around with this message at 22:47 on Aug 23, 2013 |
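The mechanism behind that is co-scheduling: a VM's vCPUs have to be scheduled more or less together, so idle vCPUs still cost scheduling opportunities. The tell is CPU ready time (%RDY). A minimal way to eyeball it from the ESXi shell (the ~5% figure is a common rule of thumb, not a hard limit):

code:
esxtop   # press 'c' for the CPU view, then 'e' plus a VM's GID to expand its worlds
         # sustained %RDY above ~5% per vCPU suggests the VM is waiting on co-scheduling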
# ¿ Aug 23, 2013 22:42 |
|
evil_bunnY posted:Google relaxed coscheduling, read up. Thanks for the terms to look for! edit: vSphere 5.5 posted:Virtual CPUs per host 4096 (was 2048) Phew. I'll finally be able to use all the processors in my gaming rig... Seriously, though. What does ESXi run on that even has 4K processors? Some kind of mainframe? Agrikk fucked around with this message at 23:14 on Aug 26, 2013 |
# ¿ Aug 26, 2013 22:51 |
|
Mierdaan posted:vCPUs, not physical cores. For people who love to over-provision. Heh. My IaaS team would love it, then. But doesn't the host need to have at least that many CPU cores to be able to provide that many vCPUs to the VM?
|
# ¿ Aug 26, 2013 23:22 |
|
Tequila25 posted:Okay, now that everything's been purchased and delivered, I'm in the process of setting up my first production ESXi cluster. IMO, setting up a production virtual environment is one of the funnest things you can do in Infrastructure. Have fun! Protip: after you've set it all up but before you have any critical VMs on it, spend a day or two unplugging cables and powering off devices to simulate outages. It's a good sanity check to run through some disaster scenarios and make sure everything is configured and working like you expect. Then do the same thing with your boss watching. When he sees the magical, mystical vMotion maintain connectivity to a VM after a component failure, he'll have faith in you and your gear and be able to show it off to his bosses, making you both look good.
|
# ¿ Sep 26, 2013 00:51 |
|
cheese-cube posted:Can we have a biggest-cluster dick waving contest? I'll play: This is one of three roughly identical virtual environments in data centers for the organization I'm currently consulting for as an infrastructure architect. The three data centers combined have over thirty ESX hosts, with over eight hundred cores serving two terahertz of aggregate CPU, 12TB of RAM, and close to a PB of storage for virtual hosts alone.
|
# ¿ Oct 3, 2013 00:47 |
|
cheese-cube posted:God drat. Out of interest, what are the hosts? Mostly these: I would love to get these things folding for a week. Just one week. My PPD would be absurd.
|
# ¿ Oct 3, 2013 23:03 |
|
Never mind, figured it out. Agrikk fucked around with this message at 04:16 on Oct 16, 2018 |
# ¿ Oct 15, 2018 21:24 |
|
Can anyone suggest some eBay-able hardware that is compatible with ESXi 6.7? I have an Opteron-based ESXi 6.5 lab whose CPUs are no longer supported, so I cannot upgrade to 6.7. I have three hosts built on Supermicro H8SGL-F motherboards with Opteron 6000-series CPUs, and they have reached EOL on VMware's HCL. I'd like to replace the three hosts with two, spending around $250 per box for CPU/RAM/motherboard. Can anyone recommend a CPU/motherboard combo that is (relatively) future proof? I'd prefer a Supermicro board for the IPMI capability with remote KVM, but will look at anything.
|
# ¿ Feb 25, 2019 21:47 |
|
Is there a way to migrate a VM from one cluster to another cluster with incompatible CPU types? I have an ESXi 6.5 cluster running on Opteron 6128 CPUs and an ESXi 6.7 cluster running on Xeon E5620 chips. I'm trying to decommission the Opteron cluster, but a live vMotion won't work because you can't vMotion between AMD and Intel CPUs. What's the best way of moving these workloads to the Xeon cluster?
|
# ¿ Mar 19, 2019 18:14 |
|
Thanks for the replies, all. I'm just going to bite the bullet and schedule an outage to move these boxes.
|
# ¿ Mar 19, 2019 21:39 |
|
BangersInMyKnickers posted:If you want to do it with minimal disruption, mount the data stores from the old cluster to the new cluster. That way you can just shut down on the old hardware, vmotion the vm compute to the new hardware and fire it up, then start running a storage vmotion in the background while its up to get it on the new datastores. Did this process when I had to integrate a junky old cluster with an incompatible EVC mask, only saw about 5 min of downtime max. Great idea. Thanks for this.
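For the ESXi-side half of this, presenting the old LUNs to the new hosts and rescanning is all it takes; an existing VMFS volume mounts automatically once the hosts can see it. A minimal sketch, run from the ESXi shell on each host in the new cluster:

code:
esxcli storage core adapter rescan --all   # rescan all HBAs for newly presented LUNs
esxcli storage filesystem list             # confirm the old cluster's VMFS datastores now show as mounted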
|
# ¿ Mar 19, 2019 22:52 |
|
So migration to the new cluster took about 120 seconds of actual outage time for the six VMs that I migrated. FWIW, these were Windows Server 2016 VMs. I mounted the old datastores in the new cluster, powered the VMs off, migrated them, and powered them back on, and they came right back up, no additional reboot required. Having migrated compute, I then ran a storage vMotion to the new cluster's storage, which took about fifteen minutes total (but was transparent to the end users). I had a four-hour outage window on the books, so being able to tell folks that the change was complete after thirty minutes was p. cool. Thanks Potato Salad, BangersInMyKnickers, Vulture Culture, DevNull, Moey for your advice!
|
# ¿ Mar 21, 2019 02:00 |
|
Is there a way I can wipe a disk from within ESXi? I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation, so there are old partitions on it:

code:
[partition table listing lost in archiving]
I tried:

code:
[command and error output lost in archiving]
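For anyone hitting the same thing, partedUtil ships with ESXi and can clear the old partitions in place. A minimal sketch, assuming the disk shows up under /vmfs/devices/disks/ (the naa.XXXX name and partition number below are placeholders for whatever getptbl reports):

code:
ls /vmfs/devices/disks/                            # find the disk's naa.* identifier
partedUtil getptbl /vmfs/devices/disks/naa.XXXX    # show the label and partition list
partedUtil delete /vmfs/devices/disks/naa.XXXX 1   # delete partition 1; repeat for each partition listed
partedUtil getptbl /vmfs/devices/disks/naa.XXXX    # confirm the table is empty before creating a new datastore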
|
# ¿ Mar 22, 2019 05:07 |
|
I've been messing around with vSAN 6.7 and I'm having a hard time determining the memory requirements for the witness host of a vSAN stretched cluster. Can anyone point me to documentation or a good rule of thumb for sizing its memory? I currently have two ESXi 6.7 hosts with 16 cores and 96GB each, booting off of thumb drives, and they use a FreeNAS server for their shared iSCSI storage. The FreeNAS box is otherwise identical hardware (same motherboard and CPU) but has 32GB of RAM and a bunch of SSDs configured in passthrough mode. I'm thinking about reconfiguring the hardware so that each server has an identical amount of SSD storage and setting up vSAN, using the old iSCSI box as the witness server. I run about two dozen VMs currently. What processes on the witness node actually need RAM? Will 32 gigs be enough, considering that each ESXi host will have approximately three times that amount?
|
# ¿ Apr 15, 2019 19:56 |
|
I figured that for a cluster quorum box it should be sufficient. But the FreeNAS box that is currently running had so many problems under my workload with its original setup (an 8-core Opteron 6000 and 16GB of RAM) that I'm leery of all hardware-related bottlenecks. Thanks for the tip on the Deep Dive book. It looks interesting.
|
# ¿ Apr 16, 2019 20:03 |