H2SO4
Funny that should come up now. I was just going to post that I got sucked into the hype and ordered some more lab gear in order to give Nutanix CE a whirl. Worst case scenario, I'll roll back to something more traditional on newer hardware than I started with.


H2SO4

HPL posted:

So what the heck is Nutanix? I went to the web site and couldn't find my way past the wall of hype.

By default, it's KVM(ish) with some goodies on top. Stuff like automated tiering of data between spindles and SSD, smart placement of workloads, ability to aggregate local storage pools and treat them like shared storage, fault tolerance by replicating n copies of data across the cluster members, and a more shiny happy look and feel than the traditional hypervisors and their management tools. They're basically trying to abstract away the backend stuff (storage, mainly) as much as possible. They're trying to get people to stop buying SANs and horsepower separately and instead buy both in a single box that you can add additional boxes to as necessary.
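
If the replication bit sounds abstract, here's a toy sketch of the general idea (my own illustration, not Nutanix's actual placement logic): every chunk of data gets written to n distinct nodes, so losing any single node still leaves a surviving copy somewhere.

```python
# Toy illustration of n-way replica placement across cluster nodes.
# NOT Nutanix's actual algorithm -- just the general idea: each data
# extent lands on `replication_factor` distinct nodes, so any single
# node failure leaves at least one surviving copy.

def place_replicas(extent_id: int, nodes: list[str], replication_factor: int = 2) -> list[str]:
    """Pick `replication_factor` distinct nodes for one data extent."""
    if replication_factor > len(nodes):
        raise ValueError("need at least as many nodes as replicas")
    # Deterministic spread: start at a hash-derived node and take the
    # following nodes round-robin. Real systems also weigh capacity,
    # data locality, and fault domains.
    start = hash(extent_id) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication_factor)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
for extent in range(4):
    print(extent, place_replicas(extent, nodes))
```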

Now, whether any of this actually works in practice is something I'm looking forward to reporting on once the new heavy metal gets here. I've heard their pitch for a while now and kind of wrote it off since I'm a curmudgeon, but it's gaining popularity with a good segment of my customers, so I figured it's worth a look.

H2SO4

HPL posted:

So in other words they can do everything Server 2012 R2 can already do?

but it's pretty

H2SO4

Thanks Ants posted:

Pass all the individual disks through to the host and run this?

http://www8.hp.com/uk/en/products/data-storage/server-vsa.html

The StoreVirtual VSA is pretty slick, as is their StoreOnce VSA. I use the latter as a backup destination: 1TB of raw storage capacity that does dedupe and integrates with Veeam. You can get NFR licenses for both, as well as for Veeam Availability Suite for that matter.

H2SO4
I'm in the middle of migrating my lab to Nutanix CE as stated earlier in the thread, and it's pretty slick. It's basically stacking together a lot of stuff I was doing piecemeal before through things like ESX talking to a StoreVirtual VSA, etc. It's been great to work with so far, especially considering I'm really starting from square one with a new hypervisor; even the Acropolis CLI is pretty straightforward.

Comparing a Nutanix/converged-infrastructure product with traditional big iron/spindles/etc. doesn't really make sense. The largest selling point of something like Nutanix is that you buy one product (the nodes) and scale out by adding nodes as necessary, instead of managing compute and storage separately and monitoring/planning/purchasing each on its own. If you've already got SAN gear, compute, and people to manage both, then it likely won't make sense for you to look at it; for new environments or projects, especially for orgs without existing personnel and resources, it may make a whole lot of sense.

H2SO4

adorai posted:

At what scale is PVS REALLY better than MCS?

This is probably the most pitchfork-producing question in the Citrix realm since MCS was released. The answer, of course, is "it depends." PVS write cache in memory with overflow to disk is loving fantastic.

I agree with IE above pretty much across the board: PVS unless you're doing something quick and dirty, in which case MCS. I've also been using PVS since Citrix bought it from Ardence, so take that with a grain of salt, I guess.

H2SO4 fucked around with this message at 07:29 on Feb 21, 2016

H2SO4

Trastion posted:

Good to know. I planned on keeping things separate.

Will installing Storefront & Director on the old Web Interface server mess anything up with that? I might not be able to anyways if they require 2008+ though.

FWIW, I wouldn't do this. If you're building new, build new and don't touch the existing stuff at all. Then, once you've validated everything works in the new environment, power the old one off. This approach hasn't failed me yet, and I've been doing Citrix since the MetaFrame days. I'm a big proponent of keeping the different pieces separate and treating them in a more modular fashion, so that one misbehaving service won't make another angry. Especially for things like WI/SF, which have an immediate and very visible impact on the end-user population if they keel over.

H2SO4
Finally decided to pull the ripcord and begin the conversion from Nutanix CE to ESX. Migrated all the VMs off one node and removed said node, only to have one of the three remaining nodes immediately go tits up.

Logged into the CVM to find this:

[screenshot: console flooded with block errors on sdb]
It wasn't responding, so I force-rebooted that node, only to get nothing but a blinking cursor after POST. Judging from one of my other boxes, sdb would be the first spindle. Kind of pissed off that Nutanix would log block errors to the console but not, y'know, actually alert you that something's not kosher. Hopefully the issue isn't with the drive controller.
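
For anyone chasing something similar, here's a rough sketch for confirming from inside the CVM whether the kernel is actually complaining about a given disk (the device name and error strings are assumptions; match them to whatever your own output shows):

```python
# Rough sketch: scan kernel messages for I/O errors on a suspect disk.
# The device name ("sdb") and error keywords are assumptions -- adjust
# them to whatever your own dmesg output actually contains. May need
# root to read the kernel ring buffer.
import subprocess

SUSPECT = "sdb"
KEYWORDS = ("I/O error", "Buffer I/O error", "critical medium error")

dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
hits = [line for line in dmesg.splitlines()
        if SUSPECT in line and any(k in line for k in KEYWORDS)]

print(f"{len(hits)} error lines mentioning {SUSPECT}")
for line in hits[-10:]:  # show the most recent few
    print(line)
```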

Edit: ended up being the USB system drive. Forcefully evicted the node from the cluster, currently installing ESX on it using a replacement USB drive.

H2SO4 fucked around with this message at 01:11 on Jul 25, 2016

H2SO4

BangersInMyKnickers posted:

Does it support redundant USB media for boot? I'm doing dual-sd card boots on my current Dell hardware for VMware.

I don't believe it does. This is the four X9-based nodes in one 2U box form factor. I'll probably get some SATADOMs and migrate to those as part of the ESX rodeo.

H2SO4
guys, just write good code without bugs

not understanding the problem here

H2SO4
Shoutout to VMware Converter for doing a bang-up job getting my lab VMs out of Acropolis and back onto good old fashioned ESXi hosts. I'll still end up building new DCs and decommissioning the old ones instead of trying to convert them as well. I suspect they'd convert fine, but DCs are disposable enough not to risk accidentally giving my domain cancer.

Why yes, I have had to help clean up a gigantic USN rollback when the intel guys brought up ancient DC snapshots in their prod network instead of their lab. How did you know?

H2SO4

keykey posted:

I couldn't for the life of me get UDP to work with windows 7 in a vm even after running the 4 RDP updater files and adding firewall exceptions. Installed windows 10 on the hyper-v server and it works amazing even without enabling remote fx. At least I know UDP makes all the difference.. Now I just have to hammer out why my Windows 7 image won't allow UDP connections. Thanks for pointing me in the right direction though. :)

Have you already checked this out?

H2SO4

cr0y posted:

Ok, but for the rest of us peons? I work for a fortune 200 and we have an ELA but am not sure how to pitch it to management to pay for my at home fuckery.

VMUG Advantage. $200 for a year.

H2SO4
FWIW, the VMUG subscription gives you the whole shebang for up to 6 CPUs. My lab has four two-CPU nodes (8 CPUs total, over the limit), so I'll probably buy another VMUG license to get a full 4-wide cluster covered. Has anyone added two VMUG licenses to the same vCenter Server appliance? I don't expect it to barf if I add a second, but I'd like to hear from someone who has done it already before I drop another 2 bills.

Edit: Others have confirmed, should be fine.

H2SO4 fucked around with this message at 19:26 on Oct 7, 2016

H2SO4

hihifellow posted:

gently caress Xenserver.

That's all.

Sounds about right.

H2SO4
I've got a four-node ESXi 6 vSAN cluster in my lab and I'm going to be installing 10Gb NICs. Is it really as easy as dropping a host into maintenance mode, installing the card, then migrating the DVS uplinks to the 10Gb interfaces one by one? Is it best to add the 10Gb interfaces to the DVS uplinks first and then remove the 1Gb uplinks afterwards as a separate task? Should I disable HA host monitoring during the migration?

Basically I'm taking bets on how bad I'm going to make my cluster poo poo itself. Should be entertaining.

H2SO4 fucked around with this message at 11:56 on Jan 17, 2017

H2SO4
Thanks for the sanity check. This is my lab so even if I manage to burn it all to the ground I'll just be annoyed and not crying.

H2SO4

cheese-cube posted:

A couple of years ago I watched a big honkin' Cisco 6509 grind to a halt because someone forgot to enable jumbos on a VLAN which was handling storage traffic. Things were fine for about a week until the weekend when big backup jobs kicked off. CPU usage on the 6K started creeping up over the course of about 12 hours until it pegged at 100% and the whole unit just started dropping packets. One of my colleagues had to physically attend the DC to console in and un-gently caress it.

Good times.

I had to drive 45 minutes to pull the cable out of a 6509 that lost its poo poo and took down a hospital one time. Felt great.

H2SO4
Can confirm, cheap WD boxes work in a pinch. I've had terrible experiences with the Seagate ones, though, for what it's worth.

H2SO4
Oh boy. They don't know what they're doing, and with Citrix and printing that's dangerous. When your users log on to Citrix, are they logging into a full desktop session or just launching a seamless app? Either way, if you can stumble your way to a Print dialog box and compare the printer names between a working and a non-working session, you should be able to get some hints.

"Citrix Universal Printer" - this is a generic print device that's mapped in the remote session which also uses a generic driver. Basically (IIRC) it's a glorified print-to-XPS plugin that sends the XPS file to the local client and presents a local print dialog once the remote print job is finished. The idea there is that there's no need to install or manage any kind of print drivers on the VDI, it just renders the print job and shoots it down to the client and lets the client use whatever black magic it has installed locally to actually print the document.

"[blah blah printer and model] (from [CLIENTPCNAME] in session [x])" with a description of "Auto Created Client Printer [CLIENTPCNAME]" is an automatically mapped printer created by Citrix during the logon process based on what printers are installed on the client. This may either use a native driver or a Universal driver, but it generally means that it's actually printing from the VDI itself. Citrix policies can be set to influence what drivers are used with certain kinds of printers, to deny mapping certain printers, etc. They can also be set to map only the user's default printer or to map all printers installed on the client.

"[blah blah printer and model] (from [CLIENTPCNAME] in session [x])" with a description of "Auto Restored Client Printer [PRINTERNAME]" is an auto-retained and auto-restored client printer created during logon, BUT this information is stored in the user's profile. It's synced on the remote and local profiles and is generally a pain in the rear end. I've worked cases where these entries are cumulative and never cleared, leading one user to stack up a shitload of printers that get connected each time they launch an app. I don't know of any good reason they should be used instead of auto created printers.

That's not even all of the printer settings available, but the main reason I went down that rabbit hole is to make the point that every single printer-related behavior can be set and is affected by Citrix policies. Something's absolutely jacked up.

Can you get a couple of screenshots, or just the text of the name and notes fields, for the printers created when connecting from an HP versus a Lenovo? That might provide a clue as to what's not firing.
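
If it helps with that comparison, here's a minimal sketch for bucketing printers by the name/description patterns above (the regex is reconstructed from the patterns as I described them, so treat it as a starting point rather than gospel):

```python
# Minimal sketch: classify Citrix session printers from their name and
# description ("notes") strings, following the patterns described above.
# The exact strings may vary by Citrix version -- adjust as needed.
import re

SESSION_RE = re.compile(r"\(from (?P<client>\S+) in session (?P<session>\d+)\)")

def classify_printer(name: str, description: str) -> str:
    if name == "Citrix Universal Printer":
        return "universal (generic driver, job rendered down to the client)"
    if SESSION_RE.search(name):
        if description.startswith("Auto Created Client Printer"):
            return "auto-created (mapped fresh at each logon)"
        if description.startswith("Auto Restored Client Printer"):
            return "auto-restored (stored in the user profile)"
        return "session printer with an unrecognized description"
    return "not a Citrix-mapped printer"

print(classify_printer(
    "HP LaserJet 4000 (from DESKTOP01 in session 3)",
    "Auto Created Client Printer DESKTOP01"))
```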

H2SO4
What's the timestamp on the last full backup of that VM?

H2SO4

underlig posted:

The last full what?

Sorry, that was mean. I knew the answer to this question.

To merge the disks, you shut down the machine and delete the snapshot; Hyper-V will then merge the disks. There are a lot of things that could go wrong, though, so I wouldn't do poo poo without a full backup of that VM somewhere else if it's indeed mission critical. I'd merge over the weekend (if they're only an M-F shop) because you don't know how long a merge operation will take on that bad boy, especially if it's on a single disk. Hopefully you have enough space to merge the disks, or else things get really fun.

Your saving grace on this one is that it looks like the original disk is a fixed size and not dynamic, so you shouldn't run into any space problems since the space is already allocated. Good luck!
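
If you want a quick sanity check on headroom before kicking off the merge, something like this rough sketch works (the paths and the fixed/dynamic flag are hypothetical placeholders to fill in from the actual VM):

```python
# Rough pre-merge space check for a Hyper-V snapshot merge.
# A dynamic parent can grow by up to the AVHD's size during the merge;
# a fixed parent is already fully allocated, so it needs ~no extra room.
# Paths below are hypothetical placeholders -- point them at your files.
import os
import shutil

parent_vhd = r"D:\VMs\bigbox\disk.vhdx"        # hypothetical path
avhds = [r"D:\VMs\bigbox\disk_snap1.avhdx"]    # hypothetical snapshot chain
parent_is_fixed = True                         # per the scenario above

needed = 0 if parent_is_fixed else sum(os.path.getsize(p) for p in avhds)
free = shutil.disk_usage(os.path.dirname(parent_vhd)).free

print(f"worst-case extra space: {needed / 2**30:.1f} GiB, free: {free / 2**30:.1f} GiB")
if needed > free:
    print("Not enough headroom -- free up space before merging.")
```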

H2SO4

SEKCobra posted:

everyone else can't do the NAS part.

Everything you're trying to do has been possible for years, so I'm betting you're not explaining the use case completely.

H2SO4
So to be clear, you want to use a Windows workstation as a client PC as well as a VM host? That shrinks your options dramatically.

H2SO4
You really don't want to run FreeNAS in a VM unless you're passing the actual controller through to the VM. Passing individual disks through is risky, especially if you're passing through VMDKs/VHDs/etc. rather than the raw disks.

H2SO4

ZHamburglar posted:

getting their passwords cracked by super computers

ahahahaha what

Don't believe everything you're told.

H2SO4

Secx posted:


We use Citrix (I couldn't tell you which product specifically, but we call it the Cloud Desktop and it uses the Citrix Receiver) as the virtualization technology for end users at work and I wanted to see if that would work. My Citrix guy said it wouldn't work based on his previous experience, but he humored me in our test environment anyway by installing the VPN software onto two user accounts. Unfortunately, as soon as the second user opened the VPN connection, the first got disconnected.

So, all that to ask you virtualization experts: is there a virtualization solution that will allow multiple users to be concurrently logged onto the same host and allow each of them to independently create their own VPN tunnel?

The problem with using XenDesktop here is that as soon as you establish the VPN, there's no route back to you as the client to send the Citrix traffic through.

You either need a machine whose console you can access (through VMware or similar), a tool like GoToMyPC that tunnels over the internet, or desktops set up in the other company's network that you can connect to.

To be real clear here: your network/security guys are the ones blocking you, so it's really on them to work out a solution with the other company. I totally understand not wanting client VPN traffic on their network, but they should offer something more workable. They're clearly OK with letting you access it, just not from the corporate internal network - why not push back and request being allowed to use guest WiFi or something like that?

H2SO4
What are my options for a 1U server focused on storage? I've got a Supermicro four-node 2U box that's loaded but doesn't have onboard RAID, and I played with vSAN a bit, but the queue depth is 64, which makes baby jesus cry. I'd like to throw something together to do iSCSI for the four nodes over a 10Gbit backbone.

It's looking like a 10-bay R620 is the leader right now; I can build it out with modest specs and 10gig for around $800 or so. I also thought about trying Storage Spaces Direct on the nodes themselves, but that requires Datacenter licensing, which I can't get through the Action Pack. With the R620 I might try loading regular Storage Spaces on it and using tiering between the SSDs and spinning platters. If I go the FreeNAS route I'll have to sacrifice four bays for a mirrored ZIL and an L2ARC if I don't want sync writes to be slow as molasses, from what I've read.

I would try Nutanix CE again, but they don't have support for some of the virtual appliances I need to run. Maybe it's a better idea to go that route anyway and just get a couple of simple 1U boxes for the stuff that requires ESX?

H2SO4
Availability doesn't matter in the sense that it's not production, but it's a lab environment that I use to test/mock things up, so if I suddenly lose a chunk of it while I'm travelling it becomes a pain in the rear end. Storage Spaces should work; Storage Spaces Direct is a specific flavor of it for Server 2016 that has some hyperconverged goodies thrown in.

H2SO4
Yeah, I'll end up shifting to an agent-based backup either way. Veeam's NFR is great but only comes with two socket licenses. I should be able to use their free agent and point it at the B&R repository; I just lose the vSphere integration, which isn't really a huge issue.

H2SO4
[screenshot: all of the host's disks erroring out]
looooooool

All the disks in one particular node got angry. On boot you can see the SSD initialization fail, and vmkernel.log logs events for each drive saying "No filesystem on the device." It seems a bit coincidental for all the drives to bomb out at the same time, and I can still see valid partition tables/etc. on them, so I'm hard-pressed to believe it's a hardware failure. This seems to have happened after the 6.0-to-6.5 upgrade I pushed via VUM - should I try re-running the 6.5 upgrade for shits and giggles? This is my home lab and I've got backups, so it's not the end of the world by any stretch; I'm now more interested in seeing whether I can bring it back to life and validate that the hardware isn't jacked.
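
For reference, a quick-and-dirty way to tally how many devices are actually throwing that message (the log path is the usual ESXi location, but verify it on your own host; the parsing is deliberately naive):

```python
# Quick-and-dirty tally of "No filesystem on the device" events per
# device in an ESXi vmkernel.log. Verify the log path on your host;
# the device-name regex covers the common naa/mpx/t10 identifiers.
from collections import Counter
import re

LOG = "/var/log/vmkernel.log"   # typical ESXi location
MSG = "No filesystem on the device"
DEV_RE = re.compile(r"(naa\.[0-9a-f]+|mpx\.[\w:]+|t10\.[\w.]+)")

counts = Counter()
with open(LOG, errors="replace") as fh:
    for line in fh:
        if MSG in line:
            m = DEV_RE.search(line)
            counts[m.group(1) if m else "<unknown device>"] += 1

for dev, n in counts.most_common():
    print(f"{dev}: {n} hits")
```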

H2SO4
Yeah, it's four spindles, one SSD for vSAN, and one small SSD for the system drive. If the controller died I wouldn't expect to be able to boot the thing at all. Only the disks claimed by vSAN seem to be unhappy, which makes me think it's something to do with the black magic.

Rebuilding the disk group would wipe the disks, wouldn't it?

H2SO4

big money big clit posted:

It will wipe the disks, though if you're running vSAN you should have multiple hosts and a replication factor of at least 2 set on your storage policies, so you shouldn't actually lose data.

Oh for sure, but I'd already moved the fourth host out of the pool to start the migration back to standard datastores. I've only got four or so inaccessible VMs, and I moved the pain in the rear end stuff over first, so I'm probably just going to try to grab the rest and then nuke and pave.

All in all it's a great learning experience.

Edit: Definitely wasn't a hardware failure. Something related to vSAN itself just straight up barfed and decided the disks on that host were no longer trustworthy. Almost like it was in the middle of a resync operation and lost power/connectivity for too long, making the rest of the pool assume the disks are permanently gone and rebuild whatever data they could between the two of them. It would be nice for some of this stuff to be more obvious, but then again the point of hyperconverged poo poo is that it's all ~~~magic~~~. It's great until something runs out of pixie dust.

H2SO4 fucked around with this message at 04:21 on Aug 21, 2017

H2SO4
Keep in mind that (as of the last time I researched this) distributed virtual switches do not play well with HA stuff like VRRP/HSRP. It's something to do with the fact that they don't keep a CAM table; they use VM metadata to decide where traffic for a given MAC address goes instead. If anyone knows differently I'd be interested to hear it, since the only other workaround for this behavior I'm aware of is putting everything in promiscuous mode.

H2SO4

Notax posted:

VMWare is actually VerMinWare it is a platform for virtual lies

It's just like that Citrix aka Sit Tricks where they run GoToMyPassword

This was not enterprise grade material!

do you smell burning toast

H2SO4
KUBERNETES WAS AN INSIDE JOB

H2SO4
The whole notion that VMs are going away everywhere and being replaced by containers or serverless or four-line Perl scripts or whatever it is this week is rather annoying.

It's something you should keep on top of, like the rest of the field, but VMs aren't going away for a long time. The landscape needs to mature a whole hell of a lot more before it makes sense for most businesses to completely rearchitect their apps. The architecture isn't useless by any means; it's great in certain circumstances, but not every app benefits enough from such a large architectural change to make it worth the effort. Cattle are coming eventually, but nobody's gassing all of their pets.

H2SO4
Oh yikes. Just present an SMB share directly to Veeam.

H2SO4
jesus christ


H2SO4
I don't know; when you're playing with dedupe across multiple tiers that have no knowledge of each other, I start getting uncomfortable. Have you done any testing with Veeam's compression/dedupe disabled, relying on WS2016 alone? I'd be interested to see the difference there.
