BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
Is anyone else here running oVirt? I'm having a bitch of a time getting it set up. I only have a single host to work with, which makes parts of it a bit difficult since you can't turn off some of the HA stuff with the hosted engine, and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as NFS shares and then mount them on the host to satisfy the network storage requirement of the hosted engine, but I have all kinds of problems with storage disconnecting from the host and being unable to reactivate.
I've done most of it on CentOS 7 but tried oVirt Node last night unsuccessfully; it wouldn't even initialize vdsm properly, and systemd kept masking lvm2-lvmetad despite my best efforts to unmask and enable it.
In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my PERC 6/i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual NAS enclosure instead and pulled most of my local storage onto that.
Mostly I'm tired and frustrated and just wanted to piss and moan about it - should I just give up on oVirt on a single host? What would be a better option? I already have experience with VMware and Hyper-V and wanted to try something new. Proxmox? XenServer?


unknown
Nov 16, 2002
Ain't got no stinking title yet!


In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted engine) with a NAS of some sort (preferably NFS).

Outside of that config you rapidly start running into weird issues and corner cases that someone hasn't programmed for.

evol262
Nov 30, 2010
#!/usr/bin/perl

unknown posted:

In my experience, oVirt is fundamentally designed to be run on 2+ nodes (+1 if you don't do a self-hosted engine) with a NAS of some sort (preferably NFS).

Outside of that config you rapidly start running into weird issues and corner cases that someone hasn't programmed for.

It can do self-hosted pretty easily, but it's definitely more common to see a multiple-node deployment.


The Nards Pan posted:

Is anyone else here running oVirt? I'm having a bitch of a time getting it set up. I only have a single host to work with, which makes parts of it a bit difficult since you can't turn off some of the HA stuff with the hosted engine, and it won't let you do some config without moving the engine VM and putting the host in maintenance mode. I'm also trying to export local storage as NFS shares and then mount them on the host to satisfy the network storage requirement of the hosted engine, but I have all kinds of problems with storage disconnecting from the host and being unable to reactivate.
ovirt-hosted-engine-ha doesn't actually care about the "ha" bits on single-host deployments. You'd honestly be better off using gluster for local storage than trying to export NFS from the host back to itself; gluster handles that case seamlessly.
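
The single-host gluster setup is only a few commands - a rough sketch, assuming a brick directory at /gluster/data on a host named "myhost" (both made up; substitute your own), with glusterfs-server installed and glusterd running:

code:

# single-brick volume for local VM storage; "force" is only needed if the brick sits on the root filesystem
gluster volume create data myhost:/gluster/data force
# vdsm runs as uid/gid 36, so hand ownership of the volume to it
gluster volume set data storage.owner-uid 36
gluster volume set data storage.owner-gid 36
gluster volume start data

Then point a GlusterFS storage domain at myhost:/data.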

vdsm expects to be able to reconfigure the network, and this probably disconnects you from NFS storage. Then you need to wait for sanlock to get back at it. I suspect that if you look at /var/log/sanlock/sanlock.log, you'll see that it's busy trying to acquire a lock after it disconnects, which can take minutes. Use gluster.
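
If you want to watch it happen, the defaults on CentOS 7 are enough - something like:

code:

# watch sanlock fighting to re-acquire its lockspaces after the storage drops out
grep -i acquire /var/log/sanlock/sanlock.log | tail -n 20
# current lockspace/resource state straight from the daemon
sanlock client status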

Or, honestly, just use oVirt Live, which is designed for an "all in one" scenario.

The Nards Pan posted:

I've done most of it on CentOS 7 but tried oVirt Node last night unsuccessfully; it wouldn't even initialize vdsm properly, and systemd kept masking lvm2-lvmetad despite my best efforts to unmask and enable it.
vdsm doesn't behave with lvmetad anyway. This isn't specific to oVirt node. See this bug. The whole thing got whacked because vdsm (for historical reasons) creates a VG for storage, and lvmetad can do really weird things with presented FC/iSCSI LUNs that actually have guests on them. Plus systems with 3000+ LUNs zoned to that server (don't ask me why) made lvmetad take forever to start up, which is also a problem on bare CentOS/RHEL/Fedora.

oVirt Node runs "vdsm-tool configure --force" as part of startup, so it should definitely be configured. If for whatever reason your clock was behind and you didn't configure ntp/chrony at installation, the vdsm certificate may have a "Not before:" date in the future. There's currently a patch up to resolve this, but it probably won't merge until oVirt 4.1.7, and it's a bit of an edge case anyway. If it isn't that, have you sent a mail to users@ovirt.org? vdsm should definitely work out of the box.
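
The clock-skew case is easy to rule out - a quick check, assuming the default vdsm certificate path on Node/CentOS:

code:

# compare the certificate's validity window against the host clock
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates
date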

The Nards Pan posted:

In my 3AM haze I convinced myself it was my lovely third-hand hardware and ordered an H700 to replace my PERC 6/i, but in the cold light of day that doesn't seem like it will help. I probably should have bought an actual NAS enclosure instead and pulled most of my local storage onto that.
Mostly I'm tired and frustrated and just wanted to piss and moan about it - should I just give up on oVirt on a single host? What would be a better option? I already have experience with VMware and Hyper-V and wanted to try something new. Proxmox? XenServer?

Use gluster for a single host. Though, honestly, if you just want a single host, use plain KVM with kimchi or some other frontend. There's no point to running most 'enterprise' virtualization on a single host, unless you want to go down the rabbit hole of creating nested VMs for labbing (which also works fine).
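
For the plain-KVM route it's basically one command per guest - a rough sketch, assuming libvirt and virt-install are installed and you substitute your own ISO path:

code:

# small CentOS 7 guest on the default libvirt network and storage pool
virt-install \
  --name lab01 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40 \
  --cdrom /path/to/CentOS-7-x86_64-Minimal.iso \
  --os-variant centos7.0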

BallerBallerDillz
Jun 11, 2009

Cock, Rules, Everything, Around, Me
Scratchmo
Wow, thank you for all that.

You were actually right about the time thing causing the strange errors with lvm2-lvmetad being masked. I had updated my iDRAC firmware recently and the time was off, which was causing the installation of the hosted engine on oVirt Node to fail early in the install. I had configured NTP, but for some reason the network wasn't coming up correctly either. Fixing both of those let the install continue.

Although I'd read about Gluster, I've never tinkered with it. I tried quickly setting up a volume with my current Node install, now that I was able to actually get to the option to select storage - unfortunately it wouldn't let me install to the volume because it's not using replica 3. From what I understand, that means it's not set up in a Gluster cluster with at least 3 hosts. Obviously I understand why you'd want this for any production VM environment, but I can't really lab it unless I just set up dummy 5GB volumes on CentOS VMs on my laptop, and that seems kinda silly.
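
From what I can tell, the replica 3 requirement boils down to a volume built from three bricks on three hosts, something like this (hostnames and brick paths made up for the laptop-lab case):

code:

# on one of the three lab VMs: peer them up, then build a 3-way replicated volume
gluster peer probe lab2
gluster peer probe lab3
# add "force" at the end if the bricks live on the root filesystem
gluster volume create data replica 3 lab1:/bricks/data lab2:/bricks/data lab3:/bricks/data
gluster volume start data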

I appreciate the advice; I think I'm going to play with a different hypervisor for a while. I've used plenty of pure KVM and virt-manager on an Ubuntu jump box, but I wanted to lab with some enterprise virtualization solutions now that I have an actual host. It seems like I still need dedicated network storage first, though. I'll check out oVirt Live too.

For now I'm ready to be done with this and actually deploy some machines, so I'll probably drop back to Hyper-V Server just so I can keep doing something with Windows. Apparently Hyper-V Server 2016 handles nested virtualization pretty well, so maybe I'll build my oVirt/Gluster cluster in there.

Wicaeed
Feb 8, 2005
Oracle Cloud Migration Bombshell #1:

My boss, who had advocated against this move (or at least for letting our department evaluate Oracle Cloud as a hosting solution before committing to it), was terminated with prejudice today. He was greeted at the door by security, who collected his things and walked him out.

While not 100% the cause of his firing, I feel that his opposition to the move was on a long list of "issues" that management had with him. And by issues I mean completely valid complaints about how our supposedly security-focused company chooses to implement things.

:suicide:

Wicaeed fucked around with this message at 21:54 on Oct 5, 2017

Thanks Ants
May 21, 2004

#essereFerrari


So you're looking, right?

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
Make sure you have his contact info. Maybe you can ride his coattails on out of that trainwreck.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
For whatever it's worth, I've been told by people inside the company that Dyn is basically taking over Oracle's corporate culture on the services side, in sort of the same way that Pixar got paid a lot of money to take over Disney's whole animation division. Hopefully it turns out better than it's been.

evol262
Nov 30, 2010
#!/usr/bin/perl
Or they'll jump on the serverless hype train and push it as the solution for everything

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Or they'll jump on the serverless hype train and push it as the solution for everything
FaaS is babby nonsense but serverless is gonna eat the world. RedShift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins.

(Like everything forward-looking that I post, hedge this with "in 10-15 years".)

Vulture Culture fucked around with this message at 14:08 on Oct 6, 2017

jre
Sep 2, 2011

To the cloud ?



Athena is unbelievably, hilariously expensive to actually use.
Redshift isn't serverless.
EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.

evol262
Nov 30, 2010
#!/usr/bin/perl

Vulture Culture posted:

FaaS is babby nonsense but serverless is gonna eat the world. RedShift and Athena are to EMR what EMR was to on-premises Hadoop. IaaS got us where we are today, and I'm grateful, but serverless is the thing that's going to pose an actual existential threat to old-school server-obsessed sysadmins.

(Like everything forward-looking that I post, hedge this with "in 10-15 years".)

I don't disagree with this at all. I was primarily referring to AWS Lambda, Serverless (TM), and other FaaS providers.

I'm all about PaaS, and FaaS fits a use case that PaaS and/or container-oriented microservices don't, but the sudden surge in press is hilarious. "Everyone can be a developer now!", like it's any easier than it already was with basic containers.

Providing an endpoint which does nothing but spin up a tiny unikernel and execute my code is neat, and infinitely more practical than any other options for some stuff, but not the end-all-be-all it's touted as.

Athena/Redshift/BigQuery are different beasts entirely, though I'd honestly expect them to be (eventually) consumed by something more like Firebase.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

Athena is unbelievably, hilariously expensive to actually use.
Most new cloud stuff is until it's fully commoditized. I'm not talking about what's the best option for businesses today.

jre posted:

Redshift isn't serverless.
Technically true, but this difference is pretty much completely insignificant if you're using Spectrum, and at some point that argument is just semantics over "what even is capacity management".

jre posted:

EMR isn't serverless either, and it's horrible to use for anything other than a very narrow set of batch workloads.
I agree that it is not serverless, which is why I never claimed that EMR is serverless!

Vulture Culture fucked around with this message at 18:00 on Oct 6, 2017

Potato Salad
Oct 23, 2014

nobody cares


https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2151061

Thank the lord. A client on vSAN also uses Veeam with CBT enabled, and it seems like two or three times a week I've had to fix file issues with a clone job + deleting the old VM. My ticket on this has been open for a while.

Let me tell you, there's nothing quite like typing "rm -rf /vol/path/to/vsan/[thumbprint]" into a production esxi host. No, pinkie finger, stay the gently caress away from the Enter key.

Potato Salad fucked around with this message at 16:04 on Oct 9, 2017

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
For VMware, I've got a VM that uses a ton of compute resources on the weekends, and almost none during the week.

To perform the calculations in a reasonable amount of time, it needs a ton of CPUs, but obviously this ends up leading to wait time issues if there are other VMs on the host.

My tentative solution is to give a CPU reservation to the VM to make sure it always has those CPUs available and other VMs can wait.

First: Is there a better solution for this? I'd feel dumb if there was some new feature that was designed for this.

Second: I'm currently writing a script to see if I can set the reservation to 100% on weekends, 0% during the week. Is there an easy way to do this so I don't reinvent the wheel?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Give it the big pile of cores, but reduce its CPU shares, either manually on the VM or by dumping it in a low-share resource pool, so the other VMs can win if they need to. The reservation is just going to lock up all those cores on your host 24/7 and be a big waste. It'll probably still be fast enough on the weekends unless the other VMs are very CPU-busy, and it will keep it from causing excessive wait on the other VMs during the week. Alternatively, get comfortable with the PowerShell API and schedule some jobs to either shutdown/re-allocate/restart the VM with the new core count or dynamically adjust the share count on a schedule.
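
If you go the scheduled-job route, the shares/reservation knobs are only a couple of PowerCLI calls - a rough sketch, assuming PowerCLI is installed and the VM is named "weekend-cruncher" (made-up name and vCenter address, swap in yours):

code:

# connect to vCenter first (prompts for credentials)
Connect-VIServer -Server vcenter.example.com

# weekday job: no reservation and low shares, so it loses contention fights
Get-VM "weekend-cruncher" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuSharesLevel Low -CpuReservationMhz 0

# weekend job: bump shares back up (or add a reservation if you really want one)
Get-VM "weekend-cruncher" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuSharesLevel High

Wrap those in two scripts and fire them from Task Scheduler or cron on whatever box you run PowerCLI from.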

freeasinbeer
Mar 26, 2015

by Fluffdaddy
This is also an ideal cloud workload. I haven't looked but I'd also suspect there has to be some sort of batch job scheduler for VMware that is used for big data.

EdEddnEddy
Apr 5, 2012



Posted in the GPU thread but I also feel I should ask here.

I know it's been posted before about people doing something similar, but I was wondering how difficult it would be, and what hardware might be needed, to host either a Linux or Windows server running a single game for 10-12 clients through RDP. The game is RealFlight 7.5, but the requirements haven't changed much.

The clients are literally the lowest-end all-in-ones that can just barely run the game now, and in certain scenes and in multiplayer they fall on their face. I was hoping to find a way to scale this setup by keeping the AIOs and having a single server do all the heavy lifting, which I could then just copy for future sites.

One user in the GPU thread mentioned running Hyper-V with a login screen, VM spin-up, and hosting up to 20 CS 1.6 clients, which is easily more than I need and sounds like exactly what I'm after. However, all the searching I do seems to turn up mainly RemoteFX stuff from 2012, give or take.

The other obstacle is getting the game to work with the dongle and R/C remote it uses. Could the game's input be set to detect what the remote client has connected, so that keeps working as it should? I might have to reach out to the RealFlight guys to see whether they have any server-hosted setup experience already.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

EdEddnEddy posted:

Posted in the GPU thread but I also feel I should ask here.

I know it's been posted before about people doing something similar, but I was wondering how difficult it would be, and what hardware might be needed, to host either a Linux or Windows server running a single game for 10-12 clients through RDP. The game is RealFlight 7.5, but the requirements haven't changed much.

The clients are literally the lowest-end all-in-ones that can just barely run the game now, and in certain scenes and in multiplayer they fall on their face. I was hoping to find a way to scale this setup by keeping the AIOs and having a single server do all the heavy lifting, which I could then just copy for future sites.

One user in the GPU thread mentioned running Hyper-V with a login screen, VM spin-up, and hosting up to 20 CS 1.6 clients, which is easily more than I need and sounds like exactly what I'm after. However, all the searching I do seems to turn up mainly RemoteFX stuff from 2012, give or take.

The other obstacle is getting the game to work with the dongle and R/C remote it uses. Could the game's input be set to detect what the remote client has connected, so that keeps working as it should? I might have to reach out to the RealFlight guys to see whether they have any server-hosted setup experience already.
Dehumanize yourself and face to Remote Desktop Licensing

Da Mott Man
Aug 3, 2012


anthonypants posted:

Dehumanize yourself and face to Remote Desktop Licensing

Hate to empty quote but RDS licensing is hell.

EdEddnEddy
Apr 5, 2012



Da Mott Man posted:

Hate to empty quote but RDS licensing is hell.

It was you I quoted in the GPU thread. Can you expand on your CS 1.6 Lan Party setup? That sounded literally like my exact need, but I am still unsure if it is possible.

Da Mott Man
Aug 3, 2012


EdEddnEddy posted:

It was you I quoted in the GPU thread. Can you expand on your CS 1.6 Lan Party setup? That sounded literally like my exact need, but I am still unsure if it is possible.

It was a Supermicro X9DRG-HF with 2x RX 470 GPUs installed as the virtualization host, running Remote Desktop Services with RemoteFX on WS2012R2. Clients were W8.1 Ent.

EDIT: Other infrastructure for RDS was borrowed from other servers on the network, SQL Server, AD, disks hosted with SOFS over infiniband, etc...

Da Mott Man fucked around with this message at 02:27 on Oct 12, 2017

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

anthonypants posted:

Dehumanize yourself and face to Remote Desktop Licensing

Would per-device CALs be simple enough in this situation, since it sounds like there is a specific set of computers that would connect to it?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Yep. Either you license all the devices that will connect or all the users. Whichever is lower.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
An update to Flash broke the vSphere client (it tries to load but it crashes Flash; e.g. you get the error "Shockwave Flash has crashed" in Chrome), and the workaround is to just use the old version of Flash player. Incidentally, Adobe admits that the older version of Flash player is known to have exploits in the wild, which is why Flash got updated in the first place.

Also per Adobe, there should be a beta version of the Flash update next week.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

anthonypants posted:

An update to Flash broke the vSphere client (it tries to load but it crashes Flash; e.g. you get the error "Shockwave Flash has crashed" in Chrome), and the workaround is to just use the old version of Flash player. Incidentally, Adobe admits that the older version of Flash player is known to have exploits in the wild, which is why Flash got updated in the first place.

Also per Adobe, there should be a beta version of the Flash update next week.

Well that explains why I've been having nothing but headaches with one of the companies I consult with. Thanks for posting this! Thankfully I have another jump box over there with an older version of flash.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

TheFace posted:

Well that explains why I've been having nothing but headaches with one of the companies I consult with. Thanks for posting this! Thankfully I have another jump box over there with an older version of flash.
That beta version of Flash player is here, version 27.0.0.180, which doesn't break in Chrome, and I guess it came out some time last week.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Here's a neat paper/project wherein they make VMs with resource usage on par (or better!) with containers.

Basically it sounds useful where isolation is important.

It was a pretty interesting read to me, particularly where they dove into where all the bottlenecks in current VM creation, booting, etc. actually lie.

Alfajor
Jun 10, 2005

The delicious snack cake.
Is anyone else unable to get to the VMware HOL (https://labs.hol.vmware.com/)? I keep getting SSL errors in both Chrome and Firefox :smith:

Thanks Ants
May 21, 2004

#essereFerrari


Borked here too

Edit: https://www.htbridge.com/ssl/?id=UB8xDJ3A

There's no HTTPS, so turn off HTTPS Everywhere or any equivalent

Thanks Ants fucked around with this message at 21:54 on Nov 3, 2017

Alfajor
Jun 10, 2005

The delicious snack cake.

Thanks Ants posted:

turn off HTTPS Everywhere
Good call, this was it. Thanks Thanks Ants!

Pile Of Garbage
May 28, 2007



Alfajor posted:

Good call, this was it. Thanks Thanks Ants!

Thants

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Thermopyle posted:

Here's a neat paper/project wherein they make VMs with resource usage on par (or better!) with containers.
This was an interesting read, but the swagger they display in the title is heavily undercut by this section at the end:

quote:

Despite its compelling performance, LightVM is still not as easy to use as containers. Container users can rely on a large ecosystem of tools and support to run unmodified existing applications.
Like... yeah. That's kind of a big caveat. They did good work to optimize Xen, and basically remade an even-more-minimal-ContainerLinux (nee CoreOS), but practical development of any system to run on their LightVM is still gonna be hampered by the inadequate tooling.

With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.

They also only compared with Docker without digging into profiling Docker itself. I couldn't even find any description of the Docker image they used, so it's not clear how long Docker would take to unpack it and set up the CoW system, which could have contributed significantly to the boot time. It'd probably be fairer to also compare against rkt, runc, and systemd-nspawn.

Still, I'm glad I took a look at the paper, lots of interesting stuff about hypervisor internals I was unaware of.

evol262
Nov 30, 2010
#!/usr/bin/perl

minato posted:

Like... yeah. That's kind of a big caveat. They did good work to optimize Xen, and basically remade an even-more-minimal-ContainerLinux (nee CoreOS), but practical development of any system to run on their LightVM is still gonna be hampered by the inadequate tooling.
Tooling took a while to get there with containers also. I'm not saying this is super practical yet, but the isolation offered by using virt is an active area of research. Tooling comes later, mostly.

minato posted:

With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.
They mimicked container workloads. Logging and telemetry in containers is handled from the host level. This is just more tooling missing.

minato posted:

They also only compared with Docker without digging into profiling Docker itself. I couldn't even find any description of the Docker image they used, so it's not clear how long Docker would take to unpack it and set up the CoW system, which could have contributed significantly to the boot time. It'd probably be fairer to also compare against rkt, runc, and systemd-nspawn.
To be honest, they should have compared to process isolation with bare cgroups, which Docker (and all the others) are basically wrappers around.

I'm basically taking this as "in 2 years, we expect that there will be a container-like solution which leverages virtualization for more isolation"

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

minato posted:

With the unikernel stuff, they also assumed that only one process is going to run in the VM. So... no log shippers, Memcache instances, telemetry agents, debugging tools like sshd/bash...? Adequate for the narrow use-cases they mentioned, but impractical for more standard workloads.
This is standard deployment practice for containerized applications. Partnered applications are delivered as a sidecar, not in-container with an init system.

That said, you're correct about the difficulty this presents in debugging or even getting an interactive shell, whereas with a container it's as simple as injecting a new process into the namespace.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

evol262 posted:

They mimicked container workloads. Logging and telemetry in containers is handled from the host level. This is just more tooling missing.
Yeah, which I think underscores my point about how impractical unikernels are. If the VM image is optimized for a single process, then it basically precludes adding those support tools & sidecars which will limit the types of applications this is good for.

They acknowledge this, and their solution is to use a minimal Linux. But this doesn't completely solve the issue either, because adding those support tools to the VM image will bloat it and negate many of the boot-time perf improvements they got by stripping it down.

I think they acknowledge this too, which is why they focused on such narrow workload types. Perhaps the title should have been "My VM boots faster Lighter (and is Safer) than your Container, as long as you don't need any sidecars lol".

evol262 posted:

I'm basically taking this as "in 2 years, we expect that there will be a container-like solution which leverages virtualization for more isolation"
This sounds about right. As I see it, isolation is on a spectrum. Containers start from "no isolation" and can easily choose to add "cladding" onto themselves to achieve the level of isolation they're happy with. VMs are on the far right of the spectrum and don't have any choice, which is both their value-add (for security) and a curse (for lack of resource sharing). So perhaps advancements could be made to allow VMs to move a little further left down the spectrum.

Looking much further ahead to the future, serverless/lambda is likely to be much bigger and so this tech could be useful for multi-tenant public cloud providers who want fast booting VMs with high security.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Yeah, I don't think anyone should read that and think "oh, let's deploy this".

Like most academic papers, the subject is not ready for prime time.

evol262
Nov 30, 2010
#!/usr/bin/perl

minato posted:

Yeah, which I think underscores my point about how impractical unikernels are. If the VM image is optimized for a single process, then it basically precludes adding those support tools & sidecars which will limit the types of applications this is good for.

As noted, this is already the case for containers. There's a single entrypoint, and best practice is that this maps to a single process (instead of some forking script)

minato posted:

They acknowledge this, and their solution is to use a minimal Linux. But this doesn't completely solve the issue either, because adding those support tools to the VM image will bloat it and negate many of the boot-time perf improvements they got by stripping it down.

I'm not understanding why "support tools" are needed here. Presumably, if this lands in a project, those support tools will live in the project framework, other lightweight VMs, or similar. See Clear Containers, kubevirt, and a variety of other projects doing stuff similar to this right now.

minato posted:

I think they acknowledge this too, which is why they focused on such narrow workload types. Perhaps the title should have been "My VM boots faster Lighter (and is Safer) than your Container, as long as you don't need any sidecars lol".

I mean, yes, but the "sidecars" would also boot quicker and be more secure. Plus containers need exactly the same use case anyway.

minato posted:

This sounds about right. As I see it, isolation is on a spectrum. Containers start from "no isolation" and can easily choose to add "cladding" onto themselves to achieve the level of isolation they're happy with. VMs are on the far right of the spectrum and don't have any choice, which is both their value-add (for security) and a curse (for lack of resource sharing). So perhaps advancements could be made to allow VMs to move a little further left down the spectrum.
This is the real takeaway. There isn't a practical amount of "cladding" which can be added to prevent breakouts. Side channel attacks against hypervisors also exist, despite ASLR, but it's much, much harder. When most major container hosts are still saying "if your container needs to run as root, we won't allow it, due to security", it speaks volumes about the relative security of containers. This paper (or clear containers) is something like a "best of both worlds". Except unikernels have even less overhead.

minato posted:

Looking much further ahead to the future, serverless/lambda is likely to be much bigger and so this tech could be useful for multi-tenant public cloud providers who want fast booting VMs with high security.
I'm firmly meh on serverless having an impact for 99% of us. This tech could be/is useful for providers who want the speed and single-mindedness of containers with the security of VMs. That's where it goes.

Serverless is ok, but doesn't fundamentally differ from PaaS in a lot of ways, and the number of times a business says "well, I really need this API, but I can't be bothered containerizing it and I'm ok with a high turnaround time" is low. FaaS fits a very narrow range of cases compared to "how do I lock down something like a container while retaining almost all of the benefits."

Not trying to be argumentative, but it seems that you're reading this and saying, "yeah, but a unikernel can't replace my varnish+nginx machine with VMware tools". You're right. It can't. But that's not what they're gunning for. The question is "can this replace my nginx container without worrying that a vulnerability which allows remote code execution will let an attacker break out of the namespace and see my other poo poo". And it can do that.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

evol262 posted:

I'm not understanding why "support tools" are needed here. Presumably, if this lands in a project, those support tools will live in the project framework, other lightweight VMs, or similar. See Clear Containers, kubevirt, and a variety of other projects doing stuff similar to this right now.
By "support tools" I meant app-independent resources on the host system that are used as container infrastructure: overlay network managers, telemetry agents, service mesh agents, debugging tools, service proxies, host config managers, etc. I'm going to have to walk back my previous statement: I had assumed that since those resources couldn't easily be shared from the host with a VM instance, the VM would have to include them in its own image. But I realize now that if the VM can assume they exist on the host, it can just use them over a local network connection and retain its small size and high-isolation.

quote:

I'm firmly meh on serverless having an impact for 99% of us. This tech could be/is useful for providers who want the speed and single-mindedness of containers with the security of VMs. That's where it goes.

Serverless is ok, but doesn't fundamentally differ from PaaS in a lot of ways, and the amount of times a business says "well, I really need this API, but I can't be bothered containerizing it and I'm ok with a high turnaround time" is low. FaaS fits for a very narrow range of cases compared to "how do I lock down something like a container while retaining almost all of the benefits."
I agree that's where we are now. But I think the narrow use-cases it's adequate for now will grow significantly.

As I see it, modern apps need to be (**waves hands mysteriously**) "cloud native", and combined with the whole DevOps principle of "you wrote it, you help run it", there's been a push to get Software Engineers to understand this massive, ever-growing stack of IaaS/PaaS technology (previously the domain of Ops) just to get their feature out the door.

But in my experience SWEs don't want to know all that stuff, they just want to push a button and get their feature into prod. They're actively resistant to learning about how the sausage is made. They just want a magic "PaaS 2.0" where they click a button and get a deployment pipeline, telemetry, logs, alerts, & reliability. They don't want to know anything about configuration, auto-scaling, backups, availability-zones, security, load-balancing or service meshes; that's just an opaque implementation detail to them. And I can see their point.

I see the handwavey concept of serverless as quite appealing to them because it's a step closer to that utopia, even if there are still very restrictive caveats and Ops will never be able to hide every detail from them. So I see a strong incentive to develop serverless in that direction.


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

minato posted:

But in my experience SWEs don't want to know all that stuff, they just want to push a button and get their feature into prod. They're actively resistant to learning about how the sausage is made. They just want a magic "PaaS 2.0" where they click a button and get a deployment pipeline, telemetry, logs, alerts, & reliability. They don't want to know anything about configuration, auto-scaling, backups, availability-zones, security, load-balancing or service meshes; that's just an opaque implementation detail to them. And I can see their point.

I see the handwavey concept of serverless as quite appealing to them because it's a step closer to that utopia, even if there are still very restrictive caveats and Ops will never be able to hide every detail from them. So I see a strong incentive to develop serverless in that direction.

This is exactly why serverless or whatever it evolves into is going to be big.
