  • Locked thread
Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Route53 alias record TTLs are set to the TTL of the target.
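The practical upshot, as a sketch (boto3's `change_resource_record_sets` shape; zone IDs and record names below are made up): an alias record set carries no TTL field of its own, while a plain record set must.

```python
# Sketch: a Route53 UPSERT for an alias record. There is no TTL key on
# the alias record set itself -- resolution inherits the TTL of the
# target the alias points at. Names/zone IDs are illustrative only.

alias_change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",   # target's hosted zone
                "DNSName": "d111111abcdef8.cloudfront.net.",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}

# A plain (non-alias) record, by contrast, carries its own TTL:
plain_change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "static.example.com.",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]
}
```

Either dict would be passed as `ChangeBatch` to boto3's `route53.change_resource_record_sets()`; the point is just that only the plain one has a TTL to set.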


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Tab8715 posted:

Extra-circular question but when it comes to massive web-based SaaS Applications like Facebook, Salesforce, Apple iCloud what are they using for their Directory Service?
This is specific to Salesforce, but everything I know about Salesforce's primary platform, from talking to various employees over the years, is that they actually don't have that many servers (< 1500) and don't even use virtualization, so my guess is that software-wise they're probably still fine with LDAP. I know they aren't using CA SiteMinder (the only other alternative to Active Directory worth a drat in any fashion that gives consideration to dinosaur enterprise needs and enterprise feature checkboxes). And to add to the above, Salesforce's primary tech stack was built around hiring Oracle engineers (Benioff is ex-Oracle himself, after all), so they went with Oracle because it made the most sense for them. Salesforce's scaling problems are almost nothing like the ones facing most B2C platforms, and they have far more regulatory concerns on behalf of their customers, which makes dealing with oftentimes tech-clueless regulators easiest by just pulling crap into a traditional enterprise RDBMS like Oracle and treating it like a typical ETL platform with the data warehousing that's the bread and butter of Oracle's business.

MagnumOpus
Dec 7, 2006

Currently sitting here with my thumb up my butt, waiting for a call back from my internal Openstack provider. I needed some more resources so I went into their customized Horizon portal and configured a quota increase request. Clicked Submit, got an error page, and then 30 seconds later an email letting me know my request to deactivate all of our configured user accounts, including the ones for automation, had completed successfully.

:bang:

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

MagnumOpus posted:

Currently sitting here with my thumb up my butt, waiting for a call back from my internal Openstack provider. I needed some more resources so I went into their customized Horizon portal and configured a quota increase request. Clicked Submit, got an error page, and then 30 seconds later an email letting me know my request to deactivate all of our configured user accounts, including the ones for automation, had completed successfully.

:bang:

@openstack_txt

Wicaeed
Feb 8, 2005
:catstare: jesus

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Sounds like someone customized that Horizon portal..... poorly.

Ixian
Oct 9, 2001

Many machines on Ix....new machines
Pillbug
Sums up Openstack for me (it does everything you need! Everything! Just...badly. A script will fix it, most of the time!)

Find a good vodka you like and hunker down, it's gonna be a long night.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
If my OpenStack servers weren't blades I would literally pull them out from the rack on their rails and poo poo inside them

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
I am tired of the cloud today.

We just fired another RH consultant for underperforming, again. One of our devs goes in to try to untangle the mess he left behind, and the code is a horror show of cut-and-pasted crap with other people's names all over it. Big surprise.

At least I'm not in our sister group: the openstack team lost a drive over the weekend, an important project's VMs were lost, and their director got pulled in and reamed out hard on Monday by senior leadership for the fuckup (if not directly for misconfiguration / poor architecture on his group's end, then for failure to communicate the importance of availability zones and the risks of provisioning to users)

:yaycloud:

Bhodi fucked around with this message at 18:06 on Jan 13, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Wait... no backups?

Although ironically, my group trying to set up backups for our Chef installation (less than 10k nodes across thousands of users, and people think we're the first thing getting overloaded, lol) has caused more serious outages and failures than if we had never bothered with backups. Hell, we had Netbackup running (which, ironically too, was causing some of the outages) and it could have restored our systems, so we had double backups going for a while. The same goes for our Chef HA setup: it's caused more availability loss than anything, due to split-brain problems whenever we drop ICMP packets a few times in a row on a really, really flaky ESX cluster that's way, way overprovisioned on everything, even though no vMotions are happening.

Aunt Beth
Feb 24, 2006

Baby, you're ready!
Grimey Drawer

Vulture Culture posted:

If my OpenStack servers weren't blades I would literally pull them out from the rack on their rails and poo poo inside them
Is the point that you'd poo poo in them while they're still running? Because I mean, dropping small ones into blades would be pretty great. Chassis' air movers are mighty powerful. Could be fun.

bitprophet
Jul 22, 2004
Taco Defender
How does a single drive failure cause any data loss in TYOOL 2016? :psyduck:

Ixian
Oct 9, 2001

Many machines on Ix....new machines
Pillbug

bitprophet posted:

How does a single drive failure cause any data loss in TYOOL 2016? :psyduck:

Because open stack is the Perl of cloud computing

jre
Sep 2, 2011

To the cloud ?



bitprophet posted:

How does a single drive failure cause any data loss in TYOOL 2016? :psyduck:

OpenStack is an amazing machine made of super-complex glass parts spinning at 100,000 rpm. Don't sneeze.

Oh gently caress, you looked at it wrong and there's glass everywhere

evol262
Nov 30, 2010
#!/usr/bin/perl

Ixian posted:

Because open stack is the Perl of cloud computing

It's because their admins didn't bother configuring a resilient backend for glance, or their storage policies weren't set up (or were misconfigured) for swift.

Openstack isn't even remotely perfect, and it's confusing for admins, but it probably doesn't have the blame for this.

As an aside, Perl is great. Call Openstack the PHP or Node.js of cloud computing if you want, but Perl built the world.
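For illustration, a "resilient backend for glance" of the sort being described would typically mean pointing image storage at Ceph RBD instead of one node's local disk. A sketch of the relevant glance-api.conf section, using the glance_store RBD driver options (the pool and user names here are made up):

```ini
# Sketch: glance-api.conf storing images in a replicated Ceph pool,
# so losing one disk (or one whole node) doesn't vaporize the images.
# Pool/user names are placeholders.
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```

With this in place, image durability is Ceph's problem (replication across OSDs), not the local filesystem's.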

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

It's because their admins didn't bother configuring a resilient backend for glance, or their storage policies weren't set up (or were misconfigured) for swift.
Or the cluster is set up with instance storage by default, which is sensible for many environments. AWS does the same thing if you launch an instance store AMI.

evol262
Nov 30, 2010
#!/usr/bin/perl

Vulture Culture posted:

Or the cluster is set up with instance storage by default, which is sensible for many environments. AWS does the same thing if you launch an instance store AMI.

But then we have to assume that all the VMs were running on a single compute node, and that a single disk failure killed that node (or at least /var), which is even dumber.

Ixian
Oct 9, 2001

Many machines on Ix....new machines
Pillbug

evol262 posted:

It's because their admins didn't bother configuring a resilient backend for glance, or their storage policies weren't set up (or were misconfigured) for swift.

Openstack isn't even remotely perfect, and it's confusing for admins, but it probably doesn't have the blame for this.

As an aside, Perl is great. Call Openstack the PHP or Node.js of cloud computing if you want, but Perl built the world.

Perl IS great. The glue that still holds a lot of the net together, no doubt.

In the right hands it is incredibly powerful and works wonders. However, you really have to know what the gently caress you are doing and/or not push it too far out of bounds. Perl is great, but for every Perl master there's a dozen neophytes out of their league, setting the stage for long-term problems, because what they did "seemed to work fine" at first glance, or they're just running something someone else wrote with no real idea how it works or any desire to find out.

.....just like Openstack.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

But then we have to assume that all the VMs were running on a single compute node, and that a single disk failure killed that node (or at least /var), which is even dumber.
Dumber in the sense that people are obviously doing the wrong thing with it (either expectations haven't been communicated, or people aren't listening regardless), but there's nothing at all wrong with this approach for cattle when you're running OpenStack as designed. The instances blow up, and Heat spins new ones. Why double your disk costs for no reason? If you need individual instances to be resilient against disk failures, you have Cinder.
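That split between cheap cattle and resilient data can be made concrete with a minimal Heat template sketch, using standard Heat resource types (the image, flavor, and network names are placeholders): the server's root disk is ephemeral and dies with the compute node, while anything that must survive lives on a Cinder volume.

```yaml
# Sketch: ephemeral instance + persistent Cinder volume.
# If the compute node's disk dies, the server is rebuilt and the
# volume reattached; only the root disk was ever at risk.
heat_template_version: 2015-10-15

resources:
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 20            # GB, lives independently of the instance

  app_server:
    type: OS::Nova::Server
    properties:
      image: cirros       # placeholder image
      flavor: m1.small    # placeholder flavor
      networks:
        - network: private

  attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: app_server }
      volume_id: { get_resource: data_volume }
```

The design point is exactly the one above: pay for redundancy only on the data that needs it, and let Heat replace the rest.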

evol262
Nov 30, 2010
#!/usr/bin/perl

Ixian posted:

Perl IS great. The glue that still holds a lot of the net together, no doubt.

In the right hands it is incredibly powerful and works wonders. However, you really have to know what the gently caress you are doing and/or not push it too far out of bounds. Perl is great, but for every Perl master there's a dozen neophytes out of their league, setting the stage for long-term problems, because what they did "seemed to work fine" at first glance, or they're just running something someone else wrote with no real idea how it works or any desire to find out.

.....just like Openstack.

I'd agree and disagree. I agree in a general sense that the basic tenets of cloud computing (how to structure your applications, how to handle resiliency, etc) are a quagmire for people who just want to get something up and running, no matter whether it's GCE, AWS, Openstack, or another provider. Openstack is harder because you actually need to configure Openstack itself (correctly), which has a whole bunch of problems because the architecture (in particular, the overly chatty applications hammering the database instead of using a message queue) falls over in large environments unless you have subject matter experts.

But "I lost a disk, and a server crashed, with all my stuff on it!" (or "I lost a disk, and my entire VMware datastore was on it!" [Glance], or "I lost a disk, and my RAID failed!" [bad swift resiliency]) isn't Openstack's fault.

I'm not really trying to defend Openstack here, and I know as well as anybody that it's just not appropriate for 95% of the people who want to use it, and 50% of the clients it is appropriate for don't have the on-site expertise to keep it ticking, but losing a disk shouldn't cause significant damage to an Openstack environment with an admin who can rub two brain cells together. It's :yaycloud:, but it's not magic. The cloud is just someone else's computer (yours, in the case of Openstack), and all the hard-learned lessons about how to make sure your environment doesn't take a dive when you lose a disk or a network link or whatever still apply.

Vulture Culture posted:

Dumber in the sense that people are obviously doing the wrong thing with it (either expectations haven't been communicated, or people aren't listening regardless), but there's nothing at all wrong with this approach for cattle when you're running OpenStack as designed. The instances blow up, and Heat spins new ones. Why double your disk costs for no reason? If you need individual instances to be resilient against disk failures, you have Cinder.

I try not to have customer involvement, but almost every sosreport I've seen is from boxes still running on PowerEdge/Proliant/whatever with at least two disks in fault-tolerant RAID.

Sure, it doubles your disk costs, but most of our institutional customers seem to have accepted that as a cost of doing business for years, and haven't changed their purchasing strategy just for compute nodes. They'd probably be better off shoving as much memory and as little disk as possible into the chassis, but it's never my call.

I'm also guessing from the total loss of the VMs (and that they cared) that these were probably pets.

evol262 fucked around with this message at 03:34 on Jan 14, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Half the places I've seen using OpenStack come from a VMware-ish or traditional infrastructure history where you're used to treating every host as a pet, so everyone should be running high availability storage and network to some degree as a habit. If people are starting to deploy OpenStack thinking it magically fixes all these things for you, I dunno if you just became brain damaged or what.

MagnumOpus
Dec 7, 2006

Wait wait wait, are you telling me that I can't just shove a 3-node cassandra cluster into my environment, try to hack it into performing like an immediately-consistent database, and expect the cloud to magically recover everything when I lose one of those nodes?
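For what it's worth, the knob being mocked here is just arithmetic: Cassandra-style tunable consistency only behaves like an immediately-consistent store when the read and write replica sets are forced to overlap (R + W > N). A minimal sketch (the function names are mine, not Cassandra's API):

```python
# Sketch of read/write quorum arithmetic behind tunable consistency.
# Overlap between read and write sets (R + W > N) is what buys
# read-your-writes behavior, not the node count itself.

def quorum(replication_factor: int) -> int:
    """Smallest majority of replicas."""
    return replication_factor // 2 + 1

def is_strongly_consistent(n: int, r: int, w: int) -> bool:
    """True when every read set must intersect every write set."""
    return r + w > n

rf = 3
q = quorum(rf)                                # 2 of 3 replicas
print(q, is_strongly_consistent(rf, q, q))    # QUORUM reads + QUORUM writes overlap
print(is_strongly_consistent(rf, 1, 1))       # ONE/ONE does not
```

And with a replication factor of 1 there is no quorum to fall back on: the replica that died was the data.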

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
OSes are magic, virtualization is magic, the cloud is magic, everything is magic

Sadly, magic doesn't mean operable, or good

MagnumOpus
Dec 7, 2006

Bhodi posted:

OSes are magic, virtualization is magic, the cloud is magic, everything is magic

Sadly, magic doesn't mean operable, or good

I feel like this is the biggest change that occurred in the web ops world in terms of its effect on my day-to-day. It used to be that no one else in the org cared about what we were doing on an infrastructure level because they knew it was a nightmare realm they dared not enter. Since the rise of everything "cloud" now we've got armies of web devs who know just enough to be dangerous. My job is now running around trying to keep assholes from loving up long-term plans by applying half-understood cloudisms to their designs. Just about every day I find a new thing that makes me go cross-eyed with rage, and on the days I don't it's because instead I spent 3 hours arguing a web dev down from his 20% understanding of platform and infrastructure concepts.

Pile Of Garbage
May 28, 2007



Azure question: is it possible to override system routes on the platform? For example, we've deployed a Check Point appliance which we want to act as the default gateway for all VMs so that it can enforce policy and inspect traffic. However we've only managed to get it to work for VM traffic going between subnets. Traffic going from a VM to a VPN gateway or the internet just magically bypasses the Check Point appliance (Which I assume is due to SDN voodoo).

Is this at all possible?

Gucci Loafers
May 20, 2006
Probation
Can't post for 5 hours!

cheese-cube posted:

Azure question: is it possible to override system routes on the platform? For example, we've deployed a Check Point appliance which we want to act as the default gateway for all VMs so that it can enforce policy and inspect traffic. However we've only managed to get it to work for VM traffic going between subnets. Traffic going from a VM to a VPN gateway or the internet just magically bypasses the Check Point appliance (Which I assume is due to SDN voodoo).

Is this at all possible?

Does this help?

Azure User-Defined Routes
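As a sketch of what that article covers (AzureRM PowerShell cmdlets; the resource names and appliance IP below are made up), a user-defined route forcing outbound traffic through a virtual appliance looks roughly like:

```powershell
# Sketch: route table sending 0.0.0.0/0 via a virtual appliance.
# Names/IPs are placeholders; this assumes the ARM deployment model.
$rt = New-AzureRmRouteTable -ResourceGroupName rg1 -Location westus -Name fw-routes
Add-AzureRmRouteConfig -RouteTable $rt -Name default-via-fw `
    -AddressPrefix 0.0.0.0/0 -NextHopType VirtualAppliance -NextHopIpAddress 10.0.2.4
Set-AzureRmRouteTable -RouteTable $rt
# The route table still has to be associated with each subnet
# (via Set-AzureRmVirtualNetworkSubnetConfig -RouteTable ...).
```

Whether this catches the particular traffic classes in question (VPN gateway, internet egress) is exactly what the linked article spells out.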

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

MagnumOpus posted:

I feel like this is the biggest change that occurred in the web ops world in terms of its effect on my day-to-day. It used to be that no one else in the org cared about what we were doing on an infrastructure level because they knew it was a nightmare realm they dared not enter. Since the rise of everything "cloud" now we've got armies of web devs who know just enough to be dangerous. My job is now running around trying to keep assholes from loving up long-term plans by applying half-understood cloudisms to their designs. Just about every day I find a new thing that makes me go cross-eyed with rage, and on the days I don't it's because instead I spent 3 hours arguing a web dev down from his 20% understanding of platform and infrastructure concepts.
This is a great reason to have devs responsible for operating their own software

MagnumOpus
Dec 7, 2006

Vulture Culture posted:

This is a great reason to have devs responsible for operating their own software

O-ho-ho but you see "we are all just engineers in this group" because we're "avoiding silo-ization". We're "flat" and a "unified development team" "leveraging devops paradigms" and responsible for "self-organizing".

EDIT: All of these are real things that can be accomplished. I don't work for a group that understands how to do that, so instead we just pay lip service to all of this, except for using it to chastise people who point out problems.

MagnumOpus fucked around with this message at 21:00 on Jan 14, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I've learned to just say "Yes, we can do it, but... here's what you're trading off," document it carefully, and silently work on counter-measures / contingencies for the inevitable failure of the bad design. If a place can't get a lot of system design right, there's a fair chance they're not going to get even basic stuff like HA network and storage right, and you're just on your own really. Replication factor of 1? Absolutely fine for your goal of cost savings, Mr. SVP! I'll check with the backup guys that we've put all these into the backup inclusions in the meantime, if that's alright with you?

Vulture Culture posted:

This is a great reason to have devs responsible for operating their own software
Yeah, and this is why I wound up doing ops for a software team in the end: I had to make up for all the other software engineers who couldn't wrap their heads around (and, more importantly, didn't want to care about) why you shouldn't just run JBODs, unmask all LUNs on the SANs, stop iptables, disable SELinux and AppArmor on a public-facing web server, or have flat 10.0.0.0/8 subnets everywhere. Going fast by removing safety measures seems to be fashionable among developers who don't care, because "I made it compile and pass my meaningless unit tests with like 20% coverage" is the modus operandi of developer-centric start-ups, and because they're paid stupidly high wages they feel they're an authority on everything, not just plain code. Nowadays I'm banging my head against ops engineers that don't understand software from this century or how to plan infrastructure for "all these newfangled clouds."

Also, gently caress the developers that think reading highscalability.com posts means they're a goddamn CCIE and VCDX architect.

MagnumOpus
Dec 7, 2006

necrobobsledder posted:

developer-centric start-ups and because they're paid stupidly high wages they feel they're an authority on anything besides just plain code

Also, gently caress the developers that think reading highscalability.com posts means they're a goddamn CCIE and VCDX architect.

These are unfortunately the roots of my problems. My company is huge and developer-centric, and the business unit I'm in has only ever built software that its customers run on-premises, so they have absolutely zero experience running a webapp enterprise. This fact has not stopped their "architects" from making designs that commit us to poorly-researched solutions before the ops team has any opportunity to intervene. For example, they were recently stumped by the eventually-consistent nature of Cassandra, forcing a massive refactor of their app, and don't even get the PM started on how badly they missed the mark estimating SSD storage costs.

Pile Of Garbage
May 28, 2007




Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well.

Gucci Loafers
May 20, 2006
Probation
Can't post for 5 hours!

cheese-cube posted:

Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well.

What do you mean by System Routes?

I know for a fact I've seen PaaS firewalls implemented in Azure, but from the OS couldn't you just manually modify the routes in the route table and force traffic somewhere else? For example, with cmd.exe: route print

code:
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.142     10
        127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
        127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
  127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
      169.254.0.0      255.255.0.0         On-link   169.254.195.118    276
  169.254.195.118  255.255.255.255         On-link   169.254.195.118    276
  169.254.255.255  255.255.255.255         On-link   169.254.195.118    276
      192.168.1.0    255.255.255.0         On-link     192.168.1.142    266
    192.168.1.142  255.255.255.255         On-link     192.168.1.142    266
    192.168.1.255  255.255.255.255         On-link     192.168.1.142    266
      192.168.8.0    255.255.255.0         On-link       192.168.8.1    276
      192.168.8.1  255.255.255.255         On-link       192.168.8.1    276
    192.168.8.255  255.255.255.255         On-link       192.168.8.1    276
        224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
        224.0.0.0        240.0.0.0         On-link   169.254.195.118    276
        224.0.0.0        240.0.0.0         On-link       192.168.8.1    276
        224.0.0.0        240.0.0.0         On-link     192.168.1.142    266
  255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
  255.255.255.255  255.255.255.255         On-link   169.254.195.118    276
  255.255.255.255  255.255.255.255         On-link       192.168.8.1    276
  255.255.255.255  255.255.255.255         On-link     192.168.1.142    266
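Forcing the default route at the guest OS, as suggested, would be a single persistent route entry on top of that table (the appliance IP here is made up for illustration):

```
route -p add 0.0.0.0 mask 0.0.0.0 10.0.2.4 metric 5
```

The `-p` flag makes the route survive reboots; without it the change is gone at the next restart.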

Pile Of Garbage
May 28, 2007



Tab8715 posted:

What do you mean by System Routes?

I know for a fact I've seen PaaS Firewalls implemented in Azure but from the OS couldn't you just manually modify the routes in the route table and force it somewhere else? Example, with cmd.exe route print

Have you read the article you linked? That explains things fairly succinctly. I can try modifying the routing table in the guest OS however I'm unsure if it will break anything. Also there would still be a security issue because if someone gained elevated privileges in the guest OS they could just modify the routing table to go around the FW.

Ixian
Oct 9, 2001

Many machines on Ix....new machines
Pillbug

necrobobsledder posted:

Half the places I've seen using OpenStack come from a VMware-ish or traditional infrastructure history where you're used to treating every host as a pet, so everyone should be running high availability storage and network to some degree as a habit. If people are starting to deploy OpenStack thinking it magically fixes all these things for you, I dunno if you just became brain damaged or what.

I'm going with brain damaged. Something about this platform - I don't know if it is marketing or bad advice from one of the many IT consulting firms that pushes it for "private cloud" - leads normally intelligent people to believe it's not just a software platform for virtual operations but some kind of advanced AI that, once set up, merely needs to be given jobs.

Set up a single VM instance some developer built in a weekend and have it take on mission critical functions the rest of your service relies on? No problem. It's all magic back there, something called Swift handles persistent storage - don't worry guys, hard drive failures are such a 90's problem now - security takes care of itself...holy poo poo what happened?

Backups? We needed backups?

If you are googling "eventually consistent" at 3am after the poo poo has hit the fan and splattered around the room you just might be an Openstack user.

Yeah yeah, blame the user, not the software, I get it. Managing external IT and software disasters is pretty much half of my job, though, and I see this way too often. Software is a tool, and if you are a tool using it, any system can go bad, but something about this stack brings it out more than others. Not Openstack's fault, probably; it's just how it is.

Thanks Ants
May 21, 2004

#essereFerrari


It's the Microsoft Excel of the virtualization stacks.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

In our OpenStack deployments, we offer two choices for storage. By default (and by our recommendation) you get Ceph-backed storage, which isn't super fast but is reliable and will save your data from being vaporized if an OSD sinks. Alternatively, you can use fast local instance storage, which creates volumes in an LVM VG on top of a RAID 0 of SSDs. Very fast, potentially very dangerous. We have made it as abundantly clear as we can that it's not durable and could wreck your data at any time. People love it and use the hell out of it. All our top hypervisors are the ones dedicated to instance storage rather than Ceph, and we may have to start converting hypervisors running OSDs to instance storage to meet demand.

If you build it, they will come, even when you tell them it's as lethal as a pit full of rattlesnakes.
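The fast-but-fragile local option described above boils down to a short provisioning stack (device and volume group names here are placeholders): stripe the SSDs with md RAID 0, then hand the array to LVM as the volume group that volumes get carved from. One dead SSD takes the whole VG with it.

```shell
# Sketch: RAID 0 of two SSDs feeding an LVM volume group.
# Fast, zero redundancy -- exactly the rattlesnake pit described.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0
vgcreate cinder-volumes /dev/md0
```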

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

MagnumOpus posted:

For example, they were recently stumped by the eventually-consistent nature of Cassandra, forcing a massive refactor of their app, and don't even get the PM started on how badly they missed the mark estimating SSD storage costs.
In my world, if you're an application architect making really big decisions, you should really, really, really pay attention to the nature of your loving persistence layers and know exactly what you're trading off when you move between different database solutions. That's plain incompetence / laziness that has nothing to do with whether they're decent at operational considerations or not. If you've ever read anything about NoSQL, you'd know that almost everybody is trading off consistency for eventual consistency - another example of how skimming blog posts after having worked on some BS applications in the 90s as your "hands-on experience" and calling yourself a Big Data architect has become fashionable in tech (start-up hipsters do this poo poo too).

cheese-cube posted:

Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well.
Those routes look pretty reasonable. In fact, they're exactly the routes that I'm afraid of losing in our enterprise-as-all-goddamn-hell :downs: network routing configuration in AWS that we're going to duplicate in Azure. Internet subnet, cloud-local subnet, and your private route - this is what a privileged VM should look like. It gets dicey if you want NAT instances that act as edge routers and IDS like most enterprises do admittedly, but in those cases I'd just override the routes locally via something like iptables (or whatever the hell is in Windows) instead of letting anything outside the instance take precedence. Almost everything I've heard about people using Azure leads me to believe "we just want cheaper Windows VMs that are connected to the corporate network than what corporate IT can give us, oh dear god please anything but those" (myself included, our internal private cloud is miserable and destitute).

Thanks Ants posted:

It's the Microsoft Excel of the virtualization stacks.
Excel is easy to use, runs a great deal of the world economy, and has a fantastic user interface that very quickly shows its limitations before people can take it too far (and if you do take it too far, you ignored the warnings and very much deserve the rear end-pounding that trying to run the equivalent of Big Data OLTP or OLAP on a single loving file will result in). Excel is used extensively by small businesses and scales pretty well until your needs really are unique and require enough scale to warrant something serious (MS Access is a weird place, honestly, but I've worked with Access DBs enough to know they definitely got used before the era of Google Forms feeding Sheets). I had a customer that was running an $80M+ datacenter off of Excel - they wanted their configuration management, monitoring, everything to use Excel as their CMDB. I don't like CMDBs, but holy christ, please pick something that you won't be modifying and read-locking throughout the day while your orchestration engine is falling over trying to figure out "did I gently caress up?"

Ixian posted:

If you are googling "eventually consistent" at 3am after the poo poo has hit the fan and splattered around the room you just might be an Openstack user.

Yeah yeah, blame the user, not the fault of the software, I get it. Managing external IT and software disasters is pretty much half of my job though and I see this way too often. Software is a tool and if you are a tool using it any system can go bad but something about this stack brings it out more than others. Not Openstacks fault probably, just how it is.

I've been assisting the Fortune 500 and federal government in various places for the better part of a decade, and the amount of incompetence associated with the OpenStack implementers out there is mostly a reflection of the fact that OpenStack has been fantastic at addressing the technical concerns and overall (typically brain-damaged) use cases and needs of these organizations... without addressing any of the non-technical problems that led to these places having requirements that can't scale, are extremely costly, and are generally destined for failure in every sense of the word. I do not envy OpenStack engineers in any way; it's going to be a long battle, and by the time anyone has a really great, wonderful experience with OpenStack at scale, we'll be on public clouds that probably meet OpenStack API specs, meaning the literal only reason to go with OpenStack would be "you own it" - and that's only required by certain government regulations based upon archaic motivations that will hopefully die within our lifetime.

Pile Of Garbage
May 28, 2007



necrobobsledder posted:

Those routes look pretty reasonable. In fact, they're exactly the routes that I'm afraid of losing in our enterprise-as-all-goddamn-hell :downs: network routing configuration in AWS that we're going to duplicate in Azure. Internet subnet, cloud-local subnet, and your private route - this is what a privileged VM should look like. It gets dicey if you want NAT instances that act as edge routers and IDS like most enterprises do admittedly, but in those cases I'd just override the routes locally via something like iptables (or whatever the hell is in Windows) instead of letting anything outside the instance take precedence. Almost everything I've heard about people using Azure leads me to believe "we just want cheaper Windows VMs that are connected to the corporate network than what corporate IT can give us, oh dear god please anything but those" (myself included, our internal private cloud is miserable and destitute).

The issue with overriding via the guest OS routing table is that anyone with access and sufficient permissions could just change it back. The thing is that there aren't currently any traffic-inspection capabilities on the platform, and if you ask Microsoft for, say, Layer 3 logs, they will refuse. Sure, you can deploy a FW virtual appliance, but if you cannot enforce the flow of traffic then it's essentially moot.

Zaepho
Oct 31, 2013

cheese-cube posted:

The issue with overriding via the guest OS routing table is that anyone with access to and sufficient permissions could just change it back. The thing is that there aren't currently any traffic inspection capabilities on the platform and if you ask Microsoft for say Layer 3 logs they will refuse. Sure you can deploy a FW virtual appliance however if you cannot enforce the flow of traffic then it's essentially moot.

Layer 3 logs are difficult due to NVGRE and the way the Hyper-V logical switch handles moving packets around. There's a LOT of PFM going on there, in my opinion - more PFM than I like, certainly - and I get to help hosters build out their own public clouds with this crap!


fluppet
Feb 10, 2009
I've been given a set of AWS keys by a client, but they don't seem to have the permissions I need to do what I'm supposed to be able to do. Is there a quick and easy way to list the IAM permissions associated with a valid key?
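There's no single call that dumps the effective permissions of an access key, but a rough starting point (standard AWS CLI commands; the user name is whatever the first command reveals, shown as a placeholder) is to identify the principal and then enumerate its policies:

```
aws sts get-caller-identity                                  # whose key is this?
aws iam list-attached-user-policies --user-name <user-from-above>
aws iam list-user-policies --user-name <user-from-above>     # inline policies
```

This assumes the key even has IAM read access; if it doesn't, the error messages from failed calls are about the only discovery mechanism left.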
