|
Route53 alias record TTLs are set to the TTL of the target.
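For anyone curious what that looks like in practice, here's a sketch (hypothetical zone ID, record name, and ELB target) of an alias record change batch via the AWS CLI. Note there's no TTL field to set on the record at all; Route 53 derives it from the alias target:

```shell
# Hypothetical zone and target values, for illustration only.
# An AliasTarget record set has no "TTL" key:
aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-elb.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```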
|
# ? Dec 23, 2015 14:35 |
|
Tab8715 posted:Extracurricular question, but when it comes to massive web-based SaaS applications like Facebook, Salesforce, and Apple iCloud, what are they using for their directory service?
|
# ? Dec 26, 2015 06:12 |
|
Currently sitting here with my thumb up my butt, waiting for a call back from my internal Openstack provider. I needed some more resources so I went into their customized Horizon portal and configured a quota increase request. Clicked Submit, got an error page, and then 30 seconds later an email letting me know my request to deactivate all of our configured user accounts, including the ones for automation, had completed successfully.
|
# ? Jan 7, 2016 17:42 |
|
MagnumOpus posted:Currently sitting here with my thumb up my butt, waiting for a call back from my internal Openstack provider. I needed some more resources so I went into their customized Horizon portal and configured a quota increase request. Clicked Submit, got an error page, and then 30 seconds later an email letting me know my request to deactivate all of our configured user accounts, including the ones for automation, had completed successfully. @openstack_txt
|
# ? Jan 7, 2016 20:17 |
|
jesus
|
# ? Jan 9, 2016 10:26 |
|
Sounds like someone customized that Horizon portal..... poorly.
|
# ? Jan 12, 2016 19:39 |
|
Sums up Openstack for me (it does everything you need! Everything! Just...badly. A script will fix it, most of the time!) Find a good vodka you like and hunker down, it's gonna be a long night.
|
# ? Jan 13, 2016 05:03 |
|
If my OpenStack servers weren't blades I would literally pull them out from the rack on their rails and poo poo inside them
|
# ? Jan 13, 2016 06:33 |
|
I am tired of the cloud today. We just fired another RH consultant for underperforming, again. One of our devs went in to try and untangle the mess he left behind, and the code is a horror show of cut-and-pasted crap with other people's names all over it. Big surprise. At least I'm not in our sister group: the OpenStack team lost a drive over the weekend, an important project's VMs were lost, and their director got pulled in and reamed out hard on Monday by senior leadership for the fuckup (if not directly for misconfiguration / poor architecture on his group's end, then for failure to communicate the importance of availability zones and the risks of provisioning to users). Bhodi fucked around with this message at 18:06 on Jan 13, 2016 |
# ? Jan 13, 2016 18:02 |
|
Wait... no backups? Although ironically, my group trying to set up backups for our Chef installation (less than 10k nodes across thousands of users, and people think we're getting overloaded first thing, lol) has caused more serious outages and failures than if we had never bothered with backups at all. Hell, we had Netbackup running (which was causing some of the outages too, ironically) that could have restored our systems, so we had double backups going for a while. The same goes for our Chef HA setup: it's caused more availability loss than anything, due to split-brain problems on a really, really flaky ESX cluster running way, way overprovisioned on everything, whenever we randomly drop ICMP packets a few times in a row even though no vMotions are happening.
|
# ? Jan 13, 2016 19:44 |
|
Vulture Culture posted:If my OpenStack servers weren't blades I would literally pull them out from the rack on their rails and poo poo inside them
|
# ? Jan 13, 2016 20:07 |
|
How does a single drive failure cause any data loss in TYOOL 2016?
|
# ? Jan 13, 2016 20:31 |
|
bitprophet posted:How does a single drive failure cause any data loss in TYOOL 2016? Because OpenStack is the Perl of cloud computing.
|
# ? Jan 13, 2016 22:30 |
|
bitprophet posted:How does a single drive failure cause any data loss in TYOOL 2016? OpenStack is an amazing machine made of super-complex glass parts spinning at 100,000 rpm. Don't sneeze. Oh gently caress, you looked at it wrong and there's glass everywhere.
|
# ? Jan 14, 2016 01:14 |
|
Ixian posted:Because open stack is the Perl of cloud computing It's because their admins didn't bother configuring a resilient backend for Glance, or their storage policies weren't set up (or were misconfigured) for Swift. Openstack isn't even remotely perfect, and it's confusing for admins, but it probably doesn't deserve the blame for this. As an aside, Perl is great. Call Openstack the PHP or Node.js of cloud computing if you want, but Perl built the world.
|
# ? Jan 14, 2016 01:58 |
|
evol262 posted:It's because their admins didn't bother configuring a resilient backend for glance, or their storage policies weren't set up (or were misconfigured) for swift.
|
# ? Jan 14, 2016 02:24 |
|
Vulture Culture posted:Or the cluster is set up with instance storage by default, which is sensible for many environments. AWS does the same thing if you launch an instance store AMI. But then we have to assume that all the VMs were running on a single compute node, and that a single disk failure killed that node (or at least /var), which is even dumber.
|
# ? Jan 14, 2016 02:37 |
|
evol262 posted:It's because their admins didn't bother configuring a resilient backend for glance, or their storage policies weren't set up (or were misconfigured) for swift. Perl IS great. The glue that still holds a lot of the net together, no doubt. In the right hands it is incredibly powerful and works wonders. However, you really have to know what the gently caress you are doing and/or not push it too far out of bounds. Perl is great, but for every Perl master there's a dozen neophytes out of their league, setting the stage for long-term problems, because at first glance what they did "seemed to work fine", or they are just running something someone else built with no real idea how it works or desire to find out. .....just like Openstack.
|
# ? Jan 14, 2016 02:37 |
|
evol262 posted:But then we have to assume that all the VMs were running on a single compute node, and that a single disk failure killed that node (or at least /var), which is even dumber.
|
# ? Jan 14, 2016 03:24 |
|
Ixian posted:Perl IS great. The glue that still holds a lot of the net together, no doubt. I'd agree and disagree. I agree in a general sense that the basic tenets of cloud computing (how to structure your applications, how to handle resiliency, etc.) are a quagmire for people who just want to get something up and running, no matter whether it's GCE, AWS, Openstack, or another provider. Openstack is harder because you actually need to configure Openstack itself (correctly), which brings a whole bunch of problems, because the architecture (in particular, the overly chatty applications hammering the database instead of using a message queue) falls over in large environments unless you have subject matter experts. But "I lost a disk, and a server crashed, with all my stuff on it!" (or "I lost a disk, and my entire VMware datastore was on it!" [Glance], or "I lost a disk, and my RAID failed!" [bad Swift resiliency]) isn't Openstack's fault. I'm not really trying to defend Openstack here, and I know as well as anybody that it's just not appropriate for 95% of the people who want to use it, and 50% of the clients it is appropriate for don't have the on-site expertise to keep it ticking, but losing a disk shouldn't cause significant damage to an Openstack environment with an admin who can rub two brain cells together. It's not magic. The cloud is just someone else's computer (yours, in the case of Openstack), and all the hard-learned lessons about how to make sure your environment doesn't take a dive when you lose a disk or a network link or whatever still apply. Vulture Culture posted:Dumber in the sense that people are obviously doing the wrong thing with it (either expectations haven't been communicated, or people aren't listening regardless), but there's nothing at all wrong with this approach for cattle when you're running OpenStack as designed. The instances blow up, and Heat spins new ones. Why double your disk costs for no reason?
If you need individual instances to be resilient against disk failures, you have Cinder. I try not to have customer involvement, but almost every sosreport I've seen is from a site still running on PowerEdge/Proliant/whatever with at least two disks in fault-tolerant RAID. Sure, it doubles your disk costs, but most of our institutional customers seem to have accepted that as a cost of business for years, and haven't changed their purchasing strategy just for compute nodes. They'd probably be better off shoving as much memory and as little disk as possible into the chassis, but it's never my call. I'm also guessing from the total loss of the VMs (and that anyone cared) that these were probably pets. evol262 fucked around with this message at 03:34 on Jan 14, 2016 |
# ? Jan 14, 2016 03:30 |
|
Half the places I've seen using OpenStack come from a VMware-ish or traditional infrastructure history where you're used to treating every host as a pet, so everyone should be running high availability storage and network to some degree as a habit. If people are starting to deploy OpenStack thinking it magically fixes all these things for you, I dunno if you just became brain damaged or what.
|
# ? Jan 14, 2016 05:34 |
|
Wait wait wait, are you telling me that I can't just shove a 3-node Cassandra cluster into my environment, try to hack it into performing like an immediately-consistent database, and expect the cloud to magically recover everything when I lose one of those nodes?
|
# ? Jan 14, 2016 15:13 |
|
OSes are magic, virtualization is magic, the cloud is magic, everything is magic Sadly, magic doesn't mean operable, or good
|
# ? Jan 14, 2016 17:04 |
|
Bhodi posted:OSes are magic, virtualization is magic, the cloud is magic, everything is magic I feel like this is the biggest change that occurred in the web ops world in terms of its effect on my day-to-day. It used to be that no one else in the org cared about what we were doing on an infrastructure level because they knew it was a nightmare realm they dared not enter. Since the rise of everything "cloud" now we've got armies of web devs who know just enough to be dangerous. My job is now running around trying to keep assholes from loving up long-term plans by applying half-understood cloudisms to their designs. Just about every day I find a new thing that makes me go cross-eyed with rage, and on the days I don't it's because instead I spent 3 hours arguing a web dev down from his 20% understanding of platform and infrastructure concepts.
|
# ? Jan 14, 2016 17:54 |
|
Azure question: is it possible to override system routes on the platform? For example, we've deployed a Check Point appliance which we want to act as the default gateway for all VMs so that it can enforce policy and inspect traffic. However we've only managed to get it to work for VM traffic going between subnets. Traffic going from a VM to a VPN gateway or the internet just magically bypasses the Check Point appliance (which I assume is due to SDN voodoo). Is this at all possible?
|
# ? Jan 14, 2016 18:49 |
|
cheese-cube posted:Azure question: is it possible to override system routes on the platform? For example, we've deployed a Check Point appliance which we want to act as the default gateway for all VMs so that it can enforce policy and inspect traffic. However we've only managed to get it to work for VM traffic going between subnets. Traffic going from a VM to a VPN gateway or the internet just magically bypasses the Check Point appliance (Which I assume is due to SDN voodoo). Does this help? Azure User-Defined Routes
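If user-defined routes do cover the case, the shape of it would be roughly this (a sketch using the current az CLI with made-up resource names; at the time this would have been done through PowerShell cmdlets instead, and the appliance's NIC also needs IP forwarding enabled):

```shell
# Hypothetical resource names and appliance IP, for illustration.
# Create a route table with a default route via the virtual appliance:
az network route-table create -g myRG -n fw-routes
az network route-table route create -g myRG --route-table-name fw-routes \
  -n default-via-fw --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
# Associate the table with each VM subnet so the UDR overrides
# the platform's system routes for traffic leaving that subnet:
az network vnet subnet update -g myRG --vnet-name myVNet -n vm-subnet \
  --route-table fw-routes
```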
|
# ? Jan 14, 2016 20:21 |
|
MagnumOpus posted:I feel like this is the biggest change that occurred in the web ops world in terms of its effect on my day-to-day. It used to be that no one else in the org cared about what we were doing on an infrastructure level because they knew it was a nightmare realm they dared not enter. Since the rise of everything "cloud" now we've got armies of web devs who know just enough to be dangerous. My job is now running around trying to keep assholes from loving up long-term plans by applying half-understood cloudisms to their designs. Just about every day I find a new thing that makes me go cross-eyed with rage, and on the days I don't it's because instead I spent 3 hours arguing a web dev down from his 20% understanding of platform and infrastructure concepts.
|
# ? Jan 14, 2016 20:38 |
|
Vulture Culture posted:This is a great reason to have devs responsible for operating their own software O-ho-ho but you see "we are all just engineers in this group" because we're "avoiding silo-ization". We're "flat" and a "unified development team" "leveraging devops paradigms" and responsible for "self-organizing". EDIT: All of these are real things that can be accomplished. I don't work for a MagnumOpus fucked around with this message at 21:00 on Jan 14, 2016 |
# ? Jan 14, 2016 20:47 |
|
I've learned to just say "Yes, we can do it, but... here's what you're trading off" and document it carefully while I silently work on counter-measures / contingencies for the inevitable failure of the bad design. If a place can't get high-level system design right, there's a fair chance they're not going to get even basic stuff like HA network and storage right, and you're just on your own really. Replication factor of 1? Absolutely fine for your goal of cost savings, Mr. SVP! I'll check with the backup guys that we've put all these into backup inclusions in the meantime, if that's alright with you?Vulture Culture posted:This is a great reason to have devs responsible for operating their own software Also, gently caress the developers that think reading highscalability.com posts means they're a goddamn CCIE and VCDX architect.
|
# ? Jan 14, 2016 23:02 |
|
necrobobsledder posted:developer-centric start-ups and because they're paid stupidly high wages they feel they're an authority on anything besides just plain code These are unfortunately the roots of my problems. My company is huge and developer-centric, and the business unit I'm in has only ever built software that is run on-premise by their customers, so they have absolutely zero experience running a webapp enterprise. This fact has not stopped their "architects" from making designs that commit us to poorly-researched solutions before the ops team has any opportunity to intervene. For example, they were recently stumped by the eventually-consistent nature of Cassandra, forcing a massive refactor of their app, and don't even get the PM started on how badly they missed the mark on estimating SSD storage costs.
|
# ? Jan 14, 2016 23:14 |
|
Tab8715 posted:Does this help? Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well.
|
# ? Jan 15, 2016 00:01 |
|
cheese-cube posted:Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well. What do you mean by System Routes? I know for a fact I've seen PaaS Firewalls implemented in Azure but from the OS couldn't you just manually modify the routes in the route table and force it somewhere else? Example, with cmd.exe route print code:
|
# ? Jan 15, 2016 00:08 |
|
Tab8715 posted:What do you mean by System Routes? Have you read the article you linked? That explains things fairly succinctly. I can try modifying the routing table in the guest OS however I'm unsure if it will break anything. Also there would still be a security issue because if someone gained elevated privileges in the guest OS they could just modify the routing table to go around the FW.
|
# ? Jan 15, 2016 02:28 |
|
necrobobsledder posted:Half the places I've seen using OpenStack come from a VMware-ish or traditional infrastructure history where you're used to treating every host as a pet, so everyone should be running high availability storage and network to some degree as a habit. If people are starting to deploy OpenStack thinking it magically fixes all these things for you, I dunno if you just became brain damaged or what. I'm going with brain damaged. Something about this platform - I don't know if it is marketing or bad advice from one of the many IT consulting firms that pushes it for "private cloud" - leads normally intelligent people to believe it's not just a software platform for virtual operations but some kind of advanced AI that, once set up, merely needs to be given jobs. Set up a single VM instance some developer built in a weekend and have it take on mission-critical functions the rest of your service relies on? No problem. It's all magic back there, something called Swift handles persistent storage - don't worry guys, hard drive failures are such a 90's problem now - security takes care of itself...holy poo poo what happened? Backups? We needed backups? If you are googling "eventually consistent" at 3am after the poo poo has hit the fan and splattered around the room, you just might be an Openstack user. Yeah yeah, blame the user, not the fault of the software, I get it. Managing external IT and software disasters is pretty much half of my job, though, and I see this way too often. Software is a tool, and if you are a tool using it, any system can go bad, but something about this stack brings it out more than others. Not Openstack's fault, probably; just how it is.
|
# ? Jan 15, 2016 02:36 |
|
It's the Microsoft Excel of the virtualization stacks.
|
# ? Jan 15, 2016 02:39 |
|
In our OpenStack deployments, we offer two choices for storage. By default (and by our recommendation) you get Ceph-backed storage, which isn't super fast but is reliable and will save your data from being vaporized if an OSD sinks. Alternatively, you can use fast local instance storage, which creates volumes in an LVM VG on top of a RAID 0 of SSDs. Very fast, potentially very dangerous. We have made it as abundantly clear as we can that it's not durable and could wreck your data at any time. People love it and use the hell out of it. All our top hypervisors are the ones dedicated to instance storage rather than Ceph, and we may have to start converting hypervisors running OSDs to instance storage to meet demand. If you build it, they will come, even when you tell them it's as lethal as a pit full of rattlesnakes.
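If the two tiers are exposed through Cinder's multi-backend support, the user-facing side of a setup like that might look roughly like this (hypothetical backend names; a sketch, not anyone's actual config):

```shell
# Hypothetical backend names matching two cinder.conf backend sections:
# durable Ceph RBD storage vs. fast-but-fragile local LVM-on-RAID-0-SSD.
openstack volume type create ceph \
  --property volume_backend_name=ceph-backend
openstack volume type create fast-local \
  --property volume_backend_name=lvm-ssd-backend
# Users then opt into the dangerous-but-fast tier explicitly:
openstack volume create --type fast-local --size 50 scratch-vol
```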
|
# ? Jan 15, 2016 02:53 |
|
MagnumOpus posted:For examples they were recently stumped by the eventually-consistent nature of Cassandra, forcing a massive refactor of their app, and don't even get the PM started on how badly they missed the mark on estimating SSD storage costs. cheese-cube posted:Yeah I've read that article and unfortunately it looks like system routes cannot be overridden. Was hoping that there was a way to do it but oh well. Thanks Ants posted:It's the Microsoft Excel of the virtualization stacks. Ixian posted:If you are googling "eventually consistent" at 3am after the poo poo has hit the fan and splattered around the room you just might be an Openstack user.
|
# ? Jan 15, 2016 03:23 |
|
necrobobsledder posted:Those routes look pretty reasonable. In fact, they're exactly the routes that I'm afraid of losing in our enterprise-as-all-goddamn-hell network routing configuration in AWS that we're going to duplicate in Azure. Internet subnet, cloud-local subnet, and your private route - this is what a privileged VM should look like. It gets dicey if you want NAT instances that act as edge routers and IDS like most enterprises do admittedly, but in those cases I'd just override the routes locally via something like iptables (or whatever the hell is in Windows) instead of letting anything outside the instance take precedence. Almost everything I've heard about people using Azure leads me to believe "we just want cheaper Windows VMs that are connected to the corporate network than what corporate IT can give us, oh dear god please anything but those" (myself included, our internal private cloud is miserable and destitute). The issue with overriding via the guest OS routing table is that anyone with access to the guest and sufficient permissions could just change it back. The thing is that there aren't currently any traffic inspection capabilities on the platform, and if you ask Microsoft for, say, Layer 3 logs, they will refuse. Sure, you can deploy a FW virtual appliance, but if you cannot enforce the flow of traffic then it's essentially moot.
|
# ? Jan 15, 2016 03:34 |
|
cheese-cube posted:The issue with overriding via the guest OS routing table is that anyone with access to and sufficient permissions could just change it back. The thing is that there aren't currently any traffic inspection capabilities on the platform and if you ask Microsoft for say Layer 3 logs they will refuse. Sure you can deploy a FW virtual appliance however if you cannot enforce the flow of traffic then it's essentially moot. Layer 3 logs are difficult due to NVGRE and the way the Hyper-V logical switch handles moving packets around. There's a LOT of PFM going on there, in my opinion. More PFM than I like, most certainly, and I get to help hosters build out their own public clouds with this crap!
|
# ? Jan 15, 2016 05:17 |
|
I've been given a set of AWS keys by a client, but they don't seem to have the permissions I need to do what I'm supposed to be able to do. Is there a quick and easy way to list the IAM permissions associated with a valid key?
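There's no single API call that dumps a key's effective permissions, but assuming the key at least has IAM read access, a rough way to find out who the key belongs to and what's attached would be (user name here is a made-up placeholder, and these calls will just be denied if the key lacks iam:List* access):

```shell
# Identify which principal the keys belong to:
aws sts get-caller-identity
# Assuming it's an IAM user named "client-user" with iam:List* access,
# enumerate the policies and groups attached to it:
aws iam list-attached-user-policies --user-name client-user
aws iam list-user-policies --user-name client-user
aws iam list-groups-for-user --user-name client-user
```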
|
# ? Jan 20, 2016 10:10 |