22 Eargesplitten
Oct 10, 2010



That's good. The stuff I got wrong makes sense now; I was usually just missing one key detail. I think I'm going to take another practice exam tomorrow morning and then study whatever I'm still getting wrong. I was planning on spending my PTO camping, but that got scrapped, so now I have a lot of time to study, which helps since the nature of my job keeps me from focusing on something like this for an extended stretch while I'm working.

thotsky
Jun 7, 2005

hot to trot
What helped me the most by far was looking for options that are clearly wrong and can be dismissed, instead of hunting for the right answer. Even when you feel like you have no clue, you often have enough of one to reduce the question to a 50-50 guess.

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

thotsky posted:

What helped me the most by far was looking for options that are clearly wrong and can be dismissed, instead of hunting for the right answer. Even when you feel like you have no clue, you often have enough of one to reduce the question to a 50-50 guess.

I haven't done the Solutions Architect exams, but this was definitely true for the SysOps and DevOps ones. The other thing I noticed was that the solution with the fewest moving parts generally tended to be the most cost-effective and performant, which helped a lot on questions where I wasn't intimately familiar with the services.

Granite Octopus
Jun 24, 2008

thotsky posted:

Well, I managed to get VTL Mapping Templates to work. Hopefully the defaults for HTTP custom integrations are pretty good, because I left most of them alone.

I do sort of wonder how one would go about testing these templates though. Anyone have experience working with them?

I just ran a local copy of Velocity when developing some templates a few months back. For testing I rely on our integration test suite, which makes HTTP requests to the API and checks the responses.

Sounds like you've done things the "right" way, and that's basically what API Gateway was originally designed for.

I just spent a bunch of time ripping VTL templates out of my project because the past developers made a bunch of them when they could have just passed the requests and responses through unmodified.
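
To make "ran a local copy of Velocity" concrete, here's the kind of unit test I mean: render the template against a canned payload and assert on the output. This is a rough sketch using airspeed, a pure-Python port of Velocity (so not the exact engine API Gateway runs, and you'd have to stub $input/$util yourself):

code:
import json
import airspeed  # pip install airspeed -- a pure-Python Velocity implementation

# A toy mapping template. Real API Gateway templates also get $input, $util,
# $context, etc., which this harness would need to stub out.
TEMPLATE = '{"id": "$payload.id", "name": "$payload.name"}'

def test_template_renders_valid_json():
    rendered = airspeed.Template(TEMPLATE).merge(
        {"payload": {"id": 123, "name": "test"}}
    )
    assert json.loads(rendered) == {"id": "123", "name": "test"}
The integration tests catch the end-to-end issues, but this keeps the edit-render-check loop fast while you're writing the template itself.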

22 Eargesplitten
Oct 10, 2010



whats for dinner posted:

I haven't done the Solutions Architect exams, but this was definitely true for the SysOps and DevOps ones. The other thing I noticed was that the solution with the fewest moving parts generally tended to be the most cost-effective and performant, which helped a lot on questions where I wasn't intimately familiar with the services.

Yeah, that seemed to be a theme. I was also doing the "eliminate obviously wrong answers" thing when I didn't know the answer outright. Thankfully I've always been good at taking tests, especially multiple choice.

Do you get a pass/fail immediately or do you have to wait for results?

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


You get an immediate pass or fail with stats emailed to you later. I failed Solutions Architect Professional a few months ago because 60% of the test was questions about billing.

Pile Of Garbage
May 28, 2007



Anyone else seeing issues with the EC2 API in ap-southeast-2? Nothing on the service status page yet, but I'm getting timeouts across the board.

Edit: never mind, it's just me who is the dingus.

CarForumPoster
Jun 26, 2013

⚡POWER⚡
I have a DB hosted in RDS that I want to put an API in front of, to serve content via a Flask app running on Elastic Beanstalk. I may potentially give ~10 users access to the API as well.

Does AWS have some hilariously easy way to make a REST or similar JSON-serving API before I get started on a new Django (DRF) or maybe FastAPI project? This won't get a ton of traffic; I'd mostly be using it to generate reports server-side that are served by a small Flask app.

CarForumPoster fucked around with this message at 12:50 on Jul 7, 2020

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

CarForumPoster posted:

I have a DB hosted in RDS that I want to put an API in front of, to serve content via a Flask app running on Elastic Beanstalk. I may potentially give ~10 users access to the API as well.

Does AWS have some hilariously easy way to make a REST or similar JSON-serving API before I get started on a new Django (DRF) or maybe FastAPI project? This won't get a ton of traffic; I'd mostly be using it to generate reports server-side that are served by a small Flask app.

API Gateway's not bad, but it might be a bit heavy for what you're after. You can supply an OpenAPI spec and it'll fill out all the resources and object representations for you; you then just configure it to forward requests to the HTTP endpoint of your Flask app after validation. You can also set it up to require API keys, which are managed by AWS, if you don't want to handle that functionality yourself.
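
To give a sense of how little there is to it, here's a sketch of importing a one-route proxy spec with boto3 (the Elastic Beanstalk hostname and path are made up, and a real spec would add request validation and API key settings):

code:
import json
import boto3

# Minimal OpenAPI document proxying one route straight to the Flask app.
# The x-amazon-apigateway-integration extension is what wires up the forwarding.
spec = {
    "openapi": "3.0.1",
    "info": {"title": "reports-api", "version": "1.0"},
    "paths": {
        "/reports": {
            "get": {
                "x-amazon-apigateway-integration": {
                    "type": "http_proxy",
                    "httpMethod": "GET",
                    "uri": "http://my-flask-env.example.com/reports",
                }
            }
        }
    },
}

apigw = boto3.client("apigateway")
api = apigw.import_rest_api(body=json.dumps(spec))
print(api["id"])  # deploy a stage against this REST API id to go live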

SurgicalOntologist
Jun 17, 2004

Question. I want to run GPU jobs on GCP. We have most things on a GKE cluster, although we are still figuring out K8s. For these GPU jobs, we want to launch GPU machines by some trigger, up to some maximum number of running machines, and scale down as jobs finish. We never want GPU machines sitting around waiting for a job or otherwise idle. Is this something we can do with autoscaling, or does it need finer control? Can K8s help us with this, or do we need a GCE solution (maybe similar to what's being discussed here)? Obviously, we want to reinvent the wheel as little as possible. Something serverless like Cloud Run would probably make sense, except that it doesn't support GPU machines.

Granite Octopus
Jun 24, 2008

This feels like a really dumb question, but is there really no way to delete a bucket without enumerating and deleting each object in it first?

I have a bucket with ~130M objects in it. Running `aws s3 rb s3://loss-edits/ --force` from an ec2 instance in the same region as the bucket is going to take a couple of weeks, and I'd love to have this done sooner.

Docjowles
Apr 9, 2009

I think the advice I got when I had to do something similar was to attach a lifecycle policy that expires anything older than 1 day. It processed pretty fast (within a day or so), and then I was able to delete the empty bucket.
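
For reference, the whole policy is a few lines with boto3 (a sketch using the bucket name from above; the rule ID is arbitrary):

code:
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="loss-edits",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix matches every object
                "Expiration": {"Days": 1},
                # also reclaim half-finished uploads while we're at it
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
If the bucket is versioned you'd also want a NoncurrentVersionExpiration rule, since expiration alone just adds delete markers.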

Granite Octopus
Jun 24, 2008

Ohh, that's a great idea, thanks! I've created a policy; will see how it goes.

Thanks Ants
May 21, 2004

#essereFerrari


Is there an Azure equivalent of the Google Identity-Aware Proxy?

I just want to put a service in Azure that exposes a web UI, and put that behind an Azure AD login as it's an internal-only service. Azure AD Application Proxy would work, but it would need to run on a separate Windows VM, and I can't see any sort of as-a-service version of it for workloads that are already in Azure.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

SurgicalOntologist posted:

Question. I want to run GPU jobs on GCP. We have most things on a GKE cluster, although we are still figuring out K8s. For these GPU jobs, we want to launch GPU machines by some trigger, up to some maximum number of running machines, and scale down as jobs finish. We never want GPU machines sitting around waiting for a job or otherwise idle. Is this something we can do with autoscaling, or does it need finer control? Can K8s help us with this, or do we need a GCE solution (maybe similar to what's being discussed here)? Obviously, we want to reinvent the wheel as little as possible. Something serverless like Cloud Run would probably make sense, except that it doesn't support GPU machines.


https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#gpu_pool


The unsaid bit is that you need the GKE autoscaler to be able to scale the node pool down to 0.

The Fool
Oct 16, 2003


Thanks Ants posted:

Is there an Azure equivalent of the Google Identity-Aware Proxy?

I just want to put a service in Azure that exposes a web UI, and put that behind an Azure AD login as it's an internal-only service. Azure AD Application Proxy would work, but it would need to run on a separate Windows VM, and I can't see any sort of as-a-service version of it for workloads that are already in Azure.

Pretty sure they want you to integrate Azure AD directly or use SAML in that situation.

See the custom developed and non-gallery options at this link: https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-application-management

SurgicalOntologist
Jun 17, 2004

freeasinbeer posted:

https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#gpu_pool


The unsaid bit is that you need the GKE autoscaler to be able to scale the node pool down to 0.

Hmm, if autoscaling is fast enough and can go to zero, it could work. Is there a way to set up k8s so that a pod just runs a container then gets deleted?

We'll basically want to run a container 10 times with different environment variables, and be able to e.g. have 5 nodes (running 2 jobs each, unless something weird happens with the timings), or have 10 nodes running one job each, with each node shutting down when it finishes if there are no new jobs. This isn't the normal k8s use case but surely we're not the first and there's a way to make it behave like that.

Edit: k8s has jobs? Well I feel stupid. That seems to be the object I was missing.

SurgicalOntologist fucked around with this message at 21:22 on Jul 11, 2020

freeasinbeer
Mar 26, 2015

by Fluffdaddy

SurgicalOntologist posted:

Hmm, if autoscaling is fast enough and can go to zero, it could work. Is there a way to set up k8s so that a pod just runs a container then gets deleted?

We'll basically want to run a container 10 times with different environment variables, and be able to e.g. have 5 nodes (running 2 jobs each, unless something weird happens with the timings), or have 10 nodes running one job each, with each node shutting down when it finishes if there are no new jobs. This isn't the normal k8s use case but surely we're not the first and there's a way to make it behave like that.

Edit: k8s has jobs? Well I feel stupid. That seems to be the object I was missing.

Yeah, Jobs are probably what you want, although sometimes you want a higher-level orchestrator, of which there are a few; Argo and Airflow come to mind.
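
For the "run this container 10 times with different environment variables" case, a sketch with the official kubernetes Python client looks something like this (image name, namespace, and env var are made up; the GPU request is what pushes pods onto the GPU pool, and once the Jobs finish the autoscaler can take the pool back to zero):

code:
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

for i in range(10):
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=f"gpu-job-{i}"),
        spec=client.V1JobSpec(
            backoff_limit=2,  # retry a couple of times on failure
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="worker",
                            image="gcr.io/my-project/gpu-worker:latest",
                            env=[client.V1EnvVar(name="JOB_INDEX", value=str(i))],
                            # requesting a GPU forces scheduling onto the GPU pool
                            resources=client.V1ResourceRequirements(
                                limits={"nvidia.com/gpu": "1"}
                            ),
                        )
                    ],
                )
            ),
        ),
    )
    batch.create_namespaced_job(namespace="default", body=job)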

Thanks Ants
May 21, 2004

#essereFerrari


The Fool posted:

Pretty sure they want you to integrate Azure AD directly or use SAML in that situation.

See the custom developed and non-gallery options at this link: https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-application-management

It's less about the integrated sign-in and more that it prevents anybody from hitting the application until they've authenticated - I don't want to expose it to the world. With Azure AD Application Proxy (love these snappy product names), an on-prem app doesn't see a request until the user has been through authentication, so any glaring security issues can only be exploited by our own staff.

It's not an application we develop, so I have no control over linking it into our Azure AD; I just don't trust it enough to expose it to the Internet.

The Fool
Oct 16, 2003


Is it in an App Service? The built-in App Service authentication might work for you.

quote:

How it works
The authentication and authorization module runs in the same sandbox as your application code. When it's enabled, every incoming HTTP request passes through it before being handled by your application code.
[Architecture diagram: requests are intercepted by a process in the site sandbox, which talks to identity providers before allowing traffic through to the deployed site]
This module handles several things for your app:
  • Authenticates users with the specified provider
  • Validates, stores, and refreshes tokens
  • Manages the authenticated session
  • Injects identity information into request headers
The module runs separately from your application code and is configured using app settings. No SDKs, specific languages, or changes to your application code are required.

Otherwise you might be able to do header rewrites with Application Gateway, but that would be my last resort.

22 Eargesplitten
Oct 10, 2010



I finally got my cert, and I was wondering if there's a way to find job openings specifically at AWS partner companies, since they need a certain number of certified staff for their partner status. I'm trying to find a remote position and have been blasting out resumes, but no traction so far (he says after two business days, because he's ridiculously impatient).

22 Eargesplitten
Oct 10, 2010



Double-posting: I've got a technical interview, and the interviewer let me know ahead of time that he wants me to explain different ways to build a URL shortener on AWS. So far I've got three, and I was wondering if there are others I should know about. I'm not asking for a full, thorough explanation, because I don't want people to do my homework for me; just other stacks that might be involved. I've included links explaining the ones I've found, because I'm still working on understanding these and might be describing them terribly. This would be a pretty fantastic first AWS job, and the hiring manager knows I don't really have any serverless skills yet; he just cares that I can pick them up, so I'm trying to do the work to prepare.

So far I've got:
  • S3 bucket with website hosting, where each object in the bucket is a redirect except for an admin page where you put the long URLs in and get short URLs back, all behind a CloudFront distribution. (https://aws.amazon.com/blogs/compute/build-a-serverless-private-url-shortener/)
  • DynamoDB table with Lambda functions for the shorten/redirect, behind API Gateway. (https://outcrawl.com/go-url-shortener-lambda)
  • Amplify web host, API Gateway, DynamoDB, no Lambda (haven't gone through all three parts yet; starts here: https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-1/)

CarForumPoster
Jun 26, 2013

⚡POWER⚡
Obvious one is a conventional web-app-type site on Lambda/EBS with a regular ol' SQL DB on RDS.

Vanadium
Jan 8, 2005

iirc there was a shitpost a while back where someone built a URL shortener as a Lambda by hardcoding all the URLs and having the Lambda update itself whenever a new URL needed to be shortened; imo that's important prior art

Adhemar
Jan 21, 2004

Waiter, there is a hideous beast in my soup.
Is this interview at AWS, or at a company using AWS? If it's at AWS, avoid using a relational database. I would recommend that in either case, actually, but we really don't like relational databases at AWS, and this is definitely in DynamoDB's sweet spot.
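
The core of the DynamoDB version really is tiny. A sketch of the redirect Lambda (table name, key schema, and the proxy-integration event shape are assumptions for illustration):

code:
import os

import boto3

# Assumed: a table keyed on "slug" with a "url" attribute, name passed via env
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def handler(event, context):
    # API Gateway proxy integration: GET /{slug} -> 301 to the stored long URL
    slug = event["pathParameters"]["slug"]
    item = table.get_item(Key={"slug": slug}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 301, "headers": {"Location": item["url"]}}
The shorten side is the mirror image: generate a slug, then put_item with a condition expression so you don't clobber an existing one.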

The Fool
Oct 16, 2003


Not on AWS, but I built a URL shortener with Azure Functions where the URLs are specified as environment variables

22 Eargesplitten
Oct 10, 2010



Huh, some good ideas there. It's not at AWS; it's at a company that's moving as much as possible from traditional VMs to containers and serverless services on AWS. They're expanding their sysadmin team and are willing to take on a junior cloud guy. I hadn't thought of the traditional VM/web-server options, although I'll probably spend the minimum amount of time fleshing those out, because from what he was saying, that's the exact kind of option I should mention for thoroughness but not one he would find appealing.

Just-In-Timeberlake
Aug 18, 2003
Hey all,

Need a little help getting a static public-facing IP address to work with a Lambda API behind a VPC.

Here's what I've got so far:

  • VPC with (3) public and (3) private subnets, created using the VPC Wizard
  • The NAT gateway associated with that VPC has a static IP address assigned to it
  • DNS for abc.domain.com points to that IP address
  • All subnets have an entry in their routing tables for 0.0.0.0/0 pointing to the internet gateway that was created during VPC creation
  • A Lambda API function that I've attached to the VPC, with the private subnets specified under the subnet section. All inbound and outbound traffic is allowed, to eliminate that as an issue. The Lambda function's API Gateway has the custom domain name (abc.domain.com) mapped to it

Using Postman, the API function works when called via the URL AWS assigns (ex. https://whatever.execute-api.us-east-2.amazonaws.com/api/showSomething). When I make the same call using my URL (abc.domain.com/api/showSomething), I get the following, depending on whether I'm trying HTTP or HTTPS:

HTTPS: Error: tunneling protocol could not be established
HTTP: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

So something is obviously off, but I'm not sure where. If more details are needed just say the word.

Any help, or a pointer to what the issue might be, would be much appreciated.

JehovahsWetness
Dec 9, 2005

bang that shit retarded

Just-In-Timeberlake posted:

  • The NAT gateway associated with that VPC has a static IP address assigned to it
  • DNS for abc.domain.com points to that IP address

Why? Inbound traffic to the NAT gateway isn't somehow going to trigger your Lambda or get routed via API Gateway. Running a Lambda function inside a VPC is for when it needs to be part of your private network, because it needs IP/network-level access to something it couldn't otherwise reach.

If you just want a custom domain for your API Gateway:
https://aws.amazon.com/premiumsupport/knowledge-center/custom-domain-name-amazon-api-gateway/
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html

Just-In-Timeberlake
Aug 18, 2003

JehovahsWetness posted:

Why? Inbound traffic to the NAT gateway isn't somehow going to trigger your Lambda or get routed via API Gateway. Running a Lambda function inside a VPC is for when it needs to be part of your private network, because it needs IP/network-level access to something it couldn't otherwise reach.

If you just want a custom domain for your API Gateway:
https://aws.amazon.com/premiumsupport/knowledge-center/custom-domain-name-amazon-api-gateway/
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html

You’re right, I’m an idiot who was thinking about this the wrong way.

I need outgoing traffic to come from the static IP address so it can be whitelisted.
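
In that case, a quick way to confirm the plumbing once the function is on the private subnets: call an IP-echo endpoint from inside the Lambda and check that it reports the NAT gateway's elastic IP (a throwaway sanity-check handler, nothing more):

code:
import urllib.request

def handler(event, context):
    # checkip.amazonaws.com echoes the caller's public IP; for a VPC-attached
    # Lambda on private subnets this should be the NAT gateway's elastic IP
    with urllib.request.urlopen("https://checkip.amazonaws.com") as resp:
        return {"egress_ip": resp.read().decode().strip()}
If that returns the whitelisted address, the routing is right and the rest is just the API Gateway custom domain setup from those links.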

sausage king of Chicago
Jun 13, 2001
I'm using ECS to try and get a short-lived container to run. In the task definition I'm using Parameter Store to grab my secrets for my app to use as environment variables. This all works fine if I manually create a task through the dashboard, select all the settings, and then enter the env variables through the container UI.

However, if I use the JSON option (Task definitions -> create new -> configure via JSON) and input my task def that way, the secrets are stripped out and I have to go through the dashboard and enter them manually.

This is a problem because I'm trying to do this all through GitHub Actions, where the action creates a new task definition revision which I can then execute. The same issue happens when the action pushes the task definition to ECS (the secrets are no longer in the task definition).

Is this a setting I can change somewhere? Am I loving something up, or is this a feature?

As an example, just to show what I'm talking about, here's an example task definition:

code:
{
  "ipcMode": null,
  "executionRoleArn": "myrole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "secretOptions": null,
      ....
      "secrets": [
        {
          "name": "myParamName",
          "valueFrom": "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyParam"
        },
        {
          "name": "myOtherParamName",
          "valueFrom": "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyOtherParam"
        }
      ],
      ....
      "privileged": null,
      "name": "myContainer"
    }
  ],
.....
}
So if that gets pushed to ECS and I go check the new revision it created, the secrets section just isn't there. When I run my app, it throws an error saying the variable/secret is null.
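
One way to see what ECS actually registered, without trusting the console UI (a quick boto3 check; "myTaskFamily" stands in for whatever family name is used):

code:
import boto3

ecs = boto3.client("ecs")

# Latest revision of the family; prints each container's registered secrets
td = ecs.describe_task_definition(taskDefinition="myTaskFamily")
for cd in td["taskDefinition"]["containerDefinitions"]:
    print(cd["name"], cd.get("secrets"))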

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I've got a quick question about CloudWatch. We currently have 11 accounts, and we're using the CloudWatch agent on our EC2 instances to ship system logs. The problem is we want a central location where we can view all of our logs. I was looking at doing this: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html and setting up a central logging account, but I don't know how much of a pain that is to work with and maintain. Any thoughts?

Extremely Penetrated
Aug 8, 2004
Hail Spwwttag.

sausage king of Chicago posted:

code:
{
      "secrets": [
        {
          "name": "myParamName",
          "valueFrom": "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyParam"
        },
        {
          "name": "myOtherParamName",
          "valueFrom": "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyOtherParam"
        }
      ],
.....
}
So if that gets pushed to ECS and I go check the new revision it created, the secrets section just isn't there. When I run my app, it throws an error saying the variable/secret is null.

You're saying the JSON doesn't even have the "secrets": [...] block in the new task definition revision? That sounds like a JSON formatting error; maybe GitHub is mangling it. Check that your line endings are just LF, not CRLF.

It should work; I make my task definitions in Terraform and they pass in secrets from Secrets Manager. If the JSON still had the secrets but they just weren't showing up in the container, I would expect a permissions problem (the ECS task execution role needs ssm:GetParameters, and potentially kms:Decrypt for whatever key was used to encrypt the secret).
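
If it does turn out to be permissions, the missing piece is an inline policy on the task execution role, something like this sketch (role and policy names are made up; the parameter ARNs are the ones from the task def above):

code:
import json

import boto3

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="myEcsTaskExecutionRole",  # the *execution* role, not the task role
    PolicyName="AllowSecretResolution",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ssm:GetParameters"],
            "Resource": [
                "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyParam",
                "arn:aws:ssm:us-east-1:<myId>:parameter/pathToMyOtherParam",
            ],
            # add a kms:Decrypt statement for the key if the parameters are
            # SecureStrings encrypted with a customer-managed CMK
        }],
    }),
)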

thotsky
Jun 7, 2005

hot to trot
I am proxying a REST API with API Gateway, using Mapping Templates to transform the Integration Response. The underlying data is UTF-8 encoded and includes some non-escaped characters. The JSON validates normally, but when passed through the mapping template it fails because of stuff like "\r\n", etc. Now, I can use $util.escapeJavaScript($mapEntry.key).replaceAll("\\'","'") to fix this, which seems to be what's recommended, but it also escapes all special Unicode characters into stuff like "\u00F8d", which I do not want. How do I do one and not the other?

thotsky fucked around with this message at 13:31 on Sep 2, 2020
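
For anyone puzzling over what behavior is actually being asked for here: it's what JSON encoders do when told to leave non-ASCII alone, i.e. escape the control characters but pass UTF-8 through. A Python illustration of the target behavior (not an API Gateway fix):

code:
import json

s = "line one\r\nline two with ø"

# escape control characters but keep Unicode as-is (the desired behavior)
print(json.dumps(s, ensure_ascii=False))  # "line one\r\nline two with ø"

# escape everything, which is what $util.escapeJavaScript does
print(json.dumps(s))  # "line one\r\nline two with \u00f8"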

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

deedee megadoodoo posted:

I've got a quick question about CloudWatch. We currently have 11 accounts, and we're using the CloudWatch agent on our EC2 instances to ship system logs. The problem is we want a central location where we can view all of our logs. I was looking at doing this: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html and setting up a central logging account, but I don't know how much of a pain that is to work with and maintain. Any thoughts?

It's actually really straightforward once you get it set up the first time. It is a bit more work to work in (since you'll need two browsers open for console sessions in the two accounts), but creating a central logging account with some compute and lots of storage means everyone gets in the habit of dumping log files there and only there, and you'll always know where a given log stream will end up.
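
The per-account wiring is small once the central account has created and shared a destination (via put_destination / put_destination_policy). A sketch of the source-account side with boto3 (log group name, filter name, and the <centralAccountId> placeholder are illustrative):

code:
import boto3

logs = boto3.client("logs")

# In each source account: stream a log group to the central account's destination
logs.put_subscription_filter(
    logGroupName="/var/log/messages",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern forwards everything
    destinationArn="arn:aws:logs:us-east-1:<centralAccountId>:destination/central",
)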

cosmin
Aug 29, 2008
Wooo! I just got the mail from HR for the initial interview for an Enterprise Architect position with AWS in Europe!

I'm really happy I managed to get my foot in the door. I was afraid I couldn't get past the screening process, as most of my experience is with a competing cloud vendor (where I am a certified professional architect), and I had just gotten the AWS Solutions Architect Associate before applying, to show my skills are transferable!

This is a life-changing opportunity for me, and I'd like to make the most of it.

I know some of you are EAs or TAMs, and I remember another goon getting help here during the interview process. If you have any tips about the process or the culture, please let me know, either here or via PMs, and I'd gladly buy you a beer at the next non-virtual re:Invent, if we ever get there :D

fluppet
Feb 10, 2009


Is there a reason the AWS Android app doesn't support U2F?

vanity slug
Jul 20, 2010

fluppet posted:

Is there a reason the AWS Android app doesn't support U2F?

U2F support for AWS is an afterthought at best.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
How do you guys successfully handle IAM roles for whatever process does your deployments?

I'm having a hard time striking a balance between permissiveness and the practical ability to deploy applications that are actively changing and evolving. Any least-privilege deployment role has to be constantly updated whenever we integrate a new AWS feature, and nobody's going to prioritize removing unused permissions when we stop using something, so it doesn't stay least-privilege for long.

My employer has 20+ AWS accounts to separate lines of business. I've been implementing automated deployments in an AWS account that's never had them before, so I'm the one designing all of the roles and such. I've learned that internally, every single role used for automated deployments by the 3+ groups I've talked to is wildly over-permissioned, hated by security, and everyone intends to clean them up at some point in the future, but the cleaning has never happened.

Am I missing some better way to determine what access is necessary to run a CloudFormation-based deployment? We're using the Cloud Development Kit to create our CloudFormation stacks. Applications involve a wide array of AWS products, which is what makes figuring out actual least privilege for all of them tough.

To be clear, I'm only struggling with the roles used for the deployment process itself. CDK creates least-privilege roles for each application to run as.

whats for dinner
Sep 25, 2006

IT TURN OUT METAL FOR DINNER!

Twerk from Home posted:

How do you guys successfully handle IAM roles for whatever process does your deployments?

I'm having a hard time striking a balance between permissiveness and the practical ability to deploy applications that are actively changing and evolving. Any least-privilege deployment role has to be constantly updated whenever we integrate a new AWS feature, and nobody's going to prioritize removing unused permissions when we stop using something, so it doesn't stay least-privilege for long.

My employer has 20+ AWS accounts to separate lines of business. I've been implementing automated deployments in an AWS account that's never had them before, so I'm the one designing all of the roles and such. I've learned that internally, every single role used for automated deployments by the 3+ groups I've talked to is wildly over-permissioned, hated by security, and everyone intends to clean them up at some point in the future, but the cleaning has never happened.

Am I missing some better way to determine what access is necessary to run a CloudFormation-based deployment? We're using the Cloud Development Kit to create our CloudFormation stacks. Applications involve a wide array of AWS products, which is what makes figuring out actual least privilege for all of them tough.

To be clear, I'm only struggling with the roles used for the deployment process itself. CDK creates least-privilege roles for each application to run as.

We have an Ansible playbook that we run locally against a newly created AWS account, which provisions (among other things) our infrastructure deployment role. When we change the playbook to change the IAM role, we make sure to apply it against all of our accounts. I'm sure it could be handled better with Control Tower.
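
One pattern worth considering alongside that (an assumption-laden sketch, not a drop-in fix for either setup): give the deploy role broad-ish access, but attach a permissions boundary so nothing it creates can escalate beyond the cap. All names and the account ID below are made up:

code:
import json

import boto3

iam = boto3.client("iam")

# Deploy role assumed by CloudFormation; the boundary policy (maintained
# separately by security) caps what this role can ever grant or do.
iam.create_role(
    RoleName="cfn-deploy-role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "cloudformation.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
    PermissionsBoundary="arn:aws:iam::123456789012:policy/deploy-boundary",
)
Security teams tend to like this because the boundary is the thing they review, and the deploy role's inline permissions can then evolve with less ceremony.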
