putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
I'm trying to use localstack to develop some S3 stuff and I'm stuck at the very start getting the AWS SDK to communicate with localstack. I've set the endpoint like this (using Node.js):

code:
const AWS = require("aws-sdk");

const uploadParams = {
    Bucket: "testbucket",
    Key: <my file name>,
    Body: <my file stream>,
};
const s3 = new AWS.S3({ endpoint: "http://localhost:4572" });
s3.upload(uploadParams);
It seems to be generating the endpoint URL incorrectly somewhere internally though, because I end up getting this message:

getaddrinfo ENOTFOUND testbucket.localhost
{
"service": "user-service",
"errno": -3008,
"code": "NetworkingError",
"syscall": "getaddrinfo",
"hostname": "testbucket.localhost",
"region": "us-east-1",
"retryable": true,
"time": "2020-05-28T04:56:21.583Z",
"stack": "Error: getaddrinfo ENOTFOUND testbucket.localhost\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)"
}


For some reason it's trying to use testbucket.localhost as the endpoint URL, even though I've clearly given it a different one.

Have I missed out a setting somewhere that I need to include?

I can use awslocal just fine, so I know localstack is actually up and running properly.

EDIT:

As always, I struggled with this all day and as soon as I post about it I figure out the answer. I needed to include s3ForcePathStyle: true in my S3 config to force it to use the path style URLs.
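
For anyone else who hits this, the working localstack config ended up looking roughly like this (the key and body here are just stand-ins):

code:
const AWS = require("aws-sdk");

// Force path-style URLs (http://localhost:4572/testbucket/...) instead of
// virtual-hosted-style (http://testbucket.localhost:4572/...), which is what
// was producing the ENOTFOUND error above.
const s3 = new AWS.S3({
    endpoint: "http://localhost:4572",
    s3ForcePathStyle: true,
});

s3.upload({ Bucket: "testbucket", Key: "example.txt", Body: "hello" }, (err, data) => {
    if (err) console.error(err);
    else console.log("Uploaded to", data.Location);
});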

putin is a cunt fucked around with this message at 06:18 on May 28, 2020


thotsky
Jun 7, 2005

hot to trot
I want to use API Gateway to proxy an existing non-AWS API, and to do some simple transformations on the response. I got told at some point that this would be simple to do using something called "mapping".

Anyway, looking into API Gateway I'm first pushed to check out the newly GA "HTTP API" way of doing stuff. It's easy to set up, but as far as I can see I have little to no say about what is sent to the client. It took about 15 seconds to proxy my external API, which is great, but I can't actually do much else. So, I try doing stuff the "old way" and start looking into the "REST API" method of doing this. The documentation here tells me I should be using "HTTP proxy integration" if at all possible, but it turns out this method also prevents me from transforming the response.

It seems like I am left with the "REST API with HTTP non-proxy (custom) integration" option. It kind of feels like overkill. Would I be better off using a lambda function to proxy the API and transform the response? Did I miss something along the way?

thotsky
Jun 7, 2005

hot to trot
Well, I managed to get VTL mapping templates to work. Hopefully the defaults for HTTP custom integrations are pretty good, because I left most of them alone.

I do sort of wonder how one would go about testing these templates though. Anyone have experience working with them?

Fcdts26
Mar 18, 2009
Having a weird Network Load Balancer/Elastic IP issue with Fargate. I have two Elastic IPs that are attached to EC2 instances. I want to move those two IPs to a load balancer and have it point to Fargate. I disassociated the IPs and created a Network Load Balancer with them. I then made two target groups, hooked everything up, and then created the service in ECS. As soon as I do this the containers lose all connectivity. They spin up and the logs look OK, but they never go healthy. I have a port exposed that serves the HAProxy stats page internally. I can't even hit that while the load balancer is connected. If I delete the load balancer I can immediately hit the container.

I've created two new EIPs and a new load balancer, hooked them up to the same target groups, and it works fine. It's like something is wrong with these specific Elastic IPs? I've compared the working setup and the non-working one in every menu I can think of and they are exactly the same. I've also put in a support request with AWS but I thought I would try here too.

Pile Of Garbage
May 28, 2007



Fcdts26 posted:

Having a weird Network Load Balancer/Elastic IP issue with Fargate. I have two Elastic IPs that are attached to EC2 instances. I want to move those two IPs to a load balancer and have it point to Fargate. I disassociated the IPs and created a Network Load Balancer with them. I then made two target groups, hooked everything up, and then created the service in ECS. As soon as I do this the containers lose all connectivity. They spin up and the logs look OK, but they never go healthy. I have a port exposed that serves the HAProxy stats page internally. I can't even hit that while the load balancer is connected. If I delete the load balancer I can immediately hit the container.

I've created two new EIPs and a new load balancer, hooked them up to the same target groups, and it works fine. It's like something is wrong with these specific Elastic IPs? I've compared the working setup and the non-working one in every menu I can think of and they are exactly the same. I've also put in a support request with AWS but I thought I would try here too.

IIRC you can't point NLB at Fargate, only ALB. Also what you describe is bizarre. If you want to migrate to an ELB then you shouldn't worry about the EIPs and instead just update your DNS records. Deploy your new ECS service and ELB ahead of time and then when you cut the service over just update the DNS records for it so that they resolve to the new ELB.
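
If it helps, once the new ELB is up the actual cut-over is basically a single Route 53 change, roughly like this with the Node SDK (the zone IDs and names here are made up):

code:
const AWS = require("aws-sdk");
const route53 = new AWS.Route53();

// Repoint the record at the new ELB via an alias record. All IDs/names are placeholders.
route53.changeResourceRecordSets({
    HostedZoneId: "Z1EXAMPLE",  // your hosted zone
    ChangeBatch: {
        Changes: [{
            Action: "UPSERT",
            ResourceRecordSet: {
                Name: "app.example.com",
                Type: "A",
                AliasTarget: {
                    HostedZoneId: "Z2EXAMPLEELB",  // the ELB's own canonical zone ID
                    DNSName: "my-new-elb-1234567890.us-east-1.elb.amazonaws.com",
                    EvaluateTargetHealth: false,
                },
            },
        }],
    },
}, (err) => {
    if (err) console.error(err);
    else console.log("DNS cut over to the new ELB");
});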

Fcdts26
Mar 18, 2009

Pile Of Garbage posted:

IIRC you can't point NLB at Fargate, only ALB. Also what you describe is bizarre. If you want to migrate to an ELB then you shouldn't worry about the EIPs and instead just update your DNS records. Deploy your new ECS service and ELB ahead of time and then when you cut the service over just update the DNS records for it so that they resolve to the new ELB.

We have the same setup for some of our other products that go from NLB to Fargate. We also have to keep the Elastic IPs, since we have customers that set up A records pointing at them. (Which sucks.)

I did some more work on it last night and it seems that the issue comes from switching the load balancer listeners from instance-based target groups to IP-based ones. Even after deleting the listeners and recreating them, it has issues.

I added an additional listener on another port to the NLB, forwarded it to an instance-based target group, and then switched it to the IP-based Fargate one. It went unhealthy and killed the task 4 times before it eventually did go healthy. The entire time, though, I could hit the container successfully.

If I create a new NLB with Elastic IPs and connect it to the Fargate service it seems to be fine, but it still kills the container one time.

With a single task it killed it 4 times before going healthy. I'm wondering now if that will multiply depending on the number of tasks I have running. I'm hoping I hear back from support tomorrow; I'll post an update with what they say.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
What's the best deployment mechanism for a Step Functions state machine and its underlying resources? Lambda and Batch are the primary services, but presumably we'd also want the associated IAM roles, DynamoDB table, SSM parameters, secrets, etc.

Right now we have a ton of people tossing a project together and it's working fine, but every time someone asks about the best repo/deployment method for it everyone just shrugs.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Fcdts26 posted:

I did some more work on it last night and it seems that the issue comes from switching the load balancer listeners from instance-based target groups to IP-based ones. Even after deleting the listeners and recreating them, it has issues.

NLB forwards packets differently for IP and instance target groups. With instance targets, it preserves the client IP in the packets, so the targets' security groups have to be set up to allow traffic from those client IPs.

With IP targets, the packets from the NLB always have the NLB's IP as the source address. Awkwardly, this means you can't really use security groups to control access to an NLB targeting IPs, lol.
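
For reference, the target type is just something you pick when the target group is created, and Fargate tasks can only go in an "ip" one. Roughly like this with the Node SDK (names and IDs are made up):

code:
const AWS = require("aws-sdk");
const elbv2 = new AWS.ELBv2();

// Fargate tasks register by IP, so the target group has to be TargetType "ip"
// (instance target groups only take EC2 instance IDs). Name/VpcId are placeholders.
elbv2.createTargetGroup({
    Name: "fargate-tcp-targets",
    Protocol: "TCP",
    Port: 443,
    VpcId: "vpc-0123456789abcdef0",
    TargetType: "ip",
}, (err, data) => {
    if (err) console.error(err);
    else console.log("Created", data.TargetGroups[0].TargetGroupArn);
});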

Adhemar
Jan 21, 2004

Waiter, there is a hideous beast in my soup.

PierreTheMime posted:

What’s the best deployment mechanism for a Step Functions state machine and underlying resources? Lambda, Batch are primary services, but presumably we’d also want associated IAM roles, DynamoDB table, SSM parameter store, secrets, etc.

Right now we have a ton of people tossing a project together and it’s working fine but every time someone asks the best repo/deployment method for it everyone just shrugs.

What are you using now? CloudFormation will let you manage all that infrastructure.

Docjowles
Apr 9, 2009

Yeah, CloudFormation or Terraform springs to mind. Are people just clicking around in the console to create those dependencies today? Get that into an IaC tool and version control.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
Yeah it’s a bit of a mess in that everyone is doing their own thing. At this point there’s version control for some things but not others. For a first-pass dev project it’s usable but obviously not sustainable. Is there a utility or extension for an IDE I can point at our current build and have it generate the stack or am I going to have to do the definition by hand?

Woodsy Owl
Oct 27, 2004

PierreTheMime posted:

Yeah it’s a bit of a mess in that everyone is doing their own thing. At this point there’s version control for some things but not others. For a first-pass dev project it’s usable but obviously not sustainable. Is there a utility or extension for an IDE I can point at our current build and have it generate the stack or am I going to have to do the definition by hand?

If you’ve got the resources deployed then you might be able to use CloudFormer to generate the template for you
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html

It didn’t work for our use case but it could work for you. Might be worth an hour of fiddling with.

JHVH-1
Jun 28, 2002
I think they want you to import resources into a stack now instead. They haven’t touched that cloudformer in a while.

I think you just add your stuff to an empty stack and then you can see the template and add to/modify it in designer.

22 Eargesplitten
Oct 10, 2010



I'm working on getting certified, I finished the acloudguru Certified Solutions Architect - Associate class and am now doing some lab work. I'm trying to set up a VPC with a public and private subnet to serve an e-commerce website, files stored in S3, RDS instance for the database, NAT gateway, so on, so forth.

Are there any e-commerce platforms that are really simple to set up on AWS? I was looking at CloudFormation but I didn't see any templates, it's entirely possible that I missed some because at the time I was also keeping track of the queue at work. I was using PrestaShop but the documentation for a bash-based install seems to be very limited and it really wants you to set up your database on your web server.

This site would not be going live, at least not in this format, but I want to get everything functioning like it would be so I can be sure I'm not missing anything. I'm also having problems getting connectivity between the web host and the RDS instance, but it's probably a dumb mistake I'm overlooking because it's 3:30AM here. For instance, 20 minutes ago I realized that I had set up the routing table for the public subnet for the internet gateway, but never added the NAT gateway I created to the private subnet's routing table so there was nowhere for traffic to go.

thotsky
Jun 7, 2005

hot to trot
Is a custom authorizer really the only way of doing basic auth with API Gateway? I don't want to mess with the back-end, or replace parts of its functionality; I just want to proxy it.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

22 Eargesplitten posted:

I'm working on getting certified, I finished the acloudguru Certified Solutions Architect - Associate class and am now doing some lab work. I'm trying to set up a VPC with a public and private subnet to serve an e-commerce website, files stored in S3, RDS instance for the database, NAT gateway, so on, so forth.

Are there any e-commerce platforms that are really simple to set up on AWS? I was looking at CloudFormation but I didn't see any templates, it's entirely possible that I missed some because at the time I was also keeping track of the queue at work. I was using PrestaShop but the documentation for a bash-based install seems to be very limited and it really wants you to set up your database on your web server.

This site would not be going live, at least not in this format, but I want to get everything functioning like it would be so I can be sure I'm not missing anything. I'm also having problems getting connectivity between the web host and the RDS instance, but it's probably a dumb mistake I'm overlooking because it's 3:30AM here. For instance, 20 minutes ago I realized that I had set up the routing table for the public subnet for the internet gateway, but never added the NAT gateway I created to the private subnet's routing table so there was nowhere for traffic to go.

Check the security group on the RDS instance.
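
If you want to do it the tidy way, allow 3306 from the web server's security group rather than from a CIDR. Roughly something like this with the Node SDK (the group IDs here are made up):

code:
const AWS = require("aws-sdk");
const ec2 = new AWS.EC2();

// Allow MySQL from the web server's security group instead of an IP range.
// Both group IDs are placeholders.
ec2.authorizeSecurityGroupIngress({
    GroupId: "sg-0rdsexample",  // the RDS instance's security group
    IpPermissions: [{
        IpProtocol: "tcp",
        FromPort: 3306,
        ToPort: 3306,
        UserIdGroupPairs: [{ GroupId: "sg-0webexample" }],  // the web server's security group
    }],
}, (err) => {
    if (err) console.error(err);
    else console.log("RDS now accepts 3306 from the web tier");
});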

Can't really help you on the e-commerce thing, sorry about that.

When you get comfy, you might want to start looking into Terraform for doing this stuff. It can be a bit much, but I kind of find its way of configuring NACLs/security groups easier since you can just reference other resources as the value and it fills them in for you when you apply. But maybe that's down the road. It also makes tearing everything down super easy.

I hope you're setting all of this up on a work account because those NAT gateways get expensive.

22 Eargesplitten
Oct 10, 2010



The security group on the RDS instance has an inbound rule allowing all traffic from the public subnet at this point. I had it accepting just Aurora/MySQL on port 3306 from the web server's IP followed by /32, but I read someone saying that having the /32 didn't work, so I set it to the /24 of the public subnet. And then that didn't work, so I changed it to all traffic, basically removing restrictions to figure out where it's breaking down.

It doesn't have anything inbound from the NAT gateway; does traffic coming from the public subnet need to go through the NAT gateway, or can I pick a specific machine on the public subnet? I looked up documentation on connecting to a private RDS instance through a bastion EC2 host and thought I had it set up like that, but running nmap isn't getting me anything. Maybe I should set up a dummy EC2 instance on the private subnet and see if nmap sees it, to figure out whether it's unable to communicate between subnets or it's something specifically related to the RDS instance.

Are NAT gateways not free-tier eligible? I already got an unexpected $30 bill from AWS a month or two ago after setting up Algo and forgetting to turn off the instance when I was done, so I've been trying to be careful about stopping services when they're not in use.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

22 Eargesplitten posted:

The security group on the RDS instance has an inbound rule allowing all traffic from the public subnet at this point. I had it accepting just Aurora/MySQL on port 3306 from the web server's IP followed by /32, but I read someone saying that having the /32 didn't work, so I set it to the /24 of the public subnet. And then that didn't work, so I changed it to all traffic, basically removing restrictions to figure out where it's breaking down.

It doesn't have anything inbound from the NAT gateway; does traffic coming from the public subnet need to go through the NAT gateway, or can I pick a specific machine on the public subnet? I looked up documentation on connecting to a private RDS instance through a bastion EC2 host and thought I had it set up like that, but running nmap isn't getting me anything. Maybe I should set up a dummy EC2 instance on the private subnet and see if nmap sees it, to figure out whether it's unable to communicate between subnets or it's something specifically related to the RDS instance.

Are NAT gateways not free-tier eligible? I already got an unexpected $30 bill from AWS a month or two ago after setting up Algo and forgetting to turn off the instance when I was done, so I've been trying to be careful about stopping services when they're not in use.

I believe the best option is to get rid of the gateway and run a t2.micro NAT instance:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html

22 Eargesplitten
Oct 10, 2010



Thanks, but it looks like it started working once I installed more PHP modules. Sadly the documentation on the software's site didn't work; I had to follow documentation on another site (and then another site again, because the directions on the first site installed an old version of PHP).

Now I have another question. Is it possible to create a VPC endpoint within a private subnet? I'm not seeing that as a route table option, and it's not asking for the subnet. I'm trying to set up an S3 bucket so it can only be accessed by the RDS server (and possibly the web server, if that is needed), but the only route tables I see are the main and the public. It looks like the best way to attach a bucket to the VPC is through a VPC endpoint, so that's what I was going with.
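
From what I can tell, a gateway endpoint for S3 gets associated with route tables rather than with a subnet directly, so I think the call would look roughly like this (all the IDs and the region are made up):

code:
const AWS = require("aws-sdk");
const ec2 = new AWS.EC2();

// S3 gateway endpoints attach to route tables, not subnets, so you point it at the
// route table the private subnet uses. VPC ID, route table ID, and region are placeholders.
ec2.createVpcEndpoint({
    VpcId: "vpc-0123456789abcdef0",
    ServiceName: "com.amazonaws.us-east-1.s3",
    VpcEndpointType: "Gateway",
    RouteTableIds: ["rtb-0privateexample"],
}, (err, data) => {
    if (err) console.error(err);
    else console.log("Created endpoint", data.VpcEndpoint.VpcEndpointId);
});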

22 Eargesplitten
Oct 10, 2010



I, uh, may have hosed up.

I was trying to copy a zip file over using scp and was getting permission denied. I tried a few things, but got impatient so I just used chmod to change the ~ directory to 777 (not recursive). Now whenever I try to scp or ssh to the device I'm getting an error message saying "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)". All the instances of this that I'm finding on stack overflow and whatnot seem to involve people not using the correct username. Digital Ocean has a tutorial but it requires logging directly into the machine, which I don't believe is possible in AWS.

Is there a way to fix this, or do I need to scrap the instance and start over? I was dumb and didn't take a snapshot of it.

JHVH-1
Jun 28, 2002

22 Eargesplitten posted:

I, uh, may have hosed up.

I was trying to copy a zip file over using scp and was getting permission denied. I tried a few things, but got impatient so I just used chmod to change the ~ directory to 777 (not recursive). Now whenever I try to scp or ssh to the device I'm getting an error message saying "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)". All the instances of this that I'm finding on stack overflow and whatnot seem to involve people not using the correct username. Digital Ocean has a tutorial but it requires logging directly into the machine, which I don't believe is possible in AWS.

Is there a way to fix this, or do I need to scrap the instance and start over? I was dumb and didn't take a snapshot of it.

I would check whether you have Systems Manager running; you might be able to repair it or get access to it that way.

The other option, I guess, is something like spinning up a new instance and attaching the volume to it so you can mount it and fix the permissions (or just get the data off and use the new instance). Probably a good idea to make a snapshot just in case, if it's important data or losing it would set you back.

22 Eargesplitten
Oct 10, 2010



Thanks, Systems Manager was exactly what I needed. The chmod permissions all seemed normal (guess I'm going to have to figure out what I set to 777, because it sure wasn't my home directory), so I checked the public key. Somehow it got changed so that after the key there was a space and then the key pair's name. Deleting that (after copying it) fixed it. No idea whatsoever how that got there; that's bizarre. Unless somehow my attempt to scp managed to put the name of the key pair into the public key, but I would think that would be pretty hard to do even by accident.

E: gently caress, spoke too soon. Guess I'm trying to figure out what else could be wrong. I wonder if I somehow corrupted my private key. Nope, creating an AMI of the instance and starting it with a new key pair doesn't work. Guess I will try to mount the volume on a working machine like you said, although my issue so far has been that I seem to have stumbled into a correct PHP configuration, and when I try to do it "better" (using a version of PHP that isn't EOL in 5 months, not installing PHP 5.4 as well, etc.) it doesn't work at all.

22 Eargesplitten fucked around with this message at 08:06 on Jun 4, 2020

Pile Of Garbage
May 28, 2007



What AMI did you use to spin up the instance originally? What is being written to the logs on the instance when you try to connect? Also have you checked the permissions on the .ssh folder as well as the public key and authorized_keys?

Edit: if the instance has the SSM agent on it you can probably just run the AWSSupport-ExecuteEC2Rescue document against it.

Pile Of Garbage fucked around with this message at 12:25 on Jun 4, 2020

JHVH-1
Jun 28, 2002
Looks like they have an SSM automation just for fixing SSH as well: https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-awssupport-troubleshootssh.html

If everything is 777 it won't work correctly. It needs to be something like what's shown here, or else the ssh daemon rejects it:
https://gist.github.com/grenade/6318301

22 Eargesplitten
Oct 10, 2010



Pile Of Garbage posted:

What AMI did you use to spin up the instance originally? What is being written to the logs on the instance when you try to connect? Also have you checked the permissions on the .ssh folder as well as the public key and authorized_keys?

Edit: if the instance has the SSM agent on it you can probably just run the AWSSupport-ExecuteEC2Rescue document against it.

Thanks, I'll try that just to see if it works. I used the standard Amazon Linux 2.whatever AMI initially on a t2.micro.

Nothing seems to be 777 as far as I can tell, which is weird. I ended up spinning up a different instance with WordPress because A) the documentation for the other software is garbage, B) the other software really doesn't seem like it's intended to be set up through a CLI, and C) the use case for the guy I'm building it for is more blogging than e-commerce at this point. And if/when e-commerce does become part of it, it's only going to be a few items, so a plugin should do well enough.

22 Eargesplitten
Oct 10, 2010



Hate to double post, but does anyone have any recommendations for practice exams for the Certified Solutions Architect - Associate? I think I'm ready to see where I stand and what I need to study more.

sinequanon01
Oct 20, 2017

22 Eargesplitten posted:

Hate to double post, but does anyone have any recommendations for practice exams for the Certified Solutions Architect - Associate? I think I'm ready to see where I stand and what I need to study more.

Jon Bonso on Udemy

22 Eargesplitten
Oct 10, 2010



Thanks, I'll pick those up. I still need to see if my company will pick up the bill for practice exams and the actual test. I'm not super-optimistic given that they seem to be cutting as much extra stuff as they can.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I'm working on a project to convert our AWS accounts to use SSO through Okta, but I'm having some trouble finding answers to some of our concerns. Mainly, all of the documentation I'm finding is for setting up a brand new AWS account to use SSO. We already have a bunch of IAM users defined across a half dozen AWS accounts. If we enable SSO is it going to break anything for those users or will they still be able to access their existing IAM accounts? Additionally, we currently map our IAM accounts to AD names so if IAM accounts are preserved will we need to destroy our IAM users in order to migrate them to SSO?

fluppet
Feb 10, 2009

22 Eargesplitten posted:

Hate to double post, but does anyone have any recommendations for practice exams for the Certified Solutions Architect - Associate? I think I'm ready to see where I stand and what I need to study more.

I'm working my way through the A Cloud Guru course; it did the job for the DevOps Professional exam.

Arzakon
Nov 24, 2002

"I hereby retire from Mafia"
Please turbo me if you catch me in a game.

deedee megadoodoo posted:

I'm working on a project to convert our AWS accounts to use SSO through Okta, but I'm having some trouble finding answers to some of our concerns. Mainly, all of the documentation I'm finding is for setting up a brand new AWS account to use SSO. We already have a bunch of IAM users defined across a half dozen AWS accounts. If we enable SSO is it going to break anything for those users or will they still be able to access their existing IAM accounts? Additionally, we currently map our IAM accounts to AD names so if IAM accounts are preserved will we need to destroy our IAM users in order to migrate them to SSO?

Okta, and any other SSO provider, will be assuming a role or issuing a console link that has a policy attached via STS API calls. These are distinct from IAM Users, and the IAM users will continue to work unless explicitly removed. I haven't touched AWS SSO in a while so if you are integrating that with Okta instead of just using Okta/IAM it might be different, but probably isn't because AWS SSO users are distinct from IAM users.

Also you should get rid of non-break-glass IAM users once you are happy with the SSO setup because *insert boring best practices lecture*

Nomnom Cookie
Aug 30, 2009



From inside the AWS account, Okta integration just looks like assumed roles. User clicks a link, receives a session in the IAM role you specify. It works alongside IAM users more or less the same way that EC2 instances assuming roles do. AWS is not going to care that you have some IAM users and some IAM roles.
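
Under the hood it's just STS federation; conceptually it boils down to something like this (the ARNs are made up, and the SAML assertion is whatever comes back from the Okta login flow):

code:
const AWS = require("aws-sdk");
const sts = new AWS.STS();

// What the SSO provider effectively does on the user's behalf: trade a SAML assertion
// for temporary credentials in a role. No IAM user is involved. ARNs are placeholders.
const samlAssertion = "...";  // base64-encoded SAML response from the Okta login flow

sts.assumeRoleWithSAML({
    RoleArn: "arn:aws:iam::123456789012:role/OktaDeveloperRole",
    PrincipalArn: "arn:aws:iam::123456789012:saml-provider/Okta",
    SAMLAssertion: samlAssertion,
}, (err, data) => {
    if (err) console.error(err);
    else console.log("Temporary credentials expire at", data.Credentials.Expiration);
});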

Scrapez
Feb 27, 2004

JHVH-1 posted:

I think they want you to import resources into a stack now instead. They haven’t touched that cloudformer in a while.

I think you just add your stuff to an empty stack and then you can see the template and add to/modify it in designer.

CloudFormer is still useful for giving you the bones of a template that you can then make changes to, rather than creating one from scratch. I really don't understand why they bailed on developing CloudFormer, as it would be very helpful if it worked properly.

fluppet
Feb 10, 2009
I'm trying to get a list of EBS snapshots that aren't from a list of policy IDs.

code:
aws ec2 describe-snapshots --owner-ids self --query "Snapshots[?Tags[?Key=='aws:dlm:lifecycle-policy-id' && Value!='policy-foo']].[SnapshotId,Description]" 
but trying to exclude multiple policies fails, and my Google skills can't turn up the correct JMESPath syntax to query it properly

any suggestions?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Arzakon posted:

Also you should get rid of non-break-glass IAM users once you are happy with the SSO setup because *insert boring best practices lecture*

If you're asking about any straggler IAM users, it's already not a big deal for compliance for your org. However, I will say that if your company is expected to grow a lot, be acquired, etc., I would recommend against having any AWS IAM users at all, except for automation users that you can pretty easily rotate on demand.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
What’s the simplest way to variabilize your account ID for SAM template ARNs? I got it working by setting a parameter in SSM prior to deploying and referencing that but there’s got to be a less awkward way to do that.

JHVH-1
Jun 28, 2002

PierreTheMime posted:

What’s the simplest way to variabilize your account ID for SAM template ARNs? I got it working by setting a parameter in SSM prior to deploying and referencing that but there’s got to be a less awkward way to do that.

I think it supports some kind of templating similar to CloudFormation (not surprising since it generates it) with pseudo variables

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-template-list.html

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

JHVH-1 posted:

I think it supports some kind of templating similar to CloudFormation (not surprising since it generates it) with pseudo variables

https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-template-list.html

Thanks, for whatever reason my searching wasn't leading me in the right direction, but that article did the trick.

!Sub 'arn:aws:iam::${AWS::AccountId}:role/rolename' is the correct format.

22 Eargesplitten
Oct 10, 2010



drat, 56% on my first AWS practice exam. I'm scheduled to take the exam on the 16th; does that seem like enough time, given that I've got a few hours free to study every day (at least)? There were a good 5 or so services I didn't recognize on the exam, so I think I need to start there. Not sure if the ACloudGuru course hadn't been completely updated for 2020 when I went through it before the corona apathy kicked in, or if I just forgot about some of the products. I got 74% on security, 68% on resiliency, but 33% on cost-efficiency and ~45% on high performance. I'm thinking my goal for my last couple of practice exams should be 80%, since it's 72% to pass, so if I have an off day I still scrape by.


thotsky
Jun 7, 2005

hot to trot
I've only taken the associate developer one; I crammed for two days, mostly hovering just below 72% in the test exams and then passed with 88% on the real thing. I got a shitton of Kinesis questions, which was never mentioned in the test exams or the course I took. I got lucky, I guess.
