sinequanon01
Oct 20, 2017

Agrikk posted:

Why yell at the TAM? They haven’t caused the issue, did not write the patch nor cause the delay in deployment.

If the TAM gave you bad advice or information, that's one thing, but if it's services that are failing, you should have the TAM escalate on your behalf. Try this:

“Hey TAM-

We are really pissed/upset/angry/irritated at our experience with these issues. Please tell me what you are going to do to make sure service leadership understands how angry we are.”

It doesn’t blame the TAM for the issues, but it calls the TAM to task for owning your issues and escalating on your behalf. It also uses the words “angry” and “upset”, which are trigger words that will engage the account manager and the SA as well, bringing your whole account team to bear on your behalf.

This. Getting angry with the TAM does nothing. Expressing your frustration and production pain like an adult WILL get your issue chased all the way up to service leadership if need be.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


When the TAM sends you a response you don't like just respond with "?"

Fcdts26
Mar 18, 2009
Anyone have experience with getting help with something like the CDK from business support?

sausage king of Chicago
Jun 13, 2001
edit n/m ignore me

sausage king of Chicago fucked around with this message at 21:41 on Oct 21, 2020

JHVH-1
Jun 28, 2002

Fcdts26 posted:

Anyone have experience with getting help with something like the CDK from business support?

Never tried before. Have you hit up the Gitter chat yet? https://gitter.im/awslabs/aws-cdk

Probably not business level support but the people working on the project post there and I’ve gotten help before.

thotsky
Jun 7, 2005

hot to trot
Maybe this is a python issue rather than AWS centric, but I'm having some trouble when working locally with Lambda Layers on my SAM/Python Lambda projects.
I put my layer code in "/layers/python/example_layer.py" while my lambdas live in "/lambdas/example_lambda/example_lambda.py", and I import the layer using "import example_layer" in my lambda.
SAM deploys both lambdas and the layer just fine, and it seems to work fine in the cloud, but my local IDE (VS Code) and pylint does not like the import statement (unable to import example_layer).

I figure that maybe the layer code needs to be below the lambda code in my folder structure for it to be found, but that would sort of defeat the point of a shared layer, especially since SAM appears to require that I keep the various lambdas in separate folders if I don't want them bundled together.
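One way out (a sketch, assuming the folder layout described above; `layers/python` and `example_layer` are the names from the post) is to leave the deployed layout alone and just teach the local tooling where the layer source lives:

```python
import os
import sys


def layer_source_path(project_root):
    """Local directory that holds layer modules, per the layout above."""
    return os.path.join(project_root, "layers", "python")


def add_layer_to_path(project_root):
    """Local-only shim: lets pylint/pytest/VS Code resolve `import example_layer`
    without changing what SAM packages for deployment."""
    path = layer_source_path(project_root)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path
```

Alternatively, point the IDE there directly (e.g. `python.analysis.extraPaths` in VS Code settings, or pylint's `init-hook`), which avoids touching runtime code at all.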

JHVH-1
Jun 28, 2002

thotsky posted:

Maybe this is a python issue rather than AWS centric, but I'm having some trouble when working locally with Lambda Layers on my SAM/Python Lambda projects.
I put my layer code in "/layers/python/example_layer.py" while my lambdas live in "/lambdas/example_lambda/example_lambda.py", and I import the layer using "import example_layer" in my lambda.
SAM deploys both lambdas and the layer just fine, and it seems to work fine in the cloud, but my local IDE (VS Code) and pylint does not like the import statement (unable to import example_layer).

I figure that maybe the layer code needs to be below the lambda code in my folder structure for it to be found, but that would sort of defeat the point of a shared layer, especially since SAM appears to require that I keep the various lambdas in separate folders if I don't want them bundled together.

Never done it in SAM so maybe it has some way to manage it, but when I was making a basic Twitter bot I had to do ‘pip install twython -t .’ to install it into the same directory as my index.py

They just added a way to use Docker containers though, which will get rid of a lot of these annoyances. I need to test that out.

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.
Anyone know if there is a way to add users to a group for a specific time period?

We need to add users temporarily to groups for a week or so, and I'm wondering if there is a way to automate revoking access when their time is up.

12 rats tied together
Sep 7, 2006

Using AWS native features only, 2 options immediately spring to mind:

1: Use whatever you're creating the user and group membership with to also create a scheduled lambda function that runs 1 week after creation which does the user cleanup.
2: Instead of putting a user in a group, create an IAM Role, and apply a policy to the user that allows sts:AssumeRole only between now and one week from now.
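Option 2 can be sketched as below (assumptions: the role ARN, user name, and policy name are all hypothetical; boto3 is imported lazily so the policy builder can be exercised without the SDK installed):

```python
import datetime
import json


def build_temp_assume_role_policy(role_arn, days=7):
    """Time-boxed sts:AssumeRole grant (option 2 above): the permission
    simply stops working after the cutoff, no cleanup job required."""
    cutoff = datetime.datetime.utcnow() + datetime.timedelta(days=days)
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": role_arn,
            "Condition": {
                "DateLessThan": {"aws:CurrentTime": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")}
            },
        }],
    }


def attach_to_user(user_name, role_arn, days=7):
    """Apply the grant as an inline user policy (real AWS call)."""
    import boto3
    boto3.client("iam").put_user_policy(
        UserName=user_name,
        PolicyName="temp-assume-role",
        PolicyDocument=json.dumps(build_temp_assume_role_policy(role_arn, days)),
    )
```

The nice property versus option 1 is that there is no moving part to fail: if the cleanup never runs, the access is still dead after a week.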

CyberPingu
Sep 15, 2013


If you're not striving to improve, you'll end up going backwards.

12 rats tied together posted:

Using AWS native features only, 2 options immediately spring to mind:

1: Use whatever you're creating the user and group membership with to also create a scheduled lambda function that runs 1 week after creation which does the user cleanup.
2: Instead of putting a user in a group, create an IAM Role, and apply a policy to the user that allows sts:AssumeRole only between now and one week from now.

So currently we are creating the users manually and adding them to the groups manually too. The users need to persist after removal from the group.

Hmm, the IAM role might work. As it's granting access to dev/production environments, I'll need to check how that would work.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Is anyone doing SSL certificate rotation with HSM?

We have a client that wants it implemented due to security key management requirements.

I can find a few white papers but I was wondering how people are doing this? Also can a TAM get us credits to be able to test all of this out?

I'm not too concerned about the end points getting the new certs as that's all scriptable. All I need is the CA public cert, the cert itself, and the private key.

From what I understand with the PCM the standard way to do this is to have a lambda script that creates a new cert and plops it into Secrets and then you can get it out of there in your EC2 instances? I can't really find anything on this with HSM involved and now I'm seeing that HSM itself has PKI features?

Super-NintendoUser
Jan 16, 2004

COWABUNGERDER COMPADRES
Soiled Meat

Matt Zerella posted:

Is anyone doing SSL certificate rotation with HSM?

We have a client that wants it implemented due to security key management requirements.

I can find a few white papers but I was wondering how people are doing this? Also can a TAM get us credits to be able to test all of this out?

I'm not too concerned about the end points getting the new certs as that's all scriptable. All I need is the CA public cert, the cert itself, and the private key.

From what I understand with the PCM the standard way to do this is to have a lambda script that creates a new cert and plops it into Secrets and then you can get it out of there in your EC2 instances? I can't really find anything on this with HSM involved and now I'm seeing that HSM itself has PKI features?

Hey! I'm trying to solve the same problem! I figured I'd come to the AWS Thread as well and get some input. Let me know how it goes.

Cancelbot
Nov 22, 2006

Canceling spam since 1928

PKI/HSM isn't my speciality so I can't comment on that. However, you get credits through an Account Manager: if you have a TAM they can refer you to the AM, and if you're under business/developer support you should have an AM allocated who should be able to help there.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
There are no transfer fees on S3 upload, and no minimum storage duration (or early deletion fees) on standard S3 storage, right?

If I stashed about 85TB up there for a couple weeks while doing a risky operation, and then delete it without reading it I'd only be out the storage cost, right?

I understand that retrieving it would be a couple thousand if I had to.
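As a back-of-envelope check (assuming the us-east-1 S3 Standard list price of $0.023/GB-month; prices vary by region and change over time):

```python
# Rough cost of parking 85 TB in S3 Standard for two weeks.
# $0.023/GB-month is the assumed us-east-1 list price; verify current pricing.
TB = 1024  # GB per TB (S3 bills per GB)
storage_gb = 85 * TB

storage_per_month = storage_gb * 0.023      # ~ $2,002/month
two_weeks = storage_per_month * 14 / 30     # ~ $934 for the two-week window

# Retrieval cost depends on the destination: roughly free to EC2 in the same
# region, but tiered internet egress rates apply if you pull it back out.
print(f"two weeks of storage: ${two_weeks:,.0f}")
```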

Thanks Ants
May 21, 2004

#essereFerrari


Correct, but consider the amount of time it would take you to get 85TB back if you had to.

Would a storage-only Snowball Edge fit your requirements? They are pretty cheap to hire and then you’ve got 100TB connected at up to 100Gbit to play with.

Thanks Ants fucked around with this message at 01:53 on Jan 16, 2021

Mulloy
Jan 3, 2005

I am your best friend's wife's sword student's current roommate.
So I've been dicking around trying to get a list of EC2 host names / FQDNs but so far I've just been able to get a global list of tags which were implemented poorly. After various forays into googling I've been unsuccessful. Any tips? Not every host is external so I can't really go off DNS, but I'm not seeing like an EC2 instance value of host name or anything to query for.

Pile Of Garbage
May 28, 2007



Mulloy posted:

So I've been dicking around trying to get a list of EC2 host names / FQDNs but so far I've just been able to get a global list of tags which were implemented poorly. After various forays into googling I've been unsuccessful. Any tips? Not every host is external so I can't really go off DNS, but I'm not seeing like an EC2 instance value of host name or anything to query for.

If your instances are registering with SSM (And they should be, it's free for up to 10k instances IIRC) then that information should be exposed via Fleet Manager. Failing that, unless they're actively registering DNS records somewhere you're out of luck. Do you happen to use any other endpoint/config management system that you could query (e.g. Puppet, Ansible, SCCM, etc.)?
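The SSM route can be scripted too. A sketch (assumes boto3 and instances registered with SSM; `ComputerName` is the OS-level hostname the agent reports, which the plain EC2 API doesn't expose):

```python
def hostnames_from_ssm_pages(pages):
    """Pull (instance ID, OS hostname) pairs out of DescribeInstanceInformation
    result pages."""
    rows = []
    for page in pages:
        for info in page.get("InstanceInformationList", []):
            rows.append((info["InstanceId"], info.get("ComputerName", "")))
    return rows


def list_ssm_hostnames():
    """Real call against SSM (lazy boto3 import keeps the helper above
    testable without the SDK)."""
    import boto3
    pages = boto3.client("ssm").get_paginator("describe_instance_information").paginate()
    return hostnames_from_ssm_pages(pages)
```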


Amazon Linux 2 question: can anyone explain to me, or point me in the direction of a page that clearly outlines, the differences between the standard 4.14 kernel and the "next generation" 5.4 kernel (installed via sudo amazon-linux-extras install kernel-ng)? All I can find is a What's New entry from 2019 announcing the addition of the kernel to the Extras channel. The only thing I do know is that the 5.4 kernel doesn't support live patching like the 4.14 one does.

I'm mainly just trying to work out whether there are tangible benefits to using the 5.4 kernel or whether it's just one of those "only install it if you need it" situations.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Pile Of Garbage posted:

Amazon Linux 2 question: can anyone explain to me, or point me in the direction of a page that clearly outlines, the differences between the standard 4.14 kernel and the "next generation" 5.4 kernel (installed via sudo amazon-linux-extras install kernel-ng)?

Newer kernel versions have some new features (io_uring comes immediately to mind because I'm wildly excited about its potential... someday), and often have slight performance improvements. (Sometimes large ones in niche areas.) That's all.

Note: the -ng kernel has obviously already gone from 4.19 to 5.4 once. I suspect we'll see it jump to 5.10 this summer.

So on the one hand, maybe slightly better performance (or niche features if you know of them), and on the other hand, you might see major version jumps in the future. If you've got a good system for validating patches before deploying to production servers, then maybe it's worthwhile to go -ng. If not, you probably want to be more conservative with kernel upgrades.

Pile Of Garbage
May 28, 2007



crazypenguin posted:

Newer kernel versions have some new features (io_uring comes immediately to mind because I'm wildly excited about its potential... someday), and often have slight performance improvements. (Sometimes large ones in niche areas.) That's all.

Note: the -ng kernel has obviously already gone from 4.19 to 5.4 once. I suspect we'll see it jump to 5.10 this summer.

So on the one hand, maybe slightly better performance (or niche features if you know of them), and on the other hand, you might see major version jumps in the future. If you've got a good system for validating patches before deploying to production servers, then maybe it's worthwhile to go -ng. If not, you probably want to be more conservative with kernel upgrades.

Cheers, thanks for the reply. I'll probably end up evaluating it on a case-by-case basis. The absence of Kernel Live Patching is kind of a deal-breaker for workloads that aren't highly available (either due to customer intransigence or dumb architectures). At least with EC2 Image Builder I can push out AMIs for one or the other.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Having a weird issue and hoping someone can point me in the right direction. We have some old IAM credentials that we were using to send email via SES. We've since updated all of our apps to a different process but something is still sending email. We have no idea what it is. CloudTrail doesn't log SES:Send* events so I can't figure out what the hell is using it. Any ideas to help track down whatever is using these credentials?

Pile Of Garbage
May 28, 2007



deedee megadoodoo posted:

Having a weird issue and hoping someone can point me in the right direction. We have some old IAM credentials that we were using to send email via SES. We've since updated all of our apps to a different process but something is still sending email. We have no idea what it is. CloudTrail doesn't log SES:Send* events so I can't figure out what the hell is using it. Any ideas to help track down whatever is using these credentials?

Are the messages being sent via the SES API or via SMTP relay using SES (Are they definitely coming from SES or from SNS)? Also is there no metadata in the e-mail headers to narrow it down?

Edit: also, how many IAM users do you have configured? Surely there aren't so many that you can't weed them out based on last-use time?

Pile Of Garbage fucked around with this message at 17:46 on Apr 5, 2021

vanity slug
Jul 20, 2010

deedee megadoodoo posted:

Having a weird issue and hoping someone can point me in the right direction. We have some old IAM credentials that we were using to send email via SES. We've since updated all of our apps to a different process but something is still sending email. We have no idea what it is. CloudTrail doesn't log SES:Send* events so I can't figure out what the hell is using it. Any ideas to help track down whatever is using these credentials?

Make the access key inactive, see what breaks.
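The upside of deactivation over deletion is that it's reversible. A sketch (the client is injected so the call can be dry-run; user name and key ID are hypothetical):

```python
def set_access_key_status(iam_client, user_name, access_key_id, status="Inactive"):
    """Flip a key between Inactive and Active. Unlike deleting the key, this
    can be undone immediately if something important breaks."""
    iam_client.update_access_key(
        UserName=user_name, AccessKeyId=access_key_id, Status=status
    )

# Real usage (hypothetical names):
#   import boto3
#   set_access_key_status(boto3.client("iam"), "old-ses-user", "AKIAEXAMPLE")
```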

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Pile Of Garbage posted:

Are the messages being sent via the SES API or via SMTP relay using SES (Are they definitely coming from SES or from SNS)? Also is there no metadata in the e-mail headers to narrow it down?

Edit: also, how many IAM users do you have configured? Surely there aren't so many that you can't weed them out based on last-use time?

It's using the SES SMTP endpoint. And it's just one user. We can see in the IAM console that something is using the credentials but we can't figure out what.

edit - I believe you are interpreting the problem as "we can see email being generated" when in fact the problem is "we can see the credentials are being used". All we know is that the credentials are being used to send email (from Access Advisor: Amazon SES AmazonSesSendingAccess Today). We can't figure out what is actually being sent or where it is coming from.

Jeoh posted:

Make the access key inactive, see what breaks.

This is my plan if all else fails. The problem is that it's a production system that handles a lot of money so I'd like to not break something without exhausting all other options.

deedee megadoodoo fucked around with this message at 18:14 on Apr 5, 2021

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The access key used with some metadata of the login session will show up in Cloudtrail as well.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


necrobobsledder posted:

The access key used with some metadata of the login session will show up in Cloudtrail as well.

I believe this is only the case if you are using a configuration set. Otherwise the access key doesn't even show up in CloudTrail.

Pile Of Garbage
May 28, 2007



deedee megadoodoo posted:

It's using the SES SMTP endpoint. And it's just one user. We can see in the IAM console that something is using the credentials but we can't figure out what.

edit - I believe you are interpreting the problem as "we can see email being generated" when in fact the problem is "we can see the credentials are being used". All we know is that the credentials are being used to send email (from Access Advisor: Amazon SES AmazonSesSendingAccess Today). We can't figure out what is actually being sent or where it is coming from.

Ah my bad! Aside from raising a support case with Amazon the only other option would be checking firewall logs to spot the SMTP traffic which may be close to impossible depending on your environment.

Failing that:

Jeoh posted:

Make the access key inactive, see what breaks.

JHVH-1
Jun 28, 2002
I just had to do that a few weeks ago. Had an old access key for SES that they were going to shut off because of updates. It said when the key was last used, but it’s not something they make available in cloudtrail like an API call. I just shut it off after replacing as many places I could remember it might be used with the new key.

Thanks Ants
May 21, 2004

#essereFerrari


Is there any way to get better logs out of SES when it's being used as an SMTP relay? Something like source IP, access key/IAM user, from and to address.

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Thanks Ants posted:

Is there any way to get better logs out of SES when it's being used as an SMTP relay? Something like source IP, access key/IAM user, from and to address.

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-using-event-publishing.html

You can set up an smtp configuration set and specify it when you send an email.
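With the SMTP interface, the configuration set is selected per message via the `X-SES-CONFIGURATION-SET` header. A sketch (the configuration set name, addresses, and SES SMTP host/region are all assumptions):

```python
from email.message import EmailMessage


def build_tracked_message(config_set, sender, recipient, subject, body):
    """SES's SMTP interface reads the configuration set from this header,
    which is what routes send/bounce/complaint events to event publishing."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg["X-SES-CONFIGURATION-SET"] = config_set
    msg.set_content(body)
    return msg


def send_via_ses(msg, smtp_user, smtp_password, host="email-smtp.us-east-1.amazonaws.com"):
    """Real send over the SES SMTP endpoint (host/region is an assumption)."""
    import smtplib
    with smtplib.SMTP(host, 587) as server:
        server.starttls()
        server.login(smtp_user, smtp_password)
        server.send_message(msg)
```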

JHVH-1
Jun 28, 2002

deedee megadoodoo posted:

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-using-event-publishing.html

You can set up an smtp configuration set and specify it when you send an email.

I should probably set this up for the future. Every 6 months or so we get some unwarranted spam complaints that fall through the cracks and put our reputation level over the edge.

I manage the external public websites but the domain is shared with our internal IT team. They have it cluttered up with really dumb stuff like a service for managing email signatures, and this pointless phishing test service which sends out spoofed phishing mails which wouldn’t even pass through our o365 if they were real.

Also I hate email.

Scrapez
Feb 27, 2004

Question related to a UDP network load balancer: is there a way to stop the flow of new traffic to a target immediately, without having to wait for the health check to time out?

The use case is SIP voice traffic. If we actively want to do maintenance on one of the targets, we can't just stop listening on port 5060 because the load balancer will continue to send calls to it for the duration of the health check. Is there a way to tell the load balancer immediately that a target is down?

Edit: I guess I can just deregister the target and then register it after maintenance is complete but does the deregistration process stop the flow of traffic to the target immediately?

Scrapez fucked around with this message at 21:53 on Apr 14, 2021

Thanks Ants
May 21, 2004

#essereFerrari


How are the health checks done? Can you publish another service on the SIP endpoints and set the health checks to require both to return as healthy, then just shut the non-5060 service down when you want the load balancer to drain the endpoint?

SurgicalOntologist
Jun 17, 2004

Pretty specific question but does anyone know if one of the media services supports this use case? Frankly it is a bit of a mess deciphering all the different services available.

Basically, we are generating 2-minute-long mp4s every 2 minutes and want to send these to a pseudo-live stream. Ideally this would also get saved as VOD for the future (we already have VOD working by concatenating the files locally then sending to MediaConvert, but want to switch to an as-soon-as-each-chunk-is-ready model).

The best I can parse the docs so far is that the closest service is MediaLive. It can turn one mp4 into a stream but for multiple files I can't find any relevant use case. I'm wondering if I'm missing something or we need to locally convert the files into RTP pushes or another supported ingest format.

vanity slug
Jul 20, 2010

Does Kinesis Video Streams help?

SurgicalOntologist
Jun 17, 2004

I'm not sure.... it sounds like MediaLive is closer to what we need, but it's hard to tell. The FAQ question on what the difference is just repeats the stock description of each.

For context, this video is the end result of our CV/ML pipeline, for which we currently don't use any AWS services. It sounds like Kinesis could help with something earlier in our pipeline. Distributing the end result, itself a video, as if it were a live stream seems closer to MediaLive.

Edit: Oh, I think what we need is "input switching" in MediaLive. Let's see.

SurgicalOntologist fucked around with this message at 15:57 on Apr 15, 2021

Plank Walker
Aug 11, 2005
I'm looking to switch to storing some application-required API keys in AWS Secrets Manager, versus the previous method of storing them as environment variables and then also having each developer manually add them to a global, non-source-controlled config file. My question is: what are the best practices for allowing the application to retrieve these keys when it's running locally on a developer machine?

I'm using the AWS SDK for .NET which grabs AWS credentials from the user/.aws folder, and I have my application configured to look for a "my app" named profile there. Should I be passing the same AWS access key and secret access key to each developer? Or should I create separate IAM users for each developer, add the Secrets Manager access role, and have them add their own access key and secret key? Or am I missing some third, easier option?

Just-In-Timeberlake
Aug 18, 2003

Plank Walker posted:

I'm looking to switch to storing some application-required API keys in AWS Secrets Manager, versus the previous method of storing them as environment variables and then also having each developer manually add them to a global, non-source-controlled config file. My question is: what are the best practices for allowing the application to retrieve these keys when it's running locally on a developer machine?

I'm using the AWS SDK for .NET which grabs AWS credentials from the user/.aws folder, and I have my application configured to look for a "my app" named profile there. Should I be passing the same AWS access key and secret access key to each developer? Or should I create separate IAM users for each developer, add the Secrets Manager access role, and have them add their own access key and secret key? Or am I missing some third, easier option?

I create IAMs for "applications" and then add those ARNs to the permissions for any secrets the application needs

code:
{
  "Version" : "2012-10-17",
  "Statement" : [ {
    "Effect" : "Deny",
    "Principal" : "*",
    "Action" : "secretsmanager:GetSecretValue",
    "Resource" : "arn of secret",
    "Condition" : {
      "StringNotEquals" : {
        "aws:PrincipalArn" : ["arn of IAM account 1", "arn of IAM account 2", "etc"]
      }
    }
  } ]
}
In my case, for say database access, each application IAM gets its own set of credentials and object permissions on the database, which are stored in a secret. That way, if the AWSKey and AWSKeyId used to access the secret are compromised, it's easy to revoke the permission/change the access keys, and since that application has the bare minimum level of access needed, (hopefully) the damage is minimal. This also makes it so that if you do have to revoke privileges, only one application is affected instead of all of them.

Might be overkill, but we had some keys compromised that had way too much access that caused all sorts of headaches so I locked poo poo down because I don't want to deal with it again.
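On the retrieval side, one common pattern (a sketch; the secret ID and the "my-app" profile name are hypothetical, and boto3 is imported lazily so the parsing helper stays testable without the SDK):

```python
import json


def parse_secret(secret_string):
    """SecretString is typically a JSON blob of key/value pairs."""
    return json.loads(secret_string)


def get_app_secret(secret_id, profile_name="my-app"):
    """Fetch and parse a secret using a named profile from ~/.aws, so each
    developer uses their own credentials rather than a shared key."""
    import boto3
    client = boto3.Session(profile_name=profile_name).client("secretsmanager")
    return parse_secret(client.get_secret_value(SecretId=secret_id)["SecretString"])
```

This leans toward the "separate IAM users per developer" option from the question above: each developer's own profile gets the Secrets Manager permission, and nothing shared ends up in config files.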

Scrapez
Feb 27, 2004

Thanks Ants posted:

How are the health checks done? Can you publish another service on the SIP endpoints and set the health checks to require both to return as healthy, then just shut the non-5060 service down when you want the load balancer to drain the endpoint?

This seems like the appropriate method but will require a code change in the target server software. Right now the health check is done on TCP port 5060, and the only way to stop listening on the port would kill calls as well. I did learn that deregistering a target works: it stops the flow of new traffic to the target without dropping calls. The only issue is that re-registering that target when you want to bring it back into routing takes several minutes.
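The deregistration step can be scripted for maintenance windows. A sketch (client injected so it can be dry-run; the target group ARN and instance ID in the usage note are hypothetical):

```python
def drain_target(elbv2_client, target_group_arn, instance_id, port=5060):
    """Deregister a target so the NLB stops sending new flows to it; existing
    connections are allowed to drain per the target group's deregistration delay."""
    target = {"Id": instance_id, "Port": port}
    elbv2_client.deregister_targets(TargetGroupArn=target_group_arn, Targets=[target])
    return target

# Real usage (hypothetical identifiers):
#   import boto3
#   elbv2 = boto3.client("elbv2")
#   tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/sip/0f1e2d3c4b5a6978"
#   t = drain_target(elbv2, tg, "i-0123456789abcdef0")
#   elbv2.get_waiter("target_deregistered").wait(TargetGroupArn=tg, Targets=[t])
```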

Vanadium
Jan 8, 2005

SurgicalOntologist posted:

Pretty specific question but does anyone know if one of the media services supports this use case? Frankly it is a bit of a mess deciphering all the different services available.

Basically, we are generating 2-minute long mp4s every 2 minutes and want to send this to a pseudo-live stream. Ideally this would also get saved as VOD for the future (we already have VOD working with concatenating the files locally then sending to MediaConvert, but want to switch to a as-soon-as-each-chunk-is-ready model).

The best I can parse the docs so far is that the closest service is MediaLive. It can turn one mp4 into a stream but for multiple files I can't find any relevant use case. I'm wondering if I'm missing something or we need to locally convert the files into RTP pushes or another supported ingest format.

What are you looking to do with the resulting livestream? Is someone gonna watch it in real time? How long is it gonna be, is it 2 minute chunks forever and ever?

Amazon Interactive Video Service (twitch-as-a-service) has "live streams" and "VOD" but I'm not sure if piping an indefinite number of mp4 files into an RTMP stream is gonna be very convenient.

SurgicalOntologist
Jun 17, 2004

Vanadium posted:

What are you looking to do with the resulting livestream? Is someone gonna watch it in real time? How long is it gonna be, is it 2 minute chunks forever and ever?

Amazon Interactive Video Service (twitch-as-a-service) has "live streams" and "VOD" but I'm not sure if piping an indefinite number of mp4 files into an RTMP stream is gonna be very convenient.

Looks like we'd still have to create the RTMP stream to use Amazon Interactive Video Service. So far I think MediaLive with "follow" transitions between files is the way to go. Hopefully it's seamless enough -- it seems the main use cases are more about switching to a pre-recorded message or something where the transition doesn't need to be 100% seamless.

Anyways, yeah, people will watch it as fast as we can generate the stream, but it's not a massive operation, at most 20 viewers per stream. Most events are 2-3 hours and we have 2-3 per day.
