Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

dividertabs posted:

DynamoDB on-demand pricing. They don't tell you this in their documentation* but it still throttles. It appears to just be a wrapper hiding Dynamo's autoscaling feature plus a different billing scheme.

*To rant: this kind of marketing-focused (rather than technical) documentation is the main reason I roll my eyes every time I hear someone in AWS mention "customer obsession"

Mildly triggered AWS engineer here (not from the DDB team). Can you provide more detail on the circumstances under which you got throttled on your on-demand table? It may throttle for a short time before it scales up, but once scaled it should subsequently be able to handle the same load. You can shortcut the initial throttling by setting an appropriate provisioned capacity on the table before switching it to on-demand.
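
If it helps, the shortcut looks roughly like this in boto3 (just a sketch; the table name and capacity numbers are made up, and IIRC you can only switch billing modes once every 24 hours):
code:
import boto3

ddb = boto3.client("dynamodb")
TABLE = "my-table"  # made-up table name

# Provision the table at (or above) the peak you expect...
ddb.update_table(
    TableName=TABLE,
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 40000, "WriteCapacityUnits": 40000},
)
ddb.get_waiter("table_exists").wait(TableName=TABLE)  # waits until ACTIVE

# ...then flip it to on-demand. The partitions created for the provisioned
# peak stick around, so the table can absorb that load without throttling.
ddb.update_table(TableName=TABLE, BillingMode="PAY_PER_REQUEST")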

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

Nomnom Cookie posted:

Athena is managed presto and totally unsuitable for anything.

Fixed.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

dividertabs posted:

*To rant: this kind of marketing-focused (rather than technical) documentation is the main reason I roll my eyes every time I hear someone in AWS mention "customer obsession"

Triggered. I take this as an affront: the poo poo that my org does, the calisthenics we go through to bend over backwards for our customers, is insane. We lobby for feature requests on existing services. We advocate for new services. We take the blame when a service falls short. We issue refunds when things go sideways. We inspect your poo poo to make sure it’s well-architected. We empathize when you ignore our advice and poo poo falls over and you blame us.

To base “customer obsession” on documentation is ridiculous. Yes, our documentation could be better and is often out of date or incomplete. How’s yours?

But to judge our customer obsession on a generalized weakness of the IT industry is to say that apples are a terrible fruit because they eventually rot if left in a bowl on the kitchen counter.


Nomnom Cookie
Aug 30, 2009

It’s good at count queries

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug

Agrikk posted:

Triggered. I take this as an affront: the poo poo that my org does, the calisthenics we go through to bend over backwards for our customers, is insane. We lobby for feature requests on existing services. We advocate for new services. We take the blame when a service falls short. We issue refunds when things go sideways. We inspect your poo poo to make sure it’s well-architected. We empathize when you ignore our advice and poo poo falls over and you blame us.

To base “customer obsession” on documentation is ridiculous. Yes, our documentation could be better and is often out of date or incomplete. How’s yours?

But to judge our customer obsession on a generalized weakness of the IT industry is to say that apples are a terrible fruit because they eventually rot if left in a bowl on the kitchen counter.

I have no doubt you and your group do amazing work, but AWS is a big place. We've had huge issues with feature parity in govcloud, and services that are supposedly available turn out to have "whoops, that part doesn't work yet" issues that are only discovered after we open tickets.

Like, for example, it took like 3+ months for the route53 endpoints to get added to the aws cli after they were officially announced as available.

Our TAM is incredibly responsive, but a lot of issues just get bubbled up to who-knows-where with no timelines, and then randomly get resolved or hang because of the decentralized nature of the place and how tickets are allocated across a ton of different departments.


12 rats tied together
Sep 7, 2006

it's a little unfair to consider govcloud and aws china as even being the same business, IMO, but yeah they are both pretty bad

my one bad experience with the docs to date, in about 5 years working with aws, was when the site to site VPN docs specifically recommended you use the same BGP ASN on all of your spokes in a hub/spoke topology, which ended up causing a bunch of routing issues that I was not mentally equipped to understand at the time

I filed a support ticket and they got back to me almost instantly, having already identified the problem; they apologized for the documentation error and fixed it right away

dividertabs
Oct 1, 2004

quote:

Can you provide more detail on the circumstances under which you got throttled on your on-demand table?

Say we want to scale from 0 to 1,000,000 reads/sec in 1 second. S3 can do it (with good p95 latency and poor p99 latency). It was not surprising to me that Dynamo couldn't do it. What was surprising was that the documentation for Dynamo on-demand, which emphasizes that it 'instantly accommodates customers’ workloads as they ramp up or down', didn't mention it. Bolding mine. No judgment on whether my ideal behavior is reasonable.

Adhemar posted:

Mildly triggered AWS engineer here.

Agrikk posted:

Triggered.

Both of you might find it pertinent that I have worked as an SDE for AWS. I saw that my own team's product was marketed in a way that omitted severe limitations, causing customers to waste time trying to get it to work.

Agrikk posted:

We empathize when you ignore our advice and poo poo falls over and you blame us.

Do you really think this is what I was complaining about?

quote:

But to judge our customer obsession on a generalized weakness of the IT industry is to say that apples are a terrible fruit because they eventually rot if left in a bowl on the kitchen counter.

Amazon claims it is customer obsessed. It seems clear to me that this is untrue, just like most companies that claim to be customer obsessed. I don't see how your analogy fits.

quote:

Yes, our documentation could be better and is often out of date or incomplete. How’s yours?

My team's docs suck. I've asked the PM for time to make them better. And if I want to convince a client to use my product, I do my best to make sure they are making an informed decision. In the meantime, I would agree if clients complained that my org is not customer obsessed. I wouldn't take it as a personal affront to my morals or skills.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.

dividertabs posted:

Say we want to scale from 0 to 1,000,000 reads/sec in 1 second. S3 can do it (with good p95 latency and poor p99 latency). It was not surprising to me that Dynamo couldn't do it. What was surprising was that the documentation for Dynamo on-demand, which emphasizes that it 'instantly accommodates customers’ workloads as they ramp up or down', didn't mention it. Bolding mine. No judgment on whether my ideal behavior is reasonable.

It should only throttle the first time you scale up that table though. Never if you use the shortcut I mentioned. See also: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.InitialThroughput

I agree our docs are generally poor.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Govcloud may as well be a separate entity from Amazon, given the requirements it has to operate under to meet the demands of FedRAMP and similar horrific abbreviations and acronyms. The number of times AWS has reached out to me to work on it, and the horror stories I've heard from others, tell me I am much happier where I am now for the foreseeable future.

dividertabs
Oct 1, 2004

Adhemar posted:

It should only throttle the first time you scale up that table though. Never if you use the shortcut I mentioned. See also: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.InitialThroughput

Thanks, this is really interesting and I will definitely try it out.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Somehow I never noticed we have an AWS thread, so I originally posted this in the web thread. I'll repost it here since it's probably a pretty trivial issue for anyone familiar with the service.

I'm trying to get my Internet of poo poo device to upload stuff to S3 with the REST API but can't get the signature right. I'm not super familiar with the API, but the short version seems to be that you're supposed to create a string from some headers in your request, calculate the HMAC-SHA1 with your secret key, base64-encode it, and send it as part of your request's Authorization header.

I think I got the string right; it matches what the server spits back byte for byte. The problem is then probably with the hashing or encoding. Probably something very dumb but difficult to find since I don't know the correct result.

I tried to test it on their example from https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html

quote:

PUT\n
\n
image/jpeg\n
Tue, 27 Mar 2007 21:15:45 +0000\n
/awsexamplebucket1/photos/puppy.jpg

The key is wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Their signature is MyyxeRY7whkBe+bq8fHCL/2kKUg=
My code generates iqRxw+ileNPulfhspnRs8nOjjIBU
I tried some presumably correct implementations: 530b4adc5500dced8481f18a50e7e615e307f195, 8aa473c3e8a578d3eed5f86ca6746cf273a38c80, YOvv/mr//HwVecxGLyxqoLHiN7c=

At least my signature is the right length. I suppose the differences between the online tools come down to how they encode/decode the signature and key, but I can't get anything to match anything else and it's driving me nuts :downs:. My actual IoT code is C, but the following C# does the same thing:
code:
string stringToSign = "PUT\n\nimage/jpeg\nTue, 27 Mar 2007 21:15:45 +0000\n/awsexamplebucket1/photos/puppy.jpg";
string hashKey = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";

// Key and message are both the UTF-8 bytes of the strings.
var encoder = new System.Text.UTF8Encoding();
byte[] key = encoder.GetBytes(hashKey);
byte[] text = encoder.GetBytes(stringToSign);

// HMAC-SHA1, then base64 of the raw 20-byte digest (not a hex string).
var hmac = new System.Security.Cryptography.HMACSHA1(key);
byte[] hashCode = hmac.ComputeHash(text);
Console.WriteLine(System.Convert.ToBase64String(hashCode));

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.
From the link you posted:

quote:

This topic explains authenticating requests using Signature Version 2. Amazon S3 now supports the latest Signature Version 4. This latest signature version is supported in all regions and any new regions after January 30, 2014 will support only Signature Version 4. For more information, go to Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.

Make sure you're implementing SigV4, not SigV2, which is ancient. Read this: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
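
If you want a known-good reference to diff your own implementation against, botocore ships the signer the SDKs themselves use. Something like this (a sketch; the credentials, bucket, and key are placeholders) prints a valid SigV4 Authorization header:
code:
import botocore.auth
import botocore.awsrequest
import botocore.credentials

# Placeholder credentials and request. S3SigV4Auth also sets the
# x-amz-content-sha256 header that S3 requires on signed requests.
creds = botocore.credentials.Credentials("AKIDEXAMPLE", "SECRETKEYEXAMPLE")
request = botocore.awsrequest.AWSRequest(
    method="PUT",
    url="https://awsexamplebucket1.s3.amazonaws.com/photos/puppy.jpg",
    data=b"",
    headers={"Content-Type": "image/jpeg"},
)
botocore.auth.S3SigV4Auth(creds, "s3", "us-east-1").add_auth(request)
print(request.headers["Authorization"])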

Skier
Apr 24, 2003

Fuck yeah.
Fan of Britches
Implementing the signing can be tricky - could you look at how other projects do it, such as https://github.com/penmanglewood/aws_sigv4 or https://github.com/sidbai/aws-sigv4-c? That way you don't have to do it from scratch.

I've had to implement the signing myself and there are a lot of edge cases and gotchas, so looking at someone else's work, licenses permitting, is probably the way to go.

Adhemar
Jan 21, 2004

Waiter, there's a hideous beast in my soup.
This library contains the official AWS C implementation for SigV4 signing: https://github.com/awslabs/aws-c-auth. It’s Apache 2 licensed.

Scrapez
Feb 27, 2004

I'm having an issue with CloudFormation trying to do an Fn::FindInMap inside of an Fn::Sub to generate the proper subnet value. CloudFormation is returning: "Template error: every Ref object must have a single String value." I've been banging my head against the desk for a while trying to understand why it thinks there are either no string values or multiple string values for a Ref object. I believe there should be a single string for each Ref.

Any ideas?

code:
  "Mappings" : {
    "Region" : {
      "us-east-2" : {
        "regionCIDR" : "10.0.0.0/16",
        "regionCIDR2Octet" : "10.0"
      },
      "us-west-2" : {
        "regionCIDR" : "10.5.0.0/16",
        "regionCIDR2Octet" : "10.5"
      }
    }
  },
code:
    "subnet0c41cd3e1702cc8a8": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "CidrBlock": {
            "Ref": { "Fn::Sub": [ "${sub_region_CIDR}.130.0/27", { "sub_region_CIDR": {"Fn::FindInMap" : [ "Region", { "Ref" : "AWS::Region" }, "regionCIDR2Octet"]}} ]}
        },
        "AvailabilityZone": "us-west-2a",
        "VpcId": {
          "Ref": "vpc0c556f5a1567a8960"
        },
        "Tags": [
          {
            "Key": "Name",
            "Value": "Kafka Consumer 1"
          }
        ]
      }
    },

12 rats tied together
Sep 7, 2006

I am really struggling with the json syntax here instead of yaml, but I would guess that CidrBlock -> Ref: "10.5.130.0/27" is not a valid usage of Ref. IIRC it's only used for referring to other logical resource ids or parameter names (ex: "subnet0c41cd3e1702cc8a8"); it can't be used for composing string values like you have here.

Instead I think you can just use Sub directly?
code:
"CidrBlock": { "Fn::Sub": [
    "${sub_region_CIDR}.130.0/27",
    { "sub_region_CIDR": { "Fn::FindInMap" : [ "Region", { "Ref" : "AWS::Region" }, "regionCIDR2Octet" ] } }
] }
Or, in yaml :shobon::
code:
Type: AWS::EC2::Subnet
Properties:
  CidrBlock: !Sub
    - ${sub_region_CIDR}.130.0/27
    - { sub_region_CIDR: !FindInMap [ Region, !Ref 'AWS::Region', regionCIDR2Octet ] }
edit: I think in this case the error message is because the value of the Ref is a dictionary, not a string.

Scrapez
Feb 27, 2004

12 rats tied together posted:

I am really struggling with the json syntax here instead of yaml, but I would guess that CidrBlock -> Ref: "10.5.130.0/27" is not a valid usage of Ref. IIRC it's only used for referring to other logical resource ids or parameter names (ex: "subnet0c41cd3e1702cc8a8"); it can't be used for composing string values like you have here.

Instead I think you can just use Sub directly?
code:
"CidrBlock": { "Fn::Sub": [
    "${sub_region_CIDR}.130.0/27",
    { "sub_region_CIDR": { "Fn::FindInMap" : [ "Region", { "Ref" : "AWS::Region" }, "regionCIDR2Octet" ] } }
] }
Or, in yaml :shobon::
code:
Type: AWS::EC2::Subnet
Properties:
  CidrBlock: !Sub
    - ${sub_region_CIDR}.130.0/27
    - { sub_region_CIDR: !FindInMap [ Region, !Ref 'AWS::Region', regionCIDR2Octet ] }
edit: I think in this case the error message is because the value of the Ref is a dictionary, not a string.

You are absolutely correct. The Ref: at the beginning was not required and was the issue. Can't thank you enough!

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord
I need to run some PowerShell commands on a Windows server remotely from Lambda. What’s the best Lambda-friendly way to do this?

fluppet
Feb 10, 2009
Install the SSM agent on the Windows server, then use Lambda to call SSM Run Command.

PierreTheMime
Dec 9, 2004

Hero of hormagaunts everywhere!
Buglord

fluppet posted:

Install the SSM agent on the Windows server, then use Lambda to call SSM Run Command.

Oh nice it’s already on the AMI and everything. :tipshat:

Pile Of Garbage
May 28, 2007

PierreTheMime posted:

Oh nice it’s already on the AMI and everything. :tipshat:

Your EC2 instances will need an instance profile with the AmazonSSMManagedInstanceCore managed policy attached in order to register with SSM, but otherwise yeah, Run Command / associations / maintenance window tasks are the way to go.
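
The Lambda side is small; a boto3 sketch (the instance ID and the PowerShell itself are placeholders):
code:
import time
import boto3

ssm = boto3.client("ssm")
INSTANCE = "i-0123456789abcdef0"  # placeholder instance ID

# AWS-RunPowerShellScript is the managed document for running PowerShell.
resp = ssm.send_command(
    InstanceIds=[INSTANCE],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
)
command_id = resp["Command"]["CommandId"]

# The invocation takes a moment to register; poll (or use the
# command_executed waiter) before reading the output.
time.sleep(2)
out = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE)
print(out["Status"], out["StandardOutputContent"])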

Pile Of Garbage
May 28, 2007

Does anyone know if it's possible to create SSM Distributor packages via CFN? According to the doco they are just documents so I should be able to do it via the AWS::SSM::Document resource by selecting Package for the DocumentType. However I can't find any reference on what the document content needs to contain.

If I run aws ssm get-document against a package that I created via the console I can see the document content but it doesn't include information about the S3 bucket that contains the package(s). I suspect there's some stuff that happens behind the scenes.

JHVH-1
Jun 28, 2002
Haven’t used that feature personally but you might need to create a lambda function to do it for you.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Adhemar posted:

From the link you posted:


Make sure you're implementing SigV4, not SigV2, which is ancient. Read this: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

Skier posted:

Implementing the signing can be tricky - could you look at how other projects do it, such as https://github.com/penmanglewood/aws_sigv4 or https://github.com/sidbai/aws-sigv4-c? That way you don't have to do it from scratch.

I've had to implement the signing myself and there are a lot of edge cases and gotchas, so looking at someone else's work, licenses permitting, is probably the way to go.

Adhemar posted:

This library contains the official AWS C implementation for SigV4 signing: https://github.com/awslabs/aws-c-auth. It’s Apache 2 licensed.
Thanks! I got the V2 authentication to work somehow; still no clue what was actually wrong, but if it works, it works.

I then started implementing V4 anyway because it seemed to be the only option to sign API Gateway requests. First of all, what a pain in the rear end. I really don't see the point of the turducken hashing and signing; if the attacker knows the secret key, they can sign all the pieces just like they can sign the whole request.

Anyway, I got it to produce the right results for the example requests, and when I went to test it, discovered that I can actually just make a completely unsigned POST request lol. It makes sense that that would be possible, but it complained about missing auth tokens once, so I just went straight for the nuclear option. Oops.

Of course it would totally make sense to use an existing implementation/library, but this is mostly DIY stuff (for now) and I'm already running into ROM issues on the micro, so I wanted to keep the code to the minimum necessary to make it work.

FamDav
Mar 29, 2008

mobby_6kl posted:

I then started implementing V4 anyway because it seemed to be the only option to sign API Gateway requests. First of all, what a pain in the rear end. I really don't see the point of the turducken hashing and signing; if the attacker knows the secret key, they can sign all the pieces just like they can sign the whole request.

The point is to protect you when an attacker doesn’t know the secret but has access to other information. Off the top of my head:

* canonical request signing ensures that the request can’t be meaningfully modified
* as part of that, the credential scope mitigates replay attacks at different times and to different regions/services
* the signing key derivation process protects the secret key, and therefore the account, if a signing key is leaked (sketched below)
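
The derivation chain itself is short enough to show; a sketch in Python (the date/region/service values are made-up examples, and the secret is the docs example key):
code:
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # docs example key

# Each hop narrows the scope: a leaked signing key is only useful for one
# date/region/service combination and never exposes the long-term secret.
k_date = sign(("AWS4" + secret).encode("utf-8"), "20200415")
k_region = sign(k_date, "us-east-1")
k_service = sign(k_region, "s3")
k_signing = sign(k_service, "aws4_request")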

Nomnom Cookie
Aug 30, 2009

Crypto is voodoo magic that you apply to the computer as directed by the witch doctor, without question and without fail. Otherwise evil warlocks will steal your soul, or worse your credentials.

Pile Of Garbage
May 28, 2007

JHVH-1 posted:

Haven’t used that feature personally but you might need to create a lambda function to do it for you.

Yeah, certainly looks that way. Package creation is supported by the AWS CLI and the API; the only gap appears to be CFN. From poking around, it looks like there's some stuff Amazon does on the back-end where they publish some metadata that associates the SSM Document with your package manifest and content.

Nomnom Cookie posted:

Crypto is voodoo magic that you apply to the computer as directed by the witch doctor, without question and without fail. Otherwise evil warlocks will steal your soul, or worse your credentials.

Only the math part of crypto is voodoo, IMO. The fundamentals of how crypto works, how it is applied, and what risks are involved are worth learning. You don't need to be able to do AES in your head, but you should be able to understand things like FamDav's good post.

That said, never roll your own crypto. Doing so will prompt a lich to steal your life essence by ridiculing you online.

JHVH-1
Jun 28, 2002

Pile Of Garbage posted:

Yeah, certainly looks that way. Package creation is supported by the AWS CLI and the API; the only gap appears to be CFN. From poking around, it looks like there's some stuff Amazon does on the back-end where they publish some metadata that associates the SSM Document with your package manifest and content.

It’s one of the things I like about CDK: it will create the Lambda functions in certain cases to fill in those gaps. Plus it writes the CloudFormation for you.

SAVE-LISP-AND-DIE
Nov 4, 2010
I've got to move files from a Windows EC2 instance into EFS. The files range in size from a few KB to a couple hundred MB, and are being created infrequently but regularly.

At the moment I see 2 possibilities: DataSync into EFS, or setting up a Samba share on a Linux EC2 instance and connecting to that. How screwed am I?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
You can also set up an NFS share on the Windows box and then rsync the NFS share to EFS mounted on a Linux EC2 host.

But back up a sec. How are the files appearing on the Windows box? Is it possible to redirect the location of those files and have them end up straight on an EFS mount point on EC2 Linux?

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Doesn't Windows have an NFS driver? That's all EFS is. IIRC AWS gives you instructions on mounting EFS on Windows when you create one.

E: nope, not supported. Ignore me!

Docjowles
Apr 9, 2009

Will the storage be used primarily by Windows hosts? Because they also have managed SMB/CIFS if that’s a better fit for your workload.


https://aws.amazon.com/fsx/windows/

SAVE-LISP-AND-DIE
Nov 4, 2010
The files are being sent in by third parties over SFTP, and I'm mandated to use a Windows-based SFTP server. However, the files will be processed by Linux instances.

The Samba share route seems to be working for now.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Anyone found a way to use AWS Backup with instances that have RAID configured? I’m stuck using stupid expensive io1 EBS volumes for our software because we don’t get synced EBS snapshots in AWS Backup yet (and granted, those weren’t possible until late last year). Support gave me a meh answer, but maybe one of you fine folks figured something out (planning on either mdraid or ZFS for disk I/O expansion)

sinequanon01
Oct 20, 2017

necrobobsledder posted:

Anyone found a way to use AWS Backup with instances that have RAID configured? I’m stuck using stupid expensive io1 EBS volumes for our software because we don’t get synced EBS snapshots in AWS Backup yet (and granted, those weren’t possible until late last year). Support gave me a meh answer, but maybe one of you fine folks figured something out (planning on either mdraid or ZFS for disk I/O expansion)

Have you seen this? Backup supports backing up entire EC2 instances including those with RAID.

https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/

You’ll need a way to quiesce your file system and flush writes to disk.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

sinequanon01 posted:

Have you seen this? Backup supports backing up entire EC2 instances including those with RAID.

https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/

You’ll need a way to quiesce your file system and flush writes to disk.

quote:

Note that you need to stop write activity and flush filesystem caches in case you’re using RAID volumes or any other type of technique to group your volumes.

So basically we'd need to stop our software from writing to disk shortly before the AWS Backup cron job starts, which is silly for our situation. We're not about to put much more effort into developing the application itself to support this, either. Crash-consistent snapshots require coordination at the EBS level in the form of multi-volume snapshots, but supporting that would be a pretty small change in the Backup service to select multiple EBS volumes: https://aws.amazon.com/blogs/storage/taking-crash-consistent-snapshots-across-multiple-amazon-ebs-volumes-on-an-amazon-ec2-instance/
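
For reference, the EBS-level call already exists outside of the Backup service; a boto3 sketch (the instance ID is a placeholder):
code:
import boto3

ec2 = boto3.client("ec2")

# CreateSnapshots takes crash-consistent, point-in-time snapshots of all
# the EBS volumes attached to one instance in a single call.
resp = ec2.create_snapshots(
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": True,  # skip root, snapshot just the RAID members
    },
    Description="crash-consistent snapshot of RAID member volumes",
    CopyTagsFromSource="volume",
)
for snap in resp["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"])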

It gets a bit more awkward if you're trying to do snapshots of multi-node databases / datastores like Kafka, Elasticsearch, etc., but it's worked out well enough for us so far with the stack we've set up. But we're hitting a performance wall, and our EBS setup would be best served by a RAID0 of GP2 EBS volumes.

unpacked robinhood
Feb 18, 2013

by Fluffdaddy
Could I render a simple static HTML page in a Lambda and serve it through API Gateway every time the URL is visited?
I know I can serve a page stored in an S3 bucket, but then I'm gonna have to fetch data and stuff in JS, which I'm not fond of.

Vanadium
Jan 8, 2005

Yes, that's how a coworker implemented their wedding's website. :toot:

Maybe you can get somewhere with an ALB instead of API Gateway too? idk.
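
With the Lambda proxy integration the handler just returns the page; roughly like this (untested sketch, everything in it is made up):
code:
import datetime

def handler(event, context):
    # Build whatever HTML you want server-side (fetch your data here too).
    now = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M")
    html = f"<html><body><p>Rendered at {now} UTC</p></body></html>"

    # API Gateway turns this dict into the HTTP response, so the browser
    # gets a normal HTML page instead of JSON.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": html,
    }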

JHVH-1
Jun 28, 2002
You could also use Lambda@Edge functions in CloudFront and use S3 for static files. I think it would be cheaper than an ALB, but it depends on what you want to do.

unpacked robinhood
Feb 18, 2013

by Fluffdaddy
I'd like to fetch a few rows from DynamoDB, build a really basic page using that data, and serve it. Each request may build a different page depending on the current time. It's going to get 5 visits a day tops.

I have a 3kb prototype in an S3 bucket already, but I'd rather render the whole thing programmatically somewhere else, I think.
