
Kinesis Data Stream as Lambda Trigger in AWS CloudFormation

This AWS CloudFormation YAML template demonstrates how a Kinesis Data Stream can be implemented as a Lambda trigger.

Simply deploy the following template via the AWS CloudFormation console. In the CloudFormation Designer the template looks like this: [screenshot: Kinesis Data Stream as Lambda trigger in the CloudFormation Designer]

Template:
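A minimal sketch of such a template (resource names, the Python runtime and the inline handler are illustrative assumptions):

AWSTemplateFormatVersion: "2010-09-09"
Description: Kinesis Data Stream as Lambda trigger (minimal sketch)

Resources:
  Stream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1

  ConsumerRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        # Grants the Kinesis read and CloudWatch Logs permissions the trigger needs
        - arn:aws:iam::aws:policy/service-role/AWSLambdaKinesisExecutionRole

  Consumer:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Role: !GetAtt ConsumerRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              # Log every record so the invocation is visible in CloudWatch Logs
              for record in event["Records"]:
                  print(record["kinesis"]["data"])

  Trigger:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !GetAtt Stream.Arn
      FunctionName: !Ref Consumer
      StartingPosition: LATEST

Outputs:
  StreamName:
    Value: !Ref Stream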

Now we can gather the stream name from the CloudFormation stack outputs section and send a test event using the AWS CLI:

  aws kinesis put-record --stream-name <value> --data <value> --partition-key <value>

CLI Reference: https://docs.aws.amazon.com/cli/latest/reference/kinesis/put-record.html
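For example (stream name and payload are illustrative; note that AWS CLI v2 expects the --data value to be base64-encoded, while CLI v1 accepted raw strings):

  aws kinesis put-record --stream-name my-kinesis-stream --partition-key demo --data "SGVsbG8gS2luZXNpcyE="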

The output contains the ShardId and the SequenceNumber of the ingested record.

Now you can check in the Lambda console whether the function has been invoked and what has been written to the logs.

Implement Conditional Properties in AWS CloudFormation

The following CloudFormation snippet shows how to use conditional properties in a CloudFormation template. The example configures one or two subnets in the VPCOptions section of an Elasticsearch domain, depending on whether a parameter called ZoneAwareness is set to true.

Parameters:
  ZoneAwareness:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "true"

Conditions:
  ZoneAwarenessTrue: !Equals [!Ref ZoneAwareness, "true"]

Resources:
  ElasticsearchDomain:
    Type: AWS::Elasticsearch::Domain
    Properties:
      ...
      VPCOptions:
        SubnetIds:
          - !Ref SubnetA
          - Fn::If:
              - ZoneAwarenessTrue
              - !Ref SubnetB
              - !Ref "AWS::NoValue"
        SecurityGroupIds:
          - !Ref SecurityGroup

Read more: Pseudo Parameters Reference

Retrieve StackName from nested Stacks in AWS CloudFormation

Using the intrinsic function Ref on a Stack created within another Stack only gives you the Id of the referenced Stack. If you want to get the StackName, which is generated automatically, you have to use a combination of the intrinsic functions Split and Select, as follows:

!Select [1, !Split ["/", !Ref MyStack]]

This works since the Stack Id is structured as follows:

arn:aws:cloudformation:eu-west-1:*********:stack/test-nested-MyStack-R5E52GRQGVZH/8d90dd40-17a7-11ea-b079-02c18823f600

The statement splits the Stack Id by the delimiter “/”, resulting in a list which contains the StackName at index 1, which we can then select using the Select function.
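For example, the name of the nested stack can be exposed in the outputs of the parent stack (sketch; MyStack is assumed to be an AWS::CloudFormation::Stack resource):

Outputs:
  NestedStackName:
    Value: !Select [1, !Split ["/", !Ref MyStack]]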

macOS Chrome: accept self-signed certificates and bypass “Your connection is not private”

Every developer will sometimes face the problem of having self-issued / self-signed certificates for a website / web server / API in a development environment which are then not accepted by Chrome.

You will get a warning “Your connection is not private” and error codes like “NET::ERR_CERT_INVALID” or “NET::ERR_CERT_REVOKED”.

This warning can be bypassed very easily: simply type thisisunsafe while the warning page is in focus, and Chrome will proceed immediately.

Please do not use this on sites you don’t trust, because as the warning says: Attackers might be trying to steal your information from xyz.com (for example, passwords, messages or credit cards).

Implement Metric Filter to profile memory usage for AWS Lambda Functions in AWS CloudFormation

Not long ago I came across the problem that I wanted to know in detail how much of the allocated memory my individual Lambda functions consume.

Since memory consumption is not part of the standard Lambda metrics, I had to find an individual solution.

As each Lambda execution logs its memory usage, I thought about implementing a metric filter that extracts this information to create a custom metric in AWS CloudWatch.

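Every invocation ends with a REPORT line in the function's log group; the Max Memory Used field contains the value we are after (values here are illustrative):

  REPORT RequestId: <request id> Duration: 102.25 ms Billed Duration: 103 ms Memory Size: 128 MB Max Memory Used: 45 MB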

A sample metric filter was quickly found on the AWS forums (related thread).

You can test the metric filter by applying it to the log group of a lambda function like I did in the example below:

Now that I had verified the metric filter actually works, I only had to implement it in CloudFormation to be able to evaluate the memory consumption. It is important to define an explicit function name so that the log group belonging to the Lambda function can also be created in the CloudFormation template.

You can find the template below:
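A minimal sketch of such a template (resource names are illustrative; the filter pattern follows the one from the forum thread, and newer runtimes append further fields such as Init Duration to the REPORT line, so it may need adjustment):

AWSTemplateFormatVersion: "2010-09-09"
Description: Metric filter profiling Lambda memory usage (minimal sketch)

Resources:
  FunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

  Function:
    Type: AWS::Lambda::Function
    Properties:
      # An explicit name makes the log group name below predictable
      FunctionName: !Sub "${AWS::StackName}-demo-function"
      Runtime: python3.9
      Handler: index.handler
      Role: !GetAtt FunctionRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return "ok"

  # Create the log group in the template so the metric filter can reference it
  FunctionLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${Function}"

  MemoryMetricFilter:
    Type: AWS::Logs::MetricFilter
    Properties:
      LogGroupName: !Ref FunctionLogGroup
      # Space-delimited pattern matching the fields of the REPORT line
      FilterPattern: '[report_name="REPORT", request_id_name="RequestId:", request_id_value, duration_name="Duration:", duration_value, duration_unit="ms", bd_name1="Billed", bd_name2="Duration:", billed_duration_value, bd_unit="ms", ms_name1="Memory", ms_name2="Size:", memory_size_value, ms_unit="MB", mm_name1="Max", mm_name2="Memory", mm_name3="Used:", max_memory_used_value, mm_unit="MB"]'
      MetricTransformations:
        - MetricNamespace: !Sub "${AWS::StackName}/${Function}"
          MetricName: Memory
          MetricValue: "$max_memory_used_value"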

Now you can find the memory consumption metric under StackName > LambdaFunctionName > Memory in AWS CloudWatch.

Use Metric Math in CloudWatch Alarm using AWS CloudFormation

Recently I had the following problem: a CloudWatch alarm based on the Errors metric of a critical Lambda function occasionally caused notifications.

The reason for the notifications was quickly found through a search in the Lambda logs: the errors were caused by Lambda timeouts. Since timeouts are not critical in the utilised architecture, I was looking for a way to ignore them in the CloudWatch alarm.

The solution is called Metric Math.

Metric math enables you to query multiple CloudWatch metrics and use math expressions to create new time series based on these metrics.

Source: Using Metric Math by AWS

This makes it possible to create a new metric that excludes the timeouts by subtracting timeouts from errors. By default there is no timeout metric for Lambda functions, but one can be extracted with a simple metric filter applied to the log group of the respective function:
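A sketch of such a metric filter (log group and namespace are illustrative; Lambda writes a "Task timed out after ... seconds" line when a function hits its timeout):

  TimeoutMetricFilter:
    Type: AWS::Logs::MetricFilter
    Properties:
      LogGroupName: /aws/lambda/my-critical-function
      FilterPattern: '"Task timed out"'
      MetricTransformations:
        - MetricNamespace: Custom/Lambda
          MetricName: Timeouts
          MetricValue: "1"
          # Emit 0 for non-matching log events so the metric is not sparse
          DefaultValue: 0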

Then a CloudWatch Alarm can be created with a mathematical expression:
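A sketch of such an alarm (function name, period and threshold are illustrative):

  ErrorsWithoutTimeoutsAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Lambda errors excluding timeouts
      ComparisonOperator: GreaterThanThreshold
      Threshold: 0
      EvaluationPeriods: 1
      TreatMissingData: notBreaching
      Metrics:
        - Id: errors
          ReturnData: false
          MetricStat:
            Metric:
              Namespace: AWS/Lambda
              MetricName: Errors
              Dimensions:
                - Name: FunctionName
                  Value: my-critical-function
            Period: 300
            Stat: Sum
        - Id: timeouts
          ReturnData: false
          MetricStat:
            Metric:
              Namespace: Custom/Lambda
              MetricName: Timeouts
            Period: 300
            Stat: Sum
        - Id: errorsWithoutTimeouts
          # The alarm evaluates this expression: errors not caused by timeouts
          Expression: errors - timeouts
          Label: Errors excluding timeouts
          ReturnData: true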

In the CloudWatch console you can then see the resulting alarm based on the metric math expression.

Define ApiGateway, Lambda and DynamoDB using AWS CDK

AWS has released a developer preview of AWS CDK during re:Invent 2018. A detailed description and the release information can be found here: AWS CDK Developer Preview. AWS CDK offers the possibility to define Infrastructure as Code in different programming languages and works as a kind of compiler based on CloudFormation.

The introductory session of re:Invent 2018:

Since I found some free time during the re:Invent, I have played around with this new software development framework - and the result is the following snippet. Here I create a very simple API with only one method implemented by a Lambda function that has permissions on a DynamoDB table. A very common scenario.
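A comparable sketch in today's CDK (v2, TypeScript) could look like this (construct names, the runtime and the inline handler are illustrative):

import { App, Stack } from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as lambda from "aws-cdk-lib/aws-lambda";

class ApiLambdaDynamoStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);

    // DynamoDB table backing the API
    const table = new dynamodb.Table(this, "ItemsTable", {
      partitionKey: { name: "id", type: dynamodb.AttributeType.STRING },
    });

    // Lambda function implementing the single API method
    const handler = new lambda.Function(this, "ItemsHandler", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromInline(
        'exports.handler = async () => ({ statusCode: 200, body: "ok" });'
      ),
      environment: { TABLE_NAME: table.tableName },
    });

    // Grant the function read/write permissions on the table
    table.grantReadWriteData(handler);

    // REST API with a single GET method backed by the function
    const api = new apigateway.RestApi(this, "ItemsApi");
    api.root
      .addResource("items")
      .addMethod("GET", new apigateway.LambdaIntegration(handler));
  }
}

const app = new App();
new ApiLambdaDynamoStack(app, "ApiLambdaDynamoStack");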

AWS CDK makes a very good impression and I am looking forward to further development. The API reference and the introductory tutorial helped me a lot while trying out CDK.

Amazon DynamoDB Transactions support

Somewhat unexpectedly but quite deservedly, Amazon Web Services (AWS) has released DynamoDB Transactions at this year’s re:Invent. Until now there was only one “official” additional Java library to support transactions (Java Transaction Library for DynamoDB). In other programming languages widely used in the AWS environment, such as Node.js, Python or Go, you had to use update conditions to create transactional behavior. Since manual checks had to be implemented in many places, this easily becomes very error-prone. DynamoDB Transactions now allow you to perform atomic write operations on multiple items, of which either all or none go through. Besides that, isolated reads ensure that read operations applied to one or multiple items are not interfered with by other transactions.

Currently the newly introduced DynamoDB operations, TransactWriteItems and TransactGetItems, are not yet included in the official API definition. But I will insert the sections in this post as soon as the operations are documented there.

Operation description from the AWS blog post:

  • TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. TransactWriteItems can optionally check for prerequisite conditions that must be satisfied before making updates. These conditions may involve the same or different items than those in the write set. If any condition is not met, the transaction is rejected.
  • TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations. If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled. To get the previously committed value, you can use a standard read.
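For example, with the AWS SDK for JavaScript v3 a TransactWriteItems call that debits one account and credits another could look like this (table, key and attribute names are illustrative):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function transferFunds(from: string, to: string, amount: number): Promise<void> {
  // Both updates succeed or both are rolled back; the condition on the
  // source account rejects the whole transaction on insufficient funds.
  await client.send(new TransactWriteCommand({
    TransactItems: [
      {
        Update: {
          TableName: "Accounts",
          Key: { id: from },
          UpdateExpression: "SET balance = balance - :amount",
          ConditionExpression: "balance >= :amount",
          ExpressionAttributeValues: { ":amount": amount },
        },
      },
      {
        Update: {
          TableName: "Accounts",
          Key: { id: to },
          UpdateExpression: "SET balance = balance + :amount",
          ExpressionAttributeValues: { ":amount": amount },
        },
      },
    ],
  }));
}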

A nice fact about transactions is that they do not incur any additional costs and they are now available globally in all commercial regions.

If you want to learn more about this topic read the official AWS blog post: New – Amazon DynamoDB Transactions containing a more detailed explanation and samples.

AWS re:Invent 2018 session introducing DynamoDB transactions:

Since I use self-built transactions very often in one of my projects, I am very happy about this new feature and will try it out soon. A report on my experiences will follow!

Authenticate via additional IAM Users or Roles in AWS EKS Kubernetes Cluster

Recently I had the following problem: I created an EKS cluster in an AWS account with the root user and could not access the cluster later with other IAM users (with all permissions on EKS) via kubectl.

After some research I came across the following paragraph in the AWS documentation:

“You must use IAM user credentials for this step, not root credentials. If you create your Amazon EKS cluster using root credentials, you cannot authenticate to the cluster.”

So I created the cluster again with my IAM user. Now I was able to connect to the cluster (after configuring kubectl as described here: configure kubectl) using my IAM user - but my colleagues' IAM users still could not.
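Today configuring kubectl can be done with a single AWS CLI call (cluster name illustrative):

  aws eks update-kubeconfig --name my-eks-cluster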

In order to authorize further IAM users to use the cluster, the following steps were necessary:

  • Gather user or role ARNs
  • Create a config.yaml file and insert the presets listed below
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<AWS account id>:user/<username of iam user>
      username: <username>
      groups:
        - system:masters
  mapRoles: |
    - rolearn: <ARN of IAM role>
      username: admin
      groups:
        - system:masters
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

system:masters - Allows super-user access to perform any action on any resource. If you want to grant access in a more granular way, please refer to the Kubernetes documentation.

The last statement in the mapRoles section must always be present (and configured) in an EKS cluster to allow nodes to join the cluster.

Use the following command to apply the configuration:

 kubectl apply -f config.yaml

Using kubectl describe configmap you can validate the current aws-auth configuration:

kubectl describe configmap -n kube-system aws-auth

Now the configured IAM users, or users holding the defined roles, can configure their kubectl pointing to your EKS cluster.

After trying managed Kubernetes clusters on AWS and Google Cloud, I have to say that getting started on Google Cloud was quite a bit easier. But after overcoming the initial difficulties, EKS works as desired.

If you want to dive deeper into the subject I recommend the following article: EKS and roles