Automating CloudWatch Logs Aggregation | Altostra

Automating CloudWatch Logs Aggregation

How we automated our log shipping to a centralized solution using Logz.io
Yossi Ittach

July 1 2020 · 5 min read


CloudWatch Logs is a very useful tool for Lambda debugging. However, since our entire solution is serverless and we're running multiple Lambda functions across several environments, we quickly realized that we needed a better logging solution: one that would give us a centralized cross-system overview and facilitate cross-service debugging.

After evaluating many options (including running our own ELK stack), we decided to use a specialized solution provider; most of them offer a useful free tier too. We chose Logz.io.

Logz.io, like most solution providers, gives you a shipper Lambda that you create in your own account; all you need to do is connect all your log groups to it. That Lambda takes care of parsing your logs and forwarding them to the provider's service.

The question is, how do you connect all your log groups to that Lambda?

We realized that our optimal solution would be one that:

  1. Automatically registers all log groups

  2. Can add metadata to the logs (running environment, service name, etc.) to facilitate search and debugging

The Basic Solution — Manually connect your Log-Groups

This solution wasn't relevant for us, but if you have a small environment with only a few Lambdas, it will work best for you. Once you've created the ShipperLambda for your solution provider, go to the Lambda's triggers and manually add each log group.

The default log group name for Lambda is /aws/lambda/{your-Lambda's-name}

Adding a CloudWatch Logs trigger to the Shipper Lambda

Do note, however, that you can only add a trigger for an existing log group: a Lambda's log group is only created the first time the Lambda writes any output, so if your Lambda hasn't run yet, you won't be able to connect it.
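Because the log group only appears after the first write, it can help to check for it programmatically before wiring the trigger. Here is a minimal sketch; the function name is illustrative, and the existence check against the AWS SDK v2 client used elsewhere in this post is shown as a comment so the helper stays self-contained:

```javascript
// Build a Lambda function's default log-group name; the group itself
// won't exist until the function writes its first output.
function defaultLogGroupName(functionName) {
    return `/aws/lambda/${functionName}`
}

// With the aws-sdk v2 client used elsewhere in this post, you could
// then verify the group exists before adding the trigger, e.g.:
//
//   const res = await cloudWatchLogs
//       .describeLogGroups({ logGroupNamePrefix: defaultLogGroupName(fn) })
//       .promise()
//   const exists = res.logGroups.some(
//       g => g.logGroupName === defaultLogGroupName(fn))

console.log(defaultLogGroupName('my-service-handler'))
// → /aws/lambda/my-service-handler
```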

The Automated Solution — Listening on newly created LogGroups

Important: This solution requires that the CloudTrail service is enabled. Besides being a very useful auditing service, in its most basic form (which is enough for this case) you pay only for the S3 storage you use, roughly $0.023 per GB per month.

If your solution is anything like ours, you're constantly adding new serverless projects and deploying them to multiple environments. In this case, the manual solution becomes tedious and error-prone. What we need is a way to automatically register newly created log groups as triggers to the Shipper Lambda.

In order to do that, we’ll need another Lambda: a LogGroupListener Lambda. This Lambda will listen on CreateLogGroup events from CloudTrail, and register the newly created log groups to the ShipperLambda.

This is how the flow is going to look:

  1. Create the ShipperLambda for the solution supplier

  2. Create our LogGroupListenerLambda

  3. When any Lambda writes output for the first time, it automatically creates a log group

  4. CloudTrail records a CreateLogGroup event

  5. Our LogGroupListenerLambda listens on this event and registers the newly created log group to the ShipperLambda

When this solution is deployed, every newly created log group in this region will be automatically registered to our ShipperLambda. This is great — unless you have several environments running in the same region and you want each of them to write to a different account. (You wouldn’t want QA logs hogging the Production account, for example)
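If you do need per-environment routing within a single region, one approach is to have the listener skip log groups that don't belong to its environment. This is a sketch under the assumption that your function names carry an environment prefix such as `qa-` or `prod-`; `ENV_PREFIX` is a hypothetical environment variable:

```javascript
// Decide whether a newly created log group belongs to this environment,
// based on a naming convention like "/aws/lambda/qa-orders-service".
function shouldRegister(logGroupName, envPrefix) {
    return logGroupName.startsWith(`/aws/lambda/${envPrefix}`)
}

// In the handler, before calling putSubscriptionFilter:
//   if (!shouldRegister(name, process.env.ENV_PREFIX)) return

console.log(shouldRegister('/aws/lambda/qa-orders', 'qa-'))   // → true
console.log(shouldRegister('/aws/lambda/prod-orders', 'qa-')) // → false
```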

The LogGroupListener Lambda

Our Lambda needs:

  1. The ARN of our ShipperLambda as an environment variable (DESTINATION_ARN), so it knows where to register new log groups

  2. The code to do the registration

  3. A trigger on the CreateLogGroup CloudTrail event

  4. Permission to register log groups to our ShipperLambda

The Code:

const util = require('util')
const AWS = require('aws-sdk')
const cloudWatchLogs = new AWS.CloudWatchLogs()

const DESTINATION_ARN = process.env.DESTINATION_ARN

async function registerLogGroupToLogz(logGroupName) {
    const filterName = 'sample-filterName-1'
    const filterPattern = '' // an empty pattern matches everything

    const req = {
        destinationArn: DESTINATION_ARN,
        logGroupName: logGroupName,
        filterName: filterName,
        filterPattern: filterPattern
    }

    console.debug('adding subscription filter...', {
        logGroupName,
        arn: DESTINATION_ARN,
        filterName,
        filterPattern
    })

    await cloudWatchLogs
        .putSubscriptionFilter(req)
        .promise()

    console.info(`subscribed log group to [${DESTINATION_ARN}]`, {
        logGroupName,
        arn: DESTINATION_ARN
    })
}

exports.handler = async (event) => {
    console.log(`New LogGroup identified, registering: ${util.inspect(event, { depth: 20 })}`)
    await registerLogGroupToLogz(event.detail.requestParameters.logGroupName)
    console.log(`${event.detail.requestParameters.logGroupName} registered`)
}

The Trigger

We need to set up a new EventBridge (formerly CloudWatch Events) rule and connect it to our Lambda:

Go to EventBridge -> Create Rule, give it a meaningful name, and create an Event Pattern with this custom pattern:

{
  "source": [
    "aws.logs"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "logs.amazonaws.com"
    ],
    "eventName": [
      "CreateLogGroup"
    ]
  }
}

Now, set our Lambda as the target of this rule. Then go back to the Lambda and make sure you can see the attached trigger in the Designer view.
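For reference, the event this rule delivers carries the CloudTrail record under `detail`, which is why the handler reads `event.detail.requestParameters.logGroupName`. A small sketch of extracting the name defensively (the sample event is abbreviated and illustrative):

```javascript
// Pull the new log group's name out of an EventBridge event that
// matched the pattern above; return null for anything else.
function extractLogGroupName(event) {
    if (event.source !== 'aws.logs' ||
        !event.detail ||
        event.detail.eventName !== 'CreateLogGroup') {
        return null
    }
    return event.detail.requestParameters.logGroupName
}

const sample = {
    source: 'aws.logs',
    'detail-type': 'AWS API Call via CloudTrail',
    detail: {
        eventSource: 'logs.amazonaws.com',
        eventName: 'CreateLogGroup',
        requestParameters: { logGroupName: '/aws/lambda/my-fn' }
    }
}
console.log(extractLogGroupName(sample)) // → /aws/lambda/my-fn
```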

The Role

We now have a Lambda that is triggered by the CreateLogGroup event and has the code to register the new log group to the ShipperLambda.

The only thing our Lambda is missing is the permission to put subscription filters on log groups. Let's add that:

Go to your Lambda's permissions and add an inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Manual",
      "Effect": "Allow",
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:us-east-1:{YOUR_ACCOUNT_ID}:log-group:/aws/lambda/*"
     }
  ]
}
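One more permission lives on the other side: PutSubscriptionFilter with a Lambda destination requires that the destination function (the ShipperLambda) allow the CloudWatch Logs service principal to invoke it. A sketch of granting that once; the names and ARNs are illustrative, and the actual AWS SDK call is shown as a comment so the parameter builder stays self-contained:

```javascript
// Build the resource-based-policy statement that lets CloudWatch Logs
// invoke the ShipperLambda. Region and account values are illustrative.
function buildInvokePermission(shipperArn, region, accountId) {
    return {
        FunctionName: shipperArn,
        StatementId: 'allow-cloudwatch-logs-invoke',
        Action: 'lambda:InvokeFunction',
        Principal: 'logs.amazonaws.com',
        SourceArn: `arn:aws:logs:${region}:${accountId}:log-group:*`
    }
}

// One-time setup with the aws-sdk v2 used elsewhere in this post:
//   await new AWS.Lambda()
//       .addPermission(buildInvokePermission(arn, 'us-east-1', '123456789012'))
//       .promise()
```

This is also why the CloudFormation template below grants the listener `lambda:AddPermission`.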

CloudFormation template for the Listener Lambda

You can use this CloudFormation template to create the Lambda, its permissions, and the EventBridge rule; you only need to add in the code:

{
   "AWSTemplateFormatVersion":"2010-09-09",
   "Transform":"AWS::Serverless-2016-10-31",
   "Resources":{
      "StackUpdatesListener01":{
         "Type":"AWS::Serverless::Function",
         "Properties":{
            "InlineCode": "module.exports.handler = async (event, context) => { console.log('LOGGING', context); return { statusCode: 200 } }",
            "Handler": "index.handler",
            "Runtime":"nodejs12.x",
            "MemorySize":512,
            "Timeout":10,
            "Environment":{
               "Variables":{
                  "DESTINATION_ARN":""
               }
            },
            "Events":{
               "SubscribeEvent":{
                  "Type":"CloudWatchEvent",
                  "Properties": {
                    "Pattern": {
                        "source":[
                         "aws.logs"
                        ],
                        "detail-type":[
                            "AWS API Call via CloudTrail"
                        ],
                        "detail": {
                            "eventSource": [
                                "logs.amazonaws.com"
                            ],
                            "eventName":[
                                "CreateLogGroup"
                            ]
                        }
                    }
                }
               }
            },
            "Policies":[
               {
                  "Version":"2012-10-17",
                  "Statement":[
                     {
                        "Effect":"Allow",
                        "Action":[
                           "logs:PutSubscriptionFilter",
                           "logs:ListTagsLogGroup"
                        ],
                        "Resource":"*"
                     },
                     {
                        "Effect":"Allow",
                        "Action":[
                           "lambda:AddPermission"
                        ],
                        "Resource":"*"
                     }
                  ]
               }
            ]
         }
      }
   }
}

The Project-Specific Solution

This is the solution we've chosen to use. It is the most fine-grained solution, and it's the only one that fits the requirements we set at the beginning of the post, but it is also the hardest to implement and works only for solutions based on CloudFormation stacks. It worked so well that, once we'd ironed out all the wrinkles, we started providing it as an out-of-the-box feature for our customers.

What we'll do is basically add a ShipperLambda to each stack and pre-register all the project's log groups to that shipping Lambda in the template. We can then pass the ShipperLambda environment variables with all the metadata we want to add to the logs (service name, environment name, version, etc.), and what we get is a project-specific shipping solution.
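The metadata enrichment itself can be as simple as merging those environment variables into every record the shipper forwards. A minimal sketch, with `SERVICE_NAME`, `ENV_NAME`, and `SERVICE_VERSION` as hypothetical variable names:

```javascript
// Attach stack-level metadata (passed in as environment variables)
// to a parsed log record before forwarding it to the provider.
function enrich(record, env) {
    return {
        ...record,
        service: env.SERVICE_NAME,
        environment: env.ENV_NAME,
        version: env.SERVICE_VERSION
    }
}

const enriched = enrich(
    { message: 'order created' },
    { SERVICE_NAME: 'orders', ENV_NAME: 'qa', SERVICE_VERSION: '1.2.0' }
)
console.log(enriched.environment) // → qa
```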

How it is done

  1. For each stack we have, we'll add our solution provider's ShipperLambda (this costs nothing extra, because we only pay for invocations).

  2. For each Lambda in our stack, we’ll generate its own log group within our template (instead of waiting for it to be created by the Lambda)

  3. In the template, we register each log group to the local ShipperLambda.

Basically, each Lambda resource ("Func01") in the template will have these two additional resources alongside it:

"Func01LogGroup": {
    "Type": "AWS::Logs::LogGroup",
    "Properties": {
        "LogGroupName": {
            "Fn::Sub": [
                "/aws/lambda/${LambdaName}",
                {
                    "LambdaName": {
                        "Ref": "Func01Name"
                    }
                }
            ]
        }
    }
},
"Func01SubscriptionFilter": {
    "Type": "AWS::Logs::SubscriptionFilter",
    "Properties": {
        "LogGroupName": {
            "Ref": "Func01LogGroup"
        },
        "DestinationArn": {
            "Fn::GetAtt": [
                "ShippingLogsLambda",
                "Arn"
            ]
        },
        "FilterPattern": ""
    }
}
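Because the two resources are identical for every function, they are easy to generate. Here is a sketch that emits both template fragments for a given function's logical ID; the names are illustrative, and it mirrors the hand-written snippet above rather than any particular tool:

```javascript
// Generate the LogGroup + SubscriptionFilter pair for one Lambda's
// logical ID, mirroring the hand-written template snippet above.
function loggingResourcesFor(funcId, shipperId) {
    return {
        [`${funcId}LogGroup`]: {
            Type: 'AWS::Logs::LogGroup',
            Properties: {
                LogGroupName: {
                    'Fn::Sub': [
                        '/aws/lambda/${LambdaName}',
                        { LambdaName: { Ref: `${funcId}Name` } }
                    ]
                }
            }
        },
        [`${funcId}SubscriptionFilter`]: {
            Type: 'AWS::Logs::SubscriptionFilter',
            Properties: {
                LogGroupName: { Ref: `${funcId}LogGroup` },
                DestinationArn: { 'Fn::GetAtt': [shipperId, 'Arn'] },
                FilterPattern: ''
            }
        }
    }
}

const res = loggingResourcesFor('Func01', 'ShippingLogsLambda')
console.log(Object.keys(res)) // → [ 'Func01LogGroup', 'Func01SubscriptionFilter' ]
```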

One of the problems with this solution is the number of resources used in the CloudFormation template. AWS limits the number of resources per stack to 200. When you have multiple Lambda functions, adding two more resources (a LogGroup and a SubscriptionFilter) per function can push you over that limit fast. We solved this by using Nested Stacks, so all the logging resources are defined in a separate stack.

To sum everything up: if you have a small number of Lambdas, just manually connect them to your shipper Lambda. If your environments are distributed between regions, a LogGroupListener will make your life easier and ensure no log is ever lost. If you have multiple environments in the same region and need them written to different logging accounts, you'll have to either fine-tune the filter in the LogGroupListener or use the shipper-Lambda-per-project solution.
