How To Verify Cosigned Container Images In Amazon ECS

In a previous blog post, we demonstrated how to sign container images with sigstore’s Cosign via AWS CodePipeline. Now it’s time to deploy that image, but how do we verify it is signed? In Kubernetes, we would use an admission controller to validate that the image is signed. Sigstore has its own admission controller, and there are other open-source options, like Connaisseur. But there is more than one way to orchestrate a container! In this blog post, we will verify signed images in Amazon Elastic Container Service, which does not have admission controllers.

Signing images lets others authenticate that they were built by you and your organization’s build pipeline. Once an image is determined to be valid, you can also set policies that say which valid images you trust to run on your system and which registries you trust without validation. We will examine this today in ECS with Lambda.

ECS background

In 2014, AWS released its first managed service for running containers, Elastic Container Service (ECS). According to Datadog in 2020, “nearly 90 percent of containers are orchestrated by Kubernetes, Amazon ECS, Mesos, or Nomad,” and that is still true today. Amazon uses ECS to power a number of its other services, including Amazon SageMaker, Amazon Polly, Amazon Lex, and AWS Batch, [2] and it offers several compute options for running your ECS containers:

  • AWS Fargate
  • EC2 Instances
  • AWS Outpost
  • AWS Local Zone
  • AWS Wavelength

ECS is a leading choice for container orchestration on the AWS platform, so ECS container security is as crucial as Kubernetes security.

Let’s walk through how to run only signed and verified containers on ECS with sigstore’s Cosign.

Solution Overview

Once a task starts [1], Amazon EventBridge [3] notifies our Lambda function [4]. The Lambda function uses the KMS key [5] provided in its environment variables, together with the Cosign Golang package, to verify that the image being run has a valid signature. If verification fails [6], the Lambda function stops the task and notifies the user using SNS [7].

This example is also available on GitHub.

Solution Components

  1. Amazon ECS Cluster and task
  2. Amazon ECR
  3. Amazon EventBridge
  4. AWS Lambda
  5. AWS KMS
  6. Golang Cosign Lambda function
  7. Amazon Simple Notification Service

Amazon ECS Cluster and Tasks

Below, we create an ECS cluster to demonstrate the operation of our solution.

ECS uses services and tasks to run containers. A service is similar to a deployment in Kubernetes: it maintains a desired number of tasks. Tasks are our container specs, each described by a task definition, which is required to run Docker containers in Amazon ECS.

Here are our task definitions; we have two, one signed and one unsigned, for testing purposes. Below is part of the signed task definition; in the containerDefinitions, we have our signed ECR image:

"taskDefinition": {
	"taskDefinitionArn": "arn:aws:ecs:us-west-2:12345678910:task-definition/cosign-ecs-task-definition:5",
	"containerDefinitions": [{
		"name": "cosign-ecs-container",
		"image": "",
		"cpu": 10,
		"memory": 512,
		"essential": true
	}],
	"cpu": "1024",
	"memory": "2048"
}

The container images come from Amazon ECR.

Elastic Container Registry

We use Elastic Container Registry in this example, as in the previous CodePipeline blog post. Cosign supports many different registries; if your registry isn't on the list, please open an issue and let us know about it.

Amazon EventBridge

EventBridge is the connective tissue between our Lambda function and ECS cluster events. It is a serverless event bus that makes it easier to build event-driven applications and pass events generated from your applications to other services. We have defined an EventBridge object that will send information about any ECS tasks running in the cluster to our Lambda function.
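EventBridge delivers the task's state to the function as a JSON "detail" payload. As a dependency-free sketch, extracting the container images our function must verify might look like the following (the struct models only an illustrative subset of the real event's fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// taskDetail is an illustrative subset of the ECS "Task State Change"
// event detail; real events carry many more fields.
type taskDetail struct {
	ClusterArn string `json:"clusterArn"`
	TaskArn    string `json:"taskArn"`
	Containers []struct {
		Image string `json:"image"`
	} `json:"containers"`
}

// imagesFromDetail extracts every container image referenced by the task.
func imagesFromDetail(detail []byte) ([]string, error) {
	var d taskDetail
	if err := json.Unmarshal(detail, &d); err != nil {
		return nil, err
	}
	images := make([]string, 0, len(d.Containers))
	for _, c := range d.Containers {
		images = append(images, c.Image)
	}
	return images, nil
}

func main() {
	// Abbreviated sample event detail.
	sample := []byte(`{
		"clusterArn": "arn:aws:ecs:us-west-2:12345678910:cluster/cosign-ecs-cluster",
		"taskArn": "arn:aws:ecs:us-west-2:12345678910:task/abc123",
		"containers": [{"image": "12345678910.dkr.ecr.us-west-2.amazonaws.com/app:v1"}]
	}`)
	images, err := imagesFromDetail(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(images)
}
```

Each image extracted this way is then handed to the verification code shown later in this post.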


AWS Lambda

Lambda allows us to respond to EventBridge events with custom behavior: in our case, validation using Cosign. We use the AWS Serverless Application Model (SAM) to make developing and deploying our Lambda function easier. With the AWS SAM CLI, we can build, package, and deploy our Lambda function, EventBridge rule, and other artifacts. We can even test the Lambda function locally! Below is what our SAM definition deploys for us: a Lambda function running our Golang code and the EventBridge rule, mentioned above, that connects it to ECS.
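As a sketch, the relevant part of such a SAM template might look like this (resource names, the parameter reference, and the CodeUri are illustrative, not the exact template from the example repo):

```yaml
Resources:
  CosignVerifyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: cosign-verify/
      Handler: main
      Runtime: go1.x
      Environment:
        Variables:
          KEY_ID: !Ref CosignKeyId   # KMS key used at signing time
      Events:
        EcsTaskStateChange:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - aws.ecs
              detail-type:
                - ECS Task State Change
```

The `EventBridgeRule` event source is what wires ECS task state changes to the function without any manually managed rule.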


AWS KMS

Our Lambda function needs access to the public key used to sign the container in CodePipeline. We store the key in KMS and have cosign retrieve the public key information and verify the signature stored in the ECR repo alongside the container image.

Golang Lambda function

The Golang lambda function integrates with the cosign package. Cosign does all the heavy lifting here.

~40 Lines of code to verify our containers

Get the public key information. 

	pubKey, err := sigs.LoadPublicKey(ctx, fmt.Sprintf("awskms:///%s", keyID))
	if err != nil {
		return false, err
	}

Parse out the container information.

	ref, err := name.ParseReference(containerImage)
	if err != nil {
		return false, err
	}

Sometimes we all need a little help; this gives us access to the ECR repo. 

	ecrHelper := ecrlogin.ECRHelper{ClientFactory: api.DefaultClientFactory{}}

	// Use the ECR credential helper as the registry keychain.
	opts := []remote.Option{
		remote.WithAuthFromKeychain(authn.NewKeychainFromHelper(ecrHelper)),
	}

Set it all up! 

	co := &cosign.CheckOpts{
		ClaimVerifier:      cosign.SimpleClaimVerifier,
		RegistryClientOpts: []ociremote.Option{ociremote.WithRemoteOptions(opts...)},
		SigVerifier:        pubKey,
	}


	log.Println("[INFO] COSIGN Verifying sig")
	verifiedSigs, _, err := cosign.VerifyImageSignatures(ctx, ref, co)
	if err != nil {
		log.Printf("[ERROR] COSIGN error: %v", err)
		return false, err
	}

	return len(verifiedSigs) > 0, err

Cosign’s VerifyImageSignatures returns an array of verified signatures, each carrying container information such as the payload that was signed, the base64-encoded signature, and the public cert used to sign the image. We could do more verification with this information, but here we only return true if there is something in the array.
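For example, each verified signature's payload follows the Simple Signing format, so one extra check could confirm that the docker-reference recorded at signing time matches the image we are running. A dependency-free sketch (the struct models only the Simple Signing fields needed here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// simpleSigning models the part of a Simple Signing payload we care about.
type simpleSigning struct {
	Critical struct {
		Identity struct {
			DockerReference string `json:"docker-reference"`
		} `json:"identity"`
	} `json:"critical"`
}

// payloadMatchesImage reports whether the signed payload names the image we expect.
func payloadMatchesImage(payload []byte, image string) (bool, error) {
	var ss simpleSigning
	if err := json.Unmarshal(payload, &ss); err != nil {
		return false, err
	}
	return ss.Critical.Identity.DockerReference == image, nil
}

func main() {
	payload := []byte(`{"critical":{"identity":{"docker-reference":"12345678910.dkr.ecr.us-west-2.amazonaws.com/app"}}}`)
	ok, err := payloadMatchesImage(payload, "12345678910.dkr.ecr.us-west-2.amazonaws.com/app")
	if err != nil {
		panic(err)
	}
	fmt.Println(ok)
}
```

This kind of check guards against a valid signature for a *different* image being replayed against the one actually running.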

Check out more in the Golang package documentation.

Amazon Simple Notification Service

When teams deploy ECS tasks or services that Cosign may stop, they need to know about it. The stop reason recorded on the ECS task is not intuitive, so we have integrated the stop functionality with SNS. The Lambda function alerts teams when a container is not verified and tells them which cluster and task definition caused it.

What does this all look like together? Well, we have a task definition with an unsigned container image. Let’s kick it off and watch the magic happen.

aws ecs run-task \
  --task-definition ${TASK_DEF_ARN} \
  --cluster ${CLUSTER_ARN} \
  --network-configuration \
  "awsvpcConfiguration={subnets=[${SUBNET_ID}],assignPublicIp=ENABLED}" \
  --launch-type FARGATE
As soon as the function runs, the Lambda logs below show that the signature was not verified and that the function stopped the task.

In the task’s Stopped reason field, “Lambda stopping ECS task” is highlighted in red.

Below is the SNS email notification letting me know there was an issue with my task.

Who knew failure was so fun!


All of this was tied together with Lambda, Golang, and EventBridge. The implementation is *reactive*, unlike an admission controller: it responds to, but doesn’t prevent, requests to run unsigned containers. Nevertheless, EventBridge and Lambda are fast enough that in our testing all unverified tasks were stopped quickly. Another drawback is that the Lambda function requires access to the same key that was used to sign the container. As implemented, the expressible policies are limited (all images must be signed by the same key) and apply to all clusters. Since AWS Lambda supports signing the code running in our Lambda function, it may be possible in the future for teams to verify and sign containers natively in ECS.

Security in any container environment, orchestrator, or tool is essential, and as these tools evolve, the barrier to entry will keep dropping. Let us know what you think of this example on Twitter or by email!

The code to run this Lambda function and verify your signed containers is in the example repo on GitHub.

If you are interested in getting involved or learning more about sigstore, please reach out via Slack or email, or join the weekly community call.


