Amazon EventBridge Pipes
A service that builds point-to-point integrations from source to target with built-in filtering and transformation
Overview
Amazon EventBridge Pipes is a service that builds point-to-point integrations from event sources to targets without writing code. It supports SQS, Kinesis Data Streams, DynamoDB Streams, Amazon MSK, and self-managed Kafka as sources, and processes each event through filtering, optional enrichment, and input transformation before delivering it to the target. You can inject custom logic through Lambda or Step Functions, enabling declarative construction of event integration patterns that previously required glue-code Lambda functions.
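As a sketch of what this declarative construction looks like, the dictionary below is shaped like a CreatePipe API request wiring a DynamoDB stream to an SQS queue. All names and ARNs are hypothetical placeholders, and the exact parameter shape should be checked against the EventBridge Pipes API reference.

```python
# Hypothetical pipe definition, shaped like a CreatePipe request body.
# All ARNs and names below are placeholders, not real resources.
pipe_definition = {
    "Name": "orders-ddb-to-sqs",
    "Source": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/2024-01-01T00:00:00.000",
    "SourceParameters": {
        "DynamoDBStreamParameters": {"StartingPosition": "LATEST", "BatchSize": 10},
        # Filter stage: keep only INSERT events from the stream.
        "FilterCriteria": {"Filters": [{"Pattern": '{"eventName": ["INSERT"]}'}]},
    },
    # Optional enrichment stage (a Lambda function here).
    "Enrichment": "arn:aws:lambda:us-east-1:123456789012:function:enrich-order",
    "Target": "arn:aws:sqs:us-east-1:123456789012:order-events",
    "TargetParameters": {
        # Input transformation: reshape the event for the target.
        "InputTemplate": '{"orderId": <$.dynamodb.NewImage.orderId.S>}'
    },
}
```

The point of the shape is that all four stages (source, filter, enrichment, target) appear as declarative configuration rather than code.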
The Four-Stage Pipeline - Source, Filter, Enrichment, and Target
An EventBridge Pipes pipeline consists of four stages: source, filtering, enrichment, and target. The source is where events originate; supported sources are SQS queues, Kinesis Data Streams, DynamoDB Streams, Amazon MSK topics, and self-managed Kafka topics. Filtering uses EventBridge's event pattern syntax to exclude unwanted events: for example, you can pass through only INSERT events from a DynamoDB stream, or only SQS messages where a specific field matches a condition. Events excluded by filtering are not billed, which directly contributes to cost optimization. Enrichment is an optional stage that calls a Lambda function, a Step Functions state machine, API Gateway, or an API Destination to supplement or transform event data. The target is the final delivery destination, with over 15 supported services including Lambda, Step Functions, SQS, SNS, Kinesis, EventBridge event buses, ECS tasks, and API Gateway.
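To illustrate the filtering semantics, here is a minimal sketch of exact-match event pattern matching in plain Python. Real EventBridge patterns also support prefix, numeric, and anything-but operators; this sketch covers only the "value must be in the listed set" case, applied to DynamoDB Streams records.

```python
def matches(pattern: dict, event: dict) -> bool:
    """Minimal sketch of EventBridge exact-match filtering: every
    pattern key must exist in the event, and the event's value must
    be one of the values listed for that key. Nested dicts recurse."""
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not isinstance(event.get(key), dict):
                return False
            if not matches(allowed, event[key]):
                return False
        elif event.get(key) not in allowed:
            return False
    return True

# Keep only INSERT events from a (mock) DynamoDB Streams batch.
pattern = {"eventName": ["INSERT"]}
batch = [
    {"eventName": "INSERT", "dynamodb": {"Keys": {"pk": {"S": "order-1"}}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"pk": {"S": "order-2"}}}},
]
kept = [e for e in batch if matches(pattern, e)]  # only the INSERT record survives
```

The MODIFY record is dropped before enrichment and target delivery, which is also the point at which it stops being billed.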
Batch Processing and Error Handling Design
EventBridge Pipes retrieves events from sources in batches for efficient processing. You can configure batch size (1 to 10,000) and batch window (up to 300 seconds), with the pipeline triggering when either the batch size is reached or the batch window expires. When using Kinesis or DynamoDB Streams as sources, the parallelization factor increases the number of concurrent processors per shard. For error handling, understanding the retry behavior specific to each source type is critical. With SQS sources, failed messages return to the queue after the visibility timeout expires and move to a dead-letter queue (DLQ) after exceeding the maximum receive count. With Kinesis or DynamoDB Streams sources, you can configure retry attempts, maximum record age, and bisect on function error. The bisect feature splits a failed batch in half for reprocessing, which is effective when a specific record within a batch causes the entire batch to fail. Designating SQS queues or SNS topics as failure destinations for failed records enables subsequent investigation and reprocessing.
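The bisect behavior can be sketched in a few lines of Python. This is a simplified model, not the service's implementation: it assumes at-least-once semantics (records ahead of the failure point are re-processed when a half is retried) and collects the isolated poison record as if sending it to a failure destination.

```python
def process_batch(records, handler):
    """Sketch of bisect-on-function-error: a failing batch is split in
    half and each half retried, isolating the poison record. A failing
    single-record batch is collected for the failure destination."""
    failed = []
    try:
        for record in records:
            handler(record)  # deliver one record; may raise
    except Exception:
        if len(records) == 1:
            failed.append(records[0])  # poison record -> failure destination
        else:
            mid = len(records) // 2
            failed += process_batch(records[:mid], handler)
            failed += process_batch(records[mid:], handler)
    return failed

def handler(record):
    if record.get("bad"):
        raise ValueError("poison record")

# One poison record in a batch of four: bisection isolates it while
# the three healthy records are eventually delivered.
failed = process_batch(
    [{"id": 1}, {"id": 2}, {"id": 3, "bad": True}, {"id": 4}], handler
)
```

Note that records processed before the failure are processed again on retry, which is why downstream consumers of such a pipeline should be idempotent.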
Event Transformation and InputTemplate Usage
EventBridge Pipes' input transformation converts event data received from the source into the format expected by the target. InputTemplate is a JSON template that references source event fields as variables, letting you freely compose the payload delivered to the target. For example, you can extract only the necessary fields from a DynamoDB Streams NewImage, transform them into a flat JSON structure, and send them to SQS, all without a Lambda function. Variables are referenced using JSONPath-style syntax (<$.detail.orderId>), and templates can combine static and dynamic values. Enrichment output can also be referenced within the template, enabling you to build new payloads that merge source event data with enrichment results. An effective pattern in practice is to use InputTemplate to shape payloads to the target service's API specification and eliminate intermediate Lambda functions. Removing Lambda from the pipeline avoids cold start latency, concurrent execution limits, and additional costs, yielding a simpler and more reliable integration.
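The template mechanics can be modeled with a small renderer: replace each <$.a.b.c> placeholder with the value found by walking that dotted path through the source event. This is a sketch of the idea only; the real service's path resolution and quoting rules are richer than this.

```python
import json
import re

def render(template: str, event: dict) -> str:
    """Sketch of InputTemplate-style rendering: each <$.a.b.c>
    placeholder is replaced by the value at that dotted path,
    JSON-encoded so the result stays valid JSON."""
    def resolve(match: re.Match) -> str:
        value = event
        for part in match.group(1).split("."):
            value = value[part]
        return json.dumps(value)
    return re.sub(r"<\$\.([\w.]+)>", resolve, template)

# Flatten a DynamoDB Streams NewImage into the payload an SQS
# consumer expects, with no Lambda in between.
record = {
    "eventName": "INSERT",
    "dynamodb": {"NewImage": {"orderId": {"S": "order-42"}, "amount": {"N": "19.99"}}},
}
template = '{"orderId": <$.dynamodb.NewImage.orderId.S>, "amount": <$.dynamodb.NewImage.amount.N>}'
out = json.loads(render(template, record))  # {"orderId": "order-42", "amount": "19.99"}
```

The static keys ("orderId", "amount") come from the template while the values come from the event, which is exactly the static/dynamic mix the template syntax is designed for.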