The AWS::Lambda::EventSourceMapping resource creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics. In particular, each of the topics describes the required and optional parameters for the specific event source.
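A minimal configuration sketch for an Amazon SQS event source is shown below; the function name and queue ARN are placeholders rather than values from this page, and the flat attribute names follow the schema documented here.

```terraform
# Hypothetical example: map an existing SQS queue to an existing Lambda function.
resource "awscc_lambda_event_source_mapping" "sqs_example" {
  function_name    = "MyFunction"                                  # name, full ARN, or partial ARN
  event_source_arn = "arn:aws:sqs:us-west-2:123456789012:my-queue" # placeholder queue ARN
  batch_size       = 10
  enabled          = true
}
```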
function_name
(String) The name or ARN of the Lambda function.
Name formats
- Function name: MyFunction
- Function ARN: arn:aws:lambda:us-west-2:123456789012:function:MyFunction
- Version or alias ARN: arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD
- Partial ARN: 123456789012:function:MyFunction
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
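As a sketch, any of the formats above can be assigned to function_name; the alias-qualified ARN and queue ARN below reuse the illustrative values from the list.

```terraform
# Hypothetical: the same mapping declared against a function alias ARN instead of a bare name.
resource "awscc_lambda_event_source_mapping" "by_alias_arn" {
  function_name    = "arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD"
  event_source_arn = "arn:aws:sqs:us-west-2:123456789012:my-queue" # placeholder
}
```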
amazon_managed_kafka_event_source_config
(Attributes) Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source. (see below for nested schema)
batch_size
(Number) The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
bisect_batch_on_function_error
(Boolean) (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
destination_config
(Attributes) (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it. (see below for nested schema)
document_db_event_source_config
(Attributes) Specific configuration settings for a DocumentDB event source. (see below for nested schema)
enabled
(Boolean) When true, the event source mapping is active. When false, Lambda pauses polling and invocation.
Default: True
event_source_arn
(String) The Amazon Resource Name (ARN) of the event source.
filter_criteria
(Attributes) An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering. (see below for nested schema)
function_response_types
(List of String) (Streams and SQS) A list of current response type enums applied to the event source mapping.
Valid Values: ReportBatchItemFailures
maximum_batching_window_in_seconds
(Number) The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function.
Default (Kinesis, DynamoDB, Amazon SQS event sources): 0
Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms
Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
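The following sketch illustrates that related setting; the queue ARN and values are placeholders for illustration only.

```terraform
# Hypothetical SQS mapping: a batch_size above 10 is paired with a batching window of at least 1 second.
resource "awscc_lambda_event_source_mapping" "sqs_batched" {
  function_name                      = "MyFunction"
  event_source_arn                   = "arn:aws:sqs:us-west-2:123456789012:my-queue"
  batch_size                         = 100
  maximum_batching_window_in_seconds = 5
}
```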
maximum_record_age_in_seconds
(Number) (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.
The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
maximum_retry_attempts
(Number) (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
parallelization_factor
(Number) (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
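A sketch of how the stream-only error-handling and throughput settings fit together follows; the stream ARN and numeric values are placeholders.

```terraform
# Hypothetical Kinesis mapping showing the Kinesis/DynamoDB-only settings described above.
resource "awscc_lambda_event_source_mapping" "kinesis_example" {
  function_name                  = "MyFunction"
  event_source_arn               = "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream"
  starting_position              = "LATEST" # required for Kinesis and DynamoDB
  batch_size                     = 200
  parallelization_factor         = 2        # process 2 batches per shard concurrently
  bisect_batch_on_function_error = true     # split a failing batch in two and retry
  maximum_record_age_in_seconds  = 3600     # discard records older than one hour
  maximum_retry_attempts         = 5        # then stop retrying
}
```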
queues
(List of String) (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
scaling_config
(Attributes) (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources. (see below for nested schema)
self_managed_event_source
(Attributes) The self-managed Apache Kafka cluster for your event source. (see below for nested schema)
self_managed_kafka_event_source_config
(Attributes) Specific configuration settings for a self-managed Apache Kafka event source. (see below for nested schema)
source_access_configurations
(Attributes List) An array of the authentication protocol, VPC components, or virtual host to secure and define your event source. (see below for nested schema)
starting_position
(String) The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
starting_position_timestamp
(Number) With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
topics
(List of String) The name of the Kafka topic.
tumbling_window_in_seconds
(Number) (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
Read-Only:
event_source_mapping_id
(String)
id
(String) Uniquely identifies the resource.
amazon_managed_kafka_event_source_config
Optional:
consumer_group_id
(String) The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
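Below is a sketch of an Amazon MSK mapping that pins the consumer group ID; the cluster ARN, topic, and group name are placeholders, and the nested attribute-style syntax is assumed from awscc provider conventions.

```terraform
# Hypothetical Amazon MSK mapping; the consumer group ID cannot be changed after creation.
resource "awscc_lambda_event_source_mapping" "msk_example" {
  function_name     = "MyFunction"
  event_source_arn  = "arn:aws:kafka:us-west-2:123456789012:cluster/my-cluster/11111111-2222-3333-4444-555555555555-1"
  topics            = ["orders"]
  starting_position = "LATEST"
  amazon_managed_kafka_event_source_config = {
    consumer_group_id = "my-consumer-group"
  }
}
```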
destination_config
Optional:
on_failure
(Attributes) The destination configuration for failed invocations. (see below for nested schema)
destination_config.on_failure
Optional:
destination
(String) The Amazon Resource Name (ARN) of the destination resource.
To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination.
To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination.
To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
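A sketch of routing failed Kinesis invocation records to an SQS queue follows; the stream and queue ARNs are placeholders, and the nested attribute-style syntax is assumed from awscc provider conventions.

```terraform
# Hypothetical: send records from failed Kinesis invocations to an SQS queue.
resource "awscc_lambda_event_source_mapping" "with_on_failure" {
  function_name          = "MyFunction"
  event_source_arn       = "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream"
  starting_position      = "TRIM_HORIZON"
  maximum_retry_attempts = 2
  destination_config = {
    on_failure = {
      destination = "arn:aws:sqs:us-west-2:123456789012:my-dlq" # placeholder queue ARN
    }
  }
}
```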
document_db_event_source_config
Optional:
collection_name
(String) The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
database_name
(String) The name of the database to consume within the DocumentDB cluster.
full_document
(String) Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
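A sketch of this nested block follows; the cluster ARN, database, and collection names are placeholders, and a real DocumentDB mapping also needs authentication settings via source_access_configurations.

```terraform
# Hypothetical Amazon DocumentDB mapping focused on the document_db_event_source_config block.
resource "awscc_lambda_event_source_mapping" "docdb_example" {
  function_name    = "MyFunction"
  event_source_arn = "arn:aws:rds:us-west-2:123456789012:cluster:my-docdb-cluster"
  document_db_event_source_config = {
    database_name   = "mydb"
    collection_name = "orders"       # omit to consume all collections
    full_document   = "UpdateLookup" # include a copy of the entire document on updates
  }
}
```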
filter_criteria
Optional:
filters
(Attributes List) A list of filters. (see below for nested schema)
filter_criteria.filters
Optional:
pattern
(String) A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
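A sketch of a filter pattern follows; the queue ARN and the filter fields are illustrative, and jsonencode is used only as a convenient way to build the JSON pattern string.

```terraform
# Hypothetical event filter: only invoke the function for records whose body has status "active".
resource "awscc_lambda_event_source_mapping" "filtered" {
  function_name    = "MyFunction"
  event_source_arn = "arn:aws:sqs:us-west-2:123456789012:my-queue"
  filter_criteria = {
    filters = [
      { pattern = jsonencode({ body = { status = ["active"] } }) }
    ]
  }
}
```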
scaling_config
Optional:
maximum_concurrency
(Number) Limits the number of concurrent instances that the SQS event source can invoke.
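A sketch of capping SQS-driven concurrency follows; the queue ARN and the concurrency value are placeholders.

```terraform
# Hypothetical: cap the SQS event source at 50 concurrent function invocations.
resource "awscc_lambda_event_source_mapping" "sqs_capped" {
  function_name    = "MyFunction"
  event_source_arn = "arn:aws:sqs:us-west-2:123456789012:my-queue"
  scaling_config = {
    maximum_concurrency = 50
  }
}
```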
self_managed_event_source
Optional:
endpoints
(Attributes) The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"]. (see below for nested schema)
self_managed_event_source.endpoints
Optional:
kafka_bootstrap_servers
(List of String) The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
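A sketch of a self-managed Apache Kafka mapping follows; the broker hosts, topic, and Secrets Manager ARN are placeholders, and the nested attribute-style syntax is assumed from awscc provider conventions.

```terraform
# Hypothetical self-managed Apache Kafka mapping with SASL/SCRAM authentication.
resource "awscc_lambda_event_source_mapping" "self_managed_kafka" {
  function_name     = "MyFunction"
  topics            = ["orders"]
  starting_position = "TRIM_HORIZON"
  self_managed_event_source = {
    endpoints = {
      kafka_bootstrap_servers = ["kafka1.example.com:9092", "kafka2.example.com:9092"]
    }
  }
  source_access_configurations = [
    {
      type = "SASL_SCRAM_512_AUTH"
      uri  = "arn:aws:secretsmanager:us-west-2:123456789012:secret:kafka-credentials"
    }
  ]
}
```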
self_managed_kafka_event_source_config
Optional:
consumer_group_id
(String) The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
source_access_configurations
Optional:
type
(String) The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
BASIC_AUTH - (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
BASIC_AUTH - (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
VPC_SUBNET - (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
VPC_SECURITY_GROUP - (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
SASL_SCRAM_256_AUTH - (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
SASL_SCRAM_512_AUTH - (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
VIRTUAL_HOST - (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
CLIENT_CERTIFICATE_TLS_AUTH - (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
SERVER_ROOT_CA_CERTIFICATE - (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
uri
(String) The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
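A sketch of type/uri pairs for an Amazon MQ (RabbitMQ) source follows; the broker ARN, secret ARN, queue name, and virtual host are placeholders, and the list-of-objects syntax is assumed from awscc provider conventions.

```terraform
# Hypothetical Amazon MQ (RabbitMQ) mapping showing source_access_configurations entries.
resource "awscc_lambda_event_source_mapping" "rabbitmq_example" {
  function_name    = "MyFunction"
  event_source_arn = "arn:aws:mq:us-west-2:123456789012:broker:my-broker:b-1234" # placeholder broker ARN
  queues           = ["my-queue"]
  source_access_configurations = [
    { type = "BASIC_AUTH", uri = "arn:aws:secretsmanager:us-west-2:123456789012:secret:mq-credentials" },
    { type = "VIRTUAL_HOST", uri = "/production" } # placeholder virtual host name
  ]
}
```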
Import is supported using the following syntax:
$ terraform import awscc_lambda_event_source_mapping.example <resource ID>