Provides an independent configuration resource for S3 bucket lifecycle configuration.
An S3 Lifecycle configuration consists of one or more Lifecycle rules. Each rule consists of the following:

* Rule metadata (`id` and `status`)
* Filter identifying objects to which the rule applies
* One or more lifecycle transition and expiration actions

For more information see the Amazon S3 User Guide on Lifecycle Configuration Elements.
With neither a filter nor prefix specified, the Lifecycle rule applies to a subset of objects based on the key name prefix (`""`). This configuration is intended to replicate the default behavior of the `lifecycle_rule` parameter in the Terraform AWS Provider `aws_s3_bucket` resource prior to v4.0.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
With an empty `filter` configuration block, the Lifecycle rule applies to all objects in the bucket.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {}

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
The Lifecycle rule applies to a subset of objects based on the key name prefix (`logs/`).
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      prefix = "logs/"
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
If you want to apply a Lifecycle action to a subset of objects based on different key name prefixes, specify separate rules.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      prefix = "logs/"
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }

  rule {
    id = "rule-2"

    filter {
      prefix = "tmp/"
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
The Lifecycle rule specifies a filter based on a tag key and value. The rule then applies only to a subset of objects with the specific tag.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      tag {
        key   = "Name"
        value = "Staging"
      }
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
The Lifecycle rule directs Amazon S3 to perform lifecycle actions on objects with two tags (with the specific tag keys and values). Notice `tags` is wrapped in the `and` configuration block.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      and {
        tags = {
          Key1 = "Value1"
          Key2 = "Value2"
        }
      }
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
The Lifecycle rule directs Amazon S3 to perform lifecycle actions on objects with the specified prefix and two tags (with the specific tag keys and values). Notice both `prefix` and `tags` are wrapped in the `and` configuration block.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      and {
        prefix = "logs/"

        tags = {
          Key1 = "Value1"
          Key2 = "Value2"
        }
      }
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
Object size values are in bytes. The maximum filter size is 5TB. Some storage classes have minimum object size limitations; for more information, see Comparing the Amazon S3 storage classes.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      object_size_greater_than = 500
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
The `object_size_greater_than` value must be less than the `object_size_less_than` value. Notice both the object size range and prefix are wrapped in the `and` configuration block.
```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "rule-1"

    filter {
      and {
        prefix                   = "logs/"
        object_size_greater_than = 500
        object_size_less_than    = 64000
      }
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
  }
}
```
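The `expiration` block's `expired_object_delete_marker` argument, described in the argument reference below, does not appear in the examples above. A minimal sketch of a rule that removes orphaned delete markers in a versioned bucket might look like the following (the rule `id` is illustrative):

```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "remove-expired-delete-markers"

    filter {}

    # Removes delete markers that have no remaining noncurrent versions.
    # Conflicts with the date and days arguments of the expiration block.
    expiration {
      expired_object_delete_marker = true
    }

    status = "Enabled"
  }
}
```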
A full example, including an `aws_s3_bucket_acl` resource and multiple transition and expiration actions:

```terraform
resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_lifecycle_configuration" "bucket-config" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "log"

    expiration {
      days = 90
    }

    filter {
      and {
        prefix = "log/"

        tags = {
          rule      = "log"
          autoclean = "true"
        }
      }
    }

    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }
  }

  rule {
    id = "tmp"

    filter {
      prefix = "tmp/"
    }

    expiration {
      date = "2023-01-13T00:00:00Z"
    }

    status = "Enabled"
  }
}
```
The following example manages noncurrent object versions in a bucket with versioning enabled:

```terraform
resource "aws_s3_bucket" "versioning_bucket" {
  bucket = "my-versioning-bucket"
}

resource "aws_s3_bucket_acl" "versioning_bucket_acl" {
  bucket = aws_s3_bucket.versioning_bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.versioning_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "versioning-bucket-config" {
  # Must have bucket versioning enabled first
  depends_on = [aws_s3_bucket_versioning.versioning]

  bucket = aws_s3_bucket.versioning_bucket.id

  rule {
    id = "config"

    filter {
      prefix = "config/"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "STANDARD_IA"
    }

    noncurrent_version_transition {
      noncurrent_days = 60
      storage_class   = "GLACIER"
    }

    status = "Enabled"
  }
}
```
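None of the examples above use the `abort_incomplete_multipart_upload` block described in the argument reference below. A minimal sketch might look like the following (the 7-day window and rule `id` are illustrative):

```terraform
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    id = "abort-incomplete-uploads"

    filter {}

    # Permanently removes the parts of multipart uploads that have not
    # completed within 7 days of initiation.
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }

    status = "Enabled"
  }
}
```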
This resource supports the following arguments:

* `bucket` - (Required) Name of the source S3 bucket you want Amazon S3 to monitor.
* `expected_bucket_owner` - (Optional) Account ID of the expected bucket owner. If the bucket is owned by a different account, the request will fail with an HTTP 403 (Access Denied) error.
* `rule` - (Required) List of configuration blocks describing the rules managing the lifecycle. See below.

The `rule` configuration block supports the following arguments:
* `abort_incomplete_multipart_upload` - (Optional) Configuration block that specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will wait before permanently removing all parts of the upload. See below.
* `expiration` - (Optional) Configuration block that specifies the expiration for the lifecycle of the object in the form of date, days, and whether the object has a delete marker. See below.
* `filter` - (Optional) Configuration block used to identify objects that a Lifecycle Rule applies to. See below. If not specified, the rule will default to using `prefix`.
* `id` - (Required) Unique identifier for the rule. The value cannot be longer than 255 characters.
* `noncurrent_version_expiration` - (Optional) Configuration block that specifies when noncurrent object versions expire. See below.
* `noncurrent_version_transition` - (Optional) Set of configuration blocks that specify the transition rule for the lifecycle rule that describes when noncurrent objects transition to a specific storage class. See below.
* `prefix` - (Optional) DEPRECATED Use `filter` instead. This has been deprecated by Amazon S3. Prefix identifying one or more objects to which the rule applies. Defaults to an empty string (`""`) if `filter` is not specified.
* `status` - (Required) Whether the rule is currently being applied. Valid values: `Enabled` or `Disabled`.
* `transition` - (Optional) Set of configuration blocks that specify when an Amazon S3 object transitions to a specified storage class. See below.

The `abort_incomplete_multipart_upload` configuration block supports the following arguments:
* `days_after_initiation` - Number of days after which Amazon S3 aborts an incomplete multipart upload.

The `expiration` configuration block supports the following arguments:
* `date` - (Optional) Date the object is to be moved or deleted. The date value must be in RFC3339 full-date format, e.g. `2023-08-22`.
* `days` - (Optional) Lifetime, in days, of the objects that are subject to the rule. The value must be a non-zero positive integer.
* `expired_object_delete_marker` - (Optional, Conflicts with `date` and `days`) Indicates whether Amazon S3 will remove a delete marker with no noncurrent versions. If set to `true`, the delete marker will be expired; if set to `false`, the policy takes no action.

The `filter` configuration block supports the following arguments:
* `and` - (Optional) Configuration block used to apply a logical `AND` to two or more predicates. See below. The Lifecycle Rule will apply to any object matching all the predicates configured inside the `and` block.
* `object_size_greater_than` - (Optional) Minimum object size (in bytes) to which the rule applies.
* `object_size_less_than` - (Optional) Maximum object size (in bytes) to which the rule applies.
* `prefix` - (Optional) Prefix identifying one or more objects to which the rule applies. Defaults to an empty string (`""`) if not specified.
* `tag` - (Optional) Configuration block for specifying a tag key and value. See below.

The `noncurrent_version_expiration` configuration block supports the following arguments:
* `newer_noncurrent_versions` - (Optional) Number of noncurrent versions Amazon S3 will retain. Must be a non-zero positive integer.
* `noncurrent_days` - (Optional) Number of days an object is noncurrent before Amazon S3 can perform the associated action. Must be a positive integer.

The `noncurrent_version_transition` configuration block supports the following arguments:
* `newer_noncurrent_versions` - (Optional) Number of noncurrent versions Amazon S3 will retain. Must be a non-zero positive integer.
* `noncurrent_days` - (Optional) Number of days an object is noncurrent before Amazon S3 can perform the associated action.
* `storage_class` - (Required) Class of storage used to store the object. Valid Values: `GLACIER`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `DEEP_ARCHIVE`, `GLACIER_IR`.

The `transition` configuration block supports the following arguments:
* `date` - (Optional, Conflicts with `days`) Date objects are transitioned to the specified storage class. The date value must be in RFC3339 full-date format, e.g. `2023-08-22`.
* `days` - (Optional, Conflicts with `date`) Number of days after creation when objects are transitioned to the specified storage class. The value must be a positive integer. If neither `days` nor `date` is specified, defaults to `0`. Valid values depend on `storage_class`; see Transition objects using Amazon S3 Lifecycle for more details.
* `storage_class` - (Required) Class of storage used to store the object. Valid Values: `GLACIER`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `DEEP_ARCHIVE`, `GLACIER_IR`.

The `and` configuration block supports the following arguments:
* `object_size_greater_than` - (Optional) Minimum object size to which the rule applies. Value must be at least `0` if specified.
* `object_size_less_than` - (Optional) Maximum object size to which the rule applies. Value must be at least `1` if specified.
* `prefix` - (Optional) Prefix identifying one or more objects to which the rule applies.
* `tags` - (Optional) Key-value map of resource tags. All of these tags must exist in the object's tag set in order for the rule to apply.

The `tag` configuration block supports the following arguments:
* `key` - (Required) Name of the object key.
* `value` - (Required) Value of the tag.

This resource exports the following attributes in addition to the arguments above:
* `id` - The `bucket` or the `bucket` and `expected_bucket_owner` separated by a comma (`,`) if the latter is provided.

In Terraform v1.5.0 and later, use an `import` block to import S3 bucket lifecycle configuration using the `bucket` or using the `bucket` and `expected_bucket_owner` separated by a comma (`,`). For example:

If the owner (account ID) of the source bucket is the same account used to configure the Terraform AWS Provider, import using the `bucket`:
```terraform
import {
  to = aws_s3_bucket_lifecycle_configuration.example
  id = "bucket-name"
}
```
If the owner (account ID) of the source bucket differs from the account used to configure the Terraform AWS Provider, import using the `bucket` and `expected_bucket_owner` separated by a comma (`,`):
```terraform
import {
  to = aws_s3_bucket_lifecycle_configuration.example
  id = "bucket-name,123456789012"
}
```
Using `terraform import`, import S3 bucket lifecycle configuration using the `bucket` or using the `bucket` and `expected_bucket_owner` separated by a comma (`,`). For example:

If the owner (account ID) of the source bucket is the same account used to configure the Terraform AWS Provider, import using the `bucket`:
```console
% terraform import aws_s3_bucket_lifecycle_configuration.example bucket-name
```
If the owner (account ID) of the source bucket differs from the account used to configure the Terraform AWS Provider, import using the `bucket` and `expected_bucket_owner` separated by a comma (`,`):
```console
% terraform import aws_s3_bucket_lifecycle_configuration.example bucket-name,123456789012
```