Terraform AWS Provider Version 4 Upgrade Guide

Version 4.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. We intend this guide to help with that process and focus only on changes from version 3.X to version 4.0.0. See the Version 3 Upgrade Guide for information about upgrading from 2.X to version 3.0.0.

We previously marked most of the changes we outline in this guide as deprecated in the Terraform plan/apply output throughout previous provider releases. You can find these changes, including deprecation notices, in the Terraform AWS Provider CHANGELOG.

Provider Version Configuration

Use version constraints when configuring Terraform providers. If you are following that recommendation, update the version constraints in your Terraform configuration and run terraform init -upgrade to download the new version.

For example, given this previous configuration:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}

provider "aws" {
  # Configuration options
}

Update to the latest 4.X version:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
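
With the constraint updated, download and install the new provider version:

$ terraform init -upgrade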

Changes to Authentication

The authentication configuration for the AWS Provider has changed in this version to match the behavior of other AWS products, including the AWS SDK and AWS CLI. _This will cause authentication failures in AWS provider configurations where you set a non-empty profile in the provider configuration but the profile does not correspond to an AWS profile with valid credentials._

Precedence for authentication settings is as follows:

1. Provider configuration
2. Environment variables
3. Shared credentials files
4. Shared configuration files
5. Container credentials
6. Instance profile credentials and region

In previous versions of the provider, you could explicitly set profile in the provider, and if the profile did not correspond to valid credentials, the provider would use credentials from environment variables. Starting in v4.0, the Terraform AWS provider enforces the precedence shown above, similarly to how the AWS SDK and AWS CLI behave.

In other words, when you explicitly set profile in the provider configuration, the AWS provider will not fall back to environment variables, per the precedence shown above. Before v4.0, if a configured profile did not correspond to an AWS profile or valid credentials, the provider would attempt to use environment variables; this is no longer the case. An explicitly set profile that does not have valid credentials will cause an authentication error.

For example, with the following configuration, the environment variables will not be used:

$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"

provider "aws" {
  region  = "us-west-2"
  profile = "customprofile"
}

New Provider Arguments

Version 4.x adds new provider arguments, including:

- shared_config_files - List of paths to AWS shared configuration files.
- shared_credentials_files - List of paths to AWS shared credentials files (replaces the singular shared_credentials_file).
- ec2_metadata_service_endpoint - Address of the EC2 metadata service (IMDS) endpoint to use.
- ec2_metadata_service_endpoint_mode - Mode to use in communicating with the metadata service. Valid values are IPv4 and IPv6.
- use_dualstack_endpoint - Force the provider to resolve endpoints with DualStack capability.
- use_fips_endpoint - Force the provider to resolve endpoints with FIPS capability.
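
As a minimal sketch, the new shared_config_files and shared_credentials_files arguments accept lists of paths (the file paths and profile name below are placeholders):

provider "aws" {
  shared_config_files      = ["/home/user/.aws/config"]
  shared_credentials_files = ["/home/user/.aws/credentials"]
  profile                  = "customprofile"
}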

For example, in previous versions, to use FIPS endpoints, you would need to provide all the FIPS endpoints that you wanted to use in the endpoints configuration block:

provider "aws" {
  endpoints {
    ec2 = "https://ec2-fips.us-west-2.amazonaws.com"
    s3  = "https://s3-fips.us-west-2.amazonaws.com"
    sts = "https://sts-fips.us-west-2.amazonaws.com"
  }
}

In v4.0.0, you can still set endpoints in the same way. However, you can instead use the use_fips_endpoint argument to have the provider automatically resolve FIPS endpoints for all supported services:

provider "aws" {
  use_fips_endpoint = true
}

Note that the provider can only resolve FIPS endpoints where AWS provides FIPS support. Support depends on the service and may include us-east-1, us-east-2, us-west-1, us-west-2, us-gov-east-1, us-gov-west-1, and ca-central-1. For more information, see Federal Information Processing Standard (FIPS) 140-2.

Changes to S3 Bucket Drift Detection

To remediate the breaking changes introduced to the aws_s3_bucket resource in v4.0.0 of the AWS Provider, v4.9.0 and later retain the same configuration parameters of the aws_s3_bucket resource as in v3.x. Functionality of the aws_s3_bucket resource differs from v3.x only in that Terraform will perform drift detection for each of the following parameters only if a configuration value is provided:

- acceleration_status
- acl
- cors_rule
- grant
- lifecycle_rule
- logging
- object_lock_configuration
- policy
- replication_configuration
- request_payer
- server_side_encryption_configuration
- versioning
- website

Thus, if one of these parameters was once configured and then is entirely removed from an aws_s3_bucket resource configuration, Terraform will not pick up on these changes on a subsequent terraform plan or terraform apply.

For example, given the following configuration with a single cors_rule:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

When updated to the following configuration without a cors_rule:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

Terraform CLI with v4.9.0 of the AWS Provider will report back:

aws_s3_bucket.example: Refreshing state... [id=yournamehere]
...
No changes. Your infrastructure matches the configuration.

With that said, to manage changes to these parameters in the aws_s3_bucket resource, practitioners should configure each parameter's respective standalone resource and perform updates directly on those new configurations. The parameters are mapped to the standalone resources as follows:

| aws_s3_bucket Parameter | Standalone Resource |
| --- | --- |
| acceleration_status | aws_s3_bucket_accelerate_configuration |
| acl | aws_s3_bucket_acl |
| cors_rule | aws_s3_bucket_cors_configuration |
| grant | aws_s3_bucket_acl |
| lifecycle_rule | aws_s3_bucket_lifecycle_configuration |
| logging | aws_s3_bucket_logging |
| object_lock_configuration | aws_s3_bucket_object_lock_configuration |
| policy | aws_s3_bucket_policy |
| replication_configuration | aws_s3_bucket_replication_configuration |
| request_payer | aws_s3_bucket_request_payment_configuration |
| server_side_encryption_configuration | aws_s3_bucket_server_side_encryption_configuration |
| versioning | aws_s3_bucket_versioning |
| website | aws_s3_bucket_website_configuration |

Going back to the earlier example, given the following configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Practitioners can upgrade to v4.9.0 and then introduce the standalone aws_s3_bucket_cors_configuration resource, e.g.

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"
  # ... other configuration ...
}

resource "aws_s3_bucket_cors_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Depending on the tools available to you, the above configuration can either be directly applied with Terraform or the standalone resource can be imported into Terraform state. Please refer to each standalone resource's _Import_ documentation for the proper syntax.
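
For example, the aws_s3_bucket_cors_configuration resource above can be imported into Terraform state using the bucket name:

$ terraform import aws_s3_bucket_cors_configuration.example yournamehere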

Once the standalone resources are managed by Terraform, updates and removal can be performed as needed.

The following sections depict standalone resource adoption per individual parameter. Adopting the standalone resources is not required in order to upgrade but is recommended to ensure Terraform detects drift. The examples below are by no means exhaustive; they aim to illustrate the important concepts when migrating to a standalone resource whose parameters may not entirely align with the corresponding parameter in the aws_s3_bucket resource.

Migrating to aws_s3_bucket_accelerate_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  acceleration_status = "Enabled"
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_accelerate_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  status = "Enabled"
}

Migrating to aws_s3_bucket_acl

With acl

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"
  acl    = "private"

  # ... other configuration ...
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

With grant

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  grant {
    id          = data.aws_canonical_user_id.current_user.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }

  grant {
    type        = "Group"
    permissions = ["READ_ACP", "WRITE"]
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id

  access_control_policy {
    grant {
      grantee {
        id   = data.aws_canonical_user_id.current_user.id
        type = "CanonicalUser"
      }
      permission = "FULL_CONTROL"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
      permission = "READ_ACP"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
      permission = "WRITE"
    }

    owner {
      id = data.aws_canonical_user_id.current_user.id
    }
  }
}

Migrating to aws_s3_bucket_cors_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_cors_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Migrating to aws_s3_bucket_lifecycle_configuration

For Lifecycle Rules with no prefix previously configured

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "Keep previous version 30 days, then in Glacier another 60"
    enabled = true

    noncurrent_version_transition {
      days          = 30
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id                                     = "Delete old incomplete multi-part uploads"
    enabled                                = true
    abort_incomplete_multipart_upload_days = 7
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "Keep previous version 30 days, then in Glacier another 60"
    status = "Enabled"

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "GLACIER"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }

  rule {
    id     = "Delete old incomplete multi-part uploads"
    status = "Enabled"

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}

For Lifecycle Rules with prefix previously configured as an empty string

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "log-expiration"
    enabled = true
    prefix  = ""

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-expiration"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

For Lifecycle Rules with prefix

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "log-expiration"
    enabled = true
    prefix  = "foobar"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-expiration"
    status = "Enabled"

    filter {
      prefix = "foobar"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

For Lifecycle Rules with prefix and tags

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  lifecycle_rule {
    id      = "log"
    enabled = true
    prefix  = "log/"

    tags = {
      rule      = "log"
      autoclean = "true"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id      = "tmp"
    prefix  = "tmp/"
    enabled = true

    expiration {
      date = "2022-12-31"
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log"
    status = "Enabled"

    filter {
      and {
        prefix = "log/"

        tags = {
          rule      = "log"
          autoclean = "true"
        }
      }
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  rule {
    id = "tmp"

    filter {
      prefix = "tmp/"
    }

    expiration {
      date = "2022-12-31T00:00:00Z"
    }

    status = "Enabled"
  }
}

Migrating to aws_s3_bucket_logging

Given this previous configuration:

resource "aws_s3_bucket" "log_bucket" {
  # ... other configuration ...
  bucket = "example-log-bucket"
}

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  logging {
    target_bucket = aws_s3_bucket.log_bucket.id
    target_prefix = "log/"
  }
}

Update the configuration to:

resource "aws_s3_bucket" "log_bucket" {
  bucket = "example-log-bucket"

  # ... other configuration ...
}

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_logging" "example" {
  bucket        = aws_s3_bucket.example.id
  target_bucket = aws_s3_bucket.log_bucket.id
  target_prefix = "log/"
}

Migrating to aws_s3_bucket_object_lock_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  object_lock_configuration {
    object_lock_enabled = "Enabled"

    rule {
      default_retention {
        mode = "COMPLIANCE"
        days = 3
      }
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  object_lock_enabled = true
}

resource "aws_s3_bucket_object_lock_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    default_retention {
      mode = "COMPLIANCE"
      days = 3
    }
  }
}

Migrating to aws_s3_bucket_policy

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  policy = <<EOF
{
  "Id": "Policy1446577137248",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_elb_service_account.current.arn}"
      },
      "Resource": "arn:${data.aws_partition.current.partition}:s3:::yournamehere/*",
      "Sid": "Stmt1446575236270"
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.example.id
  policy = <<EOF
{
  "Id": "Policy1446577137248",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_elb_service_account.current.arn}"
      },
      "Resource": "${aws_s3_bucket.example.arn}/*",
      "Sid": "Stmt1446575236270"
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

Migrating to aws_s3_bucket_replication_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  provider = aws.central
  bucket   = "yournamehere"

  # ... other configuration ...

  replication_configuration {
    role = aws_iam_role.replication.arn
    rules {
      id     = "foobar"
      status = "Enabled"
      filter {
        tags = {}
      }
      destination {
        bucket        = aws_s3_bucket.destination.arn
        storage_class = "STANDARD"
        replication_time {
          status  = "Enabled"
          minutes = 15
        }
        metrics {
          status  = "Enabled"
          minutes = 15
        }
      }
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  provider = aws.central
  bucket   = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_replication_configuration" "example" {
  bucket = aws_s3_bucket.source.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "foobar"
    status = "Enabled"

    filter {}

    delete_marker_replication {
      status = "Enabled"
    }

    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"

      replication_time {
        status = "Enabled"
        time {
          minutes = 15
        }
      }

      metrics {
        status = "Enabled"
        event_threshold {
          minutes = 15
        }
      }
    }
  }
}

Migrating to aws_s3_bucket_request_payment_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  request_payer = "Requester"
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_request_payment_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  payer  = "Requester"
}

Migrating to aws_s3_bucket_server_side_encryption_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.mykey.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

Migrating to aws_s3_bucket_versioning

Buckets With Versioning Enabled

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  versioning {
    enabled = true
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Buckets With Versioning Disabled or Suspended

Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of versioning.enabled = false in your aws_s3_bucket resource will differ and thus the migration to the aws_s3_bucket_versioning resource will also differ as follows.

If you are migrating from the Terraform AWS Provider v3.70.0 or later:

- For new S3 buckets, versioning.enabled = false is interpreted as the bucket versioning status being Disabled.
- For existing S3 buckets, versioning.enabled = false is interpreted as the bucket versioning status being Suspended.

If you are migrating from an earlier version of the Terraform AWS Provider:

- For both new and existing S3 buckets, versioning.enabled = false is interpreted as the bucket versioning status being Suspended.

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  versioning {
    enabled = false
  }
}

Update the configuration to one of the following:
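
If migrating from v3.70.0 or later and the bucket's versioning status is Disabled (Disabled applies only to buckets that have never had versioning enabled):

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Disabled"
  }
}

Otherwise, if the bucket's versioning status is Suspended:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Suspended"
  }
}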

Ensure Objects Depend on Versioning

When you create an object whose version_id you need and an aws_s3_bucket_versioning resource in the same configuration, you are more likely to have success by ensuring the s3_object depends either implicitly (see below) or explicitly (i.e., using depends_on = [aws_s3_bucket_versioning.example]) on the aws_s3_bucket_versioning resource.

This example shows the aws_s3_object.example depending implicitly on the versioning resource through the reference to aws_s3_bucket_versioning.example.id to define bucket:

resource "aws_s3_bucket" "example" {
  bucket = "yotto"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_object" "example" {
  bucket = aws_s3_bucket_versioning.example.id
  key    = "droeloe"
  source = "example.txt"
}
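
Alternatively, the dependency can be made explicit with depends_on; a minimal sketch, using the same attribute values as the example above:

resource "aws_s3_object" "example" {
  bucket = aws_s3_bucket.example.id
  key    = "droeloe"
  source = "example.txt"

  # Explicitly wait for versioning to be configured before creating the object
  depends_on = [aws_s3_bucket_versioning.example]
}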

Migrating to aws_s3_bucket_website_configuration

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

Update the configuration to:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

Given this previous configuration that uses the aws_s3_bucket parameter website_domain with aws_route53_record:

resource "aws_route53_zone" "main" {
  name = "domain.test"
}

resource "aws_s3_bucket" "website" {
  # ... other configuration ...
  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

resource "aws_route53_record" "alias" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www"
  type    = "A"

  alias {
    zone_id                = aws_s3_bucket.website.hosted_zone_id
    name                   = aws_s3_bucket.website.website_domain
    evaluate_target_health = true
  }
}

Update the configuration to use the aws_s3_bucket_website_configuration resource and its website_domain parameter:

resource "aws_route53_zone" "main" {
  name = "domain.test"
}

resource "aws_s3_bucket" "website" {
  # ... other configuration ...
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }
}

resource "aws_route53_record" "alias" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www"
  type    = "A"

  alias {
    zone_id                = aws_s3_bucket.website.hosted_zone_id
    name                   = aws_s3_bucket_website_configuration.example.website_domain
    evaluate_target_health = true
  }
}

S3 Bucket Refactor

To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only.

Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource in order to prevent Terraform from reporting “unconfigurable attribute” errors for read-only arguments. Once updated, it is recommended to import the new aws_s3_bucket_* resources into Terraform state.

If practitioners do not anticipate future modifications to the S3 bucket settings associated with these read-only arguments, or if drift detection is not needed, these arguments should simply be removed from aws_s3_bucket resource configurations to prevent Terraform from reporting “unconfigurable attribute” errors. The state of these arguments will be preserved, but it is subject to change with modifications made outside Terraform.

acceleration_status Argument

Switch your Terraform configuration to the aws_s3_bucket_accelerate_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  acceleration_status = "Enabled"
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "acceleration_status": its value will be decided automatically based on the result of applying this configuration.

Since acceleration_status is now read only, update your configuration to use the aws_s3_bucket_accelerate_configuration resource and remove acceleration_status in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_accelerate_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  status = "Enabled"
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_accelerate_configuration.example yournamehere
aws_s3_bucket_accelerate_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_accelerate_configuration.example: Import prepared!
  Prepared aws_s3_bucket_accelerate_configuration for import
aws_s3_bucket_accelerate_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

acl Argument

Switch your Terraform configuration to the aws_s3_bucket_acl resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"
  acl    = "private"

  # ... other configuration ...
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "acl": its value will be decided automatically based on the result of applying this configuration.

Since acl is now read only, update your configuration to use the aws_s3_bucket_acl resource and remove the acl argument in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_acl.example yournamehere,private
aws_s3_bucket_acl.example: Importing from ID "yournamehere,private"...
aws_s3_bucket_acl.example: Import prepared!
  Prepared aws_s3_bucket_acl for import
aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere,private]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

cors_rule Argument

Switch your Terraform configuration to the aws_s3_bucket_cors_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "cors_rule": its value will be decided automatically based on the result of applying this configuration.

Since cors_rule is now read only, update your configuration to use the aws_s3_bucket_cors_configuration resource and remove cors_rule and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_cors_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://s3-website-test.hashicorp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_cors_configuration.example yournamehere
aws_s3_bucket_cors_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_cors_configuration.example: Import prepared!
  Prepared aws_s3_bucket_cors_configuration for import
aws_s3_bucket_cors_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

grant Argument

Switch your Terraform configuration to the aws_s3_bucket_acl resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  grant {
    id          = data.aws_canonical_user_id.current_user.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }

  grant {
    type        = "Group"
    permissions = ["READ_ACP", "WRITE"]
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "grant": its value will be decided automatically based on the result of applying this configuration.

Since grant is now read only, update your configuration to use the aws_s3_bucket_acl resource and remove grant in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id

  access_control_policy {
    grant {
      grantee {
        id   = data.aws_canonical_user_id.current_user.id
        type = "CanonicalUser"
      }
      permission = "FULL_CONTROL"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
      permission = "READ_ACP"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
      permission = "WRITE"
    }

    owner {
      id = data.aws_canonical_user_id.current_user.id
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_acl.example yournamehere
aws_s3_bucket_acl.example: Importing from ID "yournamehere"...
aws_s3_bucket_acl.example: Import prepared!
  Prepared aws_s3_bucket_acl for import
aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

lifecycle_rule Argument

Switch your Terraform configuration to the aws_s3_bucket_lifecycle_configuration resource instead.

For Lifecycle Rules with no prefix previously configured

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "Keep previous version 30 days, then in Glacier another 60"
    enabled = true

    noncurrent_version_transition {
      days          = 30
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id                                     = "Delete old incomplete multi-part uploads"
    enabled                                = true
    abort_incomplete_multipart_upload_days = 7
  }
}

You will receive the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

Since the lifecycle_rule argument changed to read-only, update the configuration to use the aws_s3_bucket_lifecycle_configuration resource and remove lifecycle_rule and its nested arguments in the aws_s3_bucket resource.

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "Keep previous version 30 days, then in Glacier another 60"
    status = "Enabled"

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "GLACIER"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }

  rule {
    id     = "Delete old incomplete multi-part uploads"
    status = "Enabled"

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
  Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

For Lifecycle Rules with prefix previously configured as an empty string

For example, given this configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "log-expiration"
    enabled = true
    prefix  = ""

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

You will receive the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

Since the lifecycle_rule argument changed to read-only, update the configuration to use the aws_s3_bucket_lifecycle_configuration resource and remove lifecycle_rule and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-expiration"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
  Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

For Lifecycle Rules with prefix

For example, given this configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  lifecycle_rule {
    id      = "log-expiration"
    enabled = true
    prefix  = "foobar"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

You will receive the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

Since the lifecycle_rule argument changed to read-only, update the configuration to use the aws_s3_bucket_lifecycle_configuration resource and remove lifecycle_rule and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-expiration"
    status = "Enabled"

    filter {
      prefix = "foobar"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
  Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

For Lifecycle Rules with prefix and tags

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  lifecycle_rule {
    id      = "log"
    enabled = true
    prefix  = "log/"

    tags = {
      rule      = "log"
      autoclean = "true"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id      = "tmp"
    prefix  = "tmp/"
    enabled = true

    expiration {
      date = "2022-12-31"
    }
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

Since lifecycle_rule is now read only, update your configuration to use the aws_s3_bucket_lifecycle_configuration resource and remove lifecycle_rule and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log"
    status = "Enabled"

    filter {
      and {
        prefix = "log/"

        tags = {
          rule      = "log"
          autoclean = "true"
        }
      }
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  rule {
    id = "tmp"

    filter {
      prefix = "tmp/"
    }

    expiration {
      date = "2022-12-31T00:00:00Z"
    }

    status = "Enabled"
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
  Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

logging Argument

Switch your Terraform configuration to the aws_s3_bucket_logging resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "log_bucket" {
  # ... other configuration ...
  bucket = "example-log-bucket"
}

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  logging {
    target_bucket = aws_s3_bucket.log_bucket.id
    target_prefix = "log/"
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "logging": its value will be decided automatically based on the result of applying this configuration.

Since logging is now read only, update your configuration to use the aws_s3_bucket_logging resource and remove logging and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "log_bucket" {
  bucket = "example-log-bucket"

  # ... other configuration ...
}

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_logging" "example" {
  bucket        = aws_s3_bucket.example.id
  target_bucket = aws_s3_bucket.log_bucket.id
  target_prefix = "log/"
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_logging.example yournamehere
aws_s3_bucket_logging.example: Importing from ID "yournamehere"...
aws_s3_bucket_logging.example: Import prepared!
  Prepared aws_s3_bucket_logging for import
aws_s3_bucket_logging.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

object_lock_configuration rule Argument

Switch your Terraform configuration to the aws_s3_bucket_object_lock_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  object_lock_configuration {
    object_lock_enabled = "Enabled"

    rule {
      default_retention {
        mode = "COMPLIANCE"
        days = 3
      }
    }
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "object_lock_configuration.0.rule": its value will be decided automatically based on the result of applying this configuration.

Since the rule argument of the object_lock_configuration configuration block changed to read-only, update your configuration to use the aws_s3_bucket_object_lock_configuration resource and remove rule and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  object_lock_enabled = true
}

resource "aws_s3_bucket_object_lock_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    default_retention {
      mode = "COMPLIANCE"
      days = 3
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_object_lock_configuration.example yournamehere
aws_s3_bucket_object_lock_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_object_lock_configuration.example: Import prepared!
  Prepared aws_s3_bucket_object_lock_configuration for import
aws_s3_bucket_object_lock_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

policy Argument

Switch your Terraform configuration to the aws_s3_bucket_policy resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  policy = <<EOF
{
  "Id": "Policy1446577137248",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_elb_service_account.current.arn}"
      },
      "Resource": "arn:${data.aws_partition.current.partition}:s3:::yournamehere/*",
      "Sid": "Stmt1446575236270"
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "policy": its value will be decided automatically based on the result of applying this configuration.

Since policy is now read only, update your configuration to use the aws_s3_bucket_policy resource and remove policy in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.example.id
  policy = <<EOF
{
  "Id": "Policy1446577137248",
  "Statement": [
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_elb_service_account.current.arn}"
      },
      "Resource": "${aws_s3_bucket.example.arn}/*",
      "Sid": "Stmt1446575236270"
    }
  ],
  "Version": "2012-10-17"
}
EOF
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_policy.example yournamehere
aws_s3_bucket_policy.example: Importing from ID "yournamehere"...
aws_s3_bucket_policy.example: Import prepared!
  Prepared aws_s3_bucket_policy for import
aws_s3_bucket_policy.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

replication_configuration Argument

Switch your Terraform configuration to the aws_s3_bucket_replication_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  provider = aws.central
  bucket   = "yournamehere"

  # ... other configuration ...

  replication_configuration {
    role = aws_iam_role.replication.arn
    rules {
      id     = "foobar"
      status = "Enabled"
      filter {
        tags = {}
      }
      destination {
        bucket        = aws_s3_bucket.destination.arn
        storage_class = "STANDARD"
        replication_time {
          status  = "Enabled"
          minutes = 15
        }
        metrics {
          status  = "Enabled"
          minutes = 15
        }
      }
    }
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "replication_configuration": its value will be decided automatically based on the result of applying this configuration.

Since replication_configuration is now read only, update your configuration to use the aws_s3_bucket_replication_configuration resource and remove replication_configuration and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  provider = aws.central
  bucket   = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_replication_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "foobar"
    status = "Enabled"

    filter {}

    delete_marker_replication {
      status = "Enabled"
    }

    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"

      replication_time {
        status = "Enabled"
        time {
          minutes = 15
        }
      }

      metrics {
        status = "Enabled"
        event_threshold {
          minutes = 15
        }
      }
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_replication_configuration.example yournamehere
aws_s3_bucket_replication_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_replication_configuration.example: Import prepared!
  Prepared aws_s3_bucket_replication_configuration for import
aws_s3_bucket_replication_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

request_payer Argument

Switch your Terraform configuration to the aws_s3_bucket_request_payment_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  request_payer = "Requester"
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "request_payer": its value will be decided automatically based on the result of applying this configuration.

Since request_payer is now read only, update your configuration to use the aws_s3_bucket_request_payment_configuration resource and remove request_payer in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_request_payment_configuration" "example" {
  bucket = aws_s3_bucket.example.id
  payer  = "Requester"
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_request_payment_configuration.example yournamehere
aws_s3_bucket_request_payment_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_request_payment_configuration.example: Import prepared!
  Prepared aws_s3_bucket_request_payment_configuration for import
aws_s3_bucket_request_payment_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

server_side_encryption_configuration Argument

Switch your Terraform configuration to the aws_s3_bucket_server_side_encryption_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "server_side_encryption_configuration": its value will be decided automatically based on the result of applying this configuration.

Since server_side_encryption_configuration is now read only, update your configuration to use the aws_s3_bucket_server_side_encryption_configuration resource and remove server_side_encryption_configuration and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.mykey.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_server_side_encryption_configuration.example yournamehere
aws_s3_bucket_server_side_encryption_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_server_side_encryption_configuration.example: Import prepared!
  Prepared aws_s3_bucket_server_side_encryption_configuration for import
aws_s3_bucket_server_side_encryption_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

versioning Argument

Switch your Terraform configuration to the aws_s3_bucket_versioning resource instead.

Buckets With Versioning Enabled

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  versioning {
    enabled = true
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.

Since versioning is now read only, update your configuration to use the aws_s3_bucket_versioning resource and remove versioning and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_versioning.example yournamehere
aws_s3_bucket_versioning.example: Importing from ID "yournamehere"...
aws_s3_bucket_versioning.example: Import prepared!
  Prepared aws_s3_bucket_versioning for import
aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Buckets With Versioning Disabled or Suspended

Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of versioning.enabled = false in your aws_s3_bucket resource will differ and thus the migration to the aws_s3_bucket_versioning resource will also differ as follows.

If you are migrating from the Terraform AWS Provider v3.70.0 or later: for new S3 buckets, versioning.enabled = false is equivalent to a versioning status of Disabled, while for existing S3 buckets it is equivalent to Suspended.

If you are migrating from an earlier version of the Terraform AWS Provider: versioning.enabled = false is equivalent to Suspended for both new and existing S3 buckets.

Given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  versioning {
    enabled = false
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.

Since versioning is now read only, update your configuration to use the aws_s3_bucket_versioning resource and remove versioning and its nested arguments in the aws_s3_bucket resource.
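
For example, assuming you are migrating from a provider version earlier than v3.70.0 (where versioning.enabled = false corresponds to Suspended), the updated configuration would look like this sketch:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Suspended"
  }
}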

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_versioning.example yournamehere
aws_s3_bucket_versioning.example: Importing from ID "yournamehere"...
aws_s3_bucket_versioning.example: Import prepared!
  Prepared aws_s3_bucket_versioning for import
aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Ensure Objects Depend on Versioning

When you create an object whose version_id you need and an aws_s3_bucket_versioning resource in the same configuration, ensure that the aws_s3_object depends, either implicitly (see below) or explicitly (i.e., using depends_on = [aws_s3_bucket_versioning.example]), on the aws_s3_bucket_versioning resource.

This example shows the aws_s3_object.example depending implicitly on the versioning resource through the reference to aws_s3_bucket_versioning.example.id to define bucket:

resource "aws_s3_bucket" "example" {
  bucket = "yotto"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_object" "example" {
  bucket = aws_s3_bucket_versioning.example.id
  key    = "droeloe"
  source = "example.txt"
}
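
Alternatively, the dependency can be declared explicitly. A minimal sketch of the same object using depends_on instead of an attribute reference:

resource "aws_s3_object" "example" {
  bucket = aws_s3_bucket.example.id
  key    = "droeloe"
  source = "example.txt"

  # Ensure versioning is configured before the object is created
  # so that version_id is populated.
  depends_on = [aws_s3_bucket_versioning.example]
}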

website, website_domain, and website_endpoint Arguments

Switch your Terraform configuration to the aws_s3_bucket_website_configuration resource instead.

For example, given this previous configuration:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

You will get the following error after upgrading:

│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.example,
│   on main.tf line 1, in resource "aws_s3_bucket" "example":
│    1: resource "aws_s3_bucket" "example" {
│
│ Can't configure a value for "website": its value will be decided automatically based on the result of applying this configuration.

Since website is now read only, update your configuration to use the aws_s3_bucket_website_configuration resource and remove website and its nested arguments in the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "yournamehere"

  # ... other configuration ...
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

Run terraform import on each new resource, _e.g._,

$ terraform import aws_s3_bucket_website_configuration.example yournamehere
aws_s3_bucket_website_configuration.example: Importing from ID "yournamehere"...
aws_s3_bucket_website_configuration.example: Import prepared!
  Prepared aws_s3_bucket_website_configuration for import
aws_s3_bucket_website_configuration.example: Refreshing state... [id=yournamehere]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

For example, if you use the aws_s3_bucket attribute website_domain with aws_route53_record, as shown below, you will need to update your configuration:

resource "aws_route53_zone" "main" {
  name = "domain.test"
}

resource "aws_s3_bucket" "website" {
  # ... other configuration ...
  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

resource "aws_route53_record" "alias" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www"
  type    = "A"

  alias {
    zone_id                = aws_s3_bucket.website.hosted_zone_id
    name                   = aws_s3_bucket.website.website_domain
    evaluate_target_health = true
  }
}

Instead, you will now use the aws_s3_bucket_website_configuration resource and its website_domain attribute:

resource "aws_route53_zone" "main" {
  name = "domain.test"
}

resource "aws_s3_bucket" "website" {
  # ... other configuration ...
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }
}

resource "aws_route53_record" "alias" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www"
  type    = "A"

  alias {
    zone_id                = aws_s3_bucket.website.hosted_zone_id
    name                   = aws_s3_bucket_website_configuration.example.website_domain
    evaluate_target_health = true
  }
}

Full Resource Lifecycle of Default Resources

Default subnets and VPCs now support full resource lifecycle operations, including resource creation and deletion.

Resource: aws_default_subnet

The aws_default_subnet resource behaves differently from normal resources in that if a default subnet exists in the specified Availability Zone, Terraform does not _create_ this resource, but instead "adopts" it into management. If no default subnet exists, Terraform creates a new default subnet. By default, terraform destroy does not delete the default subnet but does remove the resource from Terraform state. Set the force_destroy argument to true to delete the default subnet.

For example, given this previous configuration with no existing default subnet:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}

provider "aws" {
  region = "eu-west-2"
}

resource "aws_default_subnet" "default" {}

The following error was thrown on terraform apply:

│ Error: Default subnet not found.
│
│   with aws_default_subnet.default,
│   on main.tf line 5, in resource "aws_default_subnet" "default":
│    5: resource "aws_default_subnet" "default" {}

Now, after upgrading, the above configuration will apply successfully.

To delete the default subnet, the above configuration should be updated as follows:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}

resource "aws_default_subnet" "default" {
  force_destroy = true
}

Resource: aws_default_vpc

The aws_default_vpc resource behaves differently from normal resources in that if a default VPC exists, Terraform does not _create_ this resource, but instead "adopts" it into management. If no default VPC exists, Terraform creates a new default VPC, which leads to the implicit creation of other resources. By default, terraform destroy does not delete the default VPC but does remove the resource from Terraform state. Set the force_destroy argument to true to delete the default VPC.

For example, given this previous configuration with no existing default VPC:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}

resource "aws_default_vpc" "default" {}

The following error was thrown on terraform apply:

│ Error: No default VPC found in this region.
│
│   with aws_default_vpc.default,
│   on main.tf line 5, in resource "aws_default_vpc" "default":
│    5: resource "aws_default_vpc" "default" {}

Now, after upgrading, the above configuration will apply successfully.

To delete the default VPC, the above configuration should be updated to:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}

resource "aws_default_vpc" "default" {
  force_destroy = true
}

Plural Data Source Behavior

The following plural data sources are now consistent with Provider Design in that they no longer return an error if zero results are found.
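
For example, a query that matches zero resources now yields an empty result instead of failing the plan. A minimal sketch, assuming the aws_security_groups data source is among those affected and using a placeholder filter value:

data "aws_security_groups" "example" {
  filter {
    name   = "group-name"
    values = ["nonexistent-*"]
  }
}

output "matched_security_group_count" {
  value = length(data.aws_security_groups.example.ids) # 0 when nothing matches
}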

Empty Strings Not Valid For Certain Resources

This is a breaking change, but it should affect very few configurations.

Previously, you might have set an argument to "" to explicitly convey that it is empty. However, with the introduction of null in Terraform 0.12, and to prepare for enhancements that distinguish between unset arguments and arguments with a value, including the empty string (""), we are moving away from this use of zero values. Either use null instead or remove the arguments that are set to "".

Resource: aws_cloudwatch_event_target (Empty String)

Previously, you could set ecs_target.0.launch_type to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, launch_type = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_cloudwatch_event_target" "example" {
  # ...
  ecs_target {
    task_count          = 1
    task_definition_arn = aws_ecs_task_definition.task.arn
    launch_type         = ""
    # ...
  }
}

We fix this configuration by setting launch_type to null:

resource "aws_cloudwatch_event_target" "example" {
  # ...
  ecs_target {
    task_count          = 1
    task_definition_arn = aws_ecs_task_definition.task.arn
    launch_type         = null
    # ...
  }
}

Resource: aws_customer_gateway

Previously, you could set ip_address to "", which resulted in an error from AWS. Now, the provider itself also rejects the empty string.
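
For example, this type of configuration is now not valid (a sketch; the bgp_asn and type values are placeholders):

resource "aws_customer_gateway" "example" {
  bgp_asn    = 65000
  type       = "ipsec.1"
  ip_address = ""
}

We fix this configuration by supplying a valid IP address:

resource "aws_customer_gateway" "example" {
  bgp_asn    = 65000
  type       = "ipsec.1"
  ip_address = "172.0.0.1"
}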

Resource: aws_default_network_acl

Previously, you could set egress.*.cidr_block, egress.*.ipv6_cidr_block, ingress.*.cidr_block, or ingress.*.ipv6_cidr_block to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_default_network_acl" "example" {
  # ...
  egress {
    cidr_block      = "0.0.0.0/0"
    ipv6_cidr_block = ""
    # ...
  }
}

To fix this configuration, we remove the empty-string configuration:

resource "aws_default_network_acl" "example" {
  # ...
  egress {
    cidr_block = "0.0.0.0/0"
    # ...
  }
}

Resource: aws_default_route_table

Previously, you could set route.*.cidr_block or route.*.ipv6_cidr_block to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_default_route_table" "example" {
  # ...
  route {
    cidr_block      = local.ipv6 ? "" : local.destination
    ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : ""
  }
}

We fix this configuration by using null instead of an empty string (""):

resource "aws_default_route_table" "example" {
  # ...
  route {
    cidr_block      = local.ipv6 ? null : local.destination
    ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : null
  }
}

Resource: aws_default_vpc (Empty String)

Previously, you could set ipv6_cidr_block to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.
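
For example, this type of configuration is now not valid (a minimal sketch):

resource "aws_default_vpc" "example" {
  ipv6_cidr_block = ""
}

We fix this configuration by removing the empty-string configuration:

resource "aws_default_vpc" "example" {}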

Resource: aws_instance

Previously, you could set private_ip to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, private_ip = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_instance" "example" {
  instance_type = "t2.micro"
  private_ip    = ""
}

We fix this configuration by removing the empty-string configuration:

resource "aws_instance" "example" {
  instance_type = "t2.micro"
}

Resource: aws_efs_mount_target

Previously, you could set ip_address to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ip_address = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid: ip_address = "".
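
A fuller sketch, assuming hypothetical aws_efs_file_system and aws_subnet resources named example:

resource "aws_efs_mount_target" "example" {
  file_system_id = aws_efs_file_system.example.id
  subnet_id      = aws_subnet.example.id
  ip_address     = null # previously "", which is no longer valid
}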

Resource: aws_elasticsearch_domain

Previously, you could set ebs_options.0.volume_type to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, volume_type = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_elasticsearch_domain" "example" {
  # ...
  ebs_options {
    ebs_enabled = true
    volume_size = var.volume_size
    volume_type = var.volume_size > 0 ? local.volume_type : ""
  }
}

We fix this configuration by using null instead of "":

resource "aws_elasticsearch_domain" "example" {
  # ...
  ebs_options {
    ebs_enabled = true
    volume_size = var.volume_size
    volume_type = var.volume_size > 0 ? local.volume_type : null
  }
}

Resource: aws_network_acl

Previously, egress.*.cidr_block, egress.*.ipv6_cidr_block, ingress.*.cidr_block, and ingress.*.ipv6_cidr_block could be set to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_network_acl" "example" {
  # ...
  egress {
    cidr_block      = "0.0.0.0/0"
    ipv6_cidr_block = ""
    # ...
  }
}

We fix this configuration by removing the empty-string configuration:

resource "aws_network_acl" "example" {
  # ...
  egress {
    cidr_block = "0.0.0.0/0"
    # ...
  }
}

Resource: aws_route

Previously, destination_cidr_block and destination_ipv6_cidr_block could be set to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, destination_ipv6_cidr_block = null) or remove the empty-string configuration.

In addition, now exactly one of destination_cidr_block, destination_ipv6_cidr_block, and destination_prefix_list_id can be set.

For example, this type of configuration for aws_route is now not valid:

resource "aws_route" "example" {
  route_table_id = aws_route_table.example.id
  gateway_id     = aws_internet_gateway.example.id

  destination_cidr_block      = local.ipv6 ? "" : local.destination
  destination_ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : ""
}

We fix this configuration by using null instead of an empty string (""):

resource "aws_route" "example" {
  route_table_id = aws_route_table.example.id
  gateway_id     = aws_internet_gateway.example.id

  destination_cidr_block      = local.ipv6 ? null : local.destination
  destination_ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : null
}

Resource: aws_route_table

Previously, route.*.cidr_block and route.*.ipv6_cidr_block could be set to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_route_table" "example" {
  # ...
  route {
    cidr_block      = local.ipv6 ? "" : local.destination
    ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : ""
  }
}

We fix this configuration by using null instead of an empty string (""):

resource "aws_route_table" "example" {
  # ...
  route {
    cidr_block      = local.ipv6 ? null : local.destination
    ipv6_cidr_block = local.ipv6 ? local.destination_ipv6 : null
  }
}

Resource: aws_vpc

Previously, ipv6_cidr_block could be set to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

For example, this type of configuration is now not valid:

resource "aws_vpc" "example" {
  cidr_block      = "10.1.0.0/16"
  ipv6_cidr_block = ""
}

We fix this configuration by removing ipv6_cidr_block:

resource "aws_vpc" "example" {
  cidr_block = "10.1.0.0/16"
}

Resource: aws_vpc_ipv6_cidr_block_association

Previously, ipv6_cidr_block could be set to "". However, the value "" is no longer valid. Now, set the argument to null (_e.g._, ipv6_cidr_block = null) or remove the empty-string configuration.

Data Source: aws_cloudwatch_log_group

Removal of arn Wildcard Suffix

Previously, the data source returned the ARN directly from the API, which included a :* suffix to denote all CloudWatch Log Streams under the CloudWatch Log Group. Most other AWS resources that return ARNs and many other AWS services do not use the :* suffix. The suffix is now automatically removed. For example, the data source previously returned an ARN such as arn:aws:logs:us-east-1:123456789012:log-group:/example:* but will now return arn:aws:logs:us-east-1:123456789012:log-group:/example.

Workarounds, such as using replace() as shown below, should be removed:

data "aws_cloudwatch_log_group" "example" {
  name = "example"
}
resource "aws_datasync_task" "example" {
  # ... other configuration ...
  cloudwatch_log_group_arn = replace(data.aws_cloudwatch_log_group.example.arn, ":*", "")
}
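
For example, after removing the workaround:

data "aws_cloudwatch_log_group" "example" {
  name = "example"
}

resource "aws_datasync_task" "example" {
  # ... other configuration ...
  cloudwatch_log_group_arn = data.aws_cloudwatch_log_group.example.arn
}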

Removing the :* suffix is a breaking change for some configurations. Fix these configurations using string interpolations as demonstrated below. For example, this configuration is now broken:

data "aws_iam_policy_document" "ad-log-policy" {
  statement {
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    principals {
      identifiers = ["ds.amazonaws.com"]
      type        = "Service"
    }
    resources = [data.aws_cloudwatch_log_group.example.arn]
    effect    = "Allow"
  }
}

An updated configuration:

data "aws_iam_policy_document" "ad-log-policy" {
  statement {
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    principals {
      identifiers = ["ds.amazonaws.com"]
      type        = "Service"
    }
    resources = ["${data.aws_cloudwatch_log_group.example.arn}:*"]
    effect    = "Allow"
  }
}

Data Source: aws_subnet_ids

The aws_subnet_ids data source has been deprecated and will be removed in a future version. Use the aws_subnets data source instead.

For example, change a configuration such as

data "aws_subnet_ids" "example" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "example" {
  for_each = data.aws_subnet_ids.example.ids
  id       = each.value
}

output "subnet_cidr_blocks" {
  value = [for s in data.aws_subnet.example : s.cidr_block]
}

to

data "aws_subnets" "example" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}

data "aws_subnet" "example" {
  for_each = data.aws_subnets.example.ids
  id       = each.value
}

output "subnet_cidr_blocks" {
  value = [for s in data.aws_subnet.example : s.cidr_block]
}

Data Source: aws_s3_bucket_object

Version 4.x deprecates the aws_s3_bucket_object data source. Maintainers will remove it in a future version. Use aws_s3_object instead, where new features and fixes will be added.
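
For example, migrating a reference is a rename of the data source type (the bucket and key shown are placeholders); change a configuration such as

data "aws_s3_bucket_object" "example" {
  bucket = "yournamehere"
  key    = "example.txt"
}

to

data "aws_s3_object" "example" {
  bucket = "yournamehere"
  key    = "example.txt"
}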

Data Source: aws_s3_bucket_objects

Version 4.x deprecates the aws_s3_bucket_objects data source. Maintainers will remove it in a future version. Use aws_s3_objects instead, where new features and fixes will be added.

Resource: aws_batch_compute_environment

You can no longer specify compute_resources when type is UNMANAGED.

Previously, you could apply this configuration and the provider would ignore any compute resources:

resource "aws_batch_compute_environment" "test" {
  compute_environment_name = "test"

  compute_resources {
    instance_role = aws_iam_instance_profile.ecs_instance.arn
    instance_type = [
      "c4.large",
    ]
    max_vcpus = 16
    min_vcpus = 0
    security_group_ids = [
      aws_security_group.test.id
    ]
    subnets = [
      aws_subnet.test.id
    ]
    type = "EC2"
  }

  service_role = aws_iam_role.batch_service.arn
  type         = "UNMANAGED"
}

Now, this configuration is invalid and will result in an error during planning.

To resolve this error, remove or comment out the compute_resources configuration block:

resource "aws_batch_compute_environment" "test" {
  compute_environment_name = "test"

  service_role = aws_iam_role.batch_service.arn
  type         = "UNMANAGED"
}

Resource: aws_cloudwatch_event_target

Removal of ecs_target launch_type default value

Previously, the provider assigned ecs_target launch_type the default value of EC2 if you did not configure a value. However, the provider no longer assigns a default value.

For example, previously you could work around the default value by using an empty string (""), as shown:

resource "aws_cloudwatch_event_target" "test" {
  arn      = aws_ecs_cluster.test.id
  rule     = aws_cloudwatch_event_rule.test.id
  role_arn = aws_iam_role.test.arn
  ecs_target {
    launch_type         = ""
    task_count          = 1
    task_definition_arn = aws_ecs_task_definition.task.arn
    network_configuration {
      subnets = [aws_subnet.subnet.id]
    }
  }
}

This is no longer necessary. We fix the configuration by removing the empty string assignment:

resource "aws_cloudwatch_event_target" "test" {
  arn      = aws_ecs_cluster.test.id
  rule     = aws_cloudwatch_event_rule.test.id
  role_arn = aws_iam_role.test.arn
  ecs_target {
    task_count          = 1
    task_definition_arn = aws_ecs_task_definition.task.arn
    network_configuration {
      subnets = [aws_subnet.subnet.id]
    }
  }
}

Resource: aws_elasticache_cluster

Error raised if neither engine nor replication_group_id is specified

Previously, when you did not specify either engine or replication_group_id, Terraform would not prevent you from applying the invalid configuration. Now, doing so produces errors similar to the following:

│ Error: Invalid combination of arguments
│
│   with aws_elasticache_cluster.example,
│   on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example":
│    2: resource "aws_elasticache_cluster" "example" {
│
│ "replication_group_id": one of `engine,replication_group_id` must be specified

│ Error: Invalid combination of arguments
│
│   with aws_elasticache_cluster.example,
│   on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example":
│    2: resource "aws_elasticache_cluster" "example" {
│
│ "engine": one of `engine,replication_group_id` must be specified

Update your configuration to supply one of engine or replication_group_id.
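
For example, a minimal sketch of a valid configuration with engine supplied (the values shown are placeholders):

resource "aws_elasticache_cluster" "example" {
  cluster_id      = "example"
  engine          = "memcached"
  node_type       = "cache.t3.micro"
  num_cache_nodes = 1
}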

Resource: aws_elasticache_global_replication_group

Removal of the actual_engine_version Attribute

Switch your Terraform configuration from the actual_engine_version attribute to the engine_version_actual attribute instead.

For example, given this previous configuration:

output "elasticache_global_replication_group_version_result" {
  value = aws_elasticache_global_replication_group.example.actual_engine_version
}

An updated configuration:

output "elasticache_global_replication_group_version_result" {
  value = aws_elasticache_global_replication_group.example.engine_version_actual
}

Resource: aws_fsx_ontap_storage_virtual_machine

We removed the previously deprecated, misspelled argument active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguidshed_name. Use active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguished_name instead. Terraform will automatically migrate the state to the new argument during planning.
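
For example, a sketch of the corrected argument in context (all other arguments elided; the distinguished name is a placeholder):

resource "aws_fsx_ontap_storage_virtual_machine" "example" {
  # ... other configuration ...

  active_directory_configuration {
    # ... other configuration ...

    self_managed_active_directory_configuration {
      # ... other configuration ...
      organizational_unit_distinguished_name = "OU=fsx,DC=example,DC=com"
    }
  }
}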

Resource: aws_lb_target_group

For protocol = "TCP", you can no longer set stickiness.type to lb_cookie even when enabled = false. Instead, either change the protocol to "HTTP" or "HTTPS", or change stickiness.type to "source_ip".

For example, this configuration is no longer valid:

resource "aws_lb_target_group" "test" {
  port     = 25
  protocol = "TCP"
  vpc_id   = aws_vpc.test.id

  stickiness {
    type    = "lb_cookie"
    enabled = false
  }
}

We fix this configuration by changing stickiness.type to "source_ip":

resource "aws_lb_target_group" "test" {
  port     = 25
  protocol = "TCP"
  vpc_id   = aws_vpc.test.id

  stickiness {
    type    = "source_ip"
    enabled = false
  }
}

Resource: aws_s3_bucket_object

Version 4.x deprecates the aws_s3_bucket_object resource, and maintainers will remove it in a future version. Use aws_s3_object instead, where new features and fixes will be added.

When replacing aws_s3_bucket_object with aws_s3_object in your configuration, Terraform will recreate the object on the next apply. If you prefer not to have Terraform recreate the object, import the object as aws_s3_object instead.

For example, the following will import an S3 object into state, assuming the configuration exists, as aws_s3_object.example:

$ terraform import aws_s3_object.example s3://some-bucket-name/some/key.txt

EC2-Classic Resource and Data Source Support

While an upgrade to this major version will not directly impact EC2-Classic resources configured with Terraform, it is important to keep in mind that the following AWS Provider resources will eventually no longer be compatible with EC2-Classic as AWS completes its EC2-Classic networking retirement (expected around August 15, 2022).

Macie Classic Resource Support

The Macie Classic resources (aws_macie_member_account_association and aws_macie_s3_bucket_association) should be considered deprecated and will be removed in version 5.0.0.