# confluent_kafka_cluster Resource

`confluent_kafka_cluster` provides a Kafka cluster resource that enables creating, editing, and deleting Kafka clusters on Confluent Cloud.

## Example Usage
### Example Kafka cluster on AWS

```hcl
resource "confluent_environment" "development" {
  display_name = "Development"

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "basic" {
  display_name = "basic_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  basic {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "standard" {
  display_name = "standard_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  standard {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "dedicated" {
  display_name = "dedicated_kafka_cluster"
  availability = "MULTI_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"

  dedicated {
    cku = 2
  }

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
### Example Kafka cluster on Azure

```hcl
resource "confluent_environment" "development" {
  display_name = "Development"

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "basic" {
  display_name = "basic_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = "centralus"
  basic {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "standard" {
  display_name = "standard_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = "centralus"
  standard {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "dedicated" {
  display_name = "dedicated_kafka_cluster"
  availability = "MULTI_ZONE"
  cloud        = "AZURE"
  region       = "centralus"

  dedicated {
    cku = 2
  }

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
### Example Kafka cluster on GCP

```hcl
resource "confluent_environment" "development" {
  display_name = "Development"
}

resource "confluent_kafka_cluster" "basic" {
  display_name = "basic_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "GCP"
  region       = "us-central1"
  basic {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "standard" {
  display_name = "standard_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "GCP"
  region       = "us-central1"
  standard {}

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_kafka_cluster" "dedicated" {
  display_name = "dedicated_kafka_cluster"
  availability = "MULTI_ZONE"
  cloud        = "GCP"
  region       = "us-central1"

  dedicated {
    cku = 2
  }

  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
## Argument Reference

The following arguments are supported:

- `display_name` - (Required String) The name of the Kafka cluster.
- `availability` - (Required String) The availability zone configuration of the Kafka cluster. Accepted values are: `SINGLE_ZONE`, `MULTI_ZONE`, `LOW`, and `HIGH`.
- `cloud` - (Required String) The cloud service provider that runs the Kafka cluster. Accepted values are: `AWS`, `AZURE`, and `GCP`.
- `region` - (Required String) The cloud service provider region where the Kafka cluster is running, for example, `us-west-2`. See Cloud Providers and Regions for a full list of options for AWS, Azure, and GCP.
- `basic` - (Optional Configuration Block) The configuration of the Basic Kafka cluster.
- `standard` - (Optional Configuration Block) The configuration of the Standard Kafka cluster.
- `enterprise` - (Optional Configuration Block) The configuration of the Enterprise Kafka cluster.
- `dedicated` - (Optional Configuration Block) The configuration of the Dedicated Kafka cluster. It supports the following:
  - `cku` - (Required Number) The number of Confluent Kafka Units (CKUs) for Dedicated cluster types. The minimum number of CKUs for `SINGLE_ZONE` dedicated clusters is `1`, whereas `MULTI_ZONE` dedicated clusters must have `2` CKUs or more.
- `environment` - (Required Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Environment that the Kafka cluster belongs to, for example, `env-abc123`.
- `network` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Network that the Kafka cluster belongs to, for example, `n-abc123`.
- `byok_key` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Confluent key that is used to encrypt the data in the Kafka cluster, for example, `cck-lye5m`.

## Attributes Reference

In addition to the preceding arguments, the following attributes are exported:
- `id` - (Required String) The ID of the Kafka cluster, for example, `lkc-abc123`.
- `api_version` - (Required String) The API version of the schema version of the Kafka cluster, for example, `cmk/v2`.
- `kind` - (Required String) The kind of the Kafka cluster, for example, `Cluster`.
- `bootstrap_endpoint` - (Required String) The bootstrap endpoint used by Kafka clients to connect to the Kafka cluster, for example, `SASL_SSL://pkc-00000.us-central1.gcp.confluent.cloud:9092`.
- `rest_endpoint` - (Required String) The REST endpoint of the Kafka cluster, for example, `https://pkc-00000.us-central1.gcp.confluent.cloud:443`.
- `rbac_crn` - (Required String) The Confluent Resource Name of the Kafka cluster, for example, `crn://confluent.cloud/organization=1111aaaa-11aa-11aa-11aa-111111aaaaaa/environment=env-abc123/cloud-cluster=lkc-abc123`.
- `dedicated` - (Optional Configuration Block) The configuration of the Dedicated Kafka cluster. It exports the following:
  - `zones` - (Required List of String) The list of zones the cluster is in. On AWS, zones are AWS AZ IDs, for example, `use1-az3`. On GCP, zones are GCP zones, for example, `us-central1-c`. On Azure, zones are Confluent-chosen names (for example, `1`, `2`, `3`) since Azure does not have universal zone identifiers.

## Import

You can import a Kafka cluster by using the Environment ID and Kafka cluster ID, in the format `<Environment ID>/<Kafka cluster ID>`, for example:

```shell
$ export CONFLUENT_CLOUD_API_KEY="<cloud_api_key>"
$ export CONFLUENT_CLOUD_API_SECRET="<cloud_api_secret>"
$ terraform import confluent_kafka_cluster.my_kafka env-abc123/lkc-abc123
```
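The exported attributes can be referenced from the rest of a configuration, for example to surface connection details as outputs. A minimal sketch, assuming the `confluent_kafka_cluster.dedicated` resource from the examples above; the output names are arbitrary:

```hcl
output "kafka_bootstrap_endpoint" {
  description = "Endpoint used by Kafka clients to connect to the cluster"
  value       = confluent_kafka_cluster.dedicated.bootstrap_endpoint
}

output "kafka_rest_endpoint" {
  description = "REST endpoint of the Kafka cluster"
  value       = confluent_kafka_cluster.dedicated.rest_endpoint
}
```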
The following end-to-end examples might help you get started with the `confluent_kafka_cluster` resource:

- `basic-kafka-acls`: _Basic_ Kafka cluster with authorization using ACLs
- `basic-kafka-acls-with-alias`: _Basic_ Kafka cluster with authorization using ACLs
- `standard-kafka-acls`: _Standard_ Kafka cluster with authorization using ACLs
- `standard-kafka-rbac`: _Standard_ Kafka cluster with authorization using RBAC
- `dedicated-public-kafka-acls`: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using ACLs
- `dedicated-public-kafka-rbac`: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using RBAC
- `dedicated-privatelink-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs
- `dedicated-privatelink-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using RBAC
- `dedicated-privatelink-azure-kafka-rbac`: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using RBAC
- `dedicated-privatelink-azure-kafka-acls`: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using ACLs
- `dedicated-private-service-connect-gcp-kafka-acls`: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using ACLs
- `dedicated-private-service-connect-gcp-kafka-rbac`: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using RBAC
- `dedicated-vnet-peering-azure-kafka-acls`: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using ACLs
- `dedicated-vnet-peering-azure-kafka-rbac`: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using RBAC
- `dedicated-vpc-peering-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using ACLs
- `dedicated-vpc-peering-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using RBAC
- `dedicated-vpc-peering-gcp-kafka-acls`: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using ACLs
- `dedicated-vpc-peering-gcp-kafka-rbac`: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using RBAC
- `dedicated-transit-gateway-attachment-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via a Transit Gateway Endpoint with authorization using ACLs
- `dedicated-transit-gateway-attachment-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via a Transit Gateway Endpoint with authorization using RBAC
- `enterprise-privatelinkattachment-aws-kafka-acls`: _Enterprise_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs
- `enterprise-privatelinkattachment-azure-kafka-acls`: _Enterprise_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using ACLs