confluent_kafka_topic

The confluent_kafka_topic resource enables creating and deleting Kafka topics on a Kafka cluster on Confluent Cloud.
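The examples below assume the Confluent provider has already been declared for the workspace. A minimal declaration might look like the following sketch (the version constraint is a placeholder, not a recommendation):

terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = ">= 1.0.0" # placeholder constraint; pin to the version you actually use
    }
  }
}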
provider "confluent" {
cloud_api_key = var.confluent_cloud_api_key # optionally use CONFLUENT_CLOUD_API_KEY env var
cloud_api_secret = var.confluent_cloud_api_secret # optionally use CONFLUENT_CLOUD_API_SECRET env var
}
resource "confluent_kafka_topic" "orders" {
kafka_cluster {
id = confluent_kafka_cluster.basic-cluster.id
}
topic_name = "orders"
rest_endpoint = confluent_kafka_cluster.basic-cluster.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
lifecycle {
prevent_destroy = true
}
}
provider "confluent" {
kafka_id = var.kafka_id # optionally use KAFKA_ID env var
kafka_rest_endpoint = var.kafka_rest_endpoint # optionally use KAFKA_REST_ENDPOINT env var
kafka_api_key = var.kafka_api_key # optionally use KAFKA_API_KEY env var
kafka_api_secret = var.kafka_api_secret # optionally use KAFKA_API_SECRET env var
}
resource "confluent_kafka_topic" "orders" {
topic_name = "orders"
lifecycle {
prevent_destroy = true
}
}
The following arguments are supported:

- kafka_cluster - (Optional Configuration Block) supports the following:
  - id - (Required String) The ID of the Kafka cluster, for example, lkc-abc123.
- topic_name - (Required String) The name of the topic, for example, orders-1. The topic name can be up to 249 characters in length, and can include the following characters: a-z, A-Z, 0-9, . (dot), _ (underscore), and - (dash). As a best practice, we recommend against using any personally identifiable information (PII) when naming your topic.
- rest_endpoint - (Optional String) The REST endpoint of the Kafka cluster, for example, https://pkc-00000.us-central1.gcp.confluent.cloud:443.
- credentials - (Optional Configuration Block) supports the following:
  - key - (Required String) The Kafka API Key.
  - secret - (Required String, Sensitive) The Kafka API Secret.
- partitions_count - (Optional Number) The number of partitions to create in the topic. Defaults to 6.
- config - (Optional Map) The custom topic settings to set:
  - name - (Required String) The setting name, for example, cleanup.policy.
  - value - (Required String) The setting value, for example, compact.
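For reference, a topic that overrides partitions_count and sets custom config values might look like the following sketch. It mirrors the Option #1 layout above; the topic name and the specific settings and values are illustrative, not recommendations:

resource "confluent_kafka_topic" "orders_compacted" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic-cluster.id
  }
  topic_name       = "orders-compacted" # hypothetical topic name
  partitions_count = 4                  # overrides the default of 6

  # Each config entry maps a setting name to its value; both are strings.
  config = {
    "cleanup.policy" = "compact"
    "retention.ms"   = "604800000"
  }

  rest_endpoint = confluent_kafka_cluster.basic-cluster.rest_endpoint
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}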
In addition to the preceding arguments, the following attributes are exported:

- id - (Required String) The ID of the Kafka topic, in the format <Kafka cluster ID>/<Kafka topic name>, for example, lkc-abc123/orders-1.

You can import a Kafka topic by using the Kafka cluster ID and Kafka topic name in the format <Kafka cluster ID>/<Kafka topic name>, for example:
# Option #1: Manage multiple Kafka clusters in the same Terraform workspace
$ export IMPORT_KAFKA_API_KEY="<kafka_api_key>"
$ export IMPORT_KAFKA_API_SECRET="<kafka_api_secret>"
$ export IMPORT_KAFKA_REST_ENDPOINT="<kafka_rest_endpoint>"
$ terraform import confluent_kafka_topic.my_topic lkc-abc123/orders-123
# Option #2: Manage a single Kafka cluster in the same Terraform workspace
$ terraform import confluent_kafka_topic.my_topic lkc-abc123/orders-123
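Note that terraform import only attaches the existing topic to a resource block that is already present in the configuration, so declare a matching block before running the command. A minimal sketch for the Option #1 layout (the resource name my_topic and topic name orders-123 match the import commands above):

resource "confluent_kafka_topic" "my_topic" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic-cluster.id
  }
  topic_name    = "orders-123"
  rest_endpoint = confluent_kafka_cluster.basic-cluster.rest_endpoint
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}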
The following end-to-end examples might help you get started with the confluent_kafka_topic resource:

- basic-kafka-acls: _Basic_ Kafka cluster with authorization using ACLs
- basic-kafka-acls-with-alias: _Basic_ Kafka cluster with authorization using ACLs
- standard-kafka-acls: _Standard_ Kafka cluster with authorization using ACLs
- standard-kafka-rbac: _Standard_ Kafka cluster with authorization using RBAC
- dedicated-public-kafka-acls: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using ACLs
- dedicated-public-kafka-rbac: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using RBAC
- dedicated-privatelink-aws-kafka-acls: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs
- dedicated-privatelink-aws-kafka-rbac: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using RBAC
- dedicated-privatelink-azure-kafka-rbac: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using RBAC
- dedicated-privatelink-azure-kafka-acls: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using ACLs
- dedicated-private-service-connect-gcp-kafka-acls: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using ACLs
- dedicated-private-service-connect-gcp-kafka-rbac: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using RBAC
- dedicated-vnet-peering-azure-kafka-acls: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using ACLs
- dedicated-vnet-peering-azure-kafka-rbac: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using RBAC
- dedicated-vpc-peering-aws-kafka-acls: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using ACLs
- dedicated-vpc-peering-aws-kafka-rbac: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using RBAC
- dedicated-vpc-peering-gcp-kafka-acls: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using ACLs
- dedicated-vpc-peering-gcp-kafka-rbac: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using RBAC
- dedicated-transit-gateway-attachment-aws-kafka-acls: _Dedicated_ Kafka cluster on AWS that is accessible via Transit Gateway Endpoint with authorization using ACLs
- dedicated-transit-gateway-attachment-aws-kafka-rbac: _Dedicated_ Kafka cluster on AWS that is accessible via Transit Gateway Endpoint with authorization using RBAC
- enterprise-privatelinkattachment-aws-kafka-acls: _Enterprise_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs