Use the Confluent Terraform provider to manage the lifecycle of Confluent Cloud resources such as environments, Kafka clusters, Kafka topics, service accounts, API keys, and ACLs.
In this guide, you will:
- Create a Cloud API Key:
  - Create a `tf_runner` service account in Confluent Cloud
  - Assign the `OrganizationAdmin` role to the `tf_runner` service account
  - Create a Cloud API Key for the `tf_runner` service account
- Create Resources on Confluent Cloud via Terraform:
  - Select the appropriate Terraform configuration from a list of example configurations that describe the following infrastructure setup:
    - An environment named `Staging` that contains a Kafka cluster called `inventory` with a Kafka topic named `orders`
    - 3 service accounts: `app_manager`, `app_producer`, and `app_consumer`, each with an associated Kafka API Key
    - Appropriate permissions for the service accounts, granted using either Role-based Access Control (RBAC) or ACLs:
      - The `app_manager` service account's Kafka API Key is used for creating the `orders` topic on the `inventory` Kafka cluster and for creating ACLs if needed
      - The `app_producer` service account's Kafka API Key is used for _producing_ messages to the `orders` topic
      - The `app_consumer` service account's Kafka API Key is used for _consuming_ messages from the `orders` topic

-> Note: API Keys inherit the permissions granted to their owner.
  - Initialize and apply the selected Terraform configuration
- Run a quick test:
  - Produce messages to the `orders` topic using the `app_producer` service account's Kafka API Key
  - Consume messages from the `orders` topic using the `app_consumer` service account's Kafka API Key

You will need Terraform (0.14+) installed.
To verify that you're running a compatible version of Terraform, run the following command:

```bash
terraform version
```

Your output should resemble:

```
Terraform v0.14.0 # any version >= v0.14.0 is OK
...
```
Enter the new service account name (`tf_runner`), then click Next.

Create a Cloud API Key for the `tf_runner` service account. Save your Cloud API key and secret in a secure location; you will need this API key and secret to use the Confluent Terraform Provider.

Assign the `OrganizationAdmin` role to the `tf_runner` service account by following this guide.

Clone the repository containing the example configurations:
```bash
git clone https://github.com/confluentinc/terraform-provider-confluent.git
```
Change into the `configurations` subdirectory:

```bash
cd terraform-provider-confluent/examples/configurations
```
The `configurations` directory has a subdirectory for each of the following configurations:
- `basic-kafka-acls`: _Basic_ Kafka cluster with authorization using ACLs
- `basic-kafka-acls-with-alias`: _Basic_ Kafka cluster with authorization using ACLs
- `standard-kafka-acls`: _Standard_ Kafka cluster with authorization using ACLs
- `standard-kafka-rbac`: _Standard_ Kafka cluster with authorization using RBAC
- `dedicated-public-kafka-acls`: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using ACLs
- `dedicated-public-kafka-rbac`: _Dedicated_ Kafka cluster that is accessible over the public internet with authorization using RBAC
- `dedicated-privatelink-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs
- `dedicated-privatelink-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using RBAC
- `dedicated-privatelink-azure-kafka-rbac`: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using RBAC
- `dedicated-privatelink-azure-kafka-acls`: _Dedicated_ Kafka cluster on Azure that is accessible via PrivateLink connections with authorization using ACLs
- `dedicated-private-service-connect-gcp-kafka-acls`: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using ACLs
- `dedicated-private-service-connect-gcp-kafka-rbac`: _Dedicated_ Kafka cluster on GCP that is accessible via Private Service Connect connections with authorization using RBAC
- `dedicated-vnet-peering-azure-kafka-acls`: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using ACLs
- `dedicated-vnet-peering-azure-kafka-rbac`: _Dedicated_ Kafka cluster on Azure that is accessible via VNet Peering connections with authorization using RBAC
- `dedicated-vpc-peering-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using ACLs
- `dedicated-vpc-peering-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via VPC Peering connections with authorization using RBAC
- `dedicated-vpc-peering-gcp-kafka-acls`: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using ACLs
- `dedicated-vpc-peering-gcp-kafka-rbac`: _Dedicated_ Kafka cluster on GCP that is accessible via VPC Peering connections with authorization using RBAC
- `dedicated-transit-gateway-attachment-aws-kafka-acls`: _Dedicated_ Kafka cluster on AWS that is accessible via a Transit Gateway Endpoint with authorization using ACLs
- `dedicated-transit-gateway-attachment-aws-kafka-rbac`: _Dedicated_ Kafka cluster on AWS that is accessible via a Transit Gateway Endpoint with authorization using RBAC
- `enterprise-privatelinkattachment-aws-kafka-acls`: _Enterprise_ Kafka cluster on AWS that is accessible via PrivateLink connections with authorization using ACLs

-> Note: A _Basic_ Kafka cluster with authorization using RBAC is not supported, because the `DeveloperRead` and `DeveloperWrite` roles are not available for _Basic_ Kafka clusters.
-> Note: When deciding between RBAC and ACLs for access control, use RBAC as the default because of its ease of use and manageability at scale. For edge cases where you need more granular access control, or want to explicitly deny access, ACLs may make more sense. For example, you could use RBAC to allow access for a group of users, but an ACL to deny access for a particular member of that group. The sketch below shows both approaches side by side.
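To make the trade-off concrete, here is a minimal sketch of the same permission (allowing `app_producer` to write to `orders`) expressed both ways. The resource names and references are illustrative, assuming resources like those the example configurations create; they are not the exact contents of any example:

```hcl
# RBAC: bind the DeveloperWrite role to the producer on the 'orders' topic
resource "confluent_role_binding" "app_producer_developer_write" {
  principal   = "User:${confluent_service_account.app_producer.id}"
  role_name   = "DeveloperWrite"
  crn_pattern = "${confluent_kafka_cluster.inventory.rbac_crn}/kafka=${confluent_kafka_cluster.inventory.id}/topic=orders"
}

# ACLs: the equivalent permission expressed as an explicit WRITE allow rule,
# created via the cluster's REST endpoint using app_manager's Kafka API Key
resource "confluent_kafka_acl" "app_producer_write" {
  kafka_cluster {
    id = confluent_kafka_cluster.inventory.id
  }
  resource_type = "TOPIC"
  resource_name = "orders"
  pattern_type  = "LITERAL"
  principal     = "User:${confluent_service_account.app_producer.id}"
  host          = "*"
  operation     = "WRITE"
  permission    = "ALLOW"
  rest_endpoint = confluent_kafka_cluster.inventory.rest_endpoint
  credentials {
    key    = confluent_api_key.app_manager_kafka_api_key.id
    secret = confluent_api_key.app_manager_kafka_api_key.secret
  }
}
```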
-> Note: When using a private networking option, you must run `terraform` from a system with connectivity to the Kafka REST API. Check the Kafka REST API docs to learn more.
-> Note: If you're interested in a more granular setup with the Terraform configuration split between a Kafka Ops team and a Product team, see kafka-ops-env-admin-product-team and kafka-ops-kafka-admin-product-team.
Select the target configuration and change into its directory:

```bash
# Using configuration #1, basic-kafka-acls, as an example
cd basic-kafka-acls
```
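Before applying, it can help to skim the configuration to see what you are about to create. The following is a minimal sketch of the shape of such a configuration, trimmed and simplified; the resource labels, cloud, and region are illustrative, not the exact contents of the example:

```hcl
provider "confluent" {
  cloud_api_key    = var.confluent_cloud_api_key
  cloud_api_secret = var.confluent_cloud_api_secret
}

resource "confluent_environment" "staging" {
  display_name = "Staging"
}

resource "confluent_kafka_cluster" "basic" {
  display_name = "inventory"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"       # illustrative
  region       = "us-east-2" # illustrative
  basic {}
  environment {
    id = confluent_environment.staging.id
  }
}

resource "confluent_service_account" "app_manager" {
  display_name = "app_manager"
  description  = "Service account to manage the 'inventory' Kafka cluster"
}

# The Kafka API Key owned by app_manager; topic and ACL management uses it
resource "confluent_api_key" "app_manager_kafka_api_key" {
  display_name = "app-manager-kafka-api-key"
  owner {
    id          = confluent_service_account.app_manager.id
    api_version = confluent_service_account.app_manager.api_version
    kind        = confluent_service_account.app_manager.kind
  }
  managed_resource {
    id          = confluent_kafka_cluster.basic.id
    api_version = confluent_kafka_cluster.basic.api_version
    kind        = confluent_kafka_cluster.basic.kind
    environment {
      id = confluent_environment.staging.id
    }
  }
}

resource "confluent_kafka_topic" "orders" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name    = "orders"
  rest_endpoint = confluent_kafka_cluster.basic.rest_endpoint
  credentials {
    key    = confluent_api_key.app_manager_kafka_api_key.id
    secret = confluent_api_key.app_manager_kafka_api_key.secret
  }
}
```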
Download and install the providers defined in the configuration:

```bash
terraform init
```
Use the saved Cloud API Key of the `tf_runner` service account to set the `confluent_cloud_api_key` and `confluent_cloud_api_secret` input variables via environment variables:

```bash
export TF_VAR_confluent_cloud_api_key="<cloud_api_key>"
export TF_VAR_confluent_cloud_api_secret="<cloud_api_secret>"
```
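Setting `TF_VAR_`-prefixed environment variables works because each example configuration declares matching input variables, roughly like this sketch (the descriptions are illustrative):

```hcl
variable "confluent_cloud_api_key" {
  description = "Confluent Cloud API Key (also referred to as Cloud API ID)"
  type        = string
  sensitive   = true
}

variable "confluent_cloud_api_secret" {
  description = "Confluent Cloud API Secret"
  type        = string
  sensitive   = true
}
```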
Ensure the configuration is syntactically valid and internally consistent:

```bash
terraform validate
```
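If the configuration passes both checks, your output should resemble:

```
Success! The configuration is valid.
```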
Apply the configuration:

```bash
terraform apply
```

!> Warning: Before running `terraform apply`, review the corresponding README file for any additional instructions.
You have now created infrastructure using Terraform! Visit the Confluent Cloud Console or use the Confluent CLI to see the resources you provisioned.
Ensure you're using a compatible version of the Confluent CLI by running the following command:

```bash
confluent version
```

Your output should resemble:

```
...
Version: v2.5.1 # any version >= v2.0 is OK
...
```
Run the following command to print generated Confluent CLI commands with the correct resource IDs injected:

```bash
# Alternatively, run terraform output -json resource-ids
terraform output resource-ids
```
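This works because each example defines a `resource-ids` output that interpolates the created resources' IDs and keys into ready-to-run CLI commands. Conceptually it looks something like this hypothetical, heavily trimmed sketch (the real output also renders the consumer command, and the `app_producer_kafka_api_key` reference is illustrative):

```hcl
output "resource-ids" {
  description = "Confluent CLI commands with the created resource IDs filled in"
  sensitive   = true # the rendered commands embed Kafka API secrets
  value       = <<-EOT
  confluent login

  confluent kafka topic produce ${confluent_kafka_topic.orders.topic_name} \
    --environment ${confluent_environment.staging.id} \
    --cluster ${confluent_kafka_cluster.basic.id} \
    --api-key "${confluent_api_key.app_producer_kafka_api_key.id}" \
    --api-secret "${confluent_api_key.app_producer_kafka_api_key.secret}"
  EOT
}
```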
Your output should resemble:

```
# 1. Log in to Confluent Cloud
$ confluent login

# 2. Produce key-value records to topic '<TOPIC_NAME>' by using <APP-PRODUCER'S NAME>'s Kafka API Key
# Enter a few records and then press 'Ctrl-C' when you're done.
# Sample records:
# {"number":1,"date":18500,"shipping_address":"899 W Evelyn Ave, Mountain View, CA 94041, USA","cost":15.00}
# {"number":2,"date":18501,"shipping_address":"1 Bedford St, London WC2E 9HG, United Kingdom","cost":5.00}
# {"number":3,"date":18502,"shipping_address":"3307 Northland Dr Suite 400, Austin, TX 78731, USA","cost":10.00}
$ confluent kafka topic produce <TOPIC_NAME> --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-PRODUCER'S KAFKA API KEY>" --api-secret "<APP-PRODUCER'S KAFKA API SECRET>"

# 3. Consume records from topic '<TOPIC_NAME>' by using <APP-CONSUMER'S NAME>'s Kafka API Key
$ confluent kafka topic consume <TOPIC_NAME> --from-beginning --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-CONSUMER'S KAFKA API KEY>" --api-secret "<APP-CONSUMER'S KAFKA API SECRET>"
# When you are done, press 'Ctrl-C'.
```
Run the printed commands.
-> Note: The `--from-beginning` flag makes the consumer print all messages from the beginning of the topic.
Run the following command to destroy all the resources you created:

```bash
terraform destroy
```
This command destroys all the resources specified in your Terraform state. `terraform destroy` doesn't destroy resources running elsewhere that aren't managed by the current Terraform project.
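Like `terraform apply`, `terraform destroy` first prints an execution plan and asks for confirmation; type `yes` to proceed. The prompt should resemble:

```
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.
```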
Now you've created and destroyed an entire Confluent Cloud deployment!
Visit the Confluent Cloud Console to verify that the resources have been destroyed, so you avoid unexpected charges.

If you're interested in additional Confluent Cloud infrastructure configurations, view our repository for more end-to-end examples.