Sample Project for Confluent Terraform Provider

Summary


Use the Confluent Terraform provider to enable lifecycle management of Confluent Cloud resources.

In this guide, you will:

  1. Create a Cloud API Key
  2. Create Resources on Confluent Cloud via Terraform
  3. [Optional] Run a quick test
  4. [Optional] Destroy the created resources on Confluent Cloud

Prerequisites

  1. A Confluent Cloud account. If you do not have a Confluent Cloud account, create one now.
  2. Terraform (0.14+) installed.

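The 0.14+ requirement can also be enforced from inside a configuration, so an older CLI fails fast instead of producing confusing errors later. A minimal sketch:

```hcl
# Sketch: pin the minimum Terraform version a configuration accepts.
terraform {
  required_version = ">= 0.14"
}
```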
Create a Cloud API Key

  1. Open the Confluent Cloud Console, click the Granular access tab, and then click Next.
  2. Click Create a new one to create a new service account. Enter the service account name (tf_runner), then click Next.
  3. The Cloud API key and secret are generated for the tf_runner service account. Save your Cloud API key and secret in a secure location. You will need this API key and secret to use the Confluent Terraform Provider.
  4. Assign the OrganizationAdmin role to the tf_runner service account by following this guide.

Assigning the OrganizationAdmin role to the tf_runner service account

Create Resources on Confluent Cloud via Terraform

  1. Clone the repository containing the example configurations:

    git clone https://github.com/confluentinc/terraform-provider-confluent.git
    
  2. Change into configurations subdirectory:

    cd terraform-provider-confluent/examples/configurations
    
  3. The configurations directory has a subdirectory for each example configuration, such as basic-kafka-acls:

    -> Note: The _Basic_ Kafka cluster with authorization using RBAC configuration is not supported, because neither the DeveloperRead nor the DeveloperWrite role is available for _Basic_ Kafka clusters.

    -> Note: When choosing between RBAC and ACLs for access control, RBAC is the suggested default because of its ease of use and manageability at scale. For edge cases where you need more granular access control, or want to explicitly deny access, ACLs may make more sense. For example, you could use RBAC to allow access for a group of users, but an ACL to deny access for a particular member of that group.

    -> Note: When using a private networking option, you must run terraform from a system with connectivity to the Kafka REST API. See the Kafka REST API docs to learn more.

    -> Note: If you're interested in a more granular setup with TF configuration split between a Kafka Ops team and a Product team, see kafka-ops-env-admin-product-team and kafka-ops-kafka-admin-product-team.
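    In the RBAC-based configurations, granting read access to a topic comes down to a role binding resource along these lines (an illustrative sketch; the resource labels and the CRN pattern placeholder are hypothetical — see the example configurations for the exact form):

    ```hcl
    # Sketch: bind the DeveloperRead role to a service account for one topic.
    resource "confluent_role_binding" "app_consumer_read" {
      principal   = "User:${confluent_service_account.app_consumer.id}"
      role_name   = "DeveloperRead"
      crn_pattern = "<TOPIC_CRN_PATTERN>" # placeholder for the topic's CRN
    }
    ```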

  4. Select the target configuration and change into its directory:

    # Using the basic-kafka-acls configuration as an example
    cd basic-kafka-acls
    
  5. Download and install the providers defined in the configuration:

    terraform init
    
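    terraform init resolves and installs the provider requirements declared in the configuration, which for these examples take roughly this shape (the version constraint is illustrative):

    ```hcl
    # Sketch: provider requirements that terraform init resolves and installs.
    terraform {
      required_providers {
        confluent = {
          source  = "confluentinc/confluent"
          version = ">= 1.0.0" # pin to a version you have tested
        }
      }
    }
    ```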
  6. Use the saved Cloud API Key of the tf_runner service account to set the confluent_cloud_api_key and confluent_cloud_api_secret input variables via environment variables:

    export TF_VAR_confluent_cloud_api_key="<cloud_api_key>"
    export TF_VAR_confluent_cloud_api_secret="<cloud_api_secret>"
    
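    Terraform maps any environment variable named TF_VAR_<name> onto the input variable <name>. The example configurations declare these variables roughly as follows (a sketch; the descriptions are illustrative):

    ```hcl
    # Sketch: input variables populated by the TF_VAR_ environment variables above.
    variable "confluent_cloud_api_key" {
      description = "Confluent Cloud API Key"
      type        = string
      sensitive   = true # keeps the value out of plan/apply output
    }

    variable "confluent_cloud_api_secret" {
      description = "Confluent Cloud API Secret"
      type        = string
      sensitive   = true
    }
    ```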
  7. Ensure the configuration is syntactically valid and internally consistent:

    terraform validate
    
  8. Apply the configuration:

    terraform apply
    

    !> Warning: Before running terraform apply, please take a look at the corresponding README file for other instructions.

  9. You have now created infrastructure using Terraform! Visit the Confluent Cloud Console or use the Confluent CLI to see the resources you provisioned.
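Under the hood, each configuration wires the Cloud API key into the provider and declares resources such as environments, clusters, and topics. A minimal illustrative excerpt (the resource label and display name are hypothetical):

```hcl
# Sketch: provider wiring plus one of the simplest resources the examples create.
provider "confluent" {
  cloud_api_key    = var.confluent_cloud_api_key
  cloud_api_secret = var.confluent_cloud_api_secret
}

resource "confluent_environment" "staging" {
  display_name = "Staging" # hypothetical name
}
```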

[Optional] Run a Quick Test

  1. Ensure you're using a supported version of the Confluent CLI by running the following command:

    confluent version
    

    Your output should resemble:

    ...
    Version:     v2.5.1 # any version >= v2.0 is OK
    ...
    
  2. Run the following command to print out generated Confluent CLI commands with the correct resource IDs injected:

    # Alternatively, you could also run terraform output -json resource-ids
    terraform output resource-ids
    

    Your output should resemble:

    # 1. Log in to Confluent Cloud
    $ confluent login
    
    # 2. Produce key-value records to topic '<TOPIC_NAME>' by using <APP-PRODUCER'S NAME>'s Kafka API Key
    # Enter a few records and then press 'Ctrl-C' when you're done.
    # Sample records:
    # {"number":1,"date":18500,"shipping_address":"899 W Evelyn Ave, Mountain View, CA 94041, USA","cost":15.00}
    # {"number":2,"date":18501,"shipping_address":"1 Bedford St, London WC2E 9HG, United Kingdom","cost":5.00}
    # {"number":3,"date":18502,"shipping_address":"3307 Northland Dr Suite 400, Austin, TX 78731, USA","cost":10.00} 
    $ confluent kafka topic produce <TOPIC_NAME> --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-PRODUCER'S KAFKA API KEY>" --api-secret "<APP-PRODUCER'S KAFKA API SECRET>"
    
    # 3. Consume records from topic '<TOPIC_NAME>' by using <APP-CONSUMER'S NAME>'s Kafka API Key
    $ confluent kafka topic consume <TOPIC_NAME> --from-beginning --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-CONSUMER'S KAFKA API KEY>" --api-secret "<APP-CONSUMER'S KAFKA API SECRET>"
    # When you are done, press 'Ctrl-C'.
    
  3. Execute the printed commands.

    -> Note: Add the --from-beginning flag to enable printing all messages from the beginning of the topic.
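The resource-ids value printed above comes from an output block in the configuration, roughly of this shape (a sketch; the resource labels in the interpolations are hypothetical):

```hcl
# Sketch: assembling ready-to-run CLI commands as a Terraform output,
# with real resource IDs injected from resource attributes.
output "resource-ids" {
  value = <<-EOT
    # 1. Log in to Confluent Cloud
    $ confluent login

    # 2. Consume from the topic (IDs injected from resource attributes)
    $ confluent kafka topic consume ${confluent_kafka_topic.orders.topic_name} --environment ${confluent_environment.staging.id}
  EOT
}
```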

[Optional] Tear Down Confluent Cloud Resources

Run the following command to destroy all the resources you created:

terraform destroy

This command destroys all the resources specified in your Terraform state. terraform destroy doesn't destroy resources running elsewhere that aren't managed by the current Terraform project.
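If some resources should never fall to a blanket terraform destroy, Terraform's lifecycle meta-argument can protect them (a sketch; the resource shown is hypothetical):

```hcl
# Sketch: prevent_destroy makes terraform destroy fail instead of
# deleting this resource.
resource "confluent_environment" "production" {
  display_name = "Production"

  lifecycle {
    prevent_destroy = true
  }
}
```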

Now you've created and destroyed an entire Confluent Cloud deployment!

Visit the Confluent Cloud Console to verify the resources have been destroyed to avoid unexpected charges.

If you're interested in additional Confluent Cloud infrastructure configurations, see our repository for more end-to-end examples.