confluent_schema
provides a Schema resource that enables creating, evolving, and deleting Schemas on a Schema Registry cluster on Confluent Cloud.
# Option #1: Manage multiple Schema Registry clusters in the same Terraform workspace
provider "confluent" {
  cloud_api_key    = var.confluent_cloud_api_key    # optionally use CONFLUENT_CLOUD_API_KEY env var
  cloud_api_secret = var.confluent_cloud_api_secret # optionally use CONFLUENT_CLOUD_API_SECRET env var
}

resource "confluent_schema" "avro-purchase" {
  schema_registry_cluster {
    id = confluent_schema_registry_cluster.essentials.id
  }
  rest_endpoint = confluent_schema_registry_cluster.essentials.rest_endpoint
  subject_name  = "avro-purchase-value"
  format        = "AVRO"
  schema        = file("./schemas/avro/purchase.avsc")

  credentials {
    key    = "<Schema Registry API Key for confluent_schema_registry_cluster.essentials>"
    secret = "<Schema Registry API Secret for confluent_schema_registry_cluster.essentials>"
  }

  lifecycle {
    prevent_destroy = true
  }
}
# Option #2: Manage a single Schema Registry cluster in the same Terraform workspace
provider "confluent" {
  schema_registry_id            = var.schema_registry_id            # optionally use SCHEMA_REGISTRY_ID env var
  schema_registry_rest_endpoint = var.schema_registry_rest_endpoint # optionally use SCHEMA_REGISTRY_REST_ENDPOINT env var
  schema_registry_api_key       = var.schema_registry_api_key       # optionally use SCHEMA_REGISTRY_API_KEY env var
  schema_registry_api_secret    = var.schema_registry_api_secret    # optionally use SCHEMA_REGISTRY_API_SECRET env var
}

resource "confluent_schema" "avro-purchase" {
  subject_name = "avro-purchase-value"
  format       = "AVRO"
  schema       = file("./schemas/avro/purchase.avsc")

  lifecycle {
    prevent_destroy = true
  }
}
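The schema_reference argument (described below) lets a schema refer to types registered under other subjects instead of inlining them. A hedged sketch, following the pattern used in the provider's multiple-event-types examples; the subject name, reference name, and file path here are hypothetical:

```hcl
resource "confluent_schema" "avro-order" {
  subject_name = "avro-order-value"
  format       = "AVRO"
  # Hypothetical: order.avsc uses the Item record type by name
  # rather than redefining it inline.
  schema       = file("./schemas/avro/order.avsc")

  # Hypothetical reference: the Item record is assumed to be
  # registered under the purchase-item-value subject as version 1.
  schema_reference {
    name         = "io.confluent.examples.Item"
    subject_name = "purchase-item-value"
    version      = 1
  }

  lifecycle {
    prevent_destroy = true
  }
}
```

With this in place, the registry resolves io.confluent.examples.Item from the referenced subject when validating and deserializing avro-order-value schemas.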
The following arguments are supported:

- schema_registry_cluster - (Optional Configuration Block) supports the following:
  - id - (Required String) The ID of the Schema Registry cluster, for example, lsrc-abc123.
  - rest_endpoint - (Optional String) The REST endpoint of the Schema Registry cluster, for example, https://psrc-00000.us-central1.gcp.confluent.cloud:443.
- credentials - (Optional Configuration Block) supports the following:
  - key - (Required String) The Schema Registry API Key.
  - secret - (Required String, Sensitive) The Schema Registry API Secret.
- subject_name - (Required String) The name of the subject (in other words, the namespace) under which the schema will be registered, for example, test-subject. Schemas evolve safely under a subject name, following the compatibility mode defined for it.
- format - (Required String) The format of the schema. Accepted values are: AVRO, PROTOBUF, and JSON.
- schema - (Required String) The schema string, for example, file("./schema_version_1.avsc").
- hard_delete - (Optional Boolean) An optional flag to control whether a schema should be soft or hard deleted. Set it to true to hard delete a schema on destroy (see Schema Deletion Guidelines for more details). Must be unset when importing. Defaults to false (soft delete).
- recreate_on_update - (Optional Boolean) An optional flag to control whether a schema should be recreated on an update. Set it to true to manage different schema versions using different resource instances. Must be set to the target value when importing. Defaults to false, which manages the latest schema version only; the resource instance then always points to the latest schema version via in-place updates.
- schema_reference - (Optional List) The list of referenced schemas (see Schema References for more details):
  - name - (Required String) The name for the reference. (For Avro Schema, the reference name is the fully qualified schema name; for JSON Schema it is a URL; and for Protobuf Schema, it is the name of another Protobuf file.)
  - subject_name - (Required String) The name of the subject under which the referenced schema is registered.
  - version - (Required Integer) The exact version of the referenced schema under the registered subject.
- metadata - (Optional Block) See here for more details. Supports the following:
  - properties - (Optional Map) The custom properties to set:
    - name - (Required String) The setting name.
    - value - (Required String) The setting value.
  - tags - (Optional List of Blocks) supports the following:
    - key - (Required String) The setting name.
    - value - (Required List of Strings) The list of tags.
  - sensitive - (Optional List of Strings) A list of metadata properties to be encrypted.
- ruleset - (Optional Block) The list of schema rules. See Data Contracts for Schema Registry for more details. For example, these rules can enforce that a field containing sensitive information must be encrypted, or that a message with an invalid age must be sent to a dead letter queue. Supports the following:
  - domain_rules - (Optional Block) supports the following:
    - name - (Optional String) A user-defined name that can be used to reference the rule.
    - doc - (Optional String) An optional description of the rule.
    - kind - (Optional String) The kind of the rule. Accepted values are CONDITION and TRANSFORM.
    - mode - (Optional String) The mode of the rule. Accepted values are UPGRADE, DOWNGRADE, UPDOWN, WRITE, READ, and WRITEREAD.
    - type - (Optional String) The type of rule, which invokes a specific rule executor, such as Google Common Expression Language (CEL) or JSONata.
    - expr - (Optional String) The body of the rule, which is optional.
    - on_success - (Optional String) An optional action to execute if the rule succeeds; otherwise the built-in action type NONE is used. For UPDOWN and WRITEREAD rules, two actions can be specified, separated by a comma, such as "NONE,ERROR" for a WRITEREAD rule. In this case NONE applies to WRITE and ERROR applies to READ.
    - on_failure - (Optional String) An optional action to execute if the rule fails; otherwise the built-in action type ERROR is used. For UPDOWN and WRITEREAD rules, two actions can be specified, separated by a comma, as mentioned above.
    - tags - (Optional String List) The tags to which the rule applies, if any.
    - params - (Optional Configuration Block) An optional set of static parameters for the rule. These are key-value pairs that are passed to the rule.

In addition to the preceding arguments, the following attributes are exported:

- id - (Required String) The ID of the Schema, in the format <Schema Registry cluster ID>/<Subject name>/<Schema identifier>, for example, lsrc-abc123/test-subject/100003.
- schema_identifier - (Required Integer) The globally unique ID of the Schema, for example, 100003. If the same schema is registered under a different subject, the same identifier is returned; however, the version of the schema may differ under different subjects.
- version - (Required Integer) The version of the Schema, for example, 4.

You can import a Schema by using the Schema Registry cluster ID, Subject name, and unique identifier (or latest when recreate_on_update = false) of the Schema, in the format <Schema Registry cluster ID>/<Subject name>/<Schema identifier>, for example:
# Option A: recreate_on_update = false (the default)
$ export IMPORT_SCHEMA_REGISTRY_API_KEY="<schema_registry_api_key>"
$ export IMPORT_SCHEMA_REGISTRY_API_SECRET="<schema_registry_api_secret>"
$ export IMPORT_SCHEMA_REGISTRY_REST_ENDPOINT="<schema_registry_rest_endpoint>"
$ terraform import confluent_schema.my_schema_1 lsrc-abc123/test-subject/latest

# Option B: recreate_on_update = true
$ export IMPORT_SCHEMA_REGISTRY_API_KEY="<schema_registry_api_key>"
$ export IMPORT_SCHEMA_REGISTRY_API_SECRET="<schema_registry_api_secret>"
$ export IMPORT_SCHEMA_REGISTRY_REST_ENDPOINT="<schema_registry_rest_endpoint>"
$ terraform import confluent_schema.my_schema_1 lsrc-abc123/test-subject/100003
The following end-to-end examples might help you get started with the confluent_schema resource:
single-event-types-avro-schema
single-event-types-proto-schema
single-event-types-proto-schema-with-alias
multiple-event-types-avro-schema
multiple-event-types-proto-schema
# Step #1: Run 'terraform plan' and 'terraform apply' to create
# v1 of the avro-purchase schema.
provider "confluent" {
  schema_registry_id            = var.schema_registry_id            # optionally use SCHEMA_REGISTRY_ID env var
  schema_registry_rest_endpoint = var.schema_registry_rest_endpoint # optionally use SCHEMA_REGISTRY_REST_ENDPOINT env var
  schema_registry_api_key       = var.schema_registry_api_key       # optionally use SCHEMA_REGISTRY_API_KEY env var
  schema_registry_api_secret    = var.schema_registry_api_secret    # optionally use SCHEMA_REGISTRY_API_SECRET env var
}

# confluent_schema.avro-purchase points to v1.
resource "confluent_schema" "avro-purchase" {
  subject_name = "avro-purchase-value"
  format       = "AVRO"
  schema       = file("./schemas/avro/purchase.avsc")

  // additional metadata
  metadata {
    properties = {
      "owner" : "Bob Jones",
      "email" : "bob@acme.com"
    }
    sensitive = ["s1", "s2"]
    tags {
      key   = "tag1"
      value = ["PII"]
    }
    tags {
      key   = "tag2"
      value = ["PIIIII"]
    }
  }

  // additional rules
  ruleset {
    domain_rules {
      name = "encryptPII"
      kind = "TRANSFORM"
      type = "ENCRYPT"
      mode = "WRITEREAD"
      tags = ["PII"]
      params = {
        "encrypt.kek.name" = "testkek2"
      }
    }
    domain_rules {
      name = "encrypt"
      kind = "TRANSFORM"
      type = "ENCRYPT"
      mode = "WRITEREAD"
      tags = ["PIIIII"]
      params = {
        "encrypt.kek.name" = "testkek2"
      }
    }
  }

  lifecycle {
    prevent_destroy = true
  }
}

# Step #2: Evolve the schema by updating schemas/avro/purchase.avsc.
# Step #3: Run 'terraform plan' and 'terraform apply' to update
# confluent_schema.avro-purchase in place, evolving the avro-purchase
# schema from v1 to v2.

# Note: after running 'terraform destroy', only v2 (the latest version) will
# be soft deleted by default (set hard_delete = true for a hard deletion).
# Before
# Step #1: Run 'terraform plan' and 'terraform apply'
# to create v1 of the avro-purchase schema.
provider "confluent" {
  schema_registry_id            = var.schema_registry_id            # optionally use SCHEMA_REGISTRY_ID env var
  schema_registry_rest_endpoint = var.schema_registry_rest_endpoint # optionally use SCHEMA_REGISTRY_REST_ENDPOINT env var
  schema_registry_api_key       = var.schema_registry_api_key       # optionally use SCHEMA_REGISTRY_API_KEY env var
  schema_registry_api_secret    = var.schema_registry_api_secret    # optionally use SCHEMA_REGISTRY_API_SECRET env var
}

# confluent_schema.avro-purchase-v1 manages v1.
resource "confluent_schema" "avro-purchase-v1" {
  subject_name       = "avro-purchase-value"
  format             = "AVRO"
  schema             = file("./schemas/avro/purchase_v1.avsc")
  recreate_on_update = true

  lifecycle {
    prevent_destroy = true
  }
}

# After
# Step #2: Create schemas/avro/purchase_v2.avsc.
# Step #3: Run 'terraform plan' and 'terraform apply'
# to create confluent_schema.avro-purchase-v2.
provider "confluent" {
  schema_registry_id            = var.schema_registry_id            # optionally use SCHEMA_REGISTRY_ID env var
  schema_registry_rest_endpoint = var.schema_registry_rest_endpoint # optionally use SCHEMA_REGISTRY_REST_ENDPOINT env var
  schema_registry_api_key       = var.schema_registry_api_key       # optionally use SCHEMA_REGISTRY_API_KEY env var
  schema_registry_api_secret    = var.schema_registry_api_secret    # optionally use SCHEMA_REGISTRY_API_SECRET env var
}

# confluent_schema.avro-purchase-v1 manages v1.
resource "confluent_schema" "avro-purchase-v1" {
  subject_name       = "avro-purchase-value"
  format             = "AVRO"
  schema             = file("./schemas/avro/purchase_v1.avsc")
  recreate_on_update = true

  lifecycle {
    prevent_destroy = true
  }
}

# confluent_schema.avro-purchase-v2 manages v2.
resource "confluent_schema" "avro-purchase-v2" {
  subject_name       = "avro-purchase-value"
  format             = "AVRO"
  schema             = file("./schemas/avro/purchase_v2.avsc")
  recreate_on_update = true

  lifecycle {
    prevent_destroy = true
  }
}

# Note: after running 'terraform destroy', both v1 and v2 will
# be soft deleted by default. Set hard_delete = true for a hard deletion.
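Per the note above, destroyed schemas are only soft deleted unless hard_delete is set. A minimal hedged sketch of opting into hard deletion; the subject and file names are illustrative, and the prevent_destroy lifecycle guard is omitted here so that 'terraform destroy' can actually remove the resource:

```hcl
resource "confluent_schema" "avro-purchase" {
  subject_name = "avro-purchase-value"
  format       = "AVRO"
  schema       = file("./schemas/avro/purchase.avsc")

  # On 'terraform destroy', permanently remove the schema version from
  # Schema Registry instead of soft deleting it. Must be unset when importing.
  hard_delete = true
}
```

Hard-deleted schema versions cannot be recovered, so this is typically reserved for development environments or cleanup of abandoned subjects.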