A connection allows BigQuery to connect to external data sources.
To get more information about Connection, see the BigQuery Connection API documentation.
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "👋"
  description   = "a riveting description"
  cloud_resource {}
}
resource "google_sql_database_instance" "instance" {
  name             = "my-database-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  deletion_protection = true
}

resource "google_sql_database" "db" {
  instance = google_sql_database_instance.instance.name
  name     = "db"
}

resource "random_password" "pwd" {
  length  = 16
  special = false
}

resource "google_sql_user" "user" {
  name     = "user"
  instance = google_sql_database_instance.instance.name
  password = random_password.pwd.result
}

resource "google_bigquery_connection" "connection" {
  friendly_name = "👋"
  description   = "a riveting description"
  location      = "US"

  cloud_sql {
    instance_id = google_sql_database_instance.instance.connection_name
    database    = google_sql_database.db.name
    type        = "POSTGRES"

    credential {
      username = google_sql_user.user.name
      password = google_sql_user.user.password
    }
  }
}
resource "google_sql_database_instance" "instance" {
  name             = "my-database-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  deletion_protection = true
}

resource "google_sql_database" "db" {
  instance = google_sql_database_instance.instance.name
  name     = "db"
}

resource "random_password" "pwd" {
  length  = 16
  special = false
}

resource "google_sql_user" "user" {
  name     = "user"
  instance = google_sql_database_instance.instance.name
  password = random_password.pwd.result
}

resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "👋"
  description   = "a riveting description"

  cloud_sql {
    instance_id = google_sql_database_instance.instance.connection_name
    database    = google_sql_database.db.name
    type        = "POSTGRES"

    credential {
      username = google_sql_user.user.name
      password = google_sql_user.user.password
    }
  }
}
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "aws-us-east-1"
  friendly_name = "👋"
  description   = "a riveting description"

  aws {
    access_role {
      iam_role_id = "arn:aws:iam::999999999999:role/omnirole"
    }
  }
}
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "azure-eastus2"
  friendly_name = "👋"
  description   = "a riveting description"

  azure {
    customer_tenant_id              = "customer-tenant-id"
    federated_application_client_id = "b43eeeee-eeee-eeee-eeee-a480155501ce"
  }
}
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "👋"
  description   = "a riveting description"

  cloud_spanner {
    database      = "projects/project/instances/instance/databases/database"
    database_role = "database_role"
  }
}
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "👋"
  description   = "a riveting description"

  cloud_spanner {
    database        = "projects/project/instances/instance/databases/database"
    use_parallelism = true
    use_data_boost  = true
    max_parallelism = 100
  }
}
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"
  friendly_name = "👋"
  description   = "a riveting description"

  spark {
    spark_history_server_config {
      dataproc_cluster = google_dataproc_cluster.basic.id
    }
  }
}

resource "google_dataproc_cluster" "basic" {
  name   = "my-connection"
  region = "us-central1"

  cluster_config {
    # Keep the costs down with the smallest config we can get away with
    software_config {
      override_properties = {
        "dataproc:dataproc.allow.zero.workers" = "true"
      }
    }

    master_config {
      num_instances = 1
      machine_type  = "e2-standard-2"
      disk_config {
        boot_disk_size_gb = 35
      }
    }
  }
}
The following arguments are supported:
connection_id - (Optional) Optional connection ID that should be assigned to the created connection.
location - (Optional) The geographic location where the connection should reside. A Cloud SQL instance must be in the same location as its connection, with the following exceptions: Cloud SQL us-central1 maps to BigQuery US, and Cloud SQL europe-west1 maps to BigQuery EU. Examples: US, EU, asia-northeast1, us-central1, europe-west1. Spanner connections use the same region as the Spanner instance. The allowed AWS region is aws-us-east-1; the allowed Azure region is azure-eastus2.
friendly_name - (Optional) A descriptive name for the connection.
description - (Optional) A description for the connection.
cloud_sql - (Optional) Connection properties specific to Cloud SQL. Structure is documented below.
aws - (Optional) Connection properties specific to Amazon Web Services. Structure is documented below.
azure - (Optional) Container for connection properties specific to Azure. Structure is documented below.
cloud_spanner - (Optional) Connection properties specific to Cloud Spanner. Structure is documented below.
cloud_resource - (Optional) Container for connection properties for delegation of access to GCP resources. Structure is documented below.
spark - (Optional) Container for connection properties to execute stored procedures for Apache Spark. Structure is documented below.
project - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
The cloud_sql block supports:
instance_id - (Required) Cloud SQL instance ID in the form project:location:instance.
database - (Required) Database name.
credential - (Required) Cloud SQL credential. Structure is documented below.
type - (Required) Type of the Cloud SQL database. Possible values are: DATABASE_TYPE_UNSPECIFIED, POSTGRES, MYSQL.
service_account_id - (Output) When the connection is used in the context of an operation in BigQuery, this service account will serve as the identity being used for connecting to the Cloud SQL instance specified in this connection.
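As a sketch of a common follow-up (the project ID and role choice here are illustrative assumptions, not part of this resource's documentation), the exported service account can be granted access to Cloud SQL:

```hcl
# Hypothetical example: grant the connection's service account the Cloud SQL
# Client role. The project ID and role are illustrative assumptions.
resource "google_project_iam_member" "connection_sa_access" {
  project = "my-project"
  role    = "roles/cloudsql.client"
  member  = "serviceAccount:${google_bigquery_connection.connection.cloud_sql[0].service_account_id}"
}
```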
The credential block supports:
username - (Required) Username for database.
password - (Required) Password for database. Note: This property is sensitive and will not be displayed in the plan.
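Because the password is sensitive, one possible pattern (a sketch; the variable name is illustrative) is to supply it through a sensitive input variable rather than hardcoding it:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

resource "google_sql_user" "user" {
  name     = "user"
  instance = google_sql_database_instance.instance.name
  password = var.db_password
}
```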
The aws block supports:
access_role - (Required) Authentication using a Google-owned service account to assume into the customer's AWS IAM Role. Structure is documented below.
The access_role block supports:
iam_role_id - (Required) The user's AWS IAM Role that trusts the Google-owned AWS IAM user Connection.
identity - (Output) A unique Google-owned and Google-generated identity for the Connection. This identity will be used to access the user's AWS IAM Role.
The azure block supports:
application - (Output) The name of the Azure Active Directory Application.
client_id - (Output) The client ID of the Azure Active Directory Application.
object_id - (Output) The object ID of the Azure Active Directory Application.
customer_tenant_id - (Required) The ID of the customer's directory that hosts the data.
federated_application_client_id - (Optional) The Azure Application (client) ID where the federated credentials will be hosted.
redirect_uri - (Output) The URL the user will be redirected to after granting consent during connection setup.
identity - (Output) A unique Google-owned and Google-generated identity for the Connection. This identity will be used to access the user's Azure Active Directory Application.
The cloud_spanner block supports:
database - (Required) Cloud Spanner database in the form project/instance/database.
use_parallelism - (Optional) If parallelism should be used when reading from Cloud Spanner.
max_parallelism - (Optional) Allows setting max parallelism per query when executing on Spanner independent compute resources. If unspecified, default values of parallelism are chosen that are dependent on the Cloud Spanner instance configuration. use_parallelism and use_data_boost must be set when setting max parallelism.
use_data_boost - (Optional) If set, the request will be executed via Spanner independent compute resources. use_parallelism must be set when using data boost.
database_role - (Optional) Cloud Spanner database role for fine-grained access control. The Cloud Spanner admin should have provisioned the database role with appropriate permissions, such as SELECT and INSERT. Other users should only use roles provided by their Cloud Spanner admins. The database role name must start with a letter, and can only contain letters, numbers, and underscores. For more details, see https://cloud.google.com/spanner/docs/fgac-about.
use_serverless_analytics - (Optional, Deprecated) If the serverless analytics service should be used to read data from Cloud Spanner. use_parallelism must be set when using serverless analytics.
~> Warning: use_serverless_analytics is deprecated and will be removed in a future major release. Use use_data_boost instead.
The cloud_resource block supports:
service_account_id - (Output) The account ID of the service created for the purpose of this connection.
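As a hedged sketch (the project ID and role are illustrative assumptions), this service account is typically granted access to the Cloud resources the connection should reach, for example Cloud Storage:

```hcl
# Hypothetical example: allow the connection's service account to read
# Cloud Storage objects. Project ID and role are illustrative.
resource "google_project_iam_member" "connection_permission" {
  project = "my-project"
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${google_bigquery_connection.connection.cloud_resource[0].service_account_id}"
}
```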
The spark block supports:
metastore_service_config - (Optional) Dataproc Metastore Service configuration for the connection. Structure is documented below.
spark_history_server_config - (Optional) Spark History Server configuration for the connection. Structure is documented below.
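The Spark example above only shows spark_history_server_config; a metastore-backed variant might look like the following sketch (the Metastore service path is an illustrative assumption):

```hcl
resource "google_bigquery_connection" "connection" {
  connection_id = "my-connection"
  location      = "US"

  spark {
    metastore_service_config {
      # Illustrative resource name; replace with an existing Dataproc Metastore service.
      metastore_service = "projects/my-project/locations/us-central1/services/my-metastore"
    }
  }
}
```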
The metastore_service_config block supports:
metastore_service - (Optional) Resource name of an existing Dataproc Metastore service, in the form of projects/[projectId]/locations/[region]/services/[serviceId].
The spark_history_server_config block supports:
dataproc_cluster - (Optional) Resource name of an existing Dataproc Cluster to act as a Spark History Server for the connection, in the form of projects/[projectId]/regions/[region]/clusters/[cluster_name].
In addition to the arguments listed above, the following computed attributes are exported:
id - An identifier for the resource with format projects/{{project}}/locations/{{location}}/connections/{{connection_id}}
name - The resource name of the connection in the form of: "projects/{project_id}/locations/{location_id}/connections/{connectionId}"
has_credential - True if the connection has a credential assigned.
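As a small usage sketch, the exported attributes can be surfaced through outputs (resource names match the examples above):

```hcl
output "connection_name" {
  value = google_bigquery_connection.connection.name
}

output "connection_has_credential" {
  value = google_bigquery_connection.connection.has_credential
}
```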
This resource provides the following Timeouts configuration options:
create - Default is 20 minutes.
update - Default is 20 minutes.
delete - Default is 20 minutes.
Connection can be imported using any of these accepted formats:
projects/{{project}}/locations/{{location}}/connections/{{connection_id}}
{{project}}/{{location}}/{{connection_id}}
{{location}}/{{connection_id}}
In Terraform v1.5.0 and later, use an import
block to import Connection using one of the formats above. For example:
import {
  id = "projects/{{project}}/locations/{{location}}/connections/{{connection_id}}"
  to = google_bigquery_connection.default
}
When using the terraform import
command, Connection can be imported using one of the formats above. For example:
$ terraform import google_bigquery_connection.default projects/{{project}}/locations/{{location}}/connections/{{connection_id}}
$ terraform import google_bigquery_connection.default {{project}}/{{location}}/{{connection_id}}
$ terraform import google_bigquery_connection.default {{location}}/{{connection_id}}
This resource supports User Project Overrides.
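As a sketch, a user project override is enabled on the provider (the billing project name is an illustrative assumption):

```hcl
provider "google" {
  user_project_override = true
  billing_project       = "my-billing-project" # illustrative
}
```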