Represents a table.
To get more information about Table, see the BigLake API documentation.
resource "google_biglake_catalog" "catalog" {
name = "my_catalog"
location = "US"
}
resource "google_storage_bucket" "bucket" {
name = "my_bucket"
location = "US"
force_destroy = true
uniform_bucket_level_access = true
}
resource "google_storage_bucket_object" "metadata_folder" {
name = "metadata/"
content = " "
bucket = google_storage_bucket.bucket.name
}
resource "google_storage_bucket_object" "data_folder" {
name = "data/"
content = " "
bucket = google_storage_bucket.bucket.name
}
resource "google_biglake_database" "database" {
name = "my_database"
catalog = google_biglake_catalog.catalog.id
type = "HIVE"
hive_options {
location_uri = "gs://${google_storage_bucket.bucket.name}/${google_storage_bucket_object.metadata_folder.name}"
parameters = {
"owner" = "Alex"
}
}
}
resource "google_biglake_table" "table" {
name = "my_table"
database = google_biglake_database.database.id
type = "HIVE"
hive_options {
table_type = "MANAGED_TABLE"
storage_descriptor {
location_uri = "gs://${google_storage_bucket.bucket.name}/${google_storage_bucket_object.data_folder.name}"
input_format = "org.apache.hadoop.mapred.SequenceFileInputFormat"
output_format = "org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat"
}
# Some Example Parameters.
parameters = {
"spark.sql.create.version" = "3.1.3"
"spark.sql.sources.schema.numParts" = "1"
"transient_lastDdlTime" = "1680894197"
"spark.sql.partitionProvider" = "catalog"
"owner" = "John Doe"
"spark.sql.sources.schema.part.0"= "{\"type\":\"struct\",\"fields\":[{\"name\":\"id\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"age\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}}]}"
"spark.sql.sources.provider" = "iceberg"
"provider" = "iceberg"
}
}
}
The following arguments are supported:
* `name` - (Required) The name of the Table. Format:
  `projects/{project_id_or_number}/locations/{locationId}/catalogs/{catalogId}/databases/{databaseId}/tables/{tableId}`

* `type` - (Optional) The table type. Possible values are: `HIVE`.

* `hive_options` - (Optional) Options of a Hive table.
  Structure is documented below.

* `database` - (Optional) The id of the parent database.
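Taken together, a minimal table using only the top-level arguments above might look like the following sketch (the referenced database mirrors the full example earlier on this page):

```hcl
# Minimal sketch: only the top-level arguments documented above.
# "database" references the google_biglake_database from the earlier example.
resource "google_biglake_table" "minimal" {
  name     = "my_table"
  database = google_biglake_database.database.id
  type     = "HIVE"
}
```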
The `hive_options` block supports:

* `parameters` - (Optional) Stores user supplied Hive table parameters. An object containing a
  list of `"key": value` pairs.
  Example: `{ "name": "wrench", "mass": "1.3kg", "count": "3" }`.

* `table_type` - (Optional) Hive table type. For example, `MANAGED_TABLE`, `EXTERNAL_TABLE`.

* `storage_descriptor` - (Optional) Stores physical storage information on the data.
  Structure is documented below.
The `storage_descriptor` block supports:

* `location_uri` - (Optional) Cloud Storage folder URI where the table data is stored, starting with `"gs://"`.

* `input_format` - (Optional) The fully qualified Java class name of the input format.

* `output_format` - (Optional) The fully qualified Java class name of the output format.
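As a sketch, the nested blocks documented above compose like this (the bucket path and format class names mirror the full example earlier on this page; `EXTERNAL_TABLE` is just one of the `table_type` values listed above):

```hcl
# Sketch of hive_options with a nested storage_descriptor.
resource "google_biglake_table" "sketch" {
  name     = "my_table"
  database = google_biglake_database.database.id
  type     = "HIVE"
  hive_options {
    table_type = "EXTERNAL_TABLE" # or MANAGED_TABLE, per table_type above
    storage_descriptor {
      # The folder URI must start with "gs://".
      location_uri  = "gs://my_bucket/data/"
      input_format  = "org.apache.hadoop.mapred.SequenceFileInputFormat"
      output_format = "org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat"
    }
  }
}
```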
In addition to the arguments listed above, the following computed attributes are exported:

* `id` - An identifier for the resource with format `{{database}}/tables/{{name}}`

* `create_time` - Output only. The creation time of the table. A timestamp in RFC3339 UTC
  "Zulu" format, with nanosecond resolution and up to nine fractional
  digits. Examples: "2014-10-02T15:01:23Z" and
  "2014-10-02T15:01:23.045123456Z".

* `update_time` - Output only. The last modification time of the table. A timestamp in
  RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine
  fractional digits. Examples: "2014-10-02T15:01:23Z" and
  "2014-10-02T15:01:23.045123456Z".

* `delete_time` - Output only. The deletion time of the table. Only set after the
  table is deleted. A timestamp in RFC3339 UTC "Zulu" format, with
  nanosecond resolution and up to nine fractional digits. Examples:
  "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

* `expire_time` - Output only. The time when this table is considered expired. Only set
  after the table is deleted. A timestamp in RFC3339 UTC "Zulu" format,
  with nanosecond resolution and up to nine fractional digits. Examples:
  "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

* `etag` - The checksum of a table object computed by the server based on the value
  of other fields. It may be sent on update requests to ensure the client
  has an up-to-date value before proceeding. It is only checked for update
  table operations.
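The exported attributes can be referenced elsewhere in a configuration; for instance (output names are illustrative, and `table` refers to the resource from the example above):

```hcl
# Illustrative outputs reading the computed attributes documented above.
output "biglake_table_id" {
  value = google_biglake_table.table.id
}

output "biglake_table_create_time" {
  value = google_biglake_table.table.create_time
}
```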
This resource provides the following Timeouts configuration options:

* `create` - Default is 20 minutes.
* `update` - Default is 20 minutes.
* `delete` - Default is 20 minutes.

Table can be imported using any of these accepted formats:
* `{{database}}/tables/{{name}}`
In Terraform v1.5.0 and later, use an `import` block to import Table using one of the formats above. For example:

```hcl
import {
  id = "{{database}}/tables/{{name}}"
  to = google_biglake_table.default
}
```
When using the `terraform import` command, Table can be imported using one of the formats above. For example:

```
$ terraform import google_biglake_table.default {{database}}/tables/{{name}}
```