A Google VMware node pool.
Example usage (basic):
resource "google_gkeonprem_vmware_cluster" "default-basic" {
name = "my-cluster"
location = "us-west1"
admin_cluster_membership = "projects/870316890899/locations/global/memberships/gkeonprem-terraform-test"
description = "test cluster"
on_prem_version = "1.13.1-gke.35"
network_config {
service_address_cidr_blocks = ["10.96.0.0/12"]
pod_address_cidr_blocks = ["192.168.0.0/16"]
dhcp_ip_config {
enabled = true
}
}
control_plane_node {
cpus = 4
memory = 8192
replicas = 1
}
load_balancer {
vip_config {
control_plane_vip = "10.251.133.5"
ingress_vip = "10.251.135.19"
}
metal_lb_config {
address_pools {
pool = "ingress-ip"
manual_assign = "true"
addresses = ["10.251.135.19"]
}
address_pools {
pool = "lb-test-ip"
manual_assign = "true"
addresses = ["10.251.135.19"]
}
}
}
}
resource "google_gkeonprem_vmware_node_pool" "nodepool-basic" {
name = "my-nodepool"
location = "us-west1"
vmware_cluster = google_gkeonprem_vmware_cluster.default-basic.name
config {
replicas = 3
image_type = "ubuntu_containerd"
enable_load_balancer = true
}
}
resource "google_gkeonprem_vmware_cluster" "default-full" {
name = "my-cluster"
location = "us-west1"
admin_cluster_membership = "projects/870316890899/locations/global/memberships/gkeonprem-terraform-test"
description = "test cluster"
on_prem_version = "1.13.1-gke.35"
network_config {
service_address_cidr_blocks = ["10.96.0.0/12"]
pod_address_cidr_blocks = ["192.168.0.0/16"]
dhcp_ip_config {
enabled = true
}
}
control_plane_node {
cpus = 4
memory = 8192
replicas = 1
}
load_balancer {
vip_config {
control_plane_vip = "10.251.133.5"
ingress_vip = "10.251.135.19"
}
metal_lb_config {
address_pools {
pool = "ingress-ip"
manual_assign = "true"
addresses = ["10.251.135.19"]
}
address_pools {
pool = "lb-test-ip"
manual_assign = "true"
addresses = ["10.251.135.19"]
}
}
}
}
resource "google_gkeonprem_vmware_node_pool" "nodepool-full" {
name = "my-nodepool"
location = "us-west1"
vmware_cluster = google_gkeonprem_vmware_cluster.default-full.name
annotations = {}
config {
cpus = 4
memory_mb = 8196
replicas = 3
image_type = "ubuntu_containerd"
image = "image"
boot_disk_size_gb = 10
taints {
key = "key"
value = "value"
}
taints {
key = "key"
value = "value"
effect = "NO_SCHEDULE"
}
labels = {}
vsphere_config {
datastore = "test-datastore"
tags {
category = "test-category-1"
tag = "tag-1"
}
tags {
category = "test-category-2"
tag = "tag-2"
}
host_groups = ["host1", "host2"]
}
enable_load_balancer = true
}
node_pool_autoscaling {
min_replicas = 1
max_replicas = 5
}
}
The following arguments are supported:
config - (Required) The node configuration of the node pool. Structure is documented below.
name - (Required) The VMware node pool name.
vmware_cluster - (Required) The cluster this node pool belongs to.
location - (Required) The location of the resource.
The config block supports:
cpus - (Optional) The number of CPUs for each node in the node pool.
memory_mb - (Optional) The megabytes of memory for each node in the node pool.
replicas - (Optional) The number of nodes in the node pool.
image_type - (Required) The OS image to be used for each node in a node pool. Currently cos, ubuntu, ubuntu_containerd and windows are supported.
image - (Optional) The OS image name in vCenter, only valid when using Windows.
boot_disk_size_gb - (Optional) VMware disk size to be used during creation.
taints - (Optional) The initial taints assigned to nodes of this node pool. Structure is documented below.
labels - (Optional) The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version -- it's best to assume the behavior is undefined and conflicts should be avoided.
vsphere_config - (Optional) Specifies the vSphere config for the node pool. Structure is documented below.
enable_load_balancer - (Optional) Allow node pool traffic to be load balanced. Only works for clusters with MetalLB load balancers.
The taints block supports:
key - (Required) Key associated with the effect.
value - (Required) Value associated with the effect.
effect - (Optional) Available taint effects. Possible values are: EFFECT_UNSPECIFIED, NO_SCHEDULE, PREFER_NO_SCHEDULE, NO_EXECUTE.
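A minimal sketch of a taints entry inside the config block, tying these three fields together (the key and value below are hypothetical placeholders, not values the provider requires):

taints {
  key    = "dedicated"    # hypothetical taint key
  value  = "experimental" # hypothetical taint value
  effect = "NO_EXECUTE"   # one of the effects listed above
}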
The vsphere_config block supports:
datastore - (Optional) The name of the vCenter datastore. Inherited from the user cluster.
tags - (Optional) Tags to apply to VMs. Structure is documented below.
host_groups - (Optional) vSphere host groups to apply to all VMs in the node pool.
The tags block supports:
category - (Optional) The vSphere tag category.
tag - (Optional) The vSphere tag name.
display_name - (Optional) The display name for the node pool.
annotations - (Optional) Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. A key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). The prefix must be a DNS subdomain. The name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between. See the sketch after this list for an example of a valid key. Note: This field is non-authoritative, and will only manage the annotations present in your configuration. Please refer to the field effective_annotations for all of the annotations present on the resource.
node_pool_autoscaling - (Optional) Node pool autoscaling config for the node pool. Structure is documented below.
project - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
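A minimal sketch of the annotation key format described above, with an optional DNS-subdomain prefix followed by a slash and a short name (the key and value are hypothetical):

annotations = {
  "example.com/owner" = "platform-team" # hypothetical prefixed key and value
}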
The node_pool_autoscaling block supports:
min_replicas - (Required) Minimum number of replicas in the NodePool.
max_replicas - (Required) Maximum number of replicas in the NodePool.
In addition to the arguments listed above, the following computed attributes are exported:
id - an identifier for the resource with format projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}
status - ResourceStatus representing detailed cluster state. Structure is documented below.
uid - The unique identifier of the node pool.
state - The current state of this cluster.
reconciling - If set, there are currently changes in flight to the node pool.
create_time - The time the cluster was created, in RFC3339 text format.
update_time - The time the cluster was last updated, in RFC3339 text format.
delete_time - The time the cluster was deleted, in RFC3339 text format.
etag - This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. Allows clients to perform consistent read-modify-writes through optimistic concurrency control.
on_prem_version - Anthos version for the node pool. Defaults to the user cluster version.
effective_annotations - All of the annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.
The status block contains:
error_message - (Output) Human-friendly representation of the error message from the user cluster controller. The error message can be temporary while the user cluster controller creates a cluster or node pool. If the error message persists for a longer period of time, it can be used to surface an error message indicating real problems requiring user intervention.
conditions - (Output) ResourceConditions provide a standard mechanism for higher-level status reporting from the user cluster controller. Structure is documented below.
The conditions block contains:
type - (Output) Type of the condition (e.g., ClusterRunning, NodePoolRunning or ServerSidePreflightReady).
reason - (Output) Machine-readable message indicating details about the last transition.
message - (Output) Human-readable message indicating details about the last transition.
last_transition_time - (Output) Last time the condition transitioned from one status to another.
state - (Output) The lifecycle state of the condition.
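These exported attributes can be referenced like any other Terraform resource attribute. A minimal sketch, assuming the nodepool-full resource from the example above, that surfaces the node pool's uid as an output:

output "vmware_node_pool_uid" {
  # Exposes the server-assigned unique identifier of the node pool.
  value = google_gkeonprem_vmware_node_pool.nodepool-full.uid
}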
This resource provides the following Timeouts configuration options:
create - Default is 60 minutes.
update - Default is 60 minutes.
delete - Default is 60 minutes.
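If these defaults are too short for your environment, the standard timeouts block inside the resource can override them; a minimal sketch (the 90-minute values are arbitrary examples):

timeouts {
  create = "90m" # arbitrary example value
  update = "90m"
  delete = "90m"
}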
VmwareNodePool can be imported using any of these accepted formats:
projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}
{{project}}/{{location}}/{{vmware_cluster}}/{{name}}
{{location}}/{{vmware_cluster}}/{{name}}
In Terraform v1.5.0 and later, use an import
block to import VmwareNodePool using one of the formats above. For example:
import {
  id = "projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}"
  to = google_gkeonprem_vmware_node_pool.default
}
When using the terraform import
command, VmwareNodePool can be imported using one of the formats above. For example:
$ terraform import google_gkeonprem_vmware_node_pool.default projects/{{project}}/locations/{{location}}/vmwareClusters/{{vmware_cluster}}/vmwareNodePools/{{name}}
$ terraform import google_gkeonprem_vmware_node_pool.default {{project}}/{{location}}/{{vmware_cluster}}/{{name}}
$ terraform import google_gkeonprem_vmware_node_pool.default {{location}}/{{vmware_cluster}}/{{name}}
This resource supports User Project Overrides.