digitalocean_kubernetes_cluster

Provides a DigitalOcean Kubernetes cluster resource. This can be used to create, delete, and modify clusters. For more information, see the official documentation.

Example Usage

Basic Example

resource "digitalocean_kubernetes_cluster" "foo" {
  name   = "foo"
  region = "nyc1"
  # Grab the latest version slug from `doctl kubernetes options versions`
  version = "1.22.8-do.1"

  node_pool {
    name       = "worker-pool"
    size       = "s-2vcpu-2gb"
    node_count = 3

    taint {
      key    = "workloadKind"
      value  = "database"
      effect = "NoSchedule"
    }
  }
}

Autoscaling Example

Node pools may also be configured to autoscale. For example:

resource "digitalocean_kubernetes_cluster" "foo" {
  name    = "foo"
  region  = "nyc1"
  version = "1.22.8-do.1"

  node_pool {
    name       = "autoscale-worker-pool"
    size       = "s-2vcpu-2gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 5
  }
}

Note that each node pool must always have at least one node; when using autoscaling, min_nodes must be greater than or equal to 1.

Auto Upgrade Example

DigitalOcean Kubernetes clusters may also be configured to automatically upgrade to new patch versions. You may explicitly specify the maintenance window policy. For example:

data "digitalocean_kubernetes_versions" "example" {
  version_prefix = "1.22."
}

resource "digitalocean_kubernetes_cluster" "foo" {
  name         = "foo"
  region       = "nyc1"
  auto_upgrade = true
  version      = data.digitalocean_kubernetes_versions.example.latest_version

  maintenance_policy {
    start_time = "04:00"
    day        = "sunday"
  }

  node_pool {
    name       = "default"
    size       = "s-1vcpu-2gb"
    node_count = 3
  }
}

Note that a data source is used to supply the version. This is needed to prevent a configuration diff whenever the cluster is upgraded.

Kubernetes Terraform Provider Example

The cluster's kubeconfig is exported as an attribute, allowing you to use it with the Kubernetes Terraform provider.

When using the Kubernetes provider with a cluster created in a separate Terraform module or configuration, use the digitalocean_kubernetes_cluster data source to access the cluster's credentials. See here for a full example.

data "digitalocean_kubernetes_cluster" "example" {
  name = "prod-cluster-01"
}

provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.example.endpoint
  token = data.digitalocean_kubernetes_cluster.example.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.example.kube_config[0].cluster_ca_certificate
  )
}
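
With the provider configured this way, Kubernetes resources can be managed in the same configuration as the cluster. A minimal sketch (the namespace name is illustrative):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}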

Exec credential plugin

Another method to ensure that the Kubernetes provider receives valid credentials is to use an exec plugin. To use this approach, the DigitalOcean CLI (doctl) must be installed. doctl will renew the token if needed before initializing the provider.

provider "kubernetes" {
  host = data.digitalocean_kubernetes_cluster.foo.endpoint
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.foo.kube_config[0].cluster_ca_certificate
  )

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "doctl"
    args = ["kubernetes", "cluster", "kubeconfig", "exec-credential",
    "--version=v1beta1", data.digitalocean_kubernetes_cluster.foo.id]
  }
}

Argument Reference

The following arguments are supported:

This resource supports customized create timeouts. The default timeout is 30 minutes.
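
For example, the create timeout can be raised with a timeouts block; the 60 minute value below is purely illustrative:

resource "digitalocean_kubernetes_cluster" "foo" {
  # ... cluster configuration as in the examples above ...

  timeouts {
    create = "60m"
  }
}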

Attributes Reference

In addition to the arguments listed above, the following additional attributes are exported:
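
For example, the id and endpoint attributes (used in the examples above) can be referenced elsewhere in a configuration or exposed as outputs; the output names below are illustrative:

output "cluster_id" {
  value = digitalocean_kubernetes_cluster.foo.id
}

output "cluster_endpoint" {
  value = digitalocean_kubernetes_cluster.foo.endpoint
}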

Import

Before importing a Kubernetes cluster, the cluster's default node pool must be tagged with the terraform:default-node-pool tag. The provider will automatically add this tag if the cluster only has a single node pool. Clusters with more than one node pool, however, will require that you manually add the terraform:default-node-pool tag to the node pool that you intend to be the default node pool.

Then the Kubernetes cluster and its default node pool can be imported using the cluster's id, e.g.

terraform import digitalocean_kubernetes_cluster.mycluster 1b8b2100-0e9f-4e8f-ad78-9eb578c2a0af

Additional node pools must be imported separately as digitalocean_kubernetes_node_pool resources, e.g.

terraform import digitalocean_kubernetes_node_pool.mynodepool 9d76f410-9284-4436-9633-4066852442c8
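
As with any import, a matching resource block must already exist in the configuration for the node pool being imported. A minimal sketch (the pool name, size, and node count are assumptions):

resource "digitalocean_kubernetes_node_pool" "mynodepool" {
  cluster_id = digitalocean_kubernetes_cluster.mycluster.id

  name       = "extra-pool"
  size       = "s-2vcpu-2gb"
  node_count = 2
}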