nomad_job

Manages a job registered in Nomad.

This can be used to initialize your cluster with system jobs, common services, and more. In day-to-day Nomad use, developers commonly submit jobs to Nomad directly, for example for general application deployment. In addition to these apps, a Nomad cluster often runs core system services that are ideally set up during infrastructure creation. This resource is ideal for the latter type of job, but can be used to manage any job within Nomad.

Example Usage

Registering a job from a jobspec file:

resource "nomad_job" "app" {
  jobspec = file("${path.module}/jobspec.hcl")
}

Registering a job from an inline jobspec. This is less common in practice, but it illustrates what is possible. More likely, the contents will be paired with something such as the templatefile function or the template_file data source to render parameterized jobspecs.

resource "nomad_job" "app" {
  jobspec = <<EOT
job "foo" {
  datacenters = ["dc1"]
  type        = "service"
  group "foo" {
    task "foo" {
      driver = "raw_exec"
      config {
        command = "/bin/sleep"
        args    = ["1"]
      }

      resources {
        cpu    = 20
        memory = 10
      }

      logs {
        max_files     = 3
        max_file_size = 10
      }
    }
  }
}
EOT
}
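The parameterized pairing mentioned above might look like the following sketch, which assumes a hypothetical template file named jobspec.hcl.tpl that interpolates a datacenter value:

```hcl
resource "nomad_job" "app" {
  # Render the jobspec from a template. Both "jobspec.hcl.tpl" and the
  # "datacenter" template variable are illustrative names for this sketch.
  jobspec = templatefile("${path.module}/jobspec.hcl.tpl", {
    datacenter = "dc1"
  })
}
```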

JSON jobspec

The input jobspec can also be provided as JSON instead of HCL by setting the argument json to true:

resource "nomad_job" "app" {
  jobspec = file("${path.module}/jobspec.json")
  json    = true
}

When using JSON, the input jobspec should have the same structure used by the Nomad API. The Nomad CLI can translate HCL jobs to JSON:

nomad job run -output my-job.nomad > my-job.json

Alternatively, you can use the /v1/jobs/parse API endpoint.
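As a sketch, the JSON body that endpoint expects can be built from an HCL file with jq; the file name here is illustrative, and the final curl call assumes a running Nomad agent:

```shell
# Wrap an HCL jobspec in the request body expected by /v1/jobs/parse
# ("my-job.nomad" is an illustrative file name for this sketch).
printf 'job "x" {}\n' > my-job.nomad
jq -Rs '{JobHCL: ., Canonicalize: true}' < my-job.nomad > parse-payload.json

# Submitting the payload requires a running Nomad agent, e.g.:
# curl -s -X POST "${NOMAD_ADDR:-http://127.0.0.1:4646}/v1/jobs/parse" \
#   --data @parse-payload.json > my-job.json
```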

HCL2 jobspec

By default, HCL jobs are parsed using the HCL2 format. If your job is not compatible with HCL2, you may set the hcl1 argument to true to use the previous HCL1 parser.

resource "nomad_job" "app" {
  jobspec = file("${path.module}/jobspec.hcl")

  hcl1 = true
}

Variables

HCL2 variables can be passed from Terraform to the jobspec parser through the vars attribute inside the hcl2 block. The variable must also be declared inside the jobspec as an input variable.

Due to the way resource attributes are stored in the Terraform state, the values must be provided as strings.

resource "nomad_job" "app" {
  hcl2 {
    vars = {
      "restart_attempts" = "5",
      "datacenters"      = "[\"dc1\", \"dc2\"]",
    }
  }

  jobspec = <<EOT
variable "datacenters" {
  type = list(string)
}

variable "restart_attempts" {
  type = number
}

job "foo-hcl2" {
  datacenters = var.datacenters

  restart {
    attempts = var.restart_attempts
    ...
  }
  ...
}
EOT
}
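Rather than hand-escaping these strings, Terraform's jsonencode function can produce the string form of a complex value. A sketch, reusing the variables from the example above (the jobspec file name is illustrative):

```hcl
resource "nomad_job" "app" {
  hcl2 {
    vars = {
      "restart_attempts" = "5"
      # jsonencode(["dc1", "dc2"]) renders the string ["dc1","dc2"],
      # which the HCL2 parser reads back as a list(string) value.
      "datacenters" = jsonencode(["dc1", "dc2"])
    }
  }

  jobspec = file("${path.module}/jobspec.hcl")
}
```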

Variables must have known values at plan time. This means that you will not be able to reference values from resources that do not yet exist in the Terraform state. Instead, use string templates or the templatefile Terraform function to provide a fully rendered jobspec.

resource "random_pet" "random_dc" {}

# This resource will fail to plan because random_pet.random_dc.id is unknown.
resource "nomad_job" "job_with_hcl2" {
  jobspec = <<EOT
variable "datacenter" {
  type = string
}

job "example" {
  datacenters = [var.datacenter]
  ...
}
EOT

  hcl2 {
    vars = {
      datacenter = random_pet.random_dc.id
    }
  }
}

# This will work since Terraform will provide a fully rendered jobspec once it
# knows the value of random_pet.random_dc.id.
resource "nomad_job" "job_with_hcl2" {
  jobspec = <<EOT
job "example" {
  datacenters = ["${random_pet.random_dc.id}"]
  ...
}
EOT
}
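The templatefile function mentioned above works the same way. A sketch, assuming a hypothetical template file named jobspec.nomad.tpl that interpolates a datacenter value:

```hcl
resource "nomad_job" "job_with_templatefile" {
  # Terraform renders the template only once random_pet.random_dc.id is
  # known, so the jobspec reaches the parser fully resolved.
  jobspec = templatefile("${path.module}/jobspec.nomad.tpl", {
    datacenter = random_pet.random_dc.id
  })
}
```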

Filesystem functions

Please note that filesystem functions used inside a jobspec create an implicit dependency in your Terraform configuration that Terraform cannot track. For example, Terraform will not be able to detect changes to files loaded using the file function inside a jobspec.

To avoid confusion, these functions are disabled by default. To enable them, set allow_fs to true:

resource "nomad_job" "app" {
  jobspec = file("${path.module}/jobspec.hcl")

  hcl2 {
    allow_fs = true
  }
}

If you do need to track changes to external files, you can use the local_file data source and the templatefile function to load the local file into Terraform and then render its content into the jobspec:

# main.tf

data "local_file" "index_html" {
  filename = "${path.module}/index.html"
}

resource "nomad_job" "nginx" {
  jobspec = templatefile("${path.module}/nginx.nomad.tpl", {
    index_html = data.local_file.index_html.content
  })
}

# nginx.nomad.tpl

job "nginx" {
...
      template {
        data        = <<EOF
${index_html}
EOF
        destination = "local/www/index.html"
      }
...
}

Tracking Jobspec Changes

The Nomad API allows submitting the raw jobspec when registering and updating jobs. If available, the job submission source is used to detect changes to the jobspec and hcl2.vars arguments.

Argument Reference

The following arguments are supported:

Timeouts

nomad_job provides the following Timeouts configuration options when detach is set to false:

Importing Jobs

Jobs are imported using the pattern <job ID>@<namespace>.

$ terraform import nomad_job.example example@my-namespace
nomad_job.example: Importing from ID "example@my-namespace"...
nomad_job.example: Import prepared!
  Prepared nomad_job for import
nomad_job.example: Refreshing state... [id=example@my-namespace]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
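On Terraform v1.5.0 and later, the same import can be expressed declaratively with an import block, using the IDs from the example above:

```hcl
import {
  to = nomad_job.example
  id = "example@my-namespace"
}
```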