Provides a Fastly Compute service. Compute is a computation platform capable of running custom binaries that you compile on your own systems and upload to Fastly. Security and portability are provided by compiling your code to WebAssembly using the `wasm32-wasi` target. A Compute service encompasses Domains and Backends.

The Service resource requires a domain name that is correctly set up to direct traffic to the Fastly service. See Fastly's guide on Adding CNAME Records on their documentation site for guidance.
Basic usage:
```terraform
data "fastly_package_hash" "example" {
  filename = "./path/to/package.tar.gz"
}

resource "fastly_service_compute" "example" {
  name = "demofastly"

  domain {
    name    = "demo.notexample.com"
    comment = "demo"
  }

  package {
    filename         = "./path/to/package.tar.gz"
    source_code_hash = data.fastly_package_hash.example.hash
  }

  force_destroy = true
}
```
The `package` block supports uploading or modifying Wasm packages for use in a Fastly Compute service. See Fastly's documentation on Compute for more details.
The Product Enablement APIs allow customers to enable and disable specific products. Not all customers are entitled to use these endpoints, so care needs to be taken when configuring a `product_enablement` block in your Terraform configuration. Consult the Product Enablement Guide to understand the internal workings of the `product_enablement` block.
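For illustration, a minimal sketch of a `product_enablement` block that enables Fanout and WebSockets (this assumes the account is entitled to both products; the surrounding resource configuration is elided):

```terraform
resource "fastly_service_compute" "example" {
  # ... domain, package, and other configuration ...

  product_enablement {
    fanout     = true # requires Fanout entitlement on the account
    websockets = true # requires WebSockets entitlement on the account
  }
}
```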
Fastly Services can be imported using their service ID, e.g.

```
$ terraform import fastly_service_compute.demo xxxxxxxxxxxxxxxxxxxx
```

By default, either the active version will be imported, or the latest version if no version is active. Alternatively, a specific version of the service can be selected by appending an `@` followed by the version number to the service ID, e.g.

```
$ terraform import fastly_service_compute.demo xxxxxxxxxxxxxxxxxxxx@2
```
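On Terraform v1.5+, the same import can be expressed declaratively with an `import` block instead of the CLI command (a sketch; the service ID is a placeholder):

```terraform
import {
  to = fastly_service_compute.demo
  id = "xxxxxxxxxxxxxxxxxxxx" # service ID; may be suffixed with @<version>, as with the CLI
}
```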
Schema

Required:

- `domain` (Block Set, Min: 1) A set of Domain names to serve as entry points for your Service (see below for nested schema)
- `name` (String) The unique name for the Service to create

Optional:

- `activate` (Boolean) Conditionally prevents the Service from being activated. The apply step will continue to create a new draft version but will not activate it if this is set to `false`. Default `true`
- `backend` (Block Set) (see below for nested schema)
- `comment` (String) Description field for the service. Default `Managed by Terraform`
- `dictionary` (Block Set) (see below for nested schema)
- `force_destroy` (Boolean) Services that are active cannot be destroyed. In order to destroy the Service, set `force_destroy` to `true`. Default `false`
- `logging_bigquery` (Block Set) (see below for nested schema)
- `logging_blobstorage` (Block Set) (see below for nested schema)
- `logging_cloudfiles` (Block Set) (see below for nested schema)
- `logging_datadog` (Block Set) (see below for nested schema)
- `logging_digitalocean` (Block Set) (see below for nested schema)
- `logging_elasticsearch` (Block Set) (see below for nested schema)
- `logging_ftp` (Block Set) (see below for nested schema)
- `logging_gcs` (Block Set) (see below for nested schema)
- `logging_googlepubsub` (Block Set) (see below for nested schema)
- `logging_heroku` (Block Set) (see below for nested schema)
- `logging_honeycomb` (Block Set) (see below for nested schema)
- `logging_https` (Block Set) (see below for nested schema)
- `logging_kafka` (Block Set) (see below for nested schema)
- `logging_kinesis` (Block Set) (see below for nested schema)
- `logging_logentries` (Block Set) (see below for nested schema)
- `logging_loggly` (Block Set) (see below for nested schema)
- `logging_logshuttle` (Block Set) (see below for nested schema)
- `logging_newrelic` (Block Set) (see below for nested schema)
- `logging_openstack` (Block Set) (see below for nested schema)
- `logging_papertrail` (Block Set) (see below for nested schema)
- `logging_s3` (Block Set) (see below for nested schema)
- `logging_scalyr` (Block Set) (see below for nested schema)
- `logging_sftp` (Block Set) (see below for nested schema)
- `logging_splunk` (Block Set) (see below for nested schema)
- `logging_sumologic` (Block Set) (see below for nested schema)
- `logging_syslog` (Block Set) (see below for nested schema)
- `package` (Block List, Max: 1) The `package` block supports uploading or modifying Wasm packages for use in a Fastly Compute service (if omitted, ensure `activate = false` is set on `fastly_service_compute` to avoid service validation errors). See Fastly's documentation on Compute (see below for nested schema)
- `product_enablement` (Block Set, Max: 1) (see below for nested schema)
- `resource_link` (Block Set) A resource link represents a link between a shared resource (such as a KV Store or Config Store) and a service version. (see below for nested schema)
- `reuse` (Boolean) Services that are active cannot be destroyed. If set to `true`, a service Terraform intends to destroy will instead be deactivated (allowing it to be reused by importing it into another Terraform project). If `false`, attempting to destroy an active service will cause an error. Default `false`
- `version_comment` (String) Description field for the version

Read-Only:

- `active_version` (Number) The currently active version of your Fastly Service
- `cloned_version` (Number) The latest cloned version by the provider
- `force_refresh` (Boolean) Used internally by the provider to temporarily indicate if all resources should call their associated API to update the local state. This is for scenarios where the service version has been reverted outside of Terraform (e.g. via the Fastly UI) and the provider needs to resync the state for a different active version (this is only if `activate` is `true`).
- `id` (String) The ID of this resource.
- `imported` (Boolean) Used internally by the provider to temporarily indicate if the service is being imported, and is reset to `false` once the import is finished
domain

Required:

- `name` (String) The domain that this Service will respond to. It is important to note that changing this attribute will delete and recreate the resource.

Optional:

- `comment` (String) An optional comment about the Domain.
backend

Required:

- `address` (String) An IPv4, hostname, or IPv6 address for the Backend
- `name` (String) Name for this Backend. Must be unique to this Service. It is important to note that changing this attribute will delete and recreate the resource

Optional:

- `between_bytes_timeout` (Number) How long to wait between bytes in milliseconds. Default `10000`
- `connect_timeout` (Number) How long to wait for a timeout in milliseconds. Default `1000`
- `error_threshold` (Number) Number of errors to allow before the Backend is marked as down. Default `0`
- `first_byte_timeout` (Number) How long to wait for the first bytes in milliseconds. Default `15000`
- `healthcheck` (String) Name of a defined `healthcheck` to assign to this backend
- `keepalive_time` (Number) How long in seconds to keep a persistent connection to the backend between requests.
- `max_conn` (Number) Maximum number of connections for this Backend. Default `200`
- `max_tls_version` (String) Maximum allowed TLS version on SSL connections to this backend.
- `min_tls_version` (String) Minimum allowed TLS version on SSL connections to this backend.
- `override_host` (String) The hostname to override the Host header
- `port` (Number) The port number on which the Backend responds. Default `80`
- `share_key` (String) Value that when shared across backends will enable those backends to share the same health check.
- `shield` (String) The POP of the shield designated to reduce inbound load. Valid values for `shield` are included in the `GET /datacenters` API response
- `ssl_ca_cert` (String) CA certificate attached to origin.
- `ssl_cert_hostname` (String) Configure certificate validation. Does not affect SNI at all
- `ssl_check_cert` (Boolean) Be strict about checking SSL certs. Default `true`
- `ssl_ciphers` (String) Cipher list consisting of one or more cipher strings separated by colons. Commas or spaces are also acceptable separators but colons are normally used.
- `ssl_client_cert` (String, Sensitive) Client certificate attached to origin. Used when connecting to the backend
- `ssl_client_key` (String, Sensitive) Client key attached to origin. Used when connecting to the backend
- `ssl_sni_hostname` (String) Configure SNI in the TLS handshake. Does not affect cert validation at all
- `use_ssl` (Boolean) Whether or not to use SSL to reach the Backend. Default `false`
- `weight` (Number) The portion of traffic to send to this Backend. Each Backend receives `weight / total` of the traffic. Default `100`
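As a sketch, a `backend` block for a TLS origin might look like the following (the hostname is a placeholder):

```terraform
backend {
  address           = "origin.example.com" # placeholder origin host
  name              = "primary_origin"
  port              = 443
  use_ssl           = true
  ssl_cert_hostname = "origin.example.com" # used for certificate validation
  ssl_sni_hostname  = "origin.example.com" # SNI sent during the TLS handshake
  override_host     = "origin.example.com" # value for the Host header
}
```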
dictionary

Required:

- `name` (String) A unique name to identify this dictionary. It is important to note that changing this attribute will delete and recreate the dictionary, and discard the current items in the dictionary

Optional:

- `force_destroy` (Boolean) Allow the dictionary to be deleted, even if it contains entries. Defaults to `false`.
- `write_only` (Boolean) If `true`, the dictionary is a private dictionary. Default is `false`. Please note that changing this attribute will delete and recreate the dictionary, and discard the current items in the dictionary. The `fastly_service_vcl` resource will only manage the dictionary object itself; items under private dictionaries cannot be managed using the `fastly_service_dictionary_items` resource. Therefore, a write-only/private dictionary should only be used if the items are managed outside of Terraform

Read-Only:

- `dictionary_id` (String) The ID of the dictionary
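A minimal sketch of a `dictionary` block (the name is illustrative):

```terraform
dictionary {
  name          = "feature_flags" # illustrative dictionary name
  force_destroy = true            # allow deletion even if it contains entries
}
```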
logging_bigquery

Required:

- `dataset` (String) The ID of your BigQuery dataset
- `name` (String) A unique name to identify this BigQuery logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `project_id` (String) The ID of your GCP project
- `table` (String) The ID of your BigQuery table

Optional:

- `account_name` (String) The google account name used to obtain temporary credentials (default none). You may optionally provide this via an environment variable, `FASTLY_GCS_ACCOUNT_NAME`.
- `email` (String, Sensitive) The email for the service account with write access to your BigQuery dataset. If not provided, this will be pulled from a `FASTLY_BQ_EMAIL` environment variable
- `secret_key` (String, Sensitive) The secret key associated with the service account that has write access to your BigQuery table. If not provided, this will be pulled from the `FASTLY_BQ_SECRET_KEY` environment variable. Typical format for this is a private key in a string with newlines
- `template` (String) BigQuery table name suffix template
logging_blobstorage

Required:

- `account_name` (String) The unique Azure Blob Storage namespace in which your data objects are stored
- `container` (String) The name of the Azure Blob Storage container in which to store logs
- `name` (String) A unique name to identify the Azure Blob Storage endpoint. It is important to note that changing this attribute will delete and recreate the resource

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `file_max_bytes` (Number) Maximum size of an uploaded log file, if non-zero.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) The path to upload logs to. Must end with a trailing slash. If this field is left empty, the files will be saved in the container's root path
- `period` (Number) How frequently the logs should be transferred in seconds. Default `3600`
- `public_key` (String) A PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `sas_token` (String, Sensitive) The Azure shared access signature providing write access to the blob service objects. Be sure to update your token before it expires or the logging functionality will not work
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
logging_cloudfiles

Required:

- `access_key` (String, Sensitive) Your Cloud Files account access key
- `bucket_name` (String) The name of your Cloud Files container
- `name` (String) The unique name of the Rackspace Cloud Files logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `user` (String) The username for your Cloud Files account

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) The path to upload logs to
- `period` (Number) How frequently log files are finalized so they can be available for reading (in seconds, default `3600`)
- `public_key` (String) The PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `region` (String) The region to stream logs to. One of: DFW (Dallas), ORD (Chicago), IAD (Northern Virginia), LON (London), SYD (Sydney), HKG (Hong Kong)
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
logging_datadog

Required:

- `name` (String) The unique name of the Datadog logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The API key from your Datadog account

Optional:

- `region` (String) The region that log data will be sent to. One of `US` or `EU`. Defaults to `US` if undefined
logging_digitalocean

Required:

- `access_key` (String, Sensitive) Your DigitalOcean Spaces account access key
- `bucket_name` (String) The name of the DigitalOcean Space
- `name` (String) The unique name of the DigitalOcean Spaces logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `secret_key` (String, Sensitive) Your DigitalOcean Spaces account secret key

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `domain` (String) The domain of the DigitalOcean Spaces endpoint (default `nyc3.digitaloceanspaces.com`)
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) The path to upload logs to
- `period` (Number) How frequently log files are finalized so they can be available for reading (in seconds, default `3600`)
- `public_key` (String) A PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
logging_elasticsearch

Required:

- `index` (String) The name of the Elasticsearch index to send documents (logs) to
- `name` (String) The unique name of the Elasticsearch logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `url` (String) The Elasticsearch URL to stream logs to

Optional:

- `password` (String, Sensitive) BasicAuth password for Elasticsearch
- `pipeline` (String) The ID of the Elasticsearch ingest pipeline to apply pre-process transformations to before indexing
- `request_max_bytes` (Number) The maximum number of bytes sent in one request. Defaults to `0` for unbounded
- `request_max_entries` (Number) The maximum number of logs sent in one request. Defaults to `0` for unbounded
- `tls_ca_cert` (String) A secure certificate to authenticate the server with. Must be in PEM format
- `tls_client_cert` (String) The client certificate used to make authenticated requests. Must be in PEM format
- `tls_client_key` (String, Sensitive) The client private key used to make authenticated requests. Must be in PEM format
- `tls_hostname` (String) The hostname used to verify the server's certificate. It can either be the Common Name (CN) or a Subject Alternative Name (SAN)
- `user` (String) BasicAuth username for Elasticsearch
logging_ftp

Required:

- `address` (String) The FTP address to stream logs to
- `name` (String) The unique name of the FTP logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `password` (String, Sensitive) The password for the server (for anonymous use an email address)
- `path` (String) The path to upload log files to. If the path ends in `/` then it is treated as a directory
- `user` (String) The username for the server (can be `anonymous`)

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `period` (Number) How frequently the logs should be transferred, in seconds (Default `3600`)
- `port` (Number) The port number. Default: `21`
- `public_key` (String) The PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
logging_gcs

Required:

- `bucket_name` (String) The name of the bucket in which to store the logs
- `name` (String) A unique name to identify this GCS endpoint. It is important to note that changing this attribute will delete and recreate the resource

Optional:

- `account_name` (String) The google account name used to obtain temporary credentials (default none). You may optionally provide this via an environment variable, `FASTLY_GCS_ACCOUNT_NAME`.
- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) Path to store the files. Must end with a trailing slash. If this field is left empty, the files will be saved in the bucket's root path
- `period` (Number) How frequently the logs should be transferred, in seconds (Default `3600`)
- `project_id` (String) The ID of your Google Cloud Platform project
- `secret_key` (String, Sensitive) The secret key associated with the target gcs bucket on your account. You may optionally provide this secret via an environment variable, `FASTLY_GCS_SECRET_KEY`. A typical format for the key is PEM format, containing actual newline characters where required
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
- `user` (String) Your Google Cloud Platform service account email address. The `client_email` field in your service account authentication JSON. You may optionally provide this via an environment variable, `FASTLY_GCS_EMAIL`.
logging_googlepubsub

Required:

- `name` (String) The unique name of the Google Cloud Pub/Sub logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `project_id` (String) The ID of your Google Cloud Platform project
- `topic` (String) The Google Cloud Pub/Sub topic to which logs will be published

Optional:

- `account_name` (String) The google account name used to obtain temporary credentials (default none). You may optionally provide this via an environment variable, `FASTLY_GCS_ACCOUNT_NAME`.
- `secret_key` (String, Sensitive) Your Google Cloud Platform account secret key. The `private_key` field in your service account authentication JSON. You may optionally provide this secret via an environment variable, `FASTLY_GOOGLE_PUBSUB_SECRET_KEY`.
- `user` (String) Your Google Cloud Platform service account email address. The `client_email` field in your service account authentication JSON. You may optionally provide this via an environment variable, `FASTLY_GOOGLE_PUBSUB_EMAIL`.
logging_heroku

Required:

- `name` (String) The unique name of the Heroku logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The token to use for authentication (https://www.heroku.com/docs/customer-token-authentication-token/)
- `url` (String) The URL to stream logs to
logging_honeycomb

Required:

- `dataset` (String) The Honeycomb Dataset you want to log to
- `name` (String) The unique name of the Honeycomb logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The Write Key from the Account page of your Honeycomb account
logging_https

Required:

- `name` (String) The unique name of the HTTPS logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `url` (String) URL that log data will be sent to. Must use the https protocol

Optional:

- `content_type` (String) Value of the `Content-Type` header sent with the request
- `header_name` (String) Custom header sent with the request
- `header_value` (String) Value of the custom header sent with the request
- `json_format` (String) Formats log entries as JSON. Can be either disabled (`0`), array of json (`1`), or newline delimited json (`2`)
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `method` (String) HTTP method used for request. Can be either `POST` or `PUT`. Default `POST`
- `request_max_bytes` (Number) The maximum number of bytes sent in one request
- `request_max_entries` (Number) The maximum number of logs sent in one request
- `tls_ca_cert` (String) A secure certificate to authenticate the server with. Must be in PEM format
- `tls_client_cert` (String) The client certificate used to make authenticated requests. Must be in PEM format
- `tls_client_key` (String, Sensitive) The client private key used to make authenticated requests. Must be in PEM format
- `tls_hostname` (String) Used during the TLS handshake to validate the certificate
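For example, a sketch of a `logging_https` block that ships newline-delimited JSON to a collector (the URL and credential are placeholders):

```terraform
logging_https {
  name         = "https_logs"
  url          = "https://logs.example.com/ingest" # placeholder collector URL
  method       = "POST"
  json_format  = "2"                               # newline delimited JSON
  header_name  = "Authorization"
  header_value = "Bearer placeholder-token"        # placeholder credential
}
```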
logging_kafka

Required:

- `brokers` (String) A comma-separated list of IP addresses or hostnames of Kafka brokers
- `name` (String) The unique name of the Kafka logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `topic` (String) The Kafka topic to send logs to

Optional:

- `auth_method` (String) SASL authentication method. One of: `plain`, `scram-sha-256`, `scram-sha-512`
- `compression_codec` (String) The codec used for compression of your logs. One of: `gzip`, `snappy`, `lz4`
- `parse_log_keyvals` (Boolean) Enables parsing of key=value tuples from the beginning of a logline, turning them into record headers
- `password` (String, Sensitive) SASL Pass
- `request_max_bytes` (Number) Maximum size of log batch, if non-zero. Defaults to `0` for unbounded
- `required_acks` (String) The number of acknowledgements a leader must receive before a write is considered successful. One of: `1` (default) one server needs to respond, `0` no servers need to respond, `-1` wait for all in-sync replicas to respond
- `tls_ca_cert` (String) A secure certificate to authenticate the server with. Must be in PEM format
- `tls_client_cert` (String) The client certificate used to make authenticated requests. Must be in PEM format
- `tls_client_key` (String, Sensitive) The client private key used to make authenticated requests. Must be in PEM format
- `tls_hostname` (String) The hostname used to verify the server's certificate. It can either be the Common Name or a Subject Alternative Name (SAN)
- `use_tls` (Boolean) Whether to use TLS for secure logging. Can be either `true` or `false`
- `user` (String) SASL User
logging_kinesis

Required:

- `name` (String) The unique name of the Kinesis logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `topic` (String) The Kinesis stream name

Optional:

- `access_key` (String, Sensitive) The AWS access key to be used to write to the stream
- `iam_role` (String) The Amazon Resource Name (ARN) for the IAM role granting Fastly access to Kinesis. Not required if `access_key` and `secret_key` are provided.
- `region` (String) The AWS region the stream resides in. (Default: `us-east-1`)
- `secret_key` (String, Sensitive) The AWS secret access key to authenticate with
logging_logentries

Required:

- `name` (String) The unique name of the Logentries logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String) Use token based authentication (https://logentries.com/doc/input-token/)

Optional:

- `port` (Number) The port number configured in Logentries
- `use_tls` (Boolean) Whether to use TLS for secure logging
logging_loggly

Required:

- `name` (String) The unique name of the Loggly logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The token to use for authentication (https://www.loggly.com/docs/customer-token-authentication-token/).
logging_logshuttle

Required:

- `name` (String) The unique name of the Log Shuttle logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The data authentication token associated with this endpoint
- `url` (String) Your Log Shuttle endpoint URL
logging_newrelic

Required:

- `name` (String) The unique name of the New Relic logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The Insert API key from the Account page of your New Relic account

Optional:

- `region` (String) The region that log data will be sent to. Default: `US`
logging_openstack

Required:

- `access_key` (String, Sensitive) Your OpenStack account access key
- `bucket_name` (String) The name of your OpenStack container
- `name` (String) The unique name of the OpenStack logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `url` (String) Your OpenStack auth url
- `user` (String) The username for your OpenStack account

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) Path to store the files. Must end with a trailing slash. If this field is left empty, the files will be saved in the bucket's root path
- `period` (Number) How frequently the logs should be transferred, in seconds. Default `3600`
- `public_key` (String) A PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
logging_papertrail

Required:

- `address` (String) The address of the Papertrail endpoint
- `name` (String) A unique name to identify this Papertrail endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `port` (Number) The port associated with the address where the Papertrail endpoint can be accessed
logging_s3

Required:

- `bucket_name` (String) The name of the bucket in which to store the logs
- `name` (String) The unique name of the S3 logging endpoint. It is important to note that changing this attribute will delete and recreate the resource

Optional:

- `acl` (String) The AWS Canned ACL to use for objects uploaded to the S3 bucket. Options are: `private`, `public-read`, `public-read-write`, `aws-exec-read`, `authenticated-read`, `bucket-owner-read`, `bucket-owner-full-control`
- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `domain` (String) If you created the S3 bucket outside of `us-east-1`, then specify the corresponding bucket endpoint. Example: `s3-us-west-2.amazonaws.com`
- `file_max_bytes` (Number) Maximum size of an uploaded log file, if non-zero.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `path` (String) Path to store the files. Must end with a trailing slash. If this field is left empty, the files will be saved in the bucket's root path
- `period` (Number) How frequently the logs should be transferred, in seconds. Default `3600`
- `public_key` (String) A PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `redundancy` (String) The S3 storage class (redundancy level). Should be one of: `standard`, `intelligent_tiering`, `standard_ia`, `onezone_ia`, `glacier`, `glacier_ir`, `deep_archive`, or `reduced_redundancy`
- `s3_access_key` (String, Sensitive) AWS Access Key of an account with the required permissions to post logs. It is strongly recommended you create a separate IAM user with permissions to only operate on this Bucket. This key will not be encrypted. Not required if `iam_role` is provided. You can provide this key via an environment variable, `FASTLY_S3_ACCESS_KEY`
- `s3_iam_role` (String) The Amazon Resource Name (ARN) for the IAM role granting Fastly access to S3. Not required if `access_key` and `secret_key` are provided. You can provide this value via an environment variable, `FASTLY_S3_IAM_ROLE`
- `s3_secret_key` (String, Sensitive) AWS Secret Key of an account with the required permissions to post logs. It is strongly recommended you create a separate IAM user with permissions to only operate on this Bucket. This secret will not be encrypted. Not required if `iam_role` is provided. You can provide this secret via an environment variable, `FASTLY_S3_SECRET_KEY`
- `server_side_encryption` (String) Specify what type of server side encryption should be used. Can be either `AES256` or `aws:kms`
- `server_side_encryption_kms_key_id` (String) Optional server-side KMS Key Id. Must be set if `server_side_encryption` is set to `aws:kms`
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
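For instance, a sketch of a `logging_s3` block that authenticates via an IAM role (the bucket name, endpoint, and ARN are placeholders):

```terraform
logging_s3 {
  name        = "s3_logs"
  bucket_name = "my-fastly-logs"                             # placeholder bucket
  domain      = "s3-us-west-2.amazonaws.com"                 # endpoint for a bucket outside us-east-1
  s3_iam_role = "arn:aws:iam::123456789012:role/fastly-logs" # placeholder role ARN
  path        = "compute/"                                   # must end with a trailing slash
  period      = 300                                          # transfer logs every 5 minutes
}
```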
logging_scalyr

Required:

- `name` (String) The unique name of the Scalyr logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `token` (String, Sensitive) The token to use for authentication (https://www.scalyr.com/keys)

Optional:

- `project_id` (String) The name of the logfile field sent to Scalyr
- `region` (String) The region that log data will be sent to. One of `US` or `EU`. Defaults to `US` if undefined
logging_sftp

Required:

- `address` (String) The SFTP address to stream logs to
- `name` (String) The unique name of the SFTP logging endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `path` (String) The path to upload log files to. If the path ends in `/` then it is treated as a directory
- `ssh_known_hosts` (String) A list of host keys for all hosts we can connect to over SFTP
- `user` (String) The username for the server

Optional:

- `compression_codec` (String) The codec used for compression of your logs. Valid values are `zstd`, `snappy`, and `gzip`. If the specified codec is `gzip`, `gzip_level` will default to `3`. To specify a different level, leave `compression_codec` blank and explicitly set the level using `gzip_level`. Specifying both `compression_codec` and `gzip_level` in the same API request will result in an error.
- `gzip_level` (Number) Level of Gzip compression from `0-9`. `0` means no compression. `1` is the fastest and the least compressed version, `9` is the slowest and the most compressed version. Default `0`
- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `password` (String, Sensitive) The password for the server. If both `password` and `secret_key` are passed, `secret_key` will be preferred
- `period` (Number) How frequently log files are finalized so they can be available for reading (in seconds, default `3600`)
- `port` (Number) The port the SFTP service listens on. (Default: `22`)
- `public_key` (String) A PGP public key that Fastly will use to encrypt your log files before writing them to disk
- `secret_key` (String, Sensitive) The SSH private key for the server. If both `password` and `secret_key` are passed, `secret_key` will be preferred
- `timestamp_format` (String) The `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)
)logging_splunk
Required:
name
(String) A unique name to identify the Splunk endpoint. It is important to note that changing this attribute will delete and recreate the resourcetoken
(String, Sensitive) The Splunk token to be used for authenticationurl
(String) The Splunk URL to stream logs toOptional:
tls_ca_cert
(String) A secure certificate to authenticate the server with. Must be in PEM format. You can provide this certificate via an environment variable, FASTLY_SPLUNK_CA_CERT
tls_client_cert
(String) The client certificate used to make authenticated requests. Must be in PEM format.tls_client_key
(String, Sensitive) The client private key used to make authenticated requests. Must be in PEM format.tls_hostname
(String) The hostname used to verify the server's certificate. It can either be the Common Name or a Subject Alternative Name (SAN)use_tls
(Boolean) Whether to use TLS for secure logging. Default: false
logging_sumologic

Required:

- `name` (String) A unique name to identify this Sumologic endpoint. It is important to note that changing this attribute will delete and recreate the resource
- `url` (String) The URL to Sumologic collector endpoint

Optional:

- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
logging_syslog

Required:

- `address` (String) A hostname or IPv4 address of the Syslog endpoint
- `name` (String) A unique name to identify this Syslog endpoint. It is important to note that changing this attribute will delete and recreate the resource

Optional:

- `message_type` (String) How the message should be formatted. Can be either `classic`, `loggly`, `logplex` or `blank`. Default is `classic`
- `port` (Number) The port associated with the address where the Syslog endpoint can be accessed. Default `514`
- `tls_ca_cert` (String) A secure certificate to authenticate the server with. Must be in PEM format. You can provide this certificate via an environment variable, `FASTLY_SYSLOG_CA_CERT`
- `tls_client_cert` (String) The client certificate used to make authenticated requests. Must be in PEM format. You can provide this certificate via an environment variable, `FASTLY_SYSLOG_CLIENT_CERT`
- `tls_client_key` (String, Sensitive) The client private key used to make authenticated requests. Must be in PEM format. You can provide this key via an environment variable, `FASTLY_SYSLOG_CLIENT_KEY`
- `tls_hostname` (String) Used during the TLS handshake to validate the certificate
- `token` (String) Whether to prepend each message with a specific token
- `use_tls` (Boolean) Whether to use TLS for secure logging. Default `false`
package

Optional:

- `content` (String) The contents of the Wasm deployment package as a base64 encoded string (e.g. could be provided using an input variable or via external data source output variable). Conflicts with `filename`. Exactly one of these two arguments must be specified
- `filename` (String) The path to the Wasm deployment package within your local filesystem. Conflicts with `content`. Exactly one of these two arguments must be specified
- `source_code_hash` (String) Used to trigger updates. Must be set to a SHA512 hash of all files (in sorted order) within the package. The usual way to set this is using the `fastly_package_hash` data source.
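As a sketch of the `content` alternative to `filename`, using Terraform's built-in `filebase64` function to supply the package as a base64-encoded string:

```terraform
package {
  content          = filebase64("./path/to/package.tar.gz") # base64-encode the local package
  source_code_hash = data.fastly_package_hash.example.hash
}
```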
product_enablement

Optional:

- `fanout` (Boolean) Enable Fanout support
- `name` (String) Used by the provider to identify modified settings (changing this value will force the entire block to be deleted, then recreated)
- `websockets` (Boolean) Enable WebSockets support
resource_link

Required:

- `name` (String) The name of the resource link.
- `resource_id` (String) The ID of the underlying linked resource.

Read-Only:

- `link_id` (String) An alphanumeric string identifying the resource link.
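To illustrate, a sketch linking a KV Store to the service via `resource_link` (this assumes the `fastly_kvstore` resource type available in recent provider versions; names are placeholders):

```terraform
resource "fastly_kvstore" "example" {
  name = "example_store" # placeholder store name
}

resource "fastly_service_compute" "example" {
  # ... domain, package, and other configuration ...

  resource_link {
    name        = "my_resource_link"
    resource_id = fastly_kvstore.example.id # ID of the shared resource to link
  }
}
```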