Provides a resource which manages Cloudflare Logpush jobs. For Logpush jobs pushing to Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic, this resource cannot be automatically created. In order to have this automated, you must have:

- `cloudflare_logpush_ownership_challenge`: configured to generate the challenge to confirm ownership of the destination.
- Either manual inspection or another Terraform provider to get the contents of the `ownership_challenge_filename` value from the `cloudflare_logpush_ownership_challenge` resource.
- `cloudflare_logpush_job`: create and manage the Logpush job itself.

# Example Usage (Cloudflare R2)

When using Cloudflare R2, no ownership challenge is required.
data "cloudflare_api_token_permission_groups" "all" {}
resource "cloudflare_api_token" "logpush_r2_token" {
name = "logpush_r2_token"
policy {
permission_groups = [
data.cloudflare_api_token_permission_groups.all.account["Workers R2 Storage Write"],
]
resources = {
"com.cloudflare.api.account.*" = "*"
}
}
}
resource "cloudflare_logpush_job" "http_requests" {
enabled = true
zone_id = var.zone_id
name = "http_requests"
logpull_options = "fields=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID×tamps=rfc3339"
destination_conf = "r2://cloudflare-logs/http_requests/date={DATE}?account-id=${var.account_id}&access-key-id=${cloudflare_api_token.logpush_r2_token.id}&secret-access-key=${sha256(cloudflare_api_token.logpush_r2_token.value)}"
dataset = "http_requests"
}
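The job above assumes `var.zone_id` and `var.account_id` are declared elsewhere in the configuration; a minimal sketch of those declarations (names taken from the example, descriptions illustrative) might look like:

```hcl
# Assumed variable declarations backing the R2 example above.
variable "zone_id" {
  type        = string
  description = "Zone whose logs the Logpush job will push."
}

variable "account_id" {
  type        = string
  description = "Account that owns the R2 bucket referenced in destination_conf."
}
```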
# Example Usage (with AWS provider)

Please see `cloudflare_logpush_ownership_challenge` for how to use that resource, and the third-party provider documentation if you choose to automate the intermediate step of fetching the ownership challenge contents.

**Important:** If you're using this approach, the `destination_conf` values must match identically in all resources. Otherwise the challenge validation will fail (a `locals` pattern that avoids this drift is sketched after the example below).
resource "cloudflare_logpush_ownership_challenge" "ownership_challenge" {
zone_id = "0da42c8d2132a9ddaf714f9e7c920711"
destination_conf = "s3://my-bucket-path?region=us-west-2"
}
data "aws_s3_bucket_object" "challenge_file" {
bucket = "my-bucket-path"
key = cloudflare_logpush_ownership_challenge.ownership_challenge.ownership_challenge_filename
}
resource "cloudflare_logpush_job" "example_job" {
enabled = true
zone_id = "0da42c8d2132a9ddaf714f9e7c920711"
name = "My-logpush-job"
logpull_options = "fields=RayID,ClientIP,EdgeStartTimestamp×tamps=rfc3339"
destination_conf = "s3://my-bucket-path?region=us-west-2"
ownership_challenge = data.aws_s3_bucket_object.challenge_file.body
dataset = "http_requests"
}
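Since the challenge validation requires byte-identical `destination_conf` strings, one way to keep the two values from drifting apart (a sketch, not something the provider requires) is to factor the string into a local value:

```hcl
# Hypothetical refactor of the example above: a single local guarantees
# the ownership challenge and the job share one destination string.
locals {
  logpush_destination = "s3://my-bucket-path?region=us-west-2"
}

resource "cloudflare_logpush_ownership_challenge" "ownership_challenge" {
  zone_id          = "0da42c8d2132a9ddaf714f9e7c920711"
  destination_conf = local.logpush_destination
}

resource "cloudflare_logpush_job" "example_job" {
  enabled             = true
  zone_id             = "0da42c8d2132a9ddaf714f9e7c920711"
  name                = "My-logpush-job"
  logpull_options     = "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339"
  destination_conf    = local.logpush_destination
  ownership_challenge = data.aws_s3_bucket_object.challenge_file.body
  dataset             = "http_requests"
}
```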
# Example Usage (manual inspection of S3 bucket)

1. Create the `cloudflare_logpush_ownership_challenge` resource:

```hcl
resource "cloudflare_logpush_ownership_challenge" "ownership_challenge" {
  zone_id          = "0da42c8d2132a9ddaf714f9e7c920711"
  destination_conf = "s3://my-bucket-path?region=us-west-2"
}
```

2. Check the S3 bucket for your ownership challenge filename and grab its contents.
3. Create the `cloudflare_logpush_job`, substituting in your manual `ownership_challenge`:

```hcl
resource "cloudflare_logpush_job" "example_job" {
  enabled             = true
  zone_id             = "0da42c8d2132a9ddaf714f9e7c920711"
  name                = "My-logpush-job"
  logpull_options     = "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339"
  destination_conf    = "s3://my-bucket-path?region=us-west-2"
  ownership_challenge = "0000000000000"
  dataset             = "http_requests"
  frequency           = "high"
}
```
# Schema

- `dataset` (String) The kind of dataset to use with the logpush job. Available values: `access_requests`, `casb_findings`, `firewall_events`, `http_requests`, `spectrum_events`, `nel_reports`, `audit_logs`, `gateway_dns`, `gateway_http`, `gateway_network`, `dns_logs`, `network_analytics_logs`, `workers_trace_events`, `device_posture_results`, `zero_trust_network_sessions`, `magic_ids_detections`, `page_shield_events`.
- `destination_conf` (String) Uniquely identifies a resource (such as an S3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation.
- `account_id` (String) The account identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
- `enabled` (Boolean) Whether to enable the job.
- `filter` (String) Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
- `frequency` (String) A higher frequency will result in logs being pushed more often, with smaller files; `low` frequency will push logs less often, with larger files. Available values: `high`, `low`. Defaults to `high`.
- `kind` (String) The kind of logpush job to create. Available values: `edge`, `instant-logs`, `""`.
- `logpull_options` (String) Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See Logpush options documentation.
- `max_upload_bytes` (Number) The maximum uncompressed file size of a batch of logs. Value must be between 5 MB and 1 GB.
- `max_upload_interval_seconds` (Number) The maximum interval in seconds for log batches. Value must be between 30 and 300.
- `max_upload_records` (Number) The maximum number of log lines per batch. Value must be between 1,000 and 1,000,000.
- `name` (String) The name of the logpush job to create.
- `output_options` (Block List, Max: 1) Structured replacement for `logpull_options`. When this field is included, the `logpull_options` field will be ignored. (See the nested schema below.)
- `ownership_challenge` (String) Ownership challenge token to prove destination ownership, required when the destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See Developer documentation.
- `zone_id` (String) The zone identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
- `id` (String) The ID of this resource.

# Nested schema for `output_options`
Optional:
- `batch_prefix` (String) String to be prepended before each batch.
- `batch_suffix` (String) String to be appended after each batch.
- `cve20214428` (Boolean) Mitigation for CVE-2021-44228. If set to `true`, will cause all occurrences of `${` in the generated files to be replaced with `x{`. Defaults to `false`.
- `field_delimiter` (String) String used to join fields. This field will be ignored when `record_template` is set. Defaults to `,`.
- `field_names` (List of String) List of field names to be included in the Logpush output.
- `output_type` (String) Specifies the output type. Available values: `ndjson`, `csv`. Defaults to `ndjson`.
- `record_delimiter` (String) String to be inserted between records as a separator.
- `record_prefix` (String) String to be prepended before each record. Defaults to `{`.
- `record_suffix` (String) String to be appended after each record. Defaults to `}`.
- `record_template` (String) String to use as a template for each record instead of the default comma-separated list.
- `sample_rate` (Number) Specifies the sampling rate. Defaults to `1`.
- `timestamp_format` (String) Specifies the format for timestamps. Available values: `unixnano`, `unix`, `rfc3339`. Defaults to `unixnano`.
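None of the examples above exercise `filter` or `output_options`; the sketch below shows one plausible shape for each (the resource name, filter expression, and field list are illustrative only, not values from this documentation):

```hcl
resource "cloudflare_logpush_job" "filtered_csv" {
  enabled             = true
  zone_id             = "0da42c8d2132a9ddaf714f9e7c920711"
  name                = "filtered-csv-job"
  dataset             = "http_requests"
  destination_conf    = "s3://my-bucket-path?region=us-west-2"
  # S3 destinations still need an ownership challenge; placeholder as in
  # the manual example above.
  ownership_challenge = "0000000000000"

  # Illustrative filter: only push events for a single host.
  filter = jsonencode({
    where = {
      and = [
        { key = "ClientRequestHost", operator = "eq", value = "example.com" }
      ]
    }
  })

  # output_options is the structured replacement for logpull_options;
  # logpull_options would be ignored if both were set.
  output_options {
    output_type      = "csv"
    field_names      = ["RayID", "ClientIP", "EdgeStartTimestamp"]
    timestamp_format = "rfc3339"
  }
}
```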
# Import

Import is supported using the following syntax:
```sh
# Import an account-scoped job.
$ terraform import cloudflare_logpush_job.example account/<account_id>/<job_id>

# Import a zone-scoped job.
$ terraform import cloudflare_logpush_job.example zone/<zone_id>/<job_id>
```
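On Terraform v1.5 and later, the same import can alternatively be expressed in configuration rather than on the CLI; a minimal sketch using the zone-scoped ID format above:

```hcl
# Config-driven import (Terraform >= 1.5). The ID follows the same
# zone/<zone_id>/<job_id> format shown above; run `terraform plan`
# to review the import before applying.
import {
  to = cloudflare_logpush_job.example
  id = "zone/<zone_id>/<job_id>"
}
```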