Provides an AppFlow flow resource.
resource "aws_s3_bucket" "example_source" {
bucket = "example-source"
}
data "aws_iam_policy_document" "example_source" {
statement {
sid = "AllowAppFlowSourceActions"
effect = "Allow"
principals {
type = "Service"
identifiers = ["appflow.amazonaws.com"]
}
actions = [
"s3:ListBucket",
"s3:GetObject",
]
resources = [
"arn:aws:s3:::example-source",
"arn:aws:s3:::example-source/*",
]
}
}
resource "aws_s3_bucket_policy" "example_source" {
bucket = aws_s3_bucket.example_source.id
policy = data.aws_iam_policy_document.example_source.json
}
resource "aws_s3_object" "example" {
bucket = aws_s3_bucket.example_source.id
key = "example_source.csv"
source = "example_source.csv"
}
resource "aws_s3_bucket" "example_destination" {
bucket = "example-destination"
}
data "aws_iam_policy_document" "example_destination" {
statement {
sid = "AllowAppFlowDestinationActions"
effect = "Allow"
principals {
type = "Service"
identifiers = ["appflow.amazonaws.com"]
}
actions = [
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"s3:ListBucketMultipartUploads",
"s3:GetBucketAcl",
"s3:PutObjectAcl",
]
resources = [
"arn:aws:s3:::example-destination",
"arn:aws:s3:::example-destination/*",
]
}
}
resource "aws_s3_bucket_policy" "example_destination" {
bucket = aws_s3_bucket.example_destination.id
policy = data.aws_iam_policy_document.example_destination.json
}
resource "aws_appflow_flow" "example" {
name = "example"
source_flow_config {
connector_type = "S3"
source_connector_properties {
s3 {
bucket_name = aws_s3_bucket_policy.example_source.bucket
bucket_prefix = "example"
}
}
}
destination_flow_config {
connector_type = "S3"
destination_connector_properties {
s3 {
bucket_name = aws_s3_bucket_policy.example_destination.bucket
s3_output_format_config {
prefix_config {
prefix_type = "PATH"
}
}
}
}
}
task {
source_fields = ["exampleField"]
destination_field = "exampleField"
task_type = "Map"
connector_operator {
s3 = "NO_OP"
}
}
trigger_config {
trigger_type = "OnDemand"
}
}
This resource supports the following arguments:

name - (Required) Name of the flow.
destination_flow_config - (Required) A Destination Flow Config that controls how Amazon AppFlow places data in the destination connector.
source_flow_config - (Required) The Source Flow Config that controls how Amazon AppFlow retrieves data from the source connector.
task - (Required) A Task that Amazon AppFlow performs while transferring the data in the flow run.
trigger_config - (Required) A Trigger that determines how and when the flow runs.
description - (Optional) Description of the flow you want to create.
kms_arn - (Optional) ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
tags - (Optional) Key-value mapping of resource tags. If configured with a provider default_tags configuration block present, tags with matching keys will overwrite those defined at the provider-level.
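A minimal sketch of the optional top-level arguments alongside the required blocks; the KMS key reference and tag values are illustrative and not part of the example above:

resource "aws_appflow_flow" "example" {
  name        = "example"
  description = "Transfers example_source.csv from the source bucket to the destination bucket."
  kms_arn     = aws_kms_key.example.arn # hypothetical customer-managed key; omit to use the AppFlow-managed key

  tags = {
    Environment = "dev"
  }

  # ... source_flow_config, destination_flow_config, task, and trigger_config as shown above ...
}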
The destination_flow_config block supports the following:

connector_type - (Required) Type of connector, such as Salesforce, Amplitude, and so on. Valid values are Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData, and CustomConnector.
destination_connector_properties - (Required) This stores the information that is required to query a particular connector. See Destination Connector Properties for more information.
api_version - (Optional) API version that the destination connector uses.
connector_profile_name - (Optional) Name of the connector profile. This name must be unique for each connector profile in the AWS account.
The destination_connector_properties block supports the following:

custom_connector - (Optional) Properties that are required to query the custom connector. See Custom Connector Destination Properties for more details.
customer_profiles - (Optional) Properties that are required to query Amazon Connect Customer Profiles. See Customer Profiles Destination Properties for more details.
event_bridge - (Optional) Properties that are required to query Amazon EventBridge. See Generic Destination Properties for more details.
honeycode - (Optional) Properties that are required to query Amazon Honeycode. See Generic Destination Properties for more details.
marketo - (Optional) Properties that are required to query Marketo. See Generic Destination Properties for more details.
redshift - (Optional) Properties that are required to query Amazon Redshift. See Redshift Destination Properties for more details.
s3 - (Optional) Properties that are required to query Amazon S3. See S3 Destination Properties for more details.
salesforce - (Optional) Properties that are required to query Salesforce. See Salesforce Destination Properties for more details.
sapo_data - (Optional) Properties that are required to query SAPOData. See SAPOData Destination Properties for more details.
snowflake - (Optional) Properties that are required to query Snowflake. See Snowflake Destination Properties for more details.
upsolver - (Optional) Properties that are required to query Upsolver. See Upsolver Destination Properties for more details.
zendesk - (Optional) Properties that are required to query Zendesk. See Zendesk Destination Properties for more details.

EventBridge, Honeycode, and Marketo destination properties (Generic Destination Properties) all support the following attributes:
object - (Required) Object specified in the flow destination.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
The Custom Connector Destination Properties block (custom_connector) supports the following:

entity_name - (Required) Entity specified in the custom connector as a destination in the flow.
custom_properties - (Optional) Custom properties that are specific to the connector when it's used as a destination in the flow. Maximum of 50 items.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination. See Error Handling Config for more details.
id_field_names - (Optional) Name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.
write_operation_type - (Optional) Type of write operation to be performed in the custom connector when it's used as destination. Valid values are INSERT, UPSERT, UPDATE, and DELETE.
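A sketch of a custom connector destination; the connector profile, entity name, and custom properties are placeholders, not part of the example above:

resource "aws_appflow_flow" "example_custom" {
  # ... name, source_flow_config, task, and trigger_config ...

  destination_flow_config {
    connector_type         = "CustomConnector"
    connector_profile_name = "example-custom-profile" # hypothetical aws_appflow_connector_profile name

    destination_connector_properties {
      custom_connector {
        entity_name          = "Account" # hypothetical entity exposed by the connector
        write_operation_type = "UPSERT"
        id_field_names       = ["id"]

        custom_properties = {
          exampleProperty = "exampleValue" # hypothetical connector-specific property
        }
      }
    }
  }
}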
The Customer Profiles Destination Properties block (customer_profiles) supports the following:

domain_name - (Required) Unique name of the Amazon Connect Customer Profiles domain.
object_type_name - (Optional) Object specified in the Amazon Connect Customer Profiles flow destination.
The Redshift Destination Properties block (redshift) supports the following:

intermediate_bucket_name - (Required) Intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
object - (Required) Object specified in the Amazon Redshift flow destination.
bucket_prefix - (Optional) Object key for the bucket in which Amazon AppFlow places the destination files.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
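A sketch of a Redshift destination, assuming an existing Redshift connector profile and staging bucket; all names are placeholders:

resource "aws_appflow_flow" "example_redshift" {
  # ... name, source_flow_config, task, and trigger_config ...

  destination_flow_config {
    connector_type         = "Redshift"
    connector_profile_name = "example-redshift-profile" # hypothetical aws_appflow_connector_profile name

    destination_connector_properties {
      redshift {
        object                   = "public.example_table"        # hypothetical target table
        intermediate_bucket_name = "example-intermediate-bucket"  # hypothetical staging bucket

        error_handling_config {
          fail_on_first_destination_error = true
        }
      }
    }
  }
}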
The S3 Destination Properties block (s3) supports the following:

bucket_name - (Required) Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
bucket_prefix - (Optional) Object key for the bucket in which Amazon AppFlow places the destination files.
s3_output_format_config - (Optional) Configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination. See S3 Output Format Config for more details.

The S3 Output Format Config block (s3_output_format_config) supports the following:

aggregation_config - (Optional) Aggregation settings that you can use to customize the output format of your flow data. See Aggregation Config for more details.
file_type - (Optional) File type that Amazon AppFlow places in the Amazon S3 bucket. Valid values are CSV, JSON, and PARQUET.
prefix_config - (Optional) Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date. See Prefix Config for more details.
preserve_source_data_typing - (Optional, Boolean) Whether the data types from the source system need to be preserved (only valid for the PARQUET file type).
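A sketch of an S3 destination that writes aggregated Parquet output, reusing the destination bucket from the example above; the prefix and aggregation settings are illustrative:

resource "aws_appflow_flow" "example_parquet" {
  # ... name, source_flow_config, task, and trigger_config ...

  destination_flow_config {
    connector_type = "S3"

    destination_connector_properties {
      s3 {
        bucket_name   = aws_s3_bucket_policy.example_destination.bucket
        bucket_prefix = "output"

        s3_output_format_config {
          file_type                   = "PARQUET"
          preserve_source_data_typing = true # only meaningful for PARQUET output

          aggregation_config {
            aggregation_type = "SingleFile"
          }

          prefix_config {
            prefix_type   = "PATH_AND_FILENAME"
            prefix_format = "DAY"
          }
        }
      }
    }
  }
}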
The Salesforce Destination Properties block (salesforce) supports the following:

object - (Required) Object specified in the flow destination.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
id_field_names - (Optional) Name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
write_operation_type - (Optional) This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then id_field_names is required. Valid values are INSERT, UPSERT, UPDATE, and DELETE.
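For example, an UPSERT write to Salesforce must name the ID field(s). A sketch, assuming an existing Salesforce connector profile; the profile, object, and field names are placeholders:

resource "aws_appflow_flow" "example_salesforce_dest" {
  # ... name, source_flow_config, task, and trigger_config ...

  destination_flow_config {
    connector_type         = "Salesforce"
    connector_profile_name = "example-salesforce-profile" # hypothetical aws_appflow_connector_profile name

    destination_connector_properties {
      salesforce {
        object               = "Contact"
        write_operation_type = "UPSERT"
        id_field_names       = ["Email"] # required because write_operation_type is UPSERT
      }
    }
  }
}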
The SAPOData Destination Properties block (sapo_data) supports the following:

object_path - (Required) Object path specified in the SAPOData flow destination.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
id_field_names - (Optional) Name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
success_response_handling_config - (Optional) Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data. See Success Response Handling Config for more details.
write_operation - (Optional) Possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation. Valid values are INSERT, UPSERT, UPDATE, and DELETE.
The Success Response Handling Config block (success_response_handling_config) supports the following:

bucket_name - (Optional) Name of the Amazon S3 bucket.
bucket_prefix - (Optional) Amazon S3 bucket prefix.
The Snowflake Destination Properties block (snowflake) supports the following:

intermediate_bucket_name - (Required) Intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
object - (Required) Object specified in the Snowflake flow destination.
bucket_prefix - (Optional) Object key for the bucket in which Amazon AppFlow places the destination files.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
The Upsolver Destination Properties block (upsolver) supports the following:

bucket_name - (Required) Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data. This must begin with upsolver-appflow.
bucket_prefix - (Optional) Object key for the Upsolver Amazon S3 bucket in which Amazon AppFlow places the destination files.
s3_output_format_config - (Optional) Configuration that determines how Amazon AppFlow should format the flow output data when Upsolver is used as the destination. See Upsolver S3 Output Format Config for more details.
The Upsolver S3 Output Format Config block (s3_output_format_config) supports the following:

aggregation_config - (Optional) Aggregation settings that you can use to customize the output format of your flow data. See Aggregation Config for more details.
file_type - (Optional) File type that Amazon AppFlow places in the Upsolver Amazon S3 bucket. Valid values are CSV, JSON, and PARQUET.
prefix_config - (Optional) Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date. See Prefix Config for more details.
The Aggregation Config block (aggregation_config) supports the following:

aggregation_type - (Optional) Whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated. Valid values are None and SingleFile.
target_file_size - (Optional) Desired file size, in MB, for each output file that Amazon AppFlow writes to the flow destination. Integer value.
The Prefix Config block (prefix_config) supports the following:

prefix_format - (Optional) Determines the level of granularity that's included in the prefix. Valid values are YEAR, MONTH, DAY, HOUR, and MINUTE.
prefix_type - (Optional) Determines the format of the prefix, and whether it applies to the file name, file path, or both. Valid values are FILENAME, PATH, and PATH_AND_FILENAME.
The Zendesk Destination Properties block (zendesk) supports the following:

object - (Required) Object specified in the flow destination.
error_handling_config - (Optional) Settings that determine how Amazon AppFlow handles an error when placing data in the destination. See Error Handling Config for more details.
id_field_names - (Optional) Name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
write_operation_type - (Optional) This specifies the type of write operation to be performed in Zendesk. When the value is UPSERT, then id_field_names is required. Valid values are INSERT, UPSERT, UPDATE, and DELETE.
The Error Handling Config block (error_handling_config) supports the following:

bucket_name - (Optional) Name of the Amazon S3 bucket.
bucket_prefix - (Optional) Amazon S3 bucket prefix.
fail_on_first_destination_error - (Optional, boolean) Whether the flow should fail after the first instance of a failure when attempting to place data in the destination.
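A sketch showing error handling on an S3 destination; the error bucket name and prefix are placeholders for a bucket you manage separately:

resource "aws_appflow_flow" "example_with_errors" {
  # ... name, source_flow_config, task, and trigger_config ...

  destination_flow_config {
    connector_type = "S3"

    destination_connector_properties {
      s3 {
        bucket_name = aws_s3_bucket_policy.example_destination.bucket

        error_handling_config {
          bucket_name                     = "example-flow-errors" # hypothetical bucket for failed records
          bucket_prefix                   = "errors"
          fail_on_first_destination_error = false
        }
      }
    }
  }
}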
The source_flow_config block supports the following:

connector_type - (Required) Type of connector, such as Salesforce, Amplitude, and so on. Valid values are Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData, and CustomConnector.
source_connector_properties - (Required) Information that is required to query a particular source connector. See Source Connector Properties for details.
api_version - (Optional) API version that the source connector uses.
connector_profile_name - (Optional) Name of the connector profile. This name must be unique for each connector profile in the AWS account.
incremental_pull_config - (Optional) Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull. See Incremental Pull Config for more details.
The source_connector_properties block supports the following:

amplitude - (Optional) Information that is required for querying Amplitude. See Generic Source Properties for more details.
custom_connector - (Optional) Properties that are applied when the custom connector is being used as a source. See Custom Connector Source Properties for more details.
datadog - (Optional) Information that is required for querying Datadog. See Generic Source Properties for more details.
dynatrace - (Optional) Information that is required for querying Dynatrace. See Generic Source Properties for more details.
infor_nexus - (Optional) Information that is required for querying Infor Nexus. See Generic Source Properties for more details.
marketo - (Optional) Information that is required for querying Marketo. See Generic Source Properties for more details.
s3 - (Optional) Information that is required for querying Amazon S3. See S3 Source Properties for more details.
salesforce - (Optional) Information that is required for querying Salesforce. See Salesforce Source Properties for more details.
sapo_data - (Optional) Information that is required for querying SAPOData as a flow source. See SAPO Source Properties for more details.
service_now - (Optional) Information that is required for querying ServiceNow. See Generic Source Properties for more details.
singular - (Optional) Information that is required for querying Singular. See Generic Source Properties for more details.
slack - (Optional) Information that is required for querying Slack. See Generic Source Properties for more details.
trend_micro - (Optional) Information that is required for querying Trend Micro. See Generic Source Properties for more details.
veeva - (Optional) Information that is required for querying Veeva. See Veeva Source Properties for more details.
zendesk - (Optional) Information that is required for querying Zendesk. See Generic Source Properties for more details.

Amplitude, Datadog, Dynatrace, Google Analytics, Infor Nexus, Marketo, ServiceNow, Singular, Slack, Trend Micro, and Zendesk source properties (Generic Source Properties) all support the following attributes:
object - (Required) Object specified in the flow source.
The Custom Connector Source Properties block (custom_connector) supports the following:

entity_name - (Required) Entity specified in the custom connector as a source in the flow.
custom_properties - (Optional) Custom properties that are specific to the connector when it's used as a source in the flow. Maximum of 50 items.
The S3 Source Properties block (s3) supports the following:

bucket_name - (Required) Amazon S3 bucket name where the source files are stored.
bucket_prefix - (Optional) Object key for the Amazon S3 bucket in which the source files are stored.
s3_input_format_config - (Optional) Format of the flow input data when Amazon S3 is used as the source. See S3 Input Format Config for details.

The S3 Input Format Config block (s3_input_format_config) supports the following:

s3_input_file_type - (Optional) File type that Amazon AppFlow gets from your Amazon S3 bucket. Valid values are CSV and JSON.
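A sketch extending the S3 source from the example above to declare the input file type explicitly:

resource "aws_appflow_flow" "example_s3_source" {
  # ... name, destination_flow_config, task, and trigger_config ...

  source_flow_config {
    connector_type = "S3"

    source_connector_properties {
      s3 {
        bucket_name   = aws_s3_bucket_policy.example_source.bucket
        bucket_prefix = "example"

        s3_input_format_config {
          s3_input_file_type = "CSV"
        }
      }
    }
  }
}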
The Salesforce Source Properties block (salesforce) supports the following:

object - (Required) Object specified in the Salesforce flow source.
enable_dynamic_field_update - (Optional, boolean) Flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
include_deleted_records - (Optional, boolean) Whether Amazon AppFlow includes deleted files in the flow run.
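A sketch of a Salesforce source, assuming an existing Salesforce connector profile; the profile and object names are placeholders:

resource "aws_appflow_flow" "example_salesforce_source" {
  # ... name, destination_flow_config, task, and trigger_config ...

  source_flow_config {
    connector_type         = "Salesforce"
    connector_profile_name = "example-salesforce-profile" # hypothetical aws_appflow_connector_profile name

    source_connector_properties {
      salesforce {
        object                      = "Account"
        enable_dynamic_field_update = true
        include_deleted_records     = false
      }
    }
  }
}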
The SAPO Source Properties block (sapo_data) supports the following:

object_path - (Required) Object path specified in the SAPOData flow source.
The Veeva Source Properties block (veeva) supports the following:

object - (Required) Object specified in the Veeva flow source.
document_type - (Optional) Document type specified in the Veeva document extract flow.
include_all_versions - (Optional, boolean) Whether to include all versions of files in the Veeva document extract flow.
include_renditions - (Optional, boolean) Whether to include file renditions in the Veeva document extract flow.
include_source_files - (Optional, boolean) Whether to include source files in the Veeva document extract flow.
The Incremental Pull Config block (incremental_pull_config) supports the following:

datetime_type_field_name - (Optional) Field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.
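For a scheduled flow that pulls only new records, the incremental pull configuration sits alongside the source connector properties. A sketch, assuming the source object exposes a timestamp field; the connector profile and field names are placeholders:

resource "aws_appflow_flow" "example_incremental" {
  # ... name, destination_flow_config, task, and a Scheduled trigger_config ...

  source_flow_config {
    connector_type         = "Salesforce"
    connector_profile_name = "example-salesforce-profile" # hypothetical connector profile

    incremental_pull_config {
      datetime_type_field_name = "LastModifiedDate" # hypothetical timestamp field on the source object
    }

    source_connector_properties {
      salesforce {
        object = "Account"
      }
    }
  }
}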
The task block supports the following:

source_fields - (Required) Source fields to which a particular task is applied.
task_type - (Required) Particular task implementation that Amazon AppFlow performs. Valid values are Arithmetic, Filter, Map, Map_all, Mask, Merge, Passthrough, Truncate, and Validate.
connector_operator - (Optional) Operation to be performed on the provided source fields. See Connector Operator for details.
destination_field - (Optional) Field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
task_properties - (Optional) Map used to store task-related information. The execution service looks for particular information based on the TaskType. Valid keys are VALUE, VALUES, DATA_TYPE, UPPER_BOUND, LOWER_BOUND, SOURCE_DATA_TYPE, DESTINATION_DATA_TYPE, VALIDATION_ACTION, MASK_VALUE, MASK_LENGTH, TRUNCATE_LENGTH, MATH_OPERATION_FIELDS_ORDER, CONCAT_FORMAT, SUBFIELD_CATEGORY_MAP, and EXCLUDE_SOURCE_FIELDS_LIST.
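Tasks can be combined; a common pattern is a Filter task that restricts records followed by Map tasks for individual fields. A sketch for an S3 source; the field name, comparison value, and the use of the VALUE task property are illustrative assumptions:

resource "aws_appflow_flow" "example_tasks" {
  # ... name, source_flow_config, destination_flow_config, and trigger_config ...

  # Filter records where the source field exceeds a threshold (hypothetical field and value).
  task {
    task_type     = "Filter"
    source_fields = ["amount"]

    connector_operator {
      s3 = "GREATER_THAN"
    }

    task_properties = {
      VALUE = "100" # hypothetical comparison value for the filter
    }
  }

  # Map the field through to the destination unchanged.
  task {
    task_type         = "Map"
    source_fields     = ["amount"]
    destination_field = "amount"

    connector_operator {
      s3 = "NO_OP"
    }
  }
}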
The connector_operator block supports the following:

amplitude - (Optional) Operation to be performed on the provided Amplitude source fields. The only valid value is BETWEEN.
custom_connector - (Optional) Operators supported by the custom connector. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
datadog - (Optional) Operation to be performed on the provided Datadog source fields. Valid values are PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
dynatrace - (Optional) Operation to be performed on the provided Dynatrace source fields. Valid values are PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
google_analytics - (Optional) Operation to be performed on the provided Google Analytics source fields. Valid values are PROJECTION and BETWEEN.
infor_nexus - (Optional) Operation to be performed on the provided Infor Nexus source fields. Valid values are PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
marketo - (Optional) Operation to be performed on the provided Marketo source fields. Valid values are PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
s3 - (Optional) Operation to be performed on the provided Amazon S3 source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
salesforce - (Optional) Operation to be performed on the provided Salesforce source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
sapo_data - (Optional) Operation to be performed on the provided SAPOData source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
service_now - (Optional) Operation to be performed on the provided ServiceNow source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
singular - (Optional) Operation to be performed on the provided Singular source fields. Valid values are PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
slack - (Optional) Operation to be performed on the provided Slack source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
trendmicro - (Optional) Operation to be performed on the provided Trend Micro source fields. Valid values are PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
veeva - (Optional) Operation to be performed on the provided Veeva source fields. Valid values are PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
zendesk - (Optional) Operation to be performed on the provided Zendesk source fields. Valid values are PROJECTION, GREATER_THAN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, and NO_OP.
The trigger_config block supports the following:

trigger_type - (Required) Type of flow trigger. Valid values are Scheduled, Event, and OnDemand.
trigger_properties - (Optional) Configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type. See Scheduled Trigger Properties for details.

The trigger_properties block only supports one attribute: scheduled, a block which in turn supports the following:
schedule_expression - (Required) Scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes).
data_pull_mode - (Optional) Whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run. Valid values are Incremental and Complete.
first_execution_from - (Optional) Date range for the records to import from the connector in the first flow run. Must be a valid RFC3339 timestamp.
schedule_end_time - (Optional) Scheduled end time for a schedule-triggered flow. Must be a valid RFC3339 timestamp.
schedule_offset - (Optional) Offset that is added to the time interval for a schedule-triggered flow. Maximum value of 36000.
schedule_start_time - (Optional) Scheduled start time for a schedule-triggered flow. Must be a valid RFC3339 timestamp.
timezone - (Optional) Time zone used when referring to the date and time of a schedule-triggered flow, such as America/New_York.
.resource "aws_appflow_flow" "example" {
# ... other configuration ...
trigger_config {
scheduled {
schedule_expression = "rate(1minutes)"
}
}
}
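A sketch of a scheduled trigger using the optional scheduled attributes; the rate, start time, and time zone are placeholders:

resource "aws_appflow_flow" "example_scheduled" {
  # ... other configuration ...

  trigger_config {
    trigger_type = "Scheduled"

    trigger_properties {
      scheduled {
        schedule_expression = "rate(1hours)"
        data_pull_mode      = "Incremental"
        schedule_start_time = "2024-01-01T00:00:00Z" # hypothetical start time (RFC3339)
        timezone            = "America/New_York"
      }
    }
  }
}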
This resource exports the following attributes in addition to the arguments above:

arn - Flow's ARN.
flow_status - The current status of the flow.
tags_all - Map of tags assigned to the resource, including those inherited from the provider default_tags configuration block.

In Terraform v1.5.0 and later, use an import block to import AppFlow flows using the arn. For example:
import {
  to = aws_appflow_flow.example
  id = "arn:aws:appflow:us-west-2:123456789012:flow/example-flow"
}
Using terraform import, import AppFlow flows using the arn. For example:
% terraform import aws_appflow_flow.example arn:aws:appflow:us-west-2:123456789012:flow/example-flow