Creates an Amazon Kinesis Data Analytics application. For information about creating a Kinesis Data Analytics application, see Creating an Application.
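## Example Usage

A minimal configuration sketch. The role ARN, bucket ARN, file key, and runtime version below are placeholders for illustration, not values taken from this page:

```terraform
resource "awscc_kinesisanalyticsv2_application" "example" {
  application_name       = "example-flink-app"
  runtime_environment    = "FLINK-1_15"
  service_execution_role = "arn:aws:iam::123456789012:role/example-kda-role" # placeholder ARN

  application_configuration = {
    application_code_configuration = {
      code_content_type = "ZIPFILE"
      code_content = {
        s3_content_location = {
          bucket_arn = "arn:aws:s3:::example-bucket" # placeholder
          file_key   = "flink-app.jar"               # placeholder
        }
      }
    }
  }
}
```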
## Schema

### Required

- `runtime_environment` (String) The runtime environment for the application.
- `service_execution_role` (String) Specifies the IAM role that the application uses to access external resources.

### Optional

- `application_configuration` (Attributes) Use this parameter to configure the application. (see below for nested schema)
- `application_description` (String) The description of the application.
- `application_maintenance_configuration` (Attributes) Used to configure the start of the maintenance window. (see below for nested schema)
- `application_mode` (String) To create a Kinesis Data Analytics Studio notebook, you must set the mode to `INTERACTIVE`. However, for a Kinesis Data Analytics for Apache Flink application, the mode is optional.
- `application_name` (String) The name of the application.
- `run_configuration` (Attributes) Specifies the run configuration (start parameters) of a Kinesis Data Analytics application. Evaluated on update for `RUNNING` applications only. (see below for nested schema)
- `tags` (Attributes List) A list of one or more tags to assign to the application. A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. (see below for nested schema)

### Read-Only

- `id` (String) Uniquely identifies the resource.

### Nested Schema for `application_configuration`
Optional:
- `application_code_configuration` (Attributes) The code location and type parameters for a Flink-based Kinesis Data Analytics application. (see below for nested schema)
- `application_snapshot_configuration` (Attributes) Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application. (see below for nested schema)
- `environment_properties` (Attributes) Describes execution properties for a Flink-based Kinesis Data Analytics application. (see below for nested schema)
- `flink_application_configuration` (Attributes) The creation and update parameters for a Flink-based Kinesis Data Analytics application. (see below for nested schema)
- `sql_application_configuration` (Attributes) The creation and update parameters for a SQL-based Kinesis Data Analytics application. (see below for nested schema)
- `vpc_configurations` (Attributes List) The array of descriptions of VPC configurations available to the application. (see below for nested schema)
- `zeppelin_application_configuration` (Attributes) The configuration parameters for a Kinesis Data Analytics Studio notebook. (see below for nested schema)

### Nested Schema for `application_configuration.application_code_configuration`
Required:
- `code_content` (Attributes) The location and type of the application code. (see below for nested schema)
- `code_content_type` (String) Specifies whether the code content is in text or zip format.

### Nested Schema for `application_configuration.application_code_configuration.code_content`
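For illustration, an S3-backed `code_content` block might look like the following sketch (the bucket ARN, file key, and object version are placeholders):

```terraform
code_content = {
  s3_content_location = {
    bucket_arn     = "arn:aws:s3:::example-bucket" # placeholder
    file_key       = "flink-app.jar"               # placeholder
    object_version = "3"                           # optional
  }
}
```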
Optional:
- `s3_content_location` (Attributes) Information about the Amazon S3 bucket that contains the application code. (see below for nested schema)
- `text_content` (String) The text-format code for a Flink-based Kinesis Data Analytics application.
- `zip_file_content` (String) The zip-format code for a Flink-based Kinesis Data Analytics application.

### Nested Schema for `application_configuration.application_code_configuration.code_content.s3_content_location`
Required:
- `bucket_arn` (String) The Amazon Resource Name (ARN) for the S3 bucket containing the application code.
- `file_key` (String) The file key for the object containing the application code.

Optional:

- `object_version` (String) The version of the object containing the application code.

### Nested Schema for `application_configuration.application_snapshot_configuration`
Required:
- `snapshots_enabled` (Boolean) Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.

### Nested Schema for `application_configuration.environment_properties`
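As a sketch, an `environment_properties` block passes key-value runtime properties to the application; the group ID and keys below are illustrative, not values from this page:

```terraform
environment_properties = {
  property_groups = [
    {
      property_group_id = "ProducerConfigProperties" # illustrative group ID
      property_map = {
        "flink.stream.initpos" = "LATEST"
        "aws.region"           = "us-east-1"
      }
    }
  ]
}
```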
Optional:
- `property_groups` (Attributes List) Describes the execution property groups. (see below for nested schema)

### Nested Schema for `application_configuration.environment_properties.property_groups`
Optional:
- `property_group_id` (String) Describes the key of an application execution property key-value pair.
- `property_map` (Map of String) Describes the value of an application execution property key-value pair.

### Nested Schema for `application_configuration.flink_application_configuration`
Optional:
- `checkpoint_configuration` (Attributes) Describes an application's checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. For more information, see Checkpoints for Fault Tolerance in the Apache Flink documentation. (see below for nested schema)
- `monitoring_configuration` (Attributes) Describes configuration parameters for Amazon CloudWatch logging for an application. (see below for nested schema)
- `parallelism_configuration` (Attributes) Describes parameters for how an application executes multiple tasks simultaneously. (see below for nested schema)

### Nested Schema for `application_configuration.flink_application_configuration.checkpoint_configuration`
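For illustration, a custom `checkpoint_configuration` might look like this sketch (the intervals are placeholder values):

```terraform
checkpoint_configuration = {
  configuration_type            = "CUSTOM" # required in order to set the fields below
  checkpointing_enabled         = true
  checkpoint_interval           = 60000 # milliseconds
  min_pause_between_checkpoints = 5000  # milliseconds
}
```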
Required:
- `configuration_type` (String) Describes whether the application uses Kinesis Data Analytics' default checkpointing behavior. You must set this property to `CUSTOM` in order to set the `CheckpointingEnabled`, `CheckpointInterval`, or `MinPauseBetweenCheckpoints` parameters.

Optional:

- `checkpoint_interval` (Number) Describes the interval in milliseconds between checkpoint operations.
- `checkpointing_enabled` (Boolean) Describes whether checkpointing is enabled for a Flink-based Kinesis Data Analytics application.
- `min_pause_between_checkpoints` (Number) Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. If a checkpoint operation takes longer than the `CheckpointInterval`, the application otherwise performs continual checkpoint operations. For more information, see Tuning Checkpointing in the Apache Flink documentation.

### Nested Schema for `application_configuration.flink_application_configuration.monitoring_configuration`
Required:
- `configuration_type` (String) Describes whether to use the default CloudWatch logging configuration for an application. You must set this property to `CUSTOM` in order to set the `LogLevel` or `MetricsLevel` parameters.

Optional:

- `log_level` (String) Describes the verbosity of the CloudWatch Logs for an application.
- `metrics_level` (String) Describes the granularity of the CloudWatch Logs for an application. The `PARALLELISM` level is not recommended for applications with a parallelism over 64 due to excessive costs.

### Nested Schema for `application_configuration.flink_application_configuration.parallelism_configuration`
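As a sketch, a custom `parallelism_configuration` might look like the following (the task counts are placeholder values):

```terraform
parallelism_configuration = {
  configuration_type   = "CUSTOM" # required in order to change the fields below
  auto_scaling_enabled = true
  parallelism          = 4 # initial parallel tasks
  parallelism_per_kpu  = 1 # parallel tasks per Kinesis Processing Unit
}
```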
Required:
- `configuration_type` (String) Describes whether the application uses the default parallelism for the Kinesis Data Analytics service. You must set this property to `CUSTOM` in order to change your application's `AutoScalingEnabled`, `Parallelism`, or `ParallelismPerKPU` properties.

Optional:

- `auto_scaling_enabled` (Boolean) Describes whether the Kinesis Data Analytics service can increase the parallelism of the application in response to increased throughput.
- `parallelism` (Number) Describes the initial number of parallel tasks that a Java-based Kinesis Data Analytics application can perform. The Kinesis Data Analytics service can increase this number automatically if `ParallelismConfiguration:AutoScalingEnabled` is set to `true`.
- `parallelism_per_kpu` (Number) Describes the number of parallel tasks that a Java-based Kinesis Data Analytics application can perform per Kinesis Processing Unit (KPU) used by the application. For more information about KPUs, see Amazon Kinesis Data Analytics Pricing.

### Nested Schema for `application_configuration.sql_application_configuration`
Optional:
- `inputs` (Attributes List) The array of Input objects describing the input streams used by the application. (see below for nested schema)

### Nested Schema for `application_configuration.sql_application_configuration.inputs`
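For illustration, a single JSON-formatted Kinesis stream input might be declared like this sketch (the stream ARN, column names, and types are placeholders):

```terraform
sql_application_configuration = {
  inputs = [
    {
      name_prefix = "MyInApplicationStream"
      kinesis_streams_input = {
        resource_arn = "arn:aws:kinesis:us-east-1:123456789012:stream/example" # placeholder
      }
      input_schema = {
        record_columns = [
          { name = "ticker", sql_type = "VARCHAR(4)", mapping = "$.ticker" },
          { name = "price", sql_type = "REAL", mapping = "$.price" },
        ]
        record_format = {
          record_format_type = "JSON"
          mapping_parameters = {
            json_mapping_parameters = { record_row_path = "$" }
          }
        }
        record_encoding = "UTF-8"
      }
    }
  ]
}
```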
Required:
- `input_schema` (Attributes) Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. (see below for nested schema)
- `name_prefix` (String) The name prefix to use when creating an in-application stream. Suppose that you specify a prefix `"MyInApplicationStream"`. Kinesis Data Analytics then creates one or more (as per the `InputParallelism` count you specified) in-application streams with the names `"MyInApplicationStream_001"`, `"MyInApplicationStream_002"`, and so on.

Optional:

- `input_parallelism` (Attributes) Describes the number of in-application streams to create. (see below for nested schema)
- `input_processing_configuration` (Attributes) The `InputProcessingConfiguration` for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is `InputLambdaProcessor`. (see below for nested schema)
- `kinesis_firehose_input` (Attributes) If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream's ARN. (see below for nested schema)
- `kinesis_streams_input` (Attributes) If the streaming source is an Amazon Kinesis data stream, identifies the stream's Amazon Resource Name (ARN). (see below for nested schema)

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema`
Required:
- `record_columns` (Attributes List) A list of `RecordColumn` objects. (see below for nested schema)
- `record_format` (Attributes) Specifies the format of the records on the streaming source. (see below for nested schema)

Optional:

- `record_encoding` (String) Specifies the encoding of the records in the streaming source. For example, UTF-8.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema.record_columns`
Required:
- `name` (String) The name of the column that is created in the in-application input stream or reference table.
- `sql_type` (String) The type of column created in the in-application input stream or reference table.

Optional:

- `mapping` (String) A reference to the data element in the streaming input or the reference data source.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema.record_format`
Required:
- `record_format_type` (String) The type of record format.

Optional:

- `mapping_parameters` (Attributes) When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source. (see below for nested schema)

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema.record_format.mapping_parameters`
Optional:
- `csv_mapping_parameters` (Attributes) Provides additional mapping information when the record format uses delimiters (for example, CSV). (see below for nested schema)
- `json_mapping_parameters` (Attributes) Provides additional mapping information when JSON is the record format on the streaming source. (see below for nested schema)

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema.record_format.mapping_parameters.csv_mapping_parameters`
Required:
- `record_column_delimiter` (String) The column delimiter. For example, in a CSV format, a comma (`","`) is the typical column delimiter.
- `record_row_delimiter` (String) The row delimiter. For example, in a CSV format, `'\n'` is the typical row delimiter.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_schema.record_format.mapping_parameters.json_mapping_parameters`
Required:
- `record_row_path` (String) The path to the top-level parent that contains the records.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_parallelism`
Optional:
- `count` (Number) The number of in-application streams to create.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_processing_configuration`
Optional:
- `input_lambda_processor` (Attributes) The `InputLambdaProcessor` that is used to preprocess the records in the stream before being processed by your application code. (see below for nested schema)

### Nested Schema for `application_configuration.sql_application_configuration.inputs.input_processing_configuration.input_lambda_processor`
Required:
- `resource_arn` (String) The ARN of the Amazon Lambda function that operates on records in the stream.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.kinesis_firehose_input`
Required:
- `resource_arn` (String) The Amazon Resource Name (ARN) of the delivery stream.

### Nested Schema for `application_configuration.sql_application_configuration.inputs.kinesis_streams_input`
Required:
- `resource_arn` (String) The ARN of the input Kinesis data stream to read.

### Nested Schema for `application_configuration.vpc_configurations`
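As a sketch, a VPC configuration supplies security group and subnet IDs (the IDs below are placeholders):

```terraform
vpc_configurations = [
  {
    security_group_ids = ["sg-0123456789abcdef0"] # placeholder
    subnet_ids = [
      "subnet-0123456789abcdef0", # placeholders
      "subnet-0fedcba9876543210",
    ]
  }
]
```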
Required:
- `security_group_ids` (List of String) The array of SecurityGroup IDs used by the VPC configuration.
- `subnet_ids` (List of String) The array of Subnet IDs used by the VPC configuration.

### Nested Schema for `application_configuration.zeppelin_application_configuration`
Optional:
- `catalog_configuration` (Attributes) The Amazon Glue Data Catalog that you use in queries in a Kinesis Data Analytics Studio notebook. (see below for nested schema)
- `custom_artifacts_configuration` (Attributes List) A list of `CustomArtifactConfiguration` objects. (see below for nested schema)
- `deploy_as_application_configuration` (Attributes) The information required to deploy a Kinesis Data Analytics Studio notebook as an application with durable state. (see below for nested schema)
- `monitoring_configuration` (Attributes) The monitoring configuration of a Kinesis Data Analytics Studio notebook. (see below for nested schema)

### Nested Schema for `application_configuration.zeppelin_application_configuration.catalog_configuration`
Optional:
- `glue_data_catalog_configuration` (Attributes) The configuration parameters for the default Amazon Glue database. You use this database for Apache Flink SQL queries and table API transforms that you write in a Kinesis Data Analytics Studio notebook. (see below for nested schema)

### Nested Schema for `application_configuration.zeppelin_application_configuration.catalog_configuration.glue_data_catalog_configuration`
Optional:
- `database_arn` (String) The Amazon Resource Name (ARN) of the database.

### Nested Schema for `application_configuration.zeppelin_application_configuration.custom_artifacts_configuration`
Required:
- `artifact_type` (String) Set this to either `UDF` or `DEPENDENCY_JAR`. `UDF` stands for user-defined functions. This type of artifact must be in an S3 bucket. A `DEPENDENCY_JAR` can be in either Maven or an S3 bucket.

Optional:

- `maven_reference` (Attributes) The parameters required to fully specify a Maven reference. (see below for nested schema)
- `s3_content_location` (Attributes) The location of the custom artifacts. (see below for nested schema)

### Nested Schema for `application_configuration.zeppelin_application_configuration.custom_artifacts_configuration.maven_reference`
Required:
- `artifact_id` (String) The artifact ID of the Maven reference.
- `group_id` (String) The group ID of the Maven reference.
- `version` (String) The version of the Maven reference.

### Nested Schema for `application_configuration.zeppelin_application_configuration.custom_artifacts_configuration.s3_content_location`
Required:
- `bucket_arn` (String) The Amazon Resource Name (ARN) for the S3 bucket containing the application code.
- `file_key` (String) The file key for the object containing the application code.

Optional:

- `object_version` (String) The version of the object containing the application code.

### Nested Schema for `application_configuration.zeppelin_application_configuration.deploy_as_application_configuration`
Required:
- `s3_content_location` (Attributes) The description of an Amazon S3 object that contains the Amazon Data Analytics application, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data. (see below for nested schema)

### Nested Schema for `application_configuration.zeppelin_application_configuration.deploy_as_application_configuration.s3_content_location`
Required:
- `bucket_arn` (String) The Amazon Resource Name (ARN) of the S3 bucket.

Optional:

- `base_path` (String) The base path for the S3 bucket.

### Nested Schema for `application_configuration.zeppelin_application_configuration.monitoring_configuration`
Optional:
- `log_level` (String) The verbosity of the CloudWatch Logs for an application. You can set it to `INFO`, `WARN`, `ERROR`, or `DEBUG`.

### Nested Schema for `application_maintenance_configuration`
Required:
- `application_maintenance_window_start_time` (String) The start time for the maintenance window.

### Nested Schema for `run_configuration`
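For illustration, a `run_configuration` that restores from the latest snapshot might look like this sketch:

```terraform
run_configuration = {
  application_restore_configuration = {
    application_restore_type = "RESTORE_FROM_LATEST_SNAPSHOT"
  }
  flink_run_configuration = {
    allow_non_restored_state = false # fail if snapshot state cannot be mapped
  }
}
```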
Optional:
- `application_restore_configuration` (Attributes) Describes the restore behavior of a restarting application. (see below for nested schema)
- `flink_run_configuration` (Attributes) Describes the starting parameters for a Flink-based Kinesis Data Analytics application. (see below for nested schema)

### Nested Schema for `run_configuration.application_restore_configuration`
Required:
- `application_restore_type` (String) Specifies how the application should be restored.

Optional:

- `snapshot_name` (String) The identifier of an existing snapshot of application state to use to restart an application. The application uses this value if `RESTORE_FROM_CUSTOM_SNAPSHOT` is specified for the `ApplicationRestoreType`.

### Nested Schema for `run_configuration.flink_run_configuration`
Optional:
- `allow_non_restored_state` (Boolean) When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. Defaults to `false`. If you update your application without specifying this parameter, `AllowNonRestoredState` will be set to `false`, even if it was previously set to `true`.

### Nested Schema for `tags`
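As a sketch, tags are declared as a list of key-value pairs (the key and value below are illustrative):

```terraform
tags = [
  {
    key   = "Environment"
    value = "dev"
  }
]
```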
Required:
- `key` (String) The key name of the tag. You can specify a value that's 1 to 128 Unicode characters in length and can't be prefixed with `aws:`. You can use any of the following characters: the set of Unicode letters, digits, whitespace, `_`, `.`, `/`, `=`, `+`, and `-`.
- `value` (String) The value for the tag. You can specify a value that's 0 to 256 characters in length.

## Import

Import is supported using the following syntax:

```shell
$ terraform import awscc_kinesisanalyticsv2_application.example <resource ID>
```