Types for Google Cloud Dataproc API Client#

class google.cloud.dataproc_v1beta2.types.AcceleratorConfig#

Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine).

accelerator_type_uri#

Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes. Examples:

- https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
- projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
- nvidia-tesla-k80

Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

accelerator_count#

The number of the accelerator cards of this type exposed to this instance.

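For illustration, a minimal sketch of constructing this message; the values are placeholders, and as with all messages in this module the fields are passed as keyword arguments:

    from google.cloud.dataproc_v1beta2 import types

    # One NVIDIA K80 per instance; the short name is required when
    # Auto Zone Placement chooses the zone.
    accelerator = types.AcceleratorConfig(
        accelerator_type_uri="nvidia-tesla-k80",
        accelerator_count=1,
    )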

class google.cloud.dataproc_v1beta2.types.Any#
type_url#

Field google.protobuf.Any.type_url

value#

Field google.protobuf.Any.value

class google.cloud.dataproc_v1beta2.types.AutoscalingConfig#

Autoscaling Policy config associated with the cluster.

policy_uri#

Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid. Examples:

- https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
- projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]

Note that the policy must be in the same project and Cloud Dataproc region.

class google.cloud.dataproc_v1beta2.types.AutoscalingPolicy#

Describes an autoscaling policy for Dataproc cluster autoscaler.

id#

Required. The policy id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

name#

Output only. The “resource name” of the policy, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}.

algorithm#

Required. Autoscaling algorithm for policy.

worker_config#

Required. Describes how the autoscaler will operate for primary workers.

secondary_worker_config#

Optional. Describes how the autoscaler will operate for secondary workers.

basic_algorithm#

Field google.cloud.dataproc.v1beta2.AutoscalingPolicy.basic_algorithm

class google.cloud.dataproc_v1beta2.types.BasicAutoscalingAlgorithm#

Basic algorithm for autoscaling.

yarn_config#

Required. YARN autoscaling configuration.

cooldown_period#

Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: [2m, 1d]. Default: 2m.

class google.cloud.dataproc_v1beta2.types.BasicYarnAutoscalingConfig#

Basic autoscaling configurations for YARN.

graceful_decommission_timeout#

Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations. Bounds: [0s, 1d].

scale_up_factor#

Required. Fraction of average pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: [0.0, 1.0].

scale_down_factor#

Required. Fraction of average pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. Bounds: [0.0, 1.0].

scale_up_min_worker_fraction#

Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0.

scale_down_min_worker_fraction#

Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2 worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0.

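Putting the autoscaling types above together, a hedged sketch of a complete policy (the id, durations, and bounds are illustrative, not recommendations; InstanceGroupAutoscalingPolicyConfig and Duration are documented later in this module):

    from google.cloud.dataproc_v1beta2 import types

    policy = types.AutoscalingPolicy(
        id="my-policy",  # placeholder id
        basic_algorithm=types.BasicAutoscalingAlgorithm(
            cooldown_period=types.Duration(seconds=120),  # 2m, the default
            yarn_config=types.BasicYarnAutoscalingConfig(
                graceful_decommission_timeout=types.Duration(seconds=3600),
                scale_up_factor=0.5,
                scale_down_factor=1.0,
                scale_up_min_worker_fraction=0.0,
            ),
        ),
        worker_config=types.InstanceGroupAutoscalingPolicyConfig(
            min_instances=2,
            max_instances=20,
        ),
    )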

class google.cloud.dataproc_v1beta2.types.CancelJobRequest#

A request to cancel a job.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

job_id#

Required. The job ID.

class google.cloud.dataproc_v1beta2.types.CancelOperationRequest#
name#

Field google.longrunning.CancelOperationRequest.name

class google.cloud.dataproc_v1beta2.types.Cluster#

Describes the identifying information, config, and status of a cluster of Compute Engine instances.

project_id#

Required. The Google Cloud Platform project ID that the cluster belongs to.

cluster_name#

Required. The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.

config#

Required. The cluster config. Note that Cloud Dataproc may set default values, and values may change when clusters are updated.

labels#

Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster.

status#

Output only. Cluster status.

status_history#

Output only. The previous cluster status.

cluster_uuid#

Output only. A cluster UUID (Unique Universal Identifier). Cloud Dataproc generates this value when it creates the cluster.

metrics#

Output only. Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.Cluster.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.Cluster.LabelsEntry.value

class google.cloud.dataproc_v1beta2.types.ClusterConfig#

The cluster config.

config_bucket#

Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).

gce_cluster_config#

Optional. The shared Compute Engine config settings for all instances in a cluster.

master_config#

Optional. The Compute Engine config settings for the master instance in a cluster.

worker_config#

Optional. The Compute Engine config settings for worker instances in a cluster.

secondary_worker_config#

Optional. The Compute Engine config settings for additional worker instances in a cluster.

software_config#

Optional. The config settings for software inside the cluster.

lifecycle_config#

Optional. The config setting for auto delete cluster schedule.

initialization_actions#

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

    ROLE=$(curl -H Metadata-Flavor:Google \
      http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi

encryption_config#

Optional. Encryption settings for the cluster.

autoscaling_config#

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

endpoint_config#

Optional. Port/endpoint configuration for this cluster.

security_config#

Optional. Security related configuration.

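As a rough sketch, a ClusterConfig assembled from the pieces above; the zone, machine types, and sizes are placeholder assumptions, and unset fields are defaulted by the service:

    from google.cloud.dataproc_v1beta2 import types

    config = types.ClusterConfig(
        gce_cluster_config=types.GceClusterConfig(zone_uri="us-east1-a"),
        master_config=types.InstanceGroupConfig(
            num_instances=1,
            machine_type_uri="n1-standard-2",
        ),
        worker_config=types.InstanceGroupConfig(
            num_instances=2,
            machine_type_uri="n1-standard-2",
            disk_config=types.DiskConfig(boot_disk_size_gb=100),
        ),
        lifecycle_config=types.LifecycleConfig(
            idle_delete_ttl=types.Duration(seconds=600),  # "10m"
        ),
    )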

class google.cloud.dataproc_v1beta2.types.ClusterMetrics#

Contains cluster daemon metrics, such as HDFS and YARN stats.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

hdfs_metrics#

The HDFS metrics.

yarn_metrics#

The YARN metrics.

class HdfsMetricsEntry#
key#

Field google.cloud.dataproc.v1beta2.ClusterMetrics.HdfsMetricsEntry.key

value#

Field google.cloud.dataproc.v1beta2.ClusterMetrics.HdfsMetricsEntry.value

class YarnMetricsEntry#
key#

Field google.cloud.dataproc.v1beta2.ClusterMetrics.YarnMetricsEntry.key

value#

Field google.cloud.dataproc.v1beta2.ClusterMetrics.YarnMetricsEntry.value

class google.cloud.dataproc_v1beta2.types.ClusterOperation#

The cluster operation triggered by a workflow.

operation_id#

Output only. The id of the cluster operation.

error#

Output only. Error, if operation failed.

done#

Output only. Indicates the operation is done.

class google.cloud.dataproc_v1beta2.types.ClusterOperationMetadata#

Metadata describing the operation.

cluster_name#

Output only. Name of the cluster for the operation.

cluster_uuid#

Output only. Cluster UUID for the operation.

status#

Output only. Current operation status.

status_history#

Output only. The previous operation status.

operation_type#

Output only. The operation type.

description#

Output only. Short description of operation.

labels#

Output only. Labels associated with the operation.

warnings#

Output only. Errors encountered during operation execution.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.ClusterOperationMetadata.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.ClusterOperationMetadata.LabelsEntry.value

class google.cloud.dataproc_v1beta2.types.ClusterOperationStatus#

The status of the operation.

state#

Output only. A message containing the operation state.

inner_state#

Output only. A message containing the detailed operation state.

details#

Output only. A message containing any operation metadata details.

state_start_time#

Output only. The time this state was entered.

class google.cloud.dataproc_v1beta2.types.ClusterSelector#

A selector that chooses target cluster for jobs based on metadata.

zone#

Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.

cluster_labels#

Required. The cluster labels. Cluster must have all labels to match.

class ClusterLabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.ClusterSelector.ClusterLabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.ClusterSelector.ClusterLabelsEntry.value

class google.cloud.dataproc_v1beta2.types.ClusterStatus#

The status of a cluster and its instances.

state#

Output only. The cluster’s state.

detail#

Output only. Optional details of cluster’s state.

state_start_time#

Output only. Time when this state was entered.

substate#

Output only. Additional state information that includes status reported by the agent.

class google.cloud.dataproc_v1beta2.types.CreateAutoscalingPolicyRequest#

A request to create an autoscaling policy.

parent#

Required. The “resource name” of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}.

policy#

The autoscaling policy to create.

class google.cloud.dataproc_v1beta2.types.CreateClusterRequest#

A request to create a cluster.

project_id#

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

cluster#

Required. The cluster to create.

request_id#

Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

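In practice this request is usually issued through the client rather than built by hand. A hedged sketch, assuming the v1beta2 ClusterControllerClient surface where create_cluster takes (project_id, region, cluster) plus an optional request_id and returns a long-running operation; the project and cluster names are placeholders:

    import uuid

    from google.cloud import dataproc_v1beta2
    from google.cloud.dataproc_v1beta2 import types

    client = dataproc_v1beta2.ClusterControllerClient()
    cluster = types.Cluster(
        project_id="my-project",
        cluster_name="my-cluster",
        config=types.ClusterConfig(),  # Dataproc fills in defaults
    )
    operation = client.create_cluster(
        "my-project", "global", cluster,
        request_id=str(uuid.uuid4()),  # makes retried calls idempotent
    )
    cluster = operation.result()  # blocks until creation completes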

class google.cloud.dataproc_v1beta2.types.CreateWorkflowTemplateRequest#

A request to create a workflow template.

parent#

Required. The “resource name” of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

template#

Required. The Dataproc workflow template to create.

class google.cloud.dataproc_v1beta2.types.DeleteAutoscalingPolicyRequest#

A request to delete an autoscaling policy.

Autoscaling policies in use by one or more clusters will not be deleted.

name#

Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}.

class google.cloud.dataproc_v1beta2.types.DeleteClusterRequest#

A request to delete a cluster.

project_id#

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

cluster_name#

Required. The cluster name.

cluster_uuid#

Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if cluster with specified UUID does not exist.

request_id#

Optional. A unique id used to identify the request. If the server receives two DeleteClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

class google.cloud.dataproc_v1beta2.types.DeleteJobRequest#

A request to delete a job.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

job_id#

Required. The job ID.

class google.cloud.dataproc_v1beta2.types.DeleteOperationRequest#
name#

Field google.longrunning.DeleteOperationRequest.name

class google.cloud.dataproc_v1beta2.types.DeleteWorkflowTemplateRequest#

A request to delete a workflow template.

Currently started workflows will remain running.

name#

Required. The “resource name” of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version#

Optional. The version of the workflow template to delete. If specified, the template will only be deleted if the current server version matches the specified version.

class google.cloud.dataproc_v1beta2.types.DiagnoseClusterRequest#

A request to collect cluster diagnostic information.

project_id#

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

cluster_name#

Required. The cluster name.

class google.cloud.dataproc_v1beta2.types.DiagnoseClusterResults#

The location of diagnostic output.

output_uri#

Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics.

class google.cloud.dataproc_v1beta2.types.DiskConfig#

Specifies the config of disk options for a group of VM instances.

boot_disk_type#

Optional. Type of the boot disk (default is “pd-standard”). Valid values: “pd-ssd” (Persistent Disk Solid State Drive) or “pd-standard” (Persistent Disk Hard Disk Drive).

boot_disk_size_gb#

Optional. Size in GB of the boot disk (default is 500GB).

num_local_ssds#

Optional. Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

class google.cloud.dataproc_v1beta2.types.Duration#
nanos#

Field google.protobuf.Duration.nanos

seconds#

Field google.protobuf.Duration.seconds

class google.cloud.dataproc_v1beta2.types.Empty#

class google.cloud.dataproc_v1beta2.types.EncryptionConfig#

Encryption settings for the cluster.

gce_pd_kms_key_name#

Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

class google.cloud.dataproc_v1beta2.types.EndpointConfig#

Endpoint config for this cluster.

http_ports#

Output only. The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

enable_http_port_access#

Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.

class HttpPortsEntry#
key#

Field google.cloud.dataproc.v1beta2.EndpointConfig.HttpPortsEntry.key

value#

Field google.cloud.dataproc.v1beta2.EndpointConfig.HttpPortsEntry.value

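A one-line sketch: opt in at cluster creation; http_ports is output only and is populated by the service.

    from google.cloud.dataproc_v1beta2 import types

    # Request component web UI endpoints for the cluster.
    endpoint_config = types.EndpointConfig(enable_http_port_access=True)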

class google.cloud.dataproc_v1beta2.types.FieldMask#
paths#

Field google.protobuf.FieldMask.paths

class google.cloud.dataproc_v1beta2.types.GceClusterConfig#

Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.

zone_uri#

Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the “global” region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples:

- https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
- projects/[project_id]/zones/[zone]
- us-central1-f

network_uri#

Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the “default” network of the project is used, if it exists. Cannot be a “Custom Subnet Network” (see Using Subnetworks for more information). A full URL, partial URI, or short name are valid. Examples:

- https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
- projects/[project_id]/regions/global/default
- default

subnetwork_uri#

Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples:

- https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0
- projects/[project_id]/regions/us-east1/subnetworks/sub0
- sub0

internal_ip_only#

Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.

service_account#

Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:

- roles/logging.logWriter
- roles/storage.objectAdmin

(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com

service_account_scopes#

Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:

- https://www.googleapis.com/auth/cloud.useraccounts.readonly
- https://www.googleapis.com/auth/devstorage.read_write
- https://www.googleapis.com/auth/logging.write

If no scopes are specified, the following defaults are also provided:

- https://www.googleapis.com/auth/bigquery
- https://www.googleapis.com/auth/bigtable.admin.table
- https://www.googleapis.com/auth/bigtable.data
- https://www.googleapis.com/auth/devstorage.full_control

tags#

The Compute Engine tags to add to all instances (see Tagging instances).

metadata#

The Compute Engine metadata entries to add to all instances (see Project and instance metadata).

reservation_affinity#

Optional. Reservation Affinity for consuming Zonal reservation.

class MetadataEntry#
key#

Field google.cloud.dataproc.v1beta2.GceClusterConfig.MetadataEntry.key

value#

Field google.cloud.dataproc.v1beta2.GceClusterConfig.MetadataEntry.value

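A hedged sketch of a private-IP network config; the subnetwork path, tags, and metadata values are placeholder assumptions:

    from google.cloud.dataproc_v1beta2 import types

    gce_config = types.GceClusterConfig(
        subnetwork_uri="projects/[project_id]/regions/us-east1/subnetworks/sub0",
        internal_ip_only=True,      # instances get no external IPs
        tags=["dataproc"],          # Compute Engine firewall tags
        metadata={"env": "staging"},
    )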

class google.cloud.dataproc_v1beta2.types.GetAutoscalingPolicyRequest#

A request to fetch an autoscaling policy.

name#

Required. The “resource name” of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/autoscalingPolicies/{policy_id}.

class google.cloud.dataproc_v1beta2.types.GetClusterRequest#

Request to get the resource representation for a cluster in a project.

project_id#

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

cluster_name#

Required. The cluster name.

class google.cloud.dataproc_v1beta2.types.GetJobRequest#

A request to get the resource representation for a job in a project.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

job_id#

Required. The job ID.

class google.cloud.dataproc_v1beta2.types.GetOperationRequest#
name#

Field google.longrunning.GetOperationRequest.name

class google.cloud.dataproc_v1beta2.types.GetWorkflowTemplateRequest#

A request to fetch a workflow template.

name#

Required. The “resource name” of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version#

Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version.

class google.cloud.dataproc_v1beta2.types.HadoopJob#

A Cloud Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.

driver#

Required. Indicates the location of the driver’s main class. Specify either the jar file that contains the main class or the main class name. To specify both, add the jar file to jar_file_uris, and then specify the main class name in this property.

main_jar_file_uri#

The HCFS URI of the jar file containing the main class. Examples:

- gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar
- hdfs:/tmp/test-samples/custom-wordcount.jar
- file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

main_class#

The name of the driver’s main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.

args#

Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

jar_file_uris#

Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.

file_uris#

Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

archive_uris#

Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.

properties#

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.HadoopJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.HadoopJob.PropertiesEntry.value

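A sketch of the canonical wordcount submission using the fields above; the bucket paths and the Hadoop property are placeholders:

    from google.cloud.dataproc_v1beta2 import types

    hadoop_job = types.HadoopJob(
        main_jar_file_uri=(
            "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"
        ),
        args=["wordcount", "gs://my-bucket/input/", "gs://my-bucket/output/"],
        properties={"mapreduce.job.reduces": "2"},  # illustrative property
    )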

class google.cloud.dataproc_v1beta2.types.HiveJob#

A Cloud Dataproc job for running Apache Hive queries on YARN.

queries#

Required. The sequence of Hive queries to execute, specified as either an HCFS file URI or a list of queries.

query_file_uri#

The HCFS URI of the script that contains Hive queries.

query_list#

A list of queries.

continue_on_failure#

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

script_variables#

Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).

properties#

Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.

jar_file_uris#

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.HiveJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.HiveJob.PropertiesEntry.value

class ScriptVariablesEntry#
key#

Field google.cloud.dataproc.v1beta2.HiveJob.ScriptVariablesEntry.key

value#

Field google.cloud.dataproc.v1beta2.HiveJob.ScriptVariablesEntry.value

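A sketch using an inline query list rather than a script URI; QueryList is another message in this module (not shown in this excerpt), and the variable values are placeholders:

    from google.cloud.dataproc_v1beta2 import types

    hive_job = types.HiveJob(
        query_list=types.QueryList(queries=["SHOW DATABASES;", "SHOW TABLES;"]),
        script_variables={"env": "staging"},  # equivalent to SET env="staging";
        continue_on_failure=False,
    )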

class google.cloud.dataproc_v1beta2.types.InstanceGroupAutoscalingPolicyConfig#

Configuration for the size bounds of an instance group, including its proportional size to other groups.

min_instances#

Optional. Minimum number of instances for this group. Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0.

max_instances#

Optional. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum secondary instances is set. Primary workers - Bounds: [min_instances, ). Required. Secondary workers - Bounds: [min_instances, ). Default: 0.

weight#

Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker. The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings. For example, if max_instances for secondary workers is 0, then only primary workers will be added. The cluster can also be out of balance when created. If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers.

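To make the weight semantics concrete, a sketch where the autoscaler targets roughly two primary workers per secondary worker, subject to each group's bounds:

    from google.cloud.dataproc_v1beta2 import types

    primary = types.InstanceGroupAutoscalingPolicyConfig(
        min_instances=2, max_instances=100, weight=2,
    )
    secondary = types.InstanceGroupAutoscalingPolicyConfig(
        min_instances=0, max_instances=50, weight=1,
    )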

class google.cloud.dataproc_v1beta2.types.InstanceGroupConfig#

Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group.

num_instances#

Optional. The number of VM instances in the instance group. For master instance groups, must be set to 1.

instance_names#

Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.

image_uri#

Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.

machine_type_uri#

Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples:

- https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
- projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
- n1-standard-2

Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.

disk_config#

Optional. Disk option config settings.

is_preemptible#

Optional. Specifies that this instance group contains preemptible instances.

managed_group_config#

Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.

accelerators#

Optional. The Compute Engine accelerator configuration for these instances. Beta Feature: This feature is still under development. It may be changed before final release.

min_cpu_platform#

Optional. Specifies the minimum CPU platform for the Instance Group. See Cloud Dataproc Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).

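A sketch of a worker group with attached GPUs, using short resource names as Auto Zone Placement requires; counts and types are illustrative:

    from google.cloud.dataproc_v1beta2 import types

    worker_config = types.InstanceGroupConfig(
        num_instances=4,
        machine_type_uri="n1-standard-4",
        disk_config=types.DiskConfig(boot_disk_size_gb=200, num_local_ssds=1),
        accelerators=[
            types.AcceleratorConfig(
                accelerator_type_uri="nvidia-tesla-k80",
                accelerator_count=1,
            )
        ],
    )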

class google.cloud.dataproc_v1beta2.types.InstantiateInlineWorkflowTemplateRequest#

A request to instantiate an inline workflow template.

parent#

Required. The “resource name” of the workflow template region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

template#

Required. The workflow template to instantiate.

instance_id#

Deprecated. Please use request_id field instead.

request_id#

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

class google.cloud.dataproc_v1beta2.types.InstantiateWorkflowTemplateRequest#

A request to instantiate a workflow template.

name#

Required. The “resource name” of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version#

Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template.

instance_id#

Deprecated. Please use request_id field instead.

request_id#

Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

parameters#

Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters.

class ParametersEntry#
key#

Field google.cloud.dataproc.v1beta2.InstantiateWorkflowTemplateRequest.ParametersEntry.key

value#

Field google.cloud.dataproc.v1beta2.InstantiateWorkflowTemplateRequest.ParametersEntry.value

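A hedged sketch, assuming the v1beta2 WorkflowTemplateServiceClient surface with its workflow_template_path helper; the project, template, and parameter names are placeholders:

    import uuid

    from google.cloud import dataproc_v1beta2

    client = dataproc_v1beta2.WorkflowTemplateServiceClient()
    name = client.workflow_template_path("my-project", "global", "my-template")
    operation = client.instantiate_workflow_template(
        name,
        request_id=str(uuid.uuid4()),       # dedupes retried calls
        parameters={"ZONE": "us-east1-a"},  # must match template parameters
    )
    operation.result()  # blocks until the workflow finishes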

class google.cloud.dataproc_v1beta2.types.Job#

A Cloud Dataproc job resource.

reference#

Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

placement#

Required. Job information, including how, when, and where to run the job.

type_job#

Required. The application/framework-specific portion of the job.

hadoop_job#

Job is a Hadoop job.

spark_job#

Job is a Spark job.

pyspark_job#

Job is a Pyspark job.

hive_job#

Job is a Hive job.

pig_job#

Job is a Pig job.

spark_r_job#

Job is a SparkR job.

spark_sql_job#

Job is a SparkSql job.

status#

Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

status_history#

Output only. The previous job status.

yarn_applications#

Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

submitted_by#

Output only. The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.

driver_output_resource_uri#

Output only. A URI pointing to the location of the stdout of the job’s driver program.

driver_control_files_uri#

Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

labels#

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.

scheduling#

Optional. Job scheduling configuration.

job_uuid#

Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.Job.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.Job.LabelsEntry.value

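A hedged sketch of submitting a job, assuming the v1beta2 JobControllerClient surface where submit_job takes (project_id, region, job); PySparkJob is another message in this module (not shown in this excerpt), and the names and paths are placeholders:

    from google.cloud import dataproc_v1beta2
    from google.cloud.dataproc_v1beta2 import types

    client = dataproc_v1beta2.JobControllerClient()
    job = types.Job(
        placement=types.JobPlacement(cluster_name="my-cluster"),
        pyspark_job=types.PySparkJob(
            main_python_file_uri="gs://my-bucket/jobs/wordcount.py",
        ),
        labels={"env": "staging"},
    )
    submitted = client.submit_job("my-project", "global", job)
    print(submitted.reference.job_id, submitted.status.state)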

class google.cloud.dataproc_v1beta2.types.JobPlacement#

Cloud Dataproc job config.

cluster_name#

Required. The name of the cluster where the job will be submitted.

cluster_uuid#

Output only. A cluster UUID generated by the Cloud Dataproc service when the job is submitted.

class google.cloud.dataproc_v1beta2.types.JobReference#

Encapsulates the full scoping used to reference a job.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

job_id#

Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.

class google.cloud.dataproc_v1beta2.types.JobScheduling#

Job scheduling options.

max_failures_per_hour#

Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.

class google.cloud.dataproc_v1beta2.types.JobStatus#

Cloud Dataproc job status.

state#

Output only. A state message specifying the overall job state.

details#

Output only. Optional job state details, such as an error description if the state is ERROR.

state_start_time#

Output only. The time when this state was entered.

substate#

Output only. Additional state information, which includes status reported by the agent.

class google.cloud.dataproc_v1beta2.types.KerberosConfig#

Specifies Kerberos related configuration.

enable_kerberos#

Optional. Flag to indicate whether to Kerberize the cluster.

root_principal_password_uri#

Required. The Cloud Storage URI of a KMS encrypted file containing the root principal password.

kms_key_uri#

Required. The uri of the KMS key used to encrypt various sensitive files.

keystore_uri#

Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

truststore_uri#

Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

keystore_password_uri#

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.

key_password_uri#

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.

truststore_password_uri#

Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.

cross_realm_trust_realm#

Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.

cross_realm_trust_kdc#

Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_admin_server#

Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.

cross_realm_trust_shared_password_uri#

Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.

kdc_db_key_uri#

Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.

tgt_lifetime_hours#

Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.

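A sketch of the minimum required fields; all URIs are placeholders, and the password file must be encrypted with the named KMS key:

    from google.cloud.dataproc_v1beta2 import types

    kerberos_config = types.KerberosConfig(
        enable_kerberos=True,
        root_principal_password_uri=(
            "gs://my-bucket/kerberos/root-password.encrypted"
        ),
        kms_key_uri=(
            "projects/my-project/locations/global/"
            "keyRings/my-ring/cryptoKeys/my-key"
        ),
    )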

class google.cloud.dataproc_v1beta2.types.LifecycleConfig#

Specifies the cluster auto-delete schedule configuration.

idle_delete_ttl#

Optional. The duration to keep the cluster alive while idling. Passing this threshold will cause the cluster to be deleted. Valid range: [10m, 14d]. Example: “10m”, the minimum value, to delete the cluster when it has had no jobs running for 10 minutes.

ttl#

Optional. Either the exact time the cluster should be deleted at or the cluster maximum age.

auto_delete_time#

Optional. The time when cluster will be auto-deleted.

auto_delete_ttl#

Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Valid range: [10m, 14d]. Example: “1d”, to delete the cluster 1 day after its creation.

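A sketch combining the idle and maximum-age triggers; the durations are illustrative:

    from google.cloud.dataproc_v1beta2 import types

    lifecycle_config = types.LifecycleConfig(
        idle_delete_ttl=types.Duration(seconds=600),    # "10m", the minimum
        auto_delete_ttl=types.Duration(seconds=86400),  # "1d" maximum age
    )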

class google.cloud.dataproc_v1beta2.types.ListAutoscalingPoliciesRequest#

A request to list autoscaling policies in a project.

parent#

Required. The “resource name” of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

page_size#

Optional. The maximum number of results to return in each response.

page_token#

Optional. The page token, returned by a previous call, to request the next page of results.

class google.cloud.dataproc_v1beta2.types.ListAutoscalingPoliciesResponse#

A response to a request to list autoscaling policies in a project.

policies#

Output only. Autoscaling policies list.

next_page_token#

Output only. This token is included in the response if there are more results to fetch.

class google.cloud.dataproc_v1beta2.types.ListClustersRequest#

A request to list the clusters in a project.

project_id#

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

filter#

Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax: field = value [AND [field = value]] … where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, or UPDATING. ACTIVE contains the CREATING, UPDATING, and RUNNING states. INACTIVE contains the DELETING and ERROR states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *

page_size#

Optional. The standard List page size.

page_token#

Optional. The standard List page token.

filter

Field google.cloud.dataproc.v1beta2.ListClustersRequest.filter

page_size

Field google.cloud.dataproc.v1beta2.ListClustersRequest.page_size

page_token

Field google.cloud.dataproc.v1beta2.ListClustersRequest.page_token

project_id

Field google.cloud.dataproc.v1beta2.ListClustersRequest.project_id

region

Field google.cloud.dataproc.v1beta2.ListClustersRequest.region
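The filter syntax above is easiest to see in use. A hedged sketch, assuming the older GAPIC method surface in which the filter is passed as the filter_ keyword; the project id and label values are hypothetical:

    from google.cloud import dataproc_v1beta2

    client = dataproc_v1beta2.ClusterControllerClient()

    # Lists ACTIVE clusters labeled env=staging; the iterator pages
    # through results using page_token under the hood.
    for cluster in client.list_clusters(
        "example-project",
        "us-central1",
        filter_="status.state = ACTIVE AND labels.env = staging",
    ):
        print(cluster.cluster_name)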

class google.cloud.dataproc_v1beta2.types.ListClustersResponse#

The list of all clusters in a project.

clusters#

Output only. The clusters in the project.

next_page_token#

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest.

clusters

Field google.cloud.dataproc.v1beta2.ListClustersResponse.clusters

next_page_token

Field google.cloud.dataproc.v1beta2.ListClustersResponse.next_page_token

class google.cloud.dataproc_v1beta2.types.ListJobsRequest#

A request to list jobs in a project.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

page_size#

Optional. The number of results to return in each response.

page_token#

Optional. The page token, returned by a previous call, to request the next page of results.

cluster_name#

Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster.

job_state_matcher#

Optional. Specifies enumerated categories of jobs to list. (default = match ALL jobs). If filter is provided, jobStateMatcher will be ignored.

filter#

Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax: [field = value] AND [field [= value]] … where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND labels.env = staging AND labels.starred = *

cluster_name

Field google.cloud.dataproc.v1beta2.ListJobsRequest.cluster_name

filter

Field google.cloud.dataproc.v1beta2.ListJobsRequest.filter

job_state_matcher

Field google.cloud.dataproc.v1beta2.ListJobsRequest.job_state_matcher

page_size

Field google.cloud.dataproc.v1beta2.ListJobsRequest.page_size

page_token

Field google.cloud.dataproc.v1beta2.ListJobsRequest.page_token

project_id

Field google.cloud.dataproc.v1beta2.ListJobsRequest.project_id

region

Field google.cloud.dataproc.v1beta2.ListJobsRequest.region
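A companion sketch for job listing, again assuming the older GAPIC surface (cluster_name and filter_ keywords) and hypothetical identifiers; note that job_state_matcher is ignored when a filter is given:

    from google.cloud import dataproc_v1beta2

    client = dataproc_v1beta2.JobControllerClient()

    # Only jobs submitted to "example-cluster" that match the filter.
    for job in client.list_jobs(
        "example-project",
        "us-central1",
        cluster_name="example-cluster",
        filter_="status.state = ACTIVE AND labels.starred = *",
    ):
        print(job.reference.job_id)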

class google.cloud.dataproc_v1beta2.types.ListJobsResponse#

A list of jobs in a project.

jobs#

Output only. Jobs list.

next_page_token#

Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest.

jobs

Field google.cloud.dataproc.v1beta2.ListJobsResponse.jobs

next_page_token

Field google.cloud.dataproc.v1beta2.ListJobsResponse.next_page_token

class google.cloud.dataproc_v1beta2.types.ListOperationsRequest#
filter#

Field google.longrunning.ListOperationsRequest.filter

name#

Field google.longrunning.ListOperationsRequest.name

page_size#

Field google.longrunning.ListOperationsRequest.page_size

page_token#

Field google.longrunning.ListOperationsRequest.page_token

class google.cloud.dataproc_v1beta2.types.ListOperationsResponse#
next_page_token#

Field google.longrunning.ListOperationsResponse.next_page_token

operations#

Field google.longrunning.ListOperationsResponse.operations

class google.cloud.dataproc_v1beta2.types.ListWorkflowTemplatesRequest#

A request to list workflow templates in a project.

parent#

Required. The “resource name” of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}

page_size#

Optional. The maximum number of results to return in each response.

page_token#

Optional. The page token, returned by a previous call, to request the next page of results.

page_size

Field google.cloud.dataproc.v1beta2.ListWorkflowTemplatesRequest.page_size

page_token

Field google.cloud.dataproc.v1beta2.ListWorkflowTemplatesRequest.page_token

parent

Field google.cloud.dataproc.v1beta2.ListWorkflowTemplatesRequest.parent

class google.cloud.dataproc_v1beta2.types.ListWorkflowTemplatesResponse#

A response to a request to list workflow templates in a project.

templates#

Output only. WorkflowTemplates list.

next_page_token#

Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest.

next_page_token

Field google.cloud.dataproc.v1beta2.ListWorkflowTemplatesResponse.next_page_token

templates

Field google.cloud.dataproc.v1beta2.ListWorkflowTemplatesResponse.templates

class google.cloud.dataproc_v1beta2.types.LoggingConfig#

The runtime logging config of the job.

driver_log_levels#

The per-package log levels for the driver. This may include the “root” package name to configure the root logger. Examples: ‘com.google = FATAL’, ‘root = INFO’, ‘org.apache = DEBUG’

class DriverLogLevelsEntry#
key#

Field google.cloud.dataproc.v1beta2.LoggingConfig.DriverLogLevelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.LoggingConfig.DriverLogLevelsEntry.value

driver_log_levels

Field google.cloud.dataproc.v1beta2.LoggingConfig.driver_log_levels
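A construction sketch mirroring the examples above, assuming the generated enum wrappers at dataproc_v1beta2.enums:

    from google.cloud import dataproc_v1beta2

    Level = dataproc_v1beta2.enums.LoggingConfig.Level

    # Maps package names to levels; "root" configures the root logger.
    logging_config = dataproc_v1beta2.types.LoggingConfig(
        driver_log_levels={
            "root": Level.INFO,
            "org.apache": Level.DEBUG,
            "com.google": Level.FATAL,
        }
    )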

class google.cloud.dataproc_v1beta2.types.ManagedCluster#

Cluster that is managed by the workflow.

cluster_name#

Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.

config#

Required. The cluster configuration.

labels#

Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long. Label values must be between 1 and 63 characters long. No more than 32 labels can be associated with a given cluster.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.ManagedCluster.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.ManagedCluster.LabelsEntry.value

cluster_name

Field google.cloud.dataproc.v1beta2.ManagedCluster.cluster_name

config

Field google.cloud.dataproc.v1beta2.ManagedCluster.config

labels

Field google.cloud.dataproc.v1beta2.ManagedCluster.labels

class google.cloud.dataproc_v1beta2.types.ManagedGroupConfig#

Specifies the resources used to actively manage an instance group.

instance_template_name#

Output only. The name of the Instance Template used for the Managed Instance Group.

instance_group_manager_name#

Output only. The name of the Instance Group Manager for this group.

instance_group_manager_name

Field google.cloud.dataproc.v1beta2.ManagedGroupConfig.instance_group_manager_name

instance_template_name

Field google.cloud.dataproc.v1beta2.ManagedGroupConfig.instance_template_name

class google.cloud.dataproc_v1beta2.types.NodeInitializationAction#

Specifies an executable to run on a fully configured node and a timeout period for executable completion.

executable_file#

Required. Cloud Storage URI of executable file.

execution_timeout#

Optional. Amount of time the executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed by the end of the timeout period.

executable_file

Field google.cloud.dataproc.v1beta2.NodeInitializationAction.executable_file

execution_timeout

Field google.cloud.dataproc.v1beta2.NodeInitializationAction.execution_timeout
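A construction sketch with a hypothetical Cloud Storage URI, using google.protobuf Duration for the timeout:

    from google.cloud import dataproc_v1beta2
    from google.protobuf import duration_pb2

    # If startup.sh has not finished within execution_timeout,
    # cluster creation fails with an explanatory error.
    init_action = dataproc_v1beta2.types.NodeInitializationAction(
        executable_file="gs://example-bucket/startup.sh",
        execution_timeout=duration_pb2.Duration(seconds=10 * 60),  # the 10m default
    )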

class google.cloud.dataproc_v1beta2.types.Operation#
deserialize()#

Creates a new instance of this message from the given serialized data.

done#

Field google.longrunning.Operation.done

error#

Field google.longrunning.Operation.error

metadata#

Field google.longrunning.Operation.metadata

name#

Field google.longrunning.Operation.name

response#

Field google.longrunning.Operation.response

class google.cloud.dataproc_v1beta2.types.OperationInfo#
metadata_type#

Field google.longrunning.OperationInfo.metadata_type

response_type#

Field google.longrunning.OperationInfo.response_type

class google.cloud.dataproc_v1beta2.types.OrderedJob#

A job executed by the workflow.

step_id#

Required. The step id. The id must be unique among all jobs within the template. The step id is used as the prefix for the job id, as the job’s goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

job_type#

Required. The job definition.

hadoop_job#

Job is a Hadoop job.

spark_job#

Job is a Spark job.

pyspark_job#

Job is a Pyspark job.

hive_job#

Job is a Hive job.

pig_job#

Job is a Pig job.

spark_sql_job#

Job is a SparkSql job.

labels#

Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long. Label values must be between 1 and 63 characters long. No more than 32 labels can be associated with a given job.

scheduling#

Optional. Job scheduling configuration.

prerequisite_step_ids#

Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.OrderedJob.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.OrderedJob.LabelsEntry.value

hadoop_job

Field google.cloud.dataproc.v1beta2.OrderedJob.hadoop_job

hive_job

Field google.cloud.dataproc.v1beta2.OrderedJob.hive_job

labels

Field google.cloud.dataproc.v1beta2.OrderedJob.labels

pig_job

Field google.cloud.dataproc.v1beta2.OrderedJob.pig_job

prerequisite_step_ids

Field google.cloud.dataproc.v1beta2.OrderedJob.prerequisite_step_ids

pyspark_job

Field google.cloud.dataproc.v1beta2.OrderedJob.pyspark_job

scheduling

Field google.cloud.dataproc.v1beta2.OrderedJob.scheduling

spark_job

Field google.cloud.dataproc.v1beta2.OrderedJob.spark_job

spark_sql_job

Field google.cloud.dataproc.v1beta2.OrderedJob.spark_sql_job

step_id

Field google.cloud.dataproc.v1beta2.OrderedJob.step_id
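A sketch of two workflow steps in which “analysis” waits on “prep”; the step ids and URIs are hypothetical:

    from google.cloud import dataproc_v1beta2

    prep = dataproc_v1beta2.types.OrderedJob(
        step_id="prep",
        hadoop_job=dataproc_v1beta2.types.HadoopJob(
            main_jar_file_uri="gs://example-bucket/prep.jar"
        ),
    )
    analysis = dataproc_v1beta2.types.OrderedJob(
        step_id="analysis",
        prerequisite_step_ids=["prep"],  # runs only after "prep" completes
        pyspark_job=dataproc_v1beta2.types.PySparkJob(
            main_python_file_uri="gs://example-bucket/analysis.py"
        ),
    )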

class google.cloud.dataproc_v1beta2.types.ParameterValidation#

Configuration for parameter validation.

validation_type#

Required. The type of validation to be performed.

regex#

Validation based on regular expressions.

values#

Validation based on a list of allowed values.

regex

Field google.cloud.dataproc.v1beta2.ParameterValidation.regex

values

Field google.cloud.dataproc.v1beta2.ParameterValidation.values

class google.cloud.dataproc_v1beta2.types.PigJob#

A Cloud Dataproc job for running Apache Pig queries on YARN.

queries#

Required. The sequence of Pig queries to execute, specified as an HCFS file URI or a list of queries.

query_file_uri#

The HCFS URI of the script that contains the Pig queries.

query_list#

A list of queries.

continue_on_failure#

Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.

script_variables#

Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).

properties#

Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.

jar_file_uris#

Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.PigJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.PigJob.PropertiesEntry.value

class ScriptVariablesEntry#
key#

Field google.cloud.dataproc.v1beta2.PigJob.ScriptVariablesEntry.key

value#

Field google.cloud.dataproc.v1beta2.PigJob.ScriptVariablesEntry.value

continue_on_failure

Field google.cloud.dataproc.v1beta2.PigJob.continue_on_failure

jar_file_uris

Field google.cloud.dataproc.v1beta2.PigJob.jar_file_uris

logging_config

Field google.cloud.dataproc.v1beta2.PigJob.logging_config

properties

Field google.cloud.dataproc.v1beta2.PigJob.properties

query_file_uri

Field google.cloud.dataproc.v1beta2.PigJob.query_file_uri

query_list

Field google.cloud.dataproc.v1beta2.PigJob.query_list

script_variables

Field google.cloud.dataproc.v1beta2.PigJob.script_variables

class google.cloud.dataproc_v1beta2.types.PySparkJob#

A Cloud Dataproc job for running Apache PySpark applications on YARN.

main_python_file_uri#

Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.

args#

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

python_file_uris#

Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

jar_file_uris#

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.

file_uris#

Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.

archive_uris#

Optional. HCFS URIs of archives to be extracted into the working directory of Python drivers and distributed tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

properties#

Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.PySparkJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.PySparkJob.PropertiesEntry.value

archive_uris

Field google.cloud.dataproc.v1beta2.PySparkJob.archive_uris

args

Field google.cloud.dataproc.v1beta2.PySparkJob.args

file_uris

Field google.cloud.dataproc.v1beta2.PySparkJob.file_uris

jar_file_uris

Field google.cloud.dataproc.v1beta2.PySparkJob.jar_file_uris

logging_config

Field google.cloud.dataproc.v1beta2.PySparkJob.logging_config

main_python_file_uri

Field google.cloud.dataproc.v1beta2.PySparkJob.main_python_file_uri

properties

Field google.cloud.dataproc.v1beta2.PySparkJob.properties

python_file_uris

Field google.cloud.dataproc.v1beta2.PySparkJob.python_file_uris

class google.cloud.dataproc_v1beta2.types.QueryList#

A list of queries to run on a cluster.

queries#

Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:

    "hiveJob": {
      "queryList": {
        "queries": [
          "query1",
          "query2",
          "query3;query4"
        ]
      }
    }

queries

Field google.cloud.dataproc.v1beta2.QueryList.queries
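The Python equivalent of that snippet, built with the generated types:

    from google.cloud import dataproc_v1beta2

    # Inline queries instead of a query_file_uri; the third string
    # carries two semicolon-separated queries.
    hive_job = dataproc_v1beta2.types.HiveJob(
        query_list=dataproc_v1beta2.types.QueryList(
            queries=["query1", "query2", "query3;query4"]
        )
    )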

class google.cloud.dataproc_v1beta2.types.RegexValidation#

Validation based on regular expressions.

regexes#

Required. RE2 regular expressions used to validate the parameter’s value. The value must match the regex in its entirety (substring matches are not sufficient).

regexes

Field google.cloud.dataproc.v1beta2.RegexValidation.regexes

class google.cloud.dataproc_v1beta2.types.ReservationAffinity#

Reservation Affinity for consuming Zonal reservation.

consume_reservation_type#

Optional. Type of reservation to consume.

key#

Optional. Corresponds to the label key of reservation resource.

values#

Optional. Corresponds to the label values of reservation resource.

consume_reservation_type

Field google.cloud.dataproc.v1beta2.ReservationAffinity.consume_reservation_type

key

Field google.cloud.dataproc.v1beta2.ReservationAffinity.key

values

Field google.cloud.dataproc.v1beta2.ReservationAffinity.values

class google.cloud.dataproc_v1beta2.types.SecurityConfig#

Security related configuration, including encryption, Kerberos, etc.

kerberos_config#

Kerberos related configuration.

kerberos_config

Field google.cloud.dataproc.v1beta2.SecurityConfig.kerberos_config

class google.cloud.dataproc_v1beta2.types.SoftwareConfig#

Specifies the selection and config of software inside the cluster.

image_version#

Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as “1.2” (including a subminor version, such as “1.2.29”), or the “preview” version. If unspecified, it defaults to the latest Debian version.

properties#

Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:

- capacity-scheduler: capacity-scheduler.xml
- core: core-site.xml
- distcp: distcp-default.xml
- hdfs: hdfs-site.xml
- hive: hive-site.xml
- mapred: mapred-site.xml
- pig: pig.properties
- spark: spark-defaults.conf
- yarn: yarn-site.xml

For more information, see Cluster properties.

optional_components#

The set of optional components to activate on the cluster.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.SoftwareConfig.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.SoftwareConfig.PropertiesEntry.value

image_version

Field google.cloud.dataproc.v1beta2.SoftwareConfig.image_version

optional_components

Field google.cloud.dataproc.v1beta2.SoftwareConfig.optional_components

properties

Field google.cloud.dataproc.v1beta2.SoftwareConfig.properties
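A construction sketch; the image version and property values are illustrative:

    from google.cloud import dataproc_v1beta2

    # Keys use the prefix:property form; "core:" maps into
    # core-site.xml and "spark:" into spark-defaults.conf.
    software_config = dataproc_v1beta2.types.SoftwareConfig(
        image_version="1.4",
        properties={
            "core:hadoop.tmp.dir": "/tmp/hadoop",
            "spark:spark.executor.memory": "4g",
        },
    )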

class google.cloud.dataproc_v1beta2.types.SparkJob#

A Cloud Dataproc job for running Apache Spark applications on YARN.

driver#

Required. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris, and then specify the main class name in main_class.

main_jar_file_uri#

The HCFS URI of the jar file that contains the main class.

main_class#

The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.

args#

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

jar_file_uris#

Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

file_uris#

Optional. HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks.

archive_uris#

Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

properties#

Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.SparkJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.SparkJob.PropertiesEntry.value

archive_uris

Field google.cloud.dataproc.v1beta2.SparkJob.archive_uris

args

Field google.cloud.dataproc.v1beta2.SparkJob.args

file_uris

Field google.cloud.dataproc.v1beta2.SparkJob.file_uris

jar_file_uris

Field google.cloud.dataproc.v1beta2.SparkJob.jar_file_uris

logging_config

Field google.cloud.dataproc.v1beta2.SparkJob.logging_config

main_class

Field google.cloud.dataproc.v1beta2.SparkJob.main_class

main_jar_file_uri

Field google.cloud.dataproc.v1beta2.SparkJob.main_jar_file_uri

properties

Field google.cloud.dataproc.v1beta2.SparkJob.properties
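A sketch of the driver choice described above: here the main class is named directly, so the jar containing it goes in jar_file_uris rather than main_jar_file_uri; all names and URIs are hypothetical:

    from google.cloud import dataproc_v1beta2

    spark_job = dataproc_v1beta2.types.SparkJob(
        main_class="com.example.WordCount",
        jar_file_uris=["gs://example-bucket/wordcount.jar"],
        args=["gs://example-bucket/input.txt"],
    )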

class google.cloud.dataproc_v1beta2.types.SparkRJob#

A Cloud Dataproc job for running Apache SparkR applications on YARN.

main_r_file_uri#

Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.

args#

Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

file_uris#

Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.

archive_uris#

Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

properties#

Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.SparkRJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.SparkRJob.PropertiesEntry.value

archive_uris

Field google.cloud.dataproc.v1beta2.SparkRJob.archive_uris

args

Field google.cloud.dataproc.v1beta2.SparkRJob.args

file_uris

Field google.cloud.dataproc.v1beta2.SparkRJob.file_uris

logging_config

Field google.cloud.dataproc.v1beta2.SparkRJob.logging_config

main_r_file_uri

Field google.cloud.dataproc.v1beta2.SparkRJob.main_r_file_uri

properties

Field google.cloud.dataproc.v1beta2.SparkRJob.properties

class google.cloud.dataproc_v1beta2.types.SparkSqlJob#

A Cloud Dataproc job for running Apache Spark SQL queries.

queries#

Required. The sequence of Spark SQL queries to execute, specified as either an HCFS file URI or a list of queries.

query_file_uri#

The HCFS URI of the script that contains SQL queries.

query_list#

A list of queries.

script_variables#

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

properties#

Optional. A mapping of property names to values, used to configure Spark SQL’s SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.

jar_file_uris#

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

logging_config#

Optional. The runtime log config for job execution.

class PropertiesEntry#
key#

Field google.cloud.dataproc.v1beta2.SparkSqlJob.PropertiesEntry.key

value#

Field google.cloud.dataproc.v1beta2.SparkSqlJob.PropertiesEntry.value

class ScriptVariablesEntry#
key#

Field google.cloud.dataproc.v1beta2.SparkSqlJob.ScriptVariablesEntry.key

value#

Field google.cloud.dataproc.v1beta2.SparkSqlJob.ScriptVariablesEntry.value

jar_file_uris

Field google.cloud.dataproc.v1beta2.SparkSqlJob.jar_file_uris

logging_config

Field google.cloud.dataproc.v1beta2.SparkSqlJob.logging_config

properties

Field google.cloud.dataproc.v1beta2.SparkSqlJob.properties

query_file_uri

Field google.cloud.dataproc.v1beta2.SparkSqlJob.query_file_uri

query_list

Field google.cloud.dataproc.v1beta2.SparkSqlJob.query_list

script_variables

Field google.cloud.dataproc.v1beta2.SparkSqlJob.script_variables

class google.cloud.dataproc_v1beta2.types.Status#
code#

Field google.rpc.Status.code

details#

Field google.rpc.Status.details

message#

Field google.rpc.Status.message

class google.cloud.dataproc_v1beta2.types.SubmitJobRequest#

A request to submit a job.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

job#

Required. The job resource.

request_id#

Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest requests with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

job

Field google.cloud.dataproc.v1beta2.SubmitJobRequest.job

project_id

Field google.cloud.dataproc.v1beta2.SubmitJobRequest.project_id

region

Field google.cloud.dataproc.v1beta2.SubmitJobRequest.region

request_id

Field google.cloud.dataproc.v1beta2.SubmitJobRequest.request_id
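An end-to-end sketch, assuming the older GAPIC JobControllerClient surface with a flattened request_id keyword; the project, cluster, and file names are hypothetical:

    import uuid

    from google.cloud import dataproc_v1beta2

    client = dataproc_v1beta2.JobControllerClient()

    job = dataproc_v1beta2.types.Job(
        placement=dataproc_v1beta2.types.JobPlacement(
            cluster_name="example-cluster"
        ),
        pyspark_job=dataproc_v1beta2.types.PySparkJob(
            main_python_file_uri="gs://example-bucket/analysis.py"
        ),
    )

    # A UUID request_id (as recommended above) makes the call idempotent:
    # a retry with the same id returns the Job created by the first attempt.
    submitted = client.submit_job(
        "example-project", "us-central1", job, request_id=str(uuid.uuid4())
    )
    print(submitted.reference.job_id)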

class google.cloud.dataproc_v1beta2.types.TemplateParameter#

A configurable parameter that replaces one or more fields in the template. Parameterizable fields:

- Labels
- File uris
- Job properties
- Job arguments
- Script variables
- Main class (in HadoopJob and SparkJob)
- Zone (in ClusterSelector)

name#

Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

fields#

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter’s list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template’s cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax:

- Values in maps can be referenced by key:
  - labels['key']
  - placement.clusterSelector.clusterLabels['key']
  - placement.managedCluster.labels['key']
  - jobs['step-id'].labels['key']
- Jobs in the jobs list can be referenced by step-id:
  - jobs['step-id'].hadoopJob.mainJarFileUri
  - jobs['step-id'].hiveJob.queryFileUri
  - jobs['step-id'].pySparkJob.mainPythonFileUri
  - jobs['step-id'].hadoopJob.jarFileUris[0]
  - jobs['step-id'].hadoopJob.archiveUris[0]
  - jobs['step-id'].hadoopJob.fileUris[0]
  - jobs['step-id'].pySparkJob.pythonFileUris[0]
- Items in repeated fields can be referenced by a zero-based index:
  - jobs['step-id'].sparkJob.args[0]
- Other examples:
  - jobs['step-id'].hadoopJob.properties['key']
  - jobs['step-id'].hadoopJob.args[0]
  - jobs['step-id'].hiveJob.scriptVariables['key']
  - jobs['step-id'].hadoopJob.mainJarFileUri
  - placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

- placement.clusterSelector.clusterLabels
- jobs['step-id'].sparkJob.args

description#

Optional. Brief description of the parameter. Must not exceed 1024 characters.

validation#

Optional. Validation rules to be applied to this parameter’s value.

description

Field google.cloud.dataproc.v1beta2.TemplateParameter.description

fields

Field google.cloud.dataproc.v1beta2.TemplateParameter.fields

name

Field google.cloud.dataproc.v1beta2.TemplateParameter.name

validation

Field google.cloud.dataproc.v1beta2.TemplateParameter.validation
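A construction sketch: a parameter that substitutes the zone of the template’s cluster selector and is validated against a regex; the names follow the rules above but are otherwise hypothetical:

    from google.cloud import dataproc_v1beta2

    zone_param = dataproc_v1beta2.types.TemplateParameter(
        name="ZONE",
        fields=["placement.clusterSelector.zone"],
        description="Zone in which the workflow runs.",
        validation=dataproc_v1beta2.types.ParameterValidation(
            regex=dataproc_v1beta2.types.RegexValidation(
                # Must match the whole value, not a substring.
                regexes=[r"us-(central1|east1)-[a-f]"]
            )
        ),
    )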

class google.cloud.dataproc_v1beta2.types.Timestamp#
nanos#

Field google.protobuf.Timestamp.nanos

seconds#

Field google.protobuf.Timestamp.seconds

class google.cloud.dataproc_v1beta2.types.UpdateAutoscalingPolicyRequest#

A request to update an autoscaling policy.

policy#

Required. The updated autoscaling policy.

policy

Field google.cloud.dataproc.v1beta2.UpdateAutoscalingPolicyRequest.policy

class google.cloud.dataproc_v1beta2.types.UpdateClusterRequest#

A request to update a cluster.

project_id#

Required. The ID of the Google Cloud Platform project the cluster belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

cluster_name#

Required. The cluster name.

cluster#

Required. The changes to the cluster.

graceful_decommission_timeout#

Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. The timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). The default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. Only supported on Dataproc image versions 1.2 and higher.

update_mask#

Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows:

    {
      "config": {
        "workerConfig": {
          "numInstances": "5"
        }
      }
    }

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows:

    {
      "config": {
        "secondaryWorkerConfig": {
          "numInstances": "5"
        }
      }
    }

Note: currently only the following fields can be updated:

- labels: Updates labels
- config.worker_config.num_instances: Resize primary worker group
- config.secondary_worker_config.num_instances: Resize secondary worker group
- config.lifecycle_config.auto_delete_ttl: Reset MAX TTL duration
- config.lifecycle_config.auto_delete_time: Update MAX TTL deletion timestamp
- config.lifecycle_config.idle_delete_ttl: Update Idle TTL duration
- config.autoscaling_config.policy_uri: Use, stop using, or change autoscaling policies

request_id#

Optional. A unique id used to identify the request. If the server receives two UpdateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

cluster

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.cluster

cluster_name

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.cluster_name

graceful_decommission_timeout

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.graceful_decommission_timeout

project_id

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.project_id

region

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.region

request_id

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.request_id

update_mask

Field google.cloud.dataproc.v1beta2.UpdateClusterRequest.update_mask
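A sketch of the first update_mask example above, resizing the primary worker group to 5; it assumes the older GAPIC update_cluster signature and hypothetical identifiers:

    from google.cloud import dataproc_v1beta2
    from google.protobuf import field_mask_pb2

    client = dataproc_v1beta2.ClusterControllerClient()

    # Only the masked field is read from this sparse Cluster message.
    cluster = dataproc_v1beta2.types.Cluster(
        config=dataproc_v1beta2.types.ClusterConfig(
            worker_config=dataproc_v1beta2.types.InstanceGroupConfig(
                num_instances=5
            )
        )
    )
    operation = client.update_cluster(
        "example-project",
        "us-central1",
        "example-cluster",
        cluster,
        field_mask_pb2.FieldMask(paths=["config.worker_config.num_instances"]),
    )
    operation.result()  # block until the long-running update completes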

class google.cloud.dataproc_v1beta2.types.UpdateJobRequest#

A request to update a job.

project_id#

Required. The ID of the Google Cloud Platform project that the job belongs to.

region#

Required. The Cloud Dataproc region in which to handle the request.

job_id#

Required. The job ID.

job#

Required. The changes to the job.

update_mask#

Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job the update_mask parameter would be specified as labels, and the PATCH request body would specify the new value. Note: Currently, labels is the only field that can be updated.

job

Field google.cloud.dataproc.v1beta2.UpdateJobRequest.job

job_id

Field google.cloud.dataproc.v1beta2.UpdateJobRequest.job_id

project_id

Field google.cloud.dataproc.v1beta2.UpdateJobRequest.project_id

region

Field google.cloud.dataproc.v1beta2.UpdateJobRequest.region

update_mask

Field google.cloud.dataproc.v1beta2.UpdateJobRequest.update_mask
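Since labels is currently the only updatable field, the mask names that single path. A hedged sketch with hypothetical identifiers, assuming the older GAPIC update_job signature:

    from google.cloud import dataproc_v1beta2
    from google.protobuf import field_mask_pb2

    client = dataproc_v1beta2.JobControllerClient()

    updated = client.update_job(
        "example-project",
        "us-central1",
        "example-job-id",
        dataproc_v1beta2.types.Job(labels={"env": "staging"}),
        field_mask_pb2.FieldMask(paths=["labels"]),
    )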

class google.cloud.dataproc_v1beta2.types.UpdateWorkflowTemplateRequest#

A request to update a workflow template.

template#

Required. The updated workflow template. The template.version field must match the current version.

template

Field google.cloud.dataproc.v1beta2.UpdateWorkflowTemplateRequest.template

class google.cloud.dataproc_v1beta2.types.ValueValidation#

Validation based on a list of allowed values.

values#

Required. List of allowed values for the parameter.

values

Field google.cloud.dataproc.v1beta2.ValueValidation.values

class google.cloud.dataproc_v1beta2.types.WorkflowGraph#

The workflow graph.

nodes#

Output only. The workflow nodes.

nodes

Field google.cloud.dataproc.v1beta2.WorkflowGraph.nodes

class google.cloud.dataproc_v1beta2.types.WorkflowMetadata#

A Cloud Dataproc workflow template resource.

template#

Output only. The “resource name” of the template.

version#

Output only. The version of the template at the time of workflow instantiation.

create_cluster#

Output only. The create cluster operation metadata.

graph#

Output only. The workflow graph.

delete_cluster#

Output only. The delete cluster operation metadata.

state#

Output only. The workflow state.

cluster_name#

Output only. The name of the target cluster.

parameters#

Map from parameter names to values that were used for those parameters.

start_time#

Output only. Workflow start time.

end_time#

Output only. Workflow end time.

cluster_uuid#

Output only. The UUID of the target cluster.

class ParametersEntry#
key#

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.ParametersEntry.key

value#

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.ParametersEntry.value

cluster_name

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.cluster_name

cluster_uuid

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.cluster_uuid

create_cluster

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.create_cluster

delete_cluster

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.delete_cluster

end_time

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.end_time

graph

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.graph

parameters

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.parameters

start_time

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.start_time

state

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.state

template

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.template

version

Field google.cloud.dataproc.v1beta2.WorkflowMetadata.version

class google.cloud.dataproc_v1beta2.types.WorkflowNode#

The workflow node.

step_id#

Output only. The name of the node.

prerequisite_step_ids#

Output only. Node’s prerequisite nodes.

job_id#

Output only. The job id; populated after the node enters RUNNING state.

state#

Output only. The node state.

error#

Output only. The error detail.

error

Field google.cloud.dataproc.v1beta2.WorkflowNode.error

job_id

Field google.cloud.dataproc.v1beta2.WorkflowNode.job_id

prerequisite_step_ids

Field google.cloud.dataproc.v1beta2.WorkflowNode.prerequisite_step_ids

state

Field google.cloud.dataproc.v1beta2.WorkflowNode.state

step_id

Field google.cloud.dataproc.v1beta2.WorkflowNode.step_id

class google.cloud.dataproc_v1beta2.types.WorkflowTemplate#

A Cloud Dataproc workflow template resource.

id#

Required. The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

name#

Output only. The “resource name” of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}

version#

Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.

create_time#

Output only. The time template was created.

update_time#

Output only. The time template was last updated.

labels#

Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a template.

placement#

Required. WorkflowTemplate scheduling information.

jobs#

Required. The Directed Acyclic Graph of Jobs to submit.

parameters#

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.

class LabelsEntry#
key#

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.LabelsEntry.key

value#

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.LabelsEntry.value

create_time

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.create_time

id

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.id

jobs

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.jobs

labels

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.labels

name

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.name

parameters

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.parameters

placement

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.placement

update_time

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.update_time

version

Field google.cloud.dataproc.v1beta2.WorkflowTemplate.version

class google.cloud.dataproc_v1beta2.types.WorkflowTemplatePlacement#

Specifies workflow execution target.

Either managed_cluster or cluster_selector is required.

placement#

Required. Specifies where workflow executes; either on a managed cluster or an existing cluster chosen by labels.

managed_cluster#

Optional. A cluster that is managed by the workflow.

cluster_selector#

Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.

cluster_selector

Field google.cloud.dataproc.v1beta2.WorkflowTemplatePlacement.cluster_selector

managed_cluster

Field google.cloud.dataproc.v1beta2.WorkflowTemplatePlacement.managed_cluster
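A sketch tying the pieces together: a template placed on a workflow-managed cluster (exactly one of managed_cluster or cluster_selector may be set); all names are hypothetical:

    from google.cloud import dataproc_v1beta2

    template = dataproc_v1beta2.types.WorkflowTemplate(
        id="example-template",
        placement=dataproc_v1beta2.types.WorkflowTemplatePlacement(
            managed_cluster=dataproc_v1beta2.types.ManagedCluster(
                cluster_name="workflow-cluster",  # prefix; a random suffix is appended
                config=dataproc_v1beta2.types.ClusterConfig(),
            )
        ),
        jobs=[
            dataproc_v1beta2.types.OrderedJob(
                step_id="analysis",
                pyspark_job=dataproc_v1beta2.types.PySparkJob(
                    main_python_file_uri="gs://example-bucket/analysis.py"
                ),
            )
        ],
    )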

class google.cloud.dataproc_v1beta2.types.YarnApplication#

A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.

Beta Feature: This report is available for testing purposes only. It may be changed before final release.

name#

Required. The application name.

state#

Required. The application state.

progress#

Required. The numerical progress of the application, from 1 to 100.

tracking_url#

Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.

name

Field google.cloud.dataproc.v1beta2.YarnApplication.name

progress

Field google.cloud.dataproc.v1beta2.YarnApplication.progress

state

Field google.cloud.dataproc.v1beta2.YarnApplication.state

tracking_url

Field google.cloud.dataproc.v1beta2.YarnApplication.tracking_url