App Engine Python SDK
v1.6.9 rev.445
The Python runtime is available as an experimental Preview feature.
google.appengine.ext.mapreduce.model.MapreduceState Class Reference
Public Member Functions

  def kind
  def get_key_by_job_id
  def get_by_job_id
  def set_processed_counts
  def get_processed
  def __eq__
Public Member Functions inherited from google.appengine.ext.db.Model

  def __new__
  def __init__
  def key
  def put
  def delete
  def is_saved
  def has_key
  def dynamic_properties
  def instance_properties
  def parent
  def parent_key
  def to_xml
  def get
  def get_by_key_name
  def get_by_id
  def get_or_insert
  def all
  def gql
  def from_entity
  def kind
  def entity_type
  def properties
  def fields
Static Public Member Functions

  def create_new
  def new_mapreduce_id

Public Attributes

  chart_width
  chart_url
Static Public Attributes

  string RESULT_SUCCESS = "success"
  string RESULT_FAILED = "failed"
  string RESULT_ABORTED = "aborted"
  mapreduce_spec = json_util.JsonProperty(MapreduceSpec, indexed=False)
  active = db.BooleanProperty(default=True, indexed=False)
  last_poll_time = db.DateTimeProperty(required=True)
  counters_map
  app_id = db.StringProperty(required=False, indexed=True)
  writer_state = json_util.JsonProperty(dict, indexed=False)
  active_shards = db.IntegerProperty(default=0, indexed=False)
  failed_shards = db.IntegerProperty(default=0, indexed=False)
  aborted_shards = db.IntegerProperty(default=0, indexed=False)
  result_status = db.StringProperty(required=False, choices=_RESULTS)
  chart_url = db.TextProperty(default="")
  chart_width = db.IntegerProperty(default=300, indexed=False)
  sparkline_url = db.TextProperty(default="")
  start_time = db.DateTimeProperty(auto_now_add=True)
  save = put

Properties

  processed = property(get_processed)
Detailed Description

Holds accumulated state of mapreduce execution.

MapreduceState is stored in the datastore with a key name equal to the mapreduce ID. Only controller tasks can write to MapreduceState.

Properties:
  mapreduce_spec: cached deserialized MapreduceSpec instance. Read-only.
  active: whether this MR is still running.
  last_poll_time: last time the controller job polled this mapreduce.
  counters_map: shard's counters map as CountersMap. Mirrors counters_map_json.
  chart_url: last computed mapreduce status chart url. This chart displays the progress of all the shards the best way it can.
  sparkline_url: last computed mapreduce status chart url in small format.
  result_status: if not None, the final status of the job.
  active_shards: how many shards are still processing. This starts as 0, is set by the KickOffJob handler to the actual number of input readers after input splitting, and is updated by the Controller task as shards finish.
  start_time: when the job started.
  writer_state: Json property used by the writer to store its state. This is filled when there is a single output per job. Will be deprecated; use OutputWriter.get_filenames instead.
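The fields above are easiest to read together in code. Below is a minimal sketch, not part of the SDK, that turns a MapreduceState entity into a one-line status summary using only the attributes and constants documented on this page; the summarize helper and its wording are made up for illustration.

```python
from google.appengine.ext.mapreduce import model


def summarize(state):
    # `state` is a MapreduceState entity, e.g. the result of
    # MapreduceState.get_by_job_id() documented below.
    if state.active:
        return "running since %s: %d shards active, last polled %s" % (
            state.start_time, state.active_shards, state.last_poll_time)
    if state.result_status == model.MapreduceState.RESULT_SUCCESS:
        return "succeeded, progress chart: %s" % state.chart_url
    return "ended with status %r (failed shards: %d, aborted shards: %d)" % (
        state.result_status, state.failed_shards, state.aborted_shards)
```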
Member Function Documentation

def google.appengine.ext.mapreduce.model.MapreduceState.create_new ( cls, mapreduce_id, gettime )   [static]

Create a new MapreduceState.

Args:
  mapreduce_id: Mapreduce id as string.
  gettime: Used for testing.
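As a rough usage sketch (not from the SDK): the call below assumes gettime has a default and can be omitted, and that the caller is responsible for persisting the new entity; both points are inferred from the Args section above rather than stated by it.

```python
from google.appengine.ext.mapreduce import model

# Sketch only. In normal operation the mapreduce framework creates this
# entity itself; this just illustrates the call shape implied above.
job_id = model.MapreduceState.new_mapreduce_id()  # assumed to take no arguments
state = model.MapreduceState.create_new(job_id)   # gettime omitted (assumed default)

# put() is inherited from db.Model; assumption: create_new does not write the
# entity to the datastore by itself, so the caller persists it explicitly.
state.put()
```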
def google.appengine.ext.mapreduce.model.MapreduceState.get_by_job_id ( cls, mapreduce_id )

Retrieves the instance of state for a Job.

Args:
  mapreduce_id: The mapreduce job to retrieve.

Returns:
  instance of MapreduceState for passed id.
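For illustration, a small sketch (not part of the SDK) that uses get_by_job_id to report on a job; it assumes the method returns None when no state exists for the given id, and the describe_job helper is hypothetical.

```python
from google.appengine.ext.mapreduce import model


def describe_job(mapreduce_id):
    # Assumption: get_by_job_id returns None when no state is stored for the id.
    state = model.MapreduceState.get_by_job_id(mapreduce_id)
    if state is None:
        return "unknown job"
    if state.active:
        return "running: %d shards still active" % state.active_shards
    return "finished with status: %s" % state.result_status
```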
def google.appengine.ext.mapreduce.model.MapreduceState.get_key_by_job_id ( cls, mapreduce_id )

Retrieves the Key for a Job.

Args:
  mapreduce_id: The job to retrieve.

Returns:
  Datastore Key that can be used to fetch the MapreduceState.
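One place a bare Key is handy is batch fetching. A hedged sketch, using made-up job ids and relying on db.get accepting a list of keys:

```python
from google.appengine.ext import db
from google.appengine.ext.mapreduce import model

# Build the keys first, then fetch all states in a single datastore call.
job_ids = ["job-1", "job-2", "job-3"]  # hypothetical ids
keys = [model.MapreduceState.get_key_by_job_id(job_id) for job_id in job_ids]
states = db.get(keys)  # entries are None for ids with no stored state
```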
def google.appengine.ext.mapreduce.model.MapreduceState.get_processed ( self )

Number of processed entities.

Returns:
  The total number of processed entities as int.
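A short sketch (not from the SDK) showing both spellings; the job id is a placeholder, and the equivalence of processed and get_processed() follows from the property declaration listed above.

```python
import logging

from google.appengine.ext.mapreduce import model

# Sketch: log progress for a job. The job id is a made-up placeholder.
state = model.MapreduceState.get_by_job_id("example-job-id")
if state is not None:
    logging.info("entities processed so far: %d", state.get_processed())
    # `processed` is declared as property(get_processed), so this is the same value.
    logging.info("same value via the property: %d", state.processed)
```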
def google.appengine.ext.mapreduce.model.MapreduceState.kind ( cls )

Returns entity kind.
def google.appengine.ext.mapreduce.model.MapreduceState.new_mapreduce_id ( )   [static]

Generate new mapreduce id.
def google.appengine.ext.mapreduce.model.MapreduceState.set_processed_counts ( self, shards_processed )

Updates a chart url to display processed count for each shard.

Args:
  shards_processed: list of integers with the number of processed entities in each shard.
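A hedged sketch of the call, using made-up counts; in the real flow the Controller task gathers per-shard counts before calling this, and put() on the state is assumed to persist the recomputed chart fields.

```python
from google.appengine.ext.mapreduce import model

# Sketch: recompute the status chart for a job from per-shard counts.
state = model.MapreduceState.get_by_job_id("example-job-id")  # hypothetical id
if state is not None:
    shards_processed = [1200, 980, 1105]  # one entry per shard (made-up values)
    state.set_processed_counts(shards_processed)
    state.put()  # assumption: persists the updated chart_url / chart_width
```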