tfp.experimental.auto_batching.stackless.execute

Executes a given program in stackless auto-batching mode.

Compare auto_batching.virtual_machine.execute, which executes the program in full auto-batching mode.

Advantages:

Disadvantages:

Algorithm:

This is a reimplementation in TensorFlow Eager of [1].

[1] James Bradbury and Chunli Fu, "Automatic Batching as a Compiler Pass in PyTorch", Workshop on Systems for ML and Open Source Software at NeurIPS 2018.

Args:

program: An instructions.Program to execute.
backend: Object implementing the required backend operations.
block_code_cache: Dict used to enable caching of defun+XLA across multiple calls to execute. If None is provided, a new dict is used per call to execute, which can still achieve caching across depths of the call stack. This caching has no real effect unless calls to backend.wrap_straightline_callable have some effect.
*inputs: Input arrays, each of shape [batch_size, e1, ..., eE]. The batch size must be the same for all inputs. The other dimensions must agree with the declared shapes of the variables they will be stored in, but need not in general be the same as one another.
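The input convention above can be illustrated with a small shape check. This is a hypothetical helper, not part of the library: it only encodes the documented rule that every input shares the same leading batch_size while the trailing dimensions (e1, ..., eE) may differ per input.

```python
def check_input_shapes(*shapes):
    """Return the common batch size of the given input shapes.

    Each shape is a tuple (batch_size, e1, ..., eE). Raises ValueError
    if the leading (batch) dimensions disagree, mirroring the documented
    requirement that the batch size be the same for all inputs.
    """
    if not shapes:
        raise ValueError("At least one input is required.")
    batch_sizes = {shape[0] for shape in shapes}
    if len(batch_sizes) != 1:
        raise ValueError(
            "All inputs must share a batch size; got %r" % sorted(batch_sizes))
    return batch_sizes.pop()

# Three inputs: 4 scalars, 4 length-3 vectors, 4 2x2 matrices -- all legal
# together because they share batch size 4.
print(check_input_shapes((4,), (4, 3), (4, 2, 2)))  # -> 4
```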

Returns:

results: A list of the output values. Each returned value is an array of shape [batch_size, e1, ..., eE]. The results are returned in the same order as the variables appear in program.out_vars.
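Because the results follow the order of program.out_vars, they can be paired back with their variable names. A minimal illustration with placeholder names and values (not real TFP objects):

```python
# Placeholder stand-ins for program.out_vars and the returned results;
# the only documented guarantee used here is that the two sequences align.
out_vars = ("mu", "sigma")
results = ([0.0, 1.0], [1.0, 2.0])  # one [batch_size, ...] array per variable

# Zip them into a name -> value mapping for convenient lookup.
named = dict(zip(out_vars, results))
print(named["sigma"])  # -> [1.0, 2.0]
```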