quimb.tensor.optimize#
Support for optimizing tensor networks using automatic differentiation to derive gradients for input to scipy optimizers.
Functions
constant_tn – Convert a tensor network's arrays to constants.
parse_network_to_backend – Parse a tensor network ready for optimisation: identify the variables and tag tensors so optimised values can be reinserted.
Classes
ADAM – Stateful scipy.optimize.minimize compatible implementation of ADAM.
MakeArrayFn – Class wrapper so that the array function is picklable.
NADAM – Stateful scipy.optimize.minimize compatible implementation of NADAM.
RMSPROP – Stateful scipy.optimize.minimize compatible implementation of root mean squared prop.
SGD – Stateful scipy.optimize.minimize compatible implementation of stochastic gradient descent with momentum.
TNOptimizer – Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation.
Vectorizer – Object for mapping a sequence of mixed real/complex n-dimensional arrays to a single numpy vector and back.
- class quimb.tensor.optimize.ADAM[source]#
Stateful scipy.optimize.minimize compatible implementation of ADAM - http://arxiv.org/pdf/1412.6980.pdf. Adapted from autograd/misc/optimizers.py.
- class quimb.tensor.optimize.MakeArrayFn(tn_opt, loss_fn, norm_fn, autodiff_backend)[source]#
Class wrapper so that the wrapped array function is picklable.
- class quimb.tensor.optimize.NADAM[source]#
Stateful scipy.optimize.minimize compatible implementation of NADAM - [Dozat - http://cs229.stanford.edu/proj2015/054_report.pdf]. Adapted from autograd/misc/optimizers.py.
- class quimb.tensor.optimize.RMSPROP[source]#
Stateful scipy.optimize.minimize compatible implementation of root mean squared prop: see the Adagrad paper for details. Adapted from autograd/misc/optimizers.py.
- class quimb.tensor.optimize.SGD[source]#
Stateful scipy.optimize.minimize compatible implementation of stochastic gradient descent with momentum. Adapted from autograd/misc/optimizers.py.
- class quimb.tensor.optimize.TNOptimizer(tn, loss_fn, norm_fn=None, loss_constants=None, loss_kwargs=None, tags=None, shared_tags=None, constant_tags=None, loss_target=None, optimizer='L-BFGS-B', progbar=True, bounds=None, autodiff_backend='AUTO', executor=None, **backend_opts)[source]#
Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation. If parametrized tensors are used, optimize the parameters rather than the raw arrays.
- Parameters
tn (TensorNetwork) – The core tensor network structure within which to optimize tensors.
loss_fn (callable or sequence of callable) – The function that takes tn (as well as loss_constants and loss_kwargs) and returns a single real 'loss' to be minimized. For Hamiltonians which can be represented as a sum over terms, an iterable collection of terms (e.g. a list) can be given instead. In that case each term is evaluated independently and the sum taken as loss_fn. This can reduce the total memory requirements or allow for parallelization (see executor).
norm_fn (callable, optional) – A function to call before loss_fn that prepares or 'normalizes' the raw tensor network in some way.
loss_constants (dict, optional) – Extra tensor networks, tensors, dicts/lists/tuples of arrays, or arrays which will be supplied to loss_fn but also converted to the correct backend array type.
loss_kwargs (dict, optional) – Extra options to supply to loss_fn (unlike loss_constants these are assumed to be simple options that don't need conversion).
tags (str, or sequence of str, optional) – If supplied, only optimize tensors with any of these tags.
shared_tags (str, or sequence of str, optional) – If supplied, each tag in shared_tags corresponds to a group of tensors to be optimized together.
constant_tags (str, or sequence of str, optional) – If supplied, skip optimizing tensors with any of these tags. This 'opt-out' mode is overridden if either tags or shared_tags is supplied.
loss_target (float, optional) – Stop optimizing once this loss value is reached.
optimizer (str, optional) – Which scipy.optimize.minimize optimizer to use (the 'method' kwarg of that function). In addition, quimb implements a few custom optimizers compatible with this interface that you can reference by name - {'adam', 'nadam', 'rmsprop', 'sgd'}.
executor (None or Executor, optional) – To be used with term-by-term Hamiltonians. If supplied, this executor is used to parallelize the evaluation. Otherwise each term is evaluated in sequence. It should implement the basic concurrent.futures (PEP 3148) interface.
progbar (bool, optional) – Whether to show live progress.
bounds (None or (float, float), optional) – Constrain the optimized tensor entries within this range (if the scipy optimizer supports it).
autodiff_backend ({'jax', 'autograd', 'tensorflow', 'torch'}, optional) – Which backend library to use to perform the automatic differentiation (and computation).
backend_opts – Supplied to the backend function compiler and array handler. For example jit_fn=True or device='cpu'.
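As an illustration of how these pieces fit together, here is a minimal sketch of maximizing the overlap of a random MPS with a fixed target state. The target/initial states and the loss and normalization functions below are assumptions made for the example, not part of this API:

```python
import quimb.tensor as qtn
from quimb.tensor.optimize import TNOptimizer

# example inputs (assumed for illustration): a random target MPS and an
# initial guess of the same size
target = qtn.MPS_rand_state(10, bond_dim=4)
psi0 = qtn.MPS_rand_state(10, bond_dim=4)

def norm_fn(psi):
    # called before the loss - keep the state normalized
    return psi / (psi.H @ psi) ** 0.5

def loss_fn(psi, target):
    # negative overlap with the constant target, so that minimizing
    # the loss maximizes the fidelity
    return -abs(psi.H @ target) ** 2

tnopt = TNOptimizer(
    psi0,
    loss_fn=loss_fn,
    norm_fn=norm_fn,
    loss_constants={'target': target},   # converted to backend array type
    autodiff_backend='autograd',         # or 'jax', 'torch', 'tensorflow'
    optimizer='L-BFGS-B',
)
psi_opt = tnopt.optimize(100)            # returns the optimized tensor network
```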
- get_tn_opt()[source]#
Extract the optimized tensor network. This is a three part process:
1. inject the current optimized vector into the target tensor network,
2. run it through norm_fn,
3. drop any tags used to identify variables.
- Returns
tn_opt
- Return type
TensorNetwork
- property nevals#
The number of gradient evaluations.
- optimize(n, tol=None, jac=True, hessp=False, **options)[source]#
Run the optimizer for n function evaluations, using scipy.optimize.minimize() as the driver for the vectorized computation. Supplying the gradient and hessian vector product is controlled by the jac and hessp options respectively.
- Parameters
n (int) – Notionally the maximum number of iterations for the optimizer, note that depending on the optimizer being used, this may correspond to the number of function evaluations rather than just iterations.
tol (None or float, optional) – Tolerance for convergence, note that various more specific tolerances can usually be supplied to options, depending on the optimizer being used.
jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.
hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.
options – Supplied to scipy.optimize.minimize().
- Returns
tn_opt
- Return type
TensorNetwork
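For example, continuing the sketch above (reusing the tnopt object constructed there), a longer run with an explicit tolerance might look like:

```python
# assumes tnopt from the earlier sketch: run up to 1000 evaluations with a
# tight tolerance, supplying the gradient but not the hessian-vector product
tn_opt = tnopt.optimize(1000, tol=1e-12, jac=True, hessp=False)
print(tnopt.nevals)  # number of gradient evaluations performed so far
```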
- optimize_basinhopping(n, nhop, temperature=1.0, jac=True, hessp=False, **options)[source]#
Run the optimizer using scipy.optimize.basinhopping() as the driver for the vectorized computation. This performs nhop local optimizations, each with n iterations.
- Parameters
n (int) – Number of iterations per local optimization.
nhop (int) – Number of local optimizations to hop between.
temperature (float, optional) – The temperature for the basin hopping accept/reject criterion.
options – Supplied to the inner scipy.optimize.minimize() call.
- Returns
tn_opt
- Return type
TensorNetwork
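A brief usage sketch, again reusing the tnopt object from the earlier example:

```python
# assumes tnopt from the earlier sketch: 10 basin hops, each a local
# optimization of 100 iterations, with a moderate hopping temperature
tn_opt = tnopt.optimize_basinhopping(n=100, nhop=10, temperature=0.5)
```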
- optimize_ipopt(n, tol=None, **options)[source]#
Run the optimizer for n function evaluations, using ipopt as the backend library to run the optimization, via the python package cyipopt.
- Parameters
n (int) – The maximum number of iterations for the optimizer.
- Returns
tn_opt
- Return type
TensorNetwork
- optimize_nevergrad(n)[source]#
Run the optimizer for n function evaluations, using nevergrad as the backend library to run the optimization. As the name suggests, the gradient is not required for this method.
- Parameters
n (int) – The maximum number of iterations for the optimizer.
- Returns
tn_opt
- Return type
TensorNetwork
- optimize_nlopt(n, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)[source]#
Run the optimizer for n function evaluations, using nlopt as the backend library to run the optimization. Whether the gradient is computed depends on which optimizer is selected, see valid options at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/.
- Parameters
n (int) – The maximum number of iterations for the optimizer.
ftol_rel (float, optional) – Set relative tolerance on function value.
ftol_abs (float, optional) – Set absolute tolerance on function value.
xtol_rel (float, optional) – Set relative tolerance on optimization parameters.
xtol_abs (float, optional) – Set absolute tolerance on optimization parameters.
- Returns
tn_opt
- Return type
TensorNetwork
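A usage sketch, assuming the optimizer name given at construction is interpreted as the NLopt algorithm (e.g. the gradient-based 'LD_LBFGS' or the derivative-free 'LN_COBYLA'), and reusing the states and functions from the earlier example:

```python
# assumes psi0, loss_fn, norm_fn and target from the earlier sketch; the
# optimizer name is taken to be an NLopt algorithm
tnopt = TNOptimizer(
    psi0,
    loss_fn=loss_fn,
    norm_fn=norm_fn,
    loss_constants={'target': target},
    optimizer='LD_LBFGS',              # gradient-based NLopt algorithm
)
tn_opt = tnopt.optimize_nlopt(1000, ftol_rel=1e-10)
```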
- property optimizer#
The underlying optimizer that works with the vectorized functions.
- reset(tn=None, clear_info=True, loss_target=None)[source]#
Reset this optimizer without losing the compiled loss and gradient functions.
- Parameters
tn (TensorNetwork, optional) – Set this tensor network as the current state of the optimizer; it must exactly match the original tensor network.
clear_info (bool, optional) – Clear the tracked losses and iterations.
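For instance, to discard the tracked history and continue optimizing without paying the compilation cost again (reusing the tnopt object from the earlier sketch):

```python
# assumes tnopt from the earlier sketch: clear tracked losses/iterations,
# keeping the compiled loss and gradient functions, then continue optimizing
tnopt.reset(clear_info=True)
tn_opt = tnopt.optimize(200)
```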
- class quimb.tensor.optimize.Vectorizer(arrays)[source]#
Object for mapping a sequence of mixed real/complex n-dimensional arrays to a single numpy vector and back again.
- Parameters
arrays (sequence of array) – The set of arrays to map into a single real vector.
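The idea can be pictured with plain numpy. The pack/unpack helpers below are purely illustrative assumptions about the mapping, not Vectorizer's actual methods:

```python
import numpy as np

# illustrative only: flatten a mix of real and complex arrays into one real vector
arrays = [np.random.randn(2, 3), np.random.randn(4) + 1j * np.random.randn(4)]

def pack(arrays):
    parts = []
    for a in arrays:
        if np.iscomplexobj(a):
            # complex arrays contribute their real and imaginary parts separately
            parts.extend([a.real.ravel(), a.imag.ravel()])
        else:
            parts.append(a.ravel())
    return np.concatenate(parts)

def unpack(vector, ref_arrays):
    out, i = [], 0
    for a in ref_arrays:
        n = a.size
        if np.iscomplexobj(a):
            re, im = vector[i:i + n], vector[i + n:i + 2 * n]
            out.append((re + 1j * im).reshape(a.shape))
            i += 2 * n
        else:
            out.append(vector[i:i + n].reshape(a.shape))
            i += n
    return out

v = pack(arrays)                 # single real 1D vector, suitable for scipy
roundtrip = unpack(v, arrays)    # recover arrays with original shapes/dtypes
```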
- quimb.tensor.optimize.constant_tn(tn, to_constant)[source]#
Convert a tensor network’s arrays to constants.
- quimb.tensor.optimize.parse_network_to_backend(tn, to_constant, tags=None, shared_tags=None, constant_tags=None)[source]#
Parse a tensor network in order to:
1. identify the dimension of the optimisation space and the initial point of the optimisation from the current values in the tensor network,
2. add variable tags to individual tensors so that optimisation vector values can be efficiently reinserted into the tensor network.
There are two different modes:
'opt in' : tags (and optionally shared_tags) are specified and only tensors with these tags will be optimised over. In this case constant_tags is ignored if it is passed.
'opt out' : tags is not specified. In this case all tensors will be optimised over, unless they have one of the constant_tags tags.
- Parameters
tn (TensorNetwork) – The initial tensor network to parse.
to_constant (Callable) – Function that fixes a tensor as constant.
tags (str, or sequence of str, optional) – Set of opt-in tags to optimise.
shared_tags (str, or sequence of str, optional) – Subset of opt-in tags to jointly optimise, i.e. all tensors sharing a tag in shared_tags will correspond to the same optimisation variables.
constant_tags (str, or sequence of str, optional) – Set of opt-out tags if tags not passed.
- Returns
tn_ag (TensorNetwork) – Tensor network tagged for reinsertion of optimisation variable values.
variables (list) – List of variables extracted from tn.
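These two modes correspond to how tags are supplied to TNOptimizer. A hedged illustration (the tensor network, loss function and tag names here are assumptions for the example):

```python
# 'opt in': only tensors tagged 'VAR' are optimized; tensors tagged 'SHARE'
# additionally share one set of optimization variables
tnopt = TNOptimizer(tn, loss_fn=loss_fn, tags=['VAR'], shared_tags=['SHARE'])

# 'opt out': every tensor is optimized except those tagged 'FIXED'
tnopt = TNOptimizer(tn, loss_fn=loss_fn, constant_tags=['FIXED'])
```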