quimb.tensor.tensor_core#
Core tensor network tools.
Functions
- Get the set of MPS tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``.
- Get the tensor representing the COPY operation with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``.
- Get the set of tree tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``.
- Direct product of two arrays.
- Get any indices connecting the Tensor(s) or TensorNetwork(s).
- Get the size of the bonds linking tensors or tensor networks.
- Connect two tensors by setting a shared index for the specified dimensions.
- Get a default dictionary that will populate with symbol entries as they are accessed.
- Get the 'ith' symbol.
- Return all the tags found in …
- Group bonds into left only, shared, and right only.
- Maybe unwrap a …
- Inplace addition of a new bond between tensors.
- Non-variadic ordered set union taking any sequence of iterables.
- Helper function for padding a tensor with random entries.
- Return a guaranteed unique, shortish identifier, optionally appended to …
- Parse a …
- Gauge the bond between two tensors such that the norm of the 'columns' of the tensors on each side is the same for each index of the bond.
- Inplace 'canonization' of two tensors, gauging the bond between them such that …
- Inplace compression between two single tensors, following steps that minimize the size of the SVD performed.
- Efficiently contract multiple tensors, combining their tags.
- Direct product of two Tensors.
- If …
- If two tensors share multibonds, fuse them together and return the left indices, bond if it exists, and right indices.
- Compute the Frobenius norm distance between two tensor networks.
- Optimize the fit of …
- Optimize the fit of …
- Sum of two tensor networks, whose indices should match exactly, using direct products.
- Decompose this tensor into two tensors.
- Register an __array_function__ implementation for TNLinearOperator objects.
Classes
- A ``Tensor`` subclass which keeps its ``left_inds`` by default even when its data is changed.
- A tensor whose data array is lazily generated from a set of parameters and a function.
- Get a linear operator - something that replicates the matrix-vector operation - for an arbitrary uncontracted TensorNetwork, e.g. to supply to scipy's sparse linear algebra routines.
- A labelled, tagged n-dimensional array.
- A collection of (as yet uncontracted) Tensors.
- quimb.tensor.tensor_core.COPY_mps_tensors(d, inds, tags=None, dtype=<class 'float'>)[source]#
Get the set of MPS tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``.
- Parameters
- Returns
The ``len(inds)`` tensors describing the MPS, with physical legs ordered as supplied in ``inds``.
- Return type
List[Tensor]
- quimb.tensor.tensor_core.COPY_tensor(d, inds, tags=None, dtype=<class 'float'>)[source]#
Get the tensor representing the COPY operation with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``.
- Parameters
- Returns
The tensor describing the COPY operation, of size ``d**len(inds)``.
- Return type
- quimb.tensor.tensor_core.COPY_tree_tensors(d, inds, tags=None, dtype=<class 'float'>, ssa_path=None)[source]#
Get the set of tree tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``. The tree is generated by cycling through pairs.
- Parameters
- Returns
The ``len(inds) - 2`` tensors describing the TTN, with physical legs ordered as supplied in ``inds``.
- Return type
List[Tensor]
- class quimb.tensor.tensor_core.IsoTensor(data=1.0, inds=(), tags=None, left_inds=None)[source]#
A ``Tensor`` subclass which keeps its ``left_inds`` by default even when its data is changed.
- fuse(*args, inplace=False, **kwargs)[source]#
Combine groups of indices into single indices.
- Parameters
fuse_map (dict_like or sequence of tuples) – Mapping like ``{new_ind: sequence of existing inds, ...}`` or an ordered mapping like ``[(new_ind_1, old_inds_1), ...]``, in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the beginning of the tensor's shape.
- Returns
The transposed, reshaped and re-labeled tensor.
- Return type
- modify(**kwargs)[source]#
Overwrite the data of this tensor in place.
- Parameters
data (array, optional) – New data.
apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.
inds (sequence of str, optional) – New tuple of indices.
tags (sequence of str, optional) – New tags.
left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.
- class quimb.tensor.tensor_core.PTensor(fn, params, inds=(), tags=None, left_inds=None)[source]#
A tensor whose data array is lazily generated from a set of parameters and a function.
- Parameters
fn (callable) – The function that generates the tensor data from ``params``.
params (sequence of numbers) – The initial parameters supplied to the generating function like ``fn(params)``.
inds (optional) – Should match the shape of ``fn(params)``, see ``Tensor``.
tags (optional) – See ``Tensor``.
left_inds (optional) – See ``Tensor``.
See also
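For example, a minimal sketch (the ``numpy.sin`` parametrization here is an arbitrary choice for illustration):
>>> import numpy as np
>>> import quimb.tensor as qtn
>>> p = qtn.PTensor(np.sin, params=np.random.rand(2, 3), inds=('a', 'b'))
>>> p.shape  # the data is generated lazily as fn(params)
(2, 3)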
- class quimb.tensor.tensor_core.TNLinearOperator(*args, **kwargs)[source]#
Get a linear operator - something that replicates the matrix-vector operation - for an arbitrary uncontracted TensorNetwork, e.g.:
: --O--O--+ +-- : --+ : | | | : | : --O--O--O-O-- : acting on --V : | | : | : --+ +---- : --+ left_inds^ ^right_inds
This can then be supplied to scipy's sparse linear algebra routines. The ``left_inds``/``right_inds`` convention is that the linear operator will have shape matching ``(*left_inds, *right_inds)``, so that the ``right_inds`` are those that will be contracted in a normal matvec / matmat operation:
_matvec = --0--v , _rmatvec = v--0--
- Parameters
tns (sequence of Tensors or TensorNetwork) – A representation of the hamiltonian.
left_inds (sequence of str) – The 'left' inds of the effective hamiltonian network.
right_inds (sequence of str) – The 'right' inds of the effective hamiltonian network. These should be ordered the same way as ``left_inds``.
ldims (tuple of int, or None) – The dimensions corresponding to ``left_inds``. Will be figured out if None.
rdims (tuple of int, or None) – The dimensions corresponding to ``right_inds``. Will be figured out if None.
optimize (str, optional) – The path optimizer to use for the ‘matrix-vector’ contraction.
backend (str, optional) – The array backend to use for the ‘matrix-vector’ contraction.
is_conj (bool, optional) – Whether this object should represent the adjoint operator.
See also
TNLinearOperator1D
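For example, a minimal sketch of wrapping a small two-tensor network and passing it to scipy (the index names here are arbitrary):
>>> import quimb.tensor as qtn
>>> import scipy.sparse.linalg as spla
>>> t1 = qtn.rand_tensor((2, 2, 4), inds=('b0', 'k0', 'x'))
>>> t2 = qtn.rand_tensor((4, 2, 2), inds=('x', 'b1', 'k1'))
>>> A = qtn.TNLinearOperator((t1, t2), left_inds=('b0', 'b1'), right_inds=('k0', 'k1'))
>>> A.shape
(4, 4)
>>> s_max = spla.svds(A, k=1)[1]  # e.g. estimate the largest singular value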
- split(left_inds, method='svd', get=None, absorb='both', max_bond=None, cutoff=1e-10, cutoff_mode='rel', renorm=None, ltags=None, rtags=None, stags=None, bond_ind=None, right_inds=None)[source]#
Decompose this tensor into two tensors.
- Parameters
T (Tensor or TNLinearOperator) – The tensor (network) to split.
left_inds (str or sequence of str) – The index or sequence of inds, which ``T`` should already have, to split to the 'left'. You can supply ``None`` here if you supply ``right_inds`` instead.
method (str, optional) –
How to split the tensor, only some methods allow bond truncation:
'svd': full SVD, allows truncation.
'eig': full SVD via eigendecomp, allows truncation.
'svds': iterative svd, allows truncation.
'isvd': iterative svd using interpolative methods, allows truncation.
'rsvd': randomized iterative svd with truncation.
'eigh': full eigen-decomposition, tensor must be hermitian.
'eigsh': iterative eigen-decomposition, tensor must be hermitian.
'qr': full QR decomposition.
'lq': full LQ decomposition.
'cholesky': full cholesky decomposition, tensor must be positive.
get ({None, 'arrays', 'tensors', 'values'}) –
If given, what to return instead of a TN describing the split:
None: a tensor network of the two (or three) tensors.
'arrays': the raw data arrays as a tuple ``(l, r)`` or ``(l, s, r)`` depending on ``absorb``.
'tensors': the new tensors as a tuple ``(Tl, Tr)`` or ``(Tl, Ts, Tr)`` depending on ``absorb``.
'values': only compute and return the singular values ``s``.
absorb ({'both', 'left', 'right', None}, optional) – Whether to absorb the singular values into both, the left, or the right unitary matrix respectively, or neither. If neither (``absorb=None``) then the singular values will be returned separately in their own 1D tensor or array. In that case if ``get=None`` the tensor network returned will have a hyperedge corresponding to the new bond index connecting three tensors. If ``get='tensors'`` or ``get='arrays'`` then a tuple like ``(left, s, right)`` is returned.
max_bond (None or int) – If integer, the maximum number of singular values to keep, regardless of ``cutoff``.
cutoff (float, optional) – The threshold below which to discard singular values, only applies to rank revealing methods (not QR, LQ, or cholesky).
cutoff_mode ({'sum2', 'rel', 'abs', 'rsum2'}) –
Method with which to apply the cutoff threshold:
'rel': values less than ``cutoff * s[0]`` discarded.
'abs': values less than ``cutoff`` discarded.
'sum2': sum squared of values discarded must be ``< cutoff``.
'rsum2': sum squared of values discarded must be less than ``cutoff`` times the total sum of squared values.
'sum1': sum of values discarded must be ``< cutoff``.
'rsum1': sum of values discarded must be less than ``cutoff`` times the total sum of values.
renorm ({None, bool, or int}, optional) – Whether to renormalize the kept singular values, assuming the bond has a canonical environment, corresponding to maintaining the frobenius or nuclear norm. If ``None`` (the default) then this is automatically turned on only for ``cutoff_mode in {'sum2', 'rsum2', 'sum1', 'rsum1'}`` with ``method in {'svd', 'eig', 'eigh'}``.
.ltags (sequence of str, optional) – Add these new tags to the left tensor.
rtags (sequence of str, optional) – Add these new tags to the right tensor.
stags (sequence of str, optional) – Add these new tags to the singular value tensor.
bond_ind (str, optional) – Explicitly name the new bond, else a random one will be generated.
right_inds (sequence of str, optional) – Explicitly give the right indices, otherwise they will be worked out. This is a minor performance feature.
- Returns
Depending on if ``get`` is ``None``, ``'tensors'``, ``'arrays'``, or ``'values'``. In the first three cases, if ``absorb`` is set, then the returned objects correspond to ``(left, right)`` whereas if ``absorb=None`` the returned objects correspond to ``(left, singular_values, right)``.
- Return type
TensorNetwork or tuple[Tensor] or tuple[array] or 1D-array
- class quimb.tensor.tensor_core.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]#
A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.
- Parameters
data (numpy.ndarray) – The n-dimensional data.
inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of ``data``.
tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an ``oset``.
left_inds (sequence of str, optional) – Which, if any, indices to group as 'left' indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.
Examples
Basic construction:
>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})
Indices are automatically aligned, and tags combined, when contracting:
>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
- property H#
Conjugate this tensor's data (does nothing to indices).
- add_owner(tn, tid)[source]#
Add ``tn`` as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.
- add_tag(tag)[source]#
Add a tag to this tensor. Unlike ``self.tags.add`` this also updates any TensorNetworks viewing this Tensor.
- check_owners()[source]#
Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.
- collapse_repeated(inplace=False)[source]#
Take the diagonals of any repeated indices, such that each index only appears once.
- contract(*, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, **contract_opts)[source]#
Efficiently contract multiple tensors, combining their tags.
- Parameters
tensors (sequence of Tensor) – The tensors to contract.
output_inds (sequence of str) – If given, the desired order of output indices, else defaults to the order they occur in the input indices. You need to supply this if the tensors supplied have any hyper indices.
optimize ({None, str, path_like, PathOptimizer}, optional) –
The contraction path optimization strategy to use.
None: use the default strategy.
str: use the preset strategy with the given name.
path_like: use this exact path.
``opt_einsum.PathOptimizer``: find the path using this optimizer.
``cotengra.HyperOptimizer``: find and perform the contraction using ``cotengra``.
``cotengra.ContractionTree``: use this exact tree and perform contraction using ``cotengra``.
Contraction with ``cotengra`` might be a bit more efficient but the main reason would be to handle sliced contraction automatically.
get ({None, 'expression', 'path-info', 'opt_einsum'}, optional) –
What to return. If:
None (the default) - return the resulting scalar or Tensor.
'expression' - return the ``opt_einsum`` expression that performs the contraction and operates on the raw arrays.
'symbol-map' - return the dict mapping ``opt_einsum`` symbols to tensor indices.
'path-info' - return the full ``opt_einsum`` path object with detailed information such as flop cost. The symbol-map is also added to the ``quimb_symbol_map`` attribute.
backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Must be a valid ``opt_einsum`` backend with the relevant library installed.
preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.
contract_opts – Passed to ``opt_einsum.contract_expression`` or ``opt_einsum.contract_path``.
- Return type
scalar or Tensor
- copy(deep=False, virtual=False)[source]#
Copy this tensor.
Note
By default (``deep=False``), the underlying array will not be copied.
- direct_product(T2, sum_inds=(), inplace=False)[source]#
Direct product of two Tensors. Any axes included in ``sum_inds`` must be the same size and will be summed over rather than concatenated. Summing over contractions of TensorNetworks equates to contracting a TensorNetwork made of direct products of each set of tensors. I.e. (a1 @ b1) + (a2 @ b2) == (a1 (+) a2) @ (b1 (+) b2).
- Parameters
- Returns
Like ``T1``, but with each dimension doubled in size if not in ``sum_inds``.
- Return type
- distance(tnB, xAA=None, xAB=None, xBB=None, method='auto', **contract_opts)[source]#
Compute the Frobenius norm distance between two tensor networks:
\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle )^{1/2}\]
where the two tensor networks should have matching external indices.
- Parameters
tnA (TensorNetwork or Tensor) – The first tensor network operator.
tnB (TensorNetwork or Tensor) – The second tensor network operator.
xAA (None or scalar) – The value of ``A.H @ A`` if you already know it (or it doesn't matter).
xAB (None or scalar) – The value of ``A.H @ B`` if you already know it (or it doesn't matter).
xBB (None or scalar) – The value of ``B.H @ B`` if you already know it (or it doesn't matter).
method ({'auto', 'overlap', 'dense'}, optional) – How to compute the distance. If ``'overlap'``, the default, the distance will be computed as the sum of overlaps, without explicitly forming the dense operators. If ``'dense'``, the operators will be directly formed and the norm computed, which can be quicker when the exterior dimensions are small. If ``'auto'``, the dense method will be used if the total operator (outer) size is ``<= 2**16``.
contract_opts – Supplied to ``contract()``.
- Returns
D
- Return type
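For example, a minimal sketch with two random tensors sharing the same outer indices:
>>> import quimb.tensor as qtn
>>> a = qtn.rand_tensor((2, 3), inds=('i', 'j'))
>>> b = qtn.rand_tensor((2, 3), inds=('i', 'j'))
>>> d = a.distance(b)  # the frobenius norm of the difference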
- entropy(left_inds, method='svd')[source]#
Return the entropy associated with splitting this tensor according to ``left_inds``.
- Parameters
left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.
method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.
- Return type
- expand_ind(ind, size)[source]#
Inplace increase the size of the dimension of ``ind``; the new array entries will be filled with zeros.
- filter_bonds(other)[source]#
Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.
- flip(ind, inplace=False)[source]#
Reverse the axis on this tensor corresponding to ``ind``. Like performing e.g. ``X[:, :, ::-1, :]``.
- fuse(fuse_map, inplace=False)[source]#
Combine groups of indices into single indices.
- Parameters
fuse_map (dict_like or sequence of tuples) – Mapping like ``{new_ind: sequence of existing inds, ...}`` or an ordered mapping like ``[(new_ind_1, old_inds_1), ...]``, in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the beginning of the tensor's shape.
- Returns
The transposed, reshaped and re-labeled tensor.
- Return type
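For example, a quick sketch of fusing two indices into one:
>>> import quimb.tensor as qtn
>>> T = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> Tf = T.fuse({'ab': ('a', 'b')})
>>> Tf.inds, Tf.shape  # the fused index is created at the front
(('ab', 'c'), (6, 4))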
- gate(G, ind, inplace=False, **contract_opts)[source]#
Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike ``contract``, ``G`` is a raw array and the tensor remains with the same set of indices.
- Parameters
G (2D array_like) – The matrix to gate the tensor index with.
ind (str) – Which index to apply the gate to.
- Return type
Examples
Create a random tensor of 4 qubits:
>>> t = qtn.rand_tensor(
...     shape=[2, 2, 2, 2],
...     inds=['k0', 'k1', 'k2', 'k3'],
... )
Create another tensor with an X gate applied to qubit 2:
>>> Gt = t.gate(qu.pauli('X'), 'k2')
The contraction of these two tensors is now the expectation of that operator:
>>> t.H @ Gt
-4.108910576149794
- graph(*args, **kwargs)#
Plot a graph of this tensor and its indices.
- isel(selectors, inplace=False)[source]#
Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to ``X[:, :, 3, :, :]`` with arrays.
- Parameters
- Return type
Examples
>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())
See also
- largest_element()[source]#
Return the largest element, in terms of absolute magnitude, of this tensor.
- modify(**kwargs)[source]#
Overwrite the data of this tensor in place.
- Parameters
data (array, optional) – New data.
apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.
inds (sequence of str, optional) – New tuple of indices.
tags (sequence of str, optional) – New tags.
left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.
- multiply_index_diagonal(ind, x, inplace=False)[source]#
Multiply this tensor by 1D array ``x`` as if it were a diagonal tensor being contracted into index ``ind``.
- new_bond(T2, size=1, name=None, axis1=0, axis2=0)#
Inplace addition of a new bond between tensors ``T1`` and ``T2``. The size of the new bond can be specified, in which case the new array parts will be filled with zeros.
- Parameters
T1 (Tensor) – First tensor to modify.
T2 (Tensor) – Second tensor to modify.
size (int, optional) – Size of the new dimension.
name (str, optional) – Name for the new index.
axis1 (int, optional) – Position on the first tensor for the new dimension.
axis2 (int, optional) – Position on the second tensor for the new dimension.
- new_ind(name, size=1, axis=0)[source]#
Inplace add a new index - a named dimension. If ``size`` is specified to be greater than one then the new array entries will be filled with zeros.
- new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]#
Inplace add a new index, where the newly stacked array entries form the identity from ``left_inds`` to ``right_inds``. Selecting 0 or 1 for the new index ``name`` is thus like 'turning off' this tensor if viewed as an operator.
- Parameters
name (str) – Name of the new index.
left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.
right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of ``left_inds``.
axis (int, optional) – Position of the new index.
- norm()[source]#
Frobenius norm of this tensor:
\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
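As a minimal sketch, this matches the flattened 2-norm of the raw data:
>>> import numpy as np
>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((3, 4), inds=('a', 'b'))
>>> np.allclose(t.norm(), np.linalg.norm(t.data))
True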
- randomize(dtype=None, inplace=False, **randn_opts)[source]#
Randomize the entries of this tensor.
- Parameters
- Return type
- reindex(index_map, inplace=False)[source]#
Rename the indices of this tensor, optionally in-place.
- Parameters
index_map (dict-like) – Mapping of pairs ``{old_ind: new_ind, ...}``.
inplace (bool, optional) – If ``False`` (the default), a copy of this tensor with the changed inds will be returned.
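For example:
>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3), inds=('a', 'b'))
>>> t.reindex({'b': 'x'}).inds
('a', 'x')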
- retag(retag_map, inplace=False)[source]#
Rename the tags of this tensor, optionally in-place.
- Parameters
retag_map (dict-like) – Mapping of pairs ``{old_tag: new_tag, ...}``.
inplace (bool, optional) – If ``False`` (the default), a copy of this tensor with the changed tags will be returned.
- singular_values(left_inds, method='svd')[source]#
Return the singular values associated with splitting this tensor according to ``left_inds``.
- Parameters
left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.
method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.
- Returns
The singular values.
- Return type
1d-array
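For example, a minimal sketch:
>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 2), inds=('a', 'b'))
>>> s = t.singular_values(['a'])
>>> len(s)  # one singular value per dimension of the 'left' side
2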
- split(left_inds, method='svd', get=None, absorb='both', max_bond=None, cutoff=1e-10, cutoff_mode='rel', renorm=None, ltags=None, rtags=None, stags=None, bond_ind=None, right_inds=None)[source]#
Decompose this tensor into two tensors.
- Parameters
T (Tensor or TNLinearOperator) – The tensor (network) to split.
left_inds (str or sequence of str) – The index or sequence of inds, which ``T`` should already have, to split to the 'left'. You can supply ``None`` here if you supply ``right_inds`` instead.
method (str, optional) –
How to split the tensor, only some methods allow bond truncation:
'svd': full SVD, allows truncation.
'eig': full SVD via eigendecomp, allows truncation.
'svds': iterative svd, allows truncation.
'isvd': iterative svd using interpolative methods, allows truncation.
'rsvd': randomized iterative svd with truncation.
'eigh': full eigen-decomposition, tensor must be hermitian.
'eigsh': iterative eigen-decomposition, tensor must be hermitian.
'qr': full QR decomposition.
'lq': full LQ decomposition.
'cholesky': full cholesky decomposition, tensor must be positive.
get ({None, 'arrays', 'tensors', 'values'}) –
If given, what to return instead of a TN describing the split:
None: a tensor network of the two (or three) tensors.
'arrays': the raw data arrays as a tuple ``(l, r)`` or ``(l, s, r)`` depending on ``absorb``.
'tensors': the new tensors as a tuple ``(Tl, Tr)`` or ``(Tl, Ts, Tr)`` depending on ``absorb``.
'values': only compute and return the singular values ``s``.
absorb ({'both', 'left', 'right', None}, optional) – Whether to absorb the singular values into both, the left, or the right unitary matrix respectively, or neither. If neither (``absorb=None``) then the singular values will be returned separately in their own 1D tensor or array. In that case if ``get=None`` the tensor network returned will have a hyperedge corresponding to the new bond index connecting three tensors. If ``get='tensors'`` or ``get='arrays'`` then a tuple like ``(left, s, right)`` is returned.
max_bond (None or int) – If integer, the maximum number of singular values to keep, regardless of ``cutoff``.
cutoff (float, optional) – The threshold below which to discard singular values, only applies to rank revealing methods (not QR, LQ, or cholesky).
cutoff_mode ({'sum2', 'rel', 'abs', 'rsum2'}) –
Method with which to apply the cutoff threshold:
'rel': values less than ``cutoff * s[0]`` discarded.
'abs': values less than ``cutoff`` discarded.
'sum2': sum squared of values discarded must be ``< cutoff``.
'rsum2': sum squared of values discarded must be less than ``cutoff`` times the total sum of squared values.
'sum1': sum of values discarded must be ``< cutoff``.
'rsum1': sum of values discarded must be less than ``cutoff`` times the total sum of values.
renorm ({None, bool, or int}, optional) – Whether to renormalize the kept singular values, assuming the bond has a canonical environment, corresponding to maintaining the frobenius or nuclear norm. If ``None`` (the default) then this is automatically turned on only for ``cutoff_mode in {'sum2', 'rsum2', 'sum1', 'rsum1'}`` with ``method in {'svd', 'eig', 'eigh'}``.
.ltags (sequence of str, optional) – Add these new tags to the left tensor.
rtags (sequence of str, optional) – Add these new tags to the right tensor.
stags (sequence of str, optional) – Add these new tags to the singular value tensor.
bond_ind (str, optional) – Explicitly name the new bond, else a random one will be generated.
right_inds (sequence of str, optional) – Explicitly give the right indices, otherwise they will be worked out. This is a minor performance feature.
- Returns
Depending on if ``get`` is ``None``, ``'tensors'``, ``'arrays'``, or ``'values'``. In the first three cases, if ``absorb`` is set, then the returned objects correspond to ``(left, right)`` whereas if ``absorb=None`` the returned objects correspond to ``(left, singular_values, right)``.
- Return type
TensorNetwork or tuple[Tensor] or tuple[array] or 1D-array
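For example, a minimal sketch of an exact split and a truncated split:
>>> import quimb.tensor as qtn
>>> T = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> Tl, Tr = T.split(left_inds=['a', 'b'], get='tensors')
>>> (Tl @ Tr).shape  # recontracting recovers the original shape
(2, 3, 4)
>>> tn = T.split(left_inds=['a'], max_bond=1)  # with get=None, a TensorNetwork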
- symmetrize(ind1, ind2, inplace=False)[source]#
Hermitian symmetrize this tensor for indices ``ind1`` and ``ind2``. I.e. ``T = (T + T.conj().T) / 2``, where the transpose is taken only over the specified indices.
- to_dense(*inds_seq, to_qarray=True)[source]#
Convert this Tensor into a dense array, with a single dimension for each group of inds in ``inds_seq``. E.g. to convert several sites into a density matrix: ``T.to_dense(('k0', 'k1'), ('b0', 'b1'))``.
- trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]#
Trace index or indices ``left_inds`` with ``right_inds``, removing them.
- Parameters
left_inds (str or sequence of str) – The left indices to trace, order matching ``right_inds``.
right_inds (str or sequence of str) – The right indices to trace, order matching ``left_inds``.
preserve_tensor (bool, optional) – If ``True``, a tensor will be returned even if no indices remain.
inplace (bool, optional) – Perform the trace inplace.
- Returns
z
- Return type
Tensor or scalar
- transpose(*output_inds, inplace=False)[source]#
Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.
Note that to compute the traditional 'transpose' of an operator within a contraction, for example, you would just use reindexing, not this.
- Parameters
output_inds (sequence of str) – The desired output sequence of indices.
inplace (bool, optional) – Perform the transposition inplace.
- Returns
tt – The transposed tensor.
- Return type
See also
- transpose_like(other, inplace=False)[source]#
Transpose this tensor to match the indices of ``other``, allowing for one index to be different. E.g. if ``self.inds = ('a', 'b', 'c', 'x')`` and ``other.inds = ('b', 'a', 'd', 'c')`` then 'x' will be aligned with 'd' and the output inds will be ``('b', 'a', 'x', 'c')``.
- Parameters
- Returns
tt – The transposed tensor.
- Return type
See also
- unfuse(unfuse_map, shape_map, inplace=False)[source]#
Reshape single indices into groups of multiple indices.
- Parameters
unfuse_map (dict_like or sequence of tuples) – Mapping like ``{existing_ind: sequence of new inds, ...}`` or an ordered mapping like ``[(old_ind_1, new_inds_1), ...]``, in which case the output tensor's new inds will be ordered. In both cases the new indices are created at the old index's position of the tensor's shape.
shape_map (dict_like or sequence of tuples) – Mapping like ``{old_ind: new_ind_sizes, ...}`` or an ordered mapping like ``[(old_ind_1, new_ind_sizes_1), ...]``.
- Returns
The transposed, reshaped and re-labeled tensor.
- Return type
- unitize(left_inds=None, inplace=False, method='qr')[source]#
Make this tensor unitary (or isometric) with respect to ``left_inds``. The underlying method is set by ``method``.
- Parameters
left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.
inplace (bool, optional) – Whether to perform the unitization inplace.
method ({'qr', 'exp', 'mgs'}, optional) –
How to generate the unitary matrix. The options are:
'qr': use a QR decomposition directly.
'exp': exponentiate the padded, anti-hermitian part of the array.
'mgs': use an explicit modified Gram-Schmidt procedure.
Generally, 'qr' is the fastest and best approach, however currently ``tensorflow`` cannot back-propagate through it, for instance, making the other two methods necessary.
- Return type
- class quimb.tensor.tensor_core.TensorNetwork(ts, *, virtual=False, check_collisions=True)[source]#
A collection of (as yet uncontracted) Tensors.
- Parameters
ts (sequence of Tensor or TensorNetwork) – The objects to combine. The new network will copy these (but not the underlying data) by default. For a view set ``virtual=True``.
virtual (bool, optional) – Whether the TensorNetwork should be a view onto the tensors it is given, or a copy of them. E.g. if a virtual TN is constructed, any changes to a Tensor's indices or tags will propagate to all TNs viewing that Tensor.
check_collisions (bool, optional) – If True, the default, then ``TensorNetwork`` instances with double indices which match another ``TensorNetwork`` instance's double indices will have those indices' names mangled. Can be explicitly turned off when it is known that no collisions will take place - i.e. when not adding any new tensors.
- tensor_map#
Mapping of unique ids to tensors, like ``{tensor_id: tensor, ...}``. I.e. this is where the tensors are 'stored' by the network.
- Type
- tag_map#
Mapping of tags to a set of tensor ids which have those tags. I.e. ``{tag: {tensor_id_1, tensor_id_2, ...}}``. Thus to select those tensors one could do: ``map(tensor_map.__getitem__, tag_map[tag])``.
.- Type
- ind_map#
Like ``tag_map`` but for indices. So ``ind_map[ind]`` returns the tensor ids of those tensors with ``ind``.
- Type
- exponent#
A scalar prefactor for the tensor network, stored in base 10 like ``10**exponent``. This is mostly for conditioning purposes and will be ``0.0`` unless you use ``equalize_norms(value)`` or ``tn.strip_exponent(tid_or_tensor)``.
- Type
- property H#
Conjugate all the tensors in this network (leaves all indices).
- add(t, virtual=False, check_collisions=True)[source]#
Add Tensor, TensorNetwork or sequence thereof to self.
- add_tag(tag, where=None, which='all')[source]#
Add tag to every tensor in this network, or if ``where`` is specified, the tensors matching those tags - i.e. adds the tag to all tensors in ``self.select_tensors(where, which=which)``.
- add_tensor(tensor, tid=None, virtual=False)[source]#
Add a single tensor to this network - mangle its tid if necessary.
- antidiag_gauge(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]#
Flip the order of any bonds connected to antidiagonal tensors. Whilst this is just a gauge fixing (with the gauge being the flipped identity) it then allows ``diagonal_reduce`` to simplify those indices.
- Parameters
output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not flip. If not given, these will be taken as all the indices that appear once.
atol (float, optional) – When identifying antidiagonal tensors, the absolute tolerance with which to compare to zero.
cache (None or set) – Persistent cache used to mark already checked tensors.
inplace (bool, optional) – Whether to perform the antidiagonal gauging inplace.
- Return type
See also
full_simplify, rank_simplify, diagonal_reduce, column_reduce
- property arrays#
Get the tuple of raw arrays containing all the tensor network data.
- aslinearoperator(left_inds, right_inds, ldims=None, rdims=None, backend=None, optimize=None)[source]#
View this ``TensorNetwork`` as a ``TNLinearOperator``.
- balance_bonds(inplace=False)[source]#
Apply ``tensor_balance_bond()`` to all bonds in this tensor network.
- Parameters
inplace (bool, optional) – Whether to perform the bond balancing inplace or not.
- Return type
- canonize_around(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', equalize_norms=False, inplace=False, **canonize_opts)[source]#
Expand a locally canonical region around ``tags``:
--●---●-- | | | | --●---v---v---●-- | | | | | | --●--->---v---v---<---●-- | | | | | | | | ●--->--->---O---O---<---<---● | | | | | | | | --●--->---^---^---^---●-- | | | | | | --●---^---^---●-- | | | | --●---●-- <=====> max_distance = 2 e.g.
Shown on a grid here but applicable to arbitrary geometry. This is a way of gauging a tensor network that results in a canonical form if the geometry is described by a tree (e.g. an MPS or TTN). The canonizations proceed inwards via QR decompositions.
The sequence is generated by round-robin expanding the boundary of the originally specified tensors - it will only be unique for trees.
- Parameters
tags (str, or sequence or str) – Tags defining which set of tensors to locally canonize around.
which ({'all', 'any', '!all', '!any'}, optional) – How to select the tensors based on tags.
min_distance (int, optional) – How close, in terms of graph distance, to canonize tensors away. See ``get_tree_span()``.
max_distance (None or int, optional) – How far, in terms of graph distance, to canonize tensors away. See ``get_tree_span()``.
include (sequence of str, optional) – How to build the spanning tree to canonize along. See ``get_tree_span()``.
exclude (sequence of str, optional) – How to build the spanning tree to canonize along. See ``get_tree_span()``.
distance_sort ({'min', 'max'}, optional) – How to build the spanning tree to canonize along. See ``get_tree_span()``.
.absorb ({'right', 'left', 'both'}, optional) – As we canonize inwards from tensor A to tensor B which to absorb the singular values into.
gauge_links (bool, optional) – Whether to gauge the links between branches of the spanning tree generated (in a Simple Update like fashion).
link_absorb ({'both', 'right', 'left'}, optional) – If performing the link gauging, how to absorb the singular values.
equalize_norms (bool or float, optional) – Scale the norms of tensors acted on to this value, accumulating the log10 scaled factors in ``self.exponent``.
inplace (bool, optional) – Whether to perform the canonization inplace.
- Return type
See also
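For example, a minimal sketch for an MPS, whose site tensors carry tags like 'I0', 'I1', ... - canonizing around one site makes it the effective orthogonality center:
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=4)
>>> psi_c = psi.canonize_around('I3')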
- canonize_between(tags1, tags2, absorb='right', **canonize_opts)[source]#
'Canonize' the bond between the two single tensors in this network specified by ``tags1`` and ``tags2`` using ``tensor_canonize_bond``:
| | | | | | | | --●----●----●----●-- --●----●----●----●-- /| /| /| /| /| /| /| /| | | | | | | | | --●----1----2----●-- ==> --●---->~~~~R----●-- /| /| /| /| /| /| /| /| | | | | | | | | --●----●----●----●-- --●----●----●----●-- /| /| /| /| /| /| /| /|
This is an inplace operation. This can only be used to put a TN into truly canonical form if the geometry is a tree, such as an MPS.
- Parameters
tags1 – Tags uniquely identifying the first (‘left’) tensor, which will become an isometry.
tags2 (str or sequence of str) – Tags uniquely identifying the second (‘right’) tensor.
absorb ({'left', 'both', 'right'}, optional) – Which side of the bond to absorb the non-isometric operator.
canonize_opts – Supplied to ``tensor_canonize_bond()``.
See also
- column_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]#
Find bonds on this tensor network which have tensors where all but one column (of the respective index) is non-zero, allowing the ‘cutting’ of that bond.
- Parameters
output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not slice. If not given, these will be taken as all the indices that appear once.
atol (float, optional) – When identifying singlet column tensors, the absolute tolerance with which to compare to zero.
cache (None or set) – Persistent cache used to mark already checked tensors.
inplace (bool, optional) – Whether to perform the column reductions inplace.
- Return type
See also
full_simplify, rank_simplify, diagonal_reduce, antidiag_gauge
- compress_between(tags1, tags2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=0, canonize_opts=None, equalize_norms=False, **compress_opts)[source]#
Compress the bond between the two single tensors in this network specified by ``tags1`` and ``tags2`` using ``tensor_compress_bond()``:
| | | | | | | | ==●====●====●====●== ==●====●====●====●== /| /| /| /| /| /| /| /| | | | | | | | | ==●====1====2====●== ==> ==●====L----R====●== /| /| /| /| /| /| /| /| | | | | | | | | ==●====●====●====●== ==●====●====●====●== /| /| /| /| /| /| /| /|
This is an inplace operation. The compression is unlikely to be optimal with respect to the frobenius norm, unless the TN is already canonicalized at the two tensors. The ``absorb`` kwarg can be specified to yield an isometry on either the left or right resulting tensors.
- Parameters
tags1 – Tags uniquely identifying the first (‘left’) tensor.
tags2 (str or sequence of str) – Tags uniquely identifying the second (‘right’) tensor.
max_bond (int or None, optional) – The maximum bond dimension.
cutoff (float, optional) – The singular value cutoff to use.
canonize_distance (int, optional) – How far to locally canonize around the target tensors first.
canonize_opts (None or dict, optional) – Other options for the local canonization.
equalize_norms (bool or float, optional) – If set, rescale the norms of all tensors modified to this value, stripping the rescaling factor into the ``exponent`` attribute.
compress_opts – Supplied to ``tensor_compress_bond()``.
See also
- compute_contracted_inds(*tids, output_inds=None)[source]#
Get the indices describing the tensor contraction of tensors corresponding to ``tids``.
- compute_shortest_distances(tids=None, exclude_inds=())[source]#
Compute the minimum graph distances between all or some nodes ``tids``.
- conj(mangle_inner=False, inplace=False)[source]#
Conjugate all the tensors in this network (leaves all indices).
- contract(tags=Ellipsis, inplace=False, **opts)[source]#
Contract some, or all, of the tensors in this network. This method dispatches to ``contract_structured`` or ``contract_tags``.
- Parameters
tags (sequence of str) – Any tensors with any of these tags will be contracted. Set to ``...`` (``Ellipsis``) to contract all tensors, the default.
inplace (bool, optional) – Whether to perform the contraction inplace. This is only valid if not all tensors are contracted (which doesn't produce a TN).
opts – Passed to ``tensor_contract``.
- Returns
The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial.
- Return type
TensorNetwork, Tensor or scalar
See also
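For example, a minimal sketch of full and partial contraction on an MPS norm network (site tensors tagged 'I0', 'I1', ...):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=3)
>>> norm = psi.H & psi
>>> x = norm.contract(all)            # contract everything -> scalar
>>> tn = norm.contract(['I0', 'I1'])  # partial contraction -> still a TensorNetwork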
- contract_around(tags, which='all', min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, canonize_distance=0, canonize_opts=None, gauge_boundary_only=False, compress_opts=None, equalize_norms=False, inplace=False, **kwargs)[source]#
Perform a compressed contraction inwards towards the tensors identified by ``tags``.
- contract_between(tags1, tags2, **contract_opts)[source]#
Contract the two tensors specified by ``tags1`` and ``tags2`` respectively. This is an inplace operation. No-op if the tensor specified by ``tags1`` and ``tags2`` is the same tensor.
- Parameters
tags1 – Tags uniquely identifying the first tensor.
tags2 (str or sequence of str) – Tags uniquely identifying the second tensor.
contract_opts – Supplied to ``tensor_contract()``.
- contract_cumulative(tags_seq, inplace=False, **opts)[source]#
Cumulative contraction of tensor network. Contract the first set of tags, then that set with the next set, then both of those with the next and so forth. Could also be described as a manually ordered contraction of all tags in ``tags_seq``.
- Parameters
tags_seq (sequence of sequence of str) – The list of tag-groups to cumulatively contract.
inplace (bool, optional) – Whether to perform the contraction inplace.
- Returns
The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial.
- Return type
TensorNetwork, Tensor or scalar
See also
- contract_tags(tags, which='any', inplace=False, **opts)[source]#
Contract the tensors that match any or all of ``tags``.
- Parameters
tags (sequence of str) – The list of tags to filter the tensors by. Use ``all`` or ``...`` (``Ellipsis``) to contract all tensors.
inplace (bool, optional) – Whether to perform the contraction inplace.
which ({'all', 'any'}) – Whether to require matching all or any of the tags.
- Returns
The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial.
- Return type
TensorNetwork, Tensor or scalar
See also
- contraction_cost(optimize=None, **contract_opts)[source]#
Compute the ‘contraction cost’ of this tensor network. This is defined as log10 of the total number of scalar operations during the contraction sequence. Multiply by 2 to estimate FLOPS for real dtype, and by 8 to estimate FLOPS for complex dtype.
- contraction_info(optimize=None, **contract_opts)[source]#
Compute the ``opt_einsum.PathInfo`` object describing the contraction of this entire tensor network using path optimizer ``optimize``.
- contraction_path(optimize=None, **contract_opts)[source]#
Compute the contraction path, a sequence of (int, int), for the contraction of this entire tensor network using path optimizer ``optimize``.
- contraction_tree(optimize=None, output_inds=None)[source]#
Return the ``cotengra.ContractionTree`` corresponding to contracting this entire tensor network with path finder ``optimize``.
- contraction_width(optimize=None, **contract_opts)[source]#
Compute the ‘contraction width’ of this tensor network. This is defined as log2 of the maximum tensor size produced during the contraction sequence. If every index in the network has dimension 2 this corresponds to the maximum rank tensor produced.
- copy(virtual=False, deep=False)[source]#
Copy this ``TensorNetwork``. If ``deep=False`` (the default), then everything but the actual numeric data will be copied.
- cut_between(left_tags, right_tags, left_ind, right_ind)[source]#
Cut the bond between the tensors specified by ``left_tags`` and ``right_tags``, giving them the new inds ``left_ind`` and ``right_ind`` respectively.
- cut_iter(*inds)[source]#
Cut and iterate over one or more indices in this tensor network. Each network yielded will have that index removed, and the sum of all networks will equal the original network. This works by iterating over the product of all combinations of each bond supplied to ``isel``. As such, the number of networks produced is exponential in the number of bonds cut.
- Parameters
inds (sequence of str) – The bonds to cut.
- Yields
TensorNetwork
Examples
Here we’ll cut the two extra bonds of a cyclic MPS and sum the contraction of the resulting 49 OBC MPS norms:
>>> psi = MPS_rand_state(10, bond_dim=7, cyclic=True)
>>> norm = psi.H & psi
>>> bnds = bonds(norm[0], norm[-1])
>>> sum(tn ^ all for tn in norm.cut_iter(*bnds))
1.0
See also
- delete(tags, which='all')[source]#
Delete any tensors which match all or any of ``tags``.
- Parameters
tags (str or sequence of str) – The tags to match.
which ({'all', 'any'}, optional) – Whether to match all or any of the tags.
- diagonal_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]#
Find tensors with diagonal structure and collapse those axes. This will create a tensor ‘hyper’ network with indices repeated 2+ times, as such, output indices should be explicitly supplied when contracting, as they can no longer be automatically inferred. For example:
>>> tn_diag = tn.diagonal_reduce()
>>> tn_diag.contract(all, output_inds=[])
- Parameters
output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not replace. If not given, these will be taken as all the indices that appear once.
atol (float, optional) – When identifying diagonal tensors, the absolute tolerance with which to compare to zero.
cache (None or set) – Persistent cache used to mark already checked tensors.
inplace (bool, optional) – Whether to perform the diagonal reduction inplace.
- Return type
See also
- distance(tnB, xAA=None, xAB=None, xBB=None, method='auto', **contract_opts)[source]#
Compute the Frobenius norm distance between two tensor networks:
\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle )^{1/2}\]
where the two tensor networks should have matching external indices.
- Parameters
tnA (TensorNetwork or Tensor) – The first tensor network operator.
tnB (TensorNetwork or Tensor) – The second tensor network operator.
xAA (None or scalar) – The value of ``A.H @ A`` if you already know it (or it doesn't matter).
xAB (None or scalar) – The value of ``A.H @ B`` if you already know it (or it doesn't matter).
xBB (None or scalar) – The value of ``B.H @ B`` if you already know it (or it doesn't matter).
method ({'auto', 'overlap', 'dense'}, optional) – How to compute the distance. If ``'overlap'``, the default, the distance will be computed as the sum of overlaps, without explicitly forming the dense operators. If ``'dense'``, the operators will be directly formed and the norm computed, which can be quicker when the exterior dimensions are small. If ``'auto'``, the dense method will be used if the total operator (outer) size is ``<= 2**16``.
contract_opts – Supplied to ``contract()``.
- Returns
D
- Return type
- distribute_exponent()[source]#
Distribute the exponent ``p`` of this tensor network (i.e. corresponding to ``tn * 10**p``) equally among all tensors.
- draw(color=None, *, output_inds=None, highlight_inds=(), highlight_tids=(), highlight_inds_color=(1.0, 0.2, 0.2, 1.0), highlight_tids_color=(1.0, 0.2, 0.2, 1.0), show_inds=None, show_tags=None, show_scalars=True, custom_colors=None, title=None, legend=True, fix=None, k=None, iterations=200, initial_layout='spectral', use_forceatlas2=1000, use_spring_weight=False, node_color=None, node_scale=1.0, node_size=None, node_shape='o', node_outline_size=None, node_outline_darkness=0.8, node_hatch='', edge_color=None, edge_scale=1.0, edge_alpha=0.5, multiedge_spread=0.1, show_left_inds=True, arrow_closeness=1.1, arrow_length=1.0, arrow_overhang=1.0, arrow_linewidth=1.0, label_color=None, font_size=10, font_size_inner=7, figsize=(6, 6), margin=None, xlims=None, ylims=None, get=None, return_fig=False, ax=None)#
Plot this tensor network as a networkx graph using matplotlib, with edge width corresponding to bond dimension.
- Parameters
color (sequence of tags, optional) – If given, uniquely color any tensors which have each of the tags. If some tensors have more than one of the tags, only one color will show.
output_inds (sequence of str, optional) – For hyper tensor networks explicitly specify which indices should be drawn as outer indices. If not set, the outer indices are assumed to be those that only appear on a single tensor.
highlight_inds (iterable, optional) – Highlight these edges.
highlight_tids (iterable, optional) – Highlight these nodes.
highlight_inds_color – What color to use for ``highlight_inds`` nodes.
highlight_tids_color (tuple[float], optional) – What color to use for ``highlight_tids`` nodes.
show_inds ({None, False, True, 'all', 'bond-size'}, optional) – Explicitly turn on labels for each tensor's indices.
show_tags ({None, False, True}, optional) – Explicitly turn on labels for each tensor's tags.
show_scalars (bool, optional) – Whether to show scalar tensors (floating nodes with no edges).
custom_colors (sequence of colors, optional) – Supply a custom sequence of colors to match the tags given in ``color``.
title (str, optional) – Set a title for the axis.
legend (bool, optional) – Whether to draw a legend for the colored tags.
fix (dict[tags_ind_or_tid, (float, float)], optional) – Used to specify actual relative positions for each tensor node. Each key should be a sequence of tags that uniquely identifies a tensor, a ``tid``, or an ``ind``, and each value should be a ``(x, y)`` coordinate tuple.
k (float, optional) – The optimal distance between nodes.
iterations (int, optional) – How many iterations to perform when finding the best layout using node repulsion. Ramp this up if the graph is drawing messily.
initial_layout ({'spectral', 'kamada_kawai', 'circular', 'planar', 'random', 'shell', 'bipartite', ...}, optional) – The name of a networkx layout to use before iterating with the spring layout. Set ``iterations=0`` if you just want to use this layout only.
use_forceatlas2 (bool or int, optional) – Whether to try and use ``forceatlas2`` (``fa2``) for the spring layout relaxation instead of ``networkx``.
node_color (tuple[float], optional) – Default color of nodes.
node_size (None, float or dict, optional) – How big to draw the tensors. Can be a global single value, or a dict containing values for specific tags or tids. This is in absolute figure units. See
node_scale
simply scale the node sizes up or down.node_scale (float, optional) – Scale the node sizes by this factor, in addition to the automatica scaling based on the number of tensors.
node_shape (None, str or dict, optional) – What shape to draw the tensors. Should correspond to a matplotlib scatter marker. Can be a global single value, or a dict containing values for specific tags or tids.
node_outline_size (None, float or dict, optional) – The width of the border of each node. Can be a global single value, or a dict containing values for specific tags or tids.
node_outline_darkness (float, optional) – Darkening of nodes outlines.
edge_color (tuple[float], optional) – Default color of edges.
edge_scale (float, optional) – How much to scale the width of the edges.
edge_alpha (float, optional) – Set the alpha (opacity) of the drawn edges.
multiedge_spread (float, optional) – How much to spread the lines of multi-edges.
show_left_inds (bool, optional) – Whether to show ``tensor.left_inds`` as incoming arrows.
arrow_closeness (float, optional) – How close to draw the arrow to its target.
arrow_length (float, optional) – The size of the arrow with respect to the edge.
arrow_overhang (float, optional) – Varies the arrowhead between a triangle (0.0) and ‘V’ (1.0).
label_color (tuple[float], optional) – Color to draw labels with.
font_size (int, optional) – Font size for drawing tags and outer indices.
font_size_inner (int, optional) – Font size for drawing inner indices.
figsize (tuple of int) – The size of the drawing.
margin (None or float, optional) – Specify an argument for ``ax.margin``, else the plot limits will try and be computed based on the node positions and node sizes.
xlims (None or tuple, optional) – Explicitly set the x plot range.
ylims (None or tuple, optional) – Explicitly set the y plot range.
get ({None, 'pos', 'graph'}, optional) –
If ``None`` then plot as normal, else if:
'pos', return the plotting positions of each ``tid`` and ``ind`` drawn as a node; this can be supplied to subsequent calls as ``fix=pos`` to maintain positions, even as the graph structure changes.
'graph', return the ``networkx.Graph`` object. Note that this will potentially have extra nodes representing output and hyper indices.
return_fig (bool, optional) – If True and ``ax is None`` then return the figure created rather than executing ``pyplot.show()``.
ax (matplotlib.Axis, optional) – Draw the graph on this axis rather than creating a new figure.
- draw_tree_span(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', weight_bonds=True, color='order', colormap='Spectral', **draw_opts)[source]#
Visualize a generated tree span out of the tensors tagged by ``tags``.
- Parameters
tags (str or sequence of str) – Tags specifying a region of tensors to span out of.
which ({'all', 'any', '!all', '!any'}, optional) – How to select tensors based on the tags.
min_distance (int, optional) – See ``get_tree_span()``.
max_distance (None or int, optional) – See ``get_tree_span()``.
include (sequence of str, optional) – See ``get_tree_span()``.
exclude (sequence of str, optional) – See ``get_tree_span()``.
distance_sort ({'min', 'max'}, optional) – See ``get_tree_span()``.
color ({'order', 'distance'}, optional) – Whether to color nodes based on the order of the contraction or the graph distance from the specified region.
colormap (str) – The name of a ``matplotlib`` colormap to use.
See also
- drop_tags(tags)[source]#
Remove a tag from any tensors in this network which have it. Inplace operation.
- Parameters
tags (str or sequence of str) – The tag or tags to drop.
- property dtype#
The dtype of this TensorNetwork, this is the minimal common type of all the tensors data.
- equalize_norms(value=None, inplace=False)[source]#
Make the Frobenius norm of every tensor in this TN equal without changing the overall value if ``value=None``, or set the norm of every tensor to ``value`` by scalar multiplication only.
- Parameters
- Return type
- expand_bond_dimension(new_bond_dim, rand_strength=0.0, inds_to_expand=None, inplace=False)[source]#
Increase the dimension of bonds to at least ``new_bond_dim``.
- fit(tn_target, method='als', tol=1e-09, inplace=False, progbar=False, **fitting_opts)[source]#
Optimize the entries of this tensor network with respect to a least squares fit of ``tn_target`` which should have the same outer indices. Depending on ``method`` this calls ``tensor_network_fit_als()`` or ``tensor_network_fit_autodiff()``. The quantity minimized is:
\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle )^{1/2}\]
- Parameters
tn_target (TensorNetwork) – The target tensor network to try and fit the current one to.
method ({'als', 'autodiff'}, optional) – Whether to use alternating least squares (ALS) or automatic differentiation to perform the optimization. Generally ALS is better for simple geometries, autodiff better for complex ones.
tol (float, optional) – The target norm distance.
inplace (bool, optional) – Update the current tensor network in place.
progbar (bool, optional) – Show a live progress bar of the fitting process.
fitting_opts – Supplied to either ``tensor_network_fit_als()`` or ``tensor_network_fit_autodiff()``.
- Returns
tn_opt – The optimized tensor network.
- Return type
- flip(inds, inplace=False)[source]#
Flip the dimension corresponding to indices ``inds`` on all tensors that share it.
- classmethod from_TN(tn, like=None, inplace=False, **kwargs)[source]#
Construct a specific tensor network subclass (i.e. one with some promise about structure/geometry and tags/inds such as an MPS) from a generic tensor network which should have that structure already.
- Parameters
cls (class) – The TensorNetwork subclass to convert ``tn`` to.
tn (TensorNetwork) – The TensorNetwork to convert.
like (TensorNetwork, optional) – If specified, try and retrieve the necessary attribute values from this tensor network.
inplace (bool, optional) – Whether to perform the conversion inplace or not.
kwargs – Extra properties of the TN subclass that should be specified.
- full_simplify(seq='ADCR', output_inds=None, atol=1e-12, equalize_norms=False, cache=None, inplace=False, progbar=False, rank_simplify_opts=None, loop_simplify_opts=None, custom_methods=())[source]#
Perform a series of tensor network ‘simplifications’ in a loop until there is no more reduction in the number of tensors or indices. Note that apart from rank-reduction, the simplification methods make use of the non-zero structure of the tensors, and thus changes to this will potentially produce different simplifications.
- Parameters
seq (str, optional) –
Which simplifications and which order to perform them in.
'A': stands for ``antidiag_gauge``.
'D': stands for ``diagonal_reduce``.
'C': stands for ``column_reduce``.
'R': stands for ``rank_simplify``.
'S': stands for ``split_simplify``.
'L': stands for ``loop_simplify``.
If you want to keep the tensor network 'simple', i.e. with no hyperedges, then don't use 'D' (moreover 'A' is redundant).
output_inds (sequence of str, optional) – Explicitly set which indices of the tensor network are output indices and thus should not be modified. If not specified the tensor network is assumed to be a 'standard' one where indices that only appear once are the output indices.
atol (float, optional) – The absolute tolerance when identifying zero entries of tensors and performing low-rank decompositions.
equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. If True, the norms, in the form of stripped exponents, will be redistributed at the end. If an actual number, the final tensors will all have this norm, and the scaling factor will be stored as a base-10 exponent in
tn.exponent
.cache (None or set) – A persistent cache for each simplification process to mark already processed tensors.
progbar (bool, optional) – Show a live progress bar of the simplification process.
inplace (bool, optional) – Whether to perform the simplification inplace.
- Return type
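For example (a sketch assuming a random regular graph TN, which contracts to a scalar):
>>> import quimb.tensor as qtn
>>> tn = qtn.TN_rand_reg(20, 3, 2, seed=42)
>>> tns = tn.full_simplify(seq='ADCR')  # loop until no further reduction in tensors or indices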
- fuse_multibonds(inplace=False)[source]#
Fuse any multi-bonds (more than one index shared by the same pair of tensors) into a single bond.
- gate_inds(G, inds, contract=False, tags=None, info=None, inplace=False, **compress_opts)[source]#
Apply the ‘gate’
G
to indicesinds
, propagating them to the outside, as if applyingG @ x
.
- Parameters
G (array_like) – The gate array to apply, should match or be factorable into the shape
(*phys_dims, *phys_dims)
.inds (str or sequence of str) – The index or indices to apply the gate to.
contract ({False, True, 'split', 'reduce-split'}, optional) –
How to apply the gate:
False: gate is added to network and nothing is contracted, tensor network structure is thus not maintained.
True: gate is contracted with all tensors involved, tensor network structure is thus only maintained if gate acts on a single site only.
’split’: contract all involved tensors then split the result back into two.
’reduce-split’: factor the two physical indices into ‘R-factors’ using QR decompositions on the original site tensors, then contract the gate, split it and reabsorb each side. Much cheaper than
'split'
.
The final two methods are relevant for two-site gates only; for single-site gates they use the
contract=True
option which also maintains the structure of the TN. See below for a pictorial description of each method.tags (str or sequence of str, optional) – Tags to add to the new gate tensor.
info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.
inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.
compress_opts – Supplied to
tensor_split()
for anycontract
methods that involve splitting. Ignored otherwise.
- Returns
G_tn
- Return type
Notes
The
contract
options look like the following (for two-site gates).
contract=False:

      .   .  <- inds
      │   │
      GGGGG
      │╱  │╱
    ──●───●──
     ╱   ╱

contract=True:

      │╱  │╱
    ──GGGGG──
     ╱   ╱

contract='split':

      │╱  │╱          │╱  │╱
    ──GGGGG──  ==>  ──G┄┄┄G──
     ╱   ╱           ╱   ╱
                     <SVD>

contract='reduce-split':

      │   │            │ │
      GGGGG            GGG                │ │
      │╱  │╱  ==>     ╱│ │ ╱  ==>        ╱│ │ ╱          │╱  │╱
    ──●───●──      ──>─●─●─<──        ──>─GGG─<──  ==>  ──G┄┄┄G──
     ╱   ╱          ╱     ╱            ╱     ╱           ╱   ╱
                    <QR> <LQ>              <SVD>
For one-site gates, when one of the ‘split’ methods is supplied,
contract=True
is assumed.
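As an illustrative sketch (the random state and gate here are assumptions):
>>> import numpy as np
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(4, bond_dim=3)
>>> G = np.random.randn(4, 4).reshape(2, 2, 2, 2)  # a dense two-site 'gate'
>>> psi_g = psi.gate_inds(G, inds=('k1', 'k2'), contract='reduce-split')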
- gauge_all_canonize(max_iterations=5, absorb='both', equalize_norms=False, inplace=False, **kwargs)[source]#
Iteratively gauge all the bonds in this tensor network with a basic ‘canonization’ strategy.
- gauge_all_random(iterations=1, unitary=True, inplace=False)[source]#
Gauge all the bonds in this network randomly. This is largely for testing purposes.
- gauge_all_simple(max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, gauges=None, inplace=False)[source]#
Iteratively gauge all the bonds in this tensor network with a ‘simple update’ like strategy.
- gauge_local(tags, which='all', max_distance=1, max_iterations='max_distance', method='canonize', inplace=False, **gauge_local_opts)[source]#
Iteratively gauge all bonds in the tagged sub tensor network according to one of several strategies.
- gauge_simple_insert(gauges, smudge=0.0)[source]#
Insert the simple update style bond gauges found in
gauges
if they are present in this tensor network. The gauges inserted are also returned so that they can be removed later.
- Parameters
gauges (dict[str, array_like]) – The store of bond gauges, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged.
- Returns
outer (list[(Tensor, str, array_like)]) – The sequence of gauges applied to outer indices, each a tuple of the tensor, the index and the gauge vector.
inner (list[((Tensor, Tensor), str, array_like)]) – The sequence of gauges applied to inner indices, each a tuple of the two inner tensors, the inner bond and the gauge vector applied.
- static gauge_simple_remove(outer=None, inner=None)[source]#
Remove the simple update style bond gauges inserted by
gauge_simple_insert
.
- gauge_simple_temp(gauges, smudge=1e-12, ungauge_outer=True, ungauge_inner=True)[source]#
Context manager that temporarily inserts simple update style bond gauges into this tensor network, before optionally ungauging them.
- Parameters
self (TensorNetwork) – The TensorNetwork to be gauge-bonded.
gauges (dict[str, array_like]) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged.
ungauge_outer (bool, optional) – Whether to ungauge the outer bonds.
ungauge_inner (bool, optional) – Whether to ungauge the inner bonds.
- Yields
outer (list[(Tensor, int, array_like)]) – The tensors, indices and gauges that were performed on outer indices.
inner (list[((Tensor, Tensor), int, array_like)]) – The tensors, indices and gauges that were performed on inner bonds.
Examples
>>> tn = TN_rand_reg(10, 4, 3)
>>> tn ^ all
-51371.66630218866

>>> gauges = {}
>>> tn.gauge_all_simple_(gauges=gauges)
>>> len(gauges)
20

>>> tn ^ all
28702551.673767876

>>> with gauged_bonds(tn, gauges):
...     # temporarily insert gauges
...     print(tn ^ all)
-51371.66630218887

>>> tn ^ all
28702551.67376789
- gen_loops(max_loop_length=None)[source]#
Generate sequences of tids that represent loops in the TN.
- Parameters
max_loop_length (None or int) – Set the maximum number of tensors that can appear in a loop. If
None
, wait until any loop is found and set that as the maximum length.
- Yields
tuple[int]
- geometry_hash(output_inds=None, strict_index_order=False)[source]#
A hash of this tensor network’s shapes & geometry. A useful check for determinism. Moreover, if this matches for two tensor networks then they can be contracted using the same tree for the same cost. Order of tensors matters for this - two isomorphic tensor networks with shuffled tensor order will not have the same hash value. Permuting the indices of individual tensors or of the output does not matter unless you set
strict_index_order=True
.
- Parameters
output_inds (None or sequence of str, optional) – Manually specify which indices are output indices and their order, otherwise assumed to be all indices that appear once.
strict_index_order (bool, optional) – If
False
, then the permutation of the indices of each tensor and the output does not matter.
- Return type
Examples
If we transpose some indices, then only the strict hash changes:
>>> tn = qtn.TN_rand_reg(100, 3, 2, seed=0)
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'

>>> tn.geometry_hash(strict_index_order=True)
'c109fdb43c5c788c0aef7b8df7bb83853cf67ca1'

>>> t = tn['I0']
>>> t.transpose_(t.inds[2], t.inds[1], t.inds[0])
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'

>>> tn.geometry_hash(strict_index_order=True)
'52c32c1d4f349373f02d512f536b1651dfe25893'
- get_equation(output_inds=None)[source]#
Get the ‘equation’ describing this tensor network, in
einsum
style with a single unicode letter per index. The symbols are generated in the order they appear on the tensors.
- Parameters
output_inds (None or sequence of str, optional) – Manually specify which are the output indices.
- Returns
eq
- Return type
Examples
>>> tn = qtn.TN_rand_reg(10, 3, 2)
>>> tn.get_equation()
'abc,dec,fgb,hia,jke,lfk,mnj,ing,omd,ohl->'
See also
- get_inputs_output_size_dict(output_inds=None)[source]#
Get a tuple of
inputs
,output
andsize_dict
suitable for e.g. passing to path optimizers. The symbols are generated in the order they appear on the tensors.
- Parameters
output_inds (None or sequence of str, optional) – Manually specify which are the output indices.
- Returns
inputs (tuple[str])
output (str)
size_dict (dict[str, ix])
See also
- get_symbol_map()[source]#
Get the mapping of the current indices to
einsum
style single unicode characters. The symbols are generated in the order they appear on the tensors.
See also
- get_tree_span(tids, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, inwards=True)[source]#
Generate a tree on the tensor network graph, fanning out from the tensors identified by
tids
, up to a maximum ofmax_distance
away. The tree can be visualized withdraw_tree_span()
.
- Parameters
tids (sequence of str) – The nodes that define the region to span out of.
min_distance (int, optional) – Don’t add edges to the tree until this far from the region. For example,
1
will not include the last merges from neighboring tensors in the region defined bytids
.max_distance (None or int, optional) – Terminate branches once they reach this far away. If
None
there is no limit,include (sequence of str, optional) – If specified, only
tids
specified here can be part of the tree.exclude (sequence of str, optional) – If specified,
tids
specified here cannot be part of the tree.ndim_sort ({'min', 'max', 'none'}, optional) – When expanding the tree, how to choose what nodes to expand to next, once connectivity to the current surface has been taken into account.
distance_sort ({'min', 'max', 'none'}, optional) – When expanding the tree, how to choose what nodes to expand to next, once connectivity to the current surface has been taken into account.
weight_bonds (bool, optional) – Whether to weight the ‘connection’ of a candidate tensor to expand out to using bond size as well as number of bonds.
- Returns
The ordered list of merges, each given as tuple
(tid1, tid2, d)
indicating mergetid1 -> tid2
at distanced
.- Return type
See also
- graph(color=None, *, output_inds=None, highlight_inds=(), highlight_tids=(), highlight_inds_color=(1.0, 0.2, 0.2, 1.0), highlight_tids_color=(1.0, 0.2, 0.2, 1.0), show_inds=None, show_tags=None, show_scalars=True, custom_colors=None, title=None, legend=True, fix=None, k=None, iterations=200, initial_layout='spectral', use_forceatlas2=1000, use_spring_weight=False, node_color=None, node_scale=1.0, node_size=None, node_shape='o', node_outline_size=None, node_outline_darkness=0.8, node_hatch='', edge_color=None, edge_scale=1.0, edge_alpha=0.5, multiedge_spread=0.1, show_left_inds=True, arrow_closeness=1.1, arrow_length=1.0, arrow_overhang=1.0, arrow_linewidth=1.0, label_color=None, font_size=10, font_size_inner=7, figsize=(6, 6), margin=None, xlims=None, ylims=None, get=None, return_fig=False, ax=None)#
Plot this tensor network as a networkx graph using matplotlib, with edge width corresponding to bond dimension.
- Parameters
color (sequence of tags, optional) – If given, uniquely color any tensors which have each of the tags. If some tensors have more than one of the tags, only one color will show.
output_inds (sequence of str, optional) – For hyper tensor networks explicitly specify which indices should be drawn as outer indices. If not set, the outer indices are assumed to be those that only appear on a single tensor.
highlight_inds (iterable, optional) – Highlight these edges.
highlight_tids (iterable, optional) – Highlight these nodes.
highlight_inds_color – What color to use for
highlight_inds
nodes.highlight_tids_color (tuple[float], optional) – What color to use for
highlight_tids
nodes.show_inds ({None, False, True, 'all', 'bond-size'}, optional) – Explicitly turn on labels for each tensors indices.
show_tags ({None, False, True}, optional) – Explicitly turn on labels for each tensors tags.
show_scalars (bool, optional) – Whether to show scalar tensors (floating nodes with no edges).
custom_colors (sequence of colors, optional) – Supply a custom sequence of colors to match the tags given in
color
.title (str, optional) – Set a title for the axis.
legend (bool, optional) – Whether to draw a legend for the colored tags.
fix (dict[tags_ind_or_tid, (float, float)], optional) – Used to specify actual relative positions for each tensor node. Each key should be a sequence of tags that uniquely identifies a tensor, a
tid
, or aind
, and each value should be a(x, y)
coordinate tuple.k (float, optional) – The optimal distance between nodes.
iterations (int, optional) – How many iterations to perform when finding the best layout using node repulsion. Ramp this up if the graph is drawing messily.
initial_layout ({'spectral', 'kamada_kawai', 'circular', 'planar', 'random', 'shell', 'bipartite', ...}, optional) – The name of a networkx layout to use before iterating with the spring layout. Set
iterations=0
if you just want to use this layout only.use_forceatlas2 (bool or int, optional) – Whether to try and use
forceatlas2
(fa2
) for the spring layout relaxation instead ofnetworkx
. If an integer, only try and use beyond that many nodes (it can give messier results on smaller graphs).use_spring_weight (bool, optional) – Whether to use inverse bond sizes as spring weights to the force repulsion layout algorithms.
node_color (tuple[float], optional) – Default color of nodes.
node_size (None, float or dict, optional) – How big to draw the tensors. Can be a global single value, or a dict containing values for specific tags or tids. This is in absolute figure units. See
node_scale
to simply scale the node sizes up or down.node_scale (float, optional) – Scale the node sizes by this factor, in addition to the automatic scaling based on the number of tensors.
node_shape (None, str or dict, optional) – What shape to draw the tensors. Should correspond to a matplotlib scatter marker. Can be a global single value, or a dict containing values for specific tags or tids.
node_outline_size (None, float or dict, optional) – The width of the border of each node. Can be a global single value, or a dict containing values for specific tags or tids.
node_outline_darkness (float, optional) – Darkening of node outlines.
edge_color (tuple[float], optional) – Default color of edges.
edge_scale (float, optional) – How much to scale the width of the edges.
edge_alpha (float, optional) – Set the alpha (opacity) of the drawn edges.
multiedge_spread (float, optional) – How much to spread the lines of multi-edges.
show_left_inds (bool, optional) – Whether to show
tensor.left_inds
as incoming arrows.arrow_closeness (float, optional) – How close to draw the arrow to its target.
arrow_length (float, optional) – The size of the arrow with respect to the edge.
arrow_overhang (float, optional) – Varies the arrowhead between a triangle (0.0) and ‘V’ (1.0).
label_color (tuple[float], optional) – Color to draw labels with.
font_size (int, optional) – Font size for drawing tags and outer indices.
font_size_inner (int, optional) – Font size for drawing inner indices.
figsize (tuple of int) – The size of the drawing.
margin (None or float, optional) – Specify an argument for
ax.margin
, else the plot limits will try and be computed based on the node positions and node sizes.xlims (None or tuple, optional) – Explicitly set the x plot range.
ylims (None or tuple, optional) – Explicitly set the y plot range.
get ({None, 'pos', 'graph'}, optional) –
If
None
then plot as normal, else if:'pos'
, return the plotting positions of eachtid
andind
drawn as a node, this can be supplied to subsequent calls as
to maintain positions, even as the graph structure changes.'graph'
, return thenetworkx.Graph
object. Note that this will potentially have extra nodes representing output and hyper indices.
return_fig (bool, optional) – If True and
ax is None
then return the figure created rather than executingpyplot.show()
.ax (matplotlib.Axis, optional) – Draw the graph on this axis rather than creating a new figure.
- graph_tree_span(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', weight_bonds=True, color='order', colormap='Spectral', **draw_opts)#
Visualize a generated tree span out of the tensors tagged by
tags
.
- Parameters
tags (str or sequence of str) – Tags specifying a region of tensors to span out of.
which ({'all', 'any', '!all', '!any'}, optional) – How to select tensors based on the tags.
min_distance (int, optional) – See
get_tree_span()
.max_distance (None or int, optional) – See
get_tree_span()
.include (sequence of str, optional) – See
get_tree_span()
.exclude (sequence of str, optional) – See
get_tree_span()
.distance_sort ({'min', 'max'}, optional) – See
get_tree_span()
.color ({'order', 'distance'}, optional) – Whether to color nodes based on the order of the contraction or the graph distance from the specified region.
colormap (str) – The name of a
matplotlib
colormap to use.
See also
- hyperinds_resolve(mode='dense', sorter=None, inplace=False)[source]#
Convert this into a regular tensor network, where all indices appear at most twice, by inserting COPY tensors or tensor networks for each hyper index.
- Parameters
mode ({'dense', 'mps', 'tree'}, optional) – What type of COPY tensor(s) to insert.
sorter (None or callable, optional) – If given, a function to sort the indices that a single hyperindex will be turned into. The function is called like
tids.sort(key=sorter)
.inplace (bool, optional) – Whether to insert the COPY tensors inplace.
- Return type
- inner_inds()[source]#
Tuple of interior indices, assumed to be any indices that appear twice or more (this only holds generally for non-hyper tensor networks).
- insert_gauge(U, where1, where2, Uinv=None, tol=1e-10)[source]#
Insert the gauge transformation
U^-1 @ U
into the bond between the tensors,T1
andT2
, defined bywhere1
andwhere2
. The resulting tensors at those locations will beT1 @ U^-1
andU @ T2
.- Parameters
U (array) – The gauge to insert.
where1 (str, sequence of str, or int) – Tags defining the location of the ‘left’ tensor.
where2 (str, sequence of str, or int) – Tags defining the location of the ‘right’ tensor.
Uinv (array) – The inverse gauge,
U @ Uinv == Uinv @ U == eye
, to insert. If not given will be calculated usingnumpy.linalg.inv()
.
- insert_operator(A, where1, where2, tags=None, inplace=False)[source]#
Insert an operator on the bond between the specified tensors, e.g.:
      |   |            |   |
    --1---2--   ->   --1-A-2--
      |   |            |   |
- Parameters
A (array) – The operator to insert.
where1 (str, sequence of str, or int) – The tags defining the ‘left’ tensor.
where2 (str, sequence of str, or int) – The tags defining the ‘right’ tensor.
tags (str or sequence of str) – Tags to add to the new operator’s tensor.
inplace (bool, optional) – Whether to perform the insertion inplace.
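A small sketch (assuming an MPS with uniform bond dimension 2, so the inserted operator is 2 x 2):
>>> import numpy as np
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(4, bond_dim=2)
>>> A = 2.0 * np.eye(2)  # operator matching the bond dimension
>>> psi2 = psi.insert_operator(A, 'I1', 'I2', tags='OP')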
- isel(selectors, inplace=False)[source]#
Select specific values for some dimensions/indices of this tensor network, thereby removing them.
- Parameters
- Return type
See also
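For example, selecting every physical index of a small MPS yields the corresponding computational basis amplitude (a brief sketch):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(4, bond_dim=2)
>>> amp = psi.isel({'k0': 0, 'k1': 1, 'k2': 0, 'k3': 1}) ^ all  # amplitude of |0101>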
- largest_element()[source]#
Return the ‘largest element’, in terms of absolute magnitude, of this tensor network. This is defined as the product of the largest elements of each tensor in the network, which would be the largest single term occurring if the TN was summed explicitly.
- loop_simplify(output_inds=None, max_loop_length=None, max_inds=10, cutoff=1e-12, loops=None, cache=None, equalize_norms=False, inplace=False, **split_opts)[source]#
Try and simplify this tensor network by identifying loops and checking for low-rank decompositions across groupings of the loops outer indices.
- Parameters
max_loop_length (None or int, optional) – Largest length of loop to search for; if not set, the size will be set to the length of the first (and shortest) loop found.
cutoff (float, optional) – Cutoff to use for the operator decomposition.
loops (None, sequence or callable) – Loops to check, or a function that generates them.
cache (set, optional) – For performance reasons can supply a cache for already checked loops.
inplace (bool, optional) – Whether to replace the loops inplace.
split_opts – Supplied to
tensor_split()
.
- Return type
- make_norm(mangle_append='*', layer_tags=('KET', 'BRA'), return_all=False)[source]#
Make the norm tensor network of this tensor network
tn.H & tn
.
- make_tids_consecutive(tid0=0)[source]#
Reset the tids - node identifiers - to be consecutive integers.
- mangle_inner_(append=None, which=None)[source]#
Generate new index names for internal bonds, meaning that when this tensor network is combined with another, there should be no collisions.
- Parameters
append (None or str, optional) – Whether and what to append to the indices to perform the mangling. If
None
a whole new random UUID will be generated.which (sequence of str, optional) – Which indices to rename, if
None
(the default), all inner indices.
- multiply(x, inplace=False, spread_over=8)[source]#
Scalar multiplication of this tensor network with
x
.- Parameters
x (scalar) – The number to multiply this tensor network by.
inplace (bool, optional) – Whether to perform the multiplication inplace.
spread_over (int, optional) – How many tensors to try and spread the multiplication over, in order that the effect of multiplying by a very large or small scalar is not concentrated.
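As a usage sketch:
>>> import quimb.tensor as qtn
>>> tn = qtn.MPS_rand_state(8, bond_dim=4)
>>> tn2 = tn.multiply(-10.0)  # overall factor spread over several tensors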
- multiply_each(x, inplace=False)[source]#
Scalar multiplication of each tensor in this tensor network with
x
. If trying to spread a multiplicative factorfac
uniformly over all tensors in the network and the number of tensors is large, then callingmultiply(fac)
can be inaccurate due to precision loss. If one has a routine that can precisely compute thex
to be applied to each tensor, then this function avoids the potential inaccuracies inmultiply()
.- Parameters
x (scalar) – The number that multiplies each tensor in the network
inplace (bool, optional) – Whether to perform the multiplication inplace.
- new_bond(tags1, tags2, **opts)[source]#
Inplace addition of a dummy (size 1) bond between the single tensors specified by
tags1
andtags2
.- Parameters
tags1 (sequence of str) – Tags identifying the first tensor.
tags2 (sequence of str) – Tags identifying the second tensor.
opts – Supplied to
new_bond()
.
See also
- norm(**contract_opts)[source]#
Frobenius norm of this tensor network. Computed by exactly contracting the TN with its conjugate:
\[\|T\|_F = \sqrt{\mathrm{Tr} \left(T^{\dagger} T\right)}\]where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
- property num_indices#
The total number of indices in the tensor network.
- property num_tensors#
The total number of tensors in the tensor network.
- outer_dims_inds()[source]#
Get the ‘outer’ pairs of dimension and indices, i.e. as if this tensor network was fully contracted.
- outer_inds()[source]#
Tuple of exterior indices, assumed to be any lone indices (this only holds generally for non-hyper tensor networks).
- partition(tags, which='any', inplace=False)[source]#
Split this TN into two, based on which tensors have any or all of
tags
. Unlikepartition_tensors
, both results are TNs which inherit the structure of the initial TN.
- Parameters
tags (sequence of str) – The tags to split the network with.
which ({'any', 'all'}) – Whether to split based on matching any or all of the tags.
inplace (bool) – If True, actually remove the tagged tensors from self.
- Returns
untagged_tn, tagged_tn – The untagged and tagged tensor networks.
- Return type
See also
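For instance (assuming an MPS with default site tags 'I0', 'I1', ...):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=3)
>>> env, region = psi.partition(['I0', 'I1', 'I2'], which='any')  # (untagged, tagged)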
- partition_tensors(tags, inplace=False, which='any')[source]#
Split this TN into a list of tensors containing any or all of
tags
and aTensorNetwork
of the rest.
- Parameters
tags (sequence of str) – The list of tags to filter the tensors by. Use
...
(Ellipsis
) to filter all.inplace (bool, optional) – If true, remove tagged tensors from self, else create a new network with the tensors removed.
which ({'all', 'any'}) – Whether to require matching all or any of the tags.
- Returns
(u_tn, t_ts) – The untagged tensor network, and the sequence of tagged Tensors.
- Return type
(TensorNetwork, tuple of Tensors)
See also
- randomize(dtype=None, seed=None, inplace=False, **randn_opts)[source]#
Randomize every tensor in this TN - see
quimb.tensor.tensor_core.Tensor.randomize()
.- Parameters
dtype ({None, str}, optional) – The data type of the random entries. If left as the default
None
, then the data type of the current array will be used.seed (None or int, optional) – Seed for the random number generator.
inplace (bool, optional) – Whether to perform the randomization inplace, by default
False
.randn_opts – Supplied to
randn()
.
- Return type
- rank_simplify(output_inds=None, equalize_norms=False, cache=None, max_combinations=500, inplace=False)[source]#
Simplify this tensor network by performing contractions that don’t increase the rank of any tensors.
- Parameters
output_inds (sequence of str, optional) – Explicitly set which indices of the tensor network are output indices and thus should not be modified.
equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in
tn.exponent
.cache (None or set) – Persistent cache used to mark already checked tensors.
inplace (bool, optional) – Whether to perform the rank simplification inplace.
- Return type
See also
- reduce_inds_onto_bond(inda, indb, tags=None, drop_tags=False)[source]#
Use QR factorization to ‘pull’ the indices
inda
andindb
off of their respective tensors and onto the bond between them. This is an inplace operation.
- reindex(index_map, inplace=False)[source]#
Rename indices for all tensors in this network, optionally in-place.
- Parameters
index_map (dict-like) – Mapping of pairs
{old_ind: new_ind, ...}
.
- replace_section_with_svd(start, stop, eps, **replace_with_svd_opts)[source]#
Take a 1D tensor network, and replace a section with a SVD. See
replace_with_svd()
.- Parameters
start (int) – Section start index.
stop (int) – Section stop index, not included itself.
eps (float) – Precision of SVD.
replace_with_svd_opts – Supplied to
replace_with_svd()
.
- Return type
- replace_with_identity(where, which='any', inplace=False)[source]#
Replace all tensors marked by
where
with an identity. E.g. ifX
denotewhere
tensors:

    ---1  X--X--2---         ---1---2---
       |  |  |  |      ==>      |  |
       X--X--X
- Parameters
where (tag or seq of tags) – Tags specifying the tensors to replace.
which ({'any', 'all'}) – Whether to replace tensors matching any or all the tags
where
.inplace (bool) – Perform operation in place.
- Returns
The TN, with section replaced with identity.
- Return type
See also
- replace_with_svd(where, left_inds, eps, *, which='any', right_inds=None, method='isvd', max_bond=None, absorb='both', cutoff_mode='rel', renorm=None, ltags=None, rtags=None, keep_tags=True, start=None, stop=None, inplace=False)[source]#
Replace all tensors marked by
where
with an iteratively constructed SVD. E.g. ifX
denotewhere
tensors:

                    :__       ___:
    ---X  X--X  X---:   \   /    :
       |  |  |  |   :    U~s~VH---:
    ---X--X--X--X---:   /   \    :
             |      :__/     \   :
             +---             \__:
             X
                 left_inds : right_inds
- Parameters
where (tag or seq of tags) – Tags specifying the tensors to replace.
left_inds (ind or sequence of inds) – The indices defining the left hand side of the SVD.
eps (float) – The tolerance to perform the SVD with, affects the number of singular values kept. See
quimb.linalg.rand_linalg.estimate_rank()
.which ({'any', 'all', '!any', '!all'}, optional) – Whether to replace tensors matching any or all the tags
where
, prefix with ‘!’ to invert the selection.right_inds (ind or sequence of inds, optional) – The indices defining the right hand side of the SVD, these can be automatically worked out, but for hermitian decompositions the order is important and thus can be given here explicitly.
method (str, optional) – How to perform the decomposition, if not an iterative method the subnetwork dense tensor will be formed first, see
tensor_split()
for options.max_bond (int, optional) – The maximum bond to keep, defaults to no maximum (-1).
ltags (sequence of str, optional) – Tags to add to the left tensor.
rtags (sequence of str, optional) – Tags to add to the right tensor.
keep_tags (bool, optional) – Whether to propagate tags found in the subnetwork to both new tensors or drop them, defaults to
True
.start (int, optional) – If given, assume can use
TNLinearOperator1D
.stop (int, optional) – If given, assume can use
TNLinearOperator1D
.inplace (bool, optional) – Perform operation in place.
- Return type
See also
- retag(tag_map, inplace=False)[source]#
Rename tags for all tensors in this network, optionally in-place.
- Parameters
tag_map (dict-like) – Mapping of pairs
{old_tag: new_tag, ...}
.inplace (bool, optional) – Perform operation inplace or return copy (default).
- select(tags, which='all', virtual=True)[source]#
Get a TensorNetwork comprising tensors that match all or any of
tags
, inherit the network properties/structure fromself
. This returns a view of the tensors, not a copy.
- Parameters
- Returns
tagged_tn – A tensor network containing the tagged tensors.
- Return type
See also
select_tensors
,select_neighbors
,partition
,partition_tensors
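A brief sketch (assuming default MPS site tags):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=3)
>>> sub = psi.select(['I2', 'I3'], which='any')  # view of the two tagged tensors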
- select_local(tags, which='all', max_distance=1, fillin=False, reduce_outer=None, virtual=True, include=None, exclude=None)[source]#
Select a local region of tensors, based on graph distance
max_distance
to any tagged tensors.
- Parameters
tags (str or sequence of str) – The tag or tag sequence defining the initial region.
which ({'all', 'any', '!all', '!any'}, optional) – Whether to require matching all or any of the tags.
max_distance (int, optional) – The maximum distance to the initial tagged region.
fillin (bool or int, optional) –
Once the local region has been selected based on graph distance, whether and how many times to ‘fill-in’ corners by adding tensors connected multiple times. For example, if
R
is an initially tagged tensor andx
are locally selected tensors:

     fillin=0        fillin=1        fillin=2

    | | | | |       | | | | |       | | | | |
   -o-o-x-o-o-     -o-x-x-x-o-     -x-x-x-x-x-
    | | | | |       | | | | |       | | | | |
   -o-x-x-x-o-     -x-x-x-x-x-     -x-x-x-x-x-
    | | | | |       | | | | |       | | | | |
   -x-x-R-x-x-     -x-x-R-x-x-     -x-x-R-x-x-
reduce_outer ({'sum', 'svd', 'svd-sum', 'reflect'}, optional) – Whether and how to reduce any outer indices of the selected region.
virtual (bool, optional) – Whether the returned tensor network should be a view of the tensors or a copy (
virtual=False
).include (sequence of int, optional) – Only include tensor with these
tids
.exclude (sequence of int, optional) – Only include tensor without these
tids
.
- Return type
- select_neighbors(tags, which='any')[source]#
Select any neighbouring tensors to those specified by
tags
.
- Parameters
tags (sequence of str, int) – Tags specifying tensors.
which ({'any', 'all'}, optional) – How to select tensors based on
tags
.
- Returns
The neighbouring tensors.
- Return type
See also
- select_tensors(tags, which='all')[source]#
Return the sequence of tensors that match
tags
. Ifwhich='all'
, each tensor must contain every tag. Ifwhich='any'
, each tensor can contain any of the tags.- Parameters
tags (str or sequence of str) – The tag or tag sequence.
which ({'all', 'any'}) – Whether to require matching all or any of the tags.
- Returns
tagged_tensors – The tagged tensors.
- Return type
tuple of Tensor
See also
- property shape#
Actual, i.e. exterior, shape of this TensorNetwork.
- split(left_inds, method='svd', get=None, absorb='both', max_bond=None, cutoff=1e-10, cutoff_mode='rel', renorm=None, ltags=None, rtags=None, stags=None, bond_ind=None, right_inds=None)[source]#
Decompose this tensor into two tensors.
- Parameters
T (Tensor or TNLinearOperator) – The tensor (network) to split.
left_inds (str or sequence of str) – The index or sequence of inds, which
T
should already have, to split to the ‘left’. You can supplyNone
here if you supplyright_inds
instead.method (str, optional) –
How to split the tensor, only some methods allow bond truncation:
'svd'
: full SVD, allows truncation.'eig'
: full SVD via eigendecomp, allows truncation.'svds'
: iterative svd, allows truncation.'isvd'
: iterative svd using interpolative methods, allows truncation.'rsvd'
: randomized iterative svd with truncation.'eigh'
: full eigen-decomposition, tensor must be hermitian.'eigsh'
: iterative eigen-decomposition, tensor must be hermitian.'qr'
: full QR decomposition.'lq'
: full LQ decomposition.'cholesky'
: full cholesky decomposition, tensor must be positive.
get ({None, 'arrays', 'tensors', 'values'}) –
If given, what to return instead of a TN describing the split:
None
: a tensor network of the two (or three) tensors.'arrays'
: the raw data arrays as a tuple(l, r)
or(l, s, r)
depending onabsorb
.'tensors'
: the new tensors as a tuple(Tl, Tr)
or(Tl, Ts, Tr)
depending onabsorb
.'values'
: only compute and return the singular valuess
.
absorb ({'both', 'left', 'right', None}, optional) – Whether to absorb the singular values into both, the left, or the right unitary matrix respectively, or neither. If neither (
absorb=None
) then the singular values will be returned separately in their own 1D tensor or array. In that case ifget=None
the tensor network returned will have a hyperedge corresponding to the new bond index connecting three tensors. Ifget='tensors'
orget='arrays'
then a tuple like(left, s, right)
is returned.max_bond (None or int) – If integer, the maximum number of singular values to keep, regardless of
cutoff
.cutoff (float, optional) – The threshold below which to discard singular values, only applies to rank revealing methods (not QR, LQ, or cholesky).
cutoff_mode ({'sum2', 'rel', 'abs', 'rsum2'}) –
Method with which to apply the cutoff threshold:
'rel'
: values less thancutoff * s[0]
discarded.'abs'
: values less thancutoff
discarded.'sum2'
: sum squared of values discarded must be< cutoff
.'rsum2'
: sum squared of values discarded must be less thancutoff
times the total sum of squared values.'sum1'
: sum values discarded must be< cutoff
.'rsum1'
: sum of values discarded must be less thancutoff
times the total sum of values.
renorm ({None, bool, or int}, optional) – Whether to renormalize the kept singular values, assuming the bond has a canonical environment, corresponding to maintaining the frobenius or nuclear norm. If
None
(the default) then this is automatically turned on only forcutoff_method in {'sum2', 'rsum2', 'sum1', 'rsum1'}
withmethod in {'svd', 'eig', 'eigh'}
.ltags (sequence of str, optional) – Add these new tags to the left tensor.
rtags (sequence of str, optional) – Add these new tags to the right tensor.
stags (sequence of str, optional) – Add these new tags to the singular value tensor.
bond_ind (str, optional) – Explicitly name the new bond, else a random one will be generated.
right_inds (sequence of str, optional) – Explicitly give the right indices, otherwise they will be worked out. This is a minor performance feature.
- Returns
Depending on if
get
isNone
,'tensors'
,'arrays'
, or'values'
. In the first three cases, ifabsorb
is set, then the returned objects correspond to(left, right)
whereas ifabsorb=None
the returned objects correspond to(left, singular_values, right)
.- Return type
TensorNetwork or tuple[Tensor] or tuple[array] or 1D-array
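As a minimal sketch of splitting a single tensor (the shapes and index names are illustrative):
>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> tl, tr = t.split(left_inds=('a', 'b'), get='tensors')  # a new bond links ('a', 'b') to ('c',)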
- split_simplify(atol=1e-12, equalize_norms=False, cache=None, inplace=False)[source]#
Find tensors which have low rank SVD decompositions across any combination of bonds and perform them.
- Parameters
atol (float, optional) – Cutoff used when attempting low rank decompositions.
equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in
tn.exponent
.cache (None or set) – Persistent cache used to mark already checked tensors.
inplace (bool, optional) – Whether to perform the split simplification inplace.
- split_tensor(tags, left_inds, **split_opts)[source]#
Split the single tensor uniquely identified by
tags
, adding the resulting tensors from the decomposition back into the network. Inplace operation.
- squeeze(fuse=False, inplace=False)[source]#
Drop singlet bonds and dimensions from this tensor network. If
fuse=True
also fuse all multibonds between tensors.
- strip_exponent(tid_or_tensor, value=None)[source]#
Scale the elements of tensor corresponding to
tid
so that the norm of the array is some value, which defaults to1
. The log of the scaling factor, base 10, is then accumulated in theexponent
attribute.
- subgraphs(virtual=False)[source]#
Split this tensor network into disconnected subgraphs.
- Parameters
virtual (bool, optional) – Whether the tensor networks should view the original tensors or not - by default take copies.
- Return type
- property tensors#
Get the tuple of tensors in this tensor network.
- tensors_sorted()[source]#
Return a tuple of tensors sorted by their respective tags, such that the tensors of two networks with the same tag structure can be iterated over pairwise.
- tids_are_connected(tids)[source]#
Check whether nodes
tids
are connected.- Parameters
tids (sequence of int) – Nodes to check.
- Return type
- quimb.tensor.tensor_core.array_direct_product(X, Y, sum_axes=())[source]#
Direct product of two arrays.
- Parameters
X (numpy.ndarray) – First tensor.
Y (numpy.ndarray) – Second tensor, same shape as
X
.sum_axes (sequence of int) – Axes to sum over rather than direct product, e.g. physical indices when adding tensor networks.
- Returns
Z – Same shape as
X
andY
, but with every dimension the sum of the two respective dimensions, unless it is included insum_axes
.- Return type
- quimb.tensor.tensor_core.bonds(t1, t2)[source]#
Getting any indices connecting the Tensor(s) or TensorNetwork(s)
t1
andt2
.
- quimb.tensor.tensor_core.bonds_size(t1, t2)[source]#
Get the size of the bonds linking tensors or tensor networks
t1
andt2
.
- quimb.tensor.tensor_core.connect(t1, t2, ax1, ax2)[source]#
Connect two tensors by setting a shared index for the specified dimensions. This is an inplace operation that will also affect any tensor networks viewing these tensors.
- Parameters
Examples
>>> X = rand_tensor([2, 3], inds=['a', 'b'])
>>> Y = rand_tensor([3, 4], inds=['c', 'd'])

>>> tn = (X | Y)  # is *view* of tensors (``&`` would copy them)
>>> print(tn)
TensorNetwork([
    Tensor(shape=(2, 3), inds=('a', 'b'), tags=()),
    Tensor(shape=(3, 4), inds=('c', 'd'), tags=()),
])

>>> connect(X, Y, 1, 0)  # modifies tensors *and* viewing TN
>>> print(tn)
TensorNetwork([
    Tensor(shape=(2, 3), inds=('a', '_e9021e0000002'), tags=()),
    Tensor(shape=(3, 4), inds=('_e9021e0000002', 'd'), tags=()),
])

>>> tn ^ all
Tensor(shape=(2, 4), inds=('a', 'd'), tags=())
- quimb.tensor.tensor_core.empty_symbol_map()[source]#
Get a default dictionary that will populate with symbol entries as they are accessed.
- quimb.tensor.tensor_core.get_tags(ts)[source]#
Return all the tags found in
ts
.- Parameters
ts (Tensor, TensorNetwork or sequence of either) – The objects to combine tags from.
- quimb.tensor.tensor_core.group_inds(t1, t2)[source]#
Group bonds into left only, shared, and right only.
- quimb.tensor.tensor_core.maybe_unwrap(t, preserve_tensor=False, equalize_norms=False)[source]#
Maybe unwrap a
TensorNetwork
orTensor
into aTensor
or scalar, depending on how many tensors and indices it has.
- quimb.tensor.tensor_core.new_bond(T1, T2, size=1, name=None, axis1=0, axis2=0)[source]#
Inplace addition of a new bond between tensors
T1
andT2
. The size of the new bond can be specified, in which case the new array parts will be filled with zeros.
- Parameters
T1 (Tensor) – First tensor to modify.
T2 (Tensor) – Second tensor to modify.
size (int, optional) – Size of the new dimension.
name (str, optional) – Name for the new index.
axis1 (int, optional) – Position on the first tensor for the new dimension.
axis2 (int, optional) – Position on the second tensor for the new dimension.
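For example (a sketch with two previously unconnected tensors):
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import new_bond
>>> X = qtn.rand_tensor((2, 2), inds=('a', 'b'))
>>> Y = qtn.rand_tensor((2, 2), inds=('c', 'd'))
>>> new_bond(X, Y, size=3, name='bond')  # both tensors gain a size-3 index 'bond', padded with zeros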
- quimb.tensor.tensor_core.oset_union(xs)[source]#
Non-variadic ordered set union taking any sequence of iterables.
- quimb.tensor.tensor_core.rand_padder(vector, pad_width, iaxis, kwargs)[source]#
Helper function for padding tensor with random entries.
- quimb.tensor.tensor_core.rand_uuid(base='')[source]#
Return a guaranteed unique, shortish identifier, optionally appended to
base
.Examples
>>> rand_uuid()
'_2e1dae1b'

>>> rand_uuid('virt-bond')
'virt-bond_bf342e68'
- quimb.tensor.tensor_core.tensor_balance_bond(t1, t2, smudge=1e-06)[source]#
Gauge the bond between two tensors such that the norm of the ‘columns’ of the tensors on each side is the same for each index of the bond.
- quimb.tensor.tensor_core.tensor_canonize_bond(T1, T2, absorb='right', gauges=None, gauge_smudge=1e-06, **split_opts)[source]#
Inplace ‘canonization’ of two tensors. This gauges the bond between the two such that
T1
is isometric:

      |   |          |   |          |   |
    --1---2--  =>  -->~R-2--  =>  -->~~~O--
      |   |          |   |          |   |
                       .             ...
                     <QR>          contract
- Parameters
T1 (Tensor) – The tensor to be isometrized.
T2 (Tensor) – The tensor to absorb the R-factor into.
absorb ({'right', 'left', 'both', None}, optional) – Which tensor to effectively absorb the singular values into.
split_opts – Supplied to
tensor_split()
, with modified defaults ofmethod=='qr'
andabsorb='right'
.
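A small sketch (assuming two tensors already sharing the index 'bond'):
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_canonize_bond
>>> T1 = qtn.rand_tensor((2, 3), inds=('x', 'bond'))
>>> T2 = qtn.rand_tensor((3, 4), inds=('bond', 'y'))
>>> tensor_canonize_bond(T1, T2)  # inplace: T1 becomes isometric with respect to 'bond'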
- quimb.tensor.tensor_core.tensor_compress_bond(T1, T2, reduced=True, absorb='both', gauges=None, gauge_smudge=1e-06, info=None, **compress_opts)[source]#
Inplace compress between the two single tensors. It performs the following steps to minimize the size of the SVD:
    a)│   │         b)│        │        c)│       │
    ━━●━━━●━━  ->  ━━>━━○━━○━━<━━  ->  ━━>━━━M━━━<━━
      │   │           │  ....  │          │       │
                     <*>      <*>      contract  <*>
                      QR       LQ       -><-     SVD

        d)│            │        e)│   │
    ->  ━━>━━━ML──MR━━━<━━  ->  ━━●───●━━
          │....    ....│          │   │
        contract  contract    ^compressed bond
         -><-      -><-
- Parameters
T1 (Tensor) – The left tensor.
T2 (Tensor) – The right tensor.
max_bond (int or None, optional) – The maxmimum bond dimension.
cutoff (float, optional) – The singular value cutoff to use.
reduced (bool, optional) – Whether to perform the QR reduction as above or not.
absorb ({'both', 'left', 'right', None}, optional) – Where to absorb the singular values after decomposition.
info (None or dict, optional) – A dict for returning extra information such as the singular values.
compress_opts – Supplied to
tensor_split()
.
- quimb.tensor.tensor_core.tensor_contract(*tensors, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, **contract_opts)[source]#
Efficiently contract multiple tensors, combining their tags.
- Parameters
tensors (sequence of Tensor) – The tensors to contract.
output_inds (sequence of str) – If given, the desired order of output indices, else defaults to the order they occur in the input indices. You need to supply this if the tensors supplied have any hyper indices.
optimize ({None, str, path_like, PathOptimizer}, optional) –
The contraction path optimization strategy to use.
None
: use the default strategy,str: use the preset strategy with the given name,
path_like: use this exact path,
opt_einsum.PathOptimizer
: find the path using this optimizer.cotengra.HyperOptimizer
: find and perform the contraction usingcotengra
.cotengra.ContractionTree
: use this exact tree and perform contraction usingcotengra
.
Contraction with
cotengra
might be a bit more efficient but the main reason would be to handle sliced contraction automatically.get ({None, 'expression', 'path-info', 'opt_einsum'}, optional) –
What to return. If:
None
(the default) - return the resulting scalar or Tensor.'expression'
- return theopt_einsum
expression that performs the contraction and operates on the raw arrays.'symbol-map'
- return the dict mappingopt_einsum
symbols to tensor indices.'path-info'
- return the fullopt_einsum
path object with detailed information such as flop cost. The symbol-map is also added to thequimb_symbol_map
attribute.
backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Must be a valid
opt_einsum
backend with the relevant library installed.preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.
contract_opts – Passed to
opt_einsum.contract_expression
oropt_einsum.contract_path
.
- Return type
scalar or Tensor
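As a basic usage sketch:
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_contract
>>> X = qtn.rand_tensor((2, 3), inds=('a', 'b'), tags='X')
>>> Y = qtn.rand_tensor((3, 4), inds=('b', 'c'), tags='Y')
>>> Z = tensor_contract(X, Y)  # Tensor with inds ('a', 'c') and combined tags {'X', 'Y'}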
- quimb.tensor.tensor_core.tensor_direct_product(T1, T2, sum_inds=(), inplace=False)[source]#
Direct product of two Tensors. Any axes included in
sum_inds
must be the same size and will be summed over rather than concatenated. Summing over contractions of TensorNetworks equates to contracting a TensorNetwork made of direct products of each set of tensors. I.e. (a1 @ b1) + (a2 @ b2) == (a1 (+) a2) @ (b1 (+) b2).
- Parameters
- Returns
Like
T1
, but with each dimension doubled in size if not insum_inds
.- Return type
- quimb.tensor.tensor_core.tensor_fuse_squeeze(t1, t2, squeeze=True, gauges=None)[source]#
If
t1
andt2
share more than one bond, fuse them into a single bond, and if the size of the shared dimension(s) is 1, squeeze it. Inplace operation.
- quimb.tensor.tensor_core.tensor_make_single_bond(t1, t2, gauges=None)[source]#
If two tensors share multibonds, fuse them together and return the left indices, bond if it exists, and right indices. Handles simple
gauges
.
- quimb.tensor.tensor_core.tensor_network_distance(tnA, tnB, xAA=None, xAB=None, xBB=None, method='auto', **contract_opts)[source]#
Compute the Frobenius norm distance between two tensor networks:
\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle )^{1/2}\]
The two networks should have matching external indices.
- Parameters
tnA (TensorNetwork or Tensor) – The first tensor network operator.
tnB (TensorNetwork or Tensor) – The second tensor network operator.
xAA (None or scalar) – The value of
A.H @ A
if you already know it (or it doesn’t matter).xAB (None or scalar) – The value of
A.H @ B
if you already know it (or it doesn’t matter).xBB (None or scalar) – The value of
B.H @ B
if you already know it (or it doesn’t matter).method ({'auto', 'overlap', 'dense'}, optional) – How to compute the distance. If
'overlap'
, the distance will be computed as the sum of overlaps, without explicitly forming the dense operators. If
, the operators will be directly formed and the norm computed, which can be quicker when the exterior dimensions are small. If'auto'
, the dense method will be used if the total operator (outer) size is<= 2**16
.contract_opts – Supplied to
contract()
.
- Returns
D
- Return type
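For example (a sketch with two random MPS sharing their outer indices):
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_network_distance
>>> A = qtn.MPS_rand_state(6, bond_dim=4)
>>> B = qtn.MPS_rand_state(6, bond_dim=4)
>>> d = tensor_network_distance(A, B)  # scalar Frobenius norm distance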
- quimb.tensor.tensor_core.tensor_network_fit_als(tn, tn_target, tags=None, steps=100, tol=1e-09, solver='solve', enforce_pos=False, pos_smudge=None, tnAA=None, tnAB=None, xBB=None, contract_optimize='greedy', inplace=False, progbar=False)[source]#
Optimize the fit of
tn
with respect totn_target
using alternating least squares (ALS). This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, using overlaps.
- Parameters
tn (TensorNetwork) – The tensor network to fit.
tn_target (TensorNetwork) – The target tensor network to fit
tn
to.tags (sequence of str, optional) – If supplied, only optimize tensors matching any of given tags.
steps (int, optional) – The maximum number of ALS steps.
tol (float, optional) – The target norm distance.
solver ({'solve', 'lstsq', ...}, optional) – The underlying driver function used to solve the local minimization, e.g.
numpy.linalg.solve
for'solve'
withnumpy
backend.enforce_pos (bool, optional) – Whether to enforce positivity of the locally formed environments, which can be more stable.
pos_smudge (float, optional) – If enforcing positivity, the level below which to clip eigenvalues to make the local environment positive definite.
tnAA (TensorNetwork, optional) – If you have already formed the overlap
tn.H & tn
, maybe approximately, you can supply it here. The unconjugated layer should have tag'__KET__'
and the conjugated layer'__BRA__'
. Each tensor being optimized should have tag'__VAR{i}__'
.tnAB (TensorNetwork, optional) – If you have already formed the overlap
tn_target.H & tn
, maybe approximately, you can supply it here. Each tensor being optimized should have tag'__VAR{i}__'
xBB (float, optional) – If you already know, or have already computed,
tn_target.H @ tn_target
, or it doesn’t matter, you can supply the value here.contract_optimize (str, optional) – The contraction path optimizer used to contract the local environments. Note
'greedy'
is the default in order to maximize shared work.inplace (bool, optional) – Update
tn
in place.progbar (bool, optional) – Show a live progress bar of the fitting process.
- Return type
- quimb.tensor.tensor_core.tensor_network_fit_autodiff(tn, tn_target, steps=1000, tol=1e-09, autodiff_backend='autograd', contract_optimize='auto-hq', distance_method='auto', inplace=False, progbar=False, **kwargs)[source]#
Optimize the fit of
tn
with respect totn_target
using automatic differentiation. This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, using overlaps.
- Parameters
tn (TensorNetwork) – The tensor network to fit.
tn_target (TensorNetwork) – The target tensor network to fit
tn
to.steps (int, optional) – The maximum number of autodiff steps.
tol (float, optional) – The target norm distance.
autodiff_backend (str, optional) – Which backend library to use to perform the gradient computation.
contract_optimize (str, optional) – The contraction path optimizer used to contract the overlaps.
distance_method ({'auto', 'dense', 'overlap'}, optional) – Supplied to
tensor_network_distance()
, controls how the distance is computed.inplace (bool, optional) – Update
tn
in place.progbar (bool, optional) – Show a live progress bar of the fitting process.
kwargs – Passed to
TNOptimizer
.
- quimb.tensor.tensor_core.tensor_network_sum(tnA, tnB)[source]#
Sum of two tensor networks, whose indices should match exactly, using direct products.
- Parameters
tnA (TensorNetwork) – The first tensor network.
tnB (TensorNetwork) – The second tensor network.
- Returns
The sum of
tnA
andtnB
, with increased bond dimensions.
- Return type
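A minimal sketch (randomize() is used here to obtain a second network with exactly matching indices):
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_network_sum
>>> A = qtn.MPS_rand_state(5, bond_dim=2)
>>> B = A.randomize()  # same indices and structure, fresh random entries
>>> C = tensor_network_sum(A, B)  # inner bond dimensions double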
- quimb.tensor.tensor_core.tensor_split(T, left_inds, method='svd', get=None, absorb='both', max_bond=None, cutoff=1e-10, cutoff_mode='rel', renorm=None, ltags=None, rtags=None, stags=None, bond_ind=None, right_inds=None)[source]#
Decompose this tensor into two tensors.
- Parameters
T (Tensor or TNLinearOperator) – The tensor (network) to split.
left_inds (str or sequence of str) – The index or sequence of inds, which
T
should already have, to split to the ‘left’. You can supplyNone
here if you supplyright_inds
instead.method (str, optional) –
How to split the tensor, only some methods allow bond truncation:
'svd'
: full SVD, allows truncation.'eig'
: full SVD via eigendecomp, allows truncation.'svds'
: iterative svd, allows truncation.'isvd'
: iterative svd using interpolative methods, allows truncation.'rsvd'
: randomized iterative svd with truncation.'eigh'
: full eigen-decomposition, tensor must be hermitian.'eigsh'
: iterative eigen-decomposition, tensor must be hermitian.'qr'
: full QR decomposition.'lq'
: full LQ decomposition.'cholesky'
: full cholesky decomposition, tensor must be positive.
get ({None, 'arrays', 'tensors', 'values'}) –
If given, what to return instead of a TN describing the split:
None
: a tensor network of the two (or three) tensors.'arrays'
: the raw data arrays as a tuple(l, r)
or(l, s, r)
depending onabsorb
.'tensors'
: the new tensors as a tuple(Tl, Tr)
or(Tl, Ts, Tr)
depending onabsorb
.'values'
: only compute and return the singular valuess
.
absorb ({'both', 'left', 'right', None}, optional) – Whether to absorb the singular values into both, the left, or the right unitary matrix respectively, or neither. If neither (
absorb=None
) then the singular values will be returned separately in their own 1D tensor or array. In that case ifget=None
the tensor network returned will have a hyperedge corresponding to the new bond index connecting three tensors. Ifget='tensors'
orget='arrays'
then a tuple like(left, s, right)
is returned.max_bond (None or int) – If integer, the maximum number of singular values to keep, regardless of
cutoff
.cutoff (float, optional) – The threshold below which to discard singular values, only applies to rank revealing methods (not QR, LQ, or cholesky).
cutoff_mode ({'sum2', 'rel', 'abs', 'rsum2'}) –
Method with which to apply the cutoff threshold:
'rel'
: values less thancutoff * s[0]
discarded.'abs'
: values less thancutoff
discarded.'sum2'
: sum squared of values discarded must be< cutoff
.'rsum2'
: sum squared of values discarded must be less thancutoff
times the total sum of squared values.'sum1'
: sum values discarded must be< cutoff
.'rsum1'
: sum of values discarded must be less thancutoff
times the total sum of values.
renorm ({None, bool, or int}, optional) – Whether to renormalize the kept singular values, assuming the bond has a canonical environment, corresponding to maintaining the frobenius or nuclear norm. If
None
(the default) then this is automatically turned on only forcutoff_method in {'sum2', 'rsum2', 'sum1', 'rsum1'}
withmethod in {'svd', 'eig', 'eigh'}
.ltags (sequence of str, optional) – Add these new tags to the left tensor.
rtags (sequence of str, optional) – Add these new tags to the right tensor.
stags (sequence of str, optional) – Add these new tags to the singular value tensor.
bond_ind (str, optional) – Explicitly name the new bond, else a random one will be generated.
right_inds (sequence of str, optional) – Explicitly give the right indices, otherwise they will be worked out. This is a minor performance feature.
- Returns
Depending on if
get
isNone
,'tensors'
,'arrays'
, or'values'
. In the first three cases, ifabsorb
is set, then the returned objects correspond to(left, right)
whereas ifabsorb=None
the returned objects correspond to(left, singular_values, right)
.- Return type
TensorNetwork or tuple[Tensor] or tuple[array] or 1D-array