quimb.tensor.circuit#

Functions

apply_Rx(psi, theta, i[, parametrize])

Apply an X-rotation of theta to tensor network wavefunction psi.

apply_Ry(psi, theta, i[, parametrize])

Apply a Y-rotation of theta to tensor network wavefunction psi.

apply_Rz(psi, theta, i[, parametrize])

Apply a Z-rotation of theta to tensor network wavefunction psi.

apply_U1(psi, lamda, i[, parametrize])

apply_U2(psi, phi, lamda, i[, parametrize])

apply_U3(psi, theta, phi, lamda, i[, ...])

apply_cu1(psi, lamda, i, j[, parametrize])

apply_cu2(psi, phi, lamda, i, j[, parametrize])

apply_cu3(psi, theta, phi, lamda, i, j[, ...])

apply_fsim(psi, theta, phi, i, j[, parametrize])

apply_fsimg(psi, theta, zeta, chi, gamma, ...)

apply_rzz(psi, gamma, i, j[, parametrize])

apply_su4(psi, theta1, phi1, lamda1, theta2, ...)

See https://arxiv.org/abs/quant-ph/0308006 - Fig. 7.

apply_swap(psi, i, j, **gate_opts)

build_gate_1(gate[, tags])

Build a function that applies gate to a tensor network wavefunction.

build_gate_2(gate[, tags])

Build a function that applies gate to a tensor network wavefunction.

cu1(lamda)

cu1_param_gen(params)

cu2(phi, lamda)

cu2_param_gen(params)

cu3(theta, phi, lamda)

cu3_param_gen(params)

fsim_param_gen(params)

fsimg_param_gen(params)

parse_qasm(qasm)

Parse qasm from a string.

parse_qasm_file(fname, **kwargs)

Parse a qasm file.

parse_qasm_url(url, **kwargs)

Parse a qasm url.

rehearsal_dict(tn, info)

rx_gate_param_gen(params)

ry_gate_param_gen(params)

rz_gate_param_gen(params)

rzz(gamma)

The gate describing an Ising interaction evolution, or 'ZZ'-rotation.

rzz_param_gen(params)

sample_bitstring_from_prob_ndarray(p)

Sample a bitstring from n-dimensional tensor p of probabilities.

su4_gate_param_gen(params)

See https://arxiv.org/abs/quant-ph/0308006 - Fig. 7.

u1_gate_param_gen(params)

u2_gate_param_gen(params)

u3_gate_param_gen(params)

Classes

Circuit([N, psi0, gate_opts, tags, ...])

Class for simulating quantum circuits using tensor networks.

CircuitDense([N, psi0, gate_opts, tags])

Quantum circuit simulation keeping the state in full dense form.

CircuitMPS([N, psi0, gate_opts, tags])

Quantum circuit simulation keeping the state always in an MPS form.

class quimb.tensor.circuit.Circuit(N=None, psi0=None, gate_opts=None, tags=None, psi0_dtype='complex128', psi0_tag='PSI0', bra_site_ind_id='b{}')[source]#

Class for simulating quantum circuits using tensor networks.

Parameters
  • N (int, optional) – The number of qubits.

  • psi0 (TensorNetwork1DVector, optional) – The initial state, assumed to be |00000....0> if not given. The state is always copied and the tag PSI0 added.

  • gate_opts (dict_like, optional) – Default keyword arguments to supply to each gate_TN_1D() call during the circuit.

  • tags (str or sequence of str, optional) – Tag(s) to add to the initial wavefunction tensors (whether these are propagated to the rest of the circuit’s tensors depends on gate_opts).

  • psi0_dtype (str, optional) – Ensure the initial state has this dtype.

  • psi0_tag (str, optional) – Ensure the initial state has this tag.

  • bra_site_ind_id (str, optional) – Use this to label ‘bra’ site indices when creating certain (mostly internal) intermediate tensor networks.

psi#

The current wavefunction.

Type

TensorNetwork1DVector

Examples

Create 3-qubit GHZ-state:

>>> qc = qtn.Circuit(3)
>>> gates = [
        ('H', 0),
        ('H', 1),
        ('CNOT', 1, 2),
        ('CNOT', 0, 2),
        ('H', 0),
        ('H', 1),
        ('H', 2),
    ]
>>> qc.apply_gates(gates)
>>> qc.psi
<TensorNetwork1DVector(tensors=12, indices=14, L=3, max_bond=2)>
>>> qc.psi.to_dense().round(4)
qarray([[ 0.7071+0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [-0.    +0.j],
        [-0.    +0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [ 0.7071+0.j]])
>>> for b in qc.sample(10):
...     print(b)
000
000
111
000
111
111
000
111
000
000
amplitude(b, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend='auto', dtype='complex128', target_size=None, rehearse=False)[source]#

Get the amplitude coefficient of bitstring b.

\[c_b = \langle b | \psi \rangle\]
Parameters
  • b (str or sequence of int) – The bitstring to compute the transition amplitude for.

  • optimize (str, optional) – Contraction path optimizer to use for the amplitude, can be a reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction path but don’t actually perform the contraction. Returns a dict with keys "tn" and 'info' with the tensor network that will be contracted and the corresponding contraction path if so.
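
As a minimal usage sketch, continuing with the 3-qubit GHZ circuit qc built in the class example above:

>>> c0 = qc.amplitude('000')     # close to 1/sqrt(2) ~ 0.7071
>>> c1 = qc.amplitude('111')     # the other non-zero amplitude
>>> abs(c0)**2 + abs(c1)**2      # these two carry essentially all the weight, ~ 1.0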

amplitude_rehearse(b='random', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, optimize='auto-hq', dtype='complex128', rehearse=True)[source]#

Perform just the tensor network simplifications and contraction path finding associated with computing a single amplitude (caching the results) but don’t perform the actual contraction.

Parameters
  • b ('random', str or sequence of int) – The bitstring to rehearse computing the transition amplitude for, if 'random' (the default) a random bitstring will be used.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

Return type

dict
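
A sketch of typical usage, assuming the returned 'info' object is an opt_einsum PathInfo-like object exposing attributes such as opt_cost and largest_intermediate:

>>> rehs = qc.amplitude_rehearse()        # a random bitstring is used by default
>>> rehs['tn']                            # the simplified tensor network that would be contracted
>>> rehs['info'].opt_cost                 # estimated scalar operation count (assumed attribute)
>>> rehs['info'].largest_intermediate     # largest intermediate tensor size (assumed attribute)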

apply_gate(gate_id, *gate_args, gate_round=None, **gate_opts)[source]#

Apply a single gate to this tensor network quantum circuit. If gate_round is supplied the tensor(s) added will be tagged with 'ROUND_{gate_round}'. Alternatively, putting an integer first like so:

circuit.apply_gate(10, 'H', 7)

is automatically translated to:

circuit.apply_gate('H', 7, gate_round=10)
Parameters
  • gate_id (str) – Which type of gate to apply.

  • gate_args (list[str]) – The arguments to supply to it.

  • gate_round (int, optional) – The gate round. If gate_id is integer-like, will also be taken from here, with then gate_id, gate_args = gate_args[0], gate_args[1:].

  • gate_opts – Supplied to the gate function, options here will override the default gate_opts.
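
A short sketch of the two calling conventions and the parametrize option (assuming import quimb.tensor as qtn):

circ = qtn.Circuit(2)
circ.apply_gate('H', 0, gate_round=0)              # tensors tagged with 'ROUND_0'
circ.apply_gate(0, 'CNOT', 0, 1)                   # equivalent round-first form
circ.apply_gate('RZ', 0.42, 1, parametrize=True)   # keep the rotation angle as a free parameter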

apply_gate_raw(U, where, tags=None, gate_round=None, **gate_opts)[source]#

Apply the raw array U as a gate on qubits in where. It will be assumed to be unitary for the sake of computing reverse lightcones.

apply_gates(gates)[source]#

Apply a sequence of gates to this tensor network quantum circuit.

Parameters

gates (list[list[str]]) – The sequence of gates to apply.

calc_qubit_ordering(qubits=None, method='greedy-lightcone')[source]#

Get an order to measure qubits in, by greedily choosing whichever has the smallest reverse lightcone followed by whichever expands this lightcone least.

Parameters

qubits (None or sequence of int) – The qubits to generate a lightcone ordering for, if None, assume all qubits.

Returns

The order to ‘measure’ qubits in.

Return type

tuple[int]

compute_marginal(where, fix=None, optimize='auto-hq', backend='auto', dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, target_size=None, rehearse=False)[source]#

Compute the probability tensor of qubits in where, given possibly fixed qubits in fix, tracing out everything else, having first removed redundant unitary gates.

Parameters
  • where (sequence of int) – The qubits to compute the marginal probability distribution of.

  • fix (None or dict[int, str], optional) – Measurement results on other qubits to fix.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • rehearse (bool or "tn", optional) – Whether to perform the marginal contraction or just return the associated TN and contraction path information.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.
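
A minimal sketch, assuming some circuit circ with at least two qubits:

>>> p0 = circ.compute_marginal(where=(0,))                       # length-2 probability vector for qubit 0
>>> p1_given = circ.compute_marginal(where=(1,), fix={0: '0'})   # qubit 1 marginal with qubit 0 fixed to '0'
>>> # note that with fixed qubits the marginal is in general not normalized to 1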

classmethod from_qasm(qasm, **quantum_circuit_opts)[source]#

Generate a Circuit instance from a qasm string.

classmethod from_qasm_file(fname, **quantum_circuit_opts)[source]#

Generate a Circuit instance from a qasm file.

classmethod from_qasm_url(url, **quantum_circuit_opts)[source]#

Generate a Circuit instance from a qasm url.
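
A usage sketch; 'my_circuit.qasm' is a hypothetical file path, and any extra keyword arguments are simply forwarded to the Circuit constructor:

>>> import quimb.tensor as qtn
>>> circ = qtn.Circuit.from_qasm_file('my_circuit.qasm')
>>> circ = qtn.Circuit.from_qasm_file('my_circuit.qasm', tags='LOADED')   # kwargs go to Circuit()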

get_psi_reverse_lightcone(where, keep_psi0=False)[source]#

Get just the bit of the wavefunction in the reverse lightcone of sites in where - i.e. causally linked.

Parameters
  • where (int or sequence of int) – The sites to propagate the lightcone back from, supplied to get_reverse_lightcone_tags().

  • keep_psi0 (bool, optional) – Keep the tensors corresponding to the initial wavefunction regardless of whether they are outside of the lightcone.

Returns

psi_lc

Return type

TensorNetwork1DVector

get_psi_simplified(seq='ADCRS', atol=1e-12, equalize_norms=False)[source]#

Get the full wavefunction post local tensor network simplification.

Parameters
  • seq (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Returns

psi

Return type

TensorNetwork1DVector

get_rdm_lightcone_simplified(where, seq='ADCRS', atol=1e-12, equalize_norms=False)[source]#

Get a simplified TN of the norm of the wavefunction, with gates outside the reverse lightcone of where cancelled, and physical indices within where preserved so that they can be fixed (sliced) or used as output indices.

Parameters
  • where (int or sequence of int) – The region assumed to be the target density matrix essentially. Supplied to get_reverse_lightcone_tags().

  • seq (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Return type

TensorNetwork

get_reverse_lightcone_tags(where)[source]#

Get the tags of gates in this circuit corresponding to the ‘reverse’ lightcone propagating backwards from registers in where.

Parameters

where (int or sequence of int) – The register or registers to get the reverse lightcone of.

Returns

The sequence of gate tags (GATE_{i}, …) corresponding to the lightcone.

Return type

tuple[str]

get_uni(transposed=False)[source]#

Tensor network representation of the unitary operator (i.e. with the initial state removed).

local_expectation(G, where, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend='auto', dtype='complex128', target_size=None, rehearse=False)[source]#

Compute a single expectation value of operator G, acting on sites where, making use of reverse lightcone cancellation.

\[\langle \psi_{\bar{q}} | G_{\bar{q}} | \psi_{\bar{q}} \rangle\]

where \(\bar{q}\) is the set of qubits \(G\) acts on and \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set. If you supply a tuple or list of operators then the expectations will be computed simultaneously.

Parameters
  • G (array or tuple[array] or list[array]) – The raw operator(s) to find the expectation of.

  • where (int or sequence of int) – Which qubits the operator acts on.

  • optimize (str, optional) – Contraction path optimizer to use for the local expectation, can be a custom path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

  • gate_opts (None or dict_like) – Options to use when applying G to the wavefunction.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction path but don’t actually perform the contraction. Returns a dict with keys 'tn' and 'info' with the tensor network that will be contracted and the corresponding contraction path if so.

Return type

scalar, tuple[scalar] or dict
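
A minimal sketch measuring the ZZ correlator on a freshly prepared GHZ state, assuming quimb's pauli helper for building the dense operator:

>>> import quimb as qu
>>> import quimb.tensor as qtn
>>> circ = qtn.Circuit(3)
>>> circ.apply_gates([('H', 0), ('CNOT', 0, 1), ('CNOT', 1, 2)])
>>> ZZ = qu.pauli('Z') & qu.pauli('Z')       # two-qubit operator via kronecker product
>>> circ.local_expectation(ZZ, (0, 1))       # expected to be close to +1 for a GHZ state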

partial_trace(keep, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend='auto', dtype='complex128', target_size=None, rehearse=False)[source]#

Perform the partial trace on the circuit wavefunction, retaining only qubits in keep, and making use of reverse lightcone cancellation:

\[\rho_{\bar{q}} = Tr_{\bar{p}} |\psi_{\bar{q}} \rangle \langle \psi_{\bar{q}}|\]

Where \(\bar{q}\) is the set of qubits to keep, \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set, and \(\bar{p}\) is the remaining qubits.

Parameters
  • keep (int or sequence of int) – The qubit(s) to keep as we trace out the rest.

  • optimize (str, optional) – Contraction path optimizer to use for the reduced density matrix, can be a custom path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction path but don’t actually perform the contraction. Returns a dict with keys "tn" and 'info' with the tensor network that will be contracted and the corresponding contraction path if so.

Return type

array or dict
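
A sketch continuing the GHZ circuit from the local_expectation() example above:

>>> rho0 = circ.partial_trace((0,))      # reduced density matrix of qubit 0
>>> # for a GHZ state this should be close to the maximally mixed state, identity / 2
>>> rho01 = circ.partial_trace((0, 1))   # reduced density matrix of qubits 0 and 1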

property psi#

Tensor network representation of the wavefunction.

sample(C, qubits=None, order=None, group_size=1, max_marginal_storage=1048576, seed=None, optimize='auto-hq', backend='auto', dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, target_size=None)[source]#

Sample the circuit C times, using lightcone cancelling and caching marginal distribution results. This is a generator, which proceeds as a chain of marginal computations.

Assuming we have group_size=1, and some ordering of the qubits, \(\{q_0, q_1, q_2, q_3, \ldots\}\) we first compute:

\[p(q_0) = \mathrm{diag} \mathrm{Tr}_{1, 2, 3,\ldots} | \psi_{0} \rangle \langle \psi_{0} |\]

I.e. simply the probability distribution on a single qubit, conditioned on nothing. The subscript on \(\psi\) refers to the fact that we only need gates from the causal cone of qubit 0. From this we can sample an outcome, either 0 or 1, if we call this \(r_0\) we can then move on to the next marginal:

\[p(q_1 | r_0) = \mathrm{diag} \mathrm{Tr}_{2, 3,\ldots} \langle r_0 | \psi_{0, 1} \rangle \langle \psi_{0, 1} | r_0 \rangle\]

I.e. the probability distribution of the next qubit, given our prior result. We can sample from this to get \(r_1\). Then we compute:

\[p(q_2 | r_0 r_1) = \mathrm{diag} \mathrm{Tr}_{3,\ldots} \langle r_0 r_1 | \psi_{0, 1, 2} \rangle \langle \psi_{0, 1, 2} | r_0 r_1 \rangle\]

Eventually we will reach the ‘final marginal’, which we can compute as

\[|\langle r_0 r_1 r_2 r_3 \ldots | \psi \rangle|^2\]

since there is nothing left to trace out.

Parameters
  • C (int) – The number of times to sample.

  • qubits (None or sequence of int, optional) – Which qubits to measure, defaults (None) to all qubits.

  • order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (None) to an order based on greedily expanding the smallest reverse lightcone. If specified it should be a permutation of qubits.

  • group_size (int, optional) – How many qubits to group together into marginals; the larger this is, the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginals themselves will each be of size 2**group_size.

  • max_marginal_storage (int, optional) – The total cumulative number of marginal probabilities to cache; once this is exceeded, caching will be turned off.

  • seed (None or int, optional) – A random seed, passed to numpy.random.seed if given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a reusable path optimizer as called on many different TNs. Passed to opt_einsum.contract_path(). If you want to use a custom path optimizer register it with a name using opt_einsum.paths.register_path_fn after which the paths will be cached on name.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

Yields

bitstrings (sequence of str)
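
A sketch of drawing grouped, seeded samples from some circuit circ:

>>> counts = {}
>>> for b in circ.sample(100, group_size=2, seed=42):
...     counts[b] = counts.get(b, 0) + 1
>>> # counts now maps full-length bitstrings to how often they were drawn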

sample_chaotic(C, marginal_qubits, max_marginal_storage=1048576, seed=None, optimize='auto-hq', backend='auto', dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, target_size=None)[source]#

Sample from this circuit, assuming it to be chaotic: that is, only compute and sample correctly from the final marginal, assuming that the distribution on the other qubits is uniform. Given marginal_qubits=5 for instance, for each sample a random bitstring \(r_0 r_1 r_2 \ldots r_{N - 6}\) for the remaining \(N - 5\) qubits will be chosen, and then the final marginal will be computed as

\[p(q_{N-5}q_{N-4}q_{N-3}q_{N-2}q_{N-1} | r_0 r_1 r_2 \ldots r_{N-6}) = |\langle r_0 r_1 r_2 \ldots r_{N - 6} | \psi \rangle|^2\]

and then sampled from. Note that the expression on the right hand side has 5 open indices here and so is a tensor; however, if marginal_qubits is not too big then the cost of contracting this is very similar to that of a single amplitude.

Note

This method assumes the circuit is chaotic; if it is not, then the samples produced will not be an accurate representation of the probability distribution.

Parameters
  • C (int) – The number of times to sample.

  • marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be circuit.calc_qubit_ordering()[:marginal_qubits].

  • seed (None or int, optional) – A random seed, passed to numpy.random.seed if given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

Yields

str
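
A sketch, assuming circ is a wide, deep (chaotic) circuit:

>>> for b in circ.sample_chaotic(10, marginal_qubits=5, seed=7):
...     print(b)   # full-length bitstrings, but only 5 qubits are sampled from the exact marginal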

sample_chaotic_rehearse(marginal_qubits, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, dtype='complex64', rehearse=True)[source]#

Rehearse chaotic sampling (perform just the TN simplifications and contraction path finding).

Parameters
  • marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be circuit.calc_qubit_ordering()[:marginal_qubits].

  • result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

Returns

The contraction path information for the main computation, the key is the qubits that formed the final marginal. The value is itself a dict with keys 'tn' - a representative tensor network - and 'info' - the contraction path information.

Return type

dict[tuple[int], dict]

sample_rehearse(qubits=None, order=None, group_size=1, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, rehearse=True, progbar=False)[source]#

Perform the preparations and contraction path finding for sample(), caching various intermediate objects, but don’t perform the main contractions.

Parameters
  • qubits (None or sequence of int, optional) – Which qubits to measure, defaults (None) to all qubits.

  • order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (None) to an order based on greedily expanding the smallest reverse lightcone.

  • group_size (int, optional) – How many qubits to group together into marginals, the larger this is the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginal’s size itself is exponential in group_size.

  • result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a reusable path optimizer as called on many different TNs. Passed to opt_einsum.contract_path(). If you want to use a custom path optimizer register it with a name using opt_einsum.paths.register_path_fn after which the paths will be cached on name.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • progbar (bool, optional) – Whether to show the progress of finding each contraction path.

Returns

One contraction path info object per grouped marginal computation. The keys of the dict are the qubits the marginal is computed for, the values are a dict containing a representative simplified tensor network (key: ‘tn’) and the main contraction path info (key: ‘info’).

Return type

dict[tuple[int], dict]

simulate_counts(C, seed=None, reverse=False, **to_dense_opts)[source]#

Simulate measuring all qubits in the computational basis many times. Unlike sample(), this generates all the samples simultaneously using the full wavefunction constructed from to_dense(), then calling simulate_counts().

Warning

Because this constructs the full wavefunction it always requires exponential memory in the number of qubits, regardless of circuit depth and structure.

Parameters
  • C (int) – The number of ‘experimental runs’, i.e. total counts.

  • seed (int, optional) – A seed for reproducibility.

  • reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.

  • to_dense_opts – Supplied to to_dense().

Returns

results – The number of recorded counts for each bitstring.

Return type

dict[str, int]
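
A sketch for a small circuit, where constructing the dense wavefunction is cheap:

>>> counts = circ.simulate_counts(1024, seed=0)
>>> # for a GHZ-preparing circuit this should give roughly half '000' and half '111' counts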

to_dense(reverse=False, optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=False, backend='auto', dtype=None, target_size=None, rehearse=False)[source]#

Generate the dense representation of the final wavefunction.

Parameters
  • reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.

  • optimize (str, optional) – Contraction path optimizer to use for the contraction, can be a single path optimizer as only called once (though path won’t be cached for later use in that case).

  • dtype (str, optional) – If given, convert the tensors to this dtype prior to contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to opt_einsum.

  • target_size (None or int, optional) – The largest size of tensor to allow. If specified and any contraction involves tensors bigger than this, ‘slice’ the contraction into independent parts and sum them individually. Requires cotengra currently.

  • rehearse (bool, optional) – If True, generate and cache the simplified tensor network and contraction path but don’t actually perform the contraction. Returns a dict with keys 'tn' and 'info' with the tensor network that will be contracted and the corresponding contraction path if so.

Returns

psi – The densely represented wavefunction with dtype data.

Return type

qarray
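
A sketch comparing against the exact state, assuming quimb's ghz_state and fidelity helpers and the GHZ circuit from the local_expectation() example above:

>>> psi_dense = circ.to_dense()                # dense column vector of the final state
>>> qu.fidelity(psi_dense, qu.ghz_state(3))    # expected to be close to 1
>>> psi_rev = circ.to_dense(reverse=True)      # qubit order reversed to match e.g. qiskit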

update_params_from(tn)[source]#

Assuming tn is a tensor network with tensors tagged GATE_{i} corresponding to this circuit (e.g. from circ.psi or circ.uni) but with updated parameters, update the current circuit parameters and tensors with those values.

This is an inplace modification of the Circuit.

Parameters

tn (TensorNetwork) – The tensor network to find the updated parameters from.
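
A minimal round-trip sketch, assuming gates applied with parametrize=True are stored as PTensor objects with a settable params attribute (in practice tn would more commonly come from an optimizer such as TNOptimizer acting on circ.psi):

>>> circ = qtn.Circuit(1)
>>> circ.apply_gate('RX', 0.2, 0, parametrize=True)
>>> tn = circ.psi                   # a copy containing the parametrized gate tagged 'GATE_0'
>>> tn['GATE_0'].params = (0.7,)    # assumed: directly overwrite the rotation angle
>>> circ.update_params_from(tn)     # the circuit now carries the updated angle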

xeb(samples_or_counts, cache=None, cache_maxsize=1048576, progbar=False, **amplitude_opts)[source]#

Compute the linear cross entropy benchmark (XEB) for samples or counts, amplitude per amplitude.

Parameters
  • samples_or_counts (Iterable[str] or Dict[str, int]) – Either the raw bitstring samples or a dict mapping bitstrings to the number of counts observed.

  • cache (dict, optional) – A dictionary to store the probabilities in, if not supplied quimb.utils.LRU(cache_maxsize) will be used.

  • cache_maxsize (int, optional) – The maximum size of the cache to be used.

  • progbar (bool, optional) – Whether to show progress as the bitstrings are iterated over.

  • amplitude_opts – Supplied to amplitude().
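
A sketch, noting that for ideal samples from a genuinely chaotic circuit the linear XEB is expected to be close to 1:

>>> samples = list(circ.sample(100, seed=1))
>>> score = circ.xeb(samples)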

xeb_ex(optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=False, dtype=None, backend=None, autojit=False, progbar=False, **contract_opts)[source]#

Compute the exactly expected XEB for this circuit. The main feature here is that if you supply a cotengra optimizer that searches for sliced indices then the XEB will be computed without constructing the full wavefunction.

Parameters
  • optimize (str or PathOptimizer, optional) – Contraction path optimizer.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • backend (str, optional) – Convert tensors to, and then use contractions from, this library.

  • autojit (bool, optional) – Apply autoray.autojit to the contraction and map-reduce.

  • progbar (bool, optional) – Show progress in terms of number of wavefunction chunks processed.

class quimb.tensor.circuit.CircuitDense(N=None, psi0=None, gate_opts=None, tags=None)[source]#

Quantum circuit simulation keeping the state in full dense form.

calc_qubit_ordering(qubits=None)[source]#

Qubit ordering doesn’t matter for a dense wavefunction.

get_psi_reverse_lightcone(where, keep_psi0=False)[source]#

Override get_psi_reverse_lightcone as for a dense wavefunction the lightcone is not meaningful.

property psi#

Tensor network representation of the wavefunction.

class quimb.tensor.circuit.CircuitMPS(N=None, psi0=None, gate_opts=None, tags=None)[source]#

Quantum circuit simulation keeping the state always in an MPS form. If you think the circuit will not build up much entanglement, or you just want to keep a rigorous handle on how much entanglement is present, this can be useful.

calc_qubit_ordering(qubits=None)[source]#

MPS already has a natural ordering.

get_psi_reverse_lightcone(where, keep_psi0=False)[source]#

Override get_psi_reverse_lightcone as for an MPS the lightcone is not meaningful.

property psi#

Tensor network representation of the wavefunction.
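
A usage sketch, assuming gate_opts such as max_bond and cutoff are forwarded to the MPS gate application and control its compression:

>>> circ = qtn.CircuitMPS(10, gate_opts={'max_bond': 32, 'cutoff': 1e-10})
>>> for i in range(10):
...     circ.apply_gate('H', i)
>>> for i in range(9):
...     circ.apply_gate('CZ', i, i + 1)
>>> circ.psi   # always an MPS, here with bond dimension capped at 32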

quimb.tensor.circuit.apply_Rx(psi, theta, i, parametrize=False, **gate_opts)[source]#

Apply an X-rotation of theta to tensor network wavefunction psi.

quimb.tensor.circuit.apply_Ry(psi, theta, i, parametrize=False, **gate_opts)[source]#

Apply a Y-rotation of theta to tensor network wavefunction psi.

quimb.tensor.circuit.apply_Rz(psi, theta, i, parametrize=False, **gate_opts)[source]#

Apply a Z-rotation of theta to tensor network wavefunction psi.

quimb.tensor.circuit.apply_su4(psi, theta1, phi1, lamda1, theta2, phi2, lamda2, theta3, phi3, lamda3, theta4, phi4, lamda4, t1, t2, t3, i, j, parametrize=False, **gate_opts)[source]#

See https://arxiv.org/abs/quant-ph/0308006 - Fig. 7.

quimb.tensor.circuit.build_gate_1(gate, tags=None)[source]#

Build a function that applies gate to a tensor network wavefunction.

quimb.tensor.circuit.build_gate_2(gate, tags=None)[source]#

Build a function that applies gate to a tensor network wavefunction.

quimb.tensor.circuit.parse_qasm(qasm)[source]#

Parse qasm from a string.

Parameters

qasm (str) – The full string of the qasm file.

Returns

circuit_info – Information about the circuit:

  • circuit_info[‘n’]: the number of qubits

  • circuit_info[‘n_gates’]: the number of gates in total

  • circuit_info[‘gates’]: list[list[str]], list of gates, each of which is a list of strings read from a line of the qasm file.

Return type

dict

quimb.tensor.circuit.parse_qasm_file(fname, **kwargs)[source]#

Parse a qasm file.

quimb.tensor.circuit.parse_qasm_url(url, **kwargs)[source]#

Parse a qasm url.

quimb.tensor.circuit.rzz(gamma)[source]#

The gate describing an Ising interaction evolution, or ‘ZZ’-rotation.

\[\mathrm{RZZ}(\gamma) = \exp(-i \gamma Z_i Z_j)\]
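
A quick numerical check of the formula above (a sketch, assuming quimb's pauli and expm helpers):

>>> import numpy as np
>>> import quimb as qu
>>> from quimb.tensor.circuit import rzz
>>> U = rzz(0.3)
>>> np.allclose(U @ np.conj(U).T, np.eye(4))                                # the gate is unitary
>>> np.allclose(U, qu.expm(-1j * 0.3 * (qu.pauli('Z') & qu.pauli('Z'))))    # True if the convention matches the formula above
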
quimb.tensor.circuit.sample_bitstring_from_prob_ndarray(p)[source]#

Sample a bitstring from n-dimensional tensor p of probabilities.

Examples

>>> import numpy as np
>>> p = np.zeros(shape=(2, 2, 2, 2, 2))
>>> p[0, 1, 0, 1, 1] = 1.0
>>> sample_bitstring_from_prob_ndarray(p)
'01011'
quimb.tensor.circuit.su4_gate_param_gen(params)[source]#

See https://arxiv.org/abs/quant-ph/0308006 - Fig. 7.