1. Installation
quimb itself is a pure Python package and can now be found on PyPI:
pip install quimb
However, it is recommended to first install the main dependencies using, e.g., conda, as described below.
The code is hosted on GitHub and, if the dependencies are satisfied, a development version can be installed with pip directly from there:
pip install --no-deps -U git+https://github.com/jcmgray/quimb.git@develop
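Alternatively, for hacking on the source, a sketch of an editable install from a local clone (again assuming the dependencies are already satisfied, hence --no-deps):

git clone https://github.com/jcmgray/quimb.git
cd quimb
pip install --no-deps -e .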
1.1. Required Dependencies
The core packages that quimb requires are the standard scientific Python stack, including numpy and scipy. For ease and performance (i.e. MKL-compiled libraries), conda is the recommended distribution with which to install these.
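A minimal sketch of this conda-first workflow; the package list below is illustrative only and should be checked against the current requirements:

# install the heavy numerical dependencies with conda first (illustrative list)
conda install numpy scipy
# then install quimb itself without letting pip re-resolve them
pip install --no-deps quimb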
In addition, the tensor network library, quimb.tensor, requires:
opt_einsum efficiently optimizes tensor contraction expressions. It can be installed with pip or from conda-forge, and is a required dependency since various parts of the core quimb module now make use of tensor-network functionality behind the scenes.
autoray allows backend-agnostic numeric code for various tensor network operations, so that many libraries other than numpy can be used. It can be installed via pip from PyPI or via conda from conda-forge.
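Since both packages are available from PyPI and conda-forge, either of the following should work, for example:

# via pip
pip install opt_einsum autoray
# or via conda from the conda-forge channel
conda install -c conda-forge opt_einsum autoray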
1.2. Optional Dependencies
Plotting tensor networks as colored graphs with weighted edges requires additional graph-drawing libraries.
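A hedged example, assuming the drawing backends are matplotlib and networkx (check the current optional requirements for the definitive list):

pip install matplotlib networkx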
Fast, multi-threaded random number generation no longer requires randomgen (with numpy>=1.17), though its bit generators can still be used.
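For example, with a sufficiently recent numpy the new-style generators are available directly; a quick shell check (a sketch, assuming numpy>=1.17 is installed):

python -c "import numpy as np; rng = np.random.default_rng(42); print(rng.standard_normal(3))"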
Finally, fast and optionally distributed partial eigen-solving, SVD, exponentiation etc. can be accelerated with slepc4py and its dependencies:
mpi4py (v2.1.0+)
An MPI implementation (OpenMPI recommended; the 1.10.x series seems most robust for spawning processes)
For best performance of some routines (e.g. shift-invert eigen-solving), petsc must be configured with certain options. Pip can handle this compilation and installation; for example, the following script installs everything necessary on Ubuntu:
#!/bin/bash
# install build tools, OpenMPI, and OpenBLAS
sudo apt install -y openmpi-bin libopenmpi-dev gfortran bison flex cmake valgrind curl autoconf libopenblas-base libopenblas-dev
# optimization flags, e.g. for intel you might want "-O3 -xHost"
export OPTFLAGS="-O3 -march=native -s -DNDEBUG"
# petsc options, here configured for complex scalars
export PETSC_CONFIGURE_OPTIONS="--with-scalar-type=complex --download-mumps --download-scalapack --download-parmetis --download-metis --COPTFLAGS='$OPTFLAGS' --CXXOPTFLAGS='$OPTFLAGS' --FOPTFLAGS='$OPTFLAGS'"
# make sure all components use the same version
export PETSC_VERSION=3.14.0
pip install petsc==$PETSC_VERSION --no-binary :all:
pip install petsc4py==$PETSC_VERSION --no-binary :all:
pip install slepc==$PETSC_VERSION --no-binary :all:
pip install slepc4py==$PETSC_VERSION --no-binary :all:
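After the script completes, a quick sanity check that the Python bindings import and agree on versions might look like the following (a sketch; the exact version strings will depend on what was installed):

python -c "import petsc4py, slepc4py; print(petsc4py.__version__, slepc4py.__version__)"
python -c "from mpi4py import MPI; print('MPI version:', MPI.Get_version())"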
Note
For the most control and best performance it is recommended to compile and install these (apart from MPI if you are, e.g., on a cluster) manually; see the PETSc instructions.
It is possible to compile several versions of PETSc/SLEPc side by side, for example a --with-scalar-type=complex and/or a --with-precision=single version, naming them with different values of PETSC_ARCH. When loading PETSc/SLEPc, quimb respects PETSC_ARCH if it is set, but it cannot dynamically switch between them.