operations
The Primitive operators in operations need to be instantiated before being used.
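For example, a primitive is constructed once and then called like a function on tensors. A minimal sketch (assuming the common import path mindspore.ops.operations and the ReLU primitive; exact names can vary between MindSpore versions):

```python
import numpy as np
from mindspore import Tensor
import mindspore.ops.operations as P  # assumed import alias

# Instantiate the primitive first, then call it on Tensor inputs.
relu = P.ReLU()
x = Tensor(np.array([[-1.0, 2.0], [3.0, -4.0]], dtype=np.float32))
y = relu(x)  # negative entries are clamped to 0
```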
Neural Network Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Computes inverse hyperbolic cosine of the input element-wise. |  |
|  | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. |  |
|  | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. |  |
|  | Updates relevant entries according to the adadelta scheme. |  |
|  | Updates relevant entries according to the adagrad scheme. |  |
|  | Updates relevant entries according to the adagradv2 scheme. |  |
|  | Updates relevant entries according to the adamax scheme. |  |
|  | Updates relevant entries according to the AddSign algorithm. |  |
|  | Optimizer that implements the centered RMSProp algorithm. |  |
|  | Updates relevant entries according to the following. |  |
|  | Optimizer that implements the Momentum algorithm. |  |
|  | Updates relevant entries according to the AddSign algorithm. |  |
|  | Updates relevant entries according to the proximal adagrad algorithm. |  |
|  | Updates relevant entries according to the FOBOS (Forward-Backward Splitting) algorithm. |  |
|  | Optimizer that implements the Root Mean Square prop (RMSProp) algorithm. |  |
|  | Average pooling operation. |  |
|  | It’s similar to operator DynamicRNN. | Deprecated |
|  | Batch Normalization for input data and updated parameters. |  |
|  | Adds the sigmoid activation function to the input predict, and uses the given logits to compute binary cross entropy between the target and the output. |  |
|  | Returns the sum of the input and the bias tensor. |  |
|  | Computes the binary cross entropy between the target and the output. |  |
|  | For the BatchNorm operation, this operator updates the moving averages for training and is used in conjunction with BNTrainingUpdate. |  |
|  | For the BatchNorm operation, this operator updates the moving averages for training and is used in conjunction with BNTrainingReduce. |  |
|  | Computes accidental hits of sampled classes which happen to match target classes. |  |
|  | 2D convolution layer. |  |
|  | Computes the gradients of convolution with respect to the input. |  |
|  | 3D convolution layer. |  |
|  | Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution). |  |
|  | Performs greedy decoding on the logits given in inputs. |  |
|  | Calculates the CTC (Connectionist Temporal Classification) loss and the gradient. |  |
|  | Returns the dimension index in the destination data format given the source data format. |  |
|  | Returns the depth-wise convolution value for the input. |  |
|  | During training, randomly zeroes some of the elements of the input tensor with a given probability. |  |
|  | During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution. |  |
|  | During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution. |  |
|  | Applies a dropout mask to the input tensor. |  |
|  | Generates the mask value for the input shape. |  |
|  | Applies a single-layer gated recurrent unit (GRU) to an input sequence. |  |
|  | Applies a recurrent neural network to the input. |  |
|  | Computes the exponential linear unit (ELU) of the input element-wise. |  |
|  | Fast Gaussian Error Linear Units activation function. |  |
|  | Flattens a tensor without changing its batch size on the 0-th axis. |  |
|  | Computes the remainder of division element-wise. |  |
|  | Merges the duplicate values of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. |  |
|  | Merges the duplicate values of the gradient and then updates parameters by the Adaptive Moment Estimation (LazyAdam) algorithm. |  |
|  | Merges the duplicate values of the gradient and then updates relevant entries according to the proximal adagrad algorithm. |  |
|  | Gaussian Error Linear Units activation function. |  |
|  | Returns the next element in the dataset queue. |  |
|  | Hard sigmoid activation function. |  |
|  | Hard swish activation function. |  |
|  | Computes the Kullback-Leibler divergence between the target and the output. |  |
|  | Calculates half of the L2 norm of a tensor without taking the square root. |  |
|  | L2 normalization operator. |  |
|  | Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient. |  |
|  | Applies Layer Normalization to the input tensor. |  |
|  | Log Softmax activation function. |  |
|  | Local Response Normalization. |  |
|  | Performs the Long Short-Term Memory (LSTM) computation on the input. |  |
|  | Max pooling operation. |  |
|  | 3D max pooling operation. |  |
|  | Performs max pooling on the input Tensor and returns both max values and indices. |  |
|  | Pads the input tensor according to the paddings and mode. |  |
|  | Computes MISH (a Self-Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise. |  |
|  | Gets the negative log likelihood loss between logits and labels. |  |
|  | Computes a one-hot tensor. |  |
|  | Pads the input tensor according to the paddings. |  |
|  | Parametric Rectified Linear Unit activation function. |  |
|  | Computes ReLU (Rectified Linear Unit) of input tensors element-wise. |  |
|  | Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise. |  |
|  | Computes ReLU (Rectified Linear Unit) of input tensors element-wise. |  |
|  | Resizes an image to a certain size using bilinear interpolation. |  |
|  | Computes the RNNTLoss and its gradient with respect to the softmax outputs. |  |
|  | Computes the Region of Interest (RoI) Align operator. |  |
|  | Computes SeLU (Scaled Exponential Linear Unit) of input tensors element-wise. |  |
|  | Computes the stochastic gradient descent update. |  |
|  | Sigmoid activation function. |  |
|  | Uses the given logits to compute sigmoid cross entropy between the target and the output. |  |
|  | Computes smooth L1 loss, a robust L1 loss. |  |
|  | Softmax operation. |  |
|  | Gets the softmax cross-entropy value between logits and labels with one-hot encoding. |  |
|  | Softplus activation function. |  |
|  | Softsign activation function. |  |
|  | Updates relevant entries according to the adagrad scheme. |  |
|  | Updates relevant entries according to the adagrad scheme, with one additional epsilon attribute compared with SparseApplyAdagrad. |  |
|  | Updates relevant entries according to the proximal adagrad algorithm. |  |
|  | Computes the softmax cross-entropy value between logits and sparse encoding labels. |  |
|  | Stacks a list of tensors along the specified axis. |  |
|  | Tanh activation function. |  |
|  | Finds values and indices of the k largest entries along the last dimension. |  |
|  | Unstacks a tensor along the specified axis. |  |
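As a hedged illustration of how one of the activation primitives above is used (assuming Softmax takes the reduction axis at construction time, as in common MindSpore releases):

```python
import numpy as np
from mindspore import Tensor
import mindspore.ops.operations as P

softmax = P.Softmax(axis=-1)  # axis fixed at instantiation
logits = Tensor(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))
probs = softmax(logits)       # rows sum to 1 along the last axis
```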
Math Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Returns absolute value of a tensor element-wise. |  |
|  | Computes accumulation of all input tensors element-wise. |  |
|  | Computes arccosine of input tensors element-wise. |  |
|  | Adds two input tensors element-wise. |  |
|  | Computes addition of all input tensors element-wise. |  |
|  | Returns True if abs(x1-x2) is smaller than tolerance element-wise, otherwise False. |  |
|  | Computes arcsine of input tensors element-wise. |  |
|  | Computes inverse hyperbolic sine of the input element-wise. |  |
|  | Updates a Parameter by adding a value to it. |  |
|  | Updates a Parameter by subtracting a value from it. |  |
|  | Computes the trigonometric inverse tangent of the input element-wise. |  |
|  | Returns arctangent of input_x/input_y element-wise. |  |
|  | Computes inverse hyperbolic tangent of the input element-wise. |  |
|  | Computes matrix multiplication between two tensors by batch. |  |
|  | Computes BesselI0e of input element-wise. |  |
|  | Computes BesselI1e of input element-wise. |  |
|  | Returns bitwise and of two tensors element-wise. |  |
|  | Returns bitwise or of two tensors element-wise. |  |
|  | Returns bitwise xor of two tensors element-wise. |  |
|  | Rounds a tensor up to the closest integer element-wise. |  |
|  | Computes cosine of input element-wise. |  |
|  | Computes hyperbolic cosine of input element-wise. |  |
|  | Computes the cumulative product of the tensor x along axis. |  |
|  | Computes the cumulative sum of the input tensor along axis. |  |
|  | Computes the quotient of dividing the first input tensor by the second input tensor element-wise. |  |
|  | Computes a safe divide and returns 0 if y is zero. |  |
|  | Creates a tensor filled with the minimum value of the input_x dtype. |  |
|  | Computes the equivalence between two tensors element-wise. |  |
|  | Computes the number of identical elements in two tensors. |  |
|  | Computes the Gauss error function of input_x element-wise. |  |
|  | Computes the complementary error function of input_x element-wise. |  |
|  | Returns exponential of a tensor element-wise. |  |
|  | Returns the exponential of a tensor minus one (exp(x) - 1) element-wise. |  |
|  | Determines if the elements contain Not a Number (NaN), infinity, or negative infinity. |  |
|  | Rounds a tensor down to the closest integer element-wise. |  |
|  | Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer. |  |
|  | Computes the boolean value of \(x > y\) element-wise. |  |
|  | Computes the boolean value of \(x >= y\) element-wise. |  |
|  | Returns a rank 1 histogram counting the number of entries in values that fall into every bin. |  |
|  | Adds tensor y to the specified axis and indices of tensor x. |  |
|  | Adds v into specified rows of x. |  |
|  | Subtracts v from specified rows of x. |  |
|  | Computes Inv (reciprocal) of the input tensor element-wise. |  |
|  | Flips all bits of the input tensor element-wise. |  |
|  | Determines which elements are inf or -inf for each position. |  |
|  | Determines which elements are NaN for each position. |  |
|  | Computes the boolean value of \(x < y\) element-wise. |  |
|  | Computes the boolean value of \(x <= y\) element-wise. |  |
|  | Generates values in an interval (inclusive of start and stop) and returns the corresponding interpolated array with num ticks. |  |
|  | Returns the natural logarithm of a tensor element-wise. |  |
|  | Returns the natural logarithm of one plus the input tensor element-wise. |  |
|  | Computes the “logical AND” of two tensors element-wise. |  |
|  | Computes the “logical NOT” of a tensor element-wise. |  |
|  | Computes the “logical OR” of two tensors element-wise. |  |
|  | Multiplies matrix a and matrix b. |  |
|  | Returns the inverse of the input matrix. |  |
|  | Computes the maximum of input tensors element-wise. |  |
|  | Computes the minimum of input tensors element-wise. |  |
|  | Computes the remainder of dividing the first input tensor by the second input tensor element-wise. |  |
|  | Multiplies two tensors element-wise. |  |
|  | Computes input_x * input_y element-wise. |  |
|  | Returns a tensor with negative values of the input tensor element-wise. |  |
|  | When an object detection problem is performed in the computer vision field, the object detection algorithm generates a plurality of bounding boxes. |  |
|  | Computes the non-equivalence of two tensors element-wise. |  |
|  | Allocates a flag to store the overflow status. |  |
|  | Clears the flag which stores the overflow status. |  |
|  | Updates the flag which is the output tensor of NPUAllocFloatStatus with the latest overflow status. |  |
|  | Computes a tensor to the power of the second input. |  |
|  | Divides the first input tensor by the second input tensor in floating-point type element-wise. |  |
|  | Returns reciprocal of a tensor element-wise. |  |
|  | Reduces a dimension of a tensor by the “logical AND” of all elements in the dimension. |  |
|  | Reduces a dimension of a tensor by the “logical OR” of all elements in the dimension. |  |
|  | Reduces a dimension of a tensor by the maximum value in the dimension. |  |
|  | Reduces a dimension of a tensor by averaging all elements in the dimension. |  |
|  | Reduces a dimension of a tensor by the minimum value in the dimension. |  |
|  | Reduces a dimension of a tensor by multiplying all elements in the dimension. |  |
|  | Reduces a dimension of a tensor by summing all elements in the dimension. |  |
|  | Rounds a tensor to the nearest integer element-wise, with halves rounded to even. |  |
|  | Computes reciprocal of square root of the input tensor element-wise. |  |
|  | Computes the sign of the tensor element-wise. |  |
|  | Computes sine of the input element-wise. |  |
|  | Computes hyperbolic sine of the input element-wise. |  |
|  | Returns square root of a tensor element-wise. |  |
|  | Returns square of a tensor element-wise. |  |
|  | Subtracts the second input tensor from the first input tensor element-wise and returns the square of the difference. |  |
|  | Returns the square sum of a tensor element-wise. |  |
|  | Subtracts the second input tensor from the first input tensor element-wise. |  |
|  | Computes tangent of input_x element-wise. |  |
|  | Divides the first input tensor by the second input tensor element-wise for integer types; negative numbers round fractional quantities towards zero. |  |
|  | Returns the remainder of division element-wise. |  |
|  | Divides the first input tensor by the second input tensor element-wise. |  |
|  | Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. |  |
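A minimal sketch of a math primitive, assuming MatMul accepts transpose_a/transpose_b flags at construction (true for typical MindSpore versions):

```python
import numpy as np
from mindspore import Tensor
import mindspore.ops.operations as P

matmul = P.MatMul(transpose_b=True)       # multiply a by the transpose of b
a = Tensor(np.ones((2, 3), dtype=np.float32))
b = Tensor(np.ones((4, 3), dtype=np.float32))
out = matmul(a, b)                        # result shape: (2, 4)
```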
Array Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Updates relevant entries according to the FTRL scheme. |  |
|  | Returns the indices of the maximum value of a tensor across the axis. |  |
|  | Calculates the maximum value with the corresponding index. |  |
|  | Returns the indices of the minimum value of a tensor across the axis. |  |
|  | Calculates the minimum value with the corresponding index, and returns indices and values. |  |
|  | Divides the batch dimension with blocks and interleaves these blocks back into spatial dimensions. |  |
|  | Divides the batch dimension with blocks and interleaves these blocks back into spatial dimensions. |  |
|  | Broadcasts the input tensor to a given shape. |  |
|  | Returns a tensor with the new specified data type. |  |
|  | Concatenates tensors along the specified axis. |  |
|  | Rearranges blocks of depth data into spatial dimensions. |  |
|  | Returns the data type of the input tensor as mindspore.dtype. |  |
|  | Returns the shape of the input tensor. |  |
|  | Computes the Levenshtein Edit Distance. |  |
|  | Returns a slice of the input tensor based on the specified indices. |  |
|  | Adds an additional dimension at the given axis. |  |
|  | Creates a tensor with ones on the diagonal and zeros elsewhere. |  |
|  | Creates a tensor filled with a scalar value. |  |
|  | Merges the duplicate values of the gradient and then updates relevant entries according to the FTRL-proximal scheme. |  |
|  | Returns a slice of the input tensor based on the specified indices and axis. |  |
|  | Gathers values along an axis specified by dim. |  |
|  | Gathers slices from a tensor by indices. |  |
|  | Returns a Tensor with the same shape and contents as the input. |  |
|  | Updates specified rows with values in v. |  |
|  | Computes the inverse of an index permutation. |  |
|  | Determines which elements are finite for each position. |  |
|  | Checks whether an object is an instance of a target type. |  |
|  | Checks whether this type is a subclass of another type. |  |
|  | Generates coordinate matrices from given coordinate tensors. |  |
|  | Creates a tensor filled with ones. |  |
|  | Creates a new tensor. |  |
|  | Extends the last dimension of the input tensor from 1 to pad_dim_size by filling with 0. |  |
|  | Concatenates tensors along the first dimension. |  |
|  | Generates n random samples from 0 to n-1 without repeating. |  |
|  | Returns the rank of a tensor. |  |
|  | Reshapes the input tensor with the same values based on a given shape tuple. |  |
|  | Resizes the input tensor by using the nearest neighbor algorithm. |  |
|  | Reverses variable length slices. |  |
|  | Reverses specific dimensions of a tensor. |  |
|  | Returns an integer that is closest to x element-wise. |  |
|  | Checks whether the data type and shape of two tensors are the same. |  |
|  | Casts the input scalar to another type. |  |
|  | Converts a scalar to a Tensor. |  |
|  | Converts a scalar to a Tensor, and converts the data type to the specified type. |  |
|  | Updates the value of the input tensor through the addition operation. |  |
|  | Updates the value of the input tensor through the divide operation. |  |
|  | Updates the value of the input tensor through the maximum operation. |  |
|  | Updates the value of the input tensor through the minimum operation. |  |
|  | Updates the value of the input tensor through the multiply operation. |  |
|  | Scatters a tensor into a new tensor depending on the specified indices. |  |
|  | Applies sparse addition to individual values or slices in a tensor. |  |
|  | Applies sparse subtraction to individual values or slices in a tensor. |  |
|  | Updates tensor values by using input indices and value. |  |
|  | Applies sparse addition to the input using individual values or slices. |  |
|  | Updates the value of the input tensor through the subtraction operation. |  |
|  | Updates tensor values by using input indices and value. |  |
|  | Returns the selected elements, either from input \(x\) or input \(y\), depending on the condition. |  |
|  | Returns the shape of the input tensor. |  |
|  | Returns the size of a tensor. |  |
|  | Slices a tensor in the specified shape. |  |
|  | Sorts the elements of the input tensor along a given dimension in ascending order by value. |  |
|  | Divides spatial dimensions into blocks and combines the block size with the original batch. |  |
|  | Divides spatial dimensions into blocks and combines the block size with the original batch. |  |
|  | Rearranges blocks of spatial data into depth. |  |
|  | Updates relevant entries according to the FTRL-proximal scheme. |  |
|  | Updates relevant entries according to the FTRL-proximal scheme. |  |
|  | Returns a slice of the input tensor based on the specified indices and axis. |  |
|  | Splits the input tensor into output_num tensors along the given axis. |  |
|  | Returns a tensor with the same type but with dimensions of size 1 removed based on axis. |  |
|  | Extracts a strided slice of a tensor. |  |
|  | Creates a new tensor by updating the positions in input_x indicated by indices, with values from update. |  |
|  | Replicates a tensor by the given multiples. |  |
|  | Permutes the dimensions of the input tensor according to the input permutation. |  |
|  | Converts a tuple to a tensor. |  |
|  | Returns the unique elements of the input tensor and also returns a tensor containing the index of each value of the input tensor corresponding to the output unique tensor. |  |
|  | Returns the unique elements and relative indices of a 1-D tensor, filled with the padding num. |  |
|  | Computes the maximum along segments of a tensor. |  |
|  | Computes the minimum of a tensor along segments. |  |
|  | Computes the product of a tensor along segments. |  |
|  | Computes the sum of a tensor along segments. |  |
|  | Creates a tensor filled with zeros. |  |
|  | Creates a new tensor. |  |
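A hedged sketch of two array primitives, assuming Concat takes the axis at construction and Reshape takes the target shape at call time (the usual MindSpore conventions):

```python
import numpy as np
from mindspore import Tensor
import mindspore.ops.operations as P

concat = P.Concat(axis=0)                   # concatenation axis fixed up front
x1 = Tensor(np.zeros((2, 3), dtype=np.float32))
x2 = Tensor(np.ones((2, 3), dtype=np.float32))
stacked = concat((x1, x2))                  # shape (4, 3)

reshape = P.Reshape()
flat = reshape(stacked, (12,))              # target shape passed at call time
```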
Communication Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Gathers tensors from the specified communication group. |  |
|  | Reduces the tensor data across all devices in such a way that all devices will get the same final result. |  |
|  | Broadcasts the tensor to the whole group. |  |
|  | Operation options for reducing tensors. |  |
|  | Reduces and scatters tensors from the specified communication group. |  |
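A sketch only, not runnable standalone: communication primitives assume a launched distributed job and an initialized communication group (here via init(); the module path and default reduction are assumptions based on common MindSpore usage):

```python
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore.communication.management import init  # assumed module path

class AllReduceNet(nn.Cell):
    """Sums the input tensor across all devices in the group."""
    def __init__(self):
        super(AllReduceNet, self).__init__()
        self.all_reduce = P.AllReduce()  # default reduction is assumed to be sum

    def construct(self, x):
        return self.all_reduce(x)

# init() must be called inside a properly launched multi-device job
# before AllReduceNet can execute.
```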
Debug Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Outputs the tensor to a protocol buffer through a histogram summary operator. |  |
|  | Outputs the image tensor to a protocol buffer through an image summary operator. |  |
|  | Attaches a callback to the graph node that will be invoked on the node’s gradient. |  |
|  | Outputs the tensor or string to stdout. |  |
|  | Outputs a scalar to a protocol buffer through a scalar summary operator. |  |
|  | Outputs a tensor to a protocol buffer through a tensor summary operator. |  |
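A hedged sketch of the stdout print primitive used inside a Cell (assuming Print accepts a mix of strings and tensors, as in typical MindSpore releases):

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor

class PrintNet(nn.Cell):
    def __init__(self):
        super(PrintNet, self).__init__()
        self.print_op = P.Print()

    def construct(self, x):
        self.print_op("input tensor:", x)  # emitted to stdout at execution time
        return x

out = PrintNet()(Tensor(np.ones((2, 2), dtype=np.float32)))
```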
Random Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Produces random positive floating-point values x, distributed according to a given probability density function. |  |
|  | Generates random labels with a log-uniform distribution for sampled_candidates. |  |
|  | Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input. |  |
|  | Produces random non-negative integer values i, distributed according to a given discrete probability function. |  |
|  | Generates random samples from a given categorical distribution tensor. |  |
|  | Generates a random sample as index tensor with a mask tensor from a given tensor. |  |
|  | Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). |  |
|  | Generates random numbers according to the standard Normal (or Gaussian) random number distribution. |  |
|  | Uniform candidate sampler. |  |
|  | Produces random integer values i, uniformly distributed on the half-open interval [minval, maxval), that is, according to a discrete uniform distribution. |  |
|  | Produces random floating-point values i, uniformly distributed on the interval [0, 1). |  |
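A minimal sketch of a random-number primitive, assuming StandardNormal takes an optional seed at construction and the output shape at call time:

```python
import mindspore.ops.operations as P

std_normal = P.StandardNormal(seed=2)  # seed is optional
out = std_normal((3, 4))               # tensor of N(0, 1) samples with shape (3, 4)
```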
Sponge Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Adds the potential energy caused by angle terms to the total potential energy of each atom. |  |
|  | Calculates the energy caused by the 3-atom angle terms. |  |
|  | Calculates the force exerted by angles made of 3 atoms on the corresponding atoms. |  |
|  | Calculates the angle force and potential energy together. |  |
|  | Adds the potential energy caused by simple harmonic bonds to the total potential energy of each atom. |  |
|  | Calculates the harmonic potential energy between each bonded atom pair. |  |
|  | Calculates the force exerted by the simple harmonic bond on the corresponding atoms. |  |
|  | Calculates the bond force and harmonic potential energy together. |  |
|  | Calculates the bond force and the virial coefficient caused by the simple harmonic bond for each atom together. |  |
|  | Adds the potential energy caused by dihedral terms to the total potential energy of each atom. |  |
|  | Calculates the potential energy caused by dihedral terms for each 4-atom pair. |  |
|  | Calculates the force exerted by the dihedral terms made of 4 atoms on the corresponding atoms. |  |
|  | Calculates the dihedral force and potential energy together. |  |
|  | Adds the potential energy caused by the Coulomb energy correction for each necessary dihedral 1,4 term to the total potential energy of each atom. |  |
|  | Calculates the Coulomb part of the 1,4 dihedral energy correction for each necessary dihedral term on the corresponding atoms. |  |
|  | Adds the potential energy caused by the Lennard-Jones energy correction for each necessary dihedral 1,4 term to the total potential energy of each atom. |  |
|  | Calculates the Lennard-Jones and Coulomb energy corrections and force corrections for each necessary dihedral 1,4 term together, and adds them to the total force and potential energy of each atom. |  |
|  | Calculates the Lennard-Jones part of the 1,4 dihedral energy correction for each necessary dihedral term on the corresponding atoms. |  |
|  | Calculates the Lennard-Jones part of the 1,4 dihedral force correction for each necessary dihedral term on the corresponding atoms. |  |
|  | Calculates the Lennard-Jones part and the Coulomb part of the force correction for each necessary dihedral 1,4 term. |  |
|  | Calculates the Van der Waals interaction energy described by the Lennard-Jones potential for each atom. |  |
|  | Calculates the Van der Waals interaction force described by the Lennard-Jones potential energy for each atom. |  |
|  | Calculates the Lennard-Jones force and the PME direct force together. |  |
|  | Performs one step of the classical leap-frog algorithm to solve the finite-difference Hamiltonian equations of motion for a certain system, using Langevin dynamics with Liu’s thermostat scheme. |  |
|  | Updates (or constructs, if called for the first time) the Verlet neighbor list for the calculation of the short-ranged force. |  |
|  | Calculates the Coulomb energy of the system using the PME (Particle Mesh Ewald) method. |  |
|  | Calculates the excluded part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method. |  |
|  | Calculates the reciprocal part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method. |  |
Image Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Extracts crops from the input image tensor and resizes them. |  |
Sparse Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Converts a sparse representation into a dense tensor. | To Be Developed |
Other Operators

| API Name | Description | Supported Platforms |
| --- | --- | --- |
|  | Assigns a value to a Parameter. |  |
|  | Decodes bounding box locations. |  |
|  | Encodes bounding box locations. |  |
|  | Checks the bounding box. |  |
|  | Depend is used for processing dependency operations. |  |
|  | Determines whether the targets are in the top k predictions. |  |
|  | Calculates intersection over union for boxes. |  |
|  | Updates log_probs with repeated n-grams. |  |
|  | Calculates population count. |  |
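A hedged sketch for the top-k membership check listed above (assuming the operator is InTopK with k fixed at construction, float32 predictions, and int32 target indices):

```python
import numpy as np
from mindspore import Tensor
import mindspore.ops.operations as P

in_top_k = P.InTopK(3)                               # k is set at instantiation
predictions = Tensor(np.array([[0.1, 0.6, 0.2, 0.05, 0.05],
                               [0.3, 0.1, 0.4, 0.1, 0.1]], dtype=np.float32))
targets = Tensor(np.array([1, 4], dtype=np.int32))   # class index per sample
hits = in_top_k(predictions, targets)                # boolean tensor of shape (2,)
```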