mindspore.ops.ApplyFtrl
class mindspore.ops.ApplyFtrl(*args, **kwargs)

Updates relevant entries according to the FTRL scheme.
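As a reference, the per-element update follows the standard FTRL-proximal scheme; a sketch (the actual kernel may differ in minor numerical details), with g the grad, m the accum, u the linear coefficient, w the var, \alpha the lr and p the lr_power:

\begin{array}{ll}
m_{t+1} = m_{t} + g^{2} \\
u_{t+1} = u_{t} + g - \dfrac{m_{t+1}^{-p} - m_{t}^{-p}}{\alpha}\, w_{t} \\
w_{t+1} = \begin{cases}
\dfrac{\operatorname{sign}(u_{t+1})\, l1 - u_{t+1}}{m_{t+1}^{-p}/\alpha + 2\, l2} & \text{if } |u_{t+1}| > l1 \\
0 & \text{otherwise}
\end{cases}
\end{array}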
- Parameters
use_locking (bool) – If True, use locks for the update operation. Default: False.
- Inputs:
var (Parameter) - The variable to be updated. The data type must be float16 or float32.
accum (Parameter) - The accumulation to be updated, must have the same data type and shape as var.
linear (Parameter) - The linear coefficient to be updated, must have the same data type and shape as var.
grad (Tensor) - Gradient. The data type must be float16 or float32.
lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.
l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.
l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.
lr_power (Union[Number, Tensor]) - Learning rate power, which controls how the learning rate decreases during training; must be less than or equal to zero. A fixed learning rate is used if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type. (See the sketch after this list for how these inputs interact.)
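The sketch below spells out, in plain NumPy, how grad, accum, linear and the four scalars combine in one FTRL step. It is illustrative only and is not part of the MindSpore API; the operator itself applies this update in place.

>>> import numpy as np
>>>
>>> def ftrl_step(var, accum, linear, grad, lr=0.001, l1=0.0, l2=0.0, lr_power=-0.5):
...     """Return updated (var, accum, linear) for one FTRL step (reference sketch)."""
...     accum_new = accum + grad * grad
...     # Change in the per-coordinate step size; with lr_power=-0.5 this is
...     # (sqrt(accum_new) - sqrt(accum)) / lr.
...     sigma = (accum_new ** (-lr_power) - accum ** (-lr_power)) / lr
...     linear_new = linear + grad - sigma * var
...     quadratic = accum_new ** (-lr_power) / lr + 2.0 * l2
...     # Soft thresholding: coordinates with |linear| <= l1 are zeroed out.
...     var_new = np.where(np.abs(linear_new) > l1,
...                        (np.sign(linear_new) * l1 - linear_new) / quadratic,
...                        0.0)
...     return var_new, accum_new, linear_new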
- Outputs:
var (Tensor) - Represents the updated var. Since the input parameters are updated in place, this returned value is always zero when the platform is GPU.
- Raises
TypeError – If use_locking is not a bool.
TypeError – If dtype of var, grad, lr, l1, l2 or lr_power is neither float16 nor float32.
TypeError – If lr, l1, l2 or lr_power is neither a Number nor a Tensor.
TypeError – If grad is not a Tensor.
- Supported Platforms:
Ascend GPU
Examples
>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> from mindspore.ops import operations as ops
>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> np.random.seed(0)
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.random.randint(-4, 4, (2, 2)), mindspore.float32)
>>> output = net(input_x)
>>> output
Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.61418092e-01,  5.30964255e-01],
 [ 2.68715084e-01,  3.82065028e-01]])
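Since var, accum and linear are updated in place, the new values can also be read back from the network after the call. A minimal continuation of the example above (assuming Parameter exposes asnumpy(), as it does in MindSpore releases where Parameter subclasses Tensor):

>>> updated_var = net.var.asnumpy()      # same values as `output` above
>>> updated_accum = net.accum.asnumpy()  # accumulated squared gradients after one step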