If | Then
---|---
used the `pl.lite` module | switch to `lightning.fabric`
used Trainer’s flag `strategy='dp'` | use DDP with `strategy='ddp'` or DeepSpeed instead
implemented `LightningModule.training_epoch_end` hooks | port your logic to the `LightningModule.on_train_epoch_end` hook
implemented `LightningModule.validation_epoch_end` hooks | port your logic to the `LightningModule.on_validation_epoch_end` hook
implemented `LightningModule.test_epoch_end` hooks | port your logic to the `LightningModule.on_test_epoch_end` hook
If | Then
---|---
used Trainer’s flag `multiple_trainloader_mode` | switch to `CombinedLoader(..., mode=...)` and set the mode directly now
used Trainer’s flag `move_metrics_to_cpu` | implement the particular offload logic in your custom metric, or turn it on in `torchmetrics`
used Trainer’s flag `track_grad_norm` | overwrite the `on_before_optimizer_step` hook and log the gradient norms there yourself
used Trainer’s flag `replace_sampler_ddp` | use `use_distributed_sampler`; the sampler gets created not only for the DDP strategies
If | Then
---|---
relied on the `on_tpu` argument in the `LightningModule.optimizer_step` hook | switch to manual optimization
relied on the `using_lbfgs` argument in the `LightningModule.optimizer_step` hook | switch to manual optimization
were using NVIDIA Apex in any form | switch to PyTorch native mixed precision (`torch.amp`) instead
If | Then
---|---
used Trainer’s flag `amp_backend` | use PyTorch native mixed precision
used Trainer’s flag `amp_level` | use PyTorch native mixed precision
used any other Apex-specific Trainer flag | use PyTorch native mixed precision
used Trainer’s attribute `amp_backend` | use PyTorch native mixed precision
used Trainer’s attribute `amp_level` | use PyTorch native mixed precision
used any other AMP-related Trainer attribute | use PyTorch native mixed precision
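The Apex rows above all collapse into one native option: with `amp_backend` and `amp_level` gone, mixed precision is selected through the single `precision` flag. A configuration sketch, assuming the 2.0 `lightning.pytorch` package:

```python
import lightning.pytorch as pl

# Native AMP replaces the removed Apex flags; "16-mixed" runs fp16
# autocast on GPU, "bf16-mixed" uses bfloat16 where supported.
trainer = pl.Trainer(precision="16-mixed")
```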
If | Then
---|---
used the FairScale integration | consider using PyTorch’s native FSDP implementation, or move the FairScale integration into your own project
used `pl.overrides.fairscale.LightningShardedDataParallel` | use native FSDP instead
used `pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` | use native FSDP instead
used `pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` | use native FSDP instead
used `pl.strategies.DDPShardedStrategy` | use native FSDP instead
used `pl.strategies.DDPSpawnShardedStrategy` | use native FSDP instead
used `pl.strategies.DDPFullyShardedStrategy` | use native FSDP instead
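For the FairScale rows: a configuration sketch of the native-FSDP replacement; the `"fsdp"` strategy alias is assumed from the 2.0 release and requires a multi-GPU machine:

```python
import lightning.pytorch as pl

# PyTorch-native fully sharded data parallel replaces the removed
# FairScale sharded/fully-sharded strategies.
trainer = pl.Trainer(strategy="fsdp", accelerator="gpu", devices=2)
```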
If | Then
---|---
used the `save_config_overwrite` parameter of `LightningCLI` | pass this option via the `save_config_kwargs` dictionary instead
used the `save_config_multifile` parameter of `LightningCLI` | pass this option via the `save_config_kwargs` dictionary instead
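The two `LightningCLI` rows amount to moving the old keyword arguments into one dictionary. A sketch, where `MyModel` is a placeholder for your own `LightningModule` class:

```python
from lightning.pytorch.cli import LightningCLI

# Before: LightningCLI(MyModel, save_config_overwrite=True,
#                      save_config_multifile=True)
cli = LightningCLI(
    MyModel,  # placeholder LightningModule class
    save_config_kwargs={"overwrite": True, "multifile": True},
)
```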
If | Then
---|---
have customized loops via `Loop.replace()` | implement your training loop with Fabric
have customized loops via `Loop.connect()` | implement your training loop with Fabric
have customized loops by subclassing the `Loop` classes | implement your training loop with Fabric
used the Trainer’s `trainer.fit_loop` property | implement your training loop with Fabric
used the Trainer’s `trainer.validate_loop` property | implement your training loop with Fabric
used the Trainer’s `trainer.test_loop` property | implement your training loop with Fabric
used the Trainer’s `trainer.predict_loop` property | implement your training loop with Fabric
used the loop and data-fetching classes directly | being marked as protected
If | Then
---|---
used the `opt_idx` argument in `BaseFinetuning.finetune_function` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.training_step` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.on_before_optimizer_step` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.configure_gradient_clipping` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.optimizer_step` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.optimizer_zero_grad` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.lr_scheduler_step` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.backward` | use manual optimization
used declaring optimizer frequencies in the dictionary returned from `LightningModule.configure_optimizers` | use manual optimization
used the `optimizer` argument in `LightningModule.backward` | use manual optimization
used the `optimizer_idx` argument in `LightningModule.toggle_optimizer` | use manual optimization
used the `optimizer_idx` argument in `Callback.on_before_optimizer_step` | use manual optimization
used the `optimizer_idx` argument in `PrecisionPlugin.backward` | use manual optimization
used the `optimizer_idx` argument in `PrecisionPlugin.optimizer_step` | use manual optimization
used the `optimizer_idx` argument in `Strategy.backward` | use manual optimization
used the `optimizer_idx` argument in `Strategy.optimizer_step` | use manual optimization
used Trainer’s `optimizer_frequencies` attribute | use manual optimization
If | Then
---|---
used the `PL_INTER_BATCH_PARALLELISM` environment variable | 
used training integration with Horovod | install the standalone package/project
used training integration with ColossalAI | install the standalone package/project
used the `QuantizationAwareTraining` callback | use Torch’s Quantization directly
If | Then
---|---
had any logic except reducing the DP outputs in the `LightningModule.training_step_end` hook | port it to the `LightningModule.training_step` hook
had any logic except reducing the DP outputs in the `LightningModule.validation_step_end` hook | port it to the `LightningModule.validation_step` hook
had any logic except reducing the DP outputs in the `LightningModule.test_step_end` hook | port it to the `LightningModule.test_step` hook
used `pl.strategies.DDPSpawnStrategy` | switch to the general `DDPStrategy(start_method='spawn')` with the proper start method
used the automatic addition of a moving average of the `training_step` loss in the progress bar | use `self.log("loss", loss, prog_bar=True)` instead
rely on the `outputs` argument of the `on_predict_epoch_end` hook | access them via `trainer.predict_loop.predictions`
need to pass a dictionary to `self.log()` | pass each metric independently