GaudiConfig
Here is a description of each configuration parameter:
use_fused_adam
whether to use the custom fused implementation of the ADAM optimizer provided by Intel® Gaudi® AI Accelerator.
use_fused_clip_norm
whether to use the custom fused implementation of gradient norm clipping provided by Intel® Gaudi® AI Accelerator.
use_torch_autocast
whether to enable PyTorch autocast; used to define a good pre-defined configuration; users should favor the --bf16 training argument instead.
autocast_bf16_ops
list of operations that should be run with bf16 precision under the autocast context; setting the environment variable PT_HPU_AUTOCAST_LOWER_PRECISION_OPS_LIST is the preferred way to override the operator autocast list.
autocast_fp32_ops
list of operations that should be run with fp32 precision under the autocast context; setting the environment variable PT_HPU_AUTOCAST_FP32_OPS_LIST is the preferred way to override the operator autocast list.
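For illustration, here is a minimal sketch of building such a configuration programmatically instead of loading it from a file; it assumes GaudiConfig accepts these parameters as keyword arguments, and the operator names in the autocast lists are placeholders rather than a recommended setup:

from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig(
    use_fused_adam=True,        # custom fused ADAM optimizer
    use_fused_clip_norm=True,   # custom fused gradient norm clipping
    use_torch_autocast=True,    # prefer the --bf16 training argument when possible
    # Placeholder op lists for illustration only; overriding via the
    # PT_HPU_AUTOCAST_LOWER_PRECISION_OPS_LIST and PT_HPU_AUTOCAST_FP32_OPS_LIST
    # environment variables is the preferred approach.
    autocast_bf16_ops=["add", "matmul"],
    autocast_fp32_ops=["softmax", "div"],
)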
You can find examples of Gaudi configurations in the Habana model repository on the Hugging Face Hub. For instance, for BERT Large we have:
{
    "use_fused_adam": true,
    "use_fused_clip_norm": true
}
To instantiate a Gaudi configuration yourself in your script, you can do the following:
from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    token=model_args.token,
)
and pass it to the trainer with the gaudi_config argument.
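As a hedged sketch of that last step, the snippet below assumes the GaudiTrainer class from optimum.habana accepts the configuration through its gaudi_config parameter; model, training_args, and the datasets are placeholders for objects your training script would already define:

from optimum.habana import GaudiTrainer

trainer = GaudiTrainer(
    model=model,                    # the model to train (placeholder)
    gaudi_config=gaudi_config,      # the Gaudi configuration loaded above
    args=training_args,             # a GaudiTrainingArguments instance (placeholder)
    train_dataset=train_dataset,    # placeholder dataset
    eval_dataset=eval_dataset,      # placeholder dataset
)
trainer.train()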