
Linear Warmup Cosine Annealing

Note

We rely on the community to keep these updated and working. If something doesn’t work, we’d really appreciate a contribution to fix!

class pl_bolts.optimizers.lr_scheduler.LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

Warning

The feature LinearWarmupCosineAnnealingLR is currently marked under review. Compatibility with other Lightning projects is not guaranteed, and the API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html

Sets the learning rate of each parameter group to follow a linear warmup schedule between warmup_start_lr and base_lr followed by a cosine annealing schedule between base_lr and eta_min.
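
To make the shape of this schedule concrete, the following is a minimal, illustrative sketch of one way to compute it for a single parameter group. The helper expected_lr is hypothetical and not part of pl_bolts; the library's own closed-form implementation may differ slightly at the warmup boundaries.

import math

def expected_lr(epoch, base_lr, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0):
    # Illustrative helper (not part of pl_bolts): linear warmup followed by cosine annealing.
    if epoch < warmup_epochs:
        # Linear ramp from warmup_start_lr up to base_lr over the warmup phase.
        return warmup_start_lr + (base_lr - warmup_start_lr) * epoch / max(1, warmup_epochs)
    # Cosine annealing from base_lr down to eta_min over the remaining epochs.
    progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
    return eta_min + 0.5 * (base_lr - eta_min) * (1.0 + math.cos(math.pi * progress))

For example, with base_lr=0.02, warmup_epochs=10 and max_epochs=40, the value rises linearly towards 0.02 over the first 10 epochs and then decays to eta_min by epoch 40.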

Warning

It is recommended to call step() for LinearWarmupCosineAnnealingLR after each iteration rather than after each epoch: if step() is only called after each epoch, the learning rate stays at warmup_start_lr (0 in most cases) for the entire first epoch.
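
As a rough sketch of per-iteration stepping (see the doctest example below for the per-epoch case): the model, data, and step counts here are placeholders, and when stepping per iteration warmup_epochs and max_epochs are expressed in optimizer steps rather than epochs.

import torch
import torch.nn as nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

# Placeholder model and data, purely for illustration.
layer = nn.Linear(10, 1)
optimizer = Adam(layer.parameters(), lr=0.02)
train_loader = DataLoader(TensorDataset(torch.randn(320, 10), torch.randn(320, 1)), batch_size=32)

# When stepping per iteration, express warmup_epochs / max_epochs in optimizer steps.
steps_per_epoch = len(train_loader)
scheduler = LinearWarmupCosineAnnealingLR(
    optimizer,
    warmup_epochs=10 * steps_per_epoch,
    max_epochs=40 * steps_per_epoch,
)

for epoch in range(40):
    for x, y in train_loader:
        loss = nn.functional.mse_loss(layer(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # called once per iteration, not once per epoch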

Warning

Passing epoch to step() is deprecated and raises an EPOCH_DEPRECATION_WARNING. It calls the _get_closed_form_lr() method for this scheduler instead of get_lr(). This does not change the scheduler's behavior, but when the epoch argument is passed, step() should be called before the train and validation methods.

Example

>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR
>>>
>>> layer = nn.Linear(10, 1)
>>> optimizer = Adam(layer.parameters(), lr=0.02)
>>> scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
>>> # the default case
>>> for epoch in range(40):
...     # train(...)
...     # validate(...)
...     scheduler.step()
>>> # passing epoch param case
>>> for epoch in range(40):
...     scheduler.step(epoch)
...     # train(...)
...     # validate(...)
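
When training with PyTorch Lightning, this scheduler is typically returned from configure_optimizers. The sketch below is illustrative only: LitModel and the hyperparameter values are placeholders, and it assumes the default per-epoch stepping; if you step per iteration, use "interval": "step" together with step-based warmup_epochs / max_epochs as in the sketch above.

import pytorch_lightning as pl
import torch.nn as nn
from torch.optim import Adam
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

class LitModel(pl.LightningModule):  # placeholder LightningModule for illustration
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 1)

    def configure_optimizers(self):
        optimizer = Adam(self.parameters(), lr=0.02)
        scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
        # Lightning calls scheduler.step() according to "interval" ("epoch" here).
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "epoch"},
        }
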
Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • warmup_epochs (int) – Maximum number of iterations for linear warmup.

  • max_epochs (int) – Maximum number of iterations.

  • warmup_start_lr (float) – Learning rate to start the linear warmup. Default: 0.

  • eta_min (float) – Minimum learning rate. Default: 0.

  • last_epoch (int) – The index of last epoch. Default: -1.

get_lr()[source]

Compute learning rate using chainable form of the scheduler.

Return type

List[float]
