🏁 Finetuning from Last Checkpoint
Checkpointing allows you to save your finetuning progress so you can pause it and then continue.
You must first edit the `Trainer` to add `save_strategy` and `save_steps`. The configuration below saves a checkpoint every 50 steps to the folder `outputs`.
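The original code block did not survive extraction; here is a minimal sketch of the relevant arguments, assuming the standard Hugging Face `transformers` `TrainingArguments` API (all other arguments omitted):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    # ... your other training arguments ...
    output_dir = "outputs",    # checkpoints are written to this folder
    save_strategy = "steps",   # save on a step interval rather than per epoch
    save_steps = 50,           # write a checkpoint every 50 steps
)
```

Pass these `args` to your trainer as usual; each checkpoint lands in a subfolder like `outputs/checkpoint-50`.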
Then call `trainer.train` with `resume_from_checkpoint` enabled:
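Assuming a `trainer` built with the arguments above, resuming is a single call (this is the standard `transformers` `Trainer.train` parameter):

```python
trainer.train(resume_from_checkpoint = True)
```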
This will start from the latest checkpoint and continue training.
Wandb Integration
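The setup code here appears to have been lost in extraction. A typical sketch first points Weights & Biases at a project via environment variables; the project name below is a placeholder, and `WANDB_LOG_MODEL = "checkpoint"` tells W&B to upload model checkpoints as artifacts:

```python
import os

# Hypothetical project name — substitute your own.
os.environ["WANDB_PROJECT"] = "my-finetune"
# Log model checkpoints to W&B as artifacts.
os.environ["WANDB_LOG_MODEL"] = "checkpoint"
```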
Then, in `TrainingArguments()`, set:
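A sketch of the W&B-related settings, again using the `transformers` `TrainingArguments` API (the run name is a placeholder; keep your existing arguments alongside these):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    # ... your other training arguments ...
    report_to = "wandb",            # send training logs to Weights & Biases
    logging_steps = 1,              # log metrics every step
    run_name = "my-finetune-run",   # hypothetical run name — substitute your own
)
```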
To train the model, call `trainer.train()`. To resume training, download the checkpoint artifact from W&B and pass its directory to `resume_from_checkpoint`:
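The resume snippet was also lost in extraction; a sketch using the standard `wandb` artifact API follows. The artifact path is a placeholder — fill in your own entity, project, and run:

```python
import wandb

run = wandb.init()
# Hypothetical artifact path — use your own "<entity>/<project>/<artifact>:<version>".
artifact = run.use_artifact("your-entity/your-project/checkpoint-run:latest", type = "model")
checkpoint_dir = artifact.download()

trainer.train(resume_from_checkpoint = checkpoint_dir)
```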