Trainer

Supervised Trainer

class kospeech.trainer.supervised_trainer.SupervisedTrainer(optimizer: kospeech.optim.Optimizer, criterion: torch.nn.modules.module.Module, trainset_list: list, validset: kospeech.data.data_loader.SpectrogramDataset, num_workers: int, device: torch.device, print_every: int, save_result_every: int, checkpoint_every: int, teacher_forcing_step: float = 0.2, min_teacher_forcing_ratio: float = 0.8, architecture: str = 'las', vocab: kospeech.vocabs.Vocabulary = None, joint_ctc_attention: bool = False)[source]

The SupervisedTrainer class sets up the training framework for supervised learning.

Parameters
  • optimizer (kospeech.optim.__init__.Optimizer) – optimizer for training

  • criterion (torch.nn.Module) – loss function

  • trainset_list (list) – list of training datasets

  • validset (kospeech.data.data_loader.SpectrogramDataset) – validation dataset

  • num_workers (int) – number of CPU cores to use for data loading

  • device (torch.device) – device to train on, 'cuda' or 'cpu'

  • print_every (int) – interval (in timesteps) at which to print training results

  • save_result_every (int) – interval (in timesteps) at which to save results

  • checkpoint_every (int) – interval (in timesteps) at which to save a checkpoint

train(model: torch.nn.modules.module.Module, batch_size: int, epoch_time_step: int, num_epochs: int, teacher_forcing_ratio: float = 0.99, resume: bool = False) → torch.nn.modules.module.Module[source]

Run training for a given model.

Parameters
  • model (torch.nn.Module) – model to train

  • batch_size (int) – batch size used for training

  • epoch_time_step (int) – number of time steps per training epoch

  • num_epochs (int) – number of epochs for training

  • teacher_forcing_ratio (float) – initial teacher forcing ratio (default: 0.99)

  • resume (bool, optional) – resume training from the latest checkpoint (default: False)
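The constructor's teacher_forcing_step and min_teacher_forcing_ratio suggest a decaying teacher forcing schedule: the ratio passed to train() is reduced by a fixed step and clamped at a minimum. The sketch below illustrates that interaction with plain Python; the function name and the exact decay rule (subtract-and-clamp, applied once per epoch) are assumptions for illustration, not taken from the KoSpeech source.

```python
# Hypothetical sketch of how teacher_forcing_step and
# min_teacher_forcing_ratio could interact across epochs.
def decay_teacher_forcing(ratio: float, step: float, minimum: float) -> float:
    """Decrease the teacher forcing ratio by `step`, clamped at `minimum`."""
    return max(minimum, ratio - step)

ratio = 0.99  # default teacher_forcing_ratio passed to train()
schedule = []
for epoch in range(5):
    schedule.append(round(ratio, 2))
    # defaults from the constructor: step=0.2, minimum=0.8
    ratio = decay_teacher_forcing(ratio, step=0.2, minimum=0.8)

print(schedule)  # the ratio decays from 0.99 and settles at the 0.8 floor
```

With the default values, the ratio hits the 0.8 floor after a single decay step, so most of training would run with a high teacher forcing ratio.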