Mastering Transformers by Savaş Yıldırım and Meysam Asgari-Chenaghlu

Author: Savaş Yıldırım and Meysam Asgari-Chenaghlu
Language: eng
Format: epub
Publisher: Packt Publishing Pvt Ltd
Published: 2021-07-30T00:00:00+00:00


The following lines detect the device and define the AdamW optimizer properly:

import torch
from transformers import AdamW

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
optimizer = AdamW(model.parameters(), lr=1e-3)

So far, we know how to implement forward propagation, where we process a batch of examples: the batch is fed in the forward direction through the neural network. In a single step, the batch passes through each layer from the first to the last; each layer applies its activation function and passes its output on to the successive layer. To go through the entire dataset over several epochs, we design two nested loops: the outer loop iterates over epochs, while the inner loop iterates over the steps for each batch. The inner part is made up of two blocks: one for training, and one for evaluating the model at the end of each epoch. As you may have noticed, we call model.train() at the start of the training block, and model.eval() when we move to the evaluation block. This is important, as these calls put the model into training and inference mode, respectively.
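The two nested loops described above can be sketched as follows. This is a minimal illustration only: a toy torch.nn.Linear model and random tensors stand in for the book's transformer model and DataLoaders, and torch.optim.AdamW is used in place of the transformers import so the snippet is self-contained. The structure is what matters: the outer epoch loop, the inner per-batch loop, model.train() before training, and model.eval() with torch.no_grad() for evaluation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the book's model and data (assumptions for illustration)
model = nn.Linear(8, 2)
train_loader = DataLoader(
    TensorDataset(torch.randn(32, 8), torch.randint(0, 2, (32,))), batch_size=8)
eval_loader = DataLoader(
    TensorDataset(torch.randn(16, 8), torch.randint(0, 2, (16,))), batch_size=8)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for epoch in range(2):                        # outer loop: one pass per epoch
    model.train()                             # training mode (e.g., dropout enabled)
    for features, labels in train_loader:     # inner loop: one step per batch
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)   # forward pass through all layers
        loss.backward()                           # backward pass
        optimizer.step()

    model.eval()                              # inference mode for the evaluation block
    with torch.no_grad():                     # no gradients needed while evaluating
        eval_loss = sum(loss_fn(model(f), l).item() for f, l in eval_loader)
```

After each epoch, eval_loss holds the summed evaluation loss; in the book's version this is where per-epoch metrics would be reported.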





