Transfer Learning for Natural Language Processing by Paul Azunre
Author: Paul Azunre
Language: English
Format: EPUB, PDF
Tags: Computers, Artificial Intelligence, Natural Language Processing, Data Science, Neural Networks, Machine Learning
ISBN: 9781617297267
Google Books ID: bGI7EAAAQBAJ
Publisher: Simon and Schuster
Published: 2021-08-31
Figure 6.8 Suggested ULMFiT rate schedule for the case of 10,000 total iterations. The rate increases linearly for 10% of the total number of iterations (i.e., 1,000), up to a maximum of 0.01, and then decreases linearly afterward to 0.
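The schedule described in the caption can be sketched as a small helper function. This is a minimal illustration of the triangular shape only (linear warmup to the peak, then linear decay to 0); the function name and parameters are ours, not from the book's code:

```python
def triangular_lr(step, total_steps=10_000, max_lr=0.01, warmup_frac=0.1):
    """Learning rate at `step` under the schedule of figure 6.8.

    Rises linearly to max_lr over the first warmup_frac of training,
    then decays linearly back to 0 over the remaining steps.
    """
    cut = int(total_steps * warmup_frac)  # step at which the peak is reached
    if step < cut:
        return max_lr * step / cut                              # linear warmup
    return max_lr * (total_steps - step) / (total_steps - cut)  # linear decay
```

With the caption's numbers, the rate peaks at 0.01 at iteration 1,000 and returns to 0 at iteration 10,000.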
6.3.2 Target task classifier fine-tuning
In addition to techniques for fine-tuning the language model on a small dataset representing the data distribution for the new scenario, ULMFiT provides two techniques for refining the task-specific layers: concat pooling and gradual unfreezing.
At the time ULMFiT was developed, it was standard practice to pass the hidden state of the final unit of an LSTM-based language model to the task-specific layer. The authors instead recommend concatenating these final hidden states with the max-pooled and mean-pooled hidden states of all time steps (as many of them as can fit in memory). In the bidirectional context, they do this separately for forward and backward language models and average predictions. This process, which they call concat pooling, performs a similar function to the bidirectional language modeling approach described for ELMo.
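The pooling step above is easy to express directly. Here is a minimal NumPy sketch of concat pooling over a batch of LSTM hidden states (the function name is ours; in practice the tensor of hidden states would come from the framework's RNN layer):

```python
import numpy as np

def concat_pool(hidden_states):
    """Concat-pool a batch of LSTM outputs.

    hidden_states: array of shape (batch, seq_len, dim) holding the hidden
    state at every time step. Returns shape (batch, 3 * dim): the final
    time step's hidden state concatenated with the max-pooled and
    mean-pooled states over all time steps.
    """
    last = hidden_states[:, -1, :]          # final unit's hidden state
    max_pooled = hidden_states.max(axis=1)  # element-wise max over time
    mean_pooled = hidden_states.mean(axis=1)
    return np.concatenate([last, max_pooled, mean_pooled], axis=1)
```

The resulting vector is what gets passed to the task-specific classifier layer in place of the final hidden state alone.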
To reduce the risk of catastrophic forgetting during fine-tuning, the authors suggest unfreezing and tuning the layers gradually. The process starts with the last layer, which contains the least general knowledge and is the only layer unfrozen and refined during the first epoch. At the second epoch, one additional layer is unfrozen, and so on, until all task-specific layers have been unfrozen and fine-tuned by the final iteration of this gradual unfreezing process.
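The unfreezing schedule just described can be sketched in a framework-agnostic way. Here the `Layer` class is a hypothetical stand-in for a framework layer with a trainable flag (e.g., Keras's `layer.trainable` or PyTorch's `requires_grad`); only the scheduling logic is the point:

```python
class Layer:
    """Hypothetical stand-in for a framework layer with a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = False

def gradual_unfreeze(layers, epoch):
    """Unfreeze the last `epoch` layers (epoch is 1-indexed).

    Epoch 1 unfreezes only the last layer; each subsequent epoch
    unfreezes one more layer, moving toward the input.
    """
    for layer in layers:
        layer.trainable = False   # freeze everything first
    for layer in layers[-epoch:]:
        layer.trainable = True    # then unfreeze the top `epoch` layers
```

Calling `gradual_unfreeze(layers, epoch)` at the start of each epoch reproduces the schedule: by epoch `len(layers)`, every layer is trainable.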
As a reminder, these techniques will be explored in the code in chapter 9, which will cover various adaptation strategies.