GPT-1 - Improving Language Understanding by Generative Pre-Training
Mario Parreño · #nlp #transformer #paper #decoder

GPT explores a semi-supervised approach to language understanding tasks that combines unsupervised pre-training, assuming access to a large corpus of unlabeled text, with supervised fine-tuning on datasets of manually annotated training examples. To do so, GPT employs a two-stage training procedure:

  1. First, it uses a language modeling objective on the unlabeled data to learn the initial parameters of a neural network model.
  2. Subsequently, it adapts the model parameters to a target task using the corresponding supervised objective.

Furthermore, this approach showcases zero-shot behaviors of the pre-trained model in different settings, demonstrating that GPT acquires useful linguistic knowledge for downstream tasks during the unsupervised pre-training phase.
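The two-stage recipe can be summarized in a few lines. The following is a minimal sketch, not the paper's actual code: the function, its arguments, and the `lm_loss`/`task_loss` callables are illustrative names for whatever implements the language modeling and supervised objectives.

```python
def train_gpt(model, optimizer, lm_loss, task_loss, unlabeled_batches, labeled_batches):
    """Two-stage training: unsupervised pre-training, then supervised fine-tuning."""
    # Stage 1: language modeling on the unlabeled corpus.
    for tokens in unlabeled_batches:
        optimizer.zero_grad()
        lm_loss(model, tokens).backward()
        optimizer.step()

    # Stage 2: adapt the same parameters with the supervised objective
    # (optionally keeping the LM loss as an auxiliary term, see Fine-tuning).
    for tokens, label in labeled_batches:
        optimizer.zero_grad()
        task_loss(model, tokens, label).backward()
        optimizer.step()

    return model
```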

Architecture

The GPT model architecture is a multi-layer causal Transformer decoder, almost identical to the original Transformer implementation. If you want more details about the Transformer architecture, you can check out my Transformer blog post.

We can denote the number of Transformer decoder blocks as $L$, the hidden size as $H$, and the number of self-attention heads as $A$. The initial GPT model design is the following:

GPT model configurations.

| Model Name | $L$ (Transformer blocks) | $H$ (Hidden size) | $A$ (Self-Attention heads) |
|---|---|---|---|
| GPT | 12 | 768 | 12 |

Additionally, GPT uses a byte-pair encoding (BPE) vocabulary with 40,000 merges. The authors use the ftfy library to clean the raw text of the BookCorpus dataset, standardize some punctuation and whitespace, and use the spaCy tokenizer.
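For reference, the configuration above can be collected in a small Python container; the values come from the table and the paper, while the dataclass itself is only an illustrative way to hold them.

```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    """Original GPT hyperparameters; values taken from the table above."""
    n_layers: int = 12        # L: Transformer decoder blocks
    hidden_size: int = 768    # H: hidden / embedding size
    n_heads: int = 12         # A: self-attention heads
    bpe_merges: int = 40_000  # byte-pair encoding merges
    max_seq_len: int = 512    # w: input sequence length used in the paper

config = GPTConfig()
```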

Pre-training

Learning effectively from raw text is crucial to alleviating the dependence on supervised learning. Even in cases where considerable supervision is available, learning good representations in an unsupervised fashion can provide a significant performance boost.

Given an unsupervised corpus of tokens, GPT uses a standard language modeling objective to maximize the likelihood. This task consists of predicting a token given its previous context. As in the Transformer, this can be done in an unsupervised way by taking sequences of tokens and prepending a special token to the input, <s> in our illustration.
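Concretely, for an unlabeled corpus of tokens $U = \{u_1, \dots, u_n\}$ and a context window of size $k$, the paper maximizes

$$
L_1(U) = \sum_i \log P(u_i \mid u_{i-k}, \dots, u_{i-1}; \Theta)
$$

where the conditional probability $P$ is modeled by the Transformer decoder with parameters $\Theta$.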

GPT model architecture for pre-training. The model receives a sequence of tokens shifted right as input, and outputs the sequence of tokens. The model is trained to predict the next token based only on its previous context.
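A minimal sketch of this next-token objective, assuming a `model` callable that maps a batch of token ids to per-position vocabulary logits (PyTorch is used here only for illustration; it is not necessarily what the authors used):

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    """Language modeling loss for a batch of sequences that start with <s>.

    token_ids: (batch, seq_len) integer tensor of BPE token ids.
    """
    inputs = token_ids[:, :-1]   # the model only sees each token's left context
    targets = token_ids[:, 1:]   # at every position, the label is the next token
    logits = model(inputs)       # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```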

Fine-tuning

After pre-training, GPT adapts the parameters to a supervised target task. Given a labeled dataset $C$, each instance consists of a sequence of input tokens, $x^1, \dots, x^m$, along with a label $y$. The inputs are passed through the pre-trained model to obtain the final transformer block's activation $h_l^m$ at the end-of-sequence token <e>, which is then fed into an added linear output layer with parameters $W_y$ to predict $y$. The authors additionally found that including language modeling as an auxiliary objective during fine-tuning helps improve generalization and accelerate convergence.
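In the paper's notation, the added head and the resulting objectives are

$$
P(y \mid x^1, \dots, x^m) = \mathrm{softmax}(h_l^m W_y)
$$

$$
L_2(C) = \sum_{(x, y)} \log P(y \mid x^1, \dots, x^m)
$$

and, with the auxiliary language modeling term weighted by $\lambda$,

$$
L_3(C) = L_2(C) + \lambda \cdot L_1(C)
$$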

The GPT setup does not require the fine-tuning target tasks to be in the same domain as the unlabeled corpus used during pre-training. During transfer, GPT uses task-specific input adaptations, always processing structured text inputs as a single contiguous sequence of tokens. Thanks to that, only minimal changes to the architecture of the pre-trained model are required.

Task-specific input transformations

For some tasks, like text classification, we can directly fine-tune GPT as described above. For other tasks, it is possible to convert structured inputs into an ordered sequence that the pre-trained model can process. These input transformations allow GPT to avoid making extensive changes to the architecture across tasks.

Textual entailment. For entailment tasks, simply concatenate the premise $p$ and hypothesis $h$ token sequences, with a delimiter token ($) in between, then process the sequence and take the final transformer block's activation.
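A small sketch of this input construction; the start, delimiter, and end token strings below are illustrative placeholders for the corresponding entries in GPT's vocabulary.

```python
def entailment_input(premise_tokens: list, hypothesis_tokens: list) -> list:
    # One contiguous sequence: [<s>; premise; $; hypothesis; <e>]
    return ["<s>"] + premise_tokens + ["$"] + hypothesis_tokens + ["<e>"]
```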

Similarity. For similarity tasks there is no inherent ordering of the two sentences being compared. To reflect this, the authors modify the input sequence to contain both possible sentence orderings and process each independently to produce two sequence representations $h_l^m$, which are added element-wise before being fed into the linear output layer.
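A sketch of the similarity transformation, assuming a hypothetical `encode` helper that returns the final transformer block's activation for a sequence and a `linear_head` output layer:

```python
def similarity_logits(encode, linear_head, sent_a: list, sent_b: list):
    # Process both possible orderings independently...
    h_ab = encode(["<s>"] + sent_a + ["$"] + sent_b + ["<e>"])
    h_ba = encode(["<s>"] + sent_b + ["$"] + sent_a + ["<e>"])
    # ...and add the two sequence representations element-wise
    # before the linear output layer.
    return linear_head(h_ab + h_ba)
```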

Question Answering and Commonsense Reasoning. For these tasks, we are given a context document $z$, a question $q$, and a set of possible answers $\{a_k\}$. The authors concatenate the document context and question with each possible answer, adding a delimiter token in between, to get $[z; q; \$; a_k]$. Each of these sequences is processed independently to obtain a score; the scores are then normalized via a softmax layer to produce an output distribution over possible answers.
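And a sketch of the multiple-choice scoring, again with hypothetical `encode` and `score_head` helpers; each candidate sequence is scored independently and a softmax over the candidate scores yields the answer distribution.

```python
import torch
import torch.nn.functional as F

def answer_distribution(encode, score_head, context, question, answers):
    scores = []
    for answer in answers:
        # Build [z; q; $; a_k] as one contiguous token sequence per candidate.
        seq = ["<s>"] + context + question + ["$"] + answer + ["<e>"]
        h = encode(seq)               # final transformer block activation
        scores.append(score_head(h))  # scalar score for this candidate
    # Normalize the per-candidate scores into a probability distribution.
    return F.softmax(torch.stack(scores), dim=0)
```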

Glossary

  • $L$: Number of Transformer decoder blocks.
  • $H$: Size of the embeddings. An embedding is a learnable representation of the words of the vocabulary.
  • $A$: Number of self-attention heads.
  • $w$: Input sequence length.

Credits

  • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training.
