Contextual Word Representation
Computing Contextual Word Representation with BERT-as-Service
This document explains how to generate contextual word-level embeddings using BERT-as-Service. Unlike sentence embeddings, this method provides individual vector representations for each token in a sentence, considering its surrounding context.
Prerequisites
Before you begin, ensure you have BERT-as-Service installed. If not, you can install it via pip:
pip install bert-serving-server bert-serving-client
You will also need a pre-trained BERT model downloaded to your local machine. For this example, we'll assume you have the uncased_L-12_H-768_A-12 model.
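If you still need the checkpoint, the short Python sketch below downloads and unzips it into the current directory. The URL is the one historically published with Google's original BERT release; treat it as an assumption and verify it is still live before relying on it.

import urllib.request
import zipfile

# URL from the original Google BERT release for uncased_L-12_H-768_A-12;
# confirm it is still current before running.
MODEL_URL = "https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip"

# Download the zipped checkpoint and extract it, producing
# ./uncased_L-12_H-768_A-12/ with the vocab, config, and weights.
urllib.request.urlretrieve(MODEL_URL, "uncased_L-12_H-768_A-12.zip")
with zipfile.ZipFile("uncased_L-12_H-768_A-12.zip") as zf:
    zf.extractall(".")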
Step 1: Start the BERT Server Without Pooling
To obtain token-level representations, you need to configure the BERT server with a specific pooling_strategy and max_seq_len:
- pooling_strategy=NONE: This is crucial for disabling pooling and obtaining embeddings for each token individually.
- max_seq_len: This defines the maximum number of tokens the server will process for a single input. Tokens beyond this length will be truncated, and sentences shorter than this will be padded.
Execute the following command in your terminal to start the BERT server:
nohup bert-serving-start -pooling_strategy NONE -max_seq_len 20 -model_dir ./uncased_L-12_H-768_A-12 > out.file 2>&1 &
- nohup: Ensures the server continues to run even if you close your terminal.
- -model_dir: Specifies the path to your downloaded BERT model.
- > out.file 2>&1 &: Redirects server output and errors to a file and runs the process in the background.
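If you prefer to start the server from Python rather than the shell, bert-serving-server also exposes a programmatic entry point. The sketch below assumes the get_args_parser()/BertServer API documented for the package; if your installed version differs, fall back to the shell command above. The flags mirror the command exactly.

from bert_serving.server import BertServer
from bert_serving.server.helper import get_args_parser

# Build the same configuration as the shell command above:
# no pooling, 20-token sequences, and the local uncased BERT-base checkpoint.
args = get_args_parser().parse_args([
    '-model_dir', './uncased_L-12_H-768_A-12',
    '-pooling_strategy', 'NONE',
    '-max_seq_len', '20',
])

server = BertServer(args)
server.start()  # launches the server; see the project docs for programmatic shutdown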
Step 2: Import and Initialize the BERT Client
In your Python script or notebook, import the BertClient and initialize it to connect to the running BERT server.
from bert_serving.client import BertClient
bc = BertClient()  # connects to a server on localhost using the default ports
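If the server runs on another machine or a non-default port, the client accepts explicit connection arguments. The values below are illustrative and, to the best of my understanding, mirror the library defaults; a timeout (in milliseconds) avoids hanging if the server is not up yet.

# Explicit connection settings (shown with commonly documented defaults);
# adjust ip/ports to match where bert-serving-start is running.
bc = BertClient(ip='localhost', port=5555, port_out=5556, timeout=10000)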
Step 3: Define the Input Sentence
Prepare the sentence for which you want to generate word embeddings.
sentence = 'The weather is great today'
Step 4: Generate Contextual Word Vectors
Use the encode method of the BertClient to get the contextual representations for the tokens in your sentence.
vec = bc.encode([sentence])
Step 5: Check the Shape of the Output
The vec variable will contain the embeddings. Let's inspect its shape to understand the output structure.
print(vec.shape)
Expected Output:
(1, 20, 768)
This shape indicates:
- 1: The number of input samples (in this case, one sentence).
- 20: The maximum sequence length defined for the BERT server. This is the length of the output token dimension, including special tokens and padding.
- 768: The dimensionality of the embedding vector for each token.
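To see how the first (sample) dimension grows, you can encode more than one sentence at a time. A minimal sketch, reusing the client from Step 2; the second sentence is just an illustration.

# Encoding a batch of two sentences: the first dimension tracks the batch size,
# while the other two stay fixed at max_seq_len and the hidden size.
batch = ['The weather is great today', 'It is raining now']
batch_vec = bc.encode(batch)
print(batch_vec.shape)  # expected: (2, 20, 768)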
Understanding Token Representations
Each element in the vec tensor corresponds to the embedding of a specific token within the input sentence, considering its context. The structure is as follows:
vec[sample_index][token_index]
For our example sentence ['The', 'weather', 'is', 'great', 'today'] with max_seq_len=20, the breakdown is:
- vec[0][0]: Embedding for the special [CLS] token.
- vec[0][1]: Embedding for the token "The".
- vec[0][2]: Embedding for the token "weather".
- vec[0][3]: Embedding for the token "is".
- vec[0][4]: Embedding for the token "great".
- vec[0][5]: Embedding for the token "today".
- vec[0][6]: Embedding for the special [SEP] token.
- vec[0][7] to vec[0][19]: Embeddings for padding tokens, used to fill up to max_seq_len.
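To confirm which row corresponds to which token (WordPiece may split some words into sub-tokens), the client can also return the server-side tokenization alongside the vectors. The show_tokens flag used below is an optional encode() argument in bert-serving-client; treat the exact return format as an assumption and check your installed version.

# Ask the server to also return its own tokenization so that row indices in the
# output can be matched against actual (sub)word tokens, [CLS], and [SEP].
vecs, tokens = bc.encode([sentence], show_tokens=True)
print(tokens[0])       # e.g. ['[CLS]', 'the', 'weather', 'is', 'great', 'today', '[SEP]', ...]
print(vecs[0][2][:5])  # first 5 values of the contextual vector for "weather"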
Each of these token vectors is a 768-dimensional embedding that captures the meaning and contextual nuances of the word within the given sentence.
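As a quick illustration of that context sensitivity, you can compare the vector for the same word in two different sentences, for example "bank" (the ambiguity also discussed in the interview questions below). This is a minimal sketch, assuming the server from Step 1 is still running and that encode() supports the show_tokens flag as above; the two sentences are made up for illustration.

import numpy as np

# Two sentences where "bank" has different meanings; compare its contextual vectors.
contexts = ['She deposited money in the bank', 'He sat on the river bank']
vecs, tokens = bc.encode(contexts, show_tokens=True)

# Locate "bank" in each sentence using the server's own tokenization,
# so the index stays correct even if WordPiece splits other words.
bank_vecs = [vecs[i][tokens[i].index('bank')] for i in range(len(contexts))]

# A cosine similarity noticeably below 1.0 indicates the two "bank" vectors differ,
# i.e. the embedding reflects the surrounding context, not just the word form.
cos = np.dot(bank_vecs[0], bank_vecs[1]) / (
    np.linalg.norm(bank_vecs[0]) * np.linalg.norm(bank_vecs[1]))
print(f'cosine similarity between the two "bank" vectors: {cos:.3f}')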
Conclusion
By configuring bert-as-service with pooling_strategy=NONE, you can effectively extract rich, context-aware word-level embeddings. These embeddings are highly valuable for various NLP tasks that require understanding individual token meanings in context, such as:
- Named Entity Recognition (NER): Identifying and classifying named entities (e.g., persons, organizations, locations).
- Part-of-Speech (POS) Tagging: Assigning grammatical tags to each word (e.g., noun, verb, adjective).
- Token-Level Classification: Performing classification tasks on a per-token basis (a minimal sketch follows at the end of this conclusion).
- Fine-Grained Text Analysis: Deeper analysis of textual sentiment, intent, or other specific linguistic features at the word level.
This ability to generate token-level embeddings complements sentence embeddings and provides a more granular understanding of text, enabling more sophisticated NLP applications.
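To make the token-level classification idea concrete, the sketch below feeds per-token BERT vectors into a simple scikit-learn classifier. The tiny label set and training data are made up purely for illustration, scikit-learn is an extra dependency not mentioned above, and the show_tokens flag is assumed as in the earlier sketch; a real pipeline would use a proper labeled dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy example: classify each real token (skipping [CLS]/[SEP]/padding) as
# weather-related or not, using its contextual vector as the feature.
train_sentence = 'The weather is great today'
vecs, tokens = bc.encode([train_sentence], show_tokens=True)

# Hand-made labels for illustration only: 1 = weather-related token, 0 = other.
labels = {'weather': 1, 'great': 1}
X, y = [], []
for tok, vec in zip(tokens[0], vecs[0]):
    if tok in ('[CLS]', '[SEP]', '[PAD]') or not tok.strip():
        continue  # keep only real word tokens
    X.append(vec)
    y.append(labels.get(tok, 0))

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print(clf.predict(np.array(X)))  # per-token predictions on the training tokens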
Interview Questions
1. What is the difference between sentence embeddings and token-level embeddings in BERT?
Sentence embeddings provide a single vector representation for an entire sentence, capturing its overall meaning. Token-level embeddings, on the other hand, provide a distinct vector for each individual token within a sentence, reflecting its meaning in that specific context.

2. How do you configure bert-as-service to generate word-level embeddings?
You start the BERT server with pooling_strategy=NONE and specify a max_seq_len.

3. What does pooling_strategy=NONE achieve in the BERT server setup?
It instructs the server to disable any form of pooling (such as average or max pooling) and instead return the per-token output embeddings from the selected encoder layer, rather than a single pooled sentence vector.

4. What is the purpose of specifying max_seq_len in BERT?
max_seq_len sets the maximum number of tokens that BERT will process for a single input sequence. Sentences longer than this will be truncated, and shorter sentences will be padded with special tokens up to this length. This ensures a consistent input length within each batch.

5. How many dimensions does each token vector in BERT typically have?
The dimensionality of each token vector depends on the specific BERT model used. For models like bert-base-uncased (which has H=768 hidden units), each token vector is 768-dimensional.

6. What special tokens are included in BERT token outputs, and what do they represent?
BERT outputs typically include embeddings for [CLS] (classification) at the beginning of the sequence, used for sequence-level classification tasks, and [SEP] (separator) at the end of sentences or between sentence pairs, to delineate distinct segments.

7. Why are contextual embeddings important for NLP tasks like NER and POS tagging?
Contextual embeddings are crucial because the meaning of a word often depends on its surrounding words. Tasks like NER and POS tagging require understanding this context to correctly identify entities or assign grammatical roles to words that might have ambiguous meanings in isolation (e.g., "bank" as a financial institution vs. a river bank).

8. How do padding tokens affect the output shape of BERT embeddings?
Padding tokens fill the sequence up to max_seq_len. Their embeddings are included in the output tensor's second dimension, contributing to the total token count, but they should generally be masked out or ignored in downstream processing.

9. What kind of downstream tasks benefit from token-level embeddings?
Token-level embeddings are beneficial for tasks requiring fine-grained understanding, such as Named Entity Recognition (NER), Part-of-Speech (POS) tagging, token classification, word-level sentiment analysis, and question answering.

10. How is the vector output structured when no pooling strategy is applied in BERT?
When pooling_strategy=NONE, the output is a 3D tensor: (batch_size, sequence_length, hidden_size). batch_size is the number of input sentences, sequence_length is the max_seq_len (including special tokens and padding), and hidden_size is the dimensionality of the embeddings for each token.