Improving Word Representations via Global Context and Multiple Word Prototypes
Introduction
Progress in Natural Language Processing (NLP) is vital to the development of Artificial Intelligence (AI), and word representations underpin most NLP tasks. However, the representations produced by common learning algorithms are limited: they are trained on local context only and assign a single vector to each word.
This becomes a difficulty because many words are polysemous, and their different meanings can only be distinguished when the global context is taken into account. This is where multiple word prototypes, on which much of this line of machine learning research is based, come in handy. Let us delve deeper into this subject.
Vector-space Models (VSM)
VSMs represent words as vectors that capture their syntactic and semantic information. You can use a VSM to induce similarity measures by computing the distance between vectors, which has applications in question answering, document classification, information retrieval, and many other tasks. The main problem with most VSMs is that a single prototype per word cannot capture polysemy and homonymy.
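To make the similarity idea concrete, here is a minimal sketch in Python. The embeddings are made-up toy vectors, not trained values, and cosine similarity is just one common choice of distance-based measure:

```python
import numpy as np

# Toy vector-space model: the embeddings below are made-up
# illustrative vectors, not values trained on a corpus.
embeddings = {
    "bank":  np.array([0.8, 0.1, 0.3]),
    "money": np.array([0.7, 0.2, 0.4]),
    "river": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(u, v):
    """Similarity between two word vectors: 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["bank"], embeddings["money"]))  # high
print(cosine_similarity(embeddings["bank"], embeddings["river"]))  # lower
```

Note that with a single vector for "bank", the model is forced to place the word somewhere between its financial and river senses, which is exactly the limitation discussed above.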
The introduction of multi-prototype vector-space models in the last decade has contributed greatly to the growth of NLP and, in turn, machine learning. In these models, the contexts in which a word occurs are clustered to discriminate its senses, and a separate prototype is then created from each cluster (see the sketch below). For the clusters to be accurate, the model must capture both the semantics and the syntax of words. This is where the distinction between local and global context arises: local context alone is not always sufficient to predict a word's meaning, because the same word may carry a different sense in a wider region of text, such as a paragraph. Global context is therefore helpful for comprehending words with varied meanings.
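A rough sketch of the clustering step, using k-means from scikit-learn as one plausible choice. The context vectors here are randomly sampled stand-ins for the averaged embeddings of context windows around occurrences of a word like "bank"; a real system would compute them from a corpus:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row stands in for the averaged embedding of one context window
# around an occurrence of "bank". Two artificial groups simulate
# "finance" contexts and "river" contexts.
rng = np.random.default_rng(0)
context_vectors = np.vstack([
    rng.normal(loc=+1.0, scale=0.1, size=(50, 3)),   # "finance"-like contexts
    rng.normal(loc=-1.0, scale=0.1, size=(50, 3)),   # "river"-like contexts
])

# Cluster the contexts; each cluster centroid becomes one word prototype.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(context_vectors)
prototypes = kmeans.cluster_centers_   # one vector per discovered sense
print(prototypes.shape)                # (2, 3): two prototypes for "bank"
```

The number of clusters per word is itself a modeling decision; two is used here only because the toy data has two senses.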
Global Context
Earlier, the local context of a word was assumed to be its only defining facet. With the advent of machine learning, it became necessary to propose models for word representations, and it was soon found that local context is not always accurate, since the method assumes that words appearing near one another are semantically linked. Unsupervised word embeddings of this kind can only partially capture the semantics of a word.
If the global context is used along with the local context, however, word representations improve. Global context refers to a larger semantic unit, such as the paragraph or page where you come across the word. It complements local context and can effectively capture varied facets of word semantics.
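One simple way to combine the two views, sketched below under toy assumptions: the local context is the average of embeddings in a small window around the target word, the global context is the average over the whole document, and the two are concatenated. The vocabulary, the random embeddings, and the helper names `local_context` and `global_context` are all hypothetical:

```python
import numpy as np

# Hypothetical lookup table: random toy embeddings, one per word.
rng = np.random.default_rng(1)
vocab = ["the", "boat", "drifted", "past", "the", "bank", "loan", "officer"]
embed = {w: rng.normal(size=4) for w in set(vocab)}

def local_context(words, position, window=2):
    """Average the embeddings in a small window around the target word."""
    lo, hi = max(0, position - window), position + window + 1
    neighbours = [w for i, w in enumerate(words[lo:hi], start=lo) if i != position]
    return np.mean([embed[w] for w in neighbours], axis=0)

def global_context(words):
    """Average the embeddings of the whole document (paragraph/page)."""
    return np.mean([embed[w] for w in words], axis=0)

# Concatenate both views so the representation sees window and document.
position = vocab.index("bank")
combined = np.concatenate([local_context(vocab, position), global_context(vocab)])
print(combined.shape)  # (8,): 4 local dimensions + 4 global dimensions
```

Concatenation is just one option; weighting the document average (for example by inverse document frequency) is another common refinement.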
Multiple Word Prototype
Traditionally, VSMs used a single prototype vector to judge the semantic similarity of words. Such a model assumes that the meaning of a word is independent of context, but that is often untrue: meaning frequently depends on context, so a single-prototype VSM cannot handle phenomena such as polysemy and homonymy.
With a multiple-word-prototype VSM, it becomes easier to handle homonymous and polysemous words, and closely related words can be identified more reliably, as the sketch below illustrates.
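For illustration, a minimal sketch of disambiguation with multiple prototypes: given a vector for a new context, pick the prototype it is most similar to. The prototype vectors and sense labels are toy values standing in for cluster centroids like those computed earlier:

```python
import numpy as np

# Toy prototypes for "bank" (e.g., centroids from a clustering step).
prototypes = {
    "bank_finance": np.array([+1.0, +1.0, +1.0]),
    "bank_river":   np.array([-1.0, -1.0, -1.0]),
}

def disambiguate(context_vector):
    """Return the sense whose prototype is closest to the context."""
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(prototypes, key=lambda sense: cosine(prototypes[sense], context_vector))

# A context vector leaning toward the "river" cluster picks that sense.
print(disambiguate(np.array([-0.9, -1.1, -0.8])))  # bank_river
```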
Final Thoughts
In the age of machine learning and artificial intelligence, a multi-prototype approach to word representation in neural network architectures is valuable. Using global context for word representations has become an important tool for handling polysemous words whose multi-faceted meanings shift across the paragraph or document in which they appear. The cloud platform of E2E Networks uses this approach to serve thousands of loyal customers on a daily basis.