In this paper we extend the Skip-Gram and Continuous Bag-of-Words distributional word representation models with global context information. We use a corpus extracted from Wikipedia, where articles are organized in a hierarchy of categories. These categories provide useful topical information about each article. We present four new approaches for enriching word meaning representations with such information. We experiment with the English Wikipedia and evaluate our models on standard word similarity and word analogy datasets. The proposed models significantly outperform other word representation methods when training data of similar size is used, and provide performance comparable with methods trained on much larger datasets. Our new approach shows that increasing the amount of unlabelled data does not necessarily increase the performance of word embeddings as much as introducing global or sub-word information does, especially when training time is taken into consideration.
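As a minimal illustration (not necessarily one of the paper's four approaches), one simple way to inject global topical context into Skip-Gram training is to append an article's Wikipedia category labels to each of its sentences as pseudo-words, so the category tokens share contexts with every word in the article. The sketch below assumes gensim >= 4.0; the sample articles and category names are hypothetical placeholders.

```python
from gensim.models import Word2Vec

# Each article: (list of category labels, list of tokenized sentences).
# Both the articles and the CAT_* labels are illustrative placeholders.
articles = [
    (["CAT_physics", "CAT_science"],
     [["the", "electron", "carries", "negative", "charge"],
      ["quantum", "mechanics", "describes", "subatomic", "particles"]]),
    (["CAT_music"],
     [["the", "orchestra", "performed", "a", "symphony"]]),
]

def with_global_context(articles):
    """Yield each sentence with its article's category labels appended,
    so category tokens appear in the training contexts of all words."""
    for categories, sentences in articles:
        for sentence in sentences:
            yield sentence + categories

model = Word2Vec(
    sentences=list(with_global_context(articles)),
    vector_size=100,   # embedding dimensionality
    window=5,          # context window size
    sg=1,              # 1 = Skip-Gram, 0 = CBOW
    min_count=1,       # keep every token in this toy corpus
)

# Category pseudo-words end up in the same vector space as ordinary words.
print(model.wv.most_similar("electron", topn=3))
```

Because the category tokens are trained like any other vocabulary item, they receive vectors in the same space as the words, which makes topical similarity between words and categories directly measurable.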
Restaurant Reviews CZ ABSA - 2.15k reviews annotated with their related targets and categories
This work is described in the paper: https://doi.org/10.13053/CyS-20-3-2469