Issue Date
2021-01-31
Subjects
negative sampling
knowledge graph embedding
generative adversarial network
Subject Categories::G700 Artificial Intelligence
Abstract
Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, and has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Since most KGs store only positive samples for space efficiency, negative sampling plays a crucial role in encoding the triples of a KG. The quality of the generated negative samples directly affects the performance of the learnt knowledge representation in a myriad of downstream tasks, such as recommendation, link prediction and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based and custom cluster-based. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics. We hope this review can provide guidelines for new research.
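To make the core idea concrete, here is a minimal sketch (not from the paper) of the simplest static distribution-based scheme the abstract categorizes: uniform negative sampling, which corrupts a positive triple (h, r, t) by replacing its head or tail with a uniformly random entity, rejecting corruptions that are themselves known positives. The toy entities and relations below are hypothetical.

```python
import random

# Hypothetical toy KG: a set of positive (head, relation, tail) triples.
positives = {
    ("alice", "works_at", "acme"),
    ("bob", "works_at", "acme"),
    ("alice", "knows", "bob"),
}
# Entity vocabulary, collected from the triples.
entities = sorted({e for (h, _, t) in positives for e in (h, t)})

def corrupt(triple, positives, entities, rng=random):
    """Return a negative triple by uniformly replacing the head or tail.

    Candidates that collide with a known positive are rejected and
    resampled, so only true negatives are returned.
    """
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        neg = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if neg not in positives:
            return neg

neg = corrupt(("alice", "works_at", "acme"), positives, entities)
```

Dynamic distribution-based methods (e.g. GAN-based generators) replace the uniform `rng.choice` with a learned distribution that proposes harder negatives, while cluster-based methods restrict the candidate pool to entities similar to the one being replaced.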
Citation
Qian J, Li G, Atkinson K, Yue Y (2021) 'Understanding negative sampling in knowledge graph embedding', International Journal of Artificial Intelligence and Applications, 12 (1)
Publisher
AIRCC
Additional Links
https://aircconline.com/abstract/ijaia/v12n1/12121ijaia05.html
Type
Article
Language
en
ISSN
0976-2191
EISSN
0975-900X
DOI
10.5121/ijaia.2021.12105
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International