
Learning Long-text Semantic Similarity with Multi-Granularity Semantic Embedding Based on Knowledge Enhancement

EasyChair Preprint no. 3167

13 pages · Date: April 13, 2020

Abstract

We propose a new method for semantic similarity calculation, the "multi-granularity semantic embedding model based on knowledge enhancement" (MSE based knowledge), to address similarity and relevance in long-text semantic matching. The method first enhances semantics through the external knowledge base DBpedia, incorporating both semantic attributes and relationships into the vector representations of key entities. Second, each long text is expressed as a multi-granularity vector: character vectors constructed with one-dimensional convolution, word vectors constructed from external knowledge sources and pre-trained word vectors, and sentence vectors constructed with a bidirectional LSTM. We then use a Siamese network framework to compute the final similarity. To improve results, we add an attention mechanism after the character-level representation to further weight key characters. Finally, we evaluate the method on two popular data sets (LP50 and MSRP). Experimental results show that our method makes better use of long-text knowledge and achieves higher accuracy at lower time cost.
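As a rough illustration of the architecture sketched above, the following PyTorch code outlines a minimal multi-granularity Siamese encoder: a character CNN followed by attention, a pooled word-level vector, and a BiLSTM sentence encoder, with both texts passing through a shared encoder and compared by cosine similarity. All class names, dimensions, and the pooling and similarity choices are illustrative assumptions; knowledge-enhanced word vectors (e.g., enriched offline via DBpedia) are assumed to be precomputed and passed in, and the paper's actual layers and similarity head may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGranularityEncoder(nn.Module):
    """Encodes one text at character, word, and sentence granularity.

    Dimensions are hypothetical; the character CNN, character-level attention,
    and BiLSTM sentence encoder mirror the components named in the abstract.
    Word vectors are assumed to be pre-trained (and knowledge-enhanced offline),
    so they arrive as dense inputs rather than being looked up here.
    """
    def __init__(self, char_vocab=128, char_dim=32, word_dim=300, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, hidden, kernel_size=3, padding=1)
        self.char_attn = nn.Linear(hidden, 1)  # attention weights over character positions
        self.sent_lstm = nn.LSTM(word_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, char_ids, word_vecs):
        # char_ids:  (batch, num_chars)            integer character indices
        # word_vecs: (batch, num_words, word_dim)  pre-trained / knowledge-enhanced vectors
        c = self.char_emb(char_ids).transpose(1, 2)        # (batch, char_dim, num_chars)
        c = torch.relu(self.char_cnn(c)).transpose(1, 2)   # (batch, num_chars, hidden)
        a = torch.softmax(self.char_attn(c), dim=1)        # attention over characters
        char_repr = (a * c).sum(dim=1)                     # weighted character vector

        word_repr = word_vecs.mean(dim=1)                  # simple pooled word-level vector
        s, _ = self.sent_lstm(word_vecs)
        sent_repr = s.mean(dim=1)                          # pooled BiLSTM sentence vector

        return torch.cat([char_repr, word_repr, sent_repr], dim=-1)

class SiameseSimilarity(nn.Module):
    """Siamese wrapper: the same encoder is applied to both texts and the
    final score is the cosine similarity of the concatenated embeddings."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, chars_a, words_a, chars_b, words_b):
        ea = self.encoder(chars_a, words_a)
        eb = self.encoder(chars_b, words_b)
        return F.cosine_similarity(ea, eb, dim=-1)

Because the encoder weights are shared across both branches, the similarity score is symmetric in the two inputs and the parameter count does not grow with the number of granularities being compared; only the final concatenated embedding does.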

Keyphrases: artificial intelligence, deep learning, natural language processing, semantic similarity calculation

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:3167,
  author = {Deguang Peng and Bohui Hao and Xianlun Tang and Yingjie Chen and Jian Sun},
  title = {Learning Long-text Semantic Similarity with Multi-Granularity Semantic Embedding Based on Knowledge Enhancement},
  howpublished = {EasyChair Preprint no. 3167},
  year = {EasyChair, 2020}}