Knowledge Graph Generation

Anjali Daisy
Copyright: © 2020 |Pages: 7
DOI: 10.4018/978-1-7998-1159-6.ch007

Abstract

Nowadays, as computer systems are expected to be intelligent, techniques that help modern applications understand human languages are in high demand. Among these techniques, latent semantic models are the most important: they exploit the latent semantics of the lexicons and concepts of human languages and transform them into tractable, machine-understandable numerical representations. Without such representations, languages are nothing but combinations of meaningless symbols to the machine. To provide such learned representations, embedding models for knowledge graphs have attracted much attention in recent years, since they intuitively transform important concepts and entities in human languages into vector representations and realize relational inferences among them via simple vector calculations. These techniques have effectively addressed tasks such as knowledge graph completion and link prediction, and show great potential to be incorporated into more natural language processing (NLP) applications.
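The "relational inference via simple vector calculation" mentioned above can be illustrated with a translation-based scoring scheme in the style of TransE, where a triple (head, relation, tail) is considered plausible when the head vector plus the relation vector lands near the tail vector. The sketch below uses hand-picked toy vectors rather than trained embeddings, and the entity and relation names are purely illustrative assumptions.

```python
import numpy as np

# Toy TransE-style embeddings (illustrative values, not trained weights).
# A triple (head, relation, tail) is scored by how closely
# head_vector + relation_vector approximates tail_vector.
entity_vecs = {
    "Paris":  np.array([0.9, 0.1, 0.0]),
    "France": np.array([1.0, 0.0, 0.5]),
    "Berlin": np.array([0.2, 0.8, 0.1]),
}
relation_vecs = {
    "capital_of": np.array([0.1, -0.1, 0.5]),
}

def score(head, relation, tail):
    """Lower distance means a more plausible triple under the translation assumption."""
    diff = entity_vecs[head] + relation_vecs[relation] - entity_vecs[tail]
    return np.linalg.norm(diff)

# Link prediction: rank candidate tails for the incomplete triple (Paris, capital_of, ?)
candidates = ["France", "Berlin"]
ranked = sorted(candidates, key=lambda t: score("Paris", "capital_of", t))
print(ranked[0])  # "France" -- the closest tail under the toy vectors
```

In practice the vectors are learned from the known triples of a knowledge graph, and the same distance-based ranking is what drives knowledge graph completion and link prediction.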
Chapter Preview

Embeddings

Embedding-based techniques project discrete concepts or words into a low-dimensional, continuous vector space in which co-occurring concepts or words are located close to each other. Compared to conventional discrete representations (e.g., one-hot encoding), embeddings provide stronger representations, particularly for concepts that appear infrequently in corpora (Narayanan et al., 2012) but carry significant meaning. In this section, we present the background of embedding-based approaches that are frequently used in NLP tasks; a small numerical contrast between one-hot and dense representations follows below. We begin with a brief introduction to word embeddings and then focus on past advances in knowledge graph embeddings (King, 1983).
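The contrast between one-hot encoding and dense embeddings can be made concrete with cosine similarity: one-hot vectors are mutually orthogonal, so no similarity between words survives, whereas dense vectors can place related words close together. The embedding values below are made up for illustration, not drawn from any trained model.

```python
import numpy as np

# Toy vocabulary: under one-hot encoding every word is orthogonal to every other,
# so "king" and "queen" look no more similar than "king" and "apple".
vocab = ["king", "queen", "apple"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# Illustrative dense embeddings (assumed values): related / co-occurring words
# are placed close together in a low-dimensional continuous space.
dense = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(one_hot["king"], one_hot["queen"]))  # 0.0  -- one-hot carries no similarity
print(cosine(dense["king"], dense["queen"]))      # ~0.99 -- embeddings encode relatedness
print(cosine(dense["king"], dense["apple"]))      # ~0.16 -- unrelated words stay apart
```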
