PH.D. DEFENCE - PUBLIC SEMINAR

Deep Neural Networks for Relation Extraction

Speaker
Mr Tapas Nayak
Advisor
Dr Ng Hwee Tou, Provost's Chair Professor, School of Computing


Monday, 02 Nov 2020, 08:30 AM to 10:00 AM

Zoom presentation

Join Zoom Meeting
https://nus-sg.zoom.us/j/87077193550?pwd=R1ZCUWhrTEZRQzBUSTJkeE9iM3BPZz09

Meeting ID: 870 7719 3550
Password: 269529

Abstract: A knowledge base (KB) is a useful resource for many natural language processing tasks. KBs contain real-world entities and the relations among them, which can help downstream tasks such as question answering. A triplet of two entities and a relation between them is called a relation tuple. Existing KBs such as Freebase, Wikidata, and DBpedia contain a large number of relation tuples. However, these KBs are built manually by crowd workers, which takes much time and effort. The automatic extraction of relation tuples from natural language texts is referred to as relation extraction. In this thesis, we tackle this task using novel deep neural network models.
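
For illustration only (the entity and relation names below are made-up examples, not taken from the thesis), a relation tuple is simply a triplet:

    # A relation tuple: (subject entity, relation, object entity).
    # The values below are illustrative examples only.
    relation_tuple = ('Barack Obama', 'place_of_birth', 'Honolulu')
    subject_entity, relation, object_entity = relation_tuple
    print(f'{subject_entity} --{relation}--> {object_entity}')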

First, we use a pipeline approach for this task, where we assume that the entities have already been identified by an external named entity recognition system. We propose a syntax-focused multi-factor attention model to find the relation between two entities. We use the syntactic distance of words from the entities to determine their importance in establishing the relation between the two given entities. We also use multi-factor attention to focus on multiple pieces of evidence present in a text to support the relation. Our proposed model achieves significant improvements over prior works on widely used relation extraction datasets.
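
The sketch below is a minimal, hypothetical illustration of the idea of multi-factor attention biased by syntactic distance: each word representation is shifted by an embedding of its dependency-tree distance from the entity pair, and several attention "factors" each attend to their own piece of evidence. The dimensions, scoring function, and all names are my own simplifying assumptions, not the exact model proposed in the thesis.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiFactorAttention(nn.Module):
        def __init__(self, hidden_dim, num_factors, max_distance=20):
            super().__init__()
            self.max_distance = max_distance
            # One learned query vector per attention factor.
            self.factor_queries = nn.Parameter(torch.randn(num_factors, hidden_dim))
            # Embedding of the syntactic (dependency-tree) distance of each word
            # from the given entity pair.
            self.distance_embed = nn.Embedding(max_distance + 1, hidden_dim)
            self.output = nn.Linear(num_factors * hidden_dim, hidden_dim)

        def forward(self, word_states, syntactic_distance):
            # word_states: (batch, seq_len, hidden_dim) contextual word vectors
            # syntactic_distance: (batch, seq_len) integer distances from the entities
            dist = self.distance_embed(syntactic_distance.clamp(max=self.max_distance))
            keys = word_states + dist                        # inject syntactic distance
            scores = torch.einsum('fh,bsh->bfs', self.factor_queries, keys)
            weights = F.softmax(scores, dim=-1)              # one distribution per factor
            # Each factor can attend to a different piece of evidence in the sentence.
            factors = torch.einsum('bfs,bsh->bfh', weights, word_states)
            return torch.tanh(self.output(factors.flatten(1)))  # (batch, hidden_dim)

    # Toy usage: the pooled sentence vector would be fed to a relation classifier.
    attention = MultiFactorAttention(hidden_dim=64, num_factors=4)
    words = torch.randn(2, 10, 64)
    distances = torch.randint(0, 8, (2, 10))
    print(attention(words, distances).shape)  # torch.Size([2, 64])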

Second, we tackle the task of joint entity and relation extraction, where entities are not identified beforehand. There may be multiple relation tuples present in a sentence, and these relations may share one or both entities among them. Extracting such relation tuples with full entity names from sentences is a difficult task. We propose two approaches to use encoder-decoder networks for joint extraction of entities and relations. In the first approach, we propose a representation scheme for relation tuples that enables the decoder to generate one token at a time (like machine translation models) and still extract all the tuples present in a sentence, with full entity names of different lengths and with overlapping entities. Next, we propose a pointer network-based decoding approach where an entire tuple is generated at every time step. Our proposed models outperform prior works on widely used relation extraction datasets.
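
To make the first approach concrete, one possible way to linearize a set of relation tuples into a single target sequence is sketched below. The separator tokens ';' and '|' and the component ordering are illustrative choices for this example, not necessarily the exact representation scheme used in the thesis.

    # Encode a set of relation tuples as one token sequence that a decoder can
    # generate word by word, then decode the sequence back into tuples.
    # The separators ';' and '|' below are illustrative choices.

    def tuples_to_target(tuples):
        # tuples: list of (subject entity, relation, object entity) strings
        return ' | '.join(f'{s} ; {r} ; {o}' for s, r, o in tuples)

    def target_to_tuples(target):
        tuples = []
        for chunk in target.split('|'):
            parts = [p.strip() for p in chunk.split(';')]
            if len(parts) == 3 and all(parts):
                tuples.append(tuple(parts))
        return tuples

    # Two tuples that share an entity and have multi-word entity names.
    gold = [('New York City', 'located_in', 'New York'),
            ('New York', 'located_in', 'United States')]
    target = tuples_to_target(gold)
    print(target)
    # New York City ; located_in ; New York | New York ; located_in ; United States
    assert target_to_tuples(target) == gold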

Finally, we extend our work to multi-hop relation extraction. Distantly supervised relation extraction models mostly focus on sentence-level relation extraction, where the two entities (subject and object) of a relation tuple must appear in the same sentence. This assumption is overly strict, and for many relations, we may not find any sentence that contains both entities. To address this problem, we propose multi-hop relation extraction, where the two entities of a relation tuple may appear in two different documents that are connected via some common entities. We can find a chain of entities from the subject entity to the object entity via the common entities, and the relation between the two entities can be established using this entity chain. Following this multi-hop approach, we create a dataset for 2-hop relation extraction, where each chain contains exactly two documents. This dataset covers a larger number of relations than previous publicly available sentence-level or document-level extraction datasets. To solve this task, we propose a hierarchical entity graph convolutional network (HEGCN) model that consists of a two-level hierarchy of graph convolutional networks (GCNs). The first-level GCN of the hierarchy captures the relations among the entity mentions within each document, and the second-level GCN captures the relations among the entity mentions across the documents in a chain. Our proposed HEGCN model improves performance on our 2-hop relation extraction dataset and can be readily extended to N-hop datasets.
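
To illustrate the two-level idea, here is a minimal sketch of a graph convolution stacked at two levels: a first-level GCN over a within-document mention graph, followed by pooling of mentions into entities and a second-level GCN over a cross-document entity graph. The graph construction, the mention-to-entity pooling, and all dimensions are simplifying assumptions for illustration, not the HEGCN model itself.

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        # One graph convolution: H' = ReLU(A_norm @ H @ W)
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, node_states, adj):
            # adj: (num_nodes, num_nodes) adjacency with self-loops, row-normalized
            return torch.relu(adj @ self.linear(node_states))

    class TwoLevelGCN(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.mention_gcn = GCNLayer(dim, dim)   # level 1: mentions within documents
            self.entity_gcn = GCNLayer(dim, dim)    # level 2: entities across the chain

        def forward(self, mention_states, mention_adj, mention_to_entity, entity_adj):
            # mention_states: (num_mentions, dim); mention_adj: (num_mentions, num_mentions)
            # mention_to_entity: (num_entities, num_mentions) 0/1 grouping matrix
            # entity_adj: (num_entities, num_entities)
            mentions = self.mention_gcn(mention_states, mention_adj)
            # Average the mentions of each entity to get entity nodes for level 2.
            counts = mention_to_entity.sum(dim=1, keepdim=True).clamp(min=1)
            entities = (mention_to_entity @ mentions) / counts
            return self.entity_gcn(entities, entity_adj)

    # Toy usage: 5 mentions grouped into 3 entities across a 2-document chain.
    def normalize(a):
        a = a + torch.eye(a.size(0))               # add self-loops
        return a / a.sum(dim=1, keepdim=True)      # row-normalize

    model = TwoLevelGCN(dim=32)
    mention_states = torch.randn(5, 32)
    mention_adj = normalize(torch.randint(0, 2, (5, 5)).float())
    mention_to_entity = torch.tensor([[1., 1., 0., 0., 0.],
                                      [0., 0., 1., 1., 0.],
                                      [0., 0., 0., 0., 1.]])
    entity_adj = normalize(torch.ones(3, 3))
    entity_states = model(mention_states, mention_adj, mention_to_entity, entity_adj)
    print(entity_states.shape)   # torch.Size([3, 32])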