Intelligent & Interactive Systems Talk Series

Monday, January 30, 2017

2:30 PM - 3:30 PM

130 Informatics East

http://vision.soic.indiana.edu/iis-talk-series/

RelSifter: Scoring Triples from Type-like Relations

Prashant Shiralkar, IU SoIC

(Joint work with Mihai Avram, Giovanni Luca Ciampaglia, Filippo Menczer, Alessandro Flammini)

We present RelSifter, a supervised learning approach to the problem of assigning relevance scores to triples expressing type-like relations such as ‘profession’ and ‘nationality.’ To provide additional contextual information about individuals and relations, we supplement the data provided as part of the WSDM 2017 Triple Score contest with Wikidata and DBpedia, two large-scale knowledge graphs (KGs). Our hypothesis is that any type relation, i.e., a specific profession like ‘actor’ or ‘scientist,’ can be described by the set of typical “activities” of people known to have that type relation.

For example, actors are known to star in movies, and scientists are known for their academic affiliations. In a KG, this information is found in a properly defined subset of the second-degree neighbors of the type relation. This form of local information can be used as part of a learning algorithm to predict relevance scores for new, unseen triples. When scoring ‘profession’ and ‘nationality’ triples, our experiments based on this approach achieve an accuracy of 73% and 78%, respectively. These results are roughly equivalent to, or only slightly below, the state of the art prior to the present contest. This suggests that our approach can be effective for evaluating facts, despite the skew in the number of facts per individual mined from KGs.
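To make the second-degree-neighbor idea concrete, below is a minimal Python sketch. It is not the authors' code: the toy KG, all entity and relation names, and the simple feature-overlap score are illustrative stand-ins for the supervised model described above.

from collections import Counter

# Toy KG: entity -> list of (relation, object) edges. Hypothetical data.
kg = {
    "person_a": [("starred_in", "film_1"), ("starred_in", "film_2")],
    "person_b": [("affiliated_with", "univ_1"), ("authored", "paper_1")],
    "person_c": [("starred_in", "film_3")],
    "film_1": [("genre", "drama")],
    "film_2": [("genre", "comedy")],
    "film_3": [("genre", "drama")],
    "univ_1": [("located_in", "city_1")],
    "paper_1": [("field", "physics")],
}

def second_degree_features(entity):
    """Collect relations and relation chains within two hops of an entity."""
    feats = Counter()
    for rel1, obj in kg.get(entity, []):
        feats[rel1] += 1
        for rel2, _ in kg.get(obj, []):
            feats[(rel1, rel2)] += 1
    return feats

def type_profile(entities):
    """Aggregate the typical 'activities' of people known to have a type."""
    total = Counter()
    for e in entities:
        total += second_degree_features(e)
    return total

def relevance_score(entity, profile):
    """Score a candidate triple by feature overlap with the type profile
    (a stand-in for the trained model in the talk)."""
    feats = second_degree_features(entity)
    shared = sum(min(feats[k], profile[k]) for k in feats)
    return shared / max(sum(feats.values()), 1)

actor = type_profile(["person_a", "person_c"])
print(relevance_score("person_b", actor))  # low: no acting-like activity
print(relevance_score("person_a", actor))  # high: typical actor edges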

 ***

Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters

AJ Piergiovanni, IU SoIC

(Joint work with Michael Ryoo, Chenyou Fan)

In this paper, we introduce the concept of temporal attention filters and describe how they can be used for human activity recognition from videos. Many high-level activities are composed of multiple temporal parts (e.g., sub-events) with different duration/speed, and our objective is to make the model explicitly learn such temporal structure using multiple attention filters and benefit from it. Our temporal filters are designed to be fully differentiable, allowing end-to-end training of the temporal filters together with the underlying frame-based or segment-based convolutional neural network architectures. This paper presents an approach for learning a set of optimal static temporal attention filters to be shared across different videos, and extends this approach to dynamically adjust attention filters per test video using recurrent long short-term memory networks (LSTMs). This allows our temporal attention filters to learn latent sub-events specific to each activity. We experimentally confirm that the proposed concept of temporal attention filters benefits activity recognition, and we visualize the learned latent sub-events.
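As a rough illustration of what a differentiable temporal attention filter can look like, here is a minimal NumPy sketch, not the paper's implementation: a bank of N Gaussian kernels, parameterized by a center, stride, and width, pools T per-frame feature vectors into N weighted averages. Because the weights are smooth functions of the parameters, the filter is differentiable and could in principle be trained end to end; gradient code is omitted, and all shapes and names are illustrative assumptions.

import numpy as np

def temporal_attention_filter(features, center, stride, sigma):
    """Pool frame features with a bank of Gaussian attention kernels.

    features: (T, D) array of per-frame CNN features.
    center:   kernel-bank center, in [0, 1] relative to video length.
    stride:   spacing between kernel centers, relative units.
    sigma:    kernel width (standard deviation), in frames.
    Returns an (N, D) array of attention-weighted feature averages.
    """
    T, _ = features.shape
    N = 3  # number of kernels (latent sub-events), fixed for this sketch
    # Place kernel centers symmetrically around `center`, spaced by `stride`.
    offsets = (np.arange(N) - (N - 1) / 2.0) * stride
    mu = (center + offsets) * (T - 1)          # centers in frame units
    t = np.arange(T)
    # Gaussian weights, one row per kernel, normalized over the time axis.
    w = np.exp(-0.5 * ((t[None, :] - mu[:, None]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ features                        # (N, T) x (T, D) -> (N, D)

# Usage: 40 frames of 8-dim features pooled into 3 sub-event descriptors.
video = np.random.randn(40, 8)
pooled = temporal_attention_filter(video, center=0.5, stride=0.25, sigma=3.0)
print(pooled.shape)  # (3, 8)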