Our primary focus is linguistically informed Neural Natural Language Processing (NNLP). Towards that end, we are working on various funded projects, as described below.
Text understanding can fundamentally be viewed as a process of composition: the meanings of smaller units are composed to compute the meaning of larger units and eventually of sentences and documents. Our hypothesis is that optimal generalization in deep learning requires that more regular processes of composition be learned as composition functions, whereas units that are the output of less regular processes be learned as static embeddings. We investigate novel representation learning algorithms and architectures to test this hypothesis. The envisioned goal of the project is a new, robust, and powerful text representation that captures all aspects of form and meaning that NLP needs for successful processing of text.
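The two routes in this hypothesis can be illustrated with a minimal sketch. All names, dimensions, and the linear composition function below are illustrative assumptions, not the project's actual architecture; in practice the composition function would be a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Route 1: a static embedding, used for units produced by less regular
# (non-compositional) processes, e.g. the idiom "kick the bucket".
static_embeddings = {
    "kick the bucket": rng.standard_normal(DIM),
}

# Route 2: a learned composition function, used for regular processes.
# Here it is a single linear map over the concatenated child vectors.
W = rng.standard_normal((DIM, 2 * DIM))

def compose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compute a phrase vector from the vectors of its two children."""
    return np.tanh(W @ np.concatenate([left, right]))

# A regular phrase like "red car" is composed from its parts ...
red, car = rng.standard_normal(DIM), rng.standard_normal(DIM)
red_car = compose(red, car)

# ... while an irregular unit is looked up whole, as a single vector.
idiom = static_embeddings["kick the bucket"]
```

The point of the sketch is only the division of labor: the same downstream model receives a vector of the same dimensionality either way, but the vector is produced compositionally for regular units and retrieved as a stored embedding for irregular ones.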
A common approach to representing text as input for deep learning models is to use heuristically induced word pieces. Such a representation is easier to process than characters but does not incur the high cost of large word vocabularies. In this project, we investigate alternatives to the currently used heuristics that represent the semantics of text more naturally.
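A standard example of such a heuristic is byte-pair encoding (BPE), which induces word pieces by repeatedly merging the most frequent adjacent symbol pair; the toy corpus and merge count below are illustrative, and the sketch omits details of production tokenizers.

```python
from collections import Counter

def learn_bpe(words: dict, num_merges: int) -> list:
    """Learn BPE merges from a {word: frequency} corpus."""
    # Represent each word as a tuple of symbols (initially characters).
    vocab = {tuple(w): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite the vocabulary with the chosen pair fused.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

corpus = {"low": 5, "lower": 2, "lowest": 3}
merges = learn_bpe(corpus, num_merges=3)
```

The merges are purely frequency-driven: nothing ensures that the induced pieces align with morphemes or other meaningful units, which is exactly the limitation that motivates looking for more semantically natural alternatives.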
Argument validation is the task of classifying a given argument as valid or invalid based on its linguistic form, the larger document context, world knowledge, and other factors. This project aims to combine representation learning (both static and contextualized embeddings) with relational machine learning to solve this task.
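One simple way such a combination could look is sketched below: a text representation (here, averaged static word vectors) is concatenated with a relational feature (here, a single indicator from a toy knowledge base) and scored by a linear classifier. The embeddings, the knowledge triples, and the untrained weights are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 4

# Toy static word embeddings; a real system would use pretrained
# static or contextualized vectors.
embeddings = {w: rng.standard_normal(DIM)
              for w in "all birds can fly penguins are".split()}

# Toy relational knowledge: (subject, relation, object) triples
# mapped to a support value (0.0 = contradicted by the knowledge base).
knowledge = {("penguins", "can", "fly"): 0.0}

def features(argument: str, triple) -> np.ndarray:
    """Concatenate an averaged text embedding with a relational feature."""
    text_vec = np.mean([embeddings[w] for w in argument.split()], axis=0)
    rel = knowledge.get(triple, 1.0)  # 1.0 if not contradicted
    return np.concatenate([text_vec, [rel]])

def validity_score(x: np.ndarray, w: np.ndarray) -> float:
    """Sigmoid score in (0, 1); w would be learned from labeled arguments."""
    return 1.0 / (1.0 + np.exp(-w @ x))

x = features("all birds can fly", ("penguins", "can", "fly"))
w = rng.standard_normal(x.shape[0])  # random, i.e. untrained, weights
score = validity_score(x, w)
```

The design point is that the classifier sees both signals in one feature vector, so the learned weights can trade off the argument's surface form against what the knowledge base says.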
Based on an examination of 4,000 years of literary history, this program’s goal is to synthesize the theory and practice of European traditions with those of the East Asian and South Asian cultural spheres as well as the Jewish and Arab worlds. A particular focus will be on digital humanities methods.