Low-Dimensional Embeddings of Logic
Tim Rocktäschel, Matko Bošnjak, Sameer Singh, Sebastian Riedel

Citation
Tim Rocktäschel, Matko Bošnjak, Sameer Singh, Sebastian Riedel. "Low-Dimensional Embeddings of Logic". 2014 Workshop on Semantic Parsing (SP14), ACL, 26 June 2014.

Abstract
Many machine reading approaches, from shallow information extraction to deep semantic parsing, map natural language to symbolic representations of meaning. Representations such as first-order logic capture the richness of natural language and support complex reasoning, but often fail in practice due to their reliance on logical background knowledge and the difficulty of scaling up inference. In contrast, low-dimensional embeddings (i.e. distributional representations) are efficient and enable generalization, but it is unclear how reasoning with embeddings could support the full power of symbolic representations such as first-order logic. In this proof-of-concept paper we address this by learning embeddings that simulate the behavior of first-order logic.
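To give a flavor of what "low-dimensional embeddings" means here, the following is a minimal illustrative sketch, not the paper's actual model: each predicate and each entity pair is assigned a dense vector, and the plausibility of a ground atom is scored by a sigmoid of their dot product. All names, dimensions, and the scoring function below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # a hypothetical low embedding dimension

# Toy vocabulary: every predicate and every entity pair gets a
# dense vector (randomly initialized here; in practice these would
# be learned from data).
predicates = {p: rng.normal(size=dim) for p in ["professorAt", "worksFor"]}
entity_pairs = {e: rng.normal(size=dim) for e in [("smith", "cambridge")]}

def score(pred, pair):
    """Plausibility of the ground atom pred(pair), modeled as the
    sigmoid of the dot product of the two embeddings."""
    z = predicates[pred] @ entity_pairs[pair]
    return 1.0 / (1.0 + np.exp(-z))

s = score("professorAt", ("smith", "cambridge"))
```

Under such a model, training adjusts the vectors so that observed facts (and facts entailed by logical rules) receive high scores, which is one way embeddings can be made to approximate logical behavior.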

Citation formats  
  • HTML
    Tim Rocktäschel, Matko Bošnjak, Sameer Singh, Sebastian
    Riedel. <a
    href="http://www.terraswarm.org/pubs/315.html"
    >Low-Dimensional Embeddings of Logic</a>, 2014
    Workshop on Semantic Parsing (SP14), ACL, 26 June 2014.
  • Plain text
    Tim Rocktäschel, Matko Bošnjak, Sameer Singh, Sebastian
    Riedel. "Low-Dimensional Embeddings of Logic".
    2014 Workshop on Semantic Parsing (SP14), ACL, 26 June
    2014.
  • BibTeX
    @inproceedings{RocktaschelBosnjakSinghRiedel14_LowDimensionalEmbeddingsOfLogic,
        author = {Tim Rockt{\"a}schel and Matko Bo{\v s}njak and Sameer Singh
                  and Sebastian Riedel},
        title = {Low-Dimensional Embeddings of Logic},
        booktitle = {2014 Workshop on Semantic Parsing (SP14)},
        organization = {ACL},
        day = {26},
        month = {June},
        year = {2014},
        abstract = {Many machine reading approaches, from shallow
                  information extraction to deep semantic parsing,
                  map natural language to symbolic representations
                  of meaning. Representations such as first-order
                  logic capture the richness of natural language and
                  support complex reasoning, but often fail in
                  practice due to their reliance on logical
                  background knowledge and the difficulty of scaling
                  up inference. In contrast, low-dimensional
                  embeddings (i.e. distributional representations)
                  are efficient and enable generalization, but it is
                  unclear how reasoning with embeddings could
                  support the full power of symbolic representations
                  such as first-order logic. In this proof-of-concept
                  paper we address this by learning embeddings that
                  simulate the behavior of first-order logic.},
        URL = {http://terraswarm.org/pubs/315.html}
    }
    

Posted by Barb Hoversten on 19 May 2014.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.