How Ontologies Fit Into AI

  • Roy, A., Park, Y.J., and Pan, S. (2017, Sept. 21). Domain-Specific Word Embeddings from Sparse Cybersecurity Texts, arXiv:1709.07470v1 [cs.CL]. pdf, accessed Apr. 25, 2018 by A.J.M.
  • Abstract: Classic word embedding methods such as Word2Vec and GloVe work well when they are given a large text corpus. When the input texts are sparse as in many specialized domains (e.g., cybersecurity), these methods often fail to produce high-quality vectors. In this paper, we describe a novel method to train domain-specific word embeddings from sparse texts. In addition to domain texts, our method also leverages diverse types of domain knowledge such as domain vocabulary and semantic relations.
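
  One simple way to picture how domain knowledge can help with sparse text (a rough illustration only, not necessarily the authors' exact algorithm) is to augment the training corpus with pseudo-sentences built from known domain relations, so that related terms come to share training contexts:

```python
# Rough illustration (not the paper's code): augment a sparse domain corpus with
# pseudo-sentences built from known domain relations before training embeddings.
from gensim.models import Word2Vec

corpus = [
    ["the", "malware", "exploits", "a", "buffer", "overflow"],
    ["the", "trojan", "opens", "a", "backdoor"],
]

# Hypothetical domain knowledge: (term, relation, term) triples from a vocabulary/ontology.
domain_relations = [
    ("malware", "is_a", "software"),
    ("trojan", "is_a", "malware"),
    ("backdoor", "enables", "remote_access"),
]

# Each triple becomes a short pseudo-sentence appended to the corpus.
augmented = corpus + [[h, r, t] for (h, r, t) in domain_relations]

model = Word2Vec(augmented, vector_size=50, window=2, min_count=1, sg=1, epochs=100)
print(model.wv.most_similar("malware", topn=3))
```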

  • Lam, M. (2018, Mar. 31). Word2Bits – Quantized Word Vectors, arXiv:1803.05651v3 [cs.CL]. pdf, accessed Apr. 25, 2018 by A.J.M.
  • Abstract: Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.
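
  To make the quantization idea concrete, here is a minimal sketch of a 1-bit quantization function of the kind that could be dropped into Word2Vec training (the function name and the scale constant are my own illustration, not Lam's code):

```python
import numpy as np

def quantize_1bit(w, scale=1.0 / 3.0):
    """Map every parameter to +scale or -scale (1 bit per parameter).
    The scale constant here is just for illustration. In quantized training the
    quantized values are used in the forward pass while full-precision 'shadow'
    weights are kept for the gradient updates (straight-through style)."""
    return np.where(w >= 0, scale, -scale)

w_full = np.random.uniform(-0.5, 0.5, size=8)   # full-precision weights
w_quant = quantize_1bit(w_full)                 # 1-bit version used / stored
print(np.round(w_full, 3))
print(w_quant)
```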

  • Alshargi, F., Shekarpour, S., Soru, T., Sheth, A., and Quasthoff, U. (2018, Mar. 12). Concept2vec: Metrics for Evaluating Quality of Embeddings for Ontological Concepts, arXiv:1803.04488v1 [cs.CL]. pdf, accessed Apr. 25, 2018 by A.J.M.
  • Abstract: Although there is an emerging trend towards generating embeddings for primarily unstructured data, and recently for structured data, there is not yet any systematic suite for measuring the quality of embeddings. This deficiency is further sensed with respect to embeddings generated for structured data because there are no concrete evaluation metrics measuring the quality of encoded structure as well as semantic patterns in the embedding space. In this paper, we introduce a framework containing three distinct tasks concerned with the individual aspects of ontological concepts: (i) the categorization aspect, (ii) the hierarchical aspect, and (iii) the relational aspect. Then, in the scope of each task, a number of intrinsic metrics are proposed for evaluating the quality of the embeddings. Furthermore, w.r.t. this framework multiple experimental studies were run to compare the quality of the available embedding models. Employing this framework in future research can reduce misjudgment and provide greater insight about quality comparisons of embeddings for ontological concepts.
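
  As one concrete reading of the "categorization aspect," a simple intrinsic check (my illustration, not necessarily one of the paper's exact metrics) is the cosine similarity between a concept's embedding and the centroid of its instances' embeddings:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def categorization_score(concept_vec, instance_vecs):
    """Higher score = the concept's vector sits closer to the centroid of the
    vectors of its instances (a toy stand-in for the 'categorization aspect')."""
    centroid = np.mean(instance_vecs, axis=0)
    return cosine(concept_vec, centroid)

rng = np.random.default_rng(0)
concept = rng.normal(size=50)                          # toy concept embedding
instances = concept + 0.1 * rng.normal(size=(10, 50))  # toy instance embeddings
print(round(categorization_score(concept, instances), 3))
```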

  • Shi, B.X. and Weninger, T. (2018, Feb. 23). Visualizing the Flow of Discourse with a Concept Ontology, arXiv:1802.08614v1 [cs.CL]. pdf, accessed Apr. 25, 2018 by A.J.M.
  • Abstract: Understanding and visualizing human discourse has long been a challenging task. Although recent work on argument mining has shown success in classifying the role of various sentences, the task of recognizing concepts and understanding the ways in which they are discussed remains challenging. Given an email thread or a transcript of a group discussion, our task is to extract the relevant concepts and understand how they are referenced and re-referenced throughout the discussion. In the present work, we present a preliminary approach for extracting and visualizing group discourse by adapting Wikipedia’s category hierarchy to be an external concept ontology. From a user study, we found that our method achieved better results than 4 strong alternative approaches, and we illustrate our visualization method based on the extracted discourse flows.

  • Smaili, F.Z., Gao, X., and Hoehndorf, R. (2018, Jan. 31). Onto2Vec: joint vector-based representation of biological entities and their ontology-based annotations, arXiv:1802.00864v1 [q-bio.QM]. pdf, accessed Apr. 25, 2018 by A.J.M.
  • Abstract:
    Motivation: Biological knowledge is widely represented in the form of ontology-based annotations: ontologies describe the phenomena assumed to exist within a domain, and the annotations associate a (kind of) biological entity with a set of phenomena within the domain. The structure and information contained in ontologies and their annotations makes them valuable for developing machine learning, data analysis and knowledge extraction algorithms; notably, semantic similarity is widely used to identify relations between biological entities, and ontology-based annotations are frequently used as features in machine learning applications. …
    Results: We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity-based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering. …
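
  The core move, as I read it, is to treat ontology axioms and annotation statements as a small text corpus and learn embeddings over it. A toy version with gensim (the identifiers and axioms below are made up for illustration) might look like this:

```python
from gensim.models import Word2Vec

# Toy "corpus": ontology axioms plus annotation statements, one token list per statement.
# All identifiers and axioms below are made up for illustration.
statements = [
    ["GO:0001234", "subClassOf", "GO:0005678"],
    ["ProteinA", "hasAnnotation", "GO:0001234"],
    ["ProteinB", "hasAnnotation", "GO:0005678"],
    ["ProteinA", "interactsWith", "ProteinB"],
]

model = Word2Vec(statements, vector_size=32, window=5, min_count=1, sg=1, epochs=200)

# The learned entity vectors can then feed similarity-based interaction prediction,
# supervised classification, or clustering, as described in the Results above.
print(model.wv.similarity("ProteinA", "ProteinB"))
```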

     
     

    Live free or die, my friend –

    AJ Maren

    Live free or die: Death is not the worst of evils.
    Attr. to Gen. John Stark, American Revolutionary War

     
     

    Most Crucial To-Reads (Journal and arXiv)

     

    • Chen, X.L., Li, L.-J., Li, F.-F., and Gupta, A. (2018, Mar. 29). Iterative visual reasoning beyond convolutions, arXiv:1803.11189 [cs.CV]. online access, accessed April 1, 2018 by AJM.

     

    Abstract: We present a novel framework for iterative visual reasoning. Our framework goes beyond current recognition systems that lack the capability to reason beyond a stack of convolutions. The framework consists of two core modules: a local module that uses spatial memory to store previous beliefs with parallel updates; and a global graph-reasoning module. Our graph module has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships between them; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Both the local module and the global module roll-out iteratively and cross-feed predictions to each other to refine estimates. The final predictions are made by combining the best of both modules with an attention mechanism. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute improvement on ADE measured by per-class average precision. Analysis also shows that the framework is resilient to missing regions for reasoning.
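
    To picture the final combination step, here is a tiny sketch of merging per-class predictions from a "local" and a "global" module with a per-region attention over the two modules (shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def combine_with_attention(local_logits, global_logits, attn_scores):
    """Weight each module's per-region, per-class predictions by a per-region
    attention over the two modules, then sum.  attn_scores: (regions, 2)."""
    attn = softmax(attn_scores, axis=-1)                        # (regions, 2)
    stacked = np.stack([local_logits, global_logits], axis=1)   # (regions, 2, classes)
    return (attn[..., None] * stacked).sum(axis=1)              # (regions, classes)

regions, classes = 3, 5
rng = np.random.default_rng(1)
final = combine_with_attention(rng.normal(size=(regions, classes)),
                               rng.normal(size=(regions, classes)),
                               rng.normal(size=(regions, 2)))
print(final.shape)   # (3, 5)
```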

     

    Academic Articles and Books on Representations for AI

    • Brooks, R.A., Greiner, R., and Binford, T.O. (1979). The ACRONYM model-based vision system. IJCAI’79 Proc. of the 6th Int’l Joint Conference on Artificial Intelligence – Volume 1 (Tokyo, Japan: August 20 – 23, 1979), 105-113. Publ: Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
    • Lowe, D. (1984). Perceptual Organization and Visual Recognition. Doctoral dissertation, Stanford University. pdf, accessed April 18, 2018 by AJM.
    • Marr D. (1982). Vision (San Francisco: W.H. Freeman).
    • McCarthy, J., et al. (1980, May). Final Report: Basic Research in Artificial Intelligence and Foundations of Programming. Stanford Artificial Intelligence Laboratory, Memo AIM 337; Computer Science Dept., Report No. STAN-CS-80-808. pdf, accessed April 18, 2018 by AJM.
    • McClamrock, R. (1991, May). Marr’s Three Levels: A Re-evaluation. Minds and Machines, 1 (2), 185–196. online access, accessed April 1, 2018 by AJM.
    • Newell, A. (1980, Aug 19). The knowledge level. Presidential Address, American Association for Artificial Intelligence. AAAI80, Stanford University. Later published in Artificial Intelligence and AI Magazine (1981, July). online access, accessed April 1, 2018 by AJM.
    • Warren, W.H. (2012). Does this computational theory solve the right problem? Marr, Gibson, and the goal of vision. Perception, 41(9): 1053–1060. doi: 10.1068/p7327. online access, accessed April 1, 2018 by AJM.

     

    Useful Blogs

    • AI Business (2016, November 14). Dichotomy of Intelligence – a Thorny Journey Towards Human-Level Intelligence. blogpost, accessed April 18, 2018 by AJM.
    • Bergman, M. (2018, Feb. 21) Desiderata for Knowledge Graphs. online access, accessed April 18, 2018 by AJM.
    • Deshpande, A. (2016, August 24). The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3). Post 3, accessed April 18, 2018 by AJM.
    • Deshpande, A. (2016, July 29). A Beginner’s Guide To Understanding Convolutional Neural Networks Part 2. Post 1, accessed April 18, 2018 by AJM.
    • Matthen, M. (2014, July 28). Thomas Natsoulas: Consciousness and Perceptual Experience: An Ecological and Phenomenological Approach – A Review of the Book. Book Review Online, accessed April 18, 2018 by AJM. AJM’s Note: Look at this review for suggestions of researchers who are NOT Natsoulas, who have made very valuable contributions to computer vision and particularly to the perceptual underpinnings of human vision that carry over to good computer vision systems.
    • Singhal, A. (2012, May 16). Introducing the Knowledge Graph: things, not strings. Google. online access, accessed April 1, 2018 by AJM.

     
     

     
