Category: Artificial Intelligence

Future Directions in AI: Fundamentals (Part 1) – New YouTube Vid

Are you an AI expert, or are you planning to be? There are three fundamental challenges that will underlie the major AI evolutions over the next decade. These are the three areas where you NEED to understand the fundamentals – before AI moves so fast that you’ll never catch up. Let them guide your deep study for the year ahead. Check them out in this new YouTube post. Live free or die, my friend – AJ Maren…

Read More

Interpreting Karl Friston (Round Deux)

He might be getting a Nobel Prize some day. But – no one can understand him. You don’t believe me? Have a quick glance at Scott Alexander’s article, “God Help Us, Let’s Try To Understand Friston On Free Energy”. We’re referring, of course, to Karl Friston. I’ve spent the past three-and-a-half years studying Friston’s approach to free energy, which he treats as the guiding principle in the brain. He has extended the classic variational Bayes treatment (frontier material in machine learning)…
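For orientation, this is the standard variational-Bayes identity that treatment builds on – written here in generic notation, not necessarily Friston’s own neuronal formulation:

```latex
% Variational free energy (negative ELBO), standard variational-Bayes form.
% q(z): approximate posterior; p(x, z): generative model. Notation is
% generic, not Friston's specific formulation.
F(q) \;=\; \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
      \;=\; D_{\mathrm{KL}}\!\big(q(z)\,\|\,p(z \mid x)\big) \;-\; \ln p(x)
```

Since the KL term is non-negative, F bounds the negative log evidence from above; driving F down both improves the posterior approximation and bounds “surprise,” which is the hook Friston uses.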

Read More

Generative vs. Discriminative – Where It All Began

Working Through Salakhutdinov and Hinton’s “An Efficient Learning Procedure for Deep Boltzmann Machines”: We can accomplish a lot using multiple layers trained with backpropagation. However (as we all know), there are limits to how many layers we can train at once if we’re relying strictly on backpropagation (or any other gradient-descent learning rule). This is what stalled out the neural networks community from the mid-1990s to the mid-2000s. The breakthrough came from Hinton and his group, with a…
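To make that breakthrough concrete: the layer-wise pretraining rests on training restricted Boltzmann machines with contrastive divergence rather than backpropagated gradients. Below is a minimal NumPy sketch of one CD-1 update for a binary RBM; the function name and hyperparameters are illustrative assumptions, not taken from the Salakhutdinov–Hinton paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 step for a binary RBM.

    v0: (batch, n_vis) visible data; W: (n_vis, n_hid) weights;
    b: visible bias; c: hidden bias. Returns updated parameters.
    """
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # Approximate log-likelihood gradient: positive minus negative statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 4 hidden units, a random binary batch.
W = 0.01 * rng.standard_normal((6, 4))
b = np.zeros(6)
c = np.zeros(4)
v0 = (rng.random((8, 6)) < 0.5).astype(float)
W, b, c = cd1_update(v0, W, b, c)
```

No gradient has to flow through more than one layer here, which is exactly how this sidesteps the depth limits of pure backpropagation.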

Read More

How Ontologies Fit Into AI

Roy, A., Park, Y.J., and Pan, S. (2017, Sept. 21). Domain-Specific Word Embeddings from Sparse Cybersecurity Texts. arXiv:1709.07470v1 [cs.CL]. PDF, accessed Apr. 25, 2018 by A.J.M. Abstract: Classic word embedding methods such as Word2Vec and GloVe work well when they are given a large text corpus. When the input texts are sparse, as in many specialized domains (e.g., cybersecurity), these methods often fail to produce high-quality vectors. In this paper, we describe a novel method to train domain-specific word embeddings from…
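The paper’s own sparse-text method isn’t reproduced here, but for contrast, this is the kind of classic Word2Vec baseline the abstract says breaks down on small corpora – a minimal gensim sketch, where the toy corpus and parameter choices are illustrative assumptions:

```python
# Baseline Word2Vec training with gensim -- the classic embedding approach
# the abstract says degrades on sparse, domain-specific corpora. This is
# NOT the paper's method; corpus and parameters are illustrative only.
from gensim.models import Word2Vec

# Tiny stand-in corpus: a list of tokenized "documents".
corpus = [
    ["malware", "exploits", "buffer", "overflow", "vulnerability"],
    ["phishing", "email", "credential", "theft"],
    ["buffer", "overflow", "attack", "remote", "code", "execution"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # embedding dimensionality
    window=3,         # context window
    min_count=1,      # keep every token (the corpus is sparse)
    sg=1,             # skip-gram, usually better for rare words
    epochs=20,
)

# With this little text, the neighbors will be noisy -- the paper's point.
print(model.wv.most_similar("overflow", topn=3))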

Read More

Ontologies, Knowledge Graphs, and AI: Getting from “Here” to “There” (Part 2)

A Principled Approach to AI: Representations and Transitions. In the last post, on “Moving Between Representation Levels: The Key to an AI System (Part 1),” I re-introduced one of the most important and fundamental AI topics: how we can effectively use multiple representation levels. If we’re going to build (or gauge the properties of) an AI system, we need a framework. The notion of representations, and of moving between representation levels, is as fundamental as we can get. In…

Read More

Moving Between Representation Levels – the Key to Making an AI System Work (Part 1)

Representation Levels: The Key to Understanding AI. “No computation without representation.” – Jerry Fodor (1975), The Language of Thought, p. 34 (online access). One of the key notions underlying artificial intelligence (AI) systems is not only that of knowledge representation, but that a good AI system will successively move disparate pieces of low-level, or signal-level, information up the abstraction ladder. For example, an image understanding system will have a low-level component that extracts edges and regions from the image (or…
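As a concrete illustration of that lowest rung – raw pixels up to signal-level features – here is a minimal edge-extraction sketch using a Sobel operator. The post doesn’t prescribe this particular operator, so treat it as an assumed example:

```python
# Lowest rung of the abstraction ladder: raw pixels -> signal-level edge map.
# Illustrative only; the post does not prescribe this particular operator.
import numpy as np
from scipy import ndimage

# Toy "image": a bright square on a dark background.
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0

# Sobel gradients in x and y, combined into an edge-magnitude map.
gx = ndimage.sobel(image, axis=1)
gy = ndimage.sobel(image, axis=0)
edges = np.hypot(gx, gy)

# A later, more abstract stage would group these edges into regions, then
# objects -- each step a move up a representation level.
print("edge pixels:", int((edges > edges.max() * 0.5).sum()))
```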

Read More

Artificial General Intelligence: Getting There from Here

What We Need to Create Artificial General Intelligence (AGI): A brief recap: We know that we want to have neural networks (including deep learning) do something besides being sausage factories. We know that the key missing step – a first principles step – to making this happen is to give the network something to do when it is not responding to inputs. Also, we’ve introduced something that the neural network CAN do; it can do free energy minimization with…

Read More

The Big, Bad, Scary Free Energy Equation (and New Experimental Results)

The 2-D Cluster Variation Method Free Energy Equation – in All Its Scary Glory: You know, my dear, that we’ve been leading up to this moment for a while now. I’ve hinted. I’ve teased and been coy. But now, it’s time to be full frontal. We’re going to look at a new form of a free energy equation: a cluster variation method (CVM) equation. It deals not only with how many units are in state A or state B,…
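Before the full CVM form (which the post itself unveils), a warm-up may help: the simple two-state free energy F = U − TS, which covers only the “how many units are in state A or B” part that the CVM equation extends with local-configuration terms. This sketch is a simplified stand-in, not the post’s equation:

```python
# Warm-up for the post's equation: F(x) = U - T*S for a system whose units
# are in state A with fraction x (state B with fraction 1 - x). The 2-D CVM
# equation adds local-configuration terms on top of this simple form.
import numpy as np

def free_energy(x, eps=0.0, T=1.0):
    """F(x) = eps*x - T*S(x), with S the per-unit mixing entropy."""
    x = np.clip(x, 1e-12, 1 - 1e-12)           # avoid log(0)
    S = -(x * np.log(x) + (1 - x) * np.log(1 - x))
    U = eps * x                                  # simple per-unit enthalpy
    return U - T * S

xs = np.linspace(0.01, 0.99, 99)
F = free_energy(xs)
print("minimum near x =", xs[np.argmin(F)])    # ~0.5 when eps = 0
```

With eps = 0, entropy alone pulls the minimum to an even A/B split; the interesting behavior in the post comes from the interaction terms the CVM adds.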

Read More

A “First Principles” Approach to Artificial General Intelligence

What We Need to Take the Next Tiny, Incremental Little Step: The “next big thing” is likely to be the next small thing – a tiny step, an incremental shift in perspective. However, a perspective shift is all that we need in order to make some real advances towards artificial general intelligence (AGI). In the second chapter of the ongoing book, I share the following figure (and sorry, the chapter itself is not released yet): Now, we’ve actually been…

Read More

A “Hidden Layer” Guiding Principle – What We Minimally Need

Putting It Into Practice: If we’re going to move our neural network-type architectures into a new, more powerful realm of AI capability, we need to bust out of the “sausage-making” mentality that has governed them thus far, as we discussed last week. To do this, we need to give our hidden layer(s) something to do besides respond to input stimuli. It’s very reasonable that this “something” should be free energy minimization, because that’s one of the strongest principles in the…

Read More