Blog

A “First Principles” Approach to General AI

What We Need to Take the Next Tiny, Incremental Little Step: The “next big thing” is likely to be the next small thing – a tiny step, an incremental shift in perspective. However, a perspective shift is all that we need in order to make some real advances towards general artificial intelligence (GAI). In the second chapter of the ongoing book, I share the following figure (and sorry, the chapter itself is not released yet): Now, we’ve actually been…

Read More

A “Hidden Layer” Guiding Principle – What We Minimally Need

Putting It Into Practice: If we’re going to move our neural network-type architectures into a new, more powerful realm of AI capability, we need to bust out of the “sausage-making” mentality that has governed them thus far, as we discussed last week. To do this, we need to give our hidden layer(s) something to do besides respond to input stimulus. A natural candidate for this “something” is free energy minimization, because that’s one of the strongest principles in the…

Read More

How Getting to a Free Energy Bottom Helps Us Get to the Top

Free Energy Minimization Gives an AI Engine Something Useful to Do: Cutting to the chase: we need free energy minimization in a computational engine, or AI system, because it gives the system something to do besides being a sausage-making machine, as I described in yesterday’s blog on What’s Next for AI. Right now, deep learning systems are constrained to be simple input/output devices. We force-feed them with stimulus at one end, and they poop out (excuse me, “pop out”)…
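For orientation only: the post leans on the standard statistical-mechanics notion of free energy. As a textbook-form sketch (not an equation quoted from the post itself, which may work with a different, e.g. variational, form):

```latex
% Standard (textbook) statistical-mechanics free energy, included for orientation;
% not reproduced from the post. F: Helmholtz free energy, U = <E>: average energy,
% T: temperature, S: entropy, Z: partition function over states i with energies E_i,
% k_B: Boltzmann's constant.
F \;=\; U - TS \;=\; -\,k_B T \,\ln Z,
\qquad
Z \;=\; \sum_{i} e^{-E_i/(k_B T)}
```

Minimizing F is what gives the system an internal objective beyond mapping inputs to outputs, which is the point the post develops.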

Read More

What’s Next for AI (Beyond Deep Learning)

The Next Big Step: We know there’s got to be something. Right now, deep learning systems are like sausage-making machines. You put raw materials in at one end, turn the crank, and at the other end, you get output – nicely wrapped-up sausages. Wherever you are in your studies of machine learning / deep learning / neural networks / AI, you know there’s got to be more. If we’re going to make anything like general artificial intelligence (GAI), we…

Read More

Statistical Mechanics, the Future of AI, and Personal Stories

Statistical Mechanics and Personal Stories (On the Same Page!) – Yikes! It’s Thursday morning already. I haven’t written to you for three weeks. That’s long enough that I have to pause and search my memory for my username to get into the website. Thanksgiving was lovely. The Thursday after that was grading, all day – and for several days before and after. By now, I (and most of you) have had a few days of recovery, from what has been…

Read More

Third Stage Boost – Part 2: Implications of Neuromorphic Computing

Neuromorphic Computing: Statistical Mechanics & Criticality – Last week, I suggested that we were on the verge of something new, and referenced an article by von Bubnoff: A brain built from atomic switches [that] can learn, together with the follow-on article Brain Built on Switches. The key innovation described in this article was a silver mesh, as shown in the following figure. This mesh is a “network of microscopically thin intersecting silver wires,” grown via a combination of electrochemical and…

Read More

Third Stage Boost: Statistical Mechanics and Neuromorphic Computing – Part 1

Next-Generation Neural Network Architectures: More Brain-Like – Three generations of artificial intelligence. The third generation is emerging … right about … now. That’s what is shown in this figure, presented in log-time scale. [Figure: Brief history of AI in log-time scale.] The first generation of AI, symbolic AI, began conceptually around 1954, and lasted until 1986; 32 years. On the log-time scale shown in the figure above, this entire era takes place under the first curve; the black bell-shaped curve on…

Read More

Machine Learning: Multistage Boost Process

Three Stages to Orbital Altitude in Machine Learning: Several years ago, Regina Dugan (then Director of DARPA) gave a talk in which she showed a clip of epic NASA launch fails. Not just one, but many fails. The theme was that we had to risk failure in order to succeed with innovation. This YouTube vid of rocket launch failures isn’t the exact clip that she showed (the “action” doesn’t kick in for about a minute), but it’s pretty close. For…

Read More

Neg-Log-Sum-Exponent-Neg-Energy – That’s the Easy Part!

The Surprising (Hidden) “Gotcha” in This Energy Equation: A couple of days ago, I was doing one of my regular weekly online “Synch” sessions with my Deep Learning students. In a sort of “Beware, here there be dragons!” moment, I showed them this energy equation from the Hinton et al. (2012) review paper on acoustic speech modeling (a neg-log-sum-exponent-neg-energy expression). One of my students pointed out, “That equation looks kind of simple.” Well, he’s right. And I kind of bungled the answer,…
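The title spells out the general shape of that kind of equation. As a sketch only (the exact expression from the review paper is not reproduced here), the free energy of a visible vector v in an energy-based model such as a restricted Boltzmann machine has this neg-log-sum-exponent-neg-energy form:

```latex
% Generic neg-log-sum-exponent-neg-energy form (a sketch, not the exact equation
% quoted from the Hinton et al. (2012) review): the free energy of a visible
% vector v sums exp(-E) over hidden configurations h and takes the negative log.
F(\mathbf{v}) \;=\; -\log \sum_{\mathbf{h}} e^{-E(\mathbf{v},\,\mathbf{h})}
```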

Read More

Neural Network Architectures: Determining the Number of Hidden Nodes

Figuring Out the Number of Hidden Nodes: Then and Now – One of the most demanding questions in developing neural networks (of any size or complexity) is determining the architecture: number of layers, nodes per layer, and other factors. This was an important question in the late 1980s and early 1990s, when neural networks first emerged. Deciding on the network architecture details is even more challenging today. In this post, we’re going to look at some strategies for deciding on the number…
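The post’s own strategies are truncated above. As a hedged illustration of one generic approach (not the author’s specific method), a simple validation sweep over candidate hidden-layer sizes looks roughly like this; a minimal sketch, assuming scikit-learn is available and using a synthetic toy dataset:

```python
# A minimal sketch of one generic strategy (not necessarily the one in the post):
# sweep candidate hidden-layer sizes and compare held-out validation accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary-classification data, just to make the sweep runnable.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train one small network per candidate size and report validation accuracy.
for n_hidden in (2, 4, 8, 16, 32):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    score = clf.score(X_val, y_val)
    print(f"{n_hidden:>3} hidden nodes: validation accuracy = {score:.3f}")
```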

Read More