Category: Entropy

Wrapping Our Heads Around Entropy

Entropy – the Most Powerful Force in the ‘Verse: Actually, that’s not quite true. The most powerful force in the ‘Verse is free energy minimization. However, entropy is half of the free energy equation, and it’s usually the more complex half. So, if we understand entropy, then we can understand free energy minimization. If we understand free energy minimization, then we understand all the energy-based machine learning models, including the (restricted) Boltzmann machine and one of its most commonly-used…

Read More
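The excerpt’s central claim – that once you understand entropy, free energy minimization follows – can be made concrete with a minimal two-state sketch. This is not the blog’s own code; the per-unit energy `eps`, the temperature `T`, and all numeric values are illustrative assumptions (with k_B = 1):

```python
import numpy as np

def free_energy(x, eps=1.0, T=1.0):
    # Per-unit free energy F = E - T*S for a two-state (A/B) system.
    # x is the fraction of units in state B; eps is the energy of state B.
    entropy = -(x * np.log(x) + (1 - x) * np.log(1 - x))  # Shannon entropy
    return eps * x - T * entropy

# Scan x and minimize the free energy numerically.
x = np.linspace(1e-6, 1 - 1e-6, 100_001)
x_min = x[np.argmin(free_energy(x))]

# The analytic minimizer is the Boltzmann fraction x* = 1 / (1 + e^(eps/T)).
x_star = 1.0 / (1.0 + np.exp(1.0))  # eps = T = 1
```

Minimizing F = E − TS over x recovers the Boltzmann distribution – the same minimization principle that energy-based models such as the (restricted) Boltzmann machine exploit.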

Figuring Out the Puzzle (in a 2-D CVM Grid)

The Conundrum – and How to Solve It: We left off last week with a bit of a cliff-hanger: a puzzle with the 2-D CVM. (CVM stands for Cluster Variation Method; it’s a more complex form of the free energy equation that I discussed two weeks ago in the blogpost The Big, Bad, Scary Free Energy Equation (and New Experimental Results). While not entirely unknown, the CVM is still not very common.) We asked ourselves: which of the two grids…

Read More

2-D Cluster Variation Method: Code V&V

New Code (Not Released Yet): V&V the Code Before We Play: Well, my darling, as you gathered from last week’s post, the world has shifted. Up until now, when we were talking about having a new free energy function to use inside a neural network, we had to do “Gedankenexperiments” (German for “thought experiments”). Now, though, there’s working code – and I so LOVE seeing the numbers and graphs come out; teasing it, playing with it … stroking it…

Read More

The Big, Bad, Scary Free Energy Equation (and New Experimental Results)

The 2-D Cluster Variation Method Free Energy Equation – in All Its Scary Glory: You know, my dear, that we’ve been leading up to this moment for a while now. I’ve hinted. I’ve teased and been coy. But now, it’s time to be full frontal. We’re going to look at a new form of a free energy equation: a cluster variation method (CVM) equation. It deals not only with how many units are in state A or state B,…

Read More
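Before confronting the full CVM equation, it may help to see the much simpler non-CVM free energy that it generalizes – one that tracks only how many units are in state A versus state B. This is an orientation sketch of my own (per-unit form, k_B = 1, with an assumed per-unit energy ε for state B), not the CVM equation from the post:

```latex
% Simple bistate free energy per unit: F = E - T S, where
% x = fraction of units in state B and S is the per-unit mixing entropy.
\bar{F}(x) = \varepsilon x + T \left[\, x \ln x + (1 - x) \ln (1 - x) \,\right]
```

The 2-D CVM free energy extends this by adding entropy terms for local configuration variables (pairs and triplets of neighboring units), which is what makes it “big, bad, and scary.”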

A “First Principles” Approach to General AI

What We Need to Take the Next Tiny, Incremental Little Step: The “next big thing” is likely to be the next small thing – a tiny step, an incremental shift in perspective. However, a perspective shift is all that we need in order to make some real advances towards general artificial intelligence (GAI). In the second chapter of the ongoing book, I share the following figure (and sorry, the chapter itself is not released yet): Now, we’ve actually been…

Read More

A “Hidden Layer” Guiding Principle – What We Minimally Need

Putting It Into Practice: If we’re going to move our neural network-type architectures into a new, more powerful realm of AI capability, we need to bust out of the “sausage-making” mentality that has governed them thus far, as we discussed last week. To do this, we need to give our hidden layer(s) something to do besides respond to input stimuli. It’s very reasonable that this “something” should be free energy minimization, because that’s one of the strongest principles in the…

Read More

Machine Learning: Multistage Boost Process

Three Stages to Orbital Altitude in Machine Learning: Several years ago, Regina Dugan (then Director of DARPA) gave a talk in which she showed a clip of epic NASA launch fails. Not just one, but many fails. The theme was that we had to risk failure in order to succeed with innovation. This YouTube vid of rocket launch failures isn’t the exact clip that she showed (the “action” doesn’t kick in for about a minute), but it’s pretty close. For…

Read More

Seven Essential Machine Learning Equations: A Cribsheet (Really, the Précis)

Making Machine Learning As Simple As Possible: Albert Einstein is credited with saying, “Everything should be made as simple as possible, but not simpler.” Machine learning is not simple. In fact, once you get beyond the simple “building blocks” approach of stacking things higher and deeper (sometimes made all too easy with advanced deep learning packages), you are in the midst of some complex stuff. However, it does not need to be any more complex than necessary. …

Read More

A Tale of Two Probabilities

Probabilities: Statistical Mechanics and Bayesian: Machine learning fuses several different lines of thought, including statistical mechanics, Bayesian probability theory, and neural networks. There are two different ways of thinking about probability in machine learning; one comes from statistical mechanics, and the other from Bayesian logic. Both are important. They are also very different. While these two ways of thinking about probability are usually kept very separate, they come together in some of the more advanced machine learning topics, such…

Read More
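The two ways of thinking about probability described above can be put side by side in a few lines. The energy levels, prior, and likelihood values below are made-up illustrative numbers, not anything from the post:

```python
import numpy as np

# Statistical-mechanics view: Boltzmann probabilities over energy levels E_i,
# p_i = exp(-beta * E_i) / Z, where Z is the partition function.
E = np.array([0.0, 1.0, 2.0])
beta = 1.0
Z = np.sum(np.exp(-beta * E))
p_boltzmann = np.exp(-beta * E) / Z  # lower energy => higher probability

# Bayesian view: posterior over hypotheses via Bayes' rule,
# P(h | data) = P(data | h) * P(h) / P(data).
prior = np.array([0.5, 0.5])         # P(h) for two hypotheses
likelihood = np.array([0.8, 0.3])    # P(data | h)
posterior = prior * likelihood / np.sum(prior * likelihood)
```

Both are normalized distributions, but they answer different questions: the Boltzmann form assigns probability to states of a physical system, while the Bayesian form updates belief in hypotheses given data.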

Seven Statistical Mechanics / Bayesian Equations That You Need to Know

Essential Statistical Mechanics for Deep Learning: If you’re self-studying machine learning, and feel that statistical mechanics is suddenly showing up more than it used to, you’re not alone. Within the past couple of years, statistical mechanics (statistical thermodynamics) has become a more integral topic, along with the Kullback-Leibler divergence measure and several inference methods for machine learning, including the expectation maximization (EM) algorithm and variational Bayes. Statistical mechanics has always played a strong role in machine…

Read More
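Of the topics this excerpt names, the Kullback-Leibler divergence is the easiest to make concrete. Here is a minimal discrete-distribution sketch of my own (the distributions `p` and `q` are made-up illustrative values, not anything from the post):

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i), for discrete distributions.
    # Assumes q_i > 0 wherever p_i > 0; terms with p_i = 0 contribute 0.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.5]   # e.g., the "data" distribution
q = [0.9, 0.1]   # e.g., the "model" distribution
d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)  # note: KL divergence is not symmetric
```

In variational Bayes and the EM algorithm, this divergence measures how far an approximating distribution sits from the one being targeted; its asymmetry (D_KL(p‖q) ≠ D_KL(q‖p)) matters for which direction you choose to minimize.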