Category: Statistical Mechanics

Transition to Object-Oriented Python for the Cluster Variation Method

The Cluster Variation Method – A Topographic Approach: Object-oriented programming is essential for working with the Cluster Variation Method (CVM), especially if we’re going to insert a CVM layer into a neural network. The reason is that approaching free energy minima via changing node states requires dealing with node, net, and grid topographies. If we’re going to be at all strategic in moving towards free energy minima, then we can’t just pick nodes at random. We need to know…
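
To make the topographic point concrete, here is a minimal object-oriented sketch, with hypothetical Node and Grid classes (the names and layout are illustrative, not the code from this post): once each node carries its own grid coordinates, strategic, neighbor-aware node selection becomes a simple method call.

```python
# Minimal sketch of an OO grid topography (hypothetical classes, not the
# post's actual code). Each node knows its own coordinates, so the grid
# can hand back its neighbors without any global bookkeeping.

class Node:
    def __init__(self, row, col, state=0):
        self.row = row      # position in the grid topography
        self.col = col
        self.state = state  # e.g., 0 = state B, 1 = state A

class Grid:
    def __init__(self, n_rows, n_cols):
        self.n_rows = n_rows
        self.n_cols = n_cols
        self.nodes = [[Node(r, c) for c in range(n_cols)]
                      for r in range(n_rows)]

    def neighbors(self, node):
        """Nearest neighbors of a node, with wrap-around (toroidal)
        boundaries assumed here purely to keep the logic short."""
        r, c = node.row, node.col
        return [self.nodes[(r - 1) % self.n_rows][c],
                self.nodes[(r + 1) % self.n_rows][c],
                self.nodes[r][(c - 1) % self.n_cols],
                self.nodes[r][(c + 1) % self.n_cols]]
```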

Read More

What We Really Need to Know about Entropy

There’s This Funny Little “Gotcha” Secret about Entropy: Nobody mentions this secret. (At least in polite society.) But here’s the thing – entropy shows up in all sorts of information theory and machine learning algorithms. And it shows up ALONE, as though it sprang – pure and holy – from the head of the famed Ludwig Boltzmann. What’s wrong with this is that entropy never lives alone, in isolation. In the real world, entropy exists – always – hand-in-hand with…
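
As a quick reminder of the pairing being hinted at here (this is standard thermodynamics, not anything unique to the post): entropy enters the free energy only alongside an energy or enthalpy term, weighted by temperature.

```latex
% Entropy S never appears alone in the physics: it is always paired with
% internal energy U (Helmholtz) or enthalpy H (Gibbs), weighted by T.
F = U - TS    % Helmholtz free energy
G = H - TS    % Gibbs free energy
```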

Read More

Wrapping Our Heads Around Entropy

Entropy – the Most Powerful Force in the ’Verse: Actually, that’s not quite true. The most powerful force in the ’verse is free energy minimization. However, entropy is half of the free energy equation, and it’s usually the more complex half. So, if we understand entropy, then we can understand free energy minimization. If we understand free energy minimization, then we understand all the energy-based machine learning models, including the (restricted) Boltzmann machine and one of its most commonly-used…
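
A toy calculation can show what “half of the free energy equation” means in practice (my own reduced-units sketch, not code from the post): with the interaction energy switched off, the entropy term alone drives the free energy minimum to the equiprobable 50/50 mix.

```python
import math

def free_energy(x, eps=0.0, kT=1.0):
    """Reduced free energy per unit for a two-state system with a
    fraction x of units in state A (toy model, my own notation)."""
    entropy = -(x * math.log(x) + (1 - x) * math.log(1 - x))
    return eps * x - kT * entropy  # F = E - TS, per unit

# With a flat energy term (eps = 0), entropy alone sets the minimum:
xs = [i / 100 for i in range(1, 100)]
print(min(xs, key=free_energy))  # -> 0.5, the equiprobable mix
```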

Read More

Artificial General Intelligence: Getting There from Here

What We Need to Create Artificial General Intelligence (AGI): A brief recap: We know that we want to have neural networks (including deep learning) do something besides being sausage factories. We know that the key missing step – a first principles step – to making this happen is to give the network something to do when it is not responding to inputs. Also, we’ve introduced something that the neural network CAN do; it can do free energy minimization with…

Read More

Figuring Out the Puzzle (in a 2-D CVM Grid)

The Conundrum – and How to Solve It: We left off last week with a bit of a cliff-hanger: a puzzle with the 2-D CVM. (CVM stands for Cluster Variation Method; it’s a more complex form of a free energy equation that I discussed two weeks ago in this blog post on The Big, Bad, Scary Free Energy Equation (and New Experimental Results); while not entirely unknown, it’s still not very common yet.) We asked ourselves: which of the two grids…
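
One way to frame the grid comparison in code (a sketch under my own assumptions; the actual grids and the full CVM configuration variables are in the post itself): tally the like and unlike nearest-neighbor pairs in each candidate grid, since those pair counts feed the interaction-energy side of the free energy comparison.

```python
def pair_counts(grid):
    """Tally like (A-A / B-B) and unlike (A-B) nearest-neighbor pairs
    in a binary grid, with wrap-around boundaries."""
    n_rows, n_cols = len(grid), len(grid[0])
    like = unlike = 0
    for r in range(n_rows):
        for c in range(n_cols):
            # count each pair once: look right and down only
            for rr, cc in (((r + 1) % n_rows, c), (r, (c + 1) % n_cols)):
                if grid[r][c] == grid[rr][cc]:
                    like += 1
                else:
                    unlike += 1
    return like, unlike

# Two made-up candidate grids (illustrative, not the post's grids):
clumped = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
checker = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(pair_counts(clumped))  # a mix of like and unlike pairs
print(pair_counts(checker))  # every pair unlike
```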

Read More

2-D Cluster Variation Method: Code V&V

New Code (Not Released Yet): V&V the Code Before We Play: Well, my darling, as you gathered from last week’s post, the world has shifted. Up until now, when we were talking about having a new free energy function to use inside a neural network, we had to do “Gedankenexperiments” (German for “thought experiments”). Now, though, there’s working code – and I so LOVE seeing the numbers and graphs come out; teasing it, playing with it … stroking it…
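
In the V&V spirit of the post (the sketch below is mine; the post’s own code was not yet released): a standard first verification step is to compare computed quantities against closed-form values that are known exactly.

```python
import math

def entropy(x):
    """Mixing entropy per unit for a two-state system (reduced units)."""
    return -(x * math.log(x) + (1 - x) * math.log(1 - x))

# Check against a closed-form value: at x = 0.5 the entropy is exactly ln 2.
assert math.isclose(entropy(0.5), math.log(2))
# Check a known symmetry: degenerate states give a symmetric landscape.
assert math.isclose(entropy(0.3), entropy(0.7))
print("analytic V&V checks passed")
```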

Read More

The Big, Bad, Scary Free Energy Equation (and New Experimental Results)

The 2-D Cluster Variation Method Free Energy Equation – in All Its Scary Glory: You know, my dear, that we’ve been leading up to this moment for a while now. I’ve hinted. I’ve teased and been coy. But now, it’s time to be full frontal. We’re going to look at a new form of a free energy equation: a cluster variation method (CVM) equation. It deals not only with how many units are in state A or state B,…
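
For orientation, here is the simpler, non-CVM free energy the series builds from, in reduced per-unit form; the comment beneath it sketches the shape of the CVM extension, as a description of the structure rather than the post’s exact equation.

```latex
% Simple bimodal free energy per unit (reduced form), with x_1 + x_2 = 1:
\bar{F} = \varepsilon_1 x_1 + \varepsilon_2 x_2
        + kT\,(x_1 \ln x_1 + x_2 \ln x_2)

% The 2-D CVM equation keeps this unit-level (x_i) structure but adds
% entropy terms over local configuration variables: nearest-neighbor
% pairs (y_i), next-nearest-neighbor pairs (w_i), and triplets (z_i).
```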

Read More

A “First Principles” Approach to Artificial General Intelligence

What We Need to Take the Next Tiny, Incremental Little Step: The “next big thing” is likely to be the next small thing – a tiny step, an incremental shift in perspective. However, a perspective shift is all that we need in order to make some real advances towards artificial general intelligence (AGI). In the second chapter of the ongoing book, I share the following figure (and sorry, the chapter itself is not released yet): Now, we’ve actually been…

Read More

A “Hidden Layer” Guiding Principle – What We Minimally Need

Putting It Into Practice: If we’re going to move our neural network-type architectures into a new, more powerful realm of AI capability, we need to bust out of the “sausage-making” mentality that has governed them thus far, as we discussed last week. To do this, we need to give our hidden layer(s) something to do besides respond to input stimulus. A natural candidate for this “something” is free energy minimization, because that’s one of the strongest principles in the…
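
As a cartoon of “giving the hidden layer something to do” (entirely my sketch, with a made-up objective function; not an implementation from this series): between stimuli, the hidden-node states could keep flipping in whatever direction lowers a free energy measure.

```python
import random

def settle(states, free_energy, n_steps=1000):
    """Toy idle-time dynamics: flip one randomly chosen node whenever
    the flip lowers the supplied free-energy measure."""
    for _ in range(n_steps):
        i = random.randrange(len(states))
        trial = states.copy()
        trial[i] = 1 - trial[i]  # flip node i between states 0 and 1
        if free_energy(trial) < free_energy(states):
            states = trial
    return states

# Stand-in objective (made up for illustration): prefer a 50/50 state mix.
f = lambda s: (sum(s) / len(s) - 0.5) ** 2
print(settle([1] * 10, f))  # drifts toward five 1s and five 0s
```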

Read More

How Getting to a Free Energy Bottom Helps Us Get to the Top

Free Energy Minimization Gives an AI Engine Something Useful to Do: Cutting to the chase: we need free energy minimization in a computational engine, or AI system, because it gives the system something to do besides being a sausage-making machine, as I described in yesterday’s blog on What’s Next for AI. Right now, deep learning systems are constrained to be simple input/output devices. We force-feed them with stimulus at one end, and they poop out (excuse me, “pop out”)…

Read More