New Code (Not Released Yet): V&V the Code Before We Play
Well, my darling, as you gathered from last week’s post, the world has shifted.
Up until now, when we talked about using a new free energy function inside a neural network, we had to settle for “Gedankenexperimente” (German for “thought experiments”).
Now, though, there’s working code – and I so LOVE seeing the numbers and graphs come out; teasing it, playing with it … stroking it to greater performance …
What We Have Now …
Actually, my intro to this post was going to be much more mundane … sort of a mutual commiseration about how bo-o-o-ring code V&V is … and that’s true. True, that is, if (and ONLY if) you’re really not all that excited about the code.
But if the code is really a brand-new toy, and it’s like Christmas and we’ve just unwrapped the biggest box of Legos that money could buy — well, then. That’s different. Code V&V is the other side of documentation: it’s not so much about how the thing is put together as an assurance that, as it is currently put together, it works. The bridge will take some weight, for example.
Which brings me to why I’m just juicy-level ebullient this morning; I got the V&V finished over the weekend, and have spent an enormous amount of time (in an exhausted and caffeine-fueled state) getting the V&V document itself reworked and posted to arXiv. And the post succeeded; the V&V is up on arXiv now, and you can access it here as the arXiv abstract, and here’s the pdf.
And while you’re at it, why not check out this cool new tool developed by Andrej Karpathy, whom I’ve mentioned several times (at least in my deep learning class): it’s the arxiv sanity preserver, which gives you ORGANIZED access to recent arXiv works, and it’s a lot of fun.
Back to the Code … A Nice Little Example
So here’s an example … and it’s not a full example; not just yet. But we’re going to tease our way into something that is really very new; there are (to the very best of my knowledge) no other discussions like this out there on the web, so we’re going to take it slow and easy. Just a little bit at a time.
Here are two different grid topographies:
The motivation behind these two examples is work from the brain research community; I cite a lot of relevant work in my 2016 2-D CVM paper, so if you’re interested, you can read a good summary there and find all sorts of good references.
The ideas that various brain researchers have proposed are that there are various kinds of neural connectivity patterns:
- Scale-free – a fractal pattern is scale-free; the same design repeats itself at multiple levels of resolution. Similarly, a continent might have a ring of islands around it, and then those islands would have their own rings of smaller islands, etc.
- Small-world – all the active neurons (or neuronal collectives or columns) connect (through various intermediaries) to all other active ones; it’s a kind of “six-degrees-of-separation” notion.
- Rich club – all the active neuron groups connect more-or-less tightly with (and only with) other active neuron groups; the idea is that members of a “rich club” see each other at the yacht club, the board meetings, the society galas, etc. Just like the “small world” concept, only more densely and tightly interconnected.
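If you’d like to get a hands-on feel for these three connectivity patterns, here’s a small illustrative sketch (my own, not part of the V&V code) using the networkx library; the node count and parameters are arbitrary choices for demonstration:

```python
# Illustrative sketch of the three connectivity patterns, using networkx.
# (Not the post's CVM code; parameters are arbitrary illustration choices.)
import networkx as nx

n = 100  # number of nodes

# Scale-free: preferential attachment yields a power-law degree distribution,
# i.e., the same hub-and-spoke design repeats across scales.
scale_free = nx.barabasi_albert_graph(n, m=2, seed=42)

# Small-world: a ring lattice with a few random rewirings gives short paths
# between any two nodes (the "six degrees of separation" idea).
small_world = nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=42)
print(nx.average_shortest_path_length(small_world))

# Rich club: the tendency of high-degree nodes to connect densely with each
# other; networkx reports a coefficient per degree level.
rcc = nx.rich_club_coefficient(scale_free, normalized=False)
print(max(rcc.values()))
```

The rich-club coefficient near 1.0 at high degrees would indicate that the best-connected nodes form a nearly complete subgraph — the “yacht club” effect described above.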
One of These Is at Equilibrium; the Other One Isn’t
Of the two examples shown in the preceding figure, one is at equilibrium and the other one isn’t.
And your guess as to which one is at equilibrium is probably wrong!
Before we get into this, let’s review what we know about entropy.
We know that entropy measures how a system is distributed among its possible states. We also know that systems tend towards maximal entropy.
In fact, free energy minimization trades off the system’s entropy (its drive towards a configuration maximally distributed over possible states) against the system’s enthalpy (its overall energy level, which the system seeks to minimize at equilibrium).
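We can see this tradeoff in a toy model (my own minimal sketch, not the 2-D CVM free energy itself): take a two-state system where a fraction x of units sits in a higher-energy state, and minimize F = H − T·S numerically:

```python
# Minimal toy sketch of the enthalpy-vs-entropy tradeoff (not the CVM code).
import math

def free_energy(x, eps=1.0, T=1.0):
    """F = H - T*S for a two-state system: a fraction x of units occupies
    the higher-energy state (energy eps); the rest have energy 0."""
    H = eps * x                      # enthalpy: mean energy per unit
    S = 0.0
    for p in (x, 1.0 - x):           # mixing (distribution) entropy
        if p > 0:
            S -= p * math.log(p)
    return H - T * S

# Scan x over (0, 1): pure enthalpy minimization would push x to 0,
# pure entropy maximization would push x to 0.5; the free energy
# minimum lands in between -- the tradeoff described above.
xs = [i / 1000 for i in range(1, 1000)]
x_min = min(xs, key=free_energy)
print(round(x_min, 3))   # x_min ~ 0.269, between 0 and 0.5
```

Analytically, setting dF/dx = 0 gives x/(1−x) = exp(−eps/T), so x ≈ 0.269 for eps = T = 1 — neither the lowest-enthalpy nor the highest-entropy configuration wins outright.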
When we look at the two configurations in the figure above, even though we haven’t played much at all with the notion that entropy can include local patterns (what-is-next-to-what), our gut sense tells us that the system on the right has a very low entropy. A high-entropy system (our gut sense and intuition tell us) would just look messier.
We know that high-entropy systems have a high degree of disorder, and the one on the right is just way too ordered.
When we look at the system on our left, we’d say that there’s a lot more “distribution among possible states.” We see black (state A) nodes next to white (state B) and next to black (state A). We see all the possible combinations of triplets (what-is-next-to-what-is-next-to-what).
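To make the “what-is-next-to-what” idea concrete, here’s a small sketch with two stand-in rows of my own invention (illustrative only — not the actual grids in the figure): one mixed-looking, one highly ordered. We count the distinct pair and triplet patterns along each row:

```python
# Illustrative sketch: counting local "what-is-next-to-what" patterns.
# The two rows are my own stand-ins, not the figure's actual grids.
from collections import Counter

mixed   = "ABABBABAABBA"   # disordered-looking row (like the left grid)
ordered = "AAAAAABBBBBB"   # highly ordered row (like the right grid)

def local_patterns(row, width):
    """Count all contiguous patterns of the given width (2 = pairs, 3 = triplets)."""
    return Counter(row[i:i + width] for i in range(len(row) - width + 1))

for name, row in [("mixed", mixed), ("ordered", ordered)]:
    pairs    = local_patterns(row, 2)
    triplets = local_patterns(row, 3)
    print(name, len(pairs), "pair types,", len(triplets), "triplet types")
```

The mixed row realizes more distinct pair and triplet configurations than the ordered one — the local-pattern contribution to entropy that the Cluster Variation Method captures.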
And yes, you’d be correct. The system on the left DOES have a greater entropy than the one on the right.
And yet, the one on the left is NOT at equilibrium, and the one on the right IS at equilibrium.
Just the first of many little intellectual delights that we’ll play with in the weeks ahead.
Live free or die, my friend –
Live free or die: Death is not the worst of evils.
Attr. to Gen. John Stark, American Revolutionary War
P.S. Yes, I will release the code. At some time … in the future. In the hopefully not too terribly distant future.
But at the moment, I’m having such a fun time playing with it … all by myself … and I just have to be selfish. For a little while.
And a lady has to have her private pleasures …
The Essential References
If you’re going to follow along, there are now two valuable papers – the older, more tutorial paper on the 2-D Cluster Variation Method, and (just published last night) the Code V&V:
- Maren, A.J. (2018) Free Energy Minimization Using the 2-D Cluster Variation Method: Initial Code Verification and Validation, THM TR2018-001(ajm), arXiv:1801.08113 [cs.NE] arXiv abstract.
- Maren, A.J. (2016) The Cluster Variation Method: A Primer for Neuroscientists. Brain Sciences, 6(4), 44. doi:10.3390/brainsci6040044 pdf
Previous Related Posts
- The Big, Bad, Scary Free Energy Equation (and New Experimental Results)
- A “First Principles” Approach to General AI
- A Hidden Layer Guiding Principle: What We Minimally Need
- How Getting to a Free Energy Bottom Helps Us Get to the Top
- What’s Next for AI: Beyond Deep Learning