Book – Statistical Mechanics, Neural Networks, and Artificial Intelligence

Are you frustrated, burned-out, and just plain exhausted with teaching yourself deep learning / machine learning, and finding yourself stuck?

Very likely, you actually have been making substantial progress.


Already signed up? Just want access?
Click Here for Contents and Links
 
 
If you haven’t signed up yet, and if you want to know as soon as new chapters and YouTube vids are published, read on and Opt-In!
 

You’ve learned the basics about neural networks and backpropagation. You’ve developed and tested some neural network / deep learning capabilities: classification engines (simple, semi-deep, and even deep). You’ve done autoencoders, CNNs (convolutional neural networks), RNNs (recurrent neural networks), RBFNs (radial basis function networks), and others.

Maybe you’ve even stumbled a bit through RBMs (restricted Boltzmann machines), and have noticed – right around there – that the going was getting a whole lot rougher.

Something happened, something changed, and all of a sudden, you’re feeling stuck.

Worse than stuck.

You’re feeling like you’ve been plopped into this strange landscape where everyone expects you to speak their language, and it’s filled with terms like partition function, equilibrium, expectation maximization, and other arcane phrases.

My friend, you have wandered into the realm of statistical mechanics applied to deep learning / machine learning. And it is, indeed, a very different world.

What Just Happened

What just happened, in a nutshell, is that you’re not in Kansas any more.

On the wagon train journey from Missouri to the gold rush in California, you left the Kansas tall grass country a while ago. You’ve even passed through the Wyoming short grass country.

At some point, the nature of machine learning changes. Going forward requires learning statistical mechanics.

You’ve reached Fort Laramie in Wyoming, in the foothills of the Rockies. Going forward, different rules apply.

Why This Has Been So Hard for Everyone

Getting caught in the machine learning landscape without a statistical mechanics background is like being caught in Donner Pass with a broken axle. “Winter is coming.”

You’ve probably tried to teach yourself the statistical mechanics, at least a little bit. If so, then:

You’ve just broken an axle in Donner Pass, and winter is coming.

It’s not because you’re stupid, okay?

And it’s not because you haven’t been willing to sit down and put forth the effort.

In fact, the only way that you would get to this page at all is through putting forth the effort.

But if you’ve been cruising along the edge of statistical mechanics for neural networks, machine learning, and artificial intelligence / deep learning applications, see if this cry for help (posted on Quora) resonates with you:

The question read:

“How can I develop a deep/unified view of statistical mechanics, information theory and machine learning?”

“I learned all three fields independently (from physics, EE and cs), and many of the same concepts show up in all of them, some times with different meaning and views, like: entropy, maximum entropy model, Ising model, Boltzmann machine, partition function, information, energy, mean field, variational methods, phase transition, relative entropy, coding/decoding/inference, etc, etc. I felt that my understanding of those concepts are broken, lacking an unified view.”

The person asking this question had it so absolutely right.

He mentioned that he had an electrical engineering background, which is nothing to sneeze at. He also knew a lot of different neural network / machine learning areas; at least, he knew their names.

And his laundry list of topics was exactly spot-on. He’d identified the right things.

Yet he was having a horrible time putting them all together, and he’s not alone.

Why This Is So Very Hard

So suppose that you’ve sucked it up, and have been getting into the hard stuff – on your own.

Suppose that, just like the guy who posted his question on Quora, you’ve been trying to pull together a basic understanding of statistical mechanics and how it underlies fundamental topics in neural networks and machine learning / deep learning. But it’s been slow, sloggy, painful going – and you’re not at all sure that you’re learning what you need to know.

There are three big reasons why it has been so very hard:

  1. Most texts and tutorials are written by physicists, for physicists,
  2. They don’t know when to stop, and
  3. The notational cross-references are a killer.

Let’s just pull this apart for a moment, shall we? At the very least, you’ll understand why you’ve felt so frustrated all along.

Key Factor #1: Physicists Write for Physicists

Physicists are not like you and me. (Unless you are one. In which case, you don’t need this page.)

Physicists live in an abstract, exalted space in which words get in the way of thinking, and diagrams are usually not necessary.

This is in stark contrast to the rest of us.

Key Factor #2: It’s Hard to Know When You Can Stop

If you’ve bitten the bullet, told yourself that you’d start from the beginning, and picked up an honest-to-God, for-real physics or physical chemistry text, then you’ve quickly realized that it would get you more confused than clear-minded.

The reason? Any good physics / physical chemistry text is uber-focused on its own area. The authors don’t know how to relate it to what YOU need to know. And there you are, following the bunny-trail down to the heat capacity of solids, linear harmonic oscillators, and a number of things that are truly exciting to physicists – but are not at all relevant to what YOU need to know.

Further, you can’t just stop reading, because things that you DO need to know – such as variational principles – are hidden later in the text.

Key Factor #3: Notational Cross-References Will Kill You

Notational cross-references are the biggest gotcha in the entire journey. You read something from one person, work for hours (days, weeks) to understand it … then read something on the same topic by another person, and it just doesn’t make sense.

The reason?

The notation used by the second person (and the third, fourth, and fifth …) is just different enough so that it’s almost impossible to mentally translate and correlate what you’ve learned in one setting to another.

Notational cross-references are the places where you get stuck in Donner Pass over winter. We all know what that leads to. The chances for survival? Not good.

It’s Not Your Fault. And It’s Not Theirs. What to Do.

Statistical Mechanics for Neural Networks and Deep Learning; in progress.

Obviously, you’re going to have to learn some statistical mechanics.

Just as obviously, picking up a random stat mech book – or even a very good one – is not going to work.

Not that the authors aren’t trying. It’s just that they don’t know how to deliver what you need.

That’s why I’m writing a book, Statistical Mechanics, Neural Networks, and Machine Learning.

The book is in progress.

You will even be able to access an early draft of the crucial chapters, as soon as I can make this happen.

What you can do, right this minute, is to get your hands on the Précis.

The Seven Key Equations

There are seven key equations that are crucial in machine learning, drawing from both statistical mechanics and Bayesian theory.

They are not all that you need. Machine learning is a pretty complex field, and is growing rapidly.

However, these seven equations will get you going.

They’ll get you out of Donner Pass, back on the trail, and once again heading towards the California Gold Rush of machine learning.

Seven equations from statistical mechanics and Bayesian probability theory that you need to know, including the Kullback-Leibler divergence and variational Bayes.

These are the equations that you’ll learn about in the Précis.
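To give just a flavor here (a standard textbook statement, not a preview of the Précis’ exact numbering): the Bayesian side of the story rests on Bayes’ rule, which relates a posterior to a prior and a likelihood:

$$ P(h \mid d) \;=\; \frac{P(d \mid h)\, P(h)}{P(d)} $$

where $h$ is a hypothesis (or model), $d$ is the data, $P(h)$ is the prior, $P(d \mid h)$ is the likelihood, and $P(h \mid d)$ is the posterior.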


A Précis for the Seven Essential Machine Learning Equations

A student of mine – a very bright guy – looked at these equations and asked, “What do the double lines mean?” (See Eqns. 6 & 7 above.)

Very good question. It’s one of the things that kicked me into writing this book: because if you can’t even read the equation, then you’re stuck in Donner Pass, with a broken axle, and your tool chest has tumbled down the mountain.
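(A quick, hedged answer for the impatient: the double lines are the standard notation for the Kullback-Leibler divergence between two probability distributions,

$$ D_{KL}(P \,\|\, Q) \;=\; \sum_{x} P(x)\, \log \frac{P(x)}{Q(x)}, $$

where $P$ is typically the data or target distribution and $Q$ is the model’s approximation to it. How to actually read and use that expression is exactly what the Précis walks through.)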

In response to his question, I’ve created the 24-page Précis for the book-in-progress, Statistical Mechanics, Neural Networks and Machine Learning.

What it has:

  1. The Seven Essential Machine Learning Equations,
  2. How to read these equations, in plain English, and
  3. Figures and examples.

All this does is help you READ THE EQUATIONS. That’s all. No derivations.

But after reading through this, your comfort, familiarity and overall confidence will increase.

You’ll be able to name the demons.

You can get the Précis – plus a bonus slidedeck on microstates – by doing (yes, I know) another Opt-In process.

I’ve added two more Bonus Microstates slidedecks to this – which you’ll get on Days 7 and 9. This means that you WILL understand microstates. It also means that you’ll start getting an intuitive feel for free energy and free energy minimization (equilibrium), which will let you at least start understanding some of the papers, books, and monographs that you’re trying to read.
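If you’d like a sneak preview of where that free-energy intuition lands (this is the standard statistical-mechanics formulation, stated here without derivation): the probability of finding a system in microstate $i$ with energy $E_i$, at temperature $T$, follows the Boltzmann distribution,

$$ p_i \;=\; \frac{e^{-E_i / k_B T}}{Z}, \qquad Z \;=\; \sum_j e^{-E_j / k_B T}, \qquad F \;=\; -\,k_B T \ln Z, $$

where $Z$ is the partition function, $k_B$ is Boltzmann’s constant, and $F$ is the (Helmholtz) free energy that the system minimizes at equilibrium. These are the pieces that the microstates slidedecks aim to make intuitive.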

To be sure that you get the ENTIRE follow-on sequence, read and follow the directions below.

Five steps to get your copy of the Précis and also the Bonus Microstates Slidedeck:

  1. Use the Opt-In form below and (yes) opt-in.
  2. Check your email for your confirmation message. Click on the link to confirm.
  3. Check your email again – and IMPORTANT – you may need to check your “Promotions” folder, or even look in your spam folder. Find the email that should have arrived AS SOON AS YOU CONFIRMED that you opted in. Move this email to your regular “Primary” folder, so you’ll get further emails about machine learning.
  4. Open the email. It will have a link. Click on the link. It will take you to the webpage where the goodies are stored.
  5. Once on the “goodies” webpage, look for the orange-colored area. Look for two download buttons. They’re yours. Use them.

Final note & word-of-warning: Check your email EVERY DAY for the next nine days. You’ll get a follow-on sequence. Starting about Day 3 or 4 (giving you time to read and digest both the Précis and the first Bonus Microstates Slidedeck), we go into a five-day follow-on tutorial sequence. Content pages. TWO more Microstates slidedecks.

At the end of this, you will feel immensely more comfortable and confident.

But you have to get those extra follow-ons and bonuses.

To do that, move the emails from me into your Primary folder.

I don’t skip a day. Not until I’m sure that you know what you need to know to get started.

Make sure you get all the juicy goodies.

Opt-In to get your Précis and Bonus Slidedeck – Right HERE:






