Browsed by Tag: restricted Boltzmann machine

Future Directions in AI: Fundamentals (Part 1) – New YouTube Vid

Are you an AI expert, or are you planning to be? There are three fundamental challenges that will underlie the major AI evolutions over the next decade. These are the three areas where you NEED to understand the fundamentals – before AI moves so fast that you’ll never catch up. Let them guide your deep study for the year ahead. Check them out in this new YouTube post: Live free or die, my friend – AJ Maren…

Read More

Directed vs. Undirected Graphs in NNs: The (Surprising!) Implications

Most of us don’t always use graph language to describe neural networks, but if we dig into the implications of graph theory language, we get some surprising (and very useful) insights! We probably all know that a typical feedforward neural network can be described as a “directed graph.” Many of us also know that a restricted Boltzmann machine (RBM) is an “undirected graph.” In this little difference of terms, there is a wealth of meaning. Salakhutdinov, Mnih, and Hinton (2007;…
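To make the directed-vs-undirected contrast concrete, here is a minimal NumPy sketch (mine, not from the post; all variable names are illustrative). A feedforward layer only defines a forward pass, while an RBM reuses one symmetric weight matrix for both the visible-to-hidden and the hidden-to-visible direction – one weight per undirected edge:

```python
import numpy as np

rng = np.random.default_rng(0)

# Directed graph: a feedforward layer moves signals in one direction only.
W_ff = rng.normal(size=(4, 3))            # 4 inputs -> 3 outputs
x = rng.normal(size=4)
y = np.tanh(x @ W_ff)                     # forward pass; no reverse pass exists

# Undirected graph: an RBM has ONE set of weights connecting visible and
# hidden units, so inference runs in both directions over the same edges.
W = rng.normal(size=(4, 3))               # 4 visible units, 3 hidden units
a = np.zeros(4)                           # visible biases
b = np.zeros(3)                           # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v = (rng.random(4) < 0.5).astype(float)   # a binary visible vector
p_h = sigmoid(v @ W + b)                  # visible -> hidden uses W
p_v = sigmoid(p_h @ W.T + a)              # hidden -> visible reuses the SAME W
```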

Read More

Book Chapter: Draft Chapter 7 – The Boltzmann Machine

Chapter 7: Energy-Based Neural Networks This is the full chapter draft from the book-in-progress, Statistical Mechanics, Neural Networks, and Artificial Intelligence. This chapter draft covers not only the Hopfield neural network (released as an excerpt last week), but also the Boltzmann machine, in both general and restricted forms. It deals with that form-equals-function connection, based on the energy equation. (However, we postpone the full-fledged learning method to a later chapter.) Get the pdf using the pdf link in the citation…
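For readers skimming this excerpt, the energy equations behind that form-equals-function argument are the standard ones (the chapter’s own notation may differ slightly). For a Hopfield network or general Boltzmann machine over binary units s_i, and for an RBM with visible units v_i and hidden units h_j:

$$E(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j - \sum_i b_i s_i, \qquad E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i\, w_{ij}\, h_j$$

In both cases the network’s form (which units are connected) directly fixes its function (which configurations have low energy).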

Read More

A “Hidden Layer” Guiding Principle – What We Minimally Need

Putting It Into Practice: If we’re going to move our neural network-type architectures into a new, more powerful realm of AI capability, we need to bust out of the “sausage-making” mentality that has governed them thus far, as we discussed last week. To do this, we need to give our hidden layer(s) something to do besides respond to input stimulus. A natural candidate for this “something” is free energy minimization, because that’s one of the strongest principles in the…
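One standard, concrete version of that principle (a textbook result, not a quote from the post): for an RBM with binary hidden units, summing the hidden layer out of the Boltzmann distribution gives a closed-form free energy for a visible vector,

$$F(\mathbf{v}) = -\sum_i a_i v_i - \sum_j \ln\!\left(1 + e^{\,b_j + \sum_i v_i w_{ij}}\right)$$

so training that lowers F on the data is exactly the hidden layer doing free-energy-minimizing work, rather than merely echoing the input.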

Read More