Category: A Resource

Future Directions in AI: Fundamentals (Part 1) – New YouTube Vid

Are you an AI expert, or are you planning to be? There are three fundamental challenges that will underlie the major AI evolutions over the next decade. These are the three areas where you NEED to understand the fundamentals – before AI moves so fast that you’ll never catch up. Let them guide your deep study for the year ahead. Check them out in this new YouTube post. Live free or die, my friend – AJ Maren…

Read More

New! The YouTube Vid Series: Backpropagation and More

If you are branding yourself as an AI/neural networks/deep learning person, how well do you really know the backpropagation derivation? That is, could you work through that derivation, on your own, without having to find a tutorial on it? If not – you’re in good company. MOST people haven’t worked through that derivation – and for good reason. MOST people don’t remember their chain-rule methods from first-semester calculus. (It probably doesn’t help if I say that the backprop…
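
For orientation, the core of that derivation is nothing more than the chain rule applied layer by layer. A minimal sketch for a single output-layer weight, in my own illustrative notation (not necessarily the tutorial’s):

\[
\frac{\partial E}{\partial w_{ij}} \;=\; \frac{\partial E}{\partial o_j}\,\frac{\partial o_j}{\partial \mathrm{net}_j}\,\frac{\partial \mathrm{net}_j}{\partial w_{ij}},
\qquad \mathrm{net}_j = \sum_i w_{ij}\, o_i .
\]

With a squared-error E = (1/2)(t_j − o_j)^2 and a sigmoid activation o_j = σ(net_j), those three factors evaluate to −(t_j − o_j), o_j(1 − o_j), and o_i, respectively; hidden-layer weights repeat the same pattern, just with more terms to grind through.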

Read More

AI at the Edge: Upcoming Webinar

“AI at the Edge” – coming soon to a theater near you! OK. Maybe not to a theater. But most certainly, a live webinar (where you can ask questions during the live Q&A at the end), to be hosted by Avnet on Thursday, Dec. 5th, at 2 PM EST. (Don’t worry; I’ll send out reminders.) Why “AI at the Edge”? And Why Now? You know that I’m mostly a theoretician. (Love ’em equations.) So for me to go over to the dark…

Read More

Interpreting Karl Friston (Round Deux)

He might be getting a Nobel prize someday. But – no one can understand him. You don’t believe me? Have a quick glance at Scott Alexander’s article, “God Help Us, Let’s Try To Understand Friston On Free Energy”. We’re referring, of course, to Karl Friston. I’ve spent the past three-and-a-half years studying Friston’s approach to free energy, which he treats as the guiding principle in the brain. He has extended the classic variational Bayes treatment (frontier material in machine learning)…
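
For readers who want an anchor before diving in: the variational free energy in question can be written, in generic variational-Bayes notation (Friston’s own symbols differ), as

\[
F \;=\; \mathbb{E}_{q(\theta)}\!\left[\ln q(\theta) - \ln p(o,\theta)\right]
\;=\; D_{\mathrm{KL}}\!\left(q(\theta)\,\Vert\,p(\theta \mid o)\right) \;-\; \ln p(o),
\]

so minimizing F over the approximating distribution q simultaneously tightens the bound on surprise, −ln p(o), and pulls q toward the true posterior.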

Read More

Directed vs. Undirected Graphs in NNs: The (Surprising!) Implications

Most of us don’t always use graph language to describe neural networks, but if we dig into the implications of graph theory language, we get some surprising (and very useful) insights! We probably all know that a typical feedforward neural network can be described as a “directed graph.” Many of us also know that a restricted Boltzmann machine (RBM) is an “undirected graph.” In this little difference of terms, there is a wealth of meaning. Salakhutdinov, Mnih, and Hinton (2007;…
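
To make the distinction concrete, here is a minimal, hypothetical Python sketch (my names and shapes, not from the post or the cited paper): a feedforward layer is a directed graph, while an RBM reuses the same weight matrix in both directions – the signature of an undirected graph.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # DIRECTED graph: activation flows one way, inputs x -> outputs h.
    def feedforward_layer(x, W, b):
        return sigmoid(W @ x + b)

    # UNDIRECTED graph (RBM): the SAME matrix W mediates influence
    # in both directions between visible v and hidden h units.
    def rbm_p_hidden(v, W, b_hid):
        return sigmoid(W @ v + b_hid)      # p(h = 1 | v)

    def rbm_p_visible(h, W, b_vis):
        return sigmoid(W.T @ h + b_vis)    # p(v = 1 | h) -- note W.T, not a second matrix

There is no separate “backward” weight matrix in the RBM; W.T simply traverses the same edges in the other direction, which is exactly what “undirected” buys you.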

Read More

Book Chapter: Draft Chapter 7 – The Boltzmann Machine

Chapter 7: Energy-Based Neural Networks. This is the full chapter draft from the book-in-progress, Statistical Mechanics, Neural Networks, and Artificial Intelligence. This chapter draft covers not only the Hopfield neural network (released as an excerpt last week), but also the Boltzmann machine, in both general and restricted forms. It deals with that form-equals-function connection, based on the energy equation. (However, we postpone the full-fledged learning method to a later chapter.) Get the pdf using the pdf link in the citation…
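
For the form-equals-function point, the energy equation at the center of the chapter is, in standard notation (the chapter’s own symbols may differ), for the restricted Boltzmann machine with visible units v and hidden units h:

\[
E(\mathbf{v},\mathbf{h}) \;=\; -\sum_i a_i v_i \;-\; \sum_j b_j h_j \;-\; \sum_{i,j} v_i\, w_{ij}\, h_j ,
\]

with equilibrium probabilities p(v, h) ∝ e^{−E(v,h)}. The general Boltzmann machine adds visible–visible and hidden–hidden connection terms, and the Hopfield network uses the same kind of quadratic energy with deterministic dynamics.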

Read More

Book Excerpt: Chapter 7

Chapter 7: Energy-Based Neural Networks. This is the first time that I’m sharing an excerpt from the book-in-progress, Statistical Mechanics, Neural Networks, and Artificial Intelligence. This excerpt covers the Hopfield neural network only; I’m still revising, editing, and adding to the remaining sections on the (general and restricted) Boltzmann machine. Get the pdf using the pdf link in the citation below: Maren, A.J. (In progress). Chapter 7: Introduction to Energy-Based Neural Networks: The Hopfield Network and the (Restricted) Boltzmann Machine…
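
As a quick companion to the Hopfield portion of the excerpt, a minimal Python sketch of the standard formulation (function and variable names are mine, not the chapter’s):

    import numpy as np

    def hopfield_energy(s, W):
        # Standard Hopfield energy for bipolar states s in {-1, +1}:
        # E = -(1/2) s^T W s, with symmetric W and zero diagonal.
        return -0.5 * s @ W @ s

    def hopfield_step(s, W):
        # Asynchronous update: one randomly chosen unit aligns with its
        # local field; each such step can never increase the energy.
        i = np.random.randint(len(s))
        s = s.copy()
        s[i] = 1 if W[i] @ s >= 0 else -1
        return s

Repeated steps settle the state into a local energy minimum – one of the stored-pattern attractors – which is the form-equals-function behavior the chapter builds on.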

Read More

Book Progress

The Uber-Important Chapter 7 – Introduction to Energy-Based Neural Networks: I tell students that it’s like being on the Oregon Trail. All of the stochastic gradient-descent networks (up to and including Convolutional Neural Networks, or ConvNets, and Long Short-Term Memory networks, or LSTM networks) can be understood using backpropagation. This requires only that first-semester calculus background. Sure, grunting through the chain rule (many, many times) gets tedious. But it’s doable. In contrast, the energy-based networks are the heart and…
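
To see why first-semester calculus suffices: every network in that stochastic-gradient-descent family trains with the same weight update,

\[
w \;\leftarrow\; w \;-\; \eta\,\frac{\partial E}{\partial w},
\]

where η is the learning rate and the gradient ∂E/∂w is assembled – term by tedious term – via the chain rule. (Generic notation here, not the book’s.)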

Read More

The Yin and Yang of Learning Deep Learning

Sometimes Leaning Into It Is Not Enough: You folks tend to be hyper-focused, hugely on-your-game types. (My TA, and one of my favorite people, described himself as “alpha-squared.” So true for a lot of you!) So given your alpha-ness (or your alpha-squared-ness), your dominant approach to mastering a new topic is to work like crazy. Read a whole lot of stuff, from original papers down to tech blogs and forums. You install code environments, teach yourselves all the latest and…

Read More

Entropy Trumps All (First Computational for the 2-D CVM)

Computational vs. Analytic Results for the 2-D Cluster Variation Method: three lessons learned from the first computational results for the 2-D Cluster Variation Method, or CVM. The first-results comparisons between the analytic predictions and the actual computational results tell us three things: (1) the analytics are a suggestion, not an actual prediction of values – and the further we go from zero values for the two enthalpy parameters, the more the two diverge; (2) topography is important (VERY important); and (3) entropy rules the…
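
The thermodynamics behind lesson (3), stated generically (the CVM’s actual entropy term is built from cluster probabilities and is more elaborate): the system minimizes a free energy

\[
F \;=\; H \;-\; TS ,
\]

so when the two enthalpy parameters sit at or near zero, H contributes almost nothing and minimizing F collapses into maximizing the entropy S – one way to read why entropy dominates these first computational results.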

Read More