What’s Next for AI (Beyond Deep Learning)


The Next Big Step:

 

We know there’s got to be something.

Right now, deep learning systems are like sausage-making machines.

(Image: current deep learning systems are like sausage-making machines, as shown by Jon Thorner. Image from Thorner’s YouTube post.)

You put raw materials in at one end, turn the crank, and at the other end, you get output – nicely wrapped-up sausages.

Wherever you are in your studies of machine learning / deep learning / neural networks / AI, you know there’s got to be more.

If we’re going to make anything like artificial general intelligence (AGI), we need systems that do more than operate as simple input-output devices.

 
 

Here, in that blessed oasis of tranquility between Christmas and New Year, when many of you have a week off from work and the franticness of Christmas preparations has abated, the presents have all been opened, the phone calls and visits have been made, and the big meal digested … you’ve probably done three important (life-essential, crucial) things:

  1. Gone to the gym, at least once. (Hey, I’m going out for a longish walk after writing this, so we’re all in good company here.)
  2. Done a first-pass triage on the desk clutter. (Yeah, it really has piled up, hasn’t it? My situation is probably like yours – I’ve got a box full of stuff to sort and toss or file. Once again, you’re not alone.)
  3. Had yourself a serious early-morning coffee – alone, in quiet – and thought to yourself: what exactly is NEXT for AI?

Really important question, that last one. Particularly if you’re just beginning to recover from (yet another) grueling quarter in Northwestern’s Master of Science in Predictive Analytics program (soon to become the Master of Science in Data Science).

I know. I get it.

You’re worn out, totally drained, exhausted … and there’s more to come. One or two more electives, and then a Capstone or Thesis.

And even if you’re not active in NU’s MSPA program, you’re self-studying like a mad banshee – Udemy, Coursera, tutorials, blogs … downloading TensorFlow and Keras and reading and experimenting like crazy. You are just trying to catch up … to match orbit with the already-out-there world of AI and machine learning.

And damn, it’s crazy hard work, isn’t it?

So, I know. The last thing that you want to hear from me is that the AI world is going to flip-flop like crazy within the next few years.

The only thing that you want to hear less than that is that the next evolution – the next giant step in AI – is likely to involve the kind of physics that most people don’t hit until they’re in graduate school. In physics. Which is an area with which … perhaps (at most) about 1% of you have any familiarity.

And you know that there’s no way in which you can take on yet another Master’s degree program in (of all things) physics.

No way. Just not gonna happen.

At the same time, you’re pouring yourself into learning the fundamentals, and busting your a** to catch up with the basics of machine learning. (That includes a lot of basics – math, R and Python proficiency, and methods that are in the “everybody knows this” category – except that you’re learning them now.)

So, it’s awful hard to imagine that all this work is for naught, and that you’re going to have to do an even bigger push to get on top of the next wave.

So what’s a guy or gal to do?

Certainly not just go back to bed and hope that this is all just a horrible bad dream, and that when you come back out of dream-state, the world will suddenly be sane.

Because it won’t be. It’s getting crazier by the week. And you’re doing the only sane, sensible, practical thing possible – which is massively studying and learning to be on top of this current wave.

So, there is an answer.

 
 

A Sad Cautionary Tale

 

A while ago, I wrote about my mom. She got her Master’s in biology just when Watson & Crick (and Rosalind Franklin – see this fascinating (and short) little history) … but I digress.

(Image: the iconic photo from Life magazine of a sailor kissing a woman in Times Square, celebrating victory over Japan at the end of WWII. Story behind the photo.)

Mom got her Master’s just as Watson and Crick discovered the structure of DNA. Her scientific world changed overnight. So, she got married, as women tended to do in that era. (It was just post WWII; the men were finally coming home, and everyone was desperate for normalcy and for family life.) Mom had babies; practical genetics.

She gave up her science career, which was not – at that time – exceedingly important to her.

You, however – my dear one – are in no wise about to give up your career. You’ve worked WAY too hard, and you know that the world is not returning to “normal.” So you can’t let up.

What you can do is to give yourself a long runway.

 
 

The Essence of What Is to Come

 

So here’s the heart and soul and core of the whole thing: nature is ruled by a process called free energy minimization.

Free energy minimization already plays a role in current machine learning / deep learning systems; it is what happens whenever we have an energy-based model. For example, gradient descent in the (restricted) Boltzmann machine is, at heart, free energy minimization: training pushes the free energy of the patterns we want the machine to learn downward.
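(If you’d like to see that connection in something you can actually run, here’s a minimal numpy sketch of the free energy of a binary restricted Boltzmann machine. The variable names – W for the weight matrix, b_vis and b_hid for the visible and hidden biases – are my own illustrative choices, not anything canonical.)

    import numpy as np

    def rbm_free_energy(v, W, b_vis, b_hid):
        # Free energy of a binary RBM:
        #   F(v) = -b_vis . v - sum_j log(1 + exp(b_hid_j + (v W)_j))
        # The lower F(v), the more probable the machine considers the pattern v.
        hidden_input = b_hid + v @ W                       # net input reaching each hidden node
        hidden_term = np.sum(np.log1p(np.exp(hidden_input)))
        return -np.dot(b_vis, v) - hidden_term

    # Tiny example: 4 visible nodes, 3 hidden nodes, small random weights
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((4, 3))
    b_vis, b_hid = np.zeros(4), np.zeros(3)
    print(rbm_free_energy(np.array([1.0, 0.0, 1.0, 1.0]), W, b_vis, b_hid))

Training an energy-based model amounts to nudging W, b_vis, and b_hid so that this number goes down for the patterns we want the machine to learn.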

There is this not-so-subtle, not-so-delicate little point, though.

Free energy minimization, as it is currently used, is done just to get useful values for the connection weights: from the input nodes to the hidden nodes, and from the hidden nodes (however many layers there are) up to the next layer and on to the output nodes.

This is all very well and proper and good. It solves the problem of weight adjustments, which is what is needed to let these engines do classification (or whatever task they’re being trained to do).
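(For the curious, here is roughly what that weight-adjustment step looks like: a one-step contrastive divergence (CD-1) update, continuing the little numpy RBM sketched above. Treat it as a sketch under simplifying assumptions – binary units, a made-up learning rate – rather than the one true training recipe.)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b_vis, b_hid, lr=0.05, rng=None):
        # One contrastive-divergence (CD-1) step: the weight change is the gap
        # between what the data says (positive phase) and what the model
        # reconstructs on its own (negative phase) -- a practical stand-in for
        # descending the free-energy / negative-log-likelihood gradient.
        if rng is None:
            rng = np.random.default_rng()
        h0_prob = sigmoid(b_hid + v0 @ W)                  # upward pass
        h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample hidden states
        v1_prob = sigmoid(b_vis + h0 @ W.T)                # reconstruct the visibles
        h1_prob = sigmoid(b_hid + v1_prob @ W)             # re-infer the hiddens

        W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
        b_vis += lr * (v0 - v1_prob)
        b_hid += lr * (h0_prob - h1_prob)
        return W, b_vis, b_hid

Run that over your training patterns enough times and the free energy of those patterns drifts downward – which is exactly the “get good connection weights” job described above, and nothing more.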

However, this is living in a very small box.

Using free energy minimization just to get good connection weights is like saying that the most that AI / deep learning can ever do is be a fancier and more efficient sausage machine.

In our gut, we know that there’s more.

So, let’s pull back and ponder for a moment, shall we?

 
 

Current Deep Learning Systems are a (Relatively) Small Box

 

We need to bust out of the small-AI box. (Maze picture from the Wikipedia entry on Maze.)

Current deep learning thinking is – for all of its practical use – a relatively small box. Even a recent article by some of the fine folks at Google’s DeepMind is simply proposing a better path through the maze. (See 2017 article by Fernando et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks.)

This is good work; not pooh-poohing it at all. It’s just … this is still inside the maze inside the box. What we want is not so much a better path through the maze, but a way to blow a hole in the side of the box.

Simply put, we need a bigger box.

Let me play Scheherazade with you, and continue with this story – of what might be a bigger box – tomorrow, ok?

Until then –

 

 

Live free or die, my friend –

AJ Maren

Live free or die: Death is not the worst of evils.
Attr. to Gen. John Stark, American Revolutionary War

 
 

Some Useful Background Reading on Statistical Mechanics

 

  • Hermann, C. Statistical Physics – Including Applications to Condensed Matter (New York: Springer Science+Business Media), 2005. pdf. Very well written; however, for someone who is NOT a physicist or physical chemist, the approach may be too obscure.
  • Maren, A.J. Statistical Thermodynamics: Basic Theory and Equations, THM TR2013-001(ajm) (Dec. 2013).
  • Salzman, R. Notes on Statistical Thermodynamics – Partition Functions, in Course Materials for Chemistry 480B – Physical Chemistry, 2004. Online book chapter. This is one of the best online resources for statistical mechanics; I’ve found it to be very useful and lucid.
  • Tong, D. Chapter 1: Fundamentals of Statistical Mechanics, in Lectures on Statistical Physics (University of Cambridge Part II Mathematical Tripos), Preprint (2011). pdf.

 
 

