CORTECON II – A Brain-Computer Interface Information Representation Architecture

One of the big challenges in artificial intelligence (AI) and machine learning (ML) is temporal persistence: how do you present a system with new information, and have it retain that information even as further inputs arrive?

Machine learning methods, such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, along with other deep learning (DL) tools, offer only limited help here, because their memory is bound to explicit time-step processing.

The next big step for neural networks (NNs) and AI is to:

  1. Break out of the time-step trap, so that memories of recent events (stimuli) can be retained even across “quiet time” between data presentations,
  2. Maintain temporal stability and persistence of recent events, even under small perturbations and the presentation of new partial stimuli, and
  3. Reconstruct larger patterns from partial patterns presented over time (the sparse data problem); a minimal illustration follows this list.
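
How CORTECON II achieves point 3 is not described here, but the task itself is classic associative memory. As a point of reference only, the sketch below uses a standard Hopfield-style network (a well-known technique, not the CORTECON II method) to recover a stored pattern from a partial cue; the pattern, weights, and cue are all hypothetical:

    import numpy as np

    # Store one +/-1 pattern via the Hebbian outer-product rule.
    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0.0)  # no self-connections

    # Partial stimulus: the last four units are unknown (coded as 0).
    cue = pattern.astype(float)
    cue[4:] = 0.0

    # Repeated synchronous updates settle the state into the stored attractor.
    state = cue
    for _ in range(5):
        state = np.sign(W @ state)

    print("recovered full pattern:", np.array_equal(state, pattern))  # True

With a single stored pattern the dynamics converge in one step; sustaining this recall-from-fragments behavior over time, through quiet intervals and perturbations, is what points 1 and 2 demand.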

The CORTECON II architecture not only provides a means for representing information on either side of the Brain-Computer Interface; it also lays the groundwork for a new approach to predictive intelligence, one based on a strong foundation in physics. This would give quantifiable measures to guide predictive actions.
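
The specific measure is not spelled out above, but in statistical mechanics the standard quantifiable measure is the free energy, F = E - T*S (energy minus temperature times entropy): the configuration a system is predicted to adopt is the one that minimizes F. A minimal sketch of that idea for a hypothetical two-state unit (illustrative only, not the CORTECON II equations):

    import math

    def free_energy(p, energy_gap=1.0, temperature=1.0):
        """Free energy F = E - T*S for a two-state unit that is
        in state 1 with probability p, at energy cost energy_gap."""
        entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
        energy = p * energy_gap
        return energy - temperature * entropy

    # The predicted (equilibrium) state minimizes F; found here by a scan.
    best_p = min((i / 1000 for i in range(1, 1000)), key=free_energy)
    print(f"equilibrium p(state 1) ~ {best_p:.3f}")  # ~0.269, the Boltzmann value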

CORTECON II uses a little-known form of statistical mechanics, the Cluster Variation Method (CVM), to provide a more complete description of the entropy inherent in local pattern distributions. A neural network front end is combined with ontologies that organize concept hierarchies, whether relating to abstract patterns or to specific concepts.
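
The published CORTECON II equations are not reproduced here, but the CVM's flavor shows up already at its simplest level, the pair approximation: the entropy is assembled from pair-cluster terms, with single-site terms subtracted to correct for overcounting, so that local pattern correlations lower the entropy in a way a site-only treatment misses. A hedged sketch for a binary system on a lattice of coordination number q (illustrative probability values, not CORTECON II's configuration variables):

    import math

    def shannon(probs):
        """Shannon entropy -sum(p * ln p) of a probability table."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    def cvm_pair_entropy(site, pairs, q=4):
        """Pair-level cluster variation entropy per site:
        S = (q/2) * S_pair - (q - 1) * S_site."""
        return (q / 2) * shannon(pairs) - (q - 1) * shannon(site)

    site = [0.5, 0.5]  # p(unit off), p(unit on)

    # Independent neighbors: the pair table factors, and the CVM
    # entropy collapses to the plain site entropy, ln 2 ~ 0.693.
    print(cvm_pair_entropy(site, [0.25, 0.25, 0.25, 0.25]))

    # Correlated neighbors (like states cluster together): the local
    # pattern structure lowers the entropy -- the extra information
    # that a site-only entropy would miss.
    print(cvm_pair_entropy(site, [0.40, 0.10, 0.10, 0.40]))  # ~0.308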

For more, see Current Work.