
[Repost] Hierarchical Temporal Memory (HTM)

I have recently been looking into machine learning, to see whether hardware failure probabilities can be predicted from existing history. The following is a very interesting article, reposted from http://numenta.org/htm.html.

NuPIC is an open source project that implements HTM.

-------------------

There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet despite decades of research, we have few viable algorithms for achieving human-like performance on a computer.

In humans, these capabilities are largely performed by the neocortex. Hierarchical Temporal Memory (HTM) is a technology modeled on how the neocortex performs these functions. It offers the groundwork for building machines that approach or exceed human-level performance on many cognitive tasks. HTM is implemented within the NuPIC open source project.

Online Learning

Most machine learning techniques are relatively static. A model is constructed from a training data set, verified on a testing data set, and then applied to real-world data. However, the patterns and structure in the world change over time, so previously accurate models must be regularly retrained with new data, repeating the time and expense of the original process.

HTM, on the other hand, is an online learning system. It does not require conventional training and testing data sets. Instead, HTM learns continuously from each new data point. It constantly makes predictions, which are verified as more data arrives, and as the underlying patterns in the data change, HTM adjusts accordingly. An online learning system such as HTM forces you to think about many things differently than you would with algorithms that rely on static training data sets.
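To make the loop concrete, here is a minimal sketch of an online learner in Python. The OnlinePredictor class and its transition-counting scheme are invented for illustration; they are not HTM's algorithm or the NuPIC API. The point is only the shape of the loop: predict first, then learn from each data point as it arrives, with no separate training phase.

```python
from collections import Counter, defaultdict

class OnlinePredictor:
    """Toy online learner: predicts the next symbol from transition
    counts seen so far. Purely illustrative -- not HTM's algorithm."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev value -> Counter of next values
        self.prev = None

    def step(self, value):
        # 1. Predict the most likely next value before it arrives.
        counts = self.transitions[value]
        prediction = counts.most_common(1)[0][0] if counts else None
        # 2. Learn from this data point immediately; no retraining phase.
        if self.prev is not None:
            self.transitions[self.prev][value] += 1
        self.prev = value
        return prediction

predictor = OnlinePredictor()
for value in ["A", "B", "C", "A", "B", "C", "A", "B"]:
    print(f"saw {value}, next prediction: {predictor.step(value)}")
```

On the repeating stream above, the predictor starts out guessing nothing and, after one pass through the cycle, correctly predicts each next symbol; if the pattern in the stream changed, the counts would gradually shift to match it.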

Sparse Distributed Representations

Computers store information in “dense” representations, such as a 32-bit word in which all combinations of 1s and 0s are possible.

By contrast, brains use sparse distributed representations (SDRs). The human brain has roughly 100 billion neurons, but at any given time only a small percentage are active. The activity of neurons is like bits in a computer, so the representation is sparse. HTM also uses SDRs: a typical implementation might have 2048 columns and 64K artificial neurons, of which as few as 40 might be active at once. There are many mathematical advantages to using SDRs; HTM and the brain could not work otherwise.

Example of a sparse distributed representation in an array of cells: the diagram shows two thousand circles, of which only a small number of red (active) circles are lit.
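As a rough sketch (not NuPIC code), such a representation can be modeled as the set of indices of the active bits. The numbers below match the example above; the set-of-indices encoding is simply an assumption chosen for brevity.

```python
import random

SIZE = 2048    # cells in the array, matching the example above
ACTIVE = 40    # cells active at any one time

# Store only the indices of the 1 bits; a dense encoding would need
# all 2048 bits, while the sparse one needs just 40 small integers.
sdr = set(random.sample(range(SIZE), ACTIVE))

print(f"sparsity: {ACTIVE / SIZE:.1%}")   # about 2.0% of bits are active
```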

In SDRs, unlike in a dense representation, each bit has meaning. If two vectors have 1s in the same position, they are semantically similar in that attribute. SDRs are how brains solve the problem of knowledge representation that has plagued AI for decades.
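A small sketch can make the overlap idea concrete. The overlap function and the way the "similar" vector is constructed are hypothetical illustrations, reusing the set-of-indices encoding from the sketch above.

```python
import random

SIZE, ACTIVE, SHARED = 2048, 40, 30

def overlap(a, b):
    """Count positions where both SDRs have a 1 bit."""
    return len(a & b)

base = set(random.sample(range(SIZE), ACTIVE))
rest = [i for i in range(SIZE) if i not in base]

# A "semantically similar" SDR shares most of its active bits with base.
similar = set(random.sample(sorted(base), SHARED)) \
        | set(random.sample(rest, ACTIVE - SHARED))
# An unrelated SDR of the same size shares bits only by chance.
unrelated = set(random.sample(range(SIZE), ACTIVE))

print("overlap(base, similar)   =", overlap(base, similar))    # exactly 30
print("overlap(base, unrelated) =", overlap(base, unrelated))  # usually 0 or 1
```

The constructed similar SDR shares 30 of its 40 bits with base, while a random SDR of the same size overlaps in well under one bit on average (40 × 40 / 2048 ≈ 0.8), which is why shared 1 bits can be read as shared meaning.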

For more details about SDRs, watch this excerpt from a talk given by Jeff Hawkins.
