Friday, June 17, 2011

Reichert, D. P., Seriès, P. and Storkey, A. J. A Hierarchical Generative Model of Recurrent Object-Based Attention in the Visual Cortex

The deep Boltzmann machine (DBM) was proposed by Salakhutdinov & Hinton (2009) as a deep undirected probabilistic model. It differs from directed models, such as the deep belief network (DBN), in that a neuron in an intermediate layer is excited by signals from both the upper and the lower layers (recurrent processing). This is, in some sense, closer to how the human brain works, as it is unlikely that each neuron is wired to be activated by signals from lower layers only when recognizing and by signals from upper layers only when generating (Hinton et al., 2006).
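
To make the recurrent structure concrete, here is a minimal NumPy sketch of the mean-field update for an intermediate DBM layer; all sizes, weights, and names are hypothetical toy choices, not the paper's actual setup. The point is only that the update combines bottom-up and top-down input, whereas a feed-forward DBN recognition pass would drop the top-down term.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes for a three-layer DBM: v -- h1 -- h2 (all values hypothetical).
n_v, n_h1, n_h2 = 784, 500, 500
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, (n_v, n_h1))   # weights between v and h1
W2 = rng.normal(0.0, 0.01, (n_h1, n_h2))  # weights between h1 and h2
b1, b2 = np.zeros(n_h1), np.zeros(n_h2)   # hidden biases

def mean_field_h1(v, h2):
    # The intermediate layer h1 is driven by BOTH the layer below (v)
    # and the layer above (h2) -- the recurrent processing described
    # above. A feed-forward DBN pass would omit the h2 term.
    return sigmoid(v @ W1 + h2 @ W2.T + b1)
```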

In this work, the authors considered the DBM as a cortical model (Reichert et al., 2010) and tried to find empirical connections between the DBM and recurrent object-based attention.

The authors describe how some attentional theories suggest that higher cortical areas form representations that are specific to one object at a time. The paper experimentally explores how certain properties of the DBM coincide with those theories. It focuses on two main points:

(1) Recurrent processing helps the DBM (or the human brain, if one accepts the DBM as a reasonable cortical model) concentrate on meaningful objects when images contain noise, by letting the higher layers tend to represent a single object at a time; this is in contrast to the lower layers, which tend to encode the whole image (with multiple objects) into low-level features.
(2) A suppressive mechanism in the higher layers keeps the DBM from 'hallucinating' wrong objects by keeping the hidden activations sparse.

To confirm these points, the paper uses various performance measures and inspection methods for the DBM. One such method is to inspect the states of a single layer of hidden neurons by clamping that layer to a specific state and sampling from the visible layer. This method reveals that recurrent processing (in contrast to a feed-forward sweep from the visible layer to the top layer) drives the higher layers to focus on a single, more specific object at a time.
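
A rough sketch of this inspection procedure, reusing the toy weights above: clamp the top layer and Gibbs-sample the layers below it. Function names and the sampling schedule are our simplifications, not the paper's exact procedure.

```python
def sample_bernoulli(p, rng):
    # Draw binary states from elementwise Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(float)

def samples_with_clamped_top(h2_clamped, n_steps, rng):
    # Fix the top layer to a state of interest, then Gibbs-sample the
    # remaining layers; the resulting visible samples show what that
    # top-layer state "means" to the model.
    h1 = sample_bernoulli(np.full(n_h1, 0.5), rng)
    for _ in range(n_steps):
        v = sample_bernoulli(sigmoid(h1 @ W1.T), rng)             # v | h1
        h1 = sample_bernoulli(mean_field_h1(v, h2_clamped), rng)  # h1 | v, h2
    return v
```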

Additionally, the authors performed a quantitative analysis by classifying cluttered data sets from the hidden states. For simple toy data sets, recurrent processing indeed turned out to outperform simple feed-forward processing. However, for a more realistic data set, MNIST handwritten digits with clutter, this simple approach was not sufficient.
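
The comparison presumably boils down to something like the following: extract top-layer states under each processing scheme and train a simple classifier on them. This continues the toy setup above; scikit-learn's LogisticRegression is used purely for illustration, and the paper's actual classifier setup may differ.

```python
from sklearn.linear_model import LogisticRegression

def top_layer_features(images, recurrent, n_iters=10):
    # Feed-forward sweep: a single bottom-up pass per layer.
    # Recurrent: iterate the mean-field updates so that top-down
    # input can reshape the lower layers' interpretation.
    feats = []
    for v in images:
        h1 = mean_field_h1(v, np.zeros(n_h2))
        h2 = sigmoid(h1 @ W2 + b2)
        for _ in range(n_iters if recurrent else 0):
            h1 = mean_field_h1(v, h2)
            h2 = sigmoid(h1 @ W2 + b2)
        feats.append(h2)
    return np.array(feats)

# e.g., given some (X_train, y_train) of cluttered images and labels:
# clf = LogisticRegression(max_iter=1000)
# clf.fit(top_layer_features(X_train, recurrent=True), y_train)
```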

The authors' explanation is that this difficulty arises because clutter in the image can make the higher layers 'hallucinate', so that one object is mistaken for (transformed into) another. For instance, a cluttered digit 9 can turn into a digit 8 during the recurrent processing in the higher layers.

As a naive (but, according to the paper, effective) remedy, they suggested initializing the hidden biases to negative values to sparsify the hidden states, which plausibly suppresses noise during the recurrent processing in the higher layers. The authors termed this an additional suppressive mechanism.
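
In code, the remedy is almost trivial; a minimal sketch, continuing the toy setup above (the value -4.0 is our illustrative choice, not necessarily the paper's setting):

```python
# Start hidden units biased towards being off: sigmoid(-4) is roughly
# 0.018, so a unit now needs substantial net input before it activates,
# which sparsifies the hidden states and damps clutter-driven activity.
b1 = np.full(n_h1, -4.0)
b2 = np.full(n_h2, -4.0)
```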

These results show that the DBM embodies a number of properties that can plausibly be related to attentional recurrent processing in the cortex. The work is meaningful in that it points to a middle ground between neuroscience and machine learning where the two fields can learn from each other.

Unfortunately, the authors trained the DBM using pre-training (Salakhutdinov & Hinton, 2009) only. As pre-training only greedily finds a solution close to a local maximum-likelihood solution, it is debatable whether the experimental results in this paper indeed reflect the true nature of the DBM.
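
For context, greedy pre-training fits the network one layer at a time as a stack of RBMs, roughly as sketched below (continuing the toy setup above; CD-1 is used as the per-layer learner, biases and the DBM-specific input doubling of Salakhutdinov & Hinton (2009) are omitted):

```python
def cd1_update(v, W, lr, rng):
    # One contrastive-divergence (CD-1) step for a single RBM layer.
    h_pos = sigmoid(v @ W)
    h_samp = sample_bernoulli(h_pos, rng)
    v_neg = sigmoid(h_samp @ W.T)   # one-step reconstruction
    h_neg = sigmoid(v_neg @ W)
    W += lr * (np.outer(v, h_pos) - np.outer(v_neg, h_neg))

# Greedy stacking: fit W1 on the raw images first, then fit W2 on the
# resulting (frozen) h1 activations. Each layer thus settles for a
# locally good solution, which is the basis of the criticism above.
```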

References

Hinton, G. E., Osindero, S. and Teh, Y. W. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, Vol. 18, No. 7 (July 2006), pp. 1527-1554.
Salakhutdinov, R. and Hinton, G. Deep Boltzmann Machines. Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 5 (2009).
Reichert, D. P., Seriès, P. and Storkey, A. J. Hallucinations in Charles Bonnet Syndrome induced by homeostasis: a Deep Boltzmann Machine model. Advances in Neural Information Processing Systems 23 (2010).
