Griffiths prefers to consider human learning as a black box that maps inputs to outputs; a strategy for understanding it is therefore to learn the mapping that humans use. He presented Bayes' theorem as a solution to the problem of mapping inputs to outputs.

However, he showed that Bayes' rule cannot be implemented directly, because it is often very difficult to determine and express the priors humans use in decision making. He illustrated this with an interesting example involving the audience. The question was: a movie has made $90 million so far; how much will it make in total? The audience's answers varied between $300 and $500 million. Similarly, another question was: a movie has made $6 million; how much will it make? The answers were around $10 million. To give a contrasting view, he posed a counterexample: if you meet a 90-year-old man, how long do you think he will live? The answers were between 95 and 100 years. He followed up with another question: you meet a 6-year-old boy; how long will he live? The answers were around 70 years.

From the answers to these questions, he emphasized that learning human inductive biases is hard because the outputs differ even when the inputs are numerically the same. The moral of the example is that priors have a strong effect on predictions, so inductive biases can be inferred from human behavior if we can determine the relevant priors. He and his colleagues performed a set of cognitive experiments showing that different priors (power-law, Gaussian, and Erlang) were associated with different examples of human prediction.

For example, a power-law prior was associated with the case of a movie making a certain amount of money, while predicting a person's lifespan was associated with a Gaussian prior. It is therefore difficult to come up with a single strategy that can be used for many tasks, and Griffiths and colleagues have proposed several models for different tasks. Notable examples include causal learning (Griffiths & Tenenbaum, 2009), category learning (Griffiths et al., 2008), speech perception (Feldman & Griffiths, 2007), and subjective randomness (Griffiths & Tenenbaum, 2003).
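The contrast between the two audience examples can be sketched numerically. In the prediction setup from this line of work, an observed quantity t is assumed to fall uniformly in [0, t_total], and the predicted total is the median of the posterior over t_total. The prior parameters below (a power-law exponent of 1.5, a Gaussian with mean 75 and standard deviation 15) are illustrative assumptions, not values from the talk:

```python
import numpy as np

def posterior_median(t_obs, prior, grid):
    # Likelihood: t_obs drawn uniformly from [0, t_total], i.e.
    # p(t_obs | t_total) = 1/t_total for t_total >= t_obs, else 0.
    posterior = np.where(grid >= t_obs, prior / grid, 0.0)
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return grid[np.searchsorted(cdf, 0.5)]  # posterior median

grid = np.linspace(1, 1000, 100_000)

# Illustrative power-law prior (e.g. movie grosses): p(t) ∝ t^-1.5
power_law = grid ** -1.5

# Illustrative Gaussian prior (e.g. lifespans): mean 75, sd 15
gaussian = np.exp(-0.5 * ((grid - 75) / 15) ** 2)

print(posterior_median(90, power_law, grid))  # well above 90 (multiplicative)
print(posterior_median(6, power_law, grid))   # close to the audience's ~10
print(posterior_median(90, gaussian, grid))   # just above 90
print(posterior_median(6, gaussian, grid))    # near the prior mean
```

The qualitative pattern matches the audience's behaviour: the power-law prior scales its prediction multiplicatively with the observation, while the Gaussian prior pulls both the 90 and the 6 toward the region where the prior puts its mass.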

Griffiths also presented the concept of iterated learning (Kirby, 2001) realised with Bayesian learners, where the distribution over hypotheses converges to the prior. He and his collaborators have also built "rational process models" (Sanborn et al., 2006) by approximating Bayesian inference and connecting the approximations to psychological processes. They use Monte Carlo mechanisms based on importance sampling (Shi, Griffiths et al., 2010) and "win-stay, lose-shift" to approximate Bayesian inference.
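The convergence-to-the-prior result can be seen in a toy simulation. Below, each learner in a chain sees a few coin flips generated by the previous learner's hypothesis and then samples a hypothesis from its posterior; the long-run frequency of hypotheses matches the prior. The two hypotheses and the prior weights here are made-up illustrative values:

```python
import random

hypotheses = [0.2, 0.8]   # P(heads) under each hypothesis (illustrative)
prior = [0.3, 0.7]        # illustrative prior over the two hypotheses

def sample_posterior(data):
    # Sample a hypothesis index from the posterior given observed flips.
    likes = []
    for h, p in zip(hypotheses, prior):
        like = p
        for flip in data:
            like *= h if flip else (1 - h)
        likes.append(like)
    r = random.random() * sum(likes)
    return 0 if r < likes[0] else 1

random.seed(0)
counts = [0, 0]
h = 0  # first learner's hypothesis
for _ in range(20_000):
    # Each learner generates 5 flips, and the next learner infers from them.
    data = [random.random() < hypotheses[h] for _ in range(5)]
    h = sample_posterior(data)
    counts[h] += 1

freq = [c / sum(counts) for c in counts]
print(freq)  # ≈ the prior, [0.3, 0.7]
```

The chain over hypotheses is a Markov chain whose stationary distribution is the prior, which is why the empirical frequencies drift away from the first learner's hypothesis and toward the prior weights.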

Overall, the talk was interesting, comprising several real-world experiments and their mathematical formulations, and showing that Bayesian models of cognition provide a method to identify human inductive biases. However, the relationship of the priors and representations in those models to mental and neural processes was not transparent.

I personally liked the mathematical formulations of the problems and the development of models that focus on mathematical and statistical methods rather than on mimicking the human learning process. These cognitive issues, along with several related neural questions, were also discussed in the panel discussion that immediately followed Griffiths's talk.

The panel discussion was attended by all the keynote speakers except John Shawe-Taylor and Riitta Hari, and was coordinated by Timo Honkela.
