Workshop on “Biologically Plausible Learning”

Satellite Workshop at the
6th International Conference on Machine Learning, Optimization & Data Science
July 19, 2020 – Certosa di Pontignano, Siena, Italy

Virtual Meeting Room: LOD 2020 Lecture Hall 1

Half-Day Workshop @ LOD 2020



Biologically Plausible Learning

Biological realism, distributed representations, and spatiotemporally local computational models of learning are not only of fundamental importance for understanding cortical cognition; they are also likely to drive fundamental innovations in machine learning.

This workshop covers topics in biologically plausible learning, with the purpose of discussing the limitations of some classic machine learning algorithms while looking for new computational models. What is the role of time, and what are the consequences of adopting static neurons, a question that seems intertwined with the longstanding debate on the biological plausibility of Backpropagation? Can we achieve true spatiotemporal locality? What about biologically plausible mechanisms when processing a stream (e.g., speech or video)?

Most artificial neural networks exhibit recurrent behavior either by relaxing to a fixed point once they are fed with a fixed input (e.g., Hopfield nets), or by updating their state upon receipt of a new element of the input sequence. In nature, on the other hand, time is shared by the model and the environment, so a model with a higher degree of biological plausibility should exhibit an internal relaxation dynamics that overlaps the inherent temporal structure of the inputs. Whenever sequential processing is required, the requirement of spatiotemporal locality seems far from being achieved by today's learning algorithms (e.g., BPTT). What about computational laws with such a built-in property?

Local processing on receptive fields, weight-sharing constraints, and pooling are all somewhat inspired by biology, as is the deep structure of neural nets, which strongly favors the emergence of rich representational pattern descriptions. Are there other insightful architectural regularities we can borrow from biology to improve the spectacular results of deep learning? Moreover, the most interesting human learning processes are strongly driven by appropriate focus-of-attention mechanisms. How can we replicate similar computational schemes with the purpose of dramatically cutting the complexity of learning?
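To make the first kind of recurrent behavior mentioned above concrete, the following is a minimal sketch (an illustration, not material from the workshop) of a tiny Hopfield network that relaxes to a fixed point when fed with a fixed, corrupted input. The function names and the six-unit pattern are invented for the example.

```python
# Minimal Hopfield-network sketch: Hebbian storage of a +/-1 pattern,
# then synchronous relaxation to a fixed point from a corrupted input.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights from an array of +/-1 patterns (zero diagonal)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def relax(W, state, max_steps=100):
    """Synchronously update the state until it stops changing (fixed point)."""
    for _ in range(max_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break ties deterministically
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Store one pattern and recover it from a copy with one bit flipped.
pattern = np.array([[1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hopfield(pattern)
noisy = pattern[0].copy()
noisy[0] = -noisy[0]  # corrupt the first unit
print(relax(W, noisy))  # relaxes back to the stored pattern
```

Note how the network's internal dynamics unfold entirely after the input is presented: this is exactly the "static input, internal relaxation" regime that the workshop contrasts with models whose dynamics must overlap the temporal structure of a streaming input.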

Finally, it is worth mentioning that while biology can definitely offer insightful inspiration to artificial neural networks, the emergence of cognition might be the outcome of computational laws of learning that hold regardless of current biological knowledge. This could also shed light on the emergence of human cognition.

Main topics

  • Biological realism
  • Representational issues
  • Role of time and space
  • Spatiotemporal locality of learning
  • Biologically plausible focus of attention
  • Backpropagation plausibility issues
  • Error-driven and Hebbian learning co-existence
  • Information-based principles and global descriptions

 

Biologically Plausible Learning Workshop Schedule

 

Alessandro Sperduti Chair

 

14:55 – 15:00 Alessandro Sperduti, “Introduction”

 

15:00 – 15:30 Tomaso Poggio, “Towards new foundations for machine learning”

 

15:30 – 16:00 Yoshua Bengio, “Equilibrium Propagation”

 

16:00 – 16:30 Naftali Tishby, “Local Information Bottleneck optimization as a Biologically plausible feedforward learning mechanism”

 

16:30 – 16:45 Coffee Break

 

16:45 – 17:15 Pierre Baldi, “The Theory of Local Learning”

 

17:15 – 17:45 Marco Gori, “Backprop Diffusion is Locally Plausible”

 

17:45 – 18:15 Cristina Savin, “TBA”

 

18:30 – 20:00 Panel

Ten questions for the speakers, who are invited to select the one they like most:

  1. To what extent can neuroscience provide insights to gain the abstraction needed to conceive effective learning algorithms? What is your preferred example of success in machine learning? How do you rank the possibility of breakthroughs based on this approach for the years to come?
  2. What are the most relevant “biological plausibility requirements” for learning machines? Which one is the most important?
  3. Machine learning relies on the indisputable classic notion of algorithm. On the other hand, the regularities that emerge in human perception might be stimulated by information-based laws in the continuous setting of computation. Is there a pre-algorithmic level for an in-depth understanding of learning perceptual tasks in vision, speech and language understanding?
  4. The overall field of artificial intelligence is mostly dominated by searching and optimization methods. Interestingly, the search for optimal solutions, which stimulates the growth of heuristics, characterizes both the symbolic and the sub-symbolic models behind intelligent agents. On the other hand, human learning processes can hardly be regarded as search, or as the optimization of risk functions built from big data collections. Humans learn incrementally as time goes by. Couldn’t it be the case that learning mechanisms that involve time are better regarded as equilibrium computational processes rather than as search/optimization?
  5. There is plenty of evidence that deep nets strongly favor the emergence of rich representational pattern descriptions in their hidden layers. Are there other insightful architectural regularities we can borrow from biology to improve the results of deep learning?
  6. The most interesting human learning processes are strongly driven by appropriate focus-of-attention mechanisms. How can we replicate similar computational schemes with the purpose of dramatically cutting the complexity of learning? Should we focus on “biological replication” or on understanding the underlying computational structures behind focus of attention?
  7. There are surprising results from developmental psychology on what a newborn sees. Charles Darwin came up with the following remark:

It was surprising how slowly he acquired the power of following with his eyes an object if swinging at all rapidly; for he could not do this well when seven and a half months old.

At the end of the seventies, this early remark was given a technically sound basis. Nowadays, we know that newborns require several months, depending on the specific visual test, to gain adult visual acuity. Does this come from our own biology, or is it a more general information-based principle for efficiently learning to see?

  8. Why do foveal animals perform eye movements? Could this be related to information-based principles? What about possible computer-based implementations?
  9. At the beginning of the nineties, it was pointed out that the visual cortex of humans and other primates is composed of two main information pathways, referred to as the ventral stream and the dorsal stream. The ventral “what” and the dorsal “where/how” visual pathways are traditionally distinguished: the ventral stream is devoted to the perceptual analysis of the visual input, such as object recognition, whereas the dorsal stream is concerned with motion ability in the interaction with the environment. Why are there two different mainstream systems in the visual cortex? Couldn’t this be related to studies of invariance in computer vision and machine learning?
  10. When thinking of ImageNet, one might wonder what human life would have been in a world of visual information with shuffled frames. Could children really acquire visual skills in such an artificial world, which is the one we are presenting to machines? Couldn’t it be the case that we are tackling a problem that is more difficult than the one nature has offered us? When considering the spectacular results of deep learning, there could still be remarkable room for improvement.

 

Chair

Alessandro Sperduti
University of Padova, Italy

Keynote Speakers

Pierre Baldi
University of California Irvine, USA
Yoshua Bengio
Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada
Marco Gori
University of Siena, Italy
Tomaso Poggio
MIT, USA
Cristina Savin
Center for Neural Science, New York University, USA
Naftali Tishby
The Hebrew University, Israel

Papers submission

All papers must be submitted using EasyChair: https://easychair.org/conferences/?conf=lod2020

Submission deadline:  June 15, 2020

Any questions regarding the submission process can be sent to conference organizers: lod@icas.cc