Workshop on “Biologically Plausible Learning”

Satellite Workshop at the
6th International Conference on Machine Learning, Optimization & Data Science
July 19, 2020 – Certosa di Pontignano, Siena, Italy

Conference Room: Bracci

Half-Day Workshop @ LOD 2020



Biologically Plausible Learning

Biological realism, distributed representations, and spatiotemporally local computational models of learning are not only of fundamental importance in cortical cognition; they are also likely to drive fundamental innovations in machine learning.

This workshop covers topics in biologically plausible learning, with the purpose of discussing the limitations of some classic machine learning algorithms while looking for new computational models. What is the role of time, and what are the consequences of adopting static neurons, which seem to be intertwined with the longstanding debate on the biological plausibility of Backpropagation? Can we achieve truly spatiotemporal locality? What about biologically plausible mechanisms in cases in which we process a stream (e.g. speech or video)? Most artificial neural networks exhibit a recurrent behavior either by relaxing to a fixed point once they are fed with a fixed input (e.g. Hopfield nets), or by updating their state upon receipt of a new element of the input sequence. In nature, on the other hand, time is shared by the model and the environment, so that a model with a higher degree of biological plausibility is expected to exhibit an internal relaxation dynamics that overlaps the inherent temporal structure of the inputs. Whenever sequential processing is required, the requirement of spatiotemporal locality appears far from being achieved by today's learning algorithms (e.g. BPTT). What about computational laws with such a built-in property?

Local processing on receptive fields, weight-sharing constraints, and pooling are all somewhat inspired by biology, as is the deep structure of neural nets, which strongly favors the emergence of rich representational pattern descriptions. Are there other insightful architectural regularities we can borrow from biology to improve the spectacular results of deep learning? Most interesting human learning processes are strongly driven by appropriate focus-of-attention mechanisms. How can we replicate similar computational schemes with the purpose of dramatically cutting the complexity of learning?
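The relaxation behavior mentioned above can be made concrete with a minimal sketch (illustrative only, not part of the workshop program): a Hopfield network stores patterns with a Hebbian rule and, once clamped on a fixed input, asynchronously updates its units until it settles into a fixed point. All names and parameters below are assumptions chosen for the example.

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for a list of +/-1 pattern vectors (assumed encoding)."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # Hebbian outer-product storage
    np.fill_diagonal(W, 0.0)         # no self-connections
    return W / n

def relax(W, state, max_sweeps=20):
    """Asynchronously update units until no unit flips (a fixed point)."""
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(state.size):
            new = 1 if W[i] @ state >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:              # fixed point reached
            break
    return state

# Usage: store one pattern, corrupt a few bits, and let the net relax back.
rng = np.random.default_rng(0)
p = rng.choice([-1, 1], size=64)
W = train([p])
noisy = p.copy()
noisy[:8] *= -1                      # flip 8 of 64 bits
recovered = relax(W, noisy)
```

Note that the computation here is local in space (each unit reads only its incoming weights) but not in time: the network relaxes on a fixed input, rather than sharing time with a changing environment as the paragraph above advocates.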

Finally, it is worth mentioning that while biology can definitely offer insightful inspiration to artificial neural networks, the emergence of cognition might be the outcome of computational laws of learning that hold regardless of current biological knowledge. This could also shed light on the emergence of human cognition.

Main topics

  • Biological realism
  • Representational issues
  • Role of time and space
  • Spatiotemporal locality of learning
  • Biologically plausible focus of attention
  • Backpropagation plausibility issues
  • Error-driven and Hebbian learning co-existence
  • Information-based principles and global descriptions

Chair

Marco Gori
University of Siena, Italy

Keynote Speakers

Pierre Baldi
University of California Irvine, USA
Yoshua Bengio
Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada
Tomaso Poggio
MIT, USA
Cristina Savin
Center for Neural Science, New York University, USA
Naftali Tishby
The Hebrew University, Israel

Papers submission

All papers must be submitted using EasyChair: https://easychair.org/conferences/?conf=lod2020

Submission deadline: April 30, 2020

Any questions regarding the submission process can be sent to the conference organizers: lod@icas.xyz

Speakers
(TBA)