Social activities: Visit of the Certosa di Pontignano, Tour in the Tuscan Countryside and Wine Tasting

  • Guided tour of the Certosa di Pontignano, including a visit to the famous Certosa Chapel (known as the Sistine Chapel of Siena).
  • Guided walk through the Tuscan countryside: from the Certosa di Pontignano to the Osservanza Basilica, 6.6 km (4.1 mi), about 1 hour 23 minutes one way.
  • Tasting of Tuscan wines (e.g., Chianti Classico, Chianti Classico Riserva, Chianti Classico Gran Selezione, Brunello di Montalcino, Vernaccia di San Gimignano) produced in the Siena area in general and around the Certosa di Pontignano in particular.

Igor Babuschkin’s Lectures

Lecture 1: An Introduction to Deep Reinforcement Learning

Reinforcement Learning (RL) is a subfield of machine learning concerned with training agents to make decisions so that a cumulative reward signal is maximized in an environment. It provides a highly general and elegant toolbox for building intelligent systems that learn by interacting with their environment rather than from supervision. In the past few years, RL (in combination with deep neural networks) has shown impressive results on a variety of challenging domains, from games to robotics. It is also seen by some as a possible path towards general human-level intelligent systems. I will explain some of the basic algorithms that the field is built on (Q-Learning, Policy Gradients), as well as a few extensions to these algorithms that are used in practice (PPO, IMPALA, and others).
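To give a concrete flavor of the tabular form of Q-Learning mentioned above, here is a minimal sketch. The chain environment, reward scheme, and hyperparameters below are hypothetical, chosen only for illustration; the update rule itself is the standard Q-Learning update.

```python
# Minimal tabular Q-Learning sketch on a toy chain environment.
import random

random.seed(0)

# A tiny deterministic chain: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection (random tie-breaking among maxima).
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            best = max(Q[state])
            action = random.choice(
                [a for a in range(N_ACTIONS) if Q[state][a] == best])
        next_state, reward, done = step(state, action)
        # Q-Learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# The greedy policy should now walk right toward the goal.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy[:GOAL])  # expected: [1, 1, 1, 1] (always move right)
```

The key line is the bootstrapped target `r + gamma * max_a' Q(s', a')`: the agent improves its value estimates from its own later estimates, which is what lets Q-Learning learn without supervised labels.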

Lecture 2: Milestones in Large-scale Reinforcement Learning: AlphaZero, OpenAI Five and AlphaStar

Over the past few years, we have seen a number of successes in Deep Reinforcement Learning: among other results, RL agents have been able to match or exceed the strength of the best human players at the games of Go, Dota 2, and StarCraft II. These results were achieved by AlphaZero, OpenAI Five, and AlphaStar, respectively. I will go into the details of how these three systems work, highlighting their similarities and differences. What lessons can we draw from these results, and what is still missing to apply Deep RL to challenging real-world problems?

Lecture 3 – Tutorial: JAX, A new library for building neural networks

JAX is a new framework for deep learning developed at Google AI. Written by the authors of the popular autograd library, it is built on the concept of function transformations: Higher-order functions like ‘grad’, ‘vmap’, ‘jit’, ‘pmap’ and others are powerful tools that allow researchers to express ML algorithms succinctly and correctly, while making full use of hardware resources like GPUs and TPUs. Most importantly, solving problems with JAX is fun! I will give a short introduction to JAX, covering the most important function transformations, and demonstrating how to apply JAX to several ML problems.
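As a small taste of these transformations, here is a sketch composing ‘grad’, ‘jit’, and ‘vmap’ (all real JAX transformations) on a toy quadratic function of my own invention:

```python
# Composing JAX function transformations on a toy loss.
import jax
import jax.numpy as jnp

def loss(w, x):
    # A toy quadratic "loss": 0.5 * (w * x) ** 2
    return 0.5 * (w * x) ** 2

# grad differentiates with respect to the first argument by default:
# d/dw [0.5 * (w*x)^2] = w * x^2
dloss_dw = jax.grad(loss)
print(dloss_dw(3.0, 2.0))  # 3 * 2^2 = 12.0

# vmap maps the gradient over a batch of x without an explicit loop;
# jit compiles the whole batched computation with XLA.
batched = jax.jit(jax.vmap(dloss_dw, in_axes=(None, 0)))
xs = jnp.array([1.0, 2.0, 3.0])
print(batched(3.0, xs))  # per-example gradients: [3., 12., 27.]
```

Because the transformations are ordinary higher-order functions, they compose freely: `jit(vmap(grad(f)))` reads exactly like the mathematical intent.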

Jose C. Principe’s Lectures

Beyond Backpropagation: Cognitive Architectures for Object Recognition in Video

Jose C. Principe, Ph.D.

Distinguished Professor and Eckis Chair of Electrical Engineering

University of Florida, USA


Backpropagation has been the hallmark of neural network technology, but it creates as many problems as it solves: it leads to a black-box approach, makes hyperparameters difficult to optimize, and lacks the interpretability and explainability that are so important in practical applications. We have devised a new way to train deep networks for classification without error backpropagation, with guarantees of optimality under some conditions. This talk presents an overview of this recent advance, illustrates its performance on benchmark problems, and discusses its advantages for transfer learning.


Lecture I – Requisites for a Cognitive Architecture

  • Processing in space
  • Processing in time with memory
  • Top-down and bottom-up processing
  • Extraction of information from data with generative models
  • Attention mechanisms and fovea vision


Lecture II – Putting it all together

  • Empirical Bayes with generative models
  • Clustering of time series with linear state models
  • Information Theoretic Autoencoders


Lecture III – Beyond Backpropagation: Modular Learning for Deep Networks

  • Reinterpretation of neural network layers
  • Training each layer without backpropagation
  • Examples and advantages in transfer learning


Jose C. Principe (M’83-SM’90-F’00) is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches statistical signal processing, machine learning, and artificial neural network (ANN) modeling. He is the Eckis Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL), www.cnel.ufl.edu. His primary area of interest is the processing of time-varying signals with adaptive neural models. The CNEL Lab has been studying signal and pattern recognition principles based on information-theoretic criteria (entropy and mutual information). The relevant application domains are neurology, brain-machine interfaces, and computational neuroscience.

Dr. Principe is an IEEE Fellow. He is a past Chair of the Technical Committee on Neural Networks of the IEEE Signal Processing Society, Past President of the International Neural Network Society, and past Editor-in-Chief of the IEEE Transactions on Biomedical Engineering. He received the IEEE Neural Network Pioneer Award in 2011. Dr. Principe has more than 800 publications and has directed 99 Ph.D. dissertations and 65 Master’s theses. In 2000 he wrote an interactive electronic book entitled “Neural and Adaptive Systems” (John Wiley and Sons), and more recently he co-authored several books: “Brain Machine Interface Engineering” (Morgan and Claypool), “Information Theoretic Learning” (Springer), and “Kernel Adaptive Filtering” (Wiley).

ACDL 2020 an On-Site & On-Line Advanced Course!


After numerous discussions, we have concluded that holding this year’s event in a mixed form, both in person (as in previous editions) and online, is more advantageous than postponing it to next year.

To accommodate a wide range of circumstances, we are offering the option of either physical presence or virtual participation. We would be delighted if all participants managed to attend in person, but we are aware that special circumstances are best handled by flexible options.

The advanced course will therefore be held in person, with virtual rooms (e.g., Zoom or MS Teams) for participants joining remotely. Lectures will also be available online, as live presentations and/or recordings.

See you (physically or virtually) in Tuscany in July!