Schedule
Please note that these times are in Central Africa Time (CAT).
10:00 - 10:15 | Opening remarks | Organizers
10:15 - 11:00 | Learning with Discrete Structures and Algorithms |

Machine learning at scale has led to impressive results in text-based image generation, natural language reasoning, and code synthesis, to name but a few. ML at scale is also successfully applied to a broad range of problems in engineering and the sciences. These recent developments make some of us question the utility of incorporating prior knowledge in the form of symbolic (discrete) structures and algorithms. Are computing and data at scale all we need?

We will argue that discrete (symbolic) structures and algorithms in machine learning models are advantageous, and even required, in numerous application domains such as biology, materials science, and physics. Biomedical entities and their structural properties, for example, can be represented as graphs and require inductive biases equivariant to certain group operations. My lab's research is concerned with the development of machine learning methods that combine discrete structures with continuous equivariant representations. We also address the problem of learning and leveraging structure from data where it is missing, combining discrete algorithms and probabilistic models with gradient-based learning. We will show that discrete structures and algorithms appear in numerous places, such as ML-based PDE solvers, and that modeling them explicitly is indeed beneficial. In particular, machine learning models that aim to exhibit some form of explanatory properties have to rely on symbolic representations. The talk will also cover some biomedical and physics-related applications.
11:00 - 11:30 | Online Spotlight Talks 1 | Poster authors

11:30 - 12:00 | In-person Spotlight Talks | Poster authors

12:00 - 13:30 | Lunch | Organizers
13:30 - 14:15 | Reflections on a few neurosymbolic approaches to ML4Code in the age of Transformers |
14:15 - 15:00 | Building AI with neuro-symbolic generative models |

Despite recent successes, deep learning systems are still limited by their lack of generalization. I’ll present an approach to addressing this limitation that combines probabilistic, model-based learning, symbolic learning, and deep learning. My work centers on probabilistic programming, a powerful abstraction layer that separates Bayesian modeling from inference. In the first part of the talk, I’ll describe “inference compilation”, an approach to amortized inference in universal probabilistic programs. In the second part, I’ll introduce a family of wake-sleep algorithms for learning model parameters. Finally, I’ll introduce a neuro-symbolic generative model called “drawing out of distribution”, or DooD, which allows for out-of-distribution generalization for drawings.
15:00 - 15:40 | Online Spotlight Talks 2 | Poster authors

15:40 - 16:15 | Hybrid Poster Session | Organizers

16:15 - 16:30 | Coffee Break 1 | Organizers
16:30 - 17:15 | Discovering abstractions that bridge perception, action, and communication |

Humans display a remarkable capacity for discovering useful abstractions to make sense of and interact with the world. In particular, many of these abstractions are portable across behavioral domains, manifesting in what people see, do, and talk about. For example, people can visually decompose objects into parts; these parts can be rearranged to create new objects; and the procedures for doing so can be encoded in language. What principles explain why some abstractions are favored by humans more than others, and what would it take for machines to emulate human-like learning of such “bridging” abstractions? In the first part of this talk, I’ll discuss a line of work investigating how people learn to communicate about shared procedural abstractions during collaborative physical assembly, which we formalize by combining a model of linguistic convention formation with a mechanism for inferring recurrent subroutines within the motor programs used to build various objects. In the second part, I’ll share new insights gained from extending this approach to understand why the kinds of abstractions that people learn and use vary between contexts. I will close by suggesting that embracing the study of such multimodal, naturalistic behaviors in humans at scale may shed light on the mechanisms needed to support fast, flexible learning and generalization in machines.
17:15 - 18:00 | AI can learn from data. But can it learn to reason? |

Many expect that AI will go from powering chatbots to providing mental health services, and from advertising to deciding who is granted bail. The expectation is that AI will solve society’s problems simply by being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI will naturally learn to reason from data: that it can form trains of thought that make sense, similar to how a mental health professional or judge might reason about a case or, more formally, how a mathematician might prove a theorem. This talk will investigate whether this behavior can be learned from data, and how we can design the next generation of AI techniques that can achieve such capabilities, focusing on constrained language generation, neuro-symbolic learning, and tractable deep generative models.
18:00 - 18:10 | Coffee Break 2 | Organizers

18:10 - 18:55 | Panel | Panelists

18:55 - 19:00 | Closing remarks | Organizers