Events

Distinguished Speaker Series

Apr 19

Symmetry and Structure in Deep Reinforcement Learning

Elise van der Pol

In this talk, I will discuss our work on symmetry and structure in reinforcement learning. In particular, I will discuss MDP Homomorphic Networks, a class of networks that ties transformations of observations to transformations of decisions. Such symmetries are ubiquitous in deep reinforcement learning, but often ignored in current approaches. Encoding this prior knowledge in policy and value networks allows us to reduce the size of the solution space, a necessity in problems with large numbers of possible observations. I will showcase the benefits of our approach on agents in virtual environments. Building on the foundations of MDP Homomorphic Networks, I will also discuss our ongoing work on symmetries among multiple agents. This forms a basis for my vision for reinforcement learning in complex virtual environments, as well as in problems with intractable search spaces.
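The idea of tying observation transformations to decision transformations can be illustrated with a toy weight-tying sketch (illustrative only, not the actual MDP Homomorphic Network construction from the talk): a linear policy whose weights are constrained so that reflecting the state (s → −s) swaps the two action logits ("left" ↔ "right").

```python
import numpy as np

# Minimal sketch of a weight-tying equivariance constraint (an assumed
# toy setup, not the paper's architecture): tie the rows of a linear
# policy so that reflecting the state negates the logits, which swaps
# the probabilities of the two mirror-image actions.
rng = np.random.default_rng(0)

w = rng.normal(size=4)        # free parameters; the tied copy is -w
W = np.stack([w, -w])         # rows: logits for action 0 ("left"), action 1 ("right")

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy(s):
    return softmax(W @ s)     # action probabilities for state s

s = rng.normal(size=4)

# Equivariance check: transforming the state permutes the action
# distribution, so the mirrored half of the state space is solved "for free".
assert np.allclose(policy(-s), policy(s)[::-1])
```

Because W(−s) = −(Ws), and a softmax of negated two-dimensional logits is the reversed softmax, the constraint holds by construction rather than having to be learned from data.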

How can you join?

Note that this is an in-person event, held in Lecture Theatre B of the Computer Science department.

(Registration closes 2 hours before the beginning of the seminar)

Speaker Bio

Elise van der Pol did her PhD in the Amsterdam Machine Learning Lab under Max Welling. Her research interests lie in structure, symmetry, and equivariance in reinforcement learning and machine learning. During her PhD, Elise spent time as a research scientist intern at DeepMind. She was an invited speaker at the workshop on self-supervision for reinforcement learning at ICLR 2021 and a co-organizer of the workshop on ecological/data-centric reinforcement learning at NeurIPS 2021. Before her PhD, she studied Artificial Intelligence at the University of Amsterdam, graduating with a thesis on coordination in deep reinforcement learning. She was also involved in UvA's Inclusive AI.

Mar 1

Dynamical modeling, decoding, and control of multiscale brain networks: from motor to mood

Maryam Shanechi

I will present our work on dynamical modeling, decoding, and control of multiscale brain network activity toward restoring lost motor and emotional function in brain disorders. I will first discuss a multiscale dynamical modeling framework that can decode mood variations from multisite human brain activity and identify the brain regions most predictive of mood. I will then develop a system identification approach that can predict multiregional brain network dynamics (output) in response to time-varying electrical stimulation (input), toward enabling closed-loop control of neural activity. Further, I will extend our modeling framework to dissociate and uncover behaviorally relevant neural dynamics that can otherwise be missed, such as those during naturalistic movements. I will then show how our framework can model the dynamics of multiple modalities and spatiotemporal scales of brain activity simultaneously, thus enhancing decoding and uncovering relationships across scales. Finally, I will present recurrent neural network (RNN) models that can dissect the source of nonlinearity in behaviorally relevant neural dynamics. These dynamical models, decoders, and controllers can enable a new generation of brain-machine interfaces for personalized therapy in neurological and neuropsychiatric disorders.
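As a loose illustration of the input-output system identification idea, here is a generic least-squares fit of a linear dynamical model driven by a time-varying input (a textbook sketch under assumed synthetic data, not Shanechi's actual multiscale framework): record activity y and stimulation u, then regress the next state on the current state and input.

```python
import numpy as np

# Generic input-output system identification sketch (illustrative):
# fit y[t+1] = A y[t] + B u[t] from simulated "activity" y and
# time-varying "stimulation" u, using ordinary least squares.
rng = np.random.default_rng(1)
n, m, T = 3, 2, 500

A_true = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]  # stable dynamics (spectral radius 0.9)
B_true = rng.normal(size=(n, m))

u = rng.normal(size=(T, m))                 # input: time-varying stimulation
y = np.zeros((T + 1, n))
for t in range(T):
    y[t + 1] = A_true @ y[t] + B_true @ u[t] + 0.01 * rng.normal(size=n)

# Regress y[t+1] on the stacked regressors [y[t], u[t]].
X = np.hstack([y[:-1], u])                  # shape (T, n + m)
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat, B_hat = theta[:n].T, theta[n:].T     # recover dynamics and input matrices

assert np.allclose(A_hat, A_true, atol=0.05)
assert np.allclose(B_hat, B_true, atol=0.05)
```

Once A and B are identified, the same model can be inverted to choose inputs that drive the state toward a target, which is the basic premise behind closed-loop control of neural activity.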

How can you join?

(Registration closes 2 hours before the beginning of the seminar)

Speaker Bio

Maryam M. Shanechi is an Associate Professor and the Viterbi Early Career Chair in Electrical and Computer Engineering (ECE), and a member of the Neuroscience Graduate Program and the Department of Biomedical Engineering, at the University of Southern California. Prior to joining USC, she was an Assistant Professor in Cornell University’s ECE department, starting in 2014. She received her B.A.Sc. degree in Engineering Science from the University of Toronto, her S.M. and Ph.D. degrees in Electrical Engineering and Computer Science from MIT, and her postdoctoral training in Neural Engineering at Harvard Medical School and UC Berkeley. Her research focuses on developing closed-loop neurotechnology and studying the brain through decoding and control of neural dynamics. She is the recipient of several awards, including the NIH Director’s New Innovator Award, NSF CAREER Award, ONR Young Investigator Award, ASEE’s Curtis W. McGraw Research Award, MIT Technology Review’s top 35 Innovators Under 35, Popular Science Brilliant 10, Science News SN10, and a DoD Multidisciplinary University Research Initiative (MURI) Award.

Nov 9

Scoring Systems: At the Extreme of Interpretable Machine Learning

Cynthia Rudin

With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, flawed models in healthcare, and black box loan decisions in finance. Interpretability of machine learning models is critical in high-stakes decisions.

In this talk, I will focus on one of the most fundamental and important problems in the field of interpretable machine learning: optimal scoring systems. Scoring systems are sparse linear models with integer coefficients. Such models were first used roughly 100 years ago. Traditionally, they are created without data, or are constructed by manually selecting features and rounding logistic regression coefficients, but these manual techniques sacrifice performance; humans are not naturally adept at high-dimensional optimization. I will present the first practical algorithm for building optimal scoring systems from data. This method has been used in several important applications in healthcare and criminal justice.
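The rounding heuristic the talk contrasts against can be sketched in a few lines (a naive baseline on assumed synthetic data, not Rudin's optimization method, which searches over integer coefficients directly): fit a logistic regression, then scale and round its coefficients into integer "points".

```python
import numpy as np

# Naive "round the logistic regression" scoring-system baseline
# (illustrative only; the scale factor of 2 and the data are arbitrary).
rng = np.random.default_rng(2)
n, d = 400, 3
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.5, 0.5])
y = (1 / (1 + np.exp(-(X @ w_true))) > rng.random(n)).astype(float)

# Fit logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

# Manual scoring-system construction: scale, then round to integer points.
points = np.round(2 * w).astype(int)   # small integer coefficients
score = X @ points                     # a subject's total score
preds = (score > 0).astype(float)
acc = (preds == y).mean()
```

The resulting integer model is easy to apply by hand, but the rounding step can lose accuracy relative to the real-valued model, which is why optimizing the integer scoring system directly matters.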

I will mainly discuss work from three papers:
Ustun and Rudin. Learning Optimized Risk Scores. Journal of Machine Learning Research, 2019. http://jmlr.org/papers/v20/18-615.html
Rudin, Wang, and Coker. The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review, 2020. https://hdsr.mitpress.mit.edu/pub/7z10o269
Struck et al. Association of an Electroencephalography-Based Risk Score With Seizure Probability in Hospitalized Patients. JAMA Neurology, 2017.

How can you join?

(Registration closes 2 hours before the beginning of the seminar)

Speaker Bio

Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.

Some of her (collaborative) projects are:
(1) She has developed practical code for optimal decision trees and sparse scoring systems, used to create models for high-stakes decisions. Some of these models are used to manage treatment and monitoring for patients in hospital intensive care units.
(2) She led the first major effort to maintain a power distribution network with machine learning (in NYC).
(3) She developed algorithms for crime series detection, which allow police detectives to find patterns of housebreaks. Her code was developed with detectives in Cambridge, MA, and later adopted by the NYPD.
(4) She solved several well-known, previously open theoretical problems about the convergence of AdaBoost and related boosting methods.
(5) She co-leads the Almost-Matching-Exactly lab, which develops matching methods for use in interpretable causal inference.

Oct 12

Chatbots can be good: What we learn from unhappy users

Rachael Tatman

It’s no secret that chatbots have a bad reputation: no one enjoys a cyclical, frustrating conversation when all you need is a quick answer to an urgent question. But chatbots can, in fact, be good, and studying bad conversations can help us get there before systems are ever deployed. This talk will draw on both academic and industry knowledge to discuss questions like: What do users’ reactions to unsuccessful systems tell us about what successful systems should look like? Are we evaluating the right things… or the easy-to-measure things? Do we really have to look at user data? If so, when and how often? When, if ever, should we retire old methods?

How can you join?

(Registration closes 2 hours before the beginning of the seminar)

Speaker Bio

Rachael Tatman earned her PhD in computational sociolinguistics from the University of Washington, where her research focused primarily on sub-lexical units in speech, text, and sign, as well as on ethics in NLP. After graduating, she moved into industry, working as a data scientist at Kaggle and a developer advocate at Google. Currently, she is a senior developer advocate at Rasa, where she supports their open-source chatbot development framework.
