Dharmesh Tailor

I am a PhD student with Eric Nalisnick in the Amsterdam Machine Learning Lab. I am interested in building interpretable and robust AI systems using Bayesian principles.

From 2019 to 2021, I was a research assistant in the Approximate Bayesian Inference Team at the RIKEN Center for Advanced Intelligence Project (Tokyo, Japan). From 2017 to 2018, I worked at the European Space Agency as a Young Graduate Trainee in the Advanced Concepts Team (Netherlands).

In 2017, I graduated with an MSc in Artificial Intelligence from the University of Edinburgh, specialising in machine learning and computational neuroscience. I wrote my thesis under Prof. Mark van Rossum on (neuronal) population coding. In 2016, I graduated with a Bachelor's in Computer Science and Mathematics (joint course) from Imperial College London.

Contact: d.v.tailor [ a t ] uva.nl
GitHub  /  Google Scholar



Papers


Learning to Defer to a Population: A Meta-Learning Approach


Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick
27th International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
arxiv

We formulate a learning to defer (L2D) system that can cope with never-before-seen experts at test time. We accomplish this by using meta-learning, considering both optimization- and model-based variants. Given a small context set that characterizes the currently available expert, our framework can quickly adapt its deferral policy. For the model-based approach, we employ an attention mechanism that looks for points in the context set similar to a given test point, leading to an even more precise assessment of the expert's abilities.
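
As a rough illustration of the model-based variant (a toy sketch under my own naming, not the paper's code), dot-product attention over the expert's context set can produce a local estimate of the expert's reliability, which a deferral policy can then compare against the classifier's confidence:

    import numpy as np

    def estimate_expert_accuracy(x_test, ctx_x, ctx_correct, temp=1.0):
        """Attention-weighted estimate of an expert's accuracy near x_test.

        ctx_x:       (n, d) inputs on which the expert has been observed
        ctx_correct: (n,)   1.0 if the expert was correct on that input, else 0.0
        """
        scores = ctx_x @ x_test / temp           # similarity of test point to each context point
        weights = np.exp(scores - scores.max())  # softmax over the context set
        weights /= weights.sum()
        return float(weights @ ctx_correct)      # local reliability estimate

    def should_defer(model_confidence, expert_accuracy_estimate):
        # Defer when the expert looks more reliable than the model is confident.
        return expert_accuracy_estimate > model_confidence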


The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data


Peter Nickl, Lu Xu*, Dharmesh Tailor*, Thomas Möllenhoff, Emtiyaz Khan
37th Conference on Neural Information Processing Systems (NeurIPS), 2023
ICML 2023 Workshop on Principles of Duality for Modern Machine Learning
arxiv / video / poster

We present the Memory-Perturbation Equation (MPE), which relates a model's sensitivity to perturbations of its training data. Derived using Bayesian principles, the MPE unifies existing sensitivity measures, generalizes them to a wide variety of models and algorithms, and reveals useful properties of these sensitivities. Our empirical results show that sensitivity estimates obtained during training can faithfully predict generalization on unseen test data.
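
For context, one classical measure in the family the MPE unifies (a pre-existing special case, not the MPE itself) is the influence-function approximation to leave-one-out retraining, which estimates the parameter change from deleting example $i$ without re-running the optimizer:

    \hat{\theta}_{\setminus i} - \hat{\theta} \approx \mathbf{H}(\hat{\theta})^{-1} \nabla_\theta \ell_i(\hat{\theta}),
    \qquad
    \mathbf{H}(\hat{\theta}) = \sum_{j=1}^{n} \nabla^2_\theta \ell_j(\hat{\theta}),

where $\hat{\theta}$ minimizes the total loss and $\ell_i$ is the loss on the deleted example. The MPE is presented as unifying and generalizing such measures across models and training algorithms.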


Exploiting Inferential Structure in Neural Processes


Dharmesh Tailor, Emtiyaz Khan, Eric Nalisnick
39th Conference on Uncertainty in Artificial Intelligence (UAI), 2023
5th Workshop on Tractable Probabilistic Modeling at UAI 2022
paper / poster

This work provides a framework that allows the latent variable of Neural Processes to be given a rich prior defined by a graphical model. These distributional assumptions directly translate into an appropriate aggregation strategy for the context set. We describe a message-passing procedure that still allows for end-to-end optimization with stochastic gradients. We demonstrate the generality of our framework by using mixture and Student-t assumptions that yield improvements in function modelling and test-time robustness.
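
To make the "distributional assumptions translate into an aggregation strategy" point concrete, here is a minimal sketch of the simplest case only (a Gaussian prior with per-point Gaussian factors; names and interface are illustrative, not the paper's code). The posterior is then closed-form, and aggregation becomes a precision-weighted sum rather than plain mean pooling; mixture and Student-t priors require the message-passing procedure instead:

    import numpy as np

    def gaussian_aggregate(r_mu, r_var, prior_mu=0.0, prior_var=1.0):
        """Closed-form Gaussian aggregation of encoded context points.

        Each encoded context point i is treated as a noisy Gaussian observation
        of the latent z, with mean r_mu[i] and variance r_var[i]. Precisions
        add, and means combine precision-weighted.
        """
        post_prec = 1.0 / prior_var + np.sum(1.0 / r_var, axis=0)
        post_var = 1.0 / post_prec
        post_mu = post_var * (prior_mu / prior_var + np.sum(r_mu / r_var, axis=0))
        return post_mu, post_var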

Talks


Adaptive and Robust Learning with Bayes


NeurIPS Bayesian Deep Learning Workshop, 2021
Jointly with Emtiyaz Khan (main speaker) & Siddharth Swaroop
video / slides

We show that a wide variety of machine-learning algorithms are instances of a single learning rule called the Bayesian learning rule. The rule reveals a dual perspective that yields new adaptive mechanisms for machine-learning-based AI systems.
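
For reference, the rule (as given in Khan & Rue, "The Bayesian Learning Rule") is a natural-gradient update on the natural parameter $\lambda$ of an exponential-family candidate distribution $q_\lambda$:

    \lambda \leftarrow \lambda - \rho \, \nabla_{\mu} \Big( \mathbb{E}_{q_\lambda}\big[\bar{\ell}(\theta)\big] - \mathcal{H}(q_\lambda) \Big),

where $\mu$ is the expectation parameter of $q_\lambda$, $\bar{\ell}$ the loss, $\mathcal{H}$ the entropy, and $\rho$ a learning rate; taking the gradient with respect to $\mu$ while updating $\lambda$ is what makes this a natural-gradient step. Specific choices of $q_\lambda$ and approximations of the expectation recover familiar algorithms such as SGD, RMSprop/Adam, and Kalman filtering.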


Identifying Memorable Experiences of Learning Machines


RIKEN AIP Open Seminar (virtual), 2021
video / slides

Humans and other animals have a natural ability to identify useful past experiences. How can machines do the same? We present “memorable experiences” to identify a machine’s relevant past experiences and understand its current knowledge. The approach is based on a new notion of duality which is an extension of similar ideas used in kernel methods.
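
As intuition for the kernel-method analogy (an illustrative special case, not the talk's full construction): a kernel machine predicts with a weighted sum over training examples,

    f(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i),

so the dual variables $\alpha_i$ directly expose how much each past example contributes, and examples with large $|\alpha_i|$ are natural candidates for "memorable" ones; the talk extends this kind of dual, example-indexed view beyond kernel methods.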

Below are papers I published whilst at the European Space Agency (2017-2018) and the University of Edinburgh (2016-2017), focusing on optimal control/trajectory optimization and computational neuroscience, respectively.


On the Stability Analysis of Deep Neural Network Representations of an Optimal State-Feedback


Dario Izzo, Dharmesh Tailor, Thomas Vasileiou
IEEE Transactions on Aerospace and Electronic Systems, 2020
paper / arxiv


Learning the Optimal State-Feedback via Supervised Imitation Learning


Dharmesh Tailor, Dario Izzo
Astrodynamics (Springer), 2019
paper / arxiv / code


Machine Learning and Evolutionary Techniques in Interplanetary Trajectory Design


Dario Izzo, Christopher Sprague, Dharmesh Tailor
Modeling and Optimization in Space Engineering (Springer), 2019
paper / arxiv


Unconscious Biases in Neural Populations Coding Multiple Stimuli


Sander Keemink, Dharmesh Tailor, Mark van Rossum
Neural Computation, 2018
paper


Adapted from Leonid Keselman's fork of Jon Barron's website.