Dharmesh Tailor

3rd-year PhD Student @ AMLab, University of Amsterdam

Google Scholar
GitHub
Twitter

About Me

I am a 3rd-year PhD student in the Amsterdam Machine Learning Lab supervised by Eric Nalisnick (Johns Hopkins University). I also closely collaborate with Emtiyaz Khan (RIKEN AIP) and the Approximate Bayesian Inference Team. I am interested in building safe, interpretable and robust AI systems. My work is rooted in Bayesian principles, uncertainty quantification, local sensitivity measures and human-AI interplay.

Papers

Learning to Defer to a Population: A Meta-Learning Approach

Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick

27th International Conference on Artificial Intelligence and Statistics (AISTATS), 2024

Oral presentation & Outstanding Student Paper award (top 1% of accepted papers)

paper / arXiv / code / poster / slides

We formulate a learning to defer (L2D) system that can cope with never-before-seen experts at test time. We accomplish this using meta-learning, considering both optimization- and model-based variants. Given a small context set that characterizes the currently available expert, our framework quickly adapts its deferral policy. For the model-based approach, we employ an attention mechanism that finds points in the context set similar to a given test point, yielding an even more precise assessment of the expert's abilities.
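To give a rough sense of the model-based variant, here is a minimal sketch of attention-based deferral scoring: a test point attends to the expert's context set and aggregates the expert's past correctness on similar points. All names, shapes, and the decision rule are hypothetical illustrations, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def expert_accuracy_estimate(test_emb, ctx_emb, ctx_correct, temperature=1.0):
    """Estimate the current expert's accuracy at each test point.

    test_emb:    (T, d) embeddings of test inputs
    ctx_emb:     (C, d) embeddings of the expert's context inputs
    ctx_correct: (C,)   1.0 where the expert was correct, else 0.0
    """
    # Attention weights: similarity of each test point to each context point.
    attn = F.softmax(test_emb @ ctx_emb.T / temperature, dim=-1)  # (T, C)
    # Weighted average of the expert's correctness on similar points.
    return attn @ ctx_correct                                      # (T,)

# Illustrative deferral rule: defer wherever the expert's estimated
# accuracy exceeds the classifier's own confidence.
def defer_mask(model_conf, est_expert_acc):
    return est_expert_acc > model_conf
```

Because the scores are computed from the context set alone, the same trained attention module can be reused for any new expert, which is what makes the policy adapt without retraining.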

The Memory-Perturbation Equation: Understanding Model's Sensitivity to Data

Peter Nickl, Lu Xu*, Dharmesh Tailor*, Thomas Möllenhoff, Emtiyaz Khan

37th Conference on Neural Information Processing Systems (NeurIPS), 2023

ICML 2023 Workshop on Principles of Duality for Modern Machine Learning

paper / arXiv / code / poster

We present the Memory-Perturbation Equation (MPE), which relates a model's sensitivity to perturbations in its training data. Derived from Bayesian principles, the MPE unifies existing sensitivity measures, generalizes them to a wide variety of models and algorithms, and reveals useful properties of these sensitivities. Our empirical results show that sensitivity estimates obtained during training can faithfully predict generalization on unseen test data.
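To give a flavor of the result: for Gaussian posterior approximations, the sensitivity estimate factorizes into a prediction error times a prediction variance. The notation below is a schematic paraphrase, not the paper's exact statement.

```latex
% Schematic form of the leave-one-out sensitivity estimate (illustrative
% notation): removing example i shifts the prediction at x_i roughly by
\[
  f(x_i;\,\theta_{\setminus i}) \;-\; f(x_i;\,\theta_*)
  \;\approx\; v_i \, e_i ,
\]
% where e_i is the model's prediction error on example i and v_i is its
% prediction variance under the approximate posterior: high-error,
% high-variance examples are the ones the model is most sensitive to.
```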

Exploiting Inferential Structure in Neural Processes

Dharmesh Tailor, Emtiyaz Khan, Eric Nalisnick

39th Conference on Uncertainty in Artificial Intelligence (UAI), 2023

5th Workshop on Tractable Probabilistic Modeling at UAI 2022

paper / arXiv / poster

This work provides a framework that allows the latent variable of Neural Processes to be given a rich prior defined by a graphical model. These distributional assumptions directly translate into an appropriate aggregation strategy for the context set. We describe a message-passing procedure that still allows for end-to-end optimization with stochastic gradients. We demonstrate the generality of our framework by using mixture and Student-t assumptions that yield improvements in function modelling and test-time robustness.
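For intuition, here is a minimal sketch of the aggregation this implies in the simplest case of a Gaussian prior on the latent: each context point contributes a conjugate Gaussian message, so aggregation becomes a precision-weighted posterior update rather than plain mean pooling. The function name and encoding layout are hypothetical; the framework in the paper also covers richer priors such as mixtures and Student-t.

```python
import torch

def gaussian_aggregate(r, prior_mean, prior_prec):
    """Aggregate context encodings as a conjugate Gaussian posterior update.

    r:          (N, 2*d) per-point encodings, split into a mean m_n and a
                log-precision s_n for each context point
    prior_mean: (d,) prior mean of the latent z
    prior_prec: (d,) prior precision of the latent z
    """
    m, log_s = r.chunk(2, dim=-1)         # per-point means / log-precisions
    prec = torch.exp(log_s)               # positive per-point precisions
    post_prec = prior_prec + prec.sum(0)  # precisions add under conjugacy
    post_mean = (prior_prec * prior_mean + (prec * m).sum(0)) / post_prec
    return post_mean, post_prec
```

Because the update is a sum of per-point messages, it stays permutation-invariant and differentiable, so it slots directly into standard stochastic-gradient training.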

Talks

5. How to Build Transparent and Trustworthy AI
2nd Bayes-Duality Workshop (Japan), 06/2024
Jointly with Emtiyaz Khan (main speaker)
video

4. Learning to Defer to a Population: A Meta-Learning Approach (Oral presentation)
27th International Conference on Artificial Intelligence and Statistics (Spain), 05/2024
slides

3. Memory Maps to Understand Models
Dutch Society of Pattern Recognition and Image Processing: Fall Meeting on Anomaly Detection (Amsterdam, Netherlands), 11/2023 (Oral presentation)
1st Bayes-Duality Workshop (Japan), 06/2023
ESA Advanced Concepts Team Science Coffee (Netherlands), 04/2023
slides

2. Adaptive and Robust Learning with Bayes
NeurIPS Bayesian Deep Learning workshop (virtual), 12/2021
Jointly with Emtiyaz Khan (main speaker) & Siddharth Swaroop
video / slides

1. Identifying Memorable Experiences of Learning Machines
RIKEN AIP Open Seminar (virtual), 03/2021
video / slides
