Raghav Singhal

I'm currently a researcher at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi, where I work with Prof. Praneeth Vepakomma. Prior to this, I graduated from IIT Bombay with a Bachelor's in EE and a Master's in AI/ML.

Email  /  CV  /  Scholar  /  Twitter  /  LinkedIn  /  GitHub

profile photo

Research

My research is motivated by making AI systems more usable. To that end, my interests include efficient AI, improving the safety and reliability of AI systems, strengthening reasoning capabilities, and, more recently, building useful agentic applications and evaluations.

Check out my Google Scholar for a complete list of publications. Some selected works are highlighted. * denotes equal contribution.

Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
Raghav Singhal*, Kaustubh Ponkshe*, Rohit Vartak, Lav Varshney, Praneeth Vepakomma
arXiv; Under Review at ACL 2025
code / arXiv

We set a new Pareto frontier for federated fine-tuning of LLMs, achieving SOTA performance, stronger privacy guarantees, and up to 230x lower communication costs.

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning
Kaustubh Ponkshe*, Raghav Singhal*, Eduard Gorbunov, Alexey Tumanov, Samuel Horvath, Praneeth Vepakomma
arXiv; SCOPE @ ICLR 2025; Under Review at ICML 2025
project page / code / arXiv

We provably achieve the best approximation of full fine-tuning in low-rank spaces solely through clever initialization, outperforming LoRA while using up to 90x fewer parameters.

FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models
Raghav Singhal*, Kaustubh Ponkshe*, Praneeth Vepakomma
arXiv; FITML @ NeurIPS 2024; SCOPE @ ICLR 2025; Under Review at ACL 2025
project page / code / arXiv

We achieve exact aggregation in federated fine-tuning of LLMs, improving over SOTA.

M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
Raja Kumar*, Raghav Singhal*, Pranamya Kulkarni, Deval Mehta, Kshitij Jadhav
arXiv; UniReps @ NeurIPS 2024; Under Review at TMLR
project page / code / arXiv

We introduce a multimodal mixup-based contrastive learning framework that effectively captures shared relations across modalities, enabling robust multimodal representation learning.

Regularization-based Framework for Quantization-, Fault- and Variability-Aware Training
Anmol Biswas*, Raghav Singhal*, Sivakumar Elangovan, Shreyas Sabnis, Udayan Ganguly
arXiv; MLNCP @ NeurIPS 2024; Under Review at TMLR
arXiv

We develop a learnable, non-uniform quantization-aware training framework that boosts efficiency and reliability of AI models deployed on low-power edge devices.

Translation and Scale Invariance for Event-Based Object Tracking
Jens Egholm Pedersen, Raghav Singhal, Jörg Conradt
NICE 2023
code / paper

We train an extremely low-power SNN for accurate temporal regression that matches ANN-level performance, converges faster, and is directly portable to neuromorphic hardware.


Source code taken from Jon Barron's website.