Raghav Singhal

I'm currently a PhD student in AI at EPFL. Previously, I was a researcher at the Massachusetts Institute of Technology (MIT) and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), where I worked with Prof. Praneeth Vepakomma. Prior to this, I graduated from IIT Bombay with a Bachelor's in Electrical Engineering and a Master's in AI/ML.

Email  /  Scholar  /  Twitter  /  LinkedIn  /  GitHub


Research

My research is driven by a strong motivation to improve the usability of AI systems. Towards this goal, my interests include efficient AI, enhancing the safety and reliability of AI systems, improving reasoning capabilities, and, more recently, developing useful agentic use cases and evaluations.

Check out my Google Scholar for a complete list of publications. * denotes equal contribution.

FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models
Raghav Singhal*, Kaustubh Ponkshe*, Praneeth Vepakomma
ACL 2025 - Oral (Top 2.2% of submitted papers)
project page / code / arXiv

We achieve exact aggregation in distributed fine-tuning of LLMs, consistently improving over SOTA.

ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models
Raghav Singhal*, Kaustubh Ponkshe*, Rohit Vartak*, Praneeth Vepakomma
arXiv; ES-FOMO @ ICML 2025 - Spotlight (Top 9.5% of accepted papers)
project page / code / arXiv

We introduce ABBA, a PEFT method that enhances expressivity by decoupling low-rank updates from pre-trained weights via a Hadamard product, consistently improving over SOTA methods.

Safety Subspaces are Not Distinct: A Fine-Tuning Case Study
Kaustubh Ponkshe*, Shaan Shah*, Raghav Singhal*, Praneeth Vepakomma
arXiv
code / arXiv

We show that safety alignment in LLMs is not confined to distinct subspaces, challenging the foundation of subspace-based defenses.

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning
Kaustubh Ponkshe*, Raghav Singhal*, Eduard Gorbunov, Alexey Tumanov, Samuel Horvath, Praneeth Vepakomma
arXiv; SCOPE @ ICLR 2025
project page / code / arXiv

We provably achieve the best approximation of full fine-tuning in low-rank spaces solely through clever initialization, outperforming LoRA while using up to 90x fewer parameters.

Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
Raghav Singhal*, Kaustubh Ponkshe*, Rohit Vartak, Lav Varshney, Praneeth Vepakomma
arXiv; ES-FOMO @ ICML 2025
project page / code / arXiv

We set a new Pareto frontier for distributed fine-tuning of LLMs, achieving SOTA performance, stronger privacy guarantees, and up to 230x lower communication costs.

M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
Raja Kumar*, Raghav Singhal*, Pranamya Kulkarni, Deval Mehta, Kshitij Jadhav
TMLR
project page / code / arXiv

We introduce a multimodal mixup-based contrastive learning framework that effectively captures shared relations across modalities, enabling robust multimodal representation learning.

Regularization-based Framework for Quantization-, Fault- and Variability-Aware Training
Anmol Biswas*, Raghav Singhal*, Sivakumar Elangovan, Shreyas Sabnis, Udayan Ganguly
arXiv; MLNCP @ NeurIPS 2024; under review at TMLR
arXiv

We develop a learnable, non-uniform quantization-aware training framework that boosts efficiency and reliability of AI models deployed on low-power edge devices.

Translation and Scale Invariance for Event-Based Object Tracking
Jens Egholm Pedersen, Raghav Singhal, Jörg Conradt
NICE 2023
code / paper

We train an extremely low-power SNN capable of accurate temporal regression that achieves ANN-level performance with faster convergence and is directly portable to neuromorphic hardware.


Source code taken from Jon Barron's website.