About Me
I am currently a Research Scientist at Siri Speech, Apple. My research at Apple focuses on distributed (federated) learning and active learning for large-scale, production-oriented ASR models, addressing challenges related to privacy, data/device heterogeneity, large-model training, data scarcity, personalization, and interpretability.
I joined Apple’s Class of 2022 AIML Residents in July 2022 after graduating with a Master of Science (MS), with a thesis titled “Towards Privacy and Communication Efficiency in Distributed Representation Learning,” from the School of Electrical and Computer Engineering, Purdue University, advised by Dr. Christopher Brinton. During my graduate studies, I also actively collaborated with Dr. David Inouye, Dr. Qiang Qiu, Dr. Saurabh Bagchi, and Dr. Seyyedali Hosseinalipour.
Research Interests
My research interests lie at the intersection of Representation Learning, Density Estimation, and Distributed Learning. One of the main focuses of my work is to reduce the dependence on large labeled datasets in model training by developing algorithms that are predominantly unsupervised, self-supervised, or federated in nature. I am also interested in the design and analysis of representation learning algorithms that are scalable, communication-efficient, memory-efficient, and robust to adversarial perturbations. I have recently started exploring the role of deep reinforcement learning and its integration with unsupervised learning.
Recent Updates
- May, 2024
- November, 2023
- September, 2023
- Preprint of our paper “Federated Learning with Differential Privacy for End-to-End Speech Recognition” is available.
- Our paper “Importance of Smoothness Induced by Optimizers in FL4ASR: Towards Understanding Federated Learning for End-to-End ASR” was accepted at the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023.
- July, 2023
- Joined Apple as a Research Scientist after completing the 2022 AIML Residency Program.
- Jan, 2023
- Our paper “Efficient Federated Domain Translation” was accepted at the International Conference on Learning Representations (ICLR), 2023.