I am a master's student in computer science at USC. I am part of the Theory Group, advised by the brilliant Prof. Vatsal Sharan.
My interests lie in theoretical machine learning, specifically robustness, fairness, and generalization. I am also fond of cryptography, which I studied during my research internship at Signal. I am currently thinking about the following questions:
- How can we theoretically understand learning in the presence of spurious features? Do models that achieve low training error yet rely on these features do so by memorizing data points? Can we draw connections to multicalibration?
- Can we understand the implicit biases of gradient-based methods through the lens of stability? In which settings is gradient descent the most stable learning algorithm?
- Can we scale current private information retrieval (PIR) systems to implement secure contact discovery for a commercial user base such as Signal's?
I am currently applying to Ph.D. programs to start in Fall 2024. Please reach out if there is anything you would like to discuss!
M.S. in Computer Science, University of Southern California
Expected May 2024
B.S. in Computer Science, University of Southern California
May 2023, Summa Cum Laude