The Cost of Robustness: Tighter Bounds on Parameter Complexity for Robust Memorization in ReLU Nets
We analyze how the number of parameters required for robust memorization in ReLU networks scales with the robustness ratio, giving tighter upper and lower bounds on parameter complexity.
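For concreteness, here is one standard formalization of robust memorization and the robustness ratio; the notation μ, ε, ρ is mine for illustration and may differ from the paper's exact conventions.

```latex
% Dataset D = {(x_i, y_i)}_{i=1}^N in R^d x {1, ..., C}, with class separation
%   eps := min ||x_i - x_j||_2 over pairs with y_i != y_j.
% A network f robustly memorizes D with radius mu if it is correct on a whole
% ball around every training point:
\[
  f(x) = y_i \qquad \text{for all } i \text{ and all } x \text{ with } \|x - x_i\|_2 \le \mu .
\]
% The robustness ratio normalizes the radius by the separation:
\[
  \rho := \frac{\mu}{\varepsilon} \in \Bigl(0, \tfrac{1}{2}\Bigr),
\]
% since mu >= eps/2 would force the balls around two differently labeled
% points at distance eps to intersect, making exact robust memorization impossible.
```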
My name is Yujun Kim, and I am a Ph.D. student in the Optimization & Machine Learning (OptiML) group at KAIST AI. I am interested in optimization and deep learning theory. I joined KAIST AI through the M.S./Ph.D. integrated program in Fall 2024. Previously, I completed my undergraduate degree in Mathematical Sciences with a double major in the School of Computing at KAIST.
Integrated M.S./Ph.D. Program in Artificial Intelligence
Korea Advanced Institute of Science and Technology (KAIST)
B.S. in Mathematical Sciences, B.E. in the School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
Korea Science Academy of KAIST (KSA)
Starting in Fall 2025, I am continuing my studies as a first-year Ph.D. student.
I received a Best Paper Award at KAIA 2025 for the paper “The Cost of Robustness: Tighter Bounds on Parameter Complexity for Robust Memorization in ReLU Nets”.
We study Incremental Gradient Descent in the small epoch regime and show that it exhibits severe slowdown, especially in the presence of nonconvex components.
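As a rough illustration of the setting (not the paper's construction), the sketch below implements cyclic Incremental Gradient Descent on a toy finite-sum objective; the quadratic components, step size, and epoch budget are hypothetical choices of mine.

```python
import numpy as np

def igd(grads, w0, lr, epochs):
    """Cyclic Incremental Gradient Descent: one deterministic pass over the
    n component gradients per epoch, with no reshuffling between epochs."""
    w = w0.copy()
    for _ in range(epochs):
        for grad_i in grads:           # fixed cyclic order over components
            w = w - lr * grad_i(w)     # step on a single component at a time
    return w

# Toy finite-sum objective f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.
# (Illustrative only; the paper's components need not be quadratics.)
rng = np.random.default_rng(0)
n, d = 10, 3
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grads = [lambda w, a=A[i], bi=b[i]: (a @ w - bi) * a for i in range(n)]

# "Small epoch regime": only a handful of passes over the n components.
w = igd(grads, w0=np.zeros(d), lr=0.05, epochs=3)
print(w)
```

The key contrast with stochastic gradient descent is the fixed component order: each epoch visits the components in the same deterministic sequence rather than sampling them at random.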