Accelerating Sparse Over-parameterized Solvers via Fast Inexact Gradients

2025.04.07

Submitted by: Gong Huiying    Department: School of Science

Event Information

Title: Accelerating Sparse Over-parameterized Solvers via Fast Inexact Gradients

Speaker: Jingwei Liang, Associate Professor (Shanghai Jiao Tong University)

Time: Thursday, March 27, 2025, 9:00

Place: Room GJ303, Main Campus

Inviter: Zeng Xiaoyan

Organizer: Department of Mathematics, School of Science

Abstract: Sparse regularizers, such as the ℓ1-norm, (overlapping) group sparsity, and low-rank penalties, are widely used in data science and machine learning. Designing efficient numerical solvers for sparse regularization has been an extremely active research area since the new millennium. Recently, over-parameterized solvers such as VarPro [Poon & Peyre, 2023] have become particularly attractive owing to their improved conditioning and their ability to transform the original non-smooth optimization problem into a smooth one, which opens the door to high-order numerical schemes such as quasi-Newton methods. However, despite their fast global convergence, such methods incur a high per-iteration complexity. To circumvent this drawback, in this talk I will introduce an inexact version of VarPro based on an "approximate dimension reduction", which significantly reduces the per-iteration complexity. The error incurred at each step can be directly controlled, and the inexact gradient approach is adaptive, allowing for significant acceleration without hurting convergence. The method works seamlessly with L-BFGS and substantially enhances the performance of VarPro. Numerical experiments are provided for various sparse optimization problems.
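For readers unfamiliar with the over-parameterization idea mentioned in the abstract, the minimal sketch below illustrates the well-known Hadamard reparameterization x = u * v, which replaces the non-smooth ℓ1 penalty with a smooth ridge-type penalty so that a quasi-Newton method such as L-BFGS can be applied directly. The data A, b and the regularization weight lam are synthetic placeholders; this is only a generic illustration of the smooth reformulation, not the speaker's inexact VarPro algorithm.

# Minimal sketch: Hadamard over-parameterization of the LASSO,
#   min_x 0.5*||A x - b||^2 + lam*||x||_1,
# rewritten with x = u * v as the smooth problem
#   min_{u,v} 0.5*||A (u*v) - b||^2 + (lam/2)*(||u||^2 + ||v||^2),
# which L-BFGS can minimize directly (synthetic data, illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n, lam = 50, 200, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = 1.0                             # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

def objective(z):
    u, v = z[:n], z[n:]
    r = A @ (u * v) - b                      # residual of the data-fit term
    f = 0.5 * r @ r + 0.5 * lam * (u @ u + v @ v)
    g_x = A.T @ r                            # gradient w.r.t. x = u*v
    grad = np.concatenate([g_x * v + lam * u,   # chain rule through u
                           g_x * u + lam * v])  # chain rule through v
    return f, grad

z0 = 0.1 * rng.standard_normal(2 * n)
res = minimize(objective, z0, jac=True, method="L-BFGS-B")
x_hat = res.x[:n] * res.x[n:]
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))

The smooth objective above is equivalent to the original ℓ1-regularized problem because minimizing (||u||^2 + ||v||^2)/2 subject to u * v = x yields ||x||_1; the talk's method goes further by eliminating one block of variables (VarPro) and approximating the resulting gradient to cut the per-iteration cost.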
