Title: Scale-invariant regularizations for sparse signal and low-rank tensor recovery
Speaker: Chao Wang (Southern University of Science and Technology)
Time: 13:30, Tuesday, November 7, 2023
Place: Meeting room next to F420, Main Campus
Inviter: Prof. Yaxin Peng
Host: Department of Mathematics, College of Sciences
Abstract: Regularization plays a pivotal role in tackling challenging ill-posed problems by guiding solutions towards desired properties. In this presentation, I will introduce the ratio of the L1 and L2 norms, denoted L1/L2, which serves as a scale-invariant and parameter-free regularization for approximating the elusive L0 norm. Our theoretical analysis reveals a strong null space property (sNSP) and proves that any sparse vector is a local minimizer of the L1/L2 model whenever the system matrix satisfies the sNSP condition. Furthermore, we extend the L1/L2 model to low-rank tensor recovery by introducing the tensor nuclear norm over the Frobenius norm (TNF). We demonstrate that local optimality can be assured under an NSP-type condition. Since both the L1 and L2 norms are absolutely one-homogeneous functions, we propose a gradient descent flow method to minimize the quotient model, in analogy with Rayleigh quotient minimization. This derivation offers valuable numerical insights into convergence analysis and the boundedness of solutions. Throughout the presentation, we will explore a range of applications, including limited-angle CT reconstruction and video background modeling, showcasing the superior performance of our approach compared to state-of-the-art methods.
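
To make the quotient model concrete, below is a minimal numerical sketch (an editorial illustration, not the speaker's algorithm): projected subgradient descent for min_x ||x||_1 / ||x||_2 subject to Ax = b, i.e. a simple Euler-type discretization of a gradient descent flow on the quotient, using the pointwise (sub)gradient sign(x)/||x||_2 - ||x||_1 * x / ||x||_2^3. The step size, iteration count, initialization, and the helper names (recover, project) are all assumptions made for this toy example.

import numpy as np

def l1_over_l2(x):
    # Scale-invariant objective ||x||_1 / ||x||_2 (undefined at x = 0).
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

def grad_l1_over_l2(x):
    # (Sub)gradient of ||x||_1 / ||x||_2 away from zero entries:
    # sign(x)/||x||_2 - ||x||_1 * x / ||x||_2^3.
    n1, n2 = np.linalg.norm(x, 1), np.linalg.norm(x, 2)
    return np.sign(x) / n2 - n1 * x / n2**3

def recover(A, b, steps=5000, lr=1e-2):
    # Projected (sub)gradient descent for
    #     min_x ||x||_1 / ||x||_2   s.t.  A x = b,
    # projecting each iterate back onto the affine set {x : A x = b}.
    AAt_inv = np.linalg.inv(A @ A.T)

    def project(x):
        return x - A.T @ (AAt_inv @ (A @ x - b))

    x = A.T @ (AAt_inv @ b)  # least-norm feasible starting point
    for _ in range(steps):
        x = project(x - lr * grad_l1_over_l2(x))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 5  # underdetermined system, k-sparse ground truth
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true
    x_hat = recover(A, b)
    print("objective:", l1_over_l2(x_hat))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Because L1/L2 is scale-invariant, a fixed step size behaves reasonably regardless of the magnitude of x; the least-norm initialization is one common choice for such nonconvex quotient models, though the talk's actual numerical scheme may differ.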