Title: On Convergence of Iterative Thresholding Algorithms to Global Solution for Nonconvex Sparse Optimization
Speaker: Prof. Yaohua Hu (Shenzhen University)
Time: April 21, 2023 (Friday), 13:00
Place: Room F309, Main Campus
Inviter: Prof. Changjun Yu
Organizer: Department of Mathematics, College of Sciences
Abstract:
Sparse optimization is a popular research topic in applied mathematics and optimization. Nonconvex sparse regularization problems have been extensively studied because they ameliorate the statistical bias of convex relaxations and promote sparsity robustly in a wide range of applications. However, owing to the nonconvex and nonsmooth structure of these problems, the convergence theory of their optimization algorithms is still far from complete: only convergence to a stationary point has been established in the literature, and there is as yet no theoretical guarantee of convergence to a global minimum or a true sparse solution.
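For concreteness, the nonconvex regularization problems referred to above typically take the following standard form (given here for orientation; the precise model treated in the talk may differ):

\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=1}^{n} \phi(|x_i|),

where A \in \mathbb{R}^{m \times n} with m < n describes an under-determined linear system, b is the (possibly noisy) observation, \lambda > 0 is a regularization parameter, and \phi is a nonconvex penalty such as SCAD, MCP, or the Lp penalty \phi(t) = t^p with 0 < p < 1.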
This talk aims to find an approximate global solution or a true sparse solution of an under-determined linear system. For this purpose, we propose two types of iterative thresholding algorithms, equipped with a continuation technique and a truncation technique, respectively. We introduce the notion of a limited shrinkage thresholding operator and apply it, together with the restricted isometry property (RIP), to show that the proposed algorithms converge to an approximate global solution or a true sparse solution within a tolerance determined by the noise level and the limited shrinkage magnitude. Applying these results to nonconvex regularization problems with the SCAD, MCP, and Lp penalties, and utilizing the recovery bound theory, we establish the convergence of the corresponding proximal gradient algorithms to an approximate global solution.
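As a rough illustration of the algorithmic template described above, the Python sketch below runs a generic iterative thresholding loop with a continuation strategy (a geometrically decaying threshold) and an optional truncation step (keeping only the s largest-magnitude entries). The soft-thresholding rule, step size, decay schedule, and all parameter names (lam0, decay, lam_min, s) are illustrative assumptions, not the speaker's exact algorithms.

import numpy as np

def soft_threshold(z, t):
    # Soft thresholding, the prox of t*||.||_1; one simple example of a
    # (limited-shrinkage) thresholding operator. Illustrative choice only.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iterative_thresholding(A, b, lam0=1.0, decay=0.9, lam_min=1e-4,
                           s=None, max_iter=500, tol=1e-8):
    # Generic iterative thresholding for recovering a sparse x with A x ~ b.
    # Continuation: the threshold lam starts at lam0 and decays geometrically.
    # Truncation:   if s is given (assume s >= 1), keep only the s
    #               largest-magnitude entries after each thresholding step.
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L with L = ||A||_2^2
    lam = lam0
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)               # gradient of 0.5*||Ax - b||^2
        x_new = soft_threshold(x - step * grad, step * lam)
        if s is not None and s < n:            # truncation step
            smallest = np.argsort(np.abs(x_new))[:-s]
            x_new[smallest] = 0.0
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
        lam = max(lam * decay, lam_min)        # continuation step
    return x

Replacing soft_threshold with the proximal mapping of the SCAD, MCP, or Lp penalty turns the same loop into the proximal gradient algorithms to which the convergence results of the talk are applied.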