Shanghai Management Forum, Session 390 (Zhang Jin, Assistant Professor, Southern University of Science and Technology)
Title: Linear Convergence Rates of Splitting Algorithms and Their Applications in Data Science
Speaker: Zhang Jin, Assistant Professor, Southern University of Science and Technology
Host: Lin Guihua, Professor, School of Management
ʱ ¼ä£º2019Äê7ÔÂ2ÈÕ£¨Öܶþ£©£¬ÏÂÎç2:30-3:30
Venue: Room 420, School of Management, East Area, Main Campus
Organizers: School of Management; Young and Senior Faculty Association of the School of Management
Speaker Bio:
Zhang Jin is an Assistant Professor in the Department of Mathematics at the Southern University of Science and Technology. He received a Bachelor of Arts from the School of Humanities and Social Sciences of Dalian University of Technology in 2007, a Master of Science from the School of Mathematical Sciences of Dalian University of Technology in 2010, and a Ph.D. in applied mathematics from the Department of Mathematics and Statistics of the University of Victoria, Canada, in December 2014. From April 2015 to January 2019 he worked at Hong Kong Baptist University. His research focuses on optimization and its applications, and he has published more than 20 papers in journals including Mathematical Programming, SIAM Journal on Optimization, SIAM Journal on Numerical Analysis, and the European Journal of Operational Research.
Abstract:
Despite a rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) is not yet fully understood, even in the convex case. For example, linear convergence of ADMM can be empirically observed in a wide range of applications, while existing theoretical results are either too stringent to be satisfied or too ambiguous to be checked, so why ADMM converges linearly in these applications remains unclear. In this paper, we systematically study the linear convergence of ADMM in the context of convex optimization through the lens of variational analysis. We show that the linear convergence of ADMM can be guaranteed without the strong convexity of the objective functions together with the full-rank assumption on the coefficient matrices, or the full polyhedricity assumption on their subdifferentials, and that linear convergence can be discerned for various concrete applications, especially for some representative models arising in statistical learning. Our analysis makes sophisticated use of variational-analysis techniques, and it is conducted for the most general proximal version of ADMM with Fortin and Glowinski's larger step size, so that all major variants of ADMM known in the literature are covered.
All faculty and students are welcome to attend!