Speaker: Qingyu Zhao [Stanford University]
Time: Monday, April 15, 2019; talk 10:00–10:30, discussion 13:30–15:30
Venue: Siyuan Hall, Lehu New Building, main campus
Host: Leng Tuo
Speaker bio:
ÕÔÇåÓʿĿǰÔÚ˹̹¸£´óѧÐÄÁéÓëÐÐΪ¿ÆÑ§Ñ§Ôº´Óʲ©Ê¿ºó¹¤×÷¡£Ëû2012ÄêÓÚÉϺ£½»Í¨´óѧ»ñµÃÍÆËã»úѧʿѧ룬2017ÄêÓÚ±±¿¨ÂÞÀ´ÄÉ´óѧ½ÌÌÃɽ·ÖУ»ñµÃÍÆËã»ú²©Ê¿Ñ§Î»¡£ËûµÄ×êÑз½ÏòÊÇʹÓÃÐÂÏʵÄͼÏñ·ÖÎöÓë»úе½ø½¨¼¼ÊõÀí½â¡¢Õï¶ÏÒÔ¼°Ò½ÖÎÐÄÁéÀ༲²¡¡£
Abstract: Generative models built on neural networks, such as variational autoencoders (VAEs), have gained tremendous popularity for learning complex distributions of training data by embedding them into a low-dimensional latent space. For inference, a traditional VAE often incorporates a simple prior, e.g., a single Gaussian, to regularize the latent variables. This practice limits the VAE's modelling capacity when the target distribution is multi-modal. In this talk, I will present my recent work on an extension of the VAE framework that adopts robust mixture models in the latent space for the purpose of data clustering in the presence of outliers. Furthermore, by reformulating basic VAEs, which are typically used for unsupervised learning, I propose a way to leverage them for general classification and regression tasks. I will show applications of such VAE-based frameworks in neuroimaging studies to understand how brain structures change with age in both healthy aging and neurodegenerative diseases, and also in discovering major patterns of functional brain connectivity.
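As background for the contrast the abstract draws, the sketch below (a generic NumPy illustration, not the speaker's actual model; all function names and parameter values are invented for this example) shows the KL regularization term of a VAE under the two priors: with a single standard-Gaussian prior the KL divergence from the encoder's Gaussian posterior has a closed form, while with a Gaussian-mixture prior it does not and is commonly estimated by Monte Carlo sampling.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) -- the 'simple prior' case."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def log_mixture_prior(z, means, weights):
    """log p(z) under a Gaussian-mixture prior with unit-variance components."""
    d = z.shape[-1]
    log_comps = [
        np.log(w) - 0.5 * np.sum((z - m) ** 2) - 0.5 * d * np.log(2 * np.pi)
        for m, w in zip(means, weights)
    ]
    m_max = max(log_comps)  # log-sum-exp trick for numerical stability
    return m_max + np.log(sum(np.exp(c - m_max) for c in log_comps))

def mc_kl_to_mixture(mu, log_var, means, weights, n_samples=1000, seed=0):
    """KL(q || mixture prior) has no closed form; estimate E_q[log q - log p] by sampling."""
    rng = np.random.default_rng(seed)
    sigma = np.exp(0.5 * log_var)
    z = mu + sigma * rng.standard_normal((n_samples, mu.size))
    log_q = -0.5 * np.sum(((z - mu) / sigma) ** 2 + log_var + np.log(2 * np.pi), axis=1)
    log_p = np.array([log_mixture_prior(zi, means, weights) for zi in z])
    return float(np.mean(log_q - log_p))

# Example posterior q(z|x) = N([0.5, -0.5], I) from a hypothetical encoder:
mu, log_var = np.array([0.5, -0.5]), np.array([0.0, 0.0])
kl_simple = kl_to_standard_normal(mu, log_var)

# Two-component mixture prior, e.g. one mode per latent cluster:
means, weights = [np.array([2.0, 0.0]), np.array([-2.0, 0.0])], [0.5, 0.5]
kl_mix = mc_kl_to_mixture(mu, log_var, means, weights)
```

The mixture prior lets each component absorb one latent cluster, which is what makes clustering in the latent space possible; the Monte Carlo estimate is the price paid for giving up the closed-form KL term.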