Variational Model based Attention/Transformer Mechanisms for Image Inverse Problems

2023.11.27

Submitted by: Gong Huiying    Department: College of Sciences

Event Information

Title: Variational Model based Attention/Transformer Mechanisms for Image Inverse Problems

Speaker: Liu Jun, Associate Professor (Beijing Normal University)

Time: 10:00, Friday, November 24, 2023

Place: Tencent Meeting, ID 533326207

Inviter: Prof. Peng Yaxin

Host: Department of Mathematics, College of Sciences

Abstract: Features extracted by deep convolutional neural networks (DCNNs) are often complicated and difficult to model. We develop a method to integrate feature priors into DCNN architectures via a variational approach. It is built upon the universal approximation property of the probability density functions of mixture distributions. By considering the duality of maximum likelihood estimation for deep features in a high-dimensional space, several mechanisms are proposed, such as a learnable fidelity term, a learnable regularizer, and segmentation with a geometric prior. This partly reveals the connections between variational methods and some popular DCNN architectures in image processing: for example, weighted norms and attention, nonlocal regularization and transformers, duality and translation, and multigrid and the encoder-decoder U-Net.
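As a rough sketch of the nonlocal-regularization/transformer correspondence mentioned in the abstract (an illustration of the general idea, not the speaker's exact derivation): a nonlocal filter averages an image u with normalized patch-similarity weights,

\[ \hat{u}(x) = \sum_{y} w(x,y)\, u(y), \qquad w(x,y) = \frac{\exp\!\big(-\|Pu(x)-Pu(y)\|^2/h^2\big)}{\sum_{z} \exp\!\big(-\|Pu(x)-Pu(z)\|^2/h^2\big)}, \]

where Pu(x) denotes the patch around x and h is a bandwidth parameter. Self-attention computes a structurally similar normalized weighted average over all positions,

\[ \mathrm{Attn}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right) V. \]

Expanding \(-\|q-k\|^2 = 2q^{\top}k - \|q\|^2 - \|k\|^2\) shows that the Gaussian patch weights reduce to a softmax of inner products up to per-position terms, so both operators are row-normalized similarity-weighted averages.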
