Visually Grounded Paraphrases

2021.10.27

Submitted by: Zhou Shiqiang    Department: School of Computer Engineering and Science

Event Information

Time: Monday, November 1, 2021, 15:00–16:00

µØÖ·£º ÌÚѶ»áÒ飨ID£º508 326 821£©

Speaker: Assoc. Prof. Chenhui Chu

Affiliation: Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University


Host: Wang Hao

Abstract:

Visually grounded paraphrases (VGPs) are different phrasal expressions describing the same visual concept in an image. VGPs have the potential to improve language and vision tasks such as visual question answering and image captioning. In this talk, I will cover our recent work on various topics in VGP research, including VGP identification, VGP classification, VGP generation, cross-lingual visual grounding, flexible visual grounding, and VGP for vision-and-language representation learning.

Speaker Bio:

Chenhui Chu received his B.S. in software engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a program-specific associate professor at Kyoto University. His research interests include natural language processing, particularly machine translation and multimodal machine learning. His research has won the 2019 MSRA Collaborative Research Grant Award, the 2018 AAMT Nagao Award, and the CICLing 2014 Best Student Paper Award.

