Lecture Series in Intelligent Perception and Computing
  题  目(TITLE): Zero-shot Learning and Multi-modality Data Interpretability and Integration
  讲座人(SPEAKER): Yang Long, Research Fellow, Newcastle University
  主持人 (CHAIR): Yan Huang
  时  间 (TIME): Friday (August 3), 9:30 AM
  地  点 (VENUE): Meeting Room 1610, 16th Floor, Intelligent Building
  报告摘要(ABSTRACT):
  The presentation introduces Dr Yang Long's research background and the Open Lab, School of Computing, Newcastle University, where he is based. Zero-shot learning differs from conventional machine learning, which focuses mostly on mathematical modelling and optimization. Moreover, it requires deep understanding of and careful thinking about the label space, a topic known as ontological engineering. Essentially, zero-shot learning aims to teach machines to recognize new categories using human-understandable protocols, e.g. attributes, words, textual descriptions, and taxonomies, rather than training examples. The zero-shot learning community therefore studies a wide range of modalities and aims to design a unified framework that can take in and integrate them, so that a machine can handle images, videos, audio, attributes, and natural language with strongly inferable and interpretable representations and rich engagement with users. The presentation will conclude with insights into applications of zero-shot learning drawn from Yang's research.
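  The attribute-based recognition idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Dr Long's actual method: the classes, attributes, and predicted scores below are invented for the example, and the attribute predictor is assumed to have been trained on seen classes only.

```python
import numpy as np

# Hypothetical class-attribute table (values invented for illustration):
# each class is described by human-understandable attributes
# [has_stripes, has_hooves, is_black_and_white]
class_attributes = {
    "zebra": np.array([1.0, 1.0, 1.0]),  # unseen class: no training images needed
    "horse": np.array([0.0, 1.0, 0.0]),
    "panda": np.array([0.0, 0.0, 1.0]),
}

def zero_shot_classify(predicted_attributes, class_attributes):
    """Assign the class whose attribute prototype is nearest (in Euclidean
    distance) to the attribute vector predicted from the image."""
    return min(class_attributes,
               key=lambda c: np.linalg.norm(class_attributes[c] - predicted_attributes))

# Suppose an attribute predictor, trained only on seen classes, outputs
# these attribute scores for a test image of a zebra:
scores = np.array([0.9, 0.8, 0.7])
print(zero_shot_classify(scores, class_attributes))  # → zebra
```

  The key point is that "zebra" is recognized without any zebra training images: only its attribute description is needed, which is what makes the label space (ontological) engineering central to zero-shot learning.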
  报告人简介(BIOGRAPHY):
  Yang Long is currently a Research Fellow with Open Lab, School of Computing, Newcastle University. He received his Ph.D. degree in Computer Vision and Machine Learning from the Department of Electronic and Electrical Engineering, the University of Sheffield, UK, in 2017, and his M.Sc. degree from the same institution in 2014. His research interests include Artificial Intelligence, Machine Learning, Computer Vision, Deep Learning, and Zero-shot Learning, with a focus on Transparent AI for Healthcare Data Science. He has authored/co-authored papers in refereed journals and conferences such as IEEE TPAMI, TIP, CVPR, AAAI and ACM MM.