Shuo Ren received his bachelor's and doctoral degrees from the School of Computer Science and Engineering, Beihang University, where he was a joint Ph.D. student of Beihang University and Microsoft Research Asia, advised by Prof. Shuai Ma and Dr. Ming Zhou; his doctoral research focused on machine translation. After graduation, he worked successively at the Natural Language Computing Group of Microsoft Research Asia, the Bureau of Science and Technology Supervision of the China Securities Regulatory Commission, and the Zidong Taichu Large Model Research Center of the Institute of Automation, Chinese Academy of Sciences. He is currently an Associate Researcher at the Institute of Automation, Chinese Academy of Sciences, with research interests in large language model pre-training and multimodal alignment.
Artificial intelligence, natural language processing, pre-training
None
(1) Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou, Shuai Ma, Triangular Architecture for Rare Language Translation, The 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018).
(2) Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma, Explicit Cross-lingual Pre-training for Unsupervised Machine Translation, 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019).
(3) Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, Shuai Ma, Unsupervised Neural Machine Translation with SMT as Posterior Regularization, The 33rd AAAI Conference on Artificial Intelligence (AAAI 2019).
(4) Shuo Ren, Shujie Liu, Ming Zhou, Shuai Ma, A Graph-based Coarse-to-fine Method for Unsupervised Bilingual Lexicon Induction, The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
(5) Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma, A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation, The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
(6) Shuo Ren, Long Zhou, Shujie Liu, Furu Wei, Ming Zhou, Shuai Ma, SemFace: Pre-training Encoder and Decoder with a Semantic Interface for Neural Machine Translation, The 59th Annual Meeting of the Association for Computational Linguistics (ACL 2021).
(7) Shuo Ren, Shujie Liu, Yu Wu, Long Zhou, Furu Wei, Speech Pre-training with Acoustic Piece, INTERSPEECH 2022.
(8) Daya Guo, Shuo Ren, Shuai Lu, et al., GraphCodeBERT: Pre-training Code Representations with Data Flow, The 9th International Conference on Learning Representations (ICLR 2021).
(9) Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, et al., CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation, NeurIPS 2021 Datasets and Benchmarks Track.
(10) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei, WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing, IEEE Journal of Selected Topics in Signal Processing.
None
None