News

Sep 18, 2025 Our paper “A Semantic Parsing Framework for End-to-End Time Normalization” has been accepted to NeurIPS 2025!
Aug 20, 2025 Our survey paper “Transformer-Based Temporal Information Extraction and Application: A Review” has been accepted to EMNLP 2025!
Jan 30, 2025 Our paper “SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs” has been accepted as a Spotlight Oral (top 1%) at ICML 2025!
Jan 06, 2025 Joined Intel Labs Multimodal Cognitive AI Team full-time as an AI Research Scientist! 🎉🎉🎉 Excited to continue my research on agentic AI and multimodal models 🚀
Dec 20, 2024 Successfully defended my Ph.D. dissertation at the University of Arizona!
Mar 15, 2024 Our paper “Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning” has been accepted to NAACL 2024!
Oct 06, 2023 Our paper “Fusing Temporal Graphs into Transformers for Time-Sensitive Question Answering” has been accepted to EMNLP 2023 Findings!
May 01, 2023 Returning to Intel Labs for my internship!
May 15, 2022 Starting my internship at Intel Labs as an AI Research Intern!
Feb 23, 2022 Our paper “A Comparison of Strategies for Source-Free Domain Adaptation” has been accepted to ACL 2022!
May 15, 2020 Received the Edsger W. Dijkstra High Achievement Award in Computer Science from Loyola University Chicago!

Xin Su, Ph.D.

AI Research Scientist, Intel


Hi, I’m Xin Su. I’m an AI Research Scientist at Intel Labs in the Multimodal Cognitive AI team, focusing on multimodal AI and autonomous agents.

My research centers on making AI systems more powerful and reliable. I develop synthetic data generation and post-training methods to improve multimodal models and agents across diverse reasoning tasks, build autonomous agents that can interact with complex environments, and create knowledge graph systems and novel retrieval-reasoning frameworks for RAG applications.

Previously, I worked on building AI systems that extract structured information from text and apply it to complex reasoning tasks, such as temporal reasoning. I also applied AI to healthcare, developing clinical NLP models for medical applications.

I received my Ph.D. in Information from the University of Arizona in 2024, where I was a member of the Computational Language Understanding Lab advised by Dr. Steven Bethard, and my M.S. in Computer Science from Loyola University Chicago in 2020, advised by Dr. Dmitriy Dligach.

Selected Publications

2025

  1. NeurIPS
    A Semantic Parsing Framework for End-to-End Time Normalization
    Xin Su, Sungduk Yu, Phillip Howard, and 1 more author
    In The Thirty-ninth Annual Conference on Neural Information Processing Systems, Dec 2025
  2. ICML
    SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs
    Xin Su, Man Luo, Kris W Pan, and 3 more authors
    In Forty-second International Conference on Machine Learning, Jul 2025
  3. EMNLP
    Transformer-Based Temporal Information Extraction and Application: A Review
    Xin Su, Phillip Howard, and Steven Bethard
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, Nov 2025

2024

  1. NAACL
    Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning
    Xin Su, Tiep Le, Steven Bethard, and 1 more author
    In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Jun 2024

2023

  1. EMNLP
    Fusing Temporal Graphs into Transformers for Time-Sensitive Question Answering
    Xin Su, Phillip Howard, Nagib Hakim, and 1 more author
    In Findings of the Association for Computational Linguistics: EMNLP 2023, Dec 2023

2022

  1. ACL
    A Comparison of Strategies for Source-Free Domain Adaptation
    Xin Su, Yiyun Zhao, and Steven Bethard
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022