Xin Su, Ph.D.
AI Research Scientist, Intel

Hi, I’m Xin Su. I’m an AI Research Scientist at Intel Labs on the Multimodal Cognitive AI team, focusing on multimodal AI and autonomous agents.
My research centers on making AI systems more capable and reliable. I develop synthetic data generation and post-training methods that improve multimodal models and agents across diverse reasoning tasks, build autonomous agents that interact with complex environments, and design knowledge graph systems and novel retrieval-reasoning frameworks for retrieval-augmented generation (RAG) applications.
Previously, I worked on building AI systems that extract structured information from text and apply it to complex reasoning tasks, such as temporal reasoning. I also applied AI to healthcare, developing clinical NLP models for medical applications.
I received my Ph.D. in Information from the University of Arizona in 2024, where I worked in the Computational Language Understanding Lab advised by Dr. Steven Bethard, and my M.S. in Computer Science from Loyola University Chicago in 2020, advised by Dr. Dmitriy Dligach.
Selected Publications
2025
- NeurIPS · A Semantic Parsing Framework for End-to-End Time Normalization. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, Dec 2025
- ICML · SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs. In Forty-second International Conference on Machine Learning, 2025
- EMNLP · Transformer-Based Temporal Information Extraction and Application: A Review. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, Nov 2025
2024
- NAACL · Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Jun 2024