Biography

I am a final-year PhD researcher in Natural Language Processing at the University of Edinburgh and the University of Cambridge through the European Laboratory for Learning and Intelligent Systems (ELLIS) Network, advised by Shay Cohen, Anna Korhonen, and Edoardo Ponti. I am a recipient of the Apple Scholars in AIML PhD Fellowship, which recognizes emerging leaders in AI/ML research, and a member of the ELLIS PhD Cohort, a pan-European network for excellence in artificial intelligence.

My research investigates whether large foundation models (LFMs; e.g., LLMs and VLMs) develop latent world models across modalities such as text, vision, and speech. Latent world modelling refers to a model's intrinsic understanding of environment and agent dynamics, enabling reasoning and planning without an external, specialized world model. Around this theme, my interests span:

  1. World modelling as an emergent capability – eliciting LFMs to simulate future states and causal dynamics through novel training paradigms that require minimal supervision. (🌀 SWIRL, BootstrapWM)

  2. Language grounding – examining how abstract linguistic representations connect to physical and perceptual reality (e.g., spatial and temporal systems) in LFMs. (⏳ TemporalGrounding, SpatialGrounding)

  3. Next-generation model architectures – advancing unified architectures for omnimodal foundation models with any-to-any modality support, serving as the substrate for emergent latent world models. (💭 vLCM)

  4. Hallucination mitigation – designing interventions and evaluation frameworks to diagnose and reduce hallucinations in LFMs. (🍄 SEA, mFACT, ICR2, TWEAK)

I have multiple first-author publications at leading AI venues, including ICLR, NeurIPS, ACL, EMNLP, and NAACL. My research is complemented by industry collaborations with Meta FAIR, Apple AIML, NVIDIA Research, and Baidu NLP, spanning both fundamental and applied foundation model research.

I am on the 2026 job market for industry research scientist positions. I am actively looking for positions based in 🇨🇳/🇬🇧/🇨🇭/🇺🇸/🇪🇺/🇸🇬.

News

  • Feb 2026 My paper on omnimodal foundation models is accepted to ICLR 2026! 🎉 See you in Cidade Maravilhosa 🇧🇷
  • Dec 2025 My Bootstrapping World Model paper is accepted to the NeurIPS 2025 LAW Workshop! 🎉 See you in San Diego 🇺🇸
  • Dec 2025 One paper on Safety Editing for LLMs is accepted to EMNLP 2025! 🎉 See you in Suzhou 🇨🇳
  • Jul 2025 My paper on In-context Retrieval and Reasoning for LLMs is accepted to ACL 2025! 🎉 See you in Vienna 🇦🇹
  • Jun 2025 I will be interning at Meta FAIR in Paris 🇫🇷 this summer! Bonjour 🥐
  • Dec 2024 One paper on Spectral Editing of LLM Activations is accepted to NeurIPS 2024! 🎉 See you in Vancouver 🇨🇦
  • Jul 2024 Two papers on Language Grounding and Hallucination are accepted to NAACL 2024! See you in Mexico 🇲🇽
  • Jun 2024 I will be back at Apple AIML & MLR in Seattle 🇺🇸 this summer! Already missing Mt Rainier 🏔️🍒
  • Dec 2023 My paper on Mitigating Hallucinations is accepted to EMNLP 2023! 🎉 See you in Singapore 🇸🇬
  • Jun 2023 I will be interning at Apple AIML in Seattle 🇺🇸 this summer! Hello ☕
  • Feb 2023 I was awarded the Apple AIML PhD Fellowship for 2023! 
  • Dec 2022 Two papers are accepted to EMNLP 2022! 🎉 See you in Abu Dhabi 🇦🇪
  • Nov 2021 I will be interning at Baidu (NLP Department) in Beijing 🇨🇳 this winter! 你好 🐼

Professional Experience

Research Scientist Intern
Meta FAIR (Fundamental AI Research), Paris, 🇫🇷
Jun 2025 - Sep 2025
Omnimodal Foundation Models (Large Concept Model).
Research Scientist Intern
Apple AIML, Seattle, 🇺🇸
Jun 2024 - Sep 2024
In-Context Retrieval and Reasoning.
Research Scientist Intern
Apple AIML, Seattle, 🇺🇸
Jun 2023 - Sep 2023
Trustworthy Decoding for Hallucination Mitigation.
Research Scientist Intern
Baidu Inc. (NLP Department), Beijing, 🇨🇳
Nov 2021 - Aug 2022
Dense Retrieval at Scale.

Education

  • PhD in Natural Language Processing
    University of Edinburgh/Cambridge
    2022-2026
  • MSc in Cognitive Science
    University of Edinburgh
    2020-2021
  • BSc in Computer Science
    BNU-HKBU United International College
    2016-2020

Selected Publications

Self-Improving World Modelling with Latent Actions
Yifu Qiu, Zheng Zhao, Waylon Li, Yftah Ziser, Anna Korhonen, Shay Cohen, Edoardo Ponti
Under Review
Unified Vision-Language Modeling via Concept Space Alignment
Yifu Qiu, Paul-Ambroise Duquenne, Holger Schwenk
ICLR 2026
Lost in Space? Vision-Language Models Struggle with Relative Camera Pose Estimation
Ken Deng, Yifu Qiu, Yoni Kasten, Shay B. Cohen, Yftah Ziser
Under Review
Bootstrapping Action-Grounded Visual Dynamics in Unified Vision-Language Models
Yifu Qiu, Yftah Ziser, Anna Korhonen, Shay Cohen, Edoardo Ponti
Under Review
Eliciting In-context Retrieval and Reasoning for Long-Context Language Models
Yifu Qiu, Varun Embar, Shay Cohen, Yizhe Zhang, Navdeep Jaitly, Benjamin Han
ACL 2025
Iterative Multilingual Spectral Attribute Erasure
Shun Shao, Yftah Ziser, Zheng Zhao, Yifu Qiu, Shay Cohen, Anna Korhonen
EMNLP 2025
Spectral Editing of Activations for Large Language Model Alignment
Yifu Qiu, Zheng Zhao, Anna Korhonen, Edoardo Ponti, Shay Cohen
NeurIPS 2024
Are Large Language Models Temporally Grounded?
Yifu Qiu, Zheng Zhao, Anna Korhonen, Edoardo Ponti, Shay Cohen
NAACL 2024
Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-text Generation
Yifu Qiu, Varun Embar, Shay Cohen, Benjamin Han
NAACL 2024
Detecting and Mitigating Hallucinations in Multilingual Summarisation
Yifu Qiu, Anna Korhonen, Edoardo Ponti, Shay Cohen
EMNLP 2023
Abstractive Summarization Guided by Latent Hierarchical Document Structure
Yifu Qiu, Shay Cohen
EMNLP 2022
DuReader-Retrieval: A Large-scale Chinese Benchmark for Dense Information Retrieval from Real-World Applications
Yifu Qiu, Hongyu Li, Yingqi Qu, Ying Chen, Qiaoqiao She, Jing Liu, Hua Wu, Haifeng Wang
EMNLP 2022

Teaching Activities

Advanced Topics in Natural Language Processing
Teaching Assistant, University of Edinburgh
2024
Designed the post-training assignment for a tiny reasoning LLM.
Natural Language Understanding, Generation and Machine Translation
Teaching Assistant, University of Edinburgh
2024
Accelerated Natural Language Processing
Teaching Assistant, University of Edinburgh
2023
Machine Learning Practical (Deep Learning)
Tutor, University of Edinburgh
2023, 2024

Activities

Presenter
Citadel PhD Summit, United Kingdom
2025
Invited Speaker
Apple AIML, United States
2024
Invited Speaker
Shanghai AI Lab, China
2023
Participant
Oxford Machine Learning Summer School
2022
Participant
Cornell, Maryland, and Max Planck Pre-Doctoral Research School (CMMRS)
2021

Academic Service

ICLR
Reviewer
2025, 2026
NeurIPS
Reviewer
2024, 2025
ICML
Reviewer
2026
AISTATS
Reviewer
2025, 2026
ACL
Reviewer
2025, 2026
EMNLP
Reviewer
2024, 2025
NAACL
Reviewer
2024, 2025
Workshop on the Scaling Behavior of Large Language Models @ EACL
Committee Member
2024
Language Resources and Evaluation (Journal)
Reviewer