Topic
RAG
Retrieval-augmented generation systems, evaluation, and retrieval-heavy workflows.
6 papers · latest 2026-04-14
Bo Li, Mingda Wang, Gexiang Fang et al.
GRIP turns retrieval into a native decoding action, so the model itself decides when to search, how to rewrite queries, and when to stop within a single reasoning trace, rather than deferring to a bolted-on external controller.
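As a rough illustration of the retrieval-as-decoding-action idea (the loop, action protocol, and all names here are hypothetical sketches, not taken from the GRIP paper), a single decode loop can let the model's own outputs trigger search or termination:

```python
def decode_with_native_retrieval(model_step, search, prompt, max_steps=6):
    """Toy loop where retrieval is a decoding action: model_step reads the
    trace so far and emits ("search", query) or ("answer", text)."""
    trace = [prompt]
    for _ in range(max_steps):
        action, payload = model_step(trace)
        if action == "search":
            # The model chose to search; its rewritten query drives retrieval,
            # and the evidence lands in the same reasoning trace.
            trace.append(f"[evidence] {search(payload)}")
        elif action == "answer":
            trace.append(payload)
            return payload, trace
    return None, trace  # ran out of steps without a final answer
```

The point of the design is that search, query rewriting, and stopping are all ordinary decoding decisions, so no separate controller policy has to be trained or hand-tuned.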
Artem Gadzhiev, Andrew Kislov
Synthius-Mem replaces retrieval-heavy agent memory with structured persona memory, improving both long-term recall and adversarial robustness against invented facts.
Kyle Whitecross, Negin Rahimi
RecaLLM tackles the lost-in-thought problem by interleaving reasoning with explicit in-context retrieval, giving long-context models a practical way to stay grounded at contexts of up to 128K tokens.
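A minimal sketch of interleaving reasoning with in-context retrieval (purely illustrative; the function and the lexical scoring here are assumptions, not RecaLLM's method): between reasoning steps, re-surface the context chunks most relevant to the current thought so grounding does not decay as the context grows:

```python
def recall_chunks(context_chunks, reasoning_state, k=2):
    """Toy in-context retrieval step: rank stored chunks by lexical overlap
    with the current reasoning state and return the top k for re-injection."""
    state_terms = set(reasoning_state.lower().split())

    def overlap(chunk):
        # Number of words the chunk shares with the current reasoning step.
        return len(set(chunk.lower().split()) & state_terms)

    return sorted(context_chunks, key=overlap, reverse=True)[:k]
```

A real system would use embedding similarity rather than word overlap, but the loop structure (reason, recall, reason again) is the same.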
VISOR: Agentic Visual Retrieval-Augmented Generation via Iterative Search and Over-horizon Reasoning
Yucheng Shen, Jiulong Wu, Jizhou Huang et al.
VISOR pushes visual RAG toward real agent behavior with iterative search, evidence-space tracking, and drift control for long-horizon multimodal question answering over documents.
Mohamed Darwish Mounis, Mohamed Mahmoud, Shaimaa Sedek et al.
Shows that multimodal retrieval is often a query-alignment problem rather than an encoder problem, and beats strong baselines by rewriting image-text queries into retrieval-optimized text.
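The query-rewriting idea can be sketched in a few lines (a toy, keyword-level rewriter under the assumption that the image already has a textual caption; the real system presumably uses an LLM, and none of these names come from the paper): fold the image's description into the question and keep only content-bearing terms for the text retriever:

```python
# Minimal stopword list for the sketch; a real rewriter would be learned.
STOPWORDS = {"a", "an", "the", "what", "is", "are", "in", "this", "of", "shown"}

def rewrite_multimodal_query(image_caption, question):
    """Toy rewrite of an image+text query into retrieval-optimized text:
    merge caption and question, drop stopwords, deduplicate terms."""
    terms = []
    for tok in f"{question} {image_caption}".lower().split():
        tok = tok.strip("?.,!")
        if tok and tok not in STOPWORDS and tok not in terms:
            terms.append(tok)
    return " ".join(terms)
```

The rewritten string can then be fed to an unmodified text retriever, which is the paper's central claim: fix the query, not the encoder.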
A Systematic Study of Retrieval Pipeline Design for Retrieval-Augmented Medical Question Answering
Nusrat Sultana, Abdullah Muhammad Moosa, Kazi Afzalur Rahman et al.
A careful 40-setting RAG study shows that dense retrieval, query reformulation, and reranking matter more than many heavyweight design choices, offering practical tuning guidance that extends beyond medical QA.
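The three stages the study highlights compose naturally into one pipeline function (an illustrative skeleton only; the stage names are generic and the stubs below are not the paper's 40 configurations):

```python
def rag_retrieve(query, reformulate, dense_retrieve, rerank, k=5, pool=20):
    """Generic three-stage retrieval pipeline: reformulate the query,
    over-retrieve a candidate pool with dense retrieval, then rerank
    and keep the top k passages for the generator."""
    q = reformulate(query)
    candidates = dense_retrieve(q, pool)
    return rerank(q, candidates)[:k]
```

Each argument is a swappable component, which is exactly what makes this kind of ablation grid (retriever x reformulator x reranker) cheap to run.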