[Paper Reading] When Do Language Models Need Retrieval Augmentation
Table of Contents
- When Do LMs Need Retrieval Augmentation
- LMs' Perception of Their Knowledge Boundaries
- White-box Investigation
- Training The Language Model
- Utilizing Internal States or Attention Weights
- Grey-box Investigation
- Black-box Investigation
- Adaptive RAG
When Do LMs Need Retrieval Augmentation
A curated list of awesome papers about when language models (LMs) need retrieval augmentation. This repository will be continuously updated. If I missed any papers, feel free to open a PR to include them! Any feedback and contributions are welcome!
Retrieval should be triggered when the language model cannot provide a correct answer on its own. Accordingly, much of the work below focuses on determining whether the model can provide a correct answer.
LMs’ Perception of Their Knowledge Boundaries
These methods focus on determining whether the model can provide a correct answer but do not perform adaptive Retrieval-Augmented Generation (RAG).
White-box Investigation
These methods require access to the full set of model parameters, e.g., to train the model or to use its internal signals.
Training The Language Model
- [EMNLP 2020, Token-prob-based] Calibration of Pre-trained Transformers, Shrey Desai et al., 17 Mar 2020
  Investigate calibration of pre-trained transformer models in in-domain and out-of-domain (OOD) settings. Findings: 1) pre-trained models are well calibrated in-domain; 2) label smoothing works better than temperature scaling in the OOD setting.
- [TACL 2021, Token-prob-based] How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering, Zhengbao Jiang et al., 2 Dec 2020
  1) Investigate calibration (answer: not good) of generative language models (e.g., T5) on QA tasks (OOD settings). 2) Examine the effectiveness of several methods (fine-tuning, post-hoc probability modification, and adjustment of the predicted outputs or inputs).
- [TMLR 2022] Teaching Models to Express Their Uncertainty in Words, Stephanie Lin et al., 28 May 2022
  The first time a model has been shown to express calibrated uncertainty about its own answers in natural language. Introduces the CalibratedMath suite of tasks for testing calibration.
- [ACL 2023] A Close Look into the Calibration of Pre-trained Language Models, Yangyi Chen et al., 31 Oct 2022
  Answer two questions: (1) Do PLMs learn to become calibrated in the training process? (No) (2) How effective are existing calibration methods? (Learnable methods significantly reduce PLMs' confidence in wrong predictions.)
- [NeurIPS 2024] Alignment for Honesty, Yuqing Yang et al., 12 Dec 2023
  1) Establish a precise problem formulation and definition of "honesty"; 2) introduce a flexible training framework that emphasizes honesty without sacrificing performance on other tasks.
Utilizing Internal States or Attention Weights
These papers focus on determining the truth of a statement or the model’s ability to provide a correct answer by analyzing the model’s internal states or attention weights. It usually involves using mathematical methods to extract features or training a lightweight MLP (Multi-Layer Perceptron).
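Below is a minimal sketch of the probing idea shared by these papers (not the code of any specific one): a small MLP is trained on pre-extracted hidden states to predict whether the model answers correctly. The tensor shapes, the random placeholder data, and the choice of which hidden state to probe are assumptions for illustration.

```python
import torch
import torch.nn as nn

hidden_dim = 4096                                  # hidden size of the inspected LLM (assumed)
hidden_states = torch.randn(1000, hidden_dim)      # placeholder: e.g. last-layer state of the final prompt token
labels = torch.randint(0, 2, (1000,)).float()      # 1 = the model answered this question correctly

probe = nn.Sequential(                             # lightweight MLP probe
    nn.Linear(hidden_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(10):                                # a few passes over the probing set
    optimizer.zero_grad()
    logits = probe(hidden_states).squeeze(-1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# At inference time, sigmoid(probe(h)) estimates the probability that the model
# knows the answer; a low score can be used to trigger retrieval.
```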
- [EMNLP 2023 Findings] The Internal State of an LLM Knows When It's Lying, Amos Azaria et al., 26 Apr 2023
  An LLM's internal state can be used to reveal the truthfulness of statements (a classifier is trained on hidden states).
- [ICLR 2024] Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models, Mert Yuksekgonul et al., 26 Sep 2023
  Convert a QA problem into a constraint satisfaction problem (are the constraints in the question satisfied) and focus on the attention weights of each constraint when generating the first token.
- [EMNLP 2023] The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models, Aviv Slobodkin et al., 18 Oct 2023
  Investigate whether models already represent questions' (un)answerability when producing answers (yes).
- [ICLR 2024] INSIDE: LLMs' internal states retain the power of hallucination detection, Chao Chen et al., 6 Feb 2024
  1) Propose the EigenScore metric, which uses hidden states to better evaluate responses' self-consistency, and 2) truncate extreme activations in the feature space, which helps identify overconfident (consistent but wrong) hallucinations.
- [ACL 2024 Findings, MIND] Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models, Weihang Su et al., 11 Mar 2024
  Introduce MIND, an unsupervised training framework that leverages the internal states of LLMs for real-time hallucination detection without requiring manual annotations.
- [NAACL 2024] On Large Language Models' Hallucination with Regard to Known Facts, Che Jiang et al., 29 Mar 2024
  Investigate the phenomenon of LLMs possessing correct answer knowledge yet still hallucinating, from the perspective of inference dynamics.
- [Arxiv, FacLens] Hidden Question Representations Tell Non-Factuality Within and Across Large Language Models, Yanling Wang et al., 8 Jun 2024
  Studies non-factuality prediction (NFP) before response generation and proposes FacLens (a trained MLP) to improve the efficiency and transferability of NFP across different models (the first work on transferability in NFP).
Grey-box Investigation
These methods need access to the probabilities of generated tokens. Some other methods also rely on token probabilities; however, since they involve training, they are not placed in this category.
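As a concrete illustration of a grey-box signal, here is a minimal sketch that scores an answer by the average log-probability of its generated tokens, using Hugging Face `transformers`. GPT-2 is only a small stand-in model and the prompt is made up; real setups use the LLM under study.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small stand-in for a real LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs, max_new_tokens=8, do_sample=False,
        return_dict_in_generate=True, output_scores=True,
    )

# Average log-probability of the generated answer tokens as a confidence score.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
logprobs = [
    torch.log_softmax(step_logits, dim=-1)[0, tok].item()
    for step_logits, tok in zip(out.scores, gen_tokens)
]
confidence = sum(logprobs) / len(logprobs)          # higher = more confident
print(tokenizer.decode(gen_tokens), confidence)
```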
- [ICML 2017, Token-prob-based] On Calibration of Modern Neural Networks, Chuan Guo et al., 14 Jun 2017
  Investigate calibration in modern neural networks, propose the ECE metric, and propose improving calibration via temperature scaling (a minimal sketch of both follows this list).
- [ICLR 2023] Prompting GPT-3 To Be Reliable, Chenglei Si et al., 17 Oct 2022
  With appropriate prompts, GPT-3 is more reliable (under both consistency-based and probability-based confidence estimation) than smaller-scale supervised models.
- [ICLR 2023 Spotlight, Semantic Uncertainty] Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation, Lorenz Kuhn et al., 19 Feb 2023
  Introduce semantic entropy, an entropy that incorporates the linguistic invariances created by shared meanings.
- [ACL 2024] Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models, Abhishek Kumar et al., 25 May 2024
  Investigate the alignment between LLMs' internal confidence and verbalized confidence.
- [CCIR 2024] Are Large Language Models More Honest in Their Probabilistic or Verbalized Confidence? Shiyu Ni et al., 19 Aug 2024
  Conduct a comprehensive analysis and comparison of LLMs' probabilistic perception and verbalized perception of their factual knowledge boundaries.
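As referenced in the Guo et al. entry above, here is a minimal sketch of the Expected Calibration Error (ECE) and of temperature scaling. The binning scheme and the toy numbers are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities; correct: 1 if the prediction was right."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap                 # weight each bin by its frequency
    return ece

def temperature_scale(logits, temperature):
    """Dividing logits by a temperature > 1 softens (reduces) the confidence."""
    z = np.asarray(logits) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

# Toy example: overconfident predictions yield a non-zero ECE.
print(expected_calibration_error([0.9, 0.95, 0.8, 0.85], [1, 0, 1, 0]))
```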
Black-box Investigation
These methods only require access to the model’s text output.
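A minimal sketch of the consistency-based idea underlying several of the entries below: sample multiple answers and use their agreement with the main answer as a confidence signal. `ask_llm` is a hypothetical stand-in for any text-only LLM API, and exact-match agreement replaces the NLI or semantic-clustering checks used in the papers.

```python
def consistency_confidence(question, ask_llm, n_samples=5):
    """ask_llm(question, temperature) -> str is any black-box LLM call (hypothetical)."""
    main_answer = ask_llm(question, temperature=0.0)                 # greedy answer
    samples = [ask_llm(question, temperature=1.0) for _ in range(n_samples)]
    # Exact-match agreement; the papers below use NLI or semantic clustering instead.
    agreement = sum(
        s.strip().lower() == main_answer.strip().lower() for s in samples
    ) / n_samples
    return main_answer, agreement                                    # low agreement suggests hallucination
```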
- [EMNLP 2023, Selfcheckgpt] Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models, Potsawee Manakul et al., 15 Mar 2023
  The first to analyze hallucination in general LLM responses, and the first zero-resource hallucination detection solution that can be applied to black-box systems.
- [EMNLP 2023] Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback, Katherine Tian et al., 24 May 2023
  Conduct a broad evaluation of methods for extracting confidence scores from RLHF-LMs (a sketch of prompting for a verbalized confidence score follows this list).
- [ACL 2023 Findings] Do Large Language Models Know What They Don't Know? Zhangyue Yin et al., 29 May 2023
  Evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
- [ICLR 2024] Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs, Miao Xiong et al., 22 Jun 2023
  Explore black-box approaches for LLM uncertainty estimation. Define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency.
- [EMNLP 2023, SAC3] SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency, Jiaxin Zhang et al., 3 Nov 2023
  Extend self-consistency across perturbed questions and different models.
- [Arxiv] Large Language Model Confidence Estimation via Black-Box Access, Tejaswini Pedapati et al., 1 Jun 2024
  Engineer novel features and train an interpretable model (logistic regression) on them to estimate confidence. The features are values that capture the variability of the answers under different manipulations of the input prompt.
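As referenced in the Tian et al. entry above, here is a minimal sketch of prompting for a verbalized confidence score. The prompt wording and the parsing are illustrative assumptions, and `ask_llm` is again a hypothetical black-box LLM call.

```python
import re

# Hypothetical prompt; the exact wording used in the papers above differs.
CONFIDENCE_PROMPT = (
    "Question: {question}\n"
    "Give your best answer, then rate how confident you are that it is correct\n"
    "as a number between 0 and 100.\n"
    "Format:\nAnswer: <answer>\nConfidence: <number>"
)

def verbalized_confidence(question, ask_llm):
    """ask_llm(prompt) -> str is any black-box LLM call (hypothetical)."""
    reply = ask_llm(CONFIDENCE_PROMPT.format(question=question))
    answer = re.search(r"Answer:\s*(.+)", reply)
    conf = re.search(r"Confidence:\s*(\d+)", reply)
    return (
        answer.group(1).strip() if answer else reply.strip(),
        int(conf.group(1)) / 100 if conf else None,
    )
```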
Adaptive RAG
These methods focus directly on "when to retrieve", designing strategies and evaluating their effectiveness within Retrieval-Augmented Generation (RAG).
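The common skeleton behind most of the methods below is confidence-gated retrieval: answer directly when a confidence signal is high enough, otherwise retrieve and answer with context. A minimal sketch, with `ask_llm`, `retrieve`, and `estimate_confidence` as hypothetical stand-ins for the generators, retrievers, and confidence signals discussed in this list:

```python
def adaptive_rag_answer(question, ask_llm, retrieve, estimate_confidence, threshold=0.6):
    """Answer directly when confident enough; otherwise retrieve and answer with context.

    ask_llm, retrieve, and estimate_confidence are hypothetical stand-ins; the
    threshold is an illustrative choice, not a value from any specific paper.
    """
    if estimate_confidence(question) >= threshold:
        return ask_llm(f"Question: {question}\nAnswer:")     # rely on parametric knowledge
    docs = retrieve(question)                                # e.g. top-k passages from a search index
    context = "\n\n".join(docs)
    return ask_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```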
- [ACL 2023 Oral, Adaptive RAG] When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories (Adaptive RAG), Alex Mallen et al., 20 Dec 2022
  Investigate 1) when we should and should not rely on LMs' parametric knowledge and 2) how scaling and non-parametric memories (e.g., retrieval-augmented LMs) can help. Propose adaptive RAG based on entity popularity.
- [EMNLP 2023, FLARE] Active Retrieval Augmented Generation, Zhengbao Jiang et al., 11 May 2023
  Propose FLARE for long-form generation: iteratively use a prediction of the upcoming sentence to anticipate future content, which is then used as a query to retrieve relevant documents and regenerate the sentence if it contains low-confidence tokens (a simplified sketch of this trigger follows this list).
- [EMNLP 2023 Findings, SKR] Self-Knowledge Guided Retrieval Augmentation for Large Language Models, Yile Wang et al., 8 Oct 2023
  Investigate eliciting the model's ability to recognize what it knows and does not know, and propose Self-Knowledge guided Retrieval augmentation (SKR), which lets LLMs call retrieval adaptively.
- [ICLR 2024 Oral, Self-RAG] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, Akari Asai et al., 17 Oct 2023
  Propose a new framework that trains an arbitrary LM to learn to **retrieve, generate, and critique** (via generating special tokens) to enhance the factuality and quality of generations without hurting the versatility of LLMs.
- [Arxiv, Rowen, Enhanced SAC3] Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models, Hanxing Ding et al., 16 Feb 2024
  Introduce Rowen, which assesses the model's uncertainty about the input query by evaluating semantic inconsistencies among responses generated in different languages or by different models.
- [ACL 2024 Findings] When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation, Shiyu Ni et al., 18 Feb 2024
  1) Quantitatively measure LLMs' perception of their knowledge boundaries and confirm their overconfidence; 2) study how LLMs' certainty about a question correlates with their reliance on external retrieved information; 3) propose several prompting methods to enhance LLMs' perception of knowledge boundaries and show that they effectively reduce overconfidence; 4) equipped with these methods, LLMs achieve comparable or even better RAG performance with far fewer retrieval calls.
- [Arxiv, Position paper] Reliable, Adaptable, and Attributable Language Models with Retrieval, Akari Asai et al., 5 Mar 2024
  Advocate for retrieval-augmented LMs to replace parametric LMs as the next generation of LMs and propose a roadmap for developing general-purpose retrieval-augmented LMs.
- [ACL 2024 Oral, DRAGIN, Enhanced FLARE] DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models, Weihang Su et al., 15 Mar 2024
  Propose DRAGIN, focusing on 1) when to retrieve: consider the LLM's uncertainty about its own generated content, the influence of each token on subsequent tokens, and the semantic significance of each token; and 2) what to retrieve: construct the query from important words by leveraging the LLM's self-attention across the entire context.
- [EMNLP 2024 Findings, UAR] Unified Active Retrieval for Retrieval Augmented Generation, Qinyuan Cheng et al., 18 Jun 2024
  Propose Unified Active Retrieval (UAR), which consists of four orthogonal criteria for determining retrieval timing: intent-aware, knowledge-aware, time-sensitive-aware, and self-aware.
- [Arxiv, SEAKR] SEAKR: Self-aware Knowledge Retrieval for Adaptive Retrieval Augmented Generation, Zijun Yao et al., 27 Jun 2024
  Use the hidden states of the last generated tokens to measure LLMs' uncertainty and use this uncertainty to decide when to retrieve, how to re-rank the retrieved documents, and which reasoning strategy to choose.
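As referenced in the FLARE entry above, here is a simplified sketch of the low-confidence-token trigger: if any token of a draft sentence falls below a probability threshold, retrieval is triggered and the sentence is regenerated with the retrieved documents. The threshold value is an illustrative assumption.

```python
import math

def needs_retrieval(token_logprobs, threshold=0.4):
    """token_logprobs: log-probabilities of the tokens in a draft sentence."""
    return any(math.exp(lp) < threshold for lp in token_logprobs)

# Toy example: one token with probability ~0.2 triggers retrieval.
print(needs_retrieval([math.log(0.9), math.log(0.2), math.log(0.8)]))  # True
```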
Future updates will be made on GitHub: https://github.com/ShiyuNee/Awesome-When-To-Retrieve-Papers