Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
NeurIPS 2023
Keywords
Natural language processing, large language models, XAI, explainability