DPO Meets PPO: Reinforced Token Optimization for RLHF. Han Zhong, Guhao Feng, Wei Xiong, Xinle Cheng, Li Zhao, Di He, Jiang Bian, Liwei Wang. CoRR (2024).