Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model
Wenhong Zhu, Zhiwei He, Xiaofeng Wang, Pengfei Liu, Rui Wang
ICLR 2025 (2025)