Open-Source Can Be Dangerous: On the Vulnerability of Value Alignment in Open-Source LLMs
Jingwei Yi, Rui Ye, Qisi Chen, Bin Benjamin Zhu, Siheng Chen, Defu Lian, Guangzhong Sun, Xing Xie, Fangzhao Wu
ICLR 2024
Keywords: Large language model, Harmful, Alignment