06
Detecting Clickbait in Chinese Social Media by Prompt Learning
Empowering Low-Resource NLP with Part-of-Speech Enhanced Prompting



Description
This project detects misleading (clickbait) headlines in Chinese social media using prompt-learning techniques that remain effective with very little labeled data.
Keywords
Natural Language Processing
Prompt Learning
Social Media Analysis
Low-Resource Scenario
Year
2023






Technical Details
Part-of-Speech Enhanced Prompt Learning (PEPL) injects grammatical features into pre-trained language models to improve the accuracy and efficiency of clickbait detection. The method reformulates classification as a prompt-driven cloze task and selectively highlights part-of-speech cues, preserving contextual information while requiring far fewer labeled examples than conventional fine-tuning. By activating the latent knowledge already encoded in pre-trained language models rather than retraining them end to end, PEPL offers a scalable approach to content moderation and user-experience improvement on Chinese social media.
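To make the cloze reformulation concrete, here is a minimal sketch, not the paper's released code. It assumes bert-base-chinese as the backbone, an illustrative template and single-token label words 假/真 as the verbalizer, and jieba.posseg for the part-of-speech highlighting step; the actual PEPL template, verbalizer, and POS-integration mechanism may differ from this simplification.

```python
# Minimal sketch of prompt-based cloze classification with a simple
# POS-highlighting step. Backbone, template, verbalizer, and POS handling
# are illustrative assumptions, not the paper's exact design.
import torch
import jieba.posseg as pseg
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

# Illustrative verbalizer: one single-token label word per class.
LABEL_WORDS = {"clickbait": "假", "normal": "真"}

def pos_highlight(title: str, keep_tags=("a", "d", "e")) -> str:
    """Mark words whose POS tag starts with a kept prefix (adjectives,
    adverbs, interjections) so the prompt surfaces sensational wording."""
    pieces = []
    for word, tag in pseg.cut(title):
        if tag and tag[0] in keep_tags:
            pieces.append(f"[{word}]")  # simple textual highlighting
        else:
            pieces.append(word)
    return "".join(pieces)

def classify(title: str) -> str:
    """Score the [MASK] slot of a cloze template and return the label
    whose verbalizer token gets the higher logit."""
    prompt = f"{pos_highlight(title)}。这条新闻标题是{tokenizer.mask_token}的。"
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in LABEL_WORDS.items()
    }
    return max(scores, key=scores.get)

print(classify("震惊!你绝对想不到的真相"))
```

In a few-shot setting the template and verbalizer would normally be tuned (or learned) on the handful of available labeled examples rather than fixed by hand as above.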






Highlights
Robust Low-Resource Performance: Maintains high accuracy under extreme few-shot conditions.
Adaptive Prompt Design: Leverages part-of-speech insights to optimize context understanding.
Generalizable Framework: Demonstrates strong transferability across various pre-trained language models.
Published at 2023 IEEE CSCWD: Peer-reviewed work with demonstrated real-world applicability.
Credits
Appendix
Read our paper presented at CSCWD 2023
@INPROCEEDINGS{10152690,
  author    = {Wu, Yin and Cao, Mingpei and Zhang, Yueze and Jiang, Yong},
  booktitle = {2023 26th International Conference on Computer Supported Cooperative Work in Design (CSCWD)},
  title     = {Detecting Clickbait in Chinese Social Media by Prompt Learning},
  year      = {2023},
  pages     = {369-374},
  keywords  = {Training;Social networking (online);Federated learning;Computational modeling;Semantics;Collaboration;Tagging;clickbait;prompt learning;pre-trained language model;social media;low-resource scenario},
  doi       = {10.1109/CSCWD57460.2023.10152690}
}