Abstract: Clickbait refers to the use of sensational or exaggerated headlines to lure users into clicking, a practice that has proliferated in recent years across online platforms such as news portals and social media. This trend has led to user dissatisfaction and, in some cases, has facilitated online fraud. Large language models (LLMs), known for their robust natural language understanding and text generation capabilities, have demonstrated outstanding performance across various natural language processing tasks. However, when faced with specific challenges such as clickbait detection, where decision boundaries are often unclear, LLMs are prone to hallucination. To address this issue, a method based on a dual-layer multi-agent large language model is proposed, which significantly enhances clickbait detection accuracy without the need to fine-tune the entire model. Specifically, internal voting within each agent in the first layer and cross-voting among different agents in the second layer together yield improved detection performance. Validation against three benchmark datasets shows that the proposed method outperforms state-of-the-art large-scale models and prompt learning techniques by nearly 13% and 10% in accuracy, respectively.
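The dual-layer aggregation described in the abstract can be sketched as two stages of majority voting: each agent first votes internally over its own sampled responses, and the agents' per-agent decisions are then cross-voted into a final label. The sketch below is a minimal illustration under assumed names; the label strings, agent counts, and sampled outputs are hypothetical, and the paper's actual prompting and aggregation details are not reproduced here.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among a list of votes."""
    return Counter(labels).most_common(1)[0][0]

def layer_one(agent_samples):
    """First layer: each agent votes internally over its own sampled responses."""
    return [majority_vote(samples) for samples in agent_samples]

def layer_two(agent_decisions):
    """Second layer: cross-vote the per-agent decisions into a final label."""
    return majority_vote(agent_decisions)

# Hypothetical example: three agents, each sampling three responses
# for the same headline (labels are placeholders).
samples = [
    ["clickbait", "clickbait", "not_clickbait"],      # agent 1
    ["clickbait", "not_clickbait", "clickbait"],      # agent 2
    ["not_clickbait", "not_clickbait", "clickbait"],  # agent 3
]
decisions = layer_one(samples)  # per-agent internal votes
final = layer_two(decisions)    # cross-vote: "clickbait"
```

In this toy run, agents 1 and 2 each settle internally on "clickbait" and agent 3 on "not_clickbait", so the second-layer cross-vote returns "clickbait"; the two stages reduce the variance of any single sampled LLM response.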