Abstract: The YOLOv8n algorithm exhibits suboptimal performance when dealing with complex backgrounds, dense targets, and small-sized objects with limited pixel information, leading to reduced precision, missed detections, and misclassification. To address these issues, this study proposes an algorithm, LNCE-YOLOv8n, for safety equipment detection. This algorithm includes a linear multi-scale fusion attention (LMSFA) mechanism, which adaptively focuses on key features to improve the extraction of information from small targets while reducing computational load. An architecture called C2f_New networks (C2f_NewNet) is also introduced, which maintains high performance and reduces depth through an effective parallelization design. Combined with a lightweight universal up-sampling operator, content-aware reassembly of features (CARAFE), the proposed algorithm achieves efficient cross-scale feature fusion and propagation and aggregates contextual information within a large receptive field. Building on the SIoU (symmetric intersection over union) loss function, this study proposes an enhanced SIoU (ESIoU) loss to improve the adaptability and accuracy of the model in complex environments. Tested on a safety equipment dataset, LNCE-YOLOv8n outperforms YOLOv8n, exhibiting a 5.1% increase in accuracy, a 2.7% rise in mAP50, and a 3.4% boost in mAP50-95, significantly enhancing the detection accuracy of safety equipment for workers in complex construction conditions.
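As background for the loss discussion, the sketch below shows the plain IoU quantity that SIoU-style bounding-box losses build on; the specific angle/shape penalty terms of SIoU and the paper's ESIoU refinement are not reproduced here, and the function names are illustrative, not from the paper.

```python
# Minimal sketch of plain IoU for axis-aligned boxes (x1, y1, x2, y2).
# SIoU/ESIoU add extra penalty terms on top of this base quantity.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty when the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(box_a, box_b):
    """IoU-based regression loss: 1 - IoU (before any SIoU-style penalties)."""
    return 1.0 - iou(box_a, box_b)
```

Identical boxes give a loss of 0, disjoint boxes a loss of 1; SIoU-family losses additionally penalize center-point angle and aspect-ratio mismatch between predicted and ground-truth boxes.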