Abstract: As the carrier of Chinese culture, Chinese characters are distinguished from other scripts by their complex structure. Strokes, the basic units of Chinese characters, play a vital role in evaluating handwritten Chinese characters, and correct stroke extraction is the first step in such evaluation. Most existing stroke extraction methods are rule-based; owing to the complexity of Chinese characters, these rules rarely cover all structural features and cannot match extracted strokes to the strokes of template characters using stroke order or other information during evaluation. To address these issues, this study formulates stroke extraction as a multi-label semantic segmentation problem and proposes a multi-label semantic segmentation model (M-TransUNet), which trains a deep convolutional model on whole Chinese characters as a single task, preserving the original structure of the strokes and avoiding ambiguity in combining stroke segments. At the same time, the stroke order of the handwritten characters is obtained, which benefits downstream tasks such as stroke evaluation. Because handwriting images are divided only into foreground and background, with no additional color information, they are prone to false-positive (FP) segmentation noise. To solve this problem, this study also proposes a local smooth strategy on strokes (LSSS) that post-processes the stroke segmentation results to dilute the impact of such noise. Finally, this study conducted experiments on the segmentation performance and efficiency of M-TransUNet, demonstrating that the algorithm significantly improves efficiency with minimal performance loss, and further experiments on the LSSS algorithm demonstrate its effectiveness in eliminating FP noise.
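To make the multi-label formulation concrete, the sketch below shows how per-stroke segmentation can be expressed in PyTorch: each output channel is an independent binary mask for one stroke class, so overlapping strokes can coexist at the same pixel. This is a minimal illustration under stated assumptions, not the paper's M-TransUNet architecture; the tiny stand-in network, the class count `NUM_STROKE_CLASSES`, and the loss choice are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical number of stroke categories; the paper's actual label set may differ.
NUM_STROKE_CLASSES = 25


class TinyStrokeSegmenter(nn.Module):
    """Stand-in encoder-decoder illustrating a multi-label segmentation head.

    The real M-TransUNet is a TransUNet-style deep model; only the output
    formulation (one channel per stroke class) is shown here.
    """

    def __init__(self, num_classes: int = NUM_STROKE_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Multi-label head: one logit map per stroke class (no softmax across
        # classes), so strokes that cross each other can both be active.
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))  # logits of shape (B, C, H, W)


model = TinyStrokeSegmenter()
image = torch.rand(1, 1, 128, 128)           # binarized handwriting image (foreground/background only)
logits = model(image)
stroke_masks = torch.sigmoid(logits) > 0.5   # independent binary mask per stroke class

# Per-channel binary cross-entropy is a common loss for multi-label segmentation.
criterion = nn.BCEWithLogitsLoss()
dummy_target = torch.zeros_like(logits)
loss = criterion(logits, dummy_target)
```

Because each channel is thresholded independently, a post-processing step such as the paper's LSSS can then smooth each stroke mask locally to suppress FP noise before evaluation.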