Abstract: Some mainstream arbitrary style transfer models still struggle to preserve the saliency information and detailed features of content images, leading to content blurring and loss of detail in the generated images. To address these problems, this study proposes an arbitrary style transfer model that effectively preserves the detailed features of content images. The model flexibly fuses multi-layer image features, from shallow to deep, extracted by the encoder. A new feature fusion module is proposed that enables high-quality fusion of content and style features. In addition, a new loss function is proposed that preserves the global structure of both content and style while eliminating artifacts. Experimental results show that the proposed model effectively balances style and content, preserves the complete semantic information and detailed features of the content image, and generates stylized images with better visual quality.
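As background for the content–style fusion the abstract mentions: a widely used baseline for fusing content and style features in arbitrary style transfer is adaptive instance normalization (AdaIN), which re-aligns the channel-wise statistics of content features to those of style features. The paper's own fusion module is not described here, so the sketch below is only an illustrative baseline, not the proposed method; the function name and array shapes are assumptions.

```python
import numpy as np

def adain_fuse(content, style, eps=1e-5):
    """Illustrative AdaIN-style fusion (not the paper's module).

    content, style: feature maps of shape (C, H, W).
    Shifts each content channel to match the style channel's
    mean and standard deviation.
    """
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content statistics, then re-scale to style statistics.
    return s_std * (content - c_mu) / c_std + s_mu

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (4, 8, 8))   # hypothetical content features
style = rng.normal(2.0, 3.0, (4, 8, 8))     # hypothetical style features
fused = adain_fuse(content, style)
# The fused map inherits the style's channel-wise mean.
print(np.allclose(fused.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-3))  # → True
```

Because AdaIN discards the content's channel statistics outright, it can blur fine detail, which is precisely the limitation the abstract says the proposed fusion module and loss function are designed to overcome.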