Abstract: To address the problems of blurred boundaries, unclear texture, and poor visual quality in inpainted images, we propose a generative adversarial inpainting model that combines edge detection with a self-attention mechanism. In this model, edge detection extracts the contour information of an image, which prevents blurred boundaries after inpainting. Because the self-attention mechanism can capture the global information of an image and generate precise details, we design a texture inpainting network that incorporates it. The proposed model consists of an edge completion network and a texture inpainting network. First, the edge completion network completes the edges of a damaged image to obtain a completed edge map. Second, the texture inpainting network, guided by the completed edge map, accurately inpaints the texture of the missing region. Finally, the model is trained and tested on the CelebA and Places2 datasets. The experimental results show that, compared with existing image inpainting methods, the proposed model markedly improves inpainting accuracy and generates visually realistic images.
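As a rough illustration of the self-attention component mentioned above, the following is a minimal PyTorch sketch of a SAGAN-style self-attention block of the kind that could be inserted into the texture inpainting generator. The class name, channel reduction factor, and learnable residual weight are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over 2D feature maps (illustrative sketch)."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable scalar that gates the attention output (starts at 0,
        # so training begins from the plain convolutional features).
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # (b, hw, c//8)
        k = self.key(x).view(b, -1, h * w)                      # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)               # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                    # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                             # residual connection
```

In a two-stage pipeline of the kind described in the abstract, such a block would typically be placed in the middle (bottleneck) layers of the texture inpainting generator, where the feature-map resolution is low enough for the hw-by-hw attention matrix to be affordable while still letting distant regions of the image inform the filled-in texture.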