Abstract: Cone-beam computed tomography (CBCT) is widely used in image-guided radiation therapy because it is integrated with modern linear accelerator systems. However, its image quality is inferior to that of computed tomography (CT), which complicates accurate treatment planning. This study proposes DDFGAN (dual-domain feature fusion generative adversarial network), a model that aims to raise CBCT image quality as close as possible to that of CT. The model adopts a dual-branch architecture: the first branch extracts multi-scale spatial-domain features with a receptive field block (RFB) module, while the second branch applies a frequency-domain feature extraction module designed specifically for CBCT-to-CT synthesis. By fusing the features from both branches, DDFGAN substantially improves CBCT image quality. In addition, the model incorporates a geometric consistency loss that converts the conventional bidirectional generative network into a unidirectional one, which better matches clinical workflows and substantially reduces training time. Experimental results show that DDFGAN outperforms four comparative methods, generating synthetic CT images with fewer artifacts and Hounsfield unit (HU) values closer to those of real CT, thereby improving the accuracy of adaptive radiation therapy.
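To make the abstract's two key design ideas concrete, the sketch below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: a dual-domain fusion block combining an RFB-style multi-scale spatial branch with an FFT-based frequency branch, and a geometry-consistency loss for one-directional translation that requires the generator to commute with a fixed geometric transform (here a horizontal flip, an assumed choice). All module names, kernel sizes, dilations, and the choice of transform are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the two ideas named in the abstract.
# Not the authors' code: module names, kernel sizes, dilations, and the
# geometric transform are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RFBLike(nn.Module):
    """Spatial branch: parallel dilated convolutions for multi-scale
    features, loosely modeled on a receptive field block (RFB)."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class FreqBranch(nn.Module):
    """Frequency branch: map features to the spectrum with a real 2-D FFT,
    filter the real/imaginary channels with a 1x1 convolution, invert."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(2 * ch, 2 * ch, 1)

    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum
        feat = self.conv(torch.cat([spec.real, spec.imag], dim=1))
        re, im = feat.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(re, im),
                                s=x.shape[-2:], norm="ortho")

class DualDomainFusion(nn.Module):
    """Fuse spatial- and frequency-domain features (concat + 1x1 conv)."""
    def __init__(self, ch):
        super().__init__()
        self.spatial, self.freq = RFBLike(ch), FreqBranch(ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.spatial(x), self.freq(x)], dim=1))

def geometry_consistency_loss(G, cbct):
    """One plausible geometry-consistency term for one-directional
    training: require the generator G to commute with a horizontal
    flip T, i.e. L_gc = || T(G(x)) - G(T(x)) ||_1."""
    flipped_output = torch.flip(G(cbct), dims=[-1])     # T(G(x))
    output_of_flipped = G(torch.flip(cbct, dims=[-1]))  # G(T(x))
    return F.l1_loss(flipped_output, output_of_flipped)
```

Under these assumptions, a generator built from `DualDomainFusion` blocks could be trained with an adversarial loss plus `geometry_consistency_loss`, which is what allows a single forward generator to replace the backward generator and cycle loss of a bidirectional setup.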