Computer Systems & Applications, 2018, Vol. 27, Issue (11): 192-197


Non-Convex Optimized Impulse Noise Removal Model with L1 Data Fidelity Term
CHEN Jing-Si1, LI Chun2,3
1. Big Data Research Institute of Yunnan Economy and Society, Yunnan University of Finance and Economics, Kunming 650221, China;
2. University of Chinese Academy of Sciences, Beijing 100049, China;
3. Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
Abstract: With the rapid development of digital image processing technology, image restoration has been widely applied in medicine, the military, public safety, and agro-meteorology. Drawing on the TVL1, ROF, Squares TVL1 (STVL1), and SHI models, this study proposes a non-convex, non-smooth model for removing impulse noise and solves it with the variable-splitting technique ADMM. Gradient-based methods are generally unsuitable for non-smooth optimization; half-quadratic and Iteratively Reweighted Least Squares (IRLS) algorithms cannot handle non-smooth functions that are non-differentiable at zero; and Graduated Non-Convexity (GNC) algorithms, which track the minimizer along a sequence of approximate energy functions, incur considerable computational cost. The non-convex, non-smooth term of the model is therefore handled by applying a multi-step convex relaxation method to the corresponding subproblem. Although this method reaches only a local optimum of the original non-convex problem, that local solution improves on the global solution of the initial convex relaxation, and since each stage is a convex optimization problem, the method is computationally efficient. A genetic algorithm was used to select the model parameters. Extensive experiments on different images and noise levels show that the proposed model outperforms the other three models in robustness, running time, ISNR, and PSNR, and that it preserves local image information with better visual quality.
Key words: image restoration; non-convex optimization; ADMM; image processing; multi-step convex relaxation optimization method

1 Overview of Image Restoration

 ${\bf{f}} = {\bf{Au}} + {\bf{n}}$ (1)

 ${\bf{u}} = \mathop {\arg \min }\limits_{\bf{u}} \left\{ {\left\| {{\bf{f}}} - {{\bf{Au}}} \right\|_2^2 + \lambda {\bf{R}}\left( {\bf{u}} \right)} \right\}$ (2)

 ${\bf{u}} = \mathop {\arg \min }\limits_{\bf{u}} \left\{ {{{\left\| {{\bf{f}}} - {{\bf{Au}}} \right\|}_{{{0 \; {\rm {or}} \; 1}}}} + \lambda {\bf{R}}\left( {\bf{u}} \right)} \right\}$ (3)
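The degradation model of Eq. (1) with impulse noise is easy to simulate. The sketch below is illustrative only (not the authors' code): it takes the operator ${\bf A}$ to be the identity (pure denoising) and draws salt-and-pepper corruption, replacing a random fraction of pixels with the extreme values 0 and 255; the function name `degrade` is our own.

```python
import numpy as np

def degrade(u, density=0.1, rng=None):
    """Simulate f = A u + n of Eq. (1) for impulse (salt-and-pepper) noise.

    A is taken as the identity here (pure denoising); n replaces a random
    fraction `density` of pixels with the extreme values 0 or 255.
    """
    rng = np.random.default_rng(rng)
    f = u.astype(float)
    mask = rng.random(u.shape) < density                  # corrupted locations
    f[mask] = rng.choice([0.0, 255.0], size=mask.sum())   # salt or pepper
    return f

u = np.full((64, 64), 128.0)        # flat test image
f = degrade(u, density=0.2, rng=0)
corrupted = np.mean(f != u)         # fraction of pixels actually changed
```

Because every corrupted pixel jumps to 0 or 255 rather than receiving small additive perturbations, the squared-L2 fidelity of Eq. (2) is a poor fit, which motivates the L0/L1 fidelity of Eq. (3).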

2 Related Work

 ${\hat{\bf u}} = \mathop {\arg \max }\limits_{\bf{u}} p\left( {{\bf{u}} | {\bf{f}}} \right)$ (4)

 $\begin{array}{l}{\hat{\bf u}} = \mathop {\arg \max }\limits_{\bf{u}} p\left( {{\bf{u}} | {\bf{f}}} \right)\\\;\; \;\; = \mathop {\arg \max }\limits_{\bf{u}} \displaystyle\frac{{p\left( {{\bf{f}} | {\bf{u}}} \right)p\left( {\bf{u}} \right)}}{{p\left( {\bf{f}} \right)}}\\\;\; \;\; = \mathop {{\mathop{\rm argmin}\nolimits} }\limits_{\bf{u}} - \log p\left( {{\bf{f}} | {\bf{u}}} \right) - \log p\left( {\bf{u}} \right)\\\;\; \;\; = \mathop {{\mathop{\rm argmin}\nolimits} }\limits_{\bf{u}} - \int\limits_\Omega {\log p\left( {{\bf{f}}\left( {\bf{x}} \right)|{\bf{u}}\left( {\bf{x}} \right)} \right)} d{\bf{x}} - \log p\left( {\bf{u}} \right)\end{array}$ (5)

 ${\hat{\bf u}} = \mathop {{ \arg \min } }\limits_{\bf{u}} {\left\| {{\bf{f}} - {\bf{u}}} \right\|_1} + p\left( {\bf{u}} \right)$ (6)

 $p\left( {\bf{u}} \right) = \exp \left( { - \gamma \int\limits_\Omega {\phi \left( {{\bf{u}}\left( {\bf{x}} \right)} \right)} \,d{\bf{x}}} \right)$ (7)

 ${\hat{\bf u}} = \mathop {{ \arg \min } }\limits_{\bf{u}} \left\{ {\frac{{{\gamma _1}}}{2}\left\| {{\bf{f}} - {\bf{Au}}} \right\|_2^2 + \frac{\alpha }{2}\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|_2^2 + {{\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|}_1}} \right\}$ (8)

 ${\hat{\bf u}} = \mathop {{\mathop{\rm argmin}\nolimits} }\limits_{\bf{u}} \left\{ {\frac{{{\gamma _1}}}{2}{{\left\| {{\bf{f}} - {\bf{Au}}} \right\|}_1}{\rm{ + }}\frac{\alpha }{2}\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|_2^2 + {{\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|}_1}} \right\}$ (9)

 ${\hat{\bf u}} = \mathop {{ \arg \min } }\limits_{\bf u} \left\{ {\begin{array}{*{20}{c}} {\displaystyle\frac{{{\gamma _1}}}{2}{{\left\| {{\bf{f}} - {\bf{Au}}} \right\|}_1} + \displaystyle\frac{{{\gamma _2}}}{2}\left\| {{\bf{f}} - {\bf{Au}}} \right\|_2^2} \\ { + \displaystyle\frac{\alpha }{2}\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|_2^2 + {{\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|}_1}} \end{array}} \right\}$ (10)

 ${\hat{\bf u}} = \mathop {{ \arg \min } }\limits_{\bf{u}} \left\{ {\lambda {{\left\| {{\bf{f}} - {\bf{Au}}} \right\|}_1} + \frac{\alpha }{2}\left\| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right\|_2^2 + p\left( {\bf{u}} \right)} \right\}$ (11)

 $p\left( {\bf{u}} \right) = \int\limits_\Omega {\phi \left( {\left| {\nabla {\bf{u}}\left( {\bf{x}} \right)} \right|} \right)} \,d{\bf{x}}$ (12)

 $\phi \left( t \right) = \arctan \left( {\frac{{1 + 2\rho \left| {{t}} \right|}}{{\sqrt 3 }}} \right) - \frac{\pi }{6}$ (13)
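The potential of Eq. (13) vanishes at the origin, grows strictly with $|t|$, and is bounded above by $\pi/3$ (since $\arctan$ is bounded by $\pi/2$); this boundedness is what makes the regularizer non-convex and edge-preserving. A quick numerical check, illustrative only, using the value $\rho = 0.32$ reported in Section 4.1:

```python
import numpy as np

def phi(t, rho=0.32):
    """Non-convex potential of Eq. (13): arctan((1 + 2*rho*|t|)/sqrt(3)) - pi/6."""
    return np.arctan((1 + 2 * rho * np.abs(t)) / np.sqrt(3)) - np.pi / 6

# phi(0) = arctan(1/sqrt(3)) - pi/6 = pi/6 - pi/6 = 0,
# and phi(t) increases monotonically toward the bound pi/2 - pi/6 = pi/3.
vals = phi(np.linspace(0.0, 100.0, 5))
```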
3 Solving the Model

 $\left\{\begin{array}{*{20}{l}} {\mathop {\min }\limits_{{\bf{u}}, {\bf{h}}, {\bf{d}}} \left\{ {\lambda \int\limits_\Omega {\left| {\bf{h}} \right|d{\bf{x}} + \displaystyle\frac{\alpha }{2}\left\| {\nabla {\bf{u}}} \right\|_2^2 + \int\limits_\Omega {\phi \left( {\left| {\bf{d}} \right|} \right)d{\bf{x}}} } } \right\}} \\ {{\rm{s}}{\rm{.t}}.\;\; {\bf{h}} = {\bf{f}} - {\bf{Au}},\nabla {\bf{u}} = {\bf{d}}} \end{array}\right.$ (14)

 $\begin{array}{l}\left( {{{\bf{u}}^{k + 1}},{{\bf{h}}^{k + 1}},{{\bf{d}}^{k + 1}}} \right){\rm{ = }}\\ \mathop {\arg \min }\limits_{{\bf{u}},{\bf{h}},{\bf{d}}} \left\{ {\begin{array}{*{20}{c}}{\lambda \int\limits_\Omega {\left| {\bf{h}} \right|d{\bf{x}} + \displaystyle\frac{\alpha }{2}\left\| {\nabla {\bf{u}}} \right\|_2^2 + \int\limits_\Omega {\phi \left( {\left| {\bf{d}} \right|} \right)d{\bf{x}}{\rm{ + }}} } }\\{\displaystyle\frac{{{\gamma _1}}}{2}\left\| {{\bf{h}} - \left( {{\bf{f}} - {\bf{Au}}} \right) - {\bf{b}}_1^k} \right\|_2^2 + \displaystyle\frac{{{\gamma _2}}}{2}\left\| {{\bf{d}} - \nabla {\bf{u}} - {\bf{b}}_2^k} \right\|_2^2}\end{array}} \right\}\end{array}$ (15)

 ${\bf{b}}_1^{k + 1} = {\bf{b}}_1^k + \left( {{\bf{f}} - {{\bf{Au}}^{k + 1}}} \right) - {{\bf{h}}^{k + 1}}$ (16)
 ${\bf{b}}_2^{k + 1} = {\bf{b}}_2^k + \nabla {{\bf{u}}^{k + 1}} - {{\bf{d}}^{k + 1}}$ (17)
3.1 The u-Subproblem

 ${{\bf{u}}^{k + 1}} \in \mathop {\arg \min }\limits_{\bf{u}} \left\{ {\begin{array}{*{20}{c}} {\displaystyle\frac{\alpha }{2}\left\| {\nabla u} \right\|_2^2 + \frac{{{\gamma _1}}}{2}\left\| {{\bf{h}} - \left( {{\bf{f}} - {\bf{Au}}} \right) - {\bf{b}}_1^k} \right\|_2^2} \\ { + \displaystyle\frac{{{\gamma _2}}}{2}\left\| {{{\bf{d}}^k} - \nabla {\bf{u}} - {\bf{b}}_2^k} \right\|_2^2} \end{array}} \right\}$ (18)

 $\left( {{\gamma _1}{{\bf{A}}^{\rm{T}}}{\bf{A}} - \left( {\alpha + {\gamma _2}} \right)\Delta } \right){\bf{u}}{\rm{ = }}{\gamma _1}{{\bf{A}}^{\rm{T}}}\left( {{\bf{f}} + {\bf{b}}_1^k - {\bf{h}}} \right) + {\gamma _2}{\rm{div}}\left( {{\bf{b}}_2^k - {{\bf{d}}^k}} \right)$ (19)

 ${{\bf{u}}^{k + 1}} = {\cal{F}^{ - 1}}\left( {\frac{{{\gamma _1}\cal{F}\left( {{{\bf{A}}^{\rm{T}}}\left( {{\bf{f}} + {\bf{b}}_1^k - {\bf{h}}} \right)} \right) + {\gamma _2}\cal{F}\left( {{\rm{div}}\left( {{\bf{b}}_2^k - {{\bf{d}}^k}} \right)} \right)}}{{{\gamma _1}\cal{F}\left( {{{\bf{A}}^{\rm{T}}}{\bf{A}}} \right) - \left( {\alpha + {\gamma _2}} \right)\cal{F}\left( {{\Delta }} \right)}}} \right)$ (20)
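For the pure-denoising case the u-subproblem reduces to one FFT solve per iteration. The sketch below is an illustration under stated assumptions, not the authors' implementation: ${\bf A} = {\bf I}$, periodic boundaries, a forward-difference gradient with matching backward-difference divergence, and the hypothetical function name `solve_u`. It solves the stationarity condition of Eq. (18), $(\gamma_1 {\bf I} - (\alpha+\gamma_2)\Delta){\bf u} = \gamma_1({\bf f}+{\bf b}_1-{\bf h}) + \gamma_2\,{\rm div}({\bf b}_2-{\bf d})$, by dividing in the Fourier domain.

```python
import numpy as np

def solve_u(f, h, d, b1, b2, alpha, g1, g2):
    """FFT solve of the u-subproblem, assuming A = I and periodic boundaries.

    d and b2 are (2, m, n) arrays holding the x/y components of the
    split gradient variable and its multiplier.
    """
    m, n = f.shape
    # Fourier symbol of the periodic 5-point Laplacian (non-positive values)
    wx = 2 * np.cos(2 * np.pi * np.fft.fftfreq(m))[:, None] - 2
    wy = 2 * np.cos(2 * np.pi * np.fft.fftfreq(n))[None, :] - 2
    lap_hat = wx + wy

    v = b2 - d
    # backward-difference divergence, adjoint-consistent with forward gradient
    div_v = (v[0] - np.roll(v[0], 1, 0)) + (v[1] - np.roll(v[1], 1, 1))
    rhs = g1 * (f + b1 - h) + g2 * div_v
    u_hat = np.fft.fft2(rhs) / (g1 - (alpha + g2) * lap_hat)  # denominator > 0
    return np.real(np.fft.ifft2(u_hat))
```

The denominator $\gamma_1 - (\alpha+\gamma_2)\hat\Delta$ is strictly positive because the Laplacian symbol is non-positive, so the division is always well defined.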
3.2 The h-Subproblem

The ${\bf{h}}$-subproblem can be solved by the following shrinkage operator:

 ${{\bf{h}}^{k + 1}} = shrink\left( {{\bf{f}} + {\bf{b}}_1^k - {\bf{A}}{{\bf{u}}^{k + 1}},\frac{\lambda }{{{\gamma _1}}}} \right)$ (21)

 $shrink\left( {{s_\alpha },{t_\alpha }} \right) = \frac{{{s_\alpha }}}{{\left| {{s_\alpha }} \right|}}\max \left( {\left| {{s_\alpha }} \right| - {t_\alpha },0} \right)$ (22)
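The shrinkage (soft-thresholding) operator of Eq. (22) has a one-line NumPy form; the only subtlety is the $s/|s|$ factor at $s=0$, which is conventionally set to 0. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def shrink(s, t):
    """Soft-thresholding of Eq. (22): (s/|s|) * max(|s| - t, 0), with 0 at s = 0."""
    mag = np.abs(s)
    # dividing by max(mag, tiny) keeps the s = 0 case at exactly 0
    return np.maximum(mag - t, 0) / np.maximum(mag, np.finfo(float).tiny) * s

out = shrink(np.array([3.0, -3.0, 0.5, 0.0]), 1.0)
```

With this operator, the h-update of Eq. (21) is simply `h = shrink(f + b1 - A @ u, lam / g1)` applied elementwise.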
3.3 The d-Subproblem

 $\mathop {\min }\limits_{\bf{d}} \left\{ {\int {\phi \left( {\left| {\bf{d}} \right|} \right)dx + \frac{{{\gamma _2}}}{2}\left\| {{\bf{d}} - {\bf{s}}} \right\|_2^2} } \right\}$ (23)

 ${{\bf{d}}^{k + 1,l + 1}} = shrink\left( {{\bf{s}},\frac{{{{\bf{v}}^{k + 1,l}}}}{{{\gamma _2}}}} \right)$ (24)

 ${{\bf{v}}^{k + 1,l + 1}} = \frac{1}{{{{\left( {1 + \rho \left| {{{\bf{d}}^{k + 1,l + 1}}} \right|} \right)}^2}}}$ (25)
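The multi-step convex relaxation for the d-subproblem alternates the two updates above: each stage solves a convex weighted-L1 problem by shrinkage (Eq. (24)), then refreshes the weight from the current iterate (Eq. (25)), so large gradients are penalized ever more weakly. A self-contained sketch under our assumptions (function names `solve_d`/`shrink` are ours; the number of inner stages is illustrative):

```python
import numpy as np

def shrink(s, t):
    """Soft-thresholding: (s/|s|) * max(|s| - t, 0), with 0 at s = 0."""
    mag = np.abs(s)
    return np.maximum(mag - t, 0) / np.maximum(mag, np.finfo(float).tiny) * s

def solve_d(s, gamma2, rho=0.32, inner=3):
    """Multi-step convex relaxation for the d-subproblem, Eqs. (24)-(25).

    Each stage is a convex weighted-L1 problem solved in closed form by
    shrinkage; the weight v is then re-derived from the current d.
    """
    v = np.ones_like(s)                        # initial weight: plain TV stage
    d = np.zeros_like(s)
    for _ in range(inner):
        d = shrink(s, v / gamma2)              # convex stage, Eq. (24)
        v = 1.0 / (1.0 + rho * np.abs(d))**2   # weight update, Eq. (25)
    return d
```

Because the first stage uses $v = 1$, its output is exactly the solution of the initial convex (TV) relaxation; the later stages can only improve on it, which is the multi-stage guarantee cited in the abstract.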

1) Initialization: set $k=0$; choose $\lambda ,\alpha ,{\gamma _1},{\gamma _2} > 0$; set ${{\bf{u}}^0} ={\bf{f}}$, ${{\bf{h}}^0} = {\bf{b}}_1^0 = 0$, ${{\bf{d}}^0} = {\bf{b}}_2^0 = 0$, ${\bf{v}} = 1$;

While not converged do

2) Compute ${{\bf{u}}^{k + 1}}$ by Eq. (20), using the FFT or Gauss-Seidel iteration;

3) Compute ${{\bf{h}}^{k + 1}}$ by Eq. (21);

4) Compute ${{\bf{d}}^{k + 1}}$ by Eq. (24);

5) Update the Lagrange multipliers:

${\bf{b}}_1^{k + 1} = {\bf{b}}_1^k + \left( {{\bf{f}} - {\bf{A}}{{\bf{u}}^{k + 1}}} \right) - {{\bf{h}}^{k + 1}}$,

${\bf{b}}_2^{k + 1} = {\bf{b}}_2^k + \nabla {{\bf{u}}^{k + 1}} - {{\bf{d}}^{k + 1}}$.

End while
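The whole algorithm can be sketched compactly for the denoising case. Everything here is an illustration under stated assumptions (${\bf A} = {\bf I}$, periodic boundaries, forward/backward finite differences; the function names and iteration counts are ours, and the default parameters are the values reported in Section 4.1):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with periodic boundaries, shape (2, m, n)."""
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def div(p):
    """Backward-difference divergence, the negative adjoint of grad."""
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def shrink(s, t):
    mag = np.abs(s)
    return np.maximum(mag - t, 0) / np.maximum(mag, np.finfo(float).tiny) * s

def denoise(f, lam=3.0, alpha=1.7, g1=15.0, g2=0.3, rho=0.32, iters=50, inner=3):
    """ADMM loop of Section 3 for the pure-denoising case (A = I assumed)."""
    m, n = f.shape
    lap_hat = (2 * np.cos(2 * np.pi * np.fft.fftfreq(m))[:, None] - 2 +
               2 * np.cos(2 * np.pi * np.fft.fftfreq(n))[None, :] - 2)
    u, h, b1 = f.astype(float), np.zeros_like(f), np.zeros_like(f)
    d = np.zeros((2, m, n))
    b2 = np.zeros_like(d)
    for _ in range(iters):
        # u-step: FFT solve of (g1*I - (alpha+g2)*Laplacian) u = rhs
        rhs = g1 * (f + b1 - h) + g2 * div(b2 - d)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (g1 - (alpha + g2) * lap_hat)))
        # h-step: shrinkage, Eq. (21)
        h = shrink(f + b1 - u, lam / g1)
        # d-step: multi-step convex relaxation, Eqs. (24)-(25)
        s, v = grad(u) + b2, np.ones_like(d)
        for _ in range(inner):
            d = shrink(s, v / g2)
            v = 1.0 / (1.0 + rho * np.abs(d))**2
        # multiplier updates, Eqs. (16)-(17)
        b1 += (f - u) - h
        b2 += grad(u) - d
    return u
```

The split variable h absorbs the sparse impulse residual f − u, which is why the L1 fidelity handles salt-and-pepper outliers that a squared-L2 term would smear.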

4 Experimental Analysis

 $PSNR = 20{\log _{10}}\frac{{255\sqrt {mn} }}{{{{\left\| {\tilde u - u} \right\|}_2}}}$ (26)
 $ISNR = 10{\log _{10}}\frac{{\sum\limits_{i,j} {{{\left( {{u_{i,j}} - {f_{i,j}}} \right)}^2}} }}{{\sum\limits_{i,j} {{{\left( {{u_{i,j}} - {{\tilde u}_{i,j}}} \right)}^2}} }}$ (27)
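Both metrics are one-liners; the sketch below uses the standard definitions for 8-bit images, with PSNR written equivalently as $10\log_{10}(255^2 mn / \|\tilde u - u\|_2^2)$ (function names are ours):

```python
import numpy as np

def psnr(u_hat, u):
    """PSNR in dB for 8-bit images: 20*log10(255*sqrt(m*n) / ||u_hat - u||_2)."""
    err = np.linalg.norm(u_hat - u)
    return 20 * np.log10(255 * np.sqrt(u.size) / err)

def isnr(u, f, u_hat):
    """ISNR in dB: how much the restoration u_hat improves on the observation f."""
    return 10 * np.log10(np.sum((u - f)**2) / np.sum((u - u_hat)**2))
```

A positive ISNR means the restored image is closer to the clean image than the noisy observation was; PSNR compares the restoration against the clean image directly.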
4.1 Parameter Selection

 $\left\{ {\begin{aligned}& {P = \left\{ {{p_1},{p_2},\cdots,{p_n}} \right\}}\\& {O\left( {a,b,c,d} \right) = \sum\limits_{p \in P} {ISNR\left( {M\left( {p|a,b,c,d} \right)} \right)} }\end{aligned}} \right.$ (28)

 $\alpha = 1.7,{\gamma _1} = 15,{\gamma _2} = 0.3,\lambda = 3,\rho = 0.32.$
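The paper does not spell out its genetic-algorithm operators, so the sketch below only illustrates the idea of Eq. (28): treat the four parameters as a genome, score a candidate by total ISNR over a training set, and evolve toward higher fitness. We substitute a toy (1+λ) evolution strategy with multiplicative (log-normal) mutation, which keeps the parameters positive; `fitness`, `evolve`, and the callables `restore`/`isnr` are all our own assumptions.

```python
import numpy as np

def fitness(params, training_pairs, restore, isnr):
    """Objective O of Eq. (28): total ISNR of the restored training images."""
    return sum(isnr(u, f, restore(f, *params)) for u, f in training_pairs)

def evolve(training_pairs, restore, isnr, pop=8, gens=20, sigma=0.2, rng=0):
    """Toy (1+lambda) evolution strategy standing in for the paper's GA.

    Evolves (alpha, gamma1, gamma2, lambda) to maximise Eq. (28); the GA
    operators of the paper are not specified, so this is only a sketch.
    """
    rng = np.random.default_rng(rng)
    best = np.array([1.7, 15.0, 0.3, 3.0])   # start near the reported values
    best_fit = fitness(best, training_pairs, restore, isnr)
    for _ in range(gens):
        # multiplicative mutation keeps all four parameters positive
        kids = best * np.exp(sigma * rng.standard_normal((pop, 4)))
        for k in kids:
            fk = fitness(k, training_pairs, restore, isnr)
            if fk > best_fit:
                best, best_fit = k, fk
    return best
```

In practice `restore` would run the full ADMM solver of Section 3 on each training image, which makes each fitness evaluation expensive; that cost is why the parameter search is done offline, once.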

 Fig. 1 Impulse noise removal results of the proposed model

5 Conclusion and Future Work
