BASIC EDGES METRICS FOR IMAGE DEBLURRING
A.V. Nasonov, A.S. Krylov

Laboratory of Mathematical Methods of Image Processing, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, nasonov@cs.msu.ru, kryl@cs.msu.ru

The paper presents a new adaptive full-reference method for the quality measurement of image deblurring algorithms. The method is based on the analysis of basic edges -- sharp edges which are distant from other edges. The proposed basic edges metrics calculates error values in two areas related to typical weak points of image deblurring algorithms: the basic edges area and the basic edges neighborhood. The unsharp mask deblurring method is used to demonstrate the proposed metrics.

Introduction

The development of image metrics is important for the objective analysis of image processing algorithms. Image metrics are used to numerically evaluate and compare the results of different image restoration, image enhancement and image compression algorithms. There are different approaches to image metrics [1]. The simplest approaches are based on per-pixel error calculation -- the mean square error (MSE), the root of MSE (RMSE) and the peak signal-to-noise ratio (PSNR):

MSE(u, v) = \frac{1}{WH} \sum_{i,j} (u_{i,j} - v_{i,j})^2,

RMSE(u, v) = \sqrt{MSE(u, v)},

PSNR(u, v) = 10 \log_{10} \frac{MAX_I^2}{MSE(u, v)},

where MAX_I is the maximum possible intensity value (usually set to 255). These metrics are inconsistent with human eye perception. To make image metrics well correlated with human perception, models of the human visual system (HVS) are used [2, 3]. HVS modeling includes mathematical models for several phenomena like luminance and contrast masking effects and the contrast sensitivity function.
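
For reference, the three per-pixel metrics above can be computed directly; the following is a minimal NumPy sketch (the default MAX_I value of 255 and the float conversion are choices of the sketch, not of the paper):

    import numpy as np

    def mse(u, v):
        # Mean square error over all W*H pixels.
        return np.mean((u.astype(np.float64) - v.astype(np.float64)) ** 2)

    def rmse(u, v):
        # Root of the mean square error.
        return np.sqrt(mse(u, v))

    def psnr(u, v, max_i=255.0):
        # Peak signal-to-noise ratio in decibels; max_i is the maximum intensity value.
        return 10.0 * np.log10(max_i ** 2 / mse(u, v))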

In this paper we develop a full-reference metrics to estimate the quality of image deblurring algorithms. The following images are given: z -- the ground truth artifact-free image, w -- a blurred and noisy observation of the image z, and z^(k) -- deblurred images obtained by different image deblurring algorithms with different parameters. The goal is to choose the image z^(k) which is the most similar to z.

There are metrics for blur level estimation [4, 5], but these metrics provide information only about image sharpness, not about image quality. It is possible to apply perceptual image metrics to compare z with z^(k), but such metrics do not focus on the weak points of deblurring algorithms and do not provide detailed information about image quality in areas of different kinds, such as edge areas and texture areas. If one method has a better metrics value than another, it does not mean that the former method is better than the latter in all image areas. The key idea of the proposed image metrics for image deblurring is to choose the areas related to the weak points of image deblurring algorithms and to calculate the perceptual metrics in these areas.

The work was supported by RFBR grant 10-01-00535 and by a grant of the Human Capital Foundation.


Basic Edges

We consider the areas related to typical artifacts of image deblurring algorithms: the blur artifact, which appears in edge areas, and the ringing (overshooting) artifact, which appears in the edge neighborhood. These areas are illustrated in fig. 1.

Fig. 1. Edge area and edge neighborhood.

We define the edge area as the set of points whose distance to the closest edge point is less than r. The edge neighborhood is formed by the points whose distance to the closest edge point lies between r and R = rs, where the parameter s is defined by the number of ringing oscillations. Since most deblurring algorithms do not add more than two oscillations, we use s = 5.

We need to choose the edges of the reference image z which are suitable for the analysis of deblurring algorithms. There are two effects observed on blurred images:
1. Masking effect. If an edge with a low gradient value is located near an edge with a high gradient value, it will disappear after image blurring.
2. Edge displacement. If two edges with the same or close gradient values are located near each other, they will be displaced after image blurring.
These effects are illustrated in fig. 2.

Fig. 2. The effects of edge masking and edge displacement. Left: original edge profiles; right: gradient values with marked local maxima. Top row: low blur; middle row: medium blur; bottom row: strong blur.

To avoid these effects, we propose the concept of basic edges -- the set of edge points which satisfy the following conditions:
1. The point is not masked by nearby edges.
2. The distance from the edge point to the closest other edge is greater than a predefined threshold r_T.
3. The gradient value is greater than a given threshold.
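
To make the r and R = rs partition above concrete, here is a short NumPy/SciPy sketch that splits the pixels around a binary edge map into the edge area and the edge neighborhood; the function name and the boolean-mask interface are illustrative, not taken from the paper:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def edge_partition(edge_map, r, s=5):
        # Distance from every pixel to the closest edge pixel (edge_map is boolean).
        dist = distance_transform_edt(~edge_map)
        R = r * s                                      # neighborhood outer radius, R = r*s
        edge_area = dist < r                           # points closer than r to an edge
        edge_neighborhood = (dist >= r) & (dist <= R)  # ringing zone between r and R
        return edge_area, edge_neighborhood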

Basic Edges Detection

The proposed basic edges detection method is based on the Canny edge detection method [6] and consists of the following steps:
1. Edge detection and edge masking.
2. Finding edge points distant from other edges.
3. Finding the edge neighborhood.

At the first step, we calculate the gradient modulus field g_{i,j} and apply non-maximum suppression as in the Canny method. Then we apply the masking and threshold rules:

g_{i_0, j_0} \ge \max_{i,j} g_{i,j} \, \varphi\left(\sqrt{(i - i_0)^2 + (j - j_0)^2}\right), \qquad g_{i_0, j_0} \ge g_{\min},

and set to zero the values g_{i_0, j_0} at all points where these rules are not satisfied. The obtained gradient field contains only non-masked edges, which form the set of edge points E = \{(i, j) : g_{i,j} \ne 0\}.

The function φ(d) is chosen under the assumption of the Gaussian blur model. We consider a one-dimensional image with a single step edge

f(x) = \begin{cases} 0, & x < 0, \\ 1, & x \ge 0, \end{cases}

blur it by the Gaussian filter H with radius σ and calculate the gradient modulus:

\left| (Hf)'(x) \right| = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2 / (2\sigma^2)}.

According to this formula, we use

\varphi(x) = h\, e^{-x^2 / (2\sigma^2)}, \qquad h = 1/2.
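
The masking rule of the first step can be prototyped by brute force. The sketch below assumes the gradient field g has already passed non-maximum suppression; the 3σ search window, the function names and the default h = 1/2 are simplifications introduced here:

    import numpy as np

    def phi(d, sigma, h=0.5):
        # Masking profile phi(d) = h * exp(-d^2 / (2 sigma^2)).
        return h * np.exp(-d * d / (2.0 * sigma * sigma))

    def apply_masking(g, sigma, g_min):
        # g: gradient modulus field after non-maximum suppression.
        # Brute force over candidate pixels; phi is negligible beyond ~3*sigma.
        radius = int(np.ceil(3.0 * sigma))
        out = np.zeros_like(g)
        rows, cols = np.nonzero(g > 0)
        for i0, j0 in zip(rows, cols):
            i_lo, i_hi = max(i0 - radius, 0), min(i0 + radius + 1, g.shape[0])
            j_lo, j_hi = max(j0 - radius, 0), min(j0 + radius + 1, g.shape[1])
            ii, jj = np.mgrid[i_lo:i_hi, j_lo:j_hi]
            d = np.sqrt((ii - i0) ** 2 + (jj - j0) ** 2)
            masked_max = (g[i_lo:i_hi, j_lo:j_hi] * phi(d, sigma)).max()
            if g[i0, j0] >= masked_max and g[i0, j0] >= g_min:
                out[i0, j0] = g[i0, j0]
        return out  # nonzero pixels form the edge set E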

At the second step, we use mathematical morphology to find the edge points whose distance to other edges is greater than r_T, by the following algorithm:
1. We perform morphological erosion of the non-edge points area with a circular structuring element with a radius of r_T/2 pixels. If an edge is more distant than r_T pixels from other edges, the eroded area lies on both sides of the edge at the distance r_T/2 from the edge.
2. Next we dilate the eroded area by r_T/2 + ε pixels. The parameter ε ≥ 0 is used to take into account the nonzero edge thickness and to make the algorithm stable to small errors in edge detection. We use ε = 2.
3. Finally, we erode the dilated area by 2ε pixels.
The intersection between the obtained area and the edge points set E is the basic edge points set E_B. This algorithm is illustrated in fig. 3.
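
A possible SciPy implementation of this morphological selection of basic edge points is sketched below; the disk helper and the exact rounding of the radii are choices of the sketch:

    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    def disk(radius):
        # Circular structuring element of the given radius (an assumption of this sketch).
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        return x * x + y * y <= radius * radius

    def basic_edge_points(E, r_T, eps=2):
        # Step 1: erode the non-edge area with a disk of radius r_T / 2.
        eroded = binary_erosion(~E, structure=disk(int(round(r_T / 2))))
        # Step 2: dilate the eroded area by r_T / 2 + eps.
        dilated = binary_dilation(eroded, structure=disk(int(round(r_T / 2 + eps))))
        # Step 3: erode the dilated area by 2 * eps.
        area = binary_erosion(dilated, structure=disk(int(round(2 * eps))))
        # Basic edge points: intersection of the obtained area with the edge set E.
        return E & area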

Fig. 3. Finding basic edge points: a) input image; b) edge detection result: white edges are non-masked edges, blue edges are masked edges.

At the third step, we find the edge neighborhood using the Euclidean distance transform (EDT). The distance transform (DT) is an operator widely used in computer vision and geometry. It finds for each image pixel its smallest distance to the region of interest M:

\rho(p, M) = \min_{q \in M} \rho(p, q).

For the Euclidean distance \rho(p, q) = \sqrt{(p_x - q_x)^2 + (p_y - q_y)^2} there are efficient algorithms for EDT calculation [7].

We find the set S = \{ p : \rho(p, E) \le R \}. Then for every point p we check whether the closest edge point to p is a basic edge point, ρ(p, E) = ρ(p, E_B), and whether p belongs to the set S, i.e. ρ(p, E) ≤ R. The set M_E is formed by the points p which satisfy both conditions.

Basic Edges Areas

The set M_E contains both the basic edges area (BEA) and the basic edges neighborhood (BEN). To separate BEA points from BEN points, we simply compare the distance ρ(p, E_B) with r. The algorithm to find the BEA and BEN areas is illustrated in fig. 4.

Fig. 4. Finding basic edges areas: c) finding basic edges: white edges are basic edges, red edges are non-basic non-masked edges; d) BEA (white) and BEN (gray) areas.
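
The third step and the BEA/BEN separation map naturally onto SciPy's Euclidean distance transform. The sketch below is illustrative; in particular, the equality check ρ(p, E) = ρ(p, E_B) is implemented with a numerical tolerance, which is a choice of the sketch:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def bea_ben_masks(E, E_B, r, R):
        # Distance to the closest edge point and to the closest basic edge point.
        d_E = distance_transform_edt(~E)
        d_EB = distance_transform_edt(~E_B)
        # Points whose nearest edge point is a basic edge point, within the radius R.
        near_basic = np.isclose(d_E, d_EB) & (d_E <= R)
        bea = near_basic & (d_EB < r)                  # basic edges area
        ben = near_basic & (d_EB >= r) & (d_EB <= R)   # basic edges neighborhood
        return bea, ben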

Application to Image Deblurring

We consider the Gaussian blur model with a known parameter σ. In this case, we use the following parameters for basic edges: r = σ/2, R = 2σ, r_T = 3σ.

We use the SSIM (structural similarity) metrics [8] to compare images in the BEA and BEN areas:

SSIM(u, v) = \frac{(2\mu_u \mu_v + c_1)(2\sigma_{uv} + c_2)}{(\mu_u^2 + \mu_v^2 + c_1)(\sigma_u^2 + \sigma_v^2 + c_2)},

where μ_u and μ_v are the means of u and v respectively, σ_u^2 and σ_v^2 are the variances, σ_uv is the covariance of u and v, c_1 = (0.01 L)^2, c_2 = (0.03 L)^2, and L is the dynamic range of pixel values (usually L = 255).
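
One possible way to evaluate SSIM only inside the BEA or BEN mask is to compute the full SSIM map with scikit-image and average it over the mask; this masking strategy is an assumption of the sketch, not the paper's exact procedure:

    from skimage.metrics import structural_similarity

    def masked_ssim(u, v, mask, data_range=255):
        # Full SSIM map for grayscale images, averaged only over the pixels of the mask.
        _, ssim_map = structural_similarity(u, v, data_range=data_range, full=True)
        return float(ssim_map[mask].mean())

    # Usage: ssim_bea = masked_ssim(z, z_k, bea); ssim_ben = masked_ssim(z, z_k, ben)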


The effectiveness of the suggested method was verified with the unsharp mask deblurring method for a wide class of test images. The main idea of the unsharp mask is to amplify the high-frequency information:

z_\alpha = Hu + (1 + \alpha)(u - Hu),   (1)

where H is the Gaussian filter with the same σ that was used for the image blur and α is the amplification parameter. An application of the basic edges metrics to image deblurring using the unsharp mask is shown in fig. 5. It can be seen that there is no α that maximizes SSIM in both the BEA and BEN areas simultaneously.

Fig. 5. Application of the basic edges metrics to the unsharp mask method: a) original image; b) blurred image, SSIM(BEA) = 0.763, SSIM(BEN) = 0.970; c) unsharp mask (1) with α = 7, SSIM(BEA) = 0.960, SSIM(BEN) = 0.893; d) unsharp mask with α = 1.4, SSIM(BEA) = 0.853, SSIM(BEN) = 0.990. The values of α which maximize SSIM in the BEA and BEN areas were found.
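
A minimal sketch of the unsharp mask (1), with SciPy's gaussian_filter standing in for the Gaussian filter H (the correspondence between its sigma argument and the blur radius σ, as well as the float cast, are assumptions of the sketch):

    from scipy.ndimage import gaussian_filter

    def unsharp_mask(u, sigma, alpha):
        # Equation (1): z_alpha = Hu + (1 + alpha) * (u - Hu), H being a Gaussian filter.
        hu = gaussian_filter(u.astype(float), sigma)
        return hu + (1.0 + alpha) * (u - hu)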

To improve the results of the unsharp mask, we use a combination of the results of the unsharp mask method for different values of α, based on the distance to the edge points set E:

z_s(p) = z_1(p)\, s(\rho(p, E)) + z_2(p)\, (1 - s(\rho(p, E))),   (2)

where z_1 and z_2 are the results of the unsharp mask (1) with the parameters α_1 and α_2, and s(d) is the combining function. We choose as α_1 the parameter of the unsharp mask that maximizes SSIM in the BEA area and as α_2 the parameter that maximizes SSIM in the BEN area. The obvious choice s(d) = 1 for d ≤ r and s(d) = 0 for d > r shows good metrics values but is unacceptable because of its discontinuity. To create seamless images, we use

s(d) = \begin{cases} 1, & d \le r, \\ (R - d)/(R - r), & r < d \le R, \\ 0, & d > R. \end{cases}

The result for the images from fig. 5 is shown in fig. 6.

Fig. 6. Application of the basic edges metrics to the combined unsharp mask method (2): SSIM(BEA) = 0.960, SSIM(BEN) = 0.954.
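
The distance-based blending (2) with the piecewise-linear combining function s(d) can be sketched as follows; z1 and z2 are assumed to be the two unsharp mask results computed beforehand:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def combined_unsharp(z1, z2, E, r, R):
        # Distance of every pixel to the edge points set E (E is a boolean edge map).
        d = distance_transform_edt(~E)
        # Piecewise-linear combining function s(d): 1 inside r, 0 beyond R.
        s = np.clip((R - d) / (R - r), 0.0, 1.0)
        # Equation (2): blend the two unsharp mask results.
        return z1 * s + z2 * (1.0 - s)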

Conclusion

Full-reference basic edges metrics for the quality measurement of image deblurring algorithms have been proposed. An effective algorithm to find the basic edges area and the basic edges neighborhood has been developed. Application to the unsharp mask deblurring method has shown that the proposed metrics can be used to improve image deblurring algorithms.

References
1. A. J. Ahumada. Computational image quality metrics: a review // SID Digest, pp. 305-308, 1993.
2. A. Ninassi, O. Le Meur, P. Le Callet, D. Barba. On the performance of human visual system based image quality assessment metric using wavelet domain // SPIE Human Vision and Electronic Imaging XIII (HVEI'08), 2009.
3. G. Ginesu, F. Massidda, D. Giusto. A multi-factors approach for image quality assessment based on a human visual system model // Signal Processing: Image Communication, vol. 21, pp. 316-333, 2006.
4. R. Ferzli, L. J. Karam. Human visual system based no-reference objective image sharpness metric // 2006 IEEE International Conference on Image Processing (ICIP), pp. 2949-2952.
5. P. Marziliano, F. Dufaux, S. Winkler, T. Ebrahimi. Perceptual blur and ringing metrics: application to JPEG2000 // Signal Processing: Image Communication, vol. 19, no. 2, pp. 163-172, 2004.
6. J. Canny. A computational approach to edge detection // IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
7. R. Fabbri, L. da F. Costa, J. C. Torelli, O. M. Bruno. 2D Euclidean distance transforms: a comparative survey // ACM Computing Surveys, vol. 40, no. 1, 2008.
8. Z. Wang, A. Bovik, H. Sheikh, E. Simoncelli. Image quality assessment: from error visibility to structural similarity // IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

