Blur detection in fundus images
Alexandra A. Chernomorets and Andrey S. Krylov
Laboratory of Mathematical Methods of Image Processing, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University

Abstract--The paper addresses the problem of retinal image quality assessment. The problem of blur detection for retinal images is considered. The blur value is obtained using an analysis of blood vessel edge widths. The paper presents the model of a general edge and the algorithm for edge width estimation.

I. INTRODUCTION

Automatic quality analysis of medical images has become an important research direction. However, automatic blur detection in fundus images has not received enough attention. Fundus images are acquired at different sites, using different cameras operated by people with varying levels of experience. This results in a variety of images of different quality, and in some of them pathologies cannot be clearly detected or are artificially introduced. These low quality images should be examined by an ophthalmologist and reacquired if needed.

Current approaches to retinal image quality determination are based on global image intensity histogram analysis [1] or on the analysis of the global edge histogram in combination with localized image intensity histograms [2]. In both of these approaches a small set of excellent quality images was used to construct a mean histogram. The difference between the mean histogram and the histogram of a given image then indicated the image quality. Nevertheless, these methods cannot be used as general fundus image quality classifiers, because images of poor quality that match the methods' characteristics of acceptable quality can easily be produced.

In [3] a correlation between image blurring and the visibility of the vessels was pointed out. By running a vessel segmentation algorithm and measuring the area of detected vessels over the entire image, the authors estimate whether the image is good enough to be used for screening. The main drawback is that the classification between good and poor quality needs a threshold.

In [4] a quality class classification of the images is proposed. An analysis is made of the vasculature in a circular area around the macula. The presence of small vessels in this area is used as an indicator of image quality. The presented results are good, but the method requires a segmentation of the vasculature and other major anatomical structures to find the region of interest around the fovea.

This paper presents a novel approach to blur detection in fundus images based on estimating the blur level of the automatically segmented vasculature. The method does not require the segmentation of the optic disk or macula, and only a rough segmentation of blood vessels is necessary. This allows us to perform the procedure of vasculature segmentation on downsampled images and significantly decrease computational time.

II. EDGE WIDTH ESTIMATION

A. Edge model

We consider the following edge model: the general edge is the result of convolving an ideal step edge of unit height with a Gaussian kernel $G_\sigma$ of deviation $\sigma$. This assumption gives a unique correspondence between an edge and a numeric value, the deviation of the Gaussian kernel, which we take as the value of the edge width. We define the ideal step edge function of unit height as

$$H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases} \quad (1)$$

The edge $E_\sigma(x)$ (see fig. 1) is defined as

$$E_\sigma(x) = [H * G_\sigma](x). \quad (2)$$

Fig. 1. Edge model.

Note that the function $E_\sigma(x)$ satisfies

$$E_\sigma(x) = E_1\!\left(\frac{x}{\sigma}\right). \quad (3)$$

B. Edge width

For the edge width estimation we use the unsharp masking approach. Let $U_{\alpha,\sigma}[E_{\sigma_0}](x)$ be the result of unsharp masking applied to the edge $E_{\sigma_0}(x)$:

$$U_{\alpha,\sigma}[E_{\sigma_0}](x) = (1+\alpha)E_{\sigma_0}(x) - \alpha[E_{\sigma_0} * G_\sigma](x) = (1+\alpha)E_{\sigma_0}(x) - \alpha E_{\sqrt{\sigma_0^2+\sigma^2}}(x). \quad (4)$$

Using (3) and supposing $\sigma_0 = 1$, (4) yields

$$U_{\alpha,\sigma/\sigma_1}[E_1](x) = (1+\alpha)E_1(x) - \alpha E_{\sqrt{1+\sigma^2/\sigma_1^2}}(x) = (1+\alpha)E_{\sigma_1}(\sigma_1 x) - \alpha E_{\sqrt{\sigma_1^2+\sigma^2}}(\sigma_1 x) = U_{\alpha,\sigma}[E_{\sigma_1}](\sigma_1 x). \quad (5)$$
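Under (1) and (2) the model edge has a closed form: convolving the unit step with a Gaussian gives the Gaussian CDF, $E_\sigma(x) = \frac{1}{2}\left(1 + \mathrm{erf}\!\left(x/(\sigma\sqrt{2})\right)\right)$. A minimal numerical check of this form and of the scale relation (3); the function name is illustrative:

```python
import math

def edge(x, sigma):
    """Model edge E_sigma(x): the unit step blurred by a Gaussian of std sigma.
    The convolution (2) reduces to the Gaussian CDF evaluated at x / sigma."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

# Scale relation (3): E_sigma(x) = E_1(x / sigma)
for sigma in (0.5, 2.0, 4.0):
    for x in (-3.0, -0.7, 0.0, 1.3, 5.0):
        assert abs(edge(x, sigma) - edge(x / sigma, 1.0)) < 1e-12
```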

The unsharp masking approach (4), due to (5), has the property that the intensity values of the corresponding extrema of $U_{\alpha,\sigma}[E_\sigma](x)$ at $x_{max}$ and $x_{min}$ are the same for all $\sigma > 0$ with fixed $\alpha$. Another important fact is that, due to (5) and (2), the function $U_{\alpha,\sigma}[E_{\sigma_0}](x)$ is increasing as a function of $\sigma$ for $x > 0$ and fixed $\sigma_0$. Thus, fixing the value of $\alpha$ and taking $U_E = \max_x U_{\alpha,\sigma_E}[E_{\sigma_E}](x)$ for some $\sigma_E$, we obtain

$$\max_x U_{\alpha,\sigma}[E_{\sigma_0}](x) \le U_E, \quad \sigma < \sigma_0,$$
$$\max_x U_{\alpha,\sigma}[E_{\sigma_0}](x) \ge U_E, \quad \sigma > \sigma_0. \quad (6)$$

C. The edge width estimation algorithm

The final edge width estimation algorithm is as follows:
1. Given values: $\alpha$, $U_E$, a 1-dimensional edge profile $E_{\sigma_0}(x)$.
2. For $\sigma = \sigma_{min}$ to $\sigma_{max}$ with step $\sigma_{step}$:
compute $U_{\alpha,\sigma}[E_{\sigma_0}](x)$;
find the local maximum $x_{max}$ of $U_{\alpha,\sigma}[E_{\sigma_0}](x)$;
if $U_{\alpha,\sigma}[E_{\sigma_0}](x_{max}) \ge U_E$, set $\sigma_{result} = \sigma$ and stop the cycle.
3. Output: $\sigma_{result}$.

III. APPLICATION OF THE EDGE WIDTH ESTIMATION ALGORITHM TO BLUR DETECTION IN FUNDUS IMAGES
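The edge width estimation sweep of Sec. II-C can be sketched on sampled profiles with a discrete Gaussian convolution, using the parameter values the paper gives in Sec. III ($\alpha = 4$, $U_E = 1.24$, $\sigma$ from 0.3 to 10 with step 0.1). The helper names, grid spacing, and padding choice are illustrative assumptions, not the authors' implementation:

```python
import math

import numpy as np

ALPHA, U_E = 4.0, 1.24                 # parameter values used in the paper
S_MIN, S_MAX, S_STEP = 0.3, 10.0, 0.1  # sweep range for sigma

def gauss_blur(profile, sigma, dx):
    """Convolve a sampled 1-D profile with a Gaussian of std sigma (grid step dx)."""
    radius = int(np.ceil(4 * sigma / dx))
    t = np.arange(-radius, radius + 1) * dx
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(profile, radius, mode="edge")  # keep the step flat at the borders
    return np.convolve(padded, kernel, mode="valid")

def edge_width(profile, dx):
    """Sweep sigma upward and return the first value whose unsharp-masking
    maximum reaches U_E; by criterion (6) this is the estimated edge width."""
    for sigma in np.arange(S_MIN, S_MAX + S_STEP, S_STEP):
        u = (1 + ALPHA) * profile - ALPHA * gauss_blur(profile, sigma, dx)
        if u.max() >= U_E:
            return sigma
    return None

# Synthetic check: a unit step blurred to width sigma0 = 2.0
dx = 0.05
x = np.arange(-20.0, 20.0, dx)
profile = 0.5 * (1.0 + np.vectorize(math.erf)(x / (2.0 * math.sqrt(2.0))))
print(edge_width(profile, dx))  # close to 2.0
```

The stopping threshold works because at $\sigma = \sigma_0$ the unsharp-masking maximum of a model edge equals $U_E$ regardless of the edge width, by (5).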

Fig. 2. Vessel segmentation.

For the problem of the edge width estimation in fundus images we use the following parameters: $\alpha = 4$, $\sigma_{min} = 0.3$, $\sigma_{max} = 10$, $\sigma_{step} = 0.1$. For this $\alpha$ the value $U_E$ is equal to 1.24. The choice of $\alpha$ does not affect the result of the algorithm; it is only important that the value of $U_E$ is computed according to the used value of $\alpha$.

In order to compute the blur value of the image we use the following algorithm:
a) extract vessels,
b) extract edge profiles from the vessels,
c) compute the blur value for the image taking into account the edge heights.

A. Vessel detection

The algorithm for vessel segmentation was previously described in [5]. It includes the following steps:

1. Preprocessing of retinal images [6]: this step consists of luminosity correction on the green channel. The algorithm implies the computation of the background image by computing the Mahalanobis distance

$$D(x,y) = \frac{|I(x,y) - \mu(x,y)|}{\sigma(x,y) + \varepsilon},$$

where $\mu(x,y)$ is the local mean value of the image $I(x,y)$ in the neighborhood of pixel $(x,y)$, $\sigma(x,y)$ is its standard deviation and $\varepsilon > 0$ is a small coefficient. The pixel is classified as a background pixel if $D(x,y) < D_T$. We use $D_T = 0.7$. The corrected image is computed as

$$I_C(x,y) = \frac{I(x,y) - \mu_B(x,y)}{\sigma_B(x,y) + \varepsilon},$$

where $\mu_B(x,y)$ and $\sigma_B(x,y)$ are the mean value and the standard deviation of the background pixels in the neighborhood of $(x,y)$.

2. Alternating Sequential Filtering (ASF) [7]: we take the negative part of the difference between the corrected image and the result of ASF. Essentially, ASF is a sequence of morphological openings and closings with structuring elements of increasing size. We use circular structuring elements of sizes from 1 to double the maximum vessel width. At present we use fixed values of the maximum vessel width for images of different sizes.

3. Taking the maximum of Gabor filter responses on 4 scales by 6 directions.

4. Rough segmentation of vessel centers using edge detection.

5. Segmentation using morphological amoebas [8], [9], which is a dilation of the set of starting points acquired at the previous step by a structuring element of adjustable shape for each pixel.

An example of vessel segmentation is shown in fig. 2.
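The luminosity correction of preprocessing step 1 can be sketched with local box-filter statistics. The neighborhood size, the value of $\varepsilon$, and the mask-weighted way of computing background statistics are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminosity_correct(img, size=51, eps=1e-6, d_t=0.7):
    """Background-based luminosity correction (sketch of Sec. III-A, step 1).
    'size' is the neighborhood side length; 'eps' avoids division by zero."""
    img = img.astype(float)
    mu = uniform_filter(img, size)                 # local mean mu(x, y)
    var = uniform_filter(img ** 2, size) - mu ** 2
    sd = np.sqrt(np.clip(var, 0.0, None))          # local std sigma(x, y)
    d = np.abs(img - mu) / (sd + eps)              # Mahalanobis distance D(x, y)
    bg = d < d_t                                   # background pixels: D < D_T
    # local mean/std over background pixels only (mask-weighted statistics)
    w = uniform_filter(bg.astype(float), size) + eps
    mu_b = uniform_filter(img * bg, size) / w
    var_b = uniform_filter(img ** 2 * bg, size) / w - mu_b ** 2
    sd_b = np.sqrt(np.clip(var_b, 0.0, None))
    return (img - mu_b) / (sd_b + eps)             # corrected image I_C(x, y)
```

Vessels deviate strongly from the local background, so they fail the $D(x,y) < D_T$ test and do not bias the background statistics.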


B. Extraction of the vessel profiles

We analyze the profiles of the boundaries of the widest vessels. In order to obtain these vessel segments, we use the following algorithm:
1. Skeletonize the vessel mask.
2. Take only the segments with length greater than double the maximum vessel width.
3. Sort the segment list in descending order of segment width.
4. Take the 50 widest segments.
5. For every segment find its center and direction. The edge profile is taken as the cross-section of the segment.

The algorithm can be sped up if only the area near the optic disk is analyzed.

C. Blur value estimation

In order to obtain an adequate result when comparing two fundus images, we should take into account not only the average edge widths but also the amplitudes of the edges. The median of the weighted edge widths is taken as the blur value of the image. The weights are taken as the inverse amplitude of the edge, $10/A$. So the algorithm for computing the blur value of the image is as follows:
1. Compute the edge amplitude $A_i$ and normalize every edge profile so that its values belong to the interval from 0 to 1.
2. For every normalized profile compute the edge width $W_i$.
3. Scale the obtained values by the inverse original edge amplitudes to obtain the value $10 W_i / A_i$ that characterizes the edge.
4. Compute the median value of the weighted edge widths.

IV. RESULTS

Most of the publicly available databases contain images of good quality only. As an example, the proposed algorithm was tested on retinal images from the DRIVE database [10]. The average blur value for the images from this database was found to be 0.29. The method was also tested on real images of different quality from ophthalmological practice. The results for images of different quality are shown in figs. 3 and 4. The results show that images with blur value less than 1 are good enough for pathology analysis.
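The weighted-median blur value of Sec. III-C can be sketched as follows; `width_fn` stands in for the edge width estimator of Sec. II-C, and the helper names are illustrative:

```python
import numpy as np

def blur_value(profiles, width_fn):
    """Blur value of an image from its vessel edge profiles (sketch of Sec. III-C).
    'profiles' is a list of 1-D arrays (cross-sections across vessel edges);
    'width_fn' estimates the edge width of a profile normalized to [0, 1]."""
    weighted = []
    for p in profiles:
        a = p.max() - p.min()            # edge amplitude A_i
        if a <= 0:
            continue                     # flat profile: no edge to measure
        norm = (p - p.min()) / a         # normalize the profile to [0, 1]
        w = width_fn(norm)               # edge width W_i
        weighted.append(10.0 * w / a)    # weight by inverse amplitude: 10 * W_i / A_i
    if not weighted:
        return float("nan")
    return float(np.median(weighted))    # median of the weighted edge widths
```

Dividing by the amplitude penalizes faint edges: a wide but low-contrast edge contributes a larger weighted width and therefore pushes the blur value up.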
V. CONCLUSION

The paper presents a solution to the problem of the detection of blurred retinal images. The application of the proposed algorithm to the classification of real retinal images has shown good results. A possible application of the algorithm is its use during the acquisition of fundus images and for the preliminary control of input data for retinal image CAD systems.
Fig. 3. Blur value estimation for DRIVE image 01 test. Estimated blur value: 0.28.

Fig. 4. Blur value estimation for real fundus images. Estimated blur values: 0.63, 0.82, 1.09.

ACKNOWLEDGMENT

The work was supported by the Federal Targeted Programme "R&D in Priority Fields of the S&T Complex of Russia 2007-2013".


REFERENCES

[1] S. Lee and Y. Wang, "Automatic retinal image quality assessment and enhancement," in Proceedings of SPIE Image Processing, 1999, pp. 1581-1590.
[2] M. Lalonde, L. Gagnon, and M. C. Boucher, "Automatic visual quality assessment in optical fundus images," in Proceedings of Vision Interface, 2001, pp. 259-264.
[3] D. B. Usher, M. Himaga, and M. J. Dumskyj, "Automated assessment of digital fundus image quality using detected vessel area," in Proceedings of Medical Image Understanding and Analysis, 2003, pp. 81-84.
[4] A. D. Fleming, S. Philip, K. A. Goatman, J. A. Olson, and P. F. Sharp, "Automated assessment of diabetic retinal image quality based on clarity and field definition," Invest Ophthalmol Vis Sci, vol. 47(3).
[5] A. A. Chernomorets, A. S. Krylov, A. V. Nasonov, A. S. Semashko, V. V. Sergeev, V. S. Akopyan, A. S. Rodin, and N. S. Semenova, "Automated processing of retinal images," in 21st International Conference on Computer Graphics GraphiCon'2011, September 2011, pp. 78-81.
[6] G. D. Joshi and J. Sivaswamy, "Colour retinal image enhancement based on domain knowledge," in 6th Indian Conf. on Computer Vision, Graphics and Image Processing, 2008, pp. 591-598.
[7] J. Serra, Image Analysis and Mathematical Morphology. Vol. 2: Theoretical Advances, Academic Press, London, 1988.
[8] M. Welk, M. Breuß, and O. Vogel, "Differential equations for morphological amoebas," Lecture Notes in Computer Science, vol. 5720, pp. 104-114, 2009.
[9] R. Lerallut, E. Decencière, and F. Meyer, "Image filtering using morphological amoebas," Image and Vision Computing, vol. 25, pp. 395-404, 2007.
[10] A. Can, H. Shen, J. N. Turner, H. L. Tanenbaum, and B. Roysam, "Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms," IEEE Transactions on Information Technology in Biomedicine, vol. 3, pp. 125-138, 1999.