Document retrieved from a search engine cache. Original document URL: http://imaging.cmc.msu.ru/pub/Superres08.pdf
Last modified: Fri Aug 8 23:00:00 2008
Fast Super-Resolution from video data using optical flow estimation

Andrey Krylov
Faculty of Computational Mathematics and Cybernetics, Moscow Lomonosov State University, Moscow, Russia
kryl@cs.msu.ru

Andrey Nasonov
Faculty of Computational Mathematics and Cybernetics, Moscow Lomonosov State University, Moscow, Russia
nasonov@cs.msu.ru

Abstract

A regularization-based method and a fast non-iterative method, both using optical flow estimation, are proposed for video data super-resolution with correction of non-uniform illumination.
1. Introduction

The problem of super-resolution (SR) is to recover a high-resolution image from a sequence of several degraded low-resolution images. It is valuable in surveillance, biometrics and similar applications because it can significantly improve image quality. There are two groups of video SR algorithms: learning-based and reconstruction-based. Learning-based algorithms enhance the resolution of a single image using information on the correspondence between sample low- and high-resolution images. Reconstruction-based algorithms use only a set of low-resolution images to construct the high-resolution image. A more detailed introduction to video SR problems is given in [1], [2].

The majority of reconstruction-based algorithms use camera models [3] for downsampling the high-resolution image. The problem is posed as the error minimization problem

$z_R = \arg\min_z \sum_k \| A_k z - w_k \|$,   (1)

where $z$ is the reconstructed high-resolution image, $w_k$ is the $k$-th low-resolution image, and $A_k$ is a downsampling operator which transforms the high-resolution image into the $k$-th low-resolution image. Different norms are used. The operator can generally be represented as $A_k z = D H_{cam} F_k H_{atm} z + n$, where $H_{atm}$ is the atmospheric turbulence effect, which is often neglected; $F_k$ is a warping operator such as motion blur or motion deformation; $H_{cam}$ is the camera lens blur, usually modeled by a Gauss filter; $D$ is a decimation operator; and $n$ is noise, which is usually ignored.

Various models of the warping operator $F_k$ are used. The simplest is the translation model, in which the $k$-th frame is considered a shifted copy of the first image. The translation model is not appropriate for the SR problem when the motion is not constant, so different motion models are used [4], [5]. The motion of adjacent pixels is usually similar, so the motion of only several pixels is calculated and the motion of the remaining pixels is interpolated. The simplest model is a regular motion field [4]. For large images, it is more effective to calculate the motion of pixels belonging to edges and corners [5].

Optical flow estimation algorithms are used in the case of small motion vectors; they produce sufficiently accurate results. The key idea of flow estimation is the following representation of consecutive frames:

$I(x + u(x,y),\, y + v(x,y)) = J(x,y)$,   (2)

where $u(x,y)$ and $v(x,y)$ are the components of the motion vector field $\mathbf{v} = (u, v)$. Frames are considered smooth enough and differentiable, so $I(x+u, y+v)$ can be approximated as

$I(x+u, y+v) \approx I(x,y) + \frac{\partial I(x,y)}{\partial x}\, u + \frac{\partial I(x,y)}{\partial y}\, v$.   (3)

Under assumption (3), equation (2) takes the form

$I_x(x,y)\, u(x,y) + I_y(x,y)\, v(x,y) = I_t(x,y)$,   (4)

where $I_t(x,y) = J(x,y) - I(x,y)$. Since motion vectors of adjacent pixels are close to each other, various additional constraints are used; for example, in [6] the partial derivatives $I_x$, $I_y$, $I_t$ are smoothed. Regularization algorithms are also used [7], [8]. To improve the accuracy for video sequences with non-uniform illumination, the image gradient is used instead of the image intensity in (2) [9].
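The operator decomposition $A_k z = D H_{cam} F_k z$ (atmospheric blur omitted) can be sketched in code. This is a minimal illustration under our own assumptions, not the paper's implementation: the warp is a pure translation, and the function name, blur sigma, and decimation factor are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def downsample_operator(z, motion=(0.0, 0.0), sigma=1.0, factor=2):
    """Apply A_k: warp by `motion`, blur with a Gauss filter, decimate."""
    warped = nd_shift(z, motion, order=1, mode='nearest')  # F_k (translation model)
    blurred = gaussian_filter(warped, sigma)               # H_cam (camera lens blur)
    return blurred[::factor, ::factor]                     # D (decimation)

z = np.random.rand(64, 64)                 # high-resolution image
w = downsample_operator(z, motion=(0.5, -1.0), sigma=1.0, factor=2)
```

With a factor of 2, a 64x64 high-resolution image yields a 32x32 low-resolution frame.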

2. Our approach

We propose a reconstruction-based algorithm with an optical flow estimation model that incorporates both the translation model and a variational approach to optical flow estimation.

2.1. Optical flow estimation

The proposed optical flow estimation algorithm consists of two steps.

1. At the first step, we find the best shift between two frames, reducing the average length of the motion vectors to achieve better results at the second step. For every frame, we find the 10 most significant points obtained from the Harris detector [10]. Then we find the best shift vector so that the points from the first frame fit the points from the second frame. In practice, for consecutive frames, about 7-8 points from the two frames correspond to each other, so we assume that at least 5 points correspond. We minimize the functional

$F(\mathbf{v}) = \sum_{k=1}^{5} \| P_k + \mathbf{v} - Q_k \|^2$,   (5)

where $P_k$ are key points from the first frame, $Q_k$ are key points from the second frame, and the values $\|P_k + \mathbf{v} - Q_k\|$ are the first five minimal penalty values for the given $\mathbf{v}$. To minimize the functional (5), we calculate its value for every pair $\mathbf{v} = P_i - Q_j$, $i, j = 1, 2, \dots, 10$.

2. After the rough motion estimation, we apply a modification of the Kanade-Lucas method [6]. This modification adds image gradient conditions:

$I(x+u, y+v) = J(x,y)$,
$I_x(x+u, y+v) = J_x(x,y)$,   (6)
$I_y(x+u, y+v) = J_y(x,y)$.

For every point $(x, y)$ we minimize the functional

$F(u,v) = \lambda_1 |I_x u + I_y v - I_t| + \lambda_2 \left( |I_{xx} u + I_{xy} v - I_{xt}| + |I_{xy} u + I_{yy} v - I_{yt}| \right)$.   (7)

Here the weights $\lambda_1$ and $\lambda_2$ indicate the importance of the conditions (6). To minimize (7), we solve the Euler equations

$a_{11} u + a_{12} v = a_{13}$,
$a_{21} u + a_{22} v = a_{23}$,   (8)

where

$a_{11} = \lambda_1 I_x^2 + \lambda_2 (I_{xx}^2 + I_{xy}^2)$,
$a_{12} = a_{21} = \lambda_1 I_x I_y + \lambda_2 (I_{xx} I_{xy} + I_{xy} I_{yy})$,
$a_{22} = \lambda_1 I_y^2 + \lambda_2 (I_{xy}^2 + I_{yy}^2)$,
$a_{13} = \lambda_1 I_x I_t + \lambda_2 (I_{xx} I_{xt} + I_{xy} I_{yt})$,
$a_{23} = \lambda_1 I_y I_t + \lambda_2 (I_{xy} I_{xt} + I_{yy} I_{yt})$.

The conditions (8) are independent for different points $(x, y)$, so the resulting motion vector field is not accurate. To add the condition of similarity of the motion of close pixels, we use the approach of the Kanade-Lucas method [6]: we spatially smooth all the coefficients in (8) using a Gauss filter with a radius of 5. The proposed flow estimation forms the warping operator $F_k$ used in (1). A result is shown in Fig. 1.

2.2. Regularization

The SR problem (1) is ill-conditioned, so we use the Tikhonov regularization approach [11]. We use the $l_1$ norm $\|z\|_1 = \sum_{i,j} |z_{i,j}|$:

$z_R = \arg\min_z \left( \sum_k \| A_k z - w_k \|_1 + f(z) \right)$.   (9)

We choose the bilateral total variation functional [3]

$f(z) = \sum_{-p \le x, y \le p} \alpha^{|x|+|y|} \| S_{x,y} z - z \|_1$

as a stabilizer, where $S_{x,y}$ is a shift operator along the horizontal and vertical axes by $x$ and $y$ pixels respectively, $\alpha = 0.8$, $p = 1$. The functional (9) is minimized by the subgradient method [2], [12]. A result is shown in Fig. 2.
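The second step of the flow estimation can be sketched as follows, assuming grayscale numpy frames: the coefficients of system (8) are built from finite-difference derivatives, spatially smoothed with a Gauss filter, and the 2x2 system is solved per pixel by Cramer's rule. The function name, parameter defaults, and the determinant guard are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flow_gradient_lk(I, J, lam1=1.0, lam2=1.0, sigma=5.0, eps=1e-9):
    """Solve the Euler equations (8) at every pixel, with all coefficients
    spatially smoothed by a Gauss filter as in the Kanade-Lucas approach."""
    # First- and second-order spatial derivatives (finite differences);
    # np.gradient returns derivatives along rows (y) then columns (x).
    Iy, Ix = np.gradient(I)
    Jy, Jx = np.gradient(J)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    # Temporal differences for the intensity and gradient constancy terms.
    It = J - I
    Ixt = Jx - Ix
    Iyt = Jy - Iy
    # Coefficients of system (8).
    a11 = lam1 * Ix**2 + lam2 * (Ixx**2 + Ixy**2)
    a12 = lam1 * Ix * Iy + lam2 * (Ixx * Ixy + Ixy * Iyy)
    a22 = lam1 * Iy**2 + lam2 * (Ixy**2 + Iyy**2)
    a13 = lam1 * Ix * It + lam2 * (Ixx * Ixt + Ixy * Iyt)
    a23 = lam1 * Iy * It + lam2 * (Ixy * Ixt + Iyy * Iyt)
    # Spatial smoothing couples the motion of neighbouring pixels.
    a11, a12, a22, a13, a23 = (gaussian_filter(a, sigma)
                               for a in (a11, a12, a22, a13, a23))
    # Cramer's rule for the symmetric 2x2 system (a21 = a12).
    det = a11 * a22 - a12 * a12
    det = np.where(np.abs(det) < eps, eps, det)
    u = (a13 * a22 - a12 * a23) / det
    v = (a11 * a23 - a12 * a13) / det
    return u, v
```

Setting `lam1=0, lam2=1` reproduces the gradient-only variant used later for non-uniform illumination.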

2.3. Fast super-resolution

Solving (9) is time-consuming, and it is often important to obtain a fast approximation of the SR problem. Our approach is close to [13]. The algorithm is as follows:

1. Fix the first frame $w_1$ and calculate the flow between $w_1$ and $w_k$, $k = 2, 3, \dots, n$, where $n$ is the number of input frames. The flow between $w_1$ and $w_1$ is zero-filled.

2. Upsample every frame $w_k$, taking into account the optical flow estimation for the frame, to compensate the motion and make the frame close to the first frame:

$W_k = F_k U w_k$,   (10)

where $U$ is the Gauss upsampling operator. A fast implementation of the procedure to calculate $F_k U$ is described in Section 3.

3. Calculate the average image

$z = \frac{1}{n} \sum_{k=1}^{n} W_k$.   (11)

4. Deblur the resulting image.

A result is shown in Fig. 3.

2.4. Illumination correction

We have applied the proposed SR methods to input frames with non-uniform illumination. In this case equation (4) gives bad results, so, to estimate the flow, we use only information about the image gradient, i.e. $\lambda_1 = 0$ and $\lambda_2 = 1$ in (7). Using this estimation, the fast approximation successfully processed the frames, while the regularization-based method (9) did not show good results. A result is shown in Fig. 4. Another approach to illumination correction by Empirical Mode Decomposition for the regularization-based method was suggested by us in [2].
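Steps 1-4 of the fast method can be sketched as follows. This is a simplified stand-in under our own assumptions: the estimated flow is reduced to one translation per frame, `zoom` stands in for the Gauss upsampling operator $U$, and the deblurring step is sketched as unsharp masking, which the paper does not specify.

```python
import numpy as np
from scipy.ndimage import zoom, shift as nd_shift, gaussian_filter

def fast_sr(frames, shifts, factor=2):
    """Fast SR sketch: upsample each frame, compensate its motion towards
    the first frame, average the results, then sharpen."""
    h, w = frames[0].shape
    acc = np.zeros((factor * h, factor * w))
    for frame, (du, dv) in zip(frames, shifts):
        W = zoom(frame, factor, order=1)          # U: upsampling
        # F_k: motion compensation (translation-only stand-in for the flow)
        W = nd_shift(W, (factor * du, factor * dv), order=1, mode='nearest')
        acc += W
    z = acc / len(frames)                         # average image (11)
    return z + 0.5 * (z - gaussian_filter(z, 1.0))  # step 4: unsharp-mask deblur
```

For three identical 16x16 frames with zero shifts, the result is a constant 32x32 image, since averaging and sharpening leave a constant unchanged.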

3. Numerical methods

To upsample the image for fast SR, we use modified Gauss resampling to calculate (10). The original Gauss method for scale factor $p$ looks as follows:

$W_k(px, py) = \dfrac{\sum_{(x_i, y_j)} e^{\frac{-(x - x_i)^2 - (y - y_j)^2}{2\sigma^2}}\, w_k(x_i, y_j)}{\sum_{(x_i, y_j)} e^{\frac{-(x - x_i)^2 - (y - y_j)^2}{2\sigma^2}}}$,   (12)

where $w_k$ is the low-resolution image, $W_k$ is the high-resolution image, $\sigma$ is the Gauss filter radius (we choose $\sigma = 0.4$), and $(x_i, y_j)$ are the grid points where $w_k$ is known. We apply the warping operator $F_k$ directly to (12) by shifting the grid points by the motion vectors:

$W_k(px, py) = \dfrac{\sum_{(x_i, y_j)} e^{\frac{-(x - x_i + u_{i,j})^2 - (y - y_j + v_{i,j})^2}{2\sigma^2}}\, w_k(x_i, y_j)}{\sum_{(x_i, y_j)} e^{\frac{-(x - x_i + u_{i,j})^2 - (y - y_j + v_{i,j})^2}{2\sigma^2}}}$.   (13)

To perform fast computation of (13), we represent both the numerator and the denominator as convolutions of delta functions with a Gauss filter. In discrete form, we form two images, $W_k^*$ and $W_k^{**}$, initially zero-filled. Then, for every point $(x_i, y_j)$ of $w_k$, we calculate its coordinates on the upsampled image $W_k$:

$(x_i, y_j) \to (p(x_i - u_{i,j}),\, p(y_j - v_{i,j})) = (x_{i,j}^*, y_{i,j}^*)$.

Then we add the value $w_k(x_i, y_j)$ to $W_k^*(x_{i,j}^*, y_{i,j}^*)$ and 1 to $W_k^{**}(x_{i,j}^*, y_{i,j}^*)$. If the coordinates $(x_{i,j}^*, y_{i,j}^*)$ are not integer, we approximate the convolution with a single delta function as a convolution with a sum of delta functions defined at integer coordinates, using bilinear interpolation. After the images $W_k^*$ and $W_k^{**}$ are formed, we apply a Gauss filter to both, then divide $W_k^*$ by $W_k^{**}$ elementwise: $W_k = W_k^* / W_k^{**}$.

4. Results

Results of the comparison of the proposed SR methods with single-frame linear methods and the regularization-based non-linear interpolation method [14] are given in Figures 1-4.

Figure 1. Optical flow estimation for images with non-uniform illumination: a) a pair of source images; b) Kanade-Lucas method; c) the proposed method.
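The splatting procedure above can be sketched as a direct (unoptimized) double loop, assuming numpy arrays; the function name and the division guard are ours, and rows index $y$ while columns index $x$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def splat_upsample(w, u, v, p=2, sigma=0.8, eps=1e-12):
    """Evaluate (13) by splatting: bilinearly scatter the values of w
    (into Wk*) and unit weights (into Wk**) at the warped high-resolution
    positions, Gauss-filter both images, and divide elementwise."""
    H, W = w.shape
    num = np.zeros((p * H, p * W))   # Wk*  : accumulated values
    den = np.zeros((p * H, p * W))   # Wk** : accumulated weights
    for i in range(H):
        for j in range(W):
            ys = p * (i - v[i, j])   # warped high-resolution row
            xs = p * (j - u[i, j])   # warped high-resolution column
            i0, j0 = int(np.floor(ys)), int(np.floor(xs))
            dy, dx = ys - i0, xs - j0
            # Bilinear splat onto the four neighbouring integer pixels.
            for di, dj, wt in ((0, 0, (1 - dy) * (1 - dx)),
                               (0, 1, (1 - dy) * dx),
                               (1, 0, dy * (1 - dx)),
                               (1, 1, dy * dx)):
                ii, jj = i0 + di, j0 + dj
                if 0 <= ii < p * H and 0 <= jj < p * W:
                    num[ii, jj] += wt * w[i, j]
                    den[ii, jj] += wt
    num = gaussian_filter(num, sigma)
    den = gaussian_filter(den, sigma)
    return num / np.maximum(den, eps)
```

A quick sanity check: for a constant image and zero flow, the accumulated values and weights are proportional everywhere, so the ratio reproduces the constant.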
Figure 2. Super-resolution by a factor of 4 for 10 images with uniform illumination: a) source image examples; b) box filter interpolation; c) bilinear interpolation; d) non-linear interpolation; e) super-resolution.

Figure 3. Real-time super-resolution by a factor of 4 for 12 images with uniform illumination obtained from a camera without pre-processing: a) an example of source frames; b) box interpolation; c) non-linear interpolation; d) super-resolution; e) fast super-resolution.

Figure 4. Fast super-resolution by a factor of 2 for 8 images with non-uniform illumination: a) source images; b) super-resolution.

5. Conclusion

A Tikhonov regularization-based method and a fast non-iterative method using optical flow estimation for video data SR have been suggested. The fast method is less time-consuming than the non-linear resampling method while giving the same SR quality. The Tikhonov regularization-based method gives the best quality but is slower.

The work was supported by grant RFBR 06-0139006-.

6. References

[1] S. Borman and R.L. Stevenson, "Super-Resolution from Image Sequences — A Review", Midwest Symposium on Circuits and Systems, 1998, pp. 374-378.
[2] A.S. Krylov, A.V. Nasonov, D.V. Sorokin, "Face image super-resolution from video data with non-uniform illumination", Proc. Int. Conf. Graphicon'2008, pp. 150-155.
[3] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Fast and Robust Multi-Frame Super-Resolution", IEEE Trans. on Image Processing, Vol. 13, No. 10, 2004, pp. 1327-1344.
[4] Sung Won Park and Marios Savvides, "Breaking the limitation of manifold analysis for super-resolution of facial images", IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 1, April 2007, pp. 573-576.
[5] Ha V. Le and Guna Seetharaman, "A Super-Resolution Imaging Method Based on Dense Subpixel-Accurate Motion Fields", Proc. Third Int. Workshop on Digital and Computational Video, Nov. 2002, pp. 35-42.
[6] B.D. Lucas, T. Kanade, "An iterative image registration technique with an application to stereo vision", Proc. of Imaging Understanding Workshop, 1981, pp. 121-130.
[7] A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods", International Journal of Computer Vision, Vol. 61, No. 3, 2005, pp. 211-231.
[8] J. Weickert, C. Schnörr, "Variational Optic Flow Computation with a Spatio-Temporal Smoothness Constraint", J. Math. Imaging and Vision, Vol. 14, 2001, pp. 245-255.
[9] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High Accuracy Optical Flow Estimation Based on a Theory for Warping", Proc. 8th European Conf. on Computer Vision, Vol. 4, 2004, pp. 25-36.
[10] C. Harris and M.J. Stephens, "A combined corner and edge detector", Alvey Vision Conference, 1988, pp. 147-152.
[11] A.N. Tikhonov and V.Y. Arsenin, Solutions of Ill-Posed Problems, WH Winston, Washington DC, 1977.
[12] S. Boyd, L. Xiao, A. Mutapcic, "Subgradient methods", lecture notes of EE392o, Stanford University, 2003.
[13] F. Lin, C. Fookes, V. Chandran, and S. Sridharan, "Investigation into Optical Flow Super-Resolution for Surveillance Applications", Proc. APRS Workshop on Digital Image Computing, February 2005, pp. 73-78.
[14] A.S. Lukin, A.S. Krylov, A.V. Nasonov, "Image Interpolation by Super-Resolution", Proc. Int. Conf. Graphicon'2006, pp. 239-242.
Tikhonov regularization-based and a fast noniterative methods using optical flow estimation for video data SR have been suggested. Fast method is less time-consuming than non-linear resampling method and of the same SR quality. Tikhonov regularizationbased method gives the best quality but it is slower. The work was supported by grant RFBR 06-0139006-.