|Title:||Video Stabilization using Feature Point Tracking with Classified Feature Points|
|Authors:||Singh, C. K.|
|Abstract:||Digital image stabilization, also known as video stabilization (VS), is an active area of research in computer vision. It generally consists of three steps: motion estimation, undesired-motion filtering, and video completion. Motion estimation is the most crucial step of video stabilization; techniques for it fall into two categories, pixel-based (also known as block-based) and feature-based. The second step of VS is motion filtering and compensation, in which the estimated camera-motion information is used to suppress the undesired motion components that generally arise from camera jitter, flicker, or an unsteady platform. In the final step, the border pixels of each frame must be filled with intensity values, since the compensation step leaves those pixels vacant; this step is therefore known as video completion. In this thesis, video stabilization is carried out with a new approach. The first step, motion estimation, begins with the extraction of feature points (FPs). Feature points are of two kinds: background (BG) and foreground (FG). If combined feature points are used, the motion estimated from foreground feature points deviates from the result obtained using only background feature points; moreover, the motion of the camera is more closely associated with the steady background feature points than with the foreground ones. Hence, only background FPs are used for motion-parameter estimation.
These FPs are tracked with the KLT tracking algorithm, which uses intensity-based comparison to track points of similar intensity variation between frames. Since some points may be tracked incorrectly, the RANSAC algorithm is applied to all FP correspondences under a suitable transformation model, and wrong correspondences are thereby eliminated. The motion parameters are then computed from the remaining inlier points. The motion model used is either projective, which requires 8 parameters, or affine, which requires 6. Another important aspect of this work is the method of feature-point classification. In the general classification scheme, two parameters are used to update the BG and FG feature points, depending on the difference between a FP's tracked position and its transformed position: a fast-moving FP is identified as a FG feature point, while a slow-moving one is considered a BG feature point. This general classification gives wrong results in certain situations, e.g., when a FG object moves slowly or is just starting to move. To deal with this, a new improved method is incorporated in which a FP is monitored over several frames before it is declared a FG or BG point. With this FP-classification-based approach, the global motion is identified. These motion parameters must then be filtered to separate the desired motion components from the undesired ones, and a Kalman filter is used for this purpose. Once the undesired motion components are obtained, each frame is warped accordingly, yielding a stable video output.|
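The motion-estimation step described in the abstract — fitting a 6-parameter affine model to FP correspondences and rejecting wrong matches with RANSAC — can be sketched as below. This is a minimal pure-Python illustration under stated assumptions, not the thesis's implementation: the iteration count, inlier threshold, and minimal-sample fit are all illustrative choices.

```python
import random

def solve3(M, b):
    # Solve a 3x3 linear system M x = b by Cramer's rule.
    det = lambda m: (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
                     - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
                     + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(M)
    xs = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = b[r]
        xs.append(det(Mc) / d)   # raises ZeroDivisionError if M is singular
    return xs

def affine_from_3(pairs):
    # Exact 6-parameter affine from 3 correspondences ((x, y), (x', y')):
    #   x' = a*x + b*y + c,   y' = d*x + e*y + f
    M = [[p[0][0], p[0][1], 1.0] for p in pairs]
    a, b, c = solve3(M, [p[1][0] for p in pairs])
    d, e, f = solve3(M, [p[1][1] for p in pairs])
    return (a, b, c, d, e, f)

def apply_affine(params, pt):
    a, b, c, d, e, f = params
    x, y = pt
    return (a*x + b*y + c, d*x + e*y + f)

def ransac_affine(pairs, n_iter=200, thresh=2.0, seed=0):
    # RANSAC over FP correspondences: repeatedly fit an affine model from
    # a minimal sample of 3 pairs, keep the model with the most inliers.
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        try:
            model = affine_from_3(rng.sample(pairs, 3))
        except ZeroDivisionError:
            continue  # degenerate (collinear) sample
        inliers = [p for p in pairs
                   if sum((u - v) ** 2 for u, v in
                          zip(apply_affine(model, p[0]), p[1])) < thresh ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

The motion parameters for a frame pair would then be re-estimated (e.g. by least squares) from the surviving inlier correspondences only.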
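The improved FP classification — monitoring a point over several frames before committing to a BG or FG label, so that a slowly moving or just-starting foreground object is not misclassified — can be sketched as follows. The residual threshold, window length, and class interface are illustrative assumptions, not values from the thesis.

```python
class FPClassifier:
    """Per-feature-point BG/FG classifier with temporal monitoring.

    A naive scheme flips a point's label the moment the residual between
    its tracked position and its globally transformed position crosses a
    threshold. Here the label changes only after the residual contradicts
    the current label for `window` consecutive frames.
    """
    def __init__(self, thresh=3.0, window=5):
        self.thresh = thresh     # residual (pixels) separating BG from FG
        self.window = window     # frames of evidence required to relabel
        self.label = "BG"        # every FP starts as background
        self.streak = 0          # consecutive frames contradicting label

    def update(self, residual):
        # residual: distance between the FP's tracked position and the
        # position predicted by the estimated global (camera) motion.
        moving = residual > self.thresh
        contradicts = (moving and self.label == "BG") or \
                      (not moving and self.label == "FG")
        self.streak = self.streak + 1 if contradicts else 0
        if self.streak >= self.window:
            self.label = "FG" if moving else "BG"
            self.streak = 0
        return self.label
```

Only the points currently labelled "BG" would feed the motion-parameter estimation; a single noisy frame no longer flips a label, because the streak counter resets whenever the evidence agrees with the current label.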
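A minimal sketch of the Kalman-filtering step: smoothing one motion parameter (e.g. cumulative x-translation) over time and treating the difference between the raw and smoothed values as the undesired (jitter) component to be removed by warping. The scalar constant-position model and the noise variances `q` and `r` are assumptions for illustration; the thesis's filter may track a fuller state.

```python
class ScalarKalman:
    # 1-D Kalman filter with a constant-position process model, used to
    # smooth a single motion parameter frame by frame.
    def __init__(self, q=1e-3, r=1.0):
        self.q, self.r = q, r        # assumed process/measurement variances
        self.x, self.p = 0.0, 1.0    # state estimate and its variance

    def update(self, z):
        self.p += self.q                  # predict: variance grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

def jitter(raw_path, kf):
    # Undesired component = raw parameter minus its smoothed estimate;
    # each frame would be warped by the negation of this value.
    return [z - kf.update(z) for z in raw_path]
```

The smoothed sequence follows the intentional camera path, so warping each frame by the negated jitter value suppresses the high-frequency shake while preserving deliberate motion.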
|Appears in Collections:||03. EE|
Files in This Item:
|Video Stabilization using Feature Point Tracking with Classified Feature Points.pdf||7.88 MB||Adobe PDF|