Tracking moving objects from image sequences



  1. University of Tartu
     Faculty of Mathematics and Computer Science
     MTAT.03.260 Pattern Recognition and Image Analysis
     Tracking moving objects from image sequences
     Janno Jõgeva, Mihkel Pajusalu
     Tartu, 2011

  2. GENERAL EFFECTS OF TIME

  3. Static to dynamic
     • Adding the fourth dimension
     • Each picture is not a separate, isolated event
     • A sequence of 2D pictures -> reconstruct the 4D scene
       o The image data itself is now 3D: f(x, y, t)
     • The human brain handles still-image processing as a motion problem

  4. What happens as time passes?
     • Objects move (or transform) and/or the camera moves
     • Lighting changes
     • Noise and the background change

  5. Movements
     • Effectively produce different views of the objects and of the general scene
       o Helps to remove ambiguities
     • Dynamic occlusions
       o Objects may be missing in some pictures and (re)appear in others, which causes problems

  6. Lighting changes
     • Position of the light
       o Possibility of examining surface features
     • Color of the light
       o Spectral properties
     • Diffusion of the light (from sharp shadows to smooth, diffused ambient light)
       o Different lighting conditions are optimal for different goals

  7. Noise changes
     • Each picture contains a variety of noise sources
       o Sensor noise
       o Rounding errors (also under/over exposure)
       o Shutter effects (rolling vs. global shutter)
     • Analyzing a time series allows time-domain smoothing
       o Reduces noise
       o Makes the analysis more precise

  8. Camera's time resolution
     • The camera stores a single picture
       o Each pixel value is an average over the exposure time (see the idealized model below)
     • The shutter exposes the pixels
       o Global shutter: all pixels are exposed during the same time window
       o Rolling shutter: rows/columns of pixels (or single pixels) are exposed in sequence
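An idealized exposure model for this averaging (assuming the shutter is open from $t_0$ to $t_0 + T$ and ignoring sensor gain and noise) is

    $g(x, y) = \frac{1}{T} \int_{t_0}^{t_0 + T} f(x, y, t)\, dt$

where a global shutter uses the same $t_0$ for every pixel, while a rolling shutter lets $t_0$ depend on the pixel's row (or column).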

  9. Different shutter speeds http://upload.wikimedia.org/wikipedia/commons/b/b2/Windflower-05237-nevit.JPG

  10. Rolling shutter: Skew http://en.wikipedia.org/wiki/File:CMOS_rolling_shutter_distortion.jpg

  11. Rolling shutter: Smear & Skew http://upload.wikimedia.org/wikipedia/commons/4/46/Focalplane_shutter_distortions.jpg

  12. Rolling shutter: Partial exposure http://en.wikipedia.org/wiki/File:Lightning_rolling_shutter.jpg

  13. MODELING TIME

  14. Time sequence
     • Each picture holds a projection of 3D space at one time value
       o A different view of the 4D environment
       o Goal: construct the 4D environment from a sequence of 2D pictures
     • Requires modeling of time

  15. Time: the most general case
     • Time is a continuous variable
       o Just like a spatial coordinate

  16. Simplifications
     • Causality:
       o Each moment can only depend on previous moments, not on future moments
     • Discretization of time:
       o Time is divided into discrete moments
     • Markov chain:
       o Each future moment depends only on the current moment (see the formula below)
       o Knowing the current state is enough to predict the future
       o Simplifies the analysis greatly
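In symbols, the Markov assumption for a discrete-time state sequence $x_1, x_2, \ldots$ is

    $p(x_{t+1} \mid x_1, \ldots, x_t) = p(x_{t+1} \mid x_t)$

so the whole history can be summarized by the current state.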

  17. Applications of Markov processes
     • Markov filter
       o Particle filter
     • Kalman filter

  18. Markov filter
     • Generate a map and visualize the probabilities (a minimal sketch follows below)
       o Propagate the model using the control input and an error estimate
       o Multiply by the sensor-data likelihood
     • The changing of the scene over time decreases ambiguities
     1. R. Siegwart, I. Nourbakhsh, "Introduction to Autonomous Mobile Robots", The MIT Press, 2004
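A minimal sketch of this predict/multiply cycle as a discrete, grid-based Bayes filter in Python; the grid size, motion-noise kernel, and sensor likelihood are illustrative assumptions, not values from the slides or from Siegwart and Nourbakhsh.

    import numpy as np

    def predict(belief, shift, noise_kernel):
        """Propagate the belief over grid cells with the control input, then blur by motion noise."""
        moved = np.roll(belief, shift)                            # apply the control (shift by whole cells)
        return np.convolve(moved, noise_kernel, mode="same")      # spread by the error estimate

    def update(belief, likelihood):
        """Multiply by the sensor-data likelihood and renormalize."""
        posterior = belief * likelihood
        return posterior / posterior.sum()

    # Illustrative 1D map with 10 cells, uniform prior
    belief = np.full(10, 0.1)
    noise_kernel = np.array([0.1, 0.8, 0.1])             # assumed motion uncertainty
    likelihood = np.full(10, 0.1); likelihood[3] = 0.9   # assumed sensor response favoring cell 3

    belief = predict(belief, 1, noise_kernel)            # control: move one cell to the right
    belief = update(belief, likelihood)
    print(belief.round(3))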

  19. Kalman filter
     • A single estimate (instead of a full probability map)
       o The estimate is updated probabilistically using sensor data and the current belief (a 1D sketch follows below)
     1. R. Siegwart, I. Nourbakhsh, "Introduction to Autonomous Mobile Robots", The MIT Press, 2004
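A 1D (scalar) sketch of one Kalman filter cycle; the additive motion model and the noise variances are illustrative assumptions.

    def kalman_1d(x, p, u, z, q, r):
        """One predict/update cycle of a scalar Kalman filter.
        x, p : current estimate and its variance (the belief)
        u    : control input (assumed simple additive motion model)
        z    : sensor measurement
        q, r : process and measurement noise variances (assumed known)"""
        # Predict: propagate the estimate with the control; uncertainty grows
        x_pred, p_pred = x + u, p + q
        # Update: blend prediction and measurement using the Kalman gain
        k = p_pred / (p_pred + r)
        return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

    # Illustrative use: start at 0 with variance 1, move by 1.0, then measure 1.2
    x, p = kalman_1d(0.0, 1.0, 1.0, 1.2, q=0.1, r=0.2)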

  20. http://www.lce.hut.fi/~ssarkka/course_k2011/pdf/handout3.pdf

  21. Frequency domain
     • Many problems are simplified in the frequency domain
     • Fourier transform properties
       o Translation: only the phase changes upon translation! (shift theorem below)
         - A moving object changes only the phase, not the amplitude, of its Fourier transform
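The shift theorem behind this bullet: if $g(x, y) = f(x - x_0, y - y_0)$, then

    $G(u, v) = e^{-i (u x_0 + v y_0)}\, F(u, v)$

(up to the normalization convention of the transform), so $|G| = |F|$ and only the phase carries the displacement.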

  22. Frequency domain 2
     • Convolution in the spatial domain becomes multiplication in the frequency domain
       - Much faster
     • Both properties also apply in the 2D case
       o A 2D translation = a shift in phase (see the small check below)
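A small numpy check of both properties; the image size and the circular (wrap-around) shift are illustrative assumptions.

    import numpy as np

    f = np.random.rand(64, 64)                        # toy image
    g = np.roll(f, shift=(5, 3), axis=(0, 1))         # circularly translated copy

    F, G = np.fft.fft2(f), np.fft.fft2(g)
    print(np.allclose(np.abs(F), np.abs(G)))          # True: amplitudes match, only phases differ

    # Convolution theorem: (circular) convolution is a pointwise product of the transforms
    h = np.random.rand(64, 64)                        # toy kernel, same size for simplicity
    conv = np.real(np.fft.ifft2(F * np.fft.fft2(h)))  # much faster than direct convolution for large images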

  23. MODELS OF MOVEMENT

  24. Models
     • Simple model
       o Only translation between frames
     • More complex approach
       o Affine transformations

  25. Brightness constancy constraint
     • The brightness constancy constraint (BCC) is satisfied if the colors of the spatio-temporal image points representing the same 3D points remain unchanged throughout the spatio-temporal image (written as a formula below)
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006
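In symbols, for a point moving with velocity $(v_x, v_y)$ the BCC states

    $f(x + v_x\,\delta t,\; y + v_y\,\delta t,\; t + \delta t) = f(x, y, t)$

i.e. the intensity stays constant along the motion path, which is the $df/dt = 0$ condition used on slide 36.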

  26. Simple model
     • Patches of the image are translated by a vector
     • Sufficient if the time differences are small
     • In the spatio-temporal image, points transform to lines and lines to planes
     • A good book: Bigün J., "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision"
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006

  27. Affine transformations
     • Modeling only translation might not be enough
       o In reality, picture regions are translated, rotated, and scaled
     • Affine transformation of a point $s = (x\; y)^T$: $s' = A s + b$, with a $2 \times 2$ matrix $A$ and a translation vector $b$
     • The corresponding speed vector field is itself an affine function of position (see the sketch below)
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006
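A small numpy sketch of what the affine model adds over pure translation; the rotation angle, scale factor, and translation are illustrative assumptions.

    import numpy as np

    theta, scale = 0.1, 1.05                               # assumed small rotation and scaling between frames
    A = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    b = np.array([2.0, -1.0])                              # assumed translation between frames

    s  = np.array([10.0, 5.0])                             # a point s = (x, y)^T in frame t
    s2 = A @ s + b                                         # its predicted position in frame t+1
    v  = s2 - s                                            # displacement (speed vector) at s: depends on position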

  28. METHODS

  29. Methods for analyzing image sequences
     • Simple
       o Using object/feature detection and constructing time dependences
     • Model fitting
     • Optical flow
       o Sparse optical flow
       o Dense optical flow
       o Frequency-domain optical flow

  30. Simple and straightforward
     • Detect the object to track in the sequences
       o Increase precision using Markov/Kalman filtering
     • Simple case: unambiguous
       o Easily distinguishable objects, no occlusion
     • Occlusion:
       o A model of movement must be applied
     • Drawbacks: hard-to-distinguish objects, bad noise tolerance, bad precision

  31. Example
     "Handbook of Computer Vision and Applications, Volume 3: Systems and Applications", Bernd Jähne, Horst Haussecker, Peter Geissler, Academic Press (January 1999)

  32. Model-fitting based
     • Create a 3D model and fit it to the image
       o Partially helps in the case of occlusion
       o Ambiguities still remain

  33. Feature detection + prediction + fitting example
     • Process
       o Detect edges
       o Predict the transformation using the edges
       o Project the transformed model
       o Generate control points on the model
       o Fit the control points to the edges
     • http://sites.google.com/site/jbarandiaran/3dtracking

  34. Optical flow
     • Motion field analysis
       o Try to decompose the image sequence into moving object surface patches
       o Each patch moves by a 2D vector
     • Enables estimation of
       o Movement of objects
       o Movement of the camera
       o Depth (3D vision using a single moving camera)

  35. Motion Estimation by Differentials in Two Frames
     • Definition of speed: $v = (v_x\; v_y)^T = (dx/dt\;\; dy/dt)^T$
     • Differential of the intensity $f(x, y, t)$: $df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial t}\,dt$
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006

  36. Motion Estimation by Differentials in Two Frames
     • Let's find a path $s(t)$ along which the intensity is constant (BCC): $df/dt = 0$, i.e. $\frac{\partial f}{\partial x} v_x + \frac{\partial f}{\partial y} v_y + \frac{\partial f}{\partial t} = 0$
     • Writing this equation for every pixel of a neighborhood gives the linear system $d = -Dv$, where the rows of $D$ hold the spatial gradients and $d$ collects the temporal differences
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006

  37. Motion Estimation by Differentials in Two Frames: Lucas-Kanade
     • Solving the system in the least-squares sense (a numpy sketch follows below):
       o $d = -Dv$
       o $D^T d = -D^T D\, v$
       o $v = -(D^T D)^{-1} D^T d$, with $d_k = f(x_k, y_k, t_1) - f(x_k, y_k, t_0)$
     • $S = D^T D$ is called the structure tensor
       o A solution exists if $S$ is not singular
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006
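A minimal numpy sketch of this least-squares solve for a single patch; the gradient approximation (central differences) and the toy test image are illustrative assumptions, not the book's exact discretization.

    import numpy as np

    def lucas_kanade_patch(f0, f1):
        """Estimate one velocity vector v for a small image patch given two frames f0, f1."""
        fy, fx = np.gradient(f0)                  # spatial gradients (axis 0 = y, axis 1 = x)
        d = (f1 - f0).ravel()                     # d_k = f(x_k, y_k, t1) - f(x_k, y_k, t0)
        D = np.stack([fx.ravel(), fy.ravel()], axis=1)   # one gradient row per pixel
        S = D.T @ D                               # structure tensor
        if np.linalg.matrix_rank(S) < 2:          # solution exists only if S is not singular
            return None
        return -np.linalg.solve(S, D.T @ d)       # v = -(D^T D)^{-1} D^T d

    # Toy check: a smooth blob shifted by one pixel along x between the frames
    y, x = np.mgrid[0:32, 0:32]
    f0 = np.exp(-((x - 15.0) ** 2 + (y - 15.0) ** 2) / 40.0)
    f1 = np.exp(-((x - 16.0) ** 2 + (y - 15.0) ** 2) / 40.0)
    print(lucas_kanade_patch(f0, f1))             # approximately (1, 0): motion along x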

  38. What are good features to track?
     • The key is the structure tensor
     • The larger its eigenvalues, the better
       o Corners
       o Textures
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006

  39. Sparse optical flow
     • Only a small portion of the pixels is tracked (mostly detected features)
     • Demo (OpenCV; a modern Python equivalent is sketched below)
       o cvGoodFeaturesToTrack
         - Uses corner detection
       o cvOpticalFlowPyrLK
         - Uses the pyramidal Lucas-Kanade algorithm
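The slide's demo uses the old OpenCV C API; below is a sketch of the equivalent calls in the modern Python API (cv2.goodFeaturesToTrack and cv2.calcOpticalFlowPyrLK). The file names and parameter values are illustrative assumptions.

    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # illustrative file names
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Corner detection ("good features to track")
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

    # Pyramidal Lucas-Kanade: track those corners into the next frame
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                    winSize=(21, 21), maxLevel=3)

    # Keep only the successfully tracked points; the flow is the per-feature displacement
    good_old = pts[status.ravel() == 1]
    good_new = new_pts[status.ravel() == 1]
    flow = good_new - good_old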

  40. Dense optical flow
     • Every pixel is tracked
     • Requires more complex algorithms in regions where good features are absent

  41. Motion Estimation by Spatial Correlation
     • Basically: find a patch similar to the original patch near its position in the next frame (a block-matching sketch follows below)
     "Vision with Direction: A Systematic Introduction to Image Processing and Computer Vision", Josef Bigün, Springer-Verlag Berlin Heidelberg, 2006
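A small numpy sketch of this block-matching idea, using the sum of squared differences over a search window; the patch and search sizes are illustrative assumptions.

    import numpy as np

    def block_match(f0, f1, y, x, patch=8, search=6):
        """Find the displacement of the patch at (y, x) in f0 by an SSD search in f1."""
        ref = f0[y:y+patch, x:x+patch]
        best_ssd, best_d = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y1, x1 = y + dy, x + dx
                if y1 < 0 or x1 < 0 or y1 + patch > f1.shape[0] or x1 + patch > f1.shape[1]:
                    continue                               # candidate window outside the image
                cand = f1[y1:y1+patch, x1:x1+patch]
                ssd = np.sum((cand - ref) ** 2)            # sum of squared differences
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, (dx, dy)
        return best_d                                      # estimated 2D motion vector (dx, dy)

    # Toy check: the whole frame moves by (dx, dy) = (2, 1)
    f0 = np.random.rand(32, 32)
    f1 = np.roll(f0, (1, 2), axis=(0, 1))
    print(block_match(f0, f1, 10, 10))                     # (2, 1)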

  42. Phase correlation optical flow
     • The amplitudes of the 2D Fourier transform do not change when an object moves
       o Only the phases change
       o An area whose phase changes are correlated likely belongs to the same object
       o Can be used to identify moving objects

  43. Phase correlation optical flow
     • The math ($G_a$ and $G_b$ are the 2D Fourier transforms of the two frames; a numpy sketch follows below)
       o Only the phase changes when the image is shifted: $G_b$ equals $G_a$ multiplied by a linear phase factor determined by the displacement
       o Calculating the correlation in Fourier space: the normalized cross-power spectrum $R = \frac{G_a G_b^{*}}{|G_a G_b^{*}|}$
       o The inverse Fourier transform $r = \mathcal{F}^{-1}\{R\}$ has a sharp peak at the displacement
     http://en.wikipedia.org/wiki/Phase_correlation
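A numpy sketch of phase correlation for two whole frames. The sign convention below (conj(Ga) * Gb) is chosen so the peak lands at the forward shift; other texts, including the Wikipedia page cited above, write the product the other way around, which flips the sign of the recovered shift. The test image and shift are illustrative assumptions.

    import numpy as np

    def phase_correlation(a, b):
        """Estimate the circular shift that takes frame a to frame b."""
        Ga, Gb = np.fft.fft2(a), np.fft.fft2(b)
        cross = np.conj(Ga) * Gb                        # cross-power spectrum
        R = cross / np.maximum(np.abs(cross), 1e-12)    # keep only the phase
        r = np.real(np.fft.ifft2(R))                    # correlation surface with a sharp peak
        dy, dx = np.unravel_index(np.argmax(r), r.shape)
        return dy, dx                                   # shift is recovered modulo the image size

    # Toy check: shift a random image by (3, 5) and recover the shift
    a = np.random.rand(64, 64)
    b = np.roll(a, (3, 5), axis=(0, 1))
    print(phase_correlation(a, b))                      # (3, 5)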

  44. Cool applications
     • http://www.2d3.com/
     • Multiple temporal view advantages
       o 3D/2D mapping from a single camera
       o Super-resolution (synthetic aperture)
       o Image stabilization
     • NASA VISAR
     • VirtualDub Deshaker
     • SLAM
     • Optical glyph tracking
