

1. IEEE ETFA 2018, Turin, September 6th
A single camera inspection system to detect and localize obstacles on railways based on manifold Kalman filtering
Federica Fioretti (1), Emanuele Ruffaldi (2), Carlo Alberto Avizzano (1)
federica.fioretti.90@gmail.com
(1) TeCIP Institute, Scuola Superiore Sant'Anna, Pisa, Italy; (2) MMI S.p.A., Italy

2. Railway surveillance
Due to the rising need to enforce surveillance of the railroad in order to guarantee the safety of its users, many systems have been designed involving different technologies:
• Infrastructure-based systems, communicating with on-board train sensors and actively cooperating with sensory equipment mounted along the railway line.
• Locomotive systems, on-board sensors with no interaction with any wayside equipment.
• Unmanned rail vehicles (URVs), an innovative solution bringing several advantages: they are a self-powered base for a larger number of sensors and cause no traffic disruption.
The computer vision system developed in this work is specifically aimed at a URV, to support its intrusion and obstacle detection tasks.

3. Issues and goals
Goal
Assuming that the URV is equipped with a single camera placed at the head of the train, we are interested in reconstructing the trajectory performed by the vehicle and in localizing known objects near the railroad.
Application issues
• Scarce amount of available images of Italian railway signs
• Few structural elements in the environment
• Monocular camera usage:
  o Low-cost sensor
  o Incomplete knowledge of camera parameters
  o Unavoidable reconstruction uncertainty due to projection

4. Homography and vanishing points
Homography
Considering a pair of views, the mapping of an image point from one frame to the other is related to their intrinsic parameters (K₁, K₂), to their relative pose, defined by (R, t), and to the 3D local coordinates of the point [X, Y, Z]:
  λ x₂ = K₂ R K₁⁻¹ x₁ + K₂ t / Z
where x₁, x₂ are the homogeneous pixel coordinates of the point in the two views, Z is its depth in the first camera frame and λ is an arbitrary projective scale.
Vanishing point
Projection of a world point at infinity onto the image plane.
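A quick NumPy sketch of this two-view mapping, assuming known intrinsics K1, K2, relative pose (R, t) and point depth Z; the variable names are illustrative, not from the slides:

```python
import numpy as np

def map_point_between_views(x1, K1, K2, R, t, Z):
    """Map pixel x1 = (u, v) from view 1 to view 2 via the relation
    lambda * x2 = K2 R K1^-1 x1 + K2 t / Z, with Z the depth of the
    point in the first camera frame."""
    x1_h = np.array([x1[0], x1[1], 1.0])   # homogeneous pixel coordinates
    X1 = Z * (np.linalg.inv(K1) @ x1_h)    # back-project to 3D in camera 1
    X2 = R @ X1 + t                        # express the point in camera 2
    x2_h = K2 @ X2                         # project with the second intrinsics
    return x2_h[:2] / x2_h[2]              # non-homogeneous pixel coordinates
```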

5. Operational concepts
Block diagram: each of the two views (IMAGE 1, IMAGE 2) contributes its camera matrix Kᵢ [Rᵢ, tᵢ] and its matched points and lines; together with further knowledge of the camera model, the homography between the views yields the relative camera poses and the 3D points.

6. Object detection
HAAR cascade classifier training process: annotation → positive and negative images → create samples → train cascade → object classifiers.
Training parameters:
Classifier     | Dataset size (positives, negatives) | Minimum hit rate | Maximum false alarm rate
Kilometer sign | 60, 200                             | 0.995            | 0.05
Poles          | 248, 200                            | 0.95             | 0.2
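As a rough illustration of how such cascades are trained and applied with OpenCV; file names, paths and runtime parameters below are assumptions, not values from the presentation:

```python
import cv2

# Offline training (kilometer-sign example, parameters as in the table above)
# uses OpenCV's cascade trainer, e.g.:
#   opencv_traincascade -data km_sign/ -vec km_sign.vec -bg negatives.txt \
#       -numPos 60 -numNeg 200 -minHitRate 0.995 -maxFalseAlarmRate 0.05

# Runtime detection with the trained cascades (file names are illustrative).
km_sign_cascade = cv2.CascadeClassifier("cascade_km_sign.xml")
pole_cascade = cv2.CascadeClassifier("cascade_pole.xml")

def detect_objects(frame_bgr):
    """Return bounding boxes of kilometer signs and poles in one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    km_signs = km_sign_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    poles = pole_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return km_signs, poles
```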

7. Line detection
Scheme, applied to the lowest stripe of the image and then iteratively to the upper stripes:
1. Extract the lower portion of the B/W image and apply the Hough transform to find the two rails.
2. Apply template matching to the first rail element to follow each rail in the stripes above.

Algorithm: Railway extraction
  frame ← first frame
  iter ← 30
  while frame is not empty do:            (loop over each frame of the video)
    leftX[0]  ← Hough line search          (lowest stripe)
    rightX[0] ← Hough line search
    for i ← 1 to iter do:                  (upper stripes)
      leftX[i]  ← position of the template close to leftX[i-1]
      rightX[i] ← position of the template close to rightX[i-1]
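A simplified Python/OpenCV sketch of this stripe-by-stripe scheme; the stripe height, search-window width and Canny/Hough thresholds are assumptions for illustration:

```python
import cv2
import numpy as np

ITER = 30        # number of stripes tracked above the lowest one
STRIPE_H = 16    # stripe height in pixels (assumed)
PATCH_W = 40     # rail template / search window width in pixels (assumed)

def stripe(img, i):
    """Rows of the i-th stripe, counted from the bottom of the image."""
    h = img.shape[0]
    return img[h - (i + 1) * STRIPE_H : h - i * STRIPE_H, :]

def extract_rails(gray):
    """Track the left/right rail x-coordinates stripe by stripe, bottom-up:
    Hough lines seed the lowest stripe, template matching follows the rails
    in the stripes above (one x value per rail and per stripe)."""
    edges = cv2.Canny(gray, 50, 150)

    # 1. Hough line search on the lowest stripe.
    lines = cv2.HoughLinesP(stripe(edges, 0), 1, np.pi / 180, threshold=20,
                            minLineLength=10, maxLineGap=5)
    if lines is None:
        return None
    xs = sorted(l[0][0] for l in lines)
    left_x, right_x = [xs[0]], [xs[-1]]

    # 2. Template matching: for each upper stripe, search for a patch taken
    #    around the rail position found in the stripe below it.
    for i in range(1, ITER):
        for track in (left_x, right_x):
            x_prev = int(track[-1])
            tmpl = stripe(gray, i - 1)[:, max(0, x_prev - PATCH_W // 2):
                                          x_prev + PATCH_W // 2]
            lo = max(0, x_prev - PATCH_W)
            search = stripe(gray, i)[:, lo:x_prev + PATCH_W]
            res = cv2.matchTemplate(search, tmpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            track.append(lo + max_loc[0] + tmpl.shape[1] // 2)
    return left_x, right_x
```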

8. Line detection (continued)
Within the same railway-extraction loop (Hough line search on the lowest stripe, then template matching stripe by stripe), the end points of the segment fitting each railroad sleeper are recovered. For each point feature between the rails with coordinates (x, y), the end points are placed on the left and right rail lines,
  x_sleeper^left/right = x_rail^left/right(y),
with the corresponding y coordinate offset within the stripe according to the sleeper's inclination.

9. FoE tracking
FoE coordinate estimation from:
+ the principal point
+ the intersection point of the lines approximating the rails
+ the mean of the points of intersection among all matched points in a bounding box, considering the same fixed y coordinate
In this fashion the displacement of the FoE along the horizon line w.r.t. the principal point, Δx_FoE, can be registered and exploited to:
• filter the matched set of points;
• obtain motion cues. In more detail, the vehicle's yaw rate can be estimated as (see the sketch after this list):
  ω = Δψ / Δt = (Δx_FoE · v_t) / (f_x · d_near)
where:
o Δψ is the difference in yaw angle between consecutive frames
o Δt is the time interval between consecutive frames
o Δx_FoE is the FoE displacement along the horizon line w.r.t. the principal point
o f_x is the focal distance along the image x axis
o v_t is the tangential velocity
o d_near is the near clipping plane distance of the camera
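A small helper-function sketch of this estimate, with symbol names following the definitions above:

```python
def yaw_rate_from_foe(delta_x_foe, v_t, f_x, d_near):
    """Vehicle yaw rate from the FoE displacement along the horizon line:
    omega = delta_psi / delta_t = (delta_x_foe * v_t) / (f_x * d_near)."""
    return (delta_x_foe * v_t) / (f_x * d_near)
```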

10. Camera pose recovery
Block diagram of the pose-recovery pipeline:
• Line detection feeds the algebraic algorithm, which provides the initialization (pose, velocity).
• Point-feature matching and image flow feed the UKF sensor fusion, which estimates pose, velocity and acceleration and outputs the fused camera pose and motion; a threshold check filters wrong matches.
• FoE tracking on the rail lines provides the yaw rate and the detected turns.
• Object detection locates the markings (kilometer signs, poles) along the line.

11. Pose – algebraic algorithm
Line constraint
Considering a 3D line L seen by a camera having extrinsic parameters [R, t], the back-projection of the imaged line is a plane π with normal n, so every point X of L satisfies
  nᵀ (R X − t) = 0.
Line-based algorithm (from "Extrinsic calibration of heterogeneous cameras by line images", Ly et al., 2014):
1. To each j-th view a rotation matrix R_j is associated, according to the local spherical coordinates of the vanishing points.
2. Each view contributes the plane constraint n_jᵀ (R_j L − t_j) = 0.
3. The constraints are stacked into a homogeneous system M Y = 0 with Y = [Lᵀ, 1]ᵀ, solved for the line parameters and the translations (up to scale).
Two points lying on the edges of a sleeper can be locally reconstructed by imposing their known Euclidean distance, the 1.435 m track gauge. Employing the parameters [R_i, t̃_i], [R_j, t̃_j] associated with two different views of the same railroad tie, the unknown projective scale λ is recovered from
  R_i X̃_A − λ t̃_i − (R_j X̃_B − λ t̃_j) = 0.
The linear velocity v is then computed as v(k) = t(k) − t(k−1).
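Not from the slides: a generic NumPy sketch of how stacked back-projection constraints of this kind can be solved, building one row per view from (n_j, R_j, t_j) and extracting the null vector by SVD. The actual system assembled by Ly et al. (2014) is richer; this only illustrates the algebraic pattern.

```python
import numpy as np

def backprojection_rows(normals, rotations, translations):
    """One row per view j: n_j^T (R_j X - t_j) = 0 rewritten as
    [n_j^T R_j, -n_j^T t_j] @ [X, 1] = 0."""
    rows = [np.concatenate([n @ R, [-(n @ t)]])
            for n, R, t in zip(normals, rotations, translations)]
    return np.stack(rows)

def solve_homogeneous(M):
    """Least-squares null vector of M (||Y|| = 1): the right singular
    vector associated with the smallest singular value."""
    _, _, vt = np.linalg.svd(M)
    Y = vt[-1]
    return Y[:3] / Y[3]   # back to inhomogeneous 3D coordinates
```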

12. Pose – algebraic algorithm: accuracy achieved in simulation
Numerical test: whenever at least two groups of parallel lines are detected across three frames, it is possible to estimate the camera pose. The algorithm estimates the camera orientation with high accuracy; the non-zero error affecting the displacement estimation causes a small drift over time (~1% of the distance traveled). Results are shown for a subsampling of four views with an overall displacement of ~10 m.

13. Pose – UKF on manifold
State variables:
• Camera orientation and translation ℰ(k) = [R(k), t(k)] ∈ SE(3)
• Linear velocity v ∈ ℝ³
• Angular velocity ω ∈ ℝ³
• Linear acceleration a ∈ ℝ³
• Angular acceleration α ∈ ℝ³
State dynamics:
  ℰ(k+1) = ℰ(k) ⊞ [ω(k)Δt, v(k)Δt]
  v(k+1) = v(k) + a(k)Δt
  ω(k+1) = ω(k) + α(k)Δt
  a(k) = w_a(k)
  α(k) = w_α(k)
where ⊞ : SE(3) × ℝ⁶ → SE(3) and ⊟ : SE(3) × SE(3) → ℝ⁶. Applying the Lie algebra, the retraction is written through the exponential map (see the sketch below):
  ℰ(k+1) = exp( [ [ω(k)Δt]_×  v(k)Δt ; 0  0 ] ) ℰ(k)
Output variable: the non-homogeneous coordinates of each matched image point (∈ ℝ²).
Sampling time: Δt, the interval between consecutive frames.
Noise variables:
• Process noise [w_a, w_α] ∈ ℝ⁶, acting only on α and a, with associated covariance matrix K
• Measurement noise ∈ ℝ², with associated covariance matrix M
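A self-contained NumPy sketch of the ⊞ retraction used in these dynamics, implemented through the SE(3) exponential map; the left-multiplication convention is an assumption consistent with the formula above:

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w]_x of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(omega, v):
    """Exponential map of the twist (omega, v) onto a 4x4 SE(3) matrix
    (Rodrigues' formula for the rotation, closed-form V for the translation)."""
    theta = np.linalg.norm(omega)
    W = skew(omega)
    if theta < 1e-9:                       # first-order fallback near zero
        R, V = np.eye(3) + W, np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (1.0 - A) / theta**2
        R = np.eye(3) + A * W + B * W @ W
        V = np.eye(3) + B * W + C * W @ W
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

def boxplus(E, omega_dt, v_dt):
    """E (+) [omega*dt, v*dt]: compose the pose E in SE(3) with the
    exponential of the scaled twist, as in the state dynamics above."""
    return se3_exp(omega_dt, v_dt) @ E
```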

14. Pose – UKF on manifold: workflow
1. Initialization: the pose [R̂_alg(k−1), t̂_alg(k−1)] and velocity from the algebraic algorithm; two-view matching between the frames I(k−1) and I(k); triangulation of the matched features into 3D points X_i (actual image points z_i, intrinsics K).
2. The UKF processes each feature tracked between consecutive frames:
  a) Prediction, x̂⁻(k) = f(x̂(k−1), ·), handling sigma points χ_i around x̂(k−1):
    • χ_i(k) = χ_i(k−1) ⊞ [ω̂(k−1)Δt, v̂(k−1)Δt]
    • the sigma points are recomposed into the predicted state x̂⁻(k)
    • P⁻(k) = Σ_i w_i^(c) (χ_i ⊟ x̂⁻(k)) (χ_i ⊟ x̂⁻(k))ᵀ
  b) Correction, repeated for each i-th feature: the estimated measurement ẑ_i = h(K, x̂⁻(k), X_i) is the projection of the triangulated 3D point through the intrinsics and the predicted pose; the innovation is 𝓘_i = z_i − ẑ_i; with Kalman gain K_i and innovation covariance S_i,
    x̂_i(k) = x̂⁻(k) ⊞ K_i 𝓘_i,   P_i(k) = P⁻(k) − K_i S_i K_iᵀ.
3. RANSAC (up to 50 iterations) over (x̂(k), P(k), 𝓘(k)) rejects wrong matches.
4. The final estimate x̂(k) is obtained by fusing the UKF output with the FoE tracking (weighted epipole) and with the algebraic-algorithm estimate.
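A minimal sketch of the geometric pieces used in steps 1 and 2b, i.e. linear triangulation of a matched feature and the projection/innovation of the correction; poses are assumed to be 4x4 world-to-camera transforms and all names are illustrative:

```python
import cv2
import numpy as np

def triangulate(K, E1, E2, x1, x2):
    """Linear triangulation of a feature matched between two views with
    intrinsics K and 4x4 world-to-camera poses E1, E2."""
    P1 = K @ E1[:3, :]
    P2 = K @ E2[:3, :]
    Xh = cv2.triangulatePoints(P1, P2,
                               np.asarray(x1, float).reshape(2, 1),
                               np.asarray(x2, float).reshape(2, 1))
    return Xh[:3, 0] / Xh[3, 0]

def project(X, E, K):
    """Measurement model h: project the 3D point X through pose E and
    intrinsics K into non-homogeneous pixel coordinates."""
    Xc = E[:3, :3] @ X + E[:3, 3]
    z = K @ Xc
    return z[:2] / z[2]

def innovation(z_meas, X, E_pred, K):
    """Innovation for one tracked feature: measured pixel minus the
    projection of its triangulated 3D point under the predicted pose."""
    return np.asarray(z_meas, float) - project(X, E_pred, K)
```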

15. Analysis and reconstruction

16. Analysis and reconstruction
