Using line-based wind turbine representations for UAV localisation during autonomous inspection
Dr Oliver Moolan-Feroze, University of Bristol
Overview
- 1. Why are we using a line-based model for localisation during wind turbine inspection?
- 2. Defining the line-based turbine model
- 3. Extracting line-based representations from images
- 4. Integrating the line model into an optimiser
- 5. Results
Why Lines?
- Wind turbines share a common structure:
- Tower
- Nacelle (hub)
- 3 Blades
- A model-based tracking approach makes sense over full SLAM
- However...
- What model do we use?
- How do we associate model features with image locations?
Why Lines?
- Typically, we find correspondences between model and image using distinguishable features
- Enough correspondences allow us to estimate pose
- In certain views we can use point-based features
- However, a lot of the time, no features are in view, especially when close up
Why Lines?
- We extend lines connecting the point features
- This enables us to incorporate image measurements between feature points
Turbine Model - definition
The model is defined using a set of points P:
- Turbine base: p_b
- Tower top: p_t
- Blade centre: p_c
- Blade tips: p_{B_i} where i ∈ {1, 2, 3}
And the set of connecting lines L:
- Turbine tower: l_T = {p_b, p_t}
- Nacelle: l_N = {p_t, p_c}
- Blades: l_{B_i} = {p_c, p_{B_i}}
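The point-and-line structure above can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the point names, the blade-tip spacing, and the choice of a unit-sized model are all assumptions.

```python
import numpy as np

# Hypothetical "unit" line-based turbine model: named 3-D points P and
# the lines L connecting them (tower, nacelle, three blades).
def unit_turbine_model():
    points = {
        "base":   np.array([0.0, 0.0, 0.0]),   # p_b
        "top":    np.array([0.0, 0.0, 1.0]),   # p_t, unit tower height
        "centre": np.array([0.0, 1.0, 1.0]),   # p_c, nacelle along +y
    }
    # Three unit-length blade tips, 120 degrees apart in the rotor
    # plane (here the x-z plane, perpendicular to the nacelle axis).
    for i, ang in enumerate(np.radians([90.0, 210.0, 330.0]), start=1):
        points[f"tip{i}"] = points["centre"] + np.array(
            [np.cos(ang), 0.0, np.sin(ang)]
        )
    lines = [
        ("base", "top"),       # tower: l_T
        ("top", "centre"),     # nacelle: l_N
        ("centre", "tip1"),    # blades: l_{B_i}
        ("centre", "tip2"),
        ("centre", "tip3"),
    ]
    return points, lines

points, lines = unit_turbine_model()
```

Storing the lines as pairs of point names keeps the topology separate from the geometry, which is convenient when the points are later re-instantiated from the model parameters.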
Turbine Model - parameters
The model is parameterised with the following values θ:
- The x, y location of the base: o ∈ ℝ²
- The height of the tower: h
- The heading of the turbine: φ
- The length of the nacelle: n
- The rotation of the blades: r
- The length of the blades: b
Turbine Model - instantiation
- Given a "unit" version of the model M̂ = {P̂, L̂} and parameters θ, we instantiate a model with a set of functions f:
p_b = f_b(p̂_b, θ), p_t = f_t(p̂_t, θ), p_c = f_c(p̂_c, θ), p_{B_i} = f_{B_i}(p̂_{B_i}, θ)
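One way to realise the instantiation functions f is sketched below. The geometric conventions (heading rotates about the vertical axis, blade rotation measured in the rotor plane) are assumptions for illustration, not the authors' exact parameterisation.

```python
import numpy as np

# Hypothetical instantiation of the turbine model from parameters
# theta = (x, y, h, phi, n, r, b): base location, tower height,
# heading, nacelle length, blade rotation, blade length.
def instantiate(theta):
    x, y, h, phi, n, r, b = theta
    base = np.array([x, y, 0.0])
    top = base + np.array([0.0, 0.0, h])
    # The heading phi rotates the nacelle direction about the z axis.
    heading = np.array([np.cos(phi), np.sin(phi), 0.0])
    centre = top + n * heading
    # Blade tips: rotation r about the nacelle axis, 120 degrees apart,
    # in the plane perpendicular to the heading.
    rotor_up = np.array([0.0, 0.0, 1.0])
    rotor_side = np.cross(rotor_up, heading)
    tips = []
    for k in range(3):
        ang = r + k * 2.0 * np.pi / 3.0
        tips.append(centre + b * (np.cos(ang) * rotor_up
                                  + np.sin(ang) * rotor_side))
    return {"base": base, "top": top, "centre": centre, "tips": tips}

# Example: turbine at (10, 5), 80 m tower, 4 m nacelle, 40 m blades.
model = instantiate((10.0, 5.0, 80.0, 0.0, 4.0, 0.0, 40.0))
```

Because every point is an elementary function of θ, the whole construction stays differentiable, which is what later allows θ to be estimated inside the optimiser.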
CNN line model feature extraction
- To enable matching, we extract a representation of the reprojection of the line model from the images using a CNN
- RGB images as well as "prior" reprojection images are fed in as inputs
- Reprojection estimates of the line model and point model are the outputs
- Architecture: encoder-decoder, built from convolutional, max-pooling, linear upsampling, and sigmoid layers
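The network interface described above can be made concrete at the tensor-shape level. The image size, channel layout, and the stand-in forward pass below are illustrative assumptions; only the input/output structure (RGB plus prior reprojections in, two sigmoid-activated reprojection maps out) follows the slide.

```python
import numpy as np

# Shape-level sketch of the CNN's inputs and outputs (not the real
# architecture): RGB stacked with "prior" reprojection channels in,
# per-pixel line-model and point-model responses out.
H, W = 128, 128
rgb = np.zeros((H, W, 3))
prior_lines = np.zeros((H, W, 1))    # prior line-model reprojection
prior_points = np.zeros((H, W, 1))   # prior point-model reprojection
net_input = np.concatenate([rgb, prior_lines, prior_points], axis=-1)

def fake_forward(x):
    # Stand-in for the encoder-decoder: the real network would apply
    # convolutions, max-pooling and upsampling; here we only reproduce
    # the final sigmoid so outputs are per-pixel probabilities.
    logits = x.mean(axis=-1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(logits), sigmoid(logits)

line_map, point_map = fake_forward(net_input)
```

Feeding the prior reprojections as extra input channels is what lets the network condition its output on the current pose estimate, which the "with priors / without priors" comparison on the next slide evaluates.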
CNN line model feature extraction
(Figure: network outputs with priors vs. without priors)
CNN line model feature extraction
- Examples of the network output: (top) the extracted line model, (bottom) the extracted point model
Integration with the optimiser
- The extracted line reprojections don't provide enough information to fully estimate pose on their own
- We combine the image information with a keyframe pose-graph optimiser
- The graph is constrained using image measurements and pose estimates obtained from IMU / GPS
- G = {W, E}
- W = {w_1, ..., w_N}
- w_i = {R, t}
- E = {e_{1,2}, ..., e_{N-1,N}}
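The graph structure G = {W, E} can be sketched as follows. The residual shown is an illustrative GPS pose-prior term, not the authors' exact cost function.

```python
import numpy as np

# Minimal sketch of the keyframe pose graph from the slide:
# G = {W, E}, with keyframe poses w_i = {R, t} and edges between
# consecutive keyframes.
def pose_graph(num_keyframes):
    keyframes = [{"R": np.eye(3), "t": np.zeros(3)}
                 for _ in range(num_keyframes)]
    edges = [(i, i + 1) for i in range(num_keyframes - 1)]
    return keyframes, edges

def gps_residual(w, gps_position):
    # Hypothetical pose-prior constraint: penalise the deviation of
    # the keyframe translation from the GPS-measured position.
    return w["t"] - gps_position

W, E = pose_graph(4)
r = gps_residual(W[0], np.array([1.0, 2.0, 3.0]))
```

In practice each edge e_{i,i+1} would carry a relative-motion constraint (e.g. from the IMU), and each keyframe would additionally collect the image-based constraints described on the next slide.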
Integration with the optimiser
- Model lines are split into a series of points and projected into the image to generate constraints
- Image constraints can be generated in two different ways
1. Using a perpendicular line search and establishing a 3D -> 2D correspondence
- Restricts the movement of the model point during optimisation
2. Using direct image interpolation
- Allows the model point to move freely over the image
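The second constraint type, direct image interpolation, can be illustrated with a standard bilinear lookup: the CNN line-response map is sampled at the sub-pixel location of a projected model point, giving a cost that varies smoothly as the point moves. This is a generic sketch of the technique, not the paper's implementation.

```python
import numpy as np

# Bilinear interpolation of an image at a sub-pixel location (u, v).
# Sampling the CNN line map this way gives a smooth, differentiable
# response as the projected model point moves over the image.
def bilinear(img, u, v):
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[v0, u0]
            + du * (1 - dv) * img[v0, u0 + 1]
            + (1 - du) * dv * img[v0 + 1, u0]
            + du * dv * img[v0 + 1, u0 + 1])

# Toy line map with two bright pixels; sample between them.
line_map = np.zeros((4, 4))
line_map[1, 1] = 1.0
line_map[1, 2] = 1.0
val = bilinear(line_map, 1.5, 1.5)
```

Unlike the perpendicular line search, which fixes a 2-D target for each model point, this cost lets the optimiser slide points along the line response, which is exactly the freedom the slide describes.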
Example matching using perpendicular line search
Experiments – real inspection data
Experiments – flight using synthetic data
Pose and Model Joint Optimisation
- Previously, the turbine model parameters were estimated and fixed at the beginning of the flight
- Error in parameters -> error in estimated poses
- We now jointly optimize both the set of poses, and the turbine
parameters π
Pose and Model Joint Optimisation
- The set of functions f is designed to be differentiable
- When we project model points into the images, they are transformed using f
- The parameters θ are now estimated during the optimisation
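The idea can be reduced to a toy one-parameter example: because the model point depends differentiably on a parameter (here the tower height h), the reprojection residual can drive updates to h alongside the poses. The pinhole numbers and step size are made up for illustration.

```python
# Toy 1-D sketch of joint model-parameter optimisation: refine the
# tower height h by gradient steps on a reprojection residual.
# (Hypothetical focal length / depth; not the authors' setup.)
def project_top(h, focal=100.0, depth=50.0):
    # Simplified pinhole projection of the tower-top point at height h.
    return focal * h / depth

observed_v = project_top(80.0)   # image measurement of the true turbine
h = 70.0                         # poor initial height estimate
for _ in range(100):
    r = project_top(h) - observed_v   # reprojection residual
    J = 100.0 / 50.0                  # d(projection)/dh, constant here
    h -= 0.1 * J * r                  # gradient step on the parameter
```

In the full system the same mechanism runs over all of θ and all keyframe poses simultaneously, which is why errors in the initial model parameters no longer translate directly into pose errors.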
Experiments – real data joint optimisation
Experiments – synthetic joint optimisation
(Figure: final vs. initial position error (m), and final vs. initial orientation error (rad), comparing "pose and model" against "pose only")
- Using synthetic data, we evaluated the performance of joint optimisation
- Improvement in both position and orientation estimation when doing joint optimisation
Thanks!
The work presented today is based on two papers
- Improving drone localisation around wind turbines using monocular model-based tracking - Oliver Moolan-Feroze, Konstantinos Karachalios, Dimitrios N. Nikolaidis, and Andrew Calway
- Simultaneous drone localisation and wind turbine model fitting during autonomous surface inspection - Oliver Moolan-Feroze, Konstantinos Karachalios, Dimitrios N. Nikolaidis, and Andrew Calway