Visual Inertial Subsea 3D Reconstruction - PowerPoint PPT Presentation



SLIDE 1

Visual Inertial Subsea 3D Reconstruction

For Subsea Model Generation and Real-Time Positioning

www.Zupt.com

SLIDE 2

ZUPT’s VIEW ON THE APPLICATIONS AND TECHNOLOGY

What do we want?

  • We need a way to navigate accurately subsea, within the space we are working.
  • We may want to navigate where no external references are available.

SLAM lets us work within unknown environments autonomously.

  • We need to build an accurate model of the world around us – in real time – no delay to the deliverable.
  • We need to be able to support the accuracy claims of this model.

3D Reconstruction allows us to deliver this.

  • Any solution has to be aware of the infrastructure and incumbent processes we will compete within.

Power, size, bandwidth, water depth – probably most important – time to delivery of product!


SLIDE 3

JUST A FEW OF THE APPLICATIONS

Positioning:

  • Under Hull Positioning – Tricky to position free-moving targets in the water column.
  • Metrology – Delivers metrology-level accuracy, 30mm over 30m (1/1000).
  • Out of Straightness (OOS) – Accurate offset determination. Multibeam-like deliverable with position solution in the model.
  • The last few meters – Precise positioning for autonomous intervention into structures/control panels, etc.


Model:

  • Pipeline surveys – High-resolution free span data; anode depletion volumes possible.
  • As Built – Delivers exactly what is on the seabed and exactly where it is – import into operator GIS.
  • Asset Integrity Monitoring – Facilitates automated change detection, position and feature definition.
  • Chain/mooring inspection – Dynamic structure modeling.

Both:

  • Augmented Reality/Perception – Identify a feature, automatically display metadata and automatically navigate to that specific feature.
  • Dimensional Control at Depth – Structure modeling and subsea offset determination.

SLIDE 4

CONTENTS OF OUR TALK TODAY

  • An introduction to SLAM
  • An overview of our Visual Inertial SLAM system – 3D Recon
  • The basics of 3D reconstruction
  • Why we think you must integrate inertial
  • System design limitations and failure modes
  • Integration into current work processes


SLIDE 5

[Diagram: Localization + Mapping = SLAM]

WHAT IS SLAM?

SLAM provides the ability to position ourselves while developing knowledge of the environment around us.

Where am I? What is the world around me?


SLIDE 6

APPLICATIONS

Search and Rescue


Widely used today in autonomous vehicle applications – in air, in space, and subsea. Simple versions are used subsea (e.g. SLAM to calibrate an LBL beacon).

SLIDE 7

SLAM PROCESS: INITIALIZATION

  • Choose a global frame
  • Small initial uncertainty
  • Sensor measurements initialize landmarks
  • Sensor could be range info, camera image, sonar or LiDAR

Airborne UAV as an example


SLIDE 8

SLAM PROCESS: PROPAGATION

  • UAV is moving
  • Dynamic models estimate the new location
  • But – uncertainty increases


SLIDE 9

SLAM PROCESS: UPDATING AND THEN INITIALIZING NEW LANDMARKS

  • Data association matches previous landmarks
  • Uncertainty is decreased
  • New landmarks are added
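
The propagate/update cycle above can be sketched as a one-dimensional Kalman filter: prediction grows the uncertainty, and re-observing a landmark shrinks it again. All numbers are illustrative toy values, not from any real system.

```python
# Minimal 1-D Kalman cycle illustrating the SLAM steps above.

def predict(x, P, u, q):
    """Propagate the state with motion input u; process noise q grows P."""
    return x + u, P + q

def update(x, P, z, r):
    """Fuse a landmark re-observation z with noise r; P shrinks."""
    K = P / (P + r)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 0.01                         # small initial uncertainty
x, P = predict(x, P, u=1.0, q=0.04)      # vehicle moves: P grows to 0.05
x_new, P_new = update(x, P, z=1.02, r=0.05)  # landmark re-observed: P shrinks
```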


SLIDE 10

TOOLS AVAILABLE FOR SLAM

Landmark Sensing (output is relative to sensor frame):

  • Lidar – point cloud in local frame
  • Structured light – point cloud in local frame
  • Monocular Camera – RGB imagery, map and poses only recoverable up to a scale factor
  • Stereo Cameras – RGB + point cloud, map and pose

Inertial Sensing (output is relative to NED frame):

  • Accelerometers and gyroscopes – IMU/INS allows for accurate position and attitude estimation when aiding data is not available
  • Inertial + stereo gives high-rate pose estimation and adds robustness to global data association


SLIDE 11

SLAM – The Algorithms

Online SLAM – Estimate only the current pose and map

  • EKF SLAM
  • UKF SLAM
  • SEIF SLAM
  • Particle Filter SLAM
  • Gaussian Mixture Model SLAM


Full SLAM – Estimate every pose (computationally expensive)

  • Optimization based – Graph SLAM, Bundle Adjustment, etc.

Our Approach: Compute a real-time map and vehicle states using GMM SLAM. Build optimization errors and Jacobians for key frames. Run full optimization every N key-frames (bundle adjustment).
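
The full-SLAM idea can be shown with a toy 1-D example (assumed example numbers, not real survey data): three odometry steps and one loop-closure constraint, solved jointly as linear least squares – the scalar analogue of graph SLAM / bundle adjustment.

```python
import numpy as np

# Each row of A is one constraint on the pose vector [x0, x1, x2, x3].
A = np.array([
    [ 1.0,  0.0,  0.0, 0.0],   # prior pinning x0 at 0 (fixes the gauge)
    [-1.0,  1.0,  0.0, 0.0],   # odometry: x1 - x0
    [ 0.0, -1.0,  1.0, 0.0],   # odometry: x2 - x1
    [ 0.0,  0.0, -1.0, 1.0],   # odometry: x3 - x2
    [-1.0,  0.0,  0.0, 1.0],   # loop closure: x3 - x0
])
z = np.array([0.0, 1.02, 0.98, 1.01, 2.95])  # slightly inconsistent measurements

poses, *_ = np.linalg.lstsq(A, z, rcond=None)
# The 0.06 m disagreement between the odometry chain (3.01) and the loop
# closure (2.95) is spread evenly over the four relative constraints.
```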

SLIDE 12

VISUAL INERTIAL SLAM

  • Multi-baseline stereo – lower triangulation error + more image overlap for nearby targets
  • Tactical grade IMU – provides high-rate control inputs for the dynamic model
  • Custom strobed lighting with image feedback controller – change light intensity, not exposure time (avoids blurring and variable time of validity)
  • Specially designed lens for balanced illumination across images


[Diagram: Camera 1, Camera 2, Camera 3, IMU]

SLIDE 13

VISUAL-INERTIAL SLAM

OUR IMPLEMENTATION OF A SUBSEA SLAM SOLUTION

[Pipeline diagram. Blocks: Imaging Sensors; IMU; Stereo Rectification; Feature Detection and Description; Local Matching; Triangulation; Lever Arm Adjustments; Inertial Propagation; Global Feature Matching; SLAM Updates and Feature Initialization; Dense Stereo Matching; Sparse point cloud with descriptors; Dense Point Cloud; Dense Model Refinement. Outputs: 3D model in global frame; vehicle position, attitude and velocity.]

SLIDE 14

FEATURE DETECTION AND DESCRIPTION


Detection – find unique points in the image, usually corners or edges.

Description – compute a unique descriptor so the features can be matched locally and globally. SIFT, SURF, and ORB are the most common.
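
A minimal sketch of corner detection, using a Harris-style response on a synthetic image (a white square on black). This illustrates "find corners", not the SIFT/SURF/ORB detectors named above; the image and the constant k = 0.04 are toy values.

```python
import numpy as np

def box3(a):
    """3x3 box sum - a crude smoothing of the structure tensor."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3))

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0                 # square with a corner at (5, 5)

Iy, Ix = np.gradient(img)             # image gradients (rows, cols)
A, B, C = box3(Ix*Ix), box3(Iy*Iy), box3(Ix*Iy)
R = (A*B - C*C) - 0.04*(A + B)**2     # Harris corner response
# R is strongly positive at corners, negative along edges, ~0 in flat areas.
```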

SLIDE 15

FEATURE MATCHING


  • Use Euclidean distance or angle between descriptors (dot product) for matching
  • Stereo constraint can be used to eliminate outliers
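
A sketch of nearest-neighbour matching with Euclidean distance, plus Lowe's ratio test (a common companion technique, not stated on the slide) to reject ambiguous matches. The descriptors are tiny made-up vectors, not real SIFT/ORB output.

```python
import numpy as np

def match(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs where a's best match in b is unambiguous."""
    pairs = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        j, k = np.argsort(dists)[:2]                 # best and second-best
        if dists[j] < ratio * dists[k]:              # ratio test
            pairs.append((i, j))
    return pairs

a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.9], [0.9, 0.1], [5.0, 5.0]])
matches = match(a, b)
```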

SLIDE 16

DISPARITY COMPUTATION

DIFFERENCE IN X COORDINATE (DEPTH) IN BOTH IMAGES


Disparity to XYZ Example Calculation:

  • Rectify images and attempt to match every pixel in each row based on intensity.
  • Structured light (line laser or pseudo-random patterns) can be used to improve accuracy in poorly textured scenes.
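
A worked disparity-to-XYZ calculation, assuming rectified images and a pinhole model. The intrinsics below (focal length, principal point, baseline) are made-up example values, not any real calibration.

```python
def disparity_to_xyz(u, v, d, f=1400.0, cx=960.0, cy=540.0, baseline=0.30):
    """Rectified stereo: depth Z = f*B/d, then pinhole back-projection."""
    Z = f * baseline / d          # depth from disparity (metres)
    X = (u - cx) * Z / f          # lateral offset
    Y = (v - cy) * Z / f          # vertical offset
    return X, Y, Z

# A pixel 140 px right of the principal point with 210 px of disparity:
X, Y, Z = disparity_to_xyz(u=1100.0, v=540.0, d=210.0)
# Z = 1400 * 0.30 / 210 = 2.0 m; X = 140 * 2.0 / 1400 = 0.2 m; Y = 0
```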

SLIDE 17

POINT CLOUD GENERATION (SPARSE AND DENSE)


SLIDE 18

Global Matching and Random Sample Consensus (RANSAC)


  • Use current INS solution to project global points into the camera frame.
  • Match features based on position and descriptor.
  • Use RANSAC to remove outliers.
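
A RANSAC sketch on toy data: estimate a 2-D translation between matched point sets when some matches are wrong. A real system would estimate a full 6-DOF pose the same way; the points, seed, and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.uniform(0, 10, (30, 2))
dst = src + np.array([2.0, -1.0])          # true translation
dst[:5] = rng.uniform(0, 10, (5, 2))       # corrupt 5 matches (outliers)

best_t, best_inliers = None, 0
for _ in range(50):
    i = rng.integers(len(src))             # minimal sample: one match
    t = dst[i] - src[i]                    # hypothesised translation
    inliers = np.sum(np.linalg.norm(dst - (src + t), axis=1) < 0.1)
    if inliers > best_inliers:             # keep the best-supported model
        best_t, best_inliers = t, inliers
```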
SLIDE 19

SPARSE (positioning) / DENSE (model) POINT CLOUD GENERATION


Sparse SLAM Map and Vehicle Poses – Each feature point has XYZ, RGB + descriptor. Descriptor distance + XYZ distance are used for global matching for SLAM updates.

Dense Point Cloud Projection using SLAM Poses – Each feature point has only XYZ, RGB. Down-sampling and refinements are made to further align projected point clouds.

SLIDE 20

Continual IMU Calibration


SLIDE 21

DE-NOISING AND MESH GENERATION


SLIDE 22

ANALYZE STRUCTURE DEPTH


SLIDE 23

ACCURACY OF THIS DATA SET < +/-2MM


SLIDE 24

WHY AN IMU IS A CRITICAL COMPONENT


When compared to pure image-alone based solutions:

  • Lower image frame rate required, less uplink bandwidth, less storage.
  • Continues to work in very degraded visibility – ignores particulate matter in the water column: INS + RANSAC can deal with false features. Fallback to free inertial in total blindness.
  • INS allows us to lose imagery and still estimate position and attitude between valid poses.
  • Enables much faster real-time (nearly real-time) processing to a dense point cloud.
  • A very precise and separate “aid” to constrain any image calibration issues – significantly removes scaling errors seen in image-only based linear model deliverables.
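
The free-inertial fallback can be sketched as simple dead reckoning: integrate accelerometer samples to carry position and velocity through a visual outage. This is a 1-D, bias-free toy with assumed values; a real INS integrates 3-D specific force and gyro rates with bias states.

```python
dt = 0.01                      # 100 Hz IMU (assumed rate)
pos, vel = 0.0, 0.0
for _ in range(100):           # 1 s of total blindness
    accel = 1.0                # toy constant specific force, m/s^2
    vel += accel * dt
    pos += vel * dt            # semi-implicit Euler integration
# After 1 s: vel = 1.0 m/s, pos ~= 0.5 m (0.505 with this scheme)
```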

SLIDE 25

SYSTEM DESIGN LIMITATIONS AND FAILURE MODES

  • Distance to target decreases relative accuracy – beyond 4m, a baseline larger than 30cm is needed.
  • A solution for chain link/mooring surveys would define a shorter baseline – 5cm to 10cm.
  • If we cannot see it – we cannot build a model.
  • A reflective surface (mirror-like finish or high-gloss surface) – dense matching on reflective surfaces can be inaccurate. Testing is in progress with polarized cameras to alleviate this.
  • Shadows and in-frame/in-view ROV fixtures have to be blocked from the processing solution.
  • Lighting is critical – balanced illumination across the scene is essential. Zupt developed our own lights/diffusers to ensure optimal lighting.
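
The distance/baseline trade-off follows from the first-order stereo depth error model, sigma_Z ≈ Z² · sigma_d / (f · B): error grows with the square of range and shrinks with baseline. The focal length and matching error below are assumed example values, not Zupt's calibration.

```python
def depth_sigma(Z, baseline, f=1400.0, sigma_d=0.25):
    """1-sigma depth error (m) for a matching error of sigma_d pixels."""
    return Z**2 * sigma_d / (f * baseline)

near = depth_sigma(Z=1.0, baseline=0.30)   # sub-millimetre at 1 m
far  = depth_sigma(Z=4.0, baseline=0.30)   # ~16x worse at 4 m
wide = depth_sigma(Z=4.0, baseline=0.60)   # halved by doubling the baseline
```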


SLIDE 26

LAKE TEST DATA EXAMPLES - DENSE PLAN VIEW


SLIDE 27

LAKE TEST DATA EXAMPLES – ISOMETRIC VIEW


SLIDE 28

POSITIONING TRAJECTORY OVERLAY


SLIDE 29

AN ENVIRONMENT WITH NO FEATURES?


[Images: Original Image; Detected Features; Close Up]

Features are still present, but their descriptors won’t be as strong.

SLIDE 30

INTEGRATION INTO CURRENT WORK PROCESSES?

DIFFICULT TO GET ANSWERS?

  • Who is the real customer for the deliverable?
  • What deliverable is really wanted?
  • To what level of resolution?
  • How do these deliverables merge into the operators, enterprise wide, systems?

DISRUPTIVE TECHNOLOGIES

  • How do these next generation solutions “fit” into incumbent processes that primarily insist upon video/DVR based data sets?
  • Eventing and classification will be from very different data sets.
  • Have to collect a baseline data set to enable automated change detection.
  • Some sort of standardization might need to exist to allow the baseline data to be used by many – i.e. competitively bid.
  • Data management – much larger data sets; full data set transfer still has to be physical, not via bandwidth.


SLIDE 31

COMPLIANCE WITH INCUMBENT PROCESSES

Just some of the demands from the incumbent processes, historically delivered by video:

  • Operators’ IM platforms – Risk Based Integrity (RBI) management software
  • Practically a hierarchical task list that glues video or screen grabs to an event:
    Aker ix3 COABIS™
    Wood Group Nexus (IC)™ Integrity Center
  • Very integrated into such conventional applications as VisualSoft
  • Process-driven IM demands are inherent in operators’ procedures and drive existing methods:
  • CSWIP 3.3u/3.4u certification is required during surveys – purely a video eventing process


SLIDE 32

Summary

  • Our goal is to have the ability to autonomously navigate while simultaneously generating high-resolution models in real time.
  • We’re utilizing a combination of online and full SLAM algorithms to enable accurate, real-time navigation.
  • By using multi-baseline stereo, we’re able to increase image overlap close up while decreasing triangulation error for far away objects. There is no need for scale bars.
  • Furthermore, by incorporating high-rate acceleration and angular velocity, we are able to navigate during instances of poor visibility.
  • The IMU also enables us to extend our platform to other tightly coupled INS solutions (LBL, beam-level DVL).


SLIDE 33

Contact Info

EMAIL: mrt@zupt.com

PHONE: +1 832 295 7280

WEBSITE: www.zupt.com