

SLIDE 1

Exploring a Multi-Sensor Picking Process in the Future Warehouse

Alexander Diete
September 9, 2016

University of Mannheim

SLIDE 2

About the project

SLIDE 3

Problem

Figure 1: Picking process in warehouses

SLIDE 4

Idea

Use sensors and video data to enhance the process

SLIDE 5

Hardware

  • Data glass (Vuzix M100)
  • Wristband (Custom 3D Print)
  • Depth Sensor (Project Tango Tablet)

SLIDE 6

Data gathering

SLIDE 7

Data collected

  • Data glass
      • IMU data
      • Video stream
  • Wristband
      • IMU data
      • RFID reads
  • Tango
      • Point cloud data
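The streams above come from three devices with different modalities. As a hypothetical illustration (not the project's actual code), one way to tag each incoming sample with its source device and modality:

```python
# Hypothetical sketch of a per-sample record, matching the streams
# listed above; field names and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Sample:
    device: str        # "glass", "wristband", or "tango"
    modality: str      # "imu", "video", "rfid", or "pointcloud"
    timestamp_ms: int  # device-local timestamp in milliseconds
    payload: object    # raw sensor reading


s = Sample("wristband", "rfid", 1234, "tag-0042")
print(s.device, s.modality)  # wristband rfid
```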

SLIDE 8

Recording Session

Figure 2: Different parts being recorded

SLIDE 9

Point cloud

Figure 3: 3rd person depth view

SLIDE 10

Recording Application

Figure 4: Sensor Data Collector App

SLIDE 11

Activities to be recognized

  • Navigation (walking to shelf)
  • Locating shelf
  • Grabbing into shelf

SLIDE 12

Problems

  • Time synchronization
  • Consistent recording rate for the sensors
  • Start and end points of labels

SLIDE 13

Solutions

  • Zero lining for time synchronization
  • Align datasets in post-processing
  • Manual sensor rate adjustment for glasses
  • Use observation video to pinpoint start and end of activities
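A minimal sketch of the "zero lining" idea, under the assumption that each device subtracts the timestamp of its first sample so all streams start at a shared relative t = 0 and can then be aligned in post-processing (function names are illustrative, not from the project):

```python
# Hypothetical sketch: shift every stream to a shared relative time axis.
def zero_line(samples):
    """Shift a stream of (timestamp_ms, value) pairs so it starts at t = 0."""
    t0 = samples[0][0]
    return [(t - t0, v) for t, v in samples]


def align(streams):
    """Zero-line every named stream; all results then share one time axis."""
    return {name: zero_line(s) for name, s in streams.items()}


# made-up IMU samples with device-local millisecond timestamps
glass = [(1000, 0.1), (1040, 0.2)]
wrist = [(5000, 9.8), (5025, 9.7)]
aligned = align({"glass": glass, "wrist": wrist})
print(aligned["glass"][0][0], aligned["wrist"][0][0])  # 0 0
```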

SLIDE 14

Solutions - Alignment tool

SLIDE 15

Solutions - Labeling tool

SLIDE 16

Dataset

SLIDE 17

Description

  • First recording session resulted in 2.7 GB of data
  • Different processes recorded:
      • Picking from one shelf
      • Picking from multiple shelves
      • Picking with different hands

SLIDE 18

Example

Figure 5: Accelerometer data from wristband
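A raw three-axis accelerometer trace like the one in Figure 5 is often reduced to a single magnitude signal before inspection, which makes movement episodes easier to spot. A minimal sketch (the values are made up, not from the dataset):

```python
# Hypothetical sketch: collapse x/y/z acceleration into one magnitude signal.
import math


def magnitude(ax, ay, az):
    """Euclidean norm of one accelerometer sample (m/s^2)."""
    return math.sqrt(ax * ax + ay * ay + az * az)


# at rest the magnitude stays near gravity (~9.81 m/s^2)
print(round(magnitude(0.0, 0.0, 9.81), 2))  # 9.81
```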

SLIDE 19

Future Work

SLIDE 20

Recording optimization

  • Switch to a full client-server architecture
      • Synchronized start of recording on all devices
      • Health status of sensors
      • Reduce the overall setup time
  • Better live preview of data
      • Video stream and plot of data
      • Includes health status of sensors

SLIDE 21

Machine Learning

  • Video stream
      • Object recognition (boxes, shelves)
      • Motion detection
  • Sensor data
      • Activity recognition (walking, standing, arm movement)
  • Combination of both data streams
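Activity recognition on IMU data typically starts by splitting the signal into fixed-size windows and computing simple statistics per window. A minimal sketch of that common starting point, not the project's actual pipeline (window size and step are arbitrary assumptions):

```python
# Hypothetical sketch: sliding-window feature extraction for activity
# recognition on a 1-D sensor signal.
from statistics import mean, pvariance


def windows(signal, size, step):
    """Yield fixed-size windows over the signal, advancing by `step`."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]


def features(signal, size=50, step=25):
    """Per-window (mean, variance) pairs, a classic IMU feature baseline."""
    return [(mean(w), pvariance(w)) for w in windows(signal, size, step)]


demo = [0.0] * 60 + [5.0] * 60  # made-up signal: rest, then movement
feats = features(demo)          # windows covering the transition show
                                # nonzero variance
```

A classifier (e.g. for walking vs. standing vs. arm movement) would then be trained on such per-window features.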

SLIDE 22

Depth information

  • 3rd-person perspective vs. 1st-person perspective
  • 3rd-person perspective is feasible for recognition but hard to deploy
  • 1st-person perspective: minimum distance of the depth sensor is 30 cm
      • Means that detection of objects is not feasible
      • But: can recognize if the background is blocked by some object
      • Thus grabbing detection should be possible
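The grabbing-detection idea above can be sketched as follows: anything closer than the sensor's ~30 cm minimum range yields invalid depth readings, so a spike in the fraction of invalid pixels suggests a hand or object blocking the background. This is a hedged illustration with an arbitrary threshold, not the project's implementation:

```python
# Hypothetical sketch: detect a likely grab from the share of invalid
# depth pixels in one frame (0.0 marks a reading below minimum range).
def grab_likely(depth_frame, invalid_ratio_threshold=0.3):
    """depth_frame: list of depth values in metres; returns True when the
    fraction of invalid (too-close) pixels exceeds the threshold."""
    invalid = sum(1 for d in depth_frame if d == 0.0)
    return invalid / len(depth_frame) > invalid_ratio_threshold


frame = [0.0] * 40 + [1.5] * 60  # made-up frame: 40% of pixels blocked
print(grab_likely(frame))        # True
```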

SLIDE 23

Conclusion

SLIDE 24

Summary

  • Created a framework for collecting multiple data sources
  • Built tools to align and label data
  • Proposed multiple approaches for activity recognition

SLIDE 25

Open Questions

  • Is the selection of sensors sufficient for the task?
  • Can machine learning be applied to the combination of data?
  • Is semi-supervised learning applicable across different warehouse locations?

SLIDE 26

Thank you for your attention