Environment Awareness for Low-Vision Patients
IntelliSight Team
Rafael Carranza
Camera Software Integration
Vanessa Mejia
PCB and Mount Design
Xinyuan Zhang
IMU Software Integration
Overview
When we are out in the world, we are able to understand our surroundings by using both our global and local context.
- Global Context → Where we are in the world
- Local Context → What objects are in our surroundings
The Problem
- According to the World Health Organization, there are 285 million people with low vision in the world.
- They rely on their senses and on the people around them to understand their local context.
- Although technology has come a long way, it still cannot help them understand what is in their surroundings in a quick and easy way.
The Solution
IntelliSight solves this problem with a pair of smart sunglasses that combines:
- Visual information from a camera
- Orientation information from an IMU
- Location information using GPS
Hardware
Block Diagram
ESP32:
- Interfaces with our camera and IMU sensor
- Data → Android phone via Bluetooth
- Onboard USB-to-Serial converter
- Operating voltage: 3.7 V
Microcontroller
IMU: BNO055
- Collects orientation data
- Captures gestures (e.g., nodding)
- Operating voltage: 3.3 V
- Connects to the ESP32 via I2C
Data Collection
Camera: ArduCam Mini 2MP
- Takes pictures of the user’s surrounding environment
- Operating voltage: 5 V
- Connects to the ESP32 via SPI
LiPo Battery:
- Output Voltage: 3.7 V
- Powers the PCB
PowerBoost 500C:
- Takes 3.7 V as input and outputs 5 V
- Powers the camera
Power Supply
Printed Circuit Board
PCB Schematic
PCB Layout
Final PCB
Software
Camera Mode
Software
IMU Mode
Camera Mode: Object capture → Object detection → Text-to-speech
IMU Mode: Gesture detection → Building detection → Text-to-speech
Camera Mode
- Captures pictures of the user's surrounding environment
- Our phone application identifies the objects in the pictures using TensorFlow Lite
- Relays the information using text-to-speech
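The camera-mode pipeline above ends by turning detection results into speech. A minimal sketch of that last step in Python, assuming detections arrive as (label, confidence) pairs from the TensorFlow Lite model; the function name and the 0.5 confidence cutoff are illustrative, not taken from the slides:

```python
def describe_detections(detections, min_confidence=0.5):
    """Format object-detection results as a sentence for text-to-speech.

    `detections` is assumed to be a list of (label, confidence) pairs;
    the confidence threshold is an illustrative value.
    """
    # Keep unique labels above the confidence cutoff, sorted for a stable sentence.
    labels = sorted({label for label, conf in detections if conf >= min_confidence})
    if not labels:
        return "No objects detected."
    return "I see: " + ", ".join(labels) + "."
```

On the phone, the returned string would be handed to the platform text-to-speech engine.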
IMU Mode
Detects gesture → Collects bearing → Transmits bearing data to the app via Bluetooth → Scans along the bearing to detect a landmark
IMU Mode: Nodding
[Figure: pitch vs. time during a nod (axes: x = roll, y = pitch, z = yaw); a nod is detected when the pitch difference exceeds a threshold]
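The pitch-vs-time plot suggests a simple difference threshold over a window of recent pitch samples. A minimal sketch in Python; the 15° threshold and the windowing scheme are assumptions for illustration, not values from the slides:

```python
def detect_nod(pitch_window, threshold_deg=15.0):
    """Return True if pitch swings by more than the threshold within the window.

    `pitch_window` is a list of recent pitch readings in degrees; the
    threshold value is illustrative, not taken from the presentation.
    """
    if not pitch_window:
        return False
    # A nod shows up as a large peak-to-peak pitch difference in a short window.
    return max(pitch_window) - min(pitch_window) >= threshold_deg
```

In firmware this would run over a sliding buffer of BNO055 pitch readings.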
IMU Mode: Gesture Detection
- Azimuth range: [0, 360) degrees
- BNO055 set to NDOF (sensor fusion) mode
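Raw heading arithmetic (e.g., applying an offset) can leave an angle outside the [0, 360) azimuth range quoted above. A one-line normalization sketch in Python, the function name being our own:

```python
def normalize_azimuth(angle_deg):
    """Wrap an angle in degrees into the [0, 360) azimuth range.

    Python's modulo already returns a non-negative result for negative inputs.
    """
    return angle_deg % 360.0
```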
Bearing Data
IMU Mode: Building Detection
[Diagram: building search within 2 m of a point projected along the bearing from the user's GPS location]
IMU Mode: Building Detection
[Diagram: if the 2 m building search fails, the next search extends to 6 m along the same bearing]
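Scanning along the bearing requires projecting candidate search points at increasing distances (2 m, then 6 m per the diagrams) from the user's GPS fix. A sketch in Python using the standard great-circle destination-point formula; the function names and the two-step search list are our own framing of the slides:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def destination_point(lat_deg, lon_deg, bearing_deg, distance_m):
    """Point reached by traveling `distance_m` along `bearing_deg` from a start fix."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    ang = distance_m / EARTH_RADIUS_M  # angular distance on the sphere
    lat2 = math.asin(math.sin(lat1) * math.cos(ang)
                     + math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def search_points(lat_deg, lon_deg, bearing_deg, distances_m=(2.0, 6.0)):
    """Candidate building-search points at increasing distances along the bearing."""
    return [destination_point(lat_deg, lon_deg, bearing_deg, d) for d in distances_m]
```

Each candidate point would then be queried against a places database; if the 2 m search fails, the 6 m point is tried next.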
IMU Mode: Value Returned
Return the final result in voice
Distance   Output
< 30 m     "X is in front of you"
30-80 m    "X is close to you"
> 80 m     "No building nearby"
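The distance-to-phrase table above maps directly to a small function. A sketch in Python; the function name is ours, and the boundary handling at exactly 30 m and 80 m is an assumption the slides do not specify:

```python
def landmark_message(name, distance_m):
    """Map the distance to a detected landmark to a spoken message (per the table)."""
    if distance_m < 30:
        return f"{name} is in front of you"
    if distance_m <= 80:
        return f"{name} is close to you"
    return "No building nearby"
```

The returned string is what the app speaks via text-to-speech as the final result.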
IMU Mode: Further Development
- Higher accuracy in determining the landmark
○ Improved compass accuracy
○ Better state estimation