SLIDE 1

Illumination Assessment for Vision-Based Traffic Monitoring

By SHRUTHI KOMAL GUDIPATI

SLIDE 2

Outline

• Introduction
• PVS system design & concepts
• Assessing lighting
• Assessing contrast
• Assessing shadow presence
• Conclusion

SLIDE 3

Introduction

• Vision systems in the traffic domain operate autonomously over varying environmental conditions

• They use different parameter values or algorithms depending on these conditions

• The appropriate parameters depend on the ambient conditions seen in the camera images

SLIDE 4

PVS system

• A commercial real-time vision system for traffic monitoring that detects, tracks, and counts vehicles

• Uses a large volume of video data obtained from 25 different scenes

• Switches between different parameter values and algorithms depending on aspects of the scene illumination

SLIDE 5

Aspects of Scene illumination

• Is the scene well-lit?

• Are vehicle bodies visible?

• In poorly-lit scenes, are only the vehicle lights visible?

SLIDE 6

Aspects of Scene illumination

SLIDE 7

Aspects of Scene illumination

SLIDE 8

Aspects of Scene illumination

• Is the contrast sharp enough?

• Is visibility sufficient for reliable detection, or is it too diminished?

• Example causes of reduced visibility: fog, dust, or snow

SLIDE 9

Aspects of Scene illumination

• Are vehicles in the scene casting shadows?

SLIDE 10

PVS System Design

• Processes frames at 30 Hz

• Processes images from up to 4 cameras simultaneously

• Compact: fits on a 3U VME board

SLIDE 11

3U VME Board

SLIDE 12

PVS system hardware

• Two Texas Instruments TMS320C31 DSP chips

• A Sensar pyramid chip

• A custom ALU implemented using a Xilinx chip

SLIDE 13

Operation principle

• Maintains a reference image that contains the scene as it would appear if no vehicles were present

• Each incoming frame is compared to the reference (a sketch of this comparison follows below)

• Pixels where there are significant differences are grouped together into "fragments" by the detection algorithm

• These fragments are grouped and tracked from frame to frame using a predictive filter
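The slides give no code for this step; below is a minimal sketch, assuming 8-bit grayscale frames and a hypothetical fixed difference threshold (PVS's actual thresholding and its fragment grouping/tracking logic are not described here).

```python
import numpy as np

def significant_difference_mask(frame, reference, threshold=25):
    """Mark pixels that differ significantly from the reference image.

    frame, reference: 8-bit grayscale arrays of equal shape.
    threshold is a hypothetical value, not one taken from the paper.
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold  # True where a vehicle (or illumination artifact) may be present
```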

SLIDE 14

SLIDE 15

One dimensional strip representation

• Reduces the 2D image of each lane to a 1D "strip"

• An integration operation sums two pixel-wise measures across the portion of each image row spanned by the lane, yielding a brightness and an energy measurement for each row

• The integration is performed by the ALU, which takes as input a bit-mask identifying each lane (a sketch follows below)
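A minimal NumPy sketch of this reduction, assuming a per-pixel boolean `lane_mask` in place of the hardware bit-mask; the two per-row measures are the brightness and energy defined on the next slide.

```python
import numpy as np

def reduce_lane_to_strip(image, lane_mask):
    """Collapse the 2D pixels of one lane into a 1D strip.

    image:     2D grayscale array.
    lane_mask: boolean array of the same shape, True where a pixel belongs
               to the lane (stand-in for the ALU's bit-mask).
    Returns one brightness value and one energy value per image row.
    """
    rows = image.shape[0]
    brightness = np.zeros(rows)
    energy = np.zeros(rows)
    img = image.astype(np.float64)
    for y in range(rows):
        w = img[y][lane_mask[y]]                          # W_y: lane pixels of row y
        if w.size == 0:
            continue
        brightness[y] = w.sum()                           # B(s, y)
        if w.size > 1:
            energy[y] = np.abs(np.diff(w)).sum() / w.size # E(s, y)
    return brightness, energy
```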

SLIDE 16

2D -> 1D transformation

SLIDE 17

Strip measurements

• Two measurements, brightness and energy, are computed for each strip element y of each strip s; W_y denotes the set of lane pixels in row y

• Brightness: B(s, y) = Σ of the pixel values in W_y

• Energy: E(s, y) = [ Σ of the absolute differences between adjacent pixels in W_y ] / ||W_y|| (a small worked example follows)
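As a quick check on the notation, a small worked example: if the lane span W_y of one row contains the pixel values [10, 12, 15, 15, 20], then B(s, y) = 10 + 12 + 15 + 15 + 20 = 72, the adjacent absolute differences sum to 2 + 3 + 0 + 5 = 10, and with ||W_y|| = 5 the energy is E(s, y) = 10 / 5 = 2.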

SLIDE 18

Reference strips

• Brightness and energy measurements gathered from a strip over time are used to construct a reference strip

• For scenes in which traffic is flowing freely, the reference strip can be constructed by IIR filtering (sketched below)

• IIR filtering does not work in stop-and-go or very crowded areas
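A minimal sketch of such an IIR update, assuming a simple first-order filter with a hypothetical gain alpha (the coefficients PVS actually uses are not given in the slides).

```python
def update_reference_iir(reference, current, alpha=0.05):
    """First-order IIR update of a reference strip.

    reference, current: sequences of per-row measurements (e.g. brightness).
    alpha is a hypothetical gain: small values adapt slowly, which is why the
    approach breaks down in stop-and-go traffic, where stationary vehicles
    get absorbed into the reference.
    """
    return [(1.0 - alpha) * r + alpha * c for r, c in zip(reference, current)]
```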

SLIDE 19

Reference strips

• A strip element in the reference image is updated:

  • if 1 second has elapsed since the last time a significant dt (frame-to-frame difference) value was observed at that position, or

  • if 1 minute has elapsed since the last update of that element (see the sketch below)
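A sketch of that per-element rule, assuming timestamps in seconds and that the "significant dt" test is made elsewhere.

```python
def should_update(now, last_significant_dt_time, last_update_time):
    """Decide whether a reference strip element may be updated.

    True if at least 1 s has passed since a significant frame-to-frame
    difference (dt) was last seen at this position, or at least 60 s have
    passed since the element was last updated. All times are in seconds.
    """
    return (now - last_significant_dt_time >= 1.0) or (now - last_update_time >= 60.0)
```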

SLIDE 20

Strip element classification

• Classify each strip element of the current strip as background or non-background

• Done by computing brightness and energy difference measures between the current strip I and the reference strip R (a sketch follows below):

• ΔB(y) = B(I, y) - B(R, y) - o·||W_y||, where o is a global brightness offset measured by a separate process

• ΔE(y) = |E(I, y) - E(R, y)|
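A sketch of the classification, assuming hypothetical thresholds on the two difference measures; the process that estimates the global offset o is not described in the slides (a point raised again in the Conclusions).

```python
def classify_strip_element(B_I, B_R, E_I, E_R, lane_width, o,
                           b_thresh=50.0, e_thresh=2.0):
    """Classify one strip element as 'background', 'bright', or 'dark'.

    B_I, E_I: brightness/energy of the current strip element.
    B_R, E_R: brightness/energy of the reference strip element.
    lane_width: ||W_y||, the number of lane pixels in this row.
    o: globally estimated brightness offset.
    b_thresh, e_thresh: hypothetical decision thresholds, not from the paper.
    """
    dB = B_I - B_R - o * lane_width      # delta-B(y)
    dE = abs(E_I - E_R)                  # delta-E(y)
    if abs(dB) < b_thresh and dE < e_thresh:
        return "background"
    # Non-background elements are further split by whether they are brighter
    # or darker than the corresponding reference element (slide 22).
    return "bright" if B_I > B_R else "dark"
```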

SLIDE 21

Classification as Background or Non-Background

SLIDE 22

Strip element classification

• Each strip element that is classified as non-background is further classified as "bright" or "dark"

• The label depends on whether its brightness is greater or less than the brightness of the corresponding reference strip element

SLIDE 23

Illumination Assessment

• Over all frames grabbed in a two-minute interval, all strip elements that have both been identified as non-background and have a significant dt are used to update various statistical measures

• The values of these measures are used to assess the lighting, contrast, and shadows

SLIDE 24

Fragment Detection

• Groups non-background strip elements into symbolic "vehicle fragments"

• To prevent false-positive vehicle detections, the system avoids detecting illumination artifacts as vehicle fragments

SLIDE 25

Fragment Detection

• Uses three different detection techniques, depending on the nature of the scene illumination:

  • Detection in well-lit scenes without vehicle shadows

  • Detection in well-lit scenes with vehicle shadows

  • Detection in poorly-lit scenes

SLIDE 26

Fragment Detection

Detection in well-lit scenes without vehicle shadows

• A scene is termed well-lit if the entire vehicle body is visible

• Scenes are termed poorly-lit if the only clearly visible vehicle components are the headlights or taillights
SLIDE 27

Fragment Detection

Detection in well-lit scenes with vehicle shadows

• In well-lit scenes where vehicles are casting shadows, the detection process must be modified so that non-background strip elements due to shadows are not grouped into vehicle fragments

• Uses stereo or motion cues to infer height

SLIDE 28

Fragment Detection

Detection in poorly-lit scenes

• Where only vehicle lights are visible, fragment extraction via connected components is prone to false positives due to headlight reflections

• Instead, fragments are extracted by identifying compact bright regions of non-background strip elements around local brightness maxima (sketched below)
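The slides do not spell the extraction out; a minimal 1D sketch along a strip, assuming hypothetical brightness and window parameters, of how compact bright regions around local brightness maxima might be picked out:

```python
import numpy as np

def headlight_candidates(strip_brightness, non_background,
                         min_brightness=180.0, radius=2):
    """Find strip positions that look like vehicle lights.

    A position qualifies if it is non-background, bright enough, and a local
    maximum of brightness within +/- radius elements. min_brightness and
    radius are hypothetical parameters, not values from the paper.
    """
    b = np.asarray(strip_brightness, dtype=float)
    candidates = []
    for y in range(len(b)):
        if not non_background[y] or b[y] < min_brightness:
            continue
        lo, hi = max(0, y - radius), min(len(b), y + radius + 1)
        if b[y] >= b[lo:hi].max():       # local brightness maximum
            candidates.append(y)
    return candidates
```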

SLIDE 29

Fragment tracking & grouping

After the vehicle fragments have been extracted, they are passed to the Tracker module, which tracks them over time and groups them into objects
SLIDE 30

Assessing lighting

• Measures used for assessing whether the scene is well-lit, i.e. whether the entire body of most vehicles will be visible:

• N_dark + N_bright = total number of non-background pixels that were detected

• P_dark = N_dark / (N_dark + N_bright)

• If the scene is poorly-lit, the background image will be quite dark, and it will be difficult to detect any pixel with a dark surface color. Under this condition N_dark will be small, and hence P_dark will be small (a sketch of the resulting test follows)
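A sketch of the lighting test these measures imply, assuming a hypothetical cutoff on P_dark (the slides only say that P_dark is small in poorly-lit scenes, not where PVS draws the line).

```python
def assess_lighting(n_dark, n_bright, p_dark_min=0.2):
    """Classify scene lighting from counts of dark and bright non-background pixels.

    p_dark_min is a hypothetical cutoff, not a value from the paper.
    """
    total = n_dark + n_bright
    if total == 0:
        return "unknown"                 # nothing detected in this interval
    p_dark = n_dark / total
    return "well-lit" if p_dark >= p_dark_min else "poorly-lit"
```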

SLIDE 31

SLIDE 32

Assessing contrast

• Two typical causes of insufficient contrast: fog or raindrops

• Contrast can be measured using the energy difference measure ΔE(y)

• In low-contrast scenes that occur during the day, vehicles will usually appear as objects darker than the haze, which often appears rather bright

• In low-contrast scenes occurring at night, no dark regions will be detectable

• Therefore measure ΔE_bright and ΔE_dark (a sketch follows below)
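A sketch of how ΔE_bright and ΔE_dark might be accumulated and used, assuming they are averages of ΔE(y) over elements classified "bright" and "dark" and assuming a hypothetical minimum-contrast threshold; the slides do not give the exact decision rule.

```python
def assess_contrast(dE_values, labels, min_contrast=1.0):
    """Decide whether scene contrast is sufficient for reliable detection.

    dE_values: delta-E(y) measurements for non-background strip elements.
    labels:    matching 'bright'/'dark' classifications.
    min_contrast: hypothetical threshold, not a value from the paper.
    """
    bright = [e for e, l in zip(dE_values, labels) if l == "bright"]
    dark = [e for e, l in zip(dE_values, labels) if l == "dark"]
    dE_bright = sum(bright) / len(bright) if bright else 0.0
    dE_dark = sum(dark) / len(dark) if dark else 0.0
    # Daytime haze: vehicles show up as dark objects; at night no dark
    # regions are detectable, so either average may carry the signal.
    return "sufficient" if max(dE_bright, dE_dark) >= min_contrast else "low"
```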

SLIDE 33

SLIDE 34

Assessing shadow presence

• Scenes that are well-lit can be decomposed into two sub-classes:

  • Shadows

  • Non-shadows

• The contrast of a "bright" portion of a vehicle against the road surface would be less than that of a "dark" portion

SLIDE 35

SLIDE 36

Assessing shadow presence

• Using k4 = 1.2, this method has been found to work well (a sketch of the test follows below)

• Sometimes, when there are very faint shadows, it classifies the scene as having no shadows

• It fails when the background is not a road

• For example, in some scenes the camera views the road primarily from the side, and the vehicles occlude either objects (e.g. trees) or the sky as they move across the scene
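The slides give only the constant k4 = 1.2, not the form of the test; a plausible sketch, assuming it compares the average contrast of dark non-background elements against k4 times that of bright ones (consistent with the observation that bright vehicle portions contrast less with the road than dark portions and shadows do):

```python
def assess_shadows(dE_bright, dE_dark, k4=1.2):
    """Guess whether a well-lit scene contains vehicle shadows.

    dE_bright / dE_dark: average contrast of bright / dark non-background
    strip elements. Only k4 = 1.2 comes from the slides; the comparison
    itself is an assumption.
    """
    return "shadows" if dE_dark > k4 * dE_bright else "no shadows"
```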
SLIDE 37

Illumination Assessment module

• The three methods for assessing lighting, contrast, and shadows are applied sequentially (a rough sketch follows below)
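A rough, self-contained sketch of how the sequential application might look; all names and thresholds other than k4 = 1.2 are illustrative assumptions, not the paper's implementation.

```python
def assess_illumination(p_dark, dE_bright, dE_dark,
                        p_dark_min=0.2, contrast_min=1.0, k4=1.2):
    """Apply the lighting, contrast, and shadow tests in sequence.

    Returns a scene description that the detector can use to choose between
    the three fragment-detection techniques and their parameter sets.
    """
    lighting = "well-lit" if p_dark >= p_dark_min else "poorly-lit"
    contrast = "sufficient" if max(dE_bright, dE_dark) >= contrast_min else "low"
    shadows = "n/a"
    if lighting == "well-lit" and contrast == "sufficient":
        shadows = "shadows" if dE_dark > k4 * dE_bright else "no shadows"
    return {"lighting": lighting, "contrast": contrast, "shadows": shadows}
```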

SLIDE 38

Illumination Assessment module

SLIDE 39

Conclusions

• The 2D -> 1D transformation used in the strip representation is not clearly explained

• In strip classification, the global offset "o" is said to be measured by a separate process, but the paper does not explain that process

• The paper states that the deployment results were satisfactory, but it does not provide any statistical data to support the claim

SLIDE 40

References

Wixson, L., Hanna, K., and Mishra, D., "Improved Illumination Assessment for Vision-Based Traffic Monitoring," VS '98 (Image Processing for Visual Surveillance).

Hanna, K., Wixson, L., and Mishra, D., "Illumination Assessment for Vision-Based Traffic Monitoring," ICPR '96: Proceedings of the International Conference on Pattern Recognition, 1996.

Ferrier, N.J., Rowe, S.M., and Blake, A., "Real-Time Traffic Monitoring," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pages 81-88, 1994.

Kilger, M., "A Shadow Handler in a Video-based Real-time Traffic Monitoring System," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pages 11-18, 1992.

SLIDE 41

Questions?