Full Body Gestures enhancing a Game Book for Interactive Story Telling


SLIDE 1

Human Centered Multimedia • Augsburg University • www.hcm-lab.de ICIDS 2011, Vancouver

Full Body Gestures enhancing a Game Book for Interactive Story Telling

Human Centered Multimedia Institute of Computer Science Augsburg University Universitätsstr. 6a 86159 Augsburg, Germany

Felix Kistler, Dominik Sollfrank, Nikolaus Bee, and Elisabeth André

SLIDE 2

  • 1. Adaption of the Game Book Sugarcane Island for Interactive Story Telling
  • 2. Quick Time Events with Full Body Interaction

SLIDE 3

Adaption of the Game Book Sugarcane Island for Interactive Story Telling

SLIDE 4

Adaption of Sugarcane Island

Book → Game

SLIDE 5

Game Book?

  • Book with a non-linear story
  • Decision after each page on how to proceed → continue on a different page

SLIDE 6

Implementation of the Story

  • Story reduced to about one-third

SLIDE 7

Implementation of the Story

  • Virtual character narrates the story of each book page
  • Adapting background image and background sound
  • Adapting facial expressions of the narrator and additional “action sounds”
  • After each text passage the character asks for a decision on how to proceed
  • 2-player scenario

SLIDE 8

Implementation of the Decisions

  • Two versions of the decision prompt:
    1. Indicated mode: the users speak out one of the keywords displayed on screen
    2. Freestyle mode: the on-screen prompt is omitted → users have to figure out the answers for themselves by listening to the story
  • Speech input idealized in a Wizard-of-Oz design → no influence of any difficulties in speech processing
  • Evaluation with 26 participants in groups of 2

SLIDE 9

Results of the Questionnaire

  • Users rated the indicated mode significantly better than the freestyle mode in terms of usability
  • All other items (satisfaction, effectance, suspense, and curiosity) were not affected by the two modes → freestyle mode more difficult, but the same fun!

[Chart: ratings from 1 (strongly disagree) to 4 (strongly agree) for indicated vs. freestyle mode on the items: easy to use*, quickly learnable*, inconvenient to use*]

*significant difference with p < 0.01

SLIDE 10

Results of the Video Recordings

  • Video recordings show that freestyle mode had significantly (p < 0.001)
    …more and longer gaze between the participants
    …more and longer speech utterances between the participants
    …a longer overall duration for one run
  • → More user interaction causing longer playtime in freestyle mode

[Chart: avg time in sec (5 to 40) for indicated vs. freestyle mode: gaze count, gaze duration, speech count, speech duration, overall duration (×0.1)]

SLIDE 11

Quick Time Events with Full Body Interaction

SLIDE 12

Full Body Interaction? Kinect!

  • Microsoft Kinect includes an easily affordable depth sensor (150 CDN$)
  • It enabled software for real-time full body tracking
  • Interaction without any handheld devices: „You are the controller“

SLIDE 13

Full Body Interaction?

  • Avatar control
    – An avatar mirrors the movements and postures of the user
  • GUI interaction
    – The user controls a graphical user interface with body movements (e.g. controlling a cursor by moving one hand in the air)
  • Gesture recognition
    – Interpretation of the user’s motions → recognition of full body postures and gestures of the user

SLIDE 14

Kinect Games

  • Current Kinect games:
    – sport and fitness games (e.g. Kinect Sports)
    – racing games (e.g. Kinect Joy Ride)
    – party and puzzle games (e.g. Game Party in Motion)
  • Motion or gesture interaction, but no focus on a story

SLIDE 15

Full Body Interaction in Interactive Story Telling

  • Various approaches for innovative interaction modalities in interactive storytelling, but none with full body tracking yet
  • Frequent use of Quick Time Events (QTEs) in current video games, e.g. Fahrenheit (Indigo Prophecy) or Heavy Rain
  • QTE occurs → a symbol representing a specific action on the control device appears on screen → limited time to perform that action
  • → Application of full body interaction as QTEs adding new story transitions
  • 2-player scenario as in most Kinect games

SLIDE 16

Quick Time Events?

  • On-screen prompt?

SLIDE 17

Full Body Quick Time Events!

  • Full body gesture symbols
  • 7 different actions
  • 2 players → 2 symbols at once, positioned left and right
  • Countdown centered on top representing the time that is left
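The QTE flow on this slide (two action symbols, one per player, with a shared countdown) can be sketched as follows. This is a minimal illustration: the class, the action names, and the time limit are assumptions, not the authors' actual implementation or FUBI's API.

```python
import time

class QuickTimeEvent:
    """Two-player QTE: each player gets a gesture symbol, both share a countdown."""

    def __init__(self, left_action, right_action, time_limit=5.0):
        self.actions = {"left": left_action, "right": right_action}
        self.done = {"left": False, "right": False}
        self.time_limit = time_limit
        self.start = time.monotonic()

    def remaining(self):
        """Seconds left on the centered countdown (clamped at 0)."""
        return max(0.0, self.time_limit - (time.monotonic() - self.start))

    def report(self, side, action):
        """Mark a player's slot as done if the performed action matches in time."""
        if self.remaining() > 0 and action == self.actions[side]:
            self.done[side] = True

    def succeeded(self):
        """The event succeeds only when both players performed their action."""
        return all(self.done.values())
```

In this sketch the story engine would poll `remaining()` to drive the on-screen countdown and check `succeeded()` when it reaches zero to pick the next story transition.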

SLIDE 18

Full Body Gestures

SLIDE 19

Full Body Gestures

SLIDE 20

Posture and Gesture Recognition

  • Framework “FUBI” available under http://hcm-lab.de/fubi.html
  • Recognition of gestures and postures divided into four categories:
    1) Static postures
    2) Linear movements
    3) Combinations of postures and linear movements
    4) Complex gestures

SLIDE 21

Posture and Gesture Recognition

  • 1. Static posture: Configuration of several joints, no movement
  • 2. Linear movement: Linear movement of several joints with specific direction and speed

Examples: posture “arms crossed”; linear movement “right hand moves right”
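The two basic recognizer types can be sketched as simple predicates over tracked joint positions. The joint dictionary layout, the axis convention (x grows to the user's right), and the thresholds below are illustrative assumptions, not FUBI's actual data model.

```python
def arms_crossed(joints):
    """Static posture check: each hand lies on the opposite side of the torso center."""
    cx = joints["torso"][0]
    return joints["left_hand"][0] > cx and joints["right_hand"][0] < cx

def right_hand_moves_right(prev, curr, dt, min_speed=0.5):
    """Linear movement check: the right hand travels rightward at >= min_speed (m/s)
    between two tracking frames taken dt seconds apart."""
    vx = (curr["right_hand"][0] - prev["right_hand"][0]) / dt
    return vx >= min_speed
```

A static posture only inspects one frame; a linear movement compares consecutive frames, so it additionally constrains direction and speed.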

SLIDE 22

Posture and Gesture Recognition

  • 3. Combination of postures and linear movements: Static postures and linear movements combined with specific time constraints

Example: 3 × (“right hand moves left” + “over shoulder”, then “right hand moves right” + “over shoulder”) = user waves the right hand over shoulder

  • max. duration per movement: 600 ms
  • max. transition between movements: 200 ms
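Combination recognition of this kind can be sketched as a small state machine that consumes stroke events and enforces the timing limits from the slide (600 ms per movement, 200 ms transition). The event interface and class below are an assumed illustration, not FUBI's actual recognizer definition.

```python
class WaveRecognizer:
    """Sketch: a wave is 3 left/right pairs (6 alternating strokes) of the
    right hand held over the shoulder, within the per-stroke time limits."""

    def __init__(self, strokes_needed=6, max_stroke=0.6, max_transition=0.2):
        self.strokes_needed = strokes_needed
        self.max_stroke = max_stroke          # 600 ms per movement
        self.max_transition = max_transition  # 200 ms between movements
        self.reset()

    def reset(self):
        self.count = 0
        self.expect = "left"   # the first stroke moves the hand left
        self.last_time = None

    def feed(self, direction, hand_over_shoulder, t):
        """Feed one finished stroke (direction, posture flag, end time in s);
        returns True once the full wave is recognized."""
        if not hand_over_shoulder:
            self.reset()       # posture constraint violated: start over
            return False
        # too much time since the previous stroke: start over
        if self.last_time is not None and \
                t - self.last_time > self.max_stroke + self.max_transition:
            self.reset()
        if direction == self.expect:
            self.count += 1
            self.expect = "right" if self.expect == "left" else "left"
            self.last_time = t
        else:
            self.reset()       # wrong alternation: start over
        if self.count >= self.strokes_needed:
            self.reset()
            return True
        return False
```

Resetting on any violated constraint is what distinguishes this category from plain linear movements: the whole sequence must satisfy the ordering and the timing together.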

SLIDE 23

Posture and Gesture Recognition

  • 4. Complex gestures: Detailed observation of one (or more) joints over a certain amount of time
    – The sequence of 3D coordinates is processed by a recognition algorithm, e.g. the $1 recognizer
    – Main problem: determining the start and end of a gesture (separation of gestures)
    – Not used in this application

SLIDE 24

Second type of Quick Time Events

  • Instead of performing a gesture, the users have to select randomly positioned on-screen buttons that contain the text of the required action
  • The cursor is controlled with Kinect by moving one hand in the air
  • Selection occurs by hovering over a button for 1.5 seconds or by performing a push gesture
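The hover-to-select mechanic can be sketched as a dwell timer per button: selection fires once the hand cursor has stayed inside the button's bounds for 1.5 seconds. The class and coordinate convention are assumptions for illustration.

```python
class HoverButton:
    """On-screen button selected by dwelling the hand cursor on it."""

    def __init__(self, rect, dwell=1.5):
        self.rect = rect          # (x, y, width, height) in screen coordinates
        self.dwell = dwell        # required hover time in seconds
        self.enter_time = None    # when the cursor entered the button, if inside

    def contains(self, x, y):
        bx, by, w, h = self.rect
        return bx <= x <= bx + w and by <= y <= by + h

    def update(self, x, y, t):
        """Feed the cursor position at time t; returns True on selection."""
        if self.contains(x, y):
            if self.enter_time is None:
                self.enter_time = t             # cursor just entered
            elif t - self.enter_time >= self.dwell:
                self.enter_time = None          # fire once, then rearm
                return True
        else:
            self.enter_time = None              # leaving resets the dwell timer
        return False
```

A push gesture, as mentioned above, would simply be a second trigger path: if a push is recognized while the cursor is inside the button, select immediately without waiting out the dwell time.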

SLIDE 25

Evaluation of Quick Time Events

  • 18 participants ran through both conditions (gesture mode and button mode) in groups of 2
  • Both modes worked well in terms of recognition
    – 93% successful actions in the button-based mode (67 out of 72)
    – 97% in the gesture-based mode (65 out of 67)
  • Gesture mode was rated significantly better in terms of usability and game experience

SLIDE 26

Evaluation of Quick Time Events

[Chart: ratings from 1 (strongly disagree) to 4 (strongly agree) for gesture mode vs. button mode on the items: stared with expectation at the screen*, interaction was fun**, expected more usability***, pleased about execution*, inconvenient to use**, quickly learnable*, easy to use*]

Significance: * p < 0.05; ** p < 0.01; *** p < 0.001

SLIDE 27

Conclusions

  • Game books can easily be used to get a ready-to-use story for IDS
  • It can make sense to hide information from the users in the interface (but you have to be careful)
  • Full body gestures that map directly to actions in the story are feasible and offer an engaging way of interaction

SLIDE 28

Thank you for your attention! Questions? http://hcm-lab.de/fubi.html
