

  1. Full Body Gestures enhancing a Game Book for Interactive Story Telling
  Felix Kistler, Dominik Sollfrank, Nikolaus Bee, and Elisabeth André
  Human Centered Multimedia, Institute of Computer Science, Augsburg University, Universitätsstr. 6a, 86159 Augsburg, Germany
  ICIDS 2011, Vancouver
  Human Centered Multimedia • Augsburg University • www.hcm-lab.de

  2. Overview
  1. Adaptation of the Game Book Sugarcane Island for Interactive Story Telling
  2. Quick Time Events with Full Body Interaction

  3. Adaptation of the Game Book Sugarcane Island for Interactive Story Telling

  4. Adaptation of the Sugarcane Island Game Book

  5. Game Book?
  • Book with a non-linear story
  • Decision after each page how to proceed → continue on a different page
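The page-plus-decision structure described above can be modeled as a small branching graph. A minimal sketch (page texts and decision keywords are invented placeholders, not the actual Sugarcane Island content):

```python
# Sketch of a game-book structure: each page has narration text and a set
# of decisions, each decision keyword leading to a different page.
# All page contents below are invented placeholders.

class Page:
    def __init__(self, text, choices=None):
        self.text = text              # narration for this page
        self.choices = choices or {}  # decision keyword -> next page id

book = {
    1: Page("You wake up on a beach.", {"explore": 2, "wait": 3}),
    2: Page("You find a path into the jungle.", {"follow": 3}),
    3: Page("The end."),
}

def play(book, start, decisions):
    """Walk through the book following a list of decision keywords."""
    page = book[start]
    visited = [start]
    for word in decisions:
        next_id = page.choices[word]
        visited.append(next_id)
        page = book[next_id]
    return visited
```

A run such as `play(book, 1, ["explore", "follow"])` visits pages 1, 2, 3, mirroring the "continue on a different page" mechanic.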

  6. Implementation of the Story
  • Story reduced to about one-third

  7. Implementation of the Story
  • Virtual character narrates the story of each book page
  • Adapting background image and background sound
  • Adapting facial expressions of the narrator and additional “action sounds”
  • After each text passage the character asks for a decision on how to proceed
  • 2-player scenario

  8. Implementation of the Decisions
  • Two versions of the decision prompt:
  1. Indicated mode: the users speak out one of the keywords displayed on screen
  2. Freestyle mode: the on-screen prompt is omitted → users have to figure out the answers for themselves by listening to the story
  • Speech input idealized in a Wizard-of-Oz design → no influence of any difficulties in speech processing
  • Evaluation with 26 participants in groups of 2

  9. Results of the Questionnaire
  • Users rated the indicated mode significantly better than the freestyle mode in terms of usability
  • All other items (satisfaction, effectance, suspense, and curiosity) were not affected by the two modes → freestyle mode more difficult, but the same fun!
  [Chart: ratings from −4 (strongly disagree) to +4 (strongly agree) for “easy to use*”, “quickly learnable*”, and “inconvenient to use*”, indicated vs. freestyle mode; *significant difference with p < 0.01]

  10. Results of the Video Recordings
  • Video recordings show that the freestyle mode had significantly (p < 0.001)
  …more and longer gaze between the participants
  …more and longer speech utterances between the participants
  …a longer overall duration for one run
  [Chart: average time in seconds for gaze count, gaze duration, speech count, speech duration, and overall duration (×0.1), indicated vs. freestyle mode]
  → More user interaction causing longer playtime in freestyle mode

  11. Quick Time Events with Full Body Interaction

  12. Full Body Interaction? Kinect!
  • Microsoft Kinect includes an easily affordable depth sensor (ca. 150 CDN$)
  • It comes with software for real-time full body tracking
  • Interaction without any handheld devices = “You are the controller”

  13. Full Body Interaction?
  • Avatar control: an avatar mirrors the movements and postures of the user
  • GUI interaction: the user controls a graphical user interface with body movements (e.g. controlling a cursor by moving one hand in the air)
  • Gesture recognition: interpretation of the user's motions → recognition of full body postures and gestures of the user

  14. Kinect Games
  • Current Kinect games:
  – sport and fitness games (e.g. Kinect Sports)
  – racing games (e.g. Kinect Joy Ride)
  – party and puzzle games (e.g. Game Party in Motion)
  • Motion or gesture interaction, but no focus on a story

  15. Full Body Interaction in Interactive Story Telling
  • Various approaches for innovative interaction modalities in interactive storytelling, but none with full body tracking yet
  • Frequent use of Quick Time Events (QTEs) in current video games, e.g. Fahrenheit (Indigo Prophecy) or Heavy Rain
  • QTE occurs → a symbol representing a specific action on the control device appears on screen → limited time to perform that action
  • Application of full body interaction as QTEs adding new story transitions
  • 2-player scenario as in most Kinect games

  16. Quick Time Events?
  • On-screen prompt?

  17. Full Body Quick Time Events!
  • Full body gesture symbols
  • 7 different actions
  • 2 players → 2 symbols at once, positioned left and right
  • Countdown centered on top for representing the time that is left
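A full-body QTE of this kind reduces to a timed check per player: show each player a gesture symbol, run the countdown, and record success if the required gesture is recognized before time runs out. A hedged sketch under that assumption (player names, gesture labels, and the event format are illustrative, not the authors' implementation):

```python
def run_qte(required, performed, time_limit):
    """Evaluate one two-player quick time event.

    required:  {player: gesture symbol shown to that player}
    performed: {player: (recognized gesture, time in seconds)}
    A player succeeds if the right gesture arrives before the countdown ends.
    """
    results = {}
    for player, gesture in required.items():
        done = performed.get(player)
        results[player] = (done is not None
                           and done[0] == gesture
                           and done[1] <= time_limit)
    return results
```

For example, with a 3-second countdown, a correct gesture at 2.1 s succeeds while the same gesture at 4.0 s fails, which would trigger the corresponding story transition for each player.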

  18. Full Body Gestures

  19. Full Body Gestures

  20. Posture and Gesture Recognition
  • Framework “FUBI” available under http://hcm-lab.de/fubi.html
  • Recognition of gestures and postures divided into four categories:
  1) Static postures
  2) Linear movements
  3) Combinations of postures and linear movements
  4) Complex gestures

  21. Posture and Gesture Recognition
  1. Static posture: configuration of several joints, no movement (e.g. posture “arms crossed”)
  2. Linear movement: linear movement of several joints with specific direction and speed (e.g. “right hand moves right”)
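These two basic categories can be checked directly on tracked joint positions. A minimal sketch with invented joint names, coordinates in meters, and arbitrary thresholds (FUBI's actual recognizer interface differs):

```python
import math

def arms_crossed(joints, tol=0.15):
    """Static posture: each hand near the opposite elbow.
    joints maps an (invented) joint name to an (x, y, z) position in meters."""
    def dist(a, b):
        return math.dist(joints[a], joints[b])
    return (dist("left_hand", "right_elbow") < tol
            and dist("right_hand", "left_elbow") < tol)

def moves_right(prev, cur, dt, min_speed=0.5):
    """Linear movement: a joint's velocity points mainly along +x
    and its speed exceeds min_speed (m/s). prev/cur are (x, y, z)."""
    vx = (cur[0] - prev[0]) / dt
    vy = (cur[1] - prev[1]) / dt
    vz = (cur[2] - prev[2]) / dt
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    return speed >= min_speed and vx > 0.8 * speed
```

In practice such checks run once per tracking frame; a posture additionally has to hold for a few consecutive frames before it counts as recognized.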

  22. Posture and Gesture Recognition
  3. Combination of postures and linear movements: combination of static postures and linear movements with specific time constraints
  Example: “user waves the right hand over shoulder” = 3× (“right hand moves left over shoulder” + “right hand moves right over shoulder”), with max. duration = 600 ms per movement and max. transition = 200 ms between movements
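The waving combination can be implemented as a small state machine over already-recognized movement events, enforcing the stated time constraints (600 ms per movement, 200 ms transition). A sketch under those assumptions, with invented event names and a simplified event format:

```python
def detect_wave(events, reps=3, max_duration=0.6, max_transition=0.2):
    """events: list of (name, start_time, end_time) in seconds, as produced
    by lower-level linear-movement recognizers.
    Detects reps x (left-then-right movement over the shoulder)."""
    pattern = ["hand_left_over_shoulder", "hand_right_over_shoulder"] * reps
    idx, last_end = 0, None
    for name, start, end in events:
        if end - start > max_duration:
            continue                       # this movement was too slow
        if last_end is not None and start - last_end > max_transition:
            idx, last_end = 0, None        # gap too long: restart matching
        if name == pattern[idx]:
            idx, last_end = idx + 1, end
            if idx == len(pattern):
                return True                # full 3x left-right sequence seen
    return False
```

The design choice here mirrors the slide: the combination recognizer consumes the output of the simpler category-1/2 recognizers instead of raw joint data, so time constraints apply between whole movements.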

  23. Posture and Gesture Recognition
  4. Complex gestures: detailed observation of one (or more) joints over a certain amount of time
  → the resulting sequence of 3D coordinates is processed by some kind of recognition algorithm, e.g. the $1 recognizer
  → main problem: determining the start and end of a gesture (separation of gestures)
  → not used in this application
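Although this category is not used in the application, the core idea of a $1-style recognizer can be sketched: resample the recorded joint path to a fixed number of points, normalize position and scale, and pick the stored template with the smallest point-wise distance. This simplified sketch works on 2D paths and omits the rotation-invariance step and the segmentation problem mentioned above:

```python
import math

def resample(path, n=32):
    """Resample a 2D point path to n points evenly spaced along its length."""
    total = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    step, out, acc = total / (n - 1), [path[0]], 0.0
    pts, i = list(path), 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and len(out) < n:
            t = (step - acc) / d           # interpolate a new point on segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)               # continue from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:                    # pad in case of rounding shortfall
        out.append(pts[-1])
    return out

def normalize(path):
    """Translate the centroid to the origin and scale to unit size."""
    cx = sum(p[0] for p in path) / len(path)
    cy = sum(p[1] for p in path) / len(path)
    pts = [(x - cx, y - cy) for x, y in path]
    size = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / size, y / size) for x, y in pts]

def path_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates, n=32):
    """Return the name of the template with the smallest average distance."""
    c = normalize(resample(candidate, n))
    return min(templates, key=lambda name:
               path_distance(c, normalize(resample(templates[name], n))))
```

Even this simplified version shows why segmentation is the hard part: `recognize` assumes the candidate path already starts and ends exactly at the gesture's boundaries.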

  24. Second Type of Quick Time Events
  • Instead of performing a gesture, the users have to select randomly positioned buttons on screen that contain the text of the required action
  • The cursor is controlled with Kinect by moving one hand in the air
  • Selection occurs by hovering over the button for 1.5 seconds or by performing a push gesture
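The hover-to-select mechanism amounts to a dwell timer: a selection fires once the cursor has stayed over the same button for 1.5 seconds. A minimal sketch (button names and frame timestamps are illustrative):

```python
HOVER_TIME = 1.5  # seconds over a button before it counts as selected

class HoverSelector:
    def __init__(self, hover_time=HOVER_TIME):
        self.hover_time = hover_time
        self.current = None   # button currently under the cursor
        self.since = None     # time when the cursor entered it

    def update(self, button, now):
        """Call once per frame with the button under the cursor (or None).
        Returns the button once the dwell time is reached, else None."""
        if button != self.current:
            self.current, self.since = button, now  # entered a new button
            return None
        if button is not None and now - self.since >= self.hover_time:
            self.current, self.since = None, None   # reset after firing
            return button
        return None
```

Moving the hand off a button resets the timer, which is what makes dwell selection robust against a cursor that merely passes over the wrong button.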

  25. Evaluation of Quick Time Events
  • 18 participants ran through both conditions (gesture mode and button mode) in groups of 2
  • Both modes worked well in terms of recognition
  – 93% successful actions in the button-based mode (67 out of 72)
  – 97% in the gesture-based mode (65 out of 67)
  • Gesture mode rated significantly better in terms of usability and game experience

  26. Evaluation of Quick Time Events
  [Chart: ratings from −4 (strongly disagree) to +4 (strongly agree), gesture mode vs. button mode, for: “easy to use*”, “quickly learnable*”, “inconvenient to use**”, “pleased about execution*”, “expected more usability***”, “interaction was fun**”, “stared with expectation at the screen*”]
  Significance: * p < 0.05; ** p < 0.01; *** p < 0.001

  27. Conclusions
  • Game books can easily be used to get a ready-to-use story for interactive storytelling
  • It can make sense to hide information from the users in the interface (but you have to be careful)
  • Full body gestures that map directly to actions in the story are feasible and offer an engaging way of interaction
