The blue-c Distributed Scene Graph




  1. The blue-c Distributed Scene Graph
     Martin Naef, Edouard Lamboray, Markus Gross
     Computer Graphics Laboratory, ETH Zurich
     Oliver Staadt
     Computer Science Department, UC Davis

  2. Context: The blue-c
     • Collaborative Immersive Virtual Reality Environment
     • Provide remote collaboration features
       – Shared, synchronized virtual world
       – Render partners using 3D video streams
       – Concurrent rendering and acquisition
     http://blue-c.ethz.ch

  3. Overview
     • Motivation and Related Work
     • Design Goals
     • Scene Distribution
       – System integration
       – Object model
       – Messages
     • Locking and Consistency
     • Networking
     • Example applications
     • Conclusions

  4. Motivation
     • Support collaborative VR applications
     • Distributed manipulation of shared geometry
       – Generalized concept for a large class of applications
       – No application-specific synchronization protocols
     • Support blue-c features
       – 3D video rendering
       – Multimedia integration (2D video, audio)
       – Animation

  5. Related Work
     • Large-scale virtual environments
       – DIS/HLA, RING, DIVE, MASSIVE
     • Cluster rendering
       – OpenSG, Syzygy, WireGL/Chromium
     • Distributed scene graphs
       – Distributed Inventor, Avango, Repo3D

  6. bcDSG: Design Goals
     • Distributed, synchronized scene graph
       – Including all attributes
     • Avoid reinventing the wheel
       – Use existing technology
       – Support legacy code
     • Based on familiar tools
       – C++ programming language
       – Proven scene graph (e.g. Performer)

  7. bcDSG: Overview
     • Part of the blue-c API
     • Partitioned scene
       – Shared and local partition
     • Synchronization service
       – Traverses the shared partition once per frame
       – Generates and handles update messages
     • Networking
       – Built upon blue-c communication layer (bcl)
     [Architecture diagram: Application on top of the blue-c Core (SyncManager, ClassFactory, NodeManager, Networking), the Scene Graph with its Shared partition, and the Graphics and Audio Rendering subsystems]

  8. Object Model (Nodes)
     • Shared nodes
       – Derived from Performer nodes
     • Add interface and state using multiple inheritance
       – Keep Performer interface in place
     • Streaming interface
       – Read / write state
       – Read / write connectivity
     [Class diagram: pfNode → pfGroup → pfDCS; bcDCS inherits from both pfDCS and CBCSharedNode]
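The multiple-inheritance pattern on this slide can be illustrated with a short C++ sketch. Performer's pfDCS is replaced by an empty stand-in so the snippet compiles on its own; the members of CBCSharedNode and the stream types are assumptions for illustration, not the actual blue-c API.

```cpp
#include <cstdint>

// Stand-in for OpenGL Performer's pfDCS (dynamic coordinate system) node.
class pfDCS {};

// Minimal stand-ins for the streaming interface used to (de)serialize nodes.
struct OutStream { void writeFloat(float) {} };
struct InStream  { float readFloat() { return 0.0f; } };

// Shared-node interface and state, mixed into Performer node classes.
class CBCSharedNode {
public:
    virtual ~CBCSharedNode() = default;
    virtual void writeState(OutStream&) const = 0;  // read / write state
    virtual void readState(InStream&) = 0;
    std::uint32_t nodeId = 0;    // unique node ID
    bool stateDirty = false;     // picked up by the per-frame traversal
};

// bcDCS keeps the Performer interface in place and adds the shared interface.
class bcDCS : public pfDCS, public CBCSharedNode {
public:
    void setTrans(float x, float y, float z) {
        tx_ = x; ty_ = y; tz_ = z;
        stateDirty = true;       // mark for synchronization
    }
    void writeState(OutStream& s) const override {
        s.writeFloat(tx_); s.writeFloat(ty_); s.writeFloat(tz_);
    }
    void readState(InStream& s) override {
        tx_ = s.readFloat(); ty_ = s.readFloat(); tz_ = s.readFloat();
    }
private:
    float tx_ = 0.0f, ty_ = 0.0f, tz_ = 0.0f;
};
```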

  9. Object Model II
     • State data per node
       – Unique node ID
       – Ownership information
       – State information
         • State dirty flag and serial no.
         • Connectivity dirty flag and serial no.
         • Ownership requested
     [Bit-layout diagram (bits 31..0): Word 0 = Generation | ID; Word 1 = Owner | Connectivity Serial | State Serial]
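One possible encoding of the two per-node state words, as a C++ sketch. The field names follow the slide's diagram; the field widths and the 16-bit owner ID are assumptions.

```cpp
#include <cstdint>

// Per-node synchronization state packed into two 32-bit words (widths assumed).
struct NodeSyncState {
    // Word 0: generation counter and unique node ID.
    std::uint32_t generation  : 8;
    std::uint32_t nodeId      : 24;
    // Word 1: owner and the serial numbers of the last accepted updates.
    std::uint32_t ownerId     : 16;
    std::uint32_t connSerial  : 8;   // connectivity serial no.
    std::uint32_t stateSerial : 8;   // state serial no.
    // Dirty flags and the pending-request flag, checked once per frame.
    bool stateDirty;
    bool connDirty;
    bool ownershipRequested;
};
```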

  10. Components
      • Sync Manager
        – Scene traversal once per frame
          • Checks flags, generates and handles messages
        – Initiates and handles network connections
          • Using the blue-c communication layer
      • Node Manager
        – Maps IDs to nodes
      • Class Factory
        – Create class instances by name or type
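A minimal sketch of the class factory, which the Create Node handler uses to instantiate a node from its transmitted class name. The registration API shown here (registerClass/create) is an assumption.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Base class for shared nodes (see the node sketch above).
class SharedNodeBase {
public:
    virtual ~SharedNodeBase() = default;
};

class ClassFactory {
public:
    using Creator = std::function<std::unique_ptr<SharedNodeBase>()>;

    // Called once per node class at startup.
    void registerClass(const std::string& name, Creator create) {
        creators_[name] = std::move(create);
    }

    // Called by the Create Node message handler with the transmitted class name.
    std::unique_ptr<SharedNodeBase> create(const std::string& name) const {
        auto it = creators_.find(name);
        return it != creators_.end() ? it->second() : nullptr;
    }

private:
    std::map<std::string, Creator> creators_;
};
```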

  11. Synchronization Messages
      • Create Node
        – New ID and class name are transmitted
        – Handled by the class factory
      • Update State
        – Transmits internal node state / attributes
        – Updates ownership information
        – Serial number ensures consistency
      • Update Connectivity
        – Similar to Update State
        – Includes ID list of child nodes
      • Delete Node
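The four message types can be summarized in a small C++ sketch; the field layout below illustrates the slide's description and is not the actual wire format.

```cpp
#include <cstdint>
#include <string>
#include <vector>

enum class MsgType : std::uint8_t {
    CreateNode,          // new ID and class name, handled by the class factory
    UpdateState,         // internal node state / attributes + ownership info
    UpdateConnectivity,  // like UpdateState, plus the list of child node IDs
    DeleteNode
};

struct SyncMessage {
    MsgType                    type;
    std::uint32_t              nodeId;     // target node
    std::uint16_t              ownerId;    // ownership information
    std::uint8_t               serial;     // serial number for consistency checks
    std::string                className;  // only used by CreateNode
    std::vector<std::uint32_t> children;   // only used by UpdateConnectivity
    std::vector<std::uint8_t>  payload;    // serialized state / attributes
};
```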

  12. Locking Scheme
      • Concept: Node ownership
      • Messages
        – Request ownership
        – Pass ownership
        – Decline ownership request
      • Application intervention

  13. Locking Scheme II
      • Default behavior: Lazy locking
        – Local site may modify a node regardless of ownership status
        – Updates are only sent by the owner
        – Local changes may be overridden by the node owner

  14. Per-Frame Evaluation
      [Flowchart: Visit node → Dirty? If not, done. If dirty and the local site is the owner, send the update and clear the dirty flag. If dirty and not the owner, request ownership unless a request is already pending.]
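The flowchart translates into roughly the following per-node logic; sendUpdate and requestOwnership are hypothetical placeholders for the sync manager's message generation.

```cpp
#include <cstdint>

struct NodeFlags {
    std::uint16_t ownerId = 0;
    bool dirty = false;               // state or connectivity changed
    bool ownershipRequested = false;  // request already sent, not yet answered
};

void sendUpdate(NodeFlags&) {}        // placeholder: emit Update State / Connectivity
void requestOwnership(NodeFlags&) {}  // placeholder: emit Request Ownership

void visitNode(NodeFlags& node, std::uint16_t localSiteId) {
    if (!node.dirty)
        return;                               // not dirty: done

    if (node.ownerId == localSiteId) {
        sendUpdate(node);                     // owner: send the update
        node.dirty = false;                   // clear dirty flag
    } else if (!node.ownershipRequested) {
        requestOwnership(node);               // not owner and no pending request
        node.ownershipRequested = true;
    }
    // If a request is already pending, the dirty flag is kept and the node is
    // revisited in the next frame.
}
```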

  15. Locking Scheme III
      • Customized locking scheme
        – Application may decide not to modify a node without having ownership (strict locking)
        – Ownership may be requested at any time
        – Application may grant or decline ownership requests by providing a hook
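The hook mentioned on this slide could look like the following callback sketch; the type names and the boolean grant/decline convention are assumptions.

```cpp
#include <cstdint>
#include <functional>

struct OwnershipRequest {
    std::uint32_t nodeId;          // node the remote site wants to own
    std::uint16_t requestingSite;  // site that sent the request
};

// Returns true to pass ownership, false to decline the request.
using OwnershipHook = std::function<bool(const OwnershipRequest&)>;

// Example: under strict locking, a chess application could decline requests
// for a piece while the local user is still moving it (illustrative only).
OwnershipHook makeChessHook(const bool& pieceInMotion) {
    return [&pieceInMotion](const OwnershipRequest&) {
        return !pieceInMotion;
    };
}
```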

  16. Consistency
      • Network assumptions
        – Reliable transmission
        – Guaranteed ordering between two machines
        – No global ordering
      • Conflicting updates
        – Only the owner sends updates
        – Serial number scheme
          • Outdated update messages are discarded
      • Delayed processing of updates
        – Avoids global ordering requirement
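A possible reading of the serial-number scheme, sketched in C++: an incoming update is applied only if its serial is newer than the one stored in the node. The wrap-around handling is an assumption; the slide only names the scheme.

```cpp
#include <cstdint>

// Treat the 8-bit serial as a circular counter: an incoming serial is "newer"
// if it lies at most half the range ahead of the current one.
bool isNewerSerial(std::uint8_t incoming, std::uint8_t current) {
    std::uint8_t diff = static_cast<std::uint8_t>(incoming - current);
    return diff != 0 && diff < 128;
}

// An Update State message is applied only when isNewerSerial(msg.serial,
// node.stateSerial) holds; outdated messages are simply discarded.
```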

  17. Networking
      • Connection management
      • Reliable transport and multicast
      • ID management
      • Time service
      • Based on CORBA: ACE/TAO
      [Diagram: two sites A and B connected through CORBA services (NamingService, TimeService, NodeIDService, Connection Management) and UDP Multicast]

  18. Example Application
      • Collaborative Painter
        – Place quads anywhere in 3D space
        – Paint on textures
        – Geometry and textures are synchronized

  19. Video: Painter

  20. Example Application
      • Distributed Chess
        – Static background geometry
        – Shared chessboard and pieces
        – Strict locking for chess pieces
        – No chess application logic

  21. Video: Chess

  22. Conclusions
      • Distributed scene graph
        – No changes in the Performer API
        – Complete synchronization including vertex and texture data
        – Relaxed locking scheme, customizable

  23. Future Work
      • Enable late-joining sites
      • Finer granularity for updates
      • Completely transparent operation
        – Automatic setting of flags
      • Network congestion control
      • Applications

  24. Questions?
      http://blue-c.ethz.ch
