Scalable View-Dependent Progressive Mesh Streaming

  1. Scalable View-Dependent Progressive Mesh Streaming. Wei Tsang Ooi, National University of Singapore.

  2. Joint work with Cheng Wei, National University of Singapore.


  6. 10 MB

  7. 2 GB


  9. Hoppe’s Progressive Mesh: edge collapse and vertex split.

  10. At the sender: complete mesh = base model + v1 + v2 + v3 + v4 + ... + vk (a base model plus an ordered sequence of vertex splits).

  11. Transmission: the base model, then v1, v2, v3, v4, ..., vk, sent over TCP or UDP.

  12. At the receiver: the base model arrives first and is refined as v1, v2, v3, ..., vk arrive.

  13. Vertex split: a vertex v splits into two vertices v1 and v2.
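
  The vertex-split record and its application at the receiver can be sketched in a few lines of Python. This is a minimal illustration, assuming an indexed triangle mesh; the field names (parent, position, new_faces) are stand-ins, not Hoppe's exact record layout.

      from dataclasses import dataclass

      @dataclass
      class VertexSplit:
          parent: int      # index of the vertex v being split (it remains as v1)
          position: tuple  # 3D position of the new vertex v2
          new_faces: list  # triangles (vertex-index triples) created by the split

      @dataclass
      class Mesh:
          vertices: list   # 3D positions
          faces: list      # vertex-index triples

          def apply_split(self, s: VertexSplit) -> int:
              """Refine the mesh in place; return the index of the new vertex v2."""
              self.vertices.append(s.position)
              self.faces.extend(s.new_faces)
              return len(self.vertices) - 1

  Applying the received splits in stream order reproduces the sender's refinement sequence from slides 10-12.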

  14. Base mesh.


  18. Complete mesh.

  19. View-dependent streaming: only send what the receiver can see.


  22. What to send? In what order?

  23. What to send? Determined by the viewpoint. In what order? Determined by visual contributions.

  24. Existing approach: each receiver sends its viewpoint to the server.

  25. The server uses the viewpoint to decide what to split and how to split.

  26. For each receiver, the server needs to:
      • compute visibility
      • compute the visual contribution of each vertex split
      • sort the vertex splits
      • remember what has been sent

  27. “Dumb client, smart server” does not scale.

  28. Receiver-driven approach: the receiver decides what to split; the server supplies how to split.

  29. How to identify a vertex split?

  30. Attempt 1: number the vertices sequentially (0, 1, 2, ..., 8).

  31. Receiver: “I want to split vertex 2.” Sender: “Here is how to split; vertex 2 splits into vertices 6 and 7.” The new IDs depend on the order in which splits are applied, so sender and receiver must stay in lockstep.

  32. Attempt 2: binary vertex IDs forming a tree: 0 and 1 at the root level; splitting a vertex with ID b yields children b0 and b1 (00, 01, 10, 11; then 000, 001, ..., 111). [Kim and Lee, “Truly selective refinement of progressive meshes,” Proceedings of Graphics Interface, pages 101-110, June 2001]

  33. Receiver: “I want to split vertex 00.” Sender: “Here is how to split 00”; 00 splits into 000 and 001.
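
  With this scheme, a vertex's children IDs follow from its own ID alone, so they do not depend on the order in which splits happen. A small Python sketch; VNode and find_vertex are illustrative helpers, not Kim and Lee's implementation.

      class VNode:
          def __init__(self):
              self.children = []   # [left, right], filled in once this vertex splits

      def child_ids(vid: str):
          """When vertex `vid` splits, its children are vid+'0' and vid+'1'."""
          return vid + "0", vid + "1"

      def find_vertex(roots: dict, vid: str):
          """An ID doubles as a path: the first bit selects a root vertex,
          and each subsequent bit selects a child."""
          node = roots[vid[0]]
          for bit in vid[1:]:
              node = node.children[int(bit)]
          return node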

  34. Receiver-driven approach: with unique IDs, the receiver can tell the sender what to split, and the sender replies with how to split.

  35. Encoding of vertex split IDs: the requested IDs (e.g., 000, 001, 10, 110) mark nodes in the binary ID tree, so the whole request can be encoded by traversing the tree.

  36. proc encode(T):
          if no vertices to be split in T:
              return "0"
          else:
              return "1" + encode(T.left) + encode(T.right)
      (where + is bit-string concatenation)

  37. Encoding of vertex split IDs: running encode over the example trees rooted at 0 and 1 yields the bit strings 10011000 and 11001000.
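
  A runnable version of the encode procedure from slide 36, assuming the ID tree is stored as nested nodes with a flag marking the vertices to be split (the Node class and its field names are illustrative):

      class Node:
          def __init__(self, split=False, left=None, right=None):
              self.split = split   # True if this vertex is requested for splitting
              self.left = left
              self.right = right

          def has_split(self):
              """True if this subtree contains any vertex to be split."""
              if self.split:
                  return True
              return any(c.has_split() for c in (self.left, self.right) if c)

      def encode(t):
          # A subtree with nothing to split encodes as "0"; otherwise
          # emit "1" and recurse into both children.
          if t is None or not t.has_split():
              return "0"
          return "1" + encode(t.left) + encode(t.right)

      # Example: request splits of 000 and 001 under root vertex 0.
      root0 = Node(left=Node(left=Node(split=True), right=Node(split=True)),
                   right=Node())
      print(encode(root0))   # -> "111001000"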

  38. How to compute visibility and visual contributions without possessing the complete mesh?

  39. Estimate them with the screen-space area of vertices v1 and v2.
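
  One way to realize this estimate is to treat each vertex as a small sphere bounding the surface region it represents and to use the sphere's projected area as the split's visual contribution. A minimal sketch; the perspective model (area ≈ π(f·r/d)²) and the parameter names are assumptions, not the authors' exact formula.

      import math

      def screen_space_area(vertex_pos, radius, eye_pos, focal=1.0):
          """Approximate on-screen area of a vertex's bounding sphere."""
          d = math.dist(vertex_pos, eye_pos)   # distance to the viewer
          if d <= radius:                      # viewer inside the sphere:
              return float("inf")              # refine with top priority
          r_proj = focal * radius / d          # perspective scaling
          return math.pi * r_proj ** 2

  Splits whose vertices project to larger areas contribute more visually and are requested earlier.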

  40. Processing cost at the server, per operation (sender-driven vs. receiver-driven):

      operation              sender-driven   receiver-driven
      send base mesh         1.4             1.13
      decode IDs             -               1.55
      search vertex split    1.85            1.85
      determine visibility   0.41            -
      update state           1.41            -
      encode IDs             0.94            -
      others                 0.16            0.16
      total                  6.17            4.69


  42. The receiver-driven protocol alleviates the computational bottleneck at the sender.

  43. The other bottleneck is bandwidth.

  44. Goal: reduce server overhead by retrieving vertex splits from other clients if possible.

  45. Difficulty: a client must quickly and efficiently determine whom to retrieve the vertex splits from.

  46. Requirements: low server overhead, low response time, low message overhead.

  47. Common P2P techniques: (1) build an overlay and push; (2) use a DHT to search for chunks; (3) pull based on chunk availability.

  49. Peer-to-peer file transfer: a needed chunk is likely to be available from any peer.

  50. Peer-to-peer video streaming: a needed chunk is likely available from a peer that has watched the same segment earlier (temporal locality).

  51. Peer-to-peer mesh streaming: a needed chunk is likely available from a peer that is viewing the same region (spatial locality).

  52. Idea: exploit spatial locality to reduce message overhead.

  53. Chunks.

  54. Chunks (1 chunk = 240 vertex splits).


  56. Groups (1 group = 16 chunks).
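
  With these parameters, a vertex split's chunk and a chunk's group follow directly from stream position. A sketch; the constants come from slides 54 and 56, while numbering splits sequentially within a mesh is an assumption.

      SPLITS_PER_CHUNK = 240   # slide 54
      CHUNKS_PER_GROUP = 16    # slide 56

      def chunk_of(split_index: int) -> int:
          return split_index // SPLITS_PER_CHUNK

      def group_of(chunk_index: int) -> int:
          return chunk_index // CHUNKS_PER_GROUP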

  57. Only exchange messages between peers that need chunks from the same group.

  58. How the protocol works.

  59. The server maintains, for each group, a list of group members and which chunks each member possesses, e.g. (128.3.13.44, 100100), (123.44.121.99, 111111), ..., (90.1.1.00, 0001001), (32.11.99.233, 101111), ... (each bit vector marks the chunks that peer holds).
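
  That state can be pictured as a plain mapping from group to member list; the structure and names below are illustrative, not the paper's implementation, and reading the bit vector as one bit per chunk is an assumption.

      from collections import defaultdict

      # group id -> list of (peer address, chunk bit vector)
      group_table = defaultdict(list)

      def register_chunk(group: int, peer: str, chunk_bit: int) -> None:
          """Record that `peer` now holds the chunk at position `chunk_bit`."""
          for i, (addr, bits) in enumerate(group_table[group]):
              if addr == peer:
                  group_table[group][i] = (addr, bits | (1 << chunk_bit))
                  return
          group_table[group].append((peer, 1 << chunk_bit))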

  60. Client: “I want to view mesh M.” The server sends: (i) the base mesh, (ii) the group members of the highest group, (iii) what each member possesses.

  61. The client decides which chunk of vertex splits to refine with; if some peer has that chunk, it requests it from the peer, else it requests the chunk from the server.

  62. Peers inform the server when they receive a chunk; once a chunk in the next group can be decoded, the server sends the group members of the next group.
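
  The request rule of slides 60-62 reduces to a small lookup over the membership list the server sent; choose_source and the bit-vector reading are illustrative assumptions, not the protocol's wire format.

      def choose_source(chunk_bit: int, peers):
          """Return a peer that holds the chunk, or None to fall back
          to the server (the decision on slide 61)."""
          for addr, bits in peers:
              if bits & (1 << chunk_bit):
                  return addr
          return None

      # Example with entries like slide 59's (address, bit vector) pairs:
      peers = [("128.3.13.44", 0b100100), ("123.44.121.99", 0b111111)]
      print(choose_source(5, peers))   # -> "128.3.13.44" (its bit 5 is set)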

  63. Groups.

  64. If there are too many group members, the server sends only the most recent subset plus some seeds.

  65. Ongoing work: (1) evaluation using user traces and a simulator; (2) other design parameters; (3) further reducing the role of the server.

  66. Summary: a receiver-driven design to reduce CPU cost; a peer-to-peer design to reduce bandwidth cost.

  67. Thank you.
