Scalable View-Dependent Progressive Mesh Streaming

SLIDE 1

Scalable View-Dependent Progressive Mesh Streaming

WEI TSANG OOI

National University of Singapore

黃瑋璨

新加坡國立大學

SLIDE 2

joint work with

Cheng Wei

National University of Singapore

SLIDE 6

10 MB

SLIDE 7

2 GB

SLIDE 9

Hoppe’s Progressive Mesh

Edge Collapse ↔ Vertex Split

SLIDE 10

At the sender: the mesh is decomposed into a base model plus an ordered sequence of vertex splits, so that base model + v1 + v2 + v3 + v4 + ... + vk = the complete mesh.
SLIDE 11

Transmission: the base model and the vertex splits v1 v2 v3 v4 ... vk are streamed to the receiver over TCP or UDP.
SLIDE 12

At the receiver: the base model is rendered first, and arriving vertex splits v1 v2 v3 v4 ... vk progressively refine it.
SLIDE 13

Vertex Split: a vertex v splits into two vertices v1 and v2 (adding the faces between them).
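As a concrete sketch of what a vertex-split record might carry, here is a minimal Python rendering (not the paper's code; field and function names are assumed, and a real progressive mesh also re-points some existing faces from v onto v2, which is omitted here):

    from dataclasses import dataclass

    @dataclass
    class VertexSplit:
        parent: int         # index of the vertex v being split (it becomes v1)
        parent_pos: tuple   # updated position of v1
        new_pos: tuple      # position of the added vertex v2
        new_faces: list     # triangles (vertex-index triples) to add

    def apply_split(vertices, faces, s):
        # Refine the mesh in place; return the index of the new vertex v2.
        vertices[s.parent] = s.parent_pos
        vertices.append(s.new_pos)
        faces.extend(s.new_faces)
        return len(vertices) - 1

The receiver simply applies such records in the order they arrive; an edge collapse undoes the same record.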
SLIDE 14

base mesh

SLIDE 15-17

[screenshots: the mesh after successive vertex splits]

SLIDE 18

complete mesh
SLIDE 19

view-dependent streaming:

  • only send what the receiver can see

SLIDE 22

what to send? in what order?

SLIDE 23

what to send? determined by the viewpoint
in what order? determined by visual contributions

SLIDE 24

Existing Approach

[figure: the server takes the receiver's viewpoint and computes what to split and how to split]
SLIDE 25

Existing Approach

[figure: with many receivers, the server must process every receiver's viewpoint to compute what to split and how to split]
SLIDE 26

For each receiver, server needs to:

  • compute visibility
  • compute visual contribution of each vertex split
  • sort vertex splits
  • remember what has been sent
SLIDE 27

“dumb client, smart server” does not scale

SLIDE 28

Receiver-driven Approach

[figure: what to split and how to split, now driven by the receiver]
SLIDE 29

how to identify a vertex split?

SLIDE 30

Attempt 1: number the vertices sequentially

[figure: a mesh whose vertices are labeled 1 through 8]
SLIDE 31

[figure: vertex 2 splits into new vertices 6 and 7]

Receiver: "I want to split vertex 2." Sender: "Here is how to split, and 2 splits into 6 and 7." The receiver cannot name the newly created vertices on its own.
SLIDE 32

Attempt 2: hierarchical binary IDs

[figure: the vertex hierarchy as a binary tree of IDs; a vertex with ID b, e.g. 00, has children b0 and b1, i.e. 000 and 001]

Kim, Lee, “Truly selective refinement of progressive meshes,” in Proceedings of Graphics Interface, pages 101–110, June 2001
SLIDE 33

[figure: vertex 00 and its children 000 and 001]

Receiver: "I want to split vertex 00." Sender: "Here is how to split 00." The new vertices are implicitly named 000 and 001, so no extra ID bookkeeping is needed.
SLIDE 34

Receiver-driven Approach

[figure: what to split and how to split, revisited with vertex-split IDs]
SLIDE 35

Encoding of vertex split IDs

[figure: the ID tree with the requested splits 001, 000, 10, 110 marked]
SLIDE 36

proc encode(T)
    if no vertices to be split in T
        return "0"
    else
        return "1" + encode(T.left) + encode(T.right)

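A runnable Python rendering of this traversal (my sketch, not the authors' code; it assumes a vertex with ID b has children b0 and b1, that requested splits are always leaf IDs, and that the root is the empty prefix, so the exact bits differ from the slide's labeling):

    def encode(prefix, wanted):
        # "0": no requested split in this subtree (recursion stops here).
        if not any(w.startswith(prefix) for w in wanted):
            return "0"
        # "1": the subtree holds a requested split; the children's IDs
        # append one bit to the parent's ID.
        return "1" + encode(prefix + "0", wanted) + encode(prefix + "1", wanted)

    def decode(bits):
        # Inverse preorder walk: the requested splits are exactly the
        # nodes marked "1" whose two children are both "0".
        def walk(i, prefix):
            if bits[i] == "0":
                return i + 1, set()
            i, left = walk(i + 1, prefix + "0")
            i, right = walk(i, prefix + "1")
            return i, (left | right) or {prefix}
        return walk(0, "")[1]

    # Round trip: decode(encode("", {"000", "001", "10", "110"}))
    # yields {"000", "001", "10", "110"} again.

Encoding a whole batch of requested IDs as one bit string is what makes receiver-driven requests cheap to transmit.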
SLIDE 37

[figure: example traversals of the marked tree; the resulting bit strings include 11001000 and 10011000]
SLIDE 38

how to compute visibility + visual contributions? (without possessing the complete mesh?)

SLIDE 39

Estimate them with the screen-space area of vertices.

[figure: the projected areas of vertices v1 and v2]
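The slide does not spell out the estimator, so here is one common approximation as a hedged sketch (all parameter names are mine): treat each vertex as a small sphere and project its radius into pixels.

    import math

    def screen_space_area(vertex_pos, radius, eye_pos, focal_px):
        # Approximate pixel area of a vertex modeled as a sphere of
        # `radius` world units, seen from `eye_pos` by a camera whose
        # focal length is `focal_px` pixels.
        z = math.dist(eye_pos, vertex_pos)   # distance to the viewpoint
        if z <= radius:
            return math.inf                  # viewer is inside the sphere
        r_px = focal_px * radius / z         # perspective projection
        return math.pi * r_px * r_px

Splits whose vertices cover more pixels (v1 versus v2 in the figure) contribute more visually and are requested first.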
SLIDE 40

Operation              Sender-driven   Receiver-driven
send base mesh         1.4             1.13
decode IDs             -               1.55
search vertex split    1.85            1.85
determine visibility   0.41            -
update state           1.41            -
encode IDs             0.94            -
others                 0.16            0.16
total                  6.17            4.69
SLIDE 42

receiver-driven protocol alleviates the computational bottleneck at the sender.

SLIDE 43

the other bottleneck is bandwidth.

SLIDE 44

goal: reduce server overhead by retrieving vertex splits from other clients if possible

SLIDE 45

difficulty: need to quickly and efficiently determine whom to retrieve the vertex splits from

SLIDE 46

  • low server overhead
  • low response time
  • low message overhead

SLIDE 47

common P2P techniques:

  • 1. build an overlay and push
  • 2. use DHT to search for chunks
  • 3. pull based on chunk availability

SLIDE 49

peer-to-peer file transfer: a needed chunk is likely to be available at any peer

SLIDE 50

peer-to-peer video streaming: a needed chunk is likely available from a peer that has watched the same segment earlier (temporal locality)

SLIDE 51

peer-to-peer mesh streaming: a needed chunk is likely available from a peer that is viewing the same region (spatial locality)

SLIDE 52

idea: exploit spatial locality to reduce message overhead.

SLIDE 53

chunks

SLIDE 54

chunks (1 chunk = 240 vertex splits)

SLIDE 56

groups (1 group = 16 chunks)

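Assuming vertex splits are numbered consecutively (my assumption for illustration; the real assignment may follow the vertex hierarchy), the two-level aggregation is just integer division:

    SPLITS_PER_CHUNK = 240   # 1 chunk = 240 vertex splits (slide 54)
    CHUNKS_PER_GROUP = 16    # 1 group = 16 chunks (slide 56)

    def chunk_of(split_index):
        return split_index // SPLITS_PER_CHUNK

    def group_of(split_index):
        return chunk_of(split_index) // CHUNKS_PER_GROUP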
SLIDE 57

Only exchange messages between peers that need chunks from the same group.
SLIDE 58

how the protocol works
SLIDE 59

server maintains, for each group, the list of group members and which chunks each member possesses:

(128.3.13.44, 100100), (123.44.121.99, 111111), ...
(90.1.1.00, 0001001), (32.11.99.233, 101111), ...

each entry: (peer address, bitmap of the chunks it holds)
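A minimal sketch of that bookkeeping (assumed structure and names, not the paper's data layout):

    from collections import defaultdict

    class GroupRegistry:
        # Per group: {peer address: bitmap of the chunks that peer holds}.
        def __init__(self):
            self.members = defaultdict(dict)

        def join(self, group, peer):
            self.members[group].setdefault(peer, 0)

        def mark_chunk(self, group, peer, chunk):
            # A peer reports that it now holds `chunk`
            # (0-based index within the group).
            bitmap = self.members[group].get(peer, 0)
            self.members[group][peer] = bitmap | (1 << chunk)

        def holders(self, group, chunk):
            # Peers that can serve `chunk`, sparing the server the upload.
            bit = 1 << chunk
            return [p for p, bm in self.members[group].items() if bm & bit]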
SLIDE 60

client: “I want to view mesh M”
server sends: (i) the base mesh, (ii) the group members of the highest group, (iii) which chunks each member possesses
SLIDE 61

client decides which chunk of vertex splits to refine next:
if some peer has that chunk, request it from the peer; else request it from the server
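On top of the GroupRegistry sketch above, the client's pull decision might look like this (fetch and the peer/server objects are hypothetical):

    def request_chunk(chunk, group, registry, peers, server):
        # Prefer a peer that already holds the chunk (peer-to-peer
        # transfer); fall back to the server otherwise. A real client
        # would also balance load across several holders.
        holders = registry.holders(group, chunk)
        if holders:
            return peers[holders[0]].fetch(chunk)
        return server.fetch(chunk)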
SLIDE 62

peers inform the server when they receive a chunk;
once a chunk in the next group can be decoded, the server sends the group members of the next group
SLIDE 63

groups

SLIDE 64

if there are too many group members, the server sends only the most recent subset, plus some seeds
SLIDE 65

ongoing work:

  • 1. evaluation using user traces and a simulator
  • 2. other design parameters
  • 3. further reducing the role of the server
SLIDE 66

summary:

  • receiver-driven design to reduce CPU cost
  • peer-to-peer design to reduce bandwidth cost
SLIDE 67

Thank you (謝謝)