

  1. Optimal Automatic Multipass Shader Partitioning by Dynamic Programming Alan Heirich Sony Computer Entertainment America 31 July 2005

  2. Disclaimer This talk describes GPU architecture research carried out at Sony Computer Entertainment. It does not describe any commercial product. In particular, this talk does not discuss the PLAYSTATION 3 or the RSX.

  3. Outline The problem: Automatically compile large shaders for small GPUs. The insight: This is a classical job-shop scheduling problem. The proposed solution: Dynamic Programming.

  4. Outline++ The problem: Automatically compile large shaders for small GPUs. Large shaders exhaust registers, interpolants, pending texture requests, ... Goal: optimal solutions, scalable algorithm. The insight: This is a classical job-shop scheduling problem. Job-shop scheduling is NP-hard (its decision version is NP-complete); it is well studied, and many solution algorithms exist. The proposed solution: Dynamic Programming. Satisfies a nonlinear objective function. Optimal and (semi-)scalable.

  5. The Problem Physical resources are limited: rasterized interpolants, GP registers, pending texture requests, instruction count, etcetera. A very simple example: result.x = (a+b)*(c+d). Evaluating this DAG requires three GP registers; with only two registers it must be split into multiple passes. [Figure: expression DAG for result.x = (a+b)*(c+d)]

  6. result.x = (a+b)*(c+d) scheduled on two registers, with one spill to an auxiliary buffer and a second pass:

     Instruction                     R0            R1
     Load R0=a                       a             -
     Load R1=b                       a             b
     R0=+(R0,R1)                     a+b           b
     Load R1=c                       a+b           c
     Store R0 -> aux                 a+b           c
     Load R0=d                       d             c
     R0=+(R0,R1)                     c+d           c
     New Pass (aux -> R1)            c+d           a+b
     R0=*(R0,R1)                     (a+b)*(c+d)   a+b
     Store R0 -> swizzle(result,x)   (a+b)*(c+d)   a+b
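
The schedule above can be checked mechanically. A minimal sketch, assuming a toy machine in which "aux" plays the role of the texture written at a pass boundary; every name here is illustrative, not the paper's machine model:

```python
# Toy interpreter for the two-register schedule above; "aux" stands in
# for the off-chip buffer that carries a spilled value between passes.
def run(schedule, inputs):
    regs, aux = {}, None
    for op, *args in schedule:
        if op == "load":            # bring a named input into a register
            dst, name = args
            regs[dst] = inputs[name]
        elif op == "add":
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            dst, a, b = args
            regs[dst] = regs[a] * regs[b]
        elif op == "store_aux":     # spill before the pass ends
            aux = regs[args[0]]
        elif op == "new_pass":      # the next pass reads the spill back
            regs[args[0]] = aux
        elif op == "store":         # write out the final result
            return regs[args[0]]

schedule = [
    ("load", "R0", "a"), ("load", "R1", "b"), ("add", "R0", "R0", "R1"),
    ("load", "R1", "c"), ("store_aux", "R0"), ("load", "R0", "d"),
    ("add", "R0", "R0", "R1"), ("new_pass", "R1"),
    ("mul", "R0", "R0", "R1"), ("store", "R0"),
]
assert run(schedule, {"a": 1, "b": 2, "c": 3, "d": 4}) == (1 + 2) * (3 + 4)
```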

  7. The MPP Problem Multipass Partitioning Problem [Chan 2002] Given: An input DAG. A GPU architecture. Find: A schedule of DAG operations. A partition of that schedule into passes. Such that: Schedule observes DAG precedence relations. Schedule respects GPU resource limits. Runtime of compiled shader is minimal (optimality). (Chan: number of passes is minimal.)
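
Stated as data, the inputs and outputs of the MPP look roughly like the following; a hypothetical sketch with names of my own choosing, not Chan's notation:

```python
from dataclasses import dataclass

@dataclass
class GPULimits:
    """The target architecture, reduced to per-pass resource caps."""
    registers: int
    interpolants: int
    pending_texture_requests: int
    instructions: int

@dataclass
class Partition:
    """An MPP solution: a schedule plus its division into passes."""
    ops: list[str]          # DAG operations, ordered to respect precedence
    pass_breaks: list[int]  # indices in `ops` where a new pass begins
```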

  8. References E. Chan, R. Ng, P. Sen, K. Proudfoot, and P. Hanrahan. Efficient partitioning of fragment shaders for multipass rendering on programmable graphics hardware. Graphics Hardware 2002. T. Foley, M. Houston, and P. Hanrahan. Efficient partitioning of fragment shaders for multiple-output hardware. Graphics Hardware 2004. A. Riffel, A. Lefohn, K. Vidimce, M. Leone, and J. Owens. Mio: fast multipass partitioning via priority-based instruction scheduling. Graphics Hardware 2004.

  9. Requirements: Optimal, Scalable Nonlinear cost function: depends on current machine state. Optimal solutions: (many) fine-grained passes; long shaders; high-dimensional solution space; many local minima (suboptimal solutions). Scalable algorithm: compile-time cost must not grow unreasonably. O(n log n) is scalable; O(n^2) is not.

  10. Scalability, n=10 [Plot: growth of n log n, n^1.1, n^1.2, and n^2 at n=10]

  11. Scalability, n=100 (Current vertex shaders) [Plot: the same growth curves at n=100]

  12. Scalability, n=1000 (Current real-time fragment shaders) [Plot: the same growth curves at n=1000]

  13. Scalability, n=10000 (Current GPGPU fragment shaders) [Plot: the same growth curves at n=10000]
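
The four plots all show the same comparison at growing n: the gap between O(n log n) and O(n^2) widens dramatically with shader size. A few lines of Python (mine, with log base 2 assumed since the slides do not say) reproduce the numbers behind them:

```python
import math

# Compare the growth rates plotted on slides 10-13.
for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}: "
          f"n log n={n * math.log2(n):>10.0f}  "
          f"n^1.1={n ** 1.1:>10.0f}  "
          f"n^1.2={n ** 1.2:>10.0f}  "
          f"n^2={n ** 2:>12.0f}")
```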

  14. Three Proposed Solutions
      Graph (DAG) cut sets: minimum cut sets (RDS_h, MRDS_h). Minimize the number of cuts [Chan 2002, Foley 2004]. Greedy algorithms. O(n^3), O(n^4): nonscalable.
      Job scheduling: list scheduling (MIO). Minimize instruction count, a linear function [Riffel 2004]. Greedy algorithm. O(n log n): scalable.
      Job scheduling: dynamic programming (DPMPP). Minimize predicted run time, a nonlinear function [this paper]. Globally optimal algorithm. O(n^1.14966) empirically: (semi-)scalable.

  15. Scalability, n=10000 [Plot: the same growth curves annotated with the algorithms: RDS_h O(n^3), MRDS_h O(n^4), MIO O(n log n), DPMPP O(n^1.14966)]

  16. The Insight: Job Shop Scheduling An NP-hard multi-stage decision problem. A set of time slots and functional units. A set of tasks. An objective function (cost). Goal: assign tasks to slots/units to minimize cost. Examples: compiler instruction selection, airline crew scheduling, factory production planning, etcetera. See R. Möhring, A. Schulz, F. Stork, and M. Uetz, "Solving project scheduling problems by minimum cut computations," Management Science (2002), pp. 330-350.

  17. Job Shop Scheduling for MPP Defined by the DAG and the GPU architecture. A set of n DAG operations (+ the "new pass" operation). A schedule with n time slots. A single GPU processor. The cost function predicts the quality of the compiled code: predicted execution time (DPMPP); instruction count (MIO); number of passes (RDS_h, MRDS_h). Many possible formulations and solution algorithms: integer programming, linear programming, dynamic programming, list scheduling, graph cutting, branch and bound, satisfiability, ... Problem size can often be O(n^2) [nonscalable].

  18. Integer Programming Formulation Jobs (tasks) j, times t; unknowns x. 0-1 decision variables: x_{j,t} = 1 iff job j is scheduled at time t. Costs: w_{j,t} = time-dependent cost of job j at time t. Resource requirements: r_{j,k} for job j of resource k. Constraints:
      Precedence: Σ_t t (x_{j,t} - x_{i,t}) >= d_{i,j}
      Resource (for each time t and resource k): Σ_j r_{j,k} (Σ_{s=t-p_j+1}^{t} x_{j,s}) <= R_k
      Uniqueness: Σ_t x_{j,t} = 1 for each job j
      Objective: minimize Σ_{j,t} w_{j,t} x_{j,t} subject to the constraints (a linear objective). Various solvers (simplex, Karmarkar's algorithm, ...); potentially exponential worst-case time. Easy transformation to SAT (Boolean decision variables), with different solvers (CHAFF, branch and bound, Tabu, ...). ||X|| is O(n^2).
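
As a concreteness check, here is the formulation above written out in PuLP for a toy instance, assuming unit processing times (p_j = 1, so the resource sum collapses to one slot) and a single register resource; the instance data is invented for illustration:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Toy instance: 4 jobs, 4 time slots, one resource (live registers).
jobs, times = range(4), range(4)
prec = [(0, 1), (2, 3)]                 # job i must run before job j
r = {0: 1, 1: 2, 2: 1, 3: 2}            # registers demanded by each job
R = 2                                   # register budget per slot
w = {(j, t): t + 1 for j in jobs for t in times}  # later slots cost more

prob = LpProblem("mpp_ip", LpMinimize)
x = {(j, t): LpVariable(f"x_{j}_{t}", cat=LpBinary)
     for j in jobs for t in times}

# Objective: minimize the total time-dependent cost.
prob += lpSum(w[j, t] * x[j, t] for j in jobs for t in times)

for j in jobs:                          # uniqueness: each job runs exactly once
    prob += lpSum(x[j, t] for t in times) == 1
for i, j in prec:                       # precedence: sum_t t*(x_jt - x_it) >= 1
    prob += lpSum(t * (x[j, t] - x[i, t]) for t in times) >= 1
for t in times:                         # resource: register demand fits budget
    prob += lpSum(r[j] * x[j, t] for j in jobs) <= R

prob.solve()
print({j: next(t for t in times if x[j, t].value() > 0.5) for j in jobs})
```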

  19. Graph Cut Formulation See [Mohring 2002] for details. Vertex v_{j,t} represents job j scheduled at time t; v_{j,first(j)}, ..., v_{j,last(j)} mark all possible times for job j. Temporal arcs (v_{i,t}, v_{j,t+d_{i,j}}) for time lags d_{i,j} have infinite capacity. Assignment arcs (v_{j,t}, v_{j,t+1}) have capacity w_{j,t}. A minimum cut in this graph defines a minimum-cost schedule. O(m log m) time for m vertices [but m is O(n^2)].
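
A toy rendering of that construction with networkx; the time windows and the source/sink hookup are my simplifications, so treat this as a sketch of the idea rather than Mohring's exact construction:

```python
import networkx as nx

# Two jobs; precedence says job 0 must start at least 1 slot before job 1.
# Time windows first(j)..last(j) already reflect the precedence bounds.
window = {0: [0, 1], 1: [1, 2]}
w = {(0, 0): 5, (0, 1): 3,            # w[j, t]: cost of starting job j at t
     (1, 1): 2, (1, 2): 6}

G = nx.DiGraph()
INF = float("inf")
for j, ts in window.items():
    # Chain of assignment arcs (v_{j,t}, v_{j,t+1}) carrying costs w[j, t];
    # the single arc the cut crosses selects job j's start time.
    G.add_edge("s", (j, ts[0]), capacity=INF)
    for t in ts:
        nxt = (j, t + 1) if t + 1 in ts else "t"
        G.add_edge((j, t), nxt, capacity=w[j, t])

# Temporal arcs (v_{0,t}, v_{1,t+1}) with infinite capacity enforce the lag:
# if job 0 starts at t or later, job 1 must start at t+1 or later.
for t in window[0]:
    if t + 1 in window[1]:
        G.add_edge((0, t), (1, t + 1), capacity=INF)

cut_value, _ = nx.minimum_cut(G, "s", "t")
print("minimum schedule cost:", cut_value)
# expect 7: job 0 at t=0 (cost 5), job 1 at t=1 (cost 2)
```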

  20. Dynamic Programming Formulation Search tree root is the terminal end state at time t=n. Vertices are snapshots of machine state. Edges are transitions (a DAG operation, or "new pass"). Generate the tree breadth-first; leaves represent initial states (time t=1). Every path from leaf to root is a valid schedule. The MPP solution is the lowest-cost path. Time and space are O(n^b), where b is the average branching factor. Prune maximally (b < 1.2): (semi-)scalable. [Figure: search tree whose vertices are register-file snapshots such as {R0=a+b, R1=c+d}]
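
To make the search concrete, here is a toy version for result = (a+b)*(c+d). The state is just a register snapshot, the cost model is one unit per instruction, and plain Dijkstra over the state graph stands in for the talk's backward breadth-first generation; all of these are my simplifications, not the paper's algorithm. With REGS = 2 it finds no schedule at all, because the spill and "new pass" transitions of slide 6 are not modeled; with REGS = 3 it finds the 7-instruction single-pass schedule.

```python
import heapq

# States are register-file snapshots, edges are instructions, and the
# answer is the cheapest path from the empty state to one holding "result".
DAG = {"a+b": ("a", "b"), "c+d": ("c", "d"), "result": ("a+b", "c+d")}
INPUTS = ("a", "b", "c", "d")
REGS = 3   # with REGS = 2 the search fails: a spill/new pass would be needed

def successors(regs):
    ready = [v for v in INPUTS if v not in regs]                 # loads
    ready += [v for v, (x, y) in DAG.items()
              if v not in regs and x in regs and y in regs]      # ALU ops
    for v in ready:
        if len(regs) < REGS:
            yield regs | {v}
        else:
            for old in regs:         # evict any register to make room
                yield (regs - {old}) | {v}

def cheapest_schedule_cost():
    start = frozenset()
    dist, heap, tie = {start: 0}, [(0, 0, start)], 1
    while heap:
        d, _, regs = heapq.heappop(heap)   # tie-breaker keeps heap sortable
        if "result" in regs:
            return d
        if d > dist.get(regs, float("inf")):
            continue
        for nxt in successors(regs):
            if d + 1 < dist.get(nxt, float("inf")):
                dist[nxt] = d + 1
                heapq.heappush(heap, (d + 1, tie, nxt))
                tie += 1
    return None                            # no schedule within this model

print(cheapest_schedule_cost())            # 7: four loads, two adds, one mul
```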

  21. Dynamic Programming Example Root is the terminal end state (time t=n). [Figure: the expression DAG beside the root state, in which (a+b)*(c+d) has been stored to @result.x]

  22. Dynamic Programming Example Generate the tree breadth-first. Accumulate cost along paths. [Figure: the root expands backward through the final Store (=) and the multiply *((a+b),(c+d)) into states holding (a+b) and (c+d)]

  23. Dynamic Programming Example Every path from leaf to root is a valid schedule. [Figure: the tree grown down through +(a,b) and +(c,d) to leaf states produced by the initial loads of a, b, c, and d]

  24. Dynamic Programming Example MPP solution is the lowest-cost path. [Figure: the same search tree with the cheapest leaf-to-root path highlighted]

  25. Key Elements of DP Solution Solve the problem in reverse: start from the optimal end state (requires the Markov property). Prune maximally: manages complexity (relies on "optimal substructure"). Retain all useful intermediate states: consider all valid paths to find the solution.

  26. Markov property (Stale operands) Two different schedules converge on the same machine state, so the cost to finish is independent of the path taken to get there:

     Schedule A                       Schedule B
     Load R0=a                        Load R0=d
     Load R1=b                        Load R1=c
     R0=+(R0,R1)   [R0=a+b]           R0=+(R0,R1)   [R0=c+d]
     Load R1=c                        Load R1=b
     Store R0                         Store R0
     Load R0=d                        Load R0=a
     R0=+(R0,R1)   [R0=c+d]           R0=+(R0,R1)   [R0=a+b]
     New Pass                         New Pass
     R0=*(R0,R1)                      R0=*(R0,R1)
     Store R0      [(a+b)*(c+d)]      Store R0      [(a+b)*(c+d)]
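
In code terms, the Markov property lets the search key its memo table on the machine state alone and merge converging paths. A sketch with a hypothetical state encoding that merges states up to register renaming; whether renaming is really free depends on the ISA, so treat the key as illustrative:

```python
# Future cost depends only on the present machine state (Markov property),
# so schedules that converge on the same state share one memo entry,
# whatever instruction order produced it.
def state_key(regs: dict) -> frozenset:
    """Order-free snapshot: which values are live, ignoring history."""
    return frozenset(regs.values())

a_first = {"R0": "a+b", "R1": "c+d"}   # schedule that computed a+b first
d_first = {"R0": "c+d", "R1": "a+b"}   # schedule that computed c+d first
assert state_key(a_first) == state_key(d_first)   # one memo entry for both
```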

  27. Markov property holds for ... GP registers Rasterized interpolants Pending texture requests Instruction storage etcetera
