

  1. Gradient Error and Origami
     My Work With The Space Time Programming Group
     Josh Horowitz, MIT CSAIL
     November 2, 2007

  2. UROP Proposal
     Two branches of research:
     1. Analyze errors of gradients. '... I will investigate error in the ubiquitous "gradient" algorithm, which determines the distance from a processor to a designated region. My investigation will involve both conceptual analysis of mathematical models and quantitative analysis of computer-run simulations.'
     2. Make Proto fold. '... I will also pursue the project of implementing Radhika Nagpal's biologically-inspired Origami Shape Language in the STPG's Proto programming language.'

  3. Gradients / The Algorithm
     The gradient algorithm gives a way to estimate global distance (across many radio ranges) from local distance.
     1. Initialize the source S at 0 and all other nodes at ∞.
     2. Continually update the gradient value of all nodes N ∉ S based on their neighbors, in order to keep the triangle inequality maintained:
        $\mathrm{Gradient}(N) = \min_{N'\ \mathrm{near}\ N} \bigl[\,\mathrm{Gradient}(N') + d(N, N')\,\bigr].$
     Essentially, this gives the length of the shortest path from N to the source in a graph with edges connecting nearby nodes.
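
A minimal sketch of this relaxation in Python (an illustration for this transcript, not the STPG's Proto implementation): nodes are scattered points, "near" means within one radio range, and values are relaxed until the triangle inequality holds everywhere.

```python
import math

def run_gradient(nodes, source_ids, radio_range):
    """nodes: list of (x, y) positions; source_ids: set of indices of source nodes."""
    dist = {i: (0.0 if i in source_ids else math.inf) for i in range(len(nodes))}

    def d(i, j):
        (x1, y1), (x2, y2) = nodes[i], nodes[j]
        return math.hypot(x1 - x2, y1 - y2)

    # Neighborhoods: nodes within one radio range of each other.
    neighbors = {i: [j for j in range(len(nodes))
                     if j != i and d(i, j) <= radio_range]
                 for i in range(len(nodes))}

    # Relax until no gradient value can be lowered (a fixed point).
    changed = True
    while changed:
        changed = False
        for i in range(len(nodes)):
            if i in source_ids:
                continue
            best = min((dist[j] + d(i, j) for j in neighbors[i]), default=math.inf)
            if best < dist[i]:
                dist[i] = best
                changed = True
    return dist
```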

  4. Gradients / The Problem
     Even with error-free processing and error-free local range-finding, the gradient algorithm is not error-free: trails are jagged.
     Figure: Gradient trails back to a source.

  5. Gradients
     This jaggedness introduces both systematic and statistical error into the gradient algorithm. I studied one particular kind of systematic error: the fact that the gradient algorithm more accurately measures αd(N, S) than d(N, S) itself (for some constant α ≥ 1). That is, there is an underlying jaggedness that asymptotically causes gradient values to be scaled upwards by a constant factor α.
     Figure: d(N, S) vs. [Gradient_S(N) − d(N, S)] (so the red line has slope α − 1).

  6. Gradients
     There are different ways of looking at this phenomenon:
     - It is unfortunate error which we would like to avoid. We need to know how high node density (N_loc) has to be to put α close enough to 1 for our purposes, whatever they may be.
     - More reasonably: it is a feature of how the gradient algorithm works which we would like to correct for. We need to know what α is in terms of N_loc.
     Either way requires knowing the relationship between N_loc and α. I first investigated this question through direct experimentation.

  7. Gradients / Experiments / Basic Setup
     - Fill a rectangle randomly with nodes.
     - Set a thin strip along the left edge to be the source.
     - Run a gradient.
     The output of a run will be a set of pairs (d(N, S), Gradient_S(N)) = (actual distance, calculated distance). We perform a linear regression to determine the value of α for this particular run. After many runs, with varying N_loc, we will have a large set of (N_loc, α) pairs to analyze (see the sketch after this slide).
     Figure: N_loc vs. α (linear and log-log).
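
A sketch of one run under the setup above, reusing the run_gradient helper sketched earlier; the slope of a least-squares fit through the origin serves as the estimate of α. The rectangle dimensions, node count, and radio range are placeholder values, not the actual experimental parameters.

```python
import math
import random

def one_run(width=20.0, height=10.0, n_nodes=2000, radio_range=1.0, strip=0.5):
    nodes = [(random.uniform(0, width), random.uniform(0, height))
             for _ in range(n_nodes)]
    sources = {i for i, (x, _) in enumerate(nodes) if x < strip}
    grad = run_gradient(nodes, sources, radio_range)

    # Actual distance to the thin source strip is approximately the x coordinate.
    pairs = [(x, grad[i]) for i, (x, _) in enumerate(nodes)
             if i not in sources and not math.isinf(grad[i])]

    # Linear regression through the origin: the slope estimates alpha.
    return sum(x * g for x, g in pairs) / sum(x * x for x, _ in pairs)

# alpha_hat = one_run()  # repeat over many runs and densities to collect (N_loc, alpha) pairs
```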

  8. Gradients / Experiments / Initial Results
     When I gave my first presentation on my research, things seemed to be pointing nicely to an α ∝ n_loc^{−2} relationship. The two experiments represented below gave fits α = 0.693 n_loc^{−2.212} and α = 0.648 n_loc^{−2.045}, respectively (a sketch of such a fit follows this slide).
     Figure: Old-school data.
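
The kind of power-law fit quoted above can be obtained by ordinary least squares in log-log space. The sketch below assumes the (N_loc, α) pairs have already been collected; none of the numbers from the slides are built in.

```python
import math

def fit_power_law(n_loc_values, alpha_values):
    """Fit alpha ~ c * n_loc**k by linear least squares on (log n_loc, log alpha)."""
    xs = [math.log(n) for n in n_loc_values]
    ys = [math.log(a) for a in alpha_values]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    k = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = math.exp(mean_y - k * mean_x)
    return c, k   # alpha is approximately c * n_loc**k
```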

  9. Gradients / Experiments / Condor
     To continue exploring the parameter space effectively, I developed a system to run Proto simulations on CSAIL's Condor-based computing cluster. This required:
     - Stripping Proto of its graphics code, so it can compile and run on the cluster.
     - Figuring out how to make Condor play nice with output of dump files.
     - Making scripts to generate large "submit" files for Condor (a sketch of such a script follows).
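
A hypothetical sketch of the last item: a script that writes a Condor submit file queuing one job per (density, trial) pair. The executable name, command-line flags, and directory layout are illustrative assumptions, not the actual STPG configuration.

```python
import os

def write_submit_file(path="gradient.submit", densities=range(6, 25), trials=100):
    os.makedirs("runs", exist_ok=True)
    with open(path, "w") as f:
        f.write("universe = vanilla\n")
        f.write("executable = proto\n")      # assumed headless Proto binary
        f.write("log = condor.log\n")
        for n_loc in densities:
            for trial in range(trials):
                # Hypothetical flags for density and random seed.
                f.write(f"arguments = gradient.proto -nloc {n_loc} -seed {trial}\n")
                f.write(f"output = runs/nloc{n_loc}_trial{trial}.out\n")
                f.write(f"error = runs/nloc{n_loc}_trial{trial}.err\n")
                f.write("queue\n")

write_submit_file()   # then submit with: condor_submit gradient.submit
```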

  10. Gradients / Experiments / Newer Results
     With the computing cluster, I could run Proto thousands and thousands of times with thousands of nodes per run. The results threw doubt on the simplicity of a −2 exponent:
     Figure: Each point is a run; the runs are partitioned into four experiments (red, blue, green, and purple). The dark black line is the fit, α = 22.91 · n_loc^{−2.608}. The pale line is the fit from the old set of experiments.

  11. Gradients / Theory
     I took a break from running experiments and analyzing data to see if I could derive an expression for α purely from theory.

  12. Gradients / Theory / Theoretical Model
     Consider a node N at (0, 0), with the gradient source infinitely far to the right (positive x direction). If N uses some neighboring node N_1 located at (x, y) in its gradient path to the source, it will move us x closer to the source, while increasing the gradient value by $\sqrt{x^2 + y^2}$. If this pattern continues, we will have
        $\alpha = \frac{\sqrt{x^2 + y^2}}{x} = \sqrt{1 + \left(\frac{y}{x}\right)^2}.$
     Assuming N_1 is distributed uniformly in the half-circle to N's right, a simple geometric argument yields the cumulative distribution
        $F(A) = P(\alpha < A) = \frac{2}{\pi}\arctan\sqrt{A^2 - 1}.$
     Taking the derivative of this yields the probability density function
        $f(\alpha) = \frac{2}{\pi}\,\frac{1}{\alpha\sqrt{\alpha^2 - 1}}.$
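
A quick Monte Carlo check of this derivation (my own sketch, not part of the slides): sample N_1 uniformly in the unit half-disc to the right of N, compute α, and compare the empirical P(α < A) with (2/π) arctan √(A² − 1).

```python
import math
import random

def sample_alpha():
    # Rejection-sample a point uniformly in the right half of the unit disc.
    while True:
        x, y = random.uniform(0.0, 1.0), random.uniform(-1.0, 1.0)
        if 0.0 < x and x * x + y * y <= 1.0:
            return math.hypot(x, y) / x   # alpha = sqrt(x^2 + y^2) / x

samples = [sample_alpha() for _ in range(100_000)]
for A in (1.1, 1.5, 2.0, 3.0):
    empirical = sum(a < A for a in samples) / len(samples)
    theory = (2 / math.pi) * math.atan(math.sqrt(A * A - 1))
    print(f"A = {A}: empirical {empirical:.3f}, theory {theory:.3f}")
```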

  13. Gradients / Theory
     Now suppose N has exactly N_loc / 2 = n neighbors to its right. Each gives rise to an α with a distribution identical to the one just derived (that is, we suppose that we have N_loc / 2 independent and identically distributed random variables). To calculate the distribution of their minimum, we use the order-statistic formula
        $f_{\min}(\alpha) = n\,(1 - F(\alpha))^{n-1} f(\alpha) = n \left(\tfrac{2}{\pi}\right)^{n} \frac{\left(\operatorname{arccot}\sqrt{\alpha^{2} - 1}\right)^{n-1}}{\alpha\sqrt{\alpha^{2} - 1}}$
     (checked numerically after this slide).
     Figure: Plots of f_min(α) for n = 1, 2, 3, 4, 5 (higher n ⇒ lower f_min as α → ∞).
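
A small numerical sanity check that the closed form, as reconstructed above, agrees with the generic order-statistic product n(1 − F(α))^{n−1} f(α), using the F and f from the previous slide.

```python
import math

def F(a):
    return (2 / math.pi) * math.atan(math.sqrt(a * a - 1))

def f(a):
    return (2 / math.pi) / (a * math.sqrt(a * a - 1))

def f_min_closed(a, n):
    arccot = math.pi / 2 - math.atan(math.sqrt(a * a - 1))   # arccot(sqrt(a^2 - 1))
    return n * (2 / math.pi) ** n * arccot ** (n - 1) / (a * math.sqrt(a * a - 1))

for n in (1, 2, 5):
    for a in (1.1, 1.5, 2.0):
        generic = n * (1 - F(a)) ** (n - 1) * f(a)
        print(f"n = {n}, a = {a}: generic {generic:.6f}, closed form {f_min_closed(a, n):.6f}")
```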

  14. Gradients / Theory
     The expected values of these distributions are
        $E(\alpha_{\min}) = n \left(\tfrac{2}{\pi}\right)^{n} \int_{0}^{\pi/2} \beta^{\,n-1} \csc\beta \, d\beta.$
     Mathematica can't touch these, but we can analyze them numerically (a sketch follows this slide).
     Figure: n_loc vs. α (linear and log-log). The fit line plotted in the second graph is α = 6.476 n_loc^{−1.922}.
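
A sketch of the numerical analysis: evaluate the integral from 0 to π/2 of β^(n−1) csc β dβ with a simple midpoint rule (the integral diverges for n = 1, so only n ≥ 2 is evaluated). Any standard quadrature routine would work equally well; this version keeps the sketch dependency-free.

```python
import math

def expected_alpha_min(n, steps=100_000):
    """E(alpha_min) = n * (2/pi)**n * integral_0^{pi/2} beta**(n-1) * csc(beta) d beta, n >= 2."""
    h = (math.pi / 2) / steps
    total = 0.0
    for k in range(steps):
        beta = (k + 0.5) * h               # midpoint of each sub-interval
        total += beta ** (n - 1) / math.sin(beta)
    return n * (2 / math.pi) ** n * total * h

for n in (2, 3, 5, 10, 50):
    print(f"n = {n}: E(alpha_min) is approximately {expected_alpha_min(n):.4f}")
```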

  15. Gradients / Theory / Theory vs. Experiment
     Predicting an exponent of −2 used to sound good, but more thorough experimentation seems to suggest that it is inaccurate.
     Figure: n_loc vs. α (red line is the prediction from theory).

  16. Gradients / Theory / Theoretical Model
     Where does this discrepancy come from? Assumptions:
     1. The source is very far away.
     2. There is no variation in the number of neighbors.
     3. The distribution of neighbors of any two nodes is independent (even for nodes with overlapping neighborhoods).
     None of these holds exactly during the execution of a gradient. It is unknown whether their failures invalidate the results completely, introduce additional phenomena that augment the results, or are completely negligible.

  17. Origami / Origami with Proto
     In her PhD thesis, Radhika Nagpal describes a foldable sheet of cells which, purely through local interaction and actuation, can attain a shape determined by a global specification. My other goal for the summer was to implement this in Proto. This consisted of several tasks:
     1. Implement a folding mechanism in the Proto simulator and add language hooks to access the new features (actuators and sensors).
     2. Figure out how to determine creases and sequence folds in Proto code.
     3. Write a program to transform a specification written in Nagpal's high-level Origami Shape Language into a Proto program which folds the material as specified.
     I finished the first two of these to some degree. Unfortunately, I can't get it to run right now. :-(
