Field-Wide Estimation of Soil Moisture Using Compressive Sensing (PowerPoint Presentation)




15/05/2018

Sharif University of Technology
Electrical Engineering Department

Hosein Pourshamsaei
Supervisor: Dr. Amin Nobakhti

Field-Wide Estimation of Soil Moisture Using Compressive Sensing

In The Name of God

Contents

 Importance of moisture estimation
 Compressive Sensing (CS)
 Applying CS theory to the moisture estimation problem
 Data sets for numerical experiments
 Different approximations for solving the CS problem
 Comparison of different algorithms
 Novel sensor placement algorithm
 Conclusion and future work

2/26

Importance of Moisture Estimation

3/26

 Moisture monitoring plays an essential role in decision making in precision agriculture:
   Saving water in irrigation
   Severe effects of water stress on crop yield
   Irrigating at the right time and in the right quantity
 Methods:
   Regular moisture sensor installation over the field: cost and maintenance issues
   Remote sensing: fine resolution at arbitrary times is not achievable at reasonable cost
   Estimation theories

Compressive Sensing (CS)

 An effective tool for reconstructing sparse signals
 An ℓ0 norm optimization problem:

ŷ = arg min ‖y‖₀ subject to z = Φy

 z: measurement vector (M × 1)
 Φ: measurement matrix (M × N)
 y: sparse signal (N × 1)

 CS is also valid for compressible signals

4/26
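The ℓ0 problem above is combinatorial; in practice it is relaxed to an ℓ1 problem, which can be written as a linear program. A minimal NumPy/SciPy sketch, not from the slides; the dimensions, seed, and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N, K = 20, 50, 3                    # measurements, signal length, sparsity

# A K-sparse signal and a random Gaussian measurement matrix
y_true = np.zeros(N)
y_true[rng.choice(N, K, replace=False)] = rng.normal(0, 1, K)
Phi = rng.normal(0, 1, (M, N))
z = Phi @ y_true

# l1 relaxation of the l0 problem, as a linear program over [y; t]:
#   min sum(t)  s.t.  -t <= y <= t,  Phi @ y = z
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
              A_eq=np.hstack([Phi, np.zeros((M, N))]), b_eq=z,
              bounds=[(None, None)] * (2 * N))
y_hat = res.x[:N]
print("max reconstruction error:", np.abs(y_hat - y_true).max())
```

With M well above the sparsity level, the relaxed problem recovers the sparse signal exactly in the noiseless case.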


Applying CS Theory to Moisture Estimation Problem

 Moisture data is not sparse, but it is spatially correlated.
 The factors affecting moisture are nearly constant over time.
 Therefore, with proper sorting, the data is sparse in the frequency domain.
 The DCT (Discrete Cosine Transform) is used in this project.

5/26
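To illustrate why spatially correlated data is compressible under the DCT, here is a small synthetic sketch; the smooth random-walk "moisture" profile and all parameters are invented for illustration only:

```python
import numpy as np
from scipy.fft import dct

# Synthetic spatially correlated "moisture percent" profile (a smooth random walk)
rng = np.random.default_rng(1)
field = 30.0 + np.cumsum(rng.normal(0, 0.5, 512))

results = {}
for name, sig in [("raw", field), ("sorted", np.sort(field))]:
    coeffs = dct(sig, norm="ortho")                      # orthonormal DCT-II
    # Count how many of the largest coefficients carry 99% of the energy
    energy = np.cumsum(np.sort(coeffs ** 2)[::-1]) / np.sum(coeffs ** 2)
    results[name] = int(np.searchsorted(energy, 0.99)) + 1
    print(f"{name:6s}: 99% of DCT energy in {results[name]} of 512 coefficients")
```

A handful of low-frequency coefficients carry almost all of the energy, which is exactly the compressibility that CS exploits.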

Applying CS Theory to Moisture Estimation Problem

 Consider y = Ψα, where Ψ is the IDCT matrix.
 Consider Φ as an M × N matrix in which each row contains a single 1 and N − 1 zeros.
 Modified formulation of CS for moisture estimation:

β̂ = arg min ‖β‖₁ subject to z = ΦΨβ
ŷ = Ψβ̂

 Preconditions:
   Coherence: μ(Φ, Ψ) = √N · max_{k,j} |⟨φ_k, ψ_j⟩|
   For our case: μ(Φ, Ψ) = 1
   For reconstructing the signal with the ℓ1 approximation, the number of measurements must satisfy M ≥ C · μ²(Φ, Ψ) · K · log N (K: sparsity level, C: a constant)

6/26
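The modified formulation can be sketched directly: Ψ built as the IDCT matrix, Φ as a row-selection matrix, and the ℓ1 problem solved as a linear program. The dimensions, the sparse coefficient vector `alpha`, and the variable names are illustrative, not taken from the project:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M = 128, 50

# Ground truth: a field that is sparse in the DCT domain by construction
alpha = np.zeros(N)
alpha[:5] = [40.0, -8.0, 3.0, -1.5, 0.7]        # a few low-frequency coefficients
Psi = idct(np.eye(N), norm="ortho", axis=0)      # IDCT matrix (y = Psi @ alpha)
y_true = Psi @ alpha

# Phi: each row contains a single 1 -> point measurements at M random locations
idx = rng.choice(N, M, replace=False)
Phi = np.zeros((M, N))
Phi[np.arange(M), idx] = 1.0
z = Phi @ y_true
A = Phi @ Psi                                    # combined sensing matrix

# beta_hat = arg min ||beta||_1  s.t.  z = Phi Psi beta, as an LP over [beta; t]
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
              A_eq=np.hstack([A, np.zeros((M, N))]), b_eq=z,
              bounds=[(None, None)] * (2 * N))
y_hat = Psi @ res.x[:N]                          # y_hat = Psi beta_hat
print("max field error:", np.abs(y_hat - y_true).max())
```

The whole field is recovered from point measurements at a fraction of the locations, which is the core of the moisture estimation scheme.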

Data Sets for Numerical Experiments

7/26

 TIN-based Real-time Integrated Basin Simulator (tRIBS)
 Peacheater Creek Watershed: a 64 km² area located in the northeastern corner of Oklahoma.

Data Sets for Numerical Experiments

8/26

 Sorting the data enhances sparsity in the frequency domain.
 Investigating sorting methods is not the purpose of this project.
 A good method is coarse-grained monotonic ordering.


Coarse-Grained Monotonic Ordering

9/26

 The dependence of the results on the nature of the field, and whether the data can be well sorted under all conditions, are out of the scope of this project.
 We simply assume that the values are sorted exactly.
 Exact ordering is, however, not necessary for sparsity in the frequency domain.

Ref: Wu, X., Wu, Y., Liu, M., & Zheng, L. (2011). In-Situ Soil Moisture Sensing: Efficient Random Sensor Placement and Field Estimation using Compressive Sensing. Paper presented at the 7th International Conference on Wireless Communications, Networking and Mobile Computing, Wuhan, China.

(Figure panels: exact ordering vs. coarse-grained monotonic ordering)

10/26

Coarse-Grained Monotonic Ordering

Different Approximations for Solving CS Problem

11/26

 Simple ℓ1 norm approximation:

β̂ = arg min ‖β‖₁ subject to z = ΦΨβ

 Weighted ℓ1 norm approximation
 FOCUSS algorithm
 Orthogonal Matching Pursuit (OMP) algorithm

Weighted ℓ1 Norm Approximation

 The simple ℓ1 norm is not a good choice in some examples:

x = [0 1 0]ᵀ, Φ = [2 1 1; 1 1 2], then z = Φx = [1 1]ᵀ

 Solution with the ℓ1 norm approximation: x̂ = [1/3 0 1/3]ᵀ
 Weighted ℓ1 norm approximation:

x̂ = arg min ‖Wx‖₁ subject to z = Φx
w_i = 1/|x_i| if x_i ≠ 0, and w_i = ∞ if x_i = 0

12/26
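The toy example above can be reproduced numerically. The finite weights [3, 1, 3] below stand in for the ideal weights on the slide (1/|x_i| on the support, ∞ on the zeros), which cannot be used directly in a linear program; the helper `weighted_l1` is an illustrative name:

```python
import numpy as np
from scipy.optimize import linprog

# Toy example from the slide: plain l1 minimization misses the true support
Phi = np.array([[2.0, 1.0, 1.0],
                [1.0, 1.0, 2.0]])
x_true = np.array([0.0, 1.0, 0.0])
z = Phi @ x_true                                   # = [1, 1]

def weighted_l1(Phi, z, w):
    """min ||diag(w) x||_1 s.t. Phi x = z, written as an LP over [x; t]."""
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), w])
    A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([Phi, np.zeros((M, N))]), b_eq=z,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]

x_l1 = weighted_l1(Phi, z, np.ones(3))                # plain l1 -> [1/3, 0, 1/3]
x_w = weighted_l1(Phi, z, np.array([3.0, 1.0, 3.0]))  # large weights off-support
print("l1      :", np.round(x_l1, 4))
print("weighted:", np.round(x_w, 4))
```

The plain ℓ1 solution has a smaller ℓ1 norm (2/3) than the true signal (1), so the relaxation picks the wrong support; the weights repair this.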

Weighted ℓ1 Norm Approximation

 The weighting matrix depends on the solution.
 Iterative method:

1. Set w_i^(0) = 1, for i = 1, …, n.

2. Solve the weighted ℓ1 minimization problem: y^(l) = arg min ‖W^(l) y‖₁ subject to z = Φy.

3. Update the weights: w_i^(l+1) = 1 / (|y_i^(l)| + ε).

4. Terminate on convergence or when l reaches a specified maximum. Otherwise, increment l and go to step 2.

 The value of ε in step 3 should be chosen slightly smaller than the expected nonzero magnitudes of y.

13/26
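The four steps above can be sketched as follows, on a toy problem with invented dimensions; `solve_weighted_l1` is the usual ℓ1 linear program with per-entry weights in the objective:

```python
import numpy as np
from scipy.optimize import linprog

def solve_weighted_l1(A, z, w):
    # min ||diag(w) y||_1  s.t.  A y = z, as an LP over the variables [y; t]
    M, N = A.shape
    c = np.concatenate([np.zeros(N), w])
    A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([A, np.zeros((M, N))]), b_eq=z,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]

def reweighted_l1(A, z, eps=0.1, max_iter=5):
    """Steps 1-4: unit weights first, then w_i = 1 / (|y_i| + eps)."""
    w = np.ones(A.shape[1])                        # step 1
    for _ in range(max_iter):
        y = solve_weighted_l1(A, z, w)             # step 2
        w = 1.0 / (np.abs(y) + eps)                # step 3 (eps avoids division by 0)
    return y                                       # step 4: fixed iteration budget

rng = np.random.default_rng(3)
M, N, K = 20, 40, 3
y_true = np.zeros(N)
y_true[rng.choice(N, K, replace=False)] = rng.normal(0, 2, K)
A = rng.normal(0, 1, (M, N))
y_hat = reweighted_l1(A, A @ y_true)
print("max error:", np.abs(y_hat - y_true).max())
```

The ε offset in step 3 plays the role described on the slide: it keeps the weights finite where an iterate happens to be zero.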

FOCUSS Algorithm

14/26

 Using the ℓ2 norm:

ŷ = arg min ‖y‖₂ subject to z = Φy

 This problem has the unique solution ŷ = Φ⁺z, where Φ⁺ denotes the Moore-Penrose pseudoinverse.
 This solution is not appropriate for sparse signals.
 Weighted optimization can improve the results for sparse signals.

FOCUSS Algorithm

15/26

 FOcal Underdetermined System Solver (FOCUSS):

ŷ = W · arg min ‖r‖₂ subject to z = ΦWr

 An iterative algorithm:

1. For initialization, set y^(0) = Φ⁺z.

2. Compute the weighting matrix: W_k = diag(y^(k−1)).

3. Compute y^(k): y^(k) = W_k (ΦW_k)⁺ z.

4. Increment k and repeat steps 2 and 3 until convergence occurs.
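A sketch of the iteration; the dimensions, seed, and convergence tolerance are illustrative. Note that the initial pseudoinverse solution is exactly the dense minimum ℓ2-norm estimate from the previous slide, which the reweighting then sparsifies:

```python
import numpy as np

def focuss(Phi, z, n_iter=30, tol=1e-10):
    """FOCUSS sketch: y_0 = pinv(Phi) z, then W_k = diag(y_{k-1}),
    y_k = W_k pinv(Phi W_k) z, repeated until convergence."""
    y = np.linalg.pinv(Phi) @ z                  # step 1: minimum l2-norm solution
    for _ in range(n_iter):
        W = np.diag(y)                           # step 2: weighting matrix
        y_new = W @ np.linalg.pinv(Phi @ W) @ z  # step 3
        if np.linalg.norm(y_new - y) < tol:      # step 4: stop on convergence
            return y_new
        y = y_new
    return y

rng = np.random.default_rng(4)
M, N = 15, 40
y_true = np.zeros(N)
y_true[[3, 17, 30]] = [2.0, -1.0, 0.5]
Phi = rng.normal(0, 1, (M, N))
z = Phi @ y_true

y_l2 = np.linalg.pinv(Phi) @ z                   # dense: "not proper for sparse signals"
y_fc = focuss(Phi, z)
print("entries above 1e-3 -- l2:", int(np.sum(np.abs(y_l2) > 1e-3)),
      " FOCUSS:", int(np.sum(np.abs(y_fc) > 1e-3)))
```

The minimum-norm solution spreads energy over nearly all entries, while the FOCUSS iterate collapses onto a sparse support.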

Orthogonal Matching Pursuit (OMP) Algorithm

16/26

 OMP is a greedy algorithm.
 OMP is iterative: at each iteration, the column of the sensing matrix that is most strongly correlated with the current residual is chosen; its contribution is then subtracted off, and the next iteration works on the residual.
 If the signal is K-sparse, the algorithm recovers it properly after K iterations.


Orthogonal Matching Pursuit (OMP) Algorithm

17/26

1. Initialize the residual r₀ = z, the index set Λ₀ = ∅, the matrix of chosen atoms Φ₀ = [ ], and the iteration counter t = 1.

2. Find the index λ_t by solving the simple optimization problem: λ_t = arg max_{j = 1, …, N} |⟨r_{t−1}, φ_j⟩|.

3. Augment the index set and the matrix of chosen atoms: Λ_t = Λ_{t−1} ∪ {λ_t}, Φ_t = [Φ_{t−1} φ_{λ_t}].

4. Solve a least squares problem to obtain a new signal estimate: x_t = arg min_x ‖z − Φ_t x‖₂.

5. Calculate the new approximation of the data and the new residual: a_t = Φ_t x_t, r_t = z − a_t.

6. Increment t and return to step 2 if t < K.

7. The estimate ŷ has nonzero entries at the indices listed in Λ_K; the value of ŷ at index λ_j equals the jth component of x_K.
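The steps above translate almost directly into code. A sketch with an invented test problem; the columns are normalized so the inner products in step 2 behave like correlations:

```python
import numpy as np

def omp(Phi, z, K):
    """Orthogonal Matching Pursuit following steps 1-7 on the slide."""
    M, N = Phi.shape
    r = z.copy()                                   # step 1: residual = z,
    support = []                                   #         empty index set
    x = np.zeros(0)
    for _ in range(K):                             # step 6: K iterations
        j = int(np.argmax(np.abs(Phi.T @ r)))      # step 2: most correlated atom
        support.append(j)                          # step 3: augment the index set
        Phi_t = Phi[:, support]
        x, *_ = np.linalg.lstsq(Phi_t, z, rcond=None)  # step 4: least squares
        r = z - Phi_t @ x                          # step 5: new residual
    y = np.zeros(N)                                # step 7: embed the coefficients
    y[support] = x
    return y

rng = np.random.default_rng(5)
M, N, K = 32, 64, 3
y_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
y_true[idx] = rng.choice([-1.0, 1.0], K) * rng.uniform(0.5, 1.5, K)
Phi = rng.normal(0, 1, (M, N))
Phi /= np.linalg.norm(Phi, axis=0)                 # unit-norm columns (atoms)
y_hat = omp(Phi, Phi @ y_true, K)
print("max error:", np.abs(y_hat - y_true).max())
```

Because the least squares step re-fits all chosen atoms, the residual stays orthogonal to the selected columns, so no atom is picked twice.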

Comparison of Different Algorithms

18/26

 Comparison criteria:
   RMSE:

RMSE = √( (1/N) Σᵢ (ŷᵢ − yᵢ)² )

   Recovery percent: the ratio of the number of perfectly recovered values to the total number of values.
   A value is considered perfectly recovered if the error between the estimated and the real value is below 1%.
   Computational time
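The two numeric criteria can be sketched as follows. The slides do not state whether the 1% threshold is relative or absolute, so a relative tolerance is assumed here:

```python
import numpy as np

def rmse(y_est, y_real):
    """Root-mean-square error between estimated and real values."""
    y_est, y_real = np.asarray(y_est), np.asarray(y_real)
    return float(np.sqrt(np.mean((y_est - y_real) ** 2)))

def recovery_percent(y_est, y_real, rel_tol=0.01):
    """Share of values recovered 'perfectly' (error below 1%, assumed relative)."""
    y_est, y_real = np.asarray(y_est), np.asarray(y_real)
    ok = np.abs(y_est - y_real) <= rel_tol * np.abs(y_real)
    return 100.0 * float(np.mean(ok))

y_real = np.array([30.0, 40.0, 50.0, 60.0])
y_est = np.array([30.1, 40.0, 55.0, 60.2])
print("RMSE:", rmse(y_est, y_real), " recovery %:", recovery_percent(y_est, y_real))
```

In this toy case three of the four values are within 1% of the truth, so the recovery percent is 75.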

Comparison of Different Algorithms

19/26

  • Estimated moisture data using the different approximation methods, with 200 randomly placed sensors:
  • (a) ℓ1 norm
  • (b) Weighted ℓ1 norm
  • (c) FOCUSS
  • (d) OMP

Comparison of Different Algorithms

20/26

(Figure: recovery percent and RMSE versus the number of sensors, 50–350, for ℓ1, weighted ℓ1, FOCUSS, and OMP)

  • FOCUSS is not a proper algorithm for this problem.
  • The main difference between the algorithms appears when only a few sensors are used.
  • The results depend on both the number and the locations of the sensors.
Comparison of Different Algorithms

21/26

Method              Computational time (s)
ℓ1 norm             76
Weighted ℓ1 norm    788
FOCUSS              74
OMP                 71

  • Computation time is not critical in many real applications.
  • It can nevertheless be important in some situations, especially in very large-scale fields.
  • In sum, OMP is the best method for most situations.

Sensor Placement

22/26

 Random sensor placement is not efficient when the number of sensors is not high enough.
 High variations cannot be estimated well with random sensor placement.
 One approach: divide the data into clusters and allocate sensors in proportion to the variance of each cluster.

Novel Sensor Placement Algorithm

23/26

1. Place the first sensor randomly. Set k = 1.

2. Solve the following optimization problem:

β̂ = arg min ‖β‖₁ subject to z = ΦΨβ
ŷ = Ψβ̂

3. Find the location with the worst estimation: j* = arg max_j |ŷ(j) − y(j)|.

4. Place the next sensor at location j*.

5. Increment k and go to step 2 if k < the number of sensors.
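A sketch of the greedy placement loop. It assumes design-time access to a reference field `y_field` (e.g. historical or simulated data such as the tRIBS output) so the estimation error in step 3 can be evaluated; all names and dimensions are illustrative:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

def l1_recover(A, z):
    # beta_hat = arg min ||beta||_1  s.t.  A beta = z  (LP over [beta; t])
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([A, np.zeros((M, N))]), b_eq=z,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]

def greedy_placement(y_field, n_sensors, Psi, rng):
    """Steps 1-5: start with one random sensor, then repeatedly estimate the
    field and put the next sensor where the estimation error is worst."""
    N = len(y_field)
    locs = [int(rng.integers(N))]                     # step 1
    while len(locs) < n_sensors:
        Phi = np.zeros((len(locs), N))
        Phi[np.arange(len(locs)), locs] = 1.0
        beta = l1_recover(Phi @ Psi, Phi @ y_field)   # step 2: CS estimate
        err = np.abs(Psi @ beta - y_field)            # step 3: per-location error
        err[locs] = -1.0                              # do not reuse occupied spots
        locs.append(int(np.argmax(err)))              # step 4: worst location
    return locs                                       # step 5 handled by the loop

rng = np.random.default_rng(6)
N = 64
Psi = idct(np.eye(N), norm="ortho", axis=0)           # IDCT basis
alpha = np.zeros(N)
alpha[:4] = [35.0, -6.0, 2.0, -0.8]
y_field = Psi @ alpha                                 # DCT-sparse reference field
locs = greedy_placement(y_field, 12, Psi, rng)
print("sensor locations:", sorted(locs))
```

Masking already-occupied locations in step 3 keeps the chosen locations distinct, which the slide's arg max leaves implicit.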

Comparison of the Results

24/26

  • Estimated moisture data using the different sensor placement methods, with 70 sensors:
  • (a) Random
  • (b) Clustered
  • (c) Novel approach

(Figure: real vs. estimated moisture percent over the numbered index, one panel per method)

Comparison of the Results

25/26

(Figure: recovery percent and RMSE error versus the number of sensors, 50–300, for random placement, clustered random placement, and the new approach)

  • The main advantage of the new approach appears when only a few sensors are available.
  • If a high number of sensors is used, random placement is efficient.

Conclusion and Future Works

26/26

 Approximations of the main ℓ0 norm optimization problem:
   Investigation of more algorithms, such as other greedy algorithms or ℓp norm (0 < p < 1) approximations.
   All comparisons are done under the assumptions of well-sorted data and noiseless measurements with random sensor placement.
   Dependency of the results on the natural specifications of the field.
   Considering the moisture values as 2-D data and applying 2-D CS to avoid sorting the data.
 Sensor placement:
   Using data from a period of time and including statistical properties in the sensor placement.
   Investigating for what duration of time the results remain valid.
   Trying to find mathematical proofs to justify the results.
