

SLIDE 1

Low Complexity Detection using Likelihood Based Tree Search for Large MIMO Systems

Saksham Agarwal (13603) Mentor - Dr. A. K. Chaturvedi November 9, 2016

SLIDE 2

Introduction

- 5G is the upcoming wireless standard
- Large MIMO systems form one of its crucial components
- The number of antennas is to be scaled up
- Detection for Large MIMO poses a challenging task
- The available simple detectors are infeasible:
  - Near-ML detectors like the Sphere Decoder have exponential complexity
  - MMSE performance degrades with increasing antenna size

SLIDE 3

Detection techniques for Large MIMO

Likelihood Ascent Search (LAS) & Reactive Tabu Search (RTS)

- Find solutions by searching the neighborhood of an initial guess (such as the MMSE solution)
- Polynomial complexity in the antenna size
- However, they face performance degradation and a complexity increase as the constellation size grows

QP-based detection

- Single-stage QP detector
- 2-stage and multi-stage QP detectors (used in Branch and Bound, to be discussed next)
- QP detectors have better performance than LAS & RTS, without degradation at high modulation orders
- Multi-stage QP detectors, however, can also have high complexity

SLIDE 4

System Model

Consider a MIMO system with $N_t$ transmit and $N_r$ receive antennas following the V-BLAST architecture. The MIMO channel model is given by

$$\tilde{y} = \tilde{H}\tilde{x} + \tilde{n}$$

where $\tilde{x} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_{N_t}]^T \in \mathbb{C}^{N_t \times 1}$ denotes the transmitted signal vector, $\tilde{y} = [\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_{N_r}]^T \in \mathbb{C}^{N_r \times 1}$ the received vector, $\tilde{H} \in \mathbb{C}^{N_r \times N_t}$ the channel gain matrix with i.i.d. $\mathcal{CN}(0,1)$ entries, and $\tilde{n} \in \mathbb{C}^{N_r \times 1}$ the complex AWGN vector with i.i.d. $\mathcal{CN}(0, \sigma^2)$ entries.
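As a quick sanity check of the model, here is a minimal pure-Python sketch that draws one channel realization and forms y = Hx + n. It uses a real-valued Gaussian channel and a 4-PAM per-dimension alphabet as stand-ins for the complex CN(0,1) model; the helper names `simulate_mimo` and `matvec` are illustrative, not from the slides.

```python
import random

def matvec(H, x):
    """y_i = sum_j H[i][j] * x[j] for a plain-list matrix."""
    return [sum(h * xj for h, xj in zip(row, x)) for row in H]

def simulate_mimo(Nt=4, Nr=4, sigma=0.1, seed=0):
    """One realization of y = Hx + n: a real-valued stand-in for the
    complex model on the slide (real N(0,1) channel gains, 4-PAM
    symbols in place of one dimension of 16-QAM)."""
    rng = random.Random(seed)
    # Channel gains ~ N(0, 1)
    H = [[rng.gauss(0.0, 1.0) for _ in range(Nt)] for _ in range(Nr)]
    alphabet = [-3, -1, 1, 3]                  # per-dimension 16-QAM levels
    x = [rng.choice(alphabet) for _ in range(Nt)]
    n = [rng.gauss(0.0, sigma) for _ in range(Nr)]   # AWGN, variance sigma^2
    y = [yi + ni for yi, ni in zip(matvec(H, x), n)]
    return H, x, n, y
```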

SLIDE 5

Problem Formulation

The ML solution is given by

$$\tilde{x}^* = \arg\min_{\tilde{x} \in \tilde{\mathcal{X}}^{N_t}} \|\tilde{y} - \tilde{H}\tilde{x}\|_2^2$$

where $\tilde{\mathcal{X}}^{N_t}$ denotes the set of all possible transmit vectors. The problem is converted to the equivalent real-valued model

$$y = Hx + n$$

where $y = [\Re(\tilde{y})\; \Im(\tilde{y})]^T$, $x = [\Re(\tilde{x})\; \Im(\tilde{x})]^T$, $n = [\Re(\tilde{n})\; \Im(\tilde{n})]^T$ and

$$H = \begin{bmatrix} \Re(\tilde{H}) & -\Im(\tilde{H}) \\ \Im(\tilde{H}) & \Re(\tilde{H}) \end{bmatrix}.$$

Here $x \in \mathbb{R}^{2N_t}$ with $[x_1, x_2, \ldots, x_{N_t}]$ equal to the real part and $[x_{N_t+1}, x_{N_t+2}, \ldots, x_{2N_t}]$ to the imaginary part of $\tilde{x}$, and similarly for $y$, $n$ and $H$.
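The real-valued stacking can be sketched directly; `complex_to_real_model` (a hypothetical helper name) builds H = [Re(H) -Im(H); Im(H) Re(H)] and the stacked y:

```python
def complex_to_real_model(H_c, y_c):
    """Build the real-valued model from complex H and y:
    y = [Re(y); Im(y)],  H = [Re(H) -Im(H); Im(H) Re(H)]."""
    Nr, Nt = len(H_c), len(H_c[0])
    y = [v.real for v in y_c] + [v.imag for v in y_c]
    H = [[0.0] * (2 * Nt) for _ in range(2 * Nr)]
    for i in range(Nr):
        for j in range(Nt):
            H[i][j] = H_c[i][j].real
            H[i][j + Nt] = -H_c[i][j].imag
            H[i + Nr][j] = H_c[i][j].imag
            H[i + Nr][j + Nt] = H_c[i][j].real
    return H, y
```

For a 1x1 check, H = 1+2j maps to [[1, -2], [2, 1]], and multiplying the stacked [Re(x); Im(x)] reproduces the real and imaginary parts of the complex product.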

SLIDE 6

Problem Formulation (contd.)

The equivalent real-valued problem:

$$x^* = \arg\min_{x \in \mathcal{X}^{2N_t}} \|y - Hx\|_2^2$$

where $\mathcal{X} = \{-\sqrt{P}+1, \ldots, -1, 1, \ldots, \sqrt{P}-1\}$ and $P$ is the QAM constellation size. Substituting $z = \frac{x + (\sqrt{P}-1)\mathbf{1}}{2}$, we get

$$z^* = \arg\min_{z \in \Lambda^{2N_t}} \frac{1}{2} z^T Q z + b^T z$$

where $\Lambda = \{0, 1, 2, \ldots, \sqrt{P}-1\}$, $Q = H^T H$ is a symmetric positive semidefinite matrix and $b = -H^T\big(y + (\sqrt{P}-1)H\mathbf{1}\big)/2$.
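A small sketch, assuming plain Python lists for matrices, that forms Q and b as defined above and checks that the QP objective ranks candidate vectors exactly as the least-squares objective does (the helper names are illustrative):

```python
import itertools
import math

def mat_T_vec(H, v):
    """H^T v for a plain-list matrix H."""
    return [sum(H[i][j] * v[i] for i in range(len(H))) for j in range(len(H[0]))]

def qp_terms(H, y, P):
    """Q = H^T H and b = -H^T (y + (sqrt(P)-1) H 1) / 2."""
    n = len(H[0])
    Q = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(n)]
         for i in range(n)]
    H1 = [sum(row) for row in H]                        # H * 1
    c = [yi + (math.sqrt(P) - 1) * h for yi, h in zip(y, H1)]
    b = [-ci / 2 for ci in mat_T_vec(H, c)]
    return Q, b

def qp_objective(Q, b, z):
    n = len(z)
    quad = sum(z[i] * Q[i][j] * z[j] for i in range(n) for j in range(n))
    return 0.5 * quad + sum(bi * zi for bi, zi in zip(b, z))

def ls_objective(H, y, z, P):
    """||y - Hx||^2 with x = 2z - (sqrt(P)-1) 1."""
    x = [2 * zi - (math.sqrt(P) - 1) for zi in z]
    r = [yi - sum(h * xj for h, xj in zip(row, x)) for yi, row in zip(y, H)]
    return sum(ri * ri for ri in r)

# The two objectives differ only by a constant and a positive scale
# factor, so they share the same arg min over candidate z vectors.
H, y, P = [[1.0, 0.0], [0.0, 1.0]], [1.0, 3.0], 4
Q, b = qp_terms(H, y, P)
cands = [list(t) for t in itertools.product([0, 1], repeat=2)]
z_qp = min(cands, key=lambda z: qp_objective(Q, b, z))
z_ls = min(cands, key=lambda z: ls_objective(H, y, z, P))
```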

SLIDE 7

Problem Relaxation

Consider the following relaxed problem:

$$\arg\min_{z} \; \frac{1}{2} z^T Q z + b^T z \quad \text{subject to} \quad 0 \le z \le (\sqrt{P} - 1)\mathbf{1}$$

- The relaxed problem is a convex QP
- It can be solved using iterative techniques like the interior-point method
- Why interior point? It solves the QP in an almost constant number of iterations in practice, independent of the problem dimension!
- Thus the complexity does not grow exponentially with the antenna size
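The slides use an interior-point solver; as a dependency-free stand-in, the box-constrained relaxed QP can be sketched with projected gradient descent. The step-size rule 1/trace(Q) is a conservative choice (valid since trace(Q) >= lambda_max for a PSD matrix), and the function name is illustrative:

```python
def solve_relaxed_qp(Q, b, ub, iters=500):
    """Minimize 0.5 z^T Q z + b^T z subject to 0 <= z <= ub, by
    projected gradient descent (a simple stand-in for the
    interior-point solver named on the slide)."""
    n = len(b)
    # trace(Q) >= lambda_max for PSD Q, so 1/trace is a safe step size
    step = 1.0 / sum(Q[i][i] for i in range(n))
    z = [0.5 * ub] * n                         # start at the box center
    for _ in range(iters):
        g = [sum(Q[i][j] * z[j] for j in range(n)) + b[i] for i in range(n)]
        # gradient step, then project back onto the box [0, ub]
        z = [min(max(zi - step * gi, 0.0), ub) for zi, gi in zip(z, g)]
    return z
```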

SLIDE 8

Solution Strategy

Why problem relaxation?

- The unrelaxed problem is a generic Integer Programming problem
- Finding the exact solution is NP-hard!
- We can only generate approximations in polynomial time

A simple approximation: use $\hat{z} = R(z^*)$, where $R$ denotes the rounding function and returns the nearest integer vector to the optimal solution $z^*$ of the relaxed problem.

- Used in the one-stage QP detector
- The optimal integer solution need not always be the nearest one (hard to imagine, but true!)
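That the optimal integer point need not be the nearest one can be demonstrated numerically. This sketch fixes a strongly correlated Q, picks a hypothetical relaxed optimum z* (illustrative values, not from the slides), and brute-forces the true integer minimizer:

```python
import itertools

def qp_obj(Q, b, z):
    """0.5 z^T Q z + b^T z."""
    n = len(z)
    quad = sum(z[i] * Q[i][j] * z[j] for i in range(n) for j in range(n))
    return 0.5 * quad + sum(bi * zi for bi, zi in zip(b, z))

# Strongly correlated (but PSD) Q: the ellipse's long axis is diagonal.
Q = [[1.0, -0.9], [-0.9, 1.0]]
z_star = [1.6, 1.4]                   # hypothetical relaxed optimum
# Choose b so that the gradient Qz + b vanishes exactly at z_star.
b = [-(Q[i][0] * z_star[0] + Q[i][1] * z_star[1]) for i in range(2)]

z_round = [round(v) for v in z_star]  # one-stage QP answer: (2, 1)
# Brute-force the true integer minimizer over {0,1,2,3}^2.
z_best = min((list(z) for z in itertools.product(range(4), repeat=2)),
             key=lambda z: qp_obj(Q, b, z))
# qp_obj(Q, b, z_best) < qp_obj(Q, b, z_round): rounding is suboptimal here.
```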

How do we improve?

SLIDE 9

Branch and Bound Algorithm

- A standard IP algorithm that provides the optimal result
- Exponential in complexity
- Solves the problem using a divide-and-conquer strategy

- Recursively solves the relaxed problem on a reduced search space
- Checks whether an integer vector from the reduced search space can produce an optimal solution
- If yes, repeats the process after fixing integer-valued bounds on one entry of the vector, further reducing the search space

Let's see an example!
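The divide-and-conquer recursion can be sketched as follows, under two assumptions not from the slides: a projected-gradient routine stands in for the per-node interior-point solve, and branching simply splits the first still-free coordinate around the relaxed solution.

```python
import math

def pgd_qp(Q, b, lb, ub, iters=400):
    """Approximate min of 0.5 z'Qz + b'z over lb <= z <= ub by projected
    gradient descent (stand-in for the interior-point solver)."""
    n = len(b)
    step = 1.0 / sum(Q[i][i] for i in range(n))   # safe: trace >= lambda_max
    z = [(l + u) / 2.0 for l, u in zip(lb, ub)]
    for _ in range(iters):
        g = [sum(Q[i][j] * z[j] for j in range(n)) + b[i] for i in range(n)]
        z = [min(max(zi - step * gi, l), u) for zi, gi, l, u in zip(z, g, lb, ub)]
    return z

def obj(Q, b, z):
    n = len(z)
    return 0.5 * sum(z[i] * Q[i][j] * z[j] for i in range(n) for j in range(n)) \
        + sum(b[i] * z[i] for i in range(n))

def branch_and_bound(Q, b, levels):
    """Exact integer minimizer of the QP over {0, ..., levels-1}^n."""
    n = len(b)
    best_z, best_f = None, float('inf')
    stack = [([0] * n, [levels - 1] * n)]          # nodes = integer boxes
    while stack:
        lb, ub = stack.pop()
        z = pgd_qp(Q, b, [float(v) for v in lb], [float(v) for v in ub])
        if obj(Q, b, z) >= best_f - 1e-9:
            continue                                # bound: prune this subtree
        # Rounding the relaxed solution gives a feasible incumbent candidate
        z_int = [min(max(int(round(v)), l), u) for v, l, u in zip(z, lb, ub)]
        f_int = obj(Q, b, z_int)
        if f_int < best_f:
            best_f, best_z = f_int, z_int
        free = [i for i in range(n) if lb[i] < ub[i]]
        if not free:
            continue                                # single point: fully solved
        # Branch: split the box on a still-free coordinate near z[j]
        j = free[0]
        t = min(max(int(math.floor(z[j])), lb[j]), ub[j] - 1)
        down_ub = ub[:]; down_ub[j] = t
        up_lb = lb[:]; up_lb[j] = t + 1
        stack.append((lb, down_ub))
        stack.append((up_lb, ub))
    return best_z, best_f
```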

SLIDE 10

Branch and Bound (contd.)

Figure: Progress of Branch and Bound

Image credits - https://optimization.mccormick.northwestern.edu/

SLIDE 11

Practical Implementation

- Many issues to keep in mind: branching, node selection, pruning and heuristics
- In the worst case, Branch and Bound effectively scans every integer vector! Exponential complexity
- Although it produces the ML result, it is infeasible to implement
- Can we approximate?... Yes!

- The deeper we search in the tree, the closer to the solution we get
- Limiting the maximum depth lets us settle for a bounded complexity and a desired accuracy
- Used in Controlled Branch and Bound¹

But can we do better?

¹ Elghariani, Ali, and Michael Zoltowski. "Low Complexity Detection Algorithms in Large-Scale MIMO Systems." IEEE Transactions on Wireless Communications 15.3 (2016): 1689-1702.

SLIDE 12

Scope for Improvement

- The branching strategy in Controlled BnB is "dumb" - it is independent of the problem at hand
- It explores many nodes in the search tree which (usually) do not return potential solutions
- Can we use the problem structure to branch intelligently? Yes! Here's what we proposed

SLIDE 13

Error Likelihood Metric

The following quantity²

$$\eta_i = \frac{(y - Hx)^T H_i}{\|H_i\|}$$

indicates how likely the value of the $i$th bit of the detected vector is to be in error. Here, $H_i$ denotes the $i$th column of the channel gain matrix $H$. We call the bit with index $j = \arg\max_i \eta_i$ the error bit, as it has the highest likelihood of being in error.

² A. K. Sah and A. Chaturvedi, "Reduced neighborhood search algorithms for low complexity detection in MIMO systems," in 2015 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, IEEE, 2015.
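The metric transcribes directly into code, assuming plain-list matrices (helper names illustrative):

```python
import math

def error_likelihood(H, y, x):
    """eta_i = (y - Hx)^T H_i / ||H_i||: how strongly the residual
    correlates with column i of H, i.e. how likely bit i is in error."""
    r = [yi - sum(h * xj for h, xj in zip(row, x)) for yi, row in zip(y, H)]
    etas = []
    for i in range(len(H[0])):
        col = [row[i] for row in H]
        norm = math.sqrt(sum(c * c for c in col))
        etas.append(sum(ri * ci for ri, ci in zip(r, col)) / norm)
    return etas

def error_bit(etas):
    """j = argmax_i eta_i, the index most likely to be in error."""
    return max(range(len(etas)), key=lambda i: etas[i])
```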

SLIDE 14

Likelihood Based Tree Search

- While branching in the search tree, we make use of the error likelihood metric
- It incorporates the structure of the detection problem into the search process
- We branch upon the bit value which is most likely to be in error
- Hence, our reduced search space does not contain the erroneous bit values, giving an efficient tree search

SLIDE 15

Likelihood Based Tree Search (contd.)

Hence, if the error index is $j$, the newly created nodes have the following reduced search spaces:

node1: $\text{node.lb}_i \le z_i \le \text{node.ub}_i$ if $i \ne j$, and $\text{node.lb}_j \le z_j \le \lfloor z_j \rfloor$ if $i = j$

node2: $\text{node.lb}_i \le z_i \le \text{node.ub}_i$ if $i \ne j$, and $\lceil z_j \rceil \le z_j \le \text{node.ub}_j$ if $i = j$

Here node denotes the currently selected problem instance to solve, and $\text{lb}_i$ and $\text{ub}_i$ are the respective lower and upper bounds on the $i$th index of the solution vector node.zval. Therefore, the reduced search space does not include the error bit value.
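The two reduced search spaces can be sketched as a small node-splitting helper (the name `branch_on_error_bit` is illustrative):

```python
import math

def branch_on_error_bit(lb, ub, z, j):
    """Create the two child nodes: bounds are unchanged for i != j, while
    coordinate j is restricted below floor(z_j) in node1 and above
    ceil(z_j) in node2, excluding the suspect fractional value."""
    ub1 = ub[:]; ub1[j] = math.floor(z[j])     # node1: lb_j <= z_j <= floor(z_j)
    lb2 = lb[:]; lb2[j] = math.ceil(z[j])      # node2: ceil(z_j) <= z_j <= ub_j
    return (lb[:], ub1), (lb2, ub[:])
```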

SLIDE 16

Likelihood Based Tree Search (contd.)

Since our branching strategy is more reliable, we can be more confident that the branched node contains the optimal solution. How can we reduce the complexity based upon this advantage?

SLIDE 17

Pruning Parameters

depth

- provides a check on the maximum tree size
- if our branching decision is correct, we need not explore the tree at much greater depths
- as we'll see in the simulation results, our intuition did turn out to be true!

breadth

- the error bit discussed before need not always be the one actually in error
- what could be the next most likely bit in error?
- breadth allows us to use that many error bits
- it creates that many sets of nodes, each with its corresponding error bit

We create breadth sets of new nodes, each with its corresponding error bit, as follows: sort $\eta = \{\eta_i : i = 1, 2, \ldots, 2N_t\}$ in ascending order into $\eta_{\text{sort}}$, then set $\text{ErrorBit}_j = \eta_{\text{sort}}[2N_t - j + 1]$ for $j = 1$ to breadth.
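Selecting the breadth largest metric values can be sketched as (helper name illustrative; a descending sort of indices replaces the slide's ascending sort plus back-indexing, which is equivalent):

```python
def top_error_bits(etas, breadth):
    """Indices of the `breadth` largest error-likelihood values, i.e.
    ErrorBit_j = index of eta_sort[2Nt - j + 1] for j = 1..breadth."""
    order = sorted(range(len(etas)), key=lambda i: etas[i], reverse=True)
    return order[:breadth]
```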

SLIDE 18

Pruning Parameters (contd.)

keep

- the main aim of this parameter is to limit the number of nodes explored, and hence the complexity
- the idea: at every iteration, when we select a node to branch upon, we retain only the keep best nodes and prune the rest
- this should work if the tree search is efficient - simulations show positive results

SLIDE 19

Algorithm

1. The node with the least objective function value is defined as the best node
2. Prune all nodes other than the best keep nodes
3. Branch upon the best node and create breadth sets of new nodes
4. Prune nodes that have reached a tree height greater than depth
5. Repeat the process until there is no new node left to explore
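The steps above can be combined into one sketch of the full search, under the same stand-in assumptions as the earlier blocks: projected gradient descent replaces the interior-point QP solver, and the absolute value of eta is used for ranking error bits (a practical choice; the slide's metric is written signed). All helper names are illustrative.

```python
import math

def lts_detect(H, y, P, depth=3, breadth=3, keep=1, iters=400):
    """Likelihood-based tree search sketch: keep best nodes, branch the
    best one on the `breadth` most likely error bits, stop at `depth`."""
    m, n = len(H), len(H[0])
    s = math.sqrt(P) - 1.0
    # z-domain QP of the earlier slides: Q = H^T H, b = -H^T (y + s H 1) / 2
    Q = [[sum(H[k][i] * H[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    c = [y[k] + s * sum(H[k]) for k in range(m)]
    b = [-0.5 * sum(H[k][j] * c[k] for k in range(m)) for j in range(n)]

    def fobj(z):
        q = sum(z[i] * Q[i][j] * z[j] for i in range(n) for j in range(n))
        return 0.5 * q + sum(b[i] * z[i] for i in range(n))

    def relax(lb, ub):                          # box-constrained QP solve
        step = 1.0 / sum(Q[i][i] for i in range(n))
        z = [(l + u) / 2.0 for l, u in zip(lb, ub)]
        for _ in range(iters):
            g = [sum(Q[i][j] * z[j] for j in range(n)) + b[i] for i in range(n)]
            z = [min(max(zi - step * gi, l), u)
                 for zi, gi, l, u in zip(z, g, lb, ub)]
        return z

    def etas(z):                                # error likelihood per bit
        x = [2.0 * zi - s for zi in z]          # back to the symbol domain
        r = [y[k] - sum(H[k][j] * x[j] for j in range(n)) for k in range(m)]
        out = []
        for i in range(n):
            col = [H[k][i] for k in range(m)]
            out.append(abs(sum(rk * ck for rk, ck in zip(r, col)))
                       / math.sqrt(sum(ck * ck for ck in col)))
        return out

    z0 = relax([0.0] * n, [s] * n)
    live = [(fobj(z0), 0, [0.0] * n, [s] * n, z0)]
    best_z, best_f = None, float('inf')
    while live:
        live.sort(key=lambda nd: nd[0])
        live = live[:keep]                      # keep-pruning: best nodes only
        _, d, lb, ub, z = live.pop(0)           # branch upon the best node
        z_int = [min(max(round(zi), l), u) for zi, l, u in zip(z, lb, ub)]
        if fobj(z_int) < best_f:
            best_f, best_z = fobj(z_int), z_int
        if d >= depth:
            continue                            # depth-pruning
        e = etas(z)
        err_bits = sorted(range(n), key=lambda i: e[i], reverse=True)[:breadth]
        for j in err_bits:                      # breadth sets of two nodes each
            ub1 = ub[:]; ub1[j] = math.floor(z[j])
            lb2 = lb[:]; lb2[j] = math.ceil(z[j])
            for nlb, nub in ((lb[:], ub1), (lb2, ub[:])):
                if all(l <= u for l, u in zip(nlb, nub)):
                    zr = relax(nlb, nub)
                    live.append((fobj(zr), d + 1, nlb, nub, zr))
    return [2.0 * zi - s for zi in best_z]      # symbol-domain estimate
```

With keep = 1, exactly one node is branched per level, in line with the node count discussed on the complexity slide.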

SLIDE 20

Complexity Analysis

- The number of iterations of the interior-point QP solver is independent of the problem dimension (the antenna size)³
- Each solver iteration has O(n³) complexity
- Hence, the complexity grows only polynomially with the antenna size
- The computations required for solving each node remain roughly the same throughout the tree
- Hence, the complexity mainly depends on the number of nodes explored
- For keep = 1, the number of nodes explored = b × d

³ Gondzio, Jacek. "Interior point methods 25 years later." European Journal of Operational Research 218.3 (2012): 587-601.
SLIDE 21

Simulation Results I


Figure: Bit Error Rate and Number of Computations for a 32 × 32 16-QAM system, for the Likelihood Tree Search (LTS(d, b, k)), controlled BnB(L, M) and 2-stage QP algorithms, shown against the average transmit SNR (Eb/N0).

SLIDE 22

Simulation Results II


Figure: Bit Error Rate and Number of Computations for a 64 × 64 16-QAM system.

SLIDE 23

Simulation Results III


Figure: Bit Error Rate and Number of Computations for a 32 × 32 64-QAM system.

SLIDE 24

Simulation Results IV


Figure: The BER performance starts to saturate upon increasing the depth value while keeping breadth and keep constant, for a 32 × 32 16-QAM system with b = 3 and k = 1.

SLIDE 25

Simulation Results V


Figure: The BER performance starts to saturate upon increasing the breadth value while keeping depth and keep constant, for a 32 × 32 16-QAM system with d = 3 and k = 1.

SLIDE 26

Simulation Results VI


Figure: The BER performance is independent of the keep value. The system shown is a 32 × 32 16-QAM system with d = 3 and b = 3.

SLIDE 27

Conclusions

- Provided a new tree-based search algorithm which improves upon the controlled Branch and Bound algorithm
- Showed that incorporating problem structure into the branching decisions can greatly enhance the tree search process
- Achieved better performance at a similar complexity
- The introduced parameters can be tuned to reduce complexity or improve performance

SLIDE 28

Future Work

- Find a metric which can assist the user in choosing optimal parameter values, given a margin of desired performance and complexity.
- Investigate whether our algorithm can achieve ML or near-ML results in a limiting-case scenario.