SLIDE 1

Threshold Behaviors in Advice Complexity

Knapsack with Removability

Hans-Joachim Böckenhauer, Jan Dreier, Fabian Frei, Peter Rossmanith
August 28, 2020 – Virtual satellite workshop of MFCS 2020

SLIDE 2

Online Problems and Advice

Brief Recapitulation

SLIDE 3

Recapitulation

Online Problems

  • Instance revealed piecewise
  • Solution required piecewise
  • Algorithm outputs solution parts without full information
SLIDE 5

Recapitulation

Online Problems: Very Hard

  • Online-ness is a severe restriction
  • In exchange: Unbounded time and space resources
  • Thus no time/space complexity analysis

Assessing Online Algorithms

  • Compare online solution to an optimal solution
  • That is: Can online algorithm compete with offline one?
  • Next slide: Formal definition of competitivity
SLIDE 8

Recapitulation

Defining Competitivity (Strict and for Maximization Problems)

  • Competitivity of an online algorithm A on an instance I:

    (Gain of an optimal solution to I) / (Gain of A's online solution to I)

  • Competitivity of an online algorithm A: its worst case, i.e.,

    max over all instances I of (Gain of an optimal solution to I) / (Gain of A's online solution to I)

  • Competitivity of a problem: its best online algorithm, i.e.,

    min over all online algorithms A of max over all instances I of (Gain of an optimal solution to I) / (Gain of A's online solution to I)
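The min-max structure of these definitions can be made concrete in a few lines. The gain values below are hypothetical numbers chosen purely for illustration; only the max/min structure mirrors the definitions above.

```python
# Competitivity of an algorithm: worst-case ratio opt/alg over instances.
# Competitivity of a problem: best algorithm's worst case (min of max).

def competitivity_of_algorithm(alg_gain, opt_gain, instances):
    return max(opt_gain[i] / alg_gain[i] for i in instances)

def competitivity_of_problem(algorithms, opt_gain, instances):
    return min(competitivity_of_algorithm(g, opt_gain, instances)
               for g in algorithms)

instances = ["I1", "I2"]
opt = {"I1": 4.0, "I2": 6.0}     # gain of an optimal solution (made up)
alg_a = {"I1": 4.0, "I2": 3.0}   # algorithm A: ratio 2 on I2
alg_b = {"I1": 2.0, "I2": 6.0}   # algorithm B: ratio 2 on I1

print(competitivity_of_algorithm(alg_a, opt, instances))         # 2.0
print(competitivity_of_problem([alg_a, alg_b], opt, instances))  # 2.0
```

Note how neither algorithm escapes ratio 2 here: the adversary picks the instance after the algorithm is fixed, which is exactly the max-inside-min order of the definition.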

SLIDE 9

So Much for Online Problems

Now Advice

SLIDE 11

Advice Complexity

Motivation

  • Online algorithm lacks information about future
  • This makes online problems very hard
  • But how much information is lacking exactly?

Measuring the lack of information

  • Size of instance?
  • Size of solution?
  • Need for a better, general measure
  • Established measuring tool: Advice
SLIDE 12

Advice Complexity

Advice model

  • Omniscient oracle provides online algorithm with advice
  • Advice has the form of an infinite bit string
  • One tailor-made advice string for each instance
  • Online algorithm reads as many bits as it wants
  • Number of bits read: Advice complexity
SLIDE 14

Online Computation

Online Algorithm

Malicious request sequence x1, x2, x3, x4, x5, x6, x7, . . .

Adversary

SLIDE 15

Advice Complexity

Online Algorithm

Malicious request sequence x1, x2, x3, x4, x5, x6, x7, . . .

Adversary

[Advice string: an infinite sequence of bits, read on demand]

Oracle

SLIDE 16

Advice Complexity

Trade-Off

  • Improve competitivity, but minimize advice
  • Remarkable behavior differences between problems

Common: Continuous. Every single additional bit improves the competitivity.

[Plot: competitive ratio falling continuously from n to 1 as the information content in bits grows]

Knapsack: Thresholds. Jumps at some thresholds, stagnating elsewhere.

[Plot: competitive ratio dropping from n to 2 at a single bit and to 1 at ≈ log n bits of information content]

SLIDE 17

Online Knapsack

Classical Version

SLIDE 19

Classical Online Knapsack

Problem Definition

  • Knapsack of capacity 1
  • Online Instance: Sequence of n items with sizes s1, . . . , sn
  • Online Output:

    – Pack or discard each item immediately
    – The decisions are permanent
    – Never exceed the capacity

  • Goal: Maximize packed volume

Remark

  • This is the proportional/simple/unweighted problem
  • That is: No size-value distinction for items
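A minimal sketch of an online algorithm for this model, a first-fit greedy, shows how permanent decisions hurt. This is just an illustration, not an algorithm from the talk.

```python
# First-fit greedy for classical online knapsack: pack whatever fits,
# discard everything else. Decisions are permanent, so an early item can
# irrevocably block a later, larger one.

def greedy_pack(sizes, capacity=1.0):
    packed, total = [], 0.0
    for s in sizes:
        if total + s <= capacity:   # fits: pack it, forever
            packed.append(s)
            total += s
        # else: discarded, forever
    return packed

# Adversarial order: after packing 0.5, the item 0.6 no longer fits,
# while the offline optimum would have packed 0.6 alone.
print(greedy_pack([0.5, 0.6]))  # [0.5]
```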
SLIDE 20

Knapsack with Removability

Model by Iwama and Taketomi, ICALP 2002

SLIDE 21

Knapsack with Removability

Definition of Classical Online Knapsack

  • Knapsack of capacity 1
  • Online Instance: Sequence of n items with sizes s1, . . . , sn
  • Online Output:

    – Pack or discard each item immediately
    – The decisions are permanent
    – Never exceed the capacity

  • Goal: Maximize packed volume
SLIDE 24

Knapsack with Removability

Definition of Knapsack with Removability

  • Knapsack of capacity 1
  • Online Instance: Sequence of n items with sizes s1, . . . , sn
  • Online Output:

    – Pack or discard each item immediately
    – Packed items can be removed
    – Never exceed the capacity

  • Goal: Maximize packed volume
SLIDE 25

Knapsack with Removability

Clarification on Removability

  • Any packed item can be removed from the knapsack
  • No restrictions. The algorithm may remove ...

    – ... arbitrarily many items ...
    – ... at arbitrary points in time

  • However, once an item is removed, it is gone for good
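To illustrate the model (this is not the Φ-competitive algorithm of Iwama and Taketomi), here is a naive strategy that evicts the largest packed items whenever that improves the packed volume. All names are mine, and item sizes are assumed to be at most the capacity.

```python
# Naive illustration of removability: if a new item does not fit, tentatively
# evict the largest packed items until it does, and keep whichever packing has
# the larger volume. Evicted items are gone for good, as in the model.
# Assumes every item size is at most the capacity.

def pack_with_removal(sizes, capacity=1.0):
    knapsack = []
    for s in sizes:
        if sum(knapsack) + s <= capacity:
            knapsack.append(s)
            continue
        candidate = sorted(knapsack)
        while candidate and sum(candidate) + s > capacity:
            candidate.pop()          # evict the largest remaining item
        if sum(candidate) + s > sum(knapsack):  # keep the better packing
            knapsack = candidate + [s]
    return knapsack

print(pack_with_removal([0.5, 0.6]))  # [0.6]: 0.5 is removed to admit 0.6
```

Contrast with the classical model: there, the 0.5 would have blocked the 0.6 forever.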
SLIDE 26

Knapsack with Removability

Natural Model

  • Example: Storage room
  • Keep useful things as they come along
  • Space will run out
  • Need to start disposing
  • Goal: Most useful collection
  • How much information about future needed?
SLIDE 28

Knapsack with Removability

Two Results Known So Far

  • Without advice, the competitivity is exactly the golden ratio Φ ≈ 1.618 [Iwama and Taketomi, ICALP 2002]
  • There is a 10/7-competitive algorithm using a single random bit [Han et al., TCS 2015]
  • Thus there is a 10/7-competitive algorithm using a single advice bit

SLIDE 29

Knapsack with Removability

Competitivity vs. Advice: Our Contribution

  • Optimality requires 1 advice bit per item asymptotically
  • Near optimality with constant advice
  • Improved bounds for one advice bit
SLIDE 33

A Single Advice Bit

Upper and Lower Bound

SLIDE 39

1 Advice Bit: Lower Bound

Hard Instance Family

         x1    x2    x3           y2       y3
  I1:    ψ     ψ²    1 − ψ² + ε
  I2:    ψ     ψ²    1 − ψ² + ε   1 − ψ²
  I3:    ψ     ψ²    1 − ψ² + ε            ψ² − ε

  where ψ ≈ 0.78 is the positive root of 2(1 − x²) = x

Competitive Analysis

         Unique Optimum    Second Best       Competitivity
  I1:    ψ                 ψ²                ψ/ψ²
  I2:    1                 2(1 − ψ²) + ε     1/(2(1 − ψ²) + ε)
  I3:    1                 ψ                 1/ψ

SLIDE 43

1 Advice Bit: Lower Bound

Competitive Analysis (simplified: by the choice of ψ, 2(1 − ψ²) = ψ)

         Unique Optimum    Second Best       Competitivity
  I1:    ψ                 ψ²                1/ψ
  I2:    1                 2(1 − ψ²) + ε     1/(ψ + ε)
  I3:    1                 ψ                 1/ψ

SLIDE 45

1 Advice Bit: Lower Bound

Competitive Analysis

         Unique Optimum    Second Best       Competitivity
  I1:    ψ                 ψ²                1/ψ
  I2:    1                 2(1 − ψ²) + ε     1/(ψ + ε)
  I3:    1                 ψ                 1/ψ

  Lower bound with one advice bit: 1/ψ ≈ 1.28, vs. the upper bound 10/7 ≈ 1.43 from the single random bit
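The constants can be checked numerically; the closed form for ψ below follows from rewriting 2(1 − x²) = x as 2x² + x − 2 = 0.

```python
# Numeric sanity check of the lower-bound constant: psi is the positive
# root of 2(1 - x^2) = x, i.e. of 2x^2 + x - 2 = 0.
from math import sqrt, isclose

psi = (-1 + sqrt(17)) / 4          # positive root of 2x^2 + x - 2 = 0
assert isclose(2 * (1 - psi**2), psi)

print(round(psi, 4))       # 0.7808
print(round(1 / psi, 4))   # 1.2808  (lower bound with one advice bit)
print(round(10 / 7, 4))    # 1.4286  (upper bound with one random bit)
```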

SLIDE 46

1 Advice Bit

Upper Bound

SLIDE 47

1 Advice Bit: Improved Upper Bound

High-Level Outline Only

  • Divide items into 5 size classes
  • Two different packing strategies
  • Use advice bit to indicate which one is better
  • Yields a √2-competitive algorithm (√2 ≈ 1.414)

[Size scale from 0 to 1, with the classes labeled tiny, small, medium, big, huge]

SLIDE 48

Near-Optimality

Using Constant Advice

SLIDE 49

Near-Optimality with Constant Advice

Algorithm Requirements

  • Takes parameter ε > 0
  • Is (1 + ε)-competitive
  • Uses only constant advice
SLIDE 61

Near-Optimality with Constant Advice

Algorithm Outline

  • Given ε > 0
  • Let E = log_{1−ε}(ε), so that (1 − ε)^E = ε
  • Divide items into E + 1 size classes with boundaries

    ε = (1 − ε)^E < (1 − ε)^(E−1) < (1 − ε)^(E−2) < · · · < (1 − ε)² < 1 − ε < 1

  • Items of size up to ε are small; the E classes above ε hold the big items
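For a concrete ε, the class boundaries can be computed directly. This sketch assumes E is rounded up to an integer, which the slides leave unspecified; the function names are mine.

```python
# Size classes for the constant-advice algorithm, for a sample eps.
# Rounding of E = log_{1-eps}(eps) to an integer is assumed (ceiling).
from math import ceil, log

def size_classes(eps):
    E = ceil(log(eps) / log(1 - eps))        # E = log_{1-eps}(eps), rounded up
    # Boundaries (1-eps)^E <= ... <= (1-eps) <= 1, listed small to large.
    return E, [(1 - eps) ** i for i in range(E, -1, -1)]

def item_class(size, eps):
    """Class index i of a big item: (1-eps)^(i+1) < size <= (1-eps)^i."""
    E, _ = size_classes(eps)
    for i in range(E):
        if size > (1 - eps) ** (i + 1):
            return i
    return E                                  # size <= (1-eps)^E ~ eps: small

eps = 0.1
E, bounds = size_classes(eps)
print(E)                       # 22 classes for big items when eps = 0.1
print(item_class(0.95, eps))   # 0: the largest class, items in (0.9, 1]
```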

SLIDE 62

Near-Optimality with Constant Advice

What does the advice encode?

  • Oracle fixes an arbitrary optimal solution S
  • Let u1, . . . , uk denote big items of S in appearance order
  • Let ci denote size class of ui
  • The advice encodes (c1, . . . , ck)
SLIDE 64

Near-Optimality with Constant Advice

Only constant advice

  • 2k⌈log₂(E + 1)⌉ bits suffice to encode (c1, . . . , ck)

    – There are E classes for big items
    – Thus one class is indicated by ⌈log₂(E + 1)⌉ bits
    – A self-delimiting encoding: 2⌈log₂(E + 1)⌉ bits per class

  • This is constant advice:

    – The number k of big items in S is bounded by 1/ε
    – Recall that E = log_{1−ε}(ε)
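One concrete scheme matching the roughly 2⌈log₂(E + 1)⌉ bits per class is bit doubling with a stop marker; the paper's actual encoding may differ, and this variant spends two extra bits on the terminator.

```python
# Self-delimiting encoding of the class sequence (c1, ..., ck): write each of
# the ceil(log2(E+1)) bits of a class index twice (0 -> "00", 1 -> "11") and
# terminate the whole string with "01". Doubled bits never form "01", so the
# reader knows where the advice ends without knowing k in advance.
from math import ceil, log2

def encode(classes, E):
    b = ceil(log2(E + 1))                     # bits per class index
    out = ""
    for c in classes:
        for bit in format(c, f"0{b}b"):
            out += bit * 2                    # 0 -> 00, 1 -> 11
    return out + "01"                         # stop marker

def decode(bits, E):
    b = ceil(log2(E + 1))
    classes, cur, i = [], "", 0
    while bits[i:i + 2] != "01":
        cur += bits[i]                        # both copies agree; keep one
        i += 2
        if len(cur) == b:
            classes.append(int(cur, 2))
            cur = ""
    return classes

adv = encode([2, 1, 3, 2], E=4)
print(decode(adv, E=4))   # [2, 1, 3, 2]
```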

SLIDE 66

Near-Optimality with Constant Advice

Algorithm Procedure

  • Big items: Packed into k virtual slots

    – Slot i accommodates items from class ci exclusively
    – In the beginning, the slots are empty
    – Each slot is filled with exactly one big item
    – The slots are filled strictly in their order
    – Items in filled slots are replaced by smaller ones if possible

  • Small items: Packed greedily

    – Removed one by one whenever needed to pack a big one
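Ignoring item sizes, the size-based replacement rule, and small items, the slot-filling rule can be sketched at the level of class labels; the function name and these simplifications are mine.

```python
# Class-level sketch of the slot-filling rule: slots are filled strictly in
# order, and slot i accepts only items from class c_i (given by the advice).
# Item sizes, replacement by smaller items, and small items are omitted.

def fill_slots(input_classes, advice):
    slots = [None] * len(advice)         # one virtual slot per big item of S
    for c in input_classes:
        for i, need in enumerate(advice):
            if slots[i] is None:
                if need == c:            # the next empty slot takes c_i only
                    slots[i] = c
                break                    # only the first empty slot is active
    return slots

# Class sequence of the execution example, with advice (2, 1, 3, 2):
inp = [1, 2, 1, 3, 2, 3, 2, 2, 1, 1, 3, 2]
print(fill_slots(inp, advice=(2, 1, 3, 2)))  # [2, 1, 3, 2]
```

On the class sequence of the execution example, the four slots end up holding items from classes 2, 1, 3, 2, matching the advice.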

SLIDE 67

Example Execution of the Algorithm

SLIDE 68

Execution Example

Optimal Solution: big items from classes 2, 1, 3, 2

Advice: (c1, c2, c3, c4) = (2, 1, 3, 2)

Input: items from classes 1, 2, 1, 3, 2, 3, 2, 2, 1, 1, 3, 2, revealed one by one

Output: the four slots end up filled with one big item each, from classes 2, 1, 3, 2

SLIDE 102

Near-Optimality with Constant Advice

Bounding the Competitivity

  • Big items: Losing at most an ε-fraction of the volume

    – All slots are filled in the end (proof by induction)
    – As many big items from every class as in the optimal solution
    – Every big-item class spans a factor of 1 − ε

  • Small items: Losing at most an ε-fraction of the volume (greedy)
  • Thus (1 + 2ε)-competitive
SLIDE 103

Conclusion

SLIDE 104

Advice Complexity Behavior

[Plot: competitivity vs. advice bits, with marks at O(1), O(log n), and n advice bits and at competitivities 1, 1 + ε, Φ, 2, log n, n − 1]

Classical Knapsack: Two thresholds

  • No advice: Unbounded competitivity
  • Constant advice (a single bit): Drop to 2-competitivity
  • Logarithmic advice necessary for any improvement
  • Logarithmic advice sufficient for near-optimality
  • Linear advice (one bit per item) necessary for optimality
SLIDE 105

Conclusion: Advice Complexity Behavior

[Plot: competitivity vs. advice bits]

With removability: Collapse to a single threshold

  • No advice: Φ-competitive (golden ratio Φ ≈ 1.618)
  • Constant advice: Jump down to near-optimality
  • Logarithmic advice: Still near-optimal
  • Linear advice (one bit per item) necessary for optimality
SLIDE 108

Outlook: Non-Proportional Variant

[Plot: competitivity vs. advice bits]

Non-Proportional Online Knapsack

  • Without removability: A single threshold at logarithmic advice; jump from unbounded to near-optimal competitivity
  • With removability: Threshold moves to constant advice
  • Proof starts similarly, but many new tricks required
SLIDE 109

Questions?