SLIDE 1

Logics for Classical and Quantum Information Flow
Sonja Smets (ILLC, University of Amsterdam)

Financial Support Acknowledgement:

SLIDE 2

Overview

  • General Research Aim: Model and reason about various forms of information flow in interacting multi-agent systems (from classical to quantum).
  • How? Use techniques from Logic (Dynamic Epistemic Logic), AI, Computer Science and Physics.

  • Two main research directions:
  • Logical models for interactive belief revision
  • Dynamic Logics for reasoning about quantum information flow

SLIDE 3

Research Projects

  • LogiCIC Project: The Logical Structure of Correlated Information Change

We develop a logical system to model and reason about correlated information change.

  • Correlations that arise in situations in which the very act of learning new information may directly change the reality that is being learnt.
  • E.g.: an introspective agent who changes her beliefs when learning new higher-order information. Or a scientist who learns about a phenomenon by performing measurements that perturb the very phenomenon under study (as in quantum measurements). Or groups of communicating agents, where some agents’ beliefs about the others’ belief changes may influence their own belief change.

SLIDE 4
  • VIDI Project: Reasoning about quantum interaction: Logical Modelling and Verification of Multi-Agent Quantum Protocols

We use Logic to model and reason about quantum computation and quantum information, and especially for the formal verification of quantum communication protocols. The applications involve quantum information flow and classical knowledge transfer (by classical communication) between the agents. We model complex situations where different types of informational dynamics (classical and quantum) are combined. We develop and use a combined classical-quantum logic for the full specification and formal verification of agent-based quantum protocols for secure communication. We work with formalisms based on modal logic, especially combinations of dynamic (or temporal) logics and epistemic (or “spatial”) logics.

SLIDE 5

Research Direction: Belief Revision Theory

What happens if I learn a new fact ϕ that contradicts my old beliefs? If I accept the fact ϕ and put it together with the set T of all the sentences I used to believe, the resulting set T ∪ {ϕ} is logically inconsistent. So I have to give up some of my old beliefs. But which of them? Maybe all of them?! No: I should try to maintain as much as possible of my old beliefs, while still accepting the new fact ϕ (without arriving at a contradiction).

SLIDE 6

Example

Suppose I believe two facts p and q, and (by logical closure) their conjunction p ∧ q. So my belief base is the following: {p, q, p ∧ q}. Suppose now that I learn that the last sentence was actually false. Obviously, I have to revise my belief base, eliminating the sentence p ∧ q and replacing it with its negation: ¬(p ∧ q).

SLIDE 7

But the base {p, q, ¬(p ∧ q)} is inconsistent! So I have to do more! Obviously, to accommodate the new fact ¬(p ∧ q), I have to give up either my belief in p or my belief in q. But which one? This depends on the semantics (the “doxastic model” of the agent).
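As a side illustration (not part of the slides), a brute-force satisfiability check makes the inconsistency concrete; formulas are modelled as Python functions on valuations:

```python
from itertools import product

# Formulas are modelled as functions from a valuation (dict of atoms to booleans)
# to a truth value.
p = lambda v: v["p"]
q = lambda v: v["q"]
def neg(f): return lambda v: not f(v)
def conj(f, g): return lambda v: f(v) and g(v)

def consistent(base, atoms=("p", "q")):
    """A belief base is consistent iff some valuation satisfies all its formulas."""
    return any(all(f(dict(zip(atoms, bits))) for f in base)
               for bits in product([True, False], repeat=len(atoms)))

print(consistent([p, q, conj(p, q)]))       # True: the original base {p, q, p&q}
print(consistent([p, q, neg(conj(p, q))]))  # False: {p, q, not(p&q)} is inconsistent
```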

SLIDE 8

Dynamic Interactive Belief Revision

  • The way agents’ information and beliefs change in a “social”, multi-agent context, involving various types of interaction between agents, such as communication and other forms of information flow.
  • Updating/revising the agents’ higher-level beliefs (including their beliefs about their own, and the other agents’, beliefs etc.).

Note: “Traditional” Belief Revision Theory deals only with the revision of an isolated agent’s first-level beliefs.

SLIDE 9

Iterated Revision and the Learning Problem

Question 1. THE ITERATION PROBLEM: investigate the long-term behavior of iterated learning of higher-level doxastic information.
Learning: belief revision with new true information.
Long-term behavior: whether the learning process comes to an end, stabilizing the doxastic structure, or keeps changing it forever. In particular, do the agent’s beliefs stabilize, reaching a fixed point? Do the conditional beliefs?

Question 2. THE LEARNING PROBLEM: Do the beliefs stabilize on truth, converging to the real world?

SLIDE 10

Plan for the remainder of this talk

SETTING: Start from dynamic belief revision (the DEL approach) and look at truth approximation.
Focus on Learning: dynamic belief revision with new true information.
Interested in the Long-term behavior: whether the learning process comes to an end, stabilizing the doxastic structure, or keeps changing it forever.
Higher-level (doxastic) information: may refer to the agent’s own beliefs, or even to her belief-revision policy.

SLIDE 11

Hint at the conclusion

  • First, I’ll show how not to converge to the truth: we may get into infinite belief-revision cycles, even if the revision is directed towards the real world, i.e. even if we allow only (dynamic) revisions with the same truthful piece of information!
  • Second, I’ll show the conditions under which a series of belief revisions stabilizes on true beliefs.
  • Third, I’ll show the conditions under which a series of belief revisions stabilizes such that all beliefs are true and all true sentences are believed.
  • Observe: the truth-value of doxastic/epistemic sentences may change during the learning process.

SLIDE 12

Contrast with Classical AGM Theory

AGM Belief Revision deals only with propositional information. So:

  • The process of learning new, true information always comes to an end: the most one can learn by iterated revisions is the correct valuation (which atomic sentences are true in the real world).
  • It is useless to repeatedly revise with the same information: after learning a propositional sentence ϕ once, learning it again would be superfluous (leaving the doxastic state unchanged).

SLIDE 13

Why bother?

QUESTION: Why should we worry about revision with higher-level doxastic sentences? Why would an agent revise her beliefs about her own beliefs?

ANSWER: Because the new information may explicitly refer to the agents’ beliefs.

SLIDE 14

Example: Learning you are wrong

Suppose somebody truthfully tells you the following sentence ϕ: “You are wrong about p.”
We interpret ϕ as saying that Bp ↔ ¬p: “Whatever you currently believe about (whether or not) p is false.”
This is a doxastic sentence, but it does convey new information about the real world: after learning ϕ and using introspection (about your own current beliefs), you will come to know whether p holds or not, thus correcting your mistaken belief about p.
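A minimal sketch of this effect (my illustration, not from the talk): a two-world model where belief means truth at the most plausible worlds, and learning ϕ is an update that deletes the ¬ϕ-worlds:

```python
# Two worlds: p is true at "w_p" and false at "w_np".
# Plausibility is given by ranks: lower rank = more plausible.
worlds = {"w_p": {"p": True}, "w_np": {"p": False}}
rank = {"w_p": 1, "w_np": 0}   # the agent finds w_np most plausible: she believes not-p
real = "w_p"                   # but in fact p is true: her belief is mistaken

def believes_value(ws, rk, value):
    """B p (value=True) or B not-p (value=False): p has that value at all
    most plausible worlds."""
    best = min(rk[w] for w in ws)
    return all(ws[w]["p"] == value for w in ws if rk[w] == best)

def phi(w):
    """phi = (Bp <-> not p): true at w iff the agent's belief about p is wrong at w."""
    if believes_value(worlds, rank, True):
        return not worlds[w]["p"]
    if believes_value(worlds, rank, False):
        return worlds[w]["p"]
    return False  # the agent has no definite belief about p

# Update !phi: delete all worlds where phi fails.
worlds = {w: v for w, v in worlds.items() if phi(w)}
print(worlds)  # {'w_p': {'p': True}}: only the real world survives, so now Bp holds
```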

SLIDE 15

Repeated learning is impossible?!

NOTE: ϕ changes its truth value by being learned. After learning ϕ once, ϕ becomes false, and moreover you know it is false. So you cannot possibly “learn” it again.
Learning a sentence such as ϕ (a “Moore sentence”) twice is impossible. So repeated learning is still trivial.

SLIDE 16

What is the general picture?

Repeated learning of the same (true) doxastic information is NOT always trivial: it may give rise to “doxastic loops”! More generally, iterated revision with truthful higher-level information can be highly non-trivial.

SLIDE 17

SEMANTICS: Plausibility (Grove) Structures

A (finite, pointed) plausibility model is a pointed Kripke model (S, ≤, ‖·‖, s0) with:

  • a finite set S of “states” (or “possible worlds”),
  • a connected preorder ≤ ⊆ S × S: the plausibility relation,
  • a designated world s0 ∈ S: the “real world”,
  • a valuation map ‖·‖, assigning to each atomic sentence p a set ‖p‖ ⊆ S.

[Diagram: two states r and n, with an arrow from r to n (n more plausible).]

Read n ≤ r as “state n is at least as plausible as state r”.
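As a concrete illustration (my own sketch, not code from the talk), such a model can be coded in Python with the connected preorder represented by ranks, lower rank meaning more plausible:

```python
from dataclasses import dataclass

@dataclass
class PlausibilityModel:
    """A finite pointed plausibility model (S, <=, valuation, s0).
    The connected preorder <= is encoded by ranks: lower rank = more plausible."""
    rank: dict   # state -> int, encoding the total preorder
    val: dict    # atomic sentence -> set of states where it is true
    real: str    # the designated real world s0

    def most_plausible(self, P=None):
        """Min_<=(P): the most plausible states within P (default: all of S)."""
        P = set(self.rank) if P is None else P
        best = min(self.rank[s] for s in P)
        return {s for s in P if self.rank[s] == best}

# The two-state model pictured above: n is more plausible than r.
m = PlausibilityModel(rank={"n": 0, "r": 1}, val={"p": {"n"}}, real="r")
print(m.most_plausible())  # {'n'}
```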

SLIDE 18

Example 0

Consider a pollster (Charles) with the following beliefs about how a given voter (Mary) will vote: he believes she will vote Democrat. But in case this turns out wrong, he’d rather believe that she won’t vote than accept that she may vote Republican. Let us assume that, in reality (unknown to Charles), Mary will vote Republican!

SLIDE 19

A Model for Example 0

[Diagram: worlds r → n → d, arrows pointing towards more plausible worlds.]

  • The valuation is trivial: each atom r, n, d is true at the corresponding world.
  • The real world is r: Mary will vote Republican.
  • The arrows represent the converse plausibility relation; we skip all the loops and composed arrows.
  • Charles considers world d (voting Democrat) to be the most plausible, and world n (not voting) to be more plausible than world r.

SLIDE 20

(Conditional) Belief in Plausibility Models

Bϕ: Sentence ϕ is believed in (any state of) a plausibility model (S, ≤) if ϕ is true in all the “most plausible” worlds.
B^P ϕ: Sentence ϕ is believed conditional on P if ϕ is true at all most plausible worlds satisfying P, i.e. in all the states in the set Min≤ P := {s ∈ P : s ≤ t for all t ∈ P}.
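Continuing the sketch (again my illustration, with sentences ϕ and conditions P given extensionally as sets of states), belief and conditional belief reduce to a check on the minimal states:

```python
def believes(rank, phi, P=None):
    """B^P(phi): phi (a set of states) holds at all most plausible states within P."""
    P = set(rank) if P is None else P
    best = min(rank[s] for s in P)
    return all(s in phi for s in P if rank[s] == best)

# Example 0: d is most plausible, then n, then r; the real world is r.
rank = {"d": 0, "n": 1, "r": 2}
print(believes(rank, {"d"}))               # True: Charles believes Mary votes Democrat
print(believes(rank, {"n"}, P={"n", "r"})) # True: conditional on not-d, he believes n
```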

SLIDE 21

Modelling Higher-Level Belief Revision

From a semantic point of view, higher-level belief revision is about “revising” the whole relational structure: changing the plausibility relation (and/or its domain).
A relational transformer is a model-changing operation α that takes any plausibility model S = (S, ≤, ‖·‖, s0) and returns a new model α(S) = (S′, ≤′, ‖·‖↾S′, s0), having as set of states some subset S′ ⊆ S, as valuation the restriction of the original valuation to S′, and the same real world s0 as the original model (but possibly a different order relation).

SLIDE 22

Examples of Transformers

(1) Update !ϕ (conditionalization with ϕ): all the non-ϕ states are deleted, and the same plausibility order is kept between the remaining states.
(2) Radical (“lexicographic”) upgrade ⇑ϕ: all ϕ-worlds become “better” (more plausible) than all ¬ϕ-worlds, and within the two zones the old ordering remains.
(3) Conservative upgrade ↑ϕ: the “best” ϕ-worlds become better than all other worlds, and for the rest the old order remains.
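A minimal sketch of the three transformers (my illustration; ranks encode the preorder, ϕ is a set of states, and ϕ is assumed to overlap the current state set in the conservative case):

```python
def update(rank, phi):
    """!phi: delete all non-phi states, keep the old order on the rest."""
    return {s: r for s, r in rank.items() if s in phi}

def radical_upgrade(rank, phi):
    """Lexicographic upgrade: all phi-states above all others, old order in each zone."""
    offset = max(rank.values()) + 1
    return {s: (r if s in phi else r + offset) for s, r in rank.items()}

def conservative_upgrade(rank, phi):
    """The best phi-states become the best overall; the rest keeps the old order."""
    best_phi = min(rank[s] for s in phi if s in rank)
    promoted = {s for s in phi if s in rank and rank[s] == best_phi}
    return {s: (0 if s in promoted else r + 1) for s, r in rank.items()}

rank = {"d": 0, "n": 1, "r": 2}               # Example 0
print(radical_upgrade(rank, {"r", "n"}))      # {'d': 3, 'n': 1, 'r': 2}: n, r now on top
print(conservative_upgrade(rank, {"r", "n"})) # {'d': 1, 'n': 0, 'r': 3}: only n promoted
```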

SLIDE 23

Explanation

After a conservative upgrade, the agent only comes to believe that ϕ (was the case), i.e. to allow only ϕ-worlds as the most plausible ones.
The radical upgrade has a more “radical” effect: the agent comes to “strongly believe” ϕ, i.e. accepts ϕ with such a conviction that she considers all ϕ-worlds more plausible than all non-ϕ ones.
Finally, after an update, the agent comes to “know” ϕ in an absolute, irrevocable sense, so that all non-ϕ possibilities are forever eliminated.

SLIDE 24

Iterating Upgrades

To study iterated belief revision, consider a finite model S0 = (S, ≤0, ‖·‖0, s0) and an (infinite) sequence of upgrades α0, α1, . . . , αn, . . .
In particular, these can be updates !ϕ0, !ϕ1, . . . , !ϕn, . . . , or conservative upgrades ↑ϕ0, ↑ϕ1, . . . , ↑ϕn, . . . , or radical upgrades ⇑ϕ0, ⇑ϕ1, . . . , ⇑ϕn, . . .

SLIDE 25

The iteration leads to an infinite succession of upgraded models S0, S1, . . . , Sn, . . . defined by: Sn+1 = αn(Sn).

SLIDE 26

Iterated Upgrades Do Not Necessarily Stabilize!

Iterated updates always stabilize, but this is NOT the case for arbitrary upgrades. First, it is obvious that, if we allow for false upgrades, the revision may oscillate forever: the sequence ⇑p, ⇑¬p, ⇑p, ⇑¬p, . . . will forever keep reverting the order between the p-worlds and the non-p-worlds back and forth.

SLIDE 27

Tracking the Truth

SURPRISE: we may still get into an infinite belief-revision cycle, even if the revision is “directed” towards the real world, i.e. even if we allow only upgrades that are always truthful!
BIGGER SURPRISE: this still holds even if we revise with the same true sentence every time:

  • Conservative case: ↑ϕ, ↑ϕ, . . . , ↑ϕ, . . . Simple beliefs never stabilize.
  • Radical case: ⇑ϕ, ⇑ϕ, . . . , ⇑ϕ, . . . Simple beliefs stabilize, but conditional beliefs don’t.
  • In the last case, when do the beliefs stabilize on the truth (the real world)?

SLIDE 28

Iterating a Truthful Conservative Upgrade

In Example 0, suppose a trusted informer tells Charles the following true statement ϕ:
r ∨ (d ∧ ¬Bd) ∨ (¬d ∧ Bd)
“Either Mary will vote Republican, or else your beliefs about whether or not she votes Democrat are wrong.”
In the original model

[Diagram: r → n → d, d most plausible.]

the sentence ϕ is true in worlds r and n, but not in d.

SLIDE 29

Infinite Oscillations by Truthful Upgrades

Let’s suppose that Charles conservatively upgrades his beliefs with this new true information ϕ. The most plausible state satisfying ϕ was n, so this now becomes the most plausible state overall:

[Diagram: r → d → n, n most plausible.]

Now ϕ is again true at the real world (r) and in world d. So this sentence can again be truthfully announced.

SLIDE 30

If Charles conservatively upgrades again with ϕ, he will promote d to the top, reverting to the original model! Here the whole model (the plausibility order) keeps changing, and Charles’ (simple, unconditional) beliefs keep oscillating forever (between d and n)!
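Putting the pieces together, here is a small self-contained sketch (my illustration) that re-evaluates ϕ = r ∨ (d ∧ ¬Bd) ∨ (¬d ∧ Bd) in each round and applies a conservative upgrade, reproducing the two-cycle from the slides:

```python
def believes(rank, phi):
    best = min(rank.values())
    return all(s in phi for s in rank if rank[s] == best)

def conservative_upgrade(rank, phi):
    # The best phi-states move to the top; everyone else keeps the old order below.
    best_phi = min(rank[s] for s in phi)
    promoted = {s for s in phi if rank[s] == best_phi}
    return {s: (0 if s in promoted else r + 1) for s, r in rank.items()}

def truth_set(rank):
    """phi = r or (d and not Bd) or (not d and Bd), evaluated in the current model."""
    bd = believes(rank, {"d"})
    return {w for w in rank
            if w == "r" or (w == "d" and not bd) or (w != "d" and bd)}

rank = {"d": 0, "n": 1, "r": 2}   # Example 0: Charles believes d; the real world is r
for step in range(4):
    phi = truth_set(rank)
    assert "r" in phi             # phi remains true at the real world: truthful upgrade
    rank = conservative_upgrade(rank, phi)
    best = min(rank.values())
    print(step, sorted(w for w in rank if rank[w] == best))
# Prints: 0 ['n'], 1 ['d'], 2 ['n'], 3 ['d'] -- simple beliefs oscillate forever.
```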

SLIDE 31

Iterating Truthful Radical Upgrades

Consider the same original model:

[Diagram: r → n → d, d most plausible.]

But now consider the sentence ϕ:
r ∨ (d ∧ ¬B^¬r d) ∨ (¬d ∧ B^¬r d)
“If you’d truthfully learn that Mary won’t vote Republican, then your resulting belief about whether or not she votes Democrat would be wrong.”
Sentence ϕ is true in the real world r and in n, but not in d, so a truthful radical upgrade will give us:

SLIDE 32
[Diagram: d → r → n, n most plausible.]

The same ϕ is again true in (the real world) r and in d, so it can again be truthfully announced, resulting in:

[Diagram: n → d → r, r most plausible.]

Another truthful upgrade with ϕ gives:

[Diagram: d → n → r, r most plausible.]

Then another truthful upgrade with the same ϕ gets us back to:

[Diagram: n → d → r, r most plausible.]
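A matching sketch for the radical case (again my illustration): ϕ is re-evaluated via the conditional belief B^¬r d in each round; the order keeps cycling, while the simple belief settles on the real world r:

```python
def cond_believes(rank, phi, P):
    """B^P(phi): phi holds at all most plausible states within P."""
    best = min(rank[s] for s in P)
    return all(s in phi for s in P if rank[s] == best)

def radical_upgrade(rank, phi):
    # All phi-states become more plausible than all others; old order within zones.
    offset = max(rank.values()) + 1
    return {s: (r if s in phi else r + offset) for s, r in rank.items()}

def truth_set(rank):
    """phi = r or (d and not B^{not-r} d) or (not d and B^{not-r} d)."""
    bd = cond_believes(rank, {"d"}, {"n", "d"})
    return {w for w in rank
            if w == "r" or (w == "d" and not bd) or (w != "d" and bd)}

rank = {"d": 0, "n": 1, "r": 2}
for step in range(6):
    rank = radical_upgrade(rank, truth_set(rank))
    best = min(rank.values())
    print(step, sorted(w for w in rank if rank[w] == best))
# The plausibility order keeps cycling, but from step 1 onwards the most
# plausible world is always r: the simple belief stabilizes on the truth.
```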

SLIDE 33

Stable Beliefs in Oscillating Models

These last two models will keep reappearing, in an endless cycle: as for conservative upgrades, the process never reaches a fixed point! However, unlike in the conservative-upgrade example, in this radical example the simple (unconditional) beliefs eventually stabilize: from some moment onwards, Charles correctly believes that the real world is r (vote Republican), and he will never lose this belief again! This is a symptom of a more general phenomenon:

SLIDE 34

Beliefs Stabilize in Iterated Radical Upgrades

THEOREM: In any infinite sequence of truthful radical upgrades {⇑ϕi}i on an initial (finite) model S0, the set of most plausible states eventually stabilizes, after finitely many iterations. From then onwards, the simple (unconditional) beliefs stay the same (despite the possibly infinite oscillations of the plausibility order).

SLIDE 35

Converging to the Truth?

Simple beliefs stabilize after an infinite series of radical upgrades, but under what conditions do they stabilize on the truth?

SLIDE 36

Strongly Informative Upgrades and Streams

Call a radical upgrade ⇑ϕ “strongly informative” on a pointed model (M, s) iff ϕ is not already believed at (M, s), i.e. (M, s) satisfies ¬Bϕ.
A radical upgrade stream {⇑ϕn}n is called “strongly informative” if each of the upgrades is strongly informative at the time when it is announced; i.e. (M(ϕ1, ϕ2, . . . , ϕn−1), s) satisfies ¬Bϕn.

SLIDE 37

Belief-Correcting Upgrades and Streams

Call a radical upgrade ⇑ϕ “belief-correcting” on (M, s) iff ϕ is actually believed to be FALSE at (M, s), i.e. (M, s) satisfies B¬ϕ.
A radical upgrade stream is called “belief-correcting” if each of the upgrades is belief-correcting at the time when it is announced; i.e. (M(ϕ1, ϕ2, . . . , ϕn−1), s) satisfies B¬ϕn.

  • “belief-correcting” ⇒ “strongly informative” (the converse fails).
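In the running sketch (my illustration), both properties are simple checks against the current model before an upgrade is applied:

```python
def believes(rank, phi):
    best = min(rank.values())
    return all(s in phi for s in rank if rank[s] == best)

def strongly_informative(rank, phi):
    """not B(phi): phi is not yet believed in the current model."""
    return not believes(rank, phi)

def belief_correcting(rank, phi):
    """B(not phi): phi is currently believed to be false."""
    return believes(rank, set(rank) - set(phi))

rank = {"d": 0, "n": 1, "r": 2}                # Example 0 again
print(strongly_informative(rank, {"r", "n"}))  # True: Charles does not believe it
print(belief_correcting(rank, {"r", "n"}))     # True: he believes its negation (d)
# Belief-correcting implies strongly informative; the converse fails.
```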

SLIDE 38

Maximal Strongly Informative Streams

An upgrade stream is a “maximal” strongly-informative (or maximal belief-correcting) truthful stream if:

  • (1) it is strongly-informative (or belief-correcting) and truthful, and
  • (2) it cannot be properly extended to any stream having property (1).

So a strongly informative truthful stream is “maximal” if it is either infinite, or, in case it is finite (say, of length n), there exists NO upgrade ⇑ϕn+1 which would be truthful and strongly informative on the last model (M(ϕ1, ϕ2, . . . , ϕn), s).

SLIDE 39

The Results

  • 1. Every maximal belief-correcting truthful upgrade stream {⇑ϕn}n (starting on a given finite model (M, s)) is finite and converges to true beliefs; i.e. in its final model (M(ϕ1, ϕ2, . . . , ϕn), s), all the beliefs are true.
  • 2. Every maximal strongly-informative truthful upgrade stream {⇑ϕn}n (starting on a given finite model (M, s)) is finite and stabilizes the beliefs on FULL TRUTH; i.e. in its final model (M(ϕ1, ϕ2, . . . , ϕn), s), all beliefs are true and all true sentences are believed. In other words, in the final model, a sentence is believed if and only if it is true.

SLIDE 40

Note

But note that the last conclusion is NOT necessarily equivalent to saying that the set of most plausible worlds coincides in the end with only the real world!
The reason is that the language may not be expressive enough to distinguish the real world from some of the other ones; and so the conclusion of 2 can still hold if the most plausible worlds are these other ones...

SLIDE 41

Conclusions

  • Iterated upgrades may never reach a fixed point: conditional beliefs may remain forever unsettled.
  • When iterating truthful radical upgrades, simple (non-conditional) beliefs converge to some stable belief. This is not the case for truthful conservative upgrades.
  • In iterated truthful radical upgrades that are maximal strongly-informative, all beliefs converge to the truth and all true sentences are believed.

SLIDE 42
REFERENCES:

  • A. Baltag and S. Smets, “Learning by Questions and Answers: From Belief-Revision Cycles to Doxastic Fixed Points”, in M. Kanazawa, H. Ono and R. de Queiroz (eds.), Lecture Notes in Computer Science (LNAI), vol. 5514, pp. 124-139, 2009.
  • A. Baltag and S. Smets, “Group Belief Dynamics under Iterated Revision: Fixed Points and Cycles of Joint Upgrades”, in Proceedings of Theoretical Aspects of Rationality and Knowledge (TARK 2009), 2009.
  • A. Baltag and S. Smets, “Keep changing your beliefs and aiming for the truth”, in T. Kuipers and G. Schurz (eds.), Erkenntnis, 2011.
