

  1. Logics for Classical and Quantum Information Flow. Sonja Smets (ILLC, University of Amsterdam). Financial Support Acknowledgement.

  2. Overview • General Research Aim: Model and reason about various forms of information flow in interacting multi-agent systems (from classical to quantum). • How? Use techniques from Logic (Dynamic Epistemic Logic), AI, Computer Science and Physics. • Two main research directions: - Logical models for interactive belief revision - Dynamic logics for reasoning about quantum information flow

  3. Research Projects • LogiCIC Project: The Logical Structure of Correlated Information Change. We develop a logical system to model and reason about correlated information change: - correlations that arise in situations in which the very act of learning new information may directly change the reality that is being learnt. - E.g.: an introspective agent who changes her beliefs when learning new higher-order information; or a scientist who learns about a phenomenon by performing measurements that perturb the very phenomenon under study (as in quantum measurements); or a group of communicating agents in which some agents’ beliefs about the others’ belief changes may influence their own belief change.

  4. • VIDI Project: Reasoning about Quantum Interaction: Logical Modelling and Verification of Multi-Agent Quantum Protocols. We use Logic to model and reason about quantum computation and quantum information, and especially for the formal verification of quantum communication protocols. The applications involve quantum information flow and classical knowledge transfer (by classical communication) between the agents. We model complex situations where different types of informational dynamics (classical and quantum) are combined. We develop and use a combined classical-quantum logic for the full specification and formal verification of agent-based quantum protocols for secure communication. We work with formalisms based on modal logic, especially combinations of dynamic (or temporal) logics and epistemic (or “spatial”) logics.

  5. Research Direction: Belief Revision Theory. What happens if I learn a new fact ϕ that contradicts my old beliefs? If I accept the fact ϕ and put it together with the set T of all the sentences I used to believe, the resulting set T ∪ {ϕ} is logically inconsistent. So I have to give up some of my old beliefs. But which of them? Maybe all of them?! No: I should try to maintain as much as possible of my old beliefs, while still accepting the new fact ϕ (without arriving at a contradiction).

  6. Example. Suppose I believe two facts p and q, and (by logical closure) their conjunction p ∧ q. So my belief base is the following: {p, q, p ∧ q}. Suppose now that I learn that the last sentence was actually false. Obviously, I have to revise my belief base, eliminating the sentence p ∧ q and replacing it with its negation ¬(p ∧ q).

  7. But the base {p, q, ¬(p ∧ q)} is inconsistent! So I have to do more! Obviously, to accommodate the new fact ¬(p ∧ q), I have to give up either my belief in p or my belief in q. But which one? This depends on the semantics (the “doxastic model” of the agent).
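A minimal sketch (my illustration, not from the talk) that brute-forces this claim: the base {p, q, ¬(p ∧ q)} has no satisfying valuation, while dropping either p or q restores consistency.

```python
# Sketch: check consistency of the belief base {p, q, ~(p & q)} by
# enumerating all valuations of the atoms p and q.
from itertools import product

base = {
    "p":        lambda p, q: p,
    "q":        lambda p, q: q,
    "~(p & q)": lambda p, q: not (p and q),
}

def consistent(sentences):
    """True iff some valuation of p, q satisfies every sentence."""
    sentences = list(sentences)
    return any(all(s(p, q) for s in sentences)
               for p, q in product([True, False], repeat=2))

print(consistent(base.values()))                   # False: the full base is inconsistent
print(consistent([base["p"], base["~(p & q)"]]))   # True: give up the belief in q
print(consistent([base["q"], base["~(p & q)"]]))   # True: give up the belief in p
```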

  8. Dynamic Interactive Belief Revision • The way agents’ information and beliefs change in a “social”, multi-agent context, involving various types of interaction between agents, such as communication and other forms of information flow. • Updating/revising the agents’ higher-level beliefs (including their beliefs about their own, and the other agents’, beliefs, etc.). Note: “traditional” Belief Revision Theory deals only with the revision of an isolated agent’s first-level beliefs.

  9. Iterated Revision and the Learning Problem. Question 1. THE ITERATION PROBLEM: investigate the long-term behavior of iterated learning of higher-level doxastic information. Learning: belief revision with new true information. Long-term behavior: whether the learning process comes to an end, stabilizing the doxastic structure, or keeps changing it forever. In particular, do the agent’s beliefs stabilize, reaching a fixed point? Do the conditional beliefs? Question 2. THE LEARNING PROBLEM: do the beliefs stabilize on truth, converging to the real world?

  10. Plan for the remainder of this talk. SETTING: Start from dynamic belief revision (the DEL approach) and look at truth approximation. Focus on Learning: dynamic belief revision with new true information. Interested in the Long-term behavior: whether the learning process comes to an end, stabilizing the doxastic structure, or keeps changing it forever. Higher-level (doxastic) information: may refer to the agent’s own beliefs, or even to her belief-revision policy.

  11. Hint at the conclusion • First, I’ll show how not to converge to the truth: we may get into infinite belief-revision cycles, even if the revision is directed towards the real world, i.e. even if we allow only (dynamic) revisions with the same truthful piece of information! • Second, I’ll show the conditions under which a series of belief revisions stabilizes on true beliefs. • Third, I’ll show the conditions under which a series of belief revisions stabilizes such that all beliefs are true and all true sentences are believed. • Observe: the truth value of doxastic/epistemic sentences may change during the learning process.

  12. Contrast with Classical AGM Theory. AGM Belief Revision deals only with propositional information. So: • The process of learning new, true information always comes to an end: the most one can learn by iterated revisions is the correct valuation (which atomic sentences are true in the real world). • It is useless to repeatedly revise with the same information: after learning a propositional sentence ϕ once, learning it again would be superfluous (leaving the doxastic state unchanged), as the sketch below illustrates.
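A small sketch (my illustration) of the idempotence claim: modelling the learning of a purely propositional sentence as world elimination, updating twice gives exactly the same doxastic state as updating once.

```python
# Sketch: learning a propositional sentence by world elimination is idempotent.
# Worlds are the four valuations of the atoms p and q.
worlds = {frozenset(v) for v in [{"p"}, {"q"}, {"p", "q"}, frozenset()]}
phi = lambda w: "p" in w          # a purely propositional sentence

once = {w for w in worlds if phi(w)}
twice = {w for w in once if phi(w)}
print(once == twice)              # True: the second revision is superfluous
```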

  13. Why bother? QUESTION: Why should we worry about revision with higher-level doxastic sentences? Why would an agent revise her beliefs about her own beliefs? ANSWER: Because the new information may explicitly refer to the agents’ beliefs.

  14. Example: Learning you are wrong. Suppose somebody truthfully tells you the following sentence ϕ: “You are wrong about p.” We interpret ϕ as saying that Bp ↔ ¬p: “Whatever you currently believe about (whether or not) p is false.” This is a doxastic sentence, but it does convey new information about the real world: after learning ϕ and using introspection (about your own current beliefs), you will come to know whether p holds or not, thus correcting your mistaken belief about p.
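A small sketch (my own encoding, not the author’s semantics) of this update as world elimination: two worlds, the agent finds the p-world most plausible, and the real world refutes p. Learning ϕ removes the p-world, after which the agent knows ¬p; re-evaluating ϕ then shows it is false everywhere, which is the point of the next slide.

```python
# Sketch: learning "you are wrong about p" (phi) by eliminating worlds.
# Two worlds; lower rank = more plausible; the real world is w2, where p fails.
p_at = {"w1": True, "w2": False}
rank = {"w1": 0, "w2": 1}             # the agent finds w1 most plausible

def most_plausible(ws):
    best = min(rank[w] for w in ws)
    return [w for w in ws if rank[w] == best]

def believes(ws, pred):
    """B pred: pred holds at every most plausible remaining world."""
    return all(pred(w) for w in most_plausible(ws))

def phi(ws, w):
    """'You are wrong about p': the agent's belief about p is false at w."""
    if believes(ws, lambda v: p_at[v]):        # the agent believes p
        return not p_at[w]
    if believes(ws, lambda v: not p_at[v]):    # the agent believes ~p
        return p_at[w]
    return False                               # no settled belief about p

ws = {"w1", "w2"}
ws = {w for w in ws if phi({"w1", "w2"}, w)}   # learn phi once
print(ws)                                      # {'w2'}: the agent now knows ~p
print([phi(ws, w) for w in ws])                # [False]: phi cannot be learned again
```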

  15. Repeated learning is impossible?! NOTE: ϕ changes its truth value by being learned. After learning ϕ once, ϕ becomes false, and moreover you know it is false. So you cannot possibly “learn” it again. Learning a sentence such as ϕ (a “Moore sentence”) twice is impossible. So repeated learning is still trivial.

  16. What is the general picture? Repeated learning of the same (true) doxastic information is NOT always trivial: it may give rise to “doxastic loops”! More generally, iterated revision with truthful higher-level information can be highly non-trivial.

  17. SEMANTICS: Plausibility (Grove) Structures. A (finite, pointed) plausibility model is a pointed Kripke model (S, ≤, ‖·‖, s₀) with • a finite set S of “states” (or “possible worlds”), • a connected preorder ≤ ⊆ S × S: the plausibility relation, • a designated world s₀ ∈ S: the “real world”, • a valuation map, assigning to each atomic sentence p a set ‖p‖ ⊆ S. [Diagram: two states r and n, with an arrow from r to n.] Read n ≤ r as “state n is at least as plausible as state r”.
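As a data structure the definition is very light. A sketch (an assumed encoding, not from the slides) of a finite pointed plausibility model in Python, instantiated on the two-state diagram above:

```python
# Sketch: a finite pointed plausibility model (S, <=, ||.||, s0) as plain data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlausibilityModel:
    S: set                          # finite set of states ("possible worlds")
    le: Callable[[str, str], bool]  # connected preorder: le(s, t) iff s <= t
    val: dict                       # ||p||: atomic sentence -> set of states
    s0: str                         # the designated "real world"

# The two-state diagram from this slide: n is at least as plausible as r.
order = {"n": 0, "r": 1}
M = PlausibilityModel(
    S={"n", "r"},
    le=lambda s, t: order[s] <= order[t],
    val={"p": {"n"}},               # a sample valuation (my assumption)
    s0="r",
)
print(M.le("n", "r"))               # True: "n is at least as plausible as r"
```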

  18. Example 0. Consider a pollster (Charles) with the following beliefs about how a given voter (Mary) will vote: he believes she will vote Democrat. But in case this turns out wrong, he’d rather believe that she won’t vote than accept that she may vote Republican. Let us assume that, in reality (unknown to Charles), Mary will vote Republican!

  19. A Model for Example 0. [Diagram: three worlds with arrows r → n → d.] • The valuation is trivial: each atom r, n, d is true at the corresponding world. • The real world is r: Mary will vote Republican. • The arrows represent the converse plausibility relation; we skip all the loops and composed arrows. • Charles considers world d (voting Democrat) to be the most plausible, and world n (not voting) to be more plausible than world r.

  20. (Conditional) Belief in Plausibility Models. Bϕ: sentence ϕ is believed in (any state of) a plausibility model (S, ≤) if ϕ is true in all the “most plausible” worlds, i.e. in Min_≤ S. B^P ϕ: sentence ϕ is believed conditional on P if ϕ is true at all most plausible worlds satisfying P, i.e. in all the states in the set Min_≤ P := {s ∈ P : s ≤ t for all t ∈ P}.
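A sketch (my encoding, under the assumptions above) computing Min_≤ and (conditional) belief; the Example 0 model from slide 19 is re-declared inline so the snippet is self-contained:

```python
# Sketch: belief and conditional belief on the Example 0 model.
states = {"r", "n", "d"}
rank = {"d": 0, "n": 1, "r": 2}            # lower rank = more plausible

def min_le(P):
    """Min_<=(P): the most plausible states inside the set P."""
    best = min(rank[s] for s in P)
    return {s for s in P if rank[s] == best}

def believes(phi):
    """B phi: phi holds at all most plausible states of the whole model."""
    return all(phi(s) for s in min_le(states))

def believes_given(phi, P):
    """B^P phi: phi holds at all most plausible states satisfying P."""
    return all(phi(s) for s in min_le({s for s in states if P(s)}))

print(believes(lambda s: s == "d"))        # True: Charles believes "Democrat"
print(believes_given(lambda s: s == "n",
                     lambda s: s != "d"))  # True: given not-Democrat, he
                                           # believes "she won't vote"
```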
