

1. Revealing Complexity through Domain-Specific Modelling and Analysis
Richard Paige, Phil Brooke, Xiaocheng Ge, Chris Power, Frank Burton, Simon Poulding
University of York & Teesside University
[paige, xchge, cpower, frank, smp]@cs.york.ac.uk
pjb@scm.tees.ac.uk

2. Context
 Model-Driven Engineering (MDE) exploits models throughout engineering processes.
– Abstract descriptions of phenomena of interest.
– Constructed and manipulated by tools (automation comes first, not afterwards).
 Models are expressed in a variety of (general-purpose and domain-specific) languages.
 Models can be analysed, transformed, compared, merged, validated, …
– Many powerful tools exist.

3. Motivation
 MDE is predominantly used for engineering complex systems.
– Typically from system models to code.
– Less frequently used for:
   understanding systems
   explaining systems to different stakeholders
   exploring systems to reveal inherent causes of complexity.
 We argue that domain-specific approaches to MDE can help improve understanding of complex systems.
– Particularly for explaining emergent behaviour.

4. Contributions and Structure
 A modelling approach designed to help reveal and explore complex systems.
– Based on defining and implementing domain-specific languages (DSLs), i.e., designing modelling languages for specific problems.
 Task-specific analysis techniques for exploring domain-specific models.
– Implemented with general-purpose MDE tooling.
 Three (work-in-progress) illustrations of using the approach.

5. Modelling Approach
 General idea: construct a language that specifically (and only) captures the domain-specific concepts and behaviours that are of interest.
 Theory: this reduces the semantic gap between the logic of a domain and its formal descriptions.
 Build task-specific analysis tools using general-purpose tools.
– Particularly transformations, validations (constraints) and model-to-text generation.
 Use these to perform task-specific analyses, e.g., property checking, validation, simulation.

6. Summary of Modelling Approach
1. Identify the domain concepts and relationships of interest.
2. Encode the concepts and relationships in a DSL, including its abstract and concrete syntax.
– Tool support is not optional: we use EMF/GMF!
3. Encode the analyses of interest by transforming domain-specific models into models amenable to analysis (this is the hard/fun bit).
4. After analysis, present the results to engineers in a domain-specific format.
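Steps 1 and 2 above can be illustrated in miniature. The actual approach encodes the metamodel in EMF/Ecore with GMF-based concrete syntax; the Python sketch below is only a hypothetical analogue showing what "domain concepts and relationships encoded in a DSL" means (the class and attribute names are invented for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical, simplified analogue of a DSL metamodel for processes
# with failure modes. The real metamodel is defined in EMF/Ecore.

@dataclass
class FailureMode:
    kind: str           # e.g. "incomplete", "late", "inadequate"
    probability: float  # chance the owning task exhibits this failure

@dataclass
class Task:
    name: str
    failure_modes: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    tasks: list = field(default_factory=list)

# Step 1-2 in action: domain concepts become model elements.
assessment = Task("Specialist assessment",
                  failure_modes=[FailureMode("late", 0.1)])
pathway = Process("Stroke pathway", tasks=[assessment])
```

A transformation (step 3) would then walk a `Process` instance and emit an analysis model, rather than analysing this structure directly.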

7. Illustrations (Work-in-Progress)

8. Failures in Healthcare Processes

9. Failures in Healthcare Processes
 Healthcare processes are complex sociotechnical systems.
 We are interested in the consequences of failures that arise when executing healthcare processes.
– How do they affect completion?
– E.g., is information delayed? Is a diagnosis incorrect?
– E.g., are there bottlenecks in the process?
 The goal is to provide guidance on refining processes (or assessing the impact of changes on processes).

10. Modelling
 Created a small DSL for modelling healthcare processes, tailored for capturing failure modes.
– Inspired by BPDM.
 The real challenge is identifying the failure modes associated with a process.
– Our assertion is that failures in a process are a result of failures in one or more of the tasks that make up the process.

11. Snapshot of a Healthcare Process
 Task 15: patients with suspected stroke should have a specialist assessment within 24 hours of the onset of symptoms, then transfer to an acute stroke unit.

12. Possible Task Failure Modes
 Completeness: did the task run to completion?
 Validity: did the outcome meet requirements?
 Consistency: are the results consistent across executions?
 Timeliness: was the outcome generated on time?
 Adequacy: is the outcome fit for purpose?
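The five-way taxonomy above is small enough to pin down as an enumeration. This is an illustrative rendering, not the DSL's actual encoding (the enum and its member names are invented here):

```python
from enum import Enum

# Hypothetical enumeration of the five task failure modes from the
# taxonomy; in the real approach these are entities in the DSL.

class TaskFailureMode(Enum):
    INCOMPLETE = "did not run to completion"
    INVALID = "outcome did not meet requirements"
    INCONSISTENT = "results differ across executions"
    LATE = "outcome not generated on time"
    INADEQUATE = "outcome not fit for purpose"

modes = list(TaskFailureMode)
```

A task in a model can then carry any subset of these modes, each with its own likelihood, as in the Task 15 example that follows.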

13. Example: Failure Modes for Task 15
 Incomplete: the human resources assigned to the task are insufficient.
– E.g., a specialist and a nurse are required, but no nurse is available.
 Late: specialists carry out the task late (e.g., after 24 hours).
 Inadequate: the task is not performed by a stroke specialist.

14. Modelling and Analysis
 The DSL includes entities for modelling the failure modes of tasks.
 Automated model transformations are used to produce a simulation model.
– Effectively an interval timed coloured Petri net.
 The simulation model can be used to calculate the whole-system failure behaviour.
– Similar to FPTC.
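To give a feel for what "calculate the whole-system failure behaviour" involves, here is a deliberately minimal Monte Carlo sketch. The real simulation model is an interval timed coloured Petri net, not this; the `simulate` function, task identifiers and probabilities below are all invented for illustration:

```python
import random

# Toy Monte Carlo estimate of whole-process failure behaviour: the
# probability that at least one task fails during a run. The actual
# approach simulates an interval timed coloured Petri net derived by
# model transformation, which also captures timing and propagation.

def simulate(task_failure_probs, runs=10_000, seed=0):
    """Estimate P(some task fails at least once in a run)."""
    rng = random.Random(seed)
    failed_runs = 0
    for _ in range(runs):
        if any(rng.random() < p for p in task_failure_probs.values()):
            failed_runs += 1
    return failed_runs / runs

# Illustrative per-task failure probabilities (invented values).
rate = simulate({"A5": 0.02, "A9": 0.05, "A17": 0.01})
```

Even this crude estimator shows why whole-system behaviour is not obvious from individual task probabilities: three small per-task risks compound to a noticeably larger process-level risk.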

15. Example: Analysis Results
 Consider one (inadequacy) failure: incorrect judgment after the task related to reviewing the investigation results (A19).
– Vulnerable tasks: A5, A9, A10/11, A17.
– Failures in these tasks propagated through to failures of A19.
– Failure in A19 was not sensitive to failures elsewhere.
– These tasks are heavily dependent on the skills of the personnel carrying them out.
– Can tasks for these personnel be streamlined? Siloed?

16. Identification Scenario

17. Identification Scenario
1. Alice enters a shop.
2. He tries to purchase alcohol from Matilda.
3. Alice shows an ID card to Matilda to confirm that he is older than 18.

18. Analysis
 We might want to check properties related to this scenario.
– If Alice is younger than 18, is his attempt to buy alcohol refused?
– If Alice is at least 18, does he buy alcohol?
 We might also want to assess the impact of new ID mechanisms (better cards, biometrics).
 There are interesting issues associated with how to model such scenarios and their critical events.

19. Identification Is Interesting
 Matilda may form a belief that Alice is under 18, thus disrupting the success scenario.
 Matilda may demand to see an ID card.
– The ID card may be damaged and thus may not represent Alice accurately.
– Matilda may form a belief about the ID card that supports her initial belief that Alice is under 18.
– Or she may form a belief about the ID card that is inconsistent with her initial belief.
 Matilda may know Alice but have reason not to sell him alcohol.

20. In a Nutshell
 Scenarios like this appear conceptually simple:
– Yes/no, boolean outcome.
– Events happen or don't happen.
– They appear to follow a tree of boolean decisions, easy to explore and simulate, e.g., using a model checker.
 The complexity is hidden.
 The outcome may be boolean (Alice buys alcohol or he doesn't), but it is derived from a number of probabilistic decisions.

21. Probabilistic Decisions
 When Alice enters the shop, Matilda forms a belief about his age.
– Abstractly, the belief is boolean: he is over 18, or he is not.
– In a different model, Matilda's belief about Alice's age is a probability distribution around 18.
– This represents Matilda's ability to discern Alice's age (and, more generally, to determine people's ages).
– This is actually hard; cf. "Identifying Age, Cohort and Period Effects in Scientific Research Productivity", NBER Working Paper No. 11739, 2006.

22. Probabilistic Decisions
 When Alice presents his ID card, Matilda forms a belief about the representation.
– Abstractly, the belief is boolean: the card accurately represents Alice, or it does not.
– In a different model, the belief is a probability distribution, taking into account the accuracy of the representation as well as effects such as recognition, tampering or damage.
 This belief is dependent on Matilda's initial belief about Alice's age.

23. Ways to Model This
 We could model this using probabilistic logic (e.g., pCSP, probabilistic refinement calculi).
 "Matilda thinks Alice is over eighteen 70% of the time, and no more than eighteen 30% of the time."
– How do you come up with the probabilities?
– An (under?) approximation; it ignores shading.
– It ignores dependencies (though conditional probabilities could be modelled).
 Model using probability distributions?
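One way to read "model using probability distributions" is to replace the flat 70%/30% assignment with a distribution over perceived age. The sketch below is an assumption-laden illustration, not the slides' actual model: it treats Matilda's perception of a customer's age as normally distributed around the true age, with the standard deviation (`sigma`, an invented parameter) standing in for how well she discerns ages:

```python
import math

# Illustrative only: Matilda's belief about a customer's age as a
# normal distribution centred on the true age. The probability that
# she judges the customer to be over 18 is the normal CDF evaluated
# at the 18-year threshold. All parameter values are invented.

def prob_judged_over_18(true_age, sigma=3.0):
    """P(perceived age > 18) for a customer of the given true age."""
    z = (true_age - 18.0) / sigma
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p20 = prob_judged_over_18(20)  # a 20-year-old: usually judged over 18
p15 = prob_judged_over_18(15)  # a 15-year-old: usually judged under 18
```

Unlike the fixed 70/30 split, this captures the "shading" the slide mentions: the judgement gets more reliable the further the true age is from the threshold, and the subsequent ID-card belief could be conditioned on it.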

24. Modelling
 Constructed a DSL for modelling scenarios.
 Key concepts:
– Subject: an individual who initiates or triggers a scenario.
– Agent: a representative (e.g., a police officer) of an organisation, who interacts with the subject.
– Organisation: owns artefacts of value.
– Cheat: a subject who is attempting to convince an agent that they hold a property that in fact they do not.
– Actor: a generic term covering all the roles above, as well as others not explicitly noted.
– Event: an atomic occurrence; probabilities and probability distributions can be attached to events.
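The role hierarchy above (every Subject, Agent and Cheat is an Actor; a Cheat is a kind of Subject) can be made concrete with a small class sketch. This is a hypothetical Python rendering for illustration; the actual DSL is built with MDE tooling, and the field names here are invented:

```python
from dataclasses import dataclass

# Hypothetical rendering of the scenario DSL's key concepts.

@dataclass
class Actor:                 # generic term covering all roles
    name: str

@dataclass
class Subject(Actor):        # initiates or triggers a scenario
    pass

@dataclass
class Agent(Actor):          # represents an organisation
    organisation: str = ""

@dataclass
class Cheat(Subject):        # claims a property they do not hold
    claimed_property: str = ""

@dataclass
class Event:                 # atomic occurrence
    label: str
    probability: float = 1.0  # or a distribution, in the richer model

# The identification scenario's cast, as model elements:
alice = Cheat("Alice", claimed_property="over 18")
matilda = Agent("Matilda", organisation="shop")
show_id = Event("show ID card", probability=0.9)
```

The payoff of the hierarchy is that analyses can be written once against `Actor` or `Subject` and apply uniformly to honest subjects and cheats alike.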

25. Prototype Tool Support
 We have built a mechanisation via a state exploration tool.
– It takes models expressed in the DSL and explores them.
– Probabilities or probability distributions are associated with events.
 It handles immediate events (e.g., a reaction), delayed events (e.g., movement), and deferred events (internal, invisible events).
 It operates in interactive, exhaustive, or deferred modes (probabilities not calculated).
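To illustrate what exhaustive exploration of such a scenario looks like in the simplest case, here is a sketch that enumerates every outcome of a sequence of probabilistic events. It is not the prototype tool: the real tool consumes DSL models, distinguishes immediate/delayed/deferred events, and supports interactive and deferred modes; the `explore` function and the event list below are invented:

```python
# Minimal sketch of exhaustive exploration over a tree of
# probabilistic events (invented structure, for illustration only).

def explore(events, prefix=(), p=1.0):
    """Yield every complete outcome path with its probability.
    `events` is a list of (label, probability-of-occurring) pairs;
    each event either happens or does not, branching the tree."""
    if not events:
        yield prefix, p
        return
    label, prob = events[0]
    rest = events[1:]
    yield from explore(rest, prefix + ((label, True),), p * prob)
    yield from explore(rest, prefix + ((label, False),), p * (1 - prob))

# The identification scenario, crushed to two events (invented numbers):
paths = list(explore([("judged over 18", 0.7), ("ID accepted", 0.9)]))
```

Exhaustive mode corresponds to walking all of `paths`; interactive mode would let a user pick one branch at each event instead.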
