SLIDE 1

Concursus

Event Sourcing Evolved

GOTO London 2016

SLIDE 2

SLIDE 3

Introductions

Dominic Fox

Twitter: @dynamic_proxy Email: dominic.fox@opencredo.com

Tareq Abedrabbo

Twitter: @tareq_abedrabbo Email: tareq.abedrabbo@opencredo.com

Concursus

Page: https://opencredo.com/publications/concursus/ Github: http://github.com/opencredo/concursus

SLIDE 4

Agenda

  • History
  • Concepts
  • Example
  • Domain model
  • Processing model
  • Programming model
  • Future directions
SLIDE 5

What is Concursus?

A toolkit for processing and organising messy data in a distributed context.

SLIDE 6

The Concursus Timeline

  • Observations
  • Conception and design
  • Prototype
  • Open-source implementation
  • Technical report and blogs

SLIDE 7

Event Sourcing

“Event Sourcing ensures that all changes to application state are stored as a sequence of events. Not just can we query these events, we can also use the event log to reconstruct past states, and as a foundation to automatically adjust the state to cope with retroactive changes.”

http://martinfowler.com/eaaDev/EventSourcing.html

SLIDE 8

What is Concursus?

Problems Concursus addresses:

  • Processing events in a scalable and reliable way
  • Processing guarantees and ordering: exactly once, out of order, repeated or missed delivery, etc.
  • Building meaningful domain models to reason about and build business logic around
  • Flexibility: building additional views as needed

SLIDE 9

Why Concursus?

Tendencies:

  • From internet of users to internet of things
  • From “presence” to “presents”
  • From monoliths to microservices

SLIDE 10

From Internet of Users to Internet of Things

SLIDE 11

From Presence to Presents

SLIDE 12

From Monoliths to Microservices

SLIDE 13

“Write First, Reason Later”

2016-10-12 09:06:31.432  Received at Depot
2016-10-12 09:06:32.106  Received at Depot
2016-10-12 09:06:34.740  Received at Depot
2016-10-12 11:35:02.163  Loaded onto Truck
2016-10-12 11:40:21.032  Loaded onto Truck
2016-10-12 11:38:51.204  Loaded onto Truck
2016-10-12 14:12:44.021  Delivery Failed
2016-10-12 15:00:31.322  Delivered
2016-10-12 15:11:05.038  Delivered

SLIDE 14

“Write First, Reason Later”

SLIDE 15

Handling Events

  • Delivery constraints: out-of-order, repeated, delayed or missed delivery
  • Processing guarantees: at-least-once or exactly-once processing, idempotency
  • Ordering: partial ordering across aggregates (with reasonable assumptions)

SLIDE 16

Data Processing Layers

  • Durable: a sufficiently durable buffer for async processing (what’s happening)
  • Persistent: a permanent record of everything that has happened (what happened)
  • Transient: fast and consistent, but also disposable, state (what happens)

SLIDE 17

Building Blocks

  • Java 8 and Kotlin: APIs
  • Cassandra: Persistent state (Event store)
  • Kafka: Durable state (Message broker)
  • Hazelcast: Transient state (cache, idempotency filters)
  • Also, RabbitMQ and Redis
SLIDE 18

Sources of Inspiration

  • Stream processing frameworks such as Apache Storm and Spark
  • Google papers: Cloud Dataflow, MillWheel
  • Apache Spark papers
  • The Axon CQRS framework
  • Domain-Driven Design
  • Functional programming

SLIDE 19

Summary

Concursus

= Event sourcing + Stream processing + Bounded contexts (DDD) + Distributed computing

SLIDE 20

Received at Depot · Loaded onto Truck · Delivered · Delivery Failed

SLIDE 21

Domain Model: Events

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

SLIDE 22

Domain Model: Events

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31 10:31:17.981
parameters: { “depotId”: “Lewisham” }

SLIDE 23

Domain Model: Events

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-38 08:15:23.104
parameters: { “truckId”: “J98 257” }

SLIDE 24

Domain Model: Events

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31T10:36:42.171Z
processingTimestamp: 2016-03-31T10:36:48.3904Z
parameters: { “deliveryAddress”: “123 Sudbury Avenue, Droitwich DR4 8PQ” }

SLIDE 25

Domain Model: Summary

Every Event occurs to an Aggregate, identified by its type and id. Every Event has an eventTimestamp, generated by the source of the event. An Event History is a log of Events, ordered by eventTimestamp, with an additional processingTimestamp which records when the Event was captured.
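The event shape summarised above can be sketched as a small value class. This is an illustrative sketch, not the actual Concursus API; the class and field names are assumptions based on the slide's description.

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.Map;

// Illustrative sketch (not the real Concursus API): every event occurs to
// an aggregate identified by type and id, carries a source-assigned
// eventTimestamp, and gains a processingTimestamp when captured.
final class DomainEvent {
    final String aggregateType;
    final String aggregateId;
    final Instant eventTimestamp;       // assigned by the source of the event
    final Instant processingTimestamp;  // assigned when the event is captured
    final Map<String, String> parameters;

    DomainEvent(String aggregateType, String aggregateId,
                Instant eventTimestamp, Instant processingTimestamp,
                Map<String, String> parameters) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.eventTimestamp = eventTimestamp;
        this.processingTimestamp = processingTimestamp;
        this.parameters = parameters;
    }

    // Event histories are ordered by eventTimestamp, not processingTimestamp.
    static final Comparator<DomainEvent> HISTORY_ORDER =
            Comparator.comparing((DomainEvent e) -> e.eventTimestamp);
}
```

Note that an event captured late (a large processingTimestamp) can still sort early in the history if its eventTimestamp is early.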

SLIDE 26

Processing Model: Ordering

[Diagram: event sources → network → event processors]

Events arrive:

  • Partitioned
  • Interleaved
  • Out-of-order

SLIDE 27

Processing Model: Ordering

Log is:

  • Partitioned by aggregate id
  • Ordered by event timestamp

SLIDE 28

Cassandra Schema

CREATE TABLE IF NOT EXISTS concursus.Event (
    aggregateType text,
    aggregateId text,
    eventTimestamp timestamp,
    streamId text,
    processingId timeuuid,
    name text,
    version text,
    parameters map<text, text>,
    characteristics int,
    PRIMARY KEY ((aggregateType, aggregateId), eventTimestamp, streamId)
) WITH CLUSTERING ORDER BY (eventTimestamp DESC);

SLIDE 29

Cassandra & AMQP

[Diagram: log events to the Cassandra event store; publish events to a RabbitMQ topic for downstream processing]

SLIDE 30

Cassandra & AMQP

[Diagram: Cassandra event store, RabbitMQ topic, downstream processing]

  • Out-of-order events
  • Ordered query results

SLIDE 31

Cassandra & Kafka

[Diagram: publish events to a Kafka topic; an event store listener logs them to the Cassandra event store, with downstream processing reading from the topic]

SLIDE 32

Processing Model: Summary

Events arrive partitioned, interleaved and out-of-order. Events are sorted into event histories by aggregate type and id. Events are sorted within event histories by event timestamp, not processing timestamp. Event consumers need to take into account the possibility that an event history may be incomplete at the time it is read – consider using a watermark to give incoming events time to “settle”.
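The watermark idea mentioned above can be sketched as a simple check: a reader treats an event history as settled only up to now minus a settle window, so late-arriving events still have time to be written before they are read. This is an assumed helper for illustration, not part of the Concursus API.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of a watermark check (assumed helper, not a Concursus API):
// events newer than (now - settleWindow) may still be joined by late
// arrivals, so a reader should not yet treat that part of the history
// as complete.
final class Watermark {
    static boolean isSettled(Instant eventTimestamp, Instant now, Duration settleWindow) {
        Instant watermark = now.minus(settleWindow);
        // Events at or before the watermark are considered settled.
        return !eventTimestamp.isAfter(watermark);
    }
}
```

The settle window trades latency for completeness: a larger window means readers see histories later, but with fewer events arriving after the fact.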

SLIDE 33

Programming Model: Core Metaphor

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

SLIDE 34

Programming Model: Core Metaphor

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered

Consumer<Event>

SLIDE 35

Emitting Events

You give me a Consumer<Event>, and I send Events to it one at a time.

SLIDE 36

Handling Events

I implement Consumer<Event>, and handle Events that are sent to me.
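The two sides of the core metaphor on slides 35 and 36 can be sketched together. The `Event` type here is simplified to a name for illustration; the real Concursus event also carries aggregate identity and timestamps.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of the core metaphor with a simplified event type
// (assumed for illustration; not the real Concursus Event).
final class CoreMetaphor {
    static final class Event {
        final String name;
        Event(String name) { this.name = name; }
    }

    // Emitting: given a Consumer<Event>, send events to it one at a time.
    static void emit(List<Event> events, Consumer<Event> handler) {
        events.forEach(handler);
    }

    // Handling: any Consumer<Event> can receive events; this one records names.
    static Consumer<Event> recordingHandler(List<String> seen) {
        return event -> seen.add(event.name);
    }
}
```

Because both sides share the one `Consumer<Event>` interface, emitters and handlers can be developed and tested independently.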

SLIDE 37

Java 8 Mapping
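The code on the "Java 8 Mapping" slides was embedded as images in the original deck and did not survive extraction. As a stand-in, here is one way the parcel events might be mapped onto a plain Java 8 interface, one method per event type; the interface and all names are assumptions for illustration, not the actual Concursus mapping API.

```java
// Stand-in for the slide's lost code: parcel events mapped onto a plain
// Java 8 interface, one method per event type. All names here are assumed
// for illustration; Concursus defines its own mapping conventions.
interface ParcelEventHandler {
    void receivedAtDepot(String depotId);
    void loadedOntoTruck(String truckId);
    void delivered(String destinationId);
    void deliveryFailed();
}

// An example handler that turns each event into a human-readable line.
final class DescribingHandler implements ParcelEventHandler {
    final StringBuilder log = new StringBuilder();

    public void receivedAtDepot(String depotId) {
        log.append("Received at depot: ").append(depotId).append('\n');
    }
    public void loadedOntoTruck(String truckId) {
        log.append("Loaded onto truck: ").append(truckId).append('\n');
    }
    public void delivered(String destinationId) {
        log.append("Delivered to: ").append(destinationId).append('\n');
    }
    public void deliveryFailed() {
        log.append("Delivery failed").append('\n');
    }
}
```

This mirrors the Kotlin mapping on slides 40-42, where the same four event types appear as a sealed class hierarchy.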

SLIDE 38

Java 8 Mapping

SLIDE 39

Java 8 Mapping

SLIDE 40

Kotlin Mapping

sealed class ParcelEvent {
    class ReceivedAtDepot(val depotId: String) : ParcelEvent()
    class LoadedOntoTruck(val truckId: String) : ParcelEvent()
    class Delivered(val destinationId: String) : ParcelEvent()
    class DeliveryFailed : ParcelEvent()
}

SLIDE 41

Kotlin Mapping

eventBus.dispatchTo(parcelId,
    ReceivedAtDepot(depotId = "Lewisham Depot") at start,
    LoadedOntoTruck(truckId = "Truck CU50 ZCV") at start.plus(2, DAYS)
)

SLIDE 42

Kotlin Mapping

fun describeEvent(event: ParcelEvent): Unit = when (event) {
    is ReceivedAtDepot -> println("Received at depot: ${event.depotId}")
    is LoadedOntoTruck -> println("Loaded onto truck: ${event.truckId}")
    is Delivered -> println("Delivered to: ${event.destinationId}")
    is DeliveryFailed -> println("Delivery failed")
}

SLIDE 43

Event-Handling Middleware

Event-handling middleware is a chain of Consumer<Event>s that transforms, routes, persists and dispatches events. A single event submitted to this chain may be:

  • Checked against an idempotency filter (e.g. a Hazelcast distributed cache)
  • Serialised to JSON
  • Written to a message queue topic
  • Retrieved from the topic and deserialised
  • Persisted to an event store (e.g. Cassandra)
  • Published to an event handler which maintains a query-optimised view of part of the system
  • Published to an event handler which maintains an index of aggregates by event property values (e.g. lightbulbs by wattage)
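The idempotency step in a chain like the one above can be sketched as a wrapping Consumer. A plain in-memory HashSet stands in for the Hazelcast distributed cache, and the class and method names are assumed for illustration.

```java
import java.util.Set;
import java.util.function.Consumer;

// Sketch of one link in an event-handling middleware chain: an idempotency
// filter that wraps a downstream Consumer so each event id is processed at
// most once. A plain Set stands in here for the Hazelcast distributed
// cache; the names are assumed for illustration.
final class IdempotencyFilter {
    static <T> Consumer<T> around(Set<T> seenIds, Consumer<T> downstream) {
        return eventId -> {
            // Set.add returns false when the id was already present,
            // so repeated deliveries are silently dropped.
            if (seenIds.add(eventId)) {
                downstream.accept(eventId);
            }
        };
    }
}
```

Because every link is a Consumer, the rest of the chain (serialise, publish, persist, dispatch) can be composed the same way, e.g. with Consumer.andThen.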

SLIDE 44

Future Directions

  • Kafka Streams
  • Narrative threads across event histories
  • Generic attribute indexing
  • State management and caching
  • Improved cloud tooling

SLIDE 45

Thank you for listening Any questions?

SLIDE 46
SLIDE 47

Three Processing Schedules

1. Transient - what happens
2. Durable - what’s happening
3. Persistent - what happened

SLIDE 51

“Write First, Reason Later”