Concursus
Event Sourcing Evolved
GOTO London 2016
Dominic Fox
Twitter: @dynamic_proxy Email: dominic.fox@opencredo.com
Tareq Abedrabbo
Twitter: @tareq_abedrabbo Email: tareq.abedrabbo@opencredo.com
Concursus
Page: https://opencredo.com/publications/concursus/ Github: http://github.com/opencredo/concursus
A toolkit for processing and organising messy data in a distributed context.
Observations → Conception and design → Prototype → Open source implementation → Technical report and blogs
“Event Sourcing ensures that all changes to application state are stored as a sequence of events. Not just can we query these events, we can also use the event log to reconstruct past states, and as a foundation to automatically adjust the state to cope with retroactive changes.”
http://martinfowler.com/eaaDev/EventSourcing.html
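The quoted definition can be made concrete with a minimal sketch (illustrative names only, not the Concursus API): current state is a left fold over the stored event log, and replaying only a prefix of the log reconstructs any past state.

```kotlin
// Minimal event-sourcing sketch: state is a fold over the event log.
// Event names and states here are illustrative, not part of Concursus.
fun applyEvent(state: String, event: String): String = when (event) {
    "ReceivedAtDepot" -> "at depot"
    "LoadedOntoTruck" -> "on truck"
    "Delivered"       -> "delivered"
    else              -> state
}

fun replay(log: List<String>): String =
    log.fold("created") { state, event -> applyEvent(state, event) }

fun main() {
    val log = listOf("ReceivedAtDepot", "LoadedOntoTruck", "Delivered")
    println(replay(log))          // current state: delivered
    println(replay(log.take(2)))  // reconstructed past state: on truck
}
```

Because the log is the source of truth, any past state is just a replay of a shorter log.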
Problems Concursus addresses:
✓ Processing events in a scalable and reliable way
✓ Processing guarantees and ordering: exactly once, out of order, repeated or missed delivery, etc.
✓ Building meaningful domain models to reason about, and build business logic around
✓ Flexibility: building additional views as needed
Tendencies:
2016-10-12 09:06:31.432  Received at Depot
2016-10-12 09:06:32.106  Received at Depot
2016-10-12 09:06:34.740  Received at Depot
2016-10-12 11:35:02.163  Loaded onto Truck
2016-10-12 11:40:21.032  Loaded onto Truck
2016-10-12 11:38:51.204  Loaded onto Truck
2016-10-12 14:12:44.021  Delivery Failed
2016-10-12 15:00:31.322  Delivered
2016-10-12 15:11:05.038  Delivered
✓ Delivery constraints
✓ Processing guarantees: at-least-once or exactly-once processing, idempotency
✓ Ordering: partial ordering across aggregates (with reasonable assumptions)

✓ Durable: a sufficiently durable buffer for async processing (what’s happening)
✓ Persistent: a permanent record of everything that has happened (what happened)
✓ Transient: fast and consistent, but also disposable, state (what happens)
Influences:
- Stream processing frameworks such as Apache Storm and Spark
- Google papers: Cloud Dataflow, MillWheel
- Apache Spark papers
- The Axon CQRS framework
- Domain-Driven Design
- Functional programming
Concursus
= Event sourcing + Stream processing + Bounded contexts (DDD) + Distributed computing
Event types: Received at Depot, Loaded onto Truck, Delivered, Delivery Failed

[Diagram: a parcel’s event history — Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered]
aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31 10:31:17.981
parameters: { "depotId": "Lewisham" }
aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-04-02 08:15:23.104
parameters: { "truckId": "J98 257" }
aggregateType: parcel
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31T10:36:42.171Z
processingTimestamp: 2016-03-31T10:36:48.3904Z
parameters: { "deliveryAddress": "123 Sudbury Avenue, Droitwich DR4 8PQ" }

Received at Depot → Loaded onto Truck → Delivery Failed → Received at Depot → Loaded onto Truck → Delivered
Every Event occurs to an Aggregate, identified by its type and id. Every Event has an eventTimestamp, generated by the source of the event. An Event History is a log of Events, ordered by eventTimestamp, with an additional processingTimestamp which records when the Event was captured.
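The Event shape described above can be sketched as a plain data class. Field names follow the slides, but this is an illustration, not the actual Concursus event type.

```kotlin
import java.time.Instant

// Illustrative shape of an Event as described above (not the Concursus class).
data class AggregateId(val aggregateType: String, val aggregateId: String)

data class Event(
    val aggregate: AggregateId,       // every Event occurs to an Aggregate
    val eventTimestamp: Instant,      // generated by the source of the event
    val processingTimestamp: Instant, // recorded when the Event was captured
    val name: String,
    val parameters: Map<String, String>
)

// An Event History: the Events for one Aggregate, ordered by eventTimestamp.
fun eventHistory(events: List<Event>, aggregate: AggregateId): List<Event> =
    events.filter { it.aggregate == aggregate }.sortedBy { it.eventTimestamp }
```

Note that ordering is by the source-assigned eventTimestamp; the processingTimestamp only records capture time.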
[Diagram: Event sources → Network → Event processors]
[Diagram: events arrive interleaved and out of order; the log is ordered by aggregate and event timestamp]
CREATE TABLE IF NOT EXISTS concursus.Event (
    aggregateType text,
    aggregateId text,
    eventTimestamp timestamp,
    streamId text,
    processingId timeuuid,
    name text,
    version text,
    parameters map<text, text>,
    characteristics int,
    PRIMARY KEY ((aggregateType, aggregateId), eventTimestamp, streamId)
) WITH CLUSTERING ORDER BY (eventTimestamp DESC);
[Diagram: log events to a Cassandra Event Store; publish events to a RabbitMQ Topic for downstream processing]
[Diagram: log events to a Cassandra Event Store; an event store listener publishes events to a Kafka Topic for downstream processing]
Events arrive partitioned, interleaved and out-of-order. Events are sorted into event histories by aggregate type and id. Events are sorted within event histories by event timestamp, not processing timestamp. Event consumers need to take into account the possibility that an event history may be incomplete at the time it is read – consider using a watermark to give incoming events time to “settle”.
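One way to sketch the “settling” idea (illustrative only, not the Concursus API): treat an event history as safe to read only once nothing has been captured for it within the watermark interval, so late-arriving events can still slot into place, and always read it ordered by event timestamp rather than arrival order.

```kotlin
import java.time.Duration
import java.time.Instant

// Illustrative watermark check, not part of the Concursus API.
data class TimedEvent(val eventTimestamp: Instant, val processingTimestamp: Instant)

// A history has "settled" if no event was *captured* within the watermark
// interval before `now`, giving out-of-order arrivals time to land.
fun settled(history: List<TimedEvent>, watermark: Duration, now: Instant): Boolean =
    history.none { it.processingTimestamp.isAfter(now.minus(watermark)) }

// Consumers read the history ordered by event timestamp, not processing timestamp.
fun ordered(history: List<TimedEvent>): List<TimedEvent> =
    history.sortedBy { it.eventTimestamp }
```

The watermark length is a trade-off between read latency and the chance of reading an incomplete history.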
You give me a Consumer<Event>, and I send Events to it one at a time:
I implement Consumer<Event>, and handle Events that are sent to me.
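The two roles above are the two sides of `java.util.function.Consumer<Event>`. A minimal sketch, with `Event` as a stand-in type rather than the real Concursus event class:

```kotlin
import java.util.function.Consumer

// Stand-in event type for illustration only.
data class Event(val name: String)

// "I implement Consumer<Event>, and handle Events that are sent to me."
class RecordingHandler : Consumer<Event> {
    val seen = mutableListOf<String>()
    override fun accept(event: Event) {
        seen += event.name
    }
}

// "You give me a Consumer<Event>, and I send Events to it one at a time."
fun dispatch(events: List<Event>, downstream: Consumer<Event>) =
    events.forEach { downstream.accept(it) }
```

Because both sides share one small interface, dispatchers and handlers compose freely.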
sealed class ParcelEvent {
    class ReceivedAtDepot(val depotId: String) : ParcelEvent()
    class LoadedOntoTruck(val truckId: String) : ParcelEvent()
    class Delivered(val destinationId: String) : ParcelEvent()
    class DeliveryFailed : ParcelEvent()
}
eventBus.dispatchTo(parcelId,
    ReceivedAtDepot(depotId = "Lewisham Depot") at start,
    LoadedOntoTruck(truckId = "CU50 ZCV") at start.plus(2, DAYS)
)
fun describeEvent(event: ParcelEvent): Unit = when (event) {
    is ReceivedAtDepot -> println("Received at depot: ${event.depotId}")
    is LoadedOntoTruck -> println("Loaded onto truck: ${event.truckId}")
    is Delivered -> println("Delivered to: ${event.destinationId}")
    is DeliveryFailed -> println("Delivery failed")
}
Event-handling middleware is a chain of Consumer<Event>s that transforms, routes, persists and dispatches events. A single event submitted to this chain may be:
■ Checked against an idempotency filter (e.g. a Hazelcast distributed cache)
■ Serialised to JSON
■ Written to a message queue topic
■ Retrieved from the topic and deserialised
■ Persisted to an event store (e.g. Cassandra)
■ Published to an event handler which maintains a query-optimised view of part of the system
■ Published to an event handler which maintains an index of aggregates by event property values (e.g. lightbulbs by wattage)
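A chain like this can be sketched as consumers wrapping consumers. Everything below is illustrative: an in-memory set stands in for the Hazelcast idempotency cache, and plain lists stand in for the event store and the query-optimised view.

```kotlin
import java.util.function.Consumer

// Stand-in event type for illustration only.
data class Event(val id: String, val name: String)

// Each middleware stage is a Consumer<Event> that does its own work and
// forwards to the next stage. The set stands in for a distributed cache.
fun idempotencyFilter(next: Consumer<Event>): Consumer<Event> {
    val seen = mutableSetOf<String>()
    return Consumer { event -> if (seen.add(event.id)) next.accept(event) }
}

// Publish one event to several downstream handlers.
fun fanOut(vararg handlers: Consumer<Event>): Consumer<Event> =
    Consumer { event -> handlers.forEach { it.accept(event) } }

fun main() {
    val persisted = mutableListOf<Event>() // stands in for the event store
    val view = mutableListOf<Event>()      // stands in for a query-optimised view
    val chain = idempotencyFilter(fanOut(
        Consumer { persisted += it },
        Consumer { view += it }
    ))
    chain.accept(Event("1", "receivedAtDepot"))
    chain.accept(Event("1", "receivedAtDepot")) // repeated delivery: filtered out
    println(persisted.size) // 1
}
```

In a real deployment the serialise/queue/deserialise stages sit between the filter and the handlers, but they keep the same Consumer-to-Consumer shape.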