LMAX Disruptor: 100K TPS at less than 1ms latency (Dave Farley and Martin Thompson, GOTO Århus 2011)



SLIDE 1

LMAX Disruptor:

100K TPS at less than 1ms latency

Dave Farley and Martin Thompson, GOTO Århus 2011

SLIDE 2

LMAX History

  • Spin-off from Betfair, the world’s largest sports betting exchange
  • Massive throughput and customer numbers
  • LMAX has the fastest order execution for retail trading
  • Institutional market makers providing committed liquidity
  • Real-time risk management of retail customers

SLIDE 3

How not to solve this problem

Approaches tried and rejected (each crossed out on the slide):

  • J2EE
  • Actor
  • SEDA
  • Rails
  • RDBMS

SLIDE 4

Tips for high performance computing

1. Show good “Mechanical Sympathy”
2. Keep the working set in-memory
3. Write cache friendly code
4. Write clean compact code
5. Invest in modelling your domain
6. Take the right approach to concurrency

SLIDE 5

1. Mechanical Sympathy

Is it really “turtles all the way down”? What is under all these layers of abstraction?

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.”

  • Henry Petroski
SLIDE 6

2. Keep the working set in-memory

Does it feel awkward working with data remote from your address space?

  • Keep data and behaviour co-located
  • Affords rich interaction at low latency
  • Enabled by 64-bit addressing
SLIDE 7

3. Write cache friendly code

[Diagram: two sockets, each with four cores (C1–C4), per-core L1 and L2 caches, a shared L3, a memory controller (MC) and DRAM banks, linked by QPI.]

Approximate access costs:

  • Registers: <1ns
  • L1: ~4 cycles, ~1ns
  • L2: ~12 cycles, ~3ns
  • L3: ~45 cycles, ~15ns
  • QPI hop: ~20ns
  • DRAM: ~65ns
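Slide 7’s point can be sketched with a toy example (my illustration, not from the talk): the same sum over a 2-D array in row-major versus column-major order. Java stores `int[][]` row by row, so the row-major loop touches consecutive addresses and amortises each cache-line fill over many elements.

```java
public class CacheFriendly {
    // Row-major: the inner loop walks consecutive addresses, so one 64-byte
    // cache-line fill serves 16 ints before the next miss.
    static long sumRowMajor(int[][] m) {
        long sum = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                sum += m[i][j];
        return sum;
    }

    // Column-major: each access jumps a whole row ahead, so for large matrices
    // almost every read can miss the cache.
    static long sumColMajor(int[][] m) {
        long sum = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                sum += m[i][j];
        return sum;
    }

    public static void main(String[] args) {
        int n = 2048;
        int[][] m = new int[n][n];
        for (int[] row : m) java.util.Arrays.fill(row, 1);

        long t0 = System.nanoTime();
        long a = sumRowMajor(m);
        long t1 = System.nanoTime();
        long b = sumColMajor(m);
        long t2 = System.nanoTime();

        // Same answer; on most hardware the row-major pass is several times faster.
        System.out.println(a + " in " + (t1 - t0) + "ns vs " + b + " in " + (t2 - t1) + "ns");
    }
}
```

Both loops do identical arithmetic; only the memory access pattern differs, which is exactly the cost the latency numbers above are measuring.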

SLIDE 8

4. Write clean compact code

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction.”

  • HotSpot likes small compact methods
  • CPU pipelines stall if they cannot predict branches
  • If your code is complex you probably do not sufficiently understand the problem domain
  • “Nothing in the world is truly complex other than Tax Law”

SLIDE 9

5. Invest in modelling your domain

  • Single responsibility – one class one thing, one method one thing, etc.
  • Know your data structures and cardinality of relationships
  • Let the relationships do the work

[Diagram: a model of an elephant based on blind men touching one part each – “like a” Wall, Snake, Rope or TreeTrunk, depending on the part touched.]

SLIDE 10

6. Take the right approach to concurrency

Concurrent programming is about two things:

  • Mutual exclusion: protect access to contended resources
  • Visibility of changes: make the results public in the correct order

Locks:

  • Context switch to the kernel
  • Can always make progress
  • Difficult to get right

Atomic/CAS instructions:

  • Atomic read-modify-write primitives
  • Happen in user space
  • Very difficult to get right!
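The two approaches can be made concrete with a sketch (names are mine, not LMAX code): a lock-based counter that takes a monitor, next to a lock-free counter that retries `AtomicLong.compareAndSet` entirely in user space.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Locks: mutual exclusion via a monitor; contended entry may context
    // switch into the kernel, but the holder can always make progress.
    static class LockedCounter {
        private long value;
        synchronized long increment() { return ++value; }
    }

    // Atomic/CAS: a read-modify-write retry loop in user space; the
    // compareAndSet fails and retries if another thread got in first.
    static class CasCounter {
        private final AtomicLong value = new AtomicLong();
        long increment() {
            long current;
            do {
                current = value.get();
            } while (!value.compareAndSet(current, current + 1));
            return current + 1;
        }
        long get() { return value.get(); }
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) counter.increment();
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 400000: no lost updates
    }
}
```

The retry loop illustrates why CAS is “very difficult to get right”: correctness depends on the whole read-modify-write cycle being revalidated, not just the write.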

SLIDE 11

What is possible when you get this stuff right?

On a single thread you have ~3 billion instructions per second to play with:

10K+ TPS

  • If you don’t do anything too stupid

100K+ TPS

  • With well organised clean code and standard libraries

1m+ TPS

  • With custom cache friendly collections
  • Good performance tests
  • Controlled garbage creation
  • Very well modelled domain

  • BTW writing good performance tests is often harder than the target code!!!
SLIDE 12

How to address the other non-functional concerns?

  • With a very fast business logic thread we need to feed it reliably

[Diagram: pipelined process, each stage can have multiple threads: Network → Receiver → Un-Marshaller → {Replicator → HA / DR Nodes; Journaller → File System} → Business Logic → Marshaller → Publisher → Network / Archive DB]

SLIDE 13

Concurrent access to queues – the issues

Linked-list backed (Head → Node → Node → … → Tail):

  • Hard to limit size
  • O(n) access times if not head or tail
  • Generates garbage which can be significant

Array backed:

  • Cannot resize easily
  • O(1) access times for any slot, and cache friendly

Either way:

  • Difficult to get *P *C (multi-producer / multi-consumer) correct
  • Head, tail and size can contend on the same cache line
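The array-backed alternative can be sketched as a minimal ring buffer (my simplification for single-threaded use only; the real Disruptor adds sequences and memory barriers for cross-thread visibility, and padding against false sharing). A power-of-two capacity turns the slot lookup into a cheap bit mask, and reusing slots avoids the per-node garbage of a linked list.

```java
public class RingBuffer<T> {
    private final Object[] slots;
    private final int mask;
    private long head; // next slot to read  (consumer side)
    private long tail; // next slot to write (producer side)

    RingBuffer(int capacity) {
        if (Integer.bitCount(capacity) != 1)
            throw new IllegalArgumentException("capacity must be a power of two");
        slots = new Object[capacity];
        mask = capacity - 1; // power of two => index = sequence & mask
    }

    boolean offer(T value) {
        if (tail - head == slots.length) return false; // full: bounded by design
        slots[(int) (tail++ & mask)] = value;          // O(1), cache-friendly slot
        return true;
    }

    @SuppressWarnings("unchecked")
    T poll() {
        if (head == tail) return null;                 // empty
        int index = (int) (head++ & mask);
        T value = (T) slots[index];
        slots[index] = null;                           // release reference; no node garbage
        return value;
    }
}
```

Head and tail are ever-increasing longs, so fullness is just `tail - head == capacity`; only the mask wraps them onto slots.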

SLIDE 14

Disruptor in Action

[Diagram: ring buffers (slots 1..n) connecting Network Receiver → Un-Marshaller → {Replicator, Journaller} → Business Logic (Invoke Stage) → Marshaller → Publisher → Network / Archive DB. Events carry :sequence, :object and :buffer fields. Consumers gate on the minimum of upstream sequences: e.g. long waitFor(n) returns 101 (:MIN) on one buffer and 97 (:MIN) on the other.]
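The waitFor(n) gating on the slide can be sketched in miniature (an assumed simplification with one cursor and a busy-spin wait; the real framework pads sequences against false sharing and offers several wait strategies): a consumer spins until the publisher’s sequence reaches n, and gets back the highest published sequence so it can consume a whole batch.

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequenceDemo {
    // A published-sequence cursor; lazySet gives an ordered store, so the
    // sequence becomes visible only after the slot it guards was written.
    static class Sequence {
        private final AtomicLong value = new AtomicLong(-1);
        long get() { return value.get(); }
        void publish(long v) { value.lazySet(v); }
    }

    // Consumer gate: spin until sequence n is available, then return the
    // highest published sequence so the caller can process the batch n..result.
    static long waitFor(Sequence cursor, long n) {
        long available;
        while ((available = cursor.get()) < n)
            Thread.onSpinWait(); // busy-spin wait strategy
        return available;
    }

    public static void main(String[] args) {
        Sequence cursor = new Sequence();
        cursor.publish(101);                     // publisher has filled slots 0..101
        System.out.println(waitFor(cursor, 97)); // prints 101: consume 97..101 as a batch
    }
}
```

Returning the highest available sequence rather than just n is what enables the batching effect: a consumer that fell behind catches up in one pass.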

SLIDE 15

Disruptor – Concurrent Programming Framework

Open Source project: http://code.google.com/p/disruptor/

  • Very active with ever increasing performance and functionality
  • Wide applicability

> Exchanges/Auctions, Risk Models, Network Probes, Market Data Processing, etc.

How do we take advantage of multi-core?

  • Pin threads to cores for specific steps in a workflow or process
  • Pass messages/events between cores with “Mechanical Sympathy”
  • Understand that a “cache miss” is the biggest cost in HPC
  • Measure! Don’t believe everything you read and hear

> Let’s bring back science in computer science!

SLIDE 16

The Disruptor Pattern

[Diagram: Publishers → Sequencer → Ring Buffer <Events> → Sequence Barriers → EventProcessors, with a CPU core per thread.]

SLIDE 17

Wrap Up

http://code.google.com/p/disruptor/
http://www.davefarley.net/
http://mechanical-sympathy.blogspot.com/
jobs@lmax.com