Acceptance Testing for Continuous Delivery - Dave Farley - PowerPoint PPT Presentation


SLIDE 1

Acceptance Testing for Continuous Delivery

Dave Farley

http://www.davefarley.net
@davefarley77
http://www.continuous-delivery.co.uk

SLIDE 2

The Role of Acceptance Testing

Local Dev. Env.

Source Repository

SLIDE 3

The Role of Acceptance Testing

[Diagram: the Deployment Pipeline. Work flows from the Local Dev. Env. into the Source Repository and through the pipeline stages: Commit, Acceptance, Manual, Perf1 (Component Performance), Perf2 (System Performance), Staged, Production. Built artifacts are stored in the Artifact Repository, and Deployment Apps deploy them into the Staging Env., Manual Test Env., and Production Env.]

SLIDE 6

What is Acceptance Testing?

  • Asserts that the code does what the users want.
  • An automated “definition of done”.
  • Asserts that the code works in a “production-like” test environment.
  • A test of the deployment and configuration of a whole system.
  • Provides timely feedback on stories - closes a feedback loop.
  • Known variously as Acceptance Testing, ATDD, BDD, Specification by Example, Executable Specifications.

SLIDE 7

What is Acceptance Testing?

A Good Acceptance Test is:

An Executable Specification of the Behaviour of the System

SLIDE 8

What is Acceptance Testing?

[Diagram: the development loop - Idea, Executable spec., Code, Unit Test, Build, Release.]

SLIDE 11

So What’s So Hard?

  • Tests break when the SUT changes (particularly the UI)
  • Tests are complex to develop
  • This is a problem of design - the tests are too tightly coupled to the SUT!
  • History is littered with poor implementations, each an Anti-Pattern!:
  • UI record-and-playback systems
  • Record-and-playback of production data
  • Dumps of production data to test systems
  • Nasty automated testing products

SLIDE 13

Who Owns the Tests?

  • Anyone can write a test
  • Developers are the people who will break tests
  • Therefore developers own the responsibility to keep them working
  • A separate Testing/QA team owning automated tests - Anti-Pattern!

SLIDE 15

Who Owns the Tests?

Developers Own Acceptance Tests!

SLIDE 16

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 18

“What” not “How”

[Diagram series: the system’s public end-points - Public API, FIX API, Trade Reporting Gateway, UI - serve API Traders, Market Makers, UI Traders, Clearing Destination and other external end-points. At first every test case couples directly to an end-point, multiplying test cases per channel; slide by slide the picture is refactored so that all test cases drive the system through a shared layer of channel drivers and External Stubs (API, FIX-API, UI) - test infrastructure common to all acceptance tests.]

SLIDE 26

“What” not “How” - Separate Deployment from Testing

  • Every test should control its start conditions, and so should start and init the app.
  • Acceptance test deployment should be a rehearsal for production release.
  • This separation of concerns provides an opportunity for optimisation:
  • Parallel tests in a shared environment
  • Lower test start-up overhead

Anti-Pattern!

SLIDE 28

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 30

Test Isolation

  • Any form of testing is about evaluating something in controlled circumstances
  • Isolation works on multiple levels:
  • Isolating the system under test
  • Isolating test cases from each other
  • Isolating test cases from themselves (temporal isolation)
  • Isolation is a vital part of your Test Strategy
SLIDE 31

Test Isolation - Isolating the System Under Test

[Diagram series: External System ‘A’ and External System ‘C’ communicate with the System Under Test ‘B’. Testing ‘B’ through the real external systems leaves the result at the mercy of things you don’t control - Anti-Pattern! Instead, cut ‘B’ free at its boundaries: drive it with Test Cases and assert on Verifiable Output.]

SLIDE 38

Test Isolation - Validating The Interfaces

[Diagram series: with ‘B’ isolated, each system is tested the same way at its own boundary - External System ‘A’, System Under Test ‘B’, and External System ‘C’ each get their own Test Cases and Verifiable Output at the shared interfaces.]
SLIDE 43

Test Isolation - Isolating Test Cases

  • Assuming multi-user systems…
  • Tests should be efficient - we want to run LOTS!
  • What we really want is to deploy once, and run LOTS of tests
  • So we must avoid ANY dependencies between tests…
  • Use natural functional isolation, e.g.:
  • If testing Amazon, create a new account and a new book/product for every test-case
  • If testing eBay, create a new account and a new auction for every test-case
  • If testing GitHub, create a new account and a new repository for every test-case
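The pattern above - deploy once, give every test case its own entities - can be sketched as a tiny piece of test infrastructure. A minimal Python sketch; the `Store` class and account naming scheme are invented for illustration, not from the talk:

```python
import itertools
import unittest

_ids = itertools.count(1)

class Store:
    """Hypothetical system under test: one shared, multi-user deployment."""
    def __init__(self):
        self.accounts = {}

    def create_account(self, name):
        self.accounts[name] = {"products": []}
        return name

    def add_product(self, account, product):
        self.accounts[account]["products"].append(product)

SHARED_STORE = Store()  # deployed once, shared by all tests

def fresh_account():
    # Natural functional isolation: every test case gets its own
    # account, so tests can run in parallel with no shared state.
    return SHARED_STORE.create_account(f"acct-{next(_ids)}")

class OrderTests(unittest.TestCase):
    def test_new_account_starts_empty(self):
        acct = fresh_account()
        self.assertEqual(SHARED_STORE.accounts[acct]["products"], [])

    def test_product_visible_only_in_own_account(self):
        acct = fresh_account()
        SHARED_STORE.add_product(acct, "Continuous Delivery")
        self.assertEqual(SHARED_STORE.accounts[acct]["products"],
                         ["Continuous Delivery"])
```

Each test touches only entities it created itself, so no test can break another by mutating shared data.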
SLIDE 44

Test Isolation - Temporal Isolation

  • We want repeatable results
  • If I run my test-case twice it should work both times

def test_should_place_an_order(self):
    self.store.createBook("Continuous Delivery")
    order = self.store.placeOrder(book="Continuous Delivery")
    self.store.assertOrderPlaced(order)

[Diagram: behind the scenes each run creates a unique book - “Continuous Delivery1234”, “Continuous Delivery6789” - so the test is repeatable.]

  • Alias your functional isolation entities
  • In your test case create account ‘Dave’; in reality, in the test infrastructure, ask the application to create account ‘Dave2938472398472’ and alias it to ‘Dave’ in your test infrastructure.
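The aliasing idea above amounts to a thin mapping layer in the test infrastructure: the test case says ‘Dave’, the infrastructure creates a unique real name and remembers the association. A minimal sketch, assuming a `uuid`-suffix naming scheme (the suffix format and `FakeApp` are illustrative):

```python
import uuid

class AccountDriver:
    """Test-infrastructure layer that aliases entity names.

    The test case uses a stable, readable name ('Dave'); on each run
    the driver creates a unique real name in the application, so the
    same test works every time against a shared, long-lived system.
    """
    def __init__(self, app):
        self.app = app
        self.aliases = {}

    def create_account(self, alias):
        real_name = f"{alias}-{uuid.uuid4().hex[:12]}"  # unique per run
        self.app.create_account(real_name)
        self.aliases[alias] = real_name
        return real_name

    def real_name(self, alias):
        return self.aliases[alias]

class FakeApp:
    """Stand-in for the real application's account API."""
    def __init__(self):
        self.accounts = set()

    def create_account(self, name):
        assert name not in self.accounts, "duplicate account"
        self.accounts.add(name)

# Running the 'same' test twice works both times: each run creates a
# distinct real account behind the stable alias 'Dave'.
app = FakeApp()
run1 = AccountDriver(app); run1.create_account("Dave")
run2 = AccountDriver(app); run2.create_account("Dave")
```

The test script never changes; only the infrastructure-generated real names differ between runs.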

SLIDE 53

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 55

Repeatability - Test Doubles

[Diagram series: in Production, the system reaches the External System through a Local Interface to External System and its communications link. In the Test Environment the same Local Interface is wired instead to a TestStub Simulating External System; which implementation is used is a matter of Configuration.]
SLIDE 60

Test Doubles As Part of Test Infrastructure

[Diagram series: the TestStub Simulating External System sits behind the Local Interface to External System. Test Cases in the Test Infrastructure drive the System Under Test through its Public Interface, and control the stub directly through a Test Infrastructure Back-Channel.]
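One way to read the diagram in code: the stub implements the same local interface as the real external system, and tests steer it through back-channel methods the production interface doesn't have. A Python sketch; the payment-gateway scenario is invented for illustration:

```python
class PaymentGatewayInterface:
    """Local interface the SUT talks to. Production wires this to the
    real external system; tests wire it to the stub below."""
    def charge(self, amount):
        raise NotImplementedError

class PaymentGatewayStub(PaymentGatewayInterface):
    """Test double simulating the external system."""
    def __init__(self):
        self.next_response = "approved"
        self.requests = []

    # --- the same interface the production gateway implements ---
    def charge(self, amount):
        self.requests.append(amount)
        return self.next_response

    # --- back-channel: only the test infrastructure calls this ---
    def stub_next_response(self, response):
        self.next_response = response

class CheckoutService:
    """System under test: unaware it is talking to a stub."""
    def __init__(self, gateway):
        self.gateway = gateway

    def buy(self, amount):
        return self.gateway.charge(amount) == "approved"

gateway = PaymentGatewayStub()
checkout = CheckoutService(gateway)
ok = checkout.buy(42)                   # normal path, public interface
gateway.stub_next_response("declined")  # back-channel control
declined = checkout.buy(42)
```

The SUT only ever sees `PaymentGatewayInterface`; the back-channel lets a test make the "external system" fail on demand, repeatably.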

SLIDE 65

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 67

Language of the Problem Domain - DSL

  • A simple ‘DSL’ solves many of our problems:
  • Ease of test-case creation
  • Readability
  • Ease of maintenance
  • Separation of “What” from “How”
  • Test isolation
  • The chance to abstract complex set-up and scenarios
SLIDE 68

Language of the Problem Domain - DSL

@Test
public void shouldSupportPlacingValidBuyAndSellLimitOrders() {
    trading.selectDealTicket("instrument");
    trading.dealTicket.placeOrder("type: limit", "bid: 4@10");
    trading.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to buy 4.00 contracts at 10.0");
    trading.dealTicket.dismissFeedbackMessage();
    trading.dealTicket.placeOrder("type: limit", "ask: 4@9");
    trading.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to sell 4.00 contracts at 9.0");
}

@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder() {
    fixAPIMarketMaker.placeMassOrder("instrument", "ask: 11@52", "ask: 10@51", "ask: 10@50", "bid: 10@49");
    fixAPI.placeOrder("instrument", "side: buy", "quantity: 4", "goodUntil: Immediate", "allowUnmatched: true");
    fixAPI.waitForExecutionReport("executionType: Fill", "orderStatus: Filled", "side: buy", "quantity: 4",
        "matched: 4", "remaining: 0", "executionPrice: 50", "executionQuantity: 4");
}

@Before
public void beforeEveryTest() {
    adminAPI.createInstrument("name: instrument");
    registrationAPI.createUser("user");
    registrationAPI.createUser("marketMaker", "accountType: MARKET_MAKER");
    tradingUI.loginAsLive("user");
}
SLIDE 71

Language of the Problem Domain - DSL

public void placeOrder(final String... args) {
    final DslParams params = new DslParams(args,
        new OptionalParam("type").setDefault("Limit").setAllowedValues("limit", "market", "StopMarket"),
        new OptionalParam("side").setDefault("Buy").setAllowedValues("buy", "sell"),
        new OptionalParam("price"),
        new OptionalParam("triggerPrice"),
        new OptionalParam("quantity"),
        new OptionalParam("stopProfitOffset"),
        new OptionalParam("stopLossOffset"),
        new OptionalParam("confirmFeedback").setDefault("true"));

    getDealTicketPageDriver().placeOrder(params.value("type"),
        params.value("side"),
        params.value("price"),
        params.value("triggerPrice"),
        params.value("quantity"),
        params.value("stopProfitOffset"),
        params.value("stopLossOffset"));

    if (params.valueAsBoolean("confirmFeedback")) {
        getDealTicketPageDriver().clickOrderFeedbackConfirmationButton();
    }
    LOGGER.debug("placeOrder(" + Arrays.deepToString(args) + ")");
}
SLIDE 72

Language of the Problem Domain - DSL

@Test
public void shouldSupportPlacingValidBuyAndSellLimitOrders() {
    tradingUI.showDealTicket("instrument");
    tradingUI.dealTicket.placeOrder("type: limit", "bid: 4@10");
    tradingUI.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to buy 4.00 contracts at 10.0");
    tradingUI.dealTicket.dismissFeedbackMessage();
    tradingUI.dealTicket.placeOrder("type: limit", "ask: 4@9");
    tradingUI.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to sell 4.00 contracts at 9.0");
}

@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder() {
    fixAPIMarketMaker.placeMassOrder("instrument", "ask: 11@52", "ask: 10@51", "ask: 10@50", "bid: 10@49");
    fixAPI.placeOrder("instrument", "side: buy", "quantity: 4", "goodUntil: Immediate", "allowUnmatched: true");
    fixAPI.waitForExecutionReport("executionType: Fill", "orderStatus: Filled", "side: buy", "quantity: 4",
        "matched: 4", "remaining: 0", "executionPrice: 50", "executionQuantity: 4");
}
SLIDE 74

Language of the Problem Domain - DSL

@Channel(fixApi, dealTicket, publicApi)
@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder() {
    trading.placeOrder("instrument", "side: buy", "price: 123.45", "quantity: 4", "goodUntil: Immediate");
    trading.waitForExecutionReport("executionType: Fill", "orderStatus: Filled", "side: buy", "quantity: 4",
        "matched: 4", "remaining: 0", "executionPrice: 123.45", "executionQuantity: 4");
}
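The "key: value" varargs style used throughout these tests is straightforward to reproduce in other languages. A minimal Python sketch of a DslParams-like parser with defaults and allowed-value checks; the class name mirrors the Java code above, but the API details here are assumptions:

```python
class DslParams:
    """Parse DSL arguments of the form 'key: value', with optional
    defaults and allowed-value validation, in the spirit of the Java
    DslParams shown in the slides."""
    def __init__(self, args, defaults=None, allowed=None):
        self.values = dict(defaults or {})
        allowed = allowed or {}
        for arg in args:
            key, _, value = arg.partition(":")
            key, value = key.strip(), value.strip()
            # Validate against an allowed set, case-insensitively.
            if key in allowed and value.lower() not in allowed[key]:
                raise ValueError(f"{value!r} not allowed for {key!r}")
            self.values[key] = value

    def value(self, name):
        return self.values.get(name)

# Usage sketch: unspecified params fall back to their defaults.
params = DslParams(
    ["type: limit", "quantity: 4"],
    defaults={"type": "Limit", "side": "Buy"},
    allowed={"type": {"limit", "market", "stopmarket"},
             "side": {"buy", "sell"}},
)
```

Keeping parsing in one place is what lets test cases stay declarative: they state "what" in domain terms, and the driver layer works out "how".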
SLIDE 76

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 78

Testing with Time

  • Test cases should be deterministic
  • Time is a problem for determinism - there are two options:
  • Ignore time
  • Control time
SLIDE 79

Testing With Time - Ignore Time

Mechanism: filter out time-based values in your test infrastructure so that they are ignored.

Pros:
  • Simple!

Cons:
  • Can miss errors
  • Prevents any hope of testing complex time-based scenarios

SLIDE 80

Testing With Time - Controlling Time

Mechanism: treat Time as an external dependency, like any external system - and fake it!

Pros:
  • Very flexible!
  • Can simulate any time-based scenario, with time under the control of the test case.

Cons:
  • Slightly more complex infrastructure

SLIDE 81

Testing With Time - Controlling Time

@Test
public void shouldBeOverdueAfterOneMonth() {
    book = library.borrowBook("Continuous Delivery");
    assertFalse(book.isOverdue());
    time.travel("+1 week");
    assertFalse(book.isOverdue());
    time.travel("+4 weeks");
    assertTrue(book.isOverdue());
}

SLIDE 83

Testing With Time - Controlling Time

[Diagram: Test Cases drive the System Under Test via the Test Infrastructure; a Test Infrastructure Back-Channel lets tests substitute and set the clock.]

In the System Under Test, replace direct reads of the system time:

// Before: untestable
public void someTimeDependentMethod() {
    time = System.getTime();
}

// After: all time comes from one Clock
include Clock;
public void someTimeDependentMethod() {
    time = Clock.getTime();
}

public class Clock {
    public static Clock clock = new SystemClock();

    public static void setTime(long newTime) {
        clock.setTime(newTime);
    }

    public static long getTime() {
        return clock.getTime();
    }
}

In the Test Infrastructure:

public void onInit() {
    // Remote Call - back-channel
    systemUnderTest.setClock(new TestClock());
}

public void timeTravel(String time) {
    long newTime = parseTime(time);
    // Remote Call - back-channel
    systemUnderTest.setTime(newTime);
}
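The Clock pattern on these slides translates directly to Python: production code asks `Clock` for the time, and the test back-channel swaps in a controllable implementation. A minimal sketch - the class names follow the slides, but the seconds-based API and the library/overdue example are assumptions:

```python
import time as _time

class SystemClock:
    def get_time(self):
        return _time.time()

class TestClock:
    """Controllable clock installed over the test back-channel."""
    def __init__(self, start=0.0):
        self.now = start

    def get_time(self):
        return self.now

    def set_time(self, new_time):
        self.now = new_time

class Clock:
    """Single point of access to time for the whole system."""
    _impl = SystemClock()

    @classmethod
    def get_time(cls):
        return cls._impl.get_time()

    @classmethod
    def install(cls, impl):  # called by the test infrastructure
        cls._impl = impl

def is_overdue(borrowed_at, loan_seconds=30 * 24 * 3600):
    """Time-dependent production code: uses Clock, never time.time()."""
    return Clock.get_time() - borrowed_at > loan_seconds

# In a test: travel through time deterministically.
clock = TestClock(start=1_000.0)
Clock.install(clock)
borrowed = Clock.get_time()
clock.set_time(borrowed + 7 * 24 * 3600)    # +1 week
week_later = is_overdue(borrowed)           # not yet overdue
clock.set_time(borrowed + 35 * 24 * 3600)   # +5 weeks
month_later = is_overdue(borrowed)          # now overdue
```

The production code path is identical in both environments; only the `Clock` implementation differs, which is what makes time-based scenarios deterministic.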

SLIDE 88

Test Environment Types

  • Some tests need special treatment.
  • Tag tests with properties and allocate them dynamically:

@TimeTravel @Test
public void shouldDoSomethingThatNeedsFakeTime() …

@Destructive @Test
public void shouldDoSomethingThatKillsPartOfTheSystem() …

@FPGA(version=1.3) @Test
public void shouldDoSomethingThatRequiresSpecificHardware() …
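The tagging idea can be sketched as simple decorators that attach properties to tests, plus a scheduler that routes each test to an environment with matching capabilities. The tag names follow the slides; the scheduler and environment names are invented for illustration:

```python
def tag(**props):
    """Attach allocation properties to a test function."""
    def decorate(fn):
        fn.tags = {**getattr(fn, "tags", {}), **props}
        return fn
    return decorate

def allocate(tests, environments):
    """Route each test to the first environment satisfying its tags."""
    plan = {}
    for t in tests:
        needs = getattr(t, "tags", {})
        for env_name, caps in environments.items():
            if all(caps.get(k) == v for k, v in needs.items()):
                plan[t.__name__] = env_name
                break
    return plan

@tag(time_travel=True)
def should_do_something_that_needs_fake_time(): pass

@tag(destructive=True)
def should_do_something_that_kills_part_of_the_system(): pass

def should_do_something_ordinary(): pass

ENVIRONMENTS = {
    "shared":     {},                     # plain env, runs anything untagged
    "fake-time":  {"time_travel": True},  # clock under test control
    "disposable": {"destructive": True},  # safe to kill processes in
}

plan = allocate(
    [should_do_something_that_needs_fake_time,
     should_do_something_that_kills_part_of_the_system,
     should_do_something_ordinary],
    ENVIRONMENTS,
)
```

Untagged tests fall through to the ordinary shared environment, so only the special cases pay for special infrastructure.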
SLIDE 93

Properties of Good Acceptance Tests

  • “What” not “How”
  • Isolated from other tests
  • Repeatable
  • Uses the language of the problem domain
  • Tests ANY change
  • Efficient
SLIDE 95

Production-like Test Environments

[Image-only slides - no extractable text.]
SLIDE 102

Make Test Cases Internally Synchronous

slide-103
SLIDE 103

Make Test Cases Internally Synchronous

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

slide-104
SLIDE 104

Make Test Cases Internally Synchronous

Example DSL level Implementation… public String placeOrder(String params…)

{

  • rderSent = sendAsyncPlaceOrderMessage(parseOrderParams(params));

return waitForOrderConfirmedOrFailOnTimeOut(orderSent); }

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

slide-105
SLIDE 105

Make Test Cases Internally Synchronous

Example DSL level Implementation… public String placeOrder(String params…)

{

  • rderSent = sendAsyncPlaceOrderMessage(parseOrderParams(params));

return waitForOrderConfirmedOrFailOnTimeOut(orderSent); }

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

slide-106
SLIDE 106

Make Test Cases Internally Synchronous

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

  • If you really have to, implement a 


“poll-and-timeout” mechanism in your test- infrastructure

  • Never, Never, Never, put a “wait(xx)” and expect

your tests to be (a) Reliable or (b) Efficient!

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

slide-107
SLIDE 107

Make Test Cases Internally Synchronous

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

  • If you really have to, implement a 


“poll-and-timeout” mechanism in your test- infrastructure

  • Never, Never, Never, put a “wait(xx)” and expect

your tests to be (a) Reliable or (b) Efficient!

  • Look for a “Concluding Event” listen for that in

your DSL to report an async call as complete

Anti-Pattern!
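The poll-and-timeout fallback above can be sketched as a small helper: wait for the concluding event by polling a condition, fail fast on timeout, and never sleep for a fixed wall-clock amount. The function names here are illustrative, not from the talk:

```python
import time

def wait_for(condition, timeout=5.0, poll_interval=0.01):
    """Poll until `condition()` is truthy or the timeout expires.

    Returns as soon as the concluding event is observed, so the test
    runs as fast as the system allows; a fixed wait(xx) would always
    pay the full delay and still be flaky when the system is slow.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("no concluding event within %.1fs" % timeout)

# Usage sketch: an async order placement confirmed by a later event.
events = []

def place_order_async():
    events.append("OrderConfirmed")  # stands in for a real async reply

place_order_async()
confirmation = wait_for(lambda: "OrderConfirmed" in events, timeout=1.0)
```

The DSL method blocks until the concluding event arrives, so the test case itself reads as a simple synchronous script.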

slide-108
SLIDE 108

Scaling-Up

Artifact Repository

Deployment Pipeline

Acceptance Commit Component Performance System Performance

Staging Env.

Deployment App.

Production Env.

Deployment App.

Source Repository

Manual Test Env.

Deployment App.
slide-109
SLIDE 109

Scaling-Up

[Diagram: the same pipeline, now with the Acceptance stage running in a dedicated Acceptance Test Environment fed from the Artifact Repository.]

slide-110
SLIDE 110

Scaling-Up

[Diagram: as before, with a candidate artifact entering the Acceptance Test Environment.]

slide-111
SLIDE 111

Scaling-Up

[Diagram: as before, with multiple candidate artifacts queued in the Acceptance Test Environment.]

slide-112
SLIDE 112

Scaling-Up

[Diagram: the Acceptance Test Environment scaled out across multiple Test Hosts.]

slide-113
SLIDE 113

Scaling-Up

[Diagram: acceptance tests running in parallel across the Test Hosts in the Acceptance Test Environment.]

slide-114
SLIDE 114

Anti-Patterns in Acceptance Testing

slide-115
SLIDE 115

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
slide-116
SLIDE 116

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing

slide-117
SLIDE 117

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need

slide-118
SLIDE 118

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need
  • Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.

slide-119
SLIDE 119

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need
  • Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.
  • Don’t have a separate Testing/QA team! Quality is down to everyone: Developers own Acceptance Tests!!!
slide-120
SLIDE 120

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need
  • Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.
  • Don’t have a separate Testing/QA team! Quality is down to everyone: Developers own Acceptance Tests!!!
  • Don’t let every Test start and init the app. Optimise for Cycle-Time; be efficient in your use of test environments.

slide-121
SLIDE 121

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need
  • Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.
  • Don’t have a separate Testing/QA team! Quality is down to everyone: Developers own Acceptance Tests!!!
  • Don’t let every Test start and init the app. Optimise for Cycle-Time; be efficient in your use of test environments.
  • Don’t include Systems outside of your control in your Acceptance Test Scope
slide-122
SLIDE 122

Anti-Patterns in Acceptance Testing

  • Don’t use UI Record-and-playback Systems
  • Don’t Record-and-playback production data. This has a role, but it is NOT Acceptance Testing
  • Don’t dump production data to your test systems; instead define the absolute minimum data that you need
  • Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.
  • Don’t have a separate Testing/QA team! Quality is down to everyone: Developers own Acceptance Tests!!!
  • Don’t let every Test start and init the app. Optimise for Cycle-Time; be efficient in your use of test environments.
  • Don’t include Systems outside of your control in your Acceptance Test Scope
  • Don’t put ‘wait()’ instructions in your tests hoping it will solve intermittency
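The "absolute minimum data" point is often implemented with test-data builders: every scenario states only the values it actually cares about, and everything else gets a sensible default. A hedged sketch; `TestAccount` and its fields are invented for illustration:

```java
// Hypothetical builder for the minimum account data a scenario needs,
// instead of a production dump. All names here are illustrative.
public class TestAccount {
    public final String name;
    public final long balancePence;

    private TestAccount(String name, long balancePence) {
        this.name = name;
        this.balancePence = balancePence;
    }

    public static Builder anAccount() { return new Builder(); }

    public static class Builder {
        private String name = "test-user";   // sensible defaults...
        private long balancePence = 10_000;  // ...overridden only when the test cares

        public Builder named(String name) { this.name = name; return this; }
        public Builder withBalancePence(long p) { this.balancePence = p; return this; }
        public TestAccount build() { return new TestAccount(name, balancePence); }
    }
}
```

A test that only cares about an insufficient balance writes `anAccount().withBalancePence(0).build()`, making its real precondition obvious and keeping the data set tiny.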
slide-123
SLIDE 123

Tricks for Success

slide-124
SLIDE 124

Tricks for Success

  • Do Ensure That Developers Own the Tests
slide-125
SLIDE 125

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
slide-126
SLIDE 126

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
slide-127
SLIDE 127

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
slide-128
SLIDE 128

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
slide-129
SLIDE 129

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
slide-130
SLIDE 130

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

slide-131
SLIDE 131

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

  • Do Stub External Systems
slide-132
SLIDE 132

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

  • Do Stub External Systems
  • Do Test in “Production-Like” Environments
slide-133
SLIDE 133

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

  • Do Stub External Systems
  • Do Test in “Production-Like” Environments
  • Do Make Instructions Appear Synchronous at the Level of the Test Case
slide-134
SLIDE 134

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

  • Do Stub External Systems
  • Do Test in “Production-Like” Environments
  • Do Make Instructions Appear Synchronous at the Level of the Test Case
  • Do Test for ANY change
slide-135
SLIDE 135

Tricks for Success

  • Do Ensure That Developers Own the Tests
  • Do Focus Your Tests on “What” not “How”
  • Do Think of Your Tests as “Executable Specifications”
  • Do Make Acceptance Testing Part of your “Definition of Done”
  • Do Keep Tests Isolated from one-another
  • Do Keep Your Tests Repeatable
  • Do Use the Language of the Problem Domain - Do try the DSL approach, whatever your tech.

  • Do Stub External Systems
  • Do Test in “Production-Like” Environments
  • Do Make Instructions Appear Synchronous at the Level of the Test Case
  • Do Test for ANY change
  • Do Keep your Tests Efficient
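Several of these "Do"s combine in the executable-specification style: the test case states "what" in domain language, a DSL interface hides "how", and stubs stand in for external systems. A hedged sketch; `OrderDsl`, `FakeDriver` and the method names are invented here, not from the talk:

```java
public class PlaceOrderSpec {
    // The test case speaks only the problem-domain language; how orders are
    // actually sent (UI, messaging, REST) is hidden behind the DSL.
    interface OrderDsl {
        void givenAnAccountWithFunds(String amount);
        void placeOrder(String order);
        String confirmationStatus();
    }

    // The executable specification: readable as "what", not "how".
    static String run(OrderDsl dsl) {
        dsl.givenAnAccountWithFunds("100.00");
        dsl.placeOrder("buy 10 contracts");
        return dsl.confirmationStatus();
    }

    // An in-memory fake driver, standing in for a real protocol driver
    // that would talk to a production-like deployment of the system.
    static class FakeDriver implements OrderDsl {
        private boolean funded;
        private boolean ordered;
        public void givenAnAccountWithFunds(String amount) { funded = true; }
        public void placeOrder(String order) { ordered = funded; }
        public String confirmationStatus() { return ordered ? "CONFIRMED" : "REJECTED"; }
    }
}
```

Because the specification depends only on the `OrderDsl` interface, the same test can be driven through different channels by swapping the driver, without touching the test's intent.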
slide-136
SLIDE 136

Q&A

http://www.continuous-delivery.co.uk Dave Farley http://www.davefarley.net @davefarley77