Continuable: asynchronous programming with allocation aware futures


Continuable

asynchronous programming with allocation aware futures

/Naios/continuable Denis Blank <denis.blank@outlook.com>

Meeting C++ 2018

slide-2
SLIDE 2

2

Introduction

About me

Denis Blank

  • Master’s student @Technical University of Munich
  • GSoC participant in 2017 @STEllAR-GROUP/hpx
  • Author of the continuable and function2 libraries
  • Interested in: compiler engineering, asynchronous programming and metaprogramming


Introduction

Table of contents

  1. The future pattern (and its disadvantages)
  2. Rethinking futures
     ○ Continuable implementation
     ○ Usage examples of continuable
  3. Connections
     ○ Traversals for arbitrarily nested packs
     ○ Expressing connections with continuable
  4. Coroutines

The continuable library talk:

/Naios/continuable


The future pattern


promises and futures

Diagram: a resolver (std::promise<int>) creates and resolves the future result (std::future<int>).


Synchronous wait

In C++17 we can only poll or wait for the result synchronously.

std::promise<int> promise;
std::future<int> future = promise.get_future();

promise.set_value(42);
int result = future.get();


Asynchronous continuation chaining

The Concurrency TS proposed a then method for adding a continuation handler, now reworked in the "A Unified Futures" and executors proposals.

future<std::string> other = future
  .then([](future<int> future) {
    // Resolve the next future
    return std::to_string(future.get());
  });


The shared state

A std::promise<int> and std::future<int> pair communicates through a shared state, which lives on the heap.

Shared state implementation (simplified version)

template<typename T>
class shared_state {
  std::variant<
    std::monostate,
    T,
    std::exception_ptr
  > result_;

  std::function<void(future<T>)> then_;
  std::mutex lock_;
};

The shared state contains a result storage, continuation storage and synchronization primitives.
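The behavior above can be sketched with a minimal, compilable shared state. This is an illustration, not the standard library's implementation: it is simplified to a single result type, the continuation receives the value directly instead of a future, and `demo_shared_state` is a hypothetical helper for demonstration.

```cpp
#include <cassert>
#include <exception>
#include <functional>
#include <memory>
#include <mutex>
#include <variant>

// Minimal sketch of a heap-allocated shared state (assumes C++17).
template <typename T>
class shared_state {
public:
  // Store the result and fire a previously attached continuation.
  void set_value(T value) {
    std::function<void(T)> continuation;
    {
      std::lock_guard<std::mutex> guard(lock_);
      result_ = std::move(value);
      continuation = std::move(then_);
    }
    if (continuation)
      continuation(std::get<T>(result_));
  }

  // Attach a continuation; invoke it immediately if the result is ready,
  // otherwise store it until set_value is called.
  void then(std::function<void(T)> continuation) {
    std::unique_lock<std::mutex> guard(lock_);
    if (std::holds_alternative<T>(result_)) {
      guard.unlock();
      continuation(std::get<T>(result_));
    } else {
      then_ = std::move(continuation);
    }
  }

private:
  std::variant<std::monostate, T, std::exception_ptr> result_;
  std::function<void(T)> then_;
  std::mutex lock_;
};

// The state lives on the heap; either order (continuation first or
// result first) delivers the value exactly once.
int demo_shared_state() {
  int delivered = 0;
  auto state = std::make_shared<shared_state<int>>(); // the heap allocation
  state->then([&](int value) { delivered = value; });
  state->set_value(42);
  return delivered;
}
```

Note how every promise/future pair needs one such heap allocation, which is exactly the overhead discussed on the following slides.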


Implementations with a shared state

  • std::future
  • boost::future
  • folly::Future
  • hpx::future
  • stlab::future
  • ...


Future disadvantages

Shared state overhead

  • Attaching a continuation (then) creates a new future and shared state every time (allocation overhead)!
  • Maybe an allocation for the continuation as well
  • Result read/write is not wait free
     ○ Lock acquisition or spinlock
     ○ Can be optimized to an atomic wait-free state read/write in the single producer and consumer case (non-shared future/promise)
  • If futures are shared across multiple cores: shared-nothing futures can be zero cost (Seastar)


Future disadvantages

Strict eager evaluation

std::future<std::string> future = std::async([] {
  return "Hello Meeting C++!"s;
});

  • Futures represent the asynchronous result of an already running operation!
  • Impossible not to request it
  • Execution is non-deterministic:
     ○ Leads to unintended side effects!
     ○ No ensured execution order!
  • Possible: wrapping into a lambda to achieve laziness
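The last bullet can be sketched concretely. The helpers below (`make_lazy_hello`, `run_lazy_hello`) are illustrative names, not library API: wrapping std::async in a lambda defers the launch until the caller explicitly invokes it.

```cpp
#include <cassert>
#include <future>
#include <string>

// Wrapping the eager std::async call into a lambda makes it lazy:
// nothing runs until the lambda itself is invoked.
auto make_lazy_hello() {
  return [] {
    return std::async(std::launch::async,
                      [] { return std::string("Hello Meeting C++!"); });
  };
}

std::string run_lazy_hello() {
  auto lazy = make_lazy_hello();            // nothing is running yet
  std::future<std::string> future = lazy(); // execution starts here
  return future.get();
}
```

This restores deterministic control over when the operation starts, at the price of an extra wrapper at every call site.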

Future disadvantages

Unwrapping and R-value correctness

  • future::then is L-value callable although consuming
     ○ Should be R-value callable only (for detecting misuse)
  • Always required to call future::get
     ○ But: fine grained exception control possible (not needed)
  • Repetition of types
     ○ Becomes worse in compound futures (connections)

future.then([] (future<std::tuple<future<int>, future<int>>> future) {
  int a = std::get<0>(future.get()).get();
  int b = std::get<1>(future.get()).get();
  return a + b;
});


Future disadvantages

Exception propagation

  • Propagation overhead through rethrowing on get
  • No error codes as exception type possible

make_exceptional_future<int>(std::exception{})
  .then([] (future<int> future) {
    int result = future.get(); // rethrows
    return result;
  })
  .then([] (future<int> future) {
    int result = future.get(); // rethrows again
    return result;
  })
  .then([] (future<int> future) {
    try {
      int result = future.get(); // rethrows once more
    } catch (std::exception const& e) {
      // Handle the exception
    }
  });


Future disadvantages

Availability

  • std::experimental::future::then will change heavily:
     ○ Standardization date unknown
     ○ "A Unified Futures" proposal maybe C++23
  • Other implementations require a large framework or runtime, or are difficult to build


Rethinking futures


Rethinking futures

Design goals

  • Usable in a broad range of usage scenarios (boost, Qt)
  • Portable, platform independent and simple to use
  • Agnostic to user provided executors and runtimes
  • Should resolve the previously mentioned disadvantages:
     ○ Shared state overhead
     ○ Strict eager evaluation
     ○ Unwrapping and R-value correctness
     ○ Exception propagation
     ○ Availability


Rethinking futures

Why we don't use callbacks

  • Difficult to express complicated chains
  • But: simple and performant for expressing an asynchronous continuation
  • But: work nicely with existing libraries

signal_set.async_wait([](auto error, int slot) {
  signal_set.async_wait([](auto error, int slot) {
    signal_set.async_wait([](auto error, int slot) {
      signal_set.async_wait([](auto error, int slot) {
        // handle the result here
      });
    });
  });
}); // <- callback hell


Rethinking futures

How we could use callbacks

  • Idea: transform the callbacks into something easier to use, without the callback hell
     ○ Long history in JavaScript: q, bluebird
     ○ Much more complicated in C++ because of static typing; requires heavy metaprogramming
  • Mix this with syntactic sugar and C++ candies like operator overloading, and the result is continuable

Not trivial...


Creating continuables

A continuable_base is creatable through make_continuable, which requires its asynchronous return types through template arguments and accepts a callable type. The promise might be moved or stored; set_value is an alias for operator().

auto continuable = make_continuable<int>([](auto&& promise) {
  // Resolve the promise immediately or store
  // it for later resolution.
  promise.set_value(42);
});

Continuation chaining

A continuable_base is chainable through its then method, which accepts a continuation handler. We work on values directly rather than on continuables. The optional return value may be a plain object, a tuple of objects, or the next continuable to resolve.

make_ready_continuable(42) // resolves the given result instantly
  .then([] (int value) {
    // return something
  });


Chaining continuables

then may also return plain objects, a tuple of objects, or the next continuable_base to resolve. Here http_request is just a dummy function which returns a continuable_base of int, std::string; later handlers may also ignore the previous results.

http_request("example.com")
  .then([] (int status, std::string body) {
    // Return the next continuable_base to resolve
    return mysql_query("SELECT * FROM `users` LIMIT 1");
  })
  .then(do_delete_caches()) // ignore previous results
  .then(do_shutdown());


Continuation chaining sugar

The continuation passed to then may accept the result partially, and may pass multiple objects wrapped inside a std::tuple to the next handler.

make_ready_continuable('a', 2, 3)
  .then([] (char a) { // use the asynchronous arguments partially
    // Return multiple objects that are passed
    // to the next continuation directly
    return std::make_tuple('d', 5);
  })
  .then([] (char c, int d) {
    // ...
  });


Continuable implementation


Continuable implementation

Creating ready continuables

make_ready_continuable(0, 1)
// ... is equivalent to:
make_continuable<int, int>([] (auto&& promise) {
  promise.set_value(0, 1);
});

The implementation stores the arguments into a std::tuple first and sets the promise with the content of the tuple upon request (std::apply).


Continuable implementation

Decorating the continuation result

Transform the continuation result such that it is always a continuable_base of the corresponding result:

.then([] (auto result) { return; })
.then([] (auto result) { return make_ready_continuable(); })

.then([] (auto result) { return std::make_tuple(0, 1); })
.then([] (auto result) { return make_ready_continuable(0, 1); })


Continuable implementation

Invoker selection through tag dispatching

using result_t = std::invoke_result_t<Callback, Args...>;
// ^ std::tuple<int, int> for example
auto invoker = invoker_of(identity<result_t>{});

// 1: void
auto invoker_of(identity<void>);

// 2: T
template<typename T>
auto invoker_of(identity<T>);

// 3: std::tuple<T...>
template<typename... T>
auto invoker_of(identity<std::tuple<T...>>);
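A compilable sketch of this tag dispatch (assuming C++17; the invokers here just return a label instead of decorating the result, and `select_invoker_for`/`demo_invokers` are illustrative helpers):

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <type_traits>

template <typename T>
struct identity {};

// 1: exact, non-template overload for void results
std::string invoker_of(identity<void>) { return "void"; }

// 3: more specialized template, picked for tuple results
template <typename... T>
std::string invoker_of(identity<std::tuple<T...>>) { return "tuple"; }

// 2: generic fallback for plain results
template <typename T>
std::string invoker_of(identity<T>) { return "plain"; }

// Derive the callback's result type and dispatch on it.
template <typename... Args, typename Callback>
std::string select_invoker_for(const Callback&) {
  using result_t = std::invoke_result_t<const Callback&, Args...>;
  return invoker_of(identity<result_t>{});
}

std::string demo_invokers() {
  auto returns_void  = [](int) {};
  auto returns_tuple = [](int) { return std::make_tuple(0, 1); };
  auto returns_plain = [](int) { return 3; };
  return select_invoker_for<int>(returns_void) + "," +
         select_invoker_for<int>(returns_tuple) + "," +
         select_invoker_for<int>(returns_plain);
}
```

Overload resolution does the selection: the non-template overload wins for void, partial ordering prefers the tuple specialization, and everything else falls through to the generic template.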


Continuable implementation

Attaching a continuation

Attaching a callback to a continuation yields a new continuation with new argument types.

auto continuation = [=](auto promise) {
  promise(1);
};

auto callback = [] (int result) {
  return make_ready_continuable();
};

auto new_continuation = [=](auto next_callback) {
  auto proxy = decorate(callback, next_callback);
  continuation(proxy);
};


Continuable implementation

Decorating the callback

The proxy callback passed to the previous continuation invokes the next continuation with the next callback.

auto proxy = [callback, next_callback] (auto&&... args) {
  auto next_continuation =
      callback(std::forward<decltype(args)>(args)...);
  next_continuation(next_callback);
};
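Putting both slides together, the whole chaining mechanism fits in one small, compilable sketch. It is simplified (every callback here returns another continuation, results are plain ints) and the names `chain`/`demo_chain` are illustrative:

```cpp
#include <cassert>
#include <utility>

// A continuation is a callable taking a callback; chain() builds the
// proxy callback from the slide and wraps it in a new continuation.
template <typename Continuation, typename Callback>
auto chain(Continuation continuation, Callback callback) {
  return [continuation = std::move(continuation),
          callback = std::move(callback)](auto&& next_callback) {
    auto proxy = [callback, next_callback](auto&&... args) {
      // Invoke the user callback, then hand its resulting
      // continuation the next callback in the chain.
      auto next_continuation =
          callback(std::forward<decltype(args)>(args)...);
      next_continuation(next_callback);
    };
    continuation(proxy);
  };
}

int demo_chain() {
  auto first = [](auto&& promise) { promise(1); };
  // Each callback returns the next continuation to resolve.
  auto add_one = [](int value) {
    return [value](auto&& promise) { promise(value + 1); };
  };

  int result = 0;
  auto chained = chain(chain(first, add_one), add_one);
  chained([&](int value) { result = value; }); // 1 -> 2 -> 3
  return result;
}
```

No heap allocation happens anywhere in this chain: each `chain` call merely nests lambdas, which is exactly the Matryoshka structure of the next slides.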

Continuable implementation

Seeing the big picture

Diagram: continuation/callback pairs nest into each other like a Russian Matryoshka doll. Invoking the outer continuation yields its result to the callback passed to it through then, which in turn invokes the next inner continuation, and so on down the chain.


Continuable implementation

Exception handling

When the promise is resolved with an exception (promise.set_exception(...)), an exception_ptr is passed to the next available failure handler; the result handlers in between are skipped.

read_file("entries.csv")
  .then([] (std::string content) {
    // ... (skipped on exceptions)
  })
  .fail([] (std::exception_ptr exception) {
    // handle the exception
  });


Continuable implementation

Split asynchronous control flows

Diagram: results and exceptions travel on separate paths; a throw switches from the result path to the exception path, a recover switches back.


Continuable implementation

Split asynchronous control flows

template<typename... Args>
struct callback {
  auto operator() (Args&&... args);
  auto operator() (dispatch_error_tag, std::exception_ptr);
  // ^ or any other error type
  // dispatch_error_tag is exception_arg_t in the
  // "A Unified Futures" standard proposal.
};


Continuable implementation

Exception propagation

template<typename... Args>
struct proxy {
  Callback failure_callback_;
  NextCallback next_callback_;

  void operator() (Args&&... args) {
    // On a valid result forward it to the next available
    // result handler (same signature).
    next_callback_(std::forward<Args>(args)...);
  }

  void operator() (dispatch_error_tag, std::exception_ptr exception) {
    failure_callback_(exception);
  }
};


Continuable implementation

Result handler conversion

template<typename... Args>
struct proxy {
  Callback callback_;
  NextCallback next_callback_;

  void operator() (Args&&... args) {
    auto continuation = callback_(std::forward<Args>(args)...);
    continuation(next_callback_);
  }

  void operator() (dispatch_error_tag, std::exception_ptr exception) {
    // Forward the exception to the next available handler
    next_callback_(dispatch_error_tag{}, exception);
  }
};


The continuable_base

The wrapper

The continuable_base is convertible when the types of Continuation are convertible to each other. The Strategy parameter is void (until now), strategy_all_tag, strategy_seq_tag or strategy_any_tag. then is consuming and therefore R-value callable only: std::move(continuable).then(...);

template<typename Continuation, typename Strategy>
class continuable_base {
  Continuation continuation_;
  ownership ownership_;

  template<typename C, typename E = this_thread_executor>
  auto then(C&& callback, E&& executor = this_thread_executor{}) &&;
};


The continuable_base

The ownership model

The continuation is invoked when the continuable_base is still valid and about to be destroyed (race condition free continuation chaining).

npc->talk("Greetings traveller, what is your name?")
  .then([log, player] {
    log->info("Player {} asked for name.", player->name());
    return player->ask_for_name();
  })
  .then([](std::string name) {
    // ...
  }); // <- the continuation is invoked here


The continuable_base

Memory allocation

Until now: no memory allocation involved! However, then always returns an object of an unknown type, which:

  • Increases the amount of types the compiler has to generate ⇒ slower compilation
  • Is better for compiler optimization
  • Increases the executable size
  • ⇒ We require a concrete type for APIs where we don't want to expose our implementation


The continuable_base

Concrete types

Preserve unknown types across the continuation chaining; convert to concrete types in APIs on request.

continuable<int, std::string> http_request(std::string url) {
  return [=](promise<int, std::string> promise) {
    // Resolve the promise later
    promise.set_value(200, "<html> ... </html>");
  };
}


The continuable_base

Type erasure

For the callable type erasure my function2 library is used, which provides move-only and multi-signature capable type erasure plus small functor optimization.

// Erased callable for promise<Args...>
using callback_t = function<void(Args...),
                            void(dispatch_error_tag, std::exception_ptr)>;

// Erased callable for continuable<Args...>
using continuation_t = function<void(callback_t)>;


The continuable_base

Type erasure aliases

continuable_base type erasure works implicitly and with any type erasure wrapper out of the box.

template<typename... Args>
using callback_t = function<void(Args...),
                            void(dispatch_error_tag, std::exception_ptr)>;

template<typename... Args>
using promise = promise_base<callback_t<Args...>>;

template<typename... Args>
using continuable = continuable_base<
    function<void(promise<Args...>)>, void>;
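The erasure step can be approximated with the standard library. This is an approximation only: std::function stands in for function2's `function`, and it is copy-based and single-signature, so the multi-signature error path is omitted here; `demo_erased_continuation` is an illustrative helper.

```cpp
#include <cassert>
#include <functional>

// Approximation of the aliases above with std::function.
template <typename... Args>
using callback_t = std::function<void(Args...)>;

template <typename... Args>
using continuation_t = std::function<void(callback_t<Args...>)>;

int demo_erased_continuation() {
  // Any concrete continuation-like callable erases implicitly
  // into continuation_t, hiding its unknown lambda type.
  continuation_t<int> erased = [](callback_t<int> callback) {
    callback(42);
  };

  int result = 0;
  erased([&](int value) { result = value; });
  return result;
}
```

Each erasure may allocate (unless the small functor optimization applies), which is why the library only erases at API boundaries, as the next slide quantifies.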


The continuable_base

Apply type erasure when needed

Futures require a minimum of two fixed allocations per then, whereas continuable requires a maximum of two allocations per type erasure, on need.

// auto do_sth();
continuable<> cont = do_sth() // max 2 allocs, on need
  .then([] { return do_sth(); })
  .then([] { return do_sth(); });

// future<void> do_sth();
future<void> cont = do_sth() // 2*2 fixed + 2*1 maybe continuation allocs
  .then([] (future<void>) { return do_sth(); })
  .then([] (future<void>) { return do_sth(); });


Executor support


Usage cases

Executor support

mysql_query("SELECT `id`, `name` FROM `users` WHERE `id` = 123")
  .then([](ResultSet result) {
    // On which thread does this continuation run?
  });

  • On which thread does the continuation run?
     ○ The resolving thread, i.e. where promise.set_value(result) is called? (default)
     ○ The thread which created the continuation?
  • When does the continuation run?
     ○ Immediately on resolving? (default)
     ○ Later?
  • Can we cancel the continuation chain?

That should be up to you!


Executor support

Using an executor

The executor is passed as the second argument of then; it may invoke the work, drop the work, or move the work to another thread or executor.

struct my_executor_proxy {
  template<typename T>
  void operator()(T&& work) {
    std::forward<T>(work)();
  }
};

mysql_query("SELECT `id`, `name` FROM `users` WHERE `id` = 123")
  .then([](ResultSet result) {
    // Pass this continuation to my_executor
  }, my_executor_proxy{});
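A compilable sketch of the "move the work elsewhere" option: instead of invoking inline, this executor queues continuations and runs them later. The type `queueing_executor` and its `drain` method are illustrative, not library API:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>

// An executor that defers work instead of invoking it inline.
struct queueing_executor {
  std::queue<std::function<void()>> pending;

  // Called by the library with the ready-to-run continuation.
  template <typename T>
  void operator()(T&& work) {
    pending.push(std::forward<T>(work));
  }

  // Run everything that was deferred, in FIFO order.
  int drain() {
    int executed = 0;
    while (!pending.empty()) {
      auto work = std::move(pending.front());
      pending.pop();
      work();
      ++executed;
    }
    return executed;
  }
};

int demo_queueing_executor() {
  queueing_executor executor;
  int result = 0;
  executor([&] { result += 40; });
  executor([&] { result += 2; });
  executor.drain(); // both continuations run here, not on submission
  return result;
}
```

In a real setup the drain loop would live on a worker thread or event loop; dropping the work instead of queueing it would model cancellation.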


Executor support

No executor propagation

The executor isn't propagated to the next handler and has to be passed again, to avoid unnecessary type erasure (we could make it a type parameter).

continuable<> next = do_sth().then([] {
  // Do sth.
}, my_executor_proxy{});

std::move(next).then([] {
  // No ensured propagation! Propagation would lead to
  // type erasure although it isn't requested here!
});


Executor support

Context of execution

⇒ We can neglect executor propagation when moving heavy tasks into the continuation, except in case of data races!

continuable<> do_sth() {
  return [] (auto&& promise) {
    // Do something long (continuation)
    promise.set_value();
  };
}

do_sth().then([] {
  // Do something short (callback)
});


Check design goals

  • No shared state overhead
  • No strict eager evaluation
  • No unwrapping issues; R-value correctness
  • Exception propagation
  • Availability
     ○ C++14
     ○ Header-only (depends on function2)
     ○ GCC / Clang / MSVC


Connections


Connections

The call graph

when_all and when_any are usable to express relations between multiple continuables: when_all is ready when all dependent continuables are ready, when_any when any dependent continuable is. ⇒ Guided/graph based execution requires a shared state (not available).


Connections

Lazy evaluation advantages

Using lazy (on request) evaluation over an eager one makes it possible to choose the evaluation strategy. ⇒ Moves this responsibility from the executor to the evaluator!

  • when_seq - invokes all dependent continuables in sequential order

Thoughts (not implemented):

  • when_pooled - pooling
  • when_every - request all
  • when_first_succeeded / when_first_failed - exception strategies


Connections

Simplification over std::experimental::when_all

std::experimental::when_all introduces code overhead because of unnecessary fine grained exception handling.

// continuable<int> do_sth();
when_all(do_sth(), do_sth())
  .then([] (int a, int b) {
    return a == b;
  });

// std::future<int> do_sth();
std::experimental::when_all(do_sth(), do_sth())
  .then([] (std::tuple<std::future<int>, std::future<int>> res) {
    return std::get<0>(res).get() == std::get<1>(res).get();
  });


Connections implementation

Based on GSoC work @STEllAR-GROUP/hpx

The map_pack, traverse_pack and traverse_pack_async APIs help to apply an arbitrary connection between continuables contained in a variadic pack. hpx::when_all, hpx::when_any, hpx::when_some, hpx::wait_all, hpx::wait_any, hpx::wait_some, hpx::dataflow and hpx::unwrapped build on basically the same implementation: synchronous mapping (map_pack), synchronous traversal (traverse_pack) and asynchronous traversal (traverse_pack_async).


Connections implementation

Indexer example (map_pack)

Because when_any returns the first ready result as a common denominator, we don't know which continuable became ready. map_pack could be used to apply an index to the continuables:

// continuable<int> do_sth();
when_any(do_sth(), do_sth(), do_sth())
  .then([] (int a) {
    // ?: We don't know which continuable became ready
  });

index_continuables(do_sth(), do_sth(), do_sth());
// Shall return:
std::tuple<
  continuable<size_t /*= 0*/, int>,
  continuable<size_t /*= 1*/, int>,
  continuable<size_t /*= 2*/, int>
>


Connections implementation

Indexer example (map_pack)

map_pack transforms an arbitrary argument pack through a callable mapper.

map_pack(indexer{}, do_sth(), do_sth(), do_sth());

struct indexer {
  size_t index = 0;

  template <typename T,
            std::enable_if_t<is_continuable<std::decay_t<T>>::value>* = nullptr>
  auto operator()(T&& continuable) {
    auto current = index++; // post-increment so indices start at 0
    return std::forward<T>(continuable).then([=] (auto&&... args) {
      return std::make_tuple(current,
                             std::forward<decltype(args)>(args)...);
    });
  }
};


Connections implementation

Arbitrary and nested arguments

map_pack and friends can work with plain values and nested packs too, and so can when_all.

continuable<int> aggregate(std::tuple<int, continuable<int>,
                           std::vector<continuable<int>>> all) {
  return when_all(std::move(all))
    .then([] (std::tuple<int, int, std::vector<int>> result) {
      int aggregated = 0;
      traverse_pack([&] (int current) {
        aggregated += current;
      }, std::move(result));
      return aggregated;
    });
}
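The traversal underneath can be sketched in a few overloads. This is a heavily reduced illustration of the idea, not the real traverse_pack: it only visits ints nested inside tuples and vectors, and `traverse`/`demo_traverse` are illustrative names (assumes C++17 for std::apply and fold expressions):

```cpp
#include <cassert>
#include <tuple>
#include <vector>

// Visit a leaf value.
template <typename Visitor>
void traverse(Visitor& visitor, int value) {
  visitor(value);
}

// Recurse into homogeneous containers.
template <typename Visitor, typename T>
void traverse(Visitor& visitor, const std::vector<T>& values) {
  for (const auto& value : values)
    traverse(visitor, value);
}

// Recurse into heterogeneous packs.
template <typename Visitor, typename... T>
void traverse(Visitor& visitor, const std::tuple<T...>& pack) {
  std::apply([&](const auto&... element) {
    (traverse(visitor, element), ...);
  }, pack);
}

int demo_traverse() {
  int aggregated = 0;
  auto visitor = [&](int current) { aggregated += current; };
  traverse(visitor, std::make_tuple(1, 2, std::vector<int>{3, 4},
                                    std::make_tuple(5)));
  return aggregated; // 1 + 2 + 3 + 4 + 5
}
```

The real implementation additionally supports mapping (rebuilding the pack with transformed elements) and asynchronous resumption mid-traversal.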


Connections implementation

when_all/when_seq

Connections require a shared state by design; concurrent writes to the same box never happen.

Diagram: when_all/when_seq first turns the pack
  int, continuable<int>, std::vector<continuable<int>>
via map_pack(boxify{}, ...) into
  int, box<expected<int>, continuable<int>>, std::vector<box<expected<int>, continuable<int>>>,
then resolves the boxes with traverse_pack(resolve{}, ...) while a counter tracks readiness, and finally continues with map_pack(unwrap{}, ...) once everything is ready.


Operator overloading

Express connections

Operator overloading allows expressive connections between continuables: operator&& maps to when_all, operator|| to when_any.

(http_request("example.com/a") && http_request("example.com/b"))
  .then([] (http_response a, http_response b) {
    // ...
    return wait_until(20s) ||
           wait_key_pressed(KEY_SPACE) ||
           wait_key_pressed(KEY_ENTER);
  });


Operator overloading

Difficulties

A naive operator overloading approach that instantly connects two continuables would lead to unintended evaluations and thus requires linearization:

return wait_until(20s) ||
       wait_key_pressed(KEY_SPACE) ||
       wait_key_pressed(KEY_ENTER);

// Naive: when_any(when_any(wait_until(20s),
//                          wait_key_pressed(KEY_SPACE)),
//                 wait_key_pressed(KEY_ENTER))


Operator overloading

Correct operator evaluation required

Diagram: a single flat when_any over wait_until(20s), wait_key_pressed(KEY_SPACE) and wait_key_pressed(KEY_ENTER).


Operator overloading

Implementation

Set the continuable_base into an intermediate state (strategy) and materialize the connection on use or when the strategy changes (an expression template):

wait_until(20s) || wait_key_pressed(KEY_SPACE)
// -> continuable_base<std::tuple<..., ...>, strategy_any_tag>

... || wait_key_pressed(KEY_SPACE)
// -> continuable_base<std::tuple<..., ..., ...>, strategy_any_tag>

.then(...)
// -> materialization: continuable_base<..., void>
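The linearization idea can be shown with a tiny expression template. All names here (`any_expression`, `make_any`, `size`, `demo_linearized`) are illustrative, and strings stand in for continuables; the point is only that operator|| accumulates operands into one flat tuple instead of nesting connections:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <tuple>
#include <utility>

struct strategy_any_tag {};

// Intermediate state: holds the operands, not a materialized when_any.
template <typename Tuple>
struct any_expression {
  Tuple operands; // flat pack, not a nested tree

  // Appending another operand keeps the expression flat.
  template <typename T>
  auto operator||(T&& next) && {
    auto appended =
        std::tuple_cat(std::move(operands),
                       std::make_tuple(std::forward<T>(next)));
    return any_expression<decltype(appended)>{std::move(appended)};
  }

  // "Materialization" point: only here would when_any be built.
  std::size_t size() const {
    return std::tuple_size<Tuple>::value;
  }
};

template <typename L, typename R>
auto make_any(L&& left, R&& right) {
  auto operands = std::make_tuple(std::forward<L>(left),
                                  std::forward<R>(right));
  return any_expression<decltype(operands)>{std::move(operands)};
}

std::size_t demo_linearized() {
  // a || b || c collects three operands into ONE connection,
  // not two nested ones.
  auto expression = make_any(std::string("a"), std::string("b")) ||
                    std::string("c");
  return expression.size();
}
```

Deferring materialization until `.then(...)` (or a strategy change) is what makes the flat, correctly linearized when_any possible.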


Continuable & Coroutines TS


Continuable & Coroutines TS

Interoperability

continuable_base implements operator co_await() and specializes coroutine_traits, and thus is compatible with the Coroutines TS.

continuable<int> interoperability_check() {
  try {
    auto response = co_await http_request("example.com/c");
  } catch (std::exception const& e) {
    co_return 0;
  }

  auto other = cti::make_ready_continuable(0, 1);
  auto [first, second] = co_await std::move(other);
  co_return first + second;
}


Continuable & Coroutines TS

Do Coroutines deprecate Continuables?

Probably not! There are many things a plain coroutine doesn't provide:

  • A coroutine isn't necessarily allocation free
     ○ Recursive coroutine frames
     ○ Depends on compiler optimization
  • Connections
  • Executors (difficult to do with plain coroutines)
  • It takes time until Coroutines are widely supported
     ○ Libraries that work with plain callbacks (legacy codebases)
  • But: Coroutines have much better call stacks!


Questions?

Thank you for your attention

/Naios/continuable Denis Blank <denis.blank@outlook.com>

MIT Licensed

/Naios/talks
