
Channel bonding of low-rate links using MPTCP for Airborne Flight Research

Joseph Ishac, Matthew Sargent
mptcp working group, IETF 98 – Chicago, IL

John H. Glenn Research Center at Lewis Field
March 30, 2017
www.nasa.gov


Quick Background

  • Iridium modems are used to communicate with on-board payloads
  • Channels are bonded using standard Multi-Link PPP (MLPPP)
  • System uses both UDP and TCP
      – TCP performs poorly, as it cannot discern losses between the links
  • Desire to scale the system to even more links (e.g., 8 or 12)
      – MLPPP breaks down rapidly beyond 4 links in this environment
  • Additional Goals
      – Ensure fairness between flows as link conditions degrade
      – Increase reliability of connections


Existing Architecture Illustration

[Diagram: Payloads 1–3 feed a Flight CPU / Satcomm Server running MLPPP, which drives four Iridium modems; traffic crosses the Iridium constellation to the Iridium ground station, then over four POTS modems and the POTS network to a Ground Server running MLPPP, which serves Users 1–3.]


Link Characteristics

  • Iridium modems “go down” fairly often, similar to poor cell phone service
      – Degrade: Some information is lost, but the call is maintained
      – Drop: Total loss of link, similar to dropping a cell phone call
  • The fully operational system is slow by modern standards
      – Each Iridium link is rated at 2.4 kbit/s, or 300 bytes per second
      – Currently 4 channels are used to provide a total of 9.6 kbit/s
  • Round Trip Time (RTT) is very long
      – Roughly 2 seconds for SYNs
      – Roughly 4 seconds for a 500-byte packet
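
For intuition, a rough back-of-envelope check (an illustration, not from the slides) shows these RTTs are consistent with the link rate: at 2.4 kbit/s, just serializing 500 bytes takes about 1.7 seconds on top of the base latency.

```python
# Back-of-envelope check of the RTT numbers above. Assumptions (not from
# the slides): one 2.4 kbit/s channel, with the ~2 s SYN RTT treated as
# the base latency for small packets.
LINK_RATE_BPS = 2400   # rated Iridium channel speed, bits per second
BASE_RTT_S = 2.0       # approximate RTT observed for (small) SYN segments

def rtt_estimate(payload_bytes: int) -> float:
    """Estimate the RTT for a packet of the given size on one channel."""
    serialization_s = payload_bytes * 8 / LINK_RATE_BPS  # time to clock the data out
    return BASE_RTT_S + serialization_s

print(f"500-byte packet: ~{rtt_estimate(500):.1f} s")  # ~3.7 s, near the observed ~4 s
```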


Test Flight November 18


Frequency of Transient Links

  • Flight Duration: 13 hours
  • Events that changed the number of active links: 325
      – 25 changes / hour
  • Nearly one fourth of the flight is in a “degraded” state

    Number of Active Links   Seconds   Percent
    4                        35643     76.26%
    3                         8269     17.69%
    2                         1969      4.21%
    1                          235      0.50%
    0                          624      1.34%
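
As an illustration derived from this table (not a figure from the deck), weighting the 2.4 kbit/s per-link rate by the time spent in each state gives the average aggregate capacity over the flight:

```python
# Average aggregate capacity over the flight, derived from the table above.
RATE_PER_LINK_KBPS = 2.4
seconds_by_links = {4: 35643, 3: 8269, 2: 1969, 1: 235, 0: 624}

total_s = sum(seconds_by_links.values())  # 46740 s, about 13 hours
avg_links = sum(n * s for n, s in seconds_by_links.items()) / total_s
print(f"average active links: {avg_links:.2f}")                          # ~3.67
print(f"average capacity: {avg_links * RATE_PER_LINK_KBPS:.1f} kbit/s")  # ~8.8 of 9.6
```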


MPTCP Architecture Illustration

[Diagram: the same architecture as the earlier illustration, with the MLPPP Satcomm and Ground Servers replaced by MPTCP-enabled servers; Payloads 1–3 reach Users 1–3 via the Iridium modems, Iridium constellation, ground station, POTS modems, and POTS network.]


Handling MPTCP Endpoint Limitations

  • MPTCP is designed to work best when at least one side is directly attached to the point with multiple interfaces (and thus paths)
  • Both endpoints must be MPTCP aware

For example, if all nodes have MPTCP enabled, options (a), (b), and (c) all benefit. However, option (d) cannot, as neither side knows the number of paths.

[Diagram: four candidate MPTCP endpoint placements, (a) through (d), along the path from the Source through the Iridium modems and Ground Station to the Destination; the non-working placement (d) is marked with an X.]


Service Specific Proxies

  • Used a set of proxies or servers, based on open source software, for the types of services in use during flight
      – HTTP proxy (Squid)
      – IRC server (unrealircd), configured as a chat proxy (a “hub”)
  • Installed a proxy on each MPTCP system
      – Flight CPU attached to the Iridium links
      – NASA ground station


MPTCP Flight Configuration


Initial Problems Configuring MPTCP

  • First Configuration Attempt
      – IP address for each PPP interface
      – Full Mesh path manager
  • Used iptables rules to limit cross flows (e.g., IP1 to IPb)
  • Implementation issue limited the number of sub-flows to 32
  • Complexity in configuration increases rapidly with additional interfaces (see the sketch below)

[Diagram: Aircraft Satcomm (“Full Mesh”) with ppp0–ppp3 bound to IP1–IP4; Ground Station (“Full Mesh”) with ppp0–ppp3 bound to IPa–IPd.]
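
A minimal sketch (assumptions: one address per PPP link on each side, and a fullmesh path manager that attempts one sub-flow per local/remote address pair) of why this configuration stops scaling:

```python
# Fullmesh growth: one sub-flow per (local address, remote address) pair,
# so the sub-flow count is the product of the address counts on each side.
SUBFLOW_LIMIT = 32  # hard-coded limit in the implementation used here

def fullmesh_subflows(aircraft_addrs: int, ground_addrs: int) -> int:
    return aircraft_addrs * ground_addrs

for links in (4, 8, 12):
    n = fullmesh_subflows(links, links)
    note = "  <-- exceeds the 32 sub-flow limit" if n > SUBFLOW_LIMIT else ""
    print(f"{links} links per side -> {n} sub-flows{note}")
# 4 -> 16, 8 -> 64 (over the limit), 12 -> 144 (over the limit)
```

With a single address on the ground side, as in the alternatives that follow, the count falls back to one sub-flow per aircraft link.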


Alternate 1: Using Default Path Manager

  • Reduced the number of sub-flows generated by the aircraft
  • Works great for connections initiated from the aircraft
  • Does not allow MPTCP use from the ground to the aircraft
      – Limited to a single normal TCP connection

[Diagram: Aircraft Satcomm (“Full Mesh”) with ppp0–ppp3 bound to IP1–IP4; Ground Station (“Default”) with ppp0–ppp3 all sharing the single address IPa.]


Alternate 2: Almost Works

  • Changing the ground station back to the full mesh path manager allowed ground-to-air connections to establish multiple sub-flows
  • New Issue: a REMOVE_ADDR for any sub-flow containing IPa would remove ALL sub-flows (illustrated below)

[Diagram: Aircraft Satcomm (“Full Mesh”) with ppp0–ppp3 bound to IP1–IP4; Ground Station (“Full Mesh”) with ppp0–ppp3 all sharing the single address IPa.]
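
A toy model (illustrative only; it assumes REMOVE_ADDR matches sub-flows by address, per RFC 6824) of why a single REMOVE_ADDR for IPa is fatal when every sub-flow shares that ground-station address:

```python
# Every sub-flow shares the ground station's single address IPa.
subflows = [("IP1", "IPa"), ("IP2", "IPa"), ("IP3", "IPa"), ("IP4", "IPa")]

def remove_addr(subflows, addr):
    """Drop every sub-flow whose local or remote address matches addr."""
    return [(l, r) for (l, r) in subflows if addr not in (l, r)]

# One transient link failure advertises REMOVE_ADDR(IPa)...
print(remove_addr(subflows, "IPa"))  # [] -- all four sub-flows are torn down
```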


MPTCP Implementation Patch

  • Patch contributor: Christoph Paasch (Thank you!!)
  • Issue:
      – MPTCP would tear down all sub-flows if it encountered a REMOVE_ADDR for the single ground station address
  • Solution:
      – Add an option to disable generating REMOVE_ADDR
          • Enabled at the ground station, where only a single address is used
          • Aircraft no longer tears down all other active and healthy sub-flows

Example MPTCP HTTP Connection

[Plot: transfer rate of an example MPTCP HTTP connection over time. Gray guides help to visualize the change in transfer rate as links come and go.]


Results

  • MPTCP did an excellent job of keeping connections active during multiple link transitions
      – Allowed long-lived, healthy connections
      – Dynamically leveraged the amount of available resources
      – Greatly improved connection stability for the end users
  • Service-specific proxies worked well in conjunction with MPTCP
      – Not all future services may have easy proxy options (e.g., ssh)
  • MPTCP is not magic
      – System still suffers if resources are strained (e.g., opening 20 TCP connections)


Implementation Observations

  • Current implementation has some hard-coded limits
      – Cannot use more than 8 addresses or 32 total sub-flows
      – Spec clearly allows for many more
  • Implementation will send more than two consecutive ACKs to fit all MPTCP options (particularly MP_JOIN)
      – Actually beneficial in our case, as it reduces setup time
      – OK if done prior to any data? No chance to trigger fast retransmit?
      – Need for more TCP option space?
  • REMOVE_ADDR behavior

Thank you!
Questions?


Backup Slides


Handling UDP Traffic

  • MPTCP is TCP-specific
      – Needed a solution to “route” UDP traffic across all available PPP links
  • Created a simple open source program that routes data from payloads and transmits it evenly over all available Iridium channels (a sketch of the queueing idea follows this list)
      – Uses smart queues that store only the latest data from each UDP source
      – Replaces the manually tuned filtering functionality
      – Fairly limits all sources and adjusts dynamically
      – Throttles all sources when TCP data is present or if channels are lost
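
A minimal sketch of the “smart queue” idea described above (the actual program is not shown in the deck; the class and method names here are hypothetical):

```python
import itertools

class LatestOnlyQueues:
    """Keep only the newest datagram per UDP source, then spread the
    pending data evenly over whatever channels are currently up.
    Illustrative sketch only; names and structure are hypothetical."""

    def __init__(self):
        self.latest = {}  # source address -> most recent payload

    def enqueue(self, source, payload):
        # Newer data from a source silently replaces anything older.
        self.latest[source] = payload

    def drain_round_robin(self, channels):
        if not channels:
            return  # all links down: hold the latest data until one returns
        rr = itertools.cycle(channels)
        for source in list(self.latest):
            payload = self.latest.pop(source)
            next(rr).send(payload)  # channel objects assumed to expose send()
```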