fbtftp


  1. fbtftp: Facebook's Python3 open-source framework to build dynamic TFTP servers
 Angelo Failla, Production Engineer
 Cluster Infrastructure team
 Facebook Ireland

  2. Who am I?
 • A Production Engineer (similar to SRE / DevOps)
 • Based in Facebook Dublin, Ireland, since 2011
 • Member of the Cluster Infrastructure team
 • Owns data center core services
 • Owns end-to-end automation for bare-metal provisioning and cluster management

  3. “There is no cloud, just other people’s computers…” - a (very wise) person on the interwebz “… and someone’s got to provision them.” - Angelo

  4. [Map: data center locations and POPs (Points of Presence); POP locations shown are fictional]

  5. HANDS-FREE PROVISIONING:

  6. [Diagram: everything that feeds hands-free provisioning: model, vendor, BIOS/firmware, OOB, DHCP (v4/v6), bootloader, UEFI, TFTP, inventory, system location, MySQL config, buildcontrol, anaconda, HTTP repos, server type, tier, 3rd-party kickstart, kernel/initrd, partitioning, OS schemas, RPMs, cyborg, chef]

  7. [Same diagram as the previous slide, repeated]

  8. TFTP

  9. It's common in Data Center / ISP environments
 • Simple protocol specification, easy to implement
 • UDP-based -> small code footprint
 • Fits in small boot ROMs, embedded devices and network equipment
 • Traditionally used for netboot (with DHCPv[46])

  10. Provisioning phases: POWER ON -> NETBOOT -> ANACONDA -> REBOOT -> CHEF -> PROVISIONED
 • DHCPv[46] (KEA): provides the network config and the path to the NBP binaries
 • TFTP: provides the NBPs and their config files
 • NBP: fetches its config via TFTP, then fetches kernel/initrd (provided via HTTP or TFTP)

  11. A 30+ year old protocol (photo: me, circa 1982)

  12. Protocol in a nutshell (RRQ)
 [Diagram: the client sends an RRQ to the server on port 69; the server answers with DAT 1 from an ephemeral port; the client replies with ACK 1; DAT N / ACK N repeat until the file is fully transferred]
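 A minimal sketch of the client side of this exchange, using only the Python standard library and plain RFC 1350 rules (512-byte blocks, no option negotiation, no timeout/retransmission handling); the server address and file name below are placeholders:

    import socket
    import struct

    def tftp_fetch(server, filename, blksize=512):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)
        # RRQ: opcode 1, then filename and transfer mode, each NUL-terminated.
        rrq = struct.pack('!H', 1) + filename.encode() + b'\x00' + b'octet\x00'
        sock.sendto(rrq, (server, 69))
        data = b''
        while True:
            # DAT packets arrive from an ephemeral port on the server.
            pkt, peer = sock.recvfrom(4 + blksize)
            opcode, block = struct.unpack('!HH', pkt[:4])
            if opcode != 3:  # 3 = DATA; anything else (e.g. 5 = ERROR) aborts
                raise RuntimeError('unexpected packet: %r' % pkt[:4])
            data += pkt[4:]
            # ACK the block we just received (opcode 4), back to the ephemeral port.
            sock.sendto(struct.pack('!HH', 4, block), peer)
            if len(pkt) - 4 < blksize:  # a short block terminates the transfer
                return data

    # print(len(tftp_fetch('192.0.2.1', 'boot/kernel')))

 Note the lock-step behavior: every block costs a full DAT/ACK round trip, which is exactly what the next slide quantifies.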

  13. File size | Block size | Latency | Time to download
 80 MB | 512 B | 150 ms | 12.5 hours
 80 MB | 1400 B | 150 ms | 4.5 hours
 80 MB | 512 B / 1400 B | 1 ms | < 1 minute
 [Diagram: RRQ/DAT/ACK exchange between a client in a POP and a server in a DC, ~150 ms apart; POP locations shown are fictional]
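 These figures follow from the lock-step protocol: each block needs one DAT/ACK round trip, so transfer time is roughly (file size / block size) x RTT, regardless of link bandwidth. A back-of-the-envelope check (assuming the 150 ms figure is one-way latency, i.e. a ~300 ms round trip; the slide's exact numbers likely include retransmissions and other overheads):

    # Rough sanity check of the table above: a lock-step protocol moves one
    # block per DAT/ACK round trip, so time ~ (file size / block size) * RTT.
    def tftp_transfer_hours(file_bytes, block_bytes, rtt_seconds):
        blocks = file_bytes / block_bytes
        return blocks * rtt_seconds / 3600

    MB = 1024 * 1024
    for block_bytes, rtt in [(512, 0.300), (1400, 0.300), (1400, 0.002)]:
        hours = tftp_transfer_hours(80 * MB, block_bytes, rtt)
        print('80 MB, %4d B blocks, %5.1f ms RTT: ~%.2f hours'
              % (block_bytes, rtt * 1e3, hours))
    # prints roughly 13.65, 4.99 and 0.03 hours: the same ballpark as the table.

 The same arithmetic is why negotiating a larger block size (RFC 2348) and keeping the TFTP server close to the client both matter.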

  14. A look in the past, ~2014 (and its problems)
 [Diagram: servers hit a cluster VIP on a physical HW LB in front of an active/passive pair of in.tftpd hosts; automation writes the config and rsyncs ~7 GB from the repo to them]
 • Physical load balancers
 • Waste of resources
 • Automation needs to know which server is active
 • No stats
 • TFTP is a bad protocol in high-latency environments
 • Too many moving parts

  15. How did we solve those problems?

  16. We built fbtftp… a Python3 framework to build dynamic TFTP servers
 • Supports only RRQ (fetch operation)
 • Implements the main TFTP spec [1], the Option Extension [2], the Block Size option [3], and the Timeout Interval and Transfer Size options [4]
 • Extensible:
 • Define your own logic
 • Push your own statistics (per session or global)
 [1] RFC 1350, [2] RFC 2347, [3] RFC 2348, [4] RFC 2349

  17. Framework overview
 [Diagram: a client sends an RRQ to BaseServer; BaseServer calls get_handler() and fork()s a child process; in the child, BaseHandler runs the transfer session, calling get_response_data() to obtain a ResponseData object; per-session and per-server stats callbacks feed the monitoring infrastructure]

  18. Example: a simple server serving files from disk

  19. A file-like class that represents a file being served (the imports shown once here are shared by the following snippets):

    import os
    from fbtftp.base_handler import BaseHandler
    from fbtftp.base_handler import ResponseData
    from fbtftp.base_server import BaseServer

    class FileResponseData(ResponseData):
        def __init__(self, path):
            self._size = os.stat(path).st_size
            self._reader = open(path, 'rb')

        def read(self, n):
            return self._reader.read(n)

        def size(self):
            return self._size

        def close(self):
            self._reader.close()

  20. A class that deals with a transfer session:

    class StaticHandler(BaseHandler):
        def __init__(self, server_addr, peer, path, options, root,
                     stats_callback):
            super().__init__(server_addr, peer, path, options, stats_callback)
            self._root = root
            self._path = path

        def get_response_data(self):
            return FileResponseData(os.path.join(self._root, self._path))

  21. A BaseServer subclass ties everything together:

    class StaticServer(BaseServer):
        def __init__(self, address, port, retries, timeout, root,
                     handler_stats_callback, server_stats_callback):
            self._root = root
            self._handler_stats_callback = handler_stats_callback
            super().__init__(address, port, retries, timeout,
                             server_stats_callback)

        def get_handler(self, server_addr, peer, path, options):
            return StaticHandler(
                server_addr, peer, path, options, self._root,
                self._handler_stats_callback)

  22. The "main":

    def print_session_stats(stats):
        print(stats)

    def print_server_stats(stats):
        counters = stats.get_and_reset_all_counters()
        print('Server stats - every {} seconds'.format(stats.interval))
        print(counters)

    server = StaticServer(
        address='', port=69, retries=3, timeout=5, root='/var/tftproot/',
        handler_stats_callback=print_session_stats,
        server_stats_callback=print_server_stats)
    try:
        server.run()
    except KeyboardInterrupt:
        server.close()
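 The print callbacks above are the simplest possible consumers; in practice they are where per-session and global counters get pushed to a monitoring system. A minimal sketch, where emit() is a hypothetical sink (not part of fbtftp) and the only fbtftp stats API used is what the slide already shows (the callback arguments, get_and_reset_all_counters() and interval):

    import json
    import time

    def emit(name, payload):
        # Hypothetical sink: swap in your own monitoring/logging client here.
        print(json.dumps({'ts': time.time(), 'metric': name,
                          'data': payload}, default=str))

    def push_session_stats(stats):
        # Called in the child process when a transfer session finishes.
        emit('tftp.session', str(stats))

    def push_server_stats(stats):
        # Called in the parent process every stats.interval seconds.
        emit('tftp.server', stats.get_and_reset_all_counters())

    # Then pass these instead of the print_* callbacks:
    #   server = StaticServer('', 69, 3, 5, '/var/tftproot/',
    #                         push_session_stats, push_server_stats)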

  23. How do we use it? Improvements
 [Diagram: provisioning requests can hit any fbtftp backend; dynamic TFTP files (configs) are generated on the fly, static files are streamed from a local disk cache backed by the HTTP repo]
 • No more physical LBs
 • No waste of resources
 • Stats!
 • TFTP servers are dynamic
 • Config files (e.g. grub/ipxe configs) are generated (see the sketch after this list)
 • Static files are streamed
 • You can hit any server
 • No need to rsync data
 • Container-friendly

  24. Routing TFTP traffic
 LBs are gone: which TFTP server will serve a given client? NetNorad publishes latency maps periodically; DHCP consumes them and hands each host being provisioned the location of the closest healthy TFTP server.
 [Diagram: NetNorad health checks feed latency maps; DHCP uses them, together with TFTP service discovery, to pick the closest TFTP server for the host being provisioned]
 Read about NetNorad on our blog: http://tinyurl.com/hacrw7c

  25. [Map: POP locations shown are fictional. Local fbtftp instances in POP1 and POP2 fetch static files from the closest DC origin, only on cache misses or when files have changed]

  26. Thanks for listening!
 Project home: https://github.com/facebook/fbtftp/
 Install and play with it: $ pip3 install fbtftp
 Poster session Tuesday at 14.45: Python in Production Engineering
 Feel free to email me at pallotron@fb.com
