

SLIDE 1

RUMP KERNELS and {why,how} we got here

New Directions in Operating Systems, November 2014, London
Antti Kantee, Fixup Software Ltd.
pooka@rumpkernel.org / @anttikantee

SLIDE 2

Motivations

  • want to run an application, not an OS
  • want a better operating system
  • “operating system gets in the way”
SLIDE 3

FIRST HALF

what is an operating system

SLIDE 4

Summary of OS's

  • drivers
    – for enabling applications to run
    – n × 10⁶ LoC
  • optional goop defining the relation between drivers and applications
    – for protection, resource sharing, ...
    – 10³ – 10⁵ LoC

SLIDE 5

[diagram: kernel containing five drivers; two applications on top]

SLIDE 6

[diagram: kernel containing three drivers; two applications on top; a separate server ("OS") containing two more drivers]

SLIDE 7

[diagram: kernel; two applications; three separate servers, each containing a driver]

SLIDE 8

[diagram: three cpu cores, each running its own driver + application pair directly]

SLIDE 9

[diagram: kernel containing four drivers; one application on top]

SLIDE 10

[diagram: kernel containing five drivers; two applications on top]

SLIDE 11

SECOND HALF

what is a rump kernel

SLIDE 12

[diagram: callers (i.e. "clients") → syscalls → rump kernel (TCP/IP, file systems, device drvs, ...) → hypercall implementation → platform hypercall interface → platform]

SLIDE 13

rump (n):

small or inferior remnant or offshoot; especially: a group (as a parliament) carrying on in the name of the original body after the departure or expulsion of a large number of its members
SLIDE 14

rump kernel (n):

small or inferior remnant or offshoot; specifically: a monolithic OS kernel carrying on in the name of the original body after the departure or expulsion of a large number of its subsystems
SLIDE 15

A rump kernel does not provide threads, a scheduler, exec, or virtual memory, nor does it require privileged mode (or emulation of it) or interrupts

⇒ runs anywhere
⇒ integrates into other systems

SLIDE 16

Wait, that doesn't explain where the drivers come from

⇐ anykernel (NetBSD)

SLIDE 17

[diagram: application(s) and userspace libraries → libc → syscall traps → rump kernel calls → rump kernel (syscalls; TCP/IP, file systems, device drvs, ...; glue code) → hypercall implementation → platform hypercall interface → platform (e.g. Genode OS, Xen, userspace, bare-metal)]

  • unmodified NetBSD code (~10⁶ lines)
  • unmodified POSIX userspace code (10ⁿ lines)
  • platform-independent glue code (~10⁴ lines)
  • platform-specific code (~10³ lines)
  • same thread throughout the entire stack

AN EXAMPLE!

SLIDE 18

THIRD HALF

(with operating systems, expect the unexpected)

how rump kernels happened

SLIDE 19

Step 1: RUMP (2007)

[diagram: application → syscalls, VFS, etc. → userspace fs framework (kernel part) → userspace fs framework (userspace part) → kernel file system driver run in userspace: rump kernel (VFS emustub + unmodified file system driver) on ad-hoc shims over a NetBSD userspace hypercall implementation]

SLIDE 20

Step 2: UKFS (2007)

Q: how hard can implementing a few syscalls be?
A: very

[diagram: UKFS application (e.g. fs-utils) → userspace kernel: rump kernel (VFS emustub + unmodified file system driver) on ad-hoc shims over a userspace hypercall implementation]

SLIDE 21

Step 3: a lot (2008 - 2011)

[diagram: application / service → hijack → syscalls, vfs, etc. → rump kernel (syscalls; TCP/IP, file systems, device drvs, ...; glue code) → hypercall implementation → userspace hypercall interface]

  • support for all driver subsystems
  • isolation from the host
  • stable hypercall interface
  • anykernel completed
  • production quality
  • rump kernels used for testing NetBSD
  • no libc for rump kernels, applications ran partially on the host
SLIDE 22

Step 3.5: visions (not an actual step)

  • ca. turn of the year 2011/2012:

“An anykernel architecture can be seen as a gateway from current all-purpose operating systems to more specialized operating systems running on ASICs. The anykernel enables the device manufacturer to provide a compact hypervisor and select only the critical drivers from the original OS for their purposes. The unique advantage is that drivers which have been used and proven in general purpose systems, e.g. the TCP/IP stack, may be included without modification as standalone drivers in embedded products.”

SLIDE 23

Step 4: portability to POSIX (2007-2012, 2012-)

buildrump.sh (2012-)

SLIDE 24

Step 4.4: beyond POSIX (201[234])

SLIDE 25

SLIDE 26

SLIDE 27

SLIDE 28

Step 5.1: rumprun (2013, 2014)

[diagram: rump kernel (syscalls; TCP/IP, file systems, device drvs, ...; glue code) → hypercall implementation → platform hypercall interface → platform]
SLIDE 29

Step 5.2: rumprun (2013, 2014)

[diagram: application(s) and userspace libraries → libc → syscall traps → rump kernel calls → rump kernel (syscalls; TCP/IP, file systems, device drvs, ...; glue code) → hypercall implementation → platform hypercall interface → platform]
SLIDE 30

FINAL HALF

conclusions & other tidbits

SLIDE 31

All the gory technical details: http://book.rumpkernel.org/

2nd edition is a work in progress. Will be available as a free PDF, hopefully printed too.

SLIDE 32

Community

  • http://rumpkernel.org/
  • http://repo.rumpkernel.org/

– BSD-licensed source code

  • http://wiki.rumpkernel.org/
  • rumpkernel-users@lists.sourceforge.net
  • #rumpkernel on irc.freenode.net
  • @rumpkernel
SLIDE 33

The actual conclusions

SLIDE 34

You can make an omelette without breaking the kitchen!