

SLIDE 1

Lab 4 Tutorial

Instructor: Youngjin Kwon

SLIDE 2

What we’ve done so far

  • Lab 1: Booting OS from BIOS and initializing kernel
  • Lab 2: Physical memory management and memory mapping (kernel)
  • Lab 3: Defining user environment, handling interrupt/exception/system call
SLIDE 3

What you will be given for lab 4

  • Lab 1: Booting OS from BIOS and initializing kernel
  • Lab 2: Physical memory management and memory mapping (kernel)
  • Lab 3: Defining user environment, handling interrupt/exception/system call
  • (JOS LAB 4): Multi-process environments, scheduler, IPC primitives
  • (JOS LAB 5): File system, read/write syscalls, shell
  • (JOS LAB 6): Network stack, network driver (memory-mapped IO)
SLIDE 4

JOS OS architecture

[Diagram: JOS OS runs on the hardware. On top of it run the application (user environment), the file system server (FS environment), and the network server (NS environment). Applications reach the FS server via read/write IPC (fsipc()). The direct hardware access path is not enabled in Lab 4.]

SLIDE 5

JOS VMM overview

[Diagram: same architecture, with the kernel now being JOS hOS + VMM. On top of it run the JOS gOS (guest OS) environment, the file system server (FS environment), and the network server (NS environment), connected via read/write IPC (fsipc()). The direct hardware access path is not enabled in Lab 4.]

SLIDE 6

Let’s see demo “Run JOS on JOS”

SLIDE 7

Steps to run VMM

  • JOS booting
  • Launch FS server (fs_fs) and shell server (user_icode)
  • Run the application called “vmm” in the shell
  • vmm application: launching the guest OS environment
      • Declare itself as a gOS (guest OS environment) to the hOS
      • Load the gOS kernel into memory
      • sys_yield(): control goes to the hOS
  • JOS hOS
      • Turn hOS into hOS + VMM by enabling Intel VT
      • Execute the vmm environment as a gOS

[Diagram: JOS hOS running vmm (user env.) → JOS hOS + VMM running vmm (guest OS)]

SLIDE 8

vmm: declare itself as guest OS

  • Exercise 1
  • vmm (user environment): user/vmm.c
  • hOS: kern/syscall.c
  • Newly added member in struct Env; prepare the env for the guest OS type
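The gist of Exercise 1 can be sketched in plain C: the hOS needs a way to tell a guest-OS environment apart from an ordinary user environment, so struct Env gains a guest-related member. The names below mirror JOS style but are illustrative assumptions, not the lab's exact code.

```c
#include <assert.h>
#include <stdint.h>

enum EnvType {
    ENV_TYPE_USER = 0,
    ENV_TYPE_GUEST,        /* newly added: this env is a guest OS */
};

struct Env {
    int env_id;
    enum EnvType env_type; /* checked by the scheduler / env_run() */
    uint64_t env_eptrt;    /* hypothetical: root of this guest's EPT */
};

/* Prepare an env to run as a guest OS (what the vmm app asks the hOS for). */
static inline void env_mark_guest(struct Env *e, uint64_t eptrt) {
    e->env_type = ENV_TYPE_GUEST;
    e->env_eptrt = eptrt;
}
```

With this flag in place, env_run() can branch: a normal user env gets an iret-style entry, while an ENV_TYPE_GUEST env goes through the VMX entry path.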

SLIDE 9

vmm: load gOS kernel to memory

  • Exercise 3
      • Open GUEST_KERN (ELF format)
      • Load the ELF sections into guest memory with map_in_guest()
  • vmm env.: map_in_guest()
      • Allocate a temp page (to where?)
      • Read the given ELF section (specified by fd and offset)
      • Call sys_ept_map() to ask the hOS to do the mapping in the EPT
  • hOS: sys_ept_map()
      • Check for error conditions
      • Call ept_map_hva2gpa()

[Diagram: control flow between vmm (user environment) and the hOS]
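The copy loop inside map_in_guest() can be sketched as below. This is a simulation under stated assumptions: sys_ept_map_stub() stands in for the real syscall (which installs an EPT entry in the hOS), and the ELF segment is modeled as an in-memory buffer rather than an fd.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define PGSIZE 4096

/* Toy stand-in for the EPT: remember which guest-physical page each
 * chunk was mapped to. The real sys_ept_map() installs an EPT entry. */
static uint64_t mapped_gpa[16];
static int n_mapped;

static int sys_ept_map_stub(void *temp_page, uint64_t gpa) {
    (void)temp_page;
    mapped_gpa[n_mapped++] = gpa;
    return 0;
}

/* Sketch of map_in_guest(): copy an ELF segment into the guest one page
 * at a time, zero-filling the tail beyond the file size (bss). */
static int map_in_guest_sketch(uint64_t gpa, size_t memsz,
                               const uint8_t *file, size_t filesz) {
    static uint8_t temp[PGSIZE];          /* the temp page */
    for (size_t off = 0; off < memsz; off += PGSIZE) {
        memset(temp, 0, PGSIZE);          /* zero-fill beyond filesz */
        if (off < filesz) {
            size_t n = filesz - off;
            if (n > PGSIZE) n = PGSIZE;
            memcpy(temp, file + off, n);  /* "read" the ELF section chunk */
        }
        if (sys_ept_map_stub(temp, gpa + off) < 0)
            return -1;
    }
    return 0;
}
```

The temp page answers the slide's "to where?" question: it lives in the vmm user environment's own address space, and the hOS maps that host page into the guest's EPT at the requested guest-physical address.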

SLIDE 10

vmm: load gOS kernel to memory

  • Exercise 2: handling the EPT
      • Call ept_map_hva2gpa() to do the mapping
  • ept_map_hva2gpa()
      • Maps an hva to a gpa
      • Uses ept_lookup_gpa() to find the EPT entry for a given gpa
  • ept_lookup_gpa()
      • Similar to page_lookup(), but walks the EPT and returns the leaf EPT entry

Same address? Or not?
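The EPT walk in ept_lookup_gpa() mirrors a normal x86-64 page-table walk: four levels, each indexed by 9 bits of the guest-physical address. A minimal sketch of the per-level index extraction (the helper name is my own, not the lab's):

```c
#include <assert.h>
#include <stdint.h>

#define EPT_LEVELS 4

/* Pull out the 9-bit table index for a given EPT level of a gpa.
 * level 3 = root (bits 47:39), level 0 = leaf (bits 20:12). */
static inline int ept_index(uint64_t gpa, int level) {
    return (int)((gpa >> (12 + 9 * level)) & 0x1FF);
}
```

ept_lookup_gpa() applies this at each level, following (or allocating) intermediate tables, and returns a pointer to the leaf entry so ept_map_hva2gpa() can write the host-physical page address plus permission bits into it.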

SLIDE 11

Host virtual and guest physical address

[Diagram: the JOS hOS creates the guest env with 5 GB of host virtual address space, 0x10000000 .. 0x10000000 + 5G - 1, used to back guest physical addresses 0x0 .. 4GB - 1. A guest virtual address (e.g. the addr in "mov %rdx, addr") is translated by the guest page table into a guest physical address (non-root mode); the host page table then translates the corresponding host virtual address, e.g. 0x50000000, into a host physical address (root mode).]
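Because the hOS backs the whole guest-physical space with one contiguous region of its own virtual address space, translating between the two is plain offset arithmetic. The base and size below are the ones in the diagram; the helper names are mine.

```c
#include <assert.h>
#include <stdint.h>

#define GUEST_BASE_HVA 0x10000000ULL  /* start of the backing region (from the diagram) */
#define GUEST_MEM_SIZE (4ULL << 30)   /* 4 GiB of guest-physical space */

static inline uint64_t gpa_to_hva(uint64_t gpa) {
    return GUEST_BASE_HVA + gpa;      /* valid for gpa < GUEST_MEM_SIZE */
}

static inline uint64_t hva_to_gpa(uint64_t hva) {
    return hva - GUEST_BASE_HVA;
}
```

This answers the "same address? or not?" question from the previous slide: hva and gpa are different addresses, related by a fixed offset.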

SLIDE 12

How to get host VA in JOS?

Kernel virtual address
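JOS maps all physical memory at a fixed kernel virtual offset, so the host VA of a physical page is just KADDR(pa) = KERNBASE + pa. The KERNBASE value below is an assumed placeholder; use the constant from your inc/memlayout.h.

```c
#include <assert.h>
#include <stdint.h>

#define KERNBASE 0x8004000000ULL  /* assumed; check inc/memlayout.h */

/* KADDR/PADDR-style helpers: physical <-> kernel (host) virtual. */
static inline uint64_t kaddr(uint64_t pa)  { return KERNBASE + pa; }
static inline uint64_t paddr(uint64_t kva) { return kva - KERNBASE; }
```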

SLIDE 13

Steps to run VMM

  • JOS booting
  • Launch FS server (fs_fs) and shell server (user_icode)
  • Run the application called “vmm” in the shell
  • vmm application: launching the guest OS environment
      • Declare itself as a gOS to the hOS
      • Load the gOS kernel into memory
      • sys_yield(): control goes to the hOS
  • JOS hOS
      • Turn hOS into hOS + VMM by enabling Intel VT
      • Execute the vmm environment as a gOS

[Diagram: JOS hOS running vmm (user env.) → JOS hOS + VMM running vmm (guest OS)]

SLIDE 14

hOS: Turn hOS to hOS + VMM

sched_yield() (kern/sched.c)

SLIDE 15

hOS + VMM: execute vmm env. as gOS

  • vmx_vmrun()
      • Executes the environment as a guest operating system

env_run() (kern/env.c)

SLIDE 16

Background: Intel-VT

When the guest OS executes VMX root-privileged instructions

SLIDE 17

Virtual-Machine Control Structure (VMCS)

  • VMCS data area
  • Guest-state area
  • Host-state area
  • VM execution control field
  • VM exit control field
  • VM entry control field
  • VM exit information field

Detailed layout of the VMCS data area: vmm/vmx.h
How to manipulate the VMCS: vmcs_ctls_init()

SLIDE 18

VMCS control example

  • Scenario: page fault validation
      • The VMM hijacks page faults that happen in a guest OS
      • It verifies each page fault
      • If the page fault is legal, the VMM injects it back into the guest OS

SLIDE 19

Hijacking page faults

  • Uses the exception bitmap (32 bits)
      • If the bit at a given position is set, that exception causes a VM exit; otherwise the CPU delivers the exception through the guest OS IDT
      • If bit 14 (exception vector 14 == page fault) is set, the VMM takes control when a page fault happens in the guest OS

vmcs_ctls_init() – vmm/vmx.c
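Building the bitmap value is one line of bit manipulation. A minimal sketch (the helper name is mine; in the lab the resulting value is written into the VMCS by vmcs_ctls_init()):

```c
#include <assert.h>
#include <stdint.h>

#define T_PGFLT 14  /* exception vector 14 = page fault */

/* Set bit 14 of the 32-bit exception bitmap so that guest page faults
 * cause a VM exit instead of going through the guest IDT. */
static inline uint32_t exception_bitmap_with_pf(uint32_t bitmap) {
    return bitmap | (1u << T_PGFLT);
}
```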

SLIDE 20

Injecting page faults

  • Uses the VM-entry control fields (32 bits)
      • On VM entry, the CPU delivers the specified event through the guest OS IDT

vmcs_ctls_init() – vmm/vmx.c
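Concretely, injection goes through the 32-bit VM-entry interruption-information field. Its layout per the Intel SDM: bits 7:0 hold the vector, bits 10:8 the event type (3 = hardware exception), bit 11 says an error code is delivered, and bit 31 marks the field valid. A sketch of building it for a page fault:

```c
#include <assert.h>
#include <stdint.h>

#define INTR_TYPE_HW_EXCEPTION 3u

/* Build the VM-entry interruption-information value for injecting an
 * exception through the guest IDT on the next VM entry. */
static inline uint32_t entry_intr_info(uint8_t vector, int has_errcode) {
    uint32_t v = vector;                 /* bits 7:0  = vector */
    v |= INTR_TYPE_HW_EXCEPTION << 8;    /* bits 10:8 = hardware exception */
    if (has_errcode)
        v |= 1u << 11;                   /* bit 11    = deliver error code */
    v |= 1u << 31;                       /* bit 31    = valid */
    return v;
}
```

For a page fault (vector 14, which pushes an error code) the value comes out to 0x80000B0E; the VMM also writes the error code and CR2-equivalent state into the corresponding VMCS fields before resuming the guest.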

SLIDE 21

Vmlaunch/vmresume and vmexit

  • Exercise 4
      • Write the code for vmlaunch/vmresume
      • How do you determine vmlaunch vs. vmresume?
      • When vmlaunch/vmresume returns, it means a vmexit occurred (the guest OS has completely stopped)

asm_vmrun() (vmm/vmx.c)
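The vmlaunch-vs-vmresume question has a simple answer: vmlaunch is only legal the first time a VMCS runs; after a VM exit, the VMCS is in the "launched" state and re-entry must use vmresume. A per-guest flag is enough to decide, as this hedged sketch shows (struct and names are illustrative, not the lab's):

```c
#include <assert.h>

struct GuestRun {
    int launched;  /* 0 until the first successful vmlaunch */
};

/* Returns 1 if vmresume should be used, 0 for vmlaunch; marks launched.
 * asm_vmrun() would branch on this to pick the right instruction. */
static inline int pick_resume(struct GuestRun *g) {
    int use_resume = g->launched;
    g->launched = 1;
    return use_resume;
}
```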

SLIDE 22

What causes vmexit?

Bugnion et al., Hardware and Software Support for Virtualization, Morgan & Claypool Publishers
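Whatever the cause, the VMM learns it from the VMCS exit-reason field: the basic exit reason sits in bits 15:0, and bit 31 flags a failed VM entry. A sketch of decoding it, with a few basic reason codes from the Intel SDM:

```c
#include <assert.h>
#include <stdint.h>

#define EXIT_REASON_CPUID          10
#define EXIT_REASON_VMCALL         18
#define EXIT_REASON_EPT_VIOLATION  48

static inline uint16_t basic_exit_reason(uint32_t exit_reason) {
    return (uint16_t)(exit_reason & 0xFFFF);   /* bits 15:0 */
}

static inline int vmentry_failed(uint32_t exit_reason) {
    return (int)((exit_reason >> 31) & 1);     /* bit 31 */
}
```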

SLIDE 23

vmcall

[Diagram: vmcall control transfer between the gOS and the hOS + VMM]

SLIDE 24

Vmexit handler

  • Exercises 5, 6, 7 (trap-and-emulate)
      • gOS traps → vmexit()
      • Find out the vmexit reason (how?)
      • Implement the corresponding vmexit handler
  • Exercises 5, 7: vmcall
  • Exercise 6: cpuid instruction
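The trap-and-emulate dispatch above can be sketched as a switch on the basic exit reason. The handlers here are stubs that only report which path was taken; in the lab they would emulate the instruction (e.g. run cpuid on the guest's behalf, or service the vmcall hypercall) and advance the guest rip before resuming.

```c
#include <assert.h>
#include <stdint.h>

#define EXIT_REASON_CPUID   10
#define EXIT_REASON_VMCALL  18

static int handle_cpuid(void)  { return 1; }  /* Exercise 6: emulate cpuid */
static int handle_vmcall(void) { return 2; }  /* Exercises 5, 7: hypercalls */

/* Skeleton of the vmexit() dispatch: mask off the basic exit reason
 * (bits 15:0 of the VMCS exit-reason field) and route to a handler. */
static int dispatch_vmexit(uint32_t exit_reason) {
    switch (exit_reason & 0xFFFF) {
    case EXIT_REASON_CPUID:  return handle_cpuid();
    case EXIT_REASON_VMCALL: return handle_vmcall();
    default:                 return -1;  /* unhandled: stop the guest */
    }
}
```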