Gym: A VNF Testing Framework - Design and Prototype Insights
Prof. Christian Rothenberg
Raphael Vicente Rosa, Claudio Bertoldo
Jan-2017, University of Campinas (UNICAMP), Brazil
In technical collaboration with Ericsson Research
○ Indices
■ Blood sugar levels
■ Hemoglobin %
■ Perspiration
○ When?
■ Start
■ Steady (long run)
■ Final sprint
○ Why?
■ High performance
■ Sport analytics
■ Athlete profile
○ Goals:
■ Deliver the best
■ Conscious actions and body reactions
○ But…
■ Weather conditions may interfere
■ Over… [stretching | heating]
○ In the end…
■ Break records
■ Keep competing
■ No injuries
Extracted from: http://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/001/01.01.01_60/gs_NFV-IFA001v010101p.pdf and http://www.etsi.org/deliver/etsi_gs/NFV-EVE/001_099/004/01.01.01_60/gs_NFV-EVE004v010101p.pdf
○ Modular architecture with stand-alone programmable components
○ Simple messaging system among them, following generic RPC guidelines
○ Extensible set of tools and associated metrics
○ Programmability of tests through dynamic compositions of modules
○ Flexible methods for interpreting/processing the evaluations' output
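The "simple messaging system following generic RPC guidelines" can be pictured as components exchanging small JSON messages. A minimal sketch, assuming hypothetical field names (not Gym's actual wire format):

```python
import json

# Hypothetical RPC-style message a Manager might send to an Agent
# (field names are illustrative, not Gym's actual schema).
request = {
    "id": 1,
    "type": "instruction",
    "component": "agent",
    "action": "run_prober",
    "params": {"prober": "ping", "target": "10.0.0.2", "count": 4},
}

# Components serialize messages as JSON before sending them over a
# simple transport (e.g., a REST call).
wire = json.dumps(request)

# The receiving component parses the message back into a structure
# and dispatches on its fields.
received = json.loads(wire)
print(received["action"])           # run_prober
print(received["params"]["count"])  # 4
```

Keeping the messages as plain JSON over generic RPC is what lets new components be plugged in without touching the rest of the framework.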
○ Develop tests for profiling VNF performance, aligned with the agile Continuous Development/Integration of DevOps methodologies, speeding VNFs' time-to-market
○ Enhance the QoS of offered services with tested deployable scenarios (e.g., varying workloads across multiple sites), containing transparent sets of required VNF metrics
○ Increase reliability via VNF testing in their execution environments, with compliance methodologies (e.g., energy consumption)
➢ Comparability:
○ test outputs must be easily usable (e.g., imported by big data applications).
➢ Repeatability:
○ the testing setup must be defined by a handy, flexible design model that the testing platform can interpret and execute repeatedly, with customization.
➢ Interoperability:
○ tests should be portable across different environments, with lightweight technologies providing the means for it.
➢ Configurability:
○ flexibility when composing test descriptions and configurations.
○ e.g., containers
○ e.g., JSON and REST APIs
○ Extensible classes as tools’ interfaces
○ Outlines and Profiles
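The "extensible classes as tools' interfaces" idea above can be sketched as a small class hierarchy. All names here are hypothetical illustrations, not Gym's actual API:

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Hypothetical base interface for Gym tools; the real one may differ."""

    def __init__(self, name):
        self.name = name

    @abstractmethod
    def run(self, params):
        """Execute the tool and return its metrics as a dict."""

class Prober(Tool):
    """Active stimulus tool (e.g., a ping or sipp wrapper)."""

    def run(self, params):
        # A real prober would invoke the external tool here; this stub
        # returns a fixed measurement for illustration only.
        return {"tool": self.name, "latency_ms": 0.42}

class Listener(Tool):
    """Passive monitoring tool (e.g., a host cpu/memory listener)."""

    def run(self, params):
        # A real listener would sample the host/VNF environment here.
        return {"tool": self.name, "cpu_percent": 12.5}

# New probers/listeners extend the base class and are picked up by
# agents/monitors through the same run() interface.
tools = [Prober("ping"), Listener("host")]
for t in tools:
    print(t.run({})["tool"])
```

Because agents and monitors only depend on the shared interface, a new tool (e.g., a DPDK pktgen prober) slots in without framework changes.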
★ Outline:
specifies how to test a VNF; it may be specific to one VNF or applicable to several VNF types. It includes structural (e.g., agents/monitors) and functional sketches with variable parameters (e.g., probers/listeners properties), used as inputs by Gym to perform VNF tests
★ Profile:
is composed of the outputs of an Outline execution, and represents a mapping between virtualized resources (e.g., vCPU, memory) and performance (e.g., throughput, latency between in/out ports): the desired resources to deliver a given (predictable/measured) performance quality
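A rough data-shape sketch of the Outline/Profile pair above, with purely illustrative field names (not Gym's actual models):

```python
# Hypothetical Outline: structural + functional sketch with parameters.
outline = {
    "vnf": "vIMS",
    "agents": [{"prober": "sipp", "params": {"calls_per_sec": 100}}],
    "monitors": [{"listener": "host", "metrics": ["cpu", "memory"]}],
}

# Hypothetical Profile: maps allocated virtual resources to the
# performance measured when the Outline was executed.
profile = {
    "resources": {"vcpu": 2, "memory_mb": 4096},
    "performance": {"throughput_calls_per_sec": 95, "cpu_percent": 63.0},
}

# The mapping answers questions like: which resources delivered a
# target performance quality?
if profile["performance"]["throughput_calls_per_sec"] >= 90:
    print(profile["resources"]["vcpu"])  # 2
```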
➔ Service Provider Question: In which cloud would it be cheaper to deploy my vIMS?
➔ Why? I want to parameterize my service, so I can:
1. Select some cloud providers: e.g., Microsoft, Google, Amazon
2. Get a VNF: vIMS (Project Clearwater)
3. Have a stimulus: sipp prober
4. Have a monitor for the environment: linux host listener
5. Target some metrics: CPU vs. Calls/s
6. OK, write your Outline, submit it to the Player, and wait for the target Profile
a. then, select your metrics, and…
b. apply algorithms for analysis,
c. make graphics, etc.
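Step (b) above can be as simple as fitting the targeted CPU vs. Calls/s relation from the returned Profile. A minimal sketch, with made-up sample numbers for illustration:

```python
# Hypothetical Profile samples from one cloud: (calls/s stimulus, CPU %).
samples = [(50, 20.0), (100, 41.0), (150, 59.5), (200, 81.0)]

# Least-squares fit of cpu = a * calls + b, estimating the CPU cost
# per extra call/s on this provider (comparable across clouds).
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(round(a, 3))  # → 0.403 (CPU % per extra call/s)
```

Repeating the same fit on Profiles extracted from each candidate cloud gives a directly comparable per-call cost, which is the "comparability" requirement in action.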
➔ Deploying Gym’s components
◆ How?
◆ Where?
○ Are your customers in Australia?
○ Should you group agents: one-to-many, many-to-many?
➔ How to compose the Outline?
◆ Gym provides a flexible syntax
◆ Need to understand what you want
○ vIMS components decomposed in OpenStack VMs (different flavors)
○ Monitors installed in each vIMS VM -> host listener (cpu, memory, disk i/o, network)
○ Agent with sipp prober in a VM
○ Manager/Player in another VM
○ All components in the same network
❖ Consistency:
➢ Will a VNF deployed in a tested execution environment deliver the performance described in its extracted Profile, especially when it is tested and then put into production making use of multiple virtualization technologies, for instance?
❖ Stability:
➢ VNF testing measurements need to present consistent results when applied over different scenarios. So, do testing descriptions transparently handle service/resource definitions and metrics of VNFs placed in heterogeneous execution environments?
❖ Goodness:
➢ A VNF might be tested with different allocated resources and stimuli than in its eventual production environment. Crucially, how well do testing results, and the associated stimuli, correspond to the VNF's measured performance when running in execution environments under actual workloads?
➢ We leave the development of VNF-specific testing tools to users (VNF developers)
○ They know a VNF's state-machine testing procedure best (e.g., sipp)
○ e.g., it is possible to code a prober to interface with a NetFPGA card or DPDK pktgen
➢ Metrics and their interpretation are open field for research
○ e.g., docker containers present dozens of metrics across cpu, memory, disk, etc.
➢ Framework in its infancy
○ Many tests, errors, debugging going on
○ Improving design of models, messages, behaviors
➢ Important related work and foundations
○ OPNFV (yardstick, bottlenecks, etc.), ToDD, many more (please, tell us about yours)
○ IETF BMWG, IRTF NFVRG, ETSI NFV (TST)
➔ Gym will be open source
➔ Open repository of VNF profiles
➔ New probers/listeners
➔ New scenarios coming from SDN/NFV outcomes for 5G realization
Can we outperform continuous monitoring? When? How?
Are we modeling tests correctly?
Are tools able to express actual service workloads and heterogeneous behaviors?
"Trust, but verify", or verify, then trust?