



BMWG R. Rosa, Ed.
Internet-Draft Unicamp
Intended status: Informational R. Szabo
Expires: September 22, 2016 Ericsson
March 21, 2016


VNF Benchmarking Methodology
draft-rosa-bmwg-vnfbench-00

Abstract

This document describes VNF benchmarking methodologies.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 22, 2016.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.






Rosa & Szabo Expires September 22, 2016 [Page 1]

Internet-Draft VNFBench March 2016


Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 2
3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
4. Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . 4
5. VNF Benchmarking Considerations . . . . . . . . . . . . . . . 5
6. Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 5
   6.1.  Benchmarking . . . . . . . . . . . . . . . . . . . . . .  6
     6.1.1.  Throughput . . . . . . . . . . . . . . . . . . . . .  6
     6.1.2.  Latency  . . . . . . . . . . . . . . . . . . . . . .  7
     6.1.3.  Frame Loss Rate  . . . . . . . . . . . . . . . . . .  7
7. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 8
9. Security Considerations . . . . . . . . . . . . . . . . . . . 8
10. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 8
11. Informative References . . . . . . . . . . . . . . . . . . . 8
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 9

1. Introduction

New paradigms of network services envisioned by NFV bring VNFs as
software-based entities that can be deployed in virtualized
environments [ETS14a]. In order to be managed/orchestrated or
compared with physical network functions, VNF Descriptors can specify
performance profiles containing metrics (e.g., throughput) associated
with allocated resources (e.g., vCPU). This document describes
benchmarking methodologies to obtain VNF Profiles (resource/
performance figures).

2. Terminology

The reader is assumed to be familiar with the terminology as defined
in the European Telecommunications Standards Institute (ETSI) NFV
document [ETS14b]. Some of these terms, and others commonly used in
this document, are defined below.

NFV: Network Function Virtualization - The principle of separating
network functions from the hardware they run on by using virtual
hardware abstraction.

NFVI PoP: NFV Infrastructure Point of Presence - Any combination of
virtualized compute, storage and network resources.

NFVI: NFV Infrastructure - Collection of NFVI PoPs under one
orchestrator.







VIM: Virtualized Infrastructure Manager - functional block that is
responsible for controlling and managing the NFVI compute, storage
and network resources, usually within one operator's
Infrastructure Domain (e.g. NFVI-PoP).

NFVO: NFV Orchestrator - functional block that manages the Network
Service (NS) life-cycle and coordinates the management of the NS
life-cycle, the VNF life-cycle (supported by the VNFM) and the NFVI
resources (supported by the VIM) to ensure an optimized allocation of
the necessary resources and connectivity.

VNF: Virtualized Network Function - a software-based network
function.

VNFD: Virtualised Network Function Descriptor - configuration
template that describes a VNF in terms of its deployment and
operational behaviour, and is used in the process of VNF on-
boarding and managing the life cycle of a VNF instance.

VNF-FG: Virtualized Network Function Forwarding Graph - an ordered
list of VNFs creating a service chain.

MANO: Management and Orchestration - In the ETSI NFV framework
[ETS14a], this is the global entity responsible for management and
orchestration of NFV life-cycle.

Network Service: a composition of Network Functions, defined by its
functional and behavioural specification.

Additional terminology not defined by the ETSI NFV ISG:

VNF-BP: VNF Benchmarking Profile - the specification of how to
measure a VNF Profile. A VNF-BP may be specific to a VNF or
applicable to several VNF types. The specification includes
structural and functional instructions, and variable parameters
(metrics) at different abstraction levels (e.g., vCPU, memory,
throughput, latency; session, transaction, tenants, etc.).

VNF Profile: a mapping between virtualized resources (e.g., vCPU,
memory) and VNF performance (e.g., throughput, latency between in/
out ports) at a given NFVI PoP. An orchestration function can use
the VNF Profile to select a host (NFVI PoP) for a VNF and to
allocate necessary resources to deliver the required performance
characteristics.
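As an illustration of the mapping above, the following sketch models a
VNF Profile in Python. The classes, field names and figures are
assumptions of this example, not an ETSI-defined data model; the
host-selection helper mirrors how an orchestration function might use
the profile.

```python
# Illustrative model of a VNF Profile: a mapping from (NFVI PoP,
# allocated resources) to measured performance figures. All names and
# numbers here are assumptions of this sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class Resources:
    vcpu: int        # number of virtual CPUs allocated
    memory_gb: int   # memory allocated, in gigabytes

@dataclass(frozen=True)
class Performance:
    throughput_mbps: float   # measured throughput between in/out ports
    latency_ms: float        # measured latency between in/out ports

# Example VNF Profile entries for a hypothetical "VNF1".
vnf1_profile = {
    ("PoP1", Resources(vcpu=2, memory_gb=8)): Performance(10.0, 200.0),
    ("PoP2", Resources(vcpu=8, memory_gb=16)): Performance(20.0, 150.0),
}

def hosts_meeting(profile, min_mbps, max_ms):
    """PoPs whose measured figures satisfy the requested service levels."""
    return [pop for (pop, _res), perf in profile.items()
            if perf.throughput_mbps >= min_mbps
            and perf.latency_ms <= max_ms]
```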

Customer: a user/subscriber/consumer of an ETSI-defined Network
Service.







Agents: Network Functions performing benchmarking tasks (e.g.,
synthetic traffic sources and sinks; measurement and observation
functions, etc.).

SUT: System Under Test comprises the VNF under test.

3. Scope

This document treats VNFs as black boxes when defining VNF
performance benchmarking methodologies. White-box benchmarking of
VNFs is left for further study and may be added later.

4. Assumptions

We assume a VNF benchmarking set-up as shown in Figure 1. Customers
can request Network Services (NS) from an NFVO with associated
service level specifications (e.g., throughput and delay). The NFVO,
in turn, must select hosts and software resource allocations for the
VNFs and build the necessary network overlay to meet the
requirements. Therefore, the NFVO must know the VNF Profiles of the
target hosts to perform placement and resource assignment.

In a highly dynamic environment, where both the VNF instances (e.g.,
revised VM images) and the NFVI resources (e.g., hardware upgrades)
are changing, the NFVO should be able to create VNF Profiles on
demand.

We assume that, based on VNF Benchmarking Profile definitions, NFVOs
can run benchmarking evaluations to learn the VNF Profiles of target
hosts.
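The assumption above can be sketched as a simple driver loop: for each
candidate host and resource allocation named in the VNF-BP, the NFVO
runs a benchmark and records the result as a VNF Profile entry. The
VNF-BP layout and `run_benchmark` are hypothetical stand-ins for the
agent-driven procedures of Section 6.

```python
# Illustrative on-demand profiling driver; all names are assumptions.

def build_profiles(vnf_bp, hosts, run_benchmark):
    """Benchmark one VNF on every candidate host; collect the results."""
    profiles = {}
    for host in hosts:
        for resources in vnf_bp["resource_candidates"]:
            # Deploy the VNF with `resources` on `host`, drive the
            # agents and record the measured performance figures.
            profiles[(host, resources)] = run_benchmark(host, resources)
    return profiles

# Toy benchmark: throughput grows linearly with allocated vCPUs.
fake_benchmark = lambda host, vcpus: {"throughput_mbps": 5.0 * vcpus}
bp = {"resource_candidates": (2, 4)}
profiles = build_profiles(bp, ["PoP1", "PoP2"], fake_benchmark)
```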

In a virtualization environment, however, not only the SUT but all
the other benchmarking agents may be software defined (physical or
virtualized network functions).

Figure 1 shows an example where the NFVO can use PoPa and PoPb to set
up benchmarking functions to test VNFs hosted in the PoP 1, 2 and 3
domains, corresponding to VIM 1, 2 and 3. The NFVO uses the VNF
Benchmarking Profiles to deploy agents according to the SUT VNF. The
VNF Benchmarking Profile is defined by the VNF Developer. The
results of the VNF benchmarking are stored in a VNF Profile.

,----.
,----. ( VNF2 )
{VNF1: {10Mbps,200ms}{ ( VNF1 ) `----'
{{2CPU, 8GB}@PoP1} `----'
{{8CPU, 16GB}@PoP2} +---------+ +--------------+
{{4CPU, 4GB}@PoP3}}} |Customers| |VNF Developers|
{20Mbps,300ms}...} +-----+---+ +------.-------+
{VNF2:{10Mbps,200ms}{ | |
{{8CPU, 16GB}@PoP1} | |
...}} +-----+-------+ ,------+--------.
,---------------. | |<->(VNF Benchmarking )
( VNF-Profiles )<--->| NFVO / VNFM | \ Profiles /
`---------------' | | `-------------'
+-+----+----+-+
____....----'/ | \---..__
...----''' V V V ```--...__
+----+-+ +------+ +------+ +------+ +-+----+
| VIMa | | VIM1 | | VIM2 | | VIM3 | | VIMb |
+-----++ +-+----+ +-+----+ +-+----+ +-----++
| | | | NFVI |
+------+--+ *-------+--------+--------+--------* +------+--+
|PoPa | | | | | | |PoPb |
|+------+ |SAP | +-----+-+ +---+---+ +-+-----+ | SAP| +------+|
||Agents|=|>O--+-| PoP 1 |--| PoP 2 |--| PoP 3 |--+--O>|=|Agents||
|+------+ | | +-------+ +-------+ +-------+ | | +------+|
| | | PoP1 PoP2 PoP3 | | |
| | | Container Enhanced Baremetal | | |
| | | OS Hypervisor | | |
+---------+ *----------------------------------* +---------+

Figure 1: VNF Testing Scenario

5. VNF Benchmarking Considerations

VNF benchmarking considerations are defined in [Mor15].
Additionally, VNF pre-deployment testing considerations are well
explored in [ETS14c].

This document lists further considerations:

Black-Box SUT with Black-Box Benchmarking Agents: In virtualization
environments, neither the VNF instance, the underlying
virtualization environment, nor the specifics of the agents may be
known to the entity managing abstract resources. This implies
black-box testing with black-box functional components, which are
configured through opaque configuration parameters defined by the
VNF developers (or similar actors) for the benchmarking entity
(e.g., the NFVO).

6. Methodology

Following the ETSI model [ETS14c], we distinguish three methods for
VNF evaluation:

Benchmarking: Where resource {cpu, memory, storage} parameters are
provided and the corresponding {latency, throughput} performance





parameters are obtained. Note that such a request might produce
multiple reports, for example, with minimal-latency or
maximum-throughput results.

Verification: Both resource {cpu, memory, storage} and performance
{latency, throughput} parameters are provided, and agents verify
whether the given association holds.

Dimensioning: Where performance parameters {latency, throughput} are
provided and the corresponding {cpu, memory, storage} resource
parameters are obtained. Note that multiple deployment iterations
may be required or, if possible, the underlying allocated resources
need to be dynamically altered.

Note: Verification and Dimensioning can be reduced to Benchmarking.
Therefore, we detail Benchmarking in what follows.
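The reduction noted above can be sketched as follows: both
Verification and Dimensioning are expressible in terms of a single
benchmarking primitive. The `benchmark` callable, the tolerance
parameter and the metric names are assumptions of this sketch.

```python
# Illustrative sketch: Verification and Dimensioning built on top of a
# Benchmarking primitive. `benchmark` is a hypothetical callable that
# returns measured figures for a given resource allocation.

def verify(benchmark, resources, claimed, tolerance=0.05):
    """Verify that measured figures are within tolerance of the claim."""
    measured = benchmark(resources)
    return all(abs(measured[k] - v) <= tolerance * abs(v)
               for k, v in claimed.items())

def dimension(benchmark, candidates, target_mbps):
    """Smallest resource candidate whose throughput meets the target."""
    for resources in sorted(candidates):
        if benchmark(resources)["throughput_mbps"] >= target_mbps:
            return resources
    return None  # no candidate meets the target

# Toy benchmark: throughput scales with vCPUs, latency shrinks.
bench = lambda vcpus: {"throughput_mbps": 5.0 * vcpus,
                       "latency_ms": 100.0 / vcpus}
```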

6.1. Benchmarking

All benchmarking methodologies described in this section rely on the
definition of VNF-BPs for each testing procedure. The Benchmarking
Methodology for Network Interconnect Devices defined in [rfc2544]
underlies all the subsections below. In addition, the tests build on
notions introduced and discussed in the IP Performance Metrics (IPPM)
Framework [rfc2330].

6.1.1. Throughput

Objective: Provide, for a particular set of allocated resources, the
throughput between two or more VNF ports, as expressed in the VNF-BP.

Prerequisite: The VNF (SUT) must be deployed and stable, and its
allocated resources collected. The VNF must be reachable by the
agents. The frame size to be used by the agents must be defined in
the VNF-BP.

Procedure:

1. Establish connectivity between agents and VNF ports.

2. Agents initiate traffic sources, specifically designed for the
VNF test, increasing the rate periodically.

3. Throughput is measured as the highest traffic rate achieved
without frame loss.

Reporting Format: the report must contain the VNF's allocated
resources and the measured throughput (throughput in the sense of
[rfc2544]).
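Steps 2-3 above can be sketched as a rate-stepping loop in the spirit
of [rfc2544]: the offered rate is raised until loss appears, and the
highest loss-free rate is reported as throughput. `send_at_rate` is a
hypothetical stand-in for the traffic agents.

```python
# Illustrative throughput search: `send_at_rate` offers `rate` frames
# per second and returns the number of frames received back.

def measure_throughput(send_at_rate, rates):
    """Return the highest offered rate (frames/s) with zero frame loss."""
    best = 0
    for rate in sorted(rates):
        received = send_at_rate(rate)
        if received == rate:
            best = rate   # no loss at this rate; keep increasing
        else:
            break         # loss observed; stop stepping up
    return best

# Toy SUT that forwards at most 1000 frames/s.
sut = lambda rate: min(rate, 1000)
```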





6.1.2. Latency

Objective: Provide, for a particular set of allocated resources, the
latency between two or more VNF ports, as expressed in the VNF-BP.

Prerequisite: The VNF (SUT) must be deployed and stable, and its
allocated resources collected. The VNF must be reachable by the
agents. The frame size and the corresponding throughput to be used
by the agents must be defined in the VNF-BP.

Procedure:

1. Establish connectivity between agents and VNF ports.

2. Agents initiate traffic sources with the throughput and frame
size specifically designed for the VNF test.

3. Latency is measured once the throughput is sustained for the
period of time specified in the VNF-BP.

Reporting Format: the report must contain the VNF's allocated
resources, the throughput used as stimulus and the latency
measurement (latency in the sense of [rfc2544]).
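A minimal sketch of the procedure above, assuming a tagged-frame
technique similar to [rfc2544]: a frame is timestamped on transmit
and on receive while the stream runs at the VNF-BP rate, and the
latency is the difference. `clock` and `send_tagged_frame` are
illustrative stand-ins for agent functions.

```python
# Illustrative latency measurement: transmit/receive timestamp
# difference of a tagged frame, averaged over several trials.

def measure_latency(send_tagged_frame, clock, trials=20):
    """Average tagged-frame latency over `trials` runs (clock units)."""
    samples = []
    for _ in range(trials):
        t_tx = clock()            # timestamp when the tagged frame leaves
        send_tagged_frame()       # frame traverses the VNF under test
        t_rx = clock()            # timestamp when it is received back
        samples.append(t_rx - t_tx)
    return sum(samples) / len(samples)

# Toy stand-ins: a counter-based clock and a no-op "network".
ticks = iter(range(100))
fake_clock = lambda: next(ticks)
avg = measure_latency(lambda: None, fake_clock, trials=5)
```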

6.1.3. Frame Loss Rate

Objective: Provide, for a particular set of allocated resources, the
frame loss rate between two or more VNF ports, as expressed in the
VNF-BP.

Prerequisite: The VNF (SUT) must be deployed and stable, and its
allocated resources collected, noting any particular features of the
underlying VNF virtualization environment, provided by the NFVO/VIM
or independently extracted. The VNF must be reachable by the agents.
The rate of source traffic and the frame type used for the agents'
stimulus must be defined in the VNF-BP.

Procedure:

1. Establish connectivity between agents and VNF ports.

2. Agents initiate traffic sources, specifically designed for the
VNF test, reaching the rate of source traffic defined in the VNF-BP.

3. Frame loss rate is measured once the pre-defined traffic rate is
sustained for the period of time established in the VNF-BP.








Reporting Format: the report must contain the VNF's allocated
resources, the rate of source traffic used as stimulus and the frame
loss rate measurement (frame loss rate in the sense of [rfc2544]).
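The measurement above reduces to the frame loss rate formula of
[rfc2544], Section 26.3: the percentage of offered frames that the
SUT failed to forward during the trial period. A direct sketch:

```python
# Frame loss rate in the sense of [rfc2544], Section 26.3:
# ((input_count - output_count) * 100) / input_count.

def frame_loss_rate(frames_offered, frames_forwarded):
    """Frame loss rate in percent of the offered frames."""
    if frames_offered <= 0:
        raise ValueError("no frames offered")
    return (frames_offered - frames_forwarded) * 100.0 / frames_offered
```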

7. Summary

This document describes black-box benchmarking methodologies for
black-box VNFs in virtualization environments (e.g., the ETSI NFV
framework) to create VNF Profiles, which associate resources with the
performance metrics of a given VNF at a given host (e.g., an NFVI
PoP).

The authors see the following next steps:

VNF Scaling: There are two scaling options: a single instance with
more resources, or multiple instances. Questions: What is the
maximum performance of a single-instance VNF at a given host with
increasing resources? How many independent VNF instances (or
components) can run at maximum performance at a given host?
Conversely, what is the performance of the smallest
resource-footprint VNF allocation?

VNF instantiation time: this metric involves at least three
components: VNF bootstrapping (SUT), the execution environment and
the orchestration process.

8. IANA Considerations

This memo includes no request to IANA.

9. Security Considerations

TBD

10. Acknowledgement

The authors would like to thank the support of Ericsson Research,
Brazil.

This work is partially supported by FP7 UNIFY, a research project
partially funded by the European Community under the Seventh
Framework Program (grant agreement no. 619609). The views expressed
here are those of the authors only. The European Commission is not
liable for any use that may be made of the information in this
document.

11. Informative References






[ETS14a] ETSI, "Architectural Framework - ETSI GS NFV 002 V1.2.1",
Dec 2014, <http://www.etsi.org/deliver/etsi_gs/NFV/001_099/
002/01.02.01_60/gs_NFV002v010201p.pdf>.

[ETS14b] ETSI, "Terminology for Main Concepts in NFV - ETSI GS NFV
003 V1.2.1", Dec 2014, <http://www.etsi.org/deliver/etsi_gs/
NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf>.

[ETS14c] ETSI, "NFV Pre-deployment Testing - ETSI GS NFV TST001
V0.0.15", February 2016, <http://docbox.etsi.org/ISG/NFV/
Open/DRAFTS/TST001_-_Pre-deployment_Validation/
NFV-TST001v0015.zip>.

[Mor15] Morton, A., "Considerations for Benchmarking Virtual
Network Functions and Their Infrastructure", February 2015,
<https://tools.ietf.org/html/draft-morton-bmwg-virtual-net-03>.

[rfc2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, May 1998,
<https://tools.ietf.org/html/rfc2330>.

[rfc2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999,
<https://tools.ietf.org/html/rfc2544>.

Authors' Addresses

Raphael Vicente Rosa (editor)
University of Campinas
Av. Albert Einstein 300
Campinas, Sao Paulo 13083-852
Brazil

Email: raphaelvrosa@gmail.com
URI: http://www.intrig.dca.fee.unicamp.br/


Robert Szabo
Ericsson Research, Hungary
Irinyi Jozsef u. 4-20
Budapest 1117
Hungary

Email: robert.szabo@ericsson.com
URI: http://www.ericsson.com/



