== Benchmark results ==

This page collects the available benchmark results in order to demonstrate, with actual data rather than words alone, that container technology outperforms hypervisor-based virtualization.

The first benchmark, '''response time''', measures how fast an application inside a virtual machine (VM) or operating system container (CT) can react to an external request under various conditions:
 
* an idle system and an idle VM/CT
* a busy system and an idle VM/CT
* a busy system and a busy VM/CT
 
  
This benchmark case is common to many latency-sensitive real-life workloads, for example high-performance computing, image processing and rendering, and web and database servers.
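The "busy" conditions above are produced by running a CPU eater inside the VM/CT (and, for a busy system, in several VMs/CTs at once). As a rough sketch of what such a load generator does, here is a minimal Python busy-loop driver; the worker counts and durations are illustrative, not part of the original test harness:

```python
import multiprocessing as mp
import time

def cpu_eater(seconds: float) -> None:
    # Spin in a tight loop, consuming one CPU core until the deadline.
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass

def run_busy_load(n_workers: int, seconds: float) -> float:
    # Occupy n_workers cores for `seconds`; return the wall time elapsed.
    start = time.monotonic()
    workers = [mp.Process(target=cpu_eater, args=(seconds,))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.monotonic() - start

if __name__ == "__main__":
    # Emulate a "busy" guest by pinning two cores for a short interval.
    run_busy_load(2, 0.2)
```

In the actual benchmark the equivalent busy loop simply runs for the whole measurement interval, so the guest's vCPUs are saturated while netperf requests arrive.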
+
{| class="wikitable"
|+ Benchmarks
|-
! Benchmark !! Description
|-
| [[/Response Time/]]
| Microbenchmark demonstrating latency issues of interactive applications in virtualized and loaded systems (netperf RR in various conditions).
|-
| [[/Network Throughput/]]
| Simple 10 Gbit network throughput comparison using the netperf test.
|-
| [[/LAMP/]]
| Linux Apache+MySQL+PHP (LAMP) stack benchmark in multiple simultaneously running virtualization instances.
|-
| [[/vConsolidate-UP/]]
| UP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
| [[/vConsolidate-SMP/]]
| SMP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
| [[/Microbenchmarks/]]
| Various microbenchmarks (context switch, system call, etc.), plus UnixBench results.
|}
  
== Response Time ==

=== Implementation ===

To measure response time, we use the well-known netperf TCP_RR test. To emulate a busy VM/CT, we run a CPU-eater program (a busy loop) inside it. To emulate a busy system, we run several busy VMs/CTs, enough to consume all of the host's CPU time. Netperf runs in server mode inside '''one''' VM/CT. From a separate physical host, we run the netperf TCP_RR test against the selected VM/CT over a 1 Gbit network.

=== Testbed Configuration ===

Server: 4 x hex-core Intel Xeon (2.66 GHz), 32 GB RAM
 
 
 
Client: 4 x hex-core Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1 Gbit direct server-to-client connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V R2, PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64
 
 
 
Software and Tunings:
 
* netperf v2.4.5
 
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU and 1 GB RAM
* '''six''' VMs/CTs (used to load the server CPU; see test cases), each configured with 4 vCPU and 1 GB RAM
* netperf run strings:
** in the VM/CT: <code>netserver -p 30300</code>
** on the client: <code>netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128</code>
* firewall was turned off
* all other tunings were left at default values
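TCP_RR reports a transaction rate (request/response pairs per second), not a latency directly. Since the test ping-pongs a single request and response at a time, the mean response time is the reciprocal of that rate. A small helper illustrating the conversion; the example rates are hypothetical, not measured values from this testbed:

```python
# Convert a netperf TCP_RR transaction rate into an average response time.
# TCP_RR exchanges one request/response pair at a time, so the mean
# round-trip time is the reciprocal of the reported rate.

def response_time_us(transactions_per_sec: float) -> float:
    return 1_000_000.0 / transactions_per_sec

# Hypothetical rates for illustration only:
for label, rate in [("idle host", 20_000.0), ("busy host", 2_500.0)]:
    print(f"{label}: {response_time_us(rate):.0f} us per transaction")
```

This is why a drop in the reported transaction rate under load translates directly into a proportionally higher response time for the guest.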
 
 
 
=== Benchmark Results ===
 
 
 
[[File:Response_time.png]]
 
 
 
 
 
'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT), Virtuozzo Containers show the lowest overhead of all the tested virtualization solutions.'''
 
 
 
 
 
== 10 Gbit Network Throughput ==
 
 
 
=== Benchmark Description ===
 
 
 
In this benchmark we measure throughput over a 10 Gbit network connection in two directions:
 
* from VM/CT to physical client
 
* from physical client to VM/CT
 
 
 
 
 
=== Implementation ===
 
 
 
To measure network throughput we use the standard performance test '''netperf'''. The host running the VM/CT and the physical client are interconnected directly (without switches, etc.).
 
 
 
=== Testbed Configuration ===
 
Server: 4 x hex-core Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Client: 4 x hex-core Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: 10 Gbit direct server-to-client optical connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V R2, PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64
 
 
 
Software and Tunings:
 
* netperf v2.4.5
 
* '''one''' VM/CT with netperf, configured with 4 vCPU and 4 GB RAM
* where possible, we enabled offloading and hardware checksumming features (GRO, GSO, etc.) and jumbo frames (MTU=9000)
* netperf run strings:
** server: <code>netserver -p PORT</code> (5 instances)
** client: <code>netperf -p PORT -H HOST -t TCP_SENDFILE -l 300</code> (several instances)
* firewall was turned off
* all other tunings were left at default values
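With several netperf instances running in parallel, the figure of interest is their combined throughput relative to the 10 Gbit line rate. A small sketch of that aggregation; the per-instance numbers below are hypothetical, for illustration only:

```python
# Sum the per-instance throughput reported by several parallel netperf
# runs and compare the total against the 10 Gbit line rate.

LINE_RATE_GBIT = 10.0

def total_gbit(per_instance_mbit: list) -> float:
    # netperf reports 10^6 bits/s; convert the sum to Gbit/s.
    return sum(per_instance_mbit) / 1000.0

def percent_of_line_rate(gbit: float) -> float:
    return 100.0 * gbit / LINE_RATE_GBIT

# Hypothetical Mbit/s readings from 5 parallel instances:
streams = [1975.0, 1980.0, 1940.0, 1985.0, 1990.0]
total = total_gbit(streams)
print(f"total: {total:.2f} Gbit/s "
      f"({percent_of_line_rate(total):.1f}% of line rate)")
```

Summing the per-stream results like this is how a multi-instance run is reduced to the single send/receive throughput numbers reported below.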
 
 
 
=== Benchmark Results ===
 
 
 
[[File:10gbit_throughput.png]]
 
 
 
=== Summary ===
 
 
 
* Parallels Virtuozzo Containers support near-native 10 Gbit network throughput: 9.70 Gbit/s in the receive test and 9.87 Gbit/s in the send test
* Parallels Virtuozzo Containers show the best network throughput of all the solutions tested
* In the receive test (physical client → VM/CT), Parallels Virtuozzo Containers show a large advantage over the hypervisors: about 2 times faster than ESXi 4.1 and about 5 times faster than XenServer 5.6
 

== External sources ==

* [http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.pdf HP Labs: Performance Evaluation of Virtualization Technologies for Server Consolidation]
* [http://www.ieee-jp.org/section/kansai/chapter/ces/1569177239.pdf A Comparative Study of Open Source Softwares for Virtualization with Streaming Server Applications] - proceedings of the 13th IEEE International Symposium on Consumer Electronics (ISCE 2009)
* [http://thesai.org/Downloads/Volume2No9/Paper%2020%20-%20The%20Performance%20between%20XEN-HVM,%20XEN-PV%20and%20Open-VZ%20during%20live-migration.pdf The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration]

''Latest revision as of 08:58, 28 September 2015.''