Performance

== Benchmark results ==
This page collects the available benchmark results to back up, with data rather than words alone, the claim that container technology outperforms hypervisors from a performance point of view.
{| class="wikitable"
|+ Benchmarks
|-
! Benchmark !! Description
|-
| [[/Response Time/]] || Microbenchmark demonstrating latency issues of interactive applications in virtualized and loaded systems (netperf RR in various conditions).
|-
| [[/Network Throughput/]] || Simple 10 Gbit network throughput comparison using the netperf test.
|-
| [[/LAMP/]] || Linux Apache+MySQL+PHP (LAMP) stack benchmark in multiple simultaneously running virtualization instances.
|-
| [[/vConsolidate-UP/]] || UP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
| [[/vConsolidate-SMP/]] || SMP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
| [[/Microbenchmarks/]] || Various microbenchmarks (context switch, system call, etc.) plus UnixBench results.
|}
 
  
== Response Time ==

=== Benchmark Description ===

The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:

* Idle system and idle VM/CT
* Busy system and idle VM/CT
* Busy system and busy VM/CT

The described benchmark case is common for many latency-sensitive real-life workloads, for example high performance computing, image processing and rendering, and web and database servers.

=== Implementation ===

To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU eater program (a busy loop) inside it. To emulate a busy system we run several busy VMs/CTs, enough to consume all the host CPU time. Netperf runs in server mode inside '''one''' VM/CT; from a separate physical host we run the netperf TCP_RR test against the selected VM/CT over the 1 Gbit network.
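
The exact CPU eater program is not published on this page; any tight busy loop has the same effect. Below is a minimal shell sketch (the file name busyloop.sh and the per-CPU fan-out are our assumptions) that spins one loop per online CPU inside the VM/CT:

<pre>
#!/bin/sh
# busyloop.sh - hypothetical stand-in for the CPU eater used in this benchmark.
# Starts one 100% CPU spin loop per online CPU and runs until killed.
NCPU=$(grep -c ^processor /proc/cpuinfo)
for i in $(seq 1 "$NCPU"); do
    while :; do :; done &
done
wait
</pre>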
=== Testbed Configuration ===

Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1 Gbit direct server<>client connection

Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU and 1 GB RAM
* '''six''' VMs/CTs (needed to load the server CPU - see the test cases above) configured with 4 vCPU and 1 GB RAM each
* netperf run strings (a client-side driver sketch follows this list):
** in the VM/CT: netperf -p 30300
** on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
* Firewall was turned off
* All other tunings were left at default values.
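
For reproduction, the client-side runs can be scripted. This is a minimal sketch that just repeats the published run string; the address and port match the configuration above, while the script name run_rr.sh and the repetition count are our assumptions:

<pre>
#!/bin/sh
# run_rr.sh - hypothetical driver repeating the client-side TCP_RR run string above.
HOST=172.0.1.1   # VM/CT under test, as configured above
PORT=30300       # netperf control port used in this setup
RUNS=3           # assumed repetition count, not documented in the original setup

for i in $(seq 1 "$RUNS"); do
    netperf -p "$PORT" -H "$HOST" -t TCP_RR -l 120 -- -r 128 -s 128
done
</pre>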
=== Benchmark Results ===

[[File:Response_time.png]]

'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) Virtuozzo Containers show the lowest overhead of all the tested virtualization solutions.'''

== External sources ==

* [http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.pdf HP Labs: Performance Evaluation of Virtualization Technologies for Server Consolidation]
* [http://www.ieee-jp.org/section/kansai/chapter/ces/1569177239.pdf A Comparative Study of Open Source Softwares for Virtualization with Streaming Server Applications] - proceedings of the 13th IEEE International Symposium on Consumer Electronics (ISCE2009)
* [http://thesai.org/Downloads/Volume2No9/Paper%2020%20-%20The%20Performance%20between%20XEN-HVM,%20XEN-PV%20and%20Open-VZ%20during%20live-migration.pdf The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration]
