
Performance

This page collects the benchmark results available and aims to demonstrate, with data rather than words alone, that container technology is superior to hypervisors from a performance point of view.

{| class="wikitable"
|+ Benchmarks
|-
|| '''Benchmark''' || '''Description'''
|-
|| [[/Response Time/]]
|| Microbenchmark demonstrating latency issues of interactive applications in virtualized and loaded systems (netperf RR in various conditions).
|-
|| [[/Network Throughput/]]
|| 10Gbit simple network throughput comparison using the netperf test.
|-
|| [[/LAMP/]]
|| Linux Apache+MySQL+PHP (LAMP) stack benchmark in multiple simultaneously running virtualization instances.
|-
|| [[/vConsolidate-UP/]]
|| UP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/vConsolidate-SMP/]]
|| SMP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/Microbenchmarks/]]
|| Various microbenchmarks (context switch, system call, etc.), plus Unixbench results.
|}

== Response Time Benchmark results ==
=== Benchmark Description ===
The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or operating system container (CT) can react to an external request under various conditions:
* Idle system and idle VM/CT
* Busy system and idle VM/CT
* Busy system and busy VM/CT
The described case is common for many latency-sensitive real-life applications, for example: high performance computing, image processing and rendering, web and database servers, and so on.
=== Implementation ===
To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU eater program (busyloop) inside it. To emulate a busy system we run several busy VMs/CTs (to eat all the host CPU time). netperf runs in server mode inside '''one''' VM/CT. From a separate physical host we run the netperf TCP_RR test against the selected VM/CT over the 1Gbit network.

=== Testbed Configuration ===
Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1Gbit direct server<>client connection

Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: Centos 5.5 x86_64

Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU, 1 GB RAM
* '''six''' VMs/CTs (needed to load the server CPU - see testcases), each configured with 4 vCPU, 1 GB RAM
* netperf run string:
** in the VM/CT: netperf -p 30300
** on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -r 128 -s 128
* Firewall was turned off
* All other tunings were left at default values

=== Benchmark Results ===
[[File:Response_time.png]]

'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) Virtuozzo Containers show the lowest overhead of all the tested virtualization solutions.'''

== External sources ==
* [http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.pdf HP Labs: Performance Evaluation of Virtualization Technologies for Server Consolidation]
* [http://www.ieee-jp.org/section/kansai/chapter/ces/1569177239.pdf A Comparative Study of Open Source Softwares for Virtualization with Streaming Server Applications] - conference proceedings of the 13th IEEE International Symposium on Consumer Electronics (ISCE2009)
* [http://thesai.org/Downloads/Volume2No9/Paper%2020%20-%20The%20Performance%20between%20XEN-HVM,%20XEN-PV%20and%20Open-VZ%20during%20live-migration.pdf The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration]
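The measurement flow described above can be sketched as a small shell script. The netperf invocations in the comments are the run strings from the testbed description; the helper names (<code>start_busyloop</code>, <code>rate_to_latency_us</code>) are illustrative, not part of the original harness, and the rate-to-latency conversion assumes the usual TCP_RR reading that each reported transaction is one request/response round trip:

```shell
#!/bin/sh
# Sketch of the response-time measurement. Assumes netperf v2.4.5 on
# both hosts; the address 172.0.1.1 and port 30300 are taken from the
# testbed description above.

# CPU eater ("busyloop") used to keep a VM/CT or the host busy:
# one spinning shell loop per vCPU, run in the background.
start_busyloop() {
    n=${1:-1}
    i=0
    while [ "$i" -lt "$n" ]; do
        while :; do :; done &
        i=$((i + 1))
    done
}

# Inside the benchmarked VM/CT (server mode):
#   netperf -p 30300
# On the client host (120 s TCP_RR run, 128-byte request/response):
#   netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -r 128 -s 128

# TCP_RR reports a transaction rate (transactions/second). With one
# outstanding transaction, mean round-trip latency in microseconds is
# simply 1e6 / rate.
rate_to_latency_us() {
    awk -v r="$1" 'BEGIN { printf "%.1f\n", 1000000 / r }'
}

rate_to_latency_us 5000    # prints 200.0
```

The conversion at the end is why TCP_RR works as a latency microbenchmark: because requests are issued one at a time, a drop in the reported transaction rate maps directly to an increase in mean response time.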
