Performance

This page collects the available benchmark results and aims to demonstrate, with data rather than just words, that container technology outperforms hypervisors from a performance point of view.

{| class="wikitable"
|+ Benchmarks
|-
|| '''Benchmark''' || '''Description'''
|-
|| [[/Response Time/]]
|| Microbenchmark demonstrating latency issues of interactive applications in virtualized and loaded systems (netperf RR in various conditions).
|-
|| [[/Network Throughput/]]
|| Simple 10Gbit network throughput comparison using the netperf test.
|-
|| [[/LAMP/]]
|| Linux Apache+MySQL+PHP (LAMP) stack benchmark in multiple simultaneously running virtualization instances.
|-
|| [[/vConsolidate-UP/]]
|| UP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/vConsolidate-SMP/]]
|| SMP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/Microbenchmarks/]]
|| Various microbenchmarks (context switch, system call, etc.), plus Unixbench results.
|}

== Response Time ==

=== Benchmark Description ===

The aim of this benchmark is to measure how quickly an application inside a virtual machine (VM) or operating system container (CT) can react to an external request under various conditions:
* idle system and idle VM/CT
* busy system and idle VM/CT
* busy system and busy VM/CT

The described benchmark case is common for many latency-sensitive real-life applications, for example high performance computing, image processing and rendering, web and database servers, and so on.

=== Implementation ===

To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU eater program (busyloop) inside it. To emulate a busy system we run several busy VMs/CTs (to consume all of the host CPU time). Netperf runs in server mode inside '''one''' VM/CT. From a separate physical host we run the netperf TCP_RR test against the selected VM/CT over a 1Gbit network.
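The busyloop program itself is not listed on this page; a minimal shell stand-in, assuming one spinning loop per vCPU is enough to keep the guest fully loaded, could look like this:

<pre>
# Hypothetical busyloop: spin one idle loop per online CPU so the
# VM/CT consumes all the CPU time it is allowed to use.
NCPU=$(getconf _NPROCESSORS_ONLN)
for i in $(seq 1 "$NCPU"); do
    while :; do :; done &
done
wait
</pre>

Any program that burns CPU without sleeping would serve the same purpose; the only requirement is that the busy VM/CT keeps all of its vCPUs at 100% load.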
=== Testbed Configuration ===

Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1Gbit direct server<>client connection

Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU, 1 GB RAM
* '''six''' VMs/CTs (needed to load the server CPU, see test cases), each configured with 4 vCPU, 1 GB RAM
* netperf run string:
** in the VM/CT: netserver -p 30300
** on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
* Firewall was turned off
* All other tunings were left at default values

=== Benchmark Results ===

[[File:Response_time.png]]

'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) Virtuozzo Containers show the lowest overhead of all the tested virtualization solutions.'''

== 10 Gbit Network Throughput ==

=== Benchmark Description ===

In this benchmark we measure throughput over a 10 Gbit network connection in two directions:
* from the VM/CT to the physical client
* from the physical client to the VM/CT

=== Implementation ===

To measure network throughput we use the standard performance test '''netperf'''. The host running the VM/CT and the physical client are interconnected directly (without switches, etc.).

=== Testbed Configuration ===

Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: 10Gbit direct server<>client optical connection

Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), PVC 4.7 (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU, 1 GB RAM
* where it was possible, we enabled offloading and hardware checksumming (gro, gso, etc.) and jumbo frames (MTU=9000); see the sketch after this list
* netperf run string:
** Server: netserver -p PORT (5 instances)
** Client: netperf -p PORT -H HOST -t TCP_SENDFILE -l 300 (several instances)
* Firewall was turned off
* All other tunings were left at default values
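The exact tuning and launch commands are not given on this page; the following shell sketch shows one way the setup above could be reproduced. The interface name eth1, the concrete port numbers, and the 172.0.1.1 address are illustrative placeholders, not values taken from the original testbed.

<pre>
# Assumed NIC tuning (eth1 stands in for the 10Gbit interface):
# enable offloads and hardware checksumming, and switch to jumbo frames.
ethtool -K eth1 gro on gso on tso on rx on tx on
ip link set dev eth1 mtu 9000

# One side runs 5 netserver instances on consecutive ports
# (netserver daemonizes itself, so the loop returns immediately).
for PORT in 30300 30301 30302 30303 30304; do
    netserver -p "$PORT"
done

# The other side runs one TCP_SENDFILE stream per netserver instance,
# all in parallel for 300 seconds; 172.0.1.1 stands in for HOST.
HOST=172.0.1.1
for PORT in 30300 30301 30302 30303 30304; do
    netperf -p "$PORT" -H "$HOST" -t TCP_SENDFILE -l 300 &
done
wait
</pre>

Presumably the per-instance netperf results are then summed to obtain the total throughput figures reported below.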
=== Benchmark Results ===

[[File:10gbit_throughput.png]]

=== Summary ===

* Parallels Virtuozzo Containers supports near-native 10Gbit network throughput: 9.70 Gbit in the receive test and 9.87 Gbit in the send test
* Parallels Virtuozzo Containers shows the best network throughput of all the solutions tested
* In the receive test (physical client -> VM/CT) Parallels Virtuozzo Containers shows a large advantage over the hypervisors: 2x faster than ESXi4.1 and 5x faster than XenServer5.6

== External sources ==

* [http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.pdf HP Labs: Performance Evaluation of Virtualization Technologies for Server Consolidation]
* [http://www.ieee-jp.org/section/kansai/chapter/ces/1569177239.pdf A Comparative Study of Open Source Softwares for Virtualization with Streaming Server Applications] - conference proceedings of the 13th IEEE International Symposium on Consumer Electronics (ISCE2009)
* [http://thesai.org/Downloads/Volume2No9/Paper%2020%20-%20The%20Performance%20between%20XEN-HVM,%20XEN-PV%20and%20Open-VZ%20during%20live-migration.pdf The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration]
