Performance

This page collects the available benchmark results. The goal is not only to claim that container technology is superior to hypervisors from a performance point of view, but also to provide data proving it.

{| class="wikitable"
|+ Benchmarks
|-
|| '''Benchmark''' || '''Description'''
|-
|| [[/Response Time/]] || Microbenchmark demonstrating latency issues of interactive applications in virtualized and loaded systems (netperf RR in various conditions).
|-
|| [[/Network Throughput/]] || Simple 10 Gbit network throughput comparison using the netperf test.
|-
|| [[/LAMP/]] || Linux+Apache+MySQL+PHP (LAMP) stack benchmark in multiple simultaneously running virtualization instances.
|-
|| [[/vConsolidate-UP/]] || UP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/vConsolidate-SMP/]] || SMP configuration of the Intel vConsolidate server consolidation benchmark (Java+Apache+MySQL workloads).
|-
|| [[/Microbenchmarks/]] || Various microbenchmarks: context switch, system call, etc. Plus Unixbench results.
|}

== Response Time ==
=== Benchmark Description ===
The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:
* Idle system and idle VM/CT
* Busy system and idle VM/CT
* Busy system and busy VM/CT
The described benchmark case is common for many latency-sensitive real-life applications, for example: high performance computing, image processing and rendering, web and database servers, and so on.

=== Implementation ===
To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU eater program (busy loop) inside it. To emulate a busy system we run several busy VMs/CTs (to consume all of the host CPU time). Netperf runs in server mode inside '''one''' VM/CT. On a separate physical host we run the netperf TCP_RR test against the selected VM/CT over the 1 Gbit network; a sketch of these pieces is shown below.
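For illustration, here is a minimal shell sketch of the moving parts, reusing the run strings from the testbed configuration below. The busy-loop eater and the 172.0.1.1 address are assumptions for the example, not the exact scripts used to produce these results.

<pre>
#!/bin/sh
# Sketch only - assumes netperf/netserver 2.4.5 and a VM/CT reachable at 172.0.1.1.

# 1. CPU eater ("busy loop") started inside each VM/CT that should be busy,
#    one instance per vCPU, e.g.:
#      for i in 1 2 3 4; do ( while :; do :; done ) & done

# 2. Inside the ONE VM/CT under test, start netperf in server mode:
#      netserver -p 30300

# 3. On the separate physical client, run a 120-second request/response test
#    with 128-byte requests and responses:
netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
</pre>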
=== Testbed Configuration ===
Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1 Gbit direct server<>client connection

Virtualization software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU, 1 GB RAM
* '''six''' VMs/CTs (needed to load the server CPU, see test cases), configured with 4 vCPU, 1 GB RAM
* netperf run string:
** in the VM/CT: netserver -p 30300
** on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
* Firewall was turned off
* All other tunings were left at default values

=== Benchmark Results ===
[[File:Response_time.png]]

'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) Virtuozzo Containers show the lowest overhead of all the tested virtualization solutions.'''

== 10 Gbit Network Throughput ==

=== Benchmark Description ===
In this benchmark we measure throughput over a 10 Gbit network connection in two directions:
* from the VM/CT to a physical client
* from the physical client to the VM/CT

=== Implementation ===
To measure network throughput we use the standard performance test '''netperf'''. The host running the VM/CT and the physical client are interconnected directly (without switches, etc.). Several netperf instances are run in parallel; see the run strings in the configuration and the sketch below.
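As a rough illustration of the parallel-stream setup: the five-stream count and the TCP_SENDFILE test come from the run strings below, while the specific ports and the 172.0.1.1 address are placeholders, not the exact scripts used.

<pre>
#!/bin/sh
# Sketch only: five parallel netperf streams against one VM/CT, one control port per pair.
HOST=172.0.1.1                      # address of the VM/CT (placeholder)
PORTS="30301 30302 30303 30304 30305"

# Inside the VM/CT: one netserver per control port
#   for p in $PORTS; do netserver -p $p; done

# On the physical client: five 300-second send-direction streams, run concurrently
for p in $PORTS; do
    netperf -p $p -H $HOST -t TCP_SENDFILE -l 300 &
done
wait
# Per-stream throughputs reported by netperf are summed to get the total;
# for the receive direction the roles of the client and the VM/CT are swapped.
</pre>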
=== Testbed Configuration ===
Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: 10 Gbit direct server<>client optical connection

Virtualization software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf, configured with 4 vCPU, 4 GB RAM
* where possible, we enabled offloading and hardware checksumming features (gro, gso, etc.) and jumbo frames (MTU=9000)
* netperf run string:
** Server: netserver -p PORT (5 instances)
** Client: netperf -p PORT -H HOST -t TCP_SENDFILE -l 300 (several instances)
* Firewall was turned off
* All other tunings were left at default values

=== Benchmark Results ===
[[File:10gbit_throughput.png]]

=== Summary ===
* OpenVZ supports near-native 10 Gbit network throughput: 9.70 Gbit in the receive test and 9.87 Gbit in the send test
* OpenVZ shows the best network throughput of all the solutions tested
* In the receive test (physical client -> VM/CT) OpenVZ shows a large advantage over hypervisors: 2x faster than ESXi4.1 and 5x faster than XenServer5.6

== LAMP Stack ==

=== Benchmark Description ===
The LAMP software stack (an acronym for Linux, Apache, MySQL, PHP) is widely used for building modern web sites. We measure not only performance (how many requests the server can deliver) but also the maximum response time, to understand the quality of service (QoS).

=== Implementation ===
To measure LAMP stack performance and density we use the DVD Store e-commerce benchmark developed by [http://linux.dell.com/dvdstore/ Dell], run against multiple simultaneously running VMs/CTs (see the sketch after the summary below).

=== Testbed Configuration ===
Server: 4xHexCore Intel Xeon (2.66 GHz), 64 GB RAM, HP MSA1500 SAN storage, 8 SATA (7200 RPM) disks in RAID0

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: Gbit direct server<>client connection

Virtualization software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and tunings:
* Each VM/CT was configured with 1 vCPU, 1 GB RAM
* A small database was deployed from the DVD Store samples
* DVD Store benchmark client run string: ds2webdriver.exe --target=172.0.1.VM --think_time=0.05 --n_threads=3 --warmup_time=10 --run_time=10 --db_size_str=S --n_line_items=1 --pct_newcustomers=1
* Firewall was turned off
* All other tunings were left at default values

=== Benchmark Results ===
[[File:lamp_performance.png]]

[[File:lamp_rt.png]]

=== Summary ===
* OpenVZ shows the best performance of the solutions tested: OpenVZ is 38% faster than XenServer and more than 2x faster than HyperV and ESXi
* OpenVZ shows the best response time of the solutions tested: OpenVZ has a 33% better response time than ESXi and a 2x better response time than XenServer and HyperV
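For illustration, a minimal sketch of how the client driver could be fanned out across instances, assuming one DVD Store web driver per VM/CT and per-instance addresses of the form 172.0.1.<N>; the loop and the instance count are assumptions for the example, not the exact procedure used for these results.

<pre>
#!/bin/sh
# Sketch only: one DVD Store web driver per VM/CT, all started in parallel.
# The addresses 172.0.1.<N> and the instance count (6) are placeholders.
for i in $(seq 1 6); do
    ds2webdriver.exe --target=172.0.1.$i --think_time=0.05 --n_threads=3 \
        --warmup_time=10 --run_time=10 --db_size_str=S \
        --n_line_items=1 --pct_newcustomers=1 &
done
wait
# Orders per minute and maximum response time are then taken from each driver's output.
</pre>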
== External sources ==
* [http://www.hpl.hp.com/techreports/2007/HPL-2007-59R1.pdf HP Labs: Performance Evaluation of Virtualization Technologies for Server Consolidation]
* [http://www.ieee-jp.org/section/kansai/chapter/ces/1569177239.pdf A Comparative Study of Open Source Softwares for Virtualization with Streaming Server Applications] - conference proceedings of the 13th IEEE International Symposium on Consumer Electronics (ISCE2009)
* [http://thesai.org/Downloads/Volume2No9/Paper%2020%20-%20The%20Performance%20between%20XEN-HVM,%20XEN-PV%20and%20Open-VZ%20during%20live-migration.pdf The Performance between XEN-HVM, XEN-PV and Open-VZ during live-migration]