Performance

Response Time

Benchmark Description

The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:

  • Idle system and idle VM/CT
  • Busy system and idle VM/CT
  • Busy system and busy VM/CT

The described benchmark case is common to many latency-sensitive real-life workloads, for example high-performance computing, image processing and rendering, and web and database servers.

Implementation

To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU-eater program (busyloop) inside it, and to emulate a busy system we run several such busy VMs/CTs (enough to consume all host CPU time). netperf runs in server mode inside one VM/CT; from a separate physical host we run the netperf TCP_RR test against the selected VM/CT over a 1 Gbit network.
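
The exact busyloop program is not published on this page; a minimal CPU eater could be as simple as the following shell sketch (an assumption for illustration, not the actual tool used), spawning one spinning loop per available CPU:

    # Hypothetical CPU eater: one infinite shell loop per CPU reported
    # by /proc/cpuinfo; each background loop pins one CPU at 100%.
    for i in $(seq 1 $(grep -c ^processor /proc/cpuinfo)); do
        ( while :; do :; done ) &
    done
    wait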

Testbed Configuration

Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1 Gbit direct server <-> client connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:

  • netperf v2.4.5
  • one VM/CT with netperf in server mode, configured with 1 vCPU and 1 GB RAM
  • six VMs/CTs (to load the server CPU; see test cases) configured with 4 vCPUs and 1 GB RAM
  • netperf commands (a combined run sketch follows this list)
    • in VM/CT: netserver -p 30300
    • on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
  • The firewall was turned off
  • All other tunables were left at their default values.
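
Putting the pieces together, one busy-system iteration might look like the following sketch (the busyloop helper is an assumption; only the netserver/netperf run strings come from the configuration above):

    # Inside the VM/CT under test: start the netperf server.
    netserver -p 30300

    # Inside each of the six 4-vCPU load VMs/CTs: saturate the CPUs
    # with the CPU-eater loop sketched earlier.

    # On the physical client: 120-second TCP_RR test with
    # 128-byte requests and responses.
    netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128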

Benchmark Results

[Figure: Response time v2.png]


In all three cases (idle system and idle VM/CT; busy system and idle VM/CT; busy system and busy VM/CT) OpenVZ shows the lowest overhead of all the tested virtualization solutions.


10 Gbit Network Throughput

Benchmark Description

In this benchmark we measure throughput over a 10 Gbit network connection in two directions:

  • from VM/CT to physical client
  • from physical client to VM/CT


Implementation

To measure network throughput we use the standard netperf performance test. The host running the VM/CT and the physical client are interconnected directly (without switches, etc.). Since netperf sends data from the netperf side to the netserver side, the two directions are measured by swapping the roles, as sketched below.
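
A sketch of the two directions (PORT, CLIENT_IP, and CT_IP are placeholders; the exact run strings are listed under Software and Tunings below):

    # Send test (VM/CT -> physical client):
    #   on the client:  netserver -p PORT
    #   in the VM/CT:   netperf -p PORT -H CLIENT_IP -t TCP_SENDFILE -l 300

    # Receive test (physical client -> VM/CT):
    #   in the VM/CT:   netserver -p PORT
    #   on the client:  netperf -p PORT -H CT_IP -t TCP_SENDFILE -l 300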

Testbed Configuration

Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: 10 Gbit direct server <-> client optical connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:

  • netperf v2.4.5
  • one VM/CT with netperf, configured with 4 vCPUs and 4 GB RAM
  • where possible, we enabled offloading and hardware checksumming features (GRO, GSO, etc.) and jumbo frames (MTU=9000); see the tuning sketch after this list
  • netperf run strings:
    • Server: netserver -p PORT (5 instances)
    • Client: netperf -p PORT -H HOST -t TCP_SENDFILE -l 300 (several instances)
  • The firewall was turned off
  • All other tunings were left at their default values
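
The offload and jumbo-frame settings above map to standard Linux commands, and the parallel run can be scripted; a minimal sketch (the interface name, port numbers, and target IP are assumptions):

    # NIC tuning (interface name assumed; applied on both hosts where supported).
    ethtool -K eth1 gro on gso on tso on rx on tx on   # offloads + hw checksumming
    ip link set dev eth1 mtu 9000                      # jumbo frames

    # Five parallel 300-second TCP_SENDFILE streams, one per netserver instance
    # (port numbers are assumptions).
    for port in 30301 30302 30303 30304 30305; do
        netperf -p $port -H 172.0.1.1 -t TCP_SENDFILE -l 300 &
    done
    wait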

Benchmark Results

[Figure: 10gbit throughput v2.png]

Summary

  • OpenVZ supports near-native 10 Gbit network throughput: 9.70 Gbit/s in the receive test and 9.87 Gbit/s in the send test
  • OpenVZ shows the best network throughput of all the solutions tested
  • In the receive test (physical client -> VM/CT) OpenVZ shows a large advantage over the hypervisors: 2x faster than ESXi 4.1 and 5x faster than XenServer 5.6


LAMP Stack

Benchmark Description

The LAMP (Linux, Apache, MySQL, PHP) software stack is widely used for building modern web sites. We measure not only performance (how many requests the server can deliver) but also maximum response time, to understand quality of service (QoS).

Implementation

To measure LAMP stack performance and density we use the DVD Store e-commerce benchmark developed by Dell.

Testbed Configuration

Server: 4xHexCore Intel Xeon (2.66 GHz), 64 GB RAM, HP MSA1500 SAN Storage, 8 SATA (7200 RPM) Disks in RAID0

Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card

Network: Gbit direct server <-> client connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:

  • Each VM/CT was configured with 1 vCPU and 1 GB RAM
  • A small database was deployed from the DVD Store samples
  • DVD Store benchmark client run string (a launcher sketch follows this list): ds2webdriver.exe --target=172.0.1.%VM% --think_time=0.05 --n_threads=3 --warmup_time=10 --run_time=10 --db_size_str=S --n_line_items=1 --pct_newcustomers=1
  • The firewall was turned off
  • All other tunings were left at their default values
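
The %VM% placeholder in the run string suggests one driver instance per VM/CT; a hypothetical launcher (the VM/CT count, host numbering, and use of a Unix-style shell loop are all assumptions):

    # Hypothetical: start one ds2webdriver per VM/CT, substituting the
    # last octet of the target address for each instance.
    for vm in 1 2 3 4 5 6; do
        ds2webdriver.exe --target=172.0.1.$vm --think_time=0.05 --n_threads=3 \
            --warmup_time=10 --run_time=10 --db_size_str=S \
            --n_line_items=1 --pct_newcustomers=1 &
    done
    wait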

Benchmark Results

[Figure: Lamp performance v2.png]


[Figure: Lamp rt v2.png]


Summary

  • OpenVZ shows the best performance of the solutions tested: it is 38% faster than XenServer and more than 2x faster than Hyper-V and ESXi
  • OpenVZ shows the best response time of the solutions tested: 33% better than ESXi and 2x better than XenServer and Hyper-V