
Response Time

Benchmark Description

The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:

  • Idle system and idle VM/CT
  • Busy system and idle VM/CT
  • Busy system and busy VM/CT

This benchmark case is representative of many latency-sensitive real-life workloads, for example high-performance computing, image processing and rendering, and web and database servers.

Implementation

To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU-eater program (busyloop) inside it. To emulate a busy system we run several busy VMs/CTs (enough to consume all of the host CPU time). Netperf runs in server mode inside one VM/CT. From a separate physical host we run the netperf TCP_RR test against the selected VM/CT over the 1 Gbit network.
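The CPU eater can be as simple as a spin loop. Below is a minimal sketch, assuming a plain shell busy loop is an acceptable stand-in for the busyloop program used in this benchmark (the actual program is not shown on this page); the busyloop.sh name and the vCPU-count argument are illustrative:

    #!/bin/sh
    # busyloop.sh: keep a VM/CT fully busy by spawning one infinite
    # loop per vCPU (vCPU count passed as $1, default 1).
    NCPU=${1:-1}
    i=0
    while [ "$i" -lt "$NCPU" ]; do
        ( while :; do :; done ) &   # pure CPU spin: no I/O, no sleeps
        i=$((i+1))
    done
    wait                            # run until the test ends and we are killed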

Testbed Configuration

Server: 4 x hex-core Intel Xeon (2.66 GHz), 32 GB RAM

Client: 4 x hex-core Intel Xeon (2.136 GHz), 32 GB RAM

Network: 1 Gbit direct server-to-client connection

Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V R2, OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64

Guest OS: CentOS 5.5 x86_64

Software and Tunings:

  • netperf v2.4.5
  • one VM/CT with netperf in server mode configured with 1 vCPU, 1 GB RAM
  • six VMs/CTs (to load the server CPU; see the test cases above) configured with 4 vCPUs, 1 GB RAM
  • netperf commands
    • in the VM/CT: netserver -p 30300
    • on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
  • firewall was turned off
  • all other tunables were left at default values
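Putting it together, one complete measurement looks roughly like the sequence below; the netserver/netperf invocations are the ones listed above, while the use of busyloop.sh (the hypothetical sketch from the Implementation section) and its placement are assumptions:

    # Inside the VM/CT under test: start netperf in server mode.
    netserver -p 30300

    # Inside each of the six load VMs/CTs (busy-system cases only):
    # keep all 4 vCPUs spinning with the busy-loop sketch above.
    ./busyloop.sh 4 &

    # For the busy-VM/CT case, also run the CPU eater inside the
    # VM/CT under test.

    # On the client host: 120 seconds of synchronous 128-byte
    # request/response transactions against the VM/CT under test.
    netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128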

Benchmark Results

[Chart: netperf TCP_RR response time for ESXi, XenServer, Hyper-V, and OpenVZ in the three test cases; image not reproduced here]

In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) OpenVZ shows the lowest overhead of all the tested virtualization solutions.
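As a reading aid: netperf TCP_RR reports a transaction rate, and since the test keeps a single transaction in flight by default, the average round-trip response time is simply the reciprocal of that rate. A minimal sketch of the conversion, with a made-up rate value:

    # Convert a TCP_RR transaction rate (transactions/s) into the
    # average round-trip response time; the rate shown is hypothetical.
    rate=9500
    awk -v r="$rate" 'BEGIN { printf "%.1f us per transaction\n", 1e6 / r }'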