Response Time

=== Benchmark Description ===

The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:

* Idle system and idle VM/CT
* Loaded system and idle VM/CT
* Loaded system and loaded VM/CT

This workload simulates a latency-sensitive real-life application which receives an event over the network and has to respond. Such workloads are very sensitive to the CPU scheduling algorithm and perform best when the scheduler takes interactivity into account. Examples of such workloads: high performance computing environments (which exchange small synchronization messages), web and database servers, and so on.

=== Implementation ===

To measure response time we use the well-known netperf TCP_RR test, which measures how many round trips a message can make between two hosts. To load a VM/CT, a simple CPU hog application (doing a busyloop and nothing else) is started inside it. To emulate a loaded system, we run several loaded VMs/CTs (enough to eat all the host CPU time). Netperf runs in server mode inside '''one''' VM/CT. From a separate physical host we run the netperf TCP_RR test against the selected VM/CT over a 1 Gbit network.
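
The CPU hog itself is trivial. As a minimal sketch, assuming a POSIX shell inside the VM/CT (this helper is our illustration, not the original benchmark tool), one busy loop can be started per vCPU:

<pre>
# Illustrative CPU hog, not the original tool: start one busy loop
# per vCPU so the VM/CT consumes all the CPU time it is given.
NCPU=$(getconf _NPROCESSORS_ONLN)   # vCPUs visible inside the VM/CT
i=0
while [ "$i" -lt "$NCPU" ]; do
    ( while :; do :; done ) &       # pure busyloop, nothing else
    i=$((i + 1))
done
wait                                # keep running until killed
</pre>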

=== Testbed Configuration ===

Hardware:

* Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM
* Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM
* Network: 1 Gbit direct server<>client connection

Platform:

* Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64
* Guest OS: CentOS 5.5 x86_64

Software and Tunings:

* netperf v2.4.5
* one VM/CT with netperf in server mode, configured with 1 vCPU, 1 GB RAM
* six VMs/CTs (to load the server CPU - see test cases) configured with 4 vCPU, 1 GB RAM
* netperf commands (combined into a runnable sketch after this list):
** in the VM/CT: netserver -p 30300
** on the client: netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
* firewall was turned off
* all other tunables were left at default values
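
For reference, the commands above can be combined into one measurement pass per test case. The following is only a sketch, run from the client host; it assumes 172.0.1.1 is the VM/CT under test (as configured above) and that the busy loops for the loaded cases are started by hand inside the relevant VMs/CTs:

<pre>
#!/bin/sh
# One measurement pass, run on the client host.
#
# Beforehand, inside the VM/CT under test (once):
#     netserver -p 30300
#
# For the "loaded" cases, inside each of the six load VMs/CTs
# (and, for case 3, inside the VM/CT under test as well), start
# one busyloop per vCPU as sketched in the Implementation section.

# 120-second TCP_RR run, 128-byte request and 128-byte response.
netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
</pre>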

=== Benchmark Results ===

[[File:Response_time_v2.png]]
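
A note on reading the chart: netperf's TCP_RR test reports a transaction rate (round trips per second), so the mean round-trip latency is the reciprocal of that rate. A small sketch of the conversion, using a made-up rate rather than a measured result:

<pre>
# Convert a TCP_RR transaction rate into mean round-trip latency.
# The rate below is an example value, not a result from this benchmark.
rate=9500    # transactions (round trips) per second
awk -v r="$rate" 'BEGIN { printf "%.1f microseconds per round trip\n", 1e6 / r }'
# -> 105.3 microseconds per round trip
</pre>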

=== Summary ===

* '''In all three cases (1 - idle system and idle VM/CT, 2 - loaded system and idle VM/CT, 3 - loaded system and loaded VM/CT) OpenVZ shows the lowest overhead of all the tested virtualization solutions and demonstrates latencies close to the native results of non-virtualized systems.'''
* '''VM virtualization solutions (like Xen and ESX) demonstrate latencies an order of magnitude worse than native response times, introducing ~700-3000% overhead.'''