Response Time
=== Benchmark Description ===
The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or an operating system container (CT) can react to an external request under various conditions:
* Idle system and idle VM/CT
* Busy system and idle VM/CT
* Busy system and busy VM/CT
This benchmark case is typical of many latency-sensitive real-life workloads, such as high performance computing, image processing and rendering, and web and database servers.
=== Implementation ===
To measure response time we use the well-known netperf TCP_RR test. To emulate a busy VM/CT we run a CPU eater program (busyloop) inside it. To emulate a busy system we run several busy VMs/CTs (to consume all the host CPU time). Netperf runs in server mode inside '''one''' VM/CT. From a separate physical host we run the netperf TCP_RR test against the selected VM/CT over a 1 Gbit network.
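The busyloop tool itself is not published on this page, so the following is only a minimal sketch of such a CPU eater (an assumption, not the exact program used): it spawns one spinning shell loop per online CPU for a given number of seconds.

```shell
#!/bin/sh
# Sketch of a "busyloop" CPU eater (an assumption; the exact tool used
# for the benchmark is not published on this page). Spawns one spinning
# loop per online CPU for the requested number of seconds.
busyloop() {
    duration=$1
    ncpu=$(getconf _NPROCESSORS_ONLN)
    i=0
    while [ "$i" -lt "$ncpu" ]; do
        (
            # spin until the deadline passes
            end=$(( $(date +%s) + duration ))
            while [ "$(date +%s)" -lt "$end" ]; do :; done
        ) &
        i=$((i + 1))
    done
    wait
    echo "busyloop: kept $ncpu CPUs busy for $duration seconds"
}

busyloop 1   # 1 second here; use e.g. 120 to match the netperf run length
```

Inside a "busy" VM/CT such a loop keeps every vCPU saturated for the duration of the netperf run.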
=== Testbed Configuration ===
Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM
Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM
Network: 1 Gbit direct server-to-client connection
Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 fp1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64
Guest OS: CentOS 5.5 x86_64
Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf in server mode, configured with 1 vCPU and 1 GB RAM
* '''six''' VMs/CTs (to load the server CPU - see test cases) configured with 4 vCPUs and 1 GB RAM
* netperf command
** in VM/CT: <code>netserver -p 30300</code>
** on the client: <code>netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128</code>
* Firewall was turned off
* All other tunables were left at default values.
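netperf's TCP_RR output reports a transaction rate (request/response round trips per second); since each transaction is one round trip, the mean response time is its reciprocal. A small conversion sketch (the rate value below is illustrative only, not a result from this benchmark):

```shell
# Convert a TCP_RR transaction rate (transactions/sec) into the mean
# round-trip response time in microseconds: time_us = 1e6 / rate.
rate=9000   # illustrative value only, not a measured result
awk -v r="$rate" 'BEGIN { printf "%.1f us mean response time\n", 1e6 / r }'
# prints: 111.1 us mean response time
```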
=== Benchmark Results ===
[[File:Response_time_v2.png]]
'''In all three cases (idle system and idle VM/CT, busy system and idle VM/CT, busy system and busy VM/CT) OpenVZ shows the lowest overhead of all the tested virtualization solutions.'''