Performance/Response Time
Latest revision as of 15:46, 21 April 2011
Response Time
Benchmark Description
The aim of this benchmark is to measure how fast an application inside a virtual machine (VM) or operating system container (CT) can react to an external request under various conditions:
- Idle system and idle VM/CT
- Loaded system and idle VM/CT
- Loaded system and loaded VM/CT
This workload simulates a latency-sensitive real-life application which receives an event over the network and has to respond. Such workloads are very sensitive to CPU scheduling algorithms and perform best when interactivity is taken into account. Examples of such workloads: high-performance computing environments (which exchange small synchronization messages), web and database servers, and so on.
Implementation
To measure response time, the well-known netperf TCP_RR test is used, which measures how many round trips a message can make between two hosts. To add load inside a VM/CT, a simple CPU hog application (doing a busy loop and nothing else) is started. To emulate a loaded system, several loaded VMs/CTs are run (enough to consume all of the host CPU time). Netperf runs in server mode inside one of the VMs/CTs. From a separate physical host, the netperf TCP_RR test is run against the selected VM/CT over the 1 Gbit network.
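The CPU hog mentioned above does not need to be anything special; a minimal sketch, assuming any process that spins at 100% on one core is sufficient for this benchmark, could look like this:

```shell
# Minimal CPU hog: a busy loop and nothing else.
# (Hypothetical helper; the original benchmark does not specify its hog's code,
# only that it busy-loops.)
hog() { while :; do :; done; }

# To load an N-vCPU guest, start N hogs in the background, e.g. for 4 vCPUs:
#   for i in 1 2 3 4; do hog & done
```

One hog per vCPU keeps every virtual CPU of the load VMs/CTs busy, which is what the "loaded VM/CT" test cases require.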
Testbed Configuration
Hardware
- Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM
- Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM
- Network: 1 Gbit direct server-to-client connection
Platform:
- Virtualization Software: ESXi 4.1 Update 1, XenServer 5.6 FP1, Hyper-V (R2), OpenVZ (RHEL6) 2.6.32-042test006.1.x86_64
- Guest OS: CentOS 5.5 x86_64
Software and Tunings:
- netperf v2.4.5
- one VM/CT with netperf in server mode configured with 1 vCPU, 1 GB RAM
- six VMs/CTs (to load the server CPU; see test cases) configured with 4 vCPU, 1 GB RAM
- netperf command
- in VM/CT:
netserver -p 30300
- on the client:
netperf -p 30300 -H 172.0.1.1 -t TCP_RR -l 120 -- -r 128 -s 128
- Firewall was turned off
- All other tunables were left at default values.
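Netperf's TCP_RR test reports a transaction rate rather than a latency. With the single outstanding request/response pair used here, the average round-trip latency follows directly as 1/rate; a small sketch (the rate below is an illustrative value, not a measured result):

```shell
# Convert a TCP_RR transaction rate into an average round-trip latency.
# Assumes one outstanding transaction at a time, so latency = 1 / rate.
rate=9000   # transactions per second (example value, not a measurement)
latency_us=$(awk -v r="$rate" 'BEGIN { printf "%.1f", 1e6 / r }')
echo "${latency_us} us per round trip"
```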
Benchmark Results
Summary
- In all three cases (1 - idle system and idle VM/CT, 2 - loaded system and idle VM/CT, 3 - loaded system and loaded VM/CT) OpenVZ shows the lowest overhead of all the tested virtualization solutions and demonstrates latencies close to the native results of non-virtualized systems.
- VM virtualization solutions (like Xen and ESX) demonstrate latencies an order of magnitude worse than native response times and introduce ~700-3000% overhead.
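The overhead percentages above compare virtualized latency against native latency. A short sketch of how such a figure is derived (the latencies below are illustrative placeholders, not the measured results):

```shell
# Overhead relative to native: (virtualized / native - 1) * 100.
# Example values only; see the result tables for the actual measurements.
native_us=60
virt_us=480
overhead=$(awk -v n="$native_us" -v v="$virt_us" \
    'BEGIN { printf "%.0f", (v / n - 1) * 100 }')
echo "${overhead}% overhead"
```

With these example numbers, an 8x latency increase corresponds to 700% overhead, the low end of the range quoted in the summary.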