Performance/Network Throughput

=== Benchmark Description ===
In this benchmark we measure the throughput over a 10 Gbit network connection that can be achieved in virtualized environments, testing two traffic flow directions:
* from VM/CT to physical client
* from physical client to VM/CT
 
=== Implementation ===
To measure network throughput, the standard performance test '''netperf''' is used. The host running the VM/CT and the physical client are interconnected directly using a crossover cable (without switches or other sources of extra traffic).
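A minimal sketch of how both directions can be measured with netperf (the IP address and test length are illustrative assumptions, not the exact invocation used in this benchmark):

<pre>
# On the VM/CT: start the netperf server daemon
netserver

# On the physical client: receive test (data flows client -> VM/CT)
netperf -H 10.0.0.2 -t TCP_STREAM -l 60

# On the physical client: send test (data flows VM/CT -> client)
netperf -H 10.0.0.2 -t TCP_MAERTS -l 60
</pre>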
=== Testbed Configuration ===
Hardware:
* Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
* Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
* Network: 10 Gbit direct server <-> client optical connection

Software and Tunings:
* Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 fp1, Hyper-V (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64
* Guest OS: CentOS 5.5 x86_64
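Before running the tests it is worth confirming that the link actually negotiated 10 Gbit/s. A hedged sketch using ethtool (the interface name eth2 is an assumption for this testbed):

<pre>
# Check negotiated link speed and duplex on the 10G interface
ethtool eth2 | grep -E 'Speed|Duplex'
</pre>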
=== Summary ===
* OpenVZ provides near-native 10 Gbit network throughput: 9.70 Gbit/s in the receive test and 9.87 Gbit/s in the send test
* OpenVZ is the only virtualization solution tested capable of achieving native 10 Gbit throughput in both scenarios
* In the receive test (physical client -> VM/CT), OpenVZ shows more than 2x greater performance compared to hypervisors: 2 times faster than ESXi 4.1 and 5 times faster than XenServer 5.6

=== TODO ===
* Record and provide CPU usage details to show the additional benefit of low CPU usage in this test case (a possible approach is sketched below)
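One possible way to capture the CPU usage data mentioned in the TODO is to sample utilization with sar (from the sysstat package) alongside the netperf run; this is only a sketch under assumed parameters (IP address and duration), not the procedure used here:

<pre>
# Sample overall CPU utilization once per second for the 60 s test window
sar -u 1 60 > cpu-usage.log &

# Run the throughput test for the same 60 s
netperf -H 10.0.0.2 -t TCP_STREAM -l 60
</pre>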