=== Benchmark Description ===
In this benchmark we measure throughput over a 10 Gbit network connection in two directions:
* from VM/CT to physical client
* from physical client to VM/CT
=== Implementation ===
To measure network throughput we use the standard performance test '''netperf'''. The host running the VM/CT and the physical client are connected directly (no switches, etc. in between).
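For readers unfamiliar with netperf, a minimal single-stream run looks roughly like this (the address 10.0.0.1 is a placeholder for the host under test, not a value from this setup):
<pre>
# On the host under test (VM/CT): start the netperf server
netserver

# On the physical client: run one 300-second bulk-transfer stream against it
netperf -H 10.0.0.1 -t TCP_STREAM -l 300
</pre>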
=== Testbed Configuration ===
Server: 4x HexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
Client: 4x HexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
Network: 10 Gbit direct server-to-client optical connection
Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 fp1, Hyper-V (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64
Guest OS: CentOS 5.5 x86_64
Software and Tunings:
* netperf v2.4.5
* '''one''' VM/CT with netperf configured with 4 vCPU, 4 GB RAM
* where possible, we enabled offloading and hardware checksumming (GRO, GSO, etc.) and jumbo frames (MTU=9000)
* netperf run string (see the shell sketch after this list):
** Server: netserver -p PORT (5 instances)
** Client: netperf -p PORT -H HOST -t TCP_SENDFILE -l 300 (several instances)
* Firewall was turned off
* All other tunings were left at default values
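Putting the list above together, a shell sketch of how the tunings and the parallel run could be reproduced is shown below. The interface name eth0, the base port 12865, and the server address 10.0.0.1 are placeholders, not values taken from this testbed:
<pre>
# Assumed example values: adjust interface, ports and address for your setup
IFACE=eth0
SERVER=10.0.0.1
BASE_PORT=12865

# Offloading / hardware checksumming and jumbo frames (where supported)
ethtool -K $IFACE gro on gso on tso on rx on tx on
ip link set dev $IFACE mtu 9000

# Server side: 5 netserver instances, one per port
for i in 0 1 2 3 4; do
    netserver -p $((BASE_PORT + i))
done

# Client side: one 300-second TCP_SENDFILE stream per port, run in parallel
for i in 0 1 2 3 4; do
    netperf -p $((BASE_PORT + i)) -H $SERVER -t TCP_SENDFILE -l 300 &
done
wait
</pre>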
=== Benchmark Results ===
[[File:10gbit_throughput_v2.png]]
=== Summary ===
*OpenVZ delivers near-native 10 Gbit network throughput: 9.70 Gbit/s in the receive test and 9.87 Gbit/s in the send test
*OpenVZ shows the best network throughput of all the solutions tested
*In the receive test (physical client -> VM/CT), OpenVZ has a large advantage over the hypervisors: about 2x faster than ESXi 4.1 and 5x faster than XenServer 5.6