Performance/Network Throughput

From OpenVZ Virtuozzo Containers Wiki
Latest revision as of 16:09, 21 April 2011

Benchmark Description

This benchmark measures the throughput achievable over a 10 Gbit network connection in a virtualized environment, testing two traffic directions:

  • from VM/CT to physical client
  • from physical client to VM/CT

Implementation

Network throughput is measured with the standard '''netperf''' benchmark. The host running the VM/CT and the physical client are connected directly with a crossover cable, so no switches or other traffic can affect the results.
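Both traffic directions can be driven from the physical client: netperf's TCP_STREAM test pushes data toward the netserver (physical client → VM/CT, the receive case), while TCP_MAERTS reverses the flow (VM/CT → physical client, the send case). A minimal dry-run sketch; the VM/CT address 192.0.2.10 is an assumption, not taken from this page:

```shell
#!/bin/sh
# Dry-run sketch: print the netperf invocations for both directions.
# Remove the `echo` to actually run them on a real testbed.
SERVER=192.0.2.10   # assumed VM/CT address (not from the original page)

# Receive test: data flows physical client -> VM/CT (netserver side).
echo "netperf -H $SERVER -t TCP_STREAM -l 300"

# Send test: TCP_MAERTS reverses the flow, VM/CT -> physical client.
echo "netperf -H $SERVER -t TCP_MAERTS -l 300"
```

The 300-second duration matches the `-l 300` run string given in the testbed configuration below.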

Testbed Configuration

Hardware:

  • Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
  • Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
  • Network: 10Gbit direct server <-> client optical connection

Platform:

  • Virtualization Software: ESXi 4.1 upd1, XenServer 5.6 FP1, Hyper-V (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64
  • Guest OS: CentOS 5.5 x86_64

Software and Tunings:

  • netperf v2.4.5
  • one VM/CT with netperf configured with 4 vCPU, 4 GB RAM
  • where possible, we enabled offloading and hardware checksumming features (GRO, GSO, etc.) and jumbo frames (MTU=9000)
  • netperf run string:
    • Server: netserver -p PORT (5 instances)
    • Client: netperf -p PORT -H HOST -t TCP_SENDFILE -l 300 (several instances)
  • Firewall was turned off
  • All other tunings were left at default values
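Putting the tunings and run strings above together, a full run might be sketched as follows. The interface name eth0, the base port 12865, and the VM/CT address are assumptions; the page only specifies 5 netserver instances, the TCP_SENDFILE test, and the 300-second duration. Commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Dry-run sketch of the full benchmark (commands printed, not executed).
# eth0, 12865, and 192.0.2.10 are assumed values, not from the page.
IFACE=eth0
SERVER=192.0.2.10   # assumed VM/CT address
BASE_PORT=12865     # assumed base port (netperf's default control port)

# Tunings: offloading/hardware checksumming and jumbo frames, where the
# NIC and virtualization layer support them.
echo "ethtool -K $IFACE gro on gso on tso on"
echo "ip link set dev $IFACE mtu 9000"

# Server side (inside the VM/CT): 5 netserver instances, distinct ports.
for i in 0 1 2 3 4; do
    echo "netserver -p $((BASE_PORT + i))"
done

# Client side: one 300 s TCP_SENDFILE stream per netserver instance.
for i in 0 1 2 3 4; do
    echo "netperf -p $((BASE_PORT + i)) -H $SERVER -t TCP_SENDFILE -l 300 &"
done
```

To reproduce a real run, drop the `echo` wrappers and run the tuning and netserver lines on the server, the netperf lines on the client.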

Benchmark Results

(Figure: 10 Gbit throughput results for all tested solutions — 10gbit throughput v2.png)

Summary

  • OpenVZ provides near-native 10 Gbit network throughput: 9.70 Gbit on receive and 9.87 Gbit on send
  • OpenVZ is the only virtualization solution tested that achieves native 10 Gbit throughput in both scenarios
  • OpenVZ demonstrates more than 2x the receive performance of the hypervisors: 2x faster than ESXi 4.1 and 5x faster than XenServer 5.6

TODO

  • Record and provide CPU usage details to document OpenVZ's additionally low CPU overhead in this test case