Performance/Network Throughput

=== Benchmark Description ===
 
This benchmark measures the throughput that can be achieved over a 10 Gbit network connection in a virtualized environment, testing two traffic flow directions:

* from VM/CT to physical client
* from physical client to VM/CT

=== Implementation ===
 
To measure network throughput, the standard performance test '''netperf''' is used. The host with the VM/CT and the physical client are connected directly using a cross cable (no switches, so no interference from other traffic).
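
A minimal netperf invocation covering both traffic directions might look like the sketch below; the IP address and test length are illustrative assumptions, not the exact parameters used in this benchmark.

 # start the netperf server inside the VM/CT (the side being measured)
 netserver
 
 # on the physical client: physical client -> VM/CT
 # (TCP_STREAM sends data to the remote netserver)
 netperf -H 192.168.0.2 -t TCP_STREAM -l 60
 
 # on the physical client: VM/CT -> physical client
 # (TCP_MAERTS pulls data from the remote netserver)
 netperf -H 192.168.0.2 -t TCP_MAERTS -l 60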
  
 
=== Testbed Configuration ===
 
Hardware:
* Server: 4xHexCore Intel Xeon (2.66 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
* Client: 4xHexCore Intel Xeon (2.136 GHz), 32 GB RAM, Intel 82598EB 10-Gigabit network card
* Network: 10Gbit direct server <-> client optical connection

Platform:
* Virtualization Software: ESXi4.1upd1, XenServer5.6fp1, HyperV (R2), OpenVZ (RH6) 2.6.32-042test006.1.x86_64
* Guest OS: CentOS 5.5 x86_64
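
To double-check that the link actually negotiates 10 Gbit/s, ethtool can be run against the 10-Gigabit interface on both the server and the client; the interface name eth0 below is an assumption and may differ on your system.

 # report the negotiated link speed of the 10-Gigabit interface (assumed to be eth0)
 ethtool eth0 | grep Speed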
  
 
Software and Tunings:  
 
 
=== Summary ===
 
* OpenVZ provides near-native 10Gbit network throughput: 9.70 Gbit on receive and 9.87 Gbit on send
* OpenVZ is the only virtualization solution tested that is capable of achieving near-native 10Gbit throughput in both scenarios
* On receive (physical client -> VM/CT), OpenVZ demonstrates more than 2x the performance of the hypervisors: about 2x faster than ESXi4.1 and about 5x faster than XenServer5.6
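
For reference, 9.70 Gbit/s is 97% and 9.87 Gbit/s is roughly 99% of the nominal 10 Gbit/s link rate, which is why these results are described as near native.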
 
 
=== TODO ===
 
* record and provide CPU usage details to show that CPU usage is also low in this test case
 
