
OpenVZ Virtuozzo Containers Wiki β


Performance tuning

2,736 bytes added, 12:29, 3 August 2007
This page describes how to do correct performance measurements on an OpenVZ system.

= Test conditions =

== kernels with different versions ==
* If you want to compare kernel performance on different hosts, or measure OpenVZ performance overhead, it is strongly recommended to compare the same kernel version with a similar .config file and, ideally, the same Linux distribution.

* If you compare kernels with different versions, please check all .config options that differ, especially _DEBUG_ options. For example, on the unixbench pipe throughput test on a 2.6.18 kernel, disabling the CONFIG_DEBUG_HIGHMEM option increases performance by up to <font color=red>20%</font>.
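For example, differing debug options can be spotted by diffing the two <code>.config</code> files. This is only a sketch: the config contents below are small illustrative samples, not real kernel configs.

```shell
# Create two sample kernel config fragments (illustrative only).
cat > config-a <<'EOF'
CONFIG_SMP=y
CONFIG_DEBUG_HIGHMEM=y
CONFIG_DEBUG_SPINLOCK=y
EOF
cat > config-b <<'EOF'
CONFIG_SMP=y
# CONFIG_DEBUG_HIGHMEM is not set
CONFIG_DEBUG_SPINLOCK=y
EOF
# Show only the differing lines, then filter for DEBUG-related options.
diff config-a config-b | grep DEBUG
```

With real configs, any DEBUG option that appears in this output should be reconciled (or at least reported) before comparing benchmark numbers.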

== running services ==
* Before taking test measurements, you should disable '''all''' default services in your runlevel and then '''reboot''' the host.

It is not enough to just stop the services: some of them, like <code>auditd</code>, affect performance even after the daemon is stopped. On some unixbench tests the overhead can be <font color=red>~20%</font> if <code>auditd</code> was started even once since the host booted.

On RedHat distributions, use the <code>chkconfig</code> or <code>ntsysv</code> utility to disable default services (<code>rc-update</code> on Gentoo, <code>update-rc.d</code> on Debian).
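As a sketch for a RedHat-style host, you could first print the <code>chkconfig</code> commands as a dry run before executing them. The service names below are only examples; check <code>chkconfig --list</code> for what actually runs on your host.

```shell
# Dry run: print the commands that would disable each service.
# Remove the `echo` to actually disable them, then reboot.
for svc in auditd cups sendmail; do
    echo chkconfig "$svc" off
done
```

The dry run lets you review the list before touching the host's service configuration.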

== filesystem tests ==
* If you perform filesystem tests, please keep in mind the filesystem type, block size, mount options, and so on.

For example, ext3 filesystem performance depends heavily on the journal type and mount options.
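For instance, ext3's journal mode is selected with the <code>data=</code> mount option. A hypothetical <code>/etc/fstab</code> line for a benchmark filesystem (the device and mountpoint are placeholders) might look like:

```
# ext3 with writeback journaling and noatime; data=ordered is the default.
# data=writeback trades integrity guarantees for speed, so use it only
# for benchmarking, and always report it with your results.
/dev/hda1  /mnt/test  ext3  defaults,noatime,data=writeback  0 0
```

Whatever options you choose, record them alongside the benchmark numbers so results remain comparable.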

* Also, please always note/report the IO-scheduler type. Different IO-schedulers can affect your test results significantly (by up to <font color=red>30%</font>).

If your kernel supports multiple IO-schedulers, you can get and set the current one like this:

 # cat /sys/block/hda/queue/scheduler
 noop anticipatory deadline [cfq]
 # echo noop > /sys/block/hda/queue/scheduler

== network isolation ==
* You should disable the local network/internet connection unless your tests require it.

== CPU distribution inside VE on SMP hosts ==
* If there are more VEs on your host than CPUs, and many tasks/tests are running inside each VE and are scheduled frequently, it is better to give each VE just one CPU. In this case the virtual-CPU scheduler overhead can be significantly decreased, and performance can increase by up to <font color=red>100%</font>!

To set the number of CPUs available inside a VE, use:

 # vzctl set $VEID --cpus N
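As a sketch, the loop below prints the <code>vzctl</code> commands that would pin each VE to a single CPU. The VE IDs are placeholders (on a real host the list could come from <code>vzlist</code>); the <code>echo</code> makes it a dry run.

```shell
# Dry run: print the commands that would give each VE one CPU.
# Remove the `echo` to apply; --save persists the setting in the VE config.
for veid in 101 102 103; do
    echo vzctl set "$veid" --cpus 1 --save
done
```

Reviewing the printed commands first avoids accidentally reconfiguring a VE you meant to leave alone.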

== network performance ==
* Please do not use file-transfer utilities to test network performance: the bottleneck in such tests is usually filesystem performance, not the TCP/IP stack.

== network checksumming ==
'''TODO'''