Performance tuning

This page describes how to do correct performance measurements on OpenVZ system.

Test conditions

kernels with different versions

  • If you want to compare kernel performance across different hosts, or to measure the OpenVZ performance overhead, it is strongly recommended to compare the same kernel version, built with a similar .config file, and ideally on the same Linux distribution.
  • If you compare kernels of different versions, check all .config options that differ, especially the _DEBUG_ options. For example, on the unixbench pipe throughput test on a 2.6.18 kernel, disabling the CONFIG_DEBUG_HIGHMEM option can increase performance by up to 20% (see the example below).
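
A quick way to spot such differences is to diff the build configurations of the two kernels. A minimal sketch, assuming both config files are available under /boot (the exact file names here are hypothetical):

# diff /boot/config-2.6.18-a /boot/config-2.6.18-b | grep DEBUG
# grep CONFIG_DEBUG_HIGHMEM /boot/config-2.6.18-a /boot/config-2.6.18-b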

running services

  • Before taking test measurements, disable all non-essential services in your runlevel and then reboot the host.

It is not enough to just stop the services: some of them, such as audit, affect performance even after the daemon has been stopped. On some unixbench tests the overhead can be ~20% if auditd was started even once on the host since the last reboot.

On Red Hat distributions, use the chkconfig or ntsysv utility to disable default services (rc-update on Gentoo, update-rc.d on Debian); see the example below.
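
For example, on a Red Hat based host you could list the services enabled in runlevel 3 and switch off the ones the test does not need; auditd is used here only as an illustration:

# chkconfig --list | grep 3:on
# chkconfig auditd off
# reboot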

filesystem tests

  • If you perform filesystem tests, always note the filesystem type, block size, mount options, and so on.

For example, ext3 filesystem performance depends heavily on the journaling mode and mount options; see the example below.
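
As an illustration, ext3 supports three journaling modes (ordered, writeback, and journal), selected via the data= mount option, and they can change write performance noticeably. The device and mount point below are hypothetical:

# mount -t ext3 -o data=writeback /dev/hda1 /mnt/test
# mount | grep /mnt/test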

  • Also, always note/report the I/O scheduler type. Different I/O schedulers can strongly affect your test results (by up to 30%).

If your kernel supports several I/O schedulers, you can get and set the active one like this:

# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]
# echo noop > /sys/block/hda/queue/scheduler
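
The scheduler shown in brackets (cfq in the example above) is the currently active one; writing another name into the same file switches the scheduler at runtime, without a reboot. Note that the setting is per block device.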

network isolation

  • Disable the local network/Internet connection if your tests don't require it; see the example below.
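
On a Red Hat style system, one way to do this for the duration of a test is to take the interface down or stop the network service entirely; the interface name eth0 is an assumption:

# ifconfig eth0 down
# service network stop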

CPU distribution inside VE on SMP hosts

  • If the number of VEs on your host is greater than the number of CPUs, and many tasks/tests are running inside each VE and are scheduled quite often, it is better to give each VE just one CPU. In this case the virtual CPU scheduler overhead decreases significantly, and performance can increase by up to 100%!

To set the number of CPUs available inside a VE, use:

# vzctl set $VEID --cpus N
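
For example, to limit container 101 to a single CPU and keep the setting in the VE's config file (the container ID here is hypothetical):

# vzctl set 101 --cpus 1 --save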

network performance

  • Do not use file-transfer utilities to test network performance: the bottleneck in such tests is usually filesystem performance, not the TCP/IP stack (see the sketch below).
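
A minimal sketch using iperf, which sends data memory-to-memory and so avoids the disk entirely; it assumes iperf is installed on both hosts, and the host name is hypothetical:

# iperf -s                 # on the server host
# iperf -c test-server     # on the client host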

network checksumming

TODO