On-demand accounting
This page describes a promising way of optimising beancounters.
Current accounting model
Basically, allocation of any kind of resource looks like this:
struct some_resource *get_the_resource(int amount)
{
        struct some_resource *ret;

        ret = find_or_allocate_the_resource(amount);
        return ret;
}
We change this behaviour to work like this:
struct some_resource *get_the_resource(int amount)
{
        struct some_resource *ret;

        if (charge_beancounter(amount) < 0)
                return NULL;

        ret = find_or_allocate_the_resource(amount);
        if (ret != NULL)
                return ret;

        uncharge_beancounter(amount);
        return NULL;
}
The charge_beancounter() call is responsible for checking whether the user is allowed to consume the desired amount of the resource, i.e. whether the resource consumption level is below the configured limit.
Obviously, this change slows down the original code, as charge_beancounter() performs some slow operations such as taking locks. We have an idea of how to optimise this behaviour.
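To make the cost concrete, here is a minimal sketch of what charge_beancounter() and uncharge_beancounter() might do. The beancounter structure and its fields here are assumptions for illustration, not the actual kernel code:

/*
 * Simplified sketch of the charging logic, assuming a beancounter
 * with a spinlock, a "held" counter and a "limit" field.
 * All names are illustrative only.
 */
struct beancounter {
        spinlock_t      lock;
        unsigned long   held;   /* current consumption level */
        unsigned long   limit;  /* configured limit */
};

static int charge_beancounter_sketch(struct beancounter *bc, int amount)
{
        int ret = -ENOMEM;

        spin_lock(&bc->lock);           /* this lock is what makes charging slow */
        if (bc->held + amount <= bc->limit) {
                bc->held += amount;
                ret = 0;
        }
        spin_unlock(&bc->lock);
        return ret;
}

static void uncharge_beancounter_sketch(struct beancounter *bc, int amount)
{
        spin_lock(&bc->lock);
        bc->held -= amount;
        spin_unlock(&bc->lock);
}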
On-demand accounting basics
The main idea sounds like this: instead of charging the beancounter with the exact amount of resource on every allocation, we maintain a cheap upper estimation of the consumption level and check that against the limit. Apparently, when the estimation exceeds the limit, we must switch to the slower mode, which will give us a more precise value of the consumption level and (probably) allow allocating another portion of the resource.
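A schematic sketch of this two-mode pattern is shown below. It assumes two extra fields in the illustrative beancounter structure above (an "estimation" counter and a "fast_accounting" flag) and a hypothetical recalculate_precise_value() helper; none of these names come from the actual code:

/*
 * Schematic sketch of on-demand accounting: charge against a cheap
 * estimation first, fall back to precise accounting only when the
 * estimation hits the limit.
 */
static int charge_estimated(struct beancounter *bc, int amount)
{
        if (bc->fast_accounting) {
                /* fast path: compare the cheap upper estimation with
                 * the limit, no locks and no exact accounting here */
                if (bc->estimation + amount <= bc->limit) {
                        bc->estimation += amount;
                        return 0;
                }
                /* the estimation has hit the limit: switch to the precise mode */
                bc->fast_accounting = 0;
                recalculate_precise_value(bc);  /* hypothetical helper */
        }
        /* slow path: the usual precise charging with locks etc. */
        return charge_beancounter(amount);
}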
Examples
Let's look at some examples of how this will work.
The user memory
Currently we account for the physpages resource, that is, the number of physical pages consumed by the processes. The accounting hooks are placed inside the page fault handlers and hurt performance. The accounting looks like this:
struct page *get_new_page(struct mm_struct *mm)
{
        struct page *pg;

        if (charge_beancounter(1) < 0)
                return NULL;

        pg = alloc_new_page(mm);
        if (pg != NULL)
                return pg;

        uncharge_beancounter(1);
        return NULL;
}
However, we have a good estimation of the RSS size -- the lengths of the mappings of the processes. Since physical pages can only be allocated within these mappings, the RSS value can never exceed the sum of their lengths. The accounting will then look like this:
struct vm_area_struct *get_new_mapping(struct mm_struct *mm, unsigned long pages)
{
        if (!mm->fast_accounting)
                goto allocate;

        if (charge_beancounter(pages) == 0)
                goto allocate;

        /* the estimation has hit the limit -- switch to the precise mode */
        mm->fast_accounting = 0;
        recalculate_the_rss(mm);

allocate:
        return expand_mapping(mm);
}

struct page *get_new_page(struct mm_struct *mm)
{
        struct page *pg;

        if (mm->fast_accounting)
                goto fast_path;

        if (charge_beancounter(1) < 0)
                return NULL;

fast_path:
        pg = alloc_new_page(mm);
        if (pg != NULL)
                return pg;

        if (!mm->fast_accounting)
                uncharge_beancounter(1);
        return NULL;
}
We do not call the slow charge_beancounter() function in the page fault path (get_new_page()). Instead, we account for the upper estimation in the get_new_mapping() call, which happens rarely, and thus increase the performance.
Note that recalculate_the_rss() is called to calculate the exact RSS value on the beancounter.
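For completeness, here is one possible sketch of recalculate_the_rss(), assuming the mm keeps track of the total mapped length and the exact RSS. The mm->total_mapped_pages and mm->rss_pages fields are assumptions for illustration, and the sketch ignores failure handling:

/*
 * Sketch of switching from the fast (estimated) mode to the precise one:
 * drop the mapping-length estimation charged so far and charge the
 * beancounter with the exact number of resident pages instead.
 * After this, page faults charge the beancounter one page at a time.
 */
static void recalculate_the_rss(struct mm_struct *mm)
{
        /* return the upper estimation charged at mapping-creation time */
        uncharge_beancounter(mm->total_mapped_pages);

        /* charge the exact RSS; it never exceeds the estimation,
         * so this charge is expected to succeed */
        charge_beancounter(mm->rss_pages);
}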