== What makes containers perfectly suitable for high density? ==
# Containers do not reserve the memory assigned to them. They exhibit real-time elastic behaviour: if applications do not use memory, it is free for other containers to use. In other words, overcommit is as natural and easy as on a standalone Linux box where multiple applications compete for memory. As a result, a simple container running ssh, apache, init and cron takes only 10-20 MB of physical RAM. Of course, more RAM is needed for efficient caching of apache pages. (A small measurement sketch after the lists below shows how to check this on a running node.)
# Container memory management is system-wide. If one container needs more physical RAM (e.g. for apache page caching) and the hardware node has no free memory left, the kernel automatically reclaims the least recently used caches of other containers.
# Parallels Virtuozzo Containers goes a step further and introduces container templates, which are used as a basis for all containers. A special copy-on-write filesystem makes sure the original template is kept untouched: a container gets its own private copy of a template file only when it tries to modify it. As a result, all common files are shared across containers and are present on disk and in memory caches in a single instance. This saves memory, reduces I/O and makes L2/L3 caches work more effectively. (A toy sketch of the copy-on-write idea follows after this list.)

== Why are Virtual Machines not that good? ==

Hypervisors (Xen, ESX, KVM) are not that good in high density scenarios. There are multiple reasons for that:
# Memory is basically reserved on the host at VM start. For example, KVM and Xen by default reserve the whole of the guest memory and do not allow memory overcommitment. As a result you can't run more than 15 VMs with 1 GB RAM each on a 16 GB box. ESX, as the most advanced hypervisor, uses page sharing and ballooning to introduce memory overcommitment; however, in practice this gives only about 2x overcommitment on guests of the same type. From our experiments, half of this improvement is due to page sharing and half due to ballooning. (See the back-of-the-envelope comparison at the end of the page.)
# Multiple kernels and their system data structures are kept in memory. Technologies like RAM page sharing and ballooning help a bit, but do not remove this overhead.
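To see the elastic container behaviour described above on a live node, per-container physical RAM usage can be read from the kernel's beancounters. The sketch below is a minimal illustration, assuming the usual <code>/proc/user_beancounters</code> layout of OpenVZ/Virtuozzo kernels (a per-container table with held/maxheld/barrier/limit/failcnt columns) and a 4 KB page size; column names and units may differ between kernel versions.

<pre>
#!/usr/bin/env python
# Rough sketch: report how much physical RAM each container actually
# uses, based on the "physpages" beancounter.  Assumes the usual
# /proc/user_beancounters layout of OpenVZ/Virtuozzo kernels and a
# 4 KB page size; adjust if your kernel reports things differently.

PAGE_KB = 4

def physpages_per_ct(path="/proc/user_beancounters"):
    usage = {}
    ctid = None
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Skip the version line and the column header line.
            if not fields or fields[0] in ("Version:", "uid"):
                continue
            if fields[0].endswith(":"):
                # New per-container section, e.g. "101:  kmemsize ..."
                # (uid 0 is the host itself).
                ctid = fields[0].rstrip(":")
                fields = fields[1:]
            if ctid is not None and fields and fields[0] == "physpages":
                usage[ctid] = int(fields[1]) * PAGE_KB  # "held" column
    return usage

if __name__ == "__main__":
    for ctid, kb in sorted(physpages_per_ct().items()):
        print("CT %s uses ~%d MB of physical RAM" % (ctid, kb // 1024))
</pre>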
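The template/copy-on-write point above can be illustrated with a toy model: containers read files from a shared template until they modify one, at which point they get a private copy. This only sketches the concept; it is not how the real Virtuozzo copy-on-write filesystem is implemented.

<pre>
# Toy model of the template + copy-on-write idea (concept only, not the
# actual Virtuozzo filesystem implementation).

class CowContainerFS(object):
    def __init__(self, template):
        self.template = template   # shared, read-only file contents
        self.private = {}          # per-container modified copies

    def read(self, path):
        # A private copy wins; otherwise fall back to the shared template.
        return self.private.get(path, self.template[path])

    def write(self, path, data):
        # The first write triggers the copy; the template stays untouched.
        self.private[path] = data


template = {"/etc/issue": "Welcome to the template\n"}
ct101 = CowContainerFS(template)
ct102 = CowContainerFS(template)

ct101.write("/etc/issue", "Welcome to CT 101\n")
print(ct101.read("/etc/issue"))   # the private copy
print(ct102.read("/etc/issue"))   # still the single shared template copy
</pre>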
# what happens when a CT is above its limit
# what happens when node RAM is exhausted
# plots and examples. Kir had an http_load plot in the past. We will have LAMP results as well.
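Until real plots are added, the numbers quoted in the virtual machine section can be put into a quick back-of-the-envelope comparison. All figures below (host size, per-VM RAM, hypervisor overhead, idle container footprint) are illustrative assumptions taken from the text above, not measurements.

<pre>
# Back-of-the-envelope density comparison for a 16 GB host.  All figures
# are illustrative assumptions taken from the text above (1 GB guests,
# ~1 GB host/hypervisor overhead, 10-20 MB idle container footprint),
# not benchmark results.

HOST_RAM_MB = 16 * 1024
VM_RAM_MB = 1024                 # each VM is given 1 GB
HYPERVISOR_OVERHEAD_MB = 1024    # assumed host/hypervisor reservation
IDLE_CT_MB = 20                  # idle container: ssh, apache, init, cron

# Hypervisor without overcommit: memory is reserved at VM start.
vms_plain = (HOST_RAM_MB - HYPERVISOR_OVERHEAD_MB) // VM_RAM_MB

# ESX-style page sharing + ballooning: roughly 2x overcommit in practice.
vms_overcommit = vms_plain * 2

# Containers: only the memory actually used is consumed.  Real density
# is of course limited by the workload, not by the idle footprint.
idle_cts = HOST_RAM_MB // IDLE_CT_MB

print("VMs, no overcommit:  %d" % vms_plain)
print("VMs, ~2x overcommit: %d" % vms_overcommit)
print("idle containers:     %d" % idle_cts)
</pre>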