Difference between revisions of "CT storage backends"

From OpenVZ Virtuozzo Containers Wiki
Jump to: navigation, search
m (add 'translate' tags)
(Some ZFS fixes and updates)
 
(8 intermediate revisions by 3 users not shown)
Line 1: Line 1:
 
{{stub}}
 
{{stub}}
 +
<translate>
 +
 +
= Comparison table =
  
<translate>
+
<!--T:1-->
 
{| class="wikitable sortable" style="text-align: center;"
 
{| class="wikitable sortable" style="text-align: center;"
 +
|-
 
! Feature
 
! Feature
! Ploop
+
! OVZ Ploop
! SIMFS
+
! OVZ SimFS (ext4)
 +
! LVM (ext4)
 +
! ZFS
 +
|-
 +
!colspan="11" style="font-style:bold;background-color:gold;"|1. Solidity in front of failures and security
 +
|-
 +
|'''I/O isolation'''
 +
|{{Yes|Good}}
 +
|{{No|Bad}}: Possibility of "no inodes" issues (when file system journal become a bottleneck).
 +
|{{Yes|Good}}
 +
|{{Yes|Good}}
 
|-
 
|-
|'''Maturity'''
+
|'''Security'''
|Since 2012
+
|{{Yes|Good}}
|Since 2005 (?)
+
|{{No|Bad}}: Some bug could be exploited to escape CT and access HN file system <ref>[https://bugs.openvz.org/browse/OVZ-6296 CVE-2015-2925]</ref> <ref>[http://www.openwall.com/lists/oss-security/2014/06/24/16 CVE-2014-3519]</ref>
 +
|
 +
|
 
|-
 
|-
|'''Maximum disk space'''
+
|'''Reliability'''
|Limited:<ref>[[Ploop/Limits]]</ref> ploop v1 - 2 Tb, ploop v2 - 4 Tb
+
|{{No|Low}}: big amount of files produce ext4 corruption so often
|Limited by ext4 filesystem
+
|Medium: fsck, power loss and HW Raid without cache can kill whole data
 +
|High: LVM metadata can be corrupted completely
 +
|{{Yes|Excellent}}: no write hole, checksumming and COW
 +
|-
 +
|'''Filesystem over filesystem'''
 +
|Yes
 +
|No
 +
|No
 +
|{{Yes|Using zvol}}
 +
|-
 +
|'''Effect of HN filesystem corruption at /vz'''
 +
|{{Yes|No corruption}}
 +
|{{No|Possible corruption}}
 +
|?
 +
|?
 +
|-
 +
|'''Maturity in O/VZ'''
 +
|{{Yes|Since 2012}}
 +
|{{Yes|Since ~2005}}
 +
|{{Yes|Since 1998}}
 +
|{{Yes|Since 2014}}
 +
|-
 +
!colspan="11" style="font-style:bold;background-color:gold;"|2. Performance and design features
 +
|-
 +
|'''Maximum container volume space'''
 +
|4 TiB <ref>[[Ploop/Limits]]</ref>
 +
|1 EiB <ref>[https://en.wikipedia.org/wiki/Ext4 Ext4]</ref>
 +
|?
 +
|256 ZiB
 
|-
 
|-
 
|'''Disk space overhead'''
 
|'''Disk space overhead'''
|{{Yes}}, up to 20% for allocated ext4 metadata
+
|Up to 20%
|{{No}}
+
|No
 +
|Up to 20%
 +
|No, but if using zvol is up to 50% depending on volblocksize
 
|-
 
|-
|'''Speed'''
+
|'''Disk I/O speed'''
 +
|Fast
 +
|Fast only with small amount of containers per node, slowdown in case of big number of small files.
 +
|Fast
 
|Fast
 
|Fast
|Fast only with small amount of containers per node
 
|-
 
|'''I/O isolation'''
 
|Good
 
|Bad, "no inodes" issues (when file system journal is bottleneck)
 
|-
 
|'''Need for run external tools for compaction VE images'''
 
|{{Yes}}, you should vzctl compact every few days for saving your disk space
 
|{{No}}
 
 
|-
 
|-
 
|'''Disk space overcommit (provide more space for containers than available on server now)'''
 
|'''Disk space overcommit (provide more space for containers than available on server now)'''
 
|{{Yes}}
 
|{{Yes}}
 +
|{{Yes}}
 +
|No
 
|{{Yes}}
 
|{{Yes}}
 
|-
 
|-
|'''Reliability'''
+
|'''Different containers may use file systems of different types and properties'''
|Low: big amount of files produce ext4 corruption so often
+
|{{Yes}}
|High: fsck, power loss and HW Raid without cache can kill whole data
+
|{{No}}
 +
|{{Yes}}
 +
|{{Yes|Using zvol}}
 
|-
 
|-
|'''Access to private area from host '''
+
|'''Second level quotes in Linux (inside container)'''
 
|{{Yes}}
 
|{{Yes}}
 
|{{Yes}}
 
|{{Yes}}
 +
|{{Yes}}
 +
|{{Yes|Using zvol}}
 
|-
 
|-
|'''Fear to use filesystem over filesystem'''
+
|'''Potential support for QCOW2 and other image formats'''
 
|{{Yes}}
 
|{{Yes}}
 
|{{No}}
 
|{{No}}
|-
+
|{{No}}
|'''Live backup is easy and consistent'''
+
|{{No}}
|{{Yes}}<ref name="ploop backup">[http://openvz.livejournal.com/44508.html ploop snapshots and backups]</ref><ref>[[Ploop/Backup]]</ref>, fast block level backup
 
|{{No}} (in case of big number of files )
 
 
|-
 
|-
 
|'''Incremental backup support on filesystem level'''
 
|'''Incremental backup support on filesystem level'''
|{{Yes}} (snapshots)
+
|{{Yes}}, through snapshots
 +
|{{No}}
 
|{{No}}
 
|{{No}}
 +
|{{Yes}}
 +
|-
 +
|'''Shared storage support (Virtuozzo storage, NFS)'''
 +
|{{Yes|Yes}}
 +
|{{No|No}}
 +
|{{Yes|Yes}}
 +
|{{Yes|NFS only}}
 
|-
 
|-
|'''Different containers may use file systems of different types and properties'''
+
!colspan="11" style="font-style:bold;background-color:gold;"|3. Maintenance
|{{Yes}}
 
|{{No}}
 
 
|-
 
|-
|'''Live migration is reliable and efficient'''
+
|'''vzctl integration'''
|{{Yes}}
+
|{{Yes|Complete}}
|{{No}}, when apps rely on files i-node numbers being constant (which is normally the case), those apps are not surviving the migration
+
|{{Yes|Complete}}
 +
|{{No}}, many manual operations
 +
|{{No}}, some manual operations
 
|-
 
|-
|'''Continue failed CT migration'''
+
|'''External compaction for container volumes'''
|{{Yes}}, in [https://lists.openvz.org/pipermail/users/2015-July/006335.html vzctl] from OpenVZ -stable
+
|{{No|Needed}} for saving HN space
|{{Yes}}, option "--keep-dst"
+
|{{Yes|No}}
 +
|{{No|Not available}}
 +
|{{Yes|Not required}}
 
|-
 
|-
|'''Second level quotes in Linux (inside container)'''
+
|'''Access to private area from host'''
 
|{{Yes}}
 
|{{Yes}}
 
|{{Yes}}
 
|{{Yes}}
 +
|?
 +
|{{Yes|Only using ZFS filesystem}}
 
|-
 
|-
|'''[Potential] support for QCOW2 and other image formats'''
+
|'''Live backup'''
|{{Yes}}
+
|{{Yes|Easy, fast and consistent}}<ref>[http://openvz.livejournal.com/44508.html ploop snapshots and backups]</ref> <ref>[[Ploop/Backup]]</ref>
|{{No}}
+
|{{No|Easy, slow, and sometimes inconsistent}} in case some application depends on inode IDs
 +
|{{No|No}}
 +
|{{Yes|Fast}} via ZFS Send/Receive
 
|-
 
|-
|'''No problems with fs corruption on /vz parition'''
+
|'''Snapshot support'''
 +
|{{Yes}}<ref>[http://openvz.livejournal.com/44508.html ploop snapshots and backups]</ref>
 +
|{{No}} theoretically, because of much/small files to be copied
 
|{{Yes}}
 
|{{Yes}}
|{{No}}
 
|-
 
|'''Snapshot support'''
 
|{{Yes}}<ref name="ploop backup">[http://openvz.livejournal.com/44508.html ploop snapshots and backups]</ref>
 
|{{No}}, (because there is a lot of small files that need to be copied)
 
|-
 
|'''Better security'''
 
 
|{{Yes}}
 
|{{Yes}}
|{{No}} (bugs can be exploited to escape the simfs and let container access the host file system: [https://bugs.openvz.org/browse/OVZ-6296 CVE-2015-2925], [http://www.openwall.com/lists/oss-security/2014/06/24/16 CVE-2014-3519], CVE-2015-6927)
 
 
|-
 
|-
|'''Shared storage support (Virtuozzo storage, NFS)'''
+
|'''Live migration'''
|{{Yes}}
+
|{{Yes|Reliable and fast}}
|{{No}}
+
|{{No|Not reliable and slow}}, if some application depends on inode IDs
 +
|{{No|Not implemented}}
 +
|{{Yes|Fast}} via ZFS Send/Receive
 
|-
 
|-
|''' Disk space footprint'''
+
|'''Continue failed CT migration'''
|{{Yes}}
+
|{{Yes}}, in [https://lists.openvz.org/pipermail/users/2015-July/006335.html vzctl] from OpenVZ -stable
|{{No}}
+
|{{Yes}}, option "--keep-dst"
 +
|{{No|Not implemented}}
 +
|?
 
|-
 
|-
 
|}
 
|}
 +
 
</translate>
 
</translate>
  
 
[[Category: Storage]]
 
[[Category: Storage]]

Latest revision as of 18:02, 15 December 2022

<translate>

Comparison table[edit]

Feature OVZ Ploop OVZ SimFS (ext4) LVM (ext4) ZFS
1. Solidity in front of failures and security
I/O isolation Good Bad: Possibility of "no inodes" issues (when file system journal become a bottleneck). Good Good
Security Good Bad: Some bug could be exploited to escape CT and access HN file system [1] [2]
Reliability Low: big amount of files produce ext4 corruption so often Medium: fsck, power loss and HW Raid without cache can kill whole data High: LVM metadata can be corrupted completely Excellent: no write hole, checksumming and COW
Filesystem over filesystem Yes No No Using zvol
Effect of HN filesystem corruption at /vz No corruption Possible corruption ? ?
Maturity in O/VZ Since 2012 Since ~2005 Since 1998 Since 2014
2. Performance and design features
Maximum container volume space 4 TiB [3] 1 EiB [4] ? 256 ZiB
Disk space overhead Up to 20% No Up to 20% No, but if using zvol is up to 50% depending on volblocksize
Disk I/O speed Fast Fast only with small amount of containers per node, slowdown in case of big number of small files. Fast Fast
Disk space overcommit (provide more space for containers than available on server now) Yes Yes No Yes
Different containers may use file systems of different types and properties Yes No Yes Using zvol
Second level quotes in Linux (inside container) Yes Yes Yes Using zvol
Potential support for QCOW2 and other image formats Yes No No No
Incremental backup support on filesystem level Yes, through snapshots No No Yes
Shared storage support (Virtuozzo storage, NFS) Yes No Yes NFS only
3. Maintenance
vzctl integration Complete Complete No, many manual operations No, some manual operations
External compaction for container volumes Needed for saving HN space No Not available Not required
Access to private area from host Yes Yes ? Only using ZFS filesystem
Live backup Easy, fast and consistent[5] [6] Easy, slow, and sometimes inconsistent in case some application depends on inode IDs No Fast via ZFS Send/Receive
Snapshot support Yes[7] No theoretically, because of much/small files to be copied Yes Yes
Live migration Reliable and fast Not reliable and slow, if some application depends on inode IDs Not implemented Fast via ZFS Send/Receive
Continue failed CT migration Yes, in vzctl from OpenVZ -stable Yes, option "--keep-dst" Not implemented ?

</translate>