Virtuozzo Storage

From OpenVZ Virtuozzo Containers Wiki
'''Virtuozzo Storage''' is a highly-available distributed storage (virtual SAN) with built-in replication and disaster recovery.
  
Virtuozzo Storage provides a storage virtualization platform on top of commodity hardware with locally attached hard drives, and unifies that storage into a cluster for virtualization scenarios using virtual machines (VMs) and/or Containers ([[CT]]s). It enables fast live migration of VMs and CTs across hardware nodes without copying VM/CT data, and provides high availability, since the storage remains accessible remotely.
  
== Features ==
<br clear="both">

[[File:Parallels_Cloud_Storage_is_a_software_defined_storage.png|300px|right|link=http://www.youtube.com/watch?v=6oEzW9w-1rg|Virtuozzo Storage is a software defined storage]]
The main Virtuozzo Storage features are listed below:
  
 
* No special hardware requirements. Commodity hardware (SATA/SAS drives, 1Gbit+ Ethernet) can be used to create a storage.
* Strong consistency semantics. This makes Virtuozzo Storage suitable for iSCSI, VMs, and CTs running on top of it (unlike object storage such as Amazon S3 or Swift).
* Usable for Containers, or exportable as iSCSI, NFS, or S3 object storage.
* Built-in replication.
* Automatic disaster recovery on hard drive or node failures.
* High availability. Data remain accessible even in case of hard drive or node failures.
* Optional SSD caching. SSD caches boost the overall performance in the cluster on write and read operations.
* Data checksumming and scrubbing. Checksumming and scrubbing greatly enhance data reliability.
* Grow on demand. More storage nodes can be added to the cluster to increase its disk space. A VM/CT image size is not limited by the size of any of the hard drives.
* Scales to petabytes.
* More uniform hardware performance and capacity utilization across the nodes, so overutilized nodes benefit from idle ones.
* High performance, comparable to a SAN.
  
See a brief [http://www.youtube.com/watch?v=6oEzW9w-1rg video on YouTube].

<br clear="both">
  
== Pstorage for OpenVZ limitations ==
{{Warning|
* Virtuozzo Storage is available as a TECHNOLOGY PREVIEW ONLY for OpenVZ users and can't be licensed for production.
* To unlock it for production use, you should upgrade to the full [[Virtuozzo]] product (see below).
* Capacity in technology preview mode is limited to 100 GB of logical (container-usable) disk space.
* After this limit is hit, writes may block without errors while waiting for the limit to be extended, so please avoid hitting the limit (it's not a bug).
}}
=== Components ===

[[File:Parallels_Cloud_Storage_components.png|650px|top|Virtuozzo Storage Components]]

Any Virtuozzo Storage cluster includes three components:
  
 
* Metadata server (MDS). MDSs manage metadata, such as file names, and control how files are split into chunks and where the chunks are stored. They also track chunk versions and ensure that the cluster has enough replicas. In addition, MDSs keep a global log of important events that happen in the cluster. An MDS can be run in multiple instances to provide high availability.
* Chunk server (CS). A CS is a service responsible for storing user data chunks and providing access to them. A cluster must have multiple CS instances for high availability.
* Clients. Clients access a Virtuozzo Storage cluster by communicating with MDSs and CSs. Virtuozzo Containers and virtual machines can be run natively, i.e. directly from the Virtuozzo Storage cluster. An additional Virtuozzo Storage client can be used to mount Virtuozzo Storage as a conventional file system (though it is not fully POSIX-compliant). Besides, Virtuozzo Storage files can be mounted as a block device using the "ploop" feature and formatted as an ext4 file system for other needs.

A recommended cluster setup includes from 3 to 5 MDS instances (allowing you to survive the loss of 1 or 2 MDSs, respectively) and multiple CSs providing storage capacity.
  
== Setup ==

This HOWTO explains how to set up a Virtuozzo Storage cluster and run OpenVZ containers stored on it. Please note that this is only a brief HOWTO for quick and easy evaluation of Virtuozzo Storage (configuring just one MDS and one CS service), not a real manual. We highly recommend consulting the [http://download.parallels.com/doc/pcs/pdf/Parallels_Cloud_Storage.pdf Pstorage manual] and the man pages (such as pstorage, pstorage-make-cs, pstorage-make-mds, etc.), as they contain many important details on supported SSD drive types, recommended configurations, configuring big clusters with failure domains, and so on.

=== Installing Virtuozzo Storage software ===

In order to install the Pstorage RPM packages, log in as root to all the machines planned to be added to the cluster and perform the following actions.
  
Set up the pstorage yum repository:

<pre>
cat << EOF > /etc/yum.repos.d/pstorage.repo
[openvz-pstorage]
name=Virtuozzo Storage for OpenVZ
baseurl=http://download.openvz.org/pstorage/current
enabled=1
gpgcheck=0
EOF
</pre>
  
Install the needed packages:

<pre>
yum install pstorage-metadata-server pstorage-chunk-server pstorage-client
</pre>

=== Creating a cluster ===
  
 
Every Pstorage cluster has a unique cluster name, used for remote service discovery and during authorization. Choose a name that uniquely identifies the cluster among others in your network, and avoid reusing it when recreating a cluster. A name may contain the characters a-z, A-Z, 0-9, dash (-), and underscore (_). Here we will use 'test_cluster' as the cluster name.
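The naming rule can be sketched as a shell check (a hypothetical helper for illustration, not part of the pstorage tooling):

```shell
# Accept only names built from a-z, A-Z, 0-9, dash, and underscore.
valid_name() {
    case "$1" in
        ''|*[!A-Za-z0-9_-]*) return 1 ;;  # empty, or contains a forbidden character
        *) return 0 ;;
    esac
}

valid_name test_cluster && echo "test_cluster is a valid cluster name"
valid_name "bad name!" || echo "'bad name!' is rejected"
```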
  
==== Create metadata servers (MDS) ====
  
 
Log in as root to the computer you want to configure as a metadata server.

To create the cluster and the very first MDS, type:
  
<pre>
pstorage -c test_cluster make-mds -I -a 10.30.100.101 -r /pstorage/test_cluster-mds -p
</pre>
  
This command creates a new Virtuozzo Storage cluster and a metadata server, and configures the IP address ''10.30.100.101'' for communication with this server (replace ''10.30.100.101'' with the IP address of your own MDS server). The MDS will store its data at the location specified by the '''-r''' option. The command will also ask you to enter the password for authentication in your cluster.
  
After you have created the MDS server, start the MDS management service ('''pstorage-mdsd''') and configure it to start automatically when the server boots:
  
<pre>
service pstorage-mdsd start
chkconfig pstorage-mdsd on
</pre>

To create the second and subsequent MDS services on other nodes, do the following:
1. Log in to the node as root.

2. Set up cluster discovery. Normally, all Pstorage components should be able to discover each other on the network using multicast discovery (mDNS). However, this may not work inside virtual machines, or if your network doesn't support multicast. In that case you need to set up an MDS bootstrap list on the nodes manually. To do so, create the '''bs.list''' file in the '''/etc/pstorage/clusters/<cluster_name>''' directory (create this directory if it does not exist) on the server you are configuring for the cluster, and specify the IP addresses and ports of the MDS servers in the cluster.
 
For example, to create a bootstrap list for the cluster created above, type:

<pre>
echo "10.30.100.101:2510" >> /etc/pstorage/clusters/test_cluster/bs.list
</pre>

Now future Pstorage services started on this machine will be able to discover the other parties.
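The bootstrap list is a plain text file with one "IP:port" MDS endpoint per line (2510 is the port used in the example above). A self-contained sketch, written to a scratch directory here; on a real node the file is /etc/pstorage/clusters/&lt;cluster_name&gt;/bs.list:

```shell
# Build a bootstrap list with two MDS endpoints, one per line.
dir=$(mktemp -d)
printf '%s\n' "10.30.100.101:2510" "10.30.100.102:2510" > "$dir/bs.list"
cat "$dir/bs.list"
```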
3. Authenticate the server in the cluster, and add the new MDS to the cluster using a make-mds command similar to the one above, but without the -I and -p options:

<pre>
pstorage -c test_cluster auth-node
pstorage -c test_cluster make-mds -a 10.30.100.102 -r /pstorage/test_cluster-mds
</pre>

==== Create a chunk server (CS) ====
  
 
Log in as root to the computer you want to configure as a chunk server. Note that you may need to set up a bootstrap list as described above if cluster auto-discovery doesn't work.

Authenticate the server in the cluster (skip this step if an MDS or CS is already configured on this server):

<pre>
pstorage -c test_cluster auth-node
</pre>
 
 
The command will ask for the password that you specified when setting up the first MDS server.
  
Create a CS:

<pre>
pstorage -c test_cluster make-cs -r /pstorage/test_cluster-cs
</pre>
  
This command will create a CS service and use the directory specified by the '''-r''' option as the CS data store. After you have created the chunk server, start it as a service ('''pstorage-csd''') and configure it to start automatically when the machine boots:

<pre>
service pstorage-csd start
chkconfig pstorage-csd on
</pre>
==== Setting up a client ====

Log in as root to the computer you want to act as a client. Note that you may need to set up a bootstrap list as described above if cluster auto-discovery doesn't work.
  
 
Authenticate the server in the cluster (skip this step if an MDS or CS is already configured on this server):

<pre>
pstorage -c test_cluster auth-node
</pre>

The command will ask for the password that you specified when setting up the first MDS server.
  
Create the directory to mount the Virtuozzo Storage cluster to, and then mount the cluster as a conventional file system:

<pre>
mkdir -p /pcs
pstorage-mount -c test_cluster /pcs
</pre>

You may want to add this mount to /etc/fstab so that it happens automatically on node reboot. Consult ''man pstorage-mount'' for more details.
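For example, such an /etc/fstab entry might look like the sketch below. The ''pstorage://'' device notation and ''fuse.pstorage'' type are assumptions based on the Parallels Cloud Storage documentation; verify the exact syntax against ''man pstorage-mount'' before use:

<pre>
pstorage://test_cluster /pcs fuse.pstorage rw,nosuid,nodev 0 0
</pre>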
 
 
Now you can access your data from all the client machines, and you are ready to run containers!
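At this point you can also check cluster health and capacity. A sketch, assuming the stock pstorage CLI (the '''top''' subcommand is described in ''man pstorage''; these commands require a running cluster, so run them on a cluster node):

<pre>
pstorage -c test_cluster top    # live overview: MDS/CS status, replication, I/O
df -h /pcs                      # logical capacity and usage via the mount point
</pre>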
  
==== Create a container running in the cluster ====

Running a container over Pstorage is no different from running one on any other local file system, so the example below is just for reference. Log in to the computer running OpenVZ that you have configured to act as a client for the Virtuozzo Storage cluster.
 
 
Load the OpenVZ ploop kernel modules if they aren't loaded yet:

<pre>
modprobe ploop pfmt_ploop1 pio_kaio
</pre>

Mount the Pstorage cluster at '''/pcs''' as described above, if not done yet, and create a folder on it for the containers:

<pre>
mkdir -p /pcs/containers
</pre>
  
 
Create a ploop-based container with CTID=101 (put your own template name below):

<pre>
vzctl create 101 --layout ploop --ostemplate centos-6-x86_64 --private /pcs/containers/101
</pre>

Now the container with CTID=101 is ready for use and can be started on '''any''' Pstorage client node (note, however, that you first need to register the container if you want to run it on a node other than the one that created it):

<pre>
vzctl start 101
</pre>
 
  
 
In order to quickly relocate the container to another node (without data migration), just stop and unregister it on the source node, then register and start it on the destination.
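The relocation steps above can be sketched as follows for the CTID=101 container from the example (''vzctl register'' takes the private area path and the CTID):

<pre>
# on the source node
vzctl stop 101
vzctl unregister 101

# on the destination node (also a cluster client with /pcs mounted)
vzctl register /pcs/containers/101 101
vzctl start 101
</pre>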
  
== Upgrading to Virtuozzo ==

'''[http://www.odin.com/products/virtuozzo/ Virtuozzo]''' is a virtualization server platform combining hypervisor and container-based virtualization with storage virtualization.

Please request more information on upgrading to Virtuozzo at the [http://www.odin.com/products/virtuozzo/ product page] (look for the '''Request Information''' button).

== External links ==

* [http://www.odin.com/fileadmin/media/hcap/pcs/documents/ParCloudServer6_DataSheet_EN_Ltr_111312.pdf Virtuozzo product datasheet]
* [http://www.odin.com/fileadmin/media/hcap/pcs/documents/ParCloudStorage_DataSheet_EN_Ltr_02262013.pdf Parallels Cloud Storage product datasheet]
* [http://download.parallels.com/doc/pcs/pdf/Parallels_Cloud_Storage_Administrators_Guide.pdf Parallels Cloud Storage Administrator's Guide]
* [http://www.odin.com/fileadmin/media/hcap/pcs/documents/PCloudStorage_Performance_Results_WP_EN_Ltr_02192013_web.pdf Pstorage performance whitepaper]
* [http://www.youtube.com/watch?v=6oEzW9w-1rg Pstorage introduction video]
  
''Latest revision as of 20:04, 21 February 2016''

[[Category: Storage]]