<translate>
<!--T:1-->
'''Virtual Ethernet device''' is an Ethernet-like device that can be used
inside a [[container]]. Unlike a [[venet]] network device, a [[veth]] device
has a MAC address. Therefore, it can be used in more configurations. When veth
is bridged to a [[CT0]] network interface (e.g., eth0), the container can act as an
independent host on the network. The container's user can set up all of the networking
himself, including IPs, gateways, etc.

<!--T:2-->
A virtual Ethernet device consists of two Ethernet devices,
one in [[CT0]] (e.g., vethN.0) and a corresponding one in CT (e.g., eth0) that are
connected to each other. If a packet is sent to one device, it will come out the other device.
  
== Virtual Ethernet device usage == <!--T:3-->

=== Kernel module === <!--T:4-->
First of all, make sure the <code>vzethdev</code> module is loaded:
<pre>
# lsmod | grep vzeth
</pre>
  
<!--T:5-->
In case it is not loaded, load it:
<pre>
# modprobe vzethdev
</pre>
  
=== MAC addresses === <!--T:6-->
The following steps to generate a MAC address are not necessary, since newer versions
of vzctl will automatically generate a MAC address for you. These steps are provided
in case you want to set a MAC address manually.

<!--T:7-->
You should use a random MAC address when adding a network interface to a container. Do not use MAC addresses of real eth devices, because this can lead to collisions.

<!--T:8-->
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format.

<!--T:9-->
There is a utility script available for generating MAC addresses: https://github.com/moutai/eucalyptus-utils/blob/master/easymac.sh. It is used like this:

 <!--T:10-->
 chmod +x easymac.sh
 ./easymac.sh -R
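If you do not have the script handy, a random MAC address can also be generated with standard tools. The sketch below uses the <code>02:</code> prefix, which sets the locally-administered bit, so the result cannot collide with a vendor-assigned hardware MAC:

```shell
# Read 5 random bytes and print them as a locally administered
# MAC address of the form 02:XX:XX:XX:XX:XX.
od -An -N5 -tx1 /dev/urandom |
    awk '{ printf "02:%s:%s:%s:%s:%s\n",
           toupper($1), toupper($2), toupper($3), toupper($4), toupper($5) }'
```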
  
=== Adding veth to a CT === <!--T:11-->

 <!--T:12-->
 vzctl set <CTID> --netif_add <ifname>[,<mac>,<host_ifname>,<host_mac>,<bridge>]

<!--T:13-->
Here
* <tt>ifname</tt> is the Ethernet device name in the CT
* <tt>mac</tt> is its MAC address in the CT
* <tt>host_ifname</tt> is the Ethernet device name on the host ([[CT0]])
* <tt>host_mac</tt> is its MAC address on the host ([[CT0]]). If you want independent communication with the container through the bridge, you should explicitly specify FE:FF:FF:FF:FF:FF here.
* <tt>bridge</tt> is an optional parameter which can be used in custom network start scripts to automatically add the interface to a bridge. (See the reference to the vznetaddbr script below and persistent bridge configurations.)

<!--T:14-->
{{Note|All parameters except <code>ifname</code> are optional. Missing parameters, except for bridge, are automatically generated if not specified.}}
 
  
<!--T:15-->
Example:

 <!--T:16-->
 vzctl set 101 --netif_add eth0 --save

<!--T:17-->
If you want to specify everything:

 <!--T:18-->
 vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,00:12:34:56:78:9B --save

<!--T:19-->
If you want to use independent communication through the bridge:

 <!--T:20-->
 vzctl set 101 --netif_add eth0,00:12:34:56:78:9A,veth101.0,FE:FF:FF:FF:FF:FF,vzbr0 --save

<!--T:21-->
If you want to specify the bridge and autogenerate the other values:

 <!--T:22-->
 vzctl set 101 --netif_add eth0,,,,vzbr0 --save
 
  
=== Removing veth from a CT === <!--T:23-->

 <!--T:24-->
 vzctl set <CTID> --netif_del <dev_name>|all

<!--T:25-->
Here
* <code>dev_name</code> is the Ethernet device name in the [[CT]].

<!--T:26-->
{{Note|If you want to remove all Ethernet devices in CT, use <code>all</code>.}}

<!--T:27-->
Example:

 <!--T:28-->
 vzctl set 101 --netif_del eth0 --save
 
  
== Common configurations with virtual Ethernet devices == <!--T:29-->
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.

=== Simple configuration with virtual Ethernet device === <!--T:30-->

<!--T:31-->
Assuming that 192.168.0.0/24 is being used on your LAN, the following sections show how to configure a container for the LAN using veth.

==== Start a CT ==== <!--T:32-->

 <!--T:33-->
 [host-node]# vzctl start 101

==== Add veth device to CT ==== <!--T:34-->

 <!--T:35-->
 [host-node]# vzctl set 101 --netif_add eth0 --save

<!--T:36-->
This allocates a MAC address and associates it with the host eth0 port.
  
==== Configure devices in CT0 ==== <!--T:37-->
The following steps are needed when the [[CT]] is '''not''' bridged to a [[CT0]] network interface. That is because the [[CT]] is connected to a virtual network that is "behind" [[CT0]]. [[CT0]] must forward packets between its physical network interface and the virtual network interface where the [[CT]] is located. The first step below, configuring the interface, is not necessary if the container has been started, since the device will have already been initialized.
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
</pre>
  
==== Configure device in CT ==== <!--T:38-->
The following steps show an example of a quick manual configuration of the [[CT]] network interface. Typically, you would configure the network settings in /etc/network/interfaces (Debian, see below) or however it is normally configured on your distribution. You can also comment out or remove the configuration for venet0, if it exists, because that device will not be used.
<pre>
[host-node]# vzctl enter 101
[ve-101]# /sbin/ifconfig eth0 0
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0
[ve-101]# /sbin/ip route add default dev eth0
</pre>
  
<!--T:39-->
Notes:
* Until you ifconfig eth0 inside the CT, the device won't appear; when you do, it will use the MAC address that netif_add assigned earlier
** http://openvz.org/pipermail/users/2005-November/000020.html
  
==== Add route in [[CT0]] ==== <!--T:40-->
Since [[CT0]] is acting as a router between its physical network interface and the virtual network interface of the [[CT]], we need to add a route to the [[CT]] to direct traffic to the right destination:

 [host-node]# ip route add 192.168.0.101 dev veth101.0
  
=== Using a directly routed IPv4 with virtual Ethernet device === <!--T:41-->

==== Situation ==== <!--T:42-->
Hardware Node (HN/CT0) has 192.168.0.1/24 with router 192.168.0.254.

<!--T:43-->
We also know that IPv4 10.0.0.1/32 is directly routed to 192.168.0.1 (this is called a ''fail-over IP'').

<!--T:44-->
We want to give this directly routed IPv4 address to a container (CT).
  
==== Start container ==== <!--T:45-->

 <!--T:46-->
 [host-node]# vzctl start 101

==== Add veth device to CT ==== <!--T:47-->

 <!--T:48-->
 [host-node]# vzctl set 101 --netif_add eth0 --save

<!--T:49-->
This allocates a MAC address and associates it with the host eth0 port.
  
==== Configure device and add route in CT0 ==== <!--T:50-->

<!--T:51-->
<pre>
[host-node]# ifconfig veth101.0 0
[host-node]# ip route add 10.0.0.1 dev veth101.0
</pre>
  
<!--T:52-->
You can automate this at VPS creation by using a mount script <tt>$VEID.mount</tt>.

<!--T:53-->
The problem here is that the ''veth'' interface appears in CT0 '''after''' the VPS has started, therefore we cannot directly use the commands in the mount script. Instead, we launch a shell script (enclosed by { }) in the background (operator '''&''') that waits for the interface to be ready and then adds the IP route.

<!--T:54-->
Contents of the mount script <tt>/etc/vz/conf/101.mount</tt>:
 
<pre>
#!/bin/bash
# This script sources VPS configuration files in the same order as vzctl does

<!--T:55-->
# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

<!--T:56-->
# source both files. Note the order, it is important
. /etc/vz/vz.conf
. $VE_CONFFILE

<!--T:57-->
# Configure veth with IP after VPS has started
{
  IFACE=veth101.0
  IPADDR=10.0.0.1
  # wait for the veth interface to appear in CT0
  while ! /sbin/ifconfig $IFACE >/dev/null 2>&1; do
    sleep 1
  done
  /sbin/ifconfig $IFACE 0
  ip route add $IPADDR dev $IFACE
} &
</pre>
  
==== Make sure IPv4 forwarding is enabled in CT0 ==== <!--T:58-->

<!--T:59-->
<pre>
[host-node]# echo 1 > /proc/sys/net/ipv4/ip_forward
</pre>

You can permanently set this by using <tt>/etc/sysctl.conf</tt>.
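For example, with the following line in <tt>/etc/sysctl.conf</tt> (applied at boot, or immediately via <code>sysctl -p</code>):

```
net.ipv4.ip_forward = 1
```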
  
==== Configure device in CT ==== <!--T:60-->

<!--T:61-->
1. Configure IP address

<!--T:62-->
2. Add gateway

<!--T:63-->
3. Add default route

<!--T:64-->
<pre>
[ve-101]# /sbin/ifconfig eth0 10.0.0.1 netmask 255.255.255.255
[ve-101]# /sbin/ip route add 192.168.0.1 dev eth0
[ve-101]# /sbin/ip route add default via 192.168.0.1
</pre>

<!--T:65-->
In a Debian container, you can configure this permanently by using <tt>/etc/network/interfaces</tt>:
<pre>
auto eth0
iface eth0 inet static
        address 10.0.0.1
        netmask 255.255.255.255
        up /sbin/ip route add 192.168.0.1 dev eth0
        up /sbin/ip route add default via 192.168.0.1
</pre>
  
=== Virtual Ethernet device with IPv6 === <!--T:66-->

<!--T:67-->
See the [[VEs and HNs in same subnets]] article.
  
=== Independent Virtual Ethernet communication through the bridge === <!--T:68-->
Bridging a [[CT]] interface to a [[CT0]] interface is the magic that allows the [[CT]] to be an independent host on the network with its own IP address, gateway, etc. [[CT0]] does not need any configuration for forwarding packets to the [[CT]], performing proxy ARP for the [[CT]], or even routing.

<!--T:69-->
To manually configure a bridge and add devices to it, perform steps 1 - 4 from the Simple configuration chapter for several containers and/or veth devices, using FE:FF:FF:FF:FF:FF as the [[CT0]] veth side MAC address, and then follow these steps.

==== Create bridge device ==== <!--T:70-->
<pre>
[host-node]# brctl addbr vzbr0
</pre>
  
==== Add veth devices to bridge ==== <!--T:71-->
<pre>
[host-node]# brctl addif vzbr0 veth101.0
[host-node]# brctl addif vzbr0 veth102.0
...
[host-node]# brctl addif vzbr0 vethNNN.0
</pre>

==== Configure bridge device ==== <!--T:72-->
<pre>
[host-node]# ifconfig vzbr0 0
</pre>
  
=== Automating the bridge === <!--T:73-->
The most convenient method is to automatically create the bridge at boot as a network interface, add the physical interface from [[CT0]], and then add the interface from each [[CT]] as it starts. All devices are connected to a virtual switch, and containers directly access the network just as any other host, without additional configuration on [[CT0]].

<!--T:74-->
In Debian, configure the network interface on [[CT0]] to plug into a bridge in /etc/network/interfaces. The [[CT0]] physical device is added to the bridge as the "uplink" port to the physical network. You need to have bridge-utils installed for this to work.

<!--T:75-->
The bridge forwarding delay is set to 0 seconds so that forwarding begins immediately when a new interface is added to a bridge. The default delay is 30 seconds, during which the bridge pauses all traffic to listen and figure out where devices are. This can interrupt services when a container is added to the bridge. If you aren't running the spanning tree protocol (off by default) and the bridge does not create a loop in your network, then there is no need for a forwarding delay.
<pre>
iface eth0 inet manual

<!--T:76-->
auto vzbr0
iface vzbr0 inet static
        bridge_ports eth0
        bridge_fd 0
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.254
</pre>

Follow the steps below for making a veth bridge persistent with the included script. That will automatically add each container to the bridge when it is started. Finally, specify vzbr0 as the bridge when adding the network interface to a container, as described above. No configuration is needed on [[CT0]] for forwarding packets, proxy ARP or additional routes. The interface in each [[CT]] can be configured as desired. Everything "just works" according to normal network interface configuration and default routing rules. Note that, as discussed in the troubleshooting section below, bridged packets by default pass through the FORWARD iptables chain. Take care when adding rules to that table that bridged packets are not mistakenly blocked. This behavior can be disabled, if desired (sysctl: <code>net.bridge.bridge-nf-call-iptables</code>).
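For instance, if legitimate bridged traffic is being dropped by iptables rules, that sysctl can be turned off at runtime on [[CT0]]. This is a sketch: it requires root, and the setting does not survive a reboot unless also added to /etc/sysctl.conf:

```shell
# Keep bridged frames out of the iptables FORWARD chain
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
```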
 
  
=== Making a veth-device persistent === <!--T:77-->
These steps are no longer necessary, as the veth device is automatically created when the container is started. They remain here as a reference.

<!--T:78-->
According to http://bugzilla.openvz.org/show_bug.cgi?id=301, the bug that prevented the veth device from being persistent was "Obsoleted now when --veth_add/--veth_del are introduced".

<!--T:79-->
See http://wiki.openvz.org/w/index.php?title=Virtual_Ethernet_device&diff=5990&oldid=5989#Making_a_veth-device_persistent for a workaround that used to be described in this section.

<!--T:80-->
That's it! At this point, when you restart the CT you should see a new line in the output, indicating that the interface is being configured and a new route being added. You should be able to ping the host, and to enter the CT and use the network.
  
=== Making a bridged veth-device persistent === <!--T:81-->

<!--T:82-->
Like the above example, here is how to add the veth device to a bridge in a persistent way.

<!--T:83-->
vzctl includes a 'vznetaddbr' script, which makes use of the ''bridge'' parameter of the --netif_add switch.

<!--T:84-->
Just create /etc/vz/vznet.conf containing the following:

<!--T:85-->
<pre>
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>

<!--T:86-->
Or just run the command:
<pre>
echo 'EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"' > /etc/vz/vznet.conf
</pre>

<!--T:87-->
The script uses 'vmbr0' as the default bridge name when no bridge is specified.
  
=== Virtual Ethernet devices + VLAN === <!--T:88-->
This configuration can be done by adding a VLAN device to the previous configuration.
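A minimal sketch of such a setup (interface names, VLAN ID and container ID are examples; requires the 8021q module and root privileges): create a VLAN subinterface on the [[CT0]] uplink, attach it to a dedicated bridge, and add the container's veth device to that bridge.

```shell
# Create VLAN 100 on top of eth0 and bridge it with the CT's veth device
modprobe 8021q
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
brctl addbr vzbr100          # dedicated bridge for this VLAN
brctl addif vzbr100 eth0.100
brctl addif vzbr100 veth101.0
ip link set vzbr100 up
```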
  
== See also == <!--T:89-->
* [[Virtual network device]]
* [[Differences between venet and veth]]
* Troubleshooting: [[Bridge doesn't forward packets]]

== External links == <!--T:90-->
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]
* [http://sysadmin-ivanov.blogspot.com/2008/02/2-veth-with-2-bridges-on-openvz-at.html 2 veth with 2 bridges setup]
* [https://forum.proxmox.com/threads/physical-host-with-2-nics-each-with-different-gateways.1733/#post-9287 Non default gateway for CentOS OpenVZ container] - this applies to BlueOnyx in Proxmox as well. | [[Media:TwoGWsPVECentOS.pdf|Cache]]
 
 
 
</translate>

[[Category: Networking]]
[[Category: HOWTO]]
