https://wiki.openvz.org/api.php?action=feedcontributions&user=195.214.233.10&feedformat=atomOpenVZ Virtuozzo Containers Wiki - User contributions [en]2024-03-29T01:46:46ZUser contributionsMediaWiki 1.31.1https://wiki.openvz.org/index.php?title=Porting_the_kernel&diff=1769Porting the kernel2006-06-20T16:10:29Z<p>195.214.233.10: added to kernel and development cats.</p>
<hr />
<div>The OpenVZ kernel supports the x86, x86_64, and IA64 architectures as of now. Below is some quick and dirty information on how to port the kernel to yet another architecture.<br />
<br />
* UBC: any platform-specific VMAs created by hand in arch-specific code need to be accounted, i.e. if there are calls to insert_vma_struct(), they should be accounted with ub_memory_charge(). No such calls were found on sparc64.<br />
<br />
* If there are user-triggerable printk()'s (related to the user, not to the system as a whole), it is better to replace them with ve_printk(); otherwise a user can flood the kernel log (DoS). This is a minor issue.<br />
<br />
* Calls to the functions find_task_by_pid(), for_each_process() and do_each_thread()/while_each_thread() should be replaced with their counterparts find_task_by_pid_XXX(), for_each_process_XXX() and do_each_thread_XXX()/while_each_thread_XXX(), where XXX is 'all' or 've'. 'all' means that all processes in the system will be scanned, while 've' means that only the VE (VPS) accessible from this task (the current context, get_exec_env()) will be visible. So you need to decide whether the code in question deals with the system or with the user context.<br />
<br />
* task->pid should be replaced with virt_pid(task) in some places. The rule is simple: the user should see only virtual pids, while the kernel operates on global pids. E.g. in signal delivery, the virtual pid should be passed to the application.<br />
<br />
* In interrupt handlers one needs to set the global host (VE0) context, i.e. call set_exec_env() and set_exec_ub(); in other words, interrupt handlers run in the VE0 context.<br />
<br />
* In kernel_thread() one needs to prohibit the creation of kernel threads inside a VE. This is mostly security related.<br />
<br />
* show_registers() is better extended to also show the current VE.<br />
<br />
* utsname should be virtualized. This mostly means that 'system_utsnames' should be replaced with 've_utsname'. See any arch code for an example.<br />
<br />
* Some exports will be required, e.g. show_mem() and probably cpu_khz. This is easy.<br />
<br />
* Everything else is bugfixes.<br />
<br />
All these changes are straightforward and really simple, so the port should take only a few hours.<br />
<br />
== External links ==<br />
* [http://forum.openvz.org/index.php?t=msg&goto=3338&&srch=sparc#msg_num_5 Original forum post]<br />
<br />
[[Category: Kernel]]<br />
[[Category: Development]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Traffic_shaping_with_tc&diff=1712Traffic shaping with tc2006-06-16T07:48:35Z<p>195.214.233.10: Minor English fixes in /* Limiting incoming bandwidth */</p>
<hr />
<div>Sometimes it is necessary to limit the traffic bandwidth from and to a VPS.<br />
You can do it using the ordinary <tt>tc</tt> tool.<br />
<br />
== Packet routes ==<br />
First of all, a few words about how packets travel from and to a [[VE]].<br />
Suppose we have a [[Hardware Node]] (HN) with a VE on it, and this VE talks<br />
to some Remote Host (RH). The HN has one "real" network interface <tt>eth0</tt> and, <br />
thanks to OpenVZ, there is also a "virtual" network interface <tt>venet0</tt>.<br />
Inside the VPS we have the interface <tt>venet0:0</tt>.<br />
<br />
<pre><br />
venet0:0 venet0 eth0<br />
VE >------------->-------------> HN >--------->--------> RH<br />
<br />
venet0:0 venet0 eth0<br />
VE <-------------<-------------< HN <---------<--------< RH<br />
</pre><br />
<br />
== Limiting outgoing bandwidth ==<br />
We can limit VE outgoing bandwidth by setting the <tt>tc</tt> filter on <tt>eth0</tt>.<br />
<pre><br />
DEV=eth0<br />
tc qdisc del dev $DEV root<br />
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit<br />
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated<br />
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip src X.X.X.X flowid 1:1<br />
tc qdisc add dev $DEV parent 1:1 sfq perturb 10<br />
</pre><br />
Here X.X.X.X is the IP address of the VE.<br />
<br />
== Limiting incoming bandwidth ==<br />
This can be done by setting the <tt>tc</tt> filter on <tt>venet0</tt>:<br />
<pre><br />
DEV=venet0<br />
tc qdisc del dev $DEV root<br />
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit<br />
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated<br />
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dst X.X.X.X flowid 1:1<br />
tc qdisc add dev $DEV parent 1:1 sfq perturb 10<br />
</pre><br />
Note that X.X.X.X is the IP address of the VE.<br />
<br />
== Limiting VE to HN talks ==<br />
As you can see, the two filters above do not limit VE to HN traffic:<br />
a VE can emit as much traffic towards the HN as it wishes. To impose such a limit from the HN,<br />
it is necessary to use a <tt>tc</tt> policer on <tt>venet0</tt>:<br />
<pre><br />
DEV=venet0<br />
tc filter add dev $DEV parent 1: protocol ip prio 20 u32 match u32 1 0x0000 police rate 2kbit buffer 10k drop flowid :1<br />
</pre><br />
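Once these commands have been executed, the resulting qdiscs, classes and filters can be inspected with <tt>tc</tt> itself (run as root; shown here for <tt>eth0</tt>, use <tt>venet0</tt> for the incoming-direction setup):<br />

```shell
DEV=eth0
# Show qdiscs with byte/packet counters
tc -s qdisc show dev $DEV
# Show per-class statistics (configured rate, drops, overlimits)
tc -s class show dev $DEV
# List the u32 filters attached to handle 1:
tc filter show dev $DEV parent 1:
```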
<br />
== External links ==<br />
* [http://lartc.org/howto/ Linux Advanced Routing & Traffic Control HOWTO]<br />
<br />
[[Category: HOWTO]]<br />
[[Category: Networking]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1608Virtual Ethernet device2006-06-13T06:22:27Z<p>195.214.233.10: /* Configure bridge device */ ifconfig up -> ifconfig 0</p>
<hr />
<div>A virtual ethernet device is an ethernet device which can be used inside a [[VE]]. Unlike<br />
the venet network device, a veth device has a MAC address. Due to this, it can be used in configurations where veth is bridged to <br />
ethX or another device and the VPS user fully sets up his networking himself, <br />
including IPs, gateways etc.<br />
<br />
A virtual ethernet device consists of two ethernet devices, one in [[VE0]] and another one <br />
in the VE. These devices are connected to each other, so if a packet goes into one<br />
device it comes out of the other device.<br />
<br />
== Differences between venet and veth ==<br />
* veth allows broadcasts in a VE, so you can even run a DHCP server, a Samba server with domain broadcasts, or other such software inside the VE.<br />
* veth has some security implications, so it is not recommended in untrusted environments like HSPs (hosting service providers). This is due to broadcasts, traffic sniffing, possible IP collisions etc.; i.e. a VE user can actually ruin your ethernet network with such direct access to the ethernet layer.<br />
* With a venet device, only the node administrator can assign an IP to a VE. With a veth device, network settings can be done fully on the VE side. The VE should set up the correct gateway, IP/mask etc., and the node admin then only chooses where the traffic goes.<br />
* veth devices can be bridged together and/or with other devices. For example, the host system admin can bridge the veth devices from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.<br />
* A venet device is a bit faster and more efficient.<br />
* With veth devices, IPv6 auto-generates an address from the MAC.<br />
<br />
The brief summary:<br />
{| class="wikitable" style="text-align: center;"<br />
|+ '''Differences between veth and venet'''<br />
! Feature !! veth !! venet<br />
|-<br />
! MAC address<br />
| {{yes}} || {{no}}<br />
|-<br />
! Broadcasts inside VE<br />
| {{yes}} || {{no}}<br />
|-<br />
! Traffic sniffing<br />
| {{yes}} || {{no}}<br />
|-<br />
! Network security<br />
| low <ref>Due to broadcasts, sniffing, possible IP collisions etc.</ref> || high<br />
|- <br />
! Can be used in bridges<br />
| {{yes}} || {{no}}<br />
|-<br />
! Performance<br />
| fast || fastest<br />
|-<br />
|}<br />
<references/><br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here:<br />
* <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is the ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command, a <tt>veth</tt> device will be created for VE 101, and the veth configuration will be saved to the VE configuration file.<br />
The host-side ethernet device will have the name <tt>veth101.0</tt> and the MAC address <tt>00:12:34:56:78:9A</tt>.<br />
The VE-side ethernet device will have the name <tt>eth0</tt> and the MAC address <tt>00:12:34:56:78:9B</tt>.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, because this can lead to collisions.}}<br />
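One quick way to obtain such an address is to take five random bytes and prefix them with <tt>02</tt>, which marks the MAC as locally administered unicast (this is only a sketch; any generator that produces a locally administered address will do):<br />

```shell
# Print a random, locally administered unicast MAC (02:XX:XX:XX:XX:XX)
od -An -N5 -tx1 /dev/urandom | \
    awk '{ printf "02:%s:%s:%s:%s:%s\n", toupper($1), toupper($2), toupper($3), toupper($4), toupper($5) }'
```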
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command, the veth device whose host-side name is <tt>veth101.0</tt> will be removed from VE 101, and the veth configuration will be updated in the VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit the radvd configuration file. Here is a simple example of <tt>/etc/radvd.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1&ndash;4 from the "Simple configuration" chapter for several VEs and/or veth devices.<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
Thus you will have a more convenient configuration: all routes to the VEs go through this bridge, and the VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be set up by adding a VLAN device to the previous configuration.<br />
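For example, assuming the 8021q module is loaded, a VLAN with tag 10 on <tt>eth0</tt> could be created and joined to the bridge roughly as follows (the VLAN tag and the <tt>vzbr0</tt> bridge name are illustrative, matching the examples above):<br />

```shell
# Create VLAN device eth0.10 for tag 10 on eth0 (requires the 8021q module)
vconfig add eth0 10
# Bring it up without an IP address, as with the veth devices above
ifconfig eth0.10 0
# Add it to the bridge that holds the veth devices
brctl addif vzbr0 eth0.10
```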
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1607Virtual Ethernet device2006-06-13T06:20:32Z<p>195.214.233.10: /* Configure device in VE */ ifconfig up -> ifconfig 0</p>
<hr />
<div>Virtual ethernet device is ethernet device which can be used inside a [[VE]]. Unlike<br />
venet network device, veth device has a MAC address. Due to this, it can be used in configurations, when veth is bridged to <br />
ethX or other device and VPS user fully setups his networking himself, <br />
including IPs, gateways etc.<br />
<br />
Virtual ethernet device consist of two ethernet devices - one in [[VE0]] and another one <br />
in VE. These devices are connected to each other, so if a packet goes to one<br />
device it will come out from the other device.<br />
<br />
== Differences between venet and veth ==<br />
* veth allows broadcasts in VE, so you can use even dhcp server inside VE or samba server with domain broadcasts or other such stuff.<br />
* veth has some security implications, so is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions etc. i.e. VE user can actually ruin your ethernet network with such direct access to ethernet layer.<br />
* With venet device, only node administrator can assign an IP to a VE. With veth device, network settings can be fully done on VE side. VE should setup correct GW, IP/mask etc and node admin then can only choose where your traffic goes.<br />
* veth devices can be bridged together and/or with other devices. For example, in host system admin can bridge veth from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.<br />
* venet device is a bit faster and more efficient.<br />
* With veth devices IPv6 auto generates an address from MAC.<br />
<br />
The brief summary:<br />
{| class="wikitable" style="text-align: center;"<br />
|+ '''Differences between veth and venet'''<br />
! Feature !! veth !! venet<br />
|-<br />
! MAC address<br />
| {{yes}} || {{no}}<br />
|-<br />
! Broadcasts inside VE<br />
| {{yes}} || {{no}}<br />
|-<br />
! Traffic sniffing<br />
| {{yes}} || {{no}}<br />
|-<br />
! Network security<br />
| low <ref>Due to broadcasts, sniffing and possible IP collisions etc.</ref> || hi<br />
|- <br />
! Can be used in bridges<br />
| {{yes}} || {{no}}<br />
|-<br />
! Performance<br />
| fast || fastest<br />
|-<br />
|}<br />
<references/><br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command <tt>veth</tt> device will be created for VE 101 and veth configuration will be saved to a VE configuration file.<br />
Host-side ethernet device will have <tt>veth101.0</tt> name and <tt>00:12:34:56:78:9A</tt> MAC address.<br />
VE-side ethernet device will have <tt>eth0</tt> name and <tt>00:12:34:56:78:9B</tt> MAC address.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, beacuse this can lead to collisions.}}<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command veth device with host-side ethernet name veth101.0 will be removed from VE 101 and veth configuration will be updated in VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit radvd configuration file. Here is a simple example of <tt>/etc/radv.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1 - 4 from Simple configuration chapter for several VEs and/or veth devices<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
Thus you'll have more convinient configuration, i.e. all routes to VEs will be through this bridge and VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding vlan device to the previous configuration.<br />
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1606Virtual Ethernet device2006-06-13T06:20:04Z<p>195.214.233.10: /* Configure devices in VE0 */ ifconfig up -> ifconfig 0</p>
<hr />
<div>Virtual ethernet device is ethernet device which can be used inside a [[VE]]. Unlike<br />
venet network device, veth device has a MAC address. Due to this, it can be used in configurations, when veth is bridged to <br />
ethX or other device and VPS user fully setups his networking himself, <br />
including IPs, gateways etc.<br />
<br />
A virtual ethernet device consists of two ethernet devices, one in [[VE0]] and another one<br />
in the VE. These devices are connected to each other, so a packet sent to one<br />
device comes out of the other.<br />
<br />
== Differences between venet and veth ==<br />
* veth allows broadcasts inside the VE, so you can even run a DHCP server, a Samba server with domain broadcasts, or similar software inside the VE.<br />
* veth has security implications, so it is not recommended in untrusted environments such as hosting service providers (HSPs). Because of broadcasts, traffic sniffing, possible IP collisions, etc., a VE user with such direct access to the ethernet layer can actually disrupt your ethernet network.<br />
* With a venet device, only the node administrator can assign an IP to a VE. With a veth device, network settings can be done entirely on the VE side: the VE sets up the correct gateway, IP/netmask, etc., and the node admin only chooses where the traffic goes.<br />
* veth devices can be bridged together and/or with other devices. For example, the host administrator can bridge veth devices from two VEs with some VLAN eth0.X; these two VEs will then be connected to that VLAN.<br />
* A venet device is a bit faster and more efficient.<br />
* With veth devices, IPv6 autoconfiguration generates an address from the MAC.<br />
<br />
The brief summary:<br />
{| class="wikitable" style="text-align: center;"<br />
|+ '''Differences between veth and venet'''<br />
! Feature !! veth !! venet<br />
|-<br />
! MAC address<br />
| {{yes}} || {{no}}<br />
|-<br />
! Broadcasts inside VE<br />
| {{yes}} || {{no}}<br />
|-<br />
! Traffic sniffing<br />
| {{yes}} || {{no}}<br />
|-<br />
! Network security<br />
| low <ref>Due to broadcasts, sniffing, possible IP collisions, etc.</ref> || high<br />
|- <br />
! Can be used in bridges<br />
| {{yes}} || {{no}}<br />
|-<br />
! Performance<br />
| fast || fastest<br />
|-<br />
|}<br />
<references/><br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is the ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command, a <tt>veth</tt> device is created for VE 101 and the veth configuration is saved to the VE configuration file.<br />
The host-side ethernet device will have the name <tt>veth101.0</tt> and the MAC address <tt>00:12:34:56:78:9A</tt>.<br />
The VE-side ethernet device will have the name <tt>eth0</tt> and the MAC address <tt>00:12:34:56:78:9B</tt>.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, because this can lead to collisions.}}<br />
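A quick way to follow this advice is to generate the addresses instead of making them up. The sketch below is an illustration, not part of vzctl; it builds a random locally administered unicast MAC, which by definition cannot collide with a vendor-assigned NIC address:<br />

```shell
# Sketch: generate a random MAC for --veth_add. The leading "02"
# octet marks the address as locally administered unicast, so it
# cannot clash with the burned-in address of a real NIC.
# $RANDOM is a bash feature; plain sh degrades it to zeros.
gen_mac() {
    printf '02:%02X:%02X:%02X:%02X:%02X\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
        $((RANDOM % 256)) $((RANDOM % 256))
}
mac=$(gen_mac)
echo "$mac"
```

It could then be used as, for example, <tt>vzctl set 101 --veth_add veth101.0,$(gen_mac),eth0,$(gen_mac) --save</tt>.<br />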
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command, the veth device with the host-side name <tt>veth101.0</tt> is removed from VE 101 and the veth configuration is updated in the VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
The <tt>vzethdev</tt> module must be loaded before you can work with veth devices.<br />
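A minimal sketch of how to check this, assuming a Linux host (on a non-OpenVZ kernel the module simply will not be available):<br />

```shell
# Sketch: check whether vzethdev is loaded before configuring veth
# devices; /proc/modules lists the currently loaded kernel modules.
state=$(grep -q '^vzethdev' /proc/modules 2>/dev/null && echo loaded || echo missing)
echo "vzethdev is $state"
# If missing, load it (requires the OpenVZ kernel):
#   modprobe vzethdev
```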
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit the radvd configuration file. Here is a simple example of <tt>/etc/radvd.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
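The host-side address above is not arbitrary: it is the EUI-64 interface identifier derived from the veth MAC <tt>00:12:34:56:78:9A</tt>, appended to the <tt>3ffe:2400::/64</tt> prefix. A sketch of the derivation:<br />

```shell
# Sketch: derive the IPv6 interface identifier from the veth MAC
# used above. EUI-64 flips the universal/local bit in the first
# octet and inserts ff:fe between the two halves of the MAC.
mac=00:12:34:56:78:9a
set -- $(echo "$mac" | tr ':' ' ')           # $1..$6 = MAC octets
first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))    # flip the U/L bit
iid=$(printf '%s%s:%sff:fe%s:%s%s' "$first" "$2" "$3" "$4" "$5" "$6")
echo "3ffe:2400::${iid#0}"                   # leading zero dropped
```

For this MAC the result is exactly the <tt>3ffe:2400::212:34ff:fe56:789a</tt> address assigned above.<br />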
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1 to 4 from the simple configuration section above for several VEs and/or veth devices.<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
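With many VEs these commands become repetitive, so a small loop can generate them. The VE IDs and interface names below are examples following the <tt>vethVEID.N</tt> naming convention used above:<br />

```shell
# Sketch: build the brctl commands for several veth interfaces.
# "echo" keeps this a dry run; pipe the output to sh to apply it.
cmds=$(for dev in veth101.0 veth101.1 veth102.0; do
    echo "brctl addif vzbr0 $dev"
done)
echo "$cmds"
```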
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
This gives a more convenient configuration: all routes to the VEs go through this bridge, and the VEs can communicate with each other even without those routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be set up by adding a VLAN device to the previous bridge configuration.<br />
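As a sketch of what that means (the VLAN ID 10 is an example, not taken from the text): create a VLAN device on the host and add it to the same bridge as the veth devices, so the VEs become members of that VLAN:<br />

```shell
# Sketch, assuming VLAN ID 10 on eth0 and the vzbr0 bridge from the
# previous section; requires the 802.1q module and vconfig.
modprobe 8021q
vconfig add eth0 10           # creates the VLAN device eth0.10
ifconfig eth0.10 up
brctl addif vzbr0 eth0.10     # VEs on vzbr0 now see VLAN 10 traffic
```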
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1605Virtual Ethernet device2006-06-13T06:19:31Z<p>195.214.233.10: /* Configure device in VE */ ifconfig up -> ifconfig 0</p>
<hr />
<div>Virtual ethernet device is ethernet device which can be used inside a [[VE]]. Unlike<br />
venet network device, veth device has a MAC address. Due to this, it can be used in configurations, when veth is bridged to <br />
ethX or other device and VPS user fully setups his networking himself, <br />
including IPs, gateways etc.<br />
<br />
Virtual ethernet device consist of two ethernet devices - one in [[VE0]] and another one <br />
in VE. These devices are connected to each other, so if a packet goes to one<br />
device it will come out from the other device.<br />
<br />
== Differences between venet and veth ==<br />
* veth allows broadcasts in VE, so you can use even dhcp server inside VE or samba server with domain broadcasts or other such stuff.<br />
* veth has some security implications, so is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions etc. i.e. VE user can actually ruin your ethernet network with such direct access to ethernet layer.<br />
* With venet device, only node administrator can assign an IP to a VE. With veth device, network settings can be fully done on VE side. VE should setup correct GW, IP/mask etc and node admin then can only choose where your traffic goes.<br />
* veth devices can be bridged together and/or with other devices. For example, in host system admin can bridge veth from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.<br />
* venet device is a bit faster and more efficient.<br />
* With veth devices IPv6 auto generates an address from MAC.<br />
<br />
The brief summary:<br />
{| class="wikitable" style="text-align: center;"<br />
|+ '''Differences between veth and venet'''<br />
! Feature !! veth !! venet<br />
|-<br />
! MAC address<br />
| {{yes}} || {{no}}<br />
|-<br />
! Broadcasts inside VE<br />
| {{yes}} || {{no}}<br />
|-<br />
! Traffic sniffing<br />
| {{yes}} || {{no}}<br />
|-<br />
! Network security<br />
| low <ref>Due to broadcasts, sniffing and possible IP collisions etc.</ref> || hi<br />
|- <br />
! Can be used in bridges<br />
| {{yes}} || {{no}}<br />
|-<br />
! Performance<br />
| fast || fastest<br />
|-<br />
|}<br />
<references/><br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command <tt>veth</tt> device will be created for VE 101 and veth configuration will be saved to a VE configuration file.<br />
Host-side ethernet device will have <tt>veth101.0</tt> name and <tt>00:12:34:56:78:9A</tt> MAC address.<br />
VE-side ethernet device will have <tt>eth0</tt> name and <tt>00:12:34:56:78:9B</tt> MAC address.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, beacuse this can lead to collisions.}}<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command veth device with host-side ethernet name veth101.0 will be removed from VE 101 and veth configuration will be updated in VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 0<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit radvd configuration file. Here is a simple example of <tt>/etc/radv.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1 - 4 from Simple configuration chapter for several VEs and/or veth devices<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
Thus you'll have more convinient configuration, i.e. all routes to VEs will be through this bridge and VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding vlan device to the previous configuration.<br />
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1604Virtual Ethernet device2006-06-13T06:18:56Z<p>195.214.233.10: /* Configure devices in VE0 */ ifconfig up -> ifconfig 0</p>
<hr />
<div>Virtual ethernet device is ethernet device which can be used inside a [[VE]]. Unlike<br />
venet network device, veth device has a MAC address. Due to this, it can be used in configurations, when veth is bridged to <br />
ethX or other device and VPS user fully setups his networking himself, <br />
including IPs, gateways etc.<br />
<br />
Virtual ethernet device consist of two ethernet devices - one in [[VE0]] and another one <br />
in VE. These devices are connected to each other, so if a packet goes to one<br />
device it will come out from the other device.<br />
<br />
== Differences between venet and veth ==<br />
* veth allows broadcasts in VE, so you can use even dhcp server inside VE or samba server with domain broadcasts or other such stuff.<br />
* veth has some security implications, so is not recommended in untrusted environments like HSP. This is due to broadcasts, traffic sniffing, possible IP collisions etc. i.e. VE user can actually ruin your ethernet network with such direct access to ethernet layer.<br />
* With venet device, only node administrator can assign an IP to a VE. With veth device, network settings can be fully done on VE side. VE should setup correct GW, IP/mask etc and node admin then can only choose where your traffic goes.<br />
* veth devices can be bridged together and/or with other devices. For example, in host system admin can bridge veth from 2 VEs with some VLAN eth0.X. In this case, these 2 VEs will be connected to this VLAN.<br />
* venet device is a bit faster and more efficient.<br />
* With veth devices IPv6 auto generates an address from MAC.<br />
<br />
The brief summary:<br />
{| class="wikitable" style="text-align: center;"<br />
|+ '''Differences between veth and venet'''<br />
! Feature !! veth !! venet<br />
|-<br />
! MAC address<br />
| {{yes}} || {{no}}<br />
|-<br />
! Broadcasts inside VE<br />
| {{yes}} || {{no}}<br />
|-<br />
! Traffic sniffing<br />
| {{yes}} || {{no}}<br />
|-<br />
! Network security<br />
| low <ref>Due to broadcasts, sniffing and possible IP collisions etc.</ref> || hi<br />
|- <br />
! Can be used in bridges<br />
| {{yes}} || {{no}}<br />
|-<br />
! Performance<br />
| fast || fastest<br />
|-<br />
|}<br />
<references/><br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command <tt>veth</tt> device will be created for VE 101 and veth configuration will be saved to a VE configuration file.<br />
Host-side ethernet device will have <tt>veth101.0</tt> name and <tt>00:12:34:56:78:9A</tt> MAC address.<br />
VE-side ethernet device will have <tt>eth0</tt> name and <tt>00:12:34:56:78:9B</tt> MAC address.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, beacuse this can lead to collisions.}}<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command veth device with host-side ethernet name veth101.0 will be removed from VE 101 and veth configuration will be updated in VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 0<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit radvd configuration file. Here is a simple example of <tt>/etc/radv.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1 - 4 from Simple configuration chapter for several VEs and/or veth devices<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
Thus you'll have more convinient configuration, i.e. all routes to VEs will be through this bridge and VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding vlan device to the previous configuration.<br />
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1542Virtual Ethernet device2006-06-08T13:50:29Z<p>195.214.233.10: /* Virtual ethernet devices can be joined in one bridge */</p>
<hr />
<div>Virtual ethernet device is ethernet device which can be used inside a [[VE]]. Unlike<br />
venet network device, veth device has a MAC address.<br />
<br />
Virtual ethernet device consist of two ethernet devices - one in [[VE0]] and another one <br />
in VE. These devices are connected to each other, so if a packet goes to one<br />
device it will come out from the other device.<br />
<br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command <tt>veth</tt> device will be created for VE 101 and veth configuration will be saved to a VE configuration file.<br />
Host-side ethernet device will have <tt>veth101.0</tt> name and <tt>00:12:34:56:78:9A</tt> MAC address.<br />
VE-side ethernet device will have <tt>eth0</tt> name and <tt>00:12:34:56:78:9B</tt> MAC address.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, beacuse this can lead to collisions.}}<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command veth device with host-side ethernet name veth101.0 will be removed from VE 101 and veth configuration will be updated in VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
<br />
==== Start a VE ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to VE ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in VE0 ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
<br />
==== Configure device in VE ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
[ve-101]# /sbin/ip addr add 192.168.0.101 dev eth0<br />
[ve-101]# /sbin/ip route add default dev eth0<br />
</pre><br />
<br />
==== Add route in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device with IPv6 ===<br />
<br />
==== Start [[VE]] ====<br />
<pre><br />
[host-node]# vzctl start 101<br />
</pre><br />
<br />
==== Add veth device to [[VE]] ====<br />
<pre><br />
[host-node]# vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
<br />
==== Configure devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ifconfig veth101.0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
<br />
==== Configure device in [[VE]] ====<br />
<pre><br />
[host-node]# vzctl enter 101<br />
[ve-101]# /sbin/ifconfig eth0 up<br />
</pre><br />
<br />
==== Start router advertisement daemon (radvd) for IPv6 in VE0 ====<br />
First you need to edit radvd configuration file. Here is a simple example of <tt>/etc/radv.conf</tt>:<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
<br />
Then, start radvd:<br />
<pre><br />
[host-node]# /etc/init.d/radvd start<br />
</pre><br />
<br />
==== Add IPv6 addresses to devices in [[VE0]] ====<br />
<pre><br />
[host-node]# ip addr add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node]# ip addr add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
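The address added to <tt>veth101.0</tt> above is not arbitrary: it is the advertised prefix <tt>3ffe:2400::/64</tt> combined with the EUI-64 interface identifier derived from the veth MAC address <tt>00:12:34:56:78:9A</tt>. A small sketch of that derivation (the helper name is ours, not part of any OpenVZ tool):

```shell
# Derive the EUI-64 interface identifier from a MAC address:
# flip the universal/local bit (0x02) in the first octet and
# insert ff:fe between the two halves of the MAC.
mac_to_eui64() {
  set -- $(echo "$1" | tr ':' ' ' | tr 'A-F' 'a-f')
  first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
  printf '%s%s:%sff:fe%s:%s%s\n' "$first" "$2" "$3" "$4" "$5" "$6"
}

mac_to_eui64 00:12:34:56:78:9A   # prints 0212:34ff:fe56:789a
```

With the leading zero dropped, <tt>0212:34ff:fe56:789a</tt> is exactly the <tt>212:34ff:fe56:789a</tt> part of the address above.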
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
Perform steps 1-4 from the Simple configuration chapter above for several VEs and/or veth devices.<br />
<br />
==== Create bridge device ====<br />
<pre><br />
[host-node]# brctl addbr vzbr0<br />
</pre><br />
<br />
==== Add veth devices to bridge ====<br />
<pre><br />
[host-node]# brctl addif vzbr0 veth101.0<br />
...<br />
[host-node]# brctl addif vzbr0 veth101.n<br />
[host-node]# brctl addif vzbr0 veth102.0<br />
...<br />
...<br />
[host-node]# brctl addif vzbr0 vethXXX.N<br />
</pre><br />
<br />
==== Configure bridge device ====<br />
<pre><br />
[host-node]# ifconfig vzbr0 up<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/forwarding<br />
[host-node]# echo 1 > /proc/sys/net/ipv4/conf/vzbr0/proxy_arp<br />
</pre><br />
<br />
==== Add routes in [[VE0]] ====<br />
<pre><br />
[host-node]# ip route add 192.168.101.1 dev vzbr0<br />
...<br />
[host-node]# ip route add 192.168.101.n dev vzbr0<br />
[host-node]# ip route add 192.168.102.1 dev vzbr0<br />
...<br />
...<br />
[host-node]# ip route add 192.168.XXX.N dev vzbr0<br />
</pre><br />
<br />
This gives a more convenient configuration: all routes to the VEs go through the bridge, and the VEs can communicate with each other even without these routes.<br />
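The repetitive <tt>brctl addif</tt> and <tt>ip route</tt> commands above lend themselves to a small loop. A dry-run sketch that only prints the commands (the VE IDs 101-103 and the 192.168.VEID.1 addressing are assumptions; adjust them before piping the output to a shell):

```shell
# Print (do not execute) the bridge and route commands for several VEs.
# Assumes one veth per VE, named vethVEID.0, with the IP 192.168.VEID.1.
for veid in 101 102 103; do
  echo "brctl addif vzbr0 veth${veid}.0"
  echo "ip route add 192.168.${veid}.1 dev vzbr0"
done
```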
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding a VLAN device to the previous configuration.<br />
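As an illustration only, the sketch below prints one possible command sequence for attaching a VLAN device to the bridge from the previous section; the parent device <tt>eth0</tt>, VLAN id <tt>10</tt> and bridge name <tt>vzbr0</tt> are assumed values, not taken from this article:

```shell
# Print the commands that would create VLAN 10 on eth0 and add it to vzbr0.
vlan_cmds() {
  parent=$1; vid=$2; bridge=$3
  echo "vconfig add ${parent} ${vid}"
  echo "ifconfig ${parent}.${vid} up"
  echo "brctl addif ${bridge} ${parent}.${vid}"
}

vlan_cmds eth0 10 vzbr0
```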
<br />
== External links ==<br />
* [http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO/hints-daemons-radvd.html Linux IPv6 HOWTO, a chapter about radvd]<br />
<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1535Virtual Ethernet device2006-06-08T12:57:02Z<p>195.214.233.10: /* Common configurations with virtual ethernet devices */</p>
<hr />
<div>A virtual ethernet device is an ethernet device that can be used inside a [[VE]]. Unlike the<br />
venet network device, a veth device has a MAC address.<br />
<br />
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and another one<br />
in the VE. These devices are connected to each other, so a packet sent to one<br />
device comes out of the other device.<br />
<br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command, a <tt>veth</tt> device will be created for VE 101 and the veth configuration will be saved to the VE configuration file.<br />
Host-side ethernet device will have <tt>veth101.0</tt> name and <tt>00:12:34:56:78:9A</tt> MAC address.<br />
VE-side ethernet device will have <tt>eth0</tt> name and <tt>00:12:34:56:78:9B</tt> MAC address.<br />
{{Note|Use random MAC addresses. Do not use MAC addresses of real eth devices, because this can lead to collisions.}}<br />
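One way to follow the note above is to generate a random locally administered MAC address. A sketch (the helper name is ours; any generator that sets the locally-administered bit and clears the multicast bit of the first octet will do):

```shell
# Generate a random unicast, locally administered MAC address.
# Setting bit 0x02 and clearing bit 0x01 of the first octet keeps the
# address out of the vendor-assigned (globally unique) space.
random_mac() {
  set -- $(od -An -N6 -tu1 /dev/urandom)
  printf '%02X:%02X:%02X:%02X:%02X:%02X\n' \
    $(( ($1 & 0xfc) | 0x02 )) "$2" "$3" "$4" "$5" "$6"
}

random_mac
```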
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
==== Example ====<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command, the veth device with host-side ethernet name <tt>veth101.0</tt> will be removed from VE 101 and the veth configuration will be updated in the VE config file.<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
Module <tt>vzethdev</tt> must be loaded to operate with veth devices.<br />
<br />
=== Simple configuration with virtual ethernet device ===<br />
1. Start VE<br />
<pre><br />
[host-node] vzctl start 101<br />
</pre><br />
2. Add veth device to VE<br />
<pre><br />
[host-node] vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
3. Configure devices in VE0<br />
<pre><br />
[host-node] ifconfig veth101.0 up<br />
[host-node] echo 1 > /proc/sys/net/ipv4/conf/veth101.0/forwarding<br />
[host-node] echo 1 > /proc/sys/net/ipv4/conf/veth101.0/proxy_arp<br />
[host-node] echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding<br />
[host-node] echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp<br />
</pre><br />
4. Configure device in VE<br />
<pre><br />
[host-node] vzctl enter 101<br />
[ve-101] ifconfig eth0 up<br />
[ve-101] ip addr add 192.168.0.101 dev eth0<br />
[ve-101] ip ro add default dev eth0<br />
</pre><br />
5. Add route in VE0<br />
<pre><br />
[host-node] ip ro add 192.168.0.101 dev veth101.0<br />
</pre><br />
<br />
=== Virtual ethernet device can be used with IPv6 ===<br />
1. Start VE<br />
<pre><br />
[host-node] vzctl start 101<br />
</pre><br />
2. Add veth device to VE<br />
<pre><br />
[host-node] vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
3. Configure devices in VE0<br />
<pre><br />
[host-node] ifconfig veth101.0 up<br />
[host-node] echo 1 > /proc/sys/net/ipv6/conf/veth101.0/forwarding<br />
[host-node] echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding<br />
[host-node] echo 1 > /proc/sys/net/ipv6/conf/all/forwarding<br />
</pre><br />
4. Configure device in VE<br />
<pre><br />
[host-node] vzctl enter 101<br />
[ve-101] ifconfig eth0 up<br />
</pre><br />
5. Start router advertisement daemon (radvd) for IPv6 in VE0<br />
Here is simple example of radv.conf<br />
<pre><br />
interface veth101.0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:2400:0:0::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
<br />
interface eth0<br />
{<br />
AdvSendAdvert on;<br />
MinRtrAdvInterval 3;<br />
MaxRtrAdvInterval 10;<br />
AdvHomeAgentFlag off;<br />
<br />
prefix 3ffe:0302:0011:0002::/64<br />
{<br />
AdvOnLink on;<br />
AdvAutonomous on;<br />
AdvRouterAddr off;<br />
};<br />
};<br />
</pre><br />
6. Add IPv6 addresses to devices in VE0<br />
<pre><br />
[host-node] ip a add dev veth101.0 3ffe:2400::212:34ff:fe56:789a/64<br />
[host-node] ip a add dev eth0 3ffe:0302:0011:0002:211:22ff:fe33:4455/64<br />
</pre><br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
This gives a more convenient configuration: all routes to the VEs go through<br />
the bridge, and the VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding a VLAN device to the previous configuration.<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1531Virtual Ethernet device2006-06-08T12:01:37Z<p>195.214.233.10: /* Adding veth to a VE */</p>
<hr />
<div>A virtual ethernet device is an ethernet device that can be used inside a [[VE]]. Unlike the<br />
venet network device, a veth device has a MAC address.<br />
<br />
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and another one<br />
in the VE. These devices are connected to each other, so a packet sent to one<br />
device comes out of the other device.<br />
<br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
==== Examples ====<br />
<pre><br />
vzctl set 101 --veth_add veth101.0,00:12:34:56:78:9A,eth0,00:12:34:56:78:9B --save<br />
</pre><br />
After executing this command, a veth device will be created for VE 101 and the veth configuration will be saved to the VE config file.<br />
The host-side ethernet device will have the <tt>veth101.0</tt> name and the <tt>00:12:34:56:78:9A</tt> MAC address,<br />
and the VE-side ethernet device will have the <tt>eth0</tt> name and the <tt>00:12:34:56:78:9B</tt> MAC address. Please do not use the MAC address of the eth0<br />
device in the host system for veth devices, because this can lead to collisions.<br />
<pre><br />
vzctl set 101 --veth_del veth101.0 --save<br />
</pre><br />
After executing this command, the veth device with host-side ethernet name <tt>veth101.0</tt> will be removed from VE 101 and<br />
the veth configuration will be updated in the VE config file.<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
<br />
=== Virtual ethernet device can be used with IPv6 ===<br />
You'll need to set up an IPv6 address on the ethernet device inside the VE, add a default route inside the VE,<br />
and add a route to this address via the host-side veth in the host system.<br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
This gives a more convenient configuration: all routes to the VEs go through<br />
the bridge, and the VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding a VLAN device to the previous configuration.<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Ethernet_device&diff=1530Virtual Ethernet device2006-06-08T11:52:18Z<p>195.214.233.10: /* Virtual ethernet device can be used with IPv6 */</p>
<hr />
<div>A virtual ethernet device is an ethernet device that can be used inside a [[VE]]. Unlike the<br />
venet network device, a veth device has a MAC address.<br />
<br />
A virtual ethernet device consists of two ethernet devices: one in [[VE0]] and another one<br />
in the VE. These devices are connected to each other, so a packet sent to one<br />
device comes out of the other device.<br />
<br />
<br />
== Virtual ethernet device usage ==<br />
<br />
=== Adding veth to a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_add <dev_name>,<dev_addr>,<ve_dev_name>,<ve_dev_addr><br />
</pre><br />
Here <br />
* <tt>dev_name</tt> is ethernet device name in the [[VE0|host system]]<br />
* <tt>dev_addr</tt> is its MAC address<br />
* <tt>ve_dev_name</tt> is an ethernet device name in the VE<br />
* <tt>ve_dev_addr</tt> is its MAC address<br />
<br />
MAC addresses must be entered in XX:XX:XX:XX:XX:XX format. Note that this option<br />
is incremental, so devices are added to already existing ones.<br />
<br />
=== Removing veth from a VE ===<br />
<pre><br />
vzctl set <VEID> --veth_del <dev_name><br />
</pre><br />
Here <tt>dev_name</tt> is the ethernet device name in the [[VE0|host system]].<br />
<br />
== Common configurations with virtual ethernet devices ==<br />
<br />
=== Virtual ethernet device can be used with IPv6 ===<br />
You'll need to set up an IPv6 address on the ethernet device inside the VE, add a default route inside the VE,<br />
and add a route to this address via the host-side veth in the host system.<br />
<br />
=== Virtual ethernet devices can be joined in one bridge ===<br />
This gives a more convenient configuration: all routes to the VEs go through<br />
the bridge, and the VEs can communicate with each other even without these routes.<br />
<br />
=== Virtual ethernet devices + VLAN ===<br />
This configuration can be done by adding a VLAN device to the previous configuration.<br />
<br />
[[Category: Networking]]<br />
[[Category: HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Main_Page_old&diff=1522Main Page old2006-06-08T08:38:31Z<p>195.214.233.10: added Networking cat.</p>
<hr />
<div>{{OpenVZ links}}<br />
<br />
;[[Internals]]<br />
: Description of how OpenVZ works, what's inside the kernel etc.<br />
;[[:Category: Templates|Templates]]<br />
: Everything about OpenVZ templates<br />
;[[:Category: Kernel|Kernel]]<br />
: Articles concerning OpenVZ kernel<br />
;[[:Category:Troubleshooting|Troubleshooting]]<br />
: What to do if something fails<br />
;[[:Category:Networking|Networking]]<br />
: Networking-related articles<br />
;[[:Category:HOWTO|HOWTOs]]<br />
: How to do something<br />
;[[FAQ]]<br />
: Frequently Asked Questions<br />
;[[:Category:Definitions|Definitions]]<br />
: Short definitions of various terms used in OpenVZ<br />
<br />
== External links ==<br />
* [http://en.wikipedia.org/wiki/OpenVZ Wikipedia:OpenVZ]<br />
* [http://meta.wikimedia.org/wiki/Help:Editing How to edit wiki pages]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Category:Networking&diff=1521Category:Networking2006-06-08T08:38:11Z<p>195.214.233.10: </p>
<hr />
<div>Everything that relates to [[VE]] networking.</div>195.214.233.10https://wiki.openvz.org/index.php?title=VPN_via_the_TUN/TAP_device&diff=1435VPN via the TUN/TAP device2006-06-01T11:29:24Z<p>195.214.233.10: </p>
<hr />
<div>= VPN via the TUN/TAP device inside VE =<br />
<br />
== Kernel TUN/TAP support ==<br />
OpenVZ supports VPN inside a VE via kernel TUN/TAP module and device.<br />
To allow VE #101 to use the TUN/TAP device the following should be done:<br />
<br />
Make sure the '''tun''' module has been already loaded on the hardware node:<br />
<pre><br />
# lsmod | grep tun<br />
</pre><br />
<br />
If it is not there, use the following command to load '''tun''' module:<br />
<pre><br />
# modprobe tun<br />
</pre><br />
<br />
You can also add it into /etc/modules.conf to make sure it will be loaded on every reboot automatically.<br />
<br />
== Granting VE an access to TUN/TAP ==<br />
Allow your VE to use the tun/tap device:<br />
<pre><br />
# vzctl set 101 --devices c:10:200:rw --save<br />
</pre><br />
<br />
And create the character device file inside the VE:<br />
<pre><br />
# vzctl exec 101 mkdir -p /dev/net<br />
# vzctl exec 101 mknod /dev/net/tun c 10 200<br />
# vzctl exec 101 chmod 600 /dev/net/tun<br />
</pre><br />
<br />
== Configuring VPN inside VE ==<br />
After the configuration steps above are done, it is possible to use VPN software working with TUN/TAP inside the<br />
VE just like on a usual standalone Linux box.<br />
<br />
The following software can be used for VPN with TUN/TAP:<br />
* Virtual TUNnel (http://vtun.sourceforge.net)<br />
* OpenVPN (http://openvpn.sourceforge.net)<br />
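For illustration only, here is a minimal static-key point-to-point OpenVPN configuration of the kind that could run inside the VE once <tt>/dev/net/tun</tt> exists; the peer name, tunnel addresses and key path are hypothetical placeholders, not values from this article:

```
# /etc/openvpn/p2p.conf -- minimal point-to-point tunnel (sketch)
dev tun                       # use the TUN device configured above
remote peer.example.com       # hypothetical remote peer
ifconfig 10.8.0.1 10.8.0.2    # local and remote tunnel addresses
secret /etc/openvpn/static.key
```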
<br />
<br />
== External links ==<br />
* [http://vtun.sourceforge.net Virtual TUNnel]<br />
* [http://openvpn.sourceforge.net OpenVPN]</div>195.214.233.10https://wiki.openvz.org/index.php?title=VPN_via_the_TUN/TAP_device&diff=1434VPN via the TUN/TAP device2006-06-01T11:28:27Z<p>195.214.233.10: </p>
<hr />
<div>= VPN via the TUN/TAP device inside VE =<br />
<br />
== Kernel TUN/TAP support ==<br />
OpenVZ supports VPN inside a VE via kernel TUN/TAP module and device.<br />
To allow VE #101 to use the TUN/TAP device the following should be done:<br />
<br />
Make sure the '''tun''' module has been already loaded on the hardware node:<br />
<pre><br />
# lsmod | grep tun<br />
</pre><br />
<br />
If it is not there, use the following command to load '''tun''' module:<br />
<pre><br />
# modprobe tun<br />
</pre><br />
<br />
You can also add it into /etc/modules.conf to make sure it will be loaded on every reboot automatically.<br />
<br />
== Granting VE an access to TUN/TAP ==<br />
Allow your VE to use the tun/tap device:<br />
<pre><br />
# vzctl set 101 --devices c:10:200:rw --save<br />
</pre><br />
<br />
And create the character device file inside the VE:<br />
<pre><br />
# vzctl exec 101 mkdir -p /dev/net<br />
# vzctl exec 101 mknod /dev/net/tun c 10 200<br />
# vzctl exec 101 chmod 600 /dev/net/tun<br />
</pre><br />
<br />
== Configuring VPN inside VE ==<br />
After the configuration steps above are done it is possible to use VPN software working with TUN/TAP inside VE.<br />
<br />
The following software can be used for VPN with TUN/TAP:<br />
* Virtual TUNnel (http://vtun.sourceforge.net)<br />
* OpenVPN (http://openvpn.sourceforge.net)<br />
<br />
<br />
== External links ==<br />
* [http://vtun.sourceforge.net Virtual TUNnel]<br />
* [http://openvpn.sourceforge.net OpenVPN]</div>195.214.233.10https://wiki.openvz.org/index.php?title=VPN_via_the_TUN/TAP_device&diff=1433VPN via the TUN/TAP device2006-06-01T11:24:25Z<p>195.214.233.10: </p>
<hr />
<div>'''VPN via the TUN/TAP device inside VE'''<br />
<br />
<br />
=== Kernel tun support ===<br />
OpenVZ supports VPN inside a VE via kernel TUN/TAP module and device.<br />
To allow VE #101 to use the TUN/TAP device the following steps should be taken:<br />
<br />
Make sure the tun module has been already loaded on the hardware node:<br />
<pre><br />
# lsmod | grep tun<br />
</pre><br />
<br />
If it is not there, use the following command to load '''tun''' module:<br />
<pre><br />
# modprobe tun<br />
</pre><br />
<br />
You can also add it into /etc/modules.conf to make sure it will be loaded on every reboot automatically.<br />
<br />
=== Granting VE an access to TUN/TAP ===<br />
Allow your VE to use the tun/tap device:<br />
<pre><br />
# vzctl set 101 --devices c:10:200:rw --save<br />
</pre><br />
<br />
And create the device in the VE:<br />
<pre><br />
# vzctl exec 101 mkdir -p /dev/net<br />
# vzctl exec 101 mknod /dev/net/tun c 10 200<br />
# vzctl exec 101 chmod 600 /dev/net/tun<br />
</pre><br />
<br />
=== Configure VPN inside VE ===<br />
After the configuration steps above are done, it is possible to use TUN/TAP devices inside the VE and run VPN software working with TUN/TAP.<br />
<br />
The following software can be used for VPN with TUN/TAP:<br />
* Virtual TUNnel (http://vtun.sourceforge.net)<br />
* OpenVPN (http://openvpn.sourceforge.net)<br />
<br />
<br />
=== External links ===<br />
* [http://vtun.sourceforge.net Virtual TUNnel]<br />
* [http://openvpn.sourceforge.net OpenVPN]</div>195.214.233.10https://wiki.openvz.org/index.php?title=FAQ&diff=1432FAQ2006-06-01T11:14:56Z<p>195.214.233.10: </p>
<hr />
<div>[[Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)]]<br />
<br />
[[VPN via the TUN/TAP device]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=FAQ&diff=1431FAQ2006-06-01T11:14:48Z<p>195.214.233.10: </p>
<hr />
<div>[[Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)]]<br />
[[VPN via the TUN/TAP device]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Different_kernel_flavors_(UP,_SMP,_ENTERPRISE,_ENTNOSPLIT)&diff=1430Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)2006-06-01T11:10:16Z<p>195.214.233.10: </p>
<hr />
<div>The version of a kernel you need depends on your server hardware.<br />
<br />
The table below describes the cases when it is better to use each of these kernels:<br />
<br />
{| border="1"<br />
|+ Kernel flavors list<br />
! Kernel type !! Hardware !! Use case<br />
|-<br />
! UP (uniprocessor)<br />
| up to 4GB of RAM || -<br />
|-<br />
! SMP (symmetric multiprocessor)<br />
| up to 4 GB of RAM<br />
| 10-20 VPSs<br />
|-<br />
! entnosplit (SMP + PAE support)<br />
| up to 64 GB of RAM<br />
| 10-30 VPSs<br />
|-<br />
! enterprise (SMP + PAE support + 4/4GB split)<br />
| up to 64 GB of RAM<br />
| >20-30 VPSs<br />
|}<br />
<br />
<br />
These kernels are optimized for these types of hardware configurations and usage scenarios,<br />
so choosing the right kernel can improve performance by 5-15%.<br />
<br />
Use the 'rpm -ihv' command to install the ovzkernel RPM. Please do not use the 'rpm -Uhv' command to install the kernel,<br />
otherwise all the previously installed kernels may be removed from your system.</div>195.214.233.10https://wiki.openvz.org/index.php?title=Different_kernel_flavors_(UP,_SMP,_ENTERPRISE,_ENTNOSPLIT)&diff=1429Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)2006-06-01T11:09:21Z<p>195.214.233.10: </p>
<hr />
<div>The version of a kernel you need depends on your server hardware.<br />
The table below describes the cases when it is better to use each of these kernels:<br />
<br />
{| border="1"<br />
|+ Kernel flavors list<br />
! Kernel type !! Hardware !! Use case<br />
|-<br />
! UP (uniprocessor)<br />
| up to 4GB of RAM || -<br />
|-<br />
! SMP (symmetric multiprocessor)<br />
| up to 4 GB of RAM<br />
| 10-20 VPSs<br />
|-<br />
! entnosplit (SMP + PAE support)<br />
| up to 64 GB of RAM<br />
| 10-30 VPSs<br />
|-<br />
! enterprise (SMP + PAE support + 4/4GB split)<br />
| up to 64 GB of RAM<br />
| >20-30 VPSs<br />
|}<br />
<br />
These kernels are optimized for these types of hardware configurations and usage scenarios,<br />
so choosing the right kernel can improve (or decrease) performance by 5-15%.<br />
<br />
Use the 'rpm -ihv' command to install the ovzkernel RPM. Please do not use the 'rpm -Uhv' command to install the kernel,<br />
otherwise all the previously installed kernels may be removed from your system.</div>195.214.233.10https://wiki.openvz.org/index.php?title=FAQ&diff=1428FAQ2006-06-01T10:55:56Z<p>195.214.233.10: </p>
<hr />
<div>[[Different kernel flavors (UP, SMP, ENTERPRISE, ENTNOSPLIT)]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Kernel_build&diff=1427Kernel build2006-06-01T10:53:10Z<p>195.214.233.10: </p>
<hr />
<div>This FAQ will help you in case you want to apply some patches to the kernel on your own or rebuild it from sources.<br />
On RPM-based distros such as Red Hat Enterprise Linux/CentOS, Fedora Core or SUSE one can simply rebuild the kernel from the SRPM;<br />
for other distros you need to install the sources, then build and install the kernel manually. Details for both cases are given below.<br />
<br />
== Rebuilding kernel from SRPM ==<br />
<br />
=== Download ===<br />
The OpenVZ kernel SRC RPM can be downloaded from the official downloads at http://openvz.org/download/kernel/.<br />
Beta versions of kernels for different OS distributions can also be found at http://openvz.org/download/beta/.<br />
<br />
=== Installation ===<br />
Install the downloaded SRC RPM with the following command:<br />
<pre><br />
# rpm -ihv ovzkernel-2.6.16-026test012.1.src.rpm<br />
</pre><br />
<br />
After successful installation you can find the kernel sources in /usr/src/<distro>/SOURCES/<br />
and kernel spec file (kernel-ovz.spec) in /usr/src/<distro>/SPECS, where <distro> is your distribution-specific directory.<br />
For example, for RedHat based distros it is 'redhat', for SUSE it is 'packages'.<br />
<br />
=== Adding your own patches ===<br />
To modify the kernel, you just need to reference your patch in the kernel spec file and put the patch into the SOURCES directory.<br />
First, copy your patch into the SOURCES directory:<br />
<pre><br />
# cp <patch> /usr/src/<distro>/SOURCES/<br />
</pre><br />
<br />
Then open spec file /usr/src/<distro>/SPECS/kernel-ovz.spec in the editor and add the following lines:<br />
<pre><br />
Patch10000: <patch-name><br />
</pre><br />
and<br />
<pre><br />
%patch10000 -p1<br />
</pre><br />
in the appropriate places, next to the similar existing lines.<br />
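For example, assuming your patch file is named <tt>linux-2.6.16-my-fix.patch</tt> (a placeholder), the two additions could sit next to the existing declarations like this:

```
# In the preamble, after the last existing PatchNNNN: declaration
Patch10000: linux-2.6.16-my-fix.patch

# In the %prep section, after the last existing %patchNNNN line
%patch10000 -p1
```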
<br />
=== Building RPMs ===<br />
Before rebuilding the kernel, make sure that you have adjusted the kernel version in kernel-ovz.spec.<br />
This will help you distinguish the resulting binaries from already existing kernels<br />
(or from official OpenVZ kernels). To do so, edit the /usr/src/<distro>/SPECS/kernel-ovz.spec file and replace the following line:<br />
<pre><br />
%define ksubrelease 1<br />
</pre><br />
with<br />
<pre><br />
%define ksubrelease 1-my.kernel.v1<br />
</pre><br />
<br />
<br />
Then type the following commands to rebuild the kernel:<br />
<pre><br />
# cd /usr/src/<distro>/SPECS<br />
# rpmbuild -ba --target=i686 kernel-ovz.spec<br />
</pre><br />
<br />
After successful kernel compilation, the binary RPMs can be found in /usr/src/<distro>/RPMS/i686.<br />
<br />
== Rebuilding kernel from sources ==<br />
<br />
=== Download ===<br />
To compile the OpenVZ Linux kernel, one needs to download the original Linux kernel sources and the OpenVZ patches for it.<br />
<br />
The Linux kernel can be found at http://www.kernel.org/; e.g. the 2.6.16 kernel can be downloaded from http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2.<br />
<br />
The appropriate OpenVZ patches for this kernel version can be found at http://openvz.org/download/; e.g. at the moment there is a patch [http://download.openvz.org/beta/kernel/026test012.1/patches/patch-026test012-combined.gz patch-026test012-combined.gz] available.<br />
Kernel configs are also available at the OpenVZ download site. Usually the SMP config is used, so let's download [http://download.openvz.org/beta/kernel/026test012.1/configs/kernel-2.6.16-026test012-i686-smp.config.ovz kernel-2.6.16-026test012-i686-smp.config.ovz]<br />
for this example.<br />
<br />
=== Building ===<br />
First, extract the kernel sources from archive:<br />
<pre><br />
# tar vjxf linux-2.6.16.tar.bz2<br />
# cd linux-2.6.16<br />
</pre><br />
<br />
Apply OpenVZ patches to the kernel:<br />
<pre><br />
# gzip -d patch-026test012-combined.gz<br />
# patch -p1 < patch-026test012-combined<br />
</pre><br />
<br />
Now copy the config into place and build the kernel:<br />
<pre><br />
# cp kernel-2.6.16-026test012-i686-smp.config.ovz .config<br />
# make oldconfig<br />
# make<br />
</pre><br />
<br />
=== Installation ===<br />
After a successful build, the kernel can be installed on the machine with the following command, run as the '''root''' user:<br />
<pre><br />
# make install<br />
</pre><br />
<br />
You also need to edit your GRUB or LILO config to make the new kernel available at boot.<br />
<br />
[[Category:HOWTO]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=VPS&diff=1426VPS2006-05-30T16:12:14Z<p>195.214.233.10: </p>
<hr />
<div>#REDIRECT [[VE]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Virtual_Environment&diff=1425Virtual Environment2006-05-30T16:11:48Z<p>195.214.233.10: created</p>
<hr />
<div>Virtual Environment (VE, also known as Virtual Private Server, or VPS) is one of the main concepts of OpenVZ.<br />
<br />
FIXME</div>195.214.233.10https://wiki.openvz.org/index.php?title=VE&diff=1424VE2006-05-30T16:09:54Z<p>195.214.233.10: </p>
<hr />
<div>#REDIRECT [[Virtual Environment]]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Kernel_build&diff=1409Kernel build2006-05-30T08:37:26Z<p>195.214.233.10: </p>
<hr />
<div>This FAQ will help you in case you want to apply some patches to the kernel on your own or rebuild it from sources.<br />
On RPM based distros such as RedHat Enterprise Linux/CentOS, Fedora Core or SUSE one can simply rebuild kernel from SRPM,<br />
for other distros it is required to install sources, build and install kernel manually. The below are given the details for both cases.<br />
<br />
== Rebuilding kernel from SRPM ==<br />
<br />
=== Download ===<br />
OpenVZ kernel SRC RPM can be downloaded from the official downloads at http://openvz.org/download/kernel/.<br />
Beta versions of kernels for different OS distributions can be also found at http://openvz.org/download/beta/.<br />
<br />
=== Installation ===<br />
Install the downloaded SRC RPM with the following command:<br />
<pre><br />
# rpm -ihv ovzkernel-2.6.16-026test012.1.src.rpm<br />
</pre><br />
<br />
After successful installation you can find kernel sources in /usr/src/<distro>/SOURCES/<br />
and kernel spec file (kernel-ovz.spec) in /usr/src/<distro>/SPECS, where <distro> is your distribution-specific directory.<br />
For example, for RedHat based distros it is 'redhat', for SUSE it is 'packages'.<br />
<br />
=== Adding your own patches ===<br />
To modify the kernel one needs just to add specific patches to the kernel spec file and put this patch into SOURCES directory.<br />
Put your patch into SOURCES directory with the following command:<br />
<pre><br />
# cp <patch> /usr/src/<distro>/SOURCES/<br />
</pre><br />
<br />
Then open spec file /usr/src/<distro>/SPECS/kernel-ovz.spec in the editor and add the following lines:<br />
<pre><br />
Patch10000: <patch-name><br />
</pre><br />
and<br />
<pre><br />
%patch10000 -p1<br />
</pre><br />
in appropriate places where similar text lines are.<br />
<br />
=== Building RPMs ===<br />
Before rebuilding the kernel make sure that you adjusted the kernel version in kernel-ovz.spec.<br />
This will help you to distinguish binaries then from already existing kernels<br />
(or from official OpenVZ kernels). To do so, edit /usr/src/<distro>/SPECS/kernel-ovz.spec file and replace the following line:<br />
<pre><br />
%define ksubrelease 1<br />
</pre><br />
with<br />
<pre><br />
%define ksubrelease 1-my.kernel.v1<br />
</pre><br />
<br />
<br />
To rebuild the kernel type the following commands then:<br />
<pre><br />
# cd /usr/src/<distro>/SPECS<br />
# rpmbuild -ba --target=i686 kernel-ovz.spec<br />
</pre><br />
<br />
After successful kernel compilation binary RPMs can be found at /usr/src/<distro>/RPMS/i686<br />
<br />
== Rebuilding kernel from sources ==<br />
<br />
FIXME</div>195.214.233.10https://wiki.openvz.org/index.php?title=Kernel_build&diff=1408Kernel build2006-05-30T08:22:33Z<p>195.214.233.10: </p>
<hr />
<div>This FAQ will help you in case you want to apply some patches to the kernel on your own or rebuild it from sources.<br />
On RPM based distros such as RedHat Enterprise Linux/CentOS, Fedora Core or SUSE one can simply rebuild kernel from SRPM,<br />
for other distros it is required to install sources, build and install kernel manually. The below are given the details for both cases.<br />
<br />
== Rebuilding kernel from SRPM ==<br />
<br />
=== Download ===<br />
OpenVZ kernel SRC RPM can be downloaded from the official downloads at http://openvz.org/download/kernel/.<br />
Beta versions of kernels for different OS distributions can also be found at http://openvz.org/download/beta/.<br />
<br />
=== Installation ===<br />
Install the downloaded SRC RPM with the following command:<br />
<pre><br />
# rpm -ihv ovzkernel-2.6.16-026test012.1.src.rpm<br />
</pre><br />
<br />
After successful installation you can find kernel sources in /usr/src/<distro>/SOURCES/<br />
and kernel spec file (kernel-ovz.spec) in /usr/src/<distro>/SPECS, where <distro> is your distribution-specific directory.<br />
For example, for RedHat based distros it is 'redhat', for SUSE it is 'packages'.<br />
<br />
=== Adding your own patches ===<br />
To modify the kernel one needs just to add specific patches to the kernel spec file and put this patch into SOURCES directory.<br />
Put your patch into SOURCES directory with the following command:<br />
<pre><br />
# cp <patch> /usr/src/<distro>/SOURCES/<br />
</pre><br />
<br />
Then open spec file /usr/src/<distro>/SPECS/kernel-ovz.spec in the editor and add the following lines:<br />
<pre><br />
Patch10000: <patch-name><br />
</pre><br />
and<br />
<pre><br />
%patch10000 -p1<br />
</pre><br />
in appropriate places where similar text lines are.<br />
<br />
=== Building RPMs ===<br />
Before rebuilding the kernel, make sure you have adjusted the kernel version in kernel-ovz.spec.<br />
This helps you distinguish the resulting binaries from already existing kernels<br />
(or from official OpenVZ kernels). To do so, edit the /usr/src/<distro>/SPECS/kernel-ovz.spec file and replace the following line:<br />
<pre><br />
%define ksubrelease 1<br />
</pre><br />
with<br />
<pre><br />
%define ksubrelease 1-my.kernel.v1<br />
</pre><br />
<br />
<br />
Then, to rebuild the kernel, run the following commands:<br />
<pre><br />
# cd /usr/src/<distro>/SPECS<br />
# rpmbuild -ba --target=i686 kernel-ovz.spec<br />
</pre><br />
<br />
After successful kernel compilation, the binary RPMs can be found in /usr/src/<distro>/RPMS/i686.<br />
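Once the build finishes, the new kernel can be installed next to the running one. A sketch (the exact package file name depends on the version string you set in the spec, so the one below is only an illustration):<br />

```shell
# Install with -i (not -U) so the currently working kernel stays
# available as a fallback boot entry; the file name is illustrative.
# cd /usr/src/redhat/RPMS/i686
# rpm -ihv ovzkernel-2.6.16-026test012.1-my.kernel.v1.i686.rpm
```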
<br />
== Rebuilding kernel from sources ==<br />
<br />
FIXME</div>195.214.233.10https://wiki.openvz.org/index.php?title=Kernel_build&diff=1407Kernel build2006-05-30T08:21:03Z<p>195.214.233.10: </p>
<hr />
<div>This FAQ will help you if you want to apply some patches to the kernel on your own or rebuild it from sources.<br />
On RPM-based distros such as Red Hat Enterprise Linux/CentOS, Fedora Core, or SuSE, you can simply rebuild the kernel from the SRPM;<br />
for other distros you have to install the sources, then build and install the kernel manually. Details for both cases are given below.<br />
<br />
== Rebuilding kernel from SRPM ==<br />
<br />
=== Download ===<br />
The OpenVZ kernel SRC RPM can be downloaded from the official downloads at http://openvz.org/download/kernel/.<br />
Beta versions of kernels for different OS distributions can also be found at http://openvz.org/download/beta/.<br />
<br />
=== Installation ===<br />
Install the downloaded SRC RPM with the following command:<br />
<pre><br />
# rpm -ihv ovzkernel-2.6.16-026test012.1.src.rpm<br />
</pre><br />
<br />
After successful installation you can find the kernel sources in /usr/src/<distro>/SOURCES/<br />
and the kernel spec file (kernel-ovz.spec) in /usr/src/<distro>/SPECS, where <distro> is your distribution-specific directory.<br />
For example, for RedHat based distros it is 'redhat', for SUSE it is 'packages'.<br />
<br />
=== Adding your own patches ===<br />
To modify the kernel, you just need to add your patch to the kernel spec file and put the patch into the SOURCES directory.<br />
Copy your patch into the SOURCES directory with the following command:<br />
<pre><br />
# cp <patch> /usr/src/<distro>/SOURCES/<br />
</pre><br />
<br />
Then open spec file /usr/src/<distro>/SPECS/kernel-ovz.spec in the editor and add the following lines:<br />
<pre><br />
Patch10000: <patch-name><br />
</pre><br />
and<br />
<pre><br />
%patch10000 -p1<br />
</pre><br />
in the appropriate places, next to the similar existing lines.<br />
<br />
== Rebuilding kernel from sources ==<br />
<br />
Before rebuilding the kernel, make sure you have adjusted the kernel version in kernel-ovz.spec.<br />
This helps you distinguish the resulting binaries from already existing kernels<br />
(or from official OpenVZ kernels). To do so, edit the /usr/src/<distro>/SPECS/kernel-ovz.spec file and replace the following line:<br />
<pre><br />
%define ksubrelease 1<br />
</pre><br />
with<br />
<pre><br />
%define ksubrelease 1-my.kernel.v1<br />
</pre><br />
<br />
<br />
Then, to rebuild the kernel, run the following commands:<br />
<pre><br />
# cd /usr/src/<distro>/SPECS<br />
# rpmbuild -ba --target=i686 kernel-ovz.spec<br />
</pre><br />
<br />
After successful kernel compilation, the binary RPMs can be found in /usr/src/<distro>/RPMS/i686.</div>195.214.233.10https://wiki.openvz.org/index.php?title=FAQ&diff=1406FAQ2006-05-30T07:36:30Z<p>195.214.233.10: </p>
<hr />
<div>;[[Kernel build HOWTO]]<br />
: How to build your own kernel, apply modifications, etc.</div>195.214.233.10https://wiki.openvz.org/index.php?title=Main_Page_old&diff=1401Main Page old2006-05-26T19:54:03Z<p>195.214.233.10: </p>
<hr />
<div>;[[Internals]]<br />
: Description of how OpenVZ works, what's inside the kernel etc.<br />
;[[Troubleshooting]]<br />
: What to do if something fails<br />
;[[FAQ]]<br />
: Frequently Asked Questions<br />
<br />
== External links ==<br />
* [http://meta.wikimedia.org/wiki/Help:Editing Editing wiki pages]</div>195.214.233.10https://wiki.openvz.org/index.php?title=Remote_console_setup&diff=1400Remote console setup2006-05-26T19:53:32Z<p>195.214.233.10: </p>
<hr />
<div>In case you are experiencing a kernel crash (oops) and have already [[Troubleshooting:Hardware|checked your hardware]], you should report what the kernel prints to the console to [http://bugzilla.openvz.org/ Bugzilla]. Sometimes the kernel crashes so badly that syslogd is not working, and what the kernel says is never written to a file. If this is the case, you have to catch the kernel output yourself. There are several possible ways.<br />
<br />
== Manual/Photo ==<br />
If the kernel backtrace is not too long, there is a chance it fits on a single screen. In that case, you can just take a photo of the kernel crash screen and attach it to the bug report. If you do not have a camera, you can still carefully write down (using a piece of paper and a pen, that is) what you see on the screen, and later type it into the bug report.<br />
<br />
== Serial console ==<br />
FIXME<br />
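A minimal sketch of a common setup, assuming the crashing box has a serial port connected by a null-modem cable to a second machine, and that the bootloader is GRUB; the port (ttyS0), speed (115200), and file names are assumptions to adapt:<br />

```shell
# On the crashing machine: make the kernel log to both the local screen
# and the first serial port. Append to the kernel line in
# /boot/grub/grub.conf (ttyS0 and 115200 are assumptions):
#   console=tty0 console=ttyS0,115200
#
# On the capturing machine: set the matching speed on its serial port
# and record everything that arrives, e.g.:
#   stty -F /dev/ttyS0 115200 raw
#   cat /dev/ttyS0 > oops.log
# or use an interactive terminal program:
#   minicom -D /dev/ttyS0 -b 115200
```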
<br />
== Netconsole ==<br />
FIXME</div>195.214.233.10https://wiki.openvz.org/index.php?title=Remote_console_setup&diff=1399Remote console setup2006-05-26T19:48:42Z<p>195.214.233.10: </p>
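As a placeholder sketch: netconsole sends kernel messages over UDP, so it needs a working network driver but no serial hardware. All addresses, ports, and MACs below are examples, not real hosts:<br />

```shell
# On the crashing machine (parameter syntax:
# src-port@src-ip/dev,dst-port@dst-ip/dst-mac):
#   modprobe netconsole \
#     netconsole=6665@192.168.0.10/eth0,6666@192.168.0.2/00:11:22:33:44:55
#
# On the log host (192.168.0.2 in this example), capture the UDP stream:
#   nc -u -l -p 6666 | tee netconsole.log
```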
<hr />
<div>In case you are experiencing a kernel crash (oops) and have already [[Troubleshooting:Hardware|checked your hardware]], you should report what the kernel prints to the console to [http://bugzilla.openvz.org/ Bugzilla]. Sometimes the kernel crashes so badly that syslogd is not working, and what the kernel says is never written to a file. If this is the case, you have to catch the kernel output yourself. There are several possible ways.<br />
<br />
== Manual/Photo ==<br />
If the kernel backtrace is not too long, there is a chance it fits on a single screen. In that case, you can just take a photo of the kernel crash screen and attach it to the bug report. If you do not have a camera, you can still carefully write down (using a piece of paper and a pen, that is) what you see on the screen, and later type it into the bug report.<br />
<br />
== Serial console ==<br />
FIXME<br />
<br />
== Netconsole ==<br />
FIXME</div>195.214.233.10