

Introduction

Linux Containers (LXC) is a lightweight system virtualization that uses resource isolation instead of the hardware emulation approach used by KVM and Xen. For users who do not require full OS virtualization as provided by KVM and Xen, container technologies such as LXC provide an attractive performant solution for virtualization.

Purpose

This document contains the design specification for LXC support in CloudStack.

Status

Code complete; the feature has been merged to the master branch.

Feature Specification

LXC will be implemented as a hypervisor type in CloudStack and will be a first-class citizen alongside the other hypervisors such as Xen, KVM, and VMware. A user will be able to select LXC as the hypervisor anywhere a hypervisor is selectable, provided the required system resources are available.

Primary storage

Available storage options for LXC primary storage are NFS and SharedMountPoint.

Secondary storage

Unlike other hypervisors where a VM is contained in a single image file, LXC containers run from a directory that serves as the root filesystem. LXC template images will be stored in TAR format in secondary storage. See LXC Templates section for details on how the image is unpacked.

Guest VM creation

Similar to KVM, LXC virtual machines will be created using libvirt. The libvirt domain XML will include two additional elements needed for LXC: <init> and <filesystem>.

<domain type='lxc'>
  <os>
    <type arch='x86_64'>exe</type>
    <!-- specifies the startup script -->
    <init>/sbin/init</init>
  </os>
  <devices>
    <!-- specifies the directory containing the root filesystem -->
    <filesystem type='mount'>
      <source dir='/mnt/primary/edb596f6-42fb-499d-8ded-8834aff52d75'/>
      <target dir='/'/>
    </filesystem>
  </devices>
</domain>
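As a rough sketch of how such a definition might be assembled programmatically, the snippet below builds the same two LXC-specific elements with Python's standard XML library. This is illustrative only, not CloudStack's actual agent code; the domain name and rootfs path are placeholders, and a real definition would also carry memory, vcpu, and network elements.

```python
import xml.etree.ElementTree as ET

def build_lxc_domain_xml(name, rootfs_dir, init="/sbin/init"):
    """Build a minimal libvirt LXC domain definition (sketch only;
    CloudStack's agent generates a fuller definition)."""
    domain = ET.Element("domain", type="lxc")
    ET.SubElement(domain, "name").text = name
    os_el = ET.SubElement(domain, "os")
    ET.SubElement(os_el, "type", arch="x86_64").text = "exe"
    ET.SubElement(os_el, "init").text = init         # container startup binary
    devices = ET.SubElement(domain, "devices")
    fs = ET.SubElement(devices, "filesystem", type="mount")
    ET.SubElement(fs, "source", dir=rootfs_dir)      # host dir holding the rootfs
    ET.SubElement(fs, "target", dir="/")             # mounted as the container root
    return ET.tostring(domain, encoding="unicode")

domain_xml = build_lxc_domain_xml(
    "i-2-10-VM", "/mnt/primary/edb596f6-42fb-499d-8ded-8834aff52d75")
```

A definition produced this way could then be passed to libvirt (e.g. via virsh define) against the lxc:/// connection URI.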

LXC Templates

Downloadable LXC template images should be stored in either tar.gz or tar format. The SecondaryStorage VM will download the template and store it as a tar file.

bash$ find /export/secondary -type f
/export/secondary/template/tmpl/1/10/template.properties
/export/secondary/template/tmpl/1/10/402b0be5-b840-3fef-b292-d330f3bf809a.tar

During the creation of the first VM for an LXC template, the management server will send a PrimaryStorageDownload command to the agent on the LXC host. This command makes a copy of the template from secondary storage onto primary storage. This copy is used as a base for creating all LXC images for the cluster and is not used directly to run a VM. The copy operation from secondary storage to primary storage will unpack the tar file into the destination template directory.

bash$ ls -ld /mnt/primary/*
dr-xr-xr-x. 23 root root      4096 Jan 25 11:33 /mnt/primary/2cc4e71e-2e4b-4987-a48e-dfae08e0d767

bash$ ls /mnt/primary/2cc4e71e-2e4b-4987-a48e-dfae08e0d767
bin   cgroup  etc   lib    media  opt   root  selinux  sys  usr
boot  dev     home  lib64  mnt    proc  sbin  srv      tmp  var
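The unpacking step above can be sketched as follows. This is a minimal illustration of the tar-extraction part of PrimaryStorageDownload, not the agent's real implementation (which shells out to system tools); the directory names and the stand-in rootfs are invented for the demo.

```python
import os
import tarfile
import tempfile

def unpack_template(tar_path, template_dir):
    """Unpack a template tarball into a primary-storage template
    directory (sketch of the secondary-to-primary copy step)."""
    os.makedirs(template_dir)
    with tarfile.open(tar_path) as tar:
        tar.extractall(template_dir)  # rootfs lands directly under template_dir
    return sorted(os.listdir(template_dir))

# demo with a throwaway tarball containing a tiny stand-in rootfs
work = tempfile.mkdtemp()
rootfs = os.path.join(work, "rootfs")
os.makedirs(os.path.join(rootfs, "etc"))
with open(os.path.join(rootfs, "etc", "hostname"), "w") as f:
    f.write("demo\n")
tar_path = os.path.join(work, "template.tar")
with tarfile.open(tar_path, "w") as tar:
    tar.add(os.path.join(rootfs, "etc"), arcname="etc")

entries = unpack_template(tar_path, os.path.join(work, "primary", "uuid-placeholder"))
```

After extraction, the template directory holds the container's root filesystem tree directly, matching the listing above.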

After a copy of the template is available on primary storage, the management server will send a CreateCommand to the LXC host to create a disk from the template. This involves a recursive copy of the template directory to the root directory for the VM.

bash$ ls -ld /mnt/primary/*
dr-xr-xr-x. 23 root root      4096 Jan 25 11:33 /mnt/primary/2cc4e71e-2e4b-4987-a48e-dfae08e0d767
drwxr--r--. 23 root root      4096 Jan 25 13:27 /mnt/primary/edb596f6-42fb-499d-8ded-8834aff52d75
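The disk-creation step amounts to a recursive directory copy, which can be sketched as below. This is an illustration of what CreateCommand does conceptually, not the agent's actual code; the UUID-style names and stub template contents are placeholders.

```python
import os
import shutil
import tempfile

def create_disk_from_template(template_dir, vm_root_dir):
    """Recursively copy the base template directory to a per-VM root
    filesystem (sketch of the CreateCommand step)."""
    shutil.copytree(template_dir, vm_root_dir)  # recursive copy of the whole tree
    return vm_root_dir

# demo under a temp dir; the names are placeholders
primary = tempfile.mkdtemp()
template = os.path.join(primary, "2cc4e71e-template")
os.makedirs(os.path.join(template, "bin"))
with open(os.path.join(template, "bin", "sh"), "w") as f:
    f.write("#!/bin/sh\n")

vm_root = create_disk_from_template(template, os.path.join(primary, "edb596f6-vm"))
```

A full copy per VM is the trade-off of running containers from plain directories rather than copy-on-write image files.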

System VMs

Each of the different hypervisors currently has its own system VM images. These system VM images are used to run the console proxy, secondary storage, and router VMs.

We discussed the possibility of creating system VMs for LXC, but there was concern about the complexity and potential issues of running iptables for the router inside an LXC container. As an intermediate solution, KVM system VMs will be used inside the LXC cluster.

Direct Networking

Libvirt supports direct attachment of the guest VM's network to a physical interface. To enable this mode, add the following to agent.properties:

libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.DirectVifDriver
# other valid modes: bridge, vepa
network.direct.source.mode=private
network.direct.device=eth0

NOTE: The network device specified must not be enslaved to any bridge.
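These properties correspond to libvirt's direct (macvtap) interface attachment. The sketch below renders the matching interface element; it is an illustration of the libvirt XML involved, not the DirectVifDriver's actual output.

```python
import xml.etree.ElementTree as ET

def direct_interface_xml(dev, mode):
    """Render the libvirt <interface type='direct'> element that the
    DirectVifDriver settings map onto (illustrative sketch)."""
    iface = ET.Element("interface", type="direct")
    ET.SubElement(iface, "source", dev=dev, mode=mode)  # physical NIC and macvtap mode
    return ET.tostring(iface, encoding="unicode")

iface_xml = direct_interface_xml("eth0", "private")
```

In private mode, guests on the same physical device cannot reach each other directly; bridge and vepa modes relax that in different ways.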

Environment setup and testing

Obtaining latest code

The LXC code has been merged to the master branch of CloudStack:

git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git

Follow the directions in /docs/en-US/build-rpm.xml to build RPMs for CloudStack.

Installing CloudStack

I will not cover how to install CloudStack here; please use the latest online documentation. There are a few things to note when using the LXC code:

1. Use the latest system VM images from Jenkins

Import the latest system VM image:

/web/cloudstack/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://jenkins.cloudstack.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-04-14-master-kvm.qcow2.bz2 -h kvm -F

2. LXC container

CloudStack will not come bundled with an LXC container image, so you will need to prepare one yourself or download one.
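Once a root filesystem has been prepared (typically with a tool such as lxc-create or debootstrap), it needs to be packed into a tarball before it can be registered as a template. The sketch below shows that packing step with a stub directory standing in for a real rootfs; the file names are invented for the demo.

```python
import os
import tarfile
import tempfile

def pack_rootfs(rootfs_dir, out_tar):
    """Pack a prepared root filesystem into a tarball suitable for
    registering as an LXC template (sketch; rootfs contents are a stub)."""
    with tarfile.open(out_tar, "w:gz") as tar:
        for entry in sorted(os.listdir(rootfs_dir)):
            tar.add(os.path.join(rootfs_dir, entry), arcname=entry)
    return out_tar

# stub rootfs with a few top-level directories
work = tempfile.mkdtemp()
rootfs = os.path.join(work, "rootfs")
for d in ("bin", "etc", "sbin"):
    os.makedirs(os.path.join(rootfs, d))

tarball = pack_rootfs(rootfs, os.path.join(work, "container-template.tar.gz"))
```

Note that entries are added relative to the rootfs (arcname=entry), so extraction places the tree directly under the template directory, as the secondary-storage layout above expects.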
