
I recently spent a lot of time getting vTPM working in Xen guest virtual machines. Here I share that experience so that others can follow these steps and get vTPM (virtual Trusted Platform Module) support working as quickly as possible.

*. According to the Xen 4.3 documentation, vTPM currently works only for para-virtualized (PV) guests.

**. The following instructions were tested on Ubuntu 12.04 (as host), Xen 4.3, Linux kernel 3.7.1 for Dom0, and Linux kernel 3.9.1 for DomU.

***. Please run all the commands in this tutorial as root (or with sudo).


Prerequisites

  1. A TPM chip on the motherboard. The TPM hardware must be activated in the BIOS.
  2. A Linux host installed on the machine (I used Ubuntu 12.04).
  3. Basic knowledge of the Linux kernel and how to compile it. We have to compile the Linux kernel to obtain customized Dom0 and DomU kernels for Xen.
  4. Readers who want more detailed information on the structure and architecture of vTPMs can refer to the following sources:
    IBM research:
    IBM's initial publication at USENIX 2006:
    Xen wiki page:

High-level installation steps

Activating vTPM in Xen involves the following main steps. In the next part we explain the details of each step.

  1. Install a host operating system on the machine (we used Ubuntu 12.04)
  2. Install the Xen 4.3 hypervisor
  3. Install the Dom0 kernel
  4. Install the DomU kernel
  5. Configure the vTPM Manager and vTPM
  6. Boot DomU with vTPM

In the following sections we describe each step in detail.

1. Install a host OS

We assume that this step is already done as described in the prerequisites.

2. Installing Xen 4.3 hypervisor

Instructions in this section are based on the instructions in the Xen Guide and this blog; however, some steps have been changed or added to enable vTPM in Xen.

To install Xen 4.3, follow these steps:

1. Install all required packages. The list is below; however, I suggest installing them one by one.

$sudo apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial build-essential make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml libx11-dev bison flex xz-utils ocaml-findlib gcc-multilib checkpolicy
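If one package in that long list fails, the whole apt-get command aborts without telling you much. A small helper of my own (the function name and the abbreviated package list are just illustrative) makes "one by one" installation easy and reports any package that fails:

```shell
# Sketch: install packages one at a time so a single failure is easy to spot.
# install_each tries each package separately and reports any that fail.
install_each() {
  for p in "$@"; do
    sudo apt-get install -y "$p" || echo "failed: $p"
  done
}
# Usage (full list abbreviated; see the command above):
# install_each bcc bin86 gawk bridge-utils iproute ...
```

Any package printed as "failed" can then be investigated individually.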

Install the following packages:

$ sudo apt-get install yajl* pixman*

(by * I mean all the packages whose names start with these words)

2. Download the Xen 4.3 source code:

$ wget http://bits.xensource.com/oss-xen/release/4.3.0/xen-4.3.0.tar.gz

3. Extract Xen 4.3 and open Config.mk:

$ sudo tar xvf xen-4.3.0.tar.gz
$ cd xen-4.3.0
$ sudo vim Config.mk

4. In the opened Config.mk, change the line as follows:


5. Install Xen:

$ sudo make xen
$ sudo ./configure
$ cd tools
$ sudo ./configure
$ cd ..
$ sudo make tools
$ sudo make stubdom
$ cd stubdom
$ sudo make
$ cd ..
$ sudo make install-xen
$ sudo make install-tools PYTHON_PREFIX_ARG=
$ sudo make install-stubdom

6. We use the XSM/FLASK security framework in Xen and will need xenpolicy.24 later, so run the following command in the xen-4.3.0 directory:

$ sudo make -C tools/flask/policy

7. If everything has been done correctly, you should see the following files in the /boot directory:


8. Also check the existence of the following files:


*. If you cannot see these files/paths, check for the existence of the “local” directory on your system; you may not have it, depending on your host installation. In that case check /usr/lib/…

9. Edit “/etc/xen/xend-config.sxp” file and change it as follows:

(xend-unix-server yes)

3. Install Dom0 kernel

After installing Xen, we should install Dom0, which is in fact a default virtual machine created by Xen that manages the other VMs in the system. To support vTPM we need to change some of the Linux kernel build configuration and then compile the kernel. Follow the steps below to install and configure it:

1. Download the kernel:

$ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.7.1.tar.gz

2. Extract the kernel

$ tar xvf linux-3.7.1.tar.gz

3. Configure/customize the kernel

$ cd linux-3.7.1
$ sudo make menuconfig

4. Ensure the following kernel configuration options are set:

Processor type and features →
    High memory support (64GB)
       PAE (Physical Address Extension) Support - enabled

Processor type and features →
    Allocate 2nd-level pagetables from highmem - disabled

ACPI (Advanced Configuration and Power Interface) Support - enabled

Processor type and features →
     Paravirtualized guest support [y] →
          Xen guest support – enabled

Bus options →
     Xen PCI frontend – enabled

Device Drivers → 
     Block Devices [*] → 
           Xen virtual block device support – enabled 
           Block-device backend driver – enabled 

     Network device support [*] → 
           Xen network device frontend driver – enabled 
           Xen backend network device – enabled

     Input device support →
           Miscellaneous devices →
                Xen virtual keyboard and mouse support – enabled

     Character devices →
           Xen Hypervisor console support – enabled

     Xen driver support →
           Xen memory balloon driver – enabled
           Scrub pages before returning them to system – enabled
           Xen /dev/xen/evtchn device – enabled
           Backend driver support – enabled
           Xen filesystem – enabled
           Create compatibility mount point /proc/xen – enabled
           Create xen entries under /sys/hypervisor – enabled
           userspace grant access device driver – enabled
           User-space grant reference allocator driver – enabled
           xen platform pci device driver – enabled

5. DISABLE TPM SUPPORT (this is a critical step: Dom0 must not have access to the hardware TPM).

For that purpose, we exclude the TPM driver in the dom0 kernel:

Device Drivers →
     Character devices →
          TPM Hardware Support – disabled

6. After the above modifications, verify the following configuration options in the .config file in the linux-3.7.1 directory:


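The exact option list did not survive in this copy of the post, but a quick grep over .config can verify options of this kind. The helper below is my own sketch; the option names shown are examples corresponding to the Xen support enabled in the menuconfig steps above:

```shell
# Illustrative helper: verify that required options are built in (=y)
# in a kernel .config file.  Prints any option that is not enabled.
check_config() {
  local cfg="$1"; shift
  local missing=0
  for opt in "$@"; do
    grep -q "^${opt}=y" "$cfg" || { echo "missing: $opt"; missing=1; }
  done
  return $missing
}
# Example usage from the kernel source directory:
# check_config .config CONFIG_XEN CONFIG_XEN_BLKDEV_BACKEND CONFIG_XEN_NETDEV_BACKEND
```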
7. Build and install the kernel:

$ sudo make modules_prepare
$ sudo make
$ sudo make modules_install
$ sudo make install
$ cd /boot
$ sudo mkinitramfs -o initrd.img-3.7.1 3.7.1
$ sudo update-grub

8. Reboot the system; you should now be able to start the Xen services as follows:

$ sudo service xencommons start

9. If Xen started correctly, running the following command should show dom0:

$ sudo xl list

Also, if you execute “cat /proc/xen/capabilities”, the output should be “control_d”.

*. If the Xen service cannot be started, installing “blktap-dkms” may fix the problem:

$ sudo apt-get install blktap-dkms

Then append “blktap” to the /etc/modules file.

10. Copy the xenpolicy.24 file (built in step 6) to the /boot directory:

$ sudo cp tools/flask/policy/xenpolicy.24 /boot

11. Modify /boot/grub/grub.cfg. This file is read-only even for the owner, so first change its permissions: $ sudo chmod 744 /boot/grub/grub.cfg. Then find the menu entry that you select at boot time (it should be “xen and linux 3.7.1”) and add the following lines at the end of it (after the initrd.img-3.7.1 line):

echo    'Loading xenpolicy.24...'
module  /boot/xenpolicy.24

*. After saving the changes, it is better to change the permissions of grub.cfg back (to 444).
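For orientation, the edited menu entry ends up looking roughly like the sketch below. The kernel paths, root device, and entry title here are illustrative; your generated entry will differ, and only the last two lines are the addition:

```
menuentry 'Ubuntu GNU/Linux, with Xen hypervisor' {
        multiboot /boot/xen.gz
        module    /boot/vmlinuz-3.7.1 root=/dev/sda1 ro console=tty0
        module    /boot/initrd.img-3.7.1
        echo      'Loading xenpolicy.24...'
        module    /boot/xenpolicy.24
}
```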

12. Reboot the system and now you should be able to run the Xen service.

*. If you face a problem again, with this error message:

“Starting oxenstored…/usr/local/sbin/oxenstored: error while loading shared libraries: libxenctrl.so.4.3: cannot open shared object file: No such file or directory”

then the following fix may help:

$ sudo vim /etc/ld.so.conf      (add the line: /usr/lib64)
$ sudo ldconfig
$ sudo service xencommons start

13. Configure networking in Xen:
This section describes how to set up Linux bridging in Xen. It assumes eth0 is both your primary interface in dom0 and the interface you want your VMs to use. It also assumes you are using DHCP.

sudo apt-get install bridge-utils

Edit /etc/network/interfaces, and make it look like this:

auto lo
iface lo inet loopback

auto xenbr0
iface xenbr0 inet dhcp
bridge_ports eth0

auto eth0
iface eth0 inet manual

Restart networking to enable xenbr0 bridge:

sudo /etc/init.d/networking restart

4. Install DomU kernel

Two main things need to be done:

A) Build a kernel for the new domain
B) Create a file system for the new domain (a disk image)

1. Download the modified kernel from GitHub. Unfortunately, current Linux kernels are not yet fully adapted for vTPM. Quan Xu, a friend from Intel who helped me in this project, has modified the kernel code to support vTPM. You can download the modified kernel from the address below:

3. Run “sudo make menuconfig” and modify the configuration. The final form of the .config file must look like this:


4. Enable IMA (Security options → Integrity Measurement Architecture (IMA)) and TPM (Device Drivers → Character devices → TPM hardware support) in menuconfig.

5. Compile the kernel:

$ sudo make

6. Create an empty 10 GB disk image with the following command:

$ sudo dd if=/dev/zero of=/root/domu.img bs=1024K count=10240
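The resulting size is simply bs × count (1024 KB × 10240 = 10 GiB). If you want to sanity-check that arithmetic before committing 10 GB, the same pattern can be tried on a small throwaway image (the 16 MiB size and /tmp path here are just for illustration):

```shell
# Create a small test image and confirm bs * count = file size in bytes.
# 16 blocks of 1 MiB -> 16 * 1048576 = 16777216 bytes.
dd if=/dev/zero of=/tmp/test.img bs=1M count=16 2>/dev/null
stat -c %s /tmp/test.img   # prints 16777216
```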

7. Create an ext4 file system on the image:

$ sudo /sbin/mkfs.ext4 /root/domu.img

8. Mount the file system

$ sudo mount -o loop /root/domu.img /mnt/

9. Install a base system. Here I am using Debian Squeeze. Mount the disk image to a directory (done above) and then use the “debootstrap” command to populate the image with a Debian Squeeze base system. Since my system is 64-bit, the command is:

$ sudo debootstrap --arch amd64 squeeze /mnt/

*. After the mount and debootstrap operations have completed, unmount the file system and mount it again to make sure the data was written correctly.

10. Now introduce the file systems to the guest. Edit /mnt/etc/fstab and add the following lines to it:

/dev/xvda1 / ext4 defaults 0 1
proc /proc proc defaults 0 0

11. Choose your guest’s name by editing /mnt/etc/hostname.

12. Edit /mnt/etc/network/interfaces and make it look like this:

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

13. Edit /mnt/etc/securetty and append the Xen console device, so that root can log in on it:

hvc0
14. Run the following to create the xvda1 and hvc0 device nodes (202 is the Xen virtual block device major number, with minor 1 for the first partition; 229 is the Xen hvc console major):

chroot /mnt
mknod /dev/xvda1 b 202 1
mknod /dev/hvc0 c 229 0
chown root:disk /dev/xvda1

15. Edit /mnt/etc/inittab and comment out the following lines:

# 1:23:respawn:/sbin/getty 38400 tty1
# 2:23:respawn:/sbin/getty 38400 tty2 
# 3:23:respawn:/sbin/getty 38400 tty3 
# 4:23:respawn:/sbin/getty 38400 tty4 
# 5:23:respawn:/sbin/getty 38400 tty5 
# 6:23:respawn:/sbin/getty 38400 tty6

and add the following line:

hvc0:2345:respawn:/sbin/agetty -L 9600 hvc0

16. Create a user and set a password for root:

$ sudo chroot /mnt
# adduser xen
# passwd root
# exit

17. Now we need to install the kernel modules into the newly created file system. We downloaded and compiled this kernel in step 1; here it lives at /root/linux-3.9.1:

$ cd YOUR PATH/linux-3.9.1
$ sudo make modules_install INSTALL_MOD_PATH=/mnt
$ sudo cp /root/linux-3.9.1/.config /mnt/boot/config-3.9.1

18. We also need an initrd (initial RAM disk) file:

$ sudo chroot /mnt
# apt-get install initramfs-tools
# mkinitramfs -o initrd.img-3.9.1-domU 3.9.1
# exit
$ sudo mv /mnt/initrd.img-3.9.1-domU /root/

19. Unmount the DomU file system:

$ sudo umount /mnt

5. Configuring vTPM Manager and vTPM

At this point we have all the necessary pieces ready. To use vTPM in guest VMs, we first have to create a vTPM Manager.
The vTPM Manager is itself a domain; it creates and manages the vTPMs (one TPM instance per VM). After successfully creating the vTPM Manager, we can create vTPM instances that serve as the TPM for each VM. Finally, the guest VM is created and connected to its vTPM. A more detailed description of this structure can be found here. Below, we list the operations to achieve these steps.

5.1 Creating vTPM Manager

1. Create a disk image for the vTPM Manager domain. The vTPM Manager requires a disk image to store its encrypted data. The image does not require a file system and can live anywhere on the host disk. It does not need to be large; 16 MB should be sufficient.

$ sudo dd if=/dev/zero of=/var/vtpmmgr-stubdom.img bs=16M count=1

2. Create a config file for the vTPM Manager domain. The vTPM Manager domain (vtpmmgr-stubdom) is launched like any other Xen VM and requires a config file. The manager needs a disk image for storage and permission to access the hardware memory pages of the TPM. An example configuration is as follows.
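The original example config did not survive in this copy; a minimal sketch, modeled on the Xen 4.3 vtpmmgr documentation, looks like this (the kernel path is where make install-stubdom typically puts the stubdom image and may differ on your system):

```
name = "vtpmmgr"
kernel = "/usr/local/lib/xen/boot/vtpmmgr-stubdom.gz"
memory = 16
disk = [ "file:/var/vtpmmgr-stubdom.img,hda,w" ]
iomem = [ "fed40,5" ]
```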


*. The iomem line in the above config file says that the vTPM Manager may access the I/O memory pages of the hardware TPM (5 pages starting at 0xfed40000).

3. Launch the vTPM Manager. Like any other VM, it can be started with:

$ sudo xl create -c <vTPM_Conf_File>

*. Here <vTPM_Conf_File> is the name of the config file created above.

**. If everything works well, you should see the following line once the vTPM Manager has booted correctly:

INFO[VTPM]: Waiting for commands from vTPM's:

***. The vTPM implementation does not yet seem very stable and looks rather fragile. Therefore, to shut down the vTPM Manager at any time, destroy it carefully only after it has truly finished booting, to prevent corruption of its disk image. You can do that with the xl toolstack: “sudo xl destroy <vTPM_ID>”.

5.2 Creating a vTPM

After creating the vTPM Manager, we can create a vTPM for a guest VM.

1. Create a disk image for the vTPM. Like the vTPM Manager, the vTPM is itself a VM, so it also requires a disk image.

$ sudo dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1

2. Create the vTPM config file:
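The config file itself is missing from this copy; a minimal sketch, again modeled on the Xen vTPM documentation, looks like this (the name and kernel path are assumptions, and the UUID is a placeholder to be replaced with your own):

```
name = "domu-vtpm"
kernel = "/usr/local/lib/xen/boot/vtpm-stubdom.gz"
memory = 8
disk = [ "file:/home/user/domu/vtpm.img,hda,w" ]
vtpm = [ "backend=vtpmmgr" ]
uuid = "00000000-0000-0000-0000-000000000001"
```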


*. You should replace the UUID with your own, generated by running “uuidgen” on Linux.

**. The vtpm=[…] line configures the frontend TPM driver; it states that this vTPM uses vtpmmgr in the background as its vTPM Manager.

3. Launch the vTPM based on the constructed config file:

$ sudo xl create -c <TPM_Conf_File>

*. After booting, the vTPM instance should stop at the following line:


Info: TPM_Startup(1)


6. Booting DomU (Guest VM) with vTPM Support

After starting the vTPM Manager and the vTPM, it is time to fire up a guest VM. We create a VM from the image we built earlier, with kernel 3.9.1.

1. First, we create a VM config file. It looks like any normal VM config file; however, at the end we must include the connection to the vTPM. Here is an example.

kernel = '/home/user/domuKernel-master/vmlinux'
ramdisk = '/root/initrd.img-3.9.1-domU'
vcpus = '1'
memory = '1024'
root = '/dev/xvda1 ro'
name = 'storage'
vif = [ '','bridge=xenbr0']
dhcp = "dhcp"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
extra = 'console=hvc0 xencons=tty'
vtpm = [ 'backend=domu-vtpm' ]   # backend = the name of your running vTPM domain

*. If you are copy/pasting the config file, remember to modify the paths based on the location of the images etc. on your computer.
**. The last line (vtpm=[…]) establishes the connection to the vTPM; the backend entry must name your running vTPM domain.

2. In this step, you can create the VM:

$ sudo xl create -c VM_CONFIG_FILE

*. During the boot process you should see the VM interacting with the vTPM.

3. After booting, make sure the xen-tpmfront module is loaded in the guest VM. For that purpose, run the following command:

# modprobe xen-tpmfront

*. After this module is loaded, you should see on the vTPM side that the frontend is connected.

4. To test the TPM in the guest, install “trousers” and “tpm-tools” using apt-get.

5. On the guest VM, run the following:

# tcsd
# tpm_version

Here is the output on my computer:

TPM 1.2 Version Info:
Chip Version:
Spec Level:          2
Errata Revision:     1
TPM Vendor ID:       ETHZ
TPM Version:         01010000
Manufacturer Info:   4554485a

And everything works fine now!