Documentation/9psetup

This section details the steps involved in setting up VirtFS (Plan 9 folder sharing over Virtio, the I/O virtualization framework) between the guest and host operating systems. The instructions are followed by an example that walks through these steps.

Preparation

1. Download the latest kernel code (2.6.36-rc4 or newer) from http://www.kernel.org to build the kernel image for the guest.

2. Ensure the following 9P options are enabled in the kernel configuration:

    CONFIG_NET_9P=y
    CONFIG_NET_9P_VIRTIO=y
    CONFIG_NET_9P_DEBUG=y (Optional)
    CONFIG_9P_FS=y
    CONFIG_9P_FS_POSIX_ACL=y

and these PCI and virtio options:

    CONFIG_PCI=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_PCI_HOST_GENERIC=y (only needed for the QEMU Arm 'virt' board)
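
For example, assuming you are at the top of the kernel source tree, the scripts/config helper can toggle these options before building; a minimal sketch (follow with olddefconfig to resolve any newly exposed dependencies):

    scripts/config --enable NET_9P \
                   --enable NET_9P_VIRTIO \
                   --enable 9P_FS \
                   --enable 9P_FS_POSIX_ACL \
                   --enable PCI \
                   --enable VIRTIO_PCI
    make olddefconfig    # pick defaults for any newly exposed options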

3. Clone the latest QEMU git repository from http://git.qemu.org/ or http://repo.or.cz/w/qemu.git.

4. Configure QEMU for the desired target. Note that if the configuration step reports ATTR/XATTR as 'no', you need to install libattr and its development headers first.

For Debian-based systems install the packages libattr1 and libattr1-dev; for RPM-based systems install libattr and libattr-devel. Then proceed to configure and build QEMU.

5. Set up the guest OS image and ensure the kvm modules are loaded.
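
For instance, a brief sketch with a hypothetical image name (the Example section below walks through a raw-image alternative in more detail):

    qemu-img create -f qcow2 guest.qcow2 10G    # hypothetical image name and size
    modprobe kvm
    modprobe kvm_intel    # or kvm_amd on AMD hosts
    lsmod | grep kvm      # verify the modules are loaded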

Starting the Guest directly

To start the guest, add the following options to enable 9P sharing in QEMU:

    -fsdev fsdriver,id=[id],path=[path to share],security_model=[mapped|passthrough|none][,writeout=writeout][,readonly]
     [,socket=socket|sock_fd=sock_fd] -device virtio-9p-pci,fsdev=[id],mount_tag=[mount tag]

Alternatively, you can use the following, which is just a shortcut for the command above.

    -virtfs fsdriver,id=[id],path=[path to share],security_model=[mapped|passthrough|none][,writeout=writeout][,readonly]
     [,socket=socket|sock_fd=sock_fd],mount_tag=[mount tag]

Options (a complete example follows the list):

  • fsdriver: Specifies the filesystem driver backend to use. Currently only the "local", "handle" and "proxy" drivers are supported. In the future we plan to add various types of network and cluster filesystems here.
  • id: Identifier used to refer to this fsdev.
  • path: The path on the host that is identified by this fsdev.
  • security_model: Valid options are mapped, passthrough and none. There is no need to specify a security_model with the "proxy" driver.
  1. mapped: Files are created with QEMU user credentials and the client-user's credentials are saved in extended attributes.
  2. passthrough: Files on the filesystem are directly created with the client-user's credentials.
  3. none: Equivalent to the passthrough security model, except that failures of privileged operations such as chown are ignored. This makes a passthrough-like security model usable for people who run kvm as a non-root user.
  • writeout=writeout: An optional argument. The only supported value is "immediate".
  • readonly: Exports the 9p share as a read-only mount for guests. By default read-write access is given.
  • socket=socket: Enables the proxy filesystem driver to use the given socket file for communicating with virtfs-proxy-helper.
  • sock_fd=sock_fd: Enables the proxy filesystem driver to use the given socket descriptor for communicating with virtfs-proxy-helper. Usually a helper like libvirt will create a socketpair and pass one of the fds as sock_fd.

The -fsdev option is used together with the -device driver "virtio-9p-pci", whose options are:

  • fsdev=id: Specifies the id value given with the -fsdev option.
  • mount_tag: A tag which acts as a hint to the guest OS and is used to mount this exported path.
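
For instance, a minimal sketch sharing a hypothetical host directory /tmp/shared under the mount tag hostshare (the id, path and tag are placeholders):

    -fsdev local,id=fsdev0,path=/tmp/shared,security_model=mapped \
    -device virtio-9p-pci,fsdev=fsdev0,mount_tag=hostshare

or, using the -virtfs shortcut:

    -virtfs local,id=fsdev0,path=/tmp/shared,security_model=mapped,mount_tag=hostshare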

Starting the Guest using libvirt

If using libvirt for management of QEMU/KVM virtual machines, the <filesystem> element can be used to set up 9p sharing for guests:

 <filesystem type='mount' accessmode='$security_model'>
   <source dir='$hostpath'/>
   <target dir='$mount_tag'/>
 </filesystem>

In the above XML, the source directory contains the host path that is to be exported. The target directory should be filled with the mount tag for the device, which, despite its name, does not actually have to be a directory path; any string of 32 characters or less can be used. The accessmode attribute determines the sharing mode, one of 'passthrough', 'mapped' or 'squashed'.

There is no equivalent of the QEMU 'id' attribute, since that is automatically filled in by libvirt. Libvirt will also automatically assign a PCI address for the 9p device, though that can be overridden if desired.
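
As an illustration, a sketch of the same element with hypothetical values filled in (host directory /var/lib/shared, mount tag hostshare):

 <filesystem type='mount' accessmode='mapped'>
   <source dir='/var/lib/shared'/>
   <target dir='hostshare'/>
 </filesystem>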

Mounting the shared path

You can mount the shared folder using:

    mount -t 9p -o trans=virtio,version=9p2000.L [mount tag] [mount point]
  • mount tag: As specified in the QEMU command line.
  • mount point: Path to the mount point.
  • trans: Transport method (here virtio, for 9P over virtio).
  • version: Protocol version. By default it is 9p2000.u.

Other options that can be used include (a combined example follows the list):

  • msize: Maximum packet size including any headers. By default it is 8 KiB.
  • access: The following access modes are available:
  1. access=user: If a user tries to access a file on the v9fs filesystem for the first time, v9fs sends an attach command (Tattach) for that user. This is the default mode.
  2. access=<uid>: Only allows the user with uid=<uid> to access the files on the mounted filesystem.
  3. access=any: v9fs does a single attach and performs all operations as one user.
  4. access=client: Fetches access control list values from the server and does an access check on the client.
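
Putting these together, a sketch mounting the hypothetical tag hostshare from earlier with an explicit msize (values are illustrative only):

    mount -t 9p -o trans=virtio,version=9p2000.L,access=any,msize=524288 hostshare /mnt/hostshare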

Performance Considerations

You should set an appropriate value for the "msize" option on the client (guest OS) side to avoid degraded file I/O performance. This 9P option is only available on the client side. If you do not specify a value for "msize" with a Linux 9P client, the client falls back to its default value of only 8 KiB, which results in very poor performance. A good value for "msize" depends on the file I/O potential of the underlying storage on the host side (i.e. a property invisible to the client). You might also want to trade off performance gain against additional RAM costs: with growing "msize" (RAM occupation) performance still increases, but the performance gain (delta) shrinks continuously.

For that reason it is recommended to benchmark and manually pick an appropriate "msize" value for your use case. As a starting point, you might pick something between 10 MiB and somewhat above 100 MiB for spindle-based SATA storage, whereas for PCIe-based flash storage you might pick several hundred MiB or more. Then create a large file on the host side (e.g. 12 GiB):

    dd if=/dev/zero of=test.dat bs=1G count=12

and measure how long it takes reading the file on guest OS side:

    time cat test.dat > /dev/null

then repeat with different values for "msize" to find a good value.
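
A minimal sketch of such a loop on the guest, assuming the hypothetical mount tag test_mount and mount point /mnt/test (run as root; the page cache is dropped between runs so each read actually travels over 9P):

    for msize in 8192 524288 10485760 104857600; do
        mount -t 9p -o trans=virtio,version=9p2000.L,msize=$msize test_mount /mnt/test
        echo 3 > /proc/sys/vm/drop_caches    # discard cached pages from the previous run
        echo "msize=$msize:"
        time cat /mnt/test/test.dat > /dev/null
        umount /mnt/test
    done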

Example

An example usage of the above steps (tried on an Ubuntu Lucid Lynx system):

1. Download the latest kernel source from http://www.kernel.org

2. Build kernel image

  • Ensure relevant kernel configuration options are enabled pertaining to
  1. Virtualization
  2. KVM
  3. Virtio
  4. 9P
  • Compile (see the sketch below)
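
A sketch of this step, assuming the unpacked kernel source tree (enable the options listed under Preparation):

    cd linux-<version>           # unpacked kernel source; <version> as downloaded
    make menuconfig              # enable Virtualization, KVM, Virtio and 9P options
    make -j$(nproc) bzImage      # build the kernel image passed to QEMU via -kernel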

3. Clone the latest QEMU git repository into a fresh directory using

    git clone git://repo.or.cz/qemu.git

4. Configure QEMU

For example, for i386-softmmu with debugging support, use

    ./configure '--target-list=i386-softmmu' '--enable-debug' '--enable-kvm' '--prefix=/home/guest/9p_setup/qemu/'

If this step reports ATTR/XATTR as 'no', install the packages libattr1 and libattr1-dev on your system using:

    sudo apt-get install libattr1
    sudo apt-get install libattr1-dev

5. Compile QEMU

    make
    make install

6. Guest OS installation (Installing Ubuntu Lucid Lynx here)

  • Create Guest image (here of size 2 GB)
    dd if=/dev/zero of=/home/guest/9p_setup/ubuntu-lucid.img bs=1M count=2000 
  • Create a filesystem on the image file (ext4 here)
    mkfs.ext4 /home/guest/9p_setup/ubuntu-lucid.img 
  • Mount the image file
    mount -o loop /home/guest/9p_setup/ubuntu-lucid.img /mnt/temp_mount
  • Install the Guest OS

For installing a Debian-based system you can use the debootstrap package:

    debootstrap lucid /mnt/temp_mount 

Once the OS is installed, unmount the guest image.

    umount /mnt/temp_mount

7. Load the KVM modules on the host (for Intel here)

    modprobe kvm
    modprobe kvm_intel 

8. Start the Guest OS

   /home/guest/9p_setup/qemu/bin/qemu -drive file=/home/guest/9p_setup/ubuntu-lucid.img,if=virtio \
   -kernel /path/to/kernel/bzImage -append "console=ttyS0 root=/dev/vda" -m 512 -smp 1 \
   -fsdev local,id=test_dev,path=/home/guest/9p_setup/shared,security_model=none \
   -device virtio-9p-pci,fsdev=test_dev,mount_tag=test_mount -enable-kvm

The above command runs a VNC server. To view the guest OS, install and use any VNC viewer (for instance xvncviewer).
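
To connect, a sketch using a standard vncviewer client (QEMU listens on display :0, i.e. TCP port 5900, unless configured otherwise):

    vncviewer 127.0.0.1:0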

9. Mounting shared folder

Mount the shared folder on guest using

    mount -t 9p -o trans=virtio,version=9p2000.L,posixacl,msize=104857600,cache=loose test_mount /tmp/shared/

In the above example the folder /home/guest/9p_setup/shared of the host is shared with the folder /tmp/shared on the guest.
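
To verify the share works end to end, a quick sketch: create a file on the host and list it from the guest.

    # on the host
    touch /home/guest/9p_setup/shared/hello.txt

    # on the guest
    ls /tmp/shared/    # hello.txt should appear here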