Features/TPM


Trusted Platform Module Support Phase I



Summary

Phase I of adding Trusted Platform Module (TPM) support to QEMU.

Owner

  • Name: Stefan Berger
  • Email: stefanb@linux.vnet.ibm.com

Some Background

The Trusted Platform Module (TPM) is a cryptographic device that has been built into many modern servers, laptops and even handheld devices. Operating systems have been extended with device driver support for the TPM; on Linux the device can be accessed via /dev/tpm0.
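
As an illustration of this device node (not part of the proposal itself), a user-space program can talk to the host TPM by writing a request to /dev/tpm0 and reading back the response. The sketch below sends a TPM_PCRRead command for PCR 0; the raw bytes follow the TPM 1.2 command format described further below.

 /* Minimal sketch: read PCR 0 of the host TPM via /dev/tpm0. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <stdint.h>
 #include <unistd.h>
 
 int main(void)
 {
     /* TPM_PCRRead: tag TPM_TAG_RQU_COMMAND (0x00C1), length 14,
      * ordinal TPM_ORD_PcrRead (0x15), pcrIndex 0 -- all big-endian */
     uint8_t cmd[] = { 0x00, 0xC1, 0x00, 0x00, 0x00, 0x0E,
                       0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00 };
     uint8_t resp[64];
     int fd = open("/dev/tpm0", O_RDWR);
 
     if (fd < 0) {
         perror("open /dev/tpm0");
         return 1;
     }
     if (write(fd, cmd, sizeof(cmd)) != sizeof(cmd)) {
         perror("write");
         close(fd);
         return 1;
     }
     /* response: 10-byte header followed by the 20-byte SHA-1 digest of PCR 0 */
     ssize_t n = read(fd, resp, sizeof(resp));
     if (n >= 10) {
         printf("received %zd bytes, return code 0x%02x%02x%02x%02x\n",
                n, resp[6], resp[7], resp[8], resp[9]);
     }
     close(fd);
     return 0;
 }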

Internally the TPM can be broken up into two parts. The upper part is the memory-mapped IO (MMIO) interface. For TPM 1.2 the Trusted Computing Group (TCG) has defined a standard interface called the TPM Interface Specification (TIS), whose registers lie in the 0xFED40000 - 0xFED44FFF address range. The specification for this interface can be found here:

http://www.trustedcomputinggroup.org/files/resource_files/87BCE22B-1D09-3519-ADEBA772FBF02CBD/TCG_PCClientTPMSpecification_1-20_1-00_FINAL.pdf
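
The 0xFED40000 - 0xFED44FFF range covers five 4 KB 'localities'; within each locality the TIS registers sit at fixed offsets. As a rough orientation, a sketch of a few offsets taken from the TIS 1.2 specification (not an exhaustive list):

 /* Sketch of the TPM 1.2 TIS MMIO layout; offsets are per locality. */
 #define TPM_TIS_BASE        0xFED40000
 #define TPM_TIS_LOCALITY(n) (TPM_TIS_BASE + (n) * 0x1000)   /* localities 0..4 */
 
 #define TPM_TIS_REG_ACCESS     0x00   /* request/relinquish a locality */
 #define TPM_TIS_REG_INT_ENABLE 0x08   /* interrupt enable */
 #define TPM_TIS_REG_INT_STATUS 0x10   /* interrupt status */
 #define TPM_TIS_REG_STS        0x18   /* status, burst count, command ready/go */
 #define TPM_TIS_REG_DATA_FIFO  0x24   /* request/response bytes pass through here */
 #define TPM_TIS_REG_DID_VID    0xF00  /* device and vendor ID */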

The lower part of the device is the core TPM that processes the TPM commands (also known as 'ordinals'). More than 100 different commands have been defined for this device. Specs for the ordinals can be found here:

http://www.trustedcomputinggroup.org/files/resource_files/646B5D4D-1D09-3519-AD21C36DEA87B4B8/tpmwg-mainrev62_Part3_Commands.pdf

The TIS interface collects a request message and hands the completed request to the TPM; the TPM processes the command and returns the result message to the TIS, which may then raise an interrupt to notify the driver to pick up the response. Processing a command may take anywhere from less than a second to more than a minute, depending on what kind of operation the TPM is asked to perform. Crypto operations, such as the generation of a 2048-bit RSA key, may take some time.
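
Every request shares the same basic layout: a 2-byte tag, a 4-byte total length and the 4-byte ordinal, all big-endian, followed by command-specific parameters; responses carry a tag, a length and a return code instead. A sketch of the request header:

 /* Sketch of the TPM 1.2 request header; all fields are big-endian on the wire. */
 #include <stdint.h>
 
 struct tpm_req_header {
     uint16_t tag;       /* TPM_TAG_RQU_COMMAND (0x00C1) or the _AUTH1/_AUTH2 variants */
     uint32_t len;       /* total length of the request, including this header */
     uint32_t ordinal;   /* selects the command, e.g. TPM_ORD_PcrRead (0x15) */
     /* command-specific parameters follow */
 } __attribute__((packed));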

In several ways the TPM differs from many of the other devices found in a computer:

- There is explicit firmware (BIOS/UEFI/...) support for the TPM.
 - The BIOS sends a command sequence to the TPM at boot. That same command sequence cannot be sent to the TPM again until the machine is rebooted.
 - The BIOS allows the TPM to be enabled/disabled or activated/deactivated using a menu.
 - The BIOS must send a command to the TPM after ACPI S3 resume (see the command sketch after this list).
- The TPM is reset by a pulse from the CPU/chipset when the machine reboots. This then allows re-initialization.
- The TPM has built-in NVRAM where it stores persistent state.
 - Persistent State of the TPM comprises:
  - Keys: Endorsement Key (typically created during manufacturing), Storage Root Key (SRK), other persisted keys
  - Owner password (a user can take ownership of the TPM and holds it until it is explicitly released)
  - counters
  - internal flags
  - etc.
- The TPM has an owner; the owner is identified by a password and also knows the password of the Storage Root Key (SRK); ownership is held until explicitly released
- The TPM has volatile state that is cleared by a machine reboot. The volatile state of the TPM comprises:
  - Platform Configuration Registers (PCRs; 'hash' registers that are extended using SHA-1 operations)
  - keys currently loaded into the TPM but not persisted; only a limited number of keys can be loaded
  - some internal flags
  - sessions (established by applications to be able to send commands that require 'authorization')
  - etc.
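
As an example of the firmware interaction listed above, the command sequence the BIOS sends centers on TPM_Startup: TPM_Startup(ST_CLEAR) after a reset (sending it a second time fails until the next reboot), TPM_SaveState before entering ACPI S3 and TPM_Startup(ST_STATE) on resume. A sketch of the corresponding request bytes (ordinals per the TPM 1.2 spec):

 /* Sketch: startup-related commands sent by the firmware (TPM 1.2, big-endian). */
 #include <stdint.h>
 
 /* TPM_Startup(ST_CLEAR): tag, len=12, TPM_ORD_Startup (0x99), startupType=0x0001 */
 static const uint8_t tpm_startup_clear[] = {
     0x00, 0xC1, 0x00, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x99, 0x00, 0x01
 };
 
 /* TPM_Startup(ST_STATE): resume from the state saved before ACPI S3 */
 static const uint8_t tpm_startup_state[] = {
     0x00, 0xC1, 0x00, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x99, 0x00, 0x02
 };
 
 /* TPM_SaveState: sent by the firmware before suspending to S3 */
 static const uint8_t tpm_savestate[] = {
     0x00, 0xC1, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x00, 0x00, 0x98
 };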


Phase I

For Phase I of TPM integration into QEMU I am proposing a 'passthrough' driver that enables users to access the host's TPM from within the VM.

Inside a Linux VM, for example, the usual tpm_tis driver (modprobe tpm_tis) will be loaded and talk to an emulated TPM TIS frontend. The TPM TIS frontend will in turn communicate with /dev/tpm0 on the Linux host, which is what the 'passthrough' backend implements. Only one VM on a physical system will be able to use the single host TPM device. Since the persistent state of the TPM (ownership information, persisted keys, etc.) as well as its volatile state (contents of the Platform Configuration Registers (PCRs), loaded keys) cannot migrate along with the VM, migration of a VM using this 'passthrough' driver will be disabled.
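
As a usage illustration, a VM with the passthrough backend might be started roughly as follows (the option and property names sketch the proposed backend/frontend split and are not a final command-line interface):

 qemu-system-x86_64 ... \
     -tpmdev passthrough,id=tpm0,path=/dev/tpm0 \
     -device tpm-tis,tpmdev=tpm0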

The TPM passthrough driver may accept a file descriptor passed via the command line (opened and inherited, for example, from libvirt). Using a file descriptor we can then also access a (software) TPM via a socket (local or TCP/IP). However, to properly support a TPM behind a socket we then have to send additional control messages to the TPM, for example to reset it upon VM reboot. A typical hardware TPM does not support such reset messages and will indicate failure of the reset command (TPM_Init; defined by the TCG with ordinal 0x97). The reset command lets us at least support rebooting of a virtual machine, but it does not enable suspend/resume, let alone migration. The reset command would be sent by the passthrough driver for a socket file descriptor, but must be filtered out as a possible message sent from within the VM (blacklisted ordinal). If the reset command were to fail for a socket file descriptor, the passthrough driver would refuse any further sending of messages to the TPM, assuming an improper implementation of the command in the TPM. Suspend/resume would require a command that lets us store the volatile state of the TPM (suspend) and another command to restore it (resume).
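
A sketch of how the passthrough backend might filter the blacklisted ordinal out of guest-originated messages while still being allowed to send TPM_Init itself (the function and its surroundings are hypothetical, not existing QEMU code):

 /* Sketch: refuse to forward a guest command whose ordinal is blacklisted. */
 #include <stdbool.h>
 #include <stddef.h>
 #include <stdint.h>
 #include <string.h>
 #include <arpa/inet.h>   /* ntohl(); TPM commands are big-endian */
 
 #define TPM_ORD_INIT 0x97   /* TPM_Init; only the backend itself may send this */
 
 static bool tpm_cmd_is_blacklisted(const uint8_t *buf, size_t len)
 {
     uint32_t ordinal;
 
     if (len < 10) {          /* shorter than tag + length + ordinal */
         return true;
     }
     memcpy(&ordinal, buf + 6, sizeof(ordinal));
     ordinal = ntohl(ordinal);
 
     return ordinal == TPM_ORD_INIT;
 }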

Considerations regarding an external TPM accessible via sockets:
- would require non-standard ordinals to save and restore the volatile state for VM suspend/resume support
- should be placed on the same host as QEMU (at least if libvirt manages it)
- to support migration would require shared storage between hosts
 - concurrent access to the TPM state while the TPM of the target VM is started would need to be prevented (locking; see the sketch after this list)
 - the TPM would need to be reset and told to read the volatile state once the target system starts
 - a management layer (libvirt) is needed to properly handle migration
- snapshotting of the VM would have to be prevented
- requires support in the BIOS, i.e., SeaBIOS
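
One way to realize the locking mentioned in the list above would be an advisory lock on the shared state file; a minimal sketch, assuming a hypothetical state file and ignoring that advisory locks are not reliable on every shared filesystem:

 /* Sketch: take an exclusive advisory lock on a shared TPM state file so that
  * the source and target host cannot use the state at the same time. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <sys/file.h>
 #include <unistd.h>
 
 int lock_tpm_state(const char *path)
 {
     int fd = open(path, O_RDWR);
 
     if (fd < 0) {
         return -1;
     }
     if (flock(fd, LOCK_EX | LOCK_NB) < 0) {   /* fail instead of blocking */
         perror("TPM state appears to be in use on another host");
         close(fd);
         return -1;
     }
     return fd;   /* keep the fd open; closing it releases the lock */
 }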

Phase II

In the second phase of TPM integration each QEMU instance will have access to its own private TPM using a different backend built on the 'libtpms' library. The emulated TPM device will then behave like a hardware TPM and require initialization by the firmware, such as SeaBIOS. Phase II will require the implementation of (v)NVRAM support so that the TPM's persistent state can be saved.
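
For orientation, feeding a command through libtpms would look roughly like the sketch below (based on the libtpms headers as I understand them; the authoritative signatures are in libtpms/tpm_library.h):

 /* Sketch: run one TPM command through the embedded libtpms TPM. */
 #include <stdint.h>
 #include <stdio.h>
 #include <libtpms/tpm_types.h>
 #include <libtpms/tpm_library.h>
 #include <libtpms/tpm_error.h>
 
 int run_command(unsigned char *cmd, uint32_t cmd_len)
 {
     unsigned char *resp = NULL;      /* allocated/grown by libtpms */
     uint32_t resp_size = 0, respbufsize = 0;
     TPM_RESULT res;
 
     res = TPMLIB_MainInit();                 /* start the embedded TPM */
     if (res != TPM_SUCCESS) {
         return -1;
     }
     res = TPMLIB_Process(&resp, &resp_size, &respbufsize, cmd, cmd_len);
     if (res == TPM_SUCCESS) {
         printf("TPM response: %u bytes\n", resp_size);
     }
     TPMLIB_Terminate();                      /* shut the embedded TPM down */
     return res == TPM_SUCCESS ? 0 : -1;
 }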

(v)NVRAM Considerations:
- NVRAM layer uses block layer to write data into VM image
 - advantages:
   - block layer supports migration via QEMU's block migration feature
   - block layer supports encryption if QCOW2 is used; this is desirable for encrypting TPM state, but other image types may be problematic
   - block layer supports snapshotting if QCOW2 is used
- NVRAM layer may use ASN.1 (BER) encoding of the data blobs written into the VM image (see the sketch below)
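
To illustrate the kind of encoding meant, a TPM state blob could for example be wrapped as a BER OCTET STRING, i.e. a 0x04 tag followed by a length field and the raw bytes; a minimal sketch for blobs of up to 65535 bytes:

 /* Sketch: wrap a state blob as an ASN.1 BER OCTET STRING using a
  * definite long-form length with two length octets. */
 #include <stddef.h>
 #include <stdint.h>
 #include <string.h>
 
 size_t ber_wrap_octet_string(const uint8_t *blob, uint16_t blob_len,
                              uint8_t *out /* must hold blob_len + 4 bytes */)
 {
     out[0] = 0x04;                   /* OCTET STRING */
     out[1] = 0x82;                   /* long form, two length octets follow */
     out[2] = blob_len >> 8;
     out[3] = blob_len & 0xFF;
     memcpy(&out[4], blob, blob_len);
     return (size_t)blob_len + 4;
 }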


Integrated TPM considerations:
- suspend/resume handled via internal API calls into libtpms
- dependencies on NVRAM layer API and BER visitor for persistent storage handling


Considerations regarding emulation of the TPM in SMM, following a Wikipedia article and a related YouTube video
- Wikipedia: http://en.wikipedia.org/wiki/System_Management_Mode
  - It's not obvious why one would 'forward' TPM requests to a hardware TPM since OSes have TPM drivers
    themselves. It does not seem to buy much.
  - Requirements for emulation:
   - A well-defined interface between the OS and SMM mode to transfer TPM requests and responses
     between the OS TPM driver and SMM mode
   - SMM needs access to dedicated NVRAM to store the emulated TPM state; with that SMM/TPM
     also needs some form of primitive file system
   - The Wikipedia article also mentions that OSes expect that not too much time is spent in SMM,
     otherwise crashes or hangs may follow. Considering long-running TPM operations like
     RSA key creation, one would have to implement some form of cooperative multitasking so
     that the TPM's crypto code, for example, yield()s every few microseconds, stores the register
     state and leaves SMM mode, only to resume later after the yield() and make a few more steps
     until the next yield(). The OS's TPM driver would then feed the SMIs so that the key
     creation can make progress (yeah, right).