Documentation/Platforms/RISCV


Description

RISC-V is an open-source instruction set architecture (ISA). It is modular, with only a small set of mandatory base instructions; all other features are optional extensions that vendors may implement, allowing RISC-V to scale from small embedded systems up to large supercomputers.

Build Directions

For RV64:

 ./configure --target-list=riscv64-softmmu && make

For RV32:

 ./configure --target-list=riscv32-softmmu && make
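Both emulators can also be produced from a single build tree by passing a comma-separated target list (a minimal sketch; add whatever other configure options you normally use):

 ./configure --target-list=riscv32-softmmu,riscv64-softmmu && make -j$(nproc)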

Booting Linux

Booting 64-bit Debian

Follow the instructions on the Debian wiki to boot Debian on QEMU: https://wiki.debian.org/RISC-V

Booting 64-bit Fedora

Download the prebuilt Fedora images from https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/. You will want the Fedora-Minimal-Rawhide-*-fw_payload-uboot-qemu-virt-smode.elf and Fedora-Minimal-Rawhide-*-sda.raw.xz images.

Decompress the rootFS:

  unxz Fedora-Minimal-Rawhide-*-sda.raw.xz

Boot Linux using the RV64GC QEMU system emulator:

  qemu-system-riscv64 \
   -nographic \
   -machine virt \
   -smp 4 \
   -m 2G \
   -kernel Fedora-Minimal-Rawhide-*-fw_payload-uboot-qemu-virt-smode.elf \
   -bios none \
   -object rng-random,filename=/dev/urandom,id=rng0 \
   -device virtio-rng-device,rng=rng0 \
   -device virtio-blk-device,drive=hd0 \
   -drive file=Fedora-Minimal-Rawhide-*-sda.raw,format=raw,id=hd0 \
   -device virtio-net-device,netdev=usernet \
   -netdev user,id=usernet,hostfwd=tcp::10000-:22
The default login is root and the password is riscv.
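The hostfwd option in the command above forwards host TCP port 10000 to the guest's SSH port, so once the guest has booted you can log in from the host:

 ssh -p 10000 root@localhost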

For more details on running Fedora see the Fedora Wiki: https://fedoraproject.org/wiki/Architectures/RISC-V/Installing

Booting 32-bit OpenEmbedded Images

Follow the usual OpenEmbedded build flow using meta-riscv to build for the qemuriscv32 machine. More details on doing this can be found here: https://github.com/riscv/meta-riscv/#build-image

Once the images are built you can boot them using:

 qemu-system-riscv32 \
   -bios ./tmp-glibc/deploy/images/qemuriscv32/fw_jump.elf \
   -device virtio-net-device,netdev=net0 -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::2323-:23 \
   -drive id=disk0,file=./tmp-glibc/deploy/images/qemuriscv32/core-image-minimal-qemuriscv32.ext4,if=none,format=raw \
   -device virtio-blk-device,drive=disk0 \
   -object rng-random,filename=/dev/urandom,id=rng0 \
   -device virtio-rng-device,rng=rng0 \
   -kernel ./tmp-glibc/deploy/images/qemuriscv32/Image \
   -append 'root=/dev/vda rw highres=off  console=ttyS0 mem=512M ip=dhcp earlycon=sbi ' \
   -show-cursor  -nographic -machine virt -m 512 -serial mon:stdio -serial null 

The above command will open up SSH and telnet ports which can be used to communicate with the guest. It will also pass in host entropy to the guest, allowing entropy to be available on boot.
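For example, with the port forwards above (and assuming the image runs the corresponding daemons), you can reach the guest from the host with:

 ssh -p 2222 root@localhost
 telnet localhost 2323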

NOTE: When using OpenEmbedded it is recommended to use the runqemu script to boot QEMU, as it dynamically handles display options as well as advanced networking.

Booting 64-bit OpenEmbedded Images

Follow the usual OpenEmbedded build flow using meta-riscv to build for the qemuriscv64 machine. More details on doing this can be found here: https://github.com/riscv/meta-riscv/#build-image

Once the images are built you can boot them using:

 qemu-system-riscv64 \
   -bios ./tmp-glibc/deploy/images/qemuriscv64/fw_jump.elf \
   -device virtio-net-device,netdev=net0 -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::2323-:23 \
   -drive id=disk0,file=./tmp-glibc/deploy/images/qemuriscv64/core-image-minimal-qemuriscv64.ext4,if=none,format=raw \
   -device virtio-blk-device,drive=disk0 \
   -object rng-random,filename=/dev/urandom,id=rng0 \
   -device virtio-rng-device,rng=rng0 \
   -kernel ./tmp-glibc/deploy/images/qemuriscv64/Image \
   -append 'root=/dev/vda rw highres=off  console=ttyS0 mem=512M ip=dhcp earlycon=sbi ' \
   -show-cursor  -nographic -machine virt -m 512 -serial mon:stdio -serial null 

The above command will open up SSH and telnet ports which can be used to communicate with the guest. It will also pass in host entropy to the guest, allowing entropy to be available on boot.

NOTE: When using OpenEmbedded it is recommended to use the runqemu script to boot QEMU, as it dynamically handles display options as well as advanced networking.
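As a sketch, assuming the machine name from the build above, a runqemu invocation looks like:

 runqemu qemuriscv64 nographic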

Booting 32-bit Buildroot Images

Clone the Buildroot source code and cd into the directory.
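For example, assuming the official Buildroot repository:

 git clone https://git.buildroot.net/buildroot
 cd buildroot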

Generate the default config:

 make qemu_riscv32_virt_defconfig

Build the images:

 make

Boot the images:

 qemu-system-riscv32 \
   -M virt -nographic \
   -bios output/images/fw_jump.elf \
   -kernel output/images/Image \
   -append "root=/dev/vda ro" \
   -drive file=output/images/rootfs.ext2,format=raw,id=hd0 \
   -device virtio-blk-device,drive=hd0 \
   -netdev user,id=net0 -device virtio-net-device,netdev=net0

Booting 64-bit Buildroot Images

Clone the Buildroot source code and cd into the directory.

Generate the default config:

 make qemu_riscv64_virt_defconfig

Build the images:

 make

Boot the images:

 qemu-system-riscv64 \
   -M virt -nographic \
   -bios output/images/fw_jump.elf \
   -kernel output/images/Image \
   -append "root=/dev/vda ro" \
   -drive file=output/images/rootfs.ext2,format=raw,id=hd0 \
   -device virtio-blk-device,drive=hd0 \
   -netdev user,id=net0 -device virtio-net-device,netdev=net0

Microchip PolarFire SoC Icicle Kit

QEMU 5.2.0 supports a new machine: the Microchip PolarFire SoC Icicle Kit. The Icicle Kit board integrates a PolarFire SoC, which combines one SiFive E51 core with four SiFive U54 cores, many on-chip peripherals, and an FPGA fabric.

For more details about the Microchip PolarFire SoC, please see: https://www.microsemi.com/product-directory/soc-fpgas/5498-polarfire-soc-fpga

The Icicle Kit board information can be found here: https://www.microsemi.com/existing-parts/parts/152514

Boot the images:

 qemu-system-riscv64 -M microchip-icicle-kit -smp 5 \
   -bios path/to/hss.bin -sd path/to/sdcard.img \
   -nic user,model=cadence_gem \
   -nic tap,ifname=tap,model=cadence_gem \
   -display none -serial stdio \
   -chardev socket,id=serial1,path=serial1.sock,server,wait \
   -serial chardev:serial1

The BIOS image used by this machine is hss.bin, the Hart Software Services (HSS), which can be built from: https://github.com/polarfire-soc/hart-software-services

As of now the DDR memory controller of the PolarFire SoC is not modeled, and simply creating unimplemented devices does not satisfy the HSS. Since emulating the DDR memory controller is tedious, a patched HSS should be used as the BIOS for this machine. To patch the HSS, open boards/icicle-kit-es/hss_board_init.c in the HSS source tree, find the boardInitFunctions[] array that contains the initialization routines for this board, and remove the line that references HSS_DDRInit.
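As an illustrative sketch only (the struct name, fields, and neighbouring entries shown here are assumptions and vary between HSS versions; the point is simply to delete the element that references HSS_DDRInit):

 /* boards/icicle-kit-es/hss_board_init.c -- illustrative fragment only */
 const struct InitFunction boardInitFunctions[] = {
     /* ... other init routines ... */
     { "DDR Init", HSS_DDRInit, false, false },  /* <-- remove this entry */
     /* ... other init routines ... */
 };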

QEMU does not emulate eMMC, so the SD-card configuration must be used in both the HSS and Yocto BSP builds; the eMMC configuration is not supported.

Instructions to build HSS:

 $ cp boards/icicle-kit-es/def_config.sdcard .config
 $ make BOARD=icicle-kit-es

For the Yocto build, "MACHINE=icicle-kit-es-sd" should be specified, otherwise the rootfs cannot be mounted when booting the Linux kernel. The generated image is named something like mpfs-dev-cli-icicle-kit-es-sd.rootfs.wic. Resize the file with 'qemu-img resize' to a power of two before passing it to QEMU's '-sd' option.
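For example (4 GiB is just an example size; pick the next power of two at least as large as your image):

 qemu-img resize mpfs-dev-cli-icicle-kit-es-sd.rootfs.wic 4G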

The memory is set to 1 GiB by default to match the hardware. A sanity check on the RAM size is performed in the machine init routine to prompt the user to increase the RAM size when less than 1 GiB is configured.

HSS output is on the first serial port (stdio) and U-Boot/Linux output is on the second serial port. OpenSBI output appears on a random serial port due to the lottery mechanism used during multi-core boot.
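Note that because the second serial port is backed by a Unix-socket chardev created with server,wait, QEMU will not start until a client connects to serial1.sock. You can attach to it from another terminal with, for example, socat:

 socat -,rawer UNIX-CONNECT:serial1.sock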

Attaching GDB

To attach GDB to a QEMU RISC-V instance with only a single cluster (every machine except the sifive_u) run these commands from GDB:

 target extended-remote :1234
 info threads

To attach GDB to a QEMU RISC-V instance with multiple clusters (the sifive_u) run these commands from GDB:

 target extended-remote :1234
 add-inferior
 inferior 2
 attach 2
 set schedule-multiple on
 info threads

The above commands assume QEMU's default GDB port of 1234, which is exposed when you run QEMU with the '-s' command line argument.

If you would like QEMU to not run the guest until you have connected GDB, you can specify the '-S' command line argument as well.
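For example, to boot the 64-bit Buildroot images from above with the guest halted until GDB connects (a minimal sketch):

 qemu-system-riscv64 -M virt -nographic -s -S \
   -bios output/images/fw_jump.elf \
   -kernel output/images/Image \
   -append "root=/dev/vda ro" \
   -drive file=output/images/rootfs.ext2,format=raw,id=hd0 \
   -device virtio-blk-device,drive=hd0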
