Internships/ProjectIdeas/QEMUPerformance

From QEMU

Revision as of 06:48, 31 January 2020

Summary: This project explores and analyzes the performance of a software tool. The tool in this case is QEMU, in its two modes of operation: user mode and system mode.

PART I: (user mode)

  • select around a dozen test programs (resembling components of the SPEC benchmark, but they must be open source and preferably license-compatible with QEMU); the test programs should be distributed as follows: 4-5 FPU CPU-intensive, 4-5 non-FPU CPU-intensive, 1-2 I/O-intensive;
  • measure execution time and other performance data in user mode across all platforms for the latest QEMU version:
      - try to improve performance if there is an obvious bottleneck;
      - develop tests that will protect against performance regressions in the future.
  • measure execution time in user mode for selected platforms for all QEMU versions from the last 5 years:
      - confirm performance improvements and/or detect performance degradation.
  • summarize all results in a comprehensive form, including graphics/data visualization.
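The per-version user-mode measurements above could be automated along these lines. This is a minimal sketch: the emulator binary names and the ./benchmark program are hypothetical placeholders, not part of the project description.

```python
import os
import shutil
import statistics
import subprocess
import time


def time_command(cmd, runs=5):
    """Run `cmd` several times; return (min, mean) wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    # The minimum is usually the most stable figure for CPU-bound runs;
    # the mean is kept alongside it for reference.
    return min(samples), statistics.mean(samples)


if __name__ == "__main__":
    # Hypothetical comparison of one benchmark under two installed
    # qemu-mips user-mode builds; names and paths are placeholders.
    for emulator in ("qemu-mips", "qemu-mips-v4.2.0"):
        if shutil.which(emulator) and os.path.exists("./benchmark"):
            best, mean = time_command([emulator, "./benchmark"])
            print(f"{emulator}: min {best:.3f}s, mean {mean:.3f}s")
```

Running the same harness once per checked-out QEMU version would give the longitudinal data for the 5-year comparison.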

PART II: (system mode)

  • measure execution time and other performance data for a boot/shutdown cycle of selected machines for ToT (top of tree):
      - try to improve performance if there is an obvious bottleneck;
      - develop tests that will protect against performance regressions in the future.
  • summarize all results in a comprehensive form.

DELIVERABLES

1) Each target maintainer will be given a list of the top 25 functions in terms of host time spent for each benchmark described in the previous section. Additional information and observations will also be provided if judged useful and relevant.

2) Each machine maintainer (for machines with a successful boot/shutdown cycle) will be given a list of the top 25 functions in terms of host time spent during the boot/shutdown cycle. Additional information and observations will also be provided if judged useful and relevant.
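The "top 25 functions by host time" lists could plausibly be produced with Linux perf. A small sketch of extracting them from `perf report --stdio` text follows; the column layout is assumed from typical perf output and may need adjusting for other perf versions.

```python
import re

# Typical `perf report --stdio` sample line:
#    12.34%  qemu-mips  qemu-mips  [.] helper_lookup_tb_ptr
LINE_RE = re.compile(r"^\s*(\d+\.\d+)%\s+\S+\s+\S+\s+\[\S+\]\s+(\S+)")


def top_functions(report_text, n=25):
    """Return the top `n` (overhead%, symbol) pairs from perf report output."""
    rows = []
    for line in report_text.splitlines():
        m = LINE_RE.match(line)
        if m:
            rows.append((float(m.group(1)), m.group(2)))
    rows.sort(reverse=True)  # highest overhead first
    return rows[:n]
```

A per-target report could then be generated by running each benchmark under `perf record`, piping `perf report --stdio` into this parser, and formatting the result.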

3) The community will be given all devised performance measurement methods in the form of easily reproducible step-by-step setup and execution procedures.

Deliverables should be distributed gradually over a time interval of around two months.

Links:

Details:

  • Skill level: intermediate
  • Languages:
    • C (for code analysis, performance improvements)
    • Python (for automation)
    • potentially JavaScript (d3.js or similar library; for data visualization)
  • Mentor: Aleksandar Markovic (aleksandar.markovic@rt-rk.com)
  • Suggested by: Aleksandar Markovic