
2017-10 Compucon Roofline Capabilities
October 2017

Compucon Roofline Capability

A PC is a PC --> Wrong!

This capability is based on a technical paper published by the University of California at Berkeley in 2009, entitled “Roofline: An Insightful Visual Performance Model for Multicore Architectures”.  The model offers insight into how to improve the performance of multicore microprocessors and how to choose processors for high-performance computing.

In brief, an algorithm written for parallel computing requires a certain amount of computation and a certain amount of data access to carry out that computation on a processor.  The ratio of computation workload to data access is termed “arithmetic intensity” (or “operational intensity”).  This ratio has been shown to be pivotal in deciding whether a given computer processor is fit for the algorithm or otherwise.
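The ratio can be made concrete with a small sketch.  The kernel below (a dot product) and its flop and byte counts are illustrative assumptions, not an example from the Berkeley paper:

```python
def arithmetic_intensity(flops, bytes_moved):
    """Ratio of computation workload to data traffic, in flops per byte."""
    return flops / bytes_moved

# Illustrative kernel: a dot product of two vectors of n doubles performs
# about 2n flops (one multiply and one add per element) and reads about
# 16n bytes (two 8-byte operands per element).
n = 1_000_000
ai = arithmetic_intensity(2 * n, 16 * n)
print(ai)  # 0.125 flops/byte -- a very memory-hungry, low-intensity kernel
```

Kernels like this sit far to the left on a roofline plot, which is why they rarely reach a processor's advertised peak.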

Roofline is a visual representation of the capability of a computer processor.  It is plotted on a graph with arithmetic intensity on the horizontal axis and computation capability on the vertical axis.  Contrary to what most people may think, a given processor does not deliver its designed peak computational capability for all types of algorithms.  Instead, the peak hardware capability is accessible to an algorithm only if the algorithm's arithmetic intensity is higher than the roofline ridge of the hardware.  The roofline traces the peak capability built into the hardware, and the ridge is the point at which that peak drops off to a lower level.  A rule of thumb is that the roofline is flat at high arithmetic intensity and slopes downward at low arithmetic intensity, where memory bandwidth rather than compute becomes the limit.  This reveals a very important point for computing system engineers when specifying a processor to match an algorithm.
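The two roofs described above can be sketched in a few lines: attainable performance is the minimum of the compute roof and the bandwidth roof, and the ridge is where the two meet.  The processor figures below are hypothetical, chosen only to make the arithmetic easy:

```python
def attainable_gflops(intensity, peak_gflops, peak_bw_gbs):
    """Roofline model: performance is capped either by the flat compute
    roof (peak_gflops) or by the sloped memory roof (intensity * bandwidth),
    whichever is lower for the given arithmetic intensity (flops/byte)."""
    return min(peak_gflops, intensity * peak_bw_gbs)

# Hypothetical processor: 100 GFLOP/s peak compute, 25 GB/s memory bandwidth.
# Its ridge sits at 100 / 25 = 4 flops/byte.
for ai in (0.5, 4.0, 16.0):
    print(ai, attainable_gflops(ai, 100.0, 25.0))
# 0.5 -> 12.5  (on the sloped roof: bandwidth-bound)
# 4.0 -> 100.0 (exactly at the ridge)
# 16.0 -> 100.0 (on the flat roof: compute-bound)
```

An algorithm left of the ridge gains nothing from a faster compute unit; only more bandwidth (or a higher-intensity rewrite) helps it.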

The concept is simple, but applying it is not.  Microprocessor designers are the biggest group of people with an interest in this model; high-performance system engineers would be the second main group.

Compucon has grasped this concept and applied it successfully to matching processors with algorithms.  Unfortunately, shrink-wrapped software applications are closed-source, so how their code is written cannot be examined the way open-source code can.  That is, hardware matching for shrink-wrapped software has to be done by trial and error, whereas the match can be predicted for open-source software.

Compucon has further applied this concept to the international Square Kilometre Array (SKA) Science Data Processor (SDP) design investigation.  At the time of writing, we have proposed a model for predicting the computational profile of a pipeline of applications.  Pipeline refers to a specific sequence of application modules that must be executed to reach the solution; the sequence or the modules may be adjusted as more insight is gained.  The proposed model serves the conventional purpose of a model: to get some idea of what will happen before we invest in the full-scale system, and to avoid the risk of the full investment going down the drain.
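One way such a pipeline prediction could be set up is sketched below.  This is not the SDP model itself; the module names, flop counts, and data volumes are purely illustrative assumptions, and the target processor is the same hypothetical 100 GFLOP/s, 25 GB/s machine as before:

```python
def pipeline_profile(modules, peak_gflops, peak_bw_gbs):
    """For each (name, gflop, gbyte) module, estimate its arithmetic
    intensity, its roofline-capped rate, and its runtime on the target
    processor.  Returns a list of (name, intensity, seconds) tuples."""
    report = []
    for name, gflop, gbyte in modules:
        intensity = gflop / gbyte                          # flops per byte
        rate = min(peak_gflops, intensity * peak_bw_gbs)   # roofline cap
        report.append((name, intensity, gflop / rate))
    return report

# Illustrative modules of an imaging-style pipeline (numbers invented).
modules = [
    ("gridding",      500.0, 200.0),   # 500 Gflop, 200 GB moved
    ("fft",           300.0,  50.0),
    ("deconvolution", 800.0,  40.0),
]
for name, ai, secs in pipeline_profile(modules, 100.0, 25.0):
    print(f"{name}: {ai:.2f} flop/byte, est. {secs:.1f} s")
```

Summing the estimated runtimes gives a first-order prediction for the whole pipeline, and the per-module intensities show which stages would sit under the sloped roof on a candidate processor before any hardware is purchased.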