This README describes the work of two GSoC projects in the years 2019 and 2020 related to Amalthea-based response time analyses.

2019 Google Summer of Code (CPU-GPU Response Time and Mapping Analysis)


2020 Google Summer of Code (Non-Preemptive / Limited preemptive in Response Time Analysis)


Table of Contents

1. Milestone Overview

2. Scope & contribution to the open source community

3. Contents

4. Remarks

1. Milestone Overview

GSOC 2019

  • Response Time Analysis_CPU Part (Phase 1)
  • Refine Previous Phase & E2E Latency Foundation (EC, IC, LET) (Phase 2)
  • Finalize LET, EC, IC and the corresponding UI part (Phase 3)

GSOC 2020

  • Response Time Analysis for non-preemptive environment (Phase 1)
  • Response Time Analysis for limited-preemptive environment (Phase 2)
  • Blocking analysis for non-preemptive and limited-preemptive environment (Phase 3)

2. Scope & contribution to the open source community

GSOC 2019

The current APP4MC library provides several methods (through the Util package) for deriving the execution time of a task, a runnable, or ticks (pure computation), but methods for response time are not yet available. The reason is that response time analysis varies depending on the analyzed model, so it is hard to generalize. However, since platforms are evolving from homogeneous to heterogeneous, analysis methodologies have become much more sophisticated, so a CPU response time analysis that can be reused for different mapping analyses with different processing unit types (e.g., GPU) is necessary.

In this project, a standardized response time analysis methodology (Mathai Joseph and Paritosh Pandya, 1986) which involves a complex algorithm is used. In addition, a class, CpuRTA, designed for Genetic Algorithm (GA) mapping, is provided. Since a heterogeneous platform usually requires a different analysis methodology for each processing unit type (e.g., CPU & GPU), a class that can be used with GA mapping and has a built-in general analysis methodology is very helpful and saves a lot of time that would otherwise be spent implementing the same algorithm for tasks mapped to a particular type of processing unit (e.g., CPU). Along with these, another class, RuntimeUtilRTA, which supports the CpuRTA class and provides several ways to calculate the execution time of a task, is also provided. The execution time calculation methodology can differ depending on the execution case (e.g., worst case, best case, average case), the transmission type (e.g., synchronous, asynchronous), or the mapping model. This class can be modified and reused for other analysis models if only the method that takes care of a Runnable's execution time is adjusted.

GSOC 2020

There are several papers about response time analysis, but little open source code that developers can use as a reference or implementation can be found on the internet. This project is a contribution to the open source community in this regard. In this project, you can find methods to calculate response time in different preemption environments: non-preemptive, cooperative, or a mix of the above.

3. Contents and how to use

GSOC 2019

First of all, you will need to pull the Amalthea tools repository.

  1. Under the ‘responseTime-analyzer’>‘plugins’>‘src’>...>‘gsoc_rta’ folder, there is the ‘CpuRTA’ class. This is the implementation source file. By running it, one can derive the total sum of the response times of the given model's tasks.

  2. Under the ‘responseTime-analyzer’>‘plugins’>‘src’>...>‘gsoc_rta’>‘ui’ folder, there is the ‘RTApp_WATERS19’ class. This is the Java Swing UI source file that corresponds to ‘CpuRTA’. This UI was created based on the WATERS19 project. By running it, one may get more detailed visuals of the result of the ‘CpuRTA’ class. (Refer to ‘APP4RTA_1.0_Description.pdf’ for more details: ‘responseTime-analyzer’>‘plugins’>‘doc’>‘APP4RTA_1.0_Description.pdf’.)

Since the target of implementing a heterogeneous platform is to achieve better performance and efficiency, simply calculating response time is not enough. To realize an optimized response time analysis, different mapping analyses of the same given model according to the Genetic Algorithm should be taken into account. The Genetic Algorithm maps tasks to different processing units in the form of an integer array, so that the total sum of each task's response time for each GA generation can be computed and compared to come up with a better solution. For this reason, a public method that returns the total sum of each task's response time, and the relevant private methods that support it, are needed. The corresponding methods are listed below.
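The GA fitness idea above can be sketched as follows: given any per-task response time analysis, sum the response times under a candidate mapping so that GA generations can be compared. This is only an illustration; the `Rta` interface here is a hypothetical stand-in for CpuRTA, not its actual API.

```java
// Illustrative GA fitness sketch (hypothetical names, not the actual CpuRTA API).
public class FitnessSketch {
    /** Stand-in for a response time analysis: maps (task index, mapping) to a response time. */
    public interface Rta {
        double responseTime(int taskIndex, int[] mapping);
    }

    /** Total sum of response times under one mapping; a GA minimizes this across generations. */
    public static double totalResponseTime(Rta rta, int[] mapping, int numTasks) {
        double sum = 0;
        for (int t = 0; t < numTasks; t++) {
            sum += rta.responseTime(t, mapping);
        }
        return sum;
    }
}
```

A GA would call `totalResponseTime` once per candidate integer array and keep the mappings with the smallest sums.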

Refer to javadoc for more details.

Calculate the total sum of response times of the tasks of the given Amalthea model with a GA mapping model


Calculate response time of the given task of the given Amalthea model with a GA mapping model


Sort the given list of tasks in order of shorter period first (Rate Monotonic Scheduling).

preciseTestCPURT (Response Time Analysis Equation Explanation)

Calculate response time of the observed task according to the periodic tasks response time analysis algorithm.

Ri = Ci + Σ j∈HP(i) ⌈Ri / Tj⌉ * Cj (the standardized response time analysis recurrence; Mathai Joseph and Paritosh Pandya, 1986)
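The recurrence above can be solved by fixed-point iteration, starting from Ri = Ci. Below is a minimal, self-contained sketch using plain doubles instead of Amalthea objects; it is not the actual CpuRTA code, just the bare algorithm.

```java
// Minimal sketch of the Joseph & Pandya recurrence (illustrative, not CpuRTA itself).
public class RtaRecurrence {
    /**
     * Iterates R = C + sum_j ceil(R / Tj) * Cj until a fixed point is reached.
     * hpC / hpT are the WCETs and periods of the higher-priority tasks HP(i).
     * Returns -1 if the iteration exceeds the deadline (task unschedulable).
     */
    public static double responseTime(double c, double[] hpC, double[] hpT, double deadline) {
        double r = c;
        while (true) {
            double next = c;
            for (int j = 0; j < hpC.length; j++) {
                next += Math.ceil(r / hpT[j]) * hpC[j]; // interference from task j
            }
            if (next > deadline) return -1; // no convergence within the deadline
            if (next == r) return r;        // fixed point: Ri found
            r = next;
        }
    }
}
```

For example, a task with C = 3 under two higher-priority tasks with C = {1, 2}, T = {4, 6} converges to R = 10.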


Calculate execution time of the given task under one of the several configurations.


Find out whether the given triggering task (one that has an InterProcessTrigger) triggers a GPU task which is newly mapped to the CPU.


Calculate execution time of the given runnableList in a synchronous manner.


Calculate execution time of the given runnableList in an asynchronous manner.


Calculate the execution time of the given task which was originally designed for the GPU but newly mapped to the CPU by Genetic Algorithm mapping.


Calculate execution time of the given runnable.


Calculate memory access time of the observed task.


Calculate memory access time of the observed runnable.

Read(Write)_Access_Time = Round_UP(Size_of_Read(Write)_Labels / 64.0 Bytes) * (Read(Write)_Latency / Frequency)
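A minimal sketch of this formula with hypothetical parameter names (the actual RuntimeUtilRTA reads label sizes, latencies, and frequencies from the Amalthea model):

```java
// Hedged sketch of the memory access time formula above (hypothetical helper,
// not the actual RuntimeUtilRTA API).
public class MemAccessTime {
    /**
     * accessTime = ceil(labelBytes / 64.0) * (latencyCycles / frequencyHz), in seconds.
     * The 64-byte divisor models the transfer granularity used in the formula.
     */
    public static double accessTime(long labelBytes, double latencyCycles, double frequencyHz) {
        double transfers = Math.ceil(labelBytes / 64.0); // number of 64-byte transfers
        return transfers * (latencyCycles / frequencyHz);
    }
}
```

E.g., 128 bytes of labels at 8 cycles latency on a 1 GHz core cost 2 * 8 ns = 16 ns.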


Identify whether the given task has an InterProcessTrigger or not.

User Interface Window

[APP4RTA_1.0_Description](Add Ref here)(‘responseTime-analyzer’>‘plugins’>‘doc’>‘APP4RTA_1.0_Description.pdf’)

GSOC 2020

First of all, you will need to pull the Amalthea tools repository.

Check out the app4mc0.9.8/gsoc20npRTA branch.

This is the branch that contains the GSoC 2020 implementation at the time the project was submitted. It may also be merged to master soon.

  1. Under the responseTime-analyzer > plugins > src >...> gsoc_rta folder, you will find the NPandPRta class. This is the implementation source file. One can calculate a task's response time in different environments using it.

  2. Under the responseTime-analyzer > plugins > src >...> gsoc_rta > ‘test’ folder, there is the NPandPNumerical class. This is a numerical example of how the functions/equations used in ‘NPandPRta’ work.

  3. Under the responseTime-analyzer > plugins > src >...> gsoc_rta folder, the Blocking class is also located. Using it, you can calculate the local and global blocking time of a task.

The ultimate target is to implement response time analysis in a mixed environment, where a task can be surrounded by tasks of different preemption types: preemptive, cooperative, or non-preemptive.

The implementation is located in the NPandPRta class, which includes several response time analysis methods.

Refer to each method's javadoc for more info; mentioned here are the most important/useful functions. Go to this readthedoc if you want the full documentation.

Something needs to be mentioned before you try: this class was created based on the WATERS 2019 model, and the functions were tested using that model. However, you should be able to utilize this class without many problems as long as you provide 3 input parameters:

  • Amalthea model - the model on which the method operates; this should be given when you create the class object
  • Integer array (ia) - a representation of how tasks are allocated to cores: the index of each element represents a task, and its value represents a core. E.g., {0,2,3,1,1,2}: the first task is assigned to the first core of the model, the 2nd task to the 3rd core, the 3rd task to the 4th core, and so on
  • Task - the task whose response time you want to calculate

Additionally, there are 3 other parameters:

  • executionCase: TimeType.WCET - use this value in most cases, since you usually want the worst case response time; change it to BCET if you need something different.
  • pTypeArray: a customized array that you can use to define tasks' preemption types. Leave it null if unused. Check the javadoc for more info.
  • usePtypeArray: a boolean variable indicating whether you want to use pTypeArray. Leave it false if unused.
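As a quick illustration of how the integer mapping array is interpreted, the following standalone sketch (a hypothetical helper, not part of NPandPRta) groups task indices by the core each is assigned to:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative decoding of the integer mapping array: ia[i] is the core of task i.
// (Hypothetical helper; the real classes read tasks from an Amalthea model.)
public class MappingArray {
    /** Returns, for each core, the list of task indices assigned to it. */
    public static List<List<Integer>> tasksPerCore(int[] ia, int numCores) {
        List<List<Integer>> cores = new ArrayList<>();
        for (int c = 0; c < numCores; c++) {
            cores.add(new ArrayList<>());
        }
        for (int task = 0; task < ia.length; task++) {
            cores.get(ia[task]).add(task); // task index -> its assigned core
        }
        return cores;
    }
}
```

With ia = {0,2,3,1,1,2} and 4 cores, core 0 gets task 0, core 1 gets tasks 3 and 4, core 2 gets tasks 1 and 5, and core 3 gets task 2, matching the example above.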

Below are the important functions that you will probably use most of the time. For the list of all functions, refer to the javadoc, the readthedoc, or open the class; many comments are left there.


Calculate the response time of a task in a mixed environment. Pass in the task, the integer mapping array, and the model (again, this class is made mainly for WATERS 2019 derived models, but it should work on others as well) and you get your response time. There is also an option to input your own preemption type array, so you can change a task's preemption type without changing it in the model.


This function sets a boolean variable that enables/disables the schedulability check. If you set it to false, every RTA function will return its value without checking whether the response time is bigger than the task's period.


Calculate the response time of a task in a preemptive environment via the level-i busy window technique. A fairly basic implementation.


Calculate the response time of a task in a preemptive environment via the recurrence relation. Again, a very basic implementation of how response time is calculated. It should give the same result as the level-i busy window method.
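To sketch the equivalence claimed here, below is a minimal, self-contained level-i busy window computation for the fully preemptive case, using plain doubles instead of Amalthea objects. It assumes total utilization below 1 (so the window converges) and is illustrative only, not the actual NPandPRta code.

```java
// Minimal level-i busy window sketch for preemptive fixed-priority scheduling.
// (Illustrative; not the actual NPandPRta implementation.)
public class LevelIBusyWindow {
    /** Length L of the level-i busy window: L = sum over j in HP(i) ∪ {i} of ceil(L/Tj)*Cj. */
    static double busyWindow(double ci, double ti, double[] hpC, double[] hpT) {
        double l = ci;
        while (true) {
            double next = Math.ceil(l / ti) * ci; // instances of task i itself
            for (int j = 0; j < hpC.length; j++) {
                next += Math.ceil(l / hpT[j]) * hpC[j]; // higher-priority interference
            }
            if (next == l) return l;
            l = next;
        }
    }

    /** Worst-case response time: max over instances k in the window of finish(k) - k*Ti. */
    public static double responseTime(double ci, double ti, double[] hpC, double[] hpT) {
        double L = busyWindow(ci, ti, hpC, hpT);
        int instances = (int) Math.ceil(L / ti); // instances of task i inside the window
        double wcrt = 0;
        for (int k = 0; k < instances; k++) {
            // finish time of instance k: f = (k+1)*Ci + interference within [0, f)
            double f = (k + 1) * ci;
            while (true) {
                double next = (k + 1) * ci;
                for (int j = 0; j < hpC.length; j++) {
                    next += Math.ceil(f / hpT[j]) * hpC[j];
                }
                if (next == f) break;
                f = next;
            }
            wcrt = Math.max(wcrt, f - k * ti);
        }
        return wcrt;
    }
}
```

When each task's response time stays within its period, the result coincides with the plain recurrence (e.g., C = 3, T = 20 under higher-priority C = {1, 2}, T = {4, 6} gives 10 in both formulations).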


Using the well-known semantics where a task runs as follows: READ -> EXECUTION -> WRITE. Calculate the elements of each step and sum them all to get the task's execution time.
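A minimal sketch of this read-execute-write sum, with hypothetical parameter names (the real implementation takes label sizes, ticks, latencies, and frequencies from the Amalthea model):

```java
// Hedged sketch of read-execute-write execution time (hypothetical helper,
// not the actual NPandPRta API).
public class ReadExecWrite {
    /** Label access time for one phase: ceil(bytes / 64) 64-byte transfers at the given latency. */
    static double phaseAccessTime(long labelBytes, double latencyCycles, double hz) {
        return Math.ceil(labelBytes / 64.0) * (latencyCycles / hz);
    }

    /** READ -> EXECUTION -> WRITE: sum of the three phases, in seconds. */
    public static double executionTime(long readBytes, long writeBytes,
                                       double ticks, double latencyCycles, double hz) {
        double read  = phaseAccessTime(readBytes, latencyCycles, hz);
        double exec  = ticks / hz; // pure computation (ticks at the core frequency)
        double write = phaseAccessTime(writeBytes, latencyCycles, hz);
        return read + exec + write;
    }
}
```

E.g., on a 1 GHz core with 10-cycle memory latency, 64 read bytes + 1000 ticks + 128 write bytes give 10 ns + 1000 ns + 20 ns = 1.03 µs.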

Blocking analysis: calculate the blocking time of semaphores (critical sections) when they exist; otherwise, calculate the time tasks have to wait due to global resource occupancy (a task has to wait because another task is reading/writing a label). As before, this class is also created based on the WATERS 2019 model.

Again, this only lists the important/useful functions. For more info, please refer to the functions' javadoc and the readthedoc.


Calculate a task's global blocking time (time blocked by tasks from other cores) due to semaphore locks. If there is no semaphore, the function will calculate the blocking time due to resources being read/written by other tasks.

FYI, the blocking policy is the Priority Ceiling Protocol.


Same as getGlobalBlockingTime, but this time the blocking time due to local tasks (tasks within the same core) is calculated.

4. Remarks


  • In the previous phase, the CPU response time analysis was done without considering the situation where GPU tasks are mapped to the CPU by a new integer array generation. This was rather inaccurate, since a GPU task contains offloading runnables which are used to copy-in and copy-out local memory when it is mapped to the GPU. Not only should these runnables be omitted, but the labels from the triggering task should also be taken into account so that the GPU task newly mapped to the CPU can access the specified memory. Therefore, a function “setGTCL(final Amalthea model)” was created that collects the needed labels and saves them to a HashMap for each GPU task.


  • The getExecutionTimeForGPUTaskOnCPU method only considers a GPU-original task's associated labels and ticks and ignores its offloading runnables.


  • In this implementation, the function allows all higher-priority tasks on the same core to preempt a lower-priority task at runnable bounds. No preemption threshold is implemented here. The implementation for the cooperative part uses equation 12 in this ref, but with the change of j: P_j > P_i instead of j: P_j > θ_i. Preemption thresholds have proved superior in several response time analysis studies; however, the main focus of this project is cooperative preemption, not thresholds. This is one of the main items of future work if anyone ever returns to this topic.