Title: Priority L2 cache design for time predictability

Authors: Jun Yan; Wei Zhang

Addresses: Mathworks Inc., 3 Apple Hill Drive, Natick, MA 01760, USA; Department of Electrical and Computer Engineering, Virginia Commonwealth University, Richmond, VA 23284, USA

Abstract: L2 caches are usually unified, and the possible interference between instructions and data makes it very hard, if not impossible, to perform timing analysis for unified L2 caches. This paper proposes a priority L2 cache to achieve both time predictability and high performance for real-time systems. The priority cache allows the instruction and data streams to share the aggregate L2 cache space while preventing them from replacing each other at runtime. While separate L2 caches can also achieve time predictability, our performance evaluation shows that the instruction priority cache (i.e., giving instructions priority over data) outperforms separate L2 caches. Compared to a unified L2 cache, the instruction priority cache degrades performance by only 1.1% on average. Moreover, we implement a prototype of the priority L2 cache on a Virtex-6 FPGA and find that its hardware overhead is very small.
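To make the abstract's replacement rule concrete, the following is a minimal behavioural sketch (not the paper's actual design or RTL) of one set of a hypothetical instruction-priority L2 cache: instruction fills may evict any line, but data fills may only claim invalid or data lines, so data can never displace cached instructions; if a set holds only instruction lines, the data access simply bypasses the cache. All class and method names here are illustrative assumptions.

```python
# Behavioural sketch of one set of an instruction-priority L2 cache.
# Hypothetical model: data fills never evict instruction lines, while
# instruction fills may evict any line (LRU among allowed victims).

class PrioritySet:
    def __init__(self, ways=4):
        # Each way is None (invalid) or a dict: tag, is_instr, last_used.
        self.lines = [None] * ways
        self.clock = 0

    def access(self, tag, is_instr):
        """Return 'hit', 'fill', or 'bypass' for one access to this set."""
        self.clock += 1
        for line in self.lines:
            if line is not None and line["tag"] == tag:
                line["last_used"] = self.clock
                return "hit"
        # Miss: invalid lines are always usable as fill targets.
        candidates = [i for i, l in enumerate(self.lines) if l is None]
        if not candidates:
            # Instructions may replace anything; data may replace only data.
            candidates = [i for i, l in enumerate(self.lines)
                          if is_instr or not l["is_instr"]]
        if not candidates:
            # Set is full of instruction lines: data bypasses the L2,
            # so it cannot perturb the instruction working set.
            return "bypass"
        victim = min(candidates,
                     key=lambda i: 0 if self.lines[i] is None
                     else self.lines[i]["last_used"])
        self.lines[victim] = {"tag": tag, "is_instr": is_instr,
                              "last_used": self.clock}
        return "fill"
```

Because data can never evict instructions, the worst-case instruction miss behaviour can be analysed independently of the data stream, which is the time-predictability property the abstract claims.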

Keywords: priority cache; time predictability; real-time systems; L2 caches; performance evaluation; timing analysis.

DOI: 10.1504/IJES.2016.080383

International Journal of Embedded Systems, 2016, Vol. 8, No. 5/6, pp. 427–439

Received: 26 Dec 2013
Accepted: 12 Jun 2014

Published online: 21 Nov 2016
