Improving Map-Reduce for GPUs with cache
Online publication date: Sat, 04-Jul-2015
by Arun Kumar Parakh; M. Balakrishnan; Kolin Paul
International Journal of High Performance Systems Architecture (IJHPSA), Vol. 5, No. 3, 2015
Abstract: Applications need specific or custom optimisations to fully exploit the compute capabilities of the underlying hardware, which is often a very tedious task for the programmer. Moreover, many of these applications do not scale well with data size. The Map-Reduce (MR) framework provides a high level of abstraction for mapping such applications onto distributed/parallel architectures, but at a large performance penalty. We analyse a state-of-the-art MR framework to assess this penalty. The primary objective of this work is to reduce the performance gap between the MR framework and native compute unified device architecture (CUDA) implementations of the applications (onlyCUDA). This work reports the deployment of three applications on graphics processing units (GPUs) using the MR framework. We study the performance of these applications on modern GPUs with different cache configurations. The results show that the performance of applications under the MR framework does not decline much if the reconfigurable cache of modern GPUs is utilised properly. We show penalty reductions of 5×, 6.45× and 15.87× for the Smith-Waterman (SW) algorithm, N-body (NB) simulation, and Blowfish (BF) algorithm, respectively.
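The Map-Reduce abstraction the paper evaluates can be illustrated with a minimal sketch (a generic sequential illustration of the programming model, not the authors' GPU framework): the programmer supplies only a `mapper` that emits key/value pairs and a `reducer` that folds values per key, while the framework handles grouping and scheduling. On a GPU framework the map and reduce stages would run as parallel kernels.

```python
from collections import defaultdict
from functools import reduce as fold

def map_reduce(data, mapper, reducer):
    """Generic Map-Reduce skeleton: emit (key, value) pairs,
    group values by key, then fold each group with the reducer.
    This runs sequentially; a GPU MR framework would launch the
    map and reduce phases as parallel kernels."""
    groups = defaultdict(list)
    for item in data:
        for key, value in mapper(item):
            groups[key].append(value)
    return {key: fold(reducer, values) for key, values in groups.items()}

# Word count, the canonical Map-Reduce example.
docs = ["map reduce on gpu", "gpu cache helps map reduce"]
counts = map_reduce(
    docs,
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda a, b: a + b,
)
print(counts["map"], counts["gpu"])  # 2 2
```

The abstraction's cost is visible even in this sketch: the intermediate grouping stage materialises every emitted pair, which is exactly the kind of memory traffic that a well-configured GPU cache can absorb.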