Optimising MPI tree-based communication for NUMA architectures
by Christer Karlsson; Zizhong Chen
International Journal of Autonomous and Adaptive Communications Systems (IJAACS), Vol. 8, No. 4, 2015

Abstract: Today's computer clusters are often composed of many multi-core processors networked together. In this architecture, communication between cores on different nodes is often an order of magnitude slower than communication between cores on the same node, and cores on the same processor communicate faster than cores on different processors within a node. Most MPI implementations, however, assume a homogeneous network. In this paper, we treat a multi-core node as a heterogeneous unit and optimise MPI scatter/gather communications by scheduling them with topology information. We demonstrate that previous heuristics for heterogeneous clusters do improve performance, but might not produce optimal results for communications on multi-core nodes. Our solution modifies the fastest-edge-first heuristic to account for how many messages can be sent in parallel without degrading bandwidth. We achieve 20% to 30% performance gains over the standard MPI scatter/gather implementation on clusters of homogeneous, multi-core nodes.
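As a hedged illustration of the topology-aware idea only (this is not the authors' modified fastest-edge-first scheduler), the following C sketch performs a two-level MPI scatter: the root first scatters node-sized blocks to one leader rank per node over the slower inter-node network, and each leader then scatters within its node over the faster shared-memory links. The helper name two_level_scatter, the assumption that the root is world rank 0, and the rank-block data layout are illustrative assumptions, not details taken from the paper.

/*
 * Minimal sketch of a topology-aware two-level MPI scatter.
 * Illustrates exploiting the intra-node / inter-node bandwidth gap;
 * it is NOT the paper's fastest-edge-first scheduling algorithm.
 * Assumptions: root is world rank 0, all nodes host the same number
 * of ranks, and ranks are block-distributed across nodes.
 */
#include <mpi.h>
#include <stdlib.h>

/* Scatter `count` ints per rank from world rank 0. */
void two_level_scatter(const int *sendbuf, int count, int *recvbuf)
{
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split ranks by shared-memory node (MPI-3). */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* One leader per node participates in the inter-node phase. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    int *node_chunk = NULL;
    if (node_rank == 0)
        node_chunk = malloc((size_t)node_size * count * sizeof(int));

    /* Phase 1: root scatters node-sized blocks over the slow network
       to one leader per node.  Assumes sendbuf is laid out so that
       consecutive node_size*count blocks belong to consecutive nodes. */
    if (node_rank == 0)
        MPI_Scatter(sendbuf, node_size * count, MPI_INT,
                    node_chunk, node_size * count, MPI_INT,
                    0, leader_comm);

    /* Phase 2: each leader scatters within its node over the fast
       shared-memory links. */
    MPI_Scatter(node_chunk, count, MPI_INT, recvbuf, count, MPI_INT,
                0, node_comm);

    free(node_chunk);
    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
}

MPI_Comm_split_type with MPI_COMM_TYPE_SHARED is the standard MPI-3 way to obtain the node-level topology information that this kind of scheme, and the scheduling approach in the paper, relies on.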

Online publication date: Fri, 27-Nov-2015
