Result: Compute units in OpenMP: extensions for heterogeneous parallel programming

Title:
Compute units in OpenMP: extensions for heterogeneous parallel programming
Contributors:
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya. PM - Programming Models
Publisher Information:
John Wiley & Sons
Publication Year:
2024
Collection:
Universitat Politècnica de Catalunya, BarcelonaTech: UPCommons - Global access to UPC knowledge
Document Type:
Article in journal/newspaper
File Description:
22 p.; application/pdf
Language:
English
Relation:
info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-107255GB-C22/ES/UPC-COMPUTACION DE ALTAS PRESTACIONES VIII/; Gonzalez, M.; Morancho, E. Compute units in OpenMP: extensions for heterogeneous parallel programming. "Concurrency and computation: practice and experience", 10 January 2024, vol. 36, no. 1, article e7885.; http://hdl.handle.net/2117/394656
DOI:
10.1002/cpe.7885
Rights:
Attribution-NonCommercial-NoDerivatives 4.0 International ; http://creativecommons.org/licenses/by-nc-nd/4.0/ ; Open Access
Accession Number:
edsbas.C63D29AA
Database:
BASE

Further Information

This article evaluates the current support in OpenMP 5.2 for heterogeneous applications that simultaneously activate host and device computing units (e.g., CPUs, GPUs, or FPGAs). The article identifies limitations in the current OpenMP specification and describes the design and implementation of novel OpenMP extensions and runtime support for heterogeneous parallel programming. The Compute Unit (CU) abstraction is introduced into the OpenMP programming model; a CU is defined as an aggregation of computing elements (e.g., CPUs, GPUs, FPGAs). On top of CUs, the article describes dynamic work-sharing constructs and schedulers that address the inherent differences in compute power between host and device CUs. New constructs and the corresponding runtime support are described for the new abstractions. The article evaluates a hybrid multilevel parallelization of the NPB-MZ benchmark suite. The implementation exploits both coarse-grain and fine-grain parallelism, mapped to CUs of different natures (GPUs and CPUs), with all CUs activated through the new extensions and runtime support. Hybrid and nonhybrid executions are compared under two state-of-the-art work-distribution schemes (Static and Dynamic Task schedulers). On a computing node composed of one AMD EPYC 7742 @ 2.25 GHz (64 cores, 2 threads/core, 128 threads per node) and two AMD Radeon Instinct MI50 GPUs (32 GB), hybrid executions achieve speedups from 1.08 up to 3.18 with respect to a nonhybrid GPU implementation, depending on the number of activated CUs.

This work was supported by the Spanish Ministry of Science and Technology (PID2019-107255GB).

Peer Reviewed

Postprint (published version)
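The syntax of the article's new CU constructs is not reproduced in this record. As a rough point of reference only, the baseline pattern the extensions build upon, overlapping host and device execution with standard OpenMP 5.2 constructs, can be sketched as below; the function name, array names, and the fixed host/device split point are illustrative assumptions, not taken from the article.

/* Minimal sketch, assuming standard OpenMP 5.2 offloading support:
 * one device CU and the host CPU CU work on disjoint chunks at the
 * same time.  The static `cutoff` split stands in for the kind of
 * work distribution the article's schedulers automate. */
#include <omp.h>

void hybrid_saxpy(int n, float a, const float *x, float *y)
{
    int cutoff = n / 4;   /* assumed static host/device split point */

    /* Device CU: offload the first chunk; `nowait` turns the target
     * region into an asynchronous target task so the host continues. */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:cutoff]) map(tofrom: y[0:cutoff]) nowait
    for (int i = 0; i < cutoff; ++i)
        y[i] = a * x[i] + y[i];

    /* Host CU: process the remaining chunk with the CPU threads
     * while the device works on its part. */
    #pragma omp parallel for
    for (int i = cutoff; i < n; ++i)
        y[i] = a * x[i] + y[i];

    /* Wait for the deferred target task before y is used again. */
    #pragma omp taskwait
}

In this baseline the split between host and device is fixed by hand, which is exactly the limitation the article's CU abstraction and dynamic work-sharing schedulers are meant to address.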