Cache-oblivious matrix algorithms in the age of multicores and many cores
dc.contributor.author | Heinecke, Alexander | en_GB |
dc.contributor.author | Trinitis, Carsten | en_GB |
dc.date.accessioned | 2013-03-25T11:27:47Z | |
dc.date.available | 2013-03-25T11:27:47Z | |
dc.date.issued | 2012 | |
dc.identifier.citation | Heinecke, A. and Trinitis, C. (2012), 'Cache-oblivious matrix algorithms in the age of multicores and many cores', Concurrency and Computation: Practice and Experience. doi: 10.1002/cpe.2974 | en_GB |
dc.identifier.issn | 1532-0626 | |
dc.identifier.doi | 10.1002/cpe.2974 | |
dc.identifier.uri | http://hdl.handle.net/10547/275816 | |
dc.description.abstract | This article highlights the issue of the upcoming wider single-instruction, multiple-data units and the steadily increasing core counts on contemporary and future processor architectures. We present the recent port of the cache-oblivious algorithms implemented in our TifaMMy code to four architectures, together with the latest results: SGI's UltraViolet distributed shared-memory machine, Intel's latest x86 architecture code-named Sandy Bridge, AMD's new Bulldozer architecture, and Intel's future Many Integrated Core architecture. TifaMMy's matrix multiplication and LU decomposition routines have been adapted and tuned for these architectures. Results are discussed and compared with the vendors' architecture-specific optimized libraries, Intel's Math Kernel Library and the AMD Core Math Library, for both a standard C++ version relying on the compiler's vectorization switches and TifaMMy's highly optimized vector-intrinsics version. We provide insights into architectural properties and comment on the feasibility of heterogeneous cores and accelerators, namely graphics processing units. Besides bare-metal performance, the test platforms' ease of use is analyzed in detail, and the portability of our approach to new and upcoming silicon is discussed with regard to the effort required at different code-change abstraction levels. | |
dc.language.iso | en | en |
dc.publisher | John Wiley & Sons | en_GB |
dc.relation.url | http://doi.wiley.com/10.1002/cpe.2974 | en_GB |
dc.rights | Archived with thanks to Concurrency and Computation: Practice and Experience | en_GB |
dc.subject | shared-memory platforms | en_GB |
dc.subject | cache oblivious | en_GB |
dc.subject | block recursive | en_GB |
dc.subject | linear algebra | en_GB |
dc.subject | performance | en_GB |
dc.subject | parallelization | en_GB |
dc.title | Cache-oblivious matrix algorithms in the age of multicores and many cores | en |
dc.type | Article | en |
dc.identifier.journal | Concurrency and Computation: Practice and Experience | en_GB |
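Note: the following is a minimal, generic sketch of the cache-oblivious, block-recursive matrix multiplication approach named in the abstract; it is not TifaMMy's implementation, and the function name matmul_rec, the base-case size of 32, and the row-major layout are illustrative assumptions only. The recursion splits the largest of the three problem dimensions until the sub-problem fits a small base case, so the working set eventually fits every cache level without hard-coded tile sizes.

// Illustrative sketch (not TifaMMy's code): cache-oblivious, block-recursive C += A * B.
#include <vector>
#include <cstddef>
#include <iostream>

// C (m x n) += A (m x k) * B (k x n); lda/ldb/ldc are the leading dimensions
// of the full row-major matrices, so sub-blocks are addressed in place.
static void matmul_rec(const double* A, const double* B, double* C,
                       std::size_t m, std::size_t n, std::size_t k,
                       std::size_t lda, std::size_t ldb, std::size_t ldc)
{
    const std::size_t base = 32;  // illustrative base-case size: a plain loop nest the compiler can vectorize
    if (m <= base && n <= base && k <= base) {
        for (std::size_t i = 0; i < m; ++i)
            for (std::size_t p = 0; p < k; ++p)
                for (std::size_t j = 0; j < n; ++j)
                    C[i * ldc + j] += A[i * lda + p] * B[p * ldb + j];
        return;
    }
    if (m >= n && m >= k) {            // split A and C along the rows
        std::size_t m2 = m / 2;
        matmul_rec(A,            B, C,            m2,     n, k, lda, ldb, ldc);
        matmul_rec(A + m2 * lda, B, C + m2 * ldc, m - m2, n, k, lda, ldb, ldc);
    } else if (n >= k) {               // split B and C along the columns
        std::size_t n2 = n / 2;
        matmul_rec(A, B,      C,      m, n2,     k, lda, ldb, ldc);
        matmul_rec(A, B + n2, C + n2, m, n - n2, k, lda, ldb, ldc);
    } else {                           // split the inner dimension k
        std::size_t k2 = k / 2;
        matmul_rec(A,      B,            C, m, n, k2,     lda, ldb, ldc);
        matmul_rec(A + k2, B + k2 * ldb, C, m, n, k - k2, lda, ldb, ldc);
    }
}

int main()
{
    const std::size_t n = 256;
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);
    matmul_rec(A.data(), B.data(), C.data(), n, n, n, n, n, n);
    std::cout << "C[0] = " << C[0] << " (expected " << 2.0 * n << ")\n";
}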