Why don't we use microprocessors in supercomputers?



10.06.2013 13:00

Programming model for future supercomputers

Britta Widmann, Press and Public Relations
Fraunhofer-Gesellschaft

The need for ever faster, more efficient, and yet energy-saving computer clusters is growing in every industry. A new, asynchronous programming model provides a crucial building block for realizing the next generation of supercomputers.

High-performance computing is one of the key technologies behind countless applications that we now take for granted, ranging from Google searches to weather forecasting and climate simulation to bioinformatics. Big data is a case in point: here, too, the need for ever faster, more efficient, and energy-saving computer clusters is growing. The number of processors per system has now reached into the millions and will grow even faster in the future. The programming model used in supercomputers, the Message Passing Interface (MPI), has remained largely unchanged over the last 20 years. It ensures that the microprocessors in these distributed systems can communicate with one another. In the meantime, however, it has reached its limits.

"I had to solve a calculation and simulation task for seismic data," says Dr. Carsten Lojewski from the Fraunhofer Institute for Industrial Mathematics ITWM. “But it wasn't possible with previous methods. Problems were a lack of scalability, the limitation to a block-synchronous, bilateral communication and the lack of fault tolerance. That is why, out of my own interest, I started to develop a new programming model «. At the end of this development there was GPI - the Global Address Space Programming Interface, which uses the parallel architecture of high-performance computers with maximum efficiency.

Universal programming interface

GPI is based on a completely new approach: an asynchronous communication model. Each processor can directly access all data, regardless of which memory it resides in and without affecting other processes running in parallel. Together with Rui Machado, also from the ITWM, and Dr. Christian Simmendinger from T-Systems Solutions for Research, Dr. Carsten Lojewski receives one of this year's Joseph von Fraunhofer Awards.
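The GASPI specification, which grew out of GPI and is implemented by the open-source GPI-2 library, expresses this model directly: data lives in globally visible memory segments, and a process writes into a remote segment with a single one-sided call that returns immediately. The sketch below follows that style; the segment ID, offsets, and queue number are illustrative choices, and exact signatures should be checked against the GPI-2 documentation.

```c
/* Minimal sketch of one-sided, asynchronous communication in the
 * GASPI/GPI-2 style (compile with the GPI-2 headers, link against libGPI2).
 * Segment ID, offsets and queue number are illustrative values. */
#include <GASPI.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  gaspi_proc_init(GASPI_BLOCK);              /* join the parallel run        */

  gaspi_rank_t rank, num;
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&num);

  /* Create a globally visible memory segment on every process.
   * Remote ranks can write into it directly, without involving the owner. */
  const gaspi_segment_id_t seg = 0;
  gaspi_segment_create(seg, 1 << 20, GASPI_GROUP_ALL,
                       GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  gaspi_pointer_t ptr;
  gaspi_segment_ptr(seg, &ptr);              /* local view of the segment    */
  ((double *)ptr)[0] = (double)rank;         /* some local data to send      */

  /* Asynchronously write 8 bytes into the right neighbour's segment and
   * attach a notification; the call returns immediately, so the transfer
   * overlaps with whatever computation follows. */
  const gaspi_rank_t right = (rank + 1) % num;
  gaspi_write_notify(seg, 0,                 /* local segment, offset        */
                     right, seg, 8,          /* remote rank, segment, offset */
                     sizeof(double),
                     0, 1,                   /* notification id and value    */
                     0, GASPI_BLOCK);        /* queue, timeout               */

  /* ... independent computation could run here ... */

  /* Check only when the data is actually needed. */
  gaspi_notification_id_t got;
  gaspi_notify_waitsome(seg, 0, 1, &got, GASPI_BLOCK);
  printf("rank %d got %.0f from its left neighbour\n",
         (int)rank, ((double *)ptr)[1]);

  gaspi_wait(0, GASPI_BLOCK);                /* flush the local queue        */
  gaspi_proc_term(GASPI_BLOCK);
  return EXIT_SUCCESS;
}
```

The key point is that the one-sided write returns as soon as the transfer has been handed to the network: the writer never waits for the reader, and the reader only checks a notification when it actually needs the data, which is what allows communication to be hidden behind computation.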

Like MPI, GPI was developed not as a parallel programming language but as a parallel programming interface, and it can therefore be used universally. The need for such a highly scalable, flexible, and fault-tolerant interface is great and growing, especially since the number of processors in supercomputers is increasing exponentially.

The first successful examples show how well GPI can be put to use: "High-performance computing has meanwhile developed into a universal tool in science and business and an integral element of the design process, for example in automobile or aircraft construction," says Dr. Christian Simmendinger. "Take aerodynamics as an example: one of the cornerstones of simulation in the European aerospace environment, the TAU software, was ported to the GPI platform in a project with the German Aerospace Center (DLR). As a result, we were able to significantly increase parallel efficiency."

GPI is a tool for specialists, but it has the potential to revolutionize algorithmic development for high-performance software. It is considered a key to enabling the next generation of supercomputers: exascale computers that are a thousand times faster than today's mainframes.


Additional Information:

http://www.fraunhofer.de/de/presse/presseinformationen/2013/juni/programmiermode ... (contact person)

