Current high performance computing (HPC) architectures are highly specialized machines that approach their peak performance only when programs exploit both the specific properties of the application and the idiosyncrasies of the machine. Moreover, different classes of architectures require different programming models and optimization strategies. This has led to a situation where, with few exceptions, HPC architectures are programmed at a low level of abstraction, such as message passing, since high-level language, compiler, and tool support is often unable to deliver the required target code performance. The next few years will see the emergence of massively parallel machines reaching hundreds of teraflops, and potentially petaflops performance by the end of the decade. Whereas some of these machines will be based on conventional principles, many research efforts are now focused on new architectural approaches such as Processor-in-Memory (PIM).
This panel discusses programming models for future architectures in view of the necessity to manage
massive parallelism, the need to support fault tolerance, and the tradeoff between
the expressivity and reliability provided by a high level of abstraction and the requirement of high target code performance.
It will focus on large-scale numerical simulations featuring dynamic and adaptive resource management.
Panel members will provide brief overviews of 1) technology development underway at NASA and elsewhere to create autonomous system capabilities for space exploration missions, and 2) relevant developments in supercomputing. Already, supercomputing is providing valuable support at NASA in the form of high-end simulations and visualizations of autonomy design concepts and autonomous mission scenarios. The panel will discuss possibilities for cross-fertilizing the further development and applications of autonomy and supercomputing capabilities.
Application simulations that can accept and respond dynamically to new data injected at execution time, and conversely, application systems that can dynamically control the measurement processes, present a new simulation paradigm and a new methodology for achieving a more effective measurement process. We refer to such application simulations and measurement systems as Dynamic Data Driven Application Systems (DDDAS). This new paradigm has the potential to transform the way science and engineering are done, and to have a major impact on the way many functions in our society are conducted, such as manufacturing, commerce, transportation, hazard prediction/management, and medicine. The panel will feature presentations providing examples from several application areas that can benefit from this paradigm. The panel will also discuss the challenges in DDDAS, such as application composition environments, dynamic runtime support in systems software, and the design of algorithms suited to perturbations in the dynamic data inputs. These challenges call for a synergistic, multidisciplinary research approach spanning systems, algorithms, and applications, which the panel will also address.
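To make the DDDAS feedback loop concrete, the following is a minimal sketch, not drawn from any panel material: a toy simulation that assimilates measurements injected while it runs (a simple nudging scheme) and, in turn, tells the measurement system where to sample next. All class and method names here (`DynamicDataDrivenSim`, `inject`, `request_measurement`) are hypothetical illustrations of the two-way coupling, not an established API.

```python
from collections import deque

class DynamicDataDrivenSim:
    """Toy 1-D simulation illustrating the DDDAS loop: data injected at
    execution time corrects the model, and the model steers measurement.
    (Hypothetical sketch; names and scheme are illustrative only.)"""

    def __init__(self, state=0.0, drift=1.0, gain=0.5):
        self.state = state    # simulated quantity (e.g., a temperature)
        self.drift = drift    # model dynamics: constant drift per step
        self.gain = gain      # how strongly injected data corrects the model
        self.inbox = deque()  # measurements injected while the simulation runs

    def inject(self, measurement):
        """Measurement system pushes new data into the running simulation."""
        self.inbox.append(measurement)

    def step(self):
        # Advance the model by its own dynamics.
        self.state += self.drift
        # Assimilate any data that arrived since the last step (nudging:
        # move the state a fraction of the way toward each observation).
        while self.inbox:
            obs = self.inbox.popleft()
            self.state += self.gain * (obs - self.state)
        return self.state

    def request_measurement(self):
        """Simulation steers the measurement process: ask for a sample
        at the predicted location of the next state."""
        return self.state + self.drift


sim = DynamicDataDrivenSim()
sim.step()            # model-only step: state becomes 1.0
sim.inject(0.0)       # a sensor reports the model is running hot
after = sim.step()    # advance to 2.0, then nudge halfway toward 0.0 -> 1.0
target = sim.request_measurement()  # simulation asks where to sample next
```

The essential point of the sketch is the bidirectional coupling: `inject` lets execution-time data perturb the simulation, while `request_measurement` lets the simulation control the measurement process, which is exactly the pairing the DDDAS paradigm describes.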