DocumentCode :
3305317
Title :
The Next Generation of Compilers
fYear :
2009
fDate :
22-25 March 2009
Abstract :
Over the past decade, production compilers for general-purpose processors have adopted a number of major technologies emerging from compiler research, including SSA-based optimization, pointer analysis, profile-guided optimization, link-time cross-module optimization, automatic vectorization, and just-in-time compilation with adaptive optimization for dynamic languages. These features are here to stay for the foreseeable future. So what major new features could emerge from compiler research over the next decade?

First, just-in-time and dynamic optimization will be extended to static languages such as C, C++, and Fortran. This has already happened for graphics applications, as in the MacOS X OpenGL library and the AMD ATI compiler, and is now being adopted for general-purpose multicore platforms such as the RapidMind Multicore Development Platform.

Second, and perhaps most predictably, compilers will play a major role in tackling the multicore programming challenge. This does not mean that automatic parallelization will come back from the dead. Rather, compiler support for parallel programming will take two forms: optimization and code generation for explicitly parallel programs; and interactive, potentially optimistic, parallelization technology to support semi-automatic porting of existing code to explicitly parallel programming models.

Third, compilers will increasingly be responsible for enhancing or enforcing safety and reliability properties of programs. The last few years have seen new language and compiler techniques (e.g., in the Cyclone, CCured, and SAFECode projects) that guarantee complete memory safety and sound operational semantics even for C and C++ programs. There is no longer any excuse for production C/C++ compilers not to provide these capabilities, at least as an option for security-sensitive software, including all privileged software. Furthermore, these capabilities can be deployed via a typed virtual machine that enables more powerful security and reliability techniques than are possible with native machine code.

Fourth, compilers will increasingly incorporate more sophisticated auto-tuning strategies for exploring optimization sequences, or even arbitrary code sequences for key kernels. This is one of the major sources of performance improvement left unexploited by existing compiler technology.

Finally, compilers will adopt speculative optimizations to compensate for the constraints imposed by conservative static analysis. Recent architecture research has led to novel hardware mechanisms that can make such speculation efficient, and the ball is in the compiler community's court to invent new ways to exploit this hardware support for more powerful optimizations, both traditional and non-traditional.
Keywords :
Constraint optimization; Dynamic programming; Graphics; Hardware; Multicore processing; Optimized production technology; Optimizing compilers; Parallel programming; Product safety; Program processors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Code Generation and Optimization, 2009. CGO 2009. International Symposium on
Conference_Location :
Seattle, WA, USA
Print_ISBN :
978-0-7695-3576-0
Type :
conf
DOI :
10.1109/CGO.2009.37
Filename :
4907645