DocumentCode :
710453
Title :
Efficient optical interconnect architecture for HPC and data center systems
Author :
Schwetman, Herb
Author_Institution :
Oracle Labs., Burlington, VT, USA
fYear :
2015
fDate :
20-22 April 2015
Firstpage :
2
Lastpage :
3
Abstract :
Summary form only given. Modern large-scale computer systems require interconnection networks that are both high-performance and energy efficient. Most large-scale systems are made from a large number of “nodes”, where each node has one or more computing elements and a block of main memory, and is connected to a network that interconnects all of the nodes. Several factors influencing the evolution of systems have led to this configuration: (1) as these components are all based on consumer products, the costs have become very low compared to customized components, (2) many problems require very large amounts of main memory (for efficient execution), and the best way to provide sufficient memory capacity is to use many smaller discrete blocks of memory, as opposed to one or a few large blocks, and (3) this arrangement of distributed nodes offers opportunities for maximum levels of parallel computation. The Top500 list [Top500] ranks high-performance computer (HPC) systems by achieved performance on a standard benchmark program. On this list, the number of nodes per system ranges from 44 to 98,000. Provisioning an efficient, high-performance network interconnecting these nodes is critical for the effective operation of such a system. Also, the "Exascale Report" [Kogg08] highlights the impacts of network power requirements in a "strawman" exascale system design. In this design, the interconnect portion of the proposed system consumes 27% of the total projected system power. In multi-node systems, there are two classes of interconnection networks: an intra-node network connecting the sockets and cores on a single node, and an inter-node network connecting the tens to thousands of nodes that comprise the system. The Oracle macrochip [Krish09] is a design for a node that uses a silicon photonics intra-node network to connect the 64 sites on a node.
The next step in exploiting this macrochip is a system based on interconnecting multiple macrochips. Both this intra-node network and the proposed inter-node network use optical communication to achieve the energy-efficient, high-performance operation that will be required in future HPC systems. This talk summarizes some of the lessons learned in designing and analyzing the macrochip [Koka12]; it also discusses some of the issues that are emerging in the design of the multi-macrochip systems of the future.
Keywords :
computer centres; elemental semiconductors; integrated optics; integrated optoelectronics; optical communication; optical interconnections; parallel processing; silicon; HPC; Oracle macrochip; data center systems; distributed nodes; efficient optical interconnect architecture; high-performance computer; inter-node network; interconnection networks; large-scale computer systems; memory capacity; multi-node systems; network power requirements; optical communication; parallel computation; silicon photonics intra-node network;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Optical Interconnects Conference (OI), 2015 IEEE
Conference_Location :
San Diego, CA
Print_ISBN :
978-1-4799-8178-6
Type :
conf
DOI :
10.1109/OIC.2015.7115659
Filename :
7115659