• DocumentCode
    413059
  • Title
    Adapting to memory pressure from within scientific applications on multiprogrammed COWs
  • Author
    Mills, Richard T.; Stathopoulos, Andreas; Nikolopoulos, Dimitrios S.
  • Author_Institution
    Dept. of Comput. Sci., Coll. of William & Mary, Williamsburg, VA, USA
  • fYear
    2004
  • fDate
    26-30 April 2004
  • Firstpage
    71
  • Abstract
    Summary form only given. Dismal performance often results when the memory requirements of a process exceed the physical memory available to it. Moreover, significant throughput reduction is experienced when this process is part of a synchronous parallel job on a nondedicated computational cluster. A possible solution is to develop programs that can dynamically adapt their memory usage according to the current availability of physical memory. We explore this idea on scientific computations that perform repetitive data accesses. Part of the program's data set is cached in resident memory, while the remainder that cannot fit is accessed in an "out-of-core" fashion from disk. The replacement policy can be user defined. This allows for a graceful degradation of performance as memory becomes scarce. To dynamically adjust its memory usage, the program must reliably answer whether there is a memory shortage or surplus in the system. Because operating systems typically export limited memory information, we develop a parameter-free algorithm that uses no system information beyond the resident set size (RSS) of the program. Our resulting library can be called by scientific codes with little change to their structure or with no change at all, if computations are already "blocked" for reasons of locality. Experimental results with both sequential and parallel versions of a memory-adaptive conjugate-gradient linear system solver show substantial performance gains over the original version that relies on the virtual memory system. Furthermore, multiple instances of the adaptive code can coexist on the same node with little interference with one another.
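    The abstract describes an adaptation scheme driven solely by the program's resident set size: when the OS begins reclaiming the program's pages, the in-memory cache shrinks; otherwise the program probes for surplus by growing it. The sketch below illustrates only this decision rule in simplified, block-granularity form; the function name, parameters, and probe step are our assumptions for illustration, not the paper's actual parameter-free algorithm or library API.

    ```python
    def adapt_cache_blocks(cache_blocks: int, rss_blocks: int, probe_growth: int = 1) -> int:
        """Pick the next cache size from one RSS measurement.

        cache_blocks: number of data blocks the program currently tries to keep resident.
        rss_blocks:   measured resident set size, in the same block units.
        probe_growth: how many blocks to grow by when no pressure is detected
                      (an illustrative parameter; the paper's algorithm is parameter-free).
        """
        if rss_blocks < cache_blocks:
            # The OS has evicted some of our pages: memory shortage.
            # Shrink the cache to what is actually resident, spilling the rest
            # to out-of-core storage under the user-defined replacement policy.
            return rss_blocks
        # RSS keeps up with our allocation: probe for surplus by growing the cache.
        return cache_blocks + probe_growth
    ```

    On Linux, the current RSS needed as input here can be read from `/proc/self/statm`; the point of the paper's approach is that no system information beyond this single value is required.
    
    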
  • Keywords
    cache storage; conjugate gradient methods; multiprogramming; parallel programming; virtual storage; workstation clusters; memory pressure; memory-adaptive conjugate-gradient linear system; multiprogrammed COW; nondedicated computational cluster; operating systems; parallel version; parameter-free algorithm; repetitive data access; resident set size; scientific application; sequential version; synchronous parallel job; virtual memory system; Adaptive coding; Availability; Concurrent computing; Cows; Degradation; Libraries; Linear systems; Operating systems; Performance gain; Throughput;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    18th International Parallel and Distributed Processing Symposium, 2004. Proceedings.
  • Print_ISBN
    0-7695-2132-0
  • Type
    conf
  • DOI
    10.1109/IPDPS.2004.1303002
  • Filename
    1303002