Author_Institution :
Dept. of Comput. Sci. & Eng., Seoul Nat. Univ., Seoul, South Korea
Abstract :
Given a high-dimensional, large-scale tensor, how can we decompose it into latent factors? Can we process it on commodity computers with limited memory? These questions are closely related to recommender systems that exploit context information such as time and location, which require tensor factorization methods that scale with both the dimension and the size of a tensor. In this paper, we propose two distributed tensor factorization methods, SALS (Subset Alternating Least Squares) and CDTF (Coordinate Descent for Tensor Factorization). Both methods scale with all aspects of the data, and they exhibit a trade-off between convergence speed and memory requirements: SALS updates a subset of the columns of a factor matrix at a time, while CDTF, a special case of SALS, updates one column at a time. In our experiments, only our methods successfully factorize a 5-dimensional tensor with 1B observable entries, a mode length of 10M, and a rank of 1K, while all other state-of-the-art methods fail. Moreover, our methods require several orders of magnitude less memory than their competitors. We implement both methods on MapReduce with two widely applicable optimization techniques: local disk caching and greedy row assignment.
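Since the abstract describes the update scheme only at a high level, the following is a minimal single-machine Python sketch of the column-wise coordinate-descent idea behind CDTF, assuming the observed entries are stored in COO form. The function name `cdtf_sweep`, the ridge weight `lam`, and all variable names are illustrative assumptions, not the paper's reference implementation; SALS would update a block of several columns per step rather than one.

```python
import numpy as np

def cdtf_sweep(idx, vals, factors, lam=0.01):
    """One column-wise coordinate-descent sweep over a sparse CP model.

    idx     -- (nnz, N) int array: coordinates of the observed entries (COO).
    vals    -- (nnz,) float array: the observed values.
    factors -- list of N factor matrices; factors[n] has shape (I_n, rank).
    lam     -- L2 regularization weight (an assumed hyper-parameter).
    """
    nnz, n_modes = idx.shape
    rank = factors[0].shape[1]

    # Value of each rank-one component at every observed entry: (nnz, rank).
    comp = np.stack(
        [np.prod([factors[n][idx[:, n], r] for n in range(n_modes)], axis=0)
         for r in range(rank)],
        axis=1)
    resid = vals - comp.sum(axis=1)  # residual of the full model

    for r in range(rank):            # one column (rank index) at a time
        resid += comp[:, r]          # take component r out of the model
        for n in range(n_modes):
            # Product of the other modes' r-th columns at each observation.
            others = np.ones(nnz)
            for m in range(n_modes):
                if m != n:
                    others *= factors[m][idx[:, m], r]
            # Closed-form ridge update of column r of the mode-n factor,
            # accumulated row by row over the observed entries.
            num = np.zeros(factors[n].shape[0])
            den = np.full(factors[n].shape[0], lam)
            np.add.at(num, idx[:, n], resid * others)
            np.add.at(den, idx[:, n], others * others)
            factors[n][:, r] = num / den
        # Re-insert the freshly updated component r into the model.
        comp[:, r] = np.prod(
            [factors[n][idx[:, n], r] for n in range(n_modes)], axis=0)
        resid -= comp[:, r]
    return factors
```

In the distributed setting the abstract refers to, the per-row numerator and denominator sums would be accumulated in parallel over each machine's locally cached entries and the updated columns then exchanged between machines; that machinery is omitted from this sketch.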
Keywords :
data handling; least squares approximations; matrix decomposition; parallel processing; CDTF; MapReduce; SALS; coordinate descent for tensor factorization; distributed tensor factorization methods; factor matrix; greedy row assignment; high-dimensional tensor factorization; large-scale tensor factorization; local disk caching; optimization techniques; subset alternating least squares; distributed databases; memory management; scalability; distributed computing; recommender systems; tensor factorization