DocumentCode :
35610
Title :
Accelerated Canonical Polyadic Decomposition Using Mode Reduction
Author :
Guoxu Zhou ; Andrzej Cichocki ; Shengli Xie
Author_Institution :
Lab. for Adv. Brain Signal Process., RIKEN Brain Sci. Inst., Wako, Japan
Volume :
24
Issue :
12
fYear :
2013
fDate :
Dec. 2013
Firstpage :
2051
Lastpage :
2062
Abstract :
CANonical polyadic DECOMPosition (CANDECOMP, CPD), also known as PARAllel FACtor analysis (PARAFAC), is widely applied to Nth-order (N ≥ 3) tensor analysis. Existing CPD methods mainly use alternating least squares iterations and hence must frequently unfold tensors along each of their N modes, which is a major performance bottleneck for large-scale data, especially when the order N is large. To overcome this problem, this paper proposes a new CPD method in which the CPD of a high-order tensor is realized by applying CPD to a mode-reduced tensor (typically third-order) followed by a Khatri-Rao product projection procedure. This approach is not only efficient, since frequent unfolding along all N modes is avoided, but also promising for overcoming the bottleneck caused by high collinearity of components. We show that, under mild conditions, any Nth-order CPD can be converted into an equivalent third-order one without destroying essential uniqueness, and that the two theoretically give consistent results. Moreover, once the CPD of any unfolded lower-order tensor is essentially unique, so is the CPD of the original higher-order tensor. Error bounds of truncated CPD in the presence of noise are also analyzed. Simulations show that, compared with state-of-the-art CPD methods, the proposed method is more efficient and escapes from local solutions more easily.
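The mode-reduction idea rests on a standard identity: merging modes of a CP tensor yields a lower-order CP tensor whose merged-mode factor is the Khatri-Rao product of the original factors. A minimal NumPy sketch (illustrative only, not the authors' implementation; all variable names are hypothetical) checks this for a fourth-order tensor reduced to third order:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 3                     # CP rank
I, J, K, L = 4, 5, 6, 7   # dimensions of the 4th-order tensor
A, B, C, D = (rng.standard_normal((n, R)) for n in (I, J, K, L))

# Build the 4th-order CP tensor T[i,j,k,l] = sum_r A[i,r] B[j,r] C[k,r] D[l,r]
T = np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)

# Mode reduction: merge modes 3 and 4 into one mode of size K*L (C-order reshape)
T3 = T.reshape(I, J, K * L)

def khatri_rao(X, Y):
    """Column-wise Kronecker (Khatri-Rao) product of X (K x R) and Y (L x R)."""
    return np.einsum('kr,lr->klr', X, Y).reshape(X.shape[0] * Y.shape[0], -1)

# The mode-reduced tensor is itself a CP tensor with factors A, B, khatri_rao(C, D),
# so a third-order CPD of T3 recovers all the original information.
T3_cp = np.einsum('ir,jr,mr->ijm', A, B, khatri_rao(C, D))
print(np.allclose(T3, T3_cp))  # True
```

In the paper's scheme, a third-order CPD is computed on the reduced tensor and the merged factor is then split back into the original per-mode factors by the Khatri-Rao product projection step.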
Keywords :
iterative methods; least squares approximations; reduced order systems; singular value decomposition; tensors; CANDECOMP; CPD methods; Khatri-Rao product projection procedure; PARAFAC; accelerated canonical polyadic decomposition; alternating least squares iterations; large-scale data; mode reduction; parallel factor analysis; performance bottleneck; tensor analysis; Acceleration; Algorithm design and analysis; Learning systems; Matrix decomposition; Source separation; Tensile stress; Vectors; Alternating least squares (ALS); CP (PARAFAC) decompositions; Khatri–Rao product; mode reduction; tensor decompositions;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks and Learning Systems
Publisher :
IEEE
ISSN :
2162-237X
Type :
jour
DOI :
10.1109/TNNLS.2013.2271507
Filename :
6558484