Title :
Sparsity analysis of learned factors in Multilayer NMF
Author :
Ievgen Redko;Younes Bennani
Author_Institution :
Lab. d'Inf. de Paris-Nord, Univ. Paris 13, Villetaneuse, France
Date :
7/1/2015 12:00:00 AM
Abstract :
Nonnegative matrix factorization (NMF) is a recent machine learning technique used to decompose large data matrices while imposing non-negativity constraints on the factors. It is now applied in many data mining settings and thus remains a topic of ongoing interest. In this paper we are particularly interested in Multilayer NMF, a model that can be seen as a pretraining step of the Deep NMF model for learning hidden representations. We analyze the factors obtained with Multilayer NMF and show that the process of building layers can be seen as a repeated application of Hoyer's projection operator applied sequentially to the factor of the second layer. We also provide a sparsity analysis of the matrices obtained during the optimization procedure at each layer. We conclude that the overall sparsity decreases as the number of layers grows, despite the common assumption that Multilayer NMF is efficient because it increases the sparsity of the learned factors.
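As an illustration of the two ingredients discussed in the abstract (layer-wise factorization and Hoyer's sparseness measure), a minimal Python sketch is given below. It uses scikit-learn's NMF solver rather than the authors' optimization procedure, and the matrix sizes, ranks, and helper names are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.decomposition import NMF

    def hoyer_sparseness(x):
        # Hoyer's sparseness measure: 1 for a vector with a single non-zero
        # entry, 0 for a vector with all entries equal. Here it is applied
        # to the flattened matrix as an aggregate sparsity indicator.
        x = np.abs(np.ravel(x))
        n = x.size
        l1, l2 = x.sum(), np.sqrt((x ** 2).sum())
        return (np.sqrt(n) - l1 / (l2 + 1e-12)) / (np.sqrt(n) - 1)

    def multilayer_nmf(X, ranks, random_state=0):
        # Build layers by repeatedly factorizing the latest factor:
        # X ~ W1 H1, H1 ~ W2 H2, ..., so that X ~ W1 W2 ... WL HL.
        factors, current = [], X
        for r in ranks:
            model = NMF(n_components=r, init='random',
                        max_iter=500, random_state=random_state)
            W = model.fit_transform(current)
            H = model.components_
            factors.append((W, H))
            current = H  # the next layer factorizes this matrix
        return factors

    # Toy usage: track how sparse each layer's factors are.
    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((100, 60)))
    for i, (W, H) in enumerate(multilayer_nmf(X, ranks=[40, 30, 20]), 1):
        print(f"layer {i}: sparseness(W)={hoyer_sparseness(W):.3f}, "
              f"sparseness(H)={hoyer_sparseness(H):.3f}")

Printing the sparseness values per layer mirrors the kind of layer-by-layer comparison the paper performs, though with a generic multiplicative solver rather than the authors' update rules.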
Keywords :
"Nonhomogeneous media","Matrix decomposition"
Conference_Title :
2015 International Joint Conference on Neural Networks (IJCNN)
Electronic_ISSN :
2161-4407
DOI :
10.1109/IJCNN.2015.7280551