• DocumentCode
    3748747
  • Title
    Learning the Structure of Deep Convolutional Networks
  • Author
    Jiashi Feng; Trevor Darrell

  • Author_Institution
    Dept. of EECS, UC Berkeley, Berkeley, CA, USA
  • fYear
    2015
  • Firstpage
    2749
  • Lastpage
    2757
  • Abstract
    In this work, we develop a novel method for automatically learning aspects of the structure of a deep model in order to improve its performance, especially when labeled training data are scarce. We propose a new convolutional neural network model with an Indian Buffet Process (IBP) prior, termed ibpCNN. The ibpCNN automatically adapts its structure to the provided training data, achieving an optimal balance among model complexity, data fidelity, and training loss, and thus offers better generalization performance. The proposed ibpCNN captures complicated data distributions in an unsupervised, generative way; it can therefore exploit unlabeled data -- which can be collected at low cost -- to learn its structure. After determining the structure, ibpCNN further learns its parameters for specified tasks in an end-to-end fashion and produces discriminative yet compact representations. We evaluate ibpCNN on fully- and semi-supervised image classification tasks; it surpasses standard CNN models on benchmark datasets, with much smaller size and higher efficiency.
  • Keywords
    "Data models","Adaptation models","Training data","Complexity theory","Training","Neural networks","Convolutional codes"
  • Publisher
    IEEE
  • Conference_Titel
    Computer Vision (ICCV), 2015 IEEE International Conference on
  • Electronic_ISBN
    2380-7504
  • Type
    conf

  • DOI
    10.1109/ICCV.2015.315
  • Filename
    7410672