Abstract :
Inductive modeling, or "machine learning," algorithms are able to discover structure in high-dimensional data in a nearly automated fashion. These adaptive statistical methods (from decision trees and polynomial networks, to projection pursuit models, additive networks, and cascade correlation neural networks) repeatedly search for, and add on, the model component judged best at that stage. Because of the huge space of possible model components, the choice is typically greedy. In fact, it is usual for the analyst and algorithm to be greedy at three levels, when choosing: 1) a term within a model, 2) a model within a family (class of method), and 3) a family within a collection of techniques. We argue that it is better at each stage to "take a longer view": 1) consider terms in larger sets, 2) merge competing models within a family, and 3) fuse information from disparate models, making the combination more robust. Example benefits of fusion are demonstrated on a challenging classification dataset, in which one must infer the species of a bat from its chirps.
Conference_Title :
IEEE International Conference on Systems, Man and Cybernetics, 1995: Intelligent Systems for the 21st Century