Author_Institution :
SPAWAR Syst. Center, San Diego, CA, USA
Abstract :
Contemporary neural architectures having one or more hidden layers suffer from the same deficiencies that genetic algorithms and methodologies for non-trivial automatic programming do; namely, they cannot exploit inherent domain symmetries for the transference of knowledge from an application of lesser to greater rank, or across similar applications. As a direct consequence, no ensemble of contemporary neural architectures allows for the effective codification and transference of knowledge within a society of individuals (i.e., swarm knowledge). These deficiencies stem from the fact that contemporary neural architectures cannot reason symbolically using heuristic ontologies. They cannot directly provide symbolic explanations of what was learned for purposes of inspection and verification. Moreover, they do not allow the knowledge engineer to precondition the internal feature space through the application of domain-specific modeling languages. A symbolic representation can support the heuristic evolution of an ensemble of neural architectures. Each neural network in the ensemble embeds a hidden layer, and for this reason its training is NP-hard. It may be argued that the internal use of a neat representation subsumes the heuristic evolution of a scruffy one. It follows that there is a duality of representation under transformation. The goal of AI, then, is to find symbolic representations, transformations, and associated heuristic ontologies. This paper provides an introduction to this quest. Consider, for example, the game of chess. If a neural network or symbolic heuristic is used to evaluate board positions, then the best iterate found (i.e., of weights or symbols) serves as a starting point for iterative refinement. This paper addresses the ordering and similarity of the training instances used in refining subsequent iterates. If we fix the learning technology, then we need to focus on reducing the problem, composing intermediate results, and transferring the results to a similar domain. For example, moving just a bishop against one opposing piece is a reduction, moving a bishop and, say, a rook against one opposing piece is a composition, and moving a queen against one or more opposing pieces is a transference. The training sets must be mutually orthogonal, or random, to maximize the learned content. Learning what to present, and when, involves self-reference, which necessarily implies a heuristic approach.
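To make the reduction/composition/transference curriculum concrete, the following is a minimal sketch, not taken from the paper, of staged training in which each stage warm-starts from the best iterate (here, the final weights) of the stage before it. The names EvalNet, train_stage, and toy_source are hypothetical, and the linear evaluator and random position generator are stand-ins under stated assumptions, not the authors' method.

```python
# Sketch (assumptions labeled): a curriculum ordered as
# reduction -> composition -> transference, each stage seeded
# with the weights learned in the previous stage.
import random

STAGES = [
    ("reduction",    ("B",)),      # a lone bishop vs. one opposing piece
    ("composition",  ("B", "R")),  # bishop plus rook vs. one opposing piece
    ("transference", ("Q",)),      # a queen vs. one or more opposing pieces
]

class EvalNet:
    """Toy linear evaluator of board-position feature vectors (hypothetical)."""
    def __init__(self, n_features, weights=None):
        self.w = list(weights) if weights is not None else [0.0] * n_features

    def score(self, features):
        return sum(wi * xi for wi, xi in zip(self.w, features))

def train_stage(net, examples, lr=0.01, epochs=10):
    """Iterative refinement: gradient descent on the squared error
    between the net's score and a target evaluation."""
    for _ in range(epochs):
        for features, target in examples:
            err = net.score(features) - target
            net.w = [wi - lr * err * xi for wi, xi in zip(net.w, features)]
    return net

def toy_source(pieces, n_features=4, n_examples=32):
    """Stand-in position generator; random instances approximate the
    abstract's requirement that training sets be mutually orthogonal
    or random to maximize the learned content."""
    rng = random.Random(hash(pieces))
    return [([rng.random() for _ in range(n_features)], rng.random())
            for _ in range(n_examples)]

def run_curriculum(source, n_features=4):
    net = EvalNet(n_features)
    for name, pieces in STAGES:
        # Warm-started: net carries the prior stage's best iterate forward.
        net = train_stage(net, source(pieces))
        print(f"finished stage: {name}")
    return net

if __name__ == "__main__":
    run_curriculum(toy_source)
```

The design choice to illustrate is the warm start: rather than reinitializing per stage, the weights learned on the reduced problem are reused, so composition and transference refine, rather than restart, the evaluator.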
Keywords :
heuristic programming; learning (artificial intelligence); neural nets; artificial intelligence; contemporary neural architectures; domain-specific modeling language; genetic algorithm; heuristic approach; heuristic evolution; heuristics; information codification; information fusion; information transference; learning; neural architecture ensemble; neural network; nontrivial automatic programming; swarm knowledge; symbolic representation; training sets; Automatic programming; Domain specific languages; Genetic algorithms; Inspection; Knowledge engineering; Machine learning; Neural networks; Ontologies; Sensor phenomena and characterization; USA Councils;