Abstract:
Machines have already been made which learn to recognize small alphabets of fairly well-defined characters. The immediately obvious difference between such machines and human subjects is that the latter are able to deal with large alphabets in which each character undergoes a wide range of variation. Presumably, as design, theory, and technology advance, systems will be built which can handle a steadily increasing range of alphabets. It is by no means clear, however, whether existing machines, or for that matter conceivable developments of them, employ the same logical principles as the brain, or would behave identically given enough computer time and logical components. The problem, then, is whether the important differences between existing character recognition machines and the human brain are purely quantitative. Such a view, i.e., that the differences are purely quantitative and that, given enough time and space, a working model of the brain could be built employing the same logical principles as one or other of the existing pattern-recognition machines, is implicit in much present theorizing on these topics (Rosenblatt [3], Uhr and Vossler [6], and Taylor [4]), and it is of appreciable theoretical, and no doubt practical, interest to consider its validity. In this paper a behavioral test is proposed which, if passed by human subjects and failed by a machine, would indicate something more than a quantitative difference in logical design between the brain and the character recognition machine.