  • DocumentCode
    2020694
  • Title
    On specifying and performing visual tasks with qualitative object models
  • Author
    Hager, Gregory D.; Dodds, Zachary

  • Author_Institution
    Dept. of Comput. Sci., Johns Hopkins Univ., Baltimore, MD, USA
  • Volume
    1
  • fYear
    2000
  • fDate
    2000
  • Firstpage
    636
  • Abstract
    Vision-based control has aimed to develop general-purpose, high-accuracy systems for manipulating objects. While much of the scientific and technological infrastructure needed to accomplish this aim is now in place, several stumbling blocks remain. One continuing issue is accuracy and its relationship to system calibration. We describe a generative task structure for vision-based control of motion that admits a simple, geometric approach to task specification. At the same time, this approach allows one to state precisely which types of miscalibration lead to errors in task performance. A second hurdle has been the programmability of hand-eye systems. We argue, however, that a structured object representation sufficient for flexible hand-eye coordination is achievable. The result is a high-level, object-centered language for expressing hand-eye tasks.
  • Keywords
    calibration; motion control; object recognition; optical tracking; robot programming; robot vision; hand-eye systems; programmability; system calibration; vision-based control; visual tracking; Artificial intelligence; Calibration; Computer science; Control systems; Educational institutions; Feedback control; Focusing; Libraries; Robot kinematics; Robot vision systems
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA '00)
  • Conference_Location
    San Francisco, CA, USA
  • ISSN
    1050-4729
  • Print_ISBN
    0-7803-5886-4
  • Type
    conf
  • DOI
    10.1109/ROBOT.2000.844124
  • Filename
    844124