• DocumentCode
    229535
  • Title
    Principles for the future development of artificial agents
  • Author
    Johnson, Deborah G. ; Noorman, Merel
  • Author_Institution
    Science, Technology & Society Program, University of Virginia, Charlottesville, VA, USA
  • fYear
    2014
  • fDate
    23-24 May 2014
  • Firstpage
    1
  • Lastpage
    3
  • Abstract
    A survey of popular, technical, and scholarly literature suggests that autonomous artificial agents will populate the future. Although some visions may seem fanciful, autonomous artificial agents are being designed, built, and deployed in a wide range of sectors. The specter of future artificial agents, with greater learning capacity and greater autonomy, raises important questions about responsibility. Can anyone (any humans) be responsible for the behavior of entities that learn as they go and operate autonomously? This paper takes as its starting point that humans are, and always should be, held responsible for the behavior of machines, even machines that learn and operate autonomously. To prevent an evolution toward a future in which no humans are thought to be responsible for the behavior of artificial agents, four principles are proposed that should be kept in mind as artificial agents are developed.
  • Keywords
    multi-agent systems; social aspects of automation; agent autonomy; artificial agent behavior; autonomous artificial agents; learning capacity; Computers; Context; Ethics; Materials; Presses; Robots; Sociotechnical systems; artificial agent; autonomy; responsibility
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE International Symposium on Ethics in Science, Technology and Engineering
  • Conference_Location
    Chicago, IL
  • Type
    conf
  • DOI
    10.1109/ETHICS.2014.6893395
  • Filename
    6893395