Title of article:
Trust, Privacy, and Frame Problems in Social and Business E-Networks, Part
Abstract:
Privacy issues in social and business e-networks are daunting in complexity: private information about oneself might be routed through countless artificial agents. For each such agent, in that context, two questions about trust arise. Where an agent must access (or store) personal information, can one trust that artificial agent with that information? And where an agent does not need to access or store personal information, can one trust that agent not to do so? It would be an infeasible task for any human being to explicitly determine, for each artificial agent, whether it can be trusted; no human being has the computational resources to make such an explicit determination. There is a well-known class of problems in the artificial intelligence literature, known as frame problems, for which explicit solutions are computationally infeasible. Human common-sense reasoning solves frame problems, though the mechanisms employed are largely unknown. I will argue that the trust relation between two agents (human or artificial) functions, in some respects, as a frame problem solution: a problem is solved without the need for a computationally infeasible explicit solution. This is an aspect of the trust relation that has remained unexplored in the literature. Moreover, there is a formal, iterative structure to agent-agent trust interactions that serves to establish the trust relation non-circularly, to reinforce it, and to "bootstrap" its strength.