Author_Institution :
George Mason Univ., Fairfax, VA, USA
Abstract :
Although the traditional client-server model first established the Web's backbone, it tends to underuse the Internet's bandwidth and to intensify the burden on dedicated servers as their load increases. Peer-to-peer computing relies on individual computers' processing power and storage capacity to better utilize bandwidth and to distribute this load in a self-organizing manner. In P2P, nodes (or peers) act as both clients and servers, form an application-level network, and route messages (such as requests to locate a resource). The design of these routing protocols is of paramount importance to a P2P application's efficiency: naive approaches, such as Gnutella's flood routing, can generate excessive traffic. P2P systems that exhibit the "small world" property - in which most peers have few links to other peers but a few have many - are robust to random attacks yet can be highly vulnerable to targeted ones. P2P computing also has the potential to enhance reliability and fault tolerance because it does not rely on dedicated servers. Each peer maintains a local directory with entries for the resources it manages and can also cache other peers' directory entries. Important applications of P2P technologies include distributed directory systems, new e-commerce models, and Web service discovery, all of which require efficient resource-location mechanisms.
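Illustrative sketch :
A minimal sketch, not taken from the paper, of the peer-side directory idea the abstract describes: each peer keeps a local directory of the resources it manages, may cache other peers' directory entries, and forwards lookup requests to its overlay neighbors when it has no entry. The names (Peer, publish, lookup, ttl) and the TTL-limited flooding fallback are illustrative assumptions; the flooding mirrors the naive Gnutella-style routing the abstract criticizes.

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Peer:
    peer_id: str
    local_directory: dict[str, str] = field(default_factory=dict)  # resource -> location this peer manages
    cached_entries: dict[str, str] = field(default_factory=dict)   # resource -> answer learned from other peers
    neighbors: list["Peer"] = field(default_factory=list)          # application-level overlay links

    def publish(self, resource: str, location: str) -> None:
        """Register a resource this peer manages in its local directory."""
        self.local_directory[resource] = location

    def lookup(self, resource: str, ttl: int = 3) -> str | None:
        """Resolve a resource: local directory first, then cache, then flood neighbors."""
        if resource in self.local_directory:
            return f"{self.peer_id}:{self.local_directory[resource]}"
        if resource in self.cached_entries:
            return self.cached_entries[resource]
        if ttl == 0:
            return None
        # Naive flood routing: ask every neighbor, decrementing the TTL at each hop.
        for neighbor in self.neighbors:
            answer = neighbor.lookup(resource, ttl - 1)
            if answer is not None:
                self.cached_entries[resource] = answer  # cache the remote directory entry
                return answer
        return None


if __name__ == "__main__":
    a, b, c = Peer("A"), Peer("B"), Peer("C")
    a.neighbors, b.neighbors = [b], [c]
    c.publish("report.pdf", "/srv/files/report.pdf")
    print(a.lookup("report.pdf"))   # resolved via flooding, then cached at A
    print(a.cached_entries)

Even this toy example shows why routing design matters: every unresolved lookup fans out to all neighbors, so traffic grows with the overlay's connectivity, whereas the caching of remote entries is what keeps repeated lookups cheap.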
Keywords :
Internet; distributed processing; transport protocols; P2P systems; Web service discovery; deterministic location problem; distributed directory systems; e-commerce models; fault tolerance; flood routing; peer-to-peer computing; random attacks; reliability; resource-location mechanisms; targeted attacks; Bandwidth; Distributed computing; Floods; Internet; Network servers; Peer to peer computing; Routing protocols; Spine; Telecommunication traffic; Web server;