Copyright 2004 IEEE

Resilient Peer-to-Peer Multicast from the Ground Up

Stefan Birrer and Fabián E. Bustamante
Department of Computer Science
Northwestern University
Evanston, IL 60201, USA
{sbirrer,fabianb}@cs.northwestern.edu

1. Introduction

Multicast is an efficient mechanism for supporting group communication. It decouples the size of the receiver set from the amount of state kept at any single node and potentially avoids redundant communication in the network, promising to make possible large-scale multi-party applications such as audio and video conferencing, research collaboration, and content distribution.

A number of research projects have recently proposed an end-system approach to multicast [11, 3, 24, 10, 21, 18, 28], partially in response to the deployment issues of IP Multicast [12, 13]. In this middleware [1] or application-layer approach, peers are organized into an overlay topology for data delivery, with each connection in the overlay mapped to a unicast path between two peers in the underlying Internet. All multicast-related functionality is implemented at the peers instead of at routers, and the goal of the multicast protocol is to construct and maintain an efficient overlay for data transmission.

One of the most important challenges for peer-to-peer multicast protocols is the ability to deal efficiently with the high degree of transiency inherent to their environment [5]. As multicast functionality is pushed to autonomous, unpredictable peers, significant performance losses can result from group membership changes and from the higher failure rates of end hosts compared with routers. Measurement studies of widely used peer-to-peer (P2P) systems have reported median session times (the time between when a peer joins and when it leaves the network) ranging from an hour down to a minute [8, 15, 22]. Achieving high delivery ratios without sacrificing end-to-end latencies or incurring additional costs has proven to be a challenging task.

This paper introduces Nemo, a novel peer-to-peer multicast protocol that aims at achieving this elusive goal. Based on two techniques, (1) co-leaders and (2) triggered negative acknowledgments (NACKs), Nemo's design emphasizes conceptual simplicity and minimal dependencies [2], thus achieving, in a cost-effective manner, performance characteristics resilient to the natural instability of its target environment. Simulation-based and wide-area experiments show that Nemo can achieve high delivery ratios (up to 99.98%) and low end-to-end latencies similar to those of comparable protocols, while significantly reducing the cost in terms of duplicate packets (reductions > 85%) and control-related traffic, making the proposed algorithm a more scalable solution to the problem.

The remainder of this paper describes our approach in more detail (Section 2) and presents early experimental results (Section 3). We briefly discuss related work in Section 4 and conclude in Section 5.

2. Nemo's Approach

Nemo follows the implicit approach [3, 10, 21, 28] to building an overlay for multicasting: participating peers are organized into a control topology, and the data delivery network is implicitly defined by a set of forwarding rules. We provide here a summarized description of Nemo; for complete details, we direct the reader to the associated technical report [6].

The set of communicating peers is organized into clusters based on network proximity (other factors, such as bandwidth [25, 11] and expected peer lifetime [8], could easily be incorporated), with every peer a member of a cluster at the lowest layer. Clusters vary in size between d and 3d−1, where d is a constant known as the degree. Each cluster selects a leader, the peer at the center of the cluster in terms of end-to-end latency, which becomes a member of the immediately superior layer. In part to avoid the dependency on a single node, every cluster leader recruits a number of co-leaders to form its crew. The process is repeated, with all peers in a layer grouped into clusters, crew members selected, and leaders promoted to participate in the next higher layer; hence, peers can lead more than one cluster in successive layers of this logical hierarchy. (This organization is common to Nemo, Nice [3], and Zigzag [24]; the degree bounds have been chosen to help reduce oscillation in clusters.)

Co-leaders improve the resilience of the multicast group by avoiding dependencies on single nodes and by providing alternative paths for data forwarding. In addition, crew members share the load of message forwarding, thus improving scalability. Figure 1 illustrates the logical organization of Nemo.

Figure 1: Nemo's logical organization (legend: leader, co-leader, ordinary member). The shape indicates only the role of a peer within a cluster: the leader of a cluster at a given layer can act as a leader, co-leader, or an ordinary member at the next higher layer.
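To make the layered organization concrete, the sketch below shows one possible in-memory representation of clusters, crews, and the degree bound. It is a minimal illustration under our own naming assumptions (Peer, Cluster, DEGREE); it is not taken from the Nemo implementation.

```python
# Minimal sketch of Nemo's layered cluster organization; names are ours, not Nemo's.
from dataclasses import dataclass, field
from typing import List, Optional

DEGREE = 3  # the constant d: clusters hold between d and 3d - 1 peers

@dataclass
class Peer:
    peer_id: str  # a peer may appear in several layers if it leads clusters

@dataclass
class Cluster:
    layer: int
    members: List[Peer] = field(default_factory=list)
    leader: Optional[Peer] = None                          # peer closest to the cluster center (latency-wise)
    co_leaders: List[Peer] = field(default_factory=list)   # recruited by the leader to form the crew

    def crew(self) -> List[Peer]:
        """The leader together with its co-leaders."""
        return ([self.leader] if self.leader else []) + self.co_leaders

    def size_ok(self) -> bool:
        """Cluster size must stay within [d, 3d - 1]."""
        return DEGREE <= len(self.members) <= 3 * DEGREE - 1
```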
A new peer joins the multicast group by querying a well-known end system, the rendezvous point, for the IDs of the members of the top layer. Starting there and proceeding iteratively, the incoming peer (i) requests the list of members at the current layer from the cluster's leader, (ii) selects which of them to contact next based on the result of a given cost function, and (iii) moves on to the next layer. When the new peer finds the leader with minimal cost at the bottom layer, it joins the associated cluster.

Nemo's data delivery topology is implicitly defined by the set of packet-forwarding rules adopted. A peer sends a message to one of the leaders of its layer. Leaders (the leader and its co-leaders) forward any received message to all other peers in their clusters and up to the next higher layer. A node in charge of forwarding a packet to a given cluster can choose any of the crew members in that cluster's leader group as the destination.

Figure 2 illustrates the data forwarding algorithm using the logical topology from Figure 1; each row corresponds to one time step. At time t0, a publisher forwards the packet to its cluster leader, which in turn sends it to all cluster members and to the leader of the next higher layer (t1). At time t2, this leader forwards the packet to all its cluster members, i.e., the members of its lowest layer and the members of the second-lowest layer. In the last step, the leader of the cluster on the left forwards the packet to its members. While we have employed leaders in this example, Nemo uses co-leaders in a similar manner for forwarding.

Figure 2: Basic data forwarding in Nemo, one time step (t0 through t3) per row.

To illustrate Nemo's resilience to peer failures, Figure 3 shows an example of the forwarding algorithm in action. The forwarding responsibility is shared evenly among the leaders by alternating the message recipient among them; in case of a failed crew member, the remaining leaders can still forward their share of messages through the tree. Like other protocols aiming at high resilience [20, 4], Nemo relies on sequence numbers and triggered NACKs to detect lost packets. Every peer piggybacks, with each data packet, a bit-mask indicating the previously received packets. In addition, each peer maintains a cache of received packets and a list of missing ones. Once a gap (relative to a peer's upstream neighbors) is detected in the packet flow, the absent packets are considered missing after a given time period.
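The following sketch illustrates the forwarding rules described above, reusing the Cluster and Peer classes from the earlier sketch. It is our own simplified rendering, not Nemo's code: the send helper is a stand-in for the unicast primitive, and the round-robin choice by sequence number is just one simple way to alternate the recipient among crew members.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # sequence number carried by every data packet
    payload: bytes

def send(packet, dest_peer):
    # Stand-in for the unicast primitive (in practice, a TCP/UDP send to dest_peer).
    print(f"packet {packet.seq} -> {dest_peer.peer_id}")

def forward(packet, peer, cluster, upper_cluster=None):
    """Sketch of the forwarding rules: fan out within the cluster, then pass the
    packet up to one crew member of the next higher layer (if any)."""
    # Deliver to every other member of this cluster.
    for member in cluster.members:
        if member is not peer:
            send(packet, member)

    # Any crew member of the upper-layer cluster is a valid upward destination;
    # alternating among them shares the load and avoids a single point of failure.
    if upper_cluster is not None:
        crew = upper_cluster.crew()
        if crew:
            send(packet, crew[packet.seq % len(crew)])
```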
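The loss-detection mechanism described above can be sketched in a similarly simplified form. Here the piggybacked bit-mask is modeled as a set of sequence numbers the upstream neighbor reports having received, and the timeout value is an arbitrary placeholder; both the names and the bookkeeping are our assumptions rather than Nemo's actual implementation.

```python
import time

class LossDetector:
    """Simplified sketch of NACK-triggered loss detection (names are ours)."""

    def __init__(self, nack_timeout=0.5):
        self.received = set()             # sequence numbers of cached (received) packets
        self.missing = {}                 # seq -> time the gap was first observed
        self.nack_timeout = nack_timeout  # grace period before a packet is declared missing

    def on_packet(self, seq, upstream_bitmask):
        """Record a received packet and compare against the sender's piggybacked bit-mask."""
        self.received.add(seq)
        self.missing.pop(seq, None)
        now = time.monotonic()
        # Anything the upstream neighbor has seen but we have not is a gap.
        for s in upstream_bitmask:
            if s not in self.received:
                self.missing.setdefault(s, now)

    def packets_to_nack(self):
        """Sequence numbers whose grace period has expired; these would trigger NACKs."""
        now = time.monotonic()
        return sorted(s for s, t in self.missing.items() if now - t >= self.nack_timeout)
```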
3. Evaluation

We analyze the performance of Nemo using detailed simulations and wide-area experimentation. We compare Nemo's performance with that of three other protocols, Narada [11], Nice [3], and Nice-PRM [4], both in terms of application performance and protocol overhead. Application performance is captured by delivery ratio and end-to-end latency, while overhead is evaluated in terms of the number of duplicate packets.

For each of the three alternative protocols, the values of the available parameters were taken from the corresponding literature [11, 3, 4].

We used two different failure rates. The high failure rate employed a mean time to failure (MTTF) of 5 minutes and a mean time to repair (MTTR) of 2 minutes; the low failure rate used an MTTF of 60 minutes and an MTTR of 10 minutes. For details on the protocol implementations and on the experimental setup, we direct the reader to the associated technical report [6].

All experiments were run with a payload of 100 bytes. We opted for this relatively small packet size to avoid saturation effects in PlanetLab; for the simulations, we assume infinite bandwidth per link and model only link delay, so packet size is secondary. We employ a buffer size of 32 packets and a rate of 10 packets per second, which corresponds to a 3.2-second buffer, a realistic setting for applications such as multimedia streaming.

3.1. Simulation Results

For all simulation results, each data point is the mean of 25 independent runs.
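For quick reference, the experimental parameters listed above can be gathered into a small configuration sketch; the names below are ours and simply restate the values given in the text.

```python
# Experimental parameters as described in Section 3 (names are ours).
FAILURE_MODELS = {
    "high": {"mttf_minutes": 5,  "mttr_minutes": 2},
    "low":  {"mttf_minutes": 60, "mttr_minutes": 10},
}

EXPERIMENT = {
    "payload_bytes": 100,    # small payload to avoid saturation effects in PlanetLab
    "rate_pps": 10,          # packets per second
    "buffer_packets": 32,    # 32 packets at 10 pkt/s is a 3.2-second buffer
    "runs_per_point": 25,    # each simulation data point is the mean of 25 runs
}
```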