Publications
2025
Abstract
Geolocating network devices is essential for various research areas. Yet, despite notable advancements, it continues to be one of the most challenging issues for experimentalists. An approach that has proved effective is leveraging geolocation hints in the PTR records associated with network devices. Extracting and interpreting geo-hints from PTR records is challenging because the labels are primarily intended for human interpretation rather than computational processing. Additionally, a lack of standardization across operators — and even within a single operator, due to factors like rebranding, mergers, and acquisitions — complicates the process.
Abstract
We introduce Borges (Better ORGanizations Entities mappingS), a novel framework for improving AS-to-Organization mappings using Large Language Models (LLMs). Existing approaches, such as AS2Org and its extensions, rely on static WHOIS data and rule-based extraction from PeeringDB records, limiting their ability to capture complex, dynamic organizational structures.
Abstract
In just a few decades, the Internet has evolved from a research prototype to a cyber-physical infrastructure of critical importance for modern society and the global economy. Surprisingly, despite its new role, the survivability of the Internet—its ability to fulfill its mission in the presence of large-scale failures—has received limited attention. We introduce Domino, our initial design and implementation of a testbench tool for stress testing the Internet’s routing system, a key element of the critical Internet infrastructure. The simulation-based testbench consists of a comprehensive and flexible framework that allows for the incorporation of diverse survivability metrics, provides a platform for specifying, evaluating, and comparing different topologies of the underlying Internet infrastructure, and can account for modifications to networking protocols and architectural components. By demonstrating the utility of the proposed testbench with a number of illustrative examples, we make a case for stress testing as a viable approach to evaluating the Internet’s survivability in the face of evolving challenges.
Abstract
The Internet’s connectivity relies on a fragile submarine cable network (SCN), yet existing tools fall short in assessing its criticality. We introduce Calypso, a new framework that leverages traceroute data to map traffic to submarine cables. Validated through real-world case studies, Calypso reveals hidden risks and offers new insights for enhancing SCN resilience.
Abstract
The emergence of large cloud providers in the last decade has transformed the Internet, resulting in a seemingly ever-growing set of datacenters, points of presence, and network peers. Despite the availability of closer peering locations, some networks continue to peer with cloud providers at distant locations, traveling thousands of kilometers. In this paper, we employ a novel cloud-based traceroute campaign to characterize the distances networks travel to peer with the cloud. This unique approach allows us to gain unprecedented insights into the peering patterns of networks. Our findings reveal that 50% of the networks peer within 300 kilometers of the nearest datacenter. However, our analysis also reveals that over 20% of networks travel at least 6,700 kilometers beyond the proximity of the nearest computing facility, and some as much as 18,791 kilometers! While these networks connect with the cloud worldwide, from South America to Europe and Asia, many come to peer with cloud providers in North America, even from Oceania and Asia. We explore possible motivations for the persistence of distant peering, discussing factors such as cost-effective routes, enhanced peering opportunities, and access to exclusive content.
Abstract
This paper presents the first large-scale empirical study of commercial personally identifiable information (PII) removal systems — commercial services that claim to improve privacy by automating the removal of PII from data brokers' databases. Popular examples of such services include DeleteMe, Mozilla Monitor, and Incogni, among many others. The claims these services make may be very appealing to privacy-conscious Web users, but how effective these services actually are at improving privacy has not been investigated. This work aims to improve our understanding of commercial PII removal services in multiple ways. First, we conduct a user study where participants purchase subscriptions from four popular PII removal services and report (i) what PII the services find, (ii) from which data brokers, (iii) whether the services are able to have the information removed, and (iv) whether the identified information actually is PII describing the participant. Second, we compare the claims and promises the services make (e.g., which and how many data brokers each service claims to cover) against their measured behavior. We find that these services have significant accuracy and coverage issues that limit their usefulness as a privacy-enhancing technology. For example, we find that the measured services are unable to remove the majority of the identified PII records from data brokers (only 48.2% of the found records were successfully removed), and that most records identified by these services are not PII about the user (study participants found that only 41.1% of records identified by these services were PII about themselves).
2024
Abstract
We present the first large-scale analysis of the adoption of third-party serving infrastructures in government digital services. Drawing from data collected across 61 countries spanning every continent and region, capturing over 82% of the world’s Internet population, we examine the preferred hosting models for public-facing government sites and associated resources. Leveraging this dataset, we analyze government hosting strategies, cross-border dependencies, and the level of centralization in government web services. Among other findings, we show that governments predominantly rely on third-party infrastructure for data delivery, although this varies significantly, with even neighboring countries showing contrasting patterns. Despite a preference for third-party hosting solutions, most government URLs in our study are served from domestic servers, although again with significant regional variation. Looking at servers located overseas, while the majority are found in North America and Western Europe, we note some interesting bilateral relationships (e.g., with 79% of Mexico’s government URLs being served from the US, and 26% of China’s government URLs from Japan). This research contributes to understanding the evolving landscape of serving infrastructures in the government sector, and the choices governments make between leveraging third-party solutions and maintaining control over users’ access to their services and information.
Abstract
The Venezuelan crisis, unfolding over the past decade, has garnered international attention due to its impact on various sectors of civil society. While studies have extensively covered the crisis’s effects on public health, energy, and water management, this paper delves into a previously unexplored area - the impact on Venezuela’s Internet infrastructure. Amidst Venezuela’s multifaceted challenges, understanding the repercussions of this critical aspect of modern society becomes imperative for the country’s recovery.
Abstract
Geolocating network devices is essential for various research areas. Yet, despite notable advancements, it continues to be one of the most challenging issues for experimentalists. An approach that has proved effective is leveraging geolocation hints in the PTR records associated with network devices. We argue that Large Language Models (LLMs), rather than humans, are better equipped to identify patterns in DNS PTR records, and can significantly scale the coverage of tools like Hoiho. We introduce an approach that leverages LLMs to classify PTR records, generate regular expressions for these classes, and produce hint-to-location mappings. We present preliminary results showing the applicability of LLMs as a scalable approach to leveraging PTR records for infrastructure geolocation.
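The sketch below illustrates, under stated assumptions, the kind of pipeline this abstract describes: an LLM is asked to group PTR records by naming scheme and propose a regular expression per group whose capture isolates the geographic hint. The `call_llm` helper, the prompt wording, the example PTR records, and the hard-coded regex are hypothetical placeholders, not the paper's actual prompts or outputs.

```python
import re

# Hypothetical LLM client; substitute any chat-completion API available to you.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to an LLM endpoint of your choice")

PTR_RECORDS = [
    "ae-1-3502.ear2.Chicago2.Level3.net",     # purely illustrative records
    "be3066.ccr42.lax01.atlas.cogentco.com",
]

PROMPT = (
    "Group the following DNS PTR records by naming scheme and, for each group, "
    "return a Python regular expression whose first capture group isolates the "
    "geographic hint (airport code or city name):\n" + "\n".join(PTR_RECORDS)
)

def extract_hints(records, pattern):
    """Apply an LLM-proposed regex and return the captured geo-hints."""
    rx = re.compile(pattern)
    return {r: m.group(1) for r in records if (m := rx.search(r))}

if __name__ == "__main__":
    # In practice the regex would come from call_llm(PROMPT) after validation;
    # here one plausible answer is hard-coded so the sketch runs offline.
    proposed = r"\.([a-z]{3})\d{2}\.atlas\.cogentco\.com$"
    print(extract_hints(PTR_RECORDS, proposed))  # captures 'lax'
```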
Abstract
We investigate network peering location choices, focusing on whether networks opt for distant peering sites even when nearby options are available. We conduct a network-wide cloud-based traceroute campaign using virtual machine instances from four major cloud providers to identify peering locations and calculate the “peering stretch”: the extra distance networks travel beyond the nearest data center to their actual peering points. Our results reveal a median peering stretch of 300 kilometers, with some networks traveling as much as 6,700 kilometers. We explore the characteristics of networks that prefer distant peering points and the potential motivations behind these choices, providing insights into digital sovereignty and cybersecurity implications.
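As a rough illustration of the "peering stretch" idea, the sketch below computes the extra great-circle distance to an observed peering point beyond the nearest cloud datacenter. The coordinates, the haversine approximation, and the assumption that a single (lat, lon) point stands in for a network's location are simplifications of the paper's methodology.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def peering_stretch(network_loc, peering_loc, datacenter_locs):
    """Extra distance traveled to the observed peering point beyond the
    nearest cloud datacenter (all locations are (lat, lon) tuples)."""
    nearest = min(haversine_km(*network_loc, *dc) for dc in datacenter_locs)
    return haversine_km(*network_loc, *peering_loc) - nearest

# Illustrative coordinates only: a network in Santiago peering in Miami,
# with a closer datacenter available in São Paulo.
print(round(peering_stretch((-33.45, -70.67), (25.76, -80.19),
                            [(-23.55, -46.63), (25.76, -80.19)]), 1))
```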
Abstract
On November 28-29, 2023, Northwestern University hosted a workshop titled “Towards Re-architecting Today’s Internet for Survivability” in Evanston, Illinois, US. The goal of the workshop was to bring together a group of national and international experts to sketch and start implementing a transformative research agenda for solving one of our community’s most challenging yet important tasks: the re-architecting of tomorrow’s Internet for “survivability”, ensuring that the network is able to fulfill its mission even in the presence of large-scale catastrophic events. This report provides a necessarily brief overview of two full days of active discussions.
2023
Abstract
We present a longitudinal study of intercontinental long-haul links (LHLs) – links with latencies significantly higher than that of all other links in a traceroute path. Our study is motivated by the recognition of these LHLs as a network-layer manifestation of critical transoceanic undersea cables. We present a methodology and associated processing system for identifying long-haul links in traceroute measurements. We apply this system to a large corpus of traceroute data and report on multiple aspects of long-haul connectivity including country-level prevalence, routers as international gateways, preferred long-haul destinations, and the evolution of these characteristics over a 7-year period. We identify 85,620 layer-3 links (out of 2.7M links in a large traceroute dataset) that satisfy our definition of intercontinental long-haul, with many of them terminating in a relatively small number of nodes. An analysis of connected components shows a clearly dominant component with a relative size that remains stable despite a significant growth of the long-haul infrastructure.
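A minimal sketch of the intuition behind spotting long-haul links in a single traceroute path is shown below: flag hops whose RTT increase is both large in absolute terms and much larger than the typical per-hop increase on that path. The thresholds (`min_jump_ms`, `dominance`) are illustrative assumptions; the paper's methodology is more involved (e.g., it also requires the link endpoints to sit on different continents).

```python
def long_haul_links(hop_rtts_ms, min_jump_ms=40.0, dominance=3.0):
    """Flag links whose RTT increase between consecutive hops is both large
    in absolute terms and much larger than the typical increase on the path.
    hop_rtts_ms: minimum RTT observed at each hop, in path order."""
    deltas = [max(0.0, b - a) for a, b in zip(hop_rtts_ms, hop_rtts_ms[1:])]
    typical = sorted(deltas)[len(deltas) // 2] if deltas else 0.0  # median delta
    return [i for i, d in enumerate(deltas)
            if d >= min_jump_ms and d >= dominance * max(typical, 1.0)]

# Example: a transoceanic hop between positions 4 and 5 stands out.
print(long_haul_links([1.2, 2.0, 3.1, 8.0, 9.5, 96.0, 98.2]))  # [4]
```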
Abstract
We describe the results of a large-scale study of third-party dependencies around the world based on regional top-500 popular websites accessed from vantage points in 50 countries, together covering all inhabited continents. This broad perspective shows that dependencies on a third-party DNS, CDN or CA provider vary widely around the world, ranging from 19% to as much as 76% of websites, across all countries. The critical dependencies of websites – where the site depends on a single third-party provider – are equally spread ranging from 5% to 60% (CDN in Costa Rica and DNS in China, respectively). Interestingly, despite this high variability, our results suggest a highly concentrated market of third-party providers: three providers across all countries serve an average of 92% and Google, by itself, serves an average of 70% of the surveyed websites. Even more concerning, these differences persist a year later with increasing dependencies, particularly for DNS and CDNs. We briefly explore various factors that may help explain the differences and similarities in degrees of third-party dependency across countries, including economic conditions, Internet development, economic trading partners, categories, home countries, and traffic skewness of the country’s top-500 sites.
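To make the notion of a third-party DNS dependency concrete, here is a small sketch, assuming dnspython is installed, that compares the registered domain of a site with the registered domains of its authoritative nameservers; a site whose NS set lies entirely outside its own domain would be critically dependent on those providers under this crude heuristic. The eTLD+1 approximation and the provider attribution are simplifications of the paper's classification.

```python
import dns.resolver  # pip install dnspython

def dns_dependency(site):
    """Crude third-party DNS dependency check: compare the site's registered
    domain with the registered domains of its authoritative nameservers."""
    reg = ".".join(site.rstrip(".").split(".")[-2:])          # rough eTLD+1
    ns_names = [ns.target.to_text().rstrip(".").lower()
                for ns in dns.resolver.resolve(site, "NS")]
    providers = {".".join(n.split(".")[-2:]) for n in ns_names}
    return {"site": site,
            "third_party_providers": sorted(providers - {reg}),
            "self_hosted": reg in providers}

if __name__ == "__main__":
    # Example query; the result depends on the live DNS configuration.
    print(dns_dependency("example.com"))
```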
Abstract
An organization-level topology of the Internet is a valuable resource with uses that range from the study of organizations’ footprints and Internet centralization trends, to analysis of the dynamics of the Internet’s corporate structures as a result of (de)mergers and acquisitions. Current approaches to infer this topology rely exclusively on WHOIS databases and are thus impacted by their limitations, including errors and outdated data. We argue that a collaborative, operator-oriented database such as PeeringDB can bring a complementary perspective to the legally-bounded information available in WHOIS records. We present a new framework that leverages self-reported information available on PeeringDB to boost state-of-the-art WHOIS-based methodologies. We discuss the challenges and opportunities in using PeeringDB records for AS-to-organization mappings, present the framework’s design, and demonstrate its value in identifying companies operating in multiple continents and in tracking mergers and acquisitions over a five-year period.
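The sketch below shows one way to start from PeeringDB's self-reported data, as the abstract suggests: the public `/api/net` endpoint returns each network's ASN together with its PeeringDB organization id, which can be grouped into a coarse AS-to-organization view. This is only a first step, not the framework described in the paper (whose name is omitted in the abstract above); the `limit` parameter and field names reflect the public API as we understand it.

```python
import json
from collections import defaultdict
from urllib.request import urlopen

PDB_NET = "https://www.peeringdb.com/api/net"

def asns_by_org(limit=200):
    """Group ASNs by PeeringDB organization id (a crude AS-to-org view)."""
    with urlopen(f"{PDB_NET}?limit={limit}") as resp:
        nets = json.load(resp)["data"]
    groups = defaultdict(list)
    for net in nets:
        groups[net["org_id"]].append(net["asn"])
    return groups

if __name__ == "__main__":
    for org_id, asns in list(asns_by_org().items())[:5]:
        print(org_id, asns)
```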
2022
Abstract
A new model of global virtual Mobile Network Operator (MNO) – the Mobile Network Aggregator (MNA) – has recently been gaining significant traction. MNAs provide mobile communications services to their customers by leveraging multiple MNOs and connecting through the one that best matches their customers’ needs at any point in time (and space). MNAs naturally provide optimized global coverage by connecting through local MNOs across the different geographic regions where they provide service. In this paper, we dissect the operations of three MNAs, namely Google Fi, Twilio and Truphone. We perform measurements using the three selected MNAs to assess their performance for three major applications, namely DNS, web browsing and video streaming, and benchmark their performance against that of a traditional MNO. We find that even though MNAs incur some delay penalty compared to service accessed through the local MNOs in the geographic area where the user is roaming, they can significantly improve performance compared to the traditional roaming model of MNOs (e.g., home-routed roaming). Finally, in order to fully quantify the potential benefits that can be realized using the MNA model, we perform a set of emulations by deploying both control and user plane functions of open-source 5G implementations in different AWS locations, and measure the potential gains.
Abstract
Almost all popular Internet services are hosted in a select set of countries, forcing other nations to rely on international connectivity to access them. We identify nations where traffic towards a large portion of the country is serviced by a small number of Autonomous Systems, and, therefore, may be exposed to observation or selective tampering by these ASes. We introduce the Country-level Transit Influence (CTI) metric to quantify the significance of a given AS on the international transit service of a particular country. By studying the CTI values for the top ASes in each country, we find that 34 nations have transit ecosystems that render them particularly exposed, where a single AS is privy to traffic destined to over 40% of their IP addresses. In the nations where we are able to validate our findings with in-country operators, our top-five ASes are 90% accurate on average. In the countries we examine, CTI reveals two classes of networks frequently play a particularly prominent role: submarine cable operators and state-owned ASes.
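For intuition, the following is a simplified stand-in for a country-level transit influence computation: for each transit AS, sum the share of a country's IP addresses whose inbound routes traverse it. The input format is hypothetical, and the paper's CTI metric includes refinements (such as discounting ASes that appear far from the origin) that are omitted here.

```python
from collections import defaultdict

def transit_influence(paths):
    """Simplified transit influence: for each transit AS, the fraction of the
    country's IP addresses whose inbound paths traverse it.
    paths: list of (address_count, [transit_asns]) for prefixes in one country."""
    total = sum(count for count, _ in paths)
    influence = defaultdict(float)
    for count, transit_asns in paths:
        for asn in set(transit_asns):
            influence[asn] += count / total
    return dict(influence)

# AS1299 appears on paths covering all of the country's addresses,
# AS3356 only on paths covering a quarter of them.
print(transit_influence([(1024, [1299, 3356]), (3072, [1299])]))
# {1299: 1.0, 3356: 0.25} (key order may vary)
```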
Abstract
We investigate a novel approach to the use of jitter to infer network congestion using data collected by probes in access networks. We discovered a set of features in the time series of jitter and of jitter dispersion (a jitter-derived time series we define in this paper) that are characteristic of periods of congestion. We leverage these concepts to create a jitter-based congestion inference framework that we call Jitterbug. We apply Jitterbug’s capabilities to a wide range of traffic scenarios and discover that Jitterbug can correctly identify both recurrent and one-off congestion events. We validate Jitterbug inferences against state-of-the-art autocorrelation-based inferences of recurrent congestion. We find that the two approaches have strong congruity in their inferences, but Jitterbug holds promise for detecting one-off as well as recurrent congestion. We identify several future directions for this research, including leveraging ML/AI techniques to optimize the performance and accuracy of this approach in operational settings.
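The sketch below conveys the flavor of a jitter-based signal: jitter as the absolute difference between consecutive RTT samples, and a rolling spread of that jitter as a stand-in for the jitter dispersion time series defined in the paper (whose exact definition differs). The window size and the example RTTs are arbitrary.

```python
import statistics

def jitter_series(rtts_ms):
    """Jitter as the absolute difference between consecutive RTT samples."""
    return [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]

def jitter_dispersion(jitter, window=10):
    """A rolling measure of how spread out jitter is; sustained increases in
    this series are the kind of signature a congestion detector can look for.
    (Stand-in for the paper's exact jitter-dispersion definition.)"""
    return [statistics.pstdev(jitter[i:i + window])
            for i in range(len(jitter) - window + 1)]

rtts = [20, 21, 20, 22, 21, 45, 60, 38, 70, 52, 66, 41, 20, 21, 20]
j = jitter_series(rtts)
print([round(x, 1) for x in jitter_dispersion(j, window=5)])
```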
Abstract
The quality of the mobile web experience remains poor, partially as a result of complex websites and design choices that worsen performance, particularly for users on suboptimal networks or with low-end devices. Prior proposed solutions have seen limited adoption due to the demands they place on developers and content providers, and the infrastructure needed to support them. We argue that Document and Permissions Policies – ongoing efforts to enforce good practices on web design – may offer the basis for a readily-available and easily-adoptable solution, as they encode key best practices for web development. In this paper, as a first step, we evaluate the potential performance cost of violating these well-understood best practices and how common such violations are in today’s web. Our analysis shows, for example, that controlling for the unsized-media policy, something applicable to 70% of the top Alexa websites, can indeed significantly reduce Cumulative Layout Shift, a core metric for evaluating the performance of the web.
Abstract
Advances in cloud computing have simplified the way that both software development and testing are performed. This is not true for battery testing, for which state-of-the-art testbeds simply consist of one phone attached to a power meter. These testbeds have limited resources and access, and are overall hard to maintain; for these reasons, they often sit idle with no experiment to run. In this paper, we propose to share existing battery testbeds and transform them into vantage points of BatteryLab, a power monitoring platform offering heterogeneous devices and testing conditions. We have achieved this vision with a combination of hardware and software that augments existing battery testbeds with remote capabilities. BatteryLab currently counts three vantage points, one in Europe and two in the US, hosting three Android devices and one iPhone 7. We benchmark BatteryLab with respect to the accuracy of its battery readings, system performance, and platform heterogeneity. Next, we demonstrate how measurements can be run atop BatteryLab by developing the “Web Power Monitor” (WPM), a tool which can measure website power consumption at scale. We released WPM and used it to report on the energy consumption of Alexa’s top 1,000 websites across 3 locations and 4 devices (both Android and iOS).
2021
Abstract
While non-pharmaceutical interventions (NPIs) such as stay-at-home, shelter-in-place, and school closures are considered the most effective ways to limit the spread of infectious diseases, their use is generally controversial given the political, ethical, and socioeconomic issues they raise. Part of the challenge is the non-obvious link between the level of compliance with such measures and their effectiveness. In this paper, we argue that users’ demand on networked services can serve as a proxy for the social distancing behavior of communities, offering a new approach to evaluate these measures’ effectiveness. We leverage the vantage point of one of the largest worldwide CDNs together with publicly available datasets of mobile users’ behavior to examine the relationship between changes in user demand on the CDN and different interventions, including stay-at-home/shelter-in-place orders, mask mandates, and school closures. As networked systems become integral parts of our everyday lives, they can act as witnesses of our individual and collective actions. Our study illustrates the potential value of this new role.
Abstract
In this paper we present and apply a methodology to accurately identify state-owned Internet operators worldwide and their Autonomous System Numbers (ASNs). Obtaining an accurate dataset of ASNs of state-owned Internet operators enables studies where state ownership is an important dimension, including research related to Internet censorship and surveillance, cyber-warfare and international relations, ICT development and the digital divide, critical infrastructure protection, and public policy. Our approach is based on a multi-stage, in-depth manual analysis of datasets that are highly diverse in nature. We find that each of these datasets contributes in different ways to the classification process, and we identify limitations and shortcomings of these data sources. We obtain the first dataset of this type, make it available to the research community together with the several lessons we learned in the process, and perform a preliminary analysis based on our data. We find that 53% (i.e., 123) of the world’s countries are majority owners of Internet operators, highlighting that this is a widespread phenomenon. We also find and document the existence of subsidiaries of state-owned operators operating in foreign countries, an aspect that touches every continent and particularly affects Africa. We hope that this work and the associated dataset will inspire and enable a broad set of Internet measurement studies and interdisciplinary research.
Abstract
IP Exchange Providers (IPX-Ps) offer their customers (e.g., mobile or IoT service providers) global data roaming and support for a variety of emerging services. They peer with other IPX-Ps and form the IPX network, which interconnects 800 MNOs worldwide, offering their customers access to mobile services in any other country. Despite the importance of IPX-Ps, little is known about their operations and performance. In this paper, we shed light on these opaque providers by analyzing a large IPX-P with more than 100 PoPs in 40+ countries and a particularly strong presence in the Americas and Europe. Specifically, we characterize the traffic and performance of the main infrastructures of the IPX-P (i.e., 2G/3G/4G signaling and GTP tunneling), and discuss the implications for its operation as well as for the IPX-P’s customers. Our analysis is based on statistics we collected during two time periods (i.e., prior to and during the COVID-19 pandemic) and includes insights on the main services the platform supports (i.e., IoT and data roaming), traffic breakdown and geographical/temporal distribution, and communication performance (e.g., tunnel setup time, RTTs). Our results constitute a step towards advancing the understanding of IPX-Ps at their core, and provide guidelines for their operations and customer satisfaction.
Abstract
The Domain Name System (DNS) is both a key determinant of a user’s quality of experience (QoE) and privy to their tastes, preferences, and even the devices they own. Growing concern about user privacy and QoE has brought about a number of alternative DNS techniques and services, from public DNS to encrypted and oblivious DNS. Today, a user choosing among these services and their few providers is forced to prioritize – whether aware of it or not – among web performance, privacy, reliability, and the potential for a centralized market and its consequences. We present Ónoma, a DNS resolver that addresses the concerns about DNS centralization without sacrificing privacy or QoE by sharding requests across alternative DNS services, placing these services in competition with each other, and pushing resolution to the network edge. Our preliminary evaluation shows the potential benefits of this approach across locales, with different DNS services, content providers, and content distribution networks.
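A minimal sketch of the sharding idea follows, assuming dnspython is available: each registered domain is deterministically assigned to one of several DNS services, so no single provider observes the user's full query stream. The resolver list, the hash-based assignment, and the crude eTLD+1 extraction are illustrative choices, not Ónoma's actual design.

```python
import hashlib
import dns.resolver  # pip install dnspython

# Example public resolvers; any set of DNS services could be plugged in.
RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

def sharded_resolve(name, rrtype="A"):
    """Send each domain's queries to a resolver picked by hashing the
    registered domain, so no single provider observes every lookup."""
    key = ".".join(name.rstrip(".").split(".")[-2:])          # crude eTLD+1
    idx = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(RESOLVERS)
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [RESOLVERS[idx]]
    return [a.to_text() for a in r.resolve(name, rrtype)]

if __name__ == "__main__":
    print(sharded_resolve("example.com"))
```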
Abstract
Web performance researchers have to regularly choose between synthetic and in-the-wild experiments. On the one hand, synthetic tests are useful to isolate what needs to be measured, but they lack the realism of real networks, websites, and server-specific configurations. Even enumerating all these conditions can be challenging, and no existing tool or testbed currently allows for this. In this paper, as in life, we argue that unity makes strength: by sharing part of their experimenting resources, researchers can naturally build their desired realistic conditions without compromising on the flexibility of synthetic tests. We take a step toward realizing this vision with WebTune, a distributed platform for web measurements. At a high level, WebTune seamlessly integrates with popular web measurement tools like Lighthouse and Puppeteer, exposing to an experimenter fine-grained control over real networks and servers, as one would expect in synthetic tests. Under the hood, WebTune serves “Webtuned” versions of websites which are cloned and distributed to a testing network built on resources donated by the community. We evaluate WebTune with respect to its cloning accuracy and the complexity of the network conditions to be reproduced. Further, we demonstrate its functioning via a five-node deployment.
2020
Abstract
The closed design of mobile devices — with their increased security and consistent user interfaces — is in large part responsible for their becoming the dominant platform for accessing the Internet. These benefits, however, are not without a cost. The operation of mobile devices and their apps is not easy to understand by either users or operators. We argue for recovering transparency and control on mobile devices through an extensible platform that can intercept and modify traffic before leaving the device or, on arrival, before it reaches the operating system. Conceptually, this is the same view of the traffic that a traditional middlebox would have at the far end of the first link in the network path. We call this platform “middlebox zero” or MBZ. By being on-board, MBZ also leverages local context as it processes the traffic, complementing the network-wide view of standard middleboxes. We discuss the challenges of the MBZ approach, sketch a working design, and illustrate its potential with some concrete examples.
Abstract
The last three decades have seen much evolution in web and network protocols: amongst them, a transition from HTTP/1.1 to HTTP/2 and a shift from loss-based to delay-based TCP congestion control algorithms. This paper argues that these two trends are at odds with one another, ultimately hurting web performance. Using a controlled synthetic study, we show how delay-based congestion control protocols (e.g., BBR and CUBIC + Hybrid Slow Start) result in the underestimation of the available congestion window in mobile networks, and how that dramatically hampers the effectiveness of HTTP/2. To quantify the impact of this finding on the current web, we evolved the web performance toolbox in two ways. First, we develop Igor, a client-side TCP congestion control detection tool that can differentiate between loss-based and delay-based algorithms by focusing on their behavior during slow start. Second, we develop a Chromium patch which allows fine-grained control over the HTTP version to be used per domain. Using these new web performance tools, we analyze over 300 real websites and find that 67% of sites relying solely on delay-based congestion control algorithms have better performance with HTTP/1.1.
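As a rough illustration of the detection idea attributed to Igor, the heuristic below classifies a sender from per-round observations during slow start: loss-based algorithms keep doubling their rate until a loss, while delay-based or hybrid ones stop doubling once RTTs inflate, before any loss. The input format and the 1.8x/1.5x thresholds are assumptions, not the tool's actual logic.

```python
def classify_congestion_control(rounds):
    """rounds: per-RTT observations during slow start, each a dict with
    'rate' (delivered segments), 'rtt_ms', and 'loss' (bool)."""
    base_rtt = rounds[0]["rtt_ms"]
    for prev, cur in zip(rounds, rounds[1:]):
        if cur["loss"]:
            return "loss-based (slow start ended on loss)"
        doubling = cur["rate"] >= 1.8 * prev["rate"]
        inflated = cur["rtt_ms"] >= 1.5 * base_rtt
        if not doubling and inflated:
            return "delay-based (slow start ended on RTT inflation)"
    return "undetermined"

# Rate stops doubling while RTTs inflate and before any loss: delay-based.
rounds = [
    {"rate": 10, "rtt_ms": 50, "loss": False},
    {"rate": 20, "rtt_ms": 55, "loss": False},
    {"rate": 40, "rtt_ms": 70, "loss": False},
    {"rate": 48, "rtt_ms": 90, "loss": False},
]
print(classify_congestion_control(rounds))
```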
Abstract
Support for “things” roaming internationally has become critical for Internet of Things (IoT) verticals, from connected cars to smart meters and wearables, and explains the commercial success of Machine-to-Machine (M2M) platforms. We analyze IoT verticals operating with connectivity via IoT SIMs, and present the first large-scale study of commercially deployed IoT SIMs for energy meters. We also present the first characterization of an operational M2M platform and the first analysis of the rather opaque associated ecosystem. For operators, the exponential growth of IoT has meant increased stress on the infrastructure shared with traditional roaming traffic. Our analysis quantifies the adoption of roaming by M2M platforms and the impact they have on the underlying visited Mobile Network Operators (MNOs). To manage the impact of massive deployments of devices operating with an IoT SIM, operators must be able to distinguish between the latter and traditional inbound roamers. We build a comprehensive dataset capturing the device population of a large European MNO over three weeks. With this, we propose and validate a classification approach that can allow operators to distinguish inbound roaming IoT devices.
Abstract
Nearly all international data is carried by a mesh of submarine cables connecting virtually every region in the world. It is generally assumed that Internet services rely on this submarine cable network (SCN) for backend traffic, but that most users do not directly depend on it, as popular resources are either local or cached nearby. In this paper, we study the criticality of the SCN from the perspective of end users. We present a general methodology for analyzing the reliance on the SCN for a given region, and apply it to the most popular web resources accessed by users in 63 countries from every inhabited continent, collectively capturing ≈80% of the global Internet population. We find that as many as 64.33% of all web resources accessed from a specific country rely on the SCN. Despite the explosive growth of data center and CDN infrastructure around the world, at least 28.22% of the CDN-hosted resources traverse a submarine cable.
Abstract
The IPX Network interconnects about 800 Mobile Network Operators (MNOs) worldwide and a range of other service providers (such as cloud and content providers). It forms the core that enables global data roaming while supporting emerging applications, from VoLTE and video streaming to IoT verticals. This paper presents the first characterization of this so-far opaque IPX ecosystem and a first-of-its-kind in-depth analysis of an IPX Provider (IPX-P). The IPX Network is a private network formed by a small set of tightly interconnected IPX-Ps. We analyze an operational dataset from a large IPX-P that includes BGP data as well as signaling statistics. We shed light on the structure of the IPX Network as well as on the temporal, structural and geographic features of the IPX traffic. Our results are a first step in understanding the IPX Network at its core, key to fully understanding the global mobile Internet.
2019
Abstract
The rapid growth in the number of mobile devices, subscriptions and their associated traffic, has served as motivation for several projects focused on improving mobile users’ quality of experience (QoE). Few have been as contentious as the Google-initiated Accelerated Mobile Project (AMP), both praised for its seemingly instant mobile web experience and criticized based on concerns about the enforcement of its formats. This paper presents the first characterization of AMP’s impact on users’ QoE. We do this using a corpus of over 2,100 AMP webpages, and their corresponding non-AMP counterparts, based on trendy-keyword-based searches. We characterized AMP’s impact looking at common web QoE metrics, including Page Load Time, Time to First Byte and SpeedIndex (SI). Our results show that AMP significantly improves SI, yielding on average a 60% lower SI than non-AMP pages without accounting for prefetching. Prefetching of AMP pages pushes this advantage even further, with prefetched pages loading over 2,000ms faster than non-prefetched AMP pages. This clear boost may come, however, at a non-negligible cost for users with limited data plans as it incurs an average of over 1.4 MB of additional data downloaded, unbeknownst to users.
Abstract
We present the first detailed analysis of ad-blocking’s impact on user Web quality of experience (QoE). We use the most popular web-based ad-blocker to capture the impact of ad-blocking on QoE for the top Alexa 5,000 websites. We find that ad-blocking reduces the number of objects loaded by 15% in the median case, and that this reduction translates into a 12.5% improvement on page load time (PLT) and a slight worsening of time to first paint (TTFP) of 6.54%. We show the complex relationship between ad-blocking and quality of experience: despite the clear improvements to PLT in the average case, for the bottom 10th percentile this improvement comes at the cost of a slowdown in the initial responsiveness of websites, with a 19% increase in TTFP. To understand the relative importance of this trade-off on user experience, we run a large, crowd-sourced experiment with 1,000 users on Amazon Mechanical Turk. For this experiment, users were presented with websites for which ad-blocking results in both a reduction of PLT and a significant increase in TTFP. We find, surprisingly, that 71.5% of the time users show a clear preference for faster first paint over faster page load times, hinting at the importance of first impressions on web QoE.
Abstract
We present an approach to improve users’ web experience by dynamically reducing the complexity of websites rendered based on network conditions. Our approach is based on a simple insight – adjusting a browser window’s scale (i.e., zooming in/out) changes the number of objects placed above the fold and thus hides the loading of objects pushed below the fold within the user’s scroll time. We design ScaleUp, a browser extension that tracks network conditions and adjusts the browser scale appropriately to improve user web Quality of Experience (QoE) while preserving the design integrity of websites. Through controlled experiments, we demonstrate the impact of ScaleUp on a number of key QoE metrics over a random sample of 50 of the top 500 Alexa websites. We show that a simple adjustment in scale can result in an over 19% improvement in Above-The-Fold (ATF) time in the median case. While adjusting the scale factor can improve QoE metrics, it is unclear whether that translates into an improved web experience for users. We summarize findings from a large, crowdsourced experiment with 1,000 users showing that, indeed, improvements to QoE metrics correlate with an enhanced user experience. We have released ScaleUp as a Chrome Extension that now counts over 1,000 users worldwide, and report on some of the lessons learned from this deployment.
2018
Abstract
The growth of global Internet traffic has driven an exponential expansion of the submarine cable network, both in terms of the sheer number of links and its total capacity. Today, a complex mesh of hundreds of cables, stretching over 1 million kilometers, connects nearly every corner of the earth and is instrumental in closing the remaining connectivity gaps. Despite the scale and critical role of the submarine network for both business and society at large, our community has mostly ignored it, treating it as a black box in most Internet studies, from connectivity to inter-domain traffic and reliability. We make the case for a new research agenda focused on characterizing the global submarine network and the critical role it plays as a basic component of any inter-continental end-to-end connection.
Abstract
In-Flight Communication (IFC), available on a growing number of commercial flights, is often received by consumers with both awe for its mere availability and harsh criticism for its poor performance. Indeed, IFC provides Internet connectivity in some of the most challenging conditions, with aircraft traveling at speeds in excess of 500 mph at 30,000 feet above the ground. Yet, while existing services do provide basic Internet accessibility, anecdotal reports rank their quality of service as, at best, poor. In this paper, we present the first characterization of deployed IFC systems. Using over 45 flight-hours of measurements, we profile the performance of IFC across the two dominant access technologies – direct air-to-ground communication (DA2GC) and mobile satellite service (MSS). We show that IFC QoS is in large part determined by the high latencies inherent to DA2GC and MSS, with RTTs averaging 200ms and 750ms, respectively, and that these high latencies directly impact the performance of common applications such as web browsing. While each IFC technology is based on well-studied wireless communication technologies, our findings reveal that IFC links experience further degraded link performance than their technological antecedents. We find median loss rates of 7%, and nearly 40% loss at the 90th percentile for MSS, 6.8x larger than recent characterizations of residential satellite networks. We extend our IFC study by exploring the potential of the newly released HTTP/2 and QUIC protocols in an emulated IFC environment, finding that QUIC is able to improve page load times by as much as 7.9 times. In addition, we find that HTTP/2’s use of multiplexing multiple requests onto a single TCP connection performs up to 4.8x worse than HTTP/1.1 when faced with large numbers of objects. We use network emulation to explore proposed technological improvements to existing IFC systems, finding that high link losses, and not bandwidth, account for the largest share of performance degradation for applications such as web browsing.
Abstract
In this paper, we empirically demonstrate the growing importance of reliability by measuring its effect on user behavior. We present an approach for broadband reliability characterization using data collected by many emerging national initiatives to study broadband and apply it to the data gathered by the Federal Communications Commission’s Measuring Broadband America project. Motivated by our findings, we present the design, implementation, and evaluation of a practical approach for improving the reliability of broadband Internet access with multihoming.
Abstract
The appeal and clear operational and economic benefits of anycast to service providers have motivated a number of recent experimental studies on its potential performance impact for end users. For CDNs on mobile networks, in particular, anycast provides a simpler alternative to existing routing systems challenged by a growing, complex, and commonly opaque cellular infrastructure. This paper presents the first analysis of anycast performance for mobile users. In particular, our evaluation focuses on two distinct anycast services, both providing part of the DNS Root zone and together covering all major geographical regions. Our results show that mobile clients tend to be routed to suboptimal replicas in terms of geographical distance, more frequently while on a cellular connection than on WiFi, with a significant impact on latency. We find that this is not simply an issue of lacking better alternatives, and that the problem is not specific to particular geographic areas or autonomous systems. We close with a first analysis of the root causes of this phenomenon and describe some of the major classes of anycast anomalies revealed during our study, additionally including a systematic approach to automatically detect such anomalies without any sort of training or annotated measurements. We release our datasets to the networking community.
2017
Abstract
The impressive growth of the mobile Internet has motivated several industry reports retelling the story in terms of the number of devices or subscriptions sold per region, or the increase in mobile traffic, both WiFi and cellular. Yet, despite the abundance of such reports, we still lack an understanding of the impact of cellular networks around the world.
Abstract
Most residential broadband services are described in terms of their maximum potential throughput rate, often advertised as having speeds “up to X Mbps”. Though such promises are often met, they are fairly limited in scope and, unfortunately, there is no basis for an appeal if a customer were to receive compromised quality of service. While this ‘best effort’ model was sufficient in the early days, we argue that as broadband customers and their devices become more dependent on Internet connectivity, we will see an increased demand for more encompassing Service Level Agreements (SLAs).
Abstract
This is a report on the Workshop on Tracking Quality of Experience in the Internet, held at Princeton, October 21–22, 2015, jointly sponsored by the National Science Foundation and the Federal Communication Commission. The term Quality of Experience (QoE) describes a user’s subjective assessment of their experience when using a particular application. In the past, network engineers have typically focused on Quality of Service (QoS): performance metrics such as throughput, delay and jitter, packet loss, and the like. Yet, performance as measured by QoS parameters only matters if it affects the experience of users, as they attempt to use a particular application. Ultimately, the user’s experience is determined by QoE impairments (e.g., rebuffering). Although QoE and QoS are related—for example, a video rebuffering event may be caused by high packet-loss rate—QoE metrics ultimately affect a user’s experience.
2016
Abstract
The risk of placing an undesired load on networks and networked services through probes originating from measurement platforms has always been present. While several scheduling schemes have been proposed to avoid undue loads or DDoS-like effects from uncontrolled experiments, the motivation scenarios for such schemes have generally been considered “sufficiently unlikely” and safely ignored by most existing measurement platforms. We argue that the growth of large, crowdsourced measurement systems means we cannot ignore this risk any longer.
Abstract
The global airline industry conducted over 33 million flights in 2014 alone, carrying over 3.3 billion passengers. Surprisingly, the traffic management system handling this flight volume communicates over either VHF audio transmissions or plane transponders, exhibiting several seconds of latency and single bits per second of throughput. There is a general consensus that for the airline industry to serve the growing demand will require significant improvements to the air traffic management system; we believe that many of these improvements can leverage the past two decades of mobile networking research.
Abstract
Several broadband providers have been offering community WiFi as an additional service for existing customers and paid subscribers. These community networks provide Internet connectivity on the go for mobile devices and a path to offload cellular traffic. Rather than deploying new infrastructure or relying on the resources of an organized community, these provider-enabled community WiFi services leverage the existing hardware and connections of their customers. The past few years have seen a significant growth in their popularity and coverage, and some municipalities and institutions have started to consider them as the basis for public Internet access. In this paper, we present the first characterization of one such service – the Xfinity Community WiFi network. Taking the perspectives of the home-router owner and the public hotspot user, we characterize the performance and availability of this service in urban and suburban settings, at different times, between September 2014 and 2015. Our results highlight the challenges of providing these services in urban environments considering the tensions between coverage and interference, large obstructions and high population densities. Through a series of controlled experiments, we measure the impact on hosting customers, finding that in certain cases, the use of the public hotspot can degrade the host network’s throughput by up to 67% under high traffic on the public hotspot.
2015
Abstract
Poor visibility into the network hampers progress in a number of important research areas, from network troubleshooting to Internet topology and performance mapping. This persistent, well-known problem has served as motivation for numerous proposals to build or extend existing Internet measurement platforms by recruiting larger, more diverse vantage points. Capturing the edge of the network, however, remains an elusive goal.
Abstract
While mobile advertisement is the dominant source of revenue for mobile apps, the usage patterns of mobile users, and thus their engagement and exposure times, may be in conflict with the effectiveness of current ads. User engagement with apps can range from a few seconds to several minutes, depending on a number of factors such as users’ locations, concurrent activities and goals. Despite the wide-range of engagement times, the current format of ad auctions dictates that ads are priced, sold and configured prior to actual viewing, regardless of the actual ad exposure time.
Abstract
In recognition of the increasing importance of broadband, several governments have embarked on large-scale efforts to measure broadband services from devices within end-users’ homes. Participants for these studies were selected based on features that, a priori, were thought to be relevant to service performance, such as geographic region, access technology and subscription level. Every yearly deployment since has followed the same model, ensuring that the number of measurement points remains stable despite natural churn.
Abstract
The goal of our work is to characterize the current state of Cuba’s access to the wider Internet. This work is motivated by recent improvements in connectivity to the island and the growing commercial interest following the ease of restrictions on travel and trade with the US. In this paper, we profile Cuba’s networks, their connections to the rest of the world, and the routes of international traffic going to and from the island. Despite the addition of the ALBA-1 submarine cable, we find that round trip times to websites hosted off the island remain very high; pings to popular websites frequently took over 300 ms. We also find a high degree of path asymmetry in traffic to/from Cuba. Specifically, in our analysis we find that traffic going out of Cuba typically travels through the ALBA-1 cable, but, surprisingly, traffic on the reverse path often traverses high-latency satellite links, adding over 200 ms to round trip times. Last, we analyze queries to public DNS servers and SSL certificate requests to characterize the availability of network services in Cuba.
Abstract
Crowdsensing leverages the pervasiveness and power of mobile devices, such as smartphones and tablets, to enable ordinary citizens to collect, transport and verify data. Application domains range from environment monitoring, to infrastructure management and social computing. Crowdsensing services’ effectiveness is a direct result of their coverage, which is driven by the recruitment and mobility patterns of participants. Due to the typically uneven population distributions of most areas, and the regular mobility patterns of participants, less popular or populated areas suffer from poor coverage.
2014
Abstract
Though the impact of file-sharing of copyrighted content has been discussed for over a decade, only in the past few years have countries begun to adopt legislation to criminalize this behavior. These laws impose penalties ranging from warnings and monetary fines to disconnecting Internet service. While their supporters are quick to point out trends showing the efficacy of these laws at reducing use of file-sharing sites, their analyses rely on brief snapshots of activity that cannot reveal long- and short-term trends.
Abstract
We present the first study of broadband services in their broader context, evaluating the impact of service characteristics (such as capacity, latency and loss), their broadband pricing and user demand. We explore these relationships, beyond correlation, with the application of natural experiments. Most efforts on broadband service characterization have so far focused on performance and availability, yet we lack a clear understanding of how such services are being utilized and how their use is impacted by the particulars of the market. By analyzing over 23 months of data collected from 53,000 end hosts and residential gateways in 160 countries, along with a global survey of retail broadband plans, we empirically study the relationship between broadband service characteristics, pricing and demand. We show a strong correlation between capacity and demand, even though subscribers rarely fully utilize their links, but note a law of diminishing returns with relatively smaller increases in demand at higher capacities. Despite the fourfold increase in global IP traffic, we find that user demand on the network over a three-year period remained constant for a given bandwidth capacity. We exploit natural experiments to examine the causality between these factors. The reported findings represent an important step towards understanding how user behavior, and the market features that shape it, affect broadband networks and the Internet at large.
Abstract
Characterizing the flow of Internet traffic is important in a wide range of contexts, from network engineering and application design to understanding the network impact of consumer demand and business relationships. Despite the growing interest, the nearly impossible task of collecting large-scale, Internet-wide traffic data has severely constrained the focus of traffic-related studies. In this paper, we introduce a novel approach to characterize inter-domain traffic by reusing large, publicly available traceroute datasets. Our approach builds on a simple insight – the popularity of a route on the Internet can serve as an informative proxy for the volume of traffic it carries. It applies structural analysis to a dual representation of the AS-level connectivity graph derived from available traceroute datasets. Drawing analogies with city grids and traffic, it adapts data transformations and metrics of route popularity from urban planning to serve as proxies for traffic volume. We call this approach Network Syntax, highlighting the connection to urban planning’s Space Syntax. We apply Network Syntax in the context of a global ISP and a large Internet eXchange Point and use ground-truth data to demonstrate the strong correlation (r² values of up to 0.9) between inter-domain traffic volume and the different proxy metrics. Working with these two network entities, we show the potential of Network Syntax for identifying critical links and inferring missing traffic matrix measurements.
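To give a feel for route popularity as a traffic proxy, the sketch below counts how often each AS-level link appears across traceroute-derived AS paths. This is only the crudest version of the idea; the paper's Network Syntax metrics come from space-syntax-style structural analysis of a dual graph, which this sketch does not reproduce, and the paths shown use example/private ASNs.

```python
from collections import Counter

def link_popularity(as_paths):
    """Count how often each AS-level link appears across observed paths,
    a simple proxy for the relative volume of traffic a link carries."""
    counts = Counter()
    for path in as_paths:
        for a, b in zip(path, path[1:]):
            counts[tuple(sorted((a, b)))] += 1
    return counts

paths = [
    [64500, 3356, 1299, 64501],
    [64502, 3356, 1299, 64501],
    [64503, 174, 1299, 64501],
]
print(link_popularity(paths).most_common(2))
# [((1299, 64501), 3), ((1299, 3356), 2)]
```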
Abstract
DNS plays a critical role in the performance of smart devices within cellular networks. Besides name resolution, DNS is commonly relied upon for directing users to nearby content caches for better performance. In light of this, it is surprising how little is known about the structure of cellular DNS and its effectiveness as a client localization method.
Abstract
Tens of millions of individuals around the world use decentralized content distribution systems, a fact of growing social, economic, and technological importance. These sharing systems are poorly understood because, unlike in other technosocial systems, it is difficult to gather large-scale data about user behavior. Here, we investigate user activity patterns and the socioeconomic factors that could explain the behavior. Our analysis reveals that (i) the ecosystem is heterogeneous at several levels: content types are heterogeneous, users specialize in a few content types, and countries are heterogeneous in user profiles; and (ii) there is a strong correlation between socioeconomic indicators of a country and users’ behavior. Our findings open a research area on the dynamics of decentralized sharing ecosystems and the socioeconomic factors affecting them, and may have implications for the design of algorithms and for policymaking.
Abstract
CDNs are responsible for delivering most of today’s Internet content, replicating popular content on servers worldwide. CDNs direct users to “nearby” replicas based on the location of the users’ DNS resolver. The significantly better performance of next-generation cellular networks, like LTE, compared with 2G/3G networks has made content replica selection a significant factor in a mobile user’s experience.
Abstract
In this poster, we posit that in the developed world broadband reliability will soon become the dominant feature for service comparison. We use data collected from residential gateways (via FCC/SamKnows) and end-hosts (via Namehelp) to study the availability and reliability of fixed-line broadband networks. Using natural experiments, we look at the impact that increased network downtime has on user demand.
Abstract
A social news site presents user-curated content, ranked by popularity. Popular curators like Reddit or Facebook have become effective ways of crowdsourcing news or sharing personal opinions. Traditionally, these services require a centralized authority to aggregate data and determine what to display. However, the trust issues that arise from a centralized system are particularly damaging to the “Web democracy” that social news sites are meant to provide.
Abstract
We are becoming increasingly aware that the effectiveness of mobile crowdsourcing systems critically depends on the whims of their human participants, impacting everything from participant engagement to their compliance with the crowdsourced tasks.
2013
Abstract
In recent years the quantity and diversity of Internet-enabled consumer devices in the home have increased significantly. These trends complicate device usability and home resource management and have implications for crowdsourced approaches to broadband characterization.
Abstract
People use P2P systems such as BitTorrent to share an unprecedented variety and amount of content with others around the world. The random connection pattern used by BitTorrent has been shown to result in reduced performance for users and costly cross-ISP traffic. Although several client-side systems have been proposed to improve the locality of BitTorrent traffic, their effectiveness is limited by the availability of local peers. We show that sufficient locality is present in swarms – if one looks at the right time. We find that 50% of ISPs have at least five local peers online during the ISP’s peak hour, typically in the evening, compared to only 20% of ISPs during the median hour. To better discover these local peers, we show how to increase the overall peer discovery rate by over two orders of magnitude using client-side techniques: leveraging additional trackers, requesting more peers per sample, and sampling more frequently. We propose an approach to predict future availability of local peers based on observed diurnal patterns. This approach enables peers to selectively apply these techniques to minimize undue load on trackers.
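As a small illustration of the locality ideas in this abstract, the sketch below filters peers, aggregated from several trackers with frequent, large samples, against the local ISP's prefixes to estimate how many local peers are currently available. The prefixes and peer addresses are placeholders, and real tracker interaction and diurnal prediction are not shown.

```python
import ipaddress

def local_peers(peer_ips, local_prefixes):
    """Return the subset of peers whose addresses fall inside the local ISP's
    announced prefixes (a simple stand-in for 'local peer' detection)."""
    nets = [ipaddress.ip_network(p) for p in local_prefixes]
    return [ip for ip in peer_ips
            if any(ipaddress.ip_address(ip) in n for n in nets)]

# Peers aggregated from several trackers, sampled frequently and with large
# per-sample requests, as the abstract suggests; prefixes are illustrative.
peers = ["198.51.100.7", "203.0.113.9", "192.0.2.44"]
print(local_peers(peers, ["198.51.100.0/24", "192.0.2.0/24"]))
# ['198.51.100.7', '192.0.2.44']
```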
Abstract
Dasu is an extensible platform for running network measurements and experiments at the Internet’s edge. Its clients run on end hosts, have built-in support for broadband characterization—an incentive for end-user adoption—and execute third-party experiment tasks. The platform supports concurrent third-party experiments by delegating clients to tasks based on experiment specifications and resource availability. This demo focuses on Dasu’s task delegation mechanism and shows how it enables third-party experimentation while maintaining security and accountability.
Abstract
We present Dasu, a measurement experimentation platform for the Internet’s edge. Dasu supports both controlled network experimentation and broadband characterization, building on public interest on the latter to gain the adoption necessary for the former. We discuss some of the challenges we faced building a platform for the Internet’s edge, describe our current design and implementation, and illustrate the unique perspective it brings to Internet measurement. Dasu has been publicly available since July 2010 and has been installed by over 90,000 users with a heterogeneous set of connections spreading across 1,802 networks and 147 countries.
Abstract
We present the broadband characterization functionality of Dasu [1], showcase its user interface, and include side-by-side comparisons of competing broadband services. This poster complements Sánchez et al. [1] (appearing in NSDI) and its related demo submission; both focus on the design and implementation of Dasu as an experimental platform. As mentioned in [1], Dasu partially relies on service characterization as an incentive for adoption. This side of Dasu is a prototype implementation of our crowdsourced, end-system approach to broadband characterization. By leveraging monitoring information from local hosts and home routers, our approach can attain scalability, continuity and an end-user perspective while avoiding the potential pitfalls of similar models. Dasu currently includes the following measurements for broadband characterization: (i) latency to the first public IP hop (last mile), the last private IP hop (last meter), primary and secondary DNS servers, egress points, and content servers for popular websites; (ii) download and upload throughput, latency, and packet loss as measured by the Network Diagnostic Tool (NDT); (iii) DNS lookup performance; and (iv) web browsing performance (page-loading time) for the 20 most popular websites (ranked by Alexa.com). These measurements are extensible, as they are built on the same framework presented in [1]. Dasu also collects passive performance metrics when available, such as snapshots of BitTorrent performance and average throughput rates measured by YouTube. Dasu’s user interface includes summaries of these measurements, including a comparison with the average performance seen by other Dasu users in the same region on other ISPs. Since many users have restrictions on their monthly bandwidth usage, Dasu includes a history of bandwidth usage due to BitTorrent and other traffic from the local host; when available, Dasu uses UPnP counters to accurately keep track of a user’s total bandwidth consumption. In addition to service characterization, a key goal of Dasu is to enable a comparison of ISPs on users’ terms – essentially, the performance of their network applications, such as web browsing, video streaming, gaming, or VoIP. The poster focuses on the results presented to users through the Dasu client and includes a demo of Dasu characterizing a sample user’s Internet service (Figure 1 shows a sample screenshot of Dasu’s summary of a user’s Internet service). In the example shown, Dasu detects that the user’s DNS server configuration is suboptimal, since the secondary server has a shorter response time than the primary server. The poster includes multiple examples of such issues, Dasu’s suggestions on how to resolve them, and illustrative results comparing performance across ISPs as presented to the user.
[1] Sánchez, M. A., Otto, J. S., Bischof, Z. S., Choffnes, D. R., Bustamante, F. E., Krishnamurthy, B., and Willinger, W. Dasu: Pushing experiments to the Internet’s edge. In Proc. of USENIX NSDI (2013).
2012
Abstract
A number of novel wireless networked services, ranging from participatory sensing to social networking, leverage the increasing capabilities of mobile devices and the movements of the individuals carrying them. For many of these systems, their effectiveness fundamentally depends on coverage and the particular mobility patterns of the participants. Given the strong spatial and temporal regularity of human mobility, the needed coverage can typically only be attained through a large participant base.
Abstract
Broadband characterization has recently attracted much attention from the research community and the general public. Given this interest and the important business and policy implications of residential Internet service characterization, recent years have brought a variety of approaches to profiling Internet services, ranging from Web-based platforms to dedicated infrastructure inside home networks. We have previously argued that network-intensive applications provide an almost ideal vantage point for broadband service characterization at sufficient scale, nearly continuously and from end users. While we have shown that the approach is indeed effective at characterization and can enable performance comparisons between service providers and geographic regions, a key unanswered question is how well the performance characteristics captured by these network-intensive applications can predict the overall user experience with other applications. In this paper, using BitTorrent as an example network-intensive application, we present initial results that demonstrate how to obtain estimates of bandwidth and latency of a network connection by leveraging passive monitoring and limited active measurements from network-intensive applications. We then analyze user-experienced web performance under a variety of network conditions and show how estimated metrics from this network-intensive application can serve as good web performance predictors.
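As an illustration of the prediction step described above (and only that: the paper's actual predictors are not reproduced here), one could fit a simple least-squares model of page-load time against the bandwidth and latency estimates produced by the network-intensive application. The 1/bandwidth + RTT feature form below is an assumed, illustrative choice.

import numpy as np

def fit_plt_predictor(download_mbps, rtt_ms, plt_ms):
    """Fit PLT ~ a / bandwidth + b * RTT + c by ordinary least squares."""
    X = np.column_stack([1.0 / np.asarray(download_mbps, dtype=float),
                         np.asarray(rtt_ms, dtype=float),
                         np.ones(len(rtt_ms))])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(plt_ms, dtype=float), rcond=None)
    return coeffs                      # (a, b, c)

def predict_plt(coeffs, bandwidth_mbps, rtt_ms):
    a, b, c = coeffs
    return a / bandwidth_mbps + b * rtt_ms + c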
Abstract
The Domain Name System (DNS) is a fundamental component of today’s Internet. Recent years have seen radical changes to DNS with increases in usage of remote DNS and public DNS services such as OpenDNS. Given the close relationship between DNS and Content Delivery Networks (CDNs) and the pervasive use of CDNs by many popular applications including web browsing and real-time entertainment services, it is important to understand the impact of remote and public DNS services on users’ overall experience on the Web. This work presents a tool, namehelp, which comparatively evaluates DNS services in terms of the web performance they provide, and implements an end-host solution to address the performance impact of remote DNS on CDNs. The demonstration will show the functionality of namehelp with online results for its performance improvements.
Abstract
Content Delivery Networks (CDNs) rely on the Domain Name System (DNS) for replica server selection. DNS-based server selection builds on the assumption that, in the absence of information about the client’s actual network location, the location of a client’s DNS resolver provides a good approximation. The recent growth of remote DNS services breaks this assumption and can negatively impact client’s web performance. In this paper, we assess the end-to-end impact of using remote DNS services on CDN performance and present the first evaluation of an industry-proposed solution to the problem. We find that remote DNS usage can indeed significantly impact client’s web performance and that the proposed solution, if available, can effectively address the problem for most clients. Considering the performance cost of remote DNS usage and the limited adoption base of the industry-proposed solution, we present and evaluate an alternative approach, Direct Resolution, to readily obtain comparable performance improvements without requiring CDN or DNS participation.
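A rough sketch of the Direct Resolution idea (the client resolves the CDN name by querying an authoritative nameserver directly, so the answering server sees the client's own address rather than a remote recursive resolver) is shown below. It assumes the third-party dnspython package, skips many practical details (caching, retries, CNAME chains that span zones), and the helper names are illustrative.

import dns.resolver

def final_cname(hostname):
    # Follow the CNAME chain with the default resolver to reach the CDN-managed name.
    try:
        answer = dns.resolver.resolve(hostname, "CNAME")
        return final_cname(str(answer[0].target))
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return hostname

def direct_resolve(hostname):
    name = final_cname(hostname)
    zone = dns.resolver.zone_for_name(name)                   # enclosing DNS zone
    ns_name = dns.resolver.resolve(zone, "NS")[0].target      # one authoritative server
    ns_ip = dns.resolver.resolve(ns_name, "A")[0].address
    direct = dns.resolver.Resolver(configure=False)
    direct.nameservers = [ns_ip]                              # bypass the configured resolver
    return [rr.address for rr in direct.resolve(name, "A")]

# e.g. direct_resolve("www.example.com")  -- hypothetical CDN-served name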
2011
Abstract
Evaluating and characterizing Internet Service Providers (ISPs) is critical to subscribers shopping for alternative ISPs, companies providing reliable Internet services, and governments surveying the coverage of broadband services to their citizens. Ideally, ISP characterization should be done at scale, continuously, and from end users. While there has been significant progress toward this end, current approaches exhibit apparently unavoidable tradeoffs between coverage, continuous monitoring and capturing user-perceived performance. In this paper, we argue that network-intensive applications running on end systems avoid these tradeoffs, thereby offering an ideal platform for ISP characterization. Based on data collected from 500,000 peer-to-peer BitTorrent users across 3,150 networks, together with the reported results from the U.K. Ofcom/SamKnows studies, we show the feasibility of this approach to characterize the service that subscribers can expect from a particular ISP. We discuss remaining research challenges and design requirements for a solution that enables efficient and accurate ISP characterization at an Internet scale.
Abstract
A thorough understanding of the network impact of emerging large-scale distributed systems – where traffic flows and what it costs – must encompass users’ behavior, the traffic they generate and the topology over which that traffic flows. In the case of BitTorrent, however, previous studies have been limited by narrow perspectives that restrict such analysis. This paper presents a comprehensive view of BitTorrent, using data from a representative set of 500,000 users sampled over a two-year period, located in 169 countries and 3,150 networks. This unique perspective captures unseen trends and reveals several unexpected features of the largest peer-to-peer system. For instance, over the past year total BitTorrent traffic has increased by 12%, driven by 25% increases in per-peer hourly download volume despite a 10% decrease in the average number of online peers. We also observe stronger diurnal usage patterns and, surprisingly given the bandwidth-intensive nature of the application, a close alignment between these patterns and overall traffic. Considering the aggregated traffic across access links, this has potential implications on BitTorrent-associated costs for Internet Service Providers (ISPs). Using data from a transit ISP, we find a disproportionately large impact under a commonly used burstable (95th-percentile) billing model. Last, when examining BitTorrent traffic’s paths, we find that for over half its users, most network traffic never reaches large transit networks, but is instead carried by small transit ISPs. This raises questions on the effectiveness of most in-network monitoring systems to capture trends on peer-to-peer traffic and further motivates our approach.
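For readers unfamiliar with burstable billing, the 95th-percentile computation referenced above works roughly as follows (an illustrative helper, not code from the paper): traffic is sampled as per-interval average rates, typically over 5-minute intervals, the top 5% of samples are discarded, and the highest remaining sample is the billed rate.

def ninety_fifth_percentile(samples_mbps):
    """samples_mbps: per-interval (e.g. 5-minute) average rates over the billing period."""
    ordered = sorted(samples_mbps)
    index = int(0.95 * len(ordered)) - 1      # highest sample outside the top 5%
    return ordered[max(index, 0)]

# A 30-day month at 5-minute granularity yields 30 * 24 * 12 = 8640 samples.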
Abstract
Peer-to-peer (P2P) systems represent some of the largest distributed systems in today’s Internet. Among P2P systems, BitTorrent is the most popular, potentially accounting for 20-50% of P2P file-sharing traffic. In this paper, we argue that this popularity can be leveraged to monitor the impact of natural disasters and political unrest on the Internet. We focus our analysis on the 2011 Tohoku earthquake and tsunami and use a view from BitTorrent to show that it is possible to identify specific regions and network links where Internet usage and connectivity were most affected.
Abstract
Evaluating and characterizing access ISPs is critical to consumers shopping for alternative services and governments surveying the availability of broadband services to their citizens. We present Dasu, a service for crowdsourcing ISP characterization to the edge of the network. Dasu is implemented as an extension to a popular BitTorrent client [5] and has been available since July 2010. While the prototype uses BitTorrent as its host application, its design is agnostic to the particular host application. The demo showcases our current implementation using both a prerecorded execution trace and a live run.
2010
Abstract
While P2P systems benefit from large numbers of interconnected nodes, each of these connections provides an opportunity for eavesdropping. Using only the connection patterns gathered from 10,000 BitTorrent (BT) users during a one-month period, we determine whether randomized connection patterns give rise to communities of users. Even though connections in BT require not only shared interest in content, but also concurrent sessions, we find that strong communities naturally form – users inside a typical community are 5 to 25 times more likely to connect to each other than with users outside. These strong communities enable guilt by association, where the behavior of an entire community of users can be inferred by monitoring one of its members. Our study shows that through a single observation point, an attacker trying to identify such communities can uncover 50% of the network within a distance of two hops. Finally, we propose and evaluate a practical solution that mitigates this threat.
Abstract
Today’s open platforms for network measurement and distributed system research, which we collectively refer to as testbeds in this article, provide opportunities for controllable experimentation and evaluations of systems at the scale of hundreds or thousands of hosts. In this article, we identify several issues with extending results from such platforms to Internet-wide perspectives. Specifically, we try to quantify the level of inaccuracy and incompleteness of testbed results when applied to the context of a large-scale peer-to-peer (P2P) system. Based on our results, we emphasize the importance of measurements in the appropriate environment when evaluating Internet-scale systems.
Abstract
Network positioning systems provide an important service to large-scale P2P systems, potentially enabling clients to achieve higher performance, reduce cross-ISP traffic and improve the robustness of the system to failures. Because traces representative of this environment are generally unavailable, and there is no platform suited for experimentation at the appropriate scale, network positioning systems have been commonly implemented and evaluated in simulation and on research testbeds. The performance of network positioning remains an open question for large deployments at the edges of the network. This paper evaluates how four key classes of network positioning systems fare when deployed at scale and measured in P2P systems where they are used. Using 2 billion network measurements gathered from more than 43,000 IP addresses probing over 8 million other IPs worldwide, we show that network positioning exhibits noticeably worse performance than previously reported in studies conducted on research testbeds. To explain this result, we identify several key properties of this environment that call into question fundamental assumptions driving network positioning research.
Abstract
Due to the ever increasing level of environmental noise that the EU population is exposed to, all countries are directed to disseminate community noise level exposures to the public in accordance with EU Directive 2002/49/EC. Environmental noise maps are used for this purpose and as a means to avoid, prevent or reduce the harmful effects caused by exposure to environmental noise. There is no common standard to which these maps are generated in the EU and indeed these maps are in most cases inaccurate due to poorly informed predictive models. This paper develops a novel environmental noise monitoring methodology which will allow accurate road noise measurements to replace erroneous source model approximations in the generation of noise maps. The approach proposes the acquisition of sound levels and position coordinates by instrumented vehicles such as bicycles or cars or by pedestrians equipped with a Smartphone. The accumulation of large amounts of data over time will result in extremely high spatial and temporal resolution, resulting in an accurate measurement of environmental noise.
Abstract
The user experience for networked applications is becoming a key benchmark for customers and network providers. Perceived user experience is largely determined by the frequency, duration and severity of network events that impact a service. While today’s networks implement sophisticated infrastructure that issues alarms for most failures, there remains a class of silent outages (e.g., caused by configuration errors) that are not detected. Further, existing alarms provide little information to help operators understand the impact of network events on services. Attempts to address this through infrastructure that monitors end-to-end performance for customers have been hampered by the cost of deployment and by the volume of data generated by these solutions. We present an alternative approach that pushes monitoring to applications on end systems and uses their collective view to detect network events and their impact on services - an approach we call Crowdsourcing Event Monitoring (CEM). This paper presents a general framework for CEM systems and demonstrates its effectiveness for a P2P application using a large dataset gathered from BitTorrent users and confirmed network events from two ISPs. We discuss how we designed and deployed a prototype CEM implementation as an extension to BitTorrent. This system performs online service-level network event detection through passive monitoring and correlation of performance in end-users’ applications.
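The corroboration step at the heart of this approach can be pictured with a deliberately simplified sketch (not the paper's detector, which is likelihood-based): each host reports when it locally detects a performance drop, and an event is flagged for a network once enough distinct hosts report drops in the same time window. The names and thresholds below are illustrative.

from collections import defaultdict

def detect_events(reports, window_s=60, min_hosts=10):
    """reports: iterable of (network_id, host_id, unix_time) local-drop reports."""
    buckets = defaultdict(set)                     # (network, window index) -> reporting hosts
    for net, host, ts in reports:
        buckets[(net, int(ts) // window_s)].add(host)
    return [(net, w * window_s, len(hosts))
            for (net, w), hosts in sorted(buckets.items())
            if len(hosts) >= min_hosts]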
Abstract
Taming the Torrent. David R. Choffnes and Fabián E. Bustamante, Northwestern University (drchoffnes@eecs.northwestern.edu, fabianb@eecs.northwestern.edu). Over the past decade, the peer-to-peer (P2P) model for building distributed systems has enjoyed incredible success and popularity, forming the basis for a wide variety of important Internet applications such as file sharing, voice-over-IP (VoIP), and video streaming. This success has not been universally welcomed. Internet Service Providers (ISPs) and P2P systems, for example, have developed a complicated relationship that has been the focus of much media attention. While P2P bandwidth demands have yielded significant revenues for ISPs as users upgrade to broadband for improved P2P performance, P2P systems are one of their greatest and costliest traffic engineering challenges, because peers establish connections largely independent of the Internet routing. Ono [4] is an extension to a popular BitTorrent client that biases P2P connections to avoid much of these costs without sacrificing, and potentially improving, BitTorrent performance. Most P2P systems rely on application-level routing through an overlay topology built on top of the Internet. Peers in such overlays are typically connected in a manner oblivious to the underlying network topology and routing. These random connections can result in nonsensical outcomes where a peer—let’s say, in the authors’ own campus network in the Chicago suburbs—downloads content from a host on the West Coast even if the content is available from a much closer one in the Chicago area. This can not only lead to suboptimal performance for P2P users, but can also incur significantly larger ISP costs resulting from the increased interdomain (cross-ISP) traffic. The situation has driven ISPs to the unfavorable solution of interfering with users’ P2P traffic—shaping, blocking, or otherwise limiting it—all with questionable effectiveness. For instance, when early P2P systems ran over a fixed range of ports (e.g., 6881–89 for BitTorrent), ISPs attempted to shape traffic directed toward those ports. In response, P2P systems have switched to nonstandard ports, often selected at random. More advanced ISP strategies, such as deep packet inspection to identify and shape P2P-specific flows, have resulted in P2P clients that encrypt their connections. Recently, some ISPs have attempted to reduce P2P traffic by placing caches at ISP network edges or by using network appliances for spoofing TCP RST messages, which trick clients into closing connections to remote peers. The legality of these approaches is questionable. By caching content, ISPs may become participants in illegal distribution of copyrighted material, while interfering with P2P flows in a non-transparent way not only may break the law but also can lead to significant backlash.
Given this context, it is clear that any general and sustainable solution requires P2P users to buy in. One possible approach would be to enable some form of cooperation between P2P users and ISPs. ISPs could offer an oracle service [2] that P2P users rely on for selecting among candidate neighbor peers, thus allowing P2P systems to satisfy their own goals while providing ISPs with a mechanism to manage their traffic [7]. However, we have seen that P2P users and ISPs historically have little reason to trust each other. Beyond this, supporting such an oracle requires every participating ISP to deploy and maintain infrastructure that participates in P2P protocols. To drive peer selection, Ono adopts a new approach based on recycled network views gathered at low cost from content distribution networks (CDNs), without additional path monitoring or probing. Biased peer selection addresses a key network management issue for ISPs, obviating controversial practices such as traffic shaping or blocking. By relying on third-party CDNs to guide peer selection, Ono ensures well-informed recommendations (thus facilitating and encouraging adoption) while bypassing the potential trust issues of direct cooperation between P2P users and ISPs. We have shown that peers selected based on this information are along high-quality paths to each other, offering the necessary performance incentive for large-scale adoption. At the end of July 2009, Ono had been installed more than 630,000 times by users in 200 countries. BitTorrent Basics. Before describing Ono, we discuss how BitTorrent peers select neighbors for transferring content. A more complete description of the BitTorrent protocol can be found in the article by Piatek et al. [5]. BitTorrent distributes a file by splitting it into fixed-size blocks, called pieces, that are exchanged among the set of peers participating in a swarm. After receiving any full piece, a peer can upload it to other directly connected peers in the same swarm. To locate other peers sharing the same content, peers contact a tracker, implemented as a centralized or distributed service, that returns a random subset of available peers. By default, each peer initially establishes a number of random connections from the subset returned by the tracker. As the transfer progresses, downloading peers drop low-throughput connections and replace them with new random ones. CDNs as Oracles. Ono biases peer connections to reduce cross-ISP traffic without negatively impacting, but indeed potentially improving, system performance. Ono’s peer recommendations are driven by recycled network views gathered at low cost from CDNs. CDNs such as Akamai or Limelight attempt to improve Web performance by delivering content to end users from multiple, geographically dispersed servers.
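The peer-selection step can be sketched as follows, under the assumption (stated here, not taken from the article text) that each peer summarizes its recent CDN lookups as a "ratio map": the fraction of lookups recently redirected to each replica cluster. Peers whose maps look alike are likely close in the network, so connections to them are preferred; the similarity measure and threshold below are illustrative.

import math

def cosine_similarity(map_a, map_b):
    keys = set(map_a) | set(map_b)
    dot = sum(map_a.get(k, 0.0) * map_b.get(k, 0.0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in map_a.values())) *
            math.sqrt(sum(v * v for v in map_b.values())))
    return dot / norm if norm else 0.0

def preferred_peers(my_map, candidate_maps, threshold=0.15):
    """candidate_maps: {peer_id: ratio_map}; returns candidates ranked by similarity."""
    scored = sorted(((cosine_similarity(my_map, m), pid) for pid, m in candidate_maps.items()),
                    reverse=True)
    return [pid for score, pid in scored if score >= threshold]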
2009
Abstract
An accurate Internet topology graph is important in many areas of networking, from deciding ISP business relationships to diagnosing network anomalies. Most Internet mapping efforts have derived the network structure, at the level of interconnected autonomous systems (ASes), from a limited number of either BGP- or traceroute-based data sources. While techniques for charting the topology continue to improve, the growth of the number of vantage points is significantly outpaced by the rapid growth of the Internet. In this paper, we argue that a promising approach to revealing the hidden areas of the Internet topology is through active measurement from an observation platform that scales with the growing Internet. By leveraging measurements performed by an extension to a popular P2P system, we show that this approach indeed exposes significant new topological information. Based on traceroute measurements from more than 992,000 IPs in over 3,700 ASes distributed across the Internet hierarchy, our proposed heuristics identify 23,914 new AS links not visible in the publicly-available BGP data – 12.86% more customer-provider links and 40.99% more peering links than previously reported. We validate our heuristics using data from a tier-1 ISP and show that they correctly filter out all false links introduced by public IP-to-AS mapping. We have made the identified set of links and their inferred relationships publicly available.
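A deliberately simplified sketch of the basic step behind these heuristics follows: map each traceroute hop to its origin AS and collect adjacent-AS pairs absent from the BGP-derived link set. The paper's heuristics additionally filter out the false links that naive public IP-to-AS mapping introduces (e.g., at IXPs or between sibling ASes); that filtering is omitted here, and the function names are illustrative.

def candidate_new_links(traceroute_paths, ip2as, bgp_links):
    """traceroute_paths: iterable of IP-hop lists; ip2as: callable IP -> ASN or None;
    bgp_links: set of frozenset({asn_a, asn_b}) observed in public BGP data."""
    found = set()
    for path in traceroute_paths:
        asns = [a for a in (ip2as(ip) for ip in path) if a is not None]
        for a, b in zip(asns, asns[1:]):
            if a != b and frozenset((a, b)) not in bgp_links:
                found.add(frozenset((a, b)))
    return found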
Abstract
Peer-to-peer (P2P) systems enable a wide range of new and important Internet applications that can provide low-cost, high-performance and resilient services. While a strength of the P2P paradigm is the ability to take advantage of large numbers of connections among diverse hosts, each of these connections provides an opportunity for eavesdropping on sensitive data. A number of efforts attempt to conceal connection data with private, trusted networks and encryption; however, the mere existence of a connection is sufficient to reveal information about user activity. Using only the connection patterns gathered during a one-month period (comprising a stable population of 10,000 BitTorrent users), we extract communities of users that share interest in the same content. Despite the fact that connections in BitTorrent require not only shared interest in content, but also concurrent sessions, we find that strong communities of users naturally form – our analysis reveals that users inside the typical community are 5 to 25 times more likely to connect to each other than with users outside. These strong communities enable a guilt-by-association attack, where an entire community of users can be classified by monitoring one of its members. Our study shows that through a single observation point, an attacker trying to identify such communities can uncover 50% of the network within a distance of two hops. To address this issue, we propose a new privacy-preserving layer for P2P systems that disrupts community identification by obfuscating users’ network behavior. We show that a user can achieve plausible deniability by simply adding a small percent (between 25 and 50%) of additional random connections that are statistically indistinguishable from natural ones. Unlike connections in anonymizing networks, these random connections have the benefit of adding available bandwidth to the related swarms. Because our solution is protocol compliant and incrementally deployable, we have made it available as an extension to a popular BitTorrent client.
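One standard way to look for such communities in a connection graph (not necessarily the clustering used in the paper) is modularity-based clustering, sketched below assuming the networkx package; the inside/outside edge densities it reports correspond to the "times more likely to connect inside" statistic quoted above.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_affinities(G):
    """Yield (community, inside_density, outside_density) for each detected community."""
    for com in greedy_modularity_communities(G):
        com = set(com)
        inside = G.subgraph(com).number_of_edges()
        outside = sum(1 for u, v in G.edges() if (u in com) != (v in com))
        n, m = len(com), G.number_of_nodes() - len(com)
        inside_density = inside / (n * (n - 1) / 2) if n > 1 else 0.0
        outside_density = outside / (n * m) if n and m else 0.0
        yield com, inside_density, outside_density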
Abstract
For both technological and economic reasons, the default path between two end systems in the wide-area Internet can be suboptimal. This observation has motivated a number of systems that attempt to improve reliability and performance by routing over one or more hops in an overlay. Most of the proposed solutions, however, fall at an extreme in the cost-performance trade-off. While some provide near-optimal performance with an unscalable measurement overhead, others avoid measurement when selecting routes around network failures but make no attempt to optimize performance. This paper presents an experimental evaluation of an alternative approach to scalable, performance detouring based on the strategic reuse of measurements from other large distributed systems, namely content distribution networks (CDNs). By relying on CDN redirections as hints on network conditions, higher performance paths are readily found with little overhead and no active network measurement. We report results from a study of more than 13,700 paths between 170 widely-distributed hosts over a three-week period, showing the advantages of this approach compared to alternative solutions. We demonstrate the practicality of our approach by implementing an FTP suite that uses our publicly available SideStep library to take advantage of these alternative Internet routes.
Abstract
The user experience for networked applications is becoming a key benchmark for customers and network providers when comparing, buying and selling alternative services. There is thus a clear need to detect, isolate and determine the root causes of network events that impact end-to-end performance and the user experience so that operators can resolve such issues in a timely manner. We argue that the most appropriate place for monitoring these service-level events is at the end systems where the services are used, and propose a new approach to enable and support this: Crowdsourcing Cloud Monitoring (C2M). This paper presents a general framework for C2M systems and demonstrates its effectiveness using a large dataset of diagnostic information gathered from BitTorrent users, together with confirmed network events from two ISPs. We demonstrate that our crowdsourcing approach allows us to detect network events worldwide, including events spanning multiple networks. We discuss how we designed, implemented and deployed an extension to BitTorrent that performs real-time network event detection using our approach. It has already been installed more than 34,000 times.
Abstract
Vehicular networks are emerging as a new distributed system environment with myriad possible applications. Most studies on vehicular networks are carried out via simulation, given the logistical and economical problems with large-scale deployments. This paper investigates the impact of realistic radio propagation settings on the evaluation of VANET-based systems. Using a set of instrumented cars, we collected IEEE 802.11b signal propagation measurements between vehicles in a variety of urban and suburban environments. We found that signal propagation between vehicles varies in different settings, especially between line-of-sight (“down the block”) and non line-of-sight (“around the corner”) communication in the same setting. Using a probabilistic shadowing model, we evaluate the impact of different parameter settings on the performance of an epidemic data dissemination protocol and discuss the implications of our findings. We also suggest a variation of a basic signal propagation model that incorporates additional realism without sacrificing scalability by taking advantage of environmental information, including node locations and street information.
Abstract
We consider the problem of data dissemination in vehicular networks. Our main goal is to compare the application-level performance of fully distributed and centralized data dissemination approaches in the context of traffic advisory systems. Vehicular networks are emerging as a new distributed system environment with myriad promising applications. Wirelessly connected, GPS-equipped vehicles can be used, for instance, as probes for traffic advisory or pavement condition information services with significant improvements in cost, coverage and accuracy. There is an ongoing discussion on the pros and cons of alternative approaches to data distribution for these applications. Proposed centralized, or infrastructure-based, models rely on road-side equipment to upload information to a central location for later use. Distributed approaches take advantage of the direct exchanges between participating vehicles to achieve higher scalability at the potential cost of data consistency. While distributed solutions can significantly reduce infrastructures’ deployment and maintenance costs, it is unclear what the impact of “imprecise” information is to an application or what level of adoption is needed for this model to be effective. This paper investigates the inherent trade-offs in the adoption of distributed or centralized approaches to a traffic advisory service, a commonly proposed application. We base our analysis on a measurement study of signal propagation in urban settings and extensive simulation-based experimentation in the Chicago road network.
2008
Abstract
Peer-to-peer (P2P) systems, which provide a variety of popular services, such as file sharing, video streaming and voice-over-IP, contribute a significant portion of today’s Internet traffic. By building overlay networks that are oblivious to the underlying Internet topology and routing, these systems have become one of the greatest traffic-engineering challenges for Internet Service Providers (ISPs) and the source of costly data traffic flows. In an attempt to reduce these operational costs, ISPs have tried to shape, block or otherwise limit P2P traffic, much to the chagrin of their subscribers, who consistently find ways to eschew these controls or simply switch providers. In this paper, we present the design, deployment and evaluation of an approach to reducing this costly cross-ISP traffic without sacrificing system performance. Our approach recycles network views gathered at low cost from content distribution networks to drive biased neighbor selection without any path monitoring or probing. Using results collected from a deployment in BitTorrent with over 120,000 users in nearly 3,000 networks, we show that our lightweight approach significantly reduces cross-ISP traffic and, over 33% of the time, it selects peers along paths that are within a single autonomous system (AS). Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random, and that these high-quality paths can lead to significant improvements in transfer rates. In challenged settings where peers are overloaded in terms of available bandwidth, our approach provides 31% average download-rate improvement; in environments with large available bandwidth, it increases download rates by 207% on average (and improves median rates by 883%).
Abstract
Many large-scale distributed systems can benefit from a service that allows them to select among alternative nodes based on their relative network positions. A variety of approaches propose new measurement infrastructures that attempt to scale this service to large numbers of nodes by reducing the amount of direct measurements to end hosts. In this paper, we introduce a new approach to relative network positioning that eliminates direct probing by leveraging pre-existing infrastructure. Specifically, we exploit the dynamic association of nodes with replica servers from large content distribution networks (CDNs) to determine relative position information – we call this approach CDN-based Relative network Positioning (CRP). We demonstrate how CRP can support two common examples of location information used by distributed applications: server selection and dynamic node clustering. After describing CRP in detail, we present results from an extensive wide-area evaluation that demonstrates its effectiveness.
2007
Abstract
We present a novel approach to remote traffic aggregation for Network Intrusion Detection Systems (NIDS) called Cooperative Selective Wormholing (CSW). Our approach works by selectively aggregating traffic bound for unused network ports on a volunteer’s commodity PC. CSW could enable NIDS operators to cheaply and efficiently monitor large distributed portions of the Internet, something they are currently incapable of. Based on a study of several hundred hosts in a university network, we posit that there is sufficient heterogeneity in hosts’ network service configurations to achieve a high degree of network coverage by re-using unused port space on client machines. We demonstrate Vortex, a proof-of-concept CSW implementation that runs on a wide range of commodity PCs (Unix and Windows). Our experiments show that Vortex can selectively aggregate traffic to a virtual machine backend, effectively allowing two machines to share the same IP address transparently. We close with a discussion of the basic requirements for a large-scale CSW deployment.
Abstract
Overlay-based multicast has been proposed as a key alternative for large-scale group communication. There is ample motivation for such an approach, as it delivers the scalability advantages of multicast while avoiding the deployment issues of a network-level solution. As multicast functionality is pushed to autonomous, unpredictable end systems, however, significant performance loss can result from their higher degree of transiency when compared to routers. Consequently, a number of techniques have recently been proposed to improve overlays’ resilience by exploiting path diversity and minimizing node dependencies. Delivering high application performance at relatively low cost and under a high degree of transiency has proven to be a difficult task. Each of the proposed resilient techniques comes with a different trade-off in terms of delivery ratio, end-to-end latency and additional network traffic. In this paper, we review some of these approaches and evaluate their effectiveness by contrasting the performance and associated cost of representative protocols through simulation and wide-area experimentation.
Abstract
We introduce Virtual Ferry Networking (VFN), a novel approach to data dissemination services on mobile ad-hoc networks. VFN exploits the emergent patterns of vehicles’ mobility to buffer and carry messages when immediately forwarding those messages would fail. Instead of depending on a fixed, small set of vehicles and paths for ferrying messages, VFN allows any vehicle moving along part of a virtual route to become a possible carrier for messages. VFN helps address many of the challenges with supporting distributed applications in challenging ad-hoc vehicular networks with rapidly changing topologies, fast-moving vehicles and signal-weakening obstructions such as bridges and buildings. We discuss the challenges with implementing VFN and present evaluation results from an early prototype.
Abstract
Packet forwarding prioritization (PFP) in routers is one of the mechanisms commonly available to network administrators. PFP can have a significant impact on the performance of applications, the accuracy of measurement tools’ results and the effectiveness of network troubleshooting procedures. Despite their potential impact, no information on PFP settings is readily available to end users. In this paper, we present an end-to-end approach for packet forwarding priority inference and its associated tool, POPI. This is the first attempt to infer router packet forwarding priority through end-to-end measurement. Our POPI tool enables users to discover such network policies through the monitoring and rank classification of loss rates for different packet types. We validated our approach via statistical analysis, simulation, and wide-area experimentation in PlanetLab. As part of our wide-area experiments, we employed POPI to analyze 156 random paths across 162 PlanetLab nodes. We discovered 15 paths flagged with multiple priorities, 13 of which were further validated through hop-by-hop loss rate measurements. In addition, we surveyed all related network operators and received responses from about half of them confirming our inferences.
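The rank-classification idea can be illustrated with a much-simplified sketch (POPI itself uses proper statistical tests): send the same number of probes for every packet type, compute per-type loss rates, and split the sorted rates into classes wherever there is a clear gap. The gap threshold below is an arbitrary illustrative value.

def classify_priorities(loss_counts, sent_per_type, gap=0.05):
    """loss_counts: {packet_type: packets_lost}; returns classes from least to most lossy."""
    if not loss_counts:
        return []
    rates = sorted((lost / sent_per_type, ptype) for ptype, lost in loss_counts.items())
    classes, current = [], [rates[0][1]]
    for (prev_rate, _), (rate, ptype) in zip(rates, rates[1:]):
        if rate - prev_rate > gap:       # a clear jump in loss rate starts a new class
            classes.append(current)
            current = []
        current.append(ptype)
    classes.append(current)
    return classes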
Abstract
We address the problem of highly transient populations in unstructured and loosely-structured peer-to-peer systems. We propose a number of illustrative query-related strategies and organizational protocols that, by taking into consideration the expected session times of peers (their lifespans), yield systems with performance characteristics more resilient to the natural instability of their environments. We first demonstrate the benefits of lifespan-based organizational protocols in terms of end-application performance and in the context of dynamic and heterogeneous Internet environments. We do this using a number of currently adopted and proposed query-related strategies, including methods for query distribution, caching and replication. We then show, through trace-driven simulation and wide-area experimentation, the performance advantages of lifespan-based, query-related strategies when layered over currently employed and lifespan-based organizational protocols. While merely illustrative, the evaluated strategies and protocols clearly demonstrate the advantages of considering peers’ session time in designing widely-deployed peer-to-peer systems.
2006
Abstract
Existing peer-to-peer systems rely on overlay network protocols for object storage and retrieval and message routing. These overlay protocols can be broadly classified as structured and unstructured – structured overlays impose constraints on the network topology for efficient object discovery, while unstructured overlays organize nodes in a random graph topology that is arguably more resilient to peer population transiency. There is an ongoing discussion on the pros and cons of both approaches. This paper contributes to the discussion a multiple-site, measurement-based study of two operational and widely-deployed file-sharing systems. The two protocols are evaluated in terms of resilience, message overhead, and query performance. We validate our findings and further extend our conclusions through detailed analysis and simulation experiments.
Abstract
One of the most important challenges of self-organized, overlay systems for large-scale group communication lies in these systems’ ability to handle the high degree of transiency inherent to their environment. While a number of resilient protocols and techniques have been recently proposed, achieving high delivery ratios without sacrificing end-to-end latencies or incurring significant additional costs has proven to be a difficult task. In this paper we review some of these approaches and experimentally evaluate their effectiveness by contrasting their performance and associated cost through simulation and wide-area experimentation.
Abstract
Layering with PBIO makes Echo suitable for applications that demand high-performance communication of large amounts of data. In particular, because PBIO and Echo can directly transport structured types, memory-resident data in a source program can be published, sent to subscribers, and recreated as memory-resident data at the destination with minimal transformation. Base type handling and optimization: in the context of high-performance messaging, Echo event types are most functionally similar to the user-defined types found in the message-passing interface (MPI), a widely used standard in high-performance systems. The main differences are in expressive power and implementation. Like MPI’s user-defined types, Echo event types describe C-style structures made up of atomic data types. Both systems support nested structures and statically sized arrays, but Echo’s type system extends this to support null-terminated strings and dynamically sized arrays. (Dynamic array sizes are given by an integer-typed field in the record. Full information about the types Echo and PBIO support appears elsewhere [2].) (Figure 1: Using event channels for communication, showing an abstract view of event channels and an Echo realization of event channels, which illustrates the decentralized structure of Echo’s realization.)
Abstract
To enhance web browsing experiences, content distribution networks (CDNs) move web content “closer” to clients by caching copies of web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements, and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai’s CDN. We probe Akamai’s network from 140 PlanetLab vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes “recommended” by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial.
2005
Abstract
We explore the feasibility of streaming applications over DHT-based substrates. In particular, we focus our study on the implications of bandwidth heterogeneity and transiency, both characteristic of these systems’ target environment. Our discussion is grounded on an initial evaluation of SplitStream, a representative DHT-based cooperative multicast system.
Abstract
We introduce Nemo, a novel peer-to-peer multicast protocol that achieves high delivery ratio without sacrificing end-to-end latency or incurring additional costs. Based on two simple techniques, (1) co-leaders to minimize dependencies and (2) triggered negative acknowledgments (NACKs) to detect lost packets, Nemo’s design emphasizes conceptual simplicity and minimum dependencies, thus achieving performance characteristics capable of withstanding the natural instability of its target environment. We present an extensive comparative evaluation of our protocol through simulation and wide-area experimentation. We contrast the scalability and performance of Nemo with that of three alternative protocols: Narada, Nice and Nice-PRM. Our results show that Nemo can achieve delivery ratios similar to those of comparable protocols under high failure rates, but at a fraction of their cost in terms of duplicate packets (reductions > 90%) and control-related traffic.
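The triggered-NACK idea can be pictured with a bare-bones receiver sketch (ignoring reordering tolerance, duplicate suppression and co-leader selection, all of which a real protocol needs); send_nack stands in for whatever retransmission-request mechanism the overlay provides.

class NackReceiver:
    def __init__(self, send_nack):
        self.next_expected = 0
        self.send_nack = send_nack        # callback: request retransmission of a sequence number

    def on_packet(self, seq):
        if seq > self.next_expected:      # a gap means packets were lost (or reordered)
            for missing in range(self.next_expected, seq):
                self.send_nack(missing)
        self.next_expected = max(self.next_expected, seq + 1)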
Abstract
Among the proposed overlay multicast protocols, tree-based systems have proven to be highly scalable and efficient in terms of physical link stress and end-to-end latency. Conventional tree-based protocols, however, distribute the forwarding load unevenly among the participating peers. An effective approach for addressing this problem is to stripe the multicast content across a forest of disjoint trees, evenly sharing the forwarding responsibility among participants. DHTs seem to be naturally well suited for the task, as they are able to leverage the inherent properties of their routing model in building such a forest. In heterogeneous environments, though, DHT-based schemes for tree (and forest) construction may yield deep, unbalanced structures with potentially large delivery latencies. This paper introduces Magellan, a new overlay multicast protocol we have built to explore the tradeoff between fairness and performance in these environments. Magellan builds a data-distribution forest out of multiple performance-centric, balanced trees. It assigns every peer in the system a primary tree with priority over the peer’s resources. The peers’ spare resources are then made available to secondary trees. In this manner, Magellan achieves fairness, ensuring that every participating peer contributes resources to the system. By employing a balanced distribution tree with O(lg N)-bounded, end-to-end hop-distance, Magellan also provides high delivery ratio with comparably low latency. Preliminary simulation results show the advantage of this approach.
Abstract
High-bandwidth multisource multicast among widely distributed nodes is critical for a wide range of important applications including audio and video conferencing, multi-party games and content distribution. The limited deployment of IP Multicast has led to considerable interest in alternate approaches implemented at the application layer that rely only on end systems. In an end-system multicast approach, participating peers organize themselves into an overlay topology for data delivery. Among the proposed end system multicast protocols, tree-based systems have proven to be highly scalable and efficient in terms of physical link stress, state and control overhead, and end-to-end latency. However, normal tree structures have inherent problems in terms of resilience and bandwidth capacity. In this work we address the bandwidth constraints of conventional trees by importing Leiserson’s fat-trees from parallel computing into overlay networks. Paraphrasing Leiserson, a fat-tree is similar to a real tree in that its branches become thicker as one moves away from the leaves. By increasing the number of links closer to the root, a fat-tree can overcome the “root bottleneck” likely to be found by multisource multicast applications relying on conventional trees. The adoption of a fat-tree approach for overlay multicast (i) lowers the forwarding responsibility of the participating nodes, thus increasing system scalability to match the demands of high-bandwidth, multisource multicast applications; (ii) reduces the height of the forwarding tree, hence significantly shortening delivery latencies; and (iii) improves the system’s robustness to node transiency by increasing path diversity in the overlay. We introduce the design, implementation and performance evaluation of FatNemo, a new application-layer multicast protocol that builds on this idea, and report on a detailed comparative evaluation.
Abstract
We address the problem of highly transient populations in unstructured and loosely-structured peer-to-peer systems. We propose a number of illustrative query-related strategies and organizational protocols that, by taking into consideration the expected session times of peers (their lifespans), yield systems with performance characteristics more resilient to the natural instability of their environments. We first demonstrate the benefits of lifespan-based organizational protocols in terms of end-application performance and in the context of dynamic and heterogeneous Internet environments. We do this using a number of currently adopted and proposed query-related strategies, including methods for query distribution, caching and replication. We then show, through trace-driven simulation and wide-area experimentation, the performance advantages of lifespan-based, query-related strategies when layered over currently employed and lifespan-based organizational protocols. While merely illustrative, the evaluated strategies and protocols clearly demonstrate the advantages of considering peers' session time in designing widely-deployed peer-to-peer systems.
1. Introduction
Due in part to the autonomous nature of peers, their architectural mutual dependency, and their excessively large populations, the transiency of peer populations (a.k.a. churn) and its implications on P2P systems have recently attracted the attention of the research community [3, 19, 7, 27, 16]. Measurement studies of deployed P2P systems have reported median session times (where a node's session time is the time from the node's joining to its subsequent leaving from the system) varying from one hour to one minute [29, 6, 27]. The implications of such a high degree of transiency on the overall system's performance would clearly depend on the level of nodes' investment in their neighboring peers. At the very least, the amount of maintenance-related messages processed by
Abstract
Ashish Gupta, Peter Dinda, Fabian Bustamante
{ashish,pdinda,fabianb}@cs.northwestern.edu
Department of Computer Science, Northwestern University
Gupta is a Ph.D. student. Dinda and Bustamante are faculty.
1. INTRODUCTION
Distributed hash tables (DHTs) are a distributed, peer-to-peer analogue of hash indices in database systems. Given a key, a DHT returns a pointer to the associated object. DHTs have also been extended to support “keyword” queries [5, 2] (objects identified by multiple keys). Fundamentally, however, these approaches all return an undirected sample of the full result set. Unfortunately, most applications are interested in the most popular members of the result set. In other words, if all the objects in the result set were to be ranked in descending order of the number of accesses to the object in a given time interval, the application's interest decreases the further down the ranked list it goes. We are developing distributed popularity indices (DPIs). Suppose a DHT supports two query primitives. The first simply finds an object given a key, Lookup: k → d, while the second provides keyword queries, Query: {w1, w2, …} → {k1, k2, …}, where the wi are keywords and the ki are the keys of the objects that have all of those keywords associated with them. A DPI supports queries of the form LookupPop: k → (d, p), where p is the popularity of the object associated with key k, and the conjunctive query QueryPop: ({w1, w2, …}, n) → {k1, k2, …, kn}, which is similar to Query except that the keys of the n most popular objects are returned. As with a DHT, a DPI also must support Insert, Update, and Delete primitives. There is an additional primitive Visit: (k, w1, w2, …, v), which indicates that the object associated with k (and its associated keywords wi) has been visited v times. While the DPI is a distributed structure that we either generate on the fly in response to queries or maintain persistently within a DHT, copies of at least portions of it can be cached locally on the client.
Applications
Beyond the obvious application in peer-to-peer file sharing communities, DPIs have many uses. Consider replication. Suppose that the system replicates popular objects up to z times by using z different hash functions. A client would merely issue a LookupPop to determine how many of the hash functions can be used for a particular key, providing a simple decoupling. Consider web search. Arguably, link structures, as used in PageRank, and aggregated bookmarks, as in social bookmarking, are proxies for the extent to which a page has been visited. Web clients could push Visits into the DPI as pages were visited. A QueryPop would then be able to extract the top n most visited pages associated with a set of keywords. If the DPI automatically reduced the accumulated popularity of objects over time by an exponential response with some time constant, we would know what pages were significant in the context of some set of keywords for a window of time ending in the present: a zeitgeist query.
2. EXPLOITING REVERSIBLE SKETCHES
A k-ary sketch [3] is a variant of a Bloom filter [4] that captures popularity in a highly condensed form. A key is inserted into a Bloom filter by updating m hash tables of size s/m, each fronted with a different uniform hash function. Each bucket contains a bit, and the update ORs a one into each bucket determined by the key. In a sketch, the buckets contain integers and the update involves incrementing the buckets determined by the key. A Bloom filter provides a constant-size representation of a potentially very large set of keys, with the caveat that it is imperfect—spurious keys may also be included. Similarly, a sketch is a constant-size, but imperfect, representation of an ideal popularity index that would map from key to number of visits. A key may appear more popular than it really is, but this noise affects unpopular keys much more than popular ones. A sketch, when queried with a key, provides an estimate of the number of updates (visits) to that key whose accuracy increases with the number of visits. Surprisingly, given that sketches are based on hashes, there exists a reverse-hashing formulation that is reversible for popular keys, meaning that it is possible to determine the most visited keys from the sketch, as well as the number of updates to them, without knowing the keys in advance [6]. Our Visit maps to a sketch update, while the popularity p from LookupPop is determined by using a sketch S_all in the forward direction. S_all is computed over all keys. QueryPop is implemented using the per-keyword sketches S_w1, S_w2, … in the reverse direction and aggregating the results. We can generate DPIs for use in individual queries, and DPIs that persist. We refer to these approaches as query-driven and update-driven indices.
3. QUERY-DRIVEN INDICES
Query-driven indices are computed and aggregated on the fly in response to LookupPops and QueryPops. For LookupPop, the client simply finds the node associated with k and asks it for the popularity of k, which the node stores as a simple counter associated with the object. Visit simply increments this counter. For QueryPop, we use a DHT that provides keyword search, such as Magnolia [2, 1] or others [5]. We use Query to determine the set of keys of interest, and then contact each node associated with these keys. Each node constructs a sketch that reflects the counters associated with the matching keys it holds. We then sum all of these sketches and deliver them to the client. Notice that sketch summation is associative (indeed commutative), and so we can use a reduction tree to do this in log time. Recall also that the sketches are of fixed size. The client receives the summed sketch, which it reverses to determine the n most popular keys. The final and intermediate sketches in the reduction tree could be cached in the DHT, associated with the keywords used, a special “sketch” keyword, and a timeout. In the common case, even the reduction tree could be avoided for most queries.
4. UPDATE-DRIVEN INDICES
Update-driven indices have long-term persistence and make the workload involved with the popularity query independent of the workload for fetching objects. The key idea is that we distribute each of S_all, S_w1, S_w2, … using the DHT. We place the bucket (i, j) of sketch S_* into the DHT using the key “sketch *ij”. Visit is now more complex as it ultimately needs to update a bucket (actually m buckets in parallel) that could be anywhere in the DHT.
However, notice that Visit, which simply increments the bucket, is an associative and commutative operation. Thus, when a node sees two Visits being routed through it, it can simply sum their v arguments and emit a single Visit. Furthermore, if we permit the propagation of Visits to be delayed (by giving each a deadline for when it must reach its bucket), the number of Visits that we aggregate as we route them increases quickly. LookupPop is straightforward—the client merely needs to query for the s buckets of S_all and then estimate the popularity from the reconstructed sketch. Of course, that sketch could also be cached locally and/or in the DHT. QueryPop is a complex operation in an update-driven index. The initial step is to reconstruct the sketches {S_w : w ∈ query keywords}. Unfortunately, it is not the case that if we sum these sketches we have the sketch that would have arisen if we had been managing a sketch for the query's conjunction of keywords. The most popular object for (w1, w2) may be very unpopular for w1 and w2 individually. It remains to be seen whether this loss of information is significant in practice for common workloads. We expect that a better way to answer the QueryPop query is to compute correlations of tracks (sequences of hash buckets) through pairs of sketches. The most highly correlated tracks (highest covariance) are most likely associated with the keys that are most popular for the combination of keywords. However, naively, m × s/m sketches would result in (s/m)^m tracks to be considered. We are working on dynamic programming approaches to this problem.
Figure 1: Accuracy of k-ary sketches for Zipf-distributed objects.
5. EXPERIMENTS
The effectiveness of our distributed popularity indices depends on the degree to which we can tolerate low-accuracy, low-precision answers for unpopular objects. Popularity must be strongly skewed so that the top n objects in terms of Visits correspond to a large proportion of the visits even if n is small. The popularity of documents and keywords tends to follow Zipf's rule, which is a rank power law, meaning that the proportion of the total visits in the system captured by the i-th ranked document is ∝ 1/i^α. Given such a distribution over keywords, how well do the top n documents recovered using reversible sketches capture the actual top n documents? Figure 1 shows the results of a simple experiment to address this question. Here, we inserted over 20 million keys with their popularity assigned according to the Zipf distribution with α = 1.0. The most popular object was visited ~10,000 times. Reverse-hashing algorithms were used to recover the top 20 keys from the sketch and then compared to ground truth. The figure shows the reported shift in rank of the top 20 keys. Clearly, the shifts are very small except for the 4th ranked item. The three keys that were not really in the top 20 were in the top 30. The upshot of Figure 1 is that given keywords with a Zipf popularity rank distribution, a small-sized sketch is quite capable of recovering the top n most popular keys with little error. This bodes well for the distributed popularity indices that we have described here.
6. REFERENCES
[1] Gupta, A., Sanghi, M., Dinda, P., and Bustamante, F. Magnolia: A novel DHT architecture for keyword-based searching. Technical Report, Northwestern University (May 2005).
[2] Gupta, A., Sanghi, M., Dinda, P., and Bustamante, F. Magnolia: A novel DHT architecture for keyword-based searching (poster). NSDI 2005 (May 2005).
[3] Krishnamurthy, B., Sen, S., Zhang, Y., and Chen, Y. Sketch-based change detection: Methods, evaluation, and applications. In Proc. of ACM SIGCOMM IMC (2003).
[4] Mitzenmacher, M. Compressed Bloom filters. IEEE/ACM Transactions on Networking 10, 5 (Oct. 2002), 604–612.
[5] Reynolds, P., and Vahdat, A. Efficient peer-to-peer keyword searching. In Middleware (2003), pp. 21–40.
[6] Schweller, R., Gupta, A., Parsons, E., and Chen, Y. Reverse hashing for sketch-based one-pass change detection for high-speed networks. In ACM SIGCOMM IMC (2004).
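As an illustration of the sketch mechanics described above, the following is a minimal, self-contained popularity estimator in the spirit of a k-ary sketch: m hash tables of size s/m, incremented on each Visit and queried in the forward direction for LookupPop. The SHA-1-based hashing and the min-bucket estimator are simplifying assumptions for illustration; the reverse-hashing recovery of the most popular keys [6] is not shown.

    # Minimal sketch of a k-ary-style popularity index: m tables of size s/m,
    # incremented on each Visit; popularity is estimated from the buckets a key
    # hashes to. Hash seeding and the min estimator are illustrative assumptions.
    import hashlib

    class KArySketch:
        """m hash tables of size s/m; visit() increments, estimate() reads forward."""

        def __init__(self, m=4, s=4096):
            self.m = m
            self.width = s // m
            self.tables = [[0] * self.width for _ in range(m)]

        def _buckets(self, key):
            for i in range(self.m):
                h = hashlib.sha1(f"{i}:{key}".encode()).hexdigest()
                yield i, int(h, 16) % self.width

        def visit(self, key, v=1):
            # corresponds to the Visit primitive: increment the key's buckets
            for i, b in self._buckets(key):
                self.tables[i][b] += v

        def estimate(self, key):
            # forward-direction estimate used by LookupPop (min over buckets)
            return min(self.tables[i][b] for i, b in self._buckets(key))

    if __name__ == "__main__":
        sk = KArySketch()
        for _ in range(100):
            sk.visit("popular-doc")
        sk.visit("rare-doc")
        print(sk.estimate("popular-doc"), sk.estimate("rare-doc"))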
Abstract
Ad-hoc wireless communication among highly dynamic, mobile nodes in an urban network is a critical capability for a wide range of important applications including automated vehicles, real-time traffic monitoring, and battleground communication. When evaluating application performance through simulation, a realistic mobility model for vehicular ad-hoc networks (VANETs) is critical for accurate results. This technical report discusses the implementation of STRAW, a new mobility model for VANETs in which nodes move according to a realistic vehicular traffic model on roads defined by real street map data. The challenge is to create a traffic model that accounts for individual vehicle motion without incurring significant overhead relative to the cost of performing the wireless network simulation. We identify essential and optional techniques for modeling vehicular motion that can be integrated into any wireless network simulator. We then detail choices we made in implementing STRAW.
Abstract
DualPats exploits the strong correlation between TCP throughput and flow size, and the statistical stability of Internet path characteristics, to accurately predict the TCP throughput of large transfers using active probing. We propose additional mechanisms to explain the correlation, and then analyze why traditional TCP benchmarking fails to predict the throughput of large transfers well. We characterize stability and develop a dynamic sampling rate adjustment algorithm so that we probe a path based on its stability. Our analysis, design, and evaluation are based on a large-scale measurement study.
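As a rough illustration of how the throughput/flow-size correlation can be exploited, the sketch below extrapolates the throughput of a large transfer from two active probes of different sizes using a log-log linear fit. The probe sizes and the fitting procedure are illustrative assumptions, not DualPats' actual estimator.

    # Minimal sketch of predicting large-transfer TCP throughput from two active
    # probes of different sizes, exploiting the throughput/flow-size correlation.
    # The log-log linear extrapolation is an illustrative assumption only.
    import math

    def predict_throughput(probes, target_size_bytes):
        """probes: list of (transfer_size_bytes, measured_throughput_bps)."""
        xs = [math.log(s) for s, _ in probes]
        ys = [math.log(t) for _, t in probes]
        n = len(probes)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        den = sum((x - mean_x) ** 2 for x in xs)
        slope = num / den
        intercept = mean_y - slope * mean_x
        return math.exp(intercept + slope * math.log(target_size_bytes))

    if __name__ == "__main__":
        # hypothetical probe results for 256 KB and 1 MB transfers (throughput in bps)
        probes = [(256 * 1024, 2.0e6), (1024 * 1024, 5.0e6)]
        print(predict_throughput(probes, 100 * 1024 * 1024))   # forecast for a 100 MB transfer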
Abstract
Ad-hoc wireless communication among highly dynamic, mobile nodes in an urban network is a critical capability for a wide range of important applications including automated vehicles, real-time traffic monitoring and vehicular safety applications. When evaluating application performance in simulation, a realistic mobility model for vehicular ad-hoc networks (VANETs) is critical for accurate results. This paper analyzes ad-hoc wireless network performance in a vehicular network in which nodes move according to a simplified vehicular traffic model on roads defined by real map data. We show that when nodes move according to our street mobility model, STRAW, network performance is significantly different from that of the commonly used random waypoint model. We also demonstrate that protocol performance varies with the type of urban environment. Finally, we use these results to argue for the development of integrated vehicular and network traffic simulators to evaluate vehicular ad-hoc network applications, particularly when the information passed through the network affects node mobility.
1. INTRODUCTION
The community is increasingly interested in developing network protocols and services for vehicular ad-hoc networks (VANETs). Due in part to the prohibitive cost of deploying and implementing such systems in the real world, most research in this area relies on simulation for evaluation. A key component of these simulations is a realistic vehicular mobility model that ensures conclusions drawn from such experiments will carry through to real deployments. Unlike many other mobile ad-hoc environments where node movement occurs in an open field (such as conference rooms and cafés), vehicular nodes are constrained to streets often separated by buildings, trees or other objects. Street layouts and different obstructions increase the average distance between nodes and, in most cases, reduce the overall signal strength received at each node. We argue that a
Abstract
Parallel TCP flows are broadly used in the high performance distributed computing community to enhance network throughput, particularly for large data transfers. Previous research has studied the mechanism by which parallel TCP improves aggregate throughput, but no practical mechanism exists to predict its throughput. In this work, we address how to predict parallel TCP throughput as a function of the number of flows, as well as how to predict the corresponding impact on cross traffic. To the best of our knowledge, we are the first to answer the following question on behalf of a user: what number of parallel flows will give the highest throughput with less than a p% impact on cross traffic? We term this the maximum nondisruptive throughput. We begin by studying the behavior of parallel TCP in simulation to help derive a model for predicting parallel TCP throughput and its impact on cross traffic. Combining this model with some previous findings, we derive a simple, yet effective, online advisor. We evaluate our advisor through simulation-based and wide-area experimentation.
I. INTRODUCTION
Data intensive computing applications require efficient management and transfer of terabytes of data over wide area networks. For example, the Large Hadron Collider (LHC) at the European physics center CERN is predicted to generate several petabytes of raw and derived data per year for approximately 15 years starting from 2005 [6]. Data grids aim to provide the essential infrastructure and services for these applications, and a reliable, high-speed data transfer service is a fundamental and critical component. Recent research has demonstrated that the actual TCP throughput achieved by applications is, persistently, significantly smaller than the physical bandwidth “available” according to the end-to-end structural and load characteristics of the network [39], [26]. Here, we define TCP throughput as the ratio of effective data over its transfer time, also called goodput [35]. Parallel TCP flows have been widely used to increase throughput. For example, GridFTP [5], part o
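The sketch below illustrates the "maximum nondisruptive throughput" question posed above: among flow counts whose predicted impact on cross traffic stays below p%, pick the one with the highest predicted aggregate throughput. The toy prediction functions are placeholders, not the paper's model or online advisor.

    # Minimal sketch of choosing the maximum nondisruptive number of parallel flows:
    # among flow counts whose predicted impact on cross traffic stays below p%,
    # pick the one with the highest predicted aggregate throughput. The prediction
    # functions passed in are hypothetical placeholders, not the paper's model.

    def choose_flows(predict_throughput, predict_impact_pct, p, max_flows=16):
        best_n, best_tp = None, -1.0
        for n in range(1, max_flows + 1):
            if predict_impact_pct(n) >= p:
                continue                      # would disturb cross traffic too much
            tp = predict_throughput(n)
            if tp > best_tp:
                best_n, best_tp = n, tp
        return best_n, best_tp

    if __name__ == "__main__":
        # toy models: diminishing throughput returns, growing impact on cross traffic
        tp_model = lambda n: 10.0 * n / (1 + 0.2 * n)      # Mbps, illustrative
        impact_model = lambda n: 1.5 * (n - 1)             # percent, illustrative
        print(choose_flows(tp_model, impact_model, p=10.0))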
Abstract
Magnolia: A novel DHT Architecture for Keyword-based Searching
Ashish Gupta, Manan Sanghi, Peter Dinda, Fabian Bustamante
Department of Computer Science, Northwestern University
Email: {ashish,manan,pdinda,fabianb}@cs.northwestern.edu
The class of DHT-based P2P systems like Chord, Pastry, Tapestry and Kademlia greatly improve over unstructured P2P systems like Gnutella and Kazaa by providing (1) scalable and efficient O(log(n)) lookup and routing for any document and (2) good load balancing properties for very large numbers of keys or documents. However, to look up a document, its complete initial identifier must be known to compute its unique hashed key and route to the correct node, which is a major disadvantage compared to unstructured systems. Our goal in this ongoing project is to create a DHT-based P2P architecture that supports efficient partial keyword searches in a scalable manner. Some recent proposals for keyword search [2], [4], [1], [5] have suggested storing all document pointers for a keyword on a node corresponding to keyID = h(keyword). For example, all files which have "usenix" in their title are stored on a single node corresponding to h("usenix"). Multiple keyword search can then be made possible by computing the hashes for each keyword and visiting the corresponding nodes to fetch all results (which can be processed in the network for boolean operations before returning). Though correct, we argue that this approach does not align well with the goals of a DHT system for very large scale and transient networks. A high degree of keyword heterogeneity in occurrence frequency as well as query frequency further aggravates the problem (these have been shown to follow a Zipf distribution): (1) Millions of documents corresponding to a common keyword can end up on a single node; overall, the distribution of these document pointers can be heavily skewed over the nodes. (2) When a node disappears, all document pointers corresponding to keyword(s) stored on this node are removed from the network, hampering future searches. This is especially problematic if the nodes storing pointers for popular keywords fail. (3) Nodes can be swamped with search traffic for these popular keywords, creating routing hotspots (resulting from routing a large number of messages to a single destination) as well as query hotspots. We have designed a simple DHT architecture, Magnolia, which is not affected by the aforementioned problems while simultaneously providing log n hops for routing and lookup and a low, bounded number of nodes visited and traffic generated. Our model scenario is a large scale P2P file sharing system with over 1 million nodes which shows high transiency and is responsible for storing over 1 billion documents. Our architecture proposes novel node grouping and key distribution methods using a multi-hashing scheme and makes use of hash function properties to effectively distribute pointers corresponding to every keyword to a tunable number of nodes. Using multi-hashing, each keyword is balanced across a set of nodes in the system with little overlap between the sets of nodes for different keywords, which achieves both good load-balance in terms of traffic and key storage as well as making search highly robust to failures. We want to form these groups such that popular keywords have a low probability of being assigned to the same group. We also propose a modified DHT routing architecture which can then store documents and look up keyword queries in log(n) hops, though the keyword pointers are mapped to multiple nodes.
The amount of traffic generated and the number of nodes visited are also low and bounded. Figure 1 shows the technique of multi-hashing. We have k hash functions h1(), …, hk(), where hi() maps a keyword to an m′-bit key (m′ < m, the total number of bits used in the nodeID or documentID). For each keyword corresponding to every document instance (which we assume currently are derived from its title or meta information like ID tags in mp3 files), we compute an m′-bit key using hx(keyword), where x is a uniformly distributed random variable over the set 1, …, k.
Fig. 1. The multi-hashing process, which maps each instance of a keyword to one of the k possible KeywordGroupIDs for that keyword.
The intuition behind doing this is that the m′-bit key corresponds to the first m′ bits of the m-bit nodeID, which is uniformly distributed over all the nodes. If m′ = 16 and there are one million nodes, on average 2^20 / 2^16 = 16 nodes would have the same value for a particular m′-bit value, also called a KeywordGroupID in our system. Since each instance of the keyword can map to any of the hi() hash functions, it can map to any of the k KeywordGroupIDs and be stored on any of the nodes belonging to these groups. The motivation behind this technique for distributing keyword instances is that, using a novel DHT routing architecture, the group of nodes belonging to a KeywordGroupID can be reached and searched with a low and bounded number of hops, providing the same log(n) number of hops as the original proposals which route to a single node.
Current Status
We have currently worked out the design and details of the multi-hashing process and the corresponding DHT routing architecture, which provides low and bounded response time for storage and lookup. Our Technical Report [3] gives more detail along with an analytical treatment of important performance and scalability metrics: load-balancing of keys (aggregate and per-keyword), routing and lookup performance, traffic generated, number of nodes visited, and the routing state kept at each node. Our next step is to conduct a detailed evaluation of the system to measure important metrics using real-world keyword and query distributions to provide a better understanding of the advantages of Magnolia.
REFERENCES
[1] Bauer, D., Hurley, P., Pletka, R., and Waldvogel, M. Bringing efficient advanced queries to distributed hash tables. In Proceedings of IEEE LCN (Nov. 2004).
[2] Garces-Erice, L., Felber, P., Biersack, E. W., Urvoy-Keller, G., and Ross, K. W. Data indexing in peer-to-peer DHT networks. In 24th International Conference on Distributed Computing Systems (ICDCS 2004) (Tokyo, Japan, Mar. 2004), IEEE Computer Society, pp. 200–208.
[3] Gupta, A., Sanghi, M., Dinda, P., and Bustamante, F. Magnolia: A novel DHT architecture for keyword-based searching. Technical Report, Northwestern University (March 2005).
[4] Harren, M., Hellerstein, J. M., Huebsch, R., Loo, B. T., Shenker, S., and Stoica, I. Complex queries in DHT-based peer-to-peer networks. Lecture Notes in Computer Science 2429 (2002), 242–??.
[5] Reynolds, P., and Vahdat, A. Efficient peer-to-peer keyword searching. In Middleware (2003), pp. 21–40.
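The following sketch illustrates the multi-hashing step described above: each instance of a keyword is mapped, through one of k hash functions chosen uniformly at random, to an m′-bit KeywordGroupID, and a query must visit the (at most) k groups a keyword can hash to. The SHA-1-based hash family and the constants are illustrative assumptions, not Magnolia's actual design.

    # Minimal sketch of Magnolia-style multi-hashing: each instance of a keyword is
    # mapped, via one of k hash functions chosen uniformly at random, to an m'-bit
    # KeywordGroupID (a nodeID prefix). SHA-1 with per-function seeds is an
    # illustrative assumption, not the hash family actually used by Magnolia.
    import hashlib
    import random

    K = 4          # number of hash functions
    M_PRIME = 16   # bits in a KeywordGroupID

    def group_id(keyword, i):
        """i-th hash function: keyword -> m'-bit KeywordGroupID."""
        digest = hashlib.sha1(f"{i}:{keyword}".encode()).digest()
        return int.from_bytes(digest[:4], "big") >> (32 - M_PRIME)

    def place_instance(keyword):
        """Pick one of the k possible groups uniformly at random for this instance."""
        x = random.randrange(K)
        return group_id(keyword, x)

    def groups_to_search(keyword):
        """A query must visit all k groups that could hold pointers for the keyword."""
        return sorted({group_id(keyword, i) for i in range(K)})

    if __name__ == "__main__":
        print(place_instance("usenix"))
        print(groups_to_search("usenix"))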
2004
Abstract
In a typical file system, only the current version of a file (or directory) is available. In Wayback, a user can also access any previous version, all the way back to the file’s creation time. Versioning is done automatically at the write level: each write to the file creates a new version. Wayback implements versioning using an undo log structure, exploiting the massive space available on modern disks to provide its very useful functionality. Wayback is a user-level file system built on the FUSE framework that relies on an underlying file system for access to the disk. In addition to simplifying Wayback, this also allows it to extend any existing file system with versioning: after being mounted, the file system can be mounted a second time with versioning. We describe the implementation of Wayback, and evaluate its performance using several benchmarks.
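A minimal sketch of write-level versioning with an undo log follows: before each write, the bytes about to be overwritten are appended to a log, so any earlier version can be reconstructed by undoing later writes. The in-memory representation and record format are illustrative assumptions, not Wayback's on-disk layout or FUSE integration.

    # Minimal sketch of write-level versioning with an undo log: each write first
    # records what it overwrites, so older versions can be rebuilt by undoing later
    # writes. The in-memory file and record format are illustrative assumptions.
    import time

    class VersionedFile:
        def __init__(self):
            self.data = bytearray()
            self.undo_log = []  # (timestamp, offset, overwritten bytes, write length, old file length)

        def write(self, offset, new_bytes):
            n = len(new_bytes)
            old = bytes(self.data[offset:offset + n])
            self.undo_log.append((time.time(), offset, old, n, len(self.data)))
            end = offset + n
            if end > len(self.data):                      # extend the file if needed
                self.data.extend(b"\x00" * (end - len(self.data)))
            self.data[offset:end] = new_bytes

        def version_at(self, timestamp):
            """Reconstruct the contents as of `timestamp` by undoing later writes."""
            snapshot = bytearray(self.data)
            for ts, offset, old, n, old_len in reversed(self.undo_log):
                if ts <= timestamp:
                    break
                snapshot[offset:offset + n] = old          # put back overwritten bytes
                del snapshot[old_len:]                     # drop any extension the write made
            return bytes(snapshot)

    if __name__ == "__main__":
        f = VersionedFile()
        f.write(0, b"hello world")
        t = time.time()
        time.sleep(0.01)                                   # ensure distinct timestamps
        f.write(6, b"there")
        print(f.version_at(time.time()), f.version_at(t))  # current vs. older version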
Abstract
Resilient Peer-to-Peer Multicast from the Ground Up
Stefan Birrer and Fabián E. Bustamante
Department of Computer Science, Northwestern University, Evanston, IL 60201, USA
{sbirrer,fabianb}@cs.northwestern.edu
1. Introduction
Multicast is an efficient mechanism to support group communication. It decouples the size of the receiver set from the amount of state kept at any single node and potentially avoids redundant communication in the network, promising to make possible large-scale multi-party applications such as audio and video conferencing, research collaboration and content distribution. A number of research projects have recently proposed an end-system approach to multicast [11, 3, 24, 10, 21, 18, 28], partially in response to the deployment issues of IP Multicast [12, 13]. In this middleware [1] or application-layer approach, peers are organized as an overlay topology for data delivery, with each connection in the overlay mapped to a unicast path between two peers in the underlying Internet. All multicast-related functionality is implemented at the peers instead of at routers, and the goal of the multicast protocol is to construct and maintain an efficient overlay for data transmission. One of the most important challenges of peer-to-peer multicast protocols is the ability to efficiently deal with the high degree of transiency inherent to their environment [5]. As multicast functionality is pushed to autonomous, unpredictable peers, significant performance losses can result from group membership changes and the higher failure rates of end-hosts when compared to routers. Measurement studies of widely used peer-to-peer (P2P) systems have reported median session times (the time between when a peer joins and leaves the network) ranging from an hour to a minute [8, 15, 22]. Achieving high delivery ratios without sacrificing end-to-end latencies or incurring additional costs has proven to be a challenging task. This paper introduces Nemo, a novel peer-to-peer multicast protocol that aims at achieving this elusive goal. Based on two techniques, (1) co-leaders and (2) triggered negative acknowledgments (NACKs), Nemo's design emphasizes conceptual simplicity and minimum dependencies [2], thus achieving, in a cost-effective manner, performance characteristics resilient to the natural instability of its target environment. Simulation-based and wide-area experimentation shows that Nemo can achieve high delivery ratios (up to 99.98%) and low end-to-end latency similar to those of comparable protocols, while significantly reducing the cost in terms of duplicate packets (reductions > 85%) and control-related traffic, making the proposed algorithm a more scalable solution to the problem. The remainder of this paper describes our approach in more detail (Section 2) and presents early experimental results in Section 3. We briefly discuss related work in Section 4 and conclude in Section 5.
2. Nemo's Approach
Nemo follows the implicit approach [3, 10, 21, 28] to building an overlay for multicasting: participating peers are organized in a control topology and the data delivery network is implicitly defined based on a set of forwarding rules. We here provide a summarized description of Nemo; for complete details, we direct the reader to the associated technical report [6]. The set of communicating peers is organized into clusters based on network proximity (other factors such as bandwidth [25, 11] and expected peer lifetime [8] could be easily incorporated), where every peer is a member of a cluster at the lowest layer. Clusters vary in size between d and 3d−1, where d is a constant known as the degree.
Each of these clusters selects a leader (the peer in the center of the cluster, in terms of end-to-end latency) that becomes a member of the immediately superior layer. In part to avoid the dependency on a single node, every cluster leader recruits a number of co-leaders to form its crew. The process is repeated, with all peers in a layer being grouped into clusters, crew members selected, and leaders promoted to participate in the next higher layer. Hence peers can lead more than one cluster in successive layers of this logical hierarchy. (This is common to both Nemo and Nice [3] as well as Zigzag [24]; the degree bounds have been chosen to help reduce oscillation in clusters.) Co-leaders improve the resilience of the multicast group by avoiding dependencies on single nodes and providing alternative paths for data forwarding. In addition, crew members share the load from message forwarding, thus improving scalability. Figure 1 illustrates the logical organization of Nemo.
Figure 1: Nemo's logical organization. The shape illustrates only the role of a peer within a cluster: a leader of a cluster at a given layer can act as leader, co-leader, or an ordinary member at the next higher layer.
A new peer joins the multicast group by querying a well-known special end-system, the rendezvous point, for the IDs of the members on the top layer. Starting there and in an iterative manner, the incoming peer continues: (i) requesting the list of members at the current layer from the cluster's leader, (ii) selecting from among them whom to contact next based on the result from a given cost function, and (iii) moving into the next layer. When the new peer finds the leader with minimal cost at the bottom layer, it joins the associated cluster. Nemo's data delivery topology is implicitly defined by the set of packet-forwarding rules adopted. A peer sends a message to one of the leaders for its layer. Leaders (the leader and its co-leaders) forward any received message to all other peers in their clusters and up to the next higher layer. A node in charge of forwarding a packet to a given cluster can choose any of the crew members in the cluster's leader group as destination. Figure 2 illustrates the data forwarding algorithm using the logical topology from Figure 1. Each row corresponds to one time step. At time t0 a publisher forwards the packet to its cluster leader, which in turn sends it to all cluster members and the leader of the next higher layer (t1). At time t2, this leader forwards the packet to all its cluster members, i.e., the members of its lowest layer and the members of the second lowest layer. In the last step, the leader of the cluster on the left forwards the packet to its members. While we have employed leaders for this example, Nemo uses co-leaders in a similar manner for forwarding.
Figure 2: Basic data forwarding in Nemo. One time step per row.
To illustrate Nemo's resilience to peer failures, Figure 3 shows an example of the forwarding algorithm in action. The forwarding responsibility is evenly shared among the leaders by alternating the message recipient among them. In case of a failed crew member, the remaining leaders can still forward their share of messages through the tree. Like other protocols aiming at high resilience [20, 4], Nemo relies on sequence numbers and triggered NACKs to detect lost packets. Every peer piggybacks a bit-mask with each data packet indicating the previously received packets.
In addition, each peer maintains a cache of received packets and a list of missing ones. Once a gap (relative to a peer's upstream neighbors) is detected in the packet flow, the absent packets are considered missing after a given time period.
3. Evaluation
We analyze the performance of Nemo using detailed simulation and wide-area experimentation. We compare Nemo's performance to that of three other protocols – Narada [11], Nice [3] and Nice-PRM [4] – both in terms of application performance and protocol overhead. Application performance is captured by delivery ratio and end-to-end latency, while overhead is evaluated in terms of the number of duplicate packets. For each of the three alternative protocols, the values for the available parameters were obtained from the corresponding literature [11, 3, 4]. We used two different failure rates. The high failure rate employed a mean time to failure (MTTF) of 5 minutes and a mean time to repair (MTTR) of 2 minutes. The low failure rate used an MTTF of 60 minutes and an MTTR of 10 minutes. For details on the protocol implementations and on the experimental setup, we direct the reader to the associated technical report [6]. All experiments were run with a payload of 100 bytes. We opted for this relatively small packet size to avoid saturation effects in PlanetLab. For simulations, we assume infinite bandwidth per link and only model link delay, thus packet size is secondary. We employ a buffer size of 32 packets and a rate of 10 packets per second. This corresponds to a 3.2-second buffer, which is a realistic scenario for applications such as multimedia streaming.
3.1. Simulation Results
For all simulation results, each data point is the mean of 25 independent runs.
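To illustrate the sequence-number-and-bitmask loss detection sketched above, the snippet below tracks packets that an upstream peer claims to have received and triggers NACKs for those still absent after a timeout. The data structures and timeout value are illustrative assumptions, not Nemo's wire format or implementation.

    # Minimal sketch of triggered-NACK loss detection: each incoming data packet
    # carries a sequence number plus a bitmask of the sender's previously received
    # packets; gaps that persist past a timeout trigger NACKs. Field names and the
    # timeout value are illustrative assumptions only.
    import time

    class LossDetector:
        def __init__(self, nack_timeout=0.5):
            self.received = set()
            self.first_seen = {}          # seq -> time we first learned it exists
            self.nack_timeout = nack_timeout

        def on_packet(self, seq, peer_bitmask):
            """peer_bitmask: set of sequence numbers the upstream peer has received."""
            self.received.add(seq)
            now = time.time()
            for s in peer_bitmask | {seq}:
                if s not in self.received:
                    self.first_seen.setdefault(s, now)   # known to exist, not yet here

        def pending_nacks(self):
            """Sequence numbers still absent after the timeout: send NACKs for these."""
            now = time.time()
            return sorted(s for s, t in self.first_seen.items()
                          if s not in self.received and now - t >= self.nack_timeout)

    if __name__ == "__main__":
        d = LossDetector(nack_timeout=0.0)
        d.on_packet(3, {1, 2})           # packets 1 and 2 exist upstream but are missing here
        print(d.pending_nacks())         # [1, 2]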
Abstract
Peer-to-peer systems have grown significantly in popularity over the last few years. An increasing number of research projects have been closely following this trend, looking at many of the paradigm's technical aspects. In the context of data-sharing services, efforts have focused on a variety of issues from object location and routing to fair sharing and peer lifespans. Overall, the majority of these projects have concentrated on either the whole P2P infrastructure or the client-side of peers. Little attention has been given to the peer's server-side, even when that side determines much of the everyday user's experience. In this paper, we make the case for looking at the server-side of peers, focusing on the problem of scheduling download requests at the server-side of P2P systems with the intent of minimizing the average response time experienced by users. We start by characterizing server workload based on extensive trace collection and analysis. We then evaluate the performance and fairness of different scheduling policies through trace-driven simulations. Our results show that average response time can be dramatically reduced by more effectively scheduling the requests on the server-side of P2P systems.
1. INTRODUCTION
The popularity and tremendous success of peer-to-peer (P2P) systems have motivated considerable research on many of the paradigm's technical aspects. In the context of data-sharing services, a number of projects have explored a wide variety of issues including more scalable object location, query and routing protocols, fair resource sharing, and high churn-resilient systems, just to name a few. The majority of these projects have, so far, concentrated on either the whole P2P infrastructure or the client-side of a peer. Little attention has been given to the peer's server-side, although that side determines much of the everyday user's experience. After determining alternative sources for a desired object, the requesting peer initiates the object downloads from a subset of possible providers; each party effectively a
Abstract
This paper proposes the idea of emulating fat-trees in overlays for multi-source multicast applications. Fat-trees are like real trees in that their branches become thicker the closer one gets to the root, thus overcoming the “root bottleneck” of regular trees. We introduce FatNemo, a novel overlay multi-source multicast protocol based on this idea. FatNemo organizes its members into a tree of clusters with cluster sizes increasing closer to the root. It uses bandwidth capacity to decide the highest layer in which a peer can participate, and relies on co-leaders to share the forwarding responsibility and to increase the tree's resilience to path and node failures. We present the design of FatNemo and show simulation-based experimental results comparing its performance with that of three alternative protocols (Narada, Nice and Nice-PRM). These initial results show that FatNemo not only minimizes the average and standard deviation of response time, but also handles end host failures gracefully with minimum performance penalty.
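A small back-of-the-envelope sketch of the "root bottleneck" argument follows: with many sources, a conventional tree concentrates forwarding load on a single root, while a fat-tree-style root cluster splits that load across its crew. The load model (per-source rate times fanout, divided evenly across the root crew) is an illustrative assumption, not FatNemo's analysis.

    # Minimal sketch contrasting the root bottleneck of a regular multicast tree
    # with a fat-tree-style root cluster that shares forwarding among several crew
    # members. The load model is an illustrative assumption, not FatNemo's analysis.

    def root_load_per_node(num_sources, stream_rate_mbps, fanout, root_crew_size):
        """Outbound load (Mbps) each root-crew member must sustain."""
        total = num_sources * stream_rate_mbps * fanout   # traffic the root must forward
        return total / root_crew_size

    if __name__ == "__main__":
        # conventional tree: a single root node carries everything
        print(root_load_per_node(num_sources=10, stream_rate_mbps=0.3, fanout=4, root_crew_size=1))
        # fat-tree-style root cluster: 8 crew members split the same load
        print(root_load_per_node(num_sources=10, stream_rate_mbps=0.3, fanout=4, root_crew_size=8))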
Abstract
Peer-to-peer systems have grown significantly in popularity over the last few years. An increasing number of research projects have been closely following this trend, looking at many of the paradigm's technical aspects. In the context of data-sharing services, efforts have focused on a variety of issues from object location and routing to fair sharing and peer lifespans. Overall, the majority of these projects have concentrated on either the whole P2P infrastructure or the client-side of peers. Little attention has been given to the peer's server-side, even when that side determines much of the everyday user's experience. In this paper, we make the case for looking at the server side of peers, focusing on the problem of scheduling with the intent of minimizing the average response time experienced by users. We start by characterizing server workload based on extensive trace collection and analysis. We then evaluate the performance and fairness of different scheduling policies through trace-driven simulations. Our results show that average response time can be dramatically reduced by more effectively scheduling the requests on the server-side of P2P systems.
Abstract
ICMP-based measurements (e.g. ping) are often criticized as unrepresentative of the applications' experienced performance, as applications are based on TCP/UDP protocols and there is a well-accepted conjecture that routers are often configured to treat ICMP differently from TCP and UDP. However, to the best of our knowledge, this assumption has not been validated. With this in mind, we conducted extensive Internet end-to-end path measurements of these three protocols, spanning over 90 sites (from both commercial and academic networks), over 6,000 paths and more than 28 million probes in PlanetLab during two weeks. Our results show that ICMP performance is a good estimator for TCP/UDP performance for the majority of the paths. However, for nearly 0.5% of the paths, we found persistent RTT differences between UDP and ICMP greater than 50%, while for TCP the difference exceeds 10% for 0.27% of the paths. Thus, although ICMP-based measurements can be trusted as predictors of TCP/UDP performance, distributed systems and network researchers should be aware of some scenarios where these measurements will be heavily misleading; this paper also provides some hints that can help in identifying those situations.
Abstract
One of the most important challenges of peer-to-peer multicast protocols is the ability to efficiently deal with the high degree of transiency inherent to their environment. As multicast functionality is pushed to autonomous, unpredictable peers, significant performance losses can result from group membership changes and the higher failure rates of end-hosts when compared to routers. Achieving high delivery ratios without sacrificing end-to-end latencies or incurring additional costs has proven to be a challenging task. This paper introduces Nemo, a novel peer-to-peer multicast protocol that aims at achieving this elusive goal. Based on two simple techniques, (1) co-leaders to minimize dependencies and (2) triggered negative acknowledgments (NACKs) to detect lost packets, Nemo's design emphasizes conceptual simplicity and minimum dependencies, thus achieving performance characteristics capable of withstanding the natural instability of its target environment. We present an extensive comparative evaluation of our protocol through simulation and wide-area experimentation. We compare the scalability and performance of Nemo with that of three alternative protocols: Narada, Nice and Nice-PRM. Our results show that Nemo can achieve delivery ratios (up to 99.9%) similar to those of comparable protocols under high failure rates, but at a fraction of their cost in terms of duplicate packets (reductions > 90%) and control-related traffic (reductions > 20%).
Abstract
The Shortest Remaining Processing Time (SRPT) scheduling policy was proven, in the 1960s, to yield the smallest mean response time, and recently it was proven that its performance gain over Processor Sharing (PS) usually does not come at the expense of large jobs. However, despite the many advantages of SRPT scheduling, it is not widely applied. One important reason for the sporadic application of SRPT scheduling is that accurate job size information is often unavailable. Our previous work addressed the performance and fairness issues of SRPT scheduling when job size information is inaccurate. We found that SRPT (and FSP) scheduling outperforms PS as long as there exists a (rather small) amount of correlation between the estimated job size and the actual job size. In the work we summarize here, we have developed job size estimation techniques to support the application of SRPT to web server and peer-to-peer server-side scheduling. We have evaluated our techniques with extensive simulation studies and real-world implementation and measurement.
(Effort sponsored by the National Science Foundation under Grants ANI-0093221, ACI-0112891, ANI-0301108, EIA-0130869, and EIA-0224449. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation (NSF).)
1. Introduction
The Shortest Remaining Processing Time (SRPT) scheduling policy is extremely promising because even in a general queuing system (G/G/1), it is provably optimal, leading to the smallest possible mean value of occupancy and therefore of delay time [19]. More recent work has shown that the variance of delay time in queuing systems is lower than FIFO and LIFO [17]. Bansal et al. proved theoretically that the degree of unfairness under SRPT is surprisingly small assuming an M/G/1 queuing model and a heavy-tailed job size distribution [3]. Gong et al. further investigated the fairness issues of SRPT through simulation [7] and confirmed the theoretical results regardin
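A minimal sketch of SRPT with estimated job sizes follows: at each dispatch point the job with the smallest estimated remaining size is served, with estimates that are noisy but correlated with the true sizes. The multiplicative-noise model is an illustrative assumption, not the estimation technique developed in this work.

    # Minimal sketch of SRPT scheduling with estimated job sizes: always serve the
    # request whose *estimated* remaining size is smallest. The noisy-estimate
    # model (true size times a random factor) is an illustrative assumption.
    import random

    class Job:
        def __init__(self, name, true_size):
            self.name = name
            self.remaining = true_size
            # estimate is correlated with, but not equal to, the true size
            self.est_remaining = true_size * random.uniform(0.5, 1.5)

    def srpt_run(jobs, quantum=1.0):
        """Serve jobs in quanta, always picking the smallest estimated remaining size."""
        order = []
        while jobs:
            job = min(jobs, key=lambda j: j.est_remaining)
            served = min(quantum, job.remaining)
            job.remaining -= served
            job.est_remaining = max(job.est_remaining - served, 0.0)
            if job.remaining <= 0:
                order.append(job.name)
                jobs.remove(job)
        return order

    if __name__ == "__main__":
        random.seed(1)
        jobs = [Job("small", 2), Job("medium", 5), Job("large", 20)]
        print(srpt_run(jobs))   # small jobs tend to complete first despite noisy estimates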
2003
Abstract
Elders Know Best: Lifespan-Based Ideas in P2P Systems
Yi Qiao and Fabián E. Bustamante
I. INTRODUCTION
The transiency of peer population and its implications on peer-to-peer (P2P) applications are increasingly calling the attention of the research community. As undesirable as unavoidable, peers' transiency could negate many of the appealing features of the P2P approach. We are exploring new P2P protocols and strategies that, by considering peers' lifespan a key attribute, can greatly boost the stability, efficiency and scalability of these systems. As part of our work, we performed a thorough study of peers' lifespan in a current and widely deployed P2P network. Through active probing of over 500,000 peers, we found that peer lifespan distribution can be modeled by a Pareto distribution of the form ∝ T^k (k < 0), which, in this context, means that a peer's expected remaining lifetime is directly proportional to its current age.
II. PEER LIFESPAN AND P2P PROTOCOLS
This last observation is the basis for a number of new protocols, first proposed in [1], for unstructured and loosely structured (ultra-peer) systems. In most P2P protocols there are a number of instances during a peer's life where it must choose among “acquaintances”, such as when deciding whom to “befriend” (i.e., establishing an open connection) and when responding to a third party's request for recommendation. We have shown elsewhere [1], [2] that even simple lifespan-based P2P protocols can offer significant advantages, not only reduc
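The sketch below illustrates one simple consequence of the Pareto observation: since expected remaining lifetime grows with current age, a peer choosing whom to befriend can simply prefer the candidates that have been up the longest. The candidate list and the "pick the k oldest" rule are illustrative assumptions, not the protocols evaluated in the paper.

    # Minimal sketch of a lifespan-based neighbor choice: under the Pareto
    # observation that expected remaining lifetime grows with current age, prefer
    # the peers that have already been up the longest when picking connections.
    import time

    def pick_neighbors(candidates, k):
        """candidates: dict peer_id -> session start time (seconds since epoch).
        Returns the k peers with the longest current uptime."""
        now = time.time()
        by_age = sorted(candidates, key=lambda p: now - candidates[p], reverse=True)
        return by_age[:k]

    if __name__ == "__main__":
        now = time.time()
        candidates = {"a": now - 30, "b": now - 3600, "c": now - 86400, "d": now - 5}
        print(pick_neighbors(candidates, k=2))   # ['c', 'b']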
Abstract
One of the most important challenges of peer-to-peer multicast protocols is the ability to efficiently deal with the high degree of churn inherent to their environment. As multicast functionality is pushed to autonomous, unpredictable peers, significant performance losses can result from group membership changes and the higher failure rates of end-hosts when compared to routers. Achieving high delivery ratios without sacrificing end-to-end latencies or incurring additional costs has proven to be a challenging task. This paper introduces Nemo, a novel peer-to-peer multicast protocol that aims at achieving this elusive goal. We present an extensive comparative evaluation of our protocol through simulation and wide-area experimentation. We compare the performance of Nemo with that of three alternative protocols: Narada, Nice and Nice-PRM. Our results show how Nemo can achieve delivery ratios similar to those of comparable protocols (up to 99.98%) under different failure rates, but at a fraction of their cost in terms of duplicate packets (reductions > 85%) and control-related traffic.
Keywords: Peer-to-peer, overlay multicast, resilience, churn.
Abstract
We present a mechanism for providing differential data protection to publish/subscribe distributed systems, such as those used in peer-to-peer computing, grid environments, and others. This mechanism, termed “security overlays”, incorporates credential-based communication channel creation, subscription and extension. We describe a conceptual model of publish/subscribe services that is made concrete by our mechanism. We also present an application, Active Video Streams, whose reimplementation using security overlays allows it to react to high-level security policies specified in XML without significant performance loss or the necessity for embedding policy-specific code into the application.
1. Introduction
Distributed applications and end users interact by dynamically sharing data, exchanging information, and using or controlling remote devices. In scientific endeavors, for instance, researchers remotely access resources like microscopes [4], 3D displays [28, 12], and may even wish to operate sophisticated components like the Tokomac fusion facility. In industry, companies share parts designs [10] or other data critical to their operation. Examples include Schlumberger's oil exploration processes, where reservoir simulation data produced in computer centers should be shared with 'on site' personnel conducting drilling [32], and where simulations should use well logs to refine current drilling procedures. Another example is the airline industry, as with Delta Air Lines' sharing of flight and passenger information with third parties who distribute such data to select passengers for cell-phone-based passenger notification [26]. Finally, in remote sensing and control, radar or camera data or telemetry/biometric information is captured, forwarded to, analyzed, and used by interested remote parties, sometimes involving remote control loops, as in telesurgery and targeting. In many such applications, remote users are not interested in and/or should not see all of the data all of the time. Also, the criteria for these “which/whether” decisions can change rapidly. In fact, dynamic interest changes sometimes help make the implementation of such systems or applications feasible, by enabling dynamic data reduction [35], or they are used to optimize implementations, as with lossy multimedia [22]. Consequently, there are conceptual models for such changes, including context sensitivity [14] in human-centered ubiquitous applications, spatial or temporal locality in pervasive and distributed systems [36, 6], and current focus or viewpoint in remote sensing, graphics, and visualization [21]. Finally, whether implicitly determined or explicitly captured by quality of service expressions [29, 30, 3], the occurrence of dynamic interest changes in applications and systems is accompanied by the wide range of effects they can have, starting with simple changes in data selectivity applied to ongoing information exchanges [21], continuing with the need to apply varying transformations to data [22, 28, 24], and also including real-time control reactions as in dynamic sensor repositioning or in telepresence [9] or teleimmersive applications [31].
Security and Protection in Dynamic Data Systems.
The general problem addressed in this paper is: How can appropriate security and protection be associated with the data exchanges that take place in dynamic systems and applications? In remote instrumentation and sensing, for instance, costly physical infrastructure must be protected from unauthorized or inappropriate access.
In remote telemetry, privacy concerns may prevent us from implementing key safety functionality, as evidenced by applications like smart cars [20] or remote biometric monitoring. In cooperative scientific and engineering endeavors, end users wish to protect certain elements of the data being shared, such as the high resolution reservoir modeling data Schlumberger cannot make available to its competitors, or certain materials properties which parts designers do not want to disclose. Similarly, in remote monitoring and e-commerce, it is critical to ensure that only certain elements of data streams are made available to remote parties, as with airlines' caterers who should not receive data about passenger identities but must know about their food preferences, or as with the disclosure of passenger or tracking information to federal agencies in cases of potential criminal activities.
Differential Data Protection in Dynamic Data Systems.
The target systems and applications addressed by our work are distributed applications in which continuous data streams are produced or captured, distributed, transformed, and filtered, in order to make appropriate data available where and when it is needed [28, 24]. The specific problem we address for such applications is that developers typically organize the data being exchanged to meet functional needs, whereas security requirements may require different data organizations, distributions, and access patterns. A simple example is a distributed sensor application in which data captured from multiple remote sensors is combined into a larger composite stream, as needed for sensor fusion or simply to take advantage of bandwidth improvements derived from the use of larger messages, for example. Programs operating on the composite stream can access all of the captured data, thereby increasing the potential damage from security violations. In this case, the problem to be solved is to protect the composite stream such that its data can only be accessed and used differentially. Differential data protection for a data stream is defined as the ability to: (1) give only certain users or programs access to the data being transported or stored, (2) protect individual entries in data items, as when an airline provides caterers access to select portions of passenger records (e.g., indications of food preferences), and (3) limit the transformations and manipulations (i.e., services) that may be applied to data, as when preventing certain data manipulations that can extract or derive sensitive data (e.g., identifying faces in captured video). The data exchanges explored in detail in this paper derive from our ongoing research in sensor systems [2, 30, 37] and in adaptive security [33], where it is important to not only protect access to data, but also to operate on the data itself to prevent its inappropriate use, as when sensor images that contain some highly secure data (e.g., persons' faces, identified military objects) are 'fuzzed out' or 'blacked out' prior to distribution to others. In summary, for any given data stream, the key question we ask is how to protect and secure certain data in that stream, distinguished by data type (e.g., the 'passenger id' field of the 'passenger' event) and/or data content (e.g., data values and positions associated with face recognition). A second question is how to enforce such differential protection across multiple such streams in a distributed environment, where enforcement concerns the imposition of limitations on certain stream manipulations by specific end users, as well as the ability to access specific stream data.
Security Overlays in Data Distribution Middleware.
Our approach to attaining differential data protection augments data distribution middleware with additional security mechanisms, where security meta-information is automatically associated with the data being exchanged. Such meta-information is then used by middleware to guarantee that data is only accessible to and manipulable by authorized parties and that the manipulations by those parties are authorized as well. Essentially, we overlay onto existing data exchanges the security and protection currently needed. Security overlays are entirely dynamic, meaning that they can be changed and updated independently of the data streams they affect, where overlays may be altered while data exchanges are ongoing. The intent is to make security overlays as dynamic as the underlying systems being used and the applications being targeted. Our current implementation of security overlays is in middleware running on standard operating system platforms. This implies that differential data protection is enforced only within the confines of the middleware infrastructure, and it requires that, in addition to the data protection implied by security overlays, middleware must utilize authentication methods to ensure that data is not manipulated in unauthorized ways. The specific mechanism used is credentials, early examples of which are capabilities in systems like Hydra [23]. A credential is applied to some data stream, named by a channel identifier. This credential encapsulates a reference to a set of typed objects in the data stream and rights to these objects. The credential also serves to identify its bearer as an authenticated client. Based on the credential's meta-information (i.e., the type information), two actions may be taken with respect to the data stream. First, a handler may be applied to the stream, and the handler's operations can extract from the stream data of a certain type (e.g., of type 'passenger food preference') or transform the stream's data into a new form by applying computations to it (e.g., computing statistical information). Second, the newly created data can be routed to the client identified in the credential, the latter identified by a client description. This description currently contains an authenticated client identifier, but it can also use a more general way of identifying clients, such as trust levels, client roles, or group memberships (e.g., through community-based authentication [27, 5]). A new data stream created by a handler is not actually
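The sketch below illustrates the credential-and-handler idea in miniature: a credential names a channel, the fields its bearer may see, and a handler that derives a restricted view of each event before it is routed to that client. All class, field, and function names are illustrative assumptions, not the middleware's actual API.

    # Minimal sketch of the credential/handler idea behind security overlays: a
    # credential scopes which event fields a client may see and which handler
    # derives the view delivered to it. Names are illustrative assumptions only.

    class Credential:
        def __init__(self, client_id, channel, allowed_fields, handler):
            self.client_id = client_id
            self.channel = channel
            self.allowed_fields = allowed_fields
            self.handler = handler            # transformation applied before delivery

    def caterer_handler(event, allowed_fields):
        """Project the event down to the fields this credential grants access to."""
        return {k: v for k, v in event.items() if k in allowed_fields}

    def deliver(event, credentials):
        """Route a derived view of the event to every credential on its channel."""
        out = []
        for cred in credentials:
            if cred.channel == event.get("channel"):
                out.append((cred.client_id, cred.handler(event, cred.allowed_fields)))
        return out

    if __name__ == "__main__":
        cred = Credential("caterer-7", "passenger", {"seat", "food_preference"}, caterer_handler)
        event = {"channel": "passenger", "name": "J. Doe", "seat": "12A",
                 "food_preference": "vegetarian", "passport": "X123"}
        print(deliver(event, [cred]))   # the caterer sees seat and food preference only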
Abstract
We consider the problem of choosing who to “befriend” among a collection of known peers in distributed P2P systems. In particular, our work explores a number of P2P protocols that, by considering peers' lifespan distribution a key attribute, can yield systems with performance characteristics more resilient to the natural instability of their environments. This article presents results from our initial efforts, focusing on currently deployed decentralized P2P systems. We measure the observed lifespan of more than 500,000 peers in a popular P2P system for over a week and propose a functional form that fits the distribution well. We consider a number of P2P protocols based on this distribution, and use a trace-driven simulator to compare them against alternative protocols for decentralized and unstructured or loosely-structured P2P systems. We find that simple lifespan-based protocols can reduce the ratio of connection breakdowns and their associated costs by over 42%.
Keywords: Peer-to-Peer, Lifespan, Pareto Distribution, Protocols, Empirical Study.
2002
Abstract
Common to computational grids and pervasive computing is the need for an expressive, efficient, and scalable directory service that provides information about objects in the environment. We argue that a directory interface that ‘pushes’ information to clients about changes to objects can significantly improve scalability. This paper describes the design, implementation, and evaluation of the Proactive Directory Service (PDS). PDS’ interface supports a customizable ‘proactive’ mode through which clients can subscribe to be notified about changes to their objects of interest. Clients can dynamically tune the detail and granularity of these notifications through filter functions instantiated at the server or at the object’s owner, and by remotely tuning the functionality of those filters. We compare PDS’ performance against off-the-shelf implementations of DNS and the Lightweight Directory Access Protocol. Our evaluation results confirm the expected performance advantages of this approach and demonstrate that customized notification through filter functions can reduce bandwidth utilization while improving the performance of both clients and directory servers.
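The subscription model described here might look roughly as follows; the sketch is hypothetical Python, not PDS's actual interface, and the names (ProactiveDirectory, subscribe, filter_fn) are assumptions for illustration:

```python
# Hypothetical sketch of a proactive directory subscription with a tunable filter.
from typing import Callable

class ProactiveDirectory:
    def __init__(self):
        self._objects = {}   # directory object store
        self._subs = []      # (pattern, filter_fn, callback) subscriptions

    def subscribe(self, pattern: str, callback: Callable[[str, dict], None],
                  filter_fn: Callable[[dict], dict] = lambda change: change):
        """Register interest; filter_fn runs at the server to trim notification detail."""
        self._subs.append((pattern, filter_fn, callback))

    def update(self, name: str, attrs: dict):
        """Apply a change and push (filtered) notifications instead of waiting for client polls."""
        self._objects.setdefault(name, {}).update(attrs)
        for pattern, filter_fn, callback in self._subs:
            if name.startswith(pattern):
                callback(name, filter_fn(attrs))

d = ProactiveDirectory()
d.subscribe("cluster/node", lambda n, c: print("notify:", n, c),
            filter_fn=lambda change: {k: v for k, v in change.items() if k == "load"})
d.update("cluster/node17", {"load": 0.82, "temp_c": 61})
```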
Abstract
New trends in high-performance software development such as tool- and component-based approaches have increased the need for flexible and high-performance communication systems. When trying to reap the well-known benefits of these approaches, the question of what communication infrastructure should be used to link the various components arises. In this context, flexibility and high-performance seem to be incompatible goals. Traditional HPC-style communication libraries, such as MPI, offer good performance, but are not intended for loosely-coupled systems. Object- and metadata-based approaches like XML offer the needed plug-and-play flexibility, but with significantly lower performance. We observe that the flexibility and baseline performance of data exchange systems are strongly determined by their wire formats, or by how they represent data for transmission in heterogeneous environments. After examining the performance implications of using a number of different wire formats, we propose an alternative approach for flexible high-performance data exchange, Native Data Representation, and evaluate its current implementation in the portable binary I/O library.
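To make the wire-format trade-off concrete, here is a toy comparison (hypothetical; PBIO's real encoding and metadata differ) between a verbose text/XML representation and a native binary record accompanied by a small, one-time format description:

```python
# Toy illustration of the wire-format trade-off; PBIO's actual encoding differs.
import struct

record = {"timestamp": 1699999999, "temperature": 21.5, "node_id": 17}

# XML-style wire format: self-describing but verbose, with parsing costs on every message.
xml_wire = ("<sample><timestamp>%d</timestamp><temperature>%f</temperature>"
            "<node_id>%d</node_id></sample>"
            % (record["timestamp"], record["temperature"], record["node_id"])).encode()

# Native-style wire format: ship the sender's binary layout plus a one-time format
# description the receiver uses to locate and, if needed, convert each field.
format_desc = [("timestamp", "q"), ("temperature", "d"), ("node_id", "i")]
binary_wire = struct.pack("<qdi", record["timestamp"], record["temperature"], record["node_id"])

# Receiver side: unpack using the shared format description.
fmt = "<" + "".join(code for _, code in format_desc)
decoded = dict(zip([name for name, _ in format_desc], struct.unpack(fmt, binary_wire)))

print(len(xml_wire), "bytes as XML vs", len(binary_wire), "bytes native binary; decoded:", decoded)
```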
Abstract
AIMS is an adaptive introspective management system designed to improve the robustness of complex distributed applications by enabling dynamic, fine-grained monitoring and adaptation. Rather than relying on static instrumentation or fixed monitoring strategies—which quickly become inadequate in heterogeneous, rapidly changing environments—AIMS allows probes to be installed, removed, reconfigured, and redeployed at runtime. It integrates customizable filters, predictive analytics, and both passive and active sensors to generate application-specific metrics and trigger context-appropriate adaptations. AIMS supports a hierarchical feedback model in which system resources are monitored and controlled at one level, while the effectiveness of those mechanisms is evaluated and adapted at a higher level. By enabling lightweight, dynamically adjustable introspection, AIMS reduces over- and under-instrumentation and supports more resilient decision-making in environments such as cluster-based Internet services, where resource conditions and workloads fluctuate unpredictably.
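The runtime probe management described above can be sketched as follows (hypothetical Python names, not AIMS's API): probes are installed, reconfigured, or removed while the application runs, and per-probe filters suppress uninteresting samples to limit instrumentation overhead.

```python
# Hypothetical sketch of runtime-manageable monitoring probes.
import time
from typing import Callable

class ProbeManager:
    def __init__(self):
        self._probes = {}   # name -> [sample_fn, filter_fn, period_s, last_run]

    def install(self, name: str, sample_fn: Callable[[], float],
                filter_fn: Callable[[float], bool] = lambda v: True, period_s: float = 1.0):
        """Install or reconfigure a probe without restarting the application."""
        self._probes[name] = [sample_fn, filter_fn, period_s, 0.0]

    def remove(self, name: str):
        self._probes.pop(name, None)

    def tick(self, emit: Callable[[str, float], None]):
        """Run due probes; filters drop uninteresting samples to avoid over-instrumentation."""
        now = time.monotonic()
        for name, probe in self._probes.items():
            sample_fn, filter_fn, period_s, last = probe
            if now - last >= period_s:
                probe[3] = now
                value = sample_fn()
                if filter_fn(value):
                    emit(name, value)

mgr = ProbeManager()
mgr.install("queue_depth", sample_fn=lambda: 42.0, filter_fn=lambda v: v > 10, period_s=0.5)
mgr.tick(emit=lambda name, v: print("metric:", name, v))
```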
2001
Abstract
As the Internet matures, streaming data services are taking an increasingly important place alongside traditional HTTP transactions. The need to dynamically adjust the delivery of such services to changes in available network and processing resources has spawned substantial research on application-specific methods for dynamic adaptation, including video and audio streaming applications. Such adaptation techniques are well developed, but they are also highly specialized, with the client (receiver) and server (sender) implementing well-defined protocols that exploit content-specific stream properties. This paper describes our efforts to bring the benefits of such content-aware, application-level service adaptation to all types of streaming data and to do so in a manner that is efficient and flexible. Our contribution in this domain is ECho, a high-performance event-delivery middleware system. ECho’s basic functionality provides efficient binary transmission of event data with unique features that support dynamic data-type discovery and service evolution. ECho’s contribution to data stream adaptation is in the mechanisms it provides for its clients to customize their data flows through type-safe dynamic server extension.
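The type-safe, source-side customization of data flows can be pictured roughly like this (hypothetical Python; ECho's actual interface is C-based and uses dynamically generated filter code): a client derives a new channel whose filter runs where the data originates, so only the events it passes ever cross the network.

```python
# Rough sketch of an event channel with a derived (source-side filtered) channel.
from typing import Callable, Optional

class EventChannel:
    def __init__(self):
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self._subscribers.append(handler)

    def submit(self, event: dict) -> None:
        for handler in self._subscribers:
            handler(event)

    def derive(self, filter_fn: Callable[[dict], Optional[dict]]) -> "EventChannel":
        """Create a derived channel whose filter runs at the source, so only events
        the filter passes (possibly transformed) reach the derived channel's subscribers."""
        derived = EventChannel()
        def forward(event: dict) -> None:
            out = filter_fn(event)
            if out is not None:
                derived.submit(out)
        self.subscribe(forward)
        return derived

raw = EventChannel()
downsampled = raw.derive(lambda e: e if e["frame"] % 10 == 0 else None)
downsampled.subscribe(lambda e: print("client sees frame", e["frame"]))
for i in range(25):
    raw.submit({"frame": i})
```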
Abstract
Active Streams is a middleware framework designed to enable adaptive, customizable distributed applications operating in heterogeneous, resource-variable environments. The approach models systems as compositions of services, applications, and self-describing data streams augmented with streamlets—lightweight, location-independent functional units that can be dynamically deployed, tuned, and migrated. Using dynamic code generation (E-Code) for efficient cross-platform execution, Active Streams supports both coarse-grained evolution via streamlet attachment/detachment and fine-grained adaptation through parameter updates and redeployment along changing datapaths. The framework integrates a push-based resource monitoring service (ARMS) for triggering adaptations and a proactive publish/subscribe directory service suited to rapidly changing environments. Together with a streamlet repository and the ECho event infrastructure, these components provide a flexible foundation for building responsive, component-based distributed systems capable of adapting to dynamic conditions in pervasive and high-performance computing settings.
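A hypothetical sketch of the streamlet idea (names below are illustrative, not Active Streams' API): small functional units are attached to, and detached from, a stream's datapath at runtime so the flow can be reshaped as conditions change.

```python
# Hypothetical sketch of streamlets composed along a datapath.
from typing import Callable, Optional

# A streamlet returns a (possibly transformed) item, or None to drop it.
Streamlet = Callable[[dict], Optional[dict]]

class ActiveStream:
    def __init__(self):
        self._streamlets: list[tuple[str, Streamlet]] = []

    def attach(self, name: str, streamlet: Streamlet):
        """Attach a streamlet; in the framework this can happen while data is flowing."""
        self._streamlets.append((name, streamlet))

    def detach(self, name: str):
        self._streamlets = [(n, s) for n, s in self._streamlets if n != name]

    def push(self, item: dict, sink: Callable[[dict], None]):
        for _, streamlet in self._streamlets:
            item = streamlet(item)
            if item is None:
                return
        sink(item)

stream = ActiveStream()
stream.attach("downsample", lambda d: d if d["seq"] % 2 == 0 else None)
stream.attach("annotate", lambda d: {**d, "quality": "reduced"})
stream.push({"seq": 4, "payload": "..."}, sink=print)
stream.detach("downsample")   # adapt the datapath when, e.g., bandwidth recovers
```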
Abstract
The Active Streams Approach to Adaptive Distributed Applications and Services. A Thesis Presented to the Academic Faculty by Fabián E. Bustamante, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Computer Science, Georgia Institute of Technology, November 2001. Copyright © 2002 by Fabián E. Bustamante. Approved: Dr. Karsten Schwan (College of Computing), Chairman; Dr. Mustaque Ahamad (College of Computing); Dr. Greg Eisenhauer (College of Computing); Dr. Calton Pu (College of Computing); Dr. Kishore Ramachandran (College of Computing); Dr. Peter Steenkiste (School of Computer Science, Carnegie Mellon University).
Abstract
Active Streams introduces a middleware framework for building adaptive distributed applications capable of operating in large, heterogeneous, and resource-variable environments. The approach models systems around applications, services, and self-describing data streams enhanced with streamlets—small, location-independent functional units that can be dynamically deployed, tuned, or replaced as conditions change. By combining component-based extensibility with dynamic code generation, Active Streams enables both coarse-grained evolution and fine-grained adaptation across diverse execution contexts. A push-based resource monitoring service supports continuous awareness of system state, while a proactive directory service provides publish/subscribe updates on relevant objects. Together, these mechanisms allow applications and services to respond fluidly to variations in demand and resource availability, simplifying the development of adaptive, data-intensive distributed systems.
2000
Abstract
The Internet and the Grid are changing the face of high performance computing. Rather than tightly-coupled SPMD-style components running in a single cluster, on a parallel machine, or even on the Internet programmed in MPI, applications are evolving into sets of collaborating elements scattered across diverse computational elements. These collaborating components may run on different operating systems and hardware platforms and may be written by different organizations in different languages. Complete “applications” are constructed by assembling these components in a plug-and-play fashion. This new vision for high performance computing demands features and characteristics not easily provided by traditional high-performance communications middleware. In response to these needs, we have developed ECho, a high-performance event-delivery middleware that meets the new demands of the Grid environment. ECho provides efficient binary transmission of event data with unique features that support data-type discovery and enterprise-scale application evolution. We present measurements detailing ECho’s performance to show that ECho significantly outperforms other systems intended to provide this functionality and provides throughput and latency comparable to the most efficient middleware infrastructures available.
Abstract
High performance computing is being increasingly utilized in non-traditional circumstances where it must interoperate with other applications. For example, online visualization is being used to monitor the progress of applications, and real-world sensors are used as inputs to simulations. Whenever these situations arise, there is a question of what communications infrastructure should be used to link the different components. Traditional HPC-style communications systems such as MPI offer relatively high performance, but are poorly suited for developing these less tightly-coupled cooperating applications. Object-based systems and metadata formats like XML offer substantial plug-and-play flexibility, but with substantially lower performance. We observe that the flexibility and baseline performance of all these systems are strongly determined by their 'wire format', or how they represent data for transmission in a heterogeneous environment. We examine the performance implications of different wire formats and present an alternative with significant advantages in terms of both performance and flexibility.
1999
Abstract
We are concerned with the attainment of high performance in I/O on distributed, heterogeneous hardware. Our approach is to combine a program's data retrieval and storage actions with operations executed on the resulting active I/O streams. Performance improvements are attained by exploiting information about these operations and by making runtime changes to their behavior and placement. In this fashion, active I/O can adjust to static system properties derived from the heterogeneous nature of resources and can respond to dynamic changes in system conditions, while reducing the total bandwidth needs and/or the end-to-end latencies of I/O actions.
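A toy example of the placement decision implied here (hypothetical names and numbers): whether an operation such as compression runs at the data source or at the consumer can be chosen at runtime from the current link bandwidth and per-side CPU cost.

```python
# Toy cost model for placing a stream operation at the source or the sink; values are illustrative.
def choose_placement(bytes_per_item, compression_ratio, link_bw_bps, src_cpu_cost_s, sink_cpu_cost_s):
    """Return the placement minimizing estimated per-item latency."""
    at_source = src_cpu_cost_s + (bytes_per_item * compression_ratio * 8) / link_bw_bps
    at_sink = (bytes_per_item * 8) / link_bw_bps + sink_cpu_cost_s
    return ("source", at_source) if at_source <= at_sink else ("sink", at_sink)

# Over a slow link, compressing at the source wins despite its CPU cost.
print(choose_placement(bytes_per_item=1_000_000, compression_ratio=0.3,
                       link_bw_bps=10_000_000, src_cpu_cost_s=0.05, sink_cpu_cost_s=0.05))
```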
1998
Abstract
The Virtual Microscope is being designed as an integrated computer hardware and software system that generates a highly realistic digital simulation of analog, mechanical light microscopy. We present our work over the past year in meeting the challenges in building such a system. The enhancements we made are discussed, as well as the planned future improvements. Performance results are provided that show that the system scales well, so that many clients can be adequately serviced by an appropriately configured data server.
1997
Abstract
Complex distributed collaborative applications have rich computational and communication needs that cannot easily be met by the currently available web-based software infrastructure. In this position paper, we claim that to address the needs of such highly demanding applications, it is necessary to develop an integrated framework that both supports high-performance executions via distributed objects and makes use of agent-based computations to address dynamic application behavior, mobility, and security needs. Specifically, we claim that based on application needs and resource availability, it should be possible for an application to switch at runtime between the remote invocation and evaluation mechanisms of the object and agent technologies being employed. To support such dynamically configurable applications, we identify several issues that arise for the required integrated object-agent system. These include: (1) system support for agent and object executions and (2) the efficient execution of agents and high-performance object implementations using performance techniques like caching, replication, and fragmentation of the state being accessed and manipulated. We are currently developing a system supporting high-end and scalable, collaborative applications.