The past few years have witnessed a growing number of large-scale networked systems. Most of these systems are built following an overlay approach, with each one regularly and independently probing its environment to guide path selection, route around faulty links, and replicate content for availability. As these systems grow in popularity, this redundant, independent probing will impose an unsustainable monitoring load on the network and restrict the variety, number, and span of distributed services.
The thesis of this project is that a large fraction of globally distributed systems can be built to scale sustainably by strategically reusing the view of the network gathered by long-running, ubiquitous services such as CDNs and P2P systems. This work defines and explores “3R”, a new approach to the design and implementation of distributed systems that minimizes aggregate control and administrative overhead by strategically reusing views of the environment and recycling previously gathered measurements. In particular, we are designing efficient techniques for maintaining, accessing, and reusing this information to build next-generation streaming multicast, content distribution, and data-sharing applications.
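To make the reuse idea concrete, here is a minimal sketch of the kind of pre-existing network view the 3R approach builds on: the set of replica servers a CDN's DNS-based redirection maps a host to, sampled over time. The hostname, sampling interval, and output format are placeholder assumptions for illustration; none of the deployed systems described on this page is reproduced here.

```python
# Minimal sketch (not a deployed system): observe which replica servers a
# CDN's DNS-based redirection maps this host to, and how that set changes
# over time. The hostname below is a placeholder for any CDN-accelerated name.
import socket
import time

CDN_HOSTNAME = "images.example-cdn-customer.com"  # placeholder, not a real measured name

def current_replicas(hostname):
    """Return the set of replica IPs the CDN's DNS currently hands this host."""
    try:
        infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
        return {info[4][0] for info in infos}
    except socket.gaierror:
        return set()

def sample_redirections(hostname, samples=3, interval_s=30):
    """Collect a short time series of redirections; CDNs refresh these
    frequently, so the series encodes the CDN's own view of nearby paths."""
    history = []
    for _ in range(samples):
        history.append((time.time(), current_replicas(hostname)))
        time.sleep(interval_s)
    return history

if __name__ == "__main__":
    for ts, replicas in sample_redirections(CDN_HOSTNAME):
        print(int(ts), sorted(replicas))
```

The papers below reuse exactly this kind of passively available information -- CDN redirections and P2P connection data -- in place of dedicated probing infrastructure.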
People
Group members
- Fabian E. Bustamante, Faculty PI
- Dave Choffnes, PhD Student
Collaborators
- Aleksandar Kuzmanovic, Faculty
- Yan Chen, Faculty
- Ao-Jan Su, PhD Student
- Zihui Ge, AT&T
- Kai Chen, PhD Student
- Rahul Potharaju, MS Student
Papers
Zachary S. Bischof, John S. Otto, Fabián E. Bustamante. Distributed Systems and Natural Disasters -- BitTorrent as a Global Witness. In Proc. of the CoNEXT Special Workshop on the Internet and Disasters (SWID), 2011.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/ZBischof11SWID.pdf
Slides: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/ZBischof11SWID_Slides.pdf
Abstract: Peer-to-peer (P2P) systems represent some of the largest distributed systems in today's Internet. Among P2P systems, BitTorrent is the most popular, potentially accounting for 20-50% of P2P file-sharing traffic. In this paper, we argue that this popularity can be leveraged to monitor the impact of natural disasters and political unrest on the Internet. We focus our analysis on the 2011 Tohoku earthquake and tsunami and use a view from BitTorrent to show that it is possible to identify specific regions and network links where Internet usage and connectivity were most affected.
John S. Otto, Mario A. Sánchez, David R. Choffnes, Fabián E. Bustamante, Georgos Siganos. On Blind Mice and the Elephant -- Understanding the Network Impact of a Large Distributed System. In Proc. of ACM SIGCOMM, 2011.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/JOtto11SIGCOMM.pdf
Slides: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/OttoSigcomm2011.pptx
Abstract: A thorough understanding of the network impact of emerging large-scale distributed systems -- where traffic flows and what it costs -- must encompass users' behavior, the traffic they generate and the topology over which that traffic flows. In the case of BitTorrent, however, previous studies have been limited by narrow perspectives that restrict such analysis. This paper presents a comprehensive view of BitTorrent, using data from a representative set of 500,000 users sampled over a two year period, located in 169 countries and 3,150 networks. This unique perspective captures unseen trends and reveals several unexpected features of the largest peer-to-peer system. For instance, over the past year total BitTorrent traffic has increased by 12%, driven by 25% increases in per-peer hourly download volume despite a 10% decrease in the average number of online peers. We also observe stronger diurnal usage patterns and, surprisingly given the bandwidth-intensive nature of the application, a close alignment between these patterns and overall traffic. Considering the aggregated traffic across access links, this has potential implications on BitTorrent-associated costs for Internet Service Providers (ISPs). Using data from a transit ISP, we find a disproportionately large impact under a commonly used burstable (95th-percentile) billing model. Last, when examining BitTorrent traffic's paths, we find that for over half its users, most network traffic never reaches large transit networks, but is instead carried by small transit ISPs. This raises questions on the effectiveness of most in-network monitoring systems to capture trends on peer-to-peer traffic and further motivates our approach.
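The burstable (95th-percentile) billing model mentioned in the abstract above is what makes strongly diurnal BitTorrent traffic disproportionately costly for a transit customer. As an illustration only (the traffic numbers below are invented, not from the paper), this sketch computes the billable rate from 5-minute usage samples and contrasts a flat profile with a bursty one of equal average.

```python
# Illustrative sketch of burstable (95th-percentile) billing: sample usage in
# 5-minute intervals, discard the top 5% of samples, and bill the highest
# remaining sample. The traffic profiles below are invented for illustration.
import math

def billable_rate_mbps(five_min_samples_mbps):
    """Return the 95th-percentile sample, the rate commonly used for transit billing."""
    ordered = sorted(five_min_samples_mbps)
    idx = math.ceil(0.95 * len(ordered)) - 1  # index after dropping the top 5% of samples
    return ordered[idx]

# Two profiles with the same 100 Mbps average:
flat   = [100] * 100                 # steady usage
bursty = [60] * 90 + [460] * 10      # mostly quiet, with 10% of samples at peak

print(billable_rate_mbps(flat))    # 100 -> billed at the average
print(billable_rate_mbps(bursty))  # 460 -> billed near the peak despite the equal average
```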
David R. Choffnes, Fabián E. Bustamante, Zihui Ge. Crowdsourcing Service-Level Network Event Detection. In Proc. of ACM SIGCOMM, 2010.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/DChoffnes10SIGCOMM.pdf
Abstract: The user experience for networked applications is becoming a key benchmark for customers and network providers. Perceived user experience is largely determined by the frequency, duration and severity of network events that impact a service. While today's networks implement sophisticated infrastructure that issues alarms for most failures, there remains a class of silent outages (e.g., caused by configuration errors) that are not detected. Further, existing alarms provide little information to help operators understand the impact of network events on services. Attempts to address this through infrastructure that monitors end-to-end performance for customers have been hampered by the cost of deployment and by the volume of data generated by these solutions. We present an alternative approach that pushes monitoring to applications on end systems and uses their collective view to detect network events and their impact on services -- an approach we call Crowdsourcing Event Monitoring (CEM). This paper presents a general framework for CEM systems and demonstrates its effectiveness for a P2P application using a large dataset gathered from BitTorrent users and confirmed network events from two ISPs. We discuss how we designed and deployed a prototype CEM implementation as an extension to BitTorrent. This system performs online service-level network event detection through passive monitoring and correlation of performance in end-users' applications.
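To give a feel for the crowdsourced detection idea (not the paper's actual detector), the sketch below flags a candidate event for a network only when several of its peers independently see their own transfer rates drop in the same time window. All thresholds and the data layout are illustrative assumptions.

```python
# Simplified sketch of crowdsourced, service-level event detection: each peer
# passively compares its current transfer rate with its own recent baseline,
# and a network-wide event is suspected only when enough peers in the same
# network flag a drop concurrently. Thresholds here are illustrative only.
from collections import defaultdict

def local_drop(rate_history, current_rate, factor=0.5):
    """Flag a drop when the current rate falls well below this peer's recent mean."""
    if not rate_history:
        return False
    baseline = sum(rate_history) / len(rate_history)
    return current_rate < factor * baseline

def detect_events(observations, min_peers=3):
    """observations: iterable of (network_id, peer_id, rate_history, current_rate)
    gathered in one time window; returns networks with corroborated drops."""
    flagged = defaultdict(set)
    for network_id, peer_id, history, rate in observations:
        if local_drop(history, rate):
            flagged[network_id].add(peer_id)
    return {net: peers for net, peers in flagged.items() if len(peers) >= min_peers}
```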
David R. Choffnes, Fabián E. Bustamante. Taming the Torrent. In USENIX, 2010.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/DChoffnes10login.pdf
Kai Chen, David R. Choffnes, Rahul Potharaju, Yan Chen, Fabián E. Bustamante. Where the Sidewalk Ends: Extending the Internet AS Graph Using Traceroutes From P2P Users. In Proc. of CoNEXT, 2009.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/KChen09Conext.pdf
Abstract: An accurate Internet topology graph is important in many areas of networking, from deciding ISP business relationships to diagnosing network anomalies. Most Internet mapping efforts have derived the network structure, at the level of interconnected autonomous systems (ASes), from a limited number of either BGP- or traceroute-based data sources. While techniques for charting the topology continue to improve, the growth of the number of vantage points is significantly outpaced by the rapid growth of the Internet. In this paper, we argue that a promising approach to revealing the hidden areas of the Internet topology is through active measurement from an observation platform that scales with the growing Internet. By leveraging measurements performed by an extension to a popular P2P system, we show that this approach indeed exposes significant new topological information. Based on traceroute measurements from more than 992,000 IPs in over 3,700 ASes distributed across the Internet hierarchy, our proposed heuristics identify 23,914 new AS links not visible in the publicly-available BGP data -- 12.86% more customer-provider links and 40.99% more peering links than previously reported. We validate our heuristics using data from a tier-1 ISP and show that they correctly filter out all false links introduced by public IP-to-AS mapping. We have made the identified set of links and their inferred relationships publicly available.
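The core step behind the link-discovery result above can be summarized in a few lines: collapse each traceroute into an AS-level path using an IP-to-AS map, then keep the adjacencies missing from the BGP-derived graph. The sketch below shows only that step; the paper's validation and the heuristics that filter out false links introduced by inaccurate public IP-to-AS data are not reproduced here.

```python
# Hedged sketch of AS-link extraction from traceroutes: map IP hops to ASes,
# collapse consecutive duplicates, and report adjacent AS pairs that do not
# appear in a BGP-derived link set. The ip_to_as mapping and the paper's
# false-link filtering heuristics are assumed/stubbed, not implemented here.
def as_path(ip_hops, ip_to_as):
    """Collapse an IP-level traceroute into an AS-level path, skipping unmapped hops."""
    path = []
    for ip in ip_hops:
        asn = ip_to_as.get(ip)
        if asn is not None and (not path or path[-1] != asn):
            path.append(asn)
    return path

def new_as_links(traceroutes, ip_to_as, bgp_links):
    """Return AS adjacencies observed in traceroutes but absent from the BGP view."""
    seen = set()
    for hops in traceroutes:
        path = as_path(hops, ip_to_as)
        for a, b in zip(path, path[1:]):
            link = tuple(sorted((a, b)))
            if link not in bgp_links:
                seen.add(link)
    return seen
```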
Ao-Jan Su, David R. Choffnes, Aleksandar Kuzmanovic, Fabián E. Bustamante. Drafting Behind Akamai: Inferring Network Conditions Based on CDN Redirections. IEEE/ACM Transactions on Networking (ToN), 17(6), 2009.
Paper: https://ieeexplore.ieee.org/document/5238553
Abstract: To enhance Web browsing experiences, content distribution networks (CDNs) move Web content "closer" to clients by caching copies of Web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab (PL) vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes "recommended" by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial. Because these Akamai nodes are part of a closed system, we provide a method for mapping Akamai-recommended paths to those in a generic overlay and demonstrate that these one-hop paths indeed outperform direct ones.
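One way to picture the detouring result above: to reach a destination, prefer a one-hop detour through an overlay node whose recent CDN redirections overlap the destination's, and fall back to the direct path when no candidate clears a threshold. The overlap metric and threshold below are illustrative stand-ins for the paper's measured correlations and pruning algorithms, not the actual method.

```python
# Illustrative detouring heuristic (assumed, not the paper's algorithm): rank
# candidate one-hop detour nodes by how much their recent CDN replica sets
# overlap the destination's, and use the direct path if no candidate clears a
# minimum overlap. Replica sets would come from DNS redirection measurements.
def redirection_overlap(replicas_a, replicas_b):
    """Jaccard overlap between two nodes' recently observed CDN replica sets."""
    if not replicas_a or not replicas_b:
        return 0.0
    return len(replicas_a & replicas_b) / len(replicas_a | replicas_b)

def pick_detour(dst_replicas, candidates, min_overlap=0.3):
    """candidates: dict of overlay node -> its recent CDN replica set.
    Returns the best detour node, or None to keep the direct path."""
    best_node, best_score = None, min_overlap
    for node, replicas in candidates.items():
        score = redirection_overlap(dst_replicas, replicas)
        if score > best_score:
            best_node, best_score = node, score
    return best_node
```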
David R. Choffnes, Fabián E. Bustamante. On the Effectiveness of Measurement Reuse for Performance-Based Detouring. In Proc. of IEEE INFOCOM, 2009.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/DChoffnes09Infocom.pdf
Abstract: For both technological and economic reasons, the default path between two end systems in the wide-area Internet can be suboptimal. This observation has motivated a number of systems that attempt to improve reliability and performance by routing over one or more hops in an overlay. Most of the proposed solutions, however, fall at an extreme in the cost-performance trade-off. While some provide near-optimal performance with an unscalable measurement overhead, others avoid measurement when selecting routes around network failures but make no attempt to optimize performance. This paper presents an experimental evaluation of an alternative approach to scalable, performance detouring based on the strategic reuse of measurements from other large-scale distributed systems, namely content distribution networks (CDN). By relying on CDN redirections as hints on network conditions, higher performance paths are readily found with little overhead and no active network measurement. We report results from a study of more than 13,700 paths between 170 widely-distributed hosts over a three-week period, showing the advantages of this approach. We demonstrate the practicality of our approach by implementing an FTP suite that uses our publicly available SideStep library to take advantage of these improved Internet routes.
David R. Choffnes, Fabián E. Bustamante. Taming the Torrent: A practical approach to reducing cross-ISP traffic in P2P systems. In Proc. of ACM SIGCOMM, 2008.
Paper: http://aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/DChoffnes08Sigcomm.pdf
Abstract: Peer-to-peer (P2P) systems, which provide a variety of popular services, such as file sharing, video streaming and voice-over-IP, contribute a significant portion of today's Internet traffic. By building overlay networks that are oblivious to the underlying Internet topology and routing, these systems have become one of the greatest traffic-engineering challenges for Internet Service Providers (ISPs) and the source of costly data traffic flows. In an attempt to reduce these operational costs, ISPs have tried to shape, block or otherwise limit P2P traffic, much to the chagrin of their subscribers, who consistently find ways to eschew these controls or simply switch providers. In this paper, we present the design, deployment and evaluation of an approach to reducing this costly cross-ISP traffic without sacrificing system performance. Our approach recycles network views gathered at low cost from content distribution networks to drive biased neighbor selection without any path monitoring or probing. Using results collected from a deployment in BitTorrent with over 120,000 users in nearly 3,000 networks, we show that our lightweight approach significantly reduces cross-ISP traffic and, over 33% of the time, selects peers along paths that are within a single autonomous system (AS). Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30% lower loss rates than those picked at random, and that these high-quality paths can lead to significant improvements in transfer rates. In challenged settings where peers are overloaded in terms of available bandwidth, our approach provides 31% average download-rate improvement; in environments with large available bandwidth, it increases download rates by 207% on average (and improves median rates by 883%).
Dataset: As stated in the paper, data used for this study is available upon request to edgescope@aqua-lab.org. For privacy reasons, the data is provided at an AS-level granularity. Note that you will have to agree to the usage terms before we grant access to the data, and that the dataset consists of tens of GB of compressed data, so plan accordingly.
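The "recycled network views" in the abstract above are, concretely, each peer's recent CDN redirection behavior. Below is a hedged sketch of biased neighbor selection on top of such views: summarize each peer's redirections as a ratio map over replica clusters and prefer peers whose maps look most similar, on the premise that the CDN sends nearby clients to the same replicas. The similarity metric, thresholds, and data layout are illustrative, not the deployed extension's exact implementation.

```python
# Illustrative sketch of CDN-view-based neighbor ranking (assumptions noted in
# the text above): each peer's "ratio map" gives the fraction of its recent
# CDN redirections going to each replica cluster; candidate peers are ranked
# by the cosine similarity of their maps to ours.
import math

def ratio_map(redirection_counts):
    """Normalize per-replica-cluster redirection counts into fractions."""
    total = sum(redirection_counts.values())
    return {cluster: n / total for cluster, n in redirection_counts.items()} if total else {}

def cosine_similarity(map_a, map_b):
    dot = sum(map_a.get(c, 0.0) * map_b.get(c, 0.0) for c in set(map_a) | set(map_b))
    norm = math.sqrt(sum(v * v for v in map_a.values())) * math.sqrt(sum(v * v for v in map_b.values()))
    return dot / norm if norm else 0.0

def rank_neighbors(my_counts, candidate_counts):
    """Order candidate peers by how closely their CDN view matches ours."""
    mine = ratio_map(my_counts)
    scored = [(cosine_similarity(mine, ratio_map(counts)), peer)
              for peer, counts in candidate_counts.items()]
    return [peer for _, peer in sorted(scored, key=lambda t: t[0], reverse=True)]
```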
Ao-Jan Su, David R. Choffnes, Fabián E. Bustamante, Aleksandar Kuzmanovic. Relative Network Positioning via CDN Redirections. In Proc. of the International Conference on Distributed Computing Systems (ICDCS), 2008.
Paper: http://www.aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/AJSu08CRP.pdf
Abstract: Many large-scale distributed systems can benefit from a service that allows them to select among alternative nodes based on their relative network positions. A variety of approaches propose new measurement infrastructures that attempt to scale this service to large numbers of nodes by reducing the amount of direct measurements to end hosts. In this paper, we introduce a new approach to relative network positioning that eliminates direct probing by leveraging pre-existing infrastructure. Specifically, we exploit the dynamic association of nodes with replica servers from large content distribution networks (CDNs) to determine relative position information -- we call this approach CDN-based Relative network Positioning (CRP). We demonstrate how CRP can support two common examples of location information used by distributed applications: server selection and dynamic node clustering. After describing CRP in detail, we present results from an extensive wide-area evaluation that demonstrates its effectiveness.
Ao-Jan Su, David R. Choffnes, Aleksandar Kuzmanovic, Fabián E. Bustamante. Drafting Behind Akamai (Travelocity-Based Detouring). In Proc. of ACM SIGCOMM, 2006.
Paper: http://aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/Ajsu06DBA.pdf
Slides: http://aqualab.cs.northwestern.edu/wp-content/uploads/2019/02/Ajsu06DBA.ppt
Abstract: To enhance web browsing experiences, content distribution networks (CDNs) move web content closer to clients by caching copies of web objects on thousands of servers worldwide. Additionally, to minimize client download times, such systems perform extensive network and server measurements, and use them to redirect clients to different servers over short time scales. In this paper, we explore techniques for inferring and exploiting network measurements performed by the largest CDN, Akamai; our objective is to locate and utilize quality Internet paths without performing extensive path probing or monitoring. Our contributions are threefold. First, we conduct a broad measurement study of Akamai's CDN. We probe Akamai's network from 140 PlanetLab vantage points for two months. We find that Akamai redirection times, while slightly higher than advertised, are sufficiently low to be useful for network control. Second, we empirically show that Akamai redirections overwhelmingly correlate with network latencies on the paths between clients and the Akamai servers. Finally, we illustrate how large-scale overlay networks can exploit Akamai redirections to identify the best detouring nodes for one-hop source routing. Our research shows that in more than 50% of investigated scenarios, it is better to route through the nodes recommended by Akamai than to use the direct paths. Because this is not the case for the rest of the scenarios, we develop low-overhead pruning algorithms that avoid Akamai-driven paths when they are not beneficial.