Title: An Architecture For Edge Networking Services
Authors: Lloyd Brown, Emily Marx, Dev Bali (UC Berkeley); Emmanuel Amaro (Microsoft); Debnil Sur (VMware Research); Ezra Kissel (LBL); Inder Monga (ESnet); Ethan Katz-Bassett (Columbia University); Arvind Krishnamurthy (University of Washington); James McCauley (Mount Holyoke College); Tejas N. Narechania (UC Berkeley); Aurojit Panda (New York University); Scott Shenker (ICSI and UC Berkeley)
Scribe: Ziyi Wang (Xiamen University)
Introduction
The study addresses the challenge of integrating edge networking services into the broader Internet infrastructure while maintaining key principles such as interconnection and neutrality. This issue matters because it affects the efficiency and universality of edge services, such as content delivery networks and privacy protection services, across global networks. Existing systems fall short because they produce fragmented and restricted service offerings, limiting global coverage and innovation: today's edge service architecture lacks unified interconnection and neutrality, creating barriers to widespread and fair service deployment.
Key Idea and Contribution
The authors propose a novel architecture named InterEdge, designed to address these shortcomings by providing a global, extensible, and neutral platform for edge service deployment. InterEdge integrates edge services into the Internet’s existing framework, requiring all service nodes to operate within a standardized execution environment. This approach allows edge services, such as multicast or privacy protection, to be deployed universally across different networks, ensuring consistent performance and accessibility. The architecture facilitates the interconnection of edge service providers, overcoming the limitations of fragmented and restricted systems.
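To make the idea of a standardized execution environment concrete, here is a purely illustrative sketch. All names (`EdgeService`, `process`, `Multicast`) are hypothetical and do not reflect InterEdge's actual interfaces; the point is only that if every service conforms to one narrow contract, any provider's node can host it.

```python
# Hypothetical sketch, NOT InterEdge's real API: a single narrow contract
# that every edge service must satisfy so it can run on any node.
from abc import ABC, abstractmethod


class EdgeService(ABC):
    """Contract for a service deployed in the standardized environment."""

    @abstractmethod
    def process(self, packet: bytes) -> list[bytes]:
        """Handle one packet and return zero or more packets to forward.
        A multicast service returns several copies; a filter may return none.
        """


class Multicast(EdgeService):
    """Toy multicast: replicate each packet toward every downstream."""

    def __init__(self, downstream_count: int):
        self.downstream_count = downstream_count

    def process(self, packet: bytes) -> list[bytes]:
        # One copy per downstream subscriber.
        return [packet] * self.downstream_count


svc = Multicast(downstream_count=3)
copies = svc.process(b"frame")
print(len(copies))  # 3
```

Because any network's nodes expose the same `process` hook, a service written once could, in principle, be deployed across providers without per-network changes, which is the interconnection property the summary describes.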
The paper also discusses operational details, such as enhanced security mechanisms.
Evaluation
The authors evaluated the InterEdge architecture on the FABRIC testbed, demonstrating its effectiveness in improving performance and extensibility. This result is significant because it addresses the fragmented and restricted nature of current edge services, enabling a more cohesive and universally accessible edge network. Readers should care because the proposed architecture has the potential to transform edge networking by providing a unified platform that supports rapid innovation and broad service deployment across diverse networks.
Q1:
Over the past 20-30 years, there have been many attempts to improve network services, but most have not succeeded. What are the reasons for this? What is different about this attempt?
A1:
Past attempts were hindered primarily by the difficulty of sharing economic benefits across differentiated services and by the complexities of attributing responsibility and debugging. What is different this time is that we are not starting with the most profitable areas; instead, we are choosing easier services to implement and adopting a new economic model that avoids complex settlement issues. Challenges remain with debugging and responsibility attribution, but by simplifying service deployment and operations, we hope to overcome them and drive the adoption and improvement of these services.
Q2:
Considering the challenges in ISP networks, particularly the long tail effect of video providers, is it feasible to standardize an effective runtime environment?
A2:
Standardizing a runtime environment faces significant challenges. The global telecommunications infrastructure is dominated by existing industry giants, and there is only a small chance (perhaps one in a hundred) of building something better. Despite this, I am willing to take that chance. Early services not provided by these major players might be ignored, but they are unlikely to be completely sidelined.
Q3:
I’m really glad to see where this is heading and that the PC decided to take a risk on this. But I have a question. Let’s consider a concrete example: suppose the origin pays for a DDoS protection service, and the ISP closer to the subscriber also pays for a DDoS protection service. If both services work well on their own, but passing traffic through one service and then the other negates the value of the first, how do we handle this situation? It’s almost like a privatization of supply.
A3:
We discuss a similar issue in the paper, and it’s something that has kept me up at night. We think there could be a scenario similar to BGP: even though BGP is a flawed protocol, it works because of the business practices around it. We anticipate business practices in which some services are application-specific and can be chosen, while others are unilateral, like DDoS protection or quality of service, and can be applied regardless of their source. This separation seems to be our foothold. We are trying to work out a solution analogous to the Gao-Rexford conditions to handle this challenge.
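The separation the answer describes can be sketched in a few lines. This is only an illustration of the distinction between unilateral and application-selected services; the service names, the `build_chain` convention, and the ordering policy are all assumptions, not anything InterEdge actually specifies.

```python
# Illustrative only: modeling the Q&A's split between "unilateral" services
# (applied by a network regardless of who chose them, e.g., DDoS scrubbing)
# and "application-selected" services chosen per application. The chaining
# convention below is a hypothetical example, not an InterEdge design.
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    unilateral: bool  # True if a network applies it unconditionally


def build_chain(app_selected: list[Service],
                unilateral: list[Service]) -> list[Service]:
    # One possible convention: unilateral services bracket the path ends,
    # with application-selected services running in between, so the two
    # classes never reorder each other.
    return unilateral[:1] + app_selected + unilateral[1:]


chain = build_chain(
    app_selected=[Service("transcode", False)],
    unilateral=[Service("ddos-origin", True), Service("ddos-access", True)],
)
print([s.name for s in chain])  # ['ddos-origin', 'transcode', 'ddos-access']
```

The point of the sketch is that once the two classes are distinguished, conventions about where each class may sit in a path become possible, which is the kind of BGP-like business practice the answer is hoping for.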
Personal Thoughts
This paper presents a compelling solution to the fragmentation of edge services, a significant challenge in networking.
Open questions worth exploring include how InterEdge performs in highly dynamic network environments, such as those with rapidly changing traffic patterns and user demands. Another intriguing area is the potential integration of InterEdge with emerging technologies like 5G and IoT, which could benefit from enhanced edge services. These directions could further validate and extend the impact of this innovative architecture.