James Kempf
  Internet Draft                                              Rob Austein
  Document: draft-iab-e2e-futures-02.txt                              IAB
  Expires: October 2003                                        April 2003
  
  
                  The Rise of the Middle and the Future of End to End:
               Reflections on the Evolution of the Internet Architecture
  
  
  
  Status of this Memo
  
     This document is an Internet-Draft and is in full conformance with all
     provisions of Section 10 of RFC2026.
  
     Internet-Drafts are working documents of the Internet Engineering Task Force
      (IETF), its areas, and its working groups.  Note that other groups may
     also distribute working documents as Internet-Drafts.
  
     Internet-Drafts are draft documents valid for a maximum of six months and
     may be updated, replaced, or obsoleted by other documents at any time.  It
     is inappropriate to use Internet-Drafts as reference material or to cite
     them other than as "work in progress."
  
     The list of current Internet-Drafts can be accessed at
          http://www.ietf.org/ietf/1id-abstracts.txt
     The list of Internet-Draft Shadow Directories can be accessed at
          http://www.ietf.org/shadow.html.
  
  
  Abstract
  
   The end to end principle is the core architectural principle of the Internet.
   In this document, we briefly examine how the end to end principle has been
   applied to the Internet architecture over the years. We discuss current
   trends in the evolution of the Internet architecture in relation to the end
   to end principle, and, in light of these trends, draw some conclusions about
   the future of the end to end principle and of the Internet architecture that
   it supports.
  
  Table of Contents
  
     1.0  Introduction..................................................2
     2.0  A Brief History of the End to End Principle...................2
     3.0  Trends Opposed to the End to End Principle....................4
     4.0  Whither the End to End principle?.............................7
     5.0  Internet Standards as an Arena for Conflict...................9
     6.0  Conclusions...................................................9
     7.0  Acknowledgements.............................................10
     8.0  References...................................................10
     9.0  Security Considerations......................................11
     10.0  IANA Considerations........................................11
  
     Kempf and Austein      Expires October 2003            [Page 1]


     Internet Draft        Future of End to End            April, 2003
  
     11.0  Author Information.........................................11
     12.0  Full Copyright Statement...................................11
  
   1.0   Introduction
  
   One of the key architectural principles of the Internet is the end to end
   principle, articulated in the papers by Saltzer, Reed, and Clark [1][2]. The
   end to end principle was originally framed as a question of where best not
   to put functions in a communication system. Yet, in the ensuing years, it
   has evolved to address concerns of maintaining openness, increasing
   reliability and robustness, and preserving the properties of user choice and
   ease of new service development, as discussed by Blumenthal and Clark in
   [3]; concerns that were not part of the original articulation of the end to
   end principle.
  
  In this document, we examine how the interpretation of the end to end principle
  has evolved over the years, and where it stands currently. We examine trends in
  the development of the Internet that have led to pressure to define services in
  the network, a topic that has already received some amount of attention from
  the IAB in RFC 3238 [4]. We describe some considerations about how the end to
  end principle might evolve in light of these trends.
  
  Discussion of this draft should be directed to:
  
      end2end-interest@postel.org
  
  subscribe through:
  
      http://www.postel.org/mailman/listinfo/end2end-interest.
  
   2.0   A Brief History of the End to End Principle
  
   2.1 In the Beginning...
  
  The end to end principle was originally articulated as a question of where best
  not to put functions in a communication system:
  
     The function in question can completely and correctly be implemented only
     with the knowledge and help of the application standing at the end points of
     the communication system. Therefore, providing that questioned function as a
     feature of the communication system itself is not possible. (Sometimes an
     incomplete version of the function provided by the communication system may
     be useful as a performance enhancement.) [1].
  
  The specific examples given in [1] and other references at the time [2]
  primarily involve transmission of data packets: data integrity, delivery
  guarantees, duplicate message suppression, per packet encryption, and
  transaction management. From the viewpoint of today's Internet architecture, we
  would view most of these as transport layer functions (data integrity, delivery
  guarantees, duplicate message suppression, and perhaps transaction management),
  others as network layer functions with support at other layers where necessary
  (for example, packet encryption), and not application layer functions.
  
  
  
     Kempf and Austein      Expires October 2003            [Page 2]


     Internet Draft        Future of End to End            April, 2003
  
  Interestingly, the expression of the end to end principle cited above is
  phrased as a negative: what *cannot* be fully provided in a communication
  system rather than what can be provided. Much of the wider applicability later
  attributed to the end to end principle, outside of the original application to
  the transport of packets, derives from this phrasing.
  
  2.2 ...In the Middle...
  
  As the Internet developed, the end to end principle gradually widened to
  concerns about where best to put the state associated with applications in the
  Internet: in the network or at end nodes. The best example is the description
  in RFC 1958 [5]:
  
     This principle has important consequences if we require applications to
     survive partial network failures. An end-to-end protocol design should not
     rely on the maintenance of state (i.e. information about the state of the
     end-to-end communication) inside the network. Such state should be
     maintained only in the endpoints, in such a way that the state can only be
     destroyed when the endpoint itself breaks (known as fate-sharing). An
     immediate consequence of this is that datagrams are better than classical
     virtual circuits.  The network's job is to transmit datagrams as efficiently
     and flexibly as possible. Everything else should be done at the fringes.
  
   The original articulation of the end to end principle, as a statement about
   where not to put functions in a communication system, took a while to
   percolate through the engineering community, and by this point it had
   evolved into a broad architectural statement about what belongs in the
   network and what doesn't. RFC 1958 uses the
  term "application" to mean the entire network stack on the end node, including
  network, transport, and application layers, in contrast to the earlier
  articulation of the end to end principle as being about the communication
  system itself.  "Fate-sharing" describes this quite clearly: the fate of a
  conversation between two applications is only shared between the two
  applications; the fate does not depend on anything in the network, except for
  the network's ability to get packets from one application to the other.
  
  The end to end principle in this formulation is specifically about what kind of
  state is maintained where:
  
     To perform its services, the network maintains some state information:
     routes, QoS guarantees that it makes, session information where that is used
     in header compression, compression histories for data compression, and the
     like. This state must be self-healing; adaptive procedures or protocols must
     exist to derive and maintain that state, and change it when the topology or
     activity of the network changes. The volume of this state must be minimized,
     and the loss of the state must not result in more than a temporary denial of
     service given that connectivity exists.  Manually configured state must be
     kept to an absolute minimum.[5]
  
  In this formulation of the end to end principle, state involved in getting
  packets from one end of the network to the other is maintained in the network.
  The state is "soft state," in the sense that it can be quickly dropped and
  reconstructed (or even required to be periodically renewed) as the network
  topology changes due to routers and switches going on and off line. "Hard
  
  
     Kempf and Austein      Expires October 2003            [Page 3]


     Internet Draft        Future of End to End            April, 2003
  
  state", state upon which the proper functioning of the application depends, is
  only maintained in the end nodes.
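
   As a purely illustrative sketch (our own, and not drawn from RFC 1958),
   soft state of this kind is typically installed with a lifetime and refreshed
   periodically by whoever depends on it; if the refreshes stop, or the network
   element holding the state reboots, the state simply expires or vanishes and
   is re-derived later, with no coordinated recovery required:

      import time

      class SoftStateTable:
          """Minimal sketch: entries expire unless periodically refreshed."""

          def __init__(self, lifetime_seconds=30):
              self.lifetime = lifetime_seconds
              self.entries = {}            # key -> (value, expiry timestamp)

          def refresh(self, key, value):
              # Installing and refreshing are the same operation; the sender
              # simply repeats it well within the lifetime.
              self.entries[key] = (value, time.time() + self.lifetime)

          def lookup(self, key):
              # Expired and missing state look identical: the element no
              # longer has it, and it is rebuilt when the next refresh
              # arrives.
              entry = self.entries.get(key)
              if entry is None or entry[1] < time.time():
                  self.entries.pop(key, None)
                  return None
              return entry[0]

   Losing such a table costs at most one refresh interval of degraded service,
   which is precisely the "temporary denial of service" that the text quoted
   above allows.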
  
  In summary, the general awareness both of the principle itself and of its
  implications for how unavoidable state should be handled grew over time to
  become a (if not the) foundation principle of the Internet architecture.
  
  2.3 ...And Now.
  
  An interesting example of how the end to end principle continues to influence
  the technical debate in the Internet community is IP mobility. While the
  existing Internet routing architecture imposes some severe constraints on how
  closely IP mobility can match the end to end principle, the Mobile IPv6
  standard, described in the Mobile IPv6 draft by Johnson, Perkins, and Arkko
   [6], tries to strike a balance. Mobile IPv6 eliminates the local routing
   proxy (the Foreign Agent), a feature of the older Mobile IPv4 design [7]
   that compromised end to end routing. The end node now handles its own
   routing identifier, the care-of address. In addition, Mobile IPv6 includes
   secure
  mechanisms for optimizing routing to allow end to end routing between the
  mobile end node and the correspondent node, removing the need to route through
  the global routing proxy at the home agent. These features are all based on end
  to end considerations. However, the need for the global routing proxy in the
  home agent in Mobile IPv6 is determined by the aliasing of the global node
  identifier with the routing identifier in the Internet routing architecture, a
  topic that was discussed in an IAB workshop and reported in RFC 2956 [8], and
  that hasn't changed in IPv6.
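
   As an illustration of the routing decision just described (a hypothetical
   sketch, not code from the Mobile IPv6 specification), a correspondent node
   that holds a verified binding for a mobile node's home address can send
   packets end to end to the care-of address, while one that does not must
   still send via the home agent proxy:

      # Hypothetical correspondent-node binding cache; the names are ours.
      binding_cache = {}      # home address -> current care-of address

      def register_binding(home_address, care_of_address):
          # Installed only after the secure binding update exchange noted
          # above has succeeded.
          binding_cache[home_address] = care_of_address

      def next_hop(home_address, home_agent_address):
          # With a binding, route directly to the mobile node's care-of
          # address; without one, fall back to the home agent proxy.
          return binding_cache.get(home_address, home_agent_address)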
  
  Despite this constraint, the vision emerging out of the IETF working groups
  developing standards for mobile networking is of a largely autonomous mobile
  node with multiple wireless link options, among which the mobile node picks and
  chooses. The end node is therefore responsible for maintaining the integrity of
  the communication, as the end to end principle implies. This kind of innovative
  application of the end to end principle derives from the same basic
  considerations of reliability and robustness (wireless link integrity, changes
  in connectivity and service availability with movement, etc.) that motivated
  the original development of the end to end principle. While the basic
  reliability of wired links and routing and switching equipment has improved
  considerably since the end to end principle was formalized 15 years ago, the
  reliability or unreliability of wireless links is governed more strongly by the
  basic physics of the medium and the instantaneous radio propagation conditions.
  It therefore seems likely that despite the inclusion of link mechanisms to
  mitigate unreliability in wireless links, the end to end principle will play an
  increasingly important role in IP wireless networking technology.
  
   3.0   Trends Opposed to the End to End Principle
  
  While the end to end principle continues to provide a solid foundation for much
  IETF design work, the specific application of the end to end principle
  described in RFC 1958 has increasingly come into question from various
  directions. The IAB has been concerned about trends opposing the end to end
  principle for some time, for example RFC 2956 [8] and RFC 2775 [11]. The
  primary focus of concern in these documents is the reduction in transparency
  due to the introduction of NATs and other address translation mechanisms in the
  Internet, and the consequences to the end to end principle of various scenarios
  
     Kempf and Austein      Expires October 2003            [Page 4]


     Internet Draft        Future of End to End            April, 2003
  
  involving full, partial, or no deployment of IPv6. More recently, the topic of
  concern has shifted to the consequences of service deployment in the network.
  The IAB opinion on Open Pluggable Edge Services (OPES) in RFC 3238 [4] is
  intended to assess the architectural desirability of defining services in the
  network and to raise questions about how such services might result in
  compromises of the end to end principle. Clark, et al. in [9] and Carpenter in
  RFC 3234 [10] also take up the topic of service definition in the network.
  
  Perhaps the best review of the forces militating against the end to end
  principle is by Blumenthal and Clark in [3]. The authors make the point that
  the Internet originally developed among a community of like-minded technical
  professionals who trusted each other, and was administered by academic and
  government institutions who enforced a policy of no commercial use. The major
  stakeholders in the Internet are quite different today. As a consequence, new
  requirements have evolved over the last decade. Examples of these requirements
  are discussed in the following subsections. Other discussions about pressures
  on the end to end principle in today's Internet can be found in the discussion
   by Reed [12] and Moors' paper in the 2002 IEEE International Conference on
   Communications [13].
  
  3.1 Lack of Trust
  
  Perhaps the single most important change from the Internet of 15 years ago is
  the lack of trust between end nodes. Because the end users in the Internet of
  15 years ago were few, and were largely dedicated to using the Internet as a
  tool for computer science research and for communicating research results,
  trust between end users (and thus between the end nodes that they use) and
  between network operators and their users was simply not an issue in general.
  Today, the motivations of some individuals using the Internet are not always
  entirely ethical, and, even if they are, the assumption that end nodes will
  always co-operate to achieve some mutually beneficial action, as implied by the
  end to end principle, is not always accurate. In addition, the growth in users
  who are either not technologically sophisticated enough or simply uninterested
  in maintaining their own security has required network operators to become more
  proactive in deploying measures to prevent naive or uninterested users from
   inadvertently or intentionally generating security problems. Among the most
   common examples of network elements interposing between end hosts are those
  dedicated to security: firewalls, VPN tunnel endpoints, certificate servers,
  etc. These intermediaries are designed to protect the network from unimpeded
  attack or to allow two end nodes that may have no inherent reason to trust each
  other to achieve some level of trust; but, at the same time, they act as
  impediments for end to end communications.
  
  3.2 New Service Models
  
  New service models inspired by new applications require achieving the proper
  performance level as a fundamental part of the delivered service. These service
  models are a significant change from the original best effort service model.
  Email, file transfer, and even Web access aren't perceived as failing if
  performance degrades, though the user may become frustrated at the time
  required to complete the transaction. However, for streaming audio and video,
  to say nothing of real time bidirectional voice and video, achieving the proper
  performance level, whatever that might mean for an acceptable user experience
  of the service, is part of delivering the service, and a customer contracting
  
     Kempf and Austein      Expires October 2003            [Page 5]


     Internet Draft        Future of End to End            April, 2003
  
  for the service has a right to expect the level of performance for which they
  have contracted. For example, content distributors sometimes release content
  via content distribution servers that are spread around the Internet at various
  locations to avoid delays in delivery if the server is topologically far away
  from the client. Retail broadband and multimedia services are a new service
  model for many service providers.
  
  3.3 Rise of the Third Party
  
  Academic and government institutions ran the Internet of 10 years ago. These
  institutions did not expect to make a profit from their investment in
  networking technology. In contrast, the network operator with which most
  Internet users deal today is the commercial ISP. Commercial ISPs run their
  networks as a business, and expect to make a profit (or at least not lose much)
  on their investment in the network. While this radical change in business model
  is not an excuse for modifying an architectural principle that has exhibited
  its value over time, it does put a certain amount of pressure on the end to end
  principle.
  
  In particular, the standard retail dialup bit pipe account with email and shell
  access has become a commodity service, resulting in low profit margins. While
  many ISPs are happy with this business model and are able to survive on it,
  others would like to deploy different service models that have a higher profit
  potential and provide the customer with more or different services. An example
  is retail broadband bit pipe access via cable or DSL coupled with streaming
  multimedia. Some ISPs that offer broadband access also deploy content
  distribution networks to increase the performance of streaming media. These
  services are typically deployed so that they are only accessible within the
   ISP's network, and as a result, they do not contribute to open, end to end
  service.
  
   ISPs are not the only third party intermediaries that have appeared within
   the last 10 years. In contrast to the earlier role of corporations and
   governments in running the Internet, corporate network administrators and
   government officials have become increasingly demanding of opportunities to
   interpose between the two parties in an end to end conversation. A benign
   motivation for this
  involvement is to mitigate the lack of trust, so the third party acts as a
  trust anchor or enforcer of good behavior between the two ends. A less benign
  motivation is for the third parties to insert policy for their own reasons,
  perhaps taxation or even censorship. The requirements of third parties often
  have little or nothing to do with technical concerns, but rather derive from
  particular social and legal considerations.
  
  3.4 The Consumer as the Primary User
  
  The original users of the Internet were technologists who understood how it
  worked and had no reservations about tinkering with the software or hardware on
  their end hosts. The users of today are primarily non-technical consumers who
  buy packaged hardware and software from vendors and contract with ISPs for
  service. They expect their Internet service to function smoothly like any other
  product they buy, without much involvement on their part in keeping the product
  functional. This development matches closely the development of other
  technologies, for example the automobile, that have become mainstream.
  
  
     Kempf and Austein      Expires October 2003            [Page 6]


     Internet Draft        Future of End to End            April, 2003
  
  This pressure to simplify the user experience has resulted in a corresponding
  pressure to reduce the amount of installation, configuration, maintenance, and
  upgrade on end nodes. Requiring user involvement in the deployment of new
  software or hardware on the end nodes, in order to deploy new services, runs
  directly counter to this trend. One response has been the tendency to move
  deployment of new services to servers running an existing protocol, such as
  HTTP, or downloadable code, such as Java or browser plug-ins, which don't
  require any user involvement to install. Utilizing existing protocols such as
  HTTP also simplifies deployment from the network operator's perspective, since
  the network operator does not need to open a new hole in the firewall.
  
   Another response has been to deploy network intermediaries that provide the
   service.
  Typically, these intermediaries don't interpose on a flow between a client and
  a server, but they may act more like DNS, in that the intermediary is required
  in order to get access to the service. A further development of this trend
  would be to move much of the context and configuration for a user into a node
  in the network, where it can be upgraded without any user involvement. This
  development would remove the end host as the definitive location for the
  application and spread it out between the network and the end host.
  
   4.0   Whither the End to End Principle?
  
  Given the pressures on the end to end principle discussed in the previous
  section, a question arises about the future of the end to end principle. Does
  the end to end principle have a future in the Internet architecture or not? If
  it does have a future, how should it be applied? Clearly, an unproductive
  approach to answering this question is to insist upon the end to end principle
  as a fundamentalist principle that allows no compromise. The pressures
  described above are real and powerful, and if the current Internet technical
  community chooses to ignore these pressures, the likely result is that a market
  opportunity will be created for a new technical community that does not ignore
  these pressures but which may not understand the implications of their design
  choices. A more productive approach is to return to first principles and re-
  examine what the end to end principle is trying to accomplish, and then update
  our definition and exposition of the end to end principle given the
  complexities of the Internet today.
  
  4.1 Consequences of the End to End Principle
  
  In this section, we consider the two primary desirable consequences of the end
  to end principle: protection of innovation and provision of reliability and
  robustness.
  
  4.1.1   Protection of Innovation
  
  One desirable consequence of the end to end principle is protection of
  innovation. Requiring modification in the network in order to deploy new
   services is still typically more difficult than modifying end nodes. The
   lack of widespread multicast deployment by public service providers is an
   example, since multicast is impossible to deploy without touching the
   network. The
  counterargument in Section 3.4 - that many end nodes are now essentially closed
  boxes which are not updatable and that most users don't want to update them
  anyway - does not apply to all nodes and all users. Many end nodes are still
  user configurable and a sizable percentage of users are "early adopters," who
  
     Kempf and Austein      Expires October 2003            [Page 7]


     Internet Draft        Future of End to End            April, 2003
  
  are willing to put up with a certain amount of technological grief in order to
  try out a new idea. And, even for the closed boxes and uninvolved users,
  downloadable code that abides by the end to end principle can provide fast
  service innovation. Requiring someone with a new idea for a service to convince
  a bunch of ISPs or corporate network administrators to modify their networks is
  much more difficult than simply putting up a Web page with some downloadable
  software implementing the service.
  
  4.1.2   Reliability and Robustness
  
   The second desirable consequence of the end to end principle is an increase
   in the reliability and robustness of the exchange between the two parties in
   the
  conversation. During the early development of the Internet, the basic
  reliability of the hardware and software was fairly low, so involving
  additional network elements between the two ends could radically decrease the
  reliability of the overall connection. Technical reliability has improved
  considerably, but reliability due to involvement of network elements is still a
  concern [4]. In particular, as discussed in Section 2.3, wireless links suffer
  from an inherent unreliability that can only be partially mitigated by costly
   measures at the link layer, and new software upgrades still suffer from
  unexpected bugs, despite the increased quality control applied by vendors.
  
  4.1.3   Reliability and Trust
  
  Of more concern today, however, is the decrease in reliability and robustness
  that results from deliberate, active attacks on the network infrastructure and
  end nodes. While the original developers of the Internet were concerned by
  large-scale system failures, attacks of the subtlety and variety that the
  Internet experiences today were not a problem during the original development
  of the Internet. By and large, the end to end principle was not addressed to
  the decrease in reliability resulting from attacks deliberately engineered to
  take advantage of subtle flaws in software. These attacks are part of the
  larger issue of the trust breakdown discussed in Section 3.1. Thus, the issue
  of the trust breakdown can be considered another forcing function on the
   Internet architecture, similar to the issue of reliability and robustness
   due to technical failures.
  
  The immediate reaction to this trust breakdown has been to try to back fit
  security into existing protocols. While this effort is necessary, it is not
  sufficient. The issue of trust must become as firm an architectural principle
  in protocol design for the future as the end to end principle is today. Trust
  isn't simply a matter of adding some cryptographic protection to a protocol
  after it is designed. Rather, prior to designing the protocol, the trust
  relationships between the network elements involved in the protocol must be
  defined, and boundaries must be drawn between those network elements that share
  a trust relationship. The trust boundaries should be used to determine what
  type of communication occurs between the network elements involved in the
  protocol and which network elements signal each other. When communication
  occurs across a trust boundary, cryptographic or other security protection of
  some sort may be necessary. Additional measures may be necessary to secure the
  protocol when communicating network elements do not share a trust relationship.
  For example, a protocol might need to minimize state in the recipient prior to
  establishing the validity of the credentials from the sender in order to avoid
  a memory depletion DoS attack.
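
   The following sketch illustrates that last point. It is not taken from any
   particular protocol; it simply shows the general stateless cookie pattern,
   in which the recipient keeps no per-sender state until the sender has echoed
   back an unforgeable cookie, demonstrating at least that it can receive
   packets at the address it claims:

      import hashlib, hmac, os, time

      SECRET = os.urandom(32)          # rotated periodically in practice

      def _cookie_for(peer_address, epoch):
          msg = "{0}|{1}".format(peer_address, epoch).encode()
          return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

      def make_cookie(peer_address):
          # Derived from a local secret and the peer's claimed address;
          # nothing is stored per peer at this point.
          return _cookie_for(peer_address, int(time.time()) // 60)

      def cookie_valid(peer_address, cookie):
          # Recompute rather than look up, so a flood of bogus initiation
          # messages consumes no memory.  Accept the current and previous
          # time windows to tolerate rollover.
          now = int(time.time()) // 60
          return any(hmac.compare_digest(cookie, _cookie_for(peer_address, e))
                     for e in (now, now - 1))

   Only after the cookie round trip succeeds does the recipient allocate
   per-connection state; TCP SYN cookies and the IKEv2 cookie exchange follow
   the same general pattern.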
  
     Kempf and Austein      Expires October 2003            [Page 8]


     Internet Draft        Future of End to End            April, 2003
  
  
  4.2 Unbundling the End to End Principle
  
  One way to approach the end to end principle given the complexities of today's
  Internet is to, in a sense, unbundle it into its components of innovation
  protection and reliability and robustness, and apply these individually.
  Consider, for example, a distributed application running as an applet on an
  Internet appliance, like a cell phone or a pager. Provisioning of the appliance
   is a consequence of an end to end process, such as downloading the applet's
   code from a Web server, thus preserving rapid innovation. But in the
   operation of a distributed application, the end to end principle is not the
   only consideration
  for increasing reliability and robustness. For example, reliability and
  robustness can be increased with replication. An applet communicates with a
  server that then communicates with several databases or other applications that
  provide replicated services, and so on. The applet itself may have little or no
  knowledge of the services utilized by the server, but replication in those
  services may provide more reliability and robustness than if the end node
  running the applet had to manage the services, and at a considerable reduction
  in complexity. The entire application achieves robustness through distribution
  and replication of state and the possibility of failover maintained by the
  individual pieces of the application, while the end to end principle applies to
  each individual connection in the distributed application.
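
   A minimal sketch of this division of labor (the names and structure are
   hypothetical) might look as follows: each request is still an end to end
   exchange between the client and a single replica, while robustness for the
   application as a whole comes from replication and failover across the
   server pool rather than from anything inside the network:

      import random

      class ReplicatedService:
          """Illustrative client-side failover across replicated servers."""

          def __init__(self, replicas):
              self.replicas = list(replicas)   # e.g. several database hosts

          def call(self, request, send):
              # 'send' performs one end to end exchange with one replica;
              # the end to end principle applies to each such connection.
              last_error = None
              for replica in random.sample(self.replicas, len(self.replicas)):
                  try:
                      return send(replica, request)
                  except ConnectionError as error:
                      last_error = error    # this replica is down: fail over
              raise last_error or ConnectionError("no replica reachable")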
  
  
   5.0   Internet Standards as an Arena for Conflict
  
  Internet standards have increasingly become an arena for conflict [9]. ISPs
  have certain concerns, businesses and government have others, and vendors of
  networking hardware and software still others. Often, these concerns conflict,
  and sometimes they conflict with the concerns of the end users. For example,
  ISPs are reluctant to deploy interdomain QoS services because, among other
  reasons, every known instance creates a significant and easily exploited
  DoS/DDoS vulnerability. However, some end users would like to have end-to-end,
   Diffserv- or Intserv-style QoS available to improve support for voice and
   video
  multimedia applications between end nodes in different domains, as discussed by
  Huston in RFC 2990 [14]. In this case, the security, robustness and reliability
  concerns of the ISP conflict with the desire of users for a different type of
  service.
  
  These conflicts will inevitably be reflected in the Internet architecture going
   forward. Some of these conflicts are impossible to resolve on a technical
   level, nor would doing so even be desirable, because they involve social and
   legal choices that the IETF is not empowered to make (for a counterargument
   in the
  area of privacy, see Goldberg, et al. [15]). But for those conflicts that do
  involve technical choices, the important properties of user choice and
  empowerment, reliability and integrity of end to end service, supporting trust
  and "good network citizen behavior," and fostering innovation in services
  should be the basis upon which resolution is made. The conflict will then play
  out on the field of the resulting architecture.
  
   6.0   Conclusions
  
  
  
  
     Kempf and Austein      Expires October 2003            [Page 9]


     Internet Draft        Future of End to End            April, 2003
  
  The end to end principle continues to guide technical development of Internet
  standards, and remains as important today for the Internet architecture as in
  the past. In many cases, unbundling of the end to end principle into its
  consequences leads to a distributed approach in which the end to end principle
  applies to interactions between the individual pieces of the application, while
  the unbundled consequences, protection of innovation and reliability and
  robustness, apply to the entire application. While the end to end principle
  originated as a focused argument about where best not to put functions in a
  communication system, particular properties developed by the Internet as a
   result of the end to end principle have come to be recognized as being as
   important as, if not more important than, the principle itself. End user
   choice and
  empowerment, integrity of service, support for trust, and "good network citizen
  behavior" are all properties that have developed as a consequence of the end to
  end principle. Recognizing these properties in a particular proposal for
  modifications to the Internet has become more important than before as the
  pressures to incorporate services into the network have increased. Any proposal
  to incorporate services in the network should be weighed against these
  properties before proceeding.
  
   7.0   Acknowledgements
  
  Many of the ideas presented here originally appeared in the works of Dave
  Clark, John Wroclawski, Bob Braden, Karen Sollins, Marjory Blumenthal, and Dave
  Reed on forces currently influencing the evolution of the Internet. The authors
  would particularly like to single out the work of Dave Clark, who was the
  original articulator of the end to end principle and who continues to inspire
  and guide the evolution of the Internet architecture, and John Wroclawski, with
  whom conversations during the development of this paper helped to clarify
  issues involving tussle and the Internet.
  
   8.0   References
  
       [1] Saltzer, J.H., Reed, D.P., and Clark, D.D., "End to End Arguments in
           System Design," Communications Policy in Transition: The Internet and
           Beyond, B. Compaine and S. Greenstein, eds. MIT Press, September 2001.
       [2] Clark, D., "The Design Philosophy of the DARPA Internet Protocols,"
           Proc SIGCOMM 88, ACM CCR Vol 18, Number 4, August 1988, pp. 106-114.
       [3] Blumenthal, M., Clark, D.D., "Rethinking the design of the Internet:
           The end to end arguments vs. the brave new world", ACM Transactions on
           Internet Technology, Vol. 1, No. 1, August 2001, pp 70-109.
       [4] Floyd, S., and Daigle, L., "IAB Architectural and Policy Considerations
           for Open Pluggable Edge Services", RFC 3238, January 2002.
       [5] Carpenter, B., "Architectural Principles of the Internet," RFC 1958,
           June, 1996.
       [6] Johnson, D., Perkins, C., and Arkko, J., "Mobility Support in IPv6,"
           draft-ietf-mobileip-ipv6-20.txt, a work in progress.
        [7] Perkins, C., editor, "IP Mobility Support for IPv4", RFC 3220,
            January, 2002.
       [8] Kaat, M., "Overview of 1999 IAB Network Layer Workshop," RFC 2956,
           October, 2000.
       [9] Clark, D.D., Wroclawski, J., Sollins, K., and Braden, B., "Tussle in
            Cyberspace: Defining Tomorrow's Internet", Proceedings of SIGCOMM 2002.
      [10] Carpenter, B., and Brim, S., "Middleboxes: Taxonomy and Issues," RFC
           3234, February, 2002.
  
     Kempf and Austein      Expires October 2003            [Page 10]


     Internet Draft        Future of End to End            April, 2003
  
      [11] Carpenter, B., "Internet Transparency," RFC 2775, February, 2000.
      [12] Reed, D., "The End of the End-to-End Argument?",
           http://www.reed.com/dprframeweb/dprframe.asp?section=paper&fn=endofendt
           oend.html, April, 2000.
      [13] Moors, T., "A Critical Review of End-to-end Arguments in System
           Design," Proc. 2000 IEEE International Conference on Communications,
           pp. 1214-1219, April, 2002.
      [14] Huston, G., "Next Steps for the IP QoS Architecture", RFC 2990,
           November, 2000.
      [15] Goldberg, I., Wagner, D., and Brewer, E., "Privacy-enhancing
           technologies for the Internet," Proceedings of IEEE COMPCON 97, pp.
           103-109, 1997.
  
  
   9.0   Security Considerations
  
  This document does not propose any new protocols, and therefore does not
  involve any security considerations in that sense.  However, throughout this
  document there are discussions of the privacy and integrity issues and the
  architectural requirements created by those issues.
  
   10.0  IANA Considerations
  
  There are no IANA considerations regarding this document.
  
   11.0  Author Information
  
  
  Internet Architecture Board
  EMail:  iab@iab.org
  
  IAB Membership at time this document was completed:
  
        Bernard Aboba
        Harald Alvestrand
        Rob Austein
        Leslie Daigle
         Patrik Fältström
        Sally Floyd
        Jun-ichiro Itojun Hagino
        Mark Handley
        Geoff Huston
        Charlie Kaufman
        James Kempf
        Eric Rescorla
        Mike St. Johns
  
  This draft was created in April 2003.
  
   12.0  Full Copyright Statement
  
     Copyright (C) The Internet Society (2003).  All Rights Reserved.  This
     document and translations of it may be copied and furnished to others, and
     derivative works that comment on or otherwise explain it or assist in its
  
     Kempf and Austein      Expires October 2003            [Page 11]


     Internet Draft        Future of End to End            April, 2003
  
     implementation may be prepared, copied, published and distributed, in whole
     or in part, without restriction of any kind, provided that the above
     copyright notice and this paragraph are included on all such copies and
     derivative works.  However, this document itself may not be modified in any
     way, such as by removing the copyright notice or references to the Internet
     Society or other Internet organizations, except as needed for the purpose of
     developing Internet standards in which case the procedures for copyrights
     defined in the Internet Standards process must be followed, or as required
     to translate it into languages other than English.  The limited permissions
     granted above are perpetual and will not be revoked by the Internet Society
     or its successors or assigns.  This document and the information contained
     herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE
     INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR
     IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
     INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
     MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
  
     Kempf and Austein      Expires October 2003            [Page 12]