Internet Draft                                  J. Crowcroft
Expires in six months                                 UCL CS
                                                July 3 1996

                          Pricing the Internet
                  <draft-crowcroft-pricing-the-i-00.txt>

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as ``work in
   progress.''

   To learn the current status of any Internet-Draft, please check the
   ``1id-abstracts.txt'' listing contained in the Internet-Drafts
   Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast).

Abstract

   This is a brief note about pricing the Internet. The focus is on
   pricing the use of the infrastructure for transmitting and receiving
   IP packets, rather than services such as WWW, FTP, Archie, Gopher,
   etc. We propose subscription-based pricing, and a mechanism based on
   dynamic host address allocation, drawn from a set of separately
   routed addresses, to provide priority levels.

   The idea of subscriptions here is not that all service pricing
   should be achieved through subscriptions per se. Rather, they are
   seen as a way of policing priority access to service bottlenecks,
   and thus can be deployed incrementally at weak interconnection
   points within a network, or between ISPs. The holders of these
   subscriptions could easily be entire organisations (represented by
   network numbers, CIDRized collections of network numbers, ASes, and
   so on). These organisations could employ refinement techniques to
   provide more usage-oriented charging if needed.

Introduction

   To date, many in the Internet community have argued that usage-based
   charges are not required, for the following reasons:

   1. Applications are elastic, and as the number of users of a given
   capacity bottleneck increases, the current queuing, TCP flow control
   and congestion control mechanisms effectively share out the capacity
   fairly. This means that so long as a user gets any capacity at all,
   the total utility increases with the number of users, and therefore
   all is well. (Say we have 100 users, charge them 1 ECU per year, and
   they get 1 picobit per second each; if we have 200 users, they get
   1/2 a picobit per second; we can charge them 1/2 an ECU and make the
   same profit; if they don't care about when their picobit is
   transferred, we have more happy users.)

   2. If a user doesn't get good enough response (latency or throughput
   - it comes to the same thing whether they want a character echoed, a
   web page to pop up, or an FTP to finish so they can print a
   document), they can try again when the net is less busy.

   3. Incentives - you bill someone so they can STOP someone else using
   the net?  Why not just increase the subsidy to the common-good
   network, and keep all the users happy?

   These arguments are actually not clear-cut. For example:

   1/ TCP mechanisms do not operate well below 1 packet per round trip
   time with current FIFO "drop tail" routers - even with Random Early
   Drop routers, the loads we are seeing on some links (e.g. the UK-US
   link is currently carrying, on a 24-hour average, 300 new connection
   attempts per second) will never allow any TCP to stabilise, even if
   it were long lived (statistics from NLANR and the UK National caches
   show remarkable agreement that the traffic on the net is 70-90% WWW,
   and that the average web page is around 2K bytes, making for an
   average TCP lifetime of around 11 packets - see the sketch below).
   In fact, there are large costs associated with having lots of users:
   router state overheads, billing overheads, access line (modem)
   overheads, and so on.

   2/ Nowadays, users write programs to drive the network when they get
   poor interactive response.

   3/ The majority of users in today's Internet do not share a common
   goal, so subsidy-based network support is not viable (except
   possibly, to some extent, in portions of the net such as the UK
   academic one).

   Furthermore, voice (e.g. CU-SeeMe) and the Mbone vat and vic
   applications do not fit this elastic model at all.
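
   As a rough check on the 11-packet figure quoted in point 1/ above,
   here is a minimal sketch (Python). The 2 Kbyte page size comes from
   the cache statistics quoted above; the 536-byte MSS, the single
   request packet and the handshake/teardown counts are assumptions of
   this illustration, not measurements.

      # Rough packet count for one HTTP-style transfer of a ~2 Kbyte
      # page.  The MSS and handshake/teardown overheads are assumptions
      # for illustration only.

      import math

      PAGE_BYTES = 2000     # average web page from the cache statistics
      MSS_BYTES  = 536      # common default TCP MSS (assumption)

      handshake = 3         # SYN, SYN+ACK, ACK
      request   = 1         # the HTTP GET itself
      data      = math.ceil(PAGE_BYTES / MSS_BYTES)   # 4 data segments
      teardown  = 4         # FIN/ACK in each direction

      total = handshake + request + data + teardown
      print("packets per transfer (ignoring pure ACKs):", total)  # ~12

   This gives around a dozen packets per connection, roughly consistent
   with the figure of 11 quoted above.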

   We could take a look at the requirements for a new model for pricing
   the Internet on three timescales:

   Now - e.g. motivated by the Fat Pipe congestion between the UK and
   US.

   Medium Term - e.g. what we can do with existing IP hosts and routers,

   and network management and applications, given some modest new
   technology.

   Longer Term - what can be done with RSVP and the Integrated Services
   Internet traffic model profiles (and related technology, like IPv6).

   This note is mostly about NOW.

The Reality Now

   Actually, we have pricing already, at least from most commercial
   ISPs - it is based on the access line speed from a subscriber to a
   provider, and at a higher level, on the interconnect speed between
   ISPs.  How do they determine access speeds? They keep TCP and UDP
   statistics (traffic matrices), and build a backbone with sufficient
   capacity for the number of typically active hosts from all sites to
   get some minimal latency - no-one has publicly stated this latency,
   but my guess is that it will depend on the "profile" of the ISP. A
   "High Quality" (Rolls Royce) ISP would offer throughput and latency
   that match the access line speed closely - e.g. a dialup user at
   28kbps would see most of their line speed to any server site on the
   ISP.  This doesn't mean that the net has to have backbone capacity
   equal to the sum of the access line speeds, since many users are
   idle at any time - and unlike CBR voice/video traffic, it takes a
   user request to cause traffic (though compressed, silence-suppressed
   voice and motion-detecting video sources might behave similarly).

   If a typical user just gets web pages and sends email, and maybe
   retrieves 100 items a day, they are responsible for bursts of traffic
   for which a backbone can be dimensioned quite easily.

   E.g. consider a net with 100,000 users with access lines at 64kbps,
   or 8 Kbytes per second, and a user expecting to see almost
   instantaneous response for their 11-packet exchange.

   If the users make 100 requests a day, there are 10M requests in,
   say, 10 hours (assuming the night is idle - a dangerous assumption,
   but this is just for illustration).  To accommodate the resulting
   277 requests per second, at 2 Kbytes each, we need backbone
   switching at around 4Mbps. This would in fact give around 1/4 the
   performance a user might expect (assuming the backbone has buffering
   enough for peak arrivals, and TCP retransmits don't mess things up).
   To accommodate each user as if there were no other users, we need a
   backbone speed of around 16Mbps. By contrast, if we ignored the
   traffic, and just looked at the lines as if they might be running at
   64kbps flat out, we'd need over 6Gbps.
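
   The arithmetic of this illustration is easy to check. A minimal
   sketch (Python), using the figures assumed above - the user count,
   request rate, page size and access speed are the illustration's
   assumptions, not measurements:

      # Back-of-the-envelope backbone dimensioning, using the figures
      # from the illustration above.

      USERS        = 100000        # subscribers
      ACCESS_BPS   = 64000         # access line speed, bits/s
      REQS_PER_DAY = 100           # requests per user per day
      BUSY_SECONDS = 10 * 3600     # assume a 10-hour busy day
      REQ_BYTES    = 2000          # average web page, ~2 Kbytes

      reqs_per_sec = USERS * REQS_PER_DAY / BUSY_SECONDS  # ~278 req/s
      shared_bps   = reqs_per_sec * REQ_BYTES * 8         # ~4.4 Mbit/s
      solo_bps     = reqs_per_sec * ACCESS_BPS            # ~17.8 Mbit/s
      line_sum_bps = USERS * ACCESS_BPS                   # 6.4 Gbit/s

      print("requests/s          : %.0f" % reqs_per_sec)
      print("backbone, shared    : %.1f Mbit/s" % (shared_bps / 1e6))
      print("backbone, full rate : %.1f Mbit/s" % (solo_bps / 1e6))
      print("backbone, line sum  : %.1f Gbit/s" % (line_sum_bps / 1e9))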


   Of course, the picture is not this simple since users have quite well
   known, synchronised busy times (e.g. 9am, just after lunch, and so
   on).  [Assuming Poisson arrivals would give a somewhat biased view -
   most traffic on networks is well known to be self-similar, or
   heavy-tailed.]

   However, given this, with 64kbps being a high-end access speed so
   far, and user accesses typically arriving unaggregated at a router
   on the backbone, we have a tractable design and pricing problem.

   We can choose a latency by selecting an effective bandwidth (or
   utilisation) that the user will get from their ISP.

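   As a crude illustration of that trade-off, here is a minimal sketch
   (Python) relating utilisation to mean page latency using an M/M/1
   queue at the 4Mbps backbone of the earlier example. Since real
   traffic is self-similar rather than Poisson (as noted above), this
   is optimistic and purely illustrative.

      # Crude latency-vs-utilisation illustration using an M/M/1 queue.
      # Real traffic is self-similar, so this is optimistic; it only
      # shows the shape of the trade-off an ISP makes when it picks a
      # target utilisation for a link.

      LINK_BPS  = 4400000              # backbone speed from the example
      REQ_BITS  = 2000 * 8             # one ~2 Kbyte page
      service_t = REQ_BITS / LINK_BPS  # mean service time per page, s

      for rho in (0.3, 0.5, 0.7, 0.9, 0.95):
          # M/M/1 mean time in system: service_time / (1 - utilisation)
          latency_ms = 1000 * service_t / (1.0 - rho)
          print("utilisation %.2f -> mean page latency %5.1f ms"
                % (rho, latency_ms))
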
ISP to ISP

   Between service providers, we have the same picture, except that one
   ISP does not know the number of users or the traffic profiles of
   another ISP. However, they can still measure them, at least in a
   restricted way (obviously, they can only measure the sources that
   venture off the other ISP).

   There is a deployment problem for end-to-end (user to ISP, ISP to
   ISP, and ISP to user) billing: charges have to be, to some
   reasonable approximation, associative and commutative (perfection
   probably isn't called for, but something close), or obviously
   unstable markets develop.


The Problem

   There are two possible sources of problems with the simple approach
   above:

   1. The price of such a backbone, if all users are given an equal
   share, may not be affordable.

   2. Users may offer traffic that IS a significant percentage of their
   access link speed (this has the same effect, but sooner).

   In this case, we need a way to extract extra money from some users
   (priority users). They can subscribe to a service (which implies we
   only need to classify their traffic, and could authenticate out of
   band), or they can reserve the priority on demand (which means we
   need to authenticate the reservation in band); and they may want to
   pay only for what they actually use, or may be prepared to pay
   whether they use it or not (which makes the accounting easier -
   simply start/end time based).

   [Let's ignore payment here - assume logging or something is done -
   payment is just matching a database to a billing point - it is
   irrelevant (albeit a complex system, it has no bearing once we have
   a model for what we control and what we monitor).]

The Future

   At the user interface (the graphical user interface for a human, or
   the API for the novel application programmer) we expect to present a
   quality choice - how long will this FTP take, how urgent is this WWW
   transaction, how important is this video conference.

   However, this can be reflected in the architecture for accounting
   and billing in quite a few different ways:

   The basic choice is: subscriptions versus tokens versus online
   charges.

   At the internal interface (NNI in ATM language) between ISP and ISP,
   we can envisage transferring individual charges. We can also
   envisage transferring some notion of collective quality (e.g. in
   phone nets, call blocking probability; in the Internet, packet loss
   or delay probability; with RSVP, reservation rejection probability;
   or, for a high-risk ISP, breach of a QoS contract, e.g. exceeding
   negotiated CDV, a packet loss or delay bound, or a minimum cell or
   packet rate for an agreed connection).
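
   As an example of such a collective quality figure, here is a minimal
   sketch (Python) of the classical Erlang B blocking probability used
   in the telephone-network case mentioned above; the offered load and
   circuit count are arbitrary illustrative values.

      # Erlang B blocking probability: the chance an arriving call
      # finds all `circuits` busy when offered `erlangs` of load.

      def erlang_b(erlangs, circuits):
          b = 1.0
          for k in range(1, circuits + 1):
              b = (erlangs * b) / (k + erlangs * b)
          return b

      # e.g. 20 erlangs offered to 25 circuits (illustrative values)
      print("blocking probability: %.3f" % erlang_b(20.0, 25))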



Priority

   One way we could envisage implementing priorities would be to sell
   addresses that are treated differently in the routers, or are routed
   over links which are deliberately shared out to fewer sources. To do
   this, we would need routers that could route based on source address
   (as well as destination), or we would need to replicate the routing
   state for sources that wished to use a priority based on such a
   scheme (or both!). An address might correspond to a range of
   services subscribed to (not just one single one), and those services
   might have a limited lifespan (a token limited to a maximum usage as
   a percentage of one's access line speed over a 24*7 period, for
   example). This provides spatial and temporal aggregation of
   "reservations" again.

   Some routers now implement WFQ on a variety of inputs - typically by
   application, and by input port on a router - so it should be
   possible to partition the address space so that a number of net/host
   addresses from a site arrive at a bottleneck router VIA a different
   port (through policy routes making the default route taken by
   normally addressed packets not visible, for example). It would
   require leaf routers to
   participate in the scheme though...  [Aside: note some WFQ
   implementations are limited in the range of mappings from traffic
   class to weighted queue - in general, WFQ can be implemented to a
   very fine grain of allocation quite efficiently, however.]
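
   As a sketch of what such a bottleneck router might do, the following
   (Python, purely illustrative) classifies packets on source prefix
   and shares the link with a deficit round robin approximation to WFQ.
   The prefix, weights and packet representation are assumptions of
   this sketch, not any vendor's implementation.

      # Classify packets by source prefix into "priority" and "default"
      # classes, then share the output link with deficit round robin.

      from collections import deque
      from ipaddress import ip_address, ip_network

      # Hypothetical "priority" address block sold to subscribers.
      PRIORITY_PREFIX = ip_network("192.0.2.0/25")

      # Per-class FIFO queues and DRR quanta (bytes per round):
      # 3:1 in favour of the priority class.
      queues  = {"priority": deque(), "default": deque()}
      quantum = {"priority": 3 * 1500, "default": 1500}
      deficit = {"priority": 0, "default": 0}

      def classify(src):
          """Pick a class from the packet's source address."""
          return ("priority" if ip_address(src) in PRIORITY_PREFIX
                  else "default")

      def enqueue(src, size):
          queues[classify(src)].append((src, size))

      def send_one_round():
          """One DRR round: each class sends up to its deficit."""
          sent = []
          for cls, q in queues.items():
              if not q:
                  deficit[cls] = 0      # idle classes keep no credit
                  continue
              deficit[cls] += quantum[cls]
              while q and q[0][1] <= deficit[cls]:
                  src, size = q.popleft()
                  deficit[cls] -= size
                  sent.append((cls, src, size))
          return sent

      # A burst from one priority source and one ordinary source:
      for _ in range(4):
          enqueue("192.0.2.10", 1500)      # inside the priority block
          enqueue("198.51.100.7", 1500)    # ordinary address
      print(send_one_round())              # priority drains ~3x faster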

Aggregation and Deployability

   The Internet has very good aggregation of information for many
   purposes - address aggregation (and name aggregation, and route
   aggregation, with CIDRized addresses) all make the net highly
   deployable and cheap to manage. We would like subscriptions to be
   aggregatable - clearly, address-based subscriptions would work
   fairly well in this regard.


The Cost of Charging

   Keeping the cost of charging down is a good idea - this is why
   subscription-based schemes seem to be sensible - we can see that
   they ought to scale only slightly worse than the current flat-fee
   system.

   We can also implement on-demand reservations by dynamic address
   allocation, instead of via a new protocol; these can then be locally
   accounted. This has the nice property of completely distributing the
   onus of charging and authentication, and leaving the policies for
   how users obtain a priority up to the edge networks instead of the
   centre.



Archive, Mirrors, Smoke and Caches

   Archive servers often confuse debates about charging. However, if we
   model a mirror, archive or Web cache server as an ISP, we have a
   good handle on how to include them in our billing model - we simply
   regard traffic between a server and a subscriber of a remote ISP as
   _transiting_ the ISP that sponsors the server.



Sharing of unused "reservations" - subsidy and policy

   Link sharing has been seen to be important - in fact, it may be key
   to deploying any scheme for pricing resources, since the normal
   demand/price models don't seem to work well for information
   transmission...

Problem

   Address space and router memory are both running out - this scheme
   will only work if we can reclaim addresses - we could make it a
   precondition of getting "priority addresses" that a
   university/customer renumber all their systems, to allow the
   provider to regain some breathing space.

   Another problem with the scheme: routers don't currently do WFQ on
   source address - however, adding source address as a possible hash
   field for selecting a WFQ class is pretty simple. Failing this,
   assuming that outbound traffic is subject to the same queue on the
   return path (at least for Web access this is true), then basing it
   on the destination at the far end will also roughly approximate the
   same result, except for one important class of traffic - the Mbone.

Refinements

   A user organisation which purchases address space that is treated
   with priority access is at liberty to _re-sell_ use of that space in
   lots of ways. One example could be that, inside a university, the
   addresses in that space are allocated through DHCP, and users get
   some number of time-based tokens governing how long they can use
   part of the address space (see the sketch below). Another example
   might be a public Mbone phone box, where coins are used to gain
   access to an address. Other schemes are possible...
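
   A minimal sketch of the campus example above (Python): leasing
   addresses from a purchased priority block against per-user time
   tokens. The address pool, token budgets and lease policy are all
   hypothetical, and a real deployment would sit behind DHCP rather
   than replace it.

      # Lease priority addresses against per-user time tokens.
      # Pool, budgets and policy are hypothetical.

      import time
      from ipaddress import ip_network

      POOL   = list(ip_network("192.0.2.0/28").hosts())  # bought block
      tokens = {"alice": 3600.0, "bob": 120.0}   # seconds of priority
      leases = {}                                # addr -> (user, expiry)

      def reclaim_expired():
          now = time.time()
          for addr, (_, expiry) in list(leases.items()):
              if expiry <= now:
                  del leases[addr]          # address returns to pool

      def request_priority(user, seconds):
          """Grant a priority address for `seconds` if tokens allow."""
          reclaim_expired()
          if tokens.get(user, 0.0) < seconds:
              return None                   # out of tokens
          free = [a for a in POOL if a not in leases]
          if not free:
              return None                   # pool exhausted
          addr = free[0]
          tokens[user] -= seconds           # purely local accounting
          leases[addr] = (user, time.time() + seconds)
          return addr

      print(request_priority("alice", 1800))   # e.g. 192.0.2.1
      print(request_priority("bob", 1800))     # None - too few tokens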


Interaction

   Settlements and recursive settlements work from the middle of the
   network outwards. Clearly, if we use an address space that gives
   guarantees at one bottleneck (e.g. between the first and second hop
   ISPs), there is no guarantee that it works on the next hop too.

   However, there is an incentive that can be created between ISPs (and
   between bottleneck providers, or BNPs as we might term them) - lossy
   BNPs can be billed by guaranteeing ISPs for failing to match the
   service agreement. This could form the basis for settlements very
   easily.

   The service contract should be stated, perhaps using the same
   parameter set as defined in the tspecs and rspecs of RSVP, and
   through the same Policy Modules, so that we have a deployment scheme
   for RSVP too.
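
   A minimal sketch of such a settlement rule (Python): bill the lossy
   BNP for traffic handled worse than the agreed contract. The
   contracted loss rate, the measurements and the tariff are
   hypothetical; a real agreement would presumably also cover the delay
   and throughput terms of the tspec/rspec.

      # Settle between a guaranteeing ISP and a lossy BNP by charging
      # for packets lost in excess of the contracted loss rate.
      # Contract terms, measurements and tariff are hypothetical.

      def settlement(contract_loss, measured_loss,
                     carried_packets, penalty_per_packet):
          excess = max(0.0, measured_loss - contract_loss)
          return excess * carried_packets * penalty_per_packet

      # Contract: at most 1% loss; the BNP lost 3% of 10M packets.
      print(settlement(0.01, 0.03, 10000000, 0.0001))  # 20.0 units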


References

   Carpenter, B., "Metrics for Internet Settlements", Internet Draft,
   draft-carpenter-metrics-00.txt.

   Mills, C., Hirsch, G., and Ruth, G., "Internet Accounting:
   Background", RFC 1272, Bolt Beranek and Newman Inc., Meridian
   Technology Corporation, November 1991.

   Brownlee, N., "Traffic Flow Measurement: Architecture", Internet
   Draft, work in progress.

   Brownlee, N., "Traffic Flow Measurement: Meter MIB", Internet Draft,
   draft-ietf-rtfm-acct-meter-mib-01.txt, working draft to become an
   experimental RFC.

   F. P. Kelly, "Tariffs and Effective Bandwidths in Multiservice
   Networks", Proc. 14th Int. Teletraffic Cong., 6-10 June 1994 North-
   Holland Elsevier Science B.V., 1, 1994, 387--410

   F. P. Kelly, "Routing in Circuit-Switched Networks: Optimization,
   Shadow prices and Decentralization", Adv. Appl. prob., Vol. 20, :,
   112-144, 1988

   Braden, R., Zhang, L., Berson, S., Herzog, S., and Jamin, S.,
   "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
   Specification", Internet Draft, draft-ietf-rsvp-spec-12.txt.


Author's Address
   Jon Crowcroft
   UCL
   Gower St
   London WC1E 6BT
   England

   Tel +44 171 380 7296
   Fax +44 171 387 1397
   Email: jon@cs.ucl.ac.uk
