The Need for Congestion Exposure in the Internet

Document type: Expired Internet-Draft (individual)
Last updated: 2010-03-01
Replaces: draft-moncaster-congestion-exposure-problem

This Internet-Draft is no longer active. A copy of the expired Internet-Draft can be found at


Today's Internet is a product of its history. TCP is the main transport protocol responsible for sharing out bandwidth and preventing a recurrence of congestion collapse, while packet drop is the primary signal of congestion at bottlenecks. Since packet drop (and increased delay) affects all of their customers, network operators would like to be able to distinguish overly aggressive congestion control from a confluence of many low-bandwidth, low-impact flows. But because they cannot see the actual congestion signal, they have to impose bandwidth and/or usage limits based on the only information they can see or measure: the contents of packet headers and the rate of the traffic. Such measures do not solve the packet-drop problem effectively and are leading to calls for government regulation (which also will not solve the problem). We propose congestion exposure as a possible solution: packets carry an accurate prediction of the congestion they expect to cause downstream, making that congestion visible to ISPs and network operators. This memo sets out the motivations for congestion exposure and introduces a strawman protocol designed to achieve it.
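The core accounting idea behind congestion exposure can be illustrated with a minimal sketch. This is not the draft's strawman protocol; it is a hedged toy model (in the spirit of re-ECN-style schemes) in which a sender re-echoes the whole-path congestion it learned from receiver feedback into a declaration field, so that any observer mid-path can subtract the congestion already marked upstream and estimate the congestion still to come downstream. All names below are illustrative assumptions, not identifiers from the draft.

```python
# Toy model of congestion exposure: downstream congestion is estimated as
# the sender-declared whole-path congestion minus congestion already
# marked upstream of the observation point. Illustrative only.
from dataclasses import dataclass

@dataclass
class Packet:
    declared: bool = False  # sender's re-echo of whole-path congestion feedback
    marked: bool = False    # set by a congested router (like an ECN CE mark)

def sender(num_pkts: int, feedback_fraction: float) -> list[Packet]:
    """Sender declares congestion on a fraction of packets equal to the
    congestion level the receiver reported for the whole path."""
    pkts = [Packet() for _ in range(num_pkts)]
    for p in pkts[: round(num_pkts * feedback_fraction)]:
        p.declared = True
    return pkts

def congested_router(pkts: list[Packet], mark_fraction: float) -> list[Packet]:
    """A bottleneck marks a fraction of packets to signal its congestion."""
    for p in pkts[: round(len(pkts) * mark_fraction)]:
        p.marked = True
    return pkts

def downstream_congestion(pkts: list[Packet]) -> float:
    """Any mid-path observer can estimate congestion still to come:
    declared whole-path congestion minus congestion marked so far."""
    declared = sum(p.declared for p in pkts) / len(pkts)
    marked = sum(p.marked for p in pkts) / len(pkts)
    return declared - marked

# Whole-path congestion is 5%; a 2% bottleneck has been crossed so far,
# so an observer here predicts roughly 3% congestion downstream.
pkts = congested_router(sender(1000, 0.05), 0.02)
print(round(downstream_congestion(pkts), 6))
```

The point of the sketch is that the estimate needs no per-flow state inside the network: the information travels in the packets themselves, which is what would let an ISP see (and act on) the congestion a flow causes rather than merely its bit rate.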


T. Moncaster
Anne-Louise Burness
Michael Menth
Joao Araujo
Steven Blake
Richard Woundy
