Network Shaofu. Peng
Internet-Draft ZTE Corporation
Intended status: Standards Track Zongpeng. Du
Expires: 20 April 2024 China Mobile
Kashinath. Basu
Oxford Brookes University
Zuopin. Cheng
New H3C Technologies
Dong. Yang
Beijing Jiaotong University
Chang. Liu
China Unicom
18 October 2023
Deadline Based Deterministic Forwarding
draft-peng-detnet-deadline-based-forwarding-07
Abstract
This document describes a deterministic forwarding mechanism for IP/
MPLS networks, together with the corresponding resource reservation,
admission control, and policing, to provide guaranteed latency. In
particular, a core-stateless latency compensation mechanism is
discussed as a replacement for reshaping, making the approach
suitable for the Diff-Serv architecture [RFC2475].
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 20 April 2024.
Copyright Notice
Copyright (c) 2023 IETF Trust and the persons identified as the
document authors. All rights reserved.
Peng, et al. Expires 20 April 2024 [Page 1]
Internet-Draft Deadline Queueing Mechanism October 2023
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents (https://trustee.ietf.org/
license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
extracted from this document must include Revised BSD License text as
described in Section 4.e of the Trust Legal Provisions and are
provided without warranty as described in the Revised BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1. Requirements Language . . . . . . . . . . . . . . . . . . 5
2. EDF Scheduling Overview . . . . . . . . . . . . . . . . . . . 5
2.1. Planned Residence Time of the Service Flow . . . . . . . 6
2.2. Delay Levels Provided by the Network . . . . . . . . . . 6
2.3. Relationship Between Planned Residence Time and Delay
Level . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4. Relationship Between Service Burst Interval and Delay
Level . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3. Sorted Queue . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1. Scheduling Mode for PIFO . . . . . . . . . . . . . . . . 8
3.2. Schedulability Condition for PIFO . . . . . . . . . . . . 8
3.2.1. Conditions for Leaky Bucket Constraint Function . . . 9
3.2.2. Schedulability Condition Analysis for On-time Mode . 10
3.3. Buffer Size Design . . . . . . . . . . . . . . . . . . . 12
4. Rotation Priority Queues . . . . . . . . . . . . . . . . . . 12
4.1. Alternate Queue Allocation Rules . . . . . . . . . . . . 14
4.2. Scheduling Mode for RPQ . . . . . . . . . . . . . . . . . 15
4.3. Schedulability Condition for RPQ . . . . . . . . . . . . 15
4.3.1. Schedulability Conditions for Alternate QAR . . . . . 18
4.3.2. Conditions for Leaky Bucket Constraint Function . . . 19
4.4. Buffer Size Design . . . . . . . . . . . . . . . . . . . 20
5. Reshaping . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6. Latency Compensation . . . . . . . . . . . . . . . . . . . . 22
6.1. Get Existing Accumulated Planned Residence Time . . . . . 22
6.2. Get Existing Accumulated Actual Residence Time . . . . . 22
6.3. Get Existing Accumulated Residence Time Deviation . . . . 23
6.4. Get Allowable Queueing Delay . . . . . . . . . . . . . . 23
6.5. Scheduled by Allowable Queueing Delay . . . . . . . . . . 24
7. Option-1: Reshaping plus Sorted Queue . . . . . . . . . . . . 25
8. Option-2: Reshaping plus RPQ . . . . . . . . . . . . . . . . 25
9. Option-3: Latency Compensation plus Sorted Queue . . . . . . 26
9.1. Packet Disorder Considerations . . . . . . . . . . . . . 26
10. Option-4: Latency Compensation plus RPQ . . . . . . . . . . . 28
10.1. Packet Disorder Considerations . . . . . . . . . . . . . 30
11. Resource Reservation . . . . . . . . . . . . . . . . . . . . 32
11.1. Delay Resource Definition . . . . . . . . . . . . . . . 33
11.2. Traffic Engineering Path Calculation . . . . . . . . . . 34
12. Admission Control on the Ingress . . . . . . . . . . . . . . 35
13. Overprovision Analysis . . . . . . . . . . . . . . . . . . . 37
14. Compatibility Considerations . . . . . . . . . . . . . . . . 38
15. Deployment Considerations . . . . . . . . . . . . . . . . . . 40
16. Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . 41
17. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 42
18. Security Considerations . . . . . . . . . . . . . . . . . . . 42
19. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 42
20. References . . . . . . . . . . . . . . . . . . . . . . . . . 42
20.1. Normative References . . . . . . . . . . . . . . . . . . 42
20.2. Informative References . . . . . . . . . . . . . . . . . 44
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 45
1. Introduction
[RFC8655] describes the architecture of deterministic network and
defines the QoS goals of deterministic forwarding: Minimum and
maximum end-to-end latency from source to destination, timely
delivery, and bounded jitter (packet delay variation); packet loss
ratio under various assumptions as to the operational states of the
nodes and links; an upper bound on out-of-order packet delivery. In
order to achieve these goals, deterministic networks use resource
reservation, explicit routing, service protection and other means.
* Resource reservation refers to the occupation of resources by
service traffic, exclusive or shared in a certain proportion, such
as dedicated physical link, link bandwidth, queue resources, etc.
* Explicit routing means that the transmission path of a traffic
flow is selected in advance, to ensure the stability of the route
so that it does not change with real-time changes of the network
topology; on this basis, the upper bound of the end-to-end delay
and delay jitter can be accurately calculated.
* Service protection refers to sending multiple service flows along
multiple disjoint paths at the same time to reduce the packet loss
rate.
In general, a deterministic path is a strictly explicit path
calculated by a centralized controller, and resources are reserved on
the nodes along the path to meet the SLA requirements of
deterministic services.
[P802.1DC] described some Quality of Service (QoS) features specified
in IEEE Std 802.1Q, such as per-stream filtering and policing,
queuing, transmission selection, stream control and preemption, in a
network system which is not a bridge. In the presence of admission
control, policing, and reshaping, a large number of packet
scheduling techniques can provide bounded latency. However, many
packet schedulers result in inefficient use of network resources, or
provide an overestimated latency bound. The underlying scheduling
mechanisms in IP/MPLS networks generally use SP (Strict Priority)
and WFQ (Weighted Fair Queuing), and manage a small number of
priority-based queues. Both are rate-based schedulers.
For SP, the highest-priority queue can consume the total port
bandwidth, while for a WFQ scheduler each queue may be configured
with a pre-set rate limit. Both can provide a worst-case latency
bound, but the evaluation is generally overestimated. We assume that
when deterministic services are provided in such a network, the
observed flow always has the highest (or a relatively high) priority.
In the case where the network core supports per-flow reshaping (or
optimized reshaping as provided by [IR-Theory]), the worst-case
latency of a flow is approximately equal to the accumulated burst of
its traffic class divided by the rate limit of that traffic class
(note that a rate-based scheduler may refer to [Net-Calculus] to
obtain its rate-latency service curve and get a more accurate
evaluation). When the network core does not implement reshaping,
multiple flows sharing the same priority may form a burst cascade,
making it more difficult or even impossible to evaluate the
worst-case latency of a single flow. [EF-FIFO] discusses the SP
scheduling behavior in this core-stateless situation, which requires
the overall network utilization to be limited to a small portion of
the link capacity in order to provide an appropriate bounded latency.
To address the overestimation issue of rate-based scheduling (i.e.,
obtaining a low latency may require allocating a large service
rate), [EDF-algorithm] shows that an EDF (earliest-deadline-first)
scheduler, which always selects the packet with the earliest
deadline for transmission, is an optimal scheduler for a bounded-
delay service, in the sense that it can support the delay bounds of
any set of connections that can be supported by some other
scheduling method. EDF is a latency-based scheduler: it
distinguishes traffic in terms of time urgency, rather than coarse
traffic classes.
This document introduces the EDF scheduling mechanism to IP/MPLS
networks, together with the corresponding resource reservation,
admission control, and policing, to provide guaranteed latency. In
particular, an enhanced option based on latency compensation is
discussed, which replaces reshaping and also achieves low jitter.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.
2. EDF Scheduling Overview
The EDF scheduler assigns a deadline for each incoming packet, which
is equal to the time the packet arrives at the node plus the latency
limit, i.e., planned residence time (D), see Section 2.1. The EDF
scheduling algorithm always selects the packet with the earliest
deadline for transmission.
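As a non-normative illustration, the per-packet behavior described
above can be sketched with a priority heap keyed by deadline (the
class and parameter names below are assumptions of this sketch, not
part of this specification):

```python
import heapq
import itertools

class EdfScheduler:
    """Minimal EDF sketch: deadline = arrival time + planned residence time D."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tiebreak for equal deadlines

    def enqueue(self, packet, arrival_time, planned_residence_time):
        # Assign the deadline on arrival, as described in Section 2.
        deadline = arrival_time + planned_residence_time
        heapq.heappush(self._heap, (deadline, next(self._seq), packet))

    def dequeue(self):
        """Return the packet with the earliest deadline, or None if empty."""
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet

sched = EdfScheduler()
sched.enqueue("pkt-A", arrival_time=0, planned_residence_time=30)  # deadline 30
sched.enqueue("pkt-B", arrival_time=5, planned_residence_time=10)  # deadline 15
sched.enqueue("pkt-C", arrival_time=8, planned_residence_time=40)  # deadline 48
# pkt-B has the earliest deadline and is selected first.
```

A production implementation would need a line-rate sorted structure
(e.g., a PIFO); the heap here only illustrates the ordering
semantics.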
The precondition for EDF to work properly is that the traffic of any
service flow must satisfy its given traffic constraint function when
it reaches an EDF scheduler. Therefore, traffic regulation should
generally be implemented at the network entrance to ensure that the
admitted traffic complies with the constraints, and reshaping should
be implemented on each intermediate node to temporarily buffer
packets so that packets entering the EDF scheduler queue comply with
the constraints. However, per-flow reshaping is a challenge in
large-scale networks, so core-stateless optimization methods need to
be considered.
Another challenge of EDF scheduling is that queued packets must be
sorted and stored according to their deadlines; whenever a new
packet arrives at the scheduler, search and insert operations must
be performed on the corresponding data structure, e.g., a list, a
PIFO (push-in first-out) queue, or another type of sorted queue, at
line rate. [RPQ] described rotating priority queues that approximate
EDF scheduling behavior and do not require deadline-based sorting of
queued packets, simplifying enqueue operations.
Based on these two challenges and the potential optimization
methods, four combined solutions are obtained. Operators should
choose the appropriate solution based on the actual network
situation. This document suggests using option-3 or option-4.
* option-1: Reshaping plus sorted queue.
* option-2: Reshaping plus RPQ.
* option-3: Latency Compensation plus sorted queue.
* option-4: Latency Compensation plus RPQ.
2.1. Planned Residence Time of the Service Flow
The planned residence time (termed as D) of the packet is an offset
time, which is based on the arrival time of the packet and represents
the maximum time allowed for the packet to stay inside the node.
For a deterministic path based on deadline scheduling, the path has
a deterministic end-to-end delay requirement. The delay includes two
parts: the accumulated residence delay and the accumulated link
propagation delay. Subtracting the accumulated link propagation
delay from the end-to-end delay yields the accumulated residence
delay. A simple method is to share the accumulated residence delay
equally among the nodes along the path to obtain the planned
residence time of each node. Note that the link propagation delay in
reality may not always be fixed, e.g., due to the effect of
temperature; we assume that the tool detecting the link propagation
delay can sense changes beyond a preset threshold and trigger the
recalculation of the deterministic path.
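For illustration only, the equal-sharing computation above can be
sketched as follows (the function name and units are assumptions of
this sketch):

```python
def plan_residence_time(e2e_delay, link_propagation_delays, hop_count):
    """Split (end-to-end delay - accumulated propagation delay) equally
    among the hops to obtain the per-node planned residence time D.
    All times are in the same unit (microseconds here)."""
    accumulated_propagation = sum(link_propagation_delays)
    accumulated_residence = e2e_delay - accumulated_propagation
    if accumulated_residence <= 0:
        raise ValueError("propagation delay alone exceeds the e2e budget")
    return accumulated_residence / hop_count

# 5-hop path, 1000 us end-to-end budget, four links of 100 us each:
D = plan_residence_time(1000, [100, 100, 100, 100], 5)
# D = 120 us per node
```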
There are many ways to indicate the planned residence time of the
packet.
* Carried in the packet. The ingress PE node, when encapsulating
the deterministic service flow, can explicitly insert the planned
residence time into the packet according to the SLA. The transit
node, after receiving the packet, can directly obtain the planned
residence time from the packet. Generally, only a single planned
residence time needs to be carried in the packet, applicable to
all nodes along the path; alternatively, a stack composed of
multiple deadlines can be inserted, one for each node.
[I-D.peng-6man-deadline-option] defined a method to carry the
planned residence time in the IPv6 packets.
* Included in the matched local FIB entry or policy entry. An
implementation should support the policy to forcibly override the
planned residence time obtained from the packet.
2.2. Delay Levels Provided by the Network
The network may provide multiple delay levels on the outgoing port,
each with its own delay resource pool. For example, some typical
delay levels may be 10us, 20us, 30us, etc.
In theory, any additional delay level can be added dynamically, as
long as the buffer and remaining bandwidth on the data plane allow.
The quantification of delay resource pool for each delay level is
actually based on the schedulability conditions of EDF. This
document introduces two types of resources per delay level:
* Burst: represents the amount of bits bound that a delay level
provided.
* Bandwidth: represents the amount of bandwidth bound that a delay
level provided.
For more information on the construction of resource pools, please
refer to Section 3.2 and Section 4.3.
2.3. Relationship Between Planned Residence Time and Delay Level
The planned residence time (D) is the per-hop latency requirement of
individual flow, while the delay level (d) is the capability provided
by the node.
Generally, we only need to design a limited number of delay levels
to support a larger number of per-hop latency requirements. For
example, given delay levels d_1, d_2, ..., d_n, the resource
management of the control plane assigns d_i resources to all D that
satisfy d_i <= D < d_(i+1).
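A non-normative sketch of this mapping, assuming the configured
delay levels are kept sorted in ascending order:

```python
import bisect

def delay_level_for(D, levels):
    """Return the delay level d_i such that d_i <= D < d_(i+1),
    given 'levels' sorted ascending; None if D is below d_1."""
    i = bisect.bisect_right(levels, D) - 1
    return levels[i] if i >= 0 else None

levels = [10, 20, 30]          # delay levels in us
delay_level_for(25, levels)    # -> 20: resources taken from the 20us pool
delay_level_for(10, levels)    # -> 10
delay_level_for(5, levels)     # -> None: below the most urgent level
```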
2.4. Relationship Between Service Burst Interval and Delay Level
Although we generally prefer to have the service burst interval (SBI)
greater than the maximum delay level, there is actually no necessary
association between SBI and delay level.
Firstly, can a flow with a small SBI (such as 10us) request a larger
delay level (such as 100us)? Yes. It may seem that, during the
longer residence time caused by the delay level, there will be
multiple rounds of burst-interval packets leading to burst
accumulation. However, these packets can be distinguished and sent
in sequence. In fact, we can multiply the original SBI several times
to obtain an expanded SBI (which includes multiple original bursts)
with a length greater than the requested delay level, to get the
preferred paradigm.

Secondly, can a flow with a large SBI (such as 1ms) request a
smaller delay level (such as 10us)? Certainly yes.
3. Sorted Queue
[PIFO] defined the push-in first-out queue (PIFO), which is a
priority queue that maintains the scheduling order or time. A PIFO
allows elements to be pushed into an arbitrary position based on an
element's rank (the scheduling order or time), but always dequeues
elements from the head.
3.1. Scheduling Mode for PIFO
A PIFO queue may be configured as either in-time or on-time
scheduling mode, but cannot support both modes simultaneously.
In the in-time scheduling mode, as long as the queue is not empty,
packets always depart from the head of queue (HoQ) for transmission.
The actual bandwidth consumed by the scheduler may exceed its set
service rate C.

In the on-time scheduling mode, the HoQ packet is sent only if the
queue is not empty and the rank of the HoQ packet is equal to or
earlier than the current system time; otherwise, it is held.
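The two dequeue conditions can be summarized in a small
non-normative decision function (the names are assumptions of this
sketch):

```python
def may_transmit(hoq_rank, now, mode):
    """Decide whether the head-of-queue (HoQ) packet may depart.

    in-time: work-conserving, send whenever the queue is non-empty.
    on-time: send only when the HoQ rank (deadline) has been reached.
    hoq_rank is None when the queue is empty."""
    if hoq_rank is None:
        return False
    if mode == "in-time":
        return True
    if mode == "on-time":
        return hoq_rank <= now
    raise ValueError("unknown scheduling mode")
```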
3.2. Schedulability Condition for PIFO
[RPQ] has given the schedulability condition for classic EDF based
on any type of sorted queue with in-time scheduling mode. Suppose
that for any delay level d_i the corresponding accumulated
constraint function is A_i(t). Let d_i < d_(i+1); then the
schedulability condition is:
sum{A_i(t-d_i) for all i} <= C*t (Equation-1)
where C is service rate of the EDF scheduler.
It should be noted that for a delay level d_i, its residence time is
actually contributed by its own flows and all other more urgent delay
levels. Based on the schedulability conditions, we can choose the
traffic arrival constraint function according to the preset delay
level, or we can choose the delay level according to the preset
traffic arrival constraint function.
The test of schedulability conditions needs to be based on the whole
network view. When we need to add new traffic to the network, we
need to consider which nodes the related path will pass through, and
then check in turn whether these nodes will still meet the
schedulability conditions after adding new traffic.
3.2.1. Conditions for Leaky Bucket Constraint Function
Assume that we want to support n delay levels (d_1, d_2,..., d_n) in
the network, and the traffic arrival constraint function of each
delay level d_i is the leaky bucket arrival curve A_i(t) = b_i + r_i
* t. Equation-1 can be expressed as:
b_1 <= C*d_1 - M
b_1 + b_2 + r_1*(d_2-d_1) <= C*d_2 - M
b_1 + b_2 + b_3 + r_1*(d_3-d_1) + r_2*(d_3-d_2) <= C*d_3 - M
... ...
b_1 + ... + b_n + r_1*(d_n-d_1) + r_2*(d_n-d_2) + ... +
r_(n-1)*(d_n-d_(n-1)) <= C*d_n - M

where C is the service rate of the deadline scheduler and M is the
maximum size of an interfering packet.
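The inequalities above can be checked programmatically when
performing admission control. The following non-normative sketch
evaluates them for a set of levels (d_i, b_i, r_i); all parameter
values are illustrative:

```python
def schedulable(C, M, levels):
    """Check the leaky bucket schedulability conditions.

    'levels' is a list of (d_i, b_i, r_i) sorted by ascending d_i.
    Condition k: sum(b_1..b_k) + sum_{i<k} r_i*(d_k-d_i) <= C*d_k - M."""
    for k, (d_k, _, _) in enumerate(levels):
        lhs = sum(b for _, b, _ in levels[:k + 1])          # bursts up to level k
        lhs += sum(r * (d_k - d_i) for d_i, _, r in levels[:k])  # more urgent rates
        if lhs > C * d_k - M:
            return False
    return True

# Illustrative units: C in bits/us, d_i in us, b_i and M in bits.
C, M = 1000, 16000                    # 1 Gbit/s link, 2000-byte max packet
levels = [(50, 20000, 200),           # (d_i, b_i, r_i)
          (100, 20000, 200),
          (200, 20000, 200)]
schedulable(C, M, levels)             # -> True
```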
Note that the preset value of b_i does not depend on r_i, but r_i
generally refers to b_i (and burst interval) for setting. For
example, the preset value of r_i may be small, while the value of b_i
may be large. Such parameter design is more suitable for
transmitting traffic with large service burst interval, large service
burst size, but small bandwidth requirements.
An extreme example is that the preset r_i of each level d_i is close
to 0 (because the burst interval of the served service is very
large, e.g., one hour or one day), but the preset b_i is close to
the maximum value (e.g., b_1 = C*d_1 - M; note that this also
requires that the depth of the leaky bucket used to regulate the
traffic is large enough). Then, when the concurrent flows of all
delay levels are scheduled, the time 0~d_1 is entirely used to send
the burst b_1, the time d_1~d_2 to send the burst b_2, the time
d_2~d_3 to send the burst b_3, and so on.

However, a more common example is that the preset r_i of each level
d_i divides C roughly equally, and the preset b_i is the maximum
packet size (such as 2000 bytes).
The parameters b_i and r_i of each level d_i constitute the delay
resources of that level of the link. A path can reserve the required
burst and bandwidth from the delay resources of a specific level,
and the reservation succeeds only if both resources are successfully
reserved at the same time. Once either b_i or r_i is used up, the
delay resource of level d_i is exhausted.
The delay resource reservation status of each level is independent.
For example, with the parameter b_1 fixed, if the required burst of
a d_1 service is large, only a few d_1 services can be supported;
but if r_1 is very small, the network can still support more
services of other levels at the same time.
3.2.2. Schedulability Condition Analysis for On-time Mode
Compared with the in-time mode, on-time mode is non-work-conserving,
which can be considered as the combination of damper and EDF
scheduler. The intuitive understanding of the on-time mode is that
the on-time forwarding behavior applied on a flow maintains the time
interval (regulation interval by the regulator on the ingresss node)
between packets of that flow, but not lead to an increase in the
bandwidth occupied by that flow. Therefore, the on-time scheduling
mode does not cause the arrival curve to violate the traffic
constraint function. So that the schedulability condition (i.e.,
Equation-1) can also be applied to the on-time scheduling mode.
However, since the on-time scheduling mode explicitly introduces a
hold time, the actual departure time of a packet may be after its
deadline. Suppose the selected parameters of the constraint function
of each delay level make the scheduler work at full speed (i.e.,
service rate C). For the in-time mode, the worst case is that a
packet of a specific delay level is sent just before its deadline
during the busy period. For the on-time mode, the busy period may
just begin at the packet's deadline, which may make the actual
sending time of the packet exceed its deadline; however, the
worst-case extra delay will not exceed the delay level value.
The following Figure 1 shows the difference between on-time
scheduling and in-time scheduling.
arrival traffic:
A_1: #1 #2 #3 #4 #5 ...
A_2: $1 $2 $3 $4 $5 ...
...
A_5: &1 &2 &3 &4 &5 ...
|
v
In-time Scheduling:
#1$1...&1 #2$2...&2 #3$3...&3 #4$4...&4 #5$5...&5 ...
On-time Scheduling:
#1 #2 #3 #4 #5
$1 $2 $3 $4
... ...
&1
------+-----------+-----------+-----------+--... ...--+-----------+---
\__ d_1 __/
\________ d_2 ________/
\______________ d_3 ______________/
... ...
\_________________________ d_n ___________________________/
Figure 1: Difference between On-time and In-time Scheduling
As shown in the figure, each burst of A_1 (corresponding to delay
level d_1) is termed as #num, each burst of A_2 (corresponding to
delay level d_2) as $num, and each burst of A_5 (corresponding to
delay level d_5) as &num. A single burst may contain multiple
packets. For example, burst #1 may contain several packets, and the
actual time interval between #1 and #2 may be small. Although the
figure shows the example that the burst interval of multiple levels
of service is the same and the phase is aligned, the actual situation
is far from that. However, this example depicts the typical
scheduling behavior.
In the in-time mode, all concurrent traffic of multiple levels will
be scheduled as soon as possible according to priority, to construct
a busy period. For example, in the duration d_1, in addition to the
burst #1 that must be sent, the burst $1~&1 may also be sent, but the
latter is not necessarily scheduled to be sent before the burst #2 as
shown in the figure. Note that in-time mode cannot guarantee jitter.
In the on-time mode, by contrast, each burst is scheduled at its
deadline, which may be exactly the beginning of the busy period.
Because of the scheduling delay, the transmission of the burst will
exceed its deadline. The last packet of the burst will face more delay than the
first packet. For example, when burst #5 enters the PIFO, it may
have the same deadline as the bursts from $4 of A_2 through &1 of
A_5. When the deadlines of multiple packets are the same, the
planned residence time (D) is used as a tiebreaker, i.e., the
smaller the D, the smaller the rank (see Section 7 for the enqueue
rule). So #5 is sent first and may exceed its deadline by one d_1;
then $4 is sent and may exceed its deadline by one d_2; ...;
finally, &1 is sent and may exceed its deadline by one d_5.
3.3. Buffer Size Design
The service rate of the deadline scheduler, termed C, can reach the
link rate, but generally only needs to be configured as a part of
the link bandwidth, such as 50%. Hierarchical scheduling is allowed;
for example, the deadline scheduler may participate in higher-level
SP scheduling along with other schedulers.
If flows are rate-controlled (i.e., reshaping is done inside the
network, or on-time mode is applied), the maximum depth of PIFO
should be the total amount of burst resource of all delay levels.
Otherwise, more buffer is necessary to store the accumulated bursts.
Please refer to Section 15 for more considerations.
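A non-normative sketch of this sizing rule, with hypothetical
per-level burst pools:

```python
def pifo_buffer_size(burst_resources):
    """With rate-controlled flows, the PIFO depth is bounded by the
    total burst resource across all delay levels (bits here)."""
    return sum(burst_resources.values())

# Hypothetical burst pools per delay level (us -> bits):
pifo_buffer_size({10: 16000, 20: 16000, 30: 32000})  # -> 64000 bits
```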
4. Rotation Priority Queues
[RPQ] described rotating priority queues in which the priority
granularity of the queues is the same as that of the flows. If the
deadline of a flow is used as its priority, a large number of
priorities and corresponding queues are required, which raises
scalability issues. Therefore, this section provides rotating
priority queues with a count-down time range, whose rotation
interval is more fine-grained, with the following characteristics:
following characteristics:
* Each queue has CT (Count-down Time) that is decreased by RTI
(rotation time interval), and AT (Authorization Time) that is for
sending duration. AT is also the CT difference between two
adjacent queues. Note that RTI must be less than or equal to the
AT, with AT = N * RTI, where the natural number N >= 1.
* The smaller the CT, the higher the priority. At the beginning,
all queues have different CT values, i.e., staggered from each
other; e.g., one queue has the minimum CT value (termed MIN_CT),
one queue has the maximum CT value (termed MAX_CT), and the CT
values of the queues increase in equal steps of AT. It should be
noted that CT is just the countdown of the HoQ; the countdown of
the end of the queue (EoQ) is near CT+AT. So the CT attribute of
a queue is actually a range [CT, CT+AT).
* For a queue whose CT has been reduced to MIN_CT, after a new round
of AT, the CT will return to MAX_CT.
The above AT, RTI, MIN_CT, and MAX_CT values should be chosen
according to the hardware capacity. Each link can independently use
a different AT. The general principle is that the larger the
bandwidth, the smaller the AT. The AT must be designed large enough
to cover the interference delay caused by a single low-priority
packet of maximum size.
The choice of RTI should consider the latency granularity of the
various service flows, so that the CT updated per RTI can match the
delay requirements of different services. For example, if the delay
difference between traffic flows is a few microseconds, RTI can be
chosen as 1 us. If the delay difference between traffic flows is a
few tens of microseconds, RTI can be chosen as 10 us.
A specific example of RPQ with in-time scheduling mode is depicted in
Figure 2.
+------------------------------+ +------------------------------+
| RPQ Group: | | RPQ Group: |
| queue-1(CT=50us) ######| | queue-1(CT=49us) ######|
| queue-2(CT=40us) ######| | queue-2(CT=39us) ######|
| queue-3(CT=30us) ######| | queue-3(CT=29us) ######|
| queue-4(CT=20us) ######| | queue-4(CT=19us) ######|
| queue-5(CT=10us) ######| | queue-5(CT= 9us) ######|
| queue-6(CT=0us) ######| | queue-6(CT=-1us) ######|
| queue-7(CT=-10us) ######| | queue-7(CT=-11us) ######|
+------------------------------+ +------------------------------+
+------------------------------+ +------------------------------+
| Other Queue Group: | | Other Queue Group: |
| queue-8 ############ | | queue-8 ############ |
| queue-9 ############ | | queue-9 ############ |
| queue-10 ############ | | queue-10 ############ |
| ... ... | | ... ... |
+------------------------------+ +------------------------------+
-o----------------------------------o-------------------------------->
T0 T0+1us time
Figure 2: Example of RPQ Groups
In this example, the AT for RPQ group is configured to 10us. Queue-1
~ queue-7 are members of RPQ group. Each queue has its CT attribute.
The MAX_CT is 50us and the MIN_CT is -10us. At the initial time
(T0), the CTs of all queues are staggered from each other. For
example, the CT of queue-1 is 50us, the CT of queue-2 is 40us, and
so on.
Suppose the scheduling engine initiates a rotation timer with a time
interval of 1us, i.e., AT = 10 * RTI in this case. As shown in the
figure, at T0 + 1us, the CT of queue-1 becomes 49us, the CT of
queue-2 becomes 39us, etc.
At T0 + 10us, the CT of queue-7 will return to MAX_CT.
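The rotation behavior in this example can be sketched
non-normatively as follows; the wrap rule assumes a queue returns to
MAX_CT once a full AT has elapsed after it reached MIN_CT, matching
the numbers above:

```python
def rotate(cts, rti, at, min_ct, max_ct):
    """One RTI tick: each queue's CT decreases by RTI; a queue that has
    spent a full AT below MIN_CT wraps back to MAX_CT."""
    return [max_ct if ct - rti <= min_ct - at else ct - rti for ct in cts]

# The example above: AT = 10us, RTI = 1us, seven queues.
cts = [50, 40, 30, 20, 10, 0, -10]            # CTs at time T0
cts = rotate(cts, rti=1, at=10, min_ct=-10, max_ct=50)
# -> [49, 39, 29, 19, 9, -1, -11], the CTs at T0 + 1us
```

After ten rotations from T0, queue-7 returns to MAX_CT (50us) while
queue-1 has counted down to 40us, as described above.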
Note that the minimum D requested by a service flow should not be
smaller than d_1+F, where d_1 is the most urgent delay level and F
is the intra-node forwarding delay. Therefore, any packet with
in-time transmission should have a Q (i.e., D + E - F) that is not
smaller than d_1, and should never be inserted into a queue with
negative CT. However, considering that there may be some abnormal
cases during scheduling, adding a queue with MIN_CT = -AT for
in-time mode, to ensure that incoming urgent traffic is sent before
the queue's CT rolls over, appears harmless.
4.1. Alternate Queue Allocation Rules
It may further let a RPQ queue (act as the virutal parent queue)
contain multiple sub-queues, each for a delay level. Packets are
actually stored in the physical sub-queues. That is, packets with
different D are inserted into different sub-queues and protected. In
this way, for two packets with the same Q but different D, we can
decide to firstly schedule the packet with the smallest D. This is
beneficial because when packets with smaller D facing larger
interference delay, it is difficult to have space for latency
compensation on downstream nodes, while packets with larger D have
larger space for latency compensation.
For a virtual parent queue, the physical sub-queue with a small
delay level (e.g., 10us) is ranked before the physical sub-queue
with a large delay level (e.g., 20us).
This alternate queue allocation rule enables the on-time mode to
provide appropriate jitter performance (i.e., the worst case of
exceeding the deadline is the delay level value). However, it can
also be uniformly applied to the in-time mode. According to the
different scheduling behaviors of the in-time and on-time modes,
MIN_CT may be designed as -AT for in-time mode and -N*AT for on-time
mode, where N is the number of delay levels.
4.2. Scheduling Mode for RPQ
An RPQ group may be configured in either in-time or on-time
scheduling mode, but cannot support both modes simultaneously.
In the in-time scheduling mode, among all non-empty queues, the
packets in each queue are sent sequentially in order from the highest
priority queue to the lowest priority queue.  The actual bandwidth
consumed by the scheduler may exceed its configured service rate C.
In the on-time scheduling mode, only among the non-empty queues with
CT <= 0 are the packets in each queue sent sequentially, in order
from the highest priority queue to the lowest priority queue.
For a virtual parent queue that is allowed to send, packets are sent
sequentially from its non-empty physical sub-queues, proceeding from
the sub-queues with small delay levels to the sub-queues with large
delay levels.  Only when a physical sub-queue has been emptied can
the next non-empty physical sub-queue be sent.
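The dequeue behavior above can be sketched as follows.  This is an
illustrative sketch only; the list-of-tuples queue layout and the
function name are assumptions for the example, not part of this
specification.

```python
# Sketch of the RPQ dequeue rules of Section 4.2 (illustrative only).

def dequeue(parent_queues, mode):
    """parent_queues: list of (ct, sub_queues) ranked from highest
    priority (smallest CT) to lowest; sub_queues: list of
    (delay_level, packets) ranked from small to large delay level."""
    for ct, sub_queues in parent_queues:
        if mode == "on-time" and ct > 0:
            continue  # on-time: only queues with CT <= 0 may send
        for _level, packets in sub_queues:
            if packets:
                # a sub-queue must be emptied before the next sends
                return packets.pop(0)
    return None  # nothing eligible to send

# in-time: the most urgent non-empty queue sends regardless of CT > 0
q = [(5, [(10, ["a"])]), (15, [(10, ["b"])])]
print(dequeue(q, "in-time"))   # -> a
```

With the same queue state, on-time mode would send nothing, since no
queue has CT <= 0.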
4.3. Schedulability Condition for RPQ
In this section, we discuss the schedulability condition based on
deadline queues with in-time scheduling mode.
Suppose that for any delay level d_i, the corresponding accumulated
constraint function is A_i(t), and let d_i < d_(i+1).  Suppose that
for any planned residence time D_i, the corresponding constraint
function is A'_i(t).  For simplicity, we take the intra-node
forwarding delay F as 0.  Then the schedulability condition is:
*  A_1(t-d_1) + sum{A_i(t+AT-d_i) for all i>=2} <= C*t, if each d_i
   contains only one D_i.  (Equation-2)
*  sum{A_i(t+AT-d_i) for all i>=1} <= C*t, if a d_i contains multiple
   D_i.  (Equation-3)
where AT is the CT interval between adjacent queues, RTI is the
rotation time interval, and C is the service rate of the deadline
scheduler.
The proof is similar to that in [RPQ], except that the rotation step
is fine-grained by RTI and the priority of each queue is its CT
range.  Figure 3 below gives a rough explanation.
^
| Planned Residence Time
| | |
|CT_t+AT+2*RTI -> ===== | |
| CT_t+AT+RTI -> =====| |
CT_t + - - - - - - - - - - - - >+====+ -\
+AT | | | |
| | | |
| | | > Physical queue-x
D_p + - - - - - - - - - - - - >| | |
| | | |
CT_t + - - - - - - - - - - - - >+====+ -/
| | |===== <- CT_t-RTI
| | | ===== <- CT_t-2*RTI
| | |
|
| RTI| ... ... | RTI| RTI| RTI| RTI| RTI| RTI|
---+----+-----------+----+----+----+----+----+----+--------->
0 ^ ^ ^ ^ time
| | | |
t-tau' t-ofst t t+tau
(busy period begin) (arrival) (departure)
Figure 3: Deadline Queues Scheduling
Suppose that the observed packet, with planned residence time D_p,
arrives at the scheduler at time t and leaves the scheduler at time
t+tau.  It will be inserted into physical queue-x with count-down
time CT_t during the current timer interval RTI, which has start time
t-ofst and end time t-ofst+RTI.  According to the above packet
queueing rules, we have CT_t <= D_p < CT_t+AT.  Also suppose that
t-tau' is the beginning of the busy period closest to t.  Then, we
can derive the amount of traffic within the time interval [t-tau',
t+tau] that must be scheduled before the observed packet.  In detail:
*  For all services i with planned residence time D_i meeting CT_t <=
   D_i < CT_t+AT, the workload is sum{A'_i[t-tau', t]}.
      Explanation: since packets with planned residence time D_i in
      the range [CT_t, CT_t+AT) arriving at time t will be sent
      before the observed packet, packets with the same D_i that
      arrived before time t will have become more urgent by time t,
      and must also be sent before the observed packet.
*  For all services i with planned residence time D_i meeting D_i >=
   CT_t+AT, the workload is sum{A'_i[t-tau', t-ofst-(D_i-CT_t-AT)]}.
      Explanation: although packets with planned residence time D_i
      larger than CT_t+AT arriving at time t will be sent after the
      observed packet, packets with the same D_i that arrived before
      time t, specifically before time t-ofst-(D_i-CT_t-AT), will
      have become more urgent by time t, and must be sent before the
      observed packet.
*  For all services i with planned residence time D_i meeting D_i <
   CT_t, the workload is sum{A'_i[t-tau', t+(CT_t-D_i)]}.
      Explanation: packets with planned residence time D_i less than
      CT_t at time t will certainly be sent before the observed
      packet; at a future time t+(CT_t-D_i), packets with the same
      D_i will still be more urgent than the observed packet (even
      though the observed packet has also become more urgent), and
      must be sent before the observed packet.
* Then deduct the traffic that has been sent during the busy period,
i.e., C*(tau+tau').
Let tau be D_p, and recall that CT_t <= D_p; the above workload is
less than
sum{A'_i(tau'+CT_t+AT-D_i) for all D_i >= CT_t} +
sum{A'_i(tau'+CT_t-D_i) for all D_i < CT_t} - C*(tau'+D_p)
It is further less than
sum{A'_i(tau'+D_p+AT-D_i) for all D_i >= D_2} + A'_1(tau'+D_p-D_1)
- C*(tau'+D_p)
Then, denoting x = tau'+D_p, we have
sum{A'_i(x+AT-D_i) for all D_i >= D_2} + A'_1(x-D_1) - C*(x)
In the case where a d_i contains only one D_i, we have A_i = A'_i and
d_i = D_i, so the above workload is
sum{A_i(x+AT-d_i) for all d_i >= d_2} + A_1(x-d_1) - C*(x)
Requiring the above workload to be less than zero, we get Equation-2.
In the case where a d_i contains multiple D_i: e.g., d_1 is the
minimum delay level with 10us, D_1 ~ D_10 are 10 ~ 19us respectively,
d_2 is 20us, D_11 ~ D_20 are 20 ~ 29us respectively, etc.  Let D_1 ~
D_10 consume the resources of d_1, D_11 ~ D_20 consume the resources
of d_2, etc.  Then, the above workload is less than
sum{A'_i(x+AT-d_i) for all D_i belongs to d_i} - C*(x)
That is sum{A_i(x+AT-d_i) for all d_i} - C*(x); requiring it to be
less than zero, we get Equation-3.
Note that the key difference between the above two conditions (i.e.,
Equation-2 and Equation-3) and the one based on the sorted queue
(i.e., Equation-1) is the AT factor.
Other common considerations are the same as in Section 3.2.
4.3.1. Schedulability Conditions for Alternate QAR
According to Section 4.1, an RPQ queue may further contain multiple
sub-queues, one for each delay level.  Under the same parent queue,
the sub-queues are ranked from small delay level to large delay
level.  In this case, the precise workload should exclude packets
with higher delay levels than that of the observed packet.
In the case where a d_i contains only one D_i, the schedulability
condition is Equation-1.
This is because, in the workload, for all D_i meeting D_i >= CT_t+AT,
their contributed workload is changed to sum{A'_i[t-tau', t-ofst-
(D_i-CT_t)]} based on the analysis of Equation-2; that is, the amount
of workload A'_i(AT) (which is placed in queue-x) is excluded.
In the case where a d_i contains multiple D_i, the schedulability
condition is still Equation-3.
This is because multiple D_i may belong to the same delay level as
D_p.  Assume that, within the time zone [t-ofst, t-ofst+I], the list
of all arriving D_i placed in the same parent queue-x with range
[CT_t, CT_t+AT) as the observed packet (with D_p) is:
*  D_a1~D_am, where D_a1 is closer to CT_t+AT; they are larger than
   D_p (but smaller than CT_t+AT) and belong to a larger delay level
   than d_p (the delay level corresponding to D_p).
*  D_b1~D_bm, which are larger than D_p and belong to the same delay
   level as d_p.
*  D_p.
*  D_c1~D_cm, which are smaller than D_p, and may belong to the same
   delay level as d_p or a lower delay level than d_p.
Thus both D_b1~D_bm and D_c1~D_cm should be scheduled before the
observed packet.  This is also true for those sets of packets that
arrived in the past.
Strictly, for D_a1, the contributed workload is sum{A'_i[t-tau',
t-ofst+I-AT]}; that is, only packets of D_a1 arriving before time
t-ofst+I-AT will be placed in a queue-y with range [CT_t, CT_t+AT)
that is more urgent than queue-x (at that historical time the CT
range of queue-x was [CT_t+AT, CT_t+2AT)) and should be scheduled
before the observed packet.  Similarly, for D_a2, the contributed
workload is sum{A'_i[t-tau', t-ofst+I-AT+I]}, and for D_am, the
contributed workload is sum{A'_i[t-tau', t-ofst+I-AT+(m-1)*I]}.
Note that queue-x also contains packets with D_i (e.g., D_a0, larger
than D_a1) that arrived in the past.  For D_a0, the contributed
workload is sum{A'_i[t-tau', t-ofst+I-AT-(D_a0-D_a1)]}.
However, the value of m is not fixed.  For safety, we can
appropriately overestimate the workload time zone of D_a1~D_am up to
time instant t and regard those packets as needing to be scheduled
before the observed packet.  Based on this, we can get Equation-3.
4.3.2. Conditions for Leaky Bucket Constraint Function
Assume that we want to support delay levels (d_1, d_2, ..., d_n) in
the network, and that the traffic arrival constraint function of each
delay level d_i is the leaky bucket arrival curve A_i(t) = b_i + r_i
* t.  Equation-2 can be expressed as:
b_1 <= C*d_1 - M
b_1 + b_2 + (r_1+r_2)*AT <= C*d_2 - M
b_1 + b_2 + b_3 + (r_1+r_2)*2*AT + r_3*AT <= C*d_3 - M
... ...
sum(b_1+...+b_n) + (r_1+r_2)*(n-1)*AT + r_3*(n-2)*AT + ... +
r_n*AT <= C*d_n - M
where C is the service rate of the deadline scheduler and M is the
maximum size of an interfering packet.
Equation-3 can be expressed as:
b_1 + r_1*AT <= C*d_1 - M
b_1 + b_2 + r_1*2*AT + r_2*AT <= C*d_2 - M
b_1 + b_2 + b_3 + r_1*3*AT + r_2*2*AT + r_3*AT <= C*d_3 - M
... ...
sum(b_1+...+b_n) + r_1*n*AT + r_2*(n-1)*AT + ... + r_n*AT <= C*d_n
- M
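The rows of the Equation-3 inequality set above can be checked
mechanically.  The following sketch evaluates each row; the parameter
values in the example are illustrative assumptions, not recommended
settings.

```python
# Illustrative admission check for the Equation-3 conditions under
# leaky bucket constrained delay levels.

def equation3_holds(b, r, d, C, AT, M):
    """b[i], r[i]: bucket depth and rate of delay level d[i]
    (d ascending).  Returns True if every row of Equation-3 holds."""
    n = len(d)
    for k in range(1, n + 1):
        burst = sum(b[:k])
        # 1-based level i contributes r_i * (k - i + 1) * AT at row k
        rate_terms = sum(r[i] * (k - i) * AT for i in range(k))
        if burst + rate_terms > C * d[k - 1] - M:
            return False
    return True

# C = 1250 bytes/us (10 Gbps), AT = 10us, M = 1500 bytes
print(equation3_holds([2000, 2000], [100, 100], [10, 20],
                      1250, 10, 1500))   # -> True
```

A delay level whose burst b_i exceeds its row's budget (e.g., b_1 =
20000 bytes here) would make the function return False, i.e., the
flow set is not schedulable.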
4.4. Buffer Size Design
The buffer size of each RPQ queue is AT * C - M, where M is the
maximum size of a low priority (interfering) packet.  If we divide
time by AT (e.g., 10us) and observe the RPQ queue with the lowest
priority, e.g., d_100 (i.e., CT = 100us), then in the first AT, the
traffic with priority d_100 (whose arrival follows the constraint
A_100(t)) will enter that queue.  In the second AT, the traffic with
priority d_90 (whose arrival follows the constraint A_90(t)) will
enter the same queue (whose CT is now 90us), and so on.  It can be
seen that the maximum buffer size required for the queue is
sum(A_i(AT)) over all delay levels i.  Since the stability condition
of the deadline scheduler requires sum(A_i(t)) < C*t, the buffer size
of each deadline queue can be set to C*AT - M.
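The bound above can be illustrated numerically.  All values in this
sketch (service rate, burst sizes, rates) are made-up assumptions
chosen only to show that the worst-case occupancy sum(A_i(AT)) stays
within the C*AT - M budget.

```python
# Numerical illustration of the per-queue buffer bound of Section 4.4.

C, AT, M = 1250, 10, 1500          # bytes/us, us, bytes (assumed)
b = [2000, 2000, 3000]             # burst b_i of each delay level
r = [100, 100, 100]                # rate r_i of each delay level

# worst case: every level deposits A_i(AT) = b_i + r_i*AT in one AT
occupancy = sum(bi + ri * AT for bi, ri in zip(b, r))
per_queue_buffer = C * AT - M      # buffer set aside for one queue

print(occupancy, per_queue_buffer)  # 10000 11000
```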
When deadline queues and latency compensation are used in
combination, a packet that arrives early is penalized and placed in a
queue with a larger CT; this will not cause the queue to overflow,
because that queue is exactly where the packet belongs.  That is, had
the packet not arrived early but on time, it would not have been
penalized, and would still have entered the same queue, whose CT
would have decreased by then.
Similarly, when a late-arriving packet is rewarded and placed in a
queue with a smaller CT, it will not cause that queue to overflow,
because that queue is exactly where the packet belongs.  That is, had
the packet not arrived late but on time, it would not have been
rewarded, and would still have entered the same queue at a time when
its CT had not yet been reduced.
However, an implementation may let all queues share a common buffer,
with the total buffer cost equal to the sum of the burst resources of
all delay levels.
If the alternate QAR (Section 4.1) is applied, the actual buffer cost
of a virtual parent queue is contributed by all the physical
sub-queues it contains.  The actual buffer cost of each physical
sub-queue is dynamically allocated based on whether packets are
inserted.  According to Section 4.3, the maximum buffer cost of a
physical sub-queue may reach the upper limit of the burst resources
for the corresponding delay level (such as C*AT-M).  However, all physical
sub-queues with the same delay level under all virtual parent queues
cannot simultaneously reach the maximum buffer cost, but their sum
may reach the maximum buffer cost.
If flows are rate-controlled (i.e., reshaping is done inside the
network, or on-time scheduling mode is applied), MAX_CT may be
designed as the delay level with the largest delay bound, and the
total buffer shared by all queues should be the total amount of burst
resources of all delay levels.  Otherwise, MAX_CT should be larger
than the largest delay bound, with correspondingly more buffer, to
store the accumulated bursts.  Please refer to Section 15 for more
considerations.
5. Reshaping
Reshaping per flow inside the network, as described in [RFC2212], is
done at all heterogeneous source branch points and at all source
merge points, to restore (possibly distorted) traffic's shape to
conform to the TSpec. Reshaping entails delaying packets until they
are within conformance of the TSpec.
A network element MUST provide the necessary buffers to ensure that
conforming traffic is not lost at the reshaper. Note that while the
large buffer makes it appear that reshapers add considerable delay,
this is not the case. Given a valid TSpec that accurately describes
the traffic, reshaping will cause little extra actual delay at the
reshaping point (and will not affect the delay bound at all).
Maintaining a dedicated shaping queue per flow can avoid burstiness
cascading between different flows with the same traffic class, but
this approach goes against the design goal of packet multiplexing
networks. [IR-Theory] describes a more concise approach by
maintaining a small number of interleaved regulators (per traffic
class and incoming port), but still maintaining the state of each
flow. With this regulator, packets of multiple flows are processed
in one FIFO queue and only the packet at the head of the queue is
examined against the regulation constraints of its flow. However, as
the number of flows increases, the IR operation may become burdensome
as much as the per-flow reshaping.
For any observed EDF scheduler in the network, when the traffic
arriving from all incoming ports has always been reshaped, these
flows comply with their arrival constraint functions, which is
crucial for the schedulability conditions of EDF scheduling.  Based
on this, the delay resource pool that is open and reserved for
service flows can be quantified.
6. Latency Compensation
[RFC9320] presents a latency model for DetNet nodes.  There are six
types of delay that a packet can experience from hop to hop.  The
processing delay (type-4), the regulator delay (type-5), the queueing
subsystem delay (type-6), and the output delay (type-1) together
contribute to the residence time in the node.
In this document, the residence time in the node is simplified into
two parts: the first part is to lookup the forwarding table when the
packet is received from the incoming port (or generated by the
control plane) and deliver the packet to the line card where the
outgoing port is located; the second part is to store the packet in
the queue of the outgoing port for transmission. These two parts
contribute to the actual residence time of the packet in the node.
The former can be called the forwarding delay (termed F) and the
latter the queueing delay (termed Q).  The forwarding delay is
related to the chip implementation and is generally constant; the
queueing delay is variable.
6.1. Get Existing Accumulated Planned Residence Time
The existing accumulated planned residence time of the packet refers
to the sum of the planned residence times of all upstream nodes
before the packet is transmitted to the current node.  This
information needs to be carried in the packet.  Every time the packet
passes through a node, the node adds its corresponding planned
residence time to the existing accumulated planned residence time
field in the packet.  [I-D.peng-6man-deadline-option] defines a
method to carry the existing accumulated planned residence time in
IPv6 packets.
The setting of "existing accumulated planned residence time" in the
packet needs to be friendly to the chip for reading and writing. For
example, it should be designed as a fixed position in the packet.
The chip may support flexible configuration for that position.
6.2. Get Existing Accumulated Actual Residence Time
The existing accumulated actual residence time of the packet refers
to the sum of the actual residence times of all upstream nodes before
the packet is transmitted to the current node.  This information
needs to be carried in the packet.  Every time the packet passes
through a node, the node adds its corresponding actual residence time
to the existing accumulated actual residence time field in the
packet.  [I-D.peng-6man-deadline-option] defines a method to carry
the existing accumulated actual residence time in IPv6 packets.
The setting of "existing accumulated actual residence time" in the
packet needs to be friendly to the chip for reading and writing. For
example, it should be designed as a fixed position in the packet.
The chip may support flexible configuration for that position.
The current node can carry the receiving and sending times of the
packet in an auxiliary data structure (note: not in the packet
itself); the actual residence time of the packet in the node can then
be calculated from these two values.
Although other methods are also possible, for example, carrying the
absolute system times of receiving and sending in the packet to let
the downstream node compute the actual residence time indirectly,
they have low encapsulation efficiency.
6.3. Get Existing Accumulated Residence Time Deviation
The existing accumulated residence time deviation (termed as E)
equals existing accumulated planned residence time minus existing
accumulated actual residence time. This value can be zero, positive,
or negative.
If the existing accumulated planned residence time and the existing
accumulated actual residence time are carried in the packet, it is
not necessary to carry the existing accumulated residence time
deviation; otherwise, it is necessary.  The advantage of the former
is that it can be applied to more scenarios, while the latter has
less encapsulation overhead.
In the case of in-time scheduling mode, E may be a very large
positive value, while in the case of on-time scheduling mode, E may
be 0 or a small value close to 0.
6.4. Get Allowable Queueing Delay
When a node receives a packet from the upstream node, it can first
get the existing accumulated residence time deviation (E), add it to
the planned residence time (D) of the packet at this node to obtain
the adjusted residence value, and then deduct the forwarding delay
(F) of the packet in the node, to obtain the allowable queueing delay
(Q) for that packet.
Q = D + E - F
In detail, assume that the current node in a deterministic path is i,
and that all upstream nodes are numbered from 1 to i-1.  Let the
planned residence time be D, the actual residence time be R, and the
intra-node forwarding delay be F; then the allowable queueing delay
(Q) of the packet on the current node i is calculated as follows:
E(i-1) = sum(D(1), ..., D(i-1)) - sum(R(1), ..., R(i-1))
Q(i) = D(i) + E(i-1) - F(i)
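The two formulas above can be exercised with a worked example.  The
per-hop values below are hypothetical, chosen only to show the sign
convention: a packet that left its upstream hops early (actual < planned)
accumulates a positive E and is penalized with a larger Q.

```python
# Worked example of E(i-1) = sum(D) - sum(R) and Q(i) = D(i) + E - F.

def allowable_q(planned, actual, D_i, F_i):
    """planned/actual: per-hop residence times of upstream nodes
    1..i-1; D_i, F_i: planned residence time and forwarding delay at
    the current node i."""
    E = sum(planned) - sum(actual)   # accumulated deviation E(i-1)
    return D_i + E - F_i             # Q(i) = D(i) + E(i-1) - F(i)

# the packet left two upstream hops a total of 8us early, so this
# node holds it 8us longer than its nominal D - F
print(allowable_q([30, 30], [25, 27], 30, 5))   # -> 33
```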
6.5. Scheduled by Allowable Queueing Delay
The packet will be scheduled based on its Q; that is, the packet is
scheduled based on latency compensation contributed by E, instead of
only on D.
The core-stateless latency compensation can achieve the effect of
reshaping per flow.  Q can be used to identify ineligible arrivals of
one delay level and prevent them from interfering with the scheduling
of eligible arrivals of other delay levels.
Firstly, at network entry, all packets (after regulation) of the same
flow will be released to the EDF scheduler one after another at
different times (each termed a begin time), but with the same
allowable queueing delay (Q) and initial E = 0.  Then, the ideal
departure time of each packet is its begin time plus Q.  If all
packets have their ideal departure times (i.e., the updated E is
still 0), then the traffic arriving at the next hop also obeys its
arrival constraint function.  If all packets of all delay levels
released by all sources have their ideal departure times, all
concurrent flows received by a transit node will comply with their
arrival constraints.  However, taking the in-time mode as an example,
packets may have an earlier departure time (i.e., the updated E is
larger than 0) instead of the ideal time.  Therefore, the traffic
arriving at the downstream node may violate its arrival constraint
function.  In this case, the downstream node may penalize the
ineligible arriving packets based on E, i.e., derive an appropriate Q
to restore eligible arrivals.
Although latency compensation has the effect of reshaping, it is not
equivalent to reshaping.  Consider an accumulated burst that violates
the traffic constraint function and arrives at a node.  If reshaping
is used, it will introduce a substantial shaping delay for the
ineligible burst, which will then enter the queueing subsystem.  If
latency compensation is used, this ineligible burst will only be
penalized with a larger Q and tolerated in the queueing subsystem,
and in in-time mode it may be sent immediately if higher priority
queues are empty.
Note that the premise of latency compensation is that a flow must
follow a fixed explicit path.  If multiple packets of the same flow
arrive at an intermediate node along multiple paths with different
propagation delays, then even if these packets are all eligible,
burst accumulation may still form and cannot even be penalized.
7. Option-1: Reshaping plus Sorted Queue
A received packet is inserted into the PIFO queue according to rank =
A + D + E, where A is the time the packet arrived at the incoming
interface.  Note that E is always 0 and is not updated.
Enqueue rule:
* For two packets with different rank, the packet with a smaller
rank is closer to the head of the queue.
* For two packets with the same rank, the packet with a smaller D is
closer to the head of the queue.
*  For two packets with the same rank and D, the packet that arrives
   at the scheduler first is closer to the head of the queue.
The planned residence time (D) should be carried in the packet.
The scheduling mode (in-time or on-time) should also be carried in
the packet, and used to insert the packet into a PIFO with the
corresponding scheduling mode.
Dequeue rule:
*  As mentioned in Section 3.1, for a PIFO with in-time scheduling
   mode, as long as the queue is not empty, packets always depart
   from the HoQ for transmission; for a PIFO with on-time scheduling
   mode, the HoQ packet can be sent only if the queue is not empty
   and the rank of the HoQ packet is equal to or earlier than the
   current system time.
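The Option-1 enqueue and dequeue rules can be sketched as follows.
The sorted-list representation of the PIFO and the function names are
illustrative assumptions (a hardware PIFO would insert in place
rather than re-sort).

```python
# Sketch of Option-1: PIFO ordered by rank = A + D + E (E always 0),
# with ties broken by smaller D, then by arrival order.

pifo = []

def enqueue(pkt, arrival, D, E=0):
    rank = arrival + D + E
    pifo.append({"pkt": pkt, "rank": rank, "D": D})
    # smaller rank first, then smaller D; Python's stable sort keeps
    # earlier arrivals ahead when rank and D are both equal
    pifo.sort(key=lambda p: (p["rank"], p["D"]))

def dequeue(now, mode):
    if not pifo:
        return None
    if mode == "on-time" and pifo[0]["rank"] > now:
        return None                  # head of queue is not yet due
    return pifo.pop(0)["pkt"]

enqueue("p1", arrival=100, D=40)     # rank 140
enqueue("p2", arrival=110, D=20)     # rank 130: sorted ahead of p1
print(dequeue(now=120, mode="in-time"))   # -> p2
```

In on-time mode at now=120 nothing would be sent, since the remaining
head packet has rank 140.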
8. Option-2: Reshaping plus RPQ
A received packet is inserted into the appropriate RPQ queue
according to Q = D - F; that is, E is always 0 and is not updated.
Enqueue rule:
*  For a packet with allowable queueing delay Q, select the target
   RPQ queue (i.e., the virtual parent queue) with corresponding CT
   that meets CT <= Q < CT+AT.
*  Under the selected virtual parent queue, select the target
   physical sub-queue with the corresponding delay level d_i that is
   closest to, and not greater than, D-F.
The planned residence time (D) should be carried in the packet.
The scheduling mode (in-time or on-time) should also be carried in
the packet, and used to insert the packet into an RPQ with the
corresponding scheduling mode.
Dequeue rule:
*  As mentioned in Section 4.2, for an RPQ group with in-time
   scheduling mode, among all non-empty queues, the packets in each
   queue are sent sequentially in order from the highest priority
   queue to the lowest priority queue; for an RPQ group with on-time
   scheduling mode, only among the non-empty queues with CT <= 0 are
   the packets in each queue sent sequentially, in the same order.
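The two-level queue selection of the enqueue rule can be sketched as
follows; the CT values, delay levels, and helper name are example
assumptions (the CT list mirrors the RPQ group later shown in
Figure 5).

```python
# Sketch of the Option-2 enqueue rule: pick the parent queue whose CT
# range covers Q = D - F, then the sub-queue for the delay level
# closest to, and not greater than, D - F.

def select_queue(D, F, cts, levels, AT):
    """cts: parent queue CT values; levels: ascending delay levels.
    Assumes some parent queue's [CT, CT+AT) range covers Q."""
    Q = D - F                                  # Option-2: E is always 0
    parent = next(ct for ct in cts if ct <= Q < ct + AT)
    sub = max(l for l in levels if l <= Q)     # closest not-greater
    return parent, sub

cts = [45, 35, 25, 15, 5, -5, -15]
levels = [10, 20, 30, 40]
print(select_queue(D=30, F=5, cts=cts, levels=levels, AT=10))
# -> (25, 20): Q = 25 falls in [25, 35), sub-queue for level 20
```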
9. Option-3: Latency Compensation plus Sorted Queue
A received packet is inserted into the PIFO queue according to rank =
A + D + E, where A is the time the packet arrived at the incoming
interface.  Note that E is generally not 0 and is updated per hop.
The planned residence time (D) and accumulated residence time
deviation (E) should be carried in the packet.
The enqueue and dequeue operations are the same as in Section 7.
9.1. Packet Disorder Considerations
Suppose that two packets, P1 and P2, are generated instantaneously
from a specific flow at the source, and that the two packets have the
same planned residence time.  P1 may face less interference delay
than P2 on their journey.  When they arrive at an intermediate node
in turn, P2 will have a smaller allowable queueing delay (Q) than P1,
so as to stay close to P1 again.  Note that the comparison of which
packet is earlier is based on the time of arrival at the scheduler
plus the packet's Q.  The time difference between the arrivals of the
two packets at the scheduler may not be consistent with the
difference between their Q values, so an unexpected comparison result
is possible.
As shown in Figure 4, P1 and P2 are two back-to-back packets
belonging to the same burst. The arrival time when they are received
on the scheduler is shown in the figure. Suppose that the Q values
of two adjacent packets P1 and P2 are 40us and 39us, and arrive at
the scheduler at time T1 and T2 respectively. P1 will be sorted
based on T1 + 40us, while P2 will be sorted based on T2 + 39us.
Ideally, T2 should be T1 + 1us.  However, this may not be the case.
For example, it is possible that T2 = T1 + 0.9us, Q1 = 40us, Q2 =
39.1us, but because the calculation accuracy of Q1 and Q2 is
microseconds, they are rounded (e.g., with half-adjust) to
approximately 40us and 39us, respectively.  This means that P2 will
be sorted before P1 in the PIFO, resulting in disorder.
packets arrived later
packets arrived earlier |
| |
V V
--------+-----------------------------------------------+---------
... ... | .P1.................P2....................... | ... ...
--------+-----------------------------------------------+---------
P1.Q=40us
P2.Q=39us
| |
--------o---------------------o--------------------------------->
T1 T2 (=T1+0.9us)
| ___________________|
| |
v v
PIFO ##############################################################
top
Figure 4: Packets queueing based on Latency Compensation
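The arithmetic of this disorder example can be reproduced directly.
The sketch below works in tenths of a microsecond (an assumption made
only to keep the arithmetic exact) and shows that the exact ranks
preserve order while the rounded ranks reverse it.

```python
# Numerical illustration of the Figure 4 disorder example.
# Times are in tenths of a microsecond.

T1 = 10000                    # P1 reaches the scheduler at T1
T2 = T1 + 9                   # P2 follows 0.9us later
Q1, Q2 = 400, 391             # exact Q values: 40us and 39.1us

exact_ranks = (T1 + Q1, T2 + Q2)    # both 10400: order preserved

def to_us(q):                       # half-adjust to whole microseconds
    return round(q / 10) * 10

rounded_ranks = (T1 + to_us(Q1), T2 + to_us(Q2))
print(exact_ranks, rounded_ranks)
# rounded_ranks[1] < rounded_ranks[0]: P2 sorts before P1 -> disorder
```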
The DetNet architecture [RFC8655] provides a Packet Ordering Function
(POF) that can be used to solve the above disorder problem caused by
latency compensation.
Alternatively, if the POF is not enabled, we can also maintain state
for service flows to record the last queueing information to address
this issue.
For example, one or more OGOs (Order Guarantee Objects) are
maintained per delay level and incoming port on each outgoing port.
An OGO records the rank (i.e., arrival time at the incoming interface
plus D) of the last inserted packet mapped to this OGO.
When a packet arrives at the scheduler, it is first mapped to its
OGO, obtains the rank recorded in the OGO, and is put behind that
rank.
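The OGO idea can be sketched as a rank clamp; the dictionary layout,
key choice, and function name are illustrative assumptions, not part
of this specification.

```python
# Sketch of an OGO (order guarantee object): a packet's rank never
# sorts ahead of the last rank recorded for the same OGO.

ogo_last_rank = {}   # key: (delay_level, incoming_port), per out-port

def ogo_rank(key, arrival, D):
    rank = arrival + D
    last = ogo_last_rank.get(key)
    if last is not None and rank < last:
        rank = last               # put the packet behind the last rank
    ogo_last_rank[key] = rank
    return rank

k = (10, "port-1")
print(ogo_rank(k, 100, 40))       # -> 140
print(ogo_rank(k, 101, 38))       # 139 would overtake; clamped -> 140
```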
Note that in practical situations, two back-to-back packets of the
same flow are generally evenly distributed within the burst interval
by policing, which means that the distance between these two packets
is generally much greater than the calculation accuracy mentioned
above, so the disorder phenomenon will not actually occur.  For
example, the regulated result meets a Length Rate Quotient (LRQ)
constraint: the time interval between two consecutive packets of
sizes l_i and l_j should be at least l_i/r, where r is the flow rate
(i.e., the reserved bandwidth of the flow).  This can be done by
LRQ-based regulation or enhanced leaky-bucket-based regulation,
depending on the implementation.
10. Option-4: Latency Compensation plus RPQ
A received packet is inserted into the appropriate RPQ queue
according to Q = D + E - F.
The planned residence time (D) and accumulated residence time
deviation (E) should be carried in the packet.
The enqueue and dequeue operations are the same as in Section 8.
Figure 5 depicts an example of packets inserted to the RPQ queues.
P2 P1 +------------------------------+
+--------+ +--------+ | RPQ Group: |
| D=20us | | D=30us | | queue-1(CT=45us) ###### |
| E=15us | | E=-8us | +--+ | queue-2(CT=35us) ###### |
+--------+ +--------+ |\/| | queue-3(CT=25us) ###### |
------incoming port-1------> |/\| | queue-4(CT=15us) ###### |
|\/| | queue-5(CT=5us) ###### |
P4 P3 |/\| | queue-6(CT=-5us) ###### |
+--------+ +--------+ |\/| | queue-7(CT=-15us)###### |
| | | D=30us | |/\| +------------------------------+
+--------+ | E=-30us| |\/|
+--------+ |/\|
------incoming port-2------> |\/| +------------------------------+
|/\| | Other Queue Group: |
P6 P5 |\/| | queue-8 ############ |
+--------+ +--------+ |/\| | queue-9 ############ |
| | | D=40us | |\/| | queue-10 ############ |
+--------+ | E=40us | |/\| | ... ... |
+--------+ +--+ +------------------------------+
------incoming port-3------> ---------outgoing port---------->
-o----------------------------------o-------------------------------->
receiving-time base +F time
Figure 5: Time Sensitive Packets Inserted to RPQ
As shown in Figure 5, the node successively receives six packets from
three incoming ports, among which packets 1, 2, 3, and 5 carry
deadline information, while packets 4 and 6 are best-effort packets.
These packets need to be forwarded to the same outgoing port.  It is
assumed that they arrive at the line card where the outgoing port is
located at almost the same time, after the intra-node forwarding
delay (F = 5us).  At this time, the queue status of the outgoing port
is as shown in the figure.  Then:
* The allowable queueing delay (Q) of packet 1 is 30 - 8 - 5 = 17us,
and it will be put into queue-4 (its CT is 15us), meeting the
condition that Q is in the range [15, 25).
* The allowable queueing delay (Q) of packet 2 is 20 + 15 - 5 =
30us, and it will be put into queue-3 (its CT is 25us), meeting
the condition that Q is in the range [25, 35).
* The allowable queueing delay (Q) of packet 3 is 30 - 30 - 5 =
-5us, and it will be put into queue-6 (its CT is -5us), meeting
the condition that Q is in the range [-5, 5).
*  The allowable queueing delay (Q) of packet 5 in the node is 40 +
   40 - 5 = 75us, and the queue it is placed in is not shown in the
   figure (such as a hierarchical queue).
* Packets 4 and 6 will be put into the non-deadline queue in the
traditional way.
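The per-packet computations in the bullets above can be reproduced as
code.  The D and E values are taken from Figure 5; only the table
layout is an assumption for the example.

```python
# The Q = D + E - F computations of the Figure 5 example, F = 5us.

F = 5
packets = {1: (30, -8), 2: (20, 15), 3: (30, -30), 5: (40, 40)}  # D, E

for n, (D, E) in packets.items():
    print(f"packet {n}: Q = {D + E - F}us")
# packet 1: Q = 17us   -> queue-4 (CT range [15, 25))
# packet 2: Q = 30us   -> queue-3 (CT range [25, 35))
# packet 3: Q = -5us   -> queue-6 (CT range [-5, 5))
# packet 5: Q = 75us   -> outside the RPQ group shown
```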
According to Section 4.3, an eligible packet (i.e., E = 0) of a
specific delay level, even at the tail of the queue it is inserted
into, is ensured not to exceed its deadline, which is the key role of
the AT factor in the condition equations.  Now, assuming that a
packet is penalized to a lower priority queue based on its positive
E, this penalty will not result in more delay than expected, apart
from the potential delay E.
For example, when a packet is inserted into a queue based on
CT_x <= Q < CT_x + AT
even if it is at the end of the queue, then according to D = Q - E,
i.e., after time E (the penalty time), we have
CT_x - E <= Q - E < CT_x - E + AT
That is
CT_y <= D < CT_y + AT
So, in essence, it is still equivalent to an eligible packet entering
the corresponding queue based on its delay level, and the
schedulability condition applies.
10.1. Packet Disorder Considerations
Suppose that two packets, P1 and P2, are generated instantaneously
from a specific flow at the source, and that the two packets have the
same planned residence time. P1 may face less interference delay
than P2 on its journey. When they arrive in turn at an intermediate
node, P2 will have a smaller allowable queueing delay (Q) than P1, so
that it tries to stay close to P1 again. Note that the comparison of
which packet is earlier is based on the queue's CT and the packet's
Q, according to the queueing rule above (CT <= Q < CT+AT), and that
the CT of a queue does not change in real time but decreases stepwise
with step RTI. It is therefore possible to get an unexpected
comparison result.
As shown in Figure 6, P1 and P2 are two packets belonging to the same
burst. The arrival time when they are received on the scheduler is
shown in the figure. Suppose that the AT of the deadline queue is
10us, the decreasing step RTI is 1us, and the transmission time of
each packet is 0.01us. Also suppose that the Q values of the two
adjacent packets P1 and P2 are 40us and 39us respectively, and that
both are received in the window from T0 to T0+1us. P1 will enter
queue-B with CT range [40, 50), while P2 will enter queue-A with CT
range [30, 40) just before the rotation event occurs. This means
that P2 will be scheduled before P1, resulting in disorder.
packets arrived later
packets arrived earlier |
| |
V V
--------+-----------------------------------------------+---------
... ... | .P1.................P2....................... | ... ...
--------+-----------------------------------------------+---------
P1.Q=40us
P2.Q=39us
| | |
--------o---------------------o---------------------o----------->
T0 T0+1us T0+2us time
queue-A.CT[30,40) queue-A.CT[29,39)
queue-B.CT[40,50) queue-B.CT[39,49)
queue-C.CT[50,60) queue-C.CT[49,59)
Figure 6: Packets queueing based on Latency Compensation
The DetNet architecture [RFC8655] provides the Packet Ordering
Function (POF), which can be used to solve the above disorder problem
caused by latency compensation.
Alternatively, if the POF is not enabled, we can maintain state for
service flows to record the most recent queueing information to
address this issue.
For example, one or more OGOs (order guarantee objects) are
maintained per delay level and incoming port, on each outgoing port.
An OGO records the queueing information, i.e., the queue into which
the packets mapped to this OGO were most recently inserted. For
simplicity, a count-down time (CT), copied from the most recently
used deadline queue, can be recorded in the OGO. Note that the CT of
the OGO must decrease synchronously with that of the other deadline
queues, with the same decreasing step RTI. If the CT of the OGO
decreases to 0, it remains at 0.
When a packet arrives at the deadline scheduler on the outgoing port,
it is first mapped to its OGO to obtain the OGO's CT, termed OGO.CT.
Then, according to the queueing rule above (CT <= Q < CT+AT), the CT
of a preliminarily selected queue is obtained, termed the preliminary
CT.
* Let Q' be MAX{OGO.CT, preliminary CT}, and put the packet into the
  target queue according to CT <= Q' < CT+AT.
* Update OGO.CT to the CT of the target queue.
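The OGO update rule can be sketched as follows. This is a minimal, non-normative Python model: the queue CT values and AT = 10us are taken from the Figure 6 scenario, and queue rotation between the two arrivals is not modeled (the CTs are held static for the illustration).

```python
# Sketch (not normative): order-guarantee-object (OGO) state used to
# avoid the reordering shown in Figure 6.
AT = 10  # CT range width, microseconds

class OGO:
    """Per (delay level, incoming port) state on an outgoing port."""
    def __init__(self):
        self.ct = 0  # CT copied from the most recently used deadline queue

    def tick(self, rti):
        # OGO.CT decreases with the same step RTI as the deadline
        # queues, saturating at 0.
        self.ct = max(0, self.ct - rti)

def enqueue(ogo, queue_cts, Q):
    """Pick the target queue CT for allowable queueing delay Q."""
    # Preliminary selection: CT <= Q < CT + AT.
    prelim = max(ct for ct in queue_cts if ct <= Q < ct + AT)
    # Never select an earlier queue than the last packet of this OGO used.
    q_prime = max(ogo.ct, prelim)
    target = max(ct for ct in queue_cts if ct <= q_prime < ct + AT)
    ogo.ct = target  # remember for the next packet
    return target

# Figure 6 scenario: P1 (Q=40us) picks the CT-40 queue; without the
# OGO, P2 (Q=39us) would pick the CT-30 queue and be sent first.
ogo = OGO()
cts = [30, 40, 50]
assert enqueue(ogo, cts, 40) == 40  # P1 -> queue-B
assert enqueue(ogo, cts, 39) == 40  # P2 forced into the same queue as P1
```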
Note that in practical situations, two back-to-back packets of the
same flow are generally spread evenly across the burst interval by
policing, which means that the spacing between the two packets is
generally much greater than the calculation accuracy mentioned above,
so the reordering described here will rarely occur.
11. Resource Reservation
Generally, a path may carry multiple service flows with different
delay levels. For a certain delay level d_i, the path will reserve
some resources from the delay resource pool of the link. The delay
resource pool here, as leaky bucket constraint function shown in
Section 3.2.1 or Section 4.3.2, is a set of preset parameters that
meet the schedulability conditions. For example, the level d_1 has a
burst upper limit of b_1 and a bandwidth upper limit of r_1. A path
j may allocate partial resources (b_i_j, r_i_j) from the resource
quota (b_i, r_i) of the link's delay level d_i. A service flow k
carried in path j may use resources (b_i_j_k, r_i_j_k) according to
its T-SPEC. It can be seen that the values of b_i_j and r_i_j
determine how many paths can be supported, while the values of
b_i_j_k and r_i_j_k determine how many services can be supported.
The following expressions hold:
* b_i_j >= sum(b_i_j_k), for all service k over the path j.
* r_i_j >= sum(r_i_j_k), for all service k over the path j.
* b_i >= sum(b_i_j), for all path j through the specific link.
* r_i >= sum(r_i_j), for all path j through the specific link.
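These inequalities can be checked directly at admission time. A minimal, non-normative sketch; the numbers and units (bits of burst, Mbit/s of bandwidth) are illustrative assumptions, not values from this document:

```python
# Sketch (not normative): admission checks for the reservation
# hierarchy above.  Each tuple is (burst, bandwidth).

def admit_flow(b_i_j, r_i_j, flows):
    """Flows fit in path j's quota iff both sums stay within it."""
    return (sum(b for b, r in flows) <= b_i_j and
            sum(r for b, r in flows) <= r_i_j)

def admit_path(b_i, r_i, paths):
    """All paths through the link fit in the level-d_i pool (b_i, r_i)."""
    return (sum(b for b, r in paths) <= b_i and
            sum(r for b, r in paths) <= r_i)

# Link pool for level d_i: 12000 bits of burst, 100 Mbit/s bandwidth.
assert admit_path(b_i=12000, r_i=100, paths=[(6000, 40), (4000, 30)])
# Path j holds (6000, 40); two flows of (2000, 15) fit, a third does
# not, because the summed rate 45 would exceed r_i_j = 40.
assert admit_flow(6000, 40, [(2000, 15), (2000, 15)])
assert not admit_flow(6000, 40, [(2000, 15), (2000, 15), (2000, 15)])
```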
11.1. Delay Resource Definition
The delay resources of a link can be represented as the corresponding
burst and bandwidth resources for each delay level. Basically, the
delay levels (e.g., 10us, 20us, 30us) supported by a link should be
included in the link capability.
Figure 7 shows the delay resource model of the link. The resource
information of each delay level includes the following attributes:
* Delay Bound: Refers to the intra-node delay bound corresponding to
  this delay level. It is a pre-configured value.
* Maximum Reservable Bursts: Refers to the maximum bit quota
  corresponding to this delay level. It is a pre-configured value
  based on the schedulability condition.
* Unreserved Bursts: Refers to the amount of bits reservable (i.e.,
free amount) corresponding to this delay level.
* Maximum Reservable Bandwidth: Refers to the maximum amount of
  bandwidth corresponding to this delay level. It is a pre-
  configured value based on the schedulability condition.
* Unreserved Bandwidth: Refers to the amount of bandwidth reservable
(i.e., free amount) corresponding to this delay level.
d_n +----------------------------------------+
| Maximum Reservable Bursts: MRBu_n |
| Unreserved Bursts: UBu_n |
| Maximum Reservable Bandwidth: MRB_n |
| Unreserved Bandwidth: UB_n |
+----------------------------------------+
... ... ...
... ... ...
d_2 +----------------------------------------+
| Maximum Reservable Bursts: MRBu_2 |
| Unreserved Bursts: UBu_2 |
| Maximum Reservable Bandwidth: MRB_2 |
| Unreserved Bandwidth: UB_2 |
+----------------------------------------+
d_1 +----------------------------------------+
| Maximum Reservable Bursts: MRBu_1 |
| Unreserved Bursts: UBu_1 |
| Maximum Reservable Bandwidth: MRB_1 |
| Unreserved Bandwidth: UB_1 |
+----------------------------------------+
----------------------------------------------------------->
Delay Resource of the Link
Figure 7: Delay Resource Model of the Link
The IGP/BGP extensions to advertise the link's capability and delay
resources are defined in
[I-D.peng-lsr-deterministic-traffic-engineering].
11.2. Traffic Engineering Path Calculation
A candidate path may be selected according to the end-to-end delay
requirement of the flow. Subtract the accumulated link propagation
delay from the end-to-end delay requirement, and then divide by the
number of hops to obtain the average planned residence time (D) for
each node. By default, select the largest delay level d_i satisfying
d_i <= D - F, i.e., the level closest to the average planned
residence time (D), and then reserve resources at delay level d_i on
each hop. However, a local policy may allow a larger D to consume
resources of smaller delay levels.
Note that it is D, not d_i, carried in the forwarding packets.
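The selection procedure above can be sketched as follows. This is non-normative; the level set, budget values, and forwarding delay F = 5us are illustrative assumptions.

```python
# Sketch (not normative): derive the per-node planned residence time
# D and pick a delay level, following the procedure above.
F = 5  # intra-node forwarding delay, microseconds (assumed)

def plan_residence_time(e2e_budget, propagation_delay, hops):
    """Average per-node planned residence time D."""
    return (e2e_budget - propagation_delay) / hops

def select_delay_level(D, levels, forwarding_delay=F):
    """Largest level d_i with d_i <= D - F (closest to D from below)."""
    eligible = [d for d in levels if d <= D - forwarding_delay]
    return max(eligible) if eligible else None

levels = [10, 20, 30]  # supported delay levels, microseconds (assumed)
# 500us end-to-end budget, 200us accumulated propagation, 10 hops:
D = plan_residence_time(e2e_budget=500, propagation_delay=200, hops=10)
assert D == 30
assert select_delay_level(D, levels) == 20  # 20 <= 30 - 5, 30 is not
```

Note that it is D, not the selected d_i, that is carried in the forwarded packets, as stated above.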
12. Admission Control on the Ingress
On the ingress PE node, traffic regulation must be performed on the
incoming port so that the service traffic does not exceed its T-SPEC.
This regulation is usually leaky bucket shaping combined with an
incoming queue that receives the service traffic. A service may
generate multiple discrete bursts within its periodic service burst
interval.
According to [RFC9016], the values of Burst Interval,
MaxPacketsPerInterval, and MaxPayloadSize of the service flow are
written in the SLA between the customer and the network provider.
The network entry node sets the corresponding bucket depth according
to MaxPayloadSize to forcibly delay excess bursts, and sets the
corresponding bucket rate according to the arrival rate, which can be
calculated.
Leaky bucket shaping essentially distributes all the bursts evenly
within the service burst interval, which may be inconsistent with the
original arrival curve of the service flow. Therefore, some bursts
within the service burst interval may face additional shaping delay.
For example, suppose the head of the service burst interval contains
two discrete bursts of the same size, but the bandwidth reserved by
the service is very small (i.e., total burst size / burst interval).
Assuming that the bucket depth is the size of a single burst, the
shaping delay faced by the second burst is approximately half of the
service burst interval.
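This worked example can be checked numerically. A sketch under the stated assumptions (two equal bursts, bucket depth of one burst, rate equal to total burst size / burst interval); the concrete sizes are illustrative:

```python
# Sketch (not normative): shaping delay of the second of two equal
# back-to-back bursts under a leaky bucket with depth = one burst.

def second_burst_shaping_delay(burst_size, rate):
    """With the bucket emptied by burst 1, burst 2 must wait for
    burst_size worth of tokens to accumulate at `rate`."""
    return burst_size / rate

# Two 1000-bit bursts in a 250us interval: the reserved rate is
# total burst / interval = 2000 / 250 = 8 bits/us.
delay = second_burst_shaping_delay(burst_size=1000, rate=2000 / 250)
assert delay == 125  # approximately half of the 250us burst interval
```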
The shaped curve can be made as consistent as possible with the
original arrival curve by increasing the bucket depth, minimizing the
shaping delay of each burst, but this means that the service occupies
more burst resources and reduces the service scale that the network
can support according to the schedulability conditions, unless
customers are willing to pay for a larger burst.
On the entry node, for a burst that faces shaping delay, the shaping
delay cannot be included in the latency compensation equation;
otherwise, that burst would catch up with the previous burst,
damaging the shaping result and violating the arrival constraint
function.
Then, the regulated traffic arrives at the deadline scheduler on the
outgoing port. Since the traffic of each delay level meets the leaky
bucket arrival constraint function and the parameters of the shaping
curve do not exceed the limits of the parameters provided by the
schedulability conditions, the traffic can be successfully scheduled.
Note that the flow sent from the deadline scheduler of the headend to
the next hop still follows the arrival constraint function of the
path after reshaping or latency compensation on the next hop. Then,
on the next hop, when concurrent flows received from multiple paths
are aggregated to the same outgoing port for transmission: within any
d_1 duration, the aggregated d_1 traffic will not exceed the burst
resources of delay level d_1 reserved by these paths on the outgoing
port, and each aggregated d_i traffic will not exceed the bandwidth
resources of delay level d_i reserved by these paths on the outgoing
port. Similarly, within any d_2 duration, the aggregated d_2 traffic
will not exceed the burst resources of level d_2 reserved by these
paths on the outgoing port, and each aggregated d_i traffic will not
exceed the bandwidth resources of level d_i reserved by these paths
on the outgoing port, and so on.
Figure 8 depicts an example of deadline-based traffic being regulated
and scheduled on the ingress PE node in the case of option-4. In the
figure, the shaping delay caused by the previous burst is termed S#,
and the forwarding delay is termed F.
1st burst
|
received v
+-+ +-+ +----+ +-+ +--+ +------+
|1| |2| | 3 | |4| |5 | | 6 | <= burst sequence
+-+ +-+ +----+ +-+ +--+ +------+
| | | | | |
~+0 ~+S1 ~+0 ~+S3~+S4 ~+0
~+F ~+F ~+F ~+F ~+F ~+F
| | | | | |
UNI v v v v v v
ingr-PE -+--------+--------+--------+--------+--------+--------+---->
NNI | Auth | Auth | Auth | Auth | Auth | Auth | time
| time | time | time | time | time | time |
1,2 in 3 in 4 in 5 in 6 in
Queue-A Queue-B Queue-C Queue-D Queue-E
(CT<=Q) (CT<=Q) (CT<=Q) (CT<=Q) (CT<=Q)
| | | | |
~+Q ~+Q ~+Q ~+Q ~+Q
| | | | |
sending v v v v v
+-+ +-+ +----+ +-+ +--+ +------+
|1| |2| | 3 | |4| |5 | | 6 |
+-+ +-+ +----+ +-+ +--+ +------+
Figure 8: Deadline Based Packets Orchestrating
There are 6 bursts received from the client. Bursts 2, 4, and 5 have
regulation delays S1, S3, and S4, respectively, caused by the
previous burst, while bursts 1, 3, and 6 have zero regulation delay
because the number of tokens is sufficient. The regulation leaves
the 6 bursts roughly evenly distributed within the service burst
interval.
Suppose that each burst passes through the same intra-node forwarding
delay F and arrives at the deadline scheduler in turn. In the case
of latency compensation plus RPQ, the bursts will have the same
allowable queueing delay (Q), regardless of whether they experienced
shaping delay before. When the packets of bursts 1 and 2 arrive at
the scheduler, according to CT <= Q < CT+AT, they are placed in
Queue-A with matching CT and wait to be sent. Similarly, when the
packets of bursts 3/4/5/6 arrive at the scheduler, they are placed in
Queues B/C/D/E respectively and wait to be sent.
13. Overprovision Analysis
For each delay level d_i, the delay resource of the specific link is
(b_i, r_i). A path j may allocate partial resources (b_i_j, r_i_j)
from the resource pool (b_i, r_i). In order to support more d_i
services in the network, it is necessary to set larger b_i and r_i.
However, as mentioned earlier, the values of b_i and r_i are set
according to schedulability conditions and cannot be modified at
will. Therefore, the meaningful analysis is the service scale that
the network can support under the premise of determined b_i and r_i.
For bandwidth resource reservation case, the upper limit of the total
bandwidth that can be reserved for all aggregated services of delay
level d_i is r_i, which is the same as the behavior of traditional
bandwidth resource reservation. There is no special requirement for
the measurement interval of calculating bandwidth value.
For the burst resource reservation case, the upper limit of the total
burst that can be reserved for all aggregated services of delay level
d_i is b_i. If the burst of each service of level d_i is b_k, then
the number of services that can be supported is b_i/b_k, which
reflects the worst case of these service flows arriving concurrently.
However, burst resource reservation is independent of bandwidth
resources, i.e., it does not take the calculation result of b_k/d_i
as an overprovisioned bandwidth that then reduces the reservable
bandwidth resources.
By providing multiple delay levels, we can allocate 100% of the link
bandwidth to deterministic services, as can be seen from the
schedulability condition equation.
14. Compatibility Considerations
The deadline mechanism is suitable for end-to-end use and for
interconnection between different networks. A large-scale network
may span multiple domains, and one of the goals of DetNet is to
connect network domains to provide end-to-end deterministic delay
service. The techniques and capabilities adopted by each network
differ, and the corresponding topology models are either piecewise or
nested.
For a particular path, if only some nodes in the path are upgraded to
support the deadline mechanism defined in this document, the end-to-
end deterministic delay/jitter target will only be partially
achieved. Legacy devices may adopt the existing priority-based
scheduling mechanism and ignore any deadline information carried in
the packet; thus the intra-node delay they produce cannot be
perceived by the adjacent upgraded node. The more upgraded nodes
included in the path, the closer the path comes to the delay/jitter
target.
Although legacy devices may not support the data plane mechanism
described in this document, they can be freely programmed (e.g., in
the P4 language) to measure and insert the deadline information into
packets; in this case the delay/jitter target may still be achieved.
Upgrading only a few key nodes to support the deadline mechanism is
low-cost, yet can satisfy services with relatively loose time
sensitivity. Figure 9 shows an example of upgrading only several
network border nodes. In the figure, only R1, R2, R3 and R4 are
upgraded to support the deadline mechanism. A deterministic path
across domains 1, 2, and 3 is established, which contains nodes R1,
R2, R3, and R4, as well as explicit nodes in each domain. Domains 1,
2 and 3 use the traditional strict-priority-based forwarding
mechanism. The encoding of the packet sent by R1 includes the
planned residence time and the accumulated residence time deviation.
In particular, the DS field in the IP header ([RFC2474]) is also set
to an appropriate value. The basic principle is that the smaller the
planned residence time, the higher the priority. To avoid
interference from non-deterministic flows, the priority of
deterministic flows should be set as high as possible.
The delay analysis based on strict priority without re-shaping in
each domain can be found in [SP-LATENCY], which gives the equation to
evaluate the worst-case delay of each hop during the resource
reservation procedure. The worst-case delay per hop depends on the
number of hops and the burst size of the interfering flows that may
be encountered on each hop. [EF-FIFO] also shows that, when FIFO
packet scheduling is used to support the EF (expedited forwarding)
per-hop behavior (PHB), if the network utilization level alpha <
1/(H-1), the worst-case delay bound is inversely proportional to
1 - alpha*(H-1), where H is the number of hops in the longest path of
the network.
Although EDF scheduling in in-time mode, SP scheduling and EF FIFO
scheduling are all work-conserving, EDF scheduling can further
distinguish between urgent and non-urgent packets according to
deadline information rather than traffic class alone. Therefore,
when analyzing the latency of EDF scheduling, the latency is
evaluated not only according to the order in which packets arrive at
the scheduler, but also according to the deadlines of the packets.
An intuitive consequence is that if a packet unfortunately faces more
interference delay at upstream nodes, it becomes more urgent at
downstream nodes, and will not always be unfortunate. This operation
of dynamically modifying key fields of the packet, e.g., the existing
accumulated residence time deviation (E), avoids always
overestimating the worst-case latency on all hops as SP does.
According to the schedulability condition, the worst-case latency per
hop is d_i.
When a border node (e.g., R2) receives the deterministic traffic, it
obtains the packet's rank according to the existing accumulated
residence time deviation information carried in the packet, and sends
it as soon as possible. For a specific deterministic flow, if it
experiences too much latency in the SP domain (due to an unreasonable
setting of the DS field and the inability to distinguish between
deterministic and non-deterministic flows), even if the boundary node
accelerates the transmission, it may not be able to achieve the
target of low E2E latency. If the traffic experiences less latency
within the SP domain, on-time mode may be applied on the egress node
to achieve the end-to-end jitter target.
_____ ___ _____ ___ _____ ___
/ \/ \___ / \/ \___ / \/ \___
/ \ / \ / \
+--+ +--+ +--+ +--+
|R1| Strict Priority |R2| Strict Priority |R3| Strict Priority |R4|
+--+     domian 1    +--+    domain 2     +--+    domain 3     +--+
\____ __/ \____ __/ \____ __/
\_______/ \_______/ \_______/
Figure 9: Example of partial upgrade
15. Deployment Considerations
According to the above schedulability conditions, the delay levels
(e.g., d_i) that can be provided in the network are related to the
entire set of deployed service flows. Each delay level d_i has
independent delay resources, and the smaller d_i is, the more
valuable it is. The operator needs to match the corresponding d_i to
each service.
When option-3 with in-time mode is chosen, the PIFO needs to be
designed with a large depth to store accumulated bursts. Similarly,
when option-4 with in-time mode is chosen, more deadline queues are
needed to store accumulated bursts.
The accumulated bursts on an intermediate node consist of flows from
multiple rounds of the burst interval; for example, the flow
generated by the source within the first round of the burst interval
(always experiencing the worst-case delay along the path) is caught
up by the flow generated within the second round of the burst
interval (always experiencing the best-case delay along the path).
For delay level d_i, the worst-case delay is d_i and the best-case
delay is l/R, where l is the smallest packet size of the flow and R
is the port rate. For simplicity, to estimate the size of the
accumulated bursts, we take the best-case delay as 0. Drawing on the
method provided in [SP-LATENCY], the accumulated bursts of d_i are:
ACC_BUR_i = ((d_i * h) / burst_interval) * b_i
For example, if d_i is 10us and burst_interval is 250us, then up to
the 25th hop there will be only one b_10 burst in the queue. Beyond
25 hops and within 50 hops, there may be two b_10 bursts in the queue
simultaneously.
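The estimate above can be sketched as follows. This is non-normative; rounding the ratio up to a whole number of bursts is our interpretation of "within the 25th hop", not an equation from this document.

```python
# Sketch (not normative): buffer sizing per the accumulated-burst
# estimate ACC_BUR_i = ((d_i * h) / burst_interval) * b_i, with the
# best-case path delay approximated as 0 as in the text.
import math

def accumulated_bursts(d_i, hops, burst_interval, b_i):
    """Worst-case accumulated burst volume at hop `hops`: the number
    of co-resident b_i bursts (rounded up) times the burst size."""
    return math.ceil((d_i * hops) / burst_interval) * b_i

# d_i = 10us, interval = 250us: one burst up to hop 25, two bursts
# between hop 26 and hop 50.
assert accumulated_bursts(10, 25, 250, b_i=1) == 1
assert accumulated_bursts(10, 50, 250, b_i=1) == 2
```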
The accumulated bursts of other delay levels can be similarly
estimated. Operators need to evaluate the required buffer size based
on network hops and the supported delay levels.
Operators may also use on-time scheduling mode to simplify buffer
design. In theory, on-time scheduling mode bounds the jitter of each
delay level by the value of that delay level (the worst case being
that, on the egress node, full traffic contributed by all delay
levels drains at the same time). In practice, however, the output
port facing the destination customer side may carry only one delay
level, in which case the jitter may be only one AT (e.g., 10us).
16. Evaluations
This section evaluates the Deadline mechanism against the
requirements defined in [I-D.ietf-detnet-scaling-requirements].
+======================+============+===============================+
| Requirements         | Evaluation | Notes                         |
+======================+============+===============================+
| 3.1 Tolerate Time    | Partial    | No time synchronization is    |
|     Asynchrony       |            | needed, but frequency sync is.|
+----------------------+------------+-------------------------------+
| 3.2 Support Large | | The eligibility arrival of |
| Single-hop | Yes | flows is independent with the |
| Propagation | | link propagation delay. |
| Latency | | |
+----------------------+------------+-------------------------------+
| 3.3 Accommodate the | | The higher service rate, the |
| Higher Link | Partial | more buffer needed for each |
| Speed | | delay level. And, extra |
| | | instructions to calculate E. |
+----------------------+------------+-------------------------------+
| 3.4 Be Scalable to | | Multiple delay levels, each |
| the Large Number | | with limited delay resources, |
| of Flows and | | can support lots of flows, |
| Tolerate High | Yes | without overprovision. |
| Utilization | | Utilization may reach 100% |
| | | link bandwidth. |
|                      |            | The unused bandwidth of the   |
|                      |            | high levels can be used by    |
|                      |            | the low levels or BE flows.   |
+----------------------+------------+-------------------------------+
| 3.5 Tolerate Failures| | Independent of queueing |
| of Links or Nodes| N/A | mechanism. |
| and Topology | | |
| Changes | | |
+----------------------+------------+-------------------------------+
| 3.6 Prevent Flow | | Flows are permitted based on |
| Fluctuation | Yes | the resources reservation of |
| | | delay levels, and isolated |
| | | from each other. |
+----------------------+------------+-------------------------------+
| 3.7 Be scalable to a |            | E2E latency is linear with    |
|     Large Number of  |            | hops, from ultra-low to low   |
| Hops with Complex| Yes | latency by multiple delay |
| Topology | | levels. |
| | | E2E jitter is low by on-time |
| | | mode. |
+----------------------+------------+-------------------------------+
| 3.8 Support Multi- | | Independent of queueing |
| Mechanisms in | N/A | mechanism. |
| Single Domain and| | |
| Multi-Domains | | |
+----------------------+------------+-------------------------------+
Figure 10: Evaluation Results
17. IANA Considerations
This document makes no request of IANA.
18. Security Considerations
Security considerations for DetNet are described in detail in
[RFC9055]. General security considerations for the DetNet
architecture are described in [RFC8655]. Considerations specific to
the DetNet data plane are summarized in [RFC8938].
Adequate admission control policies should be configured at the edge
of the DetNet domain to control access to specific delay resources.
Access to classification and mapping tables must be controlled to
prevent misbehavior; e.g., an unauthorized entity might modify a
table to map traffic to an expensive delay resource, competing and
interfering with normal traffic.
19. Acknowledgements
TBD
20. References
20.1. Normative References
[I-D.ietf-detnet-scaling-requirements]
Liu, P., Li, Y., Eckert, T. T., Xiong, Q., Ryoo, J.,
zhushiyin, and X. Geng, "Requirements for Scaling
Deterministic Networks", Work in Progress, Internet-Draft,
draft-ietf-detnet-scaling-requirements-03, 7 July 2023,
<https://datatracker.ietf.org/doc/html/draft-ietf-detnet-
scaling-requirements-03>.
[I-D.peng-6man-deadline-option]
Peng, S., Tan, B., and P. Liu, "Deadline Option", Work in
Progress, Internet-Draft, draft-peng-6man-deadline-option-
01, 11 July 2022, <https://datatracker.ietf.org/doc/html/
draft-peng-6man-deadline-option-01>.
[I-D.peng-lsr-deterministic-traffic-engineering]
Peng, S., "IGP Extensions for Deterministic Traffic
Engineering", Work in Progress, Internet-Draft, draft-
peng-lsr-deterministic-traffic-engineering-01, 4 July
2023, <https://datatracker.ietf.org/doc/html/draft-peng-
lsr-deterministic-traffic-engineering-01>.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<https://www.rfc-editor.org/info/rfc2119>.
[RFC2212] Shenker, S., Partridge, C., and R. Guerin, "Specification
of Guaranteed Quality of Service", RFC 2212,
DOI 10.17487/RFC2212, September 1997,
<https://www.rfc-editor.org/info/rfc2212>.
[RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
"Definition of the Differentiated Services Field (DS
Field) in the IPv4 and IPv6 Headers", RFC 2474,
DOI 10.17487/RFC2474, December 1998,
<https://www.rfc-editor.org/info/rfc2474>.
[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
and W. Weiss, "An Architecture for Differentiated
Services", RFC 2475, DOI 10.17487/RFC2475, December 1998,
<https://www.rfc-editor.org/info/rfc2475>.
[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC8655] Finn, N., Thubert, P., Varga, B., and J. Farkas,
"Deterministic Networking Architecture", RFC 8655,
DOI 10.17487/RFC8655, October 2019,
<https://www.rfc-editor.org/info/rfc8655>.
[RFC8938] Varga, B., Ed., Farkas, J., Berger, L., Malis, A., and S.
Bryant, "Deterministic Networking (DetNet) Data Plane
Framework", RFC 8938, DOI 10.17487/RFC8938, November 2020,
<https://www.rfc-editor.org/info/rfc8938>.
[RFC9016] Varga, B., Farkas, J., Cummings, R., Jiang, Y., and D.
Fedyk, "Flow and Service Information Model for
Deterministic Networking (DetNet)", RFC 9016,
DOI 10.17487/RFC9016, March 2021,
<https://www.rfc-editor.org/info/rfc9016>.
[RFC9055] Grossman, E., Ed., Mizrahi, T., and A. Hacker,
"Deterministic Networking (DetNet) Security
Considerations", RFC 9055, DOI 10.17487/RFC9055, June
2021, <https://www.rfc-editor.org/info/rfc9055>.
[RFC9320] Finn, N., Le Boudec, J.-Y., Mohammadpour, E., Zhang, J.,
and B. Varga, "Deterministic Networking (DetNet) Bounded
Latency", RFC 9320, DOI 10.17487/RFC9320, November 2022,
<https://www.rfc-editor.org/info/rfc9320>.
20.2. Informative References
[EDF-algorithm]
"A framework for achieving inter-application isolation in
multiprogrammed, hard real-time environments", 1996,
<https://ieeexplore.ieee.org/document/896011>.
[EF-FIFO] "Fundamental Trade-Offs in Aggregate Packet Scheduling",
2001, <https://ieeexplore.ieee.org/document/992892>.
[IR-Theory]
"A Theory of Traffic Regulators for Deterministic Networks
with Application to Interleaved Regulators", 2018,
<https://ieeexplore.ieee.org/document/8519761>.
[Net-Calculus]
"Network Calculus: A Theory of Deterministic Queuing
Systems for the Internet", 2001,
<https://leboudec.github.io/netcal/latex/netCalBook.pdf>.
[P802.1DC] "Quality of Service Provision by Network Systems", 2023,
<https://1.ieee802.org/tsn/802-1dc/>.
[PIFO] "Programmable Packet Scheduling at Line Rate", 2016,
<https://dl.acm.org/doi/pdf/10.1145/2934872.2934899>.
[RPQ] "Exact Admission Control for Networks with a Bounded Delay
Service", 1996,
<https://ieeexplore.ieee.org/document/556345/>.
[SP-LATENCY]
"Guaranteed Latency with SP", 2020,
<https://ieeexplore.ieee.org/document/9249224>.
[UBS] "Urgency-Based Scheduler for Time-Sensitive Switched
Ethernet Networks", 2016,
<https://ieeexplore.ieee.org/abstract/document/7557870>.
Authors' Addresses
Shaofu Peng
ZTE Corporation
China
Email: peng.shaofu@zte.com.cn
Zongpeng Du
China Mobile
China
Email: duzongpeng@foxmail.com
Kashinath Basu
Oxford Brookes University
United Kingdom
Email: kbasu@brookes.ac.uk
Zuopin Cheng
New H3C Technologies
China
Email: czp@h3c.com
Dong Yang
Beijing Jiaotong University
China
Email: dyang@bjtu.edu.cn
Chang Liu
China Unicom
China
Email: liuc131@chinaunicom.cn