Minutes for AQM at interim-2014-aqm-1

Meeting Minutes, Active Queue Management and Packet Scheduling (aqm) WG
Last updated 2014-07-04

   Meeting minutes of the AQM conference call
24 June 2014, 13:00 Eastern Daylight Time

Participants:

Spencer Dawkins (responsible AD)
Wes Eddy (WG co-chair)
Richard Scheffenegger (WG co-chair)
Dave Robinson
John Leslie
Nicolas Kuhn
Dave Täht
Greg White
Fred Baker
Gorry Fairhurst
Rong Pan
Preethi Natarajan
Anil Agarwal


2 unidentified parties

Agenda Bashing
- no objections

Discussion of aqm-recommendations
*) References to internet congestion collapse in the introduction
- consensus in the WG that latency reduction is the main objective 
- as this is an update to a BCP, need to keep historic references

Dave: A lot of WiFi experiences congestion collapse, but at L2; AQM doesn't
handle that.

Slide - Comments 1: 

Dave: Channeling Bob, looking at the 1st para: put a period after the 1st
sentence and cut until the 2nd sentence of the 1st para.

Gorry: Torn between preserving the original text, or just an intro to what we
have to say.

John: Is it possible to talk without reference to congestive collapse in this
document?

Dave: it's the purpose of the end to end protocol to be sensitive to congestion 
collapse. AQM is not there to prevent that.

Fred: It's not there to fix, but to help.

John: Congestion collapse had to do with sites sending more when there was
congestion. Hence congestion collapse.

Dave: I was kind of proposing putting a period after service degradation, and 
removing the internet meltdown and related stuff to a later portion of the 
document.

Wes: That may be ok, but we need a bit of context as this grew out of 2309, 
which was more about routers participating in actively preventing congestion 
collapse. And now the goals of AQM have morphed additionally to reduce latency. 
Tracing that bit of history would help tie things together. 

John: Good point, but it doesn't belong in the introduction.

Dave: Control latency, should be predominant in the document.

Gorry: we need more text to do this. Not everyone has the same view. We started 
off from 2309, and if we change this we have to be careful.

Fred: It would be good for the WG to decide what they want this draft to say. 
Give us text in order to say that and decide it's done. Doc already in the works 
for more than 1 year.

Wes: The meat of the doc is not disputed, which indicates it is pretty good.
We just need to walk people into this.

Dave: Has anyone given this to a CTO and asked what they think? They are the 
ultimate audiences for this.

Gorry: Equally we haven't given this to a PhD student to 

John: I suppose I'm a CTO. I have to admit, I don't understand it very well.

Dave: That's one. A goal of this document is to convince people that we need
AQM on everything. It needs to have a hook.

John: I thought the hook had to do with reducing latency, but maybe I'm
misreading it.

Dave: No, the hook is latency, and not congestive collapse.

Wes: The hook has evolved over time, from congestion collapse in 2309 to what
it is now, latency. State in a sentence that this has changed. If we do this
early on in the document, it helps people not get confused when it talks about
both things.

Wes: I could suggest some text that I think addresses what Dave and John, and
maybe even Bob, have said. At least it should help drive the discussion to a
conclusion.

Gorry: Text would be great.

==> Wes to provide text for the intro.

Slide: Comments 2

Gorry: I think we covered that just now.

Wes: This is at the extreme end: take all the stuff about congestion collapse
out. It's strongly related to what we just discussed. There needs to be a
bridge between the 2309 focus on collapse and what we are now working on. If
we get the first one right, we get this one for free.

Dave: This is my hope too.

Gorry: I think so too. We have changed the wording in certain places, and it
will read ok with a better introduction.

Dave: I have a problem in that I disagree with Bob on this front. I agree
strongly with moving the focus. Having a separate historical section that
talks a bit about congestion collapse is fine by me for fixing the intro.

Gorry: A separate section, involving which text?

Dave: As I mentioned earlier, moving that part of the text into a historical
section. If Bob cares about this kind of text, he should provide the text.

Dave: Do we want to have a historical section or a section that talks to 
congestion collapse later in the document?

Fred: What might make sense is a change in the "what problem are we solving"
section, so that it is two parts. One of them being that in the earlier
Internet, on slower links and when under attack, we are talking about
congestive collapse. Describe what it is, and how that deals with things. And
separately describe the issue of latency.

Dave: When we have some DoS attack, some level of filtering redirects that
traffic, and it is not handled by an AQM. Nothing scary in the intro seems to
be simpler, thinking about the intended audience. I signed off on the document
as it was.

Wes: You are just trying to tweak it, not block it.

Fred: What would really help is constructive assistance. Nothing we do seems
to satisfy them.

Dave: My take on this: rework the intro and get this for free.

Gorry: I have no concerns splitting the intro into 2 sections speaking about 
congestion collapse and latency if that was clearer.

Richard: We shouldn't really remove the prior text about congestion collapse,
as this is an update to a prior document. I think separating this into two
sections is a good way forward.

Gorry: Sounds like a plan.

Wes: Sounds good.

Fred: If we can have this in the next 2 weeks, we can have this in time for the 
next IETF

Wes: I commit to sending this later this week.

- Slide Comment flow fairness

Dave: There is some new text somewhere. 

Gorry: We addressed this, it's sorted.

- Slide 2a AQM must work without fairness mechanism

Richard: Is this sorted out as well?

Gorry: If the SFQ_Codel people are happy with the changes, I'm happy with them. 
They should be happy.

Dave: There are quite a few opinions here. The text reads well. The specific
complaint here was that section 4 was not successful.

Wes: In the DIFF there is a significant new addition. There is a bullet in sec 
4.1, that is "allow for combination with other mechanisms"

Fred: Which they always have.

Wes: I like the idea of dealing with that in a separate document (queuing)
that the WG might adopt later on. Keep this draft as it is.

Dave: There is a typo... hard to spot. Early in the doc there is a list of all
the different mechanisms. To me they are an important component of a solution.
They are very different from fair queuing; I don't know where that should be
discussed.

Fred: I think they are separate discussions. They address a different class of
problem. I'm fond of schedulers, but they are not AQM.

Dave: Agree; e.g. scheduling for rate limiting needs to be in a different doc.

Fred: Scheduling for rate limitation makes it non-work-conserving, vs.
work-conserving queuing by class. To me, what you want is a separate document
that describes schedulers, and that is one attribute that a scheduler might
have. There is an entire list of attributes that a scheduler might have.

Dave: From my view, this is the most desired feature to differentiate the
services people are selling. It's not relevant to the AQM discussion. I just
finished reviewing this. I'm completely happy here. I brought it up because
rate limitation wasn't in the 2nd para of sec 4.1.

Gorry: Do we just want to add "and can be used for rate limiting"?

Fred: I'm not opposed; I can probably come up with a long list. My question
is, what do the words "such as" mean here... It seems fairly open ended as it
is. Where do you stop? But I'm not opposed.

Dave: I'm going to request: Rate Limitation, Priority Queuing, Classful Queuing, 
Fair Queuing.

Fred: Ok. If we agree that after that we are done, then fine.

Dave: Is there not a SHOULD here?

Gorry: Does this section need RFC2119 keywords in it?

Dave: I believe AQMs MUST allow a tie in with scheduling algorithms.

Fred: I would disagree with that. To me, MUST means: if you don't do that,
something will break, and what breaks goes in the next sentence. If I would
really like to say that, but can't tell what may break, I use the word SHOULD.

Dave: Fine. I think the 2nd bullet is enough.

Gorry: I think so too, and I tried to avoid RFC2119 keywords for the reasons
mentioned by Fred.

Dave: SFQ_Codel defers a few ideas to the SFQ portion that could be done in
the AQM, but are better done in the SFQ portion.

Dave: I don't see anything else here.

Wes: I think we got consensus on the call.

Dave: I do have a structural point about conclusions and recommendations, when
you get to 4.2.1. It's kind of descriptive of what ECN is. The doc seems to
describe what ECN is elsewhere, and turns it into a recommendation here. Does
that make sense?

Fred: The recommendation here is the 1st sentence of the 3rd para: Please be 
able to do it.

Dave: Ok, and this is why.

Fred: Pretty much. ECN and drop as a signal are pretty comparable. The
difference is that ECN doesn't have a loss... The measured effect on traffic
is about the same.

Richard: A point Bob was very adamant about was a recommendation that if an
AQM does both ECN and drop, the parameters should be configurable separately.

Dave: There was a significant penalty in CPU cycles to actually do that. So we 
didn't do it.

Fred: That's a particular implementation. I don't know how that relates to a
recommendation to somebody else who implements this in the future. By the way,
I believe we address the separate configuration in the penultimate paragraph.

Dave: that captures the point from my perspective. I don't think that we will 
end up with anything in the codebase that will allow this.

Gorry: That's ok; this is a BCP, which is about the future. It's not about
documenting what has been going on.

Fred: I can say that in our codebase, we have had separable configuration for
15 years. There is more than one codebase here.

Dave: As long as it's a MAY I'm fine.

Richard: I also think this section addresses Bob's concern.

- Slide 3 Comment: Scope

Gorry: We added one, at the beginning.

Dave: I believe AQM technology is needed wherever you have a fast-to-slow
transition.

Gorry: A bottleneck.

Fred: I would further argue that a fast to slow transition can be many inputs of 
one speed to a single output of the same speed. We see that inside input-queued 
switches.

John: I apologize, but I just don't understand what you have as a statement of 
scope. I have yet to see version 5. I started this call very confused about 
statement of scope and I'm not doing any better.

Wes: At the beginning of Sec 2, there is a new paragraph. I think it was added
to address this.

Fred/Gorry: Yes.

Wes: John, when you have had time to read that section, let us know if it
clarifies the situation or not.

John: At first read, it leaves me wondering where the line gets drawn. This is
a very general statement. I guess you are saying it applies to wireless
situations as well as Ethernet and other mechanisms. But it doesn't make clear
to me that it's about the case where you have more data coming into the line
than you can send out. I think this is what you want to say, but it doesn't
say it anywhere.

Dave: We need a set of paragraphs early on, that make people want to go out and 
enable this everywhere. 

John: Do we say anywhere, to that effect, that more bits come in than go out?

Fred: I don't think we want to say it in those terms. The key thing that AQM
tries to control is a growth in latency, or a growth in standing queue depth;
so, mean latency and mean queue depth. So I think I would state it in terms of
latency and queue depth, which actually this does. This is applicable anywhere
where standing loads can be absorbed, and standing increases in latency and
queue depth can be observed.
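[Editor's illustration, not from the call: Fred's distinction between a
transient burst and a standing queue can be sketched by tracking the minimum
queue depth over a sliding window. All numbers and names below are invented
for illustration; a burst drains, so the windowed minimum falls back to ~0,
while a standing queue keeps the minimum elevated, and that persistent
component is what AQM targets.]

```python
# Illustrative sketch of the "standing queue" idea (sample numbers invented):
# a transient burst drains back to empty, so the minimum depth seen in any
# window returns to ~0; a standing queue keeps the windowed minimum elevated.

def has_standing_queue(depth_samples, window, threshold):
    """True if depth stays above `threshold` across every `window`-sample span."""
    mins = [min(depth_samples[i:i + window])
            for i in range(len(depth_samples) - window + 1)]
    return all(m > threshold for m in mins)

burst = [0, 40, 30, 10, 0, 0, 35, 5, 0, 0]        # drains back to empty
bloat = [20, 60, 45, 30, 25, 50, 35, 28, 22, 40]  # never drains below 20

# has_standing_queue(burst, window=5, threshold=5) -> False
# has_standing_queue(bloat, window=5, threshold=5) -> True
```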

John: I'm not really ready to say what I wanted to add, but I do want it to be 
clear that this is what it's saying.

Fred: We continually try to respond to open-ended comments and people come
back saying they don't like it... You need to tell us what you would like.

John: Unfortunately, that would have been easier if I had seen version 5 before 
this meeting started.

Wes: As a chair I can state that, from the WG charter and scope, this
paragraph is pretty consistent with what we set out to do in the WG. In my
opinion, it hits the mark.

Dave: I already stated: Our goal should be AQM in every buffer. I think Bob 
tried to create a mission statement there, and I object to that.

Gorry: A lot of these sentences were in the previous document, and we tried to
[...] Other people could offer new sentences, but this is what we are putting
to the WG for the moment, on that particular point.

Dave: Should have signed up to the conf call...

Wes: After the WG LC, there is IETF LC. There is no shortage of opportunities
for Bob to comment on this, even after it leaves the WG.

Richard: There is little point in discussing this point as long as Bob is not
here to donate text as to what he had in mind.

- Slide Comment 4 - Synchronization and Lockout

Gorry: We added separate points to discuss this, and I think that should fix it.

Dave: Where is this?

Richard: End of section 2, 3 bullets.

Gorry: There is a separate bullet that mentions lock-out and synchronization
individually.

Dave: typo!

Gorry: The main point of this is that people realize that lock-out and
synchronization are not the same thing. To make that clearer, we put them in
two separate paragraphs.

Dave: Can I send some citations for these? I'm not sure people have seen and
understood global synchronization.

Gorry: Do you have a paper?

==> Dave: I take an action item to go looking for references to these two 
paragraphs.

Gorry: The RFC Editor will prefer papers or long-standing URLs, not regular web 
sites.

[00:51:15]

Wes: There is a Sally Floyd paper on synchronization that is quite good.

Fred: There is that one. There is also a paper from someone funded by Cisco,
at Victoria University in Wellington, NZ, but he hasn't published yet.
Basically he measured it, found instances of it, etc. I asked him for a
pointer to that paper; I'll get it some day.

Gorry: If you can send the reference that would be good.

Richard: So, the action is then with Dave to provide some references here.

Fred: This is not too hard to figure out, if you think about it. A TCP can be
thought of as a wave function. What happens when you run two wave functions in
parallel - sometimes they add up. That's what TCP synchronization is; not too
hard.
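[Editor's illustration, not from the call: Fred's wave-function picture can be
shown with a toy AIMD simulation. All parameters are invented; two idealized
sawtooth flows behind a shared tail-drop bottleneck both see loss at the same
instant the queue overflows, so they halve together and stay in phase, and the
synchronized backoff repeatedly leaves the aggregate well below capacity.]

```python
# Toy AIMD model of global synchronization (parameters invented): two flows
# additively increase each RTT; when their sum exceeds the shared capacity,
# tail drop hits both at once and both halve together.

def simulate(steps=200, capacity=100.0):
    cwnd = [10.0, 30.0]                          # start the flows out of phase
    history = []
    for _ in range(steps):
        cwnd = [c + 1.0 for c in cwnd]           # additive increase
        if sum(cwnd) > capacity:                 # synchronized loss event
            cwnd = [c / 2.0 for c in cwnd]       # multiplicative decrease
        history.append(tuple(cwnd))
    return history

hist = simulate()
sums = [a + b for a, b in hist[-60:]]
# In steady state the aggregate sawtooths between roughly capacity/2 and
# capacity: the link is left half idle after every synchronized backoff,
# which is the harm being discussed.
```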

Wes: The Sally Floyd paper touched upon traffic phase effects.

Fred: If you send us a reference to the necessary information, we can link to 
that.

Wes: Sure

-- Slide Comment 5; Lockout

Richard: Is there additional text necessary?

Dave: At the moment I can't think of any. The thing that has come up recently
in the packet scheduling debate is that avoiding head-of-line blocking for ACK
clocking is really a huge win. But I think this belongs in the scheduling
paper, not the AQM paper. And AQM does reduce head-of-line blocking for ACK
clocking. Fred, what do you think about the ACK clocking issue: reducing
latency on one portion of the path improves ACK clocking on the reverse path.

Fred: I'm not sure I understand.

Dave: This is not directly related to lock-out, but to reducing latency. There
might be an additional bullet point: improving ACK clocking.

Richard: An additional point here in section 2?

Dave: yes

Fred: You are looking for motivation for the statement in 4.2? Implicit latency 
is a signal for congestion...

Dave: Yes, possibly.

Gorry: If you think so, can you send 2-3 sentences?

Dave: Ok; if it doesn't make it, I'm still easy about it.

-- Slide Comment 6

Gorry: We just fixed it. It's ok.

Richard: I've seen those references.

Wes: No further comment.

Dave: I haven't read the ConEx thing. I'm sure I'd find something that raises
my blood pressure.

Wes: IMHO, citations were added and comments addressed.

-- Slide Comments 7-9

Wes: These are pretty minor. In v5 there is still one reference screwed up.

Gorry: I forgot to check in that version of the XML.

Wes: Nos. 8 & 9, Dave, those are yours.

Fred: There is an issue with No. 8: the RFC Editor, by policy, would like not
to include URLs, as they change.

Dave: My problem is that much of that exists in a dusty library somewhere, and
I would really like people to read it. Providing it online and searchable
would address this.

Fred: You'd have to talk to the RFC editor about the policy.

Wes: We can have a WG wiki page, using IETF tools, and provide these there.

Dave: On point 7, there has been a bunch of recent research, in particular at
Stanford. Can't remember the name of the paper. They did a lot of research on
the actual behaviour of flows on real 10G Ethernet links, and it "proves" that
you only need 20 packets of buffering to handle the 10,000s of flows typically
running through a 10G Ethernet link.

Fred: I'm familiar with that paper; I disagree. That might be true in backbone
carriers, but that particular carrier has one of the best engineered
backbones. I'm not convinced that this is generally true for the Internet. I
certainly don't see that anywhere I'm looking.

Dave: I tend to agree with you, but haven't seen anyone trying to refute that 
paper. They make a very compelling argument for it. 

Fred: Cisco looked very hard at that. It would reduce our cost, and leave
memory sellable as an add-on.

Dave: If you can find that paper, I would like to review it again and poke
holes in it some day. For point 9, again, that bursty loss problem is
something that scheduling solves. It's very easily seen.

Gorry: Is this an AQM problem or scheduling? I can see this as a scheduling issue.

Dave: It's addressed in the PIE-DOCSIS implementation. They try very hard to
avoid back-to-back drops. Is stopping back-to-back drops a goal of AQM?

Rong: For DOCSIS, they wanted to have good performance at max rate. But in the
real Internet, where so much bursty traffic is colliding, I don't think this
phenomenon exists the way it does in DOCSIS.

Dave: Another argument: you get huge bursts in the home induced by WiFi, up to
32 packets all at once.

Fred: You realize they do that by design?

Dave: By design, but it also induces problems.

Fred: The problem they are working around - and this is worse in 802.11ac - is
that the AP has a separate relationship with each device, as opposed to simply
sending into the air. As a result, devices are at different speeds, distances,
probabilities of loss, emulating A or G, etc. That means retraining the radio
each time it picks a different device. And if it does that for every packet,
it can spend 90% of capacity training the radio. So they do a form of deadline
scheduling. Packets get accumulated for a little while, and then sent in a
burst. When the capacity is there to waste on training the radio, it can send
single packets. Under load, you get bursts of packets.
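[Editor's illustration, not from the call: a minimal sketch of the
accumulate-then-burst behaviour Fred describes. Class, method, and parameter
names (`Aggregator`, `max_burst`, `deadline_ms`) are invented; frames queue
per station, and a burst is released either when enough have accumulated to
amortize the radio retraining cost or when the oldest frame's deadline
expires, so light load gets near-single frames and heavy load gets bursts.]

```python
# Hypothetical deadline-style aggregation sketch (names/parameters invented):
# per-station queues; a burst is released when the queue fills up or when the
# oldest queued frame's deadline expires.
from collections import deque

class Aggregator:
    def __init__(self, max_burst=32, deadline_ms=4):
        self.max_burst = max_burst
        self.deadline_ms = deadline_ms
        self.queues = {}   # station -> deque of (enqueue_time_ms, frame)

    def enqueue(self, station, frame, now_ms):
        self.queues.setdefault(station, deque()).append((now_ms, frame))

    def dequeue_burst(self, station, now_ms):
        """Return a burst for `station`, or [] if neither trigger fired."""
        q = self.queues.get(station)
        if not q:
            return []
        full = len(q) >= self.max_burst
        expired = now_ms - q[0][0] >= self.deadline_ms
        if not (full or expired):
            return []      # keep accumulating to amortize radio retraining
        return [q.popleft()[1] for _ in range(min(len(q), self.max_burst))]
```

Under light load a lone frame sits until its deadline and goes out almost
singly; under load the 32-frame bursts Dave mentions fall out naturally.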

Dave: Going from the AP to the station is one problem. Coming from the station
to the AP you have very few flows. In that case, bursty loss does hurt. So,
should we mention bursty loss as a problem? The answer is probably yes.

Greg: May I correct a misconception about the DOCSIS PIE implementation? We
actually don't prevent back-to-back drops. What we do is calculate a desired
drop probability. When you implement this, you are effectively flipping a
biased coin for each packet. If the flips are independent you have the
potential, over a small number of packets, for a deviation from the desired
drop probability. We try to bound that variability. So, we don't explicitly
prevent back-to-back drops; if the probability is high enough, there will be
back-to-back drops. What we do is prevent large swings in the effective
probability, relative to the desired one.
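[Editor's illustration, NOT the actual DOCSIS-PIE code: one simple way to
bound the short-term deviation Greg describes is an error-diffusion
accumulator, shown below with invented names. It paces drops exactly, which is
stricter than what Greg describes (DOCSIS PIE merely bounds the swing while
still allowing back-to-back drops), but it makes the variance point concrete.]

```python
# Error-diffusion sketch (illustrative only): add the desired probability p
# per packet and drop when the accumulated "drop debt" reaches 1. Over any n
# packets the drop count then stays within 1 of n*p, whereas independent
# per-packet coin flips can swing much further over small windows.

def make_dropper(p):
    acc = 0.0
    def should_drop():
        nonlocal acc
        acc += p
        if acc >= 1.0:
            acc -= 1.0      # pay down one unit of drop debt
            return True
        return False
    return should_drop

drop = make_dropper(0.25)                   # desired 25% drop probability
drops = sum(drop() for _ in range(1000))    # exactly 250 drops, evenly paced
```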

Dave: I get it, and I like it.

Greg: The motivation really was that when you've got a really small number of
sessions, you roll the dice badly and drop a lot more packets than you
intended to. So it's the bursty loss condition we were most concerned about.
This applies in the opposite direction also, when you don't drop nearly as
many packets as you'd like to.

Wes: So, do our editors have a path forward on this?

Richard: Dave, are you very insistent on having text on bursty loss in this 
document?

Dave: I'll write some text. If it works, great, if it doesn't make it, I'm cool 
too.

Gorry: Send the text to us and we will see if it fits.

--- Agenda review

--- Evaluation draft Slide 1

Dave: What has been driving me on this front has been making video
conferencing work well in the presence of load. I don't see that in your
bullet points. I would like to have video conferencing and Netflix-style
traffic. But does that belong in the eval suite rather than the eval
guidelines?

Richard: I think that's what Nicolas wanted to show: that the document has
been split into guidelines, and how to apply those guidelines to specific
scenarios and suites.

Dave: Back to slide 4. The point I want to make is that everyone, including
myself, keeps making the mistake of focusing on long-running TCP flows. We
need to stop. I think it was Fred's data showing a 19-packet median flow size.

Fred: Yeah, in the sample I took.

Dave: I think your sample is probably pretty accurate. By and large you have
1-2 big, fat flows, and the rest are short, 20-100 packets.

Preethi: The majority of problems caused by bufferbloat are because of these
long-lived, elephant flows. They drive up buffer utilization, not the small
come-and-go flows. If all the flows are small, there is no bufferbloat
problem.

Dave: Yes and no; a typical web page downloads 15 flows at the same time, each
injecting around 20 packets, which is 300 packets more or less at the same
time. That's a short burst of bufferbloat.

Preethi: From my understanding, the purpose of AQM is not to hurt those
short-lived flows; it's only to handle the long-lived ones. It's not that you
start dropping when the buffer is over a certain point; it works on relatively
longer timescales, not on a short-term burst. A short-term burst has to be
handled by buffer sizing, not by queue management.

Dave: An AQM should not do too much harm. 

Rong (?): That is true, but it's not that long-lived elephant flows are not
relevant. They are the elephant in the room that we need to affect. They need
to be regulated in a nice way, so that they don't occupy the majority of the
buffer. Of course we should make sure that when these mice come, we don't do
harm, correct. There is a short-term and a long-term evaluation, in my
opinion.

Preethi (?): There is a supplement in the TOC where we define the types of
traffic. But we won't be going into the details of how many flows, how many
packets. That will be in a separate document; here we are more high level. We
would like to keep it that way to arrive at a consensus on the high level
moving forward.

Dave: In that case I would move up the bullet points below the various TCP
flavours. They are more interrelated with RTT and RTT fairness. I'm still
reacting to 50 non-bidirectional flows being used as a benchmark in the new
secondary document. We are not talking about that today.

Preethi: It's not only about long-lived TCP flows, but various kinds of mixed
traffic. It's going to be there in the document at a very high level.

Richard: Please bring the detailed discussion to the list, also for broader
participation, and to encourage people to read the new version.

Wes: Yes, on the mailing list. I think the feedback is good to hear and it
would be great to continue this discussion on the WG list. Thanks, Nicolas. I
think this clarifies the state of the document, and opens the gates on some
feedback on that. We further wanted to talk about the idea of adopting some
algorithm specifications. We mentioned this on the mailing list a while back,
and got a couple of positive responses, but not as many as I thought we'd get.
Our logic here as chairs is: we know a lot of people are working on
algorithms, and we don't know how long people are going to be working on
algorithms. We would like to get some of this work done before people move on
to other work. We would like to contribute to that in the WG, help them
improve their work, and have some of these published as RFCs. From the list,
there are several algorithms we have already seen come in, and there are many
more people are working on. We don't have a very strict threshold for what it
takes to get an algorithm adopted. We were floating a proposal that asks for
multiple groups outside the editors of the specification itself to say that
they are interested, that it solves a particular use case they have in the
real world, and that they would experiment with it and contribute feedback on
the specification. So, is the working group ready to do this?

Dave: I've put out the fq_codel draft in the hope that it gets some comments;
so far, two comments. Fred, why do you call it an sfq rather than an fq
implementation?

Fred: Probably no good reason. I can change that if that's an issue. 

Dave: Just seems to be confusing. I would like to see more eyeballs on the 
fq_codel draft, same thing for the codel draft.

Wes: Actually, I've looked at all of them on the list, and I think all of them
are pretty well written and good starting points for an IETF spec. The problem
is, I don't want to start adopting algorithm drafts if there aren't enough
experts outside the editors to contribute to each other's specifications. Like
you said, Dave, you only got a few comments on the documents you submitted,
and I would have expected a lot more.

Dave: I was just hoping that it's so well written that no comments are needed.
I think it should get out in front of more eyeballs, in particular with the
RMCAT WG.

Richard: How about setting the bar at having at least one group outside the
editors review an algorithm, and perhaps even have experience implementing
it, before we as a WG consider adopting it? Is that reasonable, or would it be
too restrictive? With PIE and CoDel I believe we do have a very solid
understanding, including the variants. There have been lengthy discussions
since even before the WG started. So these two seem to be in very good shape
for fulfilling that threshold. But how about additional ones?

Dave: Additional algorithms beyond the three on the table here?

Richard: Yes.

Dave: I happen to like QFQ, and I've heard there may be an FQ_PIE coming. I'm
not sure what you are asking. I would like the WGs whose problem we are trying
to solve to have input into the process, taking input from them in terms of
what kinds of tests there will be. I like the suite that the RMCAT group was
coming up with.

Wes: I don't know about a specific WG. When we were forming this (AQM) WG, the
entire routing area was concerned about what we were doing. Let's rephrase the
question: is there anyone here who thinks the WG is not ready to start
adopting algorithms? That we need to pause? The original plan in the
chartering discussion was that we would work out the evaluation scenarios and
guidelines, and converge on those, before we were to pick algorithms. We are
trying to maybe parallelise the work with this proposal.

Dave: I have no objections ever to more eyeballs. 

Gorry: Are you asking whether we can start to take the two proposals we have
on the table more seriously, because they are becoming WG items?

Wes: There are more than two. And once it's clear we are adopting some specs,
there may even be more that show up.

Gorry: There are?

Wes: Well, what we are asking is essentially, do we think it's time to open the 
door to adopting specs as WG drafts?

Gorry: Yeah, and how are we going to control the rest? There are other
algorithms that are widely deployed, in particular in vendors' networks for
instance.

Wes: When we floated this originally, I think the criteria were that we would
like to see multiple groups contributing feedback to the spec and saying they
are going to work with it in reality, for some use case it's intended to meet;
and that there have to be people other than the editors willing to work with
it. And if you can meet that threshold, then it would be acceptable to poll
the WG for adoption of a particular specification.

Gorry: Do you think it's possible to encourage people who have currently
written drafts to try and do that? And if they do, these particular drafts can
progress?

Wes: Well, we are trying to chart a path to help these authors and editors out
on specs, in order to help them progress. I think everyone understands that we
can't just accept them all without further ado. We have to have some kind of
process, otherwise it's chaos. So we are searching for a process, really.

Fred: I think we need to determine what we mean when we say we have adopted
something. If we adopt exactly one, that begins to sound like an IETF
recommendation that everyone should do that one. That was the mistake we made,
frankly, with RED. We said everybody has to do one thing. Having two things,
one of which is head drop and one of which is tail drop, really doesn't bother
me. That kind of gives you two classes of algorithms. But then what you are
saying as a WG is that not everybody needs to do exactly this, but rather,
here is an example of what people should do. It might even be worthwhile to
have a one-paragraph RFC that says that: this is the class of recommendation
that we are making. I think we as a WG need to discuss what kind of
recommendation we want to make. If the kind of recommendation is that we think
there is a class of algorithms that we know work in research and operation,
then I think it's a different recommendation than the kind of thing we did
with RED.

Wes: Yes; when I sent an email about this I think I said that we need an
applicability statement in them. For a particular deployment scenario maybe,
or a use case, or even an implementation requirement.

Fred: I understand there is a gotcha with those kinds of statements. Suppose
somebody comes up with a new example. That new example might be: should I do
this in access points, should I do this on photonic links? Suppose somebody
comes up with a use case that hasn't been listed. Then the question becomes,
should I use this one, should I use that one, should I use a third one? There
is no applicability statement saying what to do. That actually raises more
questions than it solves. Also, and I think Dave will agree with me on this,
both of the algorithms that are on the table could be used more or less
interchangeably, and should be applicable in a wide variety of cases. I don't
know if an applicability statement would say, in this case choose PIE, in that
case choose CoDel. It would say, this is applicable in the set of cases that
are under consideration by the AQM WG, or something like that. I'm not sure an
applicability statement gets you where you want to go.

Wes: Well, no. Perhaps you are right; you could almost come up with an
inapplicability statement: if there are cases where we know they are not
particularly appropriate, that would be easier to say from simulations or
experience.

Fred: And those things I don't think we actually know. We know that in the use
cases where we tried them, they help.

Dave: All in favour of more deployments and more testing. More exposure to
more use cases. We need to learn more stuff.

Richard: What worries me is an overflow of different algorithms. Say a PhD
student comes up with a clever idea, runs some simulations, declares victory,
and wants his algorithm adopted by the AQM WG. I guess what I'm asking is for
the WG to decide what kind of gatekeeper or gating requirements we put up
before we allow a draft to become a WG document, in order to limit the influx
of algorithms asking for official IETF approval.

Gorry: One thing we could ask for is deployment experience, which several
algorithms already have.

Fred: Well, yeah. And this comment is an outcome of the experience we had with
DiffServ: some way of clarifying that it's significantly different from an
algorithm that is already on the table. With DiffServ we had a long list of
people who came and wanted to pitch "in my paper, I put this command in front
of that command instead of after, and therefore it's different; please
recognize my code." That kind of drove us crazy for a while. The distinction
between head drop and tail drop is a really big distinction. Being able to
argue that my new proposal is significantly different in some interesting way.

Richard: Good! Sounds like an interesting proposal.

Wes: Yes, we can take the feedback here, draw up some kind of concrete
proposal for moving forward on algorithm specs, get it out on the list, and
walk through it in Toronto face to face. This is also something we want our AD
to support us on, if we are going to do it.

Richard: I believe this concludes our meeting. The planned agenda for Toronto
is still that we want to review the update to 2309bis; revision -06 should be
coming up before the Toronto meeting. Then we would want to spend the majority
of the time again discussing the evaluation guidelines and the scenarios. And,
after discussing the adoption of AQM mechanisms with our ADs, how we plan to
move forward on that front.

Dave: I have made this suggestion before, and I'm perfectly happy to drop it.
I had in mind to talk to the case for comprehensive queue management: not just
AQM, but packet scheduling, rate limiting, and policing, in order to try and
provide some context for what we are trying to decide in moving forward. I had
no feedback on that proposal, and I'm ok with dropping it.

Wes: Is it something you are asking for agenda time on in Toronto?

Dave: If I get agenda time for it, I'll talk to it.

Wes: Ok.

Richard: So, we have a 2hr slot, there should be plenty of time.

Dave: Alright! 

Wes: I will correspond with you offline to that.

Dave: Ok.

Wes: Is there anything else people think we should focus on in Toronto? In
terms of 2309bis, I hope we can confirm that it's ready to go off to the AD
for IETF LC and move forward. If that is all, I think we can conclude the
meeting here.

Dave: See you again in a couple weeks.

Wes: Ok, hope to see you all in Toronto.