[{"author": "Eduard V", "text": "

It is strange that searching for \"BBR\" in the presentation or the draft gives no result. Is PRR better? How is BBR solving this issue? BBR claimed to have a fast start on recovery.

", "time": "2024-03-18T05:43:44Z"}, {"author": "Neal Cardwell", "text": "

Hi Eduard. PRR is applicable for standard congestion controls: either Reno or CUBIC. BBR uses a very different approach, and PRR does not apply.

", "time": "2024-03-18T05:50:51Z"}, {"author": "Neal Cardwell", "text": "

PRR does a great job of handling high loss situations, but it uses the ssthresh decision from the congestion control algorithm, so the long-term behavior for a connection using PRR is determined by the Reno or CUBIC algorithms (with all the pros and cons that implies).

", "time": "2024-03-18T05:53:28Z"}, {"author": "Eduard V", "text": "

But is BBR better/worse on the recovery?

", "time": "2024-03-18T05:53:49Z"}, {"author": "Neal Cardwell", "text": "

BBRv1 uses an approach very similar to PRR. BBRv3 uses an approach that is simpler than PRR and more aggressive than PRR. Which approach is better will depend on the scenario. :-)

", "time": "2024-03-18T05:55:09Z"}, {"author": "Eduard V", "text": "

thanks

", "time": "2024-03-18T05:55:52Z"}, {"author": "Zaheduzzaman Sarker", "text": "

Let's not break the Internet with experiments... :-) Even for an experimental RFC we will need data to say it is working.

", "time": "2024-03-18T06:08:15Z"}, {"author": "Carles Gomez", "text": "

Agreed :)

", "time": "2024-03-18T06:12:09Z"}, {"author": "Martin Duke", "text": "

+1 to Ian. TCP is the wrong layer to solve this problem.

", "time": "2024-03-18T06:24:34Z"}, {"author": "Martin Duke", "text": "

Aside from the authentication problems, it saves us adding a new kernel API

", "time": "2024-03-18T06:25:11Z"}, {"author": "Gorry Fairhurst", "text": "

Oh... I've just learned what a ghost ACK is :-(

", "time": "2024-03-18T06:40:32Z"}, {"author": "Ziyang Xing", "text": "

Dear chairman,
\nAt the end of the meeting, I would like to introduce our work on MPTCP/MPQUIC multipath. Is that okay? Thanks.

\n

https://datatracker.ietf.org/doc/draft-xing-nmop-sdn-controller-aware-mptcp-mpquic/

", "time": "2024-03-18T06:41:47Z"}, {"author": "Yoshifumi Nishida", "text": "

hmm. I'm not sure if we can guarantee it at this point.

", "time": "2024-03-18T06:43:01Z"}, {"author": "Yoshifumi Nishida", "text": "

I'm guessing the current presentation can take some time.

", "time": "2024-03-18T06:44:58Z"}, {"author": "Aijun Wang", "text": "

@Martin, there are lots of applications that run directly on TCP, so why is TCP the wrong layer to solve the problem?

", "time": "2024-03-18T06:46:02Z"}, {"author": "Aijun Wang", "text": "

Only TCP-based applications need the service affinity features. QUIC has a similar mechanism, but TCP has none now.

", "time": "2024-03-18T06:53:24Z"}, {"author": "John Border", "text": "

If keeping stats is an issue, can't you just remember the starting sequence number?

", "time": "2024-03-18T06:53:37Z"}, {"author": "Yepeng Pan", "text": "

@John: Yes (which is also state).

", "time": "2024-03-18T06:57:12Z"}]