New Media Stack
draft-jennings-dispatch-new-media-00
This document is an Internet-Draft (I-D). Anyone may submit an I-D to the IETF. This I-D is not endorsed by the IETF and has no formal standing in the IETF standards process. This is an older version of an Internet-Draft whose latest revision state is "Expired".
| Author | Cullen Fluffy Jennings |
|---|---|
| Last updated | 2018-03-05 |
| RFC stream | (None) |
| Stream state | (No stream defined) |
| Consensus boilerplate | Unknown |
| IESG state | I-D Exists |
| Responsible AD | (None) |
Network Working Group C. Jennings
Internet-Draft Cisco
Intended status: Standards Track March 5, 2018
Expires: September 6, 2018
New Media Stack
draft-jennings-dispatch-new-media-00
Abstract
A sketch of a proposal for a new media stack for interactive
communications.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 6, 2018.
Copyright Notice
Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Jennings Expires September 6, 2018 [Page 1]
Internet-Draft new-media March 2018
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3
3. Connectivity Layer . . . . . . . . . . . . . . . . . . . . . 4
3.1. Snowflake - New ICE . . . . . . . . . . . . . . . . . . . 4
3.2. STUN2 . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2.1. STUN2 Request . . . . . . . . . . . . . . . . . . . . 5
3.2.2. STUN2 Response . . . . . . . . . . . . . . . . . . . 5
3.3. TURN2 . . . . . . . . . . . . . . . . . . . . . . . . . . 5
4. Transport Layer . . . . . . . . . . . . . . . . . . . . . . . 7
5. Media Layer - RTP3 . . . . . . . . . . . . . . . . . . . . . 7
5.1. Securing the messages . . . . . . . . . . . . . . . . . . 10
5.2. Sender requests . . . . . . . . . . . . . . . . . . . . . 10
5.3. Data Codecs . . . . . . . . . . . . . . . . . . . . . . . 10
5.4. Forward Error Correction . . . . . . . . . . . . . . . . 10
5.5. MTI Codecs . . . . . . . . . . . . . . . . . . . . . . . 10
5.6. Message Key Agreement . . . . . . . . . . . . . . . . . . 11
6. Control Layer . . . . . . . . . . . . . . . . . . . . . . . . 11
6.1. Transport Capabilities API . . . . . . . . . . . . . . . 11
6.2. Media Capabilities API . . . . . . . . . . . . . . . . . 11
6.3. Transport Configuration API . . . . . . . . . . . . . . . 12
6.4. Media Configuration API . . . . . . . . . . . . . . . . . 12
6.5. Transport Metrics . . . . . . . . . . . . . . . . . . . . 14
6.6. Flow Metrics API . . . . . . . . . . . . . . . . . . . . 14
6.7. Stream Metrics API . . . . . . . . . . . . . . . . . . . 15
7. Call Signaling . . . . . . . . . . . . . . . . . . . . . . . 15
8. Signaling Examples . . . . . . . . . . . . . . . . . . . . . 16
8.1. Simple Audio Example . . . . . . . . . . . . . . . . . . 16
8.1.1. simple audio advertisement . . . . . . . . . . . . . 16
8.1.2. simple audio proposal . . . . . . . . . . . . . . . . 17
8.2. Simple Video Example . . . . . . . . . . . . . . . . . . 18
8.2.1. Proposal sent to camera . . . . . . . . . . . . . . . 19
8.3. Simulcast Video Example . . . . . . . . . . . . . . . . . 19
8.4. FEC Example . . . . . . . . . . . . . . . . . . . . . . . 20
8.4.1. Advertisement includes a FEC codec. . . . . . . . . . 20
8.4.2. Proposal sent to camera . . . . . . . . . . . . . . . 21
9. Switched Forwarding Unit (SFU) . . . . . . . . . . . . . . . 22
10. Informative References . . . . . . . . . . . . . . . . . . . 22
Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 23
1. Introduction
This draft proposes a new media stack to replace the existing stack of
RTP, DTLS-SRTP, and SDP offer/answer. The key parts of this stack are
the connectivity layer, the transport layer, the media layer, a
control API, and the signaling layer.
The connectivity layer uses a simplified version of ICE, called
snowflake [I-D.jennings-dispatch-snowflake], to find connectivity
between endpoints and change the connectivity from one address to
another as different networks become available or disappear. It is
based on ideas from [I-D.jennings-mmusic-ice-fix].
The transport layer uses QUIC to provide a hop by hop encrypted,
congestion controlled transport of media. Although QUIC does not
currently have all of the partial reliability mechanisms to make this
work, this draft assumes that they will be added to QUIC.
The media layer uses existing codecs and packages them with extra
header information that indicates when the sequence needs to be
played back, which camera it came from, and which media streams
should be synchronized.
The control API is an abstract API that provides a way for the media
stack to report its capabilities and features, and a way for an
application to tell the media stack how it should be configured.
Configuration includes which codec to use, the size and frame rate of
video, and where to send the media.
The signaling layer is based on an advertisement and proposal model.
Each endpoint can create an advertisement that describes what it
supports including things like supported codecs and maximum bitrates.
A proposal can be sent to an endpoint that tells the endpoint exactly
what media to send and receive and where to send it. The endpoint
can accept or reject this proposal in total but cannot change any
part of it.
2. Terminology
o media stream: Stream of information from a single sensor. For
example, a video stream from a single camera. A stream may have
multiple encodings, for example video at different resolutions.
o encoding: An encoded version of a stream. A given stream may have
several encodings at different resolutions. One encoding may
depend on other encodings, as with forward error correction or
scalable video codecs.
o flow: A logical transport between two computers. Many media
streams can be transported over a single flow. The actual IP
addresses and ports used to transport data in the flow may change
over time as connectivity changes.
o message: Some data or media to be sent across the network along
with metadata about it. Similar to an RTP packet.
o media source: a camera, microphone or other source of data on an
endpoint
o media sink: a speaker, screen, or other destination for data on an
endpoint
o TLV: Tag Length Value. When used in this draft, the Tag, Length,
and any integer values are coded as variable-length integers,
similar to how this is done in CBOR.
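A sketch of that TLV coding, treating the Tag and Length as CBOR unsigned-integer heads (major type 0); the draft says only "similar to CBOR", so the exact head encoding is an assumption:

```python
def cbor_uint(n: int) -> bytes:
    """Encode an unsigned integer the way CBOR does (major type 0)."""
    if n < 24:
        return bytes([n])
    for additional_info, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if n < 1 << (8 * size):
            return bytes([additional_info]) + n.to_bytes(size, "big")
    raise ValueError("integer too large for a CBOR head")

def encode_tlv(tag: int, value: bytes) -> bytes:
    """Tag and Length as CBOR-style varints, followed by the raw value."""
    return cbor_uint(tag) + cbor_uint(len(value)) + value
```

Small tags and lengths then cost a single byte each, which keeps per-message header overhead low.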
3. Connectivity Layer
3.1. Snowflake - New ICE
All that is needed to discover connectivity is a way to:
o Gather some IP/ports that may work, using a TURN2 relay, STUN2,
and local addresses.
o Have a controller, which might be running in the cloud, instruct a
client to send a STUN2 packet from a given source IP/port to a
given destination IP/port.
o Have the receiver notify the controller about received STUN2
packets.
o Have the controller tell the sender the secret that was in the
packet, which proves the receiver's consent to receive data; the
sending client can then allow media to flow over that connection.
The actual algorithm used to decide which pairs of addresses are
tested, and in what order, does not need to be agreed on by both
sides of the call - only the controller needs to know it. This
allows the controller to use machine learning, past history, and
heuristics to find an optimal connection much faster than something
like ICE.
The details of this approach are described in
[I-D.jennings-dispatch-snowflake].
3.2. STUN2
The speed of setting up a new media flow is often determined by how
many STUN2 checks need to be done. If the STUN2 packets are smaller,
the checks can be done faster without risk of causing congestion.
The STUN2 server and client share a secret that they use for
authentication and encryption. When talking to a public STUN2
server, this secret is the empty string.
3.2.1. STUN2 Request
A STUN2 request consists of the following TLVs:
o a magic number that uniquely identifies this as a STUN2 request
packet with minimal risk of collision when multiplexing.
o a transaction ID that uniquely identifies this request and does
not change in retransmissions of the same request.
o an optional sender secret that can be used by the receiver to
prove that it received the request. In WebRTC the browser would
create the secret but the JavaScript on the sending side would
know the value.
The packet is encrypted using the secret and an AEAD cipher to
create a STUN2 packet in which the first two fields, the magic
number and transaction ID, are only authenticated; the remaining
fields are authenticated and encrypted; and the AEAD authentication
data comes last.
The STUN2 requests are transmitted with the same retransmission and
congestion algorithms as STUN in WebRTC 1.0.
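A rough sketch of building a STUN2 request with the layout above. The magic-number value is hypothetical, and HMAC-SHA256 stands in for the AEAD (with the encryption step omitted) so the sketch stays stdlib-only:

```python
import hashlib
import hmac
import os

STUN2_REQ_MAGIC = 0x5132  # hypothetical value; the draft assigns no magic number

def build_stun2_request(shared_secret: bytes) -> bytes:
    # Layout per the text above: the magic number and transaction ID are
    # authenticated only; the remaining fields are authenticated and
    # encrypted; the AEAD authentication data comes last. A real
    # implementation would use an AEAD cipher such as AES-GCM keyed from
    # the shared secret.
    transaction_id = os.urandom(12)
    sender_secret = os.urandom(8)  # optional TLV the receiver can later prove it saw
    authenticated_only = STUN2_REQ_MAGIC.to_bytes(2, "big") + transaction_id
    protected = sender_secret      # would be AEAD-encrypted in practice
    tag = hmac.new(shared_secret, authenticated_only + protected,
                   hashlib.sha256).digest()[:16]
    return authenticated_only + protected + tag
```

Against a public STUN2 server the shared secret is the empty string, so `build_stun2_request(b"")` is a valid call.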
3.2.2. STUN2 Response
A STUN2 response consists of the following TLVs:
o a magic number that uniquely identifies this as a STUN2 response
packet with minimal risk of collision when multiplexing.
o the transaction ID from the request.
o the IP address and port the request was received from.
The packet is encrypted in the same way: the first two fields, the
magic number and transaction ID, are only authenticated; the
remaining fields are authenticated and encrypted; and the AEAD
authentication data comes last.
3.3. TURN2
Out of band, the client tells the TURN2 server the fingerprint of the
cert it uses to authenticate with, and the TURN2 server gives the
client two public IP:port address pairs, one called inbound and the
other called outbound. The client connects to the outbound port and
authenticates the TURN2 server using the TLS domain name of the
server. The TURN2 server authenticates the client using mutual TLS
with the fingerprint of the cert provided by the client. Any time a
message or STUN2 packet is received on the matched inbound port, the
TURN2 server forwards it to the client(s) connected to the outbound
port.
A single TURN2 connection can be used for multiple different calls or
sessions at the same time, and a client could choose to allocate the
TURN2 connection at the time it starts up; allocation does not need
to be done on a per-session basis.
The client cannot send media through the TURN2 server; the relay is
receive-only.
Client A Turn Server Client B
(Media Receiver) (Media Sender)
| | |
| | |
| | |
|(1) OnInit Register (A's fingerprint)
|------------->| |
| | |
| | |
|(2) Register Response (Port Pair (L,R))
|<-------------| |
| | |
| | |
| L(left of Server), R(Right of Server)
| | |
| | |
| | |
|(3) Setup TLS Connection (L port)
|..............| |
| | |
| | |
| | | B sends media to A
| | |
| | |
| | |
| |(4) Media Tx (Received on Port R)
| |<-------------|
| | |
| | |
|(5) Media Tx (Sent from Port L)
|<-------------| |
| | |
| | |
4. Transport Layer
The responsibility of the transport layer is to provide an end to end
crypto layer equivalent to DTLS and they must ensure adequate
congestion control. The transport layer brings up a flow between two
computers. This flow can be used by multiple media streams.
The MTI transport layer is QUIC with packets sent in an unreliable
mode.
This is secured by checking that the certificate fingerprints of the
connection match the fingerprints provided at the control layer, or
by checking that the names of the certificates match what was
provided at the control layer.
The transport layer needs to be able to set the DSCP values in
transmitting packets as specified by the control layer.
The transport MAY provide a compression mode to remove the redundancy
of the non-encrypted portion of the media messages, such as the
GlobalEncodingID. For example, a GlobalEncodingID could be mapped to
a QUIC channel, removed before sending the message, and added back on
the receiving side.
The transport needs to ensure that it has a very small chance of
being confused with the STUN2 traffic it will be multiplexed with.
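One possible shape for the optional compression mode: a per-flow table mapping each 64-bit GlobalEncodingID to a short channel number, so the ID can be stripped on send and restored on receive. The class and method names are assumptions; only the idea comes from the text above.

```python
class HeaderCompressor:
    """Per-flow mapping between 64-bit GlobalEncodingIDs and short
    channel numbers, kept in sync on both ends of the flow."""

    def __init__(self):
        self.to_channel = {}    # GlobalEncodingID -> channel
        self.from_channel = {}  # channel -> GlobalEncodingID

    def compress(self, global_encoding_id: int) -> int:
        # Allocate the next small integer for a previously unseen ID.
        if global_encoding_id not in self.to_channel:
            channel = len(self.to_channel)
            self.to_channel[global_encoding_id] = channel
            self.from_channel[channel] = global_encoding_id
        return self.to_channel[global_encoding_id]

    def decompress(self, channel: int) -> int:
        return self.from_channel[channel]
```

The sender would transmit the channel number in place of the full ID; the receiver reverses the mapping before handing the message up.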
5. Media Layer - RTP3
Each message consists of a set of TLV headers with metadata about the
packet, followed by payload data such as the output of an audio or
video codec.
There are several message headers that help the receiver understand
what to do with the media. The TLV headers are the following:
o Conference ID: Integer that is a globally unique identifier across
all applications using a common call signaling system. This is
set by the proposal.
o Endpoint ID: Integer to uniquely identify the endpoint within the
scope of the conference ID. This is set by the proposal.
o Source ID: Integer to uniquely identify the input source within
the scope of an endpoint ID. A source could be a specific camera
or a microphone. This is set by the endpoint and included in the
advertisement.
o Sink ID: Integer to uniquely identify the sink within the scope of
an endpoint ID. A sink could be a speaker or screen. This is set
by the endpoint and included in the advertisement.
o Encoding ID: Integer to uniquely identify the encoding of the
stream within the scope of the stream ID. Note there may be
multiple encodings of data from the same source. This is set by
the proposal.
o Salt: Salt to use for forming the initialization vector for the
AEAD. The salt is sent in-band as part of a packet but need not be
sent in every packet. This is created by the endpoint sending the
message.
o GlobalEncodingID: 64 bit hash of concatenation of conference ID,
endpoint ID, stream ID, encoding ID
o Capture time: Time when the first sample in the message was
captured. It is an NTP time in ms with the high-order bits
discarded. The number of bits in the capture time needs to be
large enough that it does not wrap for the lifetime of the stream.
This is set by the endpoint sending the message.
o Sequence ID: When the data captured for a single point in time is
too large to fit in a single message, it can be split into
multiple chunks which are sequentially numbered starting at 0
corresponding to the first chunk of the message. This is set by
the endpoint sending the message.
o GlobalMessageID: 64 bit hash of concatenation of conference ID,
endpoint ID, encoding ID, sequence ID
o Active level: A number from 0 to 100 indicating the level at which
the sender of this media wishes it to be considered active media.
For example, for voice it would be 100 if the person was clearly
speaking, 0 if not, and perhaps a value in the middle if uncertain.
This allows a media switch to select the active speaker in a
conference call.
o Location in room: Relative location in the room, enumerated
starting at front left and moving around clockwise. This helps get
the correct content on the left and right screens for video and
helps with spatial audio.
o Reference Frame : bool to indicate if this message is part of a
reference frame
o DSCP : DSCP to use on transmissions of this message and future
messages on this GlobalEncodingID
o Layer ID: Integer indicating which layer this message belongs to,
for scalable video codecs. An SFU may use this to selectively drop
frames.
The keys used for the AEAD are unique to a given conference ID and
endpoint ID.
If the message has any of the following headers, they must occur
first, in the following order, followed by all other headers:
1. GlobalEncodingID,
2. GlobalMessageID,
3. conference ID,
4. endpoint ID,
5. encoding ID,
6. sequence ID,
7. active level,
8. DSCP
Every second there must be at least one message in each encoding that
contains:
o conference ID,
o endpoint ID,
o encoding ID,
o salt,
o and sequence ID headers
but they are not needed in every packet.
The sequence ID or GlobalMessageID is required in every message, and
periodically there should be a message with the capture time.
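The header-ordering rule above can be checked mechanically; the header names as Python strings are an assumption:

```python
ORDERED_HEADERS = ["GlobalEncodingID", "GlobalMessageID", "conferenceID",
                   "endpointID", "encodingID", "sequenceID", "activeLevel",
                   "DSCP"]

def header_order_valid(names):
    # Any of the ordered headers that appear must come first, in the
    # relative order above, before all other headers.
    ranks = [ORDERED_HEADERS.index(n) for n in names if n in ORDERED_HEADERS]
    if ranks != sorted(ranks):
        return False  # ordered headers present but out of relative order
    seen_other = False
    for n in names:
        if n not in ORDERED_HEADERS:
            seen_other = True
        elif seen_other:
            return False  # an ordered header appeared after a non-ordered one
    return True
```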
5.1. Securing the messages
The whole message is end-to-end secured with an AEAD. The headers are
authenticated while the payload data is authenticated and encrypted.
Similar to how the IV for AES-GCM is calculated in SRTP, the IV is
computed by XORing the salt with the concatenation of the
GlobalEncodingID and the low 64 bits of the sequence ID. The message
consists of the authenticated data, followed by the encrypted data,
then the authentication tag.
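The IV derivation just described can be sketched as follows, assuming a 128-bit concatenation and zero-padding of a short salt (widths the draft does not fix):

```python
def message_iv(salt: bytes, global_encoding_id: int, sequence_id: int) -> bytes:
    # Concatenate the 64-bit GlobalEncodingID with the low 64 bits of
    # the sequence ID, then XOR with the salt.
    block = (global_encoding_id.to_bytes(8, "big")
             + (sequence_id & 0xFFFFFFFFFFFFFFFF).to_bytes(8, "big"))
    padded_salt = salt.ljust(len(block), b"\x00")[:len(block)]
    return bytes(a ^ b for a, b in zip(block, padded_salt))
```

Because the sequence ID changes per message, this yields a unique IV per message under a given key and salt, which AES-GCM requires.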
5.2. Sender requests
The control layer supports requesting retransmission of a particular
media message, identified by its IDs and the capture time it would
contain.
The control layer supports requesting a maximum rate for each given
encoding ID.
5.3. Data Codecs
Data messages, including raw bytes, XML, or SenML, can all be sent
just like media by selecting an appropriate codec and a software-
based source or sink. An additional parameter to the codec can
indicate whether reliable delivery is needed and whether in-order
delivery is needed.
5.4. Forward Error Correction
A new Reed-Solomon FEC scheme, based on
[I-D.ietf-payload-flexible-fec-scheme], that provides FEC over
messages needs to be defined.
5.5. MTI Codecs
Implementations MUST support at least G.711, Opus, H.264, and AV1.
Video codecs use square pixels.
Video codecs MUST support any aspect ratio within the limits of their
max width and height.
Video codecs MUST support a min width and min height of 1.
All video on the wire is oriented such that the first scan line in
the frame is up and first pixel in the scan line is on the left.
T.38 fax and DTMF are not supported. Fax can be sent as a TIFF image
over a data channel, and DTMF can be sent as application-specific
information over a data channel.
5.6. Message Key Agreement
The secret for encrypting messages can be provided in the proposal by
value or by reference. The reference approach allows the client to
get it from a messaging system where the server creating the proposal
may not have access to the secret. For example, it might come from a
system like [I-D.barnes-mls-protocol].
6. Control Layer
The control layer needs an API to find out what the capabilities of
the device are, and then a way to set up sending and receiving
streams. All media flows are unidirectional. The control is broken
into control of connectivity and transports, and control of media
streams.
6.1. Transport Capabilities API
An API to get information for remote connectivity, including ways to:
o set the IP, port, and credential for each TURN2 server
o return the IP and port tuple for the remote side to send to the
TURN2 server
o gather local IP, port, protocol tuples for receiving media
o report the SHA256 fingerprint of the local TLS certificate
o report the encryption algorithms supported
o report an error for a bad TURN2 credential
6.2. Media Capabilities API
Send and receive codecs are considered separate codecs and can have
separate capabilities, though they default to the same values if not
specified separately.
For each send or receive audio codec, an API to learn:
o codec name
o the max sample rate
o the max sample size
o the max bitrate
For each send or receive video codec, an API to learn:
o codec name
o the max width
o the max height
o the max frame rate
o the max pixel depth
o the max bitrate
o the max pixel rate (pixels/second)
6.3. Transport Configuration API
To create a new flow, the information that can be configured is:
o TURN2 server to use
o list of IP, Port, Protocol tuples to try connecting to
o encryption algorithm to use
o TLS fingerprint of far side
An API to allow modification of the following attributes of a flow:
o total max bandwidth for flow
o forward error correction scheme for flow
o FEC time window
o retransmission scheme for flow
o additional IP, Port, Protocol tuples to send to that may improve
connectivity
6.4. Media Configuration API
For all streams:
o set conference ID
o set endpoint ID
o set encoding ID
o salt and secret for AEAD
o flag to pause transmission
For each transmitted audio stream, a way to set the:
o audio codec to use
o media source to connect
o max encoded bitrate
o sample rate
o sample size
o number of channels to encode
o packetization time
o process as one of: automatically set, raw, speech, music
o DSCP value to use
o flag indicating whether to use a constant bit rate
For each transmitted video stream, a way to set
o video codec to use
o media source to connect to
o max width and max height
o max encoded bitrate
o max pixel rate
o sample rate
o sample size
o process as one of: automatically set, rapidly changing video,
fine detail video
o DSCP value to use
o for layered codecs, a layer ID and the set of layer IDs this
encoding depends on
For each transmitted video stream, a way to tell it to:
o encode the next frame as an intra frame
For each transmitted data stream:
o a way to send a data message and indicate reliable or unreliable
transmission
For each received audio stream:
o audio codec to use
o media sink to connect to
o lip sync flag
For each received video stream:
o video codec to use
o media sink to connect to
o lip sync flag
For each received data stream:
o notification of received data messages
Note on lip sync: For any streams that have the lip sync flag set to
true, the renderer attempts to synchronize their playback.
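As an illustration, the per-stream audio knobs listed above might be gathered into a configuration like the following. All parameter names are hypothetical; the draft defines the settings but no concrete API:

```python
# Hypothetical configuration for one transmitted audio stream,
# mirroring the knobs in the Media Configuration API section.
audio_send_config = {
    "conferenceID": 4638572387,
    "endpointID": 23,
    "encodingID": 1,
    "secret": "xy34",          # used, with a salt, for the AEAD
    "codec": "opus",
    "source": "microphone-0",  # media source to connect
    "maxBitrate": 24000,
    "sampleRate": 48000,
    "sampleSize": 16,
    "channels": 1,
    "packetTimeMs": 20,
    "process": "speech",       # one of: automatically set, raw, speech, music
    "dscp": 46,
    "constantBitRate": False,
    "paused": False,
}
```

A proposal (Section 8) would carry essentially this information to the endpoint, which applies it verbatim or rejects the proposal outright.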
6.5. Transport Metrics
o report gathering state and completion
6.6. Flow Metrics API
For each flow, report:
o connectivity state
o bits sent
o packets lost
o estimated RTT
o SHA256 fingerprint of the far side's certificate
o current 5-tuple in use
6.7. Stream Metrics API
For sending streams:
o Bits sent
o packets lost
For receiving streams:
o capture time of the most recently received packet
o endpoint ID of the most recently received packet
o bits received
o packets lost
For video streams (send & receive):
o current encoded width and height
o current encoded frame rate
7. Call Signaling
Call signaling is out of scope for usages like WebRTC but other
usages may want a common REST API they can use.
Call signaling works by having the client connect to a server when it
starts up, send its current advertisement, and open a WebSocket to
receive proposals from the server. A client can make a REST call
indicating the party or parties it wishes to connect to, and the
server will then send proposals to all clients to connect them. The
proposal tells each client exactly how to configure its media stack
and MUST be either completely accepted or completely rejected.
The signaling is based on the advertisement/proposal ideas from
[I-D.peterson-sipcore-advprop].
We define one round trip of signaling to be a message going from a
client up to a server in the cloud, then down to another client,
which returns a response along the reverse path. With this
definition, SIP takes 1.5 round trips (or more if TURN is needed) to
set up a call, while this stack takes 0.5 round trips.
8. Signaling Examples
8.1. Simple Audio Example
8.1.1. simple audio advertisement
{
"receiveAt":[
{
"relay":"2001:db8::10:443",
"stunSecret":"s8i739dk8",
"tlsFingerprintSHA256":"1283938"
},
{
"stun":"203.0.113.10:43210",
"stunSecret":"s8i739dk8",
"tlsFingerprintSHA256":"1283938"
},
{
"local":"192.168.0.2:443",
"stunSecret":"s8i739dk8",
"tlsFingerprintSHA256":"1283938"
}
],
"sources":[
{
"sourceID":1,
"sourceType":"audio",
"codecs":[
{
"codecName":"opus",
"maxBitrate":128000
},
{
"codecName":"g711"
}
]
}
],
"sinks":[
{
"sinkID":1,
"sourceType":"audio",
"codecs":[
{
"codecName":"opus",
"maxBitrate":256000
},
{
"codecName":"g711"
}
]
}
]
}
8.1.2. simple audio proposal
{
"receiveAt":[
{
"relay":"2001:db8::10:443",
"stunSecret":"s8i739dk8"
},
{
"stun":"203.0.113.10:43210",
"stunSecret":"s8i739dk8"
},
{
"local":"192.168.0.10:443",
"stunSecret":"s8i739dk8"
}
],
"sendTo":[
{
"relay":"2001:db8::20:443",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
},
{
"stun":"203.0.113.20:43210",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
},
{
"local":"192.168.0.20:443",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
}
],
"sendStreams":[
{
"conferenceID":4638572387,
"endpointID":23,
"sourceID":1,
"encodingID":1,
"codecName":"opus",
"AEAD":"AES128-GCM",
"secret":"xy34",
"maxBitrate":24000,
"packetTime":20
}
],
"receiveStreams":[
{
"conferenceID":4638572387,
"endpointID":23,
"sinkID":1,
"encodingID":1,
"codecName":"opus",
"AEAD":"AES128-GCM",
"secret":"xy34"
}
]
}
8.2. Simple Video Example
Advertisement for a simple send-only camera with no audio
{
"sources":[
{
"sourceID":1,
"sourceType":"video",
"codecs":[
{
"codecName":"av1",
"maxBitrate":20000000,
"maxWidth":3840,
"maxHeight":2160,
"maxFrameRate":120,
"maxPixelRate":248832000,
"maxPixelDepth":8
}
]
}
]
}
8.2.1. Proposal sent to camera
{
"sendTo":[
{
"relay":"2001:db8::20:443",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
}
],
"sendStreams":[
{
"conferenceID":0,
"endpointID":0,
"sourceID":0,
"encodingID":0,
"codecName":"av1",
"AEAD":"NULL",
"width":640,
"height":480,
"frameRate":30
}
]
}
8.3. Simulcast Video Example
The advertisement is the same as for the simple camera above, but the
proposal has two streams with different encodingIDs.
{
"sendTo":[
{
"relay":"2001:db8::20:443",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
}
],
"sendStreams":[
{
"conferenceID":0,
"endpointID":0,
"sourceID":0,
"encodingID":1,
"codecName":"av1",
"AEAD":"NULL",
"width":1920,
"height":1080,
"frameRate":30
},
{
"conferenceID":0,
"endpointID":0,
"sourceID":0,
"encodingID":2,
"codecName":"av1",
"AEAD":"NULL",
"width":240,
"height":240,
"frameRate":15
}
]
}
8.4. FEC Example
8.4.1. Advertisement includes a FEC codec.
{
"sources":[
{
"sourceID":1,
"sourceType":"video",
"codecs":[
{
"codecName":"av1",
"maxBitrate":20000000,
"maxWidth":3840,
"maxHeight":2160,
"maxFrameRate":120,
"maxPixelRate":248832000,
"maxPixelDepth":8
},
{
"codecName":"flex-fec-rs"
}
]
}
]
}
8.4.2. Proposal sent to camera
{
"sendTo":[
{
"relay":"2001:db8::20:443",
"stunSecret":"20kdiu83kd8",
"tlsFingerprintSHA256":"9389739"
}
],
"sendStreams":[
{
"conferenceID":0,
"endpointID":0,
"sourceID":0,
"encodingID":1,
"codecName":"av1",
"AEAD":"NULL",
"width":640,
"height":480,
"frameRate":30
},
{
"conferenceID":0,
"endpointID":0,
"sourceID":0,
"encodingID":2,
"AEAD":"NULL",
"codecName":"flex-fec-rs",
"fecRepairWindow":200,
"fecRepairEncodingIDs":[
1
]
}
]
}
9. Switched Forwarding Unit (SFU)
When several clients are in conference call, the SFU can forward
packets based on looking at which clients needs a given
GlobalEncodingID. By looking at the "active level", the SFU can
figure out which endpoints are the active speaker and forward only
those. The SFU never changes anything in the message.
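A minimal sketch of the active-speaker selection described above; the helper name and the message-as-dict shape are assumptions:

```python
def forward_targets(messages, top_n=1):
    # Rank candidate encodings by their "active level" header and pick
    # the top speakers. The SFU forwards the chosen messages unmodified;
    # it never rewrites headers or payload.
    ranked = sorted(messages, key=lambda m: m["activeLevel"], reverse=True)
    return ranked[:top_n]
```

Because the selection uses only the authenticated-but-unencrypted headers, the SFU needs no access to the end-to-end media keys.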
10. Informative References
[I-D.barnes-mls-protocol]
Barnes, R., Millican, J., Omara, E., Cohn-Gordon, K., and
R. Robert, "The Messaging Layer Security (MLS) Protocol",
draft-barnes-mls-protocol-00 (work in progress), February
2018.
[I-D.ietf-payload-flexible-fec-scheme]
Singh, V., Begen, A., Zanaty, M., and G. Mandyam, "RTP
Payload Format for Flexible Forward Error Correction
(FEC)", draft-ietf-payload-flexible-fec-scheme-05 (work in
progress), July 2017.
[I-D.jennings-dispatch-snowflake]
Jennings, C. and S. Nandakumar, "Snowflake - A Lightweight,
Asymmetric, Flexible, Receiver Driven Connectivity
Establishment", draft-jennings-dispatch-snowflake-01 (work
in progress), March 2018.
[I-D.jennings-mmusic-ice-fix]
Jennings, C., "Proposal for Fixing ICE", draft-jennings-
mmusic-ice-fix-00 (work in progress), July 2015.
[I-D.peterson-sipcore-advprop]
Peterson, J. and C. Jennings, "The Advertisement/Proposal
Model of Session Description", draft-peterson-sipcore-
advprop-01 (work in progress), March 2011.
Author's Address
Cullen Jennings
Cisco
Email: fluffy@iii.ca