AGENDA FOR DMSP BOF

Co-Chair: Nathaniel Borenstein, IBM
Co-Chair: Chris Cross, IBM
Area Directors: Ted Hardie, Lisa Dusseault

I. Note Well, Blue Sheets, Introductions

II. DMSP Overview and Discussion -- Chris Cross, IBM, and J. Engelsma, Motorola
    (Initial I-D at http://www.ietf.org/internet-drafts/draft-engelsma-dmsp-01.txt)

III. Goals for a Possible WG
     See draft charter below.

IV. Next Steps
    -- Create WG?
    -- Revise I-D?

Draft Charter

The convergence of wireless communications with information technology, together with the miniaturization of computing platforms, has produced advanced mobile devices that offer high-resolution displays, application programs with graphical user interfaces, and access to the Internet through full-function web browsers. Mobile phones now support most of the functionality of a laptop computer. However, the miniaturization that has made the technology possible and commercially successful also puts constraints on the user interface: tiny displays and keypads significantly reduce the usability of application programs.

Multimodal user interfaces, that is, UIs offering multiple modes of interaction, have been developed and greatly improve the usability of mobile devices. In particular, multimodal UIs that combine speech and graphical interaction are proving themselves in the marketplace. However, not all mobile devices provide the computing resources to perform speech recognition and synthesis locally on the device. For these devices it is necessary to distribute the speech modality to a server in the network.

The Distributed Multimodal (DM) Working Group will develop the protocols necessary to control, coordinate, and synchronize the modalities in a distributed multimodal system. Several protocols and standards are needed to implement such a system, including DSR and AMR speech compression, session control, and media streaming.
However, the DM WG will focus exclusively on the synchronization of modalities rendered across a network, in particular between graphical user interfaces and voice servers.

The DM WG will develop an RFC for a Distributed Multimodal Synchronization Protocol (DMSP) that defines the logical message set used to synchronize modalities, together with enough background on the expected multimodal system architecture (or a reference architecture defined elsewhere, e.g., in the W3C or OMA) to present a clear understanding of the protocol. It will investigate existing protocols for transporting the logical synchronization messages and develop an RFC detailing the message format for commercial alternatives, possibly including HTTP and SIP. While not limited to these, for simplicity of scope the protocol will assume RTP for carriage of media, SIP and SDP for session control, and DSR and AMR for speech compression.

The working group will not consider the authoring of applications; it is assumed that this will be done with existing W3C markup standards such as XHTML and VoiceXML and with commercial programming languages such as Java and C/C++. The group expects to coordinate its work in the IETF with the W3C Multimodal Interaction Working Group.

Goals and milestones for the Working Group:

Date  Milestone
TBD   Submit Internet-Draft describing DMSP (standards track)
TBD   Submit drafts to IESG for publication
TBD   Submit DMSP specification to IESG
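To make the charter's notion of a "logical message set" concrete, the sketch below shows what one synchronization event between a device GUI and a network voice server might look like. All message and field names here are illustrative assumptions, not taken from the DMSP Internet-Draft; a real implementation would use the message set defined there and carry it over a transport such as HTTP or SIP.

```python
# Hypothetical sketch of a DMSP-style logical synchronization message.
# The idea: when the user interacts with one modality (e.g. taps a form
# field in the GUI), a small event message is sent so the other modality
# (e.g. a VoiceXML dialog on a voice server) can move to the same state.
import json

def make_sync_message(source_modality, event, target_field):
    """Build a logical sync event as JSON (field names are assumptions)."""
    return json.dumps({
        "protocol": "DMSP-sketch",   # placeholder, not a real protocol id
        "source": source_modality,   # e.g. "gui" or "voice"
        "event": event,              # e.g. "focus" or "value-changed"
        "field": target_field,       # field id shared by both markups
    })

# Example: the device GUI reports that focus moved to the "city" field;
# the receiving side would direct the voice dialog to prompt for it.
msg = make_sync_message("gui", "focus", "city")
```

The key design point the charter implies is that the message is purely logical: it names an interaction event and a shared dialog state, leaving media transport (RTP) and session setup (SIP/SDP) to existing protocols.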