Contents: Statistical Machine Learning for HCI; Speech Processing; Natural Language and Dialogue Processing; Image and Video Processing Tools for HCI; Processing of Handwriting and Sketching Dynamics; Basic Concepts of Multimodal Analysis; Multimodal Information Fusion; Modality Integration Methods; A Multimodal Recognition Framework for Joint Modality Compensation and Fusion; Managing Multimodal Data, Metadata and Annotations: Challenges and Solutions; Multimodal Input; Multimodal HCI Output: Facial Motion, Gestures and Synthesised Speech Synchronisation; Interactive Representations of Multimodal Databases; Modelling Interest in Face-to-Face Conversations from Multimodal Nonverbal Behaviour
Multimodal signal processing is an important research and development field that processes signals and combines information from a variety of modalities - speech, vision, language, text - to significantly enhance the understanding, modelling, and performance of human-computer interaction systems, as well as of systems that support human-human communication. The overarching theme of this book is the application of signal processing and statistical machine learning techniques to problems arising in this multidisciplinary field. It describes the capabilities and limitations of current technologies and discusses the technical challenges that must be overcome to develop efficient, user-friendly multimodal interactive systems.
With contributions from leading experts in the field, this book serves as a reference on multimodal signal processing for signal processing researchers, graduate students, R&D engineers, and computer engineers interested in this emerging field.
- Presents state-of-the-art methods for multimodal signal processing, analysis, and modelling
- Contains numerous examples of systems that combine multiple modalities
- Describes advanced applications in multimodal human-computer interaction (HCI) as well as in computer-based analysis and modelling of multimodal human-human communication scenes