Description

Book Synopsis
The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces.

This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This third volume focuses on state-of-the-art multimodal language and dialogue processing, including semantic integration of modalities. The development of increasingly expressive embodied agents and robots has become an active test bed for coordinating multimodal dialogue input and output, including processing of language and nonverbal communication. In addition, major application areas are featured for commercializing multimodal-multisensor systems, including automotive, robotics, manufacturing, machine translation, banking, communications, and others. These systems rely heavily on software tools, data resources, and international standards to facilitate their development.

For insights into the future, emerging multimodal-multisensor technology trends are highlighted in medicine, robotics, interaction with smart spaces, and similar areas. This volume also discusses the societal impact of more widespread adoption of these systems, such as privacy risks and how to mitigate them.

The handbook chapters provide a number of walk-through examples of system design and processing, information on practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic: how multimodal-multisensor interfaces will need to be equipped to most effectively advance human performance during the next decade.



Table of Contents
  • Preface
  • Figure Credits
  • Introduction: Toward the Design, Construction, and Deployment of Multimodal-Multisensor Interfaces
  • MULTIMODAL LANGUAGE AND DIALOGUE PROCESSING
    • Multimodal Integration for Interactive Conversational Systems
    • Multimodal Conversational Interaction with Robots
    • Situated Interaction
    • Software Platforms and Toolkits for Building Multimodal Systems and Applications
    • Challenge Discussion: Advancing Multimodal Dialogue
    • Nonverbal Behavior in Multimodal Performances
  • MULTIMODAL BEHAVIOR
    • Ergonomics for the Design of Multimodal Interfaces
    • Early Integration for Movement Modeling in Latent Spaces
    • Standardized Representations and Markup Languages for Multimodal Interaction
    • Multimodal Databases
  • EMERGING TRENDS AND APPLICATIONS
    • Medical and Health Systems
    • Automotive Multimodal Human-Machine Interface
    • Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications
    • Multimodal Dialogue Processing for Machine Translation
    • Commercialization of Multimodal Systems
    • Privacy Concerns of Multimodal Sensor Systems
  • Index
  • Biographies
  • Volume 3 Glossary

The Handbook of Multimodal-Multisensor Interfaces, Volume 3: Language Processing, Software, Commercialization, and Emerging Directions

£111.20 (RRP £139.00; you save £27.80, 20%)

Hardback by Sharon Oviatt, Björn Schuller, and Philip Cohen



    Publisher: Morgan & Claypool Publishers
    Publication Date: 30/06/2019
    ISBN13: 9781970001754
    ISBN10: 1970001755

