Description

Book Synopsis
The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field, and provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also takes an in-depth look at the most common multimodal-multisensor combinations, for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process either gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. The handbook chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic and on how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.

Table of Contents
  • Introduction
  • Theory and Neuroscience Foundations
  • Theoretical Foundations of Multimodal Interfaces and Systems
  • The Impact of Multimodal-Multisensory Learning on Human Performance and Brain Activation Patterns
  • Approaches to Design and User Modeling
  • Multisensory Haptic Interactions: Understanding the Sense and Designing for It
  • A Background Perspective on Touch as a Multimodal (and Multisensor) Construct
  • Understanding and Supporting Modality Choices
  • Using Cognitive Models to Understand Multimodal Processes: The Case for Speech and Gesture Production
  • Multimodal Feedback in HCI: Haptics, Non-Speech Audio, and Their Applications
  • Multimodal Technologies for Seniors: Challenges and Opportunities
  • Common Modality Combinations
  • Gaze-Informed Multimodal Interaction
  • Multimodal Speech and Pen Interfaces
  • Multimodal Gesture Recognition
  • Audio and Visual Modality Combination in Speech Processing Applications
  • Multidisciplinary Challenge Topic: Perspectives on Learning with Multimodal Technology
  • Contributors’ Brief Biographies: Editors, Authors and Challenge Discussants
  • Index

The Handbook of Multimodal-Multisensor Interfaces


Paperback / softback, by Sharon Oviatt, Björn Schuller, Philip Cohen


    Publisher: Morgan & Claypool Publishers
    Publication Date: 30/05/2017
    ISBN13: 9781970001648
    ISBN10: 197000164X

