
Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability
Author(s)/Editor(s): Stan Kurkovsky (Central Connecticut State University, USA)
Copyright: ©2010
DOI: 10.4018/978-1-60566-978-6
ISBN13: 9781605669786
ISBN10: 1605669784
EISBN13: 9781605669793



Description

Originally designed for interpersonal communication, today mobile devices are capable of connecting their users to a wide variety of Internet-enabled services and applications.

Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability explores a variety of perspectives on multimodal user interface design, describes a variety of novel multimodal applications, and provides real-life experience reports. Containing research from leading international experts, this innovative publication presents core concepts that define multi-modal, multi-channel, and multi-device interactions and their role in mobile, pervasive, and ubiquitous computing.



Preface

In the last two decades, two technological innovations have had a significant impact on our everyday lives and habits. The first, the Internet, provides new means of accessing vast amounts of information and an exploding number of online services. The Internet has revolutionized the way people communicate with each other, receive news, shop, and conduct other day-to-day activities. The second, the mobile phone, gives users a simple anytime, anywhere communication tool. Originally designed for interpersonal communication, today's mobile phones can connect their users to a wide variety of Internet-enabled services and applications, ranging from a simplified web browser to a GPS-enabled navigation system. Researchers and practitioners agree that the combination of these two innovations (e.g., online Internet-enabled services accessed via a mobile phone) is having a revolutionary impact on the development of mobile computing systems.

However, the development of mobile applications may be hindered by certain features of mobile devices, such as screens that are often too small to effectively display the graphical content found on the Internet. Mobile phones may also suffer from slow processors or inconvenient keyboards, making it difficult to access lengthy or media-rich information on the web, and from a relatively short battery life that may not be sufficient for such network traffic-intensive uses as web browsing or viewing mobile video broadcasts. So far, research has focused mostly on mobile applications designed for smart phones, in which the application logic is usually placed within the mobile device. Speech, however, remains the most basic and the most efficient form of communication, and providing speech-based communication remains the prevalent function of any mobile phone. Many other means of interpersonal communication and human perception, including gestures, motion, and touch, are also finding their way into mobile computing interfaces.

The primary function of any phone, no matter how basic, is to enable voice communication, which remains the most natural and simple method of communication, ideally suited for on-the-go and hands-free access to information. A voice interface allows the user to speak commands and queries while receiving audio responses. Furthermore, the combination of mobile and voice technologies can open new avenues for marketing, entertainment, news and information, and business locator services. As mobile devices grow in popularity and become ever more powerful and feature-rich, they remain constrained in screen and keyboard size, battery capacity, and processor speed. There is a wide variety of models, manufacturers, and operating systems for mobile devices, each of which may have unique input/output capabilities. This creates new challenges for the developers of mobile applications, especially if they are to embrace different interaction modalities. Current efforts to establish a standard for multimodal interface specification are still far from mature and are not widely accepted by industry. Nevertheless, multimodal interface design is a rapidly evolving research area, especially for mobile information services.

Applications or systems that combine multiple modalities of input and output are referred to as multimodal. For example, the iPhone combines a traditional screen-and-keyboard interface, a touch interface, an accelerometer-based motion interface, and a speech interface, and all applications running on it should (at least in theory) be able to take advantage of these modalities of input/output. The objectives of multimodal systems are two-pronged: to achieve interaction that is closer to natural interpersonal human communication, and to improve the dependability of the interaction by employing complementary or redundant information. Generally, multimodal applications are more adaptable to the needs of different users in varying contexts. They also have stronger acceptance potential because they can be accessed in more than one manner (e.g., using speech or a web interface) and by a broader range of users in a wider set of circumstances.

Recognizing that mobile computing is one of the most rapidly growing areas of the software market, this book explores the role of multimodality and multimodal interfaces in mobile computing. Mobile computing has very strong potential due to the extremely high market penetration of mobile and smart phones, the high degree of user interest in and engagement with mobile applications, and an emerging trend of integrating traditional desktop and online systems with their mobile counterparts. Multimodal interfaces play an important role in improving the accessibility of these applications, leading to their increased acceptance by users.

This book is a collective effort of many researchers and practitioners from industry (including Orange, Microsoft, SAP, and others) and academia. It offers a variety of perspectives on multimodal user interface design, describes a variety of novel multimodal applications and provides several experience reports with experimental and industry-adopted mobile multimodal applications.

The book opens with the Introduction, which consists of two chapters. Chapter 1, Multimodal and Multichannel Issues in Pervasive and Ubiquitous Computing by José Rouillard, describes the core concepts that define multi-modal, multi-channel, and multi-device interactions and their role in mobile, pervasive, and ubiquitous computing. This chapter also presents three case studies illustrating the issues that arise in designing mobile systems that support different interaction modalities, such as voice or gesture, over different communication channels, such as the web or telephone. Chapter 2, Ubiquitous User Interfaces: Multimodal Adaptive Interaction for Smart Environments by Marco Blumendorf, Grzegorz Lehmann, Dirk Roscher, and Sahin Albayrak, surveys multimodal user interfaces as they provide the means to support the user in various situations and to adapt the interaction to the user's needs. The authors focus on different aspects of modeling user interaction in an adaptive multimodal system and illustrate their approach with a system that utilizes design-time user interface models at runtime to provide flexible multimodal user interfaces.

Theoretical foundations of multimodal interaction in the mobile environment are discussed in the second part of the book. Chapter 3, A Formal Approach to the Verification of Adaptability Properties for Mobile Multimodal User Interfaces by Nadjet Kamel, Sid Ahmed Selouani, and Habib Hamam, discusses the benefits of using formal methods for the specification and verification of multimodal user interfaces in mobile computing systems, with the main emphasis on usability and adaptability. The authors present an approach that provides formal interface verification using a fully automatic model-checking technique, which allows verification at earlier stages of the development life cycle and decreases system maintenance costs. Chapter 4, Platform Support for Multimodality on Mobile Devices by Kay Kadner, Gerald Huebsch, Martin Knechtel, Thomas Springer, and Christoph Pohl, surveys the basics of multimodal interaction in the context of mobility and introduces a number of concepts for platform support. This chapter also discusses different synchronization approaches for input fusion and output fission, as well as a concept of device federation as a means to leverage heterogeneous devices.

The third part of the book outlines approaches for designing multimodal mobile applications and systems. Chapter 5, Designing Multimodal Mobile Applications by Marco de Sá, Carlos Duarte, Luís Carriço, and Tiago Reis, describes a set of techniques and tools that aim at supporting designers in creating mobile multimodal applications. The authors present a framework for scenario generation and context definition that can be used to drive design and support evaluation within realistic settings, promoting in-situ design and richer results. This chapter also describes a prototyping tool developed to support the early-stage prototyping and evaluation of mobile multimodal applications, from the first sketch-based prototypes up to the final quantitative analysis of usage results. Chapter 6, Bodily Engagement in Multimodal Interaction: A Basis for a New Design Paradigm? by Kai Tuuri, Antti Pirhonen, and Pasi Välkkynen, argues that the current mainstream design paradigm for multimodal user interfaces treats human sensory-motor modalities and the related user-interface technologies as separate channels of communication between the user and an application. The chapter outlines an alternative design paradigm based on an action-oriented perspective on human perception and the process of meaning creation. This perspective stresses the integrated sensory-motor experience and the active embodied involvement of the subject in perception as a natural part of interaction. Chapter 7, Two Frameworks for the Adaptive Multimodal Presentation of Information by Yacine Bellik, Christophe Jacquet, and Cyril Rousseau, addresses the problem of choosing the right communication modalities among those available to the system at a given moment. The authors consider this problem from two perspectives: as the contextual presentation of information in a "classical" interaction situation, and as the opportunistic presentation of information in an ambient environment. Combining the two approaches, the authors define the characteristics of an ideal multimodal output system and discuss some perspectives on the intelligent multimodal presentation of information. Chapter 8, Usability Framework for the Design and Evaluation of Multimodal Interaction: Application to a Multimodal Mobile Phone by Jaeseung Chang and Marie-Luce Bourguet, presents a usability framework to support the design and evaluation of multimodal interaction systems. This framework acts as a structured and general methodology for both the design and the evaluation of multimodal interaction. The authors have implemented software tools and applied the methodology to the design of a multimodal mobile phone to illustrate the use and potential of the framework.

The fourth part of the book describes several mobile multimodal applications and systems that have been developed and deployed. Chapter 9, Exploiting Multimodality for Intelligent Mobile Access to Pervasive Services in Cultural Heritage Sites by Antonio Gentile, Antonella Santangelo, Salvatore Sorce, Agnese Augello, Giovanni Pilato, Alessandro Genco, and Salvatore Gaglio, examines the role of multimodality in intelligent mobile guides for cultural heritage environments. This chapter outlines a timeline of cultural heritage system evolution, highlighting design issues such as intelligence and context-awareness in providing information. The authors describe the role and advantages of multimodal interfaces in such systems and present a case study of a multimodal framework that combines intelligent conversational agents with speech recognition/synthesis technology within a location-based framework. Chapter 10, Multimodal Search on Mobile Devices: Exploring Innovative Query Modalities for Mobile Search by Xin Fan, Mark Sanderson, and Xing Xie, explores innovative query modalities that enable mobile devices to support richer, hybrid queries such as text, voice, image, location, and their combinations. The authors describe a solution that supports mobile users in performing visual queries, e.g., with captured pictures and textual information. Chapter 11, Simplifying the Multimodal Mobile User Experience by Keith Waters, describes multimodality with handsets in cellular mobile networks, coupled to new opportunities in targeted Web services that aim to simplify and speed up interactions through new user experiences.

The fifth part of the book presents a number of new directions in mobile multimodal interfaces that researchers and practitioners are currently exploring. Chapter 12, Multimodal Cues: Exploring Pause Intervals between Haptic/Audio Cues and Subsequent Speech Information by Aidan Kehoe, Flaithri Neff, and Ian Pitt, addresses the numerous challenges of accessing user assistance information in mobile and ubiquitous computing scenarios. Speech, together with non-speech sounds and haptic feedback, can be used to make assistance information available to users. This chapter examines user perception of the duration of the pause between a cue, which may be a non-speech sound, a haptic effect, or a combination of the two, and the subsequent delivery of assistance information using speech. Chapter 13, Towards Multimodal Mobile GIS for the Elderly by Julie Doyle, Michela Bertolotto, and David Wilson, focuses on developing technological solutions that can help the elderly live full, healthy, and independent lives. This chapter analyzes mobile multimodal interfaces, with emphasis on GIS and the specific requirements of the elderly in relation to assistive technologies. The authors identify specific requirements for the design of multimodal GIS through a usage example of a system that has been developed. Chapter 14, Automatic Signature Verification on Handheld Devices by Marcos Martinez-Diaz, Julian Fierrez, and Javier Ortega-Garcia, introduces automatic signature verification as a component of a multimodal interface on mobile devices with comparatively low resolution. The authors analyze applications and challenges of signature verification and review available resources and research directions. The chapter includes a case study describing a state-of-the-art signature verification system adapted to handheld devices.

With in-depth coverage of a variety of topics in multimodal interaction in the mobile environment, this book aims to fill a gap in the literature on mobile computing. Mobile computing is receiving growing coverage in the literature due to its tremendous potential. Today, the growing convergence of mobile computing with online services is leading to increased interest in publications covering different aspects of mobile computing and ways to make mobile applications more accessible and acceptable to users. Multimodal user interfaces that combine mutually complementary modes of human-computer interaction are an excellent avenue for increasing user satisfaction with technology and achieving higher user acceptance of a computing system.

This book will be of interest to researchers in industry and academia working in the areas of mobile computing, human-computer interaction, and interface usability; to graduate and undergraduate students; and to anyone with an interest in mobile computing and human-computer interaction.

    Stan Kurkovsky
    Central Connecticut State University

Reviews and Testimonials

Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability offers a variety of perspectives on multimodal user interface design, describes a variety of novel multimodal applications and provides several experience reports with experimental and industry-adopted mobile multimodal applications.

– Stan Kurkovsky, Central Connecticut State University, USA

Author's/Editor's Biography

Stan Kurkovsky (Ed.)
Stan Kurkovsky is an associate professor in the Department of Computer Science at Central Connecticut State University. Stan earned his PhD from the Center for Advanced Computer Studies of the University of Louisiana (1999). Results of his doctoral research have been applied to network planning and industrial simulation. Stan's current research interests are in mobile and pervasive computing, distributed systems, and software engineering. He has published over 40 papers in refereed proceedings of national and international conferences, scientific journals, and books. Stan serves as a reviewer and program committee member for a number of national and international conferences. During his academic career, Stan has received over a million dollars in funding from private and federal sources.

