A Multimodal Interaction Framework for Blended Learning

Journal Title: EAI Endorsed Transactions on Creative Technologies - Year 2017, Vol 4, Issue 10

Abstract

Humans interact with each other using the five basic senses as input modalities, while sounds, gestures, facial expressions, and the like serve as output modalities. Multimodal interaction also governs how humans engage with their surrounding environment, enhanced by further senses such as equilibrioception, the sense of balance. Computer interfaces, which can be regarded as yet another environment that humans interact with, lack the amalgamation of input and output needed to provide close-to-natural interaction. Multimodal human-computer interaction has therefore sought alternative means of communicating with an application that are more natural than the traditional “windows, icons, menus, pointer” (WIMP) style. Despite the great number of devices in existence, most applications make use of a very limited set of modalities, most notably speech and touch. This paper describes a multimodal framework that enables the deployment of a wide variety of modalities, tailored for use in a blended learning environment, and introduces a unified and effective framework for multimodal interaction called COALS.
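
The abstract does not reproduce any of the COALS framework's code or API. Purely as an illustrative sketch, a modality-agnostic framework of the kind described is often organized around a registry of input modalities that publish normalized events into a fusion layer, which then dispatches the winning interpretation to the application. Every name below (ModalityEvent, FusionEngine, publish, fuse) is hypothetical and not taken from the paper.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List
    import time

    @dataclass
    class ModalityEvent:
        # Every modality (speech, touch, gesture, gaze, ...) normalizes its
        # raw input into this common structure before fusion.
        modality: str          # e.g. "speech", "gesture"
        action: str            # semantic action, e.g. "next-slide"
        confidence: float      # recognizer confidence in [0, 1]
        timestamp: float = field(default_factory=time.time)

    class FusionEngine:
        # Collects events from all registered modalities and dispatches the
        # highest-confidence interpretation per action to application handlers.
        def __init__(self) -> None:
            self._handlers: Dict[str, List[Callable[[ModalityEvent], None]]] = {}
            self._buffer: List[ModalityEvent] = []

        def on(self, action: str, handler: Callable[[ModalityEvent], None]) -> None:
            self._handlers.setdefault(action, []).append(handler)

        def publish(self, event: ModalityEvent) -> None:
            self._buffer.append(event)

        def fuse(self, window: float = 0.5) -> None:
            # Naive late fusion: within the time window, keep the most
            # confident event per action, fire its handlers, clear the buffer.
            now = time.time()
            best: Dict[str, ModalityEvent] = {}
            for event in self._buffer:
                if now - event.timestamp > window:
                    continue
                current = best.get(event.action)
                if current is None or event.confidence > current.confidence:
                    best[event.action] = event
            for event in best.values():
                for handler in self._handlers.get(event.action, []):
                    handler(event)
            self._buffer.clear()

    # Usage: two modalities report the same learner intent; fusion
    # dispatches the more confident one.
    engine = FusionEngine()
    engine.on("next-slide", lambda e: print(f"advance slide (via {e.modality})"))
    engine.publish(ModalityEvent("speech", "next-slide", confidence=0.72))
    engine.publish(ModalityEvent("gesture", "next-slide", confidence=0.91))
    engine.fuse()  # prints: advance slide (via gesture)

Late fusion of this kind, where each recognizer reports a confidence and the framework arbitrates, is one common design for keeping individual modalities decoupled from the application; whether COALS follows this pattern is not stated on this page.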

Authors and Affiliations

N. Vidakis

  • EP ID EP45867
  • DOI http://dx.doi.org/10.4108/eai.4-9-2017.153057

How To Cite

N. Vidakis (2017). A Multimodal Interaction Framework for Blended Learning. EAI Endorsed Transactions on Creative Technologies, 4(10), -. https://www.europub.co.uk/articles/-A-45867