A formal model for output multimodal HCI
Linda Mohand Oussaïd
Idir Ait Sadoune
Yamine Ait Ameur
Multimodal human–computer interfaces (HCI) combine modalities at an abstract specification level in order to obtain information from the user (input multimodality) and to return information to the user (output multimodality). These multimodal interfaces rely on two mechanisms: first, the fusion of information transmitted by the user over different modalities during input interaction, and second, the fission, or decomposition, of information produced by the functional core so that composite information can be distributed over the different modalities during output interaction. In this paper, we present a generic approach to the design of output multimodal interfaces. This approach is based on a formal model composed of two sub-models: a semantic fission model describing the information decomposition process, and an allocation model assigning modalities and media to the composite information. An Event-B formalization is proposed for both the fission model and the allocation model. This formalization extends the generic model and supports the verification of relevant properties such as safety and liveness. An example of the verification of a collision-freeness property is presented in this paper.
Multimodal interaction, Formal modelling, Semantic fission, Modalities and media allocation, Event-B