Despite being a preeminent form of social interaction, mobile phone conversations have received relatively little attention as a form of social behavior. This gap is becoming increasingly significant as two important developments take place. On one hand, Mobile HCI increasingly deals with advanced mobile phones equipped with a large number of sensors (e.g., GPS, accelerometers, magnetometers, capacitive touch) and with sufficient processing power to capture user behavior and context with unprecedented richness (e.g., position, movement, hand grip, proximity of social network members, gait type, auditory context).
On the other hand, the computing community, in particular Social Signal Processing (SSP), is making significant efforts towards the automatic understanding of social interactions captured with multiple sensors, via the analysis of verbal and nonverbal behavior.
This volume aims at bridging the gap described above by gathering contributions from both the SSP and Mobile HCI communities. This cross-pollination is expected to extend the scope of both domains and to highlight a number of research questions that not only promise significant novelty in both SSP and Mobile HCI, but also require knowledge from both domains to be investigated effectively.
The research questions to be addressed include, but are not limited to:
– Is it possible to integrate the input of mobile phone sensors in current approaches for automatic analysis of social phenomena in conversations?
– Does context influence the communication behavior of people talking on the phone?
– Does the transmission of nonverbal behavioral cues, so important in face-to-face communication, improve phone conversation experience?
– Does a better understanding of communication behavior influence the design of mobile phones?
– Can we evaluate how the use of a mobile phone affects the key social interaction variables of ‘trust’ and ‘competence’?
– Can we create metrics which help us evaluate the effect on social interaction of augmenting the voice channel with other feedback channels?
– Can we create non-vocal but embodied interaction techniques that are appropriate for mobile use?
– What would be the ethical issues related to the everyday use of in-hand, automated social signal analysis?
Deadline: December 15th, 2010
The contributions will be published in a volume of the Springer LNCS series.
Formatting instructions are available at the following site:
Papers are expected to be between 6 and 12 pages.
Papers can be submitted at the following URL:
For informal inquiries please contact Alessandro Vinciarelli (vincia(at)dcs.gla.ac.uk)