Interaction Techniques and Convenience

Smart Watches, Communication Robots & Auditory Interfaces

İrem Küçükali
Feb 9, 2020

Traditional modes of interaction, such as visual displays with basic user interface elements, are incapable of satisfying people's needs when they try to perform a task in an urban context, which is by definition a noisy, complex, and messy environment where people are usually moving or multitasking. There are several factors to consider when designing an urban interaction: the nature of the outdoor environment, bodily constraints imposed by the physical space, the advantages of spatial information, the medium of interaction, the technique of interaction, and so on. In this post, interaction techniques and their convenience for the urban context will be investigated.

In a research article by Kanda et al. (2010), the interaction of a communication robot in a shopping mall is examined. Interaction in a shopping mall setting has some challenges, as in most other urban places. In the case of this article, the most important one is sensing and interpreting human input, such as behavior or speech. Speech recognition is challenging because of the robot's inconsistent recognition performance and the noisy environment. The task of the robot was giving directions and route guidance: it needs to guide people via a relay point when they cannot see their destination from their current position.

By Comixboy at English Wikipedia, CC BY 2.5, https://commons.wikimedia.org/w/index.php?curid=9672553

As stated in a Japanese government report, a majority of citizens ask for information-providing robots in public spaces (Kanda et al., 2010). A robot or a virtual assistant can help people out in an interactive way and with personal conversation. To accomplish that, the robot needs to be friendly and to bond with people through information they have shared with it in the past.

Its interactive behavior comprises a few steps. First, it introduces itself as a guide robot. Then, when asked for guidance, it provides information, or it can recommend a restaurant or a shop. While doing this, it greets people, reports its own experiences, tells stories, discloses secrets of its own (like its favorite food), identifies previously visited people, remembers their preferences, and tries to establish a relationship with them.
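To make this staged flow more concrete, here is a minimal sketch of how such a guide-robot dialogue might be organized in code. The class name, trigger words, relay-point phrasing, and visitor-id mechanism are illustrative assumptions of mine, not details taken from Kanda et al. (2010).

```python
# Illustrative sketch of a staged guide-robot interaction flow.
# All names, phrases, and trigger words are hypothetical.

class GuideRobot:
    def __init__(self):
        # preferences remembered per visitor across repeated encounters
        self.known_visitors = {}

    def interact(self, visitor_id, utterance):
        responses = []
        profile = self.known_visitors.get(visitor_id)
        if profile is None:
            # first encounter: self-introduction as a guide robot
            profile = {"visits": 0, "liked": None}
            self.known_visitors[visitor_id] = profile
            responses.append("Hello, I am a guide robot for this mall.")
        else:
            # repeated encounter: recognize the person, recall preferences
            responses.append("Nice to see you again!")
            if profile["liked"]:
                responses.append(f"Last time you liked {profile['liked']}.")
        profile["visits"] += 1

        text = utterance.lower()
        if "where" in text:
            # guide via a relay point when the destination is not visible
            responses.append("Walk to the central fountain first; from there "
                             "you will see the bookstore on your left.")
        elif "recommend" in text:
            responses.append("I recommend the ramen place on the second floor.")
            profile["liked"] = "the ramen place"
        else:
            # small talk / self-disclosure to build rapport
            responses.append("Can I tell you a secret? My favorite food is electricity.")
        return responses


robot = GuideRobot()
print(robot.interact("visitor-1", "Where is the bookstore?"))
print(robot.interact("visitor-1", "Can you recommend a restaurant?"))
```

The point of the sketch is only that route guidance, recommendations, and rapport-building small talk are distinct branches of one conversation, and that remembering earlier visits is what allows the robot to personalize repeated encounters.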

As a result, it is seen that people mostly preferred passive interactions with the robot and gave minimal responses to it. As a method of presenting results, the authors chose to share three case analyses in the paper. They also sent a questionnaire afterwards to people who had interacted with the robot, asking them to rate their experience in terms of their impression of the robot, the success of the route guidance, the success of the information providing, and, lastly, rapport building. On a scale from 1 to 7, the robot received 5 points for impression, 5.3 points for route guidance, and 4.6 points for both information providing and rapport building. In conclusion, it was found that the robot is more useful than an information display in a mall.

In another study, Sodnik et al. (2008) conducted an experiment on interaction in a mobile environment (while driving). They compared a standard visual interface with two auditory interfaces of their own design. They showed that people's driving performance is much better, they are less distracted, and their satisfaction with the interaction is higher when they use an auditory interface (Sodnik et al., 2008).

As in any urban context, driving requires a high degree of visual attention. In this case visual interfaces are not appropriate, since they distract the user's attention (Sodnik et al., 2008). Additionally, the most common kinds of mobile visual displays are generally kept in pockets or bags, where notifications cannot be seen immediately.

The authors exemplify some interaction techniques that are alternatives to the visual channel in an urban setting. Tactile interfaces with vibration, they note, cause little distraction from the main task, but they are useful for short notifications rather than long messages, and they require physical closeness to the device.
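As a simple illustration of why tactile cues suit only short notifications, such cues are often described as patterns of on/off pulses. The sketch below shows one way such patterns might be represented and kept deliberately brief; every duration and notification type here is a made-up value of mine, not a figure from the paper.

```python
# Toy sketch: short tactile notifications as vibration patterns.
# A pattern is a sequence of (on_ms, off_ms) pulses; values are illustrative.

VIBRATION_PATTERNS = {
    "message":  [(100, 50), (100, 0)],                # two short pulses
    "call":     [(400, 200), (400, 200), (400, 0)],   # three long pulses
    "reminder": [(200, 0)],                           # one medium pulse
}

MAX_PATTERN_MS = 2000  # tactile cues work as brief notifications, not long messages

def total_duration_ms(pattern):
    return sum(on + off for on, off in pattern)

def notify(kind):
    pattern = VIBRATION_PATTERNS[kind]
    assert total_duration_ms(pattern) <= MAX_PATTERN_MS, "keep tactile cues short"
    # A real device would hand this pattern to its vibration motor driver.
    print(f"{kind}: vibrate pattern {pattern} ({total_duration_ms(pattern)} ms)")

notify("message")
notify("call")
```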

Another alternative is auditory interfaces, which cover a wide range from non-speech sounds such as earcons to speech recognition/output and audio representations of complex data. Auditory interfaces do not interfere with the visual channel, and they allow a flexible range of distances to the device.

There are also spatial auditory interfaces that reflect the spatial positions of audio items. These kinds of sounds are preferably transmitted through headsets.
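To make the idea of positioning audio items in space concrete, here is a minimal sketch that places auditory menu items at different azimuths using simple constant-power stereo panning. The item names and angles are made up, and a real spatial auditory display like the one in Sodnik et al. (2008) would use proper 3D audio rendering over headphones rather than this toy panner.

```python
import numpy as np

SAMPLE_RATE = 44100

def spatialize(mono, azimuth_deg):
    """Place a mono signal at an azimuth with constant-power stereo panning.

    azimuth_deg: -90 (hard left) ... 0 (front) ... +90 (hard right).
    This is only an illustration of mapping menu items to positions.
    """
    pan = np.clip(azimuth_deg, -90, 90) / 90.0        # -1 .. 1
    theta = (pan + 1) * np.pi / 4                      # 0 .. pi/2
    left, right = np.cos(theta), np.sin(theta)         # constant power
    return np.stack([mono * left, mono * right], axis=1)

# Example: a short earcon-like beep, and three menu items laid out in space.
t = np.linspace(0, 0.3, int(SAMPLE_RATE * 0.3), endpoint=False)
beep = 0.5 * np.sin(2 * np.pi * 880 * t)
menu = {"Messages": -60, "Calls": 0, "Music": 60}
stereo_cues = {item: spatialize(beep, az) for item, az in menu.items()}
print({item: cue.shape for item, cue in stereo_cues.items()})
```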

The authors state that, in order to reduce distraction, the user interface needs to be arranged according to the situation the user is in (Sodnik et al., 2008). Therefore, to test the success of auditory interfaces, people in the experiment were asked to perform five simple tasks using a mobile phone in the car:

  1. Writing a short message to a specific person
  2. Changing the profile of the device
  3. Calling a specific person
  4. Deleting an image from the phone
  5. Playing a specific song

As another example, I want to discuss Bieber et al.'s (2012) article about smart watches. In the paper, the authors mostly draw attention to interaction gestures such as clicking, double-clicking, tapping, wiping, circling, turning, fist bumping, etc. They argue that it is more convenient to have interaction that goes beyond finger control.

Based on Bluetooth and sensor technologies, gesture interaction can be very successful. Since smart watches have a small screen, they also have the opportunity to display important information as text, and they save people the burden of getting their phones out of their pockets. Smart watches can also be used to recognize the ambient environment and the behavior of the user via sensors and data analysis. They can be integrated with speech recognition or detection of hand movements; because they can sense motion, they can even be used to understand sign language for deaf people.
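As a rough illustration of how such sensor-based recognition might work, the sketch below treats a short, sharp acceleration spike on the wrist as a gesture event (something like a fist bump). The threshold values and the data format are my assumptions for illustration, not details from Bieber et al. (2012).

```python
import math

# Toy sketch: detect a short acceleration spike on the wrist as a gesture.
# Thresholds and the sample format are illustrative assumptions.

GRAVITY = 9.81           # m/s^2
SPIKE_THRESHOLD = 2.5    # spike must exceed 2.5 g
MAX_SPIKE_SAMPLES = 10   # and be over within ~10 samples (a short impact)

def detect_spike_gesture(samples):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    above = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az) / GRAVITY
        if magnitude > SPIKE_THRESHOLD:
            above += 1
        elif above:
            # spike just ended: short and sharp -> treat it as a gesture
            if above <= MAX_SPIKE_SAMPLES:
                return True
            above = 0
    return False

# Example with a fabricated burst in otherwise quiet wrist data
quiet = [(0.1, 0.0, 9.8)] * 20
burst = [(5.0, 2.0, 30.0)] * 3
print(detect_spike_gesture(quiet + burst + quiet))   # True
```

Recognizing a richer vocabulary such as circling, turning, or wiping would of course need more than a single threshold, for example pattern matching or machine learning over the sensor stream, but the basic pipeline of reading wrist sensors and mapping motion to events is the same.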

The results of the studies mentioned show that in complex environments, interaction through visual displays may not be the best option in terms of usability. Instead, using auditory interaction produces much better results.

This is an incomplete piece of work that I started for a research project some time ago. I thought it would be better for it to be open and online than private, for my eyes only. I hope it helps or inspires somebody toward a better and more complete work. Contributions are welcome.

References

Bieber, G., Kirste, T., & Urban, B. (2012, June). Ambient interaction by smart watches. In Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments (p. 39). ACM.

Kanda, T., Shiomi, M., Miyashita, Z., Ishiguro, H., & Hagita, N. (2010). A communication robot in a shopping mall. IEEE Transactions on Robotics, 26(5), 897–913.

Sodnik, J., Dicke, C., Tomažič, S., & Billinghurst, M. (2008). A user study of auditory versus visual interfaces for use while driving. International Journal of Human-Computer Studies, 66(5), 318–332.
