Listen to what your bathroom is telling you!

In this article, we present the results of the IKS Ambient Intelligence (AmI) use case. The key question addressed by this case is how to make content easily available in an everyday environment. The AmI use case represents a “far out” vision of IKS for direct user interactions with embedded content that is organized by a “Semantic CMS Technology Stack”. We combine advanced content and knowledge management with a ubiquitous computing scenario in a place everybody is familiar with – the bathroom. Together with the bath furniture company Duravit, an interactive content-enhanced bathroom was developed and evaluated by USAAR, CNR, DFKI, HSG and SRDC. From an Information Systems point of view, we have adopted a Design Science approach, and the result can be seen as a concrete instance of a Ubiquitous Information System. In the following, we present the results of the IKS AmI use case from two perspectives:

  • End User Perspective
  • Technical Perspective

End User Perspective

We have developed an interactive bathroom prototype that provides users with the following six content-centered information and communication services (cf. Figure 1 below):

  1. Weather Information Service: A service that provides weather information in the bathroom. Interaction: Distance sensor in front of the mirror triggers today’s weather information to be displayed on the mirror.
  2. Event Recommendation Service: A service that recommends events (e.g., theatre play, concert, movie in the cinema, etc.) in the bathroom. Interaction: The distance sensor in front of the mirror triggers three events to be displayed in the mirror.
  3. Ticket Order Service: A service that allows you to order tickets for events. Interaction: (a) The distance sensor in front of the mirror triggers event recommendations to be displayed in the mirror, (b) a ticket can be requested by touching an event, (c) a verification question is asked via the speakers and (d) the array microphone listens to the user’s answer.
  4. Personalized Music Service: A service that plays music from a music collection in the bathroom. Interaction: Distance sensor in front of the eScreen starts the playlist and wiping along the touch-sensitive interaction border stops the music again.
  5. Personalized News Collage Service: A service that provides a personalized news collage (e.g., a news collage that addresses your interests in sports and politics) in the bathroom. Interaction: (a) The user asks for today’s news from within the bathroom; the request is captured by the microphone array. (b) The news is then displayed as text or video on the eScreen, the Shower or the mirror, depending on the location of the user.
  6. Adaptive News Service: A service that provides the same news as described above but in different forms (e.g., via audio or via text and images) depending on the location of the user in the bathroom. Interaction: Distance sensors in front of the mirror, the eScreen or the Shower determine the form of the news (text or video) such that the user can “take” the news from one location to another within the bathroom.

The interaction with these services ranged from rather direct interaction techniques, such as touch and speech control, to indirect interaction with content, e.g., triggered by the location of users as detected by distance sensors. The spatial placement of the services is shown in Figure 1.

Figure 1: Spatial placement of the six information and communication services. Note: IK point stands for Interactive Knowledge

A first user study with 55 participants was conducted in June 2011. The six services were perceived as useful and easy to use, and participants also had fun using them. The behavioural intention to adopt these services in the very near future was also measured. Results show that all services, with the exception of the Ticket Order Service, are very likely to be adopted by our participants. Furthermore, the fit between the six services and the “bathroom situation” as such was perceived as significantly positive. This indicates that the design of the interactive bathroom was perceived as being more or less “natural”.

Moreover, qualitative feedback shows that users were sometimes distracted from their own mirror image by the information made available on the screen. One solution might therefore be to make information draggable and to move the content of the mirror-based services from the center of the mirror – where it is currently placed – to the periphery. Another challenge is the rather moderate reaction time of the system, as mentioned by the participants of the study. However, as this is just the first prototype, with various hardware and software components interacting with each other (see the next section for details), participants primarily addressed details of the current implementation rather than asking for a totally new concept or design of the bathroom.

In addition to the six services that were evaluated, participants proposed the following additional services that they would be likely to use in an interactive bathroom and which reflect relevant needs from a user perspective:

  • Date, time & calendar
  • Social networks & Twitter
  • E-Mail and other communication services such as phone
  • Radio, TV and Movies
  • Other services such as home automation, music clips, podcasts, etc.

You might now have a look at the following video clip that demonstrates the various services embedded into the bathroom.

If you are interested in further details of this study, please have a look at Section 9 of IKS Deliverable D4.1 “AmI Case Design and Implementation”.

Technical Perspective

From the technical perspective, the interactive bathroom prototype is an instance of a modular Ubiquitous Information System (UIS). The UIS consists of several loosely coupled and thus exchangeable modules that realize the aforementioned services while providing a maximum of extensibility and adaptability. The OSGi implementation Eclipse Equinox is used to orchestrate the modules at runtime (a minimal sketch of such a module registration follows the list below). The main modules involved in the AmI case UIS are the following:

  • Knowledge Repository Module
  • Device Management Module
  • Context & Situation Management Modules
  • Speech Communication Module
  • Semantic Content Extractor Module

A brief overview of each of these modules is provided in the following.
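
Before the individual modules are described, here is a minimal, hypothetical sketch of how such a module could be packaged as an OSGi bundle and registered as a service in Equinox so that other modules can discover and exchange it at runtime. The WeatherService interface and its stub implementation are purely illustrative and not the actual IKS module interfaces.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service interface; the real IKS module interfaces may differ.
    interface WeatherService {
        String todaysWeather();
    }

    public class WeatherModuleActivator implements BundleActivator {

        private ServiceRegistration<WeatherService> registration;

        @Override
        public void start(BundleContext context) {
            // Register the module as an OSGi service so other bundles can look it up at runtime.
            registration = context.registerService(
                    WeatherService.class,
                    () -> "Sunny, 21°C",   // stub implementation
                    null);
        }

        @Override
        public void stop(BundleContext context) {
            // Unregistering makes the module exchangeable without restarting the framework.
            registration.unregister();
        }
    }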

Knowledge Repository Module

The Knowledge Repository Module manages the storage and orchestration of all knowledge representations and content items in the system. As a convention, every content item in the system is referenced using URIs, and all changes in the system are communicated using semantically formatted messages. The Knowledge Repository Module not only takes care of message propagation between the modules but also provides several utilities to simplify working with the knowledge representations and content items. Besides this, it provides rule-based reasoning capabilities and semantic listeners to enable uncomplicated and efficient access to the contextual and situational parts of the managed knowledge representations.
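
As an illustration of this URI-based message propagation, the following sketch shows how a change to a content item might be announced and picked up by a semantic listener. It assumes, for simplicity, that the OSGi Event Admin service is used as the transport; the topic name, property key and listener logic are invented for this example and are not the actual IKS implementation.

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.service.event.Event;
    import org.osgi.service.event.EventAdmin;
    import org.osgi.service.event.EventHandler;

    // Publishing side: announce that a content item (identified by its URI) has changed.
    public class KnowledgeChangePublisher {
        private final EventAdmin eventAdmin; // would be injected by the OSGi framework

        public KnowledgeChangePublisher(EventAdmin eventAdmin) {
            this.eventAdmin = eventAdmin;
        }

        public void announceChange(String contentItemUri) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put("contentItem", contentItemUri); // every item is addressed by its URI
            eventAdmin.postEvent(new Event("org/iks/ami/knowledge/CHANGED", props)); // illustrative topic
        }
    }

    // Subscribing side: a "semantic listener" reacting to changes of contextual knowledge.
    class ContextChangeListener implements EventHandler {
        @Override
        public void handleEvent(Event event) {
            String uri = (String) event.getProperty("contentItem");
            // e.g., re-run rule-based reasoning for the part of the context this URI belongs to
            System.out.println("Context changed for " + uri);
        }
    }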

In the re-iteration phase of the AmI use case, the Stanbol Ontology Store was integrated via its REST interface as a standardized storage layer for the knowledge representations.
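
A minimal sketch of how a knowledge representation might be pushed to the Stanbol Ontology Store over HTTP is shown below, using the standard Java HTTP client. The endpoint path, scope name and file name are illustrative assumptions; the actual REST resource layout depends on the Stanbol deployment used in the prototype.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class OntologyStoreClient {
        public static void main(String[] args) throws Exception {
            // Illustrative endpoint; the actual Stanbol ontology-manager path depends on the deployment.
            URI endpoint = URI.create("http://localhost:8080/ontonet/ontology/ami-scope");

            HttpRequest request = HttpRequest.newBuilder(endpoint)
                    .header("Content-Type", "application/rdf+xml")
                    .POST(HttpRequest.BodyPublishers.ofFile(Path.of("ami-context.owl")))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Stored ontology, HTTP status " + response.statusCode());
        }
    }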

Device Management Module

A big difference between classical information systems and a UIS is the number of devices that need to be managed in such a physical environment. On the one hand, there are sensing devices that capture the presence of users as well as their position and actions. On the other hand, several dynamically changing output devices need to be accessible for the presentation of content items.

These tasks – the dynamic discovery, integration and access of devices – are handled by the Device Management Module based on UPnP device discovery. All devices provide semantic device metadata that describes their functionalities, as well as OSGi driver bundles. In this way, devices can be integrated into and removed from the environment at runtime. Other modules can query the devices currently available and use them for the presentation of content.
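
The sketch below illustrates one way this runtime add/remove behaviour could be observed from another module, assuming the discovered UPnP devices are exposed via the OSGi UPnP Device Service and tracked with a ServiceTracker. Whether the Device Management Module is realized exactly this way is an assumption; the snippet only demonstrates the dynamic discovery described above.

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.upnp.UPnPDevice;
    import org.osgi.util.tracker.ServiceTracker;

    // Tracks UPnP devices that appear in (or disappear from) the bathroom at runtime.
    public class DeviceTracker extends ServiceTracker<UPnPDevice, UPnPDevice> {

        public DeviceTracker(BundleContext context) {
            super(context, UPnPDevice.class, null);
        }

        @Override
        public UPnPDevice addingService(ServiceReference<UPnPDevice> reference) {
            UPnPDevice device = super.addingService(reference);
            // The device metadata is exposed as service properties of the registered device.
            System.out.println("Device appeared: " + reference.getProperty(UPnPDevice.FRIENDLY_NAME));
            return device;
        }

        @Override
        public void removedService(ServiceReference<UPnPDevice> reference, UPnPDevice device) {
            System.out.println("Device removed: " + reference.getProperty(UPnPDevice.FRIENDLY_NAME));
            super.removedService(reference, device);
        }
    }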

Context Management Module

The Context Management Module is responsible for continuously updating the contextual parts of the knowledge representation based on changes in the environment. The so-called AmI ODPs are semantic representations of all concepts that are involved in the bathroom situation, e.g., the user and his or her preferences, content items like weather information, and device descriptions such as presentation devices, sensors or lights in the environment. The module continuously adapts the semantic representation of the context to the current state of the environment. It also provides the capability to store content items of different forms retrieved from the Semantic Content Extractor Module.
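
To make the idea of a continuously adapted contextual representation concrete, the following sketch updates a user’s location in a small RDF model with Apache Jena when a distance sensor fires. The namespace and property names are invented for illustration; the real AmI ODP vocabulary is defined in the IKS deliverables.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class ContextUpdateExample {
        // Illustrative namespace; the real AmI ODP vocabulary differs.
        static final String AMI = "http://example.org/iks/ami#";

        public static void main(String[] args) {
            Model context = ModelFactory.createDefaultModel();
            Resource user = context.createResource(AMI + "user/alice");
            Property locatedAt = context.createProperty(AMI, "locatedAt");
            Resource mirror = context.createResource(AMI + "device/mirror");

            // A distance-sensor event moves the user to the mirror in the contextual model.
            user.removeAll(locatedAt).addProperty(locatedAt, mirror);

            context.write(System.out, "TURTLE");
        }
    }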

Situation Management Module

The Situation Management Module uses the contextual part prepared by the Context Management Module in combination with the situational parts, i.e., semantic descriptions of situation patterns described as AmI Pre-Artifacts, in order to manage the whole bathroom scenario from a situational perspective. The module searches the situational part of the knowledge representation for situation descriptions that fit the current situation and reacts accordingly. To this end, a continuous analysis of the contextual representation is conducted. This process is managed by two components: (1) Situation Recognition & Processing and (2) Situation Adjustment.

The first component addresses the recognition, processing and broadcasting of situations with respect to situational changes. Since situation recognition and processing are highly interconnected tasks, these two conceptual issues were realized in one component. By contrast, the second component – Situation Adjustment – takes care of adjusting the situation based on contextual changes in the bathroom environment, e.g., when the user moves to another location.
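
The following simplified sketch illustrates the recognition step: situation patterns are matched against the current context, and fitting patterns trigger their reactions. In the prototype the patterns are semantic AmI Pre-Artifacts evaluated over the knowledge representation, not plain Java predicates; the types and example patterns below are illustrative only.

    import java.util.List;
    import java.util.function.Predicate;

    // Illustrative types; the real AmI Pre-Artifacts are semantic (RDF-based) descriptions.
    record Context(String userLocation, String timeOfDay) {}

    record SituationPattern(String name, Predicate<Context> matches, Runnable reaction) {}

    public class SituationRecognition {
        public static void main(String[] args) {
            List<SituationPattern> patterns = List.of(
                    new SituationPattern("MorningNewsAtMirror",
                            c -> c.userLocation().equals("mirror") && c.timeOfDay().equals("morning"),
                            () -> System.out.println("Trigger: show news collage on the mirror")),
                    new SituationPattern("MusicAtShower",
                            c -> c.userLocation().equals("shower"),
                            () -> System.out.println("Trigger: start the personalized playlist")));

            Context current = new Context("mirror", "morning");

            // Situation Recognition & Processing: find patterns that fit the current context and react.
            patterns.stream()
                    .filter(p -> p.matches().test(current))
                    .forEach(p -> p.reaction().run());
        }
    }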

Speech Communication Module

The Speech Communication Module is responsible for speech interpretation / generation and some discourse / dialogue management. The component recognizes spoken user input and interprets the user’s requests. As a result of this operation, hypotheses are produced and checked against the situational context in order to identify the expected tasks and broadcast them to the system, which then retrieves the expected content. After the expected tasks have been performed, a multimodal presentation is produced and broadcast to the Device Access Component, while speech output is sent directly to the speech synthesis system.
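
As a rough illustration of checking recognition hypotheses against the situational context, the sketch below keeps only the hypotheses that make sense in the current situation and broadcasts the most confident one. The task names, scores and selection strategy are illustrative assumptions, not the actual dialogue management logic of the SemVox-based components.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // Illustrative hypothesis type: the recognizer's interpretation plus a confidence score.
    record Hypothesis(String task, double confidence) {}

    public class HypothesisSelection {
        public static void main(String[] args) {
            List<Hypothesis> hypotheses = List.of(
                    new Hypothesis("PLAY_MUSIC", 0.41),
                    new Hypothesis("SHOW_NEWS", 0.87),
                    new Hypothesis("ORDER_TICKET", 0.32));

            // Tasks that make sense in the current situation (e.g., the user stands at the mirror).
            List<String> supportedBySituation = List.of("SHOW_NEWS", "SHOW_WEATHER");

            // Keep only hypotheses consistent with the situational context, then pick the most confident.
            Optional<Hypothesis> selected = hypotheses.stream()
                    .filter(h -> supportedBySituation.contains(h.task()))
                    .max(Comparator.comparingDouble(Hypothesis::confidence));

            selected.ifPresent(h -> System.out.println("Broadcast task to system: " + h.task()));
        }
    }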

The module uses the following additional non-IKS-specific multimodal dialog tools: Nuance Dragon Naturally Speaking for speech recognition, SVOX TTS for text-to-speech generation, and the SemVox ODP Server as dialog server.

Semantic Content Extractor Module

The Semantic Content Extractor Module (SCEM) is responsible for collecting external content from different sources and aligning it with the AmI System. SCEM is composed of four subcomponents: (1) Content Aggregator, (2) Content Reengineer, (3) Content Refactorer and (4) Content Filterer. These subcomponents work together in the Content Retrieval & Knowledge Extraction Pipeline, through which XML- or RDF-based external content is mapped to RDF that is compatible with the AmI System. The Content Aggregator obtains content either in XML or RDF format. If the content is in XML format, the Content Reengineer transforms it into RDF. Once the content is in RDF format, the Content Refactorer further transforms it into an RDF representation that is processable by the AmI System. This transformation is based on the KReS Rules.
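
The following sketch shows the kind of transformation the pipeline performs, mapping a small XML weather snippet into RDF with Apache Jena. In the prototype this mapping is done by the reengineering and refactoring components driven by KReS Rules rather than by hand-written code; the vocabulary, element names and example data here are purely illustrative.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;
    import javax.xml.parsers.DocumentBuilderFactory;
    import java.io.StringReader;

    public class WeatherReengineeringExample {
        static final String AMI = "http://example.org/iks/ami#"; // illustrative vocabulary

        public static void main(String[] args) throws Exception {
            String xml = "<weather><city>Saarbruecken</city><temp>21</temp></weather>";

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));

            // Hand-written XML-to-RDF mapping for illustration; the AmI pipeline uses
            // reengineering components and KReS rules instead.
            Model rdf = ModelFactory.createDefaultModel();
            rdf.createResource(AMI + "weather/today")
                    .addProperty(rdf.createProperty(AMI, "city"),
                            doc.getElementsByTagName("city").item(0).getTextContent())
                    .addProperty(rdf.createProperty(AMI, "temperature"),
                            doc.getElementsByTagName("temp").item(0).getTextContent());

            rdf.write(System.out, "TURTLE");
        }
    }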

SCEM supports retrieving content from various sources in various formats and is capable of transforming the gathered content into a standard representation that is manageable by the AmI System. SCEM gathers content from NYTimes, BBC, WeatherBug, Google Calendar, Google Movies, Eventful and Eventim, as well as from any JCR-compliant repository. While serving the gathered content, it considers user preferences. For instance, it does not show social events that would overlap with existing calendar entries of users, and it filters out songs that do not match the preferred music genres of a particular user. Finally, the Content Aggregator is configurable for different languages.
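
As a simple illustration of this preference-based filtering, the sketch below removes recommended events that overlap with existing calendar entries of the user. The types and example data are illustrative; the actual Content Filterer operates on the semantic (RDF) representation of events and preferences.

    import java.time.LocalDateTime;
    import java.util.List;

    // Illustrative type; the real Content Filterer works on RDF descriptions of events.
    record SocialEvent(String title, LocalDateTime start, LocalDateTime end) {}

    public class ContentFilterExample {

        // Drop recommended events that overlap an existing calendar entry of the user.
        static List<SocialEvent> withoutCalendarConflicts(List<SocialEvent> candidates,
                                                          List<SocialEvent> calendar) {
            return candidates.stream()
                    .filter(e -> calendar.stream().noneMatch(c ->
                            e.start().isBefore(c.end()) && c.start().isBefore(e.end())))
                    .toList();
        }

        public static void main(String[] args) {
            List<SocialEvent> calendar = List.of(new SocialEvent("Dentist",
                    LocalDateTime.of(2011, 6, 20, 19, 0), LocalDateTime.of(2011, 6, 20, 20, 0)));
            List<SocialEvent> recommended = List.of(
                    new SocialEvent("Concert",
                            LocalDateTime.of(2011, 6, 20, 19, 30), LocalDateTime.of(2011, 6, 20, 22, 0)),
                    new SocialEvent("Theatre",
                            LocalDateTime.of(2011, 6, 21, 20, 0), LocalDateTime.of(2011, 6, 21, 22, 0)));

            // Only the theatre play remains; the concert overlaps the dentist appointment.
            System.out.println(withoutCalendarConflicts(recommended, calendar));
        }
    }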

The following video provides an overview of the technical realization of the AmI prototype as described in the former sections:

http://www.youtube.com/watch?v=T2L47pHhDTs

If you have any further questions, please have a look at our detailed report, IKS Deliverable D4.1 “AmI Case Design and Implementation”, or contact Andreas Filler. This article was co-authored by Andreas Filler, Suat Gonul, Sabine Janzen, Tobias Kowatsch and Massimo Romanelli, while the whole team involved in the AmI case consists of the following persons: Sabine Janzen, Eva Blomqvist, Andreas Filler, Suat Gönül, Tobias Kowatsch, Alessandro Adamou, Sebastian Germesin, Massimo Romanelli, Valentina Presutti, Cihan Cimen, Wolfgang Maass, Senan Postaci, Erdem Alpay, Tuncay Namli, Gokce Banu Laleci Erturkmen.
