Polymedia Semantic Video Annotation Services powered by Apache Stanbol

In order to be more effective and to prepare for the industrial roadshow, we decided to enhance our prototypes with respect to what was presented during the first two years of the project, namely the Polymedia JSNI (JavaScript Native Interface) Use Case and the Multimedia Vertical Use Case (Editor and Player).

This has been done in the scope of the project with the main goal of increasing the appeal of the demonstrators when presented to potential prospects or, more generally, to the participants of industrial events and conferences that Polymedia/KIT digital has been planning to support. Of course, we are still talking about prototypes, so we do not claim to deliver fully fledged products in a mature engineering phase, but rather concrete ideas on how to leverage semantic technologies to add value to our current mainstream product line while addressing customers’ requests at the same time.

As a brief introduction to the improvements we have made, let us first recall what we are talking about: in the first period of the project, Polymedia designed and developed a demonstrator based on its flagship product, Polymedia CMS. The prototype is based on the Polymedia CMS SmartGWT testbed environment. This tool is an open implementation of the Company’s CMS with restricted functionalities, aimed at testing the impact of changes, integrations and additional components before they affect the mainstream product and, most importantly, at avoiding the limitations imposed by the Company’s business strategies. This demonstrator already integrates the Stanbol Enhancer to extract concepts and find images and DBpedia links. Linked Open Data (LOD) functionality, developed by CNR, has been added as well. The textual content and the generated metadata are saved within the Polymedia CMS database, opening up a wide range of possibilities, such as exploiting the semantically annotated text within the product by implementing additional functionalities, or adopting components provided by IKS for further processing or metadata querying.
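To give a rough idea of how this integration works, here is a minimal sketch (not the actual prototype code, which goes through the SmartGWT testbed): the CMS posts the article text to the Stanbol Enhancer REST endpoint and stores the returned enhancement graph alongside the content. The base URL below is an assumption.

```typescript
// Minimal sketch of the enhancement step, assuming a local Stanbol instance
// at http://localhost:8080 (URL and port are assumptions, not the prototype's
// actual deployment).
async function enhanceText(text: string): Promise<unknown> {
  const response = await fetch("http://localhost:8080/enhancer", {
    method: "POST",
    headers: {
      "Content-Type": "text/plain",
      // Ask for the enhancement graph as JSON; RDF/XML or Turtle work too.
      "Accept": "application/json",
    },
    body: text,
  });
  if (!response.ok) {
    throw new Error(`Stanbol Enhancer returned ${response.status}`);
  }
  // The result is an RDF graph containing fise:TextAnnotation and
  // fise:EntityAnnotation resources (e.g. DBpedia entity references)
  // that the CMS stores alongside the article text.
  return response.json();
}
```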

At the beginning of 2012 the prototype was extended: firstly, by refining the LOD integration, with the support of CNR and according to the updated services, now based on the latest version of Stanbol.

Polymedia Site Publishing – Linked Open Data (LOD) enrichment
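As a purely illustrative sketch of what the LOD enrichment adds on top of the Enhancer output (the actual services were developed by CNR and are not shown here), one can dereference a DBpedia entity suggested by the Enhancer and pull back a few extra facts, for example via the public DBpedia SPARQL endpoint:

```typescript
// Illustrative only: fetch a short abstract and thumbnail for a DBpedia entity
// returned by the Enhancer, using the public DBpedia SPARQL endpoint.
async function describeEntity(entityUri: string): Promise<unknown> {
  const query = `
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract ?thumbnail WHERE {
      <${entityUri}> dbo:abstract ?abstract .
      OPTIONAL { <${entityUri}> dbo:thumbnail ?thumbnail }
      FILTER (lang(?abstract) = "en")
    } LIMIT 1`;
  const response = await fetch(
    "https://dbpedia.org/sparql?query=" + encodeURIComponent(query),
    { headers: { Accept: "application/sparql-results+json" } }
  );
  return response.json();
}

// Example: describeEntity("http://dbpedia.org/resource/Rome");
```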

Secondly, by replacing the proprietary editor with the Hallo Editor, adding annotation capabilities through VIE (annotateJS).

Polymedia Site Publishing – HALLO Editor replaced the proprietary one, enabling annotation
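For reference, a rough sketch of how the Hallo Editor can be wired to VIE/annotateJS against a Stanbol instance is shown below; the option names follow the public Hallo/VIE examples, while the URL and the exact configuration used in our prototype are assumptions.

```typescript
// Rough sketch only: wiring the Hallo Editor to VIE/annotateJS.
declare const jQuery: any; // Hallo is distributed as a jQuery UI widget
declare const VIE: any;    // VIE is loaded globally from vie.js

const vie = new VIE();
// Point VIE at the Stanbol instance used for suggesting annotations
// (the URL below is an assumption, not the prototype's actual endpoint).
vie.use(new vie.StanbolService({ url: "http://localhost:8080", proxyDisabled: true }));

// Turn the article body into an editable area with annotation support.
jQuery("#article-body").hallo({
  editable: true,
  plugins: {
    halloformat: {},              // basic formatting toolbar
    halloannotate: { vie: vie },  // annotateJS-based entity annotation
  },
});
```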


At the current stage, this use case leverages the following components developed in IKS:

  • Stanbol Enhancer
  • Linked Open Data functionalities
  • Hallo Editor
  • VIE and annotateJS

In 2011 Polymedia was acquired by KIT digital and, while the same resources and product assets were maintained, the core business shifted more towards the multimedia side. For this reason, and in order to update the company’s contribution to IKS in terms of expertise and business validation, a new demonstrator was designed and developed: the Multimedia Vertical Use Case. The use case workflow foresees two actors: the publisher, who uses the Polymedia/KIT Cloud Video Editor to prepare the chosen video by adding tags (e.g. the name of the actor appearing at a certain time in the video stream), and the user, who consumes the video through a semantic player capable of reproducing the multimedia content while interpreting the tags in real time. The combination of these features is used to present pictures or articles linked to the current scene. This demonstrator leverages the Semantic Video Annotation Services (Editor + Player) and the VIE Image widget (Player), developed in IKS.

The early prototype presented during the second review meeting, however, was quite rough, in particular concerning the Semantic Player UI. At that time the main goal was to present the idea behind the demonstrator as well as the typical workflow from end to end; however, we planned to improve the front end before using it in the 2012 industrial roadshow. Hence, Polymedia worked on a brand new UI for the Semantic Player. The player improved in many areas, giving a more mature impression for showcasing activities.
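To make the editing/playback workflow a bit more concrete, the following is a hypothetical sketch of what a time-based semantic tag produced in the Cloud Video Editor could look like; the field names are illustrative and do not reflect the actual Polymedia/KIT data model.

```typescript
// Hypothetical shape of a time-based semantic tag as produced by the publisher
// in the Cloud Video Editor; field names are illustrative only.
interface SemanticVideoTag {
  start: number;        // seconds from the beginning of the video
  end: number;          // seconds; the tag is "active" inside [start, end]
  label: string;        // human-readable label, e.g. the actor's name
  entityUri?: string;   // optional LOD reference, e.g. a DBpedia resource
  widget: "vie-image" | "article" | "custom"; // which player widget it triggers
}

// Example: the publisher marks an actor appearing between 01:10 and 01:35.
const exampleTag: SemanticVideoTag = {
  start: 70,
  end: 95,
  label: "Actor name",
  entityUri: "http://dbpedia.org/resource/Some_Actor", // placeholder URI
  widget: "vie-image",
};
```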

Semantic Player for the Industrial Roadshow (2012) – Landing page

A landing page has also been added in order to provide the user with the list of published videos, as well as the possibility to browse them by category. Clicking on a video plays it back with the Semantic Player.

Semantic Player for the Industrial Roadshow (2012)

The video is now played back in full screen. The timeline is located at the bottom of the frame, as a semi-transparent overlay, and can be navigated in real time by simply dragging the cursor from left to right. The user can play/pause the video by pressing the related button on the timeline bar.
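Assuming the player is built around an HTML5 video element (an assumption for the purpose of this sketch, with made-up element ids), the timeline behaviour described above boils down to something like:

```typescript
// Sketch of the timeline overlay: seek on drag, keep the cursor in sync,
// and toggle playback from the button on the timeline bar.
const video = document.querySelector<HTMLVideoElement>("#player")!;
const timeline = document.querySelector<HTMLInputElement>("#timeline")!; // range input 0..100
const playPause = document.querySelector<HTMLButtonElement>("#play-pause")!;

// Dragging the cursor seeks the video in real time.
timeline.addEventListener("input", () => {
  video.currentTime = (Number(timeline.value) / 100) * video.duration;
});

// Keep the cursor in sync while the video plays.
video.addEventListener("timeupdate", () => {
  timeline.value = String((video.currentTime / video.duration) * 100);
});

// The play/pause button on the timeline bar toggles playback.
playPause.addEventListener("click", () => {
  if (video.paused) {
    video.play();
  } else {
    video.pause();
  }
});
```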

On the left side of the frame, again in overlay, a semi-transparent grey box lists the widgets associated with the current video and triggered by the annotations performed by the publisher during the editing phase. The user can dismiss them, if not interested, by simply clicking on the red “X” icon. Among them are the VIE widgets, integrated in order to compare the results with a purely syntactic search such as the one provided by Flickr through its public APIs.

As already mentioned, during playback these widgets are triggered according to the semantic annotations. The trigger is unobtrusive, so the activated widget simply blinks as a notification. Clicking on it renders a box sharing the same color as the blinking widget and presenting the widget’s output. The picture below shows the semantic connection between the annotated video, the pictures retrieved by the VIE Picture widget and the entities behind them.
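A minimal sketch of this unobtrusive triggering follows, assuming each tag carries a start/end time and the id of the widget it activates; the names and CSS classes are illustrative, not the prototype’s actual widget API.

```typescript
// Sketch: on every timeupdate, check which tags cover the current position and
// make the matching widgets blink; clicking a widget toggles its output box.
type Tag = { start: number; end: number; widgetId: string };

function bindTriggers(video: HTMLVideoElement, tags: Tag[]): void {
  const active = new Set<string>();

  video.addEventListener("timeupdate", () => {
    const t = video.currentTime;
    for (const tag of tags) {
      const inside = t >= tag.start && t <= tag.end;
      const el = document.getElementById(tag.widgetId);
      if (!el) continue;
      if (inside && !active.has(tag.widgetId)) {
        active.add(tag.widgetId);
        el.classList.add("blinking");      // unobtrusive notification
      } else if (!inside && active.has(tag.widgetId)) {
        active.delete(tag.widgetId);
        el.classList.remove("blinking");
      }
    }
  });

  // Clicking a blinking widget opens the color-matched output box.
  for (const tag of tags) {
    document.getElementById(tag.widgetId)?.addEventListener("click", () => {
      document.getElementById(`${tag.widgetId}-box`)?.classList.toggle("open");
    });
  }
}
```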

At the current stage, these prototypes have already been successfully presented at a number of industrial events, following the roadshow plan.

Future developments foresee, subject to evaluation, further improvements to the Editor as well, by integrating the Stanbol Enhancer with the final goal of performing the video tagging semi-automatically during the ingestion phase.
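Assuming a transcript or subtitle track is available at ingestion time, such an integration could look roughly like the sketch below: each cue is posted to the Stanbol Enhancer and the detected entities become draft time-based tags for the publisher to review. The endpoint URL and data shapes are assumptions, not a committed design.

```typescript
// Hypothetical ingestion step: enhance each subtitle cue and keep the resulting
// enhancement graph next to the cue's time range. Entity annotations in the
// graph (fise:EntityAnnotation resources with fise:entity-reference links)
// would then be turned into draft time-based tags for the publisher to review.
type Cue = { start: number; end: number; text: string };

async function suggestTags(
  cues: Cue[]
): Promise<Array<{ cue: Cue; enhancements: unknown }>> {
  const suggestions: Array<{ cue: Cue; enhancements: unknown }> = [];
  for (const cue of cues) {
    const res = await fetch("http://localhost:8080/enhancer", {
      method: "POST",
      headers: { "Content-Type": "text/plain", Accept: "application/json" },
      body: cue.text,
    });
    suggestions.push({ cue, enhancements: await res.json() });
  }
  return suggestions;
}
```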
