Objectives

Demonstrate the power of the EFIM as the backend storage for a video tagging application. We will show this with the E-player, which brings together all heterogeneous information about a video series in a unified way.

Demonstration

[Screen shot (1): the EFIM player]

To see the demonstration, you need a SMIL 2.0 player; please download RealPlayer from here. When you connect to the test server, you will see three regions: the top-left one plays the video, the top-right view shows the scene and episode lists, and the bottom view shows related scenes and the search view. The top-right and bottom views should appear automatically on load. If they do not appear, try stopping and playing the video again.

In the top-right view, you can select scenes, which are small story pieces of an episode. To change the episode, select the Episodes tab and choose one of the 25 episodes. The top-right view gives you instant access to the media at the scene level.

In the bottom view, the related-scene information changes automatically as the story flows. Whenever the playing scene changes, the view displays the scenes most related to it, chosen automatically from the 342 scenes in the 25 episodes. The relation is computed automatically from three aspects: (1) similar words, (2) similar sentences, and (3) same speakers. See the screen shot on the left showing related scenes.
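As a rough illustration of how such a relation could be computed, the sketch below combines the three aspects with equal weights. The helper functions and the weighting are assumptions for illustration, not the actual EFIM scoring.

```typescript
// A scene with the features used by the three relation aspects.
interface Scene {
  id: number;
  episode: number;
  words: Set<string>;    // bag of script words
  sentences: string[];   // script sentences
  speakers: Set<string>; // speakers appearing in the scene
}

// Aspect (1): similar words, here as Jaccard similarity of word sets.
function wordSimilarity(a: Scene, b: Scene): number {
  const shared = [...a.words].filter(w => b.words.has(w)).length;
  const union = new Set([...a.words, ...b.words]).size;
  return union === 0 ? 0 : shared / union;
}

// Aspect (2): similar sentences, here as the fraction of shared sentences.
function sentenceSimilarity(a: Scene, b: Scene): number {
  const setB = new Set(b.sentences);
  const shared = a.sentences.filter(s => setB.has(s)).length;
  const longer = Math.max(a.sentences.length, b.sentences.length);
  return longer === 0 ? 0 : shared / longer;
}

// Aspect (3): same speakers, as Jaccard similarity of speaker sets.
function speakerSimilarity(a: Scene, b: Scene): number {
  const shared = [...a.speakers].filter(s => b.speakers.has(s)).length;
  const union = new Set([...a.speakers, ...b.speakers]).size;
  return union === 0 ? 0 : shared / union;
}

// Combine the three aspects (equal weights assumed here).
function relationScore(a: Scene, b: Scene): number {
  return wordSimilarity(a, b) + sentenceSimilarity(a, b) + speakerSimilarity(a, b);
}

// Rank the other scenes (341 of the 342) and keep the top k for display.
function mostRelated(current: Scene, all: Scene[], k = 5): Scene[] {
  return all
    .filter(s => s.id !== current.id)
    .sort((x, y) => relationScore(current, y) - relationScore(current, x))
    .slice(0, k);
}
```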

A user can also search for a specific piece of information using the Search view in the bottom window.

The Search tab at the top of the bottom view lets a user look over events and their related events, and provides a way to navigate through all connected events. First, try any keyword (not only a word but also a number, or a date in xxxx-xx-xx format); this returns the associated seed events. Then click an event you are interested in, and all related events, together with their relationships, are returned on the right side. Keep selecting events on the right to trace the relationships and see every event they are associated with.
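The following is a minimal sketch of this search-and-trace interaction. The two AJAX endpoints (/search for keyword-to-seed-events, /related for an event's related events) and the field names are hypothetical; the real server interface may differ.

```typescript
// A seed event returned by a keyword search.
interface EventHit {
  id: string;
  summary: string;
}

// A related event carries the label of its relationship to the
// selected event, e.g. "same speaker" or "similar sentence".
interface RelatedEvent extends EventHit {
  relationship: string;
}

// Step 1: a keyword (a word, a number, or a date like "2007-11-18")
// returns the associated seed events.
async function searchSeedEvents(keyword: string): Promise<EventHit[]> {
  const res = await fetch(`/search?q=${encodeURIComponent(keyword)}`);
  return res.json();
}

// Step 2: selecting an event returns all related events with their
// relationships, shown on the right side of the view.
async function relatedEvents(eventId: string): Promise<RelatedEvent[]> {
  const res = await fetch(`/related?id=${encodeURIComponent(eventId)}`);
  return res.json();
}

// Tracing: keep selecting events on the right to walk the relationship
// graph. Here the first result is picked; in the UI the user chooses.
async function trace(keyword: string, hops: number): Promise<void> {
  let current: EventHit | undefined = (await searchSeedEvents(keyword))[0];
  for (let i = 0; i < hops && current; i++) {
    const related = await relatedEvents(current.id);
    related.forEach(e => console.log(`${e.relationship}: ${e.summary}`));
    current = related[0];
  }
}
```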

Functions to deliver

  • Goal: Prove that the EFIM system is useful for multimedia information retrieval.
  • Experiment: Show the demonstration of navigating through sitcom episodes in the steps below.
    • Step 1. Once all episode-related materials are collected (scripts, captions, faces, and other external information sources such as wikis on Friends), limited to materials that are easy to access, put them all into the EFIM database. The information model for each category (each of the materials above in its own disparate model) will be independent, heterogeneous, and loosely coupled to the others. This means that we do not need to design the whole complex class diagram first; rather, the whole picture will be the instance view over the currently connected categories. Also, all events in the EFIM can be queried without prior knowledge of their internal model schema (see the first sketch after this list).
    • Step 2. Make the EFIM web-enabled through AJAX (Asynchronous JavaScript and XML) and develop the SMIL (Synchronized Multimedia Integration Language) interface. Through this interface, a user watches the episode and can click anywhere in the video, at any moment of interest, to query related scenes from other episodes.
    • Step 3. The difference we want to make is that the types of relationships we can provide to the user are expanded along multiple aspects to enhance the user experience. Assuming a user clicks the video at some point, so that a timestamp and a spatial mouse location (x, y) are given, we can then provide (see the second sketch after this list):
      1. Related events, per user request, at multiple granularities: speaker, speaker-group, and script-word queries with semantic relationship expansion.
      2. Sorting by event properties: popularity (its reference count), time, associated data type, and relationships with other events.
      3. Brief previews of related events, so that a user can go into details selectively. Two types of summaries will be given in the demonstration. One is a text summary that shows only the subject, verb, and object words of the script. The other is a thumbnail of the video frame ranked highest in relevance by the associated event properties.
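The first sketch below illustrates the loosely coupled event records of Step 1. The field names (category, timestamp, payload) are assumptions: only a thin shared envelope is fixed, each category keeps its own opaque model, and events are queried without prior knowledge of the internal schema.

```typescript
// The source categories collected in Step 1.
type Category = "script" | "caption" | "face" | "wiki";

// Shared envelope: independent, heterogeneous category models hang off
// an opaque payload, so no global class diagram is designed up front.
interface EfimEvent {
  id: string;
  category: Category;
  episode: number;
  timestamp: number;                // seconds into the episode
  payload: Record<string, unknown>; // category-specific, opaque model
}

// Schema-free query: predicates match on the shared envelope only, so
// new categories can be connected without changing existing queries.
function query(
  events: EfimEvent[],
  predicate: (e: EfimEvent) => boolean
): EfimEvent[] {
  return events.filter(predicate);
}

const allEvents: EfimEvent[] = []; // would be loaded from the EFIM database

// Example: all events of any category around one moment in episode 7.
const hits = query(allEvents, e =>
  e.episode === 7 && Math.abs(e.timestamp - 312) < 5
);
```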
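The second sketch illustrates the click-to-query flow of Steps 2 and 3. The /events endpoint and the field names are hypothetical: a click supplies a timestamp and a mouse location (x, y), and the response is sorted by event properties, each event carrying one of the two preview types.

```typescript
// Query granularities from item 1 of Step 3.
type Granularity = "speaker" | "speakerGroup" | "scriptWords";

// A related event with the sortable properties from item 2 and
// the brief previews from item 3 (S-V-O text summary or thumbnail).
interface RelatedEventPreview {
  id: string;
  referenceCount: number; // popularity: its reference count
  time: number;
  dataType: string;       // associated data type
  textSummary?: { subject: string; verb: string; object: string };
  thumbnailUrl?: string;  // frame ranked highest in relevance
}

// A click in the video yields (timestamp, x, y); query related events
// via AJAX and sort them by event properties.
async function queryRelated(
  timestamp: number,
  x: number,
  y: number,
  granularity: Granularity
): Promise<RelatedEventPreview[]> {
  const params = new URLSearchParams({
    t: String(timestamp),
    x: String(x),
    y: String(y),
    g: granularity,
  });
  const res = await fetch(`/events?${params}`);
  const events: RelatedEventPreview[] = await res.json();
  // One possible ordering: popularity first, then time.
  return events.sort(
    (a, b) => b.referenceCount - a.referenceCount || a.time - b.time
  );
}
```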

Screen shots

To-do list

Started on Nov. 18, 2007.

  • Link thumbnail images to their sources. -- done (11/20/2007)
    • Server side: read the current timestamp and, per image request, retrieve the related thumbnail image source address. -- done (11/21/2007)