EMS stands for Experiential Meeting System, a project supported by HP Labs for three years (2002-2004). A meeting system requires integrating heterogeneous media, including video, audio, and text, along a timeline so that related materials can be searched as they change and progress spatio-temporally. A meeting itself is a rather unstructured event, typically a collection of sub-events associated in an unstructured manner. The goal of the project is to develop the theoretical background for designing appropriate storage and query methods for meeting-related data, providing users with an enhanced meeting experience.
One of the greatest challenges in enterprises today is the lack of dynamic and ongoing information about individuals’ activities, interests, and expertise. Availability of such “personal chronicles” can provide rich benefits at both the individual and the enterprise level. For example, personal chronicles can help individuals retrieve and review their activities and interactions far more effectively, while at the enterprise level they can be data-mined to identify groups of common and complementary interests and skills, or to identify implicit work processes that are commonplace in every enterprise. Today’s existing tools are very limited in their support for dynamic capture of ongoing activities, in the organization and presentation of captured information, and in supporting rich annotation, search, retrieval, and publication of this information.
We are developing experiential meeting systems that allow people to be tele-present in a remote meeting and to review the proceedings of one or several meetings using all the data recorded in a meeting. We treat this as a problem of management of, and experiential access to, all multimedia data acquired in a meeting. The data includes video, audio, presentations, text material, databases and websites related to the people and discussions in the meeting, and any other data or information related to the events in the meeting. For experiential access to live and archived meetings, we propose detecting and storing events at three levels: domain, elemental, and data. We address issues in organizing information at the domain level and in using current signal-processing algorithms for detecting events at the data level. We show that, to provide better specifications for video- and audio-processing algorithms, it is essential to identify what is expected from them and to define the environment and expectations very clearly. We also believe that, while data-processing algorithms for automatic event detection are being developed, it may be essential to build tagging environments that allow rapid semi-automatic tagging of data at all three levels, so that practical meeting systems can be implemented. A tagging environment is needed not only to enable the development of meeting systems in the near future, but also to define environments for automatic detection of events in multifarious data. In this paper we present our ideas and experience with different techniques and outline our future directions.
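The three-level event organization and the semi-automatic tagging environment described above can be sketched as a simple data model. This is a minimal illustrative sketch, not the project's actual design: the names `Level`, `Event`, `EventStore`, and the example tags are all assumptions introduced here for clarity.

```python
# Illustrative sketch of three-level meeting events with semi-automatic tagging.
# All names and levels below are assumptions for illustration only.
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    DATA = "data"            # signal-level events, e.g. "speech detected"
    ELEMENTAL = "elemental"  # media-independent events, e.g. "speaker change"
    DOMAIN = "domain"        # meeting-semantics events, e.g. "agenda item opened"

@dataclass
class Event:
    level: Level
    start: float             # seconds from meeting start
    end: float
    description: str
    tags: list = field(default_factory=list)

class EventStore:
    """Stores events from all three levels over the meeting timeline."""

    def __init__(self):
        self.events = []

    def add(self, event):
        self.events.append(event)

    def tag(self, event, label):
        # Semi-automatic tagging: a human or a classifier attaches a label.
        event.tags.append(label)

    def query(self, level=None, overlap=None):
        """Retrieve events, optionally filtered by level and/or time overlap."""
        result = self.events
        if level is not None:
            result = [e for e in result if e.level == level]
        if overlap is not None:
            lo, hi = overlap
            result = [e for e in result if e.start < hi and e.end > lo]
        return result
```

A query such as `store.query(level=Level.DOMAIN, overlap=(600, 900))` would then retrieve domain-level events in a given time window, supporting the kind of spatio-temporal search over heterogeneous media that the project targets.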