
Video Abstraction Techniques for a Digital Library

Author(s): Hang-Bong Kang (Catholic University of Korea, Korea)
Copyright: 2002
Pages: 13
Source title: Distributed Multimedia Databases: Techniques and Applications
Source Author(s)/Editor(s): Timothy K. Shih (Tamkang University, Taiwan)
DOI: 10.4018/978-1-930708-29-7.ch008


Abstract

An abstraction of a long video often helps a user decide whether the video is worth viewing. In particular, video abstraction provides users of digital libraries with fast, safe, and reliable access to video data. Two approaches to video abstraction are possible: summary sequences and highlights. Summary sequences suit documentaries because they give an overview of the contents of the entire video, whereas highlights suit movie trailers because they contain only the most interesting video segments. A video abstract can be generated in three steps: analyzing the video, selecting video clips, and synthesizing the output. In the analysis step, salient features, structures, or patterns in the visual, audio, and textual information are detected. In the selection step, meaningful clips are chosen based on the features detected in the previous step. In the synthesis step, the selected clips are composed into the final form of the abstract. In this chapter, we discuss various video abstraction techniques for digital libraries. We also discuss a context-based video abstraction method that computes contextual information for each video shot. This method is useful for generating highlights because the contextual information of a shot reflects the semantics of the video data.
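The three-step pipeline described in the abstract can be sketched in code. The following is a minimal, illustrative Python sketch, not the chapter's actual method: frames are modeled as simple intensity histograms, shot boundaries are found by thresholding consecutive-frame histogram differences, and the per-shot `scores` dictionary stands in for the "contextual information" the chapter computes per shot. All function names, thresholds, and scores here are hypothetical.

```python
def detect_shots(frames, threshold=0.5):
    """Step 1 (analysis): split a frame sequence into shots wherever the
    histogram difference between consecutive frames exceeds a threshold."""
    shots, start = [], 0
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i]))
        if diff > threshold:
            shots.append((start, i))  # shot is the half-open range [start, i)
            start = i
    shots.append((start, len(frames)))
    return shots

def select_clips(shots, scores, k=2):
    """Step 2 (selection): keep the k highest-scoring shots, where the score
    stands in for per-shot contextual information, then restore temporal order."""
    ranked = sorted(shots, key=lambda s: scores[s], reverse=True)
    return sorted(ranked[:k])

def synthesize(frames, clips):
    """Step 3 (synthesis): concatenate the selected clips into the abstract."""
    return [f for start, end in clips for f in frames[start:end]]

# Toy example: nine "frames" as two-bin histograms forming three shots.
frames = [(1.0, 0.0)] * 3 + [(0.0, 1.0)] * 3 + [(0.5, 0.5)] * 3
shots = detect_shots(frames)            # [(0, 3), (3, 6), (6, 9)]
scores = {(0, 3): 0.2, (3, 6): 0.9, (6, 9): 0.5}
clips = select_clips(shots, scores)     # [(3, 6), (6, 9)]
abstract = synthesize(frames, clips)    # 6 frames from the two chosen shots
```

A highlight generator would differ from a summary generator mainly in step 2: a summary spreads selections across the whole timeline, while a highlight keeps only the top-scoring shots, as above.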
