A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis
Author(s): Benjamin Ghansah (University of Education, Winneba, Ghana), Ben-Bright Benuwa (University of Education, Winneba, Ghana) and Augustine Monney (University of Education, Winneba, Ghana)
Copyright: 2021
Volume: 17
Issue: 1
Pages: 24
Source title: International Journal of Intelligent Information Technologies (IJIIT)
Editor(s)-in-Chief: Vijayan Sugumaran (Oakland University, Rochester, USA)
DOI: 10.4018/IJIIT.2021010105
Abstract
Video semantic concept analysis has received much research attention in the area of human-computer interaction in recent times. Classification methods based on the reconstruction error of sparse coefficients do not consider discrimination between video samples, which is essential for classification performance. To further improve the accuracy of video semantic classification, this paper proposes a video semantic concept classification approach based on a sparse coefficient vector (SCV) and a kernel-based weighted KNN (KWKNN). In the proposed approach, a loss function that integrates reconstruction error and discrimination is put forward. The authors calculate the loss function value between the test sample and the training samples of each class according to the loss function criterion, and then vote on the statistical results. Finally, the vote results are weighted by the kernel weight coefficient of each class to determine the video semantic concept. The experimental results show that this method effectively improves classification accuracy for video semantic analysis and shortens the time required for semantic classification compared with several baseline approaches.
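The voting scheme described in the abstract — score each training sample with a loss, keep the K nearest, and weight each neighbour's vote by a kernel coefficient — can be sketched as follows. This is a minimal illustration, not the authors' implementation: a plain squared-Euclidean distance stands in for the paper's combined reconstruction/discrimination loss, a Gaussian kernel is assumed for the weights, and all function and parameter names are hypothetical.

```python
import numpy as np

def kernel_weighted_knn(X_train, y_train, x_test, k=5, gamma=1.0):
    # Loss between the test sample and every training sample.
    # Squared Euclidean distance is used here as a stand-in for the
    # paper's loss integrating reconstruction error and discrimination.
    losses = np.sum((X_train - x_test) ** 2, axis=1)
    # Indices of the k training samples with the smallest loss.
    nn = np.argsort(losses)[:k]
    # Gaussian kernel weight per neighbour: closer samples vote more.
    weights = np.exp(-gamma * losses[nn])
    # Accumulate kernel-weighted votes per class; return the winner.
    votes = {}
    for idx, w in zip(nn, weights):
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w
    return max(votes, key=votes.get)

# Toy data: two well-separated classes.
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(kernel_weighted_knn(X, y, np.array([0.05, 0.05]), k=3))  # → 0
print(kernel_weighted_knn(X, y, np.array([5.0, 5.1]), k=3))    # → 1
```

With unit weights this reduces to ordinary majority-vote KNN; the kernel term lets near neighbours outvote a numerically larger but more distant group, which is the intuition behind the weighted vote modification described above.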