
Ontological Random Forests for Image Classification

Author(s): Ning Xu (Beckman Institute, University of Illinois at Urbana-Champaign, USA), Jiangping Wang (Beckman Institute, University of Illinois at Urbana-Champaign, USA), Guojun Qi (Beckman Institute, University of Illinois at Urbana-Champaign, USA), Thomas S. Huang (University of Illinois at Urbana-Champaign, USA) and Weiyao Lin (Shanghai Jiao Tong University, China)
Copyright: 2018
Pages: 16
Source title: Computer Vision: Concepts, Methodologies, Tools, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-5225-5204-8.ch031


Abstract

Previous image classification approaches mostly neglect semantics, which leads to two major limitations. First, categories are treated as independent even though they overlap semantically; for example, a "sedan" is a specific kind of "car", so it is unreasonable to train a classifier to distinguish between "sedan" and "car". Second, the same image feature representation is used for classifying every category, whereas the human perception system is believed to rely on different features for different objects. The authors leverage semantic ontologies to address both problems. They propose an ontological random forest algorithm in which the splitting of decision trees is determined by semantic relations among categories. Hierarchical features are then learned automatically through multiple-instance learning to capture visual dissimilarities at different concept levels. The approach is evaluated on two image classification datasets, and experimental results demonstrate that it not only outperforms state-of-the-art methods but also identifies semantically meaningful visual features.
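
To make the coarse-to-fine idea concrete, the following is a minimal sketch, not the authors' implementation: the ontology, category names, feature vectors, and class OntologicalHierarchyClassifier are all hypothetical toy elements. It illustrates how a two-level semantic ontology can guide hierarchical classification with random forests: a coarse forest first separates parent concepts, and a separate forest per parent then separates the fine-grained categories, so that "sedan" is never pitted directly against "car".

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical two-level ontology: fine-grained label -> parent concept.
ONTOLOGY = {"sedan": "car", "suv": "car", "sparrow": "bird", "eagle": "bird"}

class OntologicalHierarchyClassifier:
    """Coarse-to-fine classifier whose structure follows the ontology."""

    def __init__(self, ontology, **rf_kwargs):
        self.ontology = ontology
        self.coarse = RandomForestClassifier(**rf_kwargs)  # parent-level split
        self.fine = {}                                      # one forest per parent

    def fit(self, X, y_fine):
        # Map fine labels up to their parent concepts for the coarse split.
        y_coarse = np.array([self.ontology[label] for label in y_fine])
        self.coarse.fit(X, y_coarse)
        # Train a fine-grained forest only on the samples of each parent concept.
        for parent in np.unique(y_coarse):
            mask = y_coarse == parent
            clf = RandomForestClassifier(**self.coarse.get_params())
            clf.fit(X[mask], y_fine[mask])
            self.fine[parent] = clf
        return self

    def predict(self, X):
        # Route each sample through its predicted parent to a fine classifier.
        parents = self.coarse.predict(X)
        return np.array([
            self.fine[p].predict(x.reshape(1, -1))[0]
            for x, p in zip(X, parents)
        ])

# Toy usage: random vectors stand in for learned image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))
y = np.array(["sedan", "suv", "sparrow", "eagle"] * 20)
model = OntologicalHierarchyClassifier(ONTOLOGY, n_estimators=50, random_state=0)
model.fit(X, y)
print(model.predict(X[:4]))

In the chapter itself the hierarchy additionally drives feature learning, with multiple-instance learning producing different features at each concept level; the sketch above only shows the structural idea of ontology-guided splitting.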
