IRMA-International.org: Creator of Knowledge
Information Resources Management Association
Advancing the Concepts & Practices of Information Resources Management in Modern Organizations

3D Modeling for Environmental Informatics: Parametric Manifold of an Object under Different Viewing Directions

Author(s): Xiaozheng Zhang (Ladbrokes, Australia) and Yongsheng Gao (Griffith University, Australia)
Copyright: 2017
Pages: 20
Source title: Decision Management: Concepts, Methodologies, Tools, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-5225-1837-2.ch047


Abstract

3D modeling plays an important role in computer vision and image processing. It provides a convenient tool set for many environmental informatics tasks, such as taxonomy and species identification. This chapter discusses a novel way of building 3D models of objects from their varying 2D views. The appearance of a 3D object depends on both the viewing direction and the illumination conditions. What is the set of images of an object under all viewing directions? In this chapter, a novel image representation is proposed that transforms any n-pixel image of a 3D object into a vector in a 2n-dimensional pose space. In this pose space, it is proven that the transformed images of a 3D object under all viewing directions form a parametric manifold in a 6-dimensional linear subspace. In particular, for in-depth rotations about a single axis, this manifold is an ellipse. Furthermore, it is shown that this parametric pose manifold of a convex object can be estimated from a few images in different poses and used to predict the object's appearance under unseen viewing directions. These results immediately suggest a number of approaches to object recognition, scene detection, and 3D modeling that are applicable to environmental informatics. Experiments on both synthetic data and real images are reported, demonstrating the validity of the proposed representation.
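The single-axis ellipse claim has a simple geometric core: under rotation about one axis by angle t, every point coordinate is a linear combination of cos t, sin t, and a constant, so the stacked coordinate vector traces A·cos t + B·sin t + C, which is an ellipse in a 2-dimensional affine subspace. The following is a minimal numerical sketch of that fact on toy 3D points (not the authors' 2n-dimensional image representation or their implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
points = rng.normal(size=(n, 3))  # a toy "object": n surface points in 3-D

def rotate_z(theta):
    """Rotation matrix for an in-depth rotation about a single (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Flatten the rotated coordinates into one long vector per viewing angle.
thetas = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
V = np.stack([(points @ rotate_z(t).T).ravel() for t in thetas])

# Each row has the form A*cos(t) + B*sin(t) + C for fixed vectors A, B, C,
# so after subtracting the mean the trajectory spans at most a 2-D plane:
# the pose manifold for a single rotation axis is an ellipse.
centered = V - V.mean(axis=0)
rank = np.linalg.matrix_rank(centered, tol=1e-8)
print(rank)  # 2
```

The rank-2 result confirms the planar (elliptical) structure; the chapter's contribution is establishing the analogous low-dimensional parametric structure for the transformed images themselves, across all viewing directions.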
