3D information acquisition
3D omnidirectional holoscopic imaging system
automatic feature-match selection algorithm
depth map estimation
feature block selection
full parallax 3D model
natural continuous parallax 3D objects
scene depth extraction
single aperture camera
volume spatial optical model
3D omni-directional Holoscopic Image
auto feature thresholding
Abstract
3D Holoscopic Imaging (3DHI) is a promising technique for viewing natural continuous-parallax 3D objects within a wide viewing zone, based on the principle of the "Fly's eye". The 3D content is captured in real time with a single-aperture camera and represents a true volume spatial optical model of the object scene. The content can be viewed by multiple viewers independently of their position, without 3D eyewear. Because the 3DHI technique requires only a single recording to acquire the 3D information, the compactness of its depth measurement has attracted attention as a novel depth extraction technique. This paper presents a new correspondence and matching technique based on a novel automatic Feature-Match Selection (FMS) algorithm, whose aim is to estimate and extract an accurate full-parallax 3D model from a 3D Omni-directional Holoscopic Imaging (3DOHI) system. The novelty of the paper rests on two contributions: feature-block selection and an automatic optimization of the correspondence process. Solutions are provided for three main problems related to depth map estimation from 3DHI: uncertainty and region homogeneity at image locations, dissimilar displacements within the matching block around object borders, and computational complexity.
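The abstract describes depth extraction by matching feature blocks between viewpoint images. As a rough illustration of the underlying principle only (not the paper's FMS algorithm), the following sketch estimates a per-block disparity map between two neighbouring viewpoint images using sum-of-absolute-differences block matching; all function and parameter names are hypothetical:

```python
import numpy as np

def block_match_disparity(left, right, block=8, max_disp=16):
    """Estimate a per-block horizontal disparity map between two
    neighbouring viewpoint images via SAD block matching.
    Illustrative sketch only, not the paper's FMS algorithm."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(float)
            best_cost, best_d = np.inf, 0
            # Search candidate displacements within the image bounds.
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```

In a real holoscopic pipeline, disparities recovered this way would be converted to scene depth using the lens-array geometry; the paper's contribution lies in selecting which feature blocks to match and automating the matching thresholds, which this generic sketch omits.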
Citation
Alazawi, E., Aggoun, A., Abbod, M., Swash, M.R., Abdul Fatah, O. and Fernandez, J. (2013) 'Scene depth extraction from Holoscopic Imaging technology', 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Aberdeen, 7-8 October.
Type
Conference papers, meetings and proceedings
Showing items related by title, author, creator and subject.
Efficient edge, motion and depth-range adaptive processing for enhancement of multi-view depth map sequences
Ekmekcioglu, Erhan; Velisavljević, Vladan; Worrall, Stewart T. (IEEE, 2009)
The authors present a novel and efficient multi-view depth map enhancement method proposed as a post-processing of initially estimated depth maps. The proposed method is based on edge, motion and scene depth-range adaptive median filtering and allows for an improved quality of virtual view synthesis. To enforce the spatial, temporal and inter-view coherence in the multi-view depth maps, the median filtering is applied to 4-dimensional windows that consist of the spatially neighboring depth map values taken at different viewpoints and time instants. A fast iterative block segmentation approach is adopted to adaptively shrink these windows in the presence of edges and motion, preserving sharpness, improving rendering realism and increasing compression efficiency. We show that our enhancement method leads to a reduction of the coding bit-rate required for representation of the depth maps and also leads to a gain in the quality of synthesized views at arbitrary virtual viewpoints.
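The abstract above describes edge-adaptive median filtering of depth maps, where the filter window shrinks near discontinuities to preserve sharp edges. As a heavily simplified 2D analogue of that idea (the paper itself operates on 4-dimensional spatio-temporal-view windows), the following hypothetical sketch shrinks a median-filter window wherever the local depth range signals an edge:

```python
import numpy as np

def adaptive_median_depth(depth, edge_thresh=10.0):
    """Median-filter a depth map with a window that shrinks near
    depth discontinuities. Simplified 2D illustration only; the
    cited paper uses 4D spatio-temporal-view windows."""
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    pad = np.pad(depth.astype(float), 2, mode='edge')
    for y in range(h):
        for x in range(w):
            # Local depth range as a cheap discontinuity indicator.
            win3 = pad[y + 1:y + 4, x + 1:x + 4]
            is_edge = win3.max() - win3.min() > edge_thresh
            # Shrink the window at edges to keep them sharp.
            win = win3 if is_edge else pad[y:y + 5, x:x + 5]
            out[y, x] = np.median(win)
    return out
```

The design point is that a fixed large median window would blur depth edges, degrading synthesized virtual views; adapting the window size trades a little smoothing strength for edge preservation.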
Depth map compression for depth-image-based rendering
Cheung, Gene; Ortega, Antonio; Kim, Woo-Shik; Velisavljević, Vladan; Kubota, Akira (SpringerLink, 2013)
In this chapter, we discuss unique characteristics of depth maps, review recent depth map coding techniques, and describe how texture and depth map compression can be jointly optimized.