Edge and motion-adaptive median filtering for multi-view depth map enhancement
Issue Date
2009
Subjects
computational complexity
image colour analysis
image enhancement
free viewpoint video
multiview coding
Abstract
The authors present a novel multi-view depth map enhancement method, deployed as a post-processing step on initially estimated depth maps that are incoherent in the temporal and inter-view dimensions. The proposed method is based on edge and motion-adaptive median filtering and improves the quality of virtual view synthesis. To enforce spatial, temporal and inter-view coherence in the multi-view depth maps, the median filtering is applied to 4-dimensional windows that consist of spatially neighboring depth map values taken at different viewpoints and time instants. These windows have locally adaptive shapes in the presence of edges or motion, so that sharpness and realistic rendering are preserved. The authors show that the enhancement method reduces the coding bit rate required to represent the depth maps and improves the quality of views synthesized at an arbitrary virtual viewpoint, while adding only a small computational overhead.
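The filtering principle described in the abstract can be illustrated with a minimal sketch: a median is taken over a 4-dimensional (spatial, temporal, inter-view) window, and the window collapses to a purely spatial one near edges or motion. The array layout, thresholds, edge/motion detectors and adaptation rule below are illustrative assumptions, not the authors' exact formulation.

# Minimal NumPy sketch of edge/motion-adaptive 4-D median filtering of depth maps.
# All names and thresholds are assumptions for illustration only.
import numpy as np

def enhance_depth(depth, luma, edge_thr=30.0, motion_thr=10.0, radius=1):
    """depth, luma: arrays of shape (V, T, H, W) with V views and T frames.
    Returns an edge/motion-adaptively median-filtered copy of depth."""
    V, T, H, W = depth.shape
    out = depth.copy()
    for v in range(V):
        for t in range(T):
            # Edge map from the luminance frame (simple gradient magnitude).
            gy, gx = np.gradient(luma[v, t].astype(float))
            edges = np.hypot(gx, gy) > edge_thr
            # Motion map from the previous frame of the same view.
            if t > 0:
                motion = np.abs(luma[v, t].astype(float)
                                - luma[v, t - 1].astype(float)) > motion_thr
            else:
                motion = np.zeros((H, W), dtype=bool)
            for y in range(H):
                for x in range(W):
                    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                    if edges[y, x] or motion[y, x]:
                        # Near edges or motion: restrict the window to the
                        # current view and time instant to preserve sharpness.
                        window = depth[v, t, y0:y1, x0:x1]
                    else:
                        # Otherwise include neighbouring views and frames to
                        # enforce inter-view and temporal coherence.
                        v0, v1 = max(0, v - 1), min(V, v + 2)
                        t0, t1 = max(0, t - 1), min(T, t + 2)
                        window = depth[v0:v1, t0:t1, y0:y1, x0:x1]
                    out[v, t, y, x] = np.median(window)
    return out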
Citation
Ekmekcioglu, E., Velisavljevic, V. and Worrall, S. T. (2009) 'Edge and motion-adaptive median filtering for multi-view depth map enhancement', Picture Coding Symposium (PCS 2009), Chicago, IL, USA, 6-8 May. Chicago: IEEE, pp. 1-4.
Additional Links
https://ieeexplore.ieee.org/document/5167415
Type
Conference papers, meetings and proceedings
Language
en
ISBN
978-1-4244-4594-3
DOI
10.1109/PCS.2009.5167415
