Edge and motion-adaptive median filtering for multi-view depth map enhancement
image colour analysis
free viewpoint video
Abstract
We present a novel multi-view depth map enhancement method deployed as a post-processing step on initially estimated depth maps, which are incoherent in the temporal and inter-view dimensions. The proposed method is based on edge and motion-adaptive median filtering and improves the quality of virtual view synthesis. To enforce spatial, temporal and inter-view coherence in the multi-view depth maps, the median filtering is applied to 4-dimensional windows consisting of spatially neighbouring depth map values taken at different viewpoints and time instants. These windows adapt their shapes locally in the presence of edges or motion to preserve sharpness and realistic rendering. We show that our enhancement method reduces the coding bit-rate required to represent the depth maps and also improves the quality of views synthesized at an arbitrary virtual viewpoint. At the same time, the method incurs only a low additional computational cost.
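The core idea above — a median filter over a 4-D (view, time, y, x) window that shrinks near depth edges — can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact algorithm: the function name, the single `edge_thresh` parameter, and the simple rule of discarding window samples whose depth differs strongly from the centre are all assumptions made for clarity.

```python
import numpy as np

def adaptive_median_depth(depth_stack, v, t, edge_thresh=16.0, radius=1):
    """Hedged sketch of 4-D edge-adaptive median filtering of depth maps.

    depth_stack has shape (views, frames, H, W). For each pixel of the
    depth map at (view v, time t), a window spanning neighbouring views,
    frames and spatial positions is gathered; samples that differ from
    the centre depth by more than edge_thresh are excluded, so the
    median never averages across a depth discontinuity (a simplified
    stand-in for the paper's edge/motion adaptivity).
    """
    V, T, H, W = depth_stack.shape
    out = depth_stack[v, t].astype(np.float64).copy()
    for y in range(H):
        for x in range(W):
            centre = float(depth_stack[v, t, y, x])
            samples = []
            for vv in range(max(0, v - 1), min(V, v + 2)):
                for tt in range(max(0, t - 1), min(T, t + 2)):
                    for yy in range(max(0, y - radius), min(H, y + radius + 1)):
                        for xx in range(max(0, x - radius), min(W, x + radius + 1)):
                            s = float(depth_stack[vv, tt, yy, xx])
                            # Edge adaptivity: keep only samples on the same
                            # side of a depth edge as the centre pixel.
                            if abs(s - centre) <= edge_thresh:
                                samples.append(s)
            out[y, x] = np.median(samples)
    return out
```

On a toy depth stack, this removes small temporal/inter-view inconsistencies while leaving a sharp depth edge intact, which is the behaviour the abstract claims for the full method.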
Citation
Ekmekcioglu, E., Velisavljevic, V. and Worrall, S.T. (2009) 'Edge and motion-adaptive median filtering for multi-view depth map enhancement', Picture Coding Symposium (PCS 2009), Chicago, IL, USA, 6-8 May. Chicago: IEEE, pp. 1-4.
Type
Conference papers, meetings and proceedings