
dc.contributor.author: Ekmekcioglu, Erhan [en_GB]
dc.contributor.author: Velisavljević, Vladan [en_GB]
dc.contributor.author: Worrall, Stewart T. [en_GB]
dc.date.accessioned: 2013-05-22T11:17:05Z
dc.date.available: 2013-05-22T11:17:05Z
dc.date.issued: 2009
dc.identifier.citation: Ekmekcioglu, E., Velisavljevic, V. and Worrall, S.T. (2009) 'Edge and motion-adaptive median filtering for multi-view depth map enhancement', Picture Coding Symposium (PCS 2009), Chicago, IL, USA, 6-8 May. Chicago: IEEE, pp. 1-4. [en_GB]
dc.identifier.isbn: 978-1-4244-4594-3
dc.identifier.doi: 10.1109/PCS.2009.5167415
dc.identifier.uri: http://hdl.handle.net/10547/292571
dc.description.abstract: The authors present a novel multi-view depth map enhancement method, deployed as post-processing of initially estimated depth maps that are incoherent in the temporal and inter-view dimensions. The proposed method is based on edge- and motion-adaptive median filtering and improves the quality of virtual view synthesis. To enforce spatial, temporal and inter-view coherence in the multi-view depth maps, the median filtering is applied to 4-dimensional windows consisting of spatially neighbouring depth map values taken at different viewpoints and time instants. These windows have locally adaptive shapes in the presence of edges or motion to preserve sharpness and realistic rendering. We show that our enhancement method reduces the coding bit-rate required to represent the depth maps and also improves the quality of synthesized views at an arbitrary virtual viewpoint. At the same time, the method incurs low additional computational complexity. [en_GB]
dc.language.iso: en [en]
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC [en_GB]
dc.relation.url: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5167415 [en_GB]
dc.subject: computational complexity [en_GB]
dc.subject: image colour analysis [en_GB]
dc.subject: image enhancement [en_GB]
dc.subject: free viewpoint video [en_GB]
dc.subject: multiview coding [en_GB]
dc.title: Edge and motion-adaptive median filtering for multi-view depth map enhancement [en]
dc.type: Conference papers, meetings and proceedings [en]
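The abstract describes median filtering over 4-dimensional windows that span space, time and viewpoints, with window shapes adapted near edges and motion. The following Python/NumPy sketch is a rough illustration of that idea only, not the authors' implementation: the array layout (views, frames, height, width), the boolean edge and motion masks, and the simple rule that collapses the temporal and inter-view extent of the window are all assumptions made for the example.

```python
# Minimal illustrative sketch of 4-D, edge/motion-adaptive median filtering of
# multi-view depth video. NOT the authors' implementation: the array layout,
# the masks and the window-adaptation rule below are assumptions.
import numpy as np

def adaptive_median_4d(depth, edges, motion, spatial_r=2, temporal_r=1, view_r=1):
    """Median-filter each depth sample over a 4-D window spanning neighbouring
    views, frames and pixels. Near motion the temporal extent is collapsed,
    and near edges the inter-view extent is collapsed, so sharp boundaries
    are not smeared across frames or viewpoints."""
    V, T, H, W = depth.shape
    out = depth.copy()
    for v in range(V):
        for t in range(T):
            for y in range(H):
                for x in range(W):
                    tr = 0 if motion[v, t, y, x] else temporal_r   # shrink in time near motion
                    vr = 0 if edges[v, t, y, x] else view_r        # shrink across views near edges
                    win = depth[max(v - vr, 0):v + vr + 1,
                                max(t - tr, 0):t + tr + 1,
                                max(y - spatial_r, 0):y + spatial_r + 1,
                                max(x - spatial_r, 0):x + spatial_r + 1]
                    out[v, t, y, x] = np.median(win)
    return out

# Toy usage on random data (3 views, 4 frames, 32x32 depth maps)
depth = np.random.randint(0, 256, size=(3, 4, 32, 32)).astype(np.float32)
edges = np.zeros(depth.shape, dtype=bool)
motion = np.zeros(depth.shape, dtype=bool)
filtered = adaptive_median_4d(depth, edges, motion)
```

The design point the sketch tries to capture is that a large window enforces temporal and inter-view coherence in flat regions, while shrinking the window near edges or motion avoids blending depth values across object boundaries or moving content.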


This item appears in the following Collection(s)

  • Centre for Wireless Research (CWR)
    The Centre for Wireless Research brings together expertise in the areas of mobile and wireless sensor networks. The breadth and depth of the expertise make the Centre rich with research and innovation potential.
