Hdl Handle:
http://hdl.handle.net/10547/335955
Title:
Super depth-map rendering by converting holoscopic viewpoint to perspective projection
Authors:
Alazawi, E.; Abbod, M.; Aggoun, Amar; Swash, M. R.; Fatah, O. Abdul; Fernandez, Juan C. J.
Abstract:
The expansion of 3D technology will enable observers to perceive 3D content without any eye-wear devices. Holoscopic 3D imaging technology offers natural 3D visualisation of real 3D scenes that can be viewed by multiple viewers independently of their position. However, the creation of a super depth-map and the reconstruction of a 3D object from a holoscopic 3D image are still in their infancy. The aim of this work is to build a high-quality depth map of a real 3D scene from a holoscopic 3D image by extracting multi-view, high-resolution Viewpoint Images (VPIs) that compensate for the poor features of the native low-resolution VPIs. To achieve this, we propose a reconstruction method based on the perspective formula that converts sets of directional, orthographic, low-resolution VPIs into perspective projection geometry. Following that, we implement an Auto-Feature-Point algorithm that segments the synthesized VPIs into distinctive Feature-Edge (FE) blocks, localizing features and providing an individual feature detector responsible for integrating the 3D information. Detailed experiments demonstrate the reliability and efficiency of the proposed method, which outperforms state-of-the-art methods for depth-map creation.
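
Illustrative note (not part of the record): the sketch below shows, under stated assumptions, the standard way orthographic viewpoint images (VPIs) are extracted from a holoscopic/integral capture, namely by collecting the pixel at the same offset under every micro-lens. It assumes the image is a NumPy array whose dimensions are exact multiples of a square micro-lens pitch; the function name and parameters are hypothetical, and the paper's perspective conversion and Feature-Edge block processing are not reproduced here.

import numpy as np

def extract_viewpoint_image(holoscopic: np.ndarray, lens_size: int,
                            u: int, v: int) -> np.ndarray:
    """Collect the pixel at offset (u, v) under every micro-lens.

    holoscopic : holoscopic/integral image, H x W (or H x W x C), with H and W
                 assumed to be exact multiples of lens_size.
    lens_size  : number of pixels under each (square) micro-lens.
    (u, v)     : pixel offset inside each micro-lens, 0 <= u, v < lens_size.

    Returns the low-resolution orthographic viewpoint image (VPI) of size
    (H // lens_size) x (W // lens_size).
    """
    return holoscopic[u::lens_size, v::lens_size]

# Usage sketch: a 9 x 9 micro-lens pitch yields 81 directional VPIs.
# img = np.asarray(...)  # holoscopic capture (placeholder)
# vpis = [extract_viewpoint_image(img, 9, u, v)
#         for u in range(9) for v in range(9)]
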
Affiliation:
University of Bedfordshire; Brunel University
Citation:
Alazawi, E., Abbod, M., Aggoun, A., Swash, M.R., Fatah, O.A., Fernandez, J., (2014) 'Super depth-map rendering by converting holoscopic viewpoint to perspective projection' 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Budapest, 2-4 July.
Publisher:
IEEE
Issue Date:
Jul-2014
URI:
http://hdl.handle.net/10547/335955
DOI:
10.1109/3DTV.2014.6874714
Additional Links:
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6874714
Type:
Conference papers, meetings and proceedings
Language:
en
Sponsors:
The authors gratefully acknowledge the support of the European Commission under the Seventh Framework Programme (FP7) project 3D VIVANT (Live Immerse Video-Audio Interactive Multimedia).
Appears in Collections:
Centre for Computer Graphics and Visualisation (CCGV)

Full metadata record

DC Field | Value | Language
dc.contributor.author | Alazawi, E. | en
dc.contributor.author | Abbod, M. | en
dc.contributor.author | Aggoun, Amar | en
dc.contributor.author | Swash, M. R. | en
dc.contributor.author | Fatah, O. Abdul | en
dc.contributor.author | Fernandez, Juan C. J. | en
dc.date.accessioned | 2014-11-21T13:55:14Z | -
dc.date.available | 2014-11-21T13:55:14Z | -
dc.date.issued | 2014-07 | -
dc.identifier.citation | Alazawi, E., Abbod, M., Aggoun, A., Swash, M.R., Fatah, O.A., Fernandez, J., (2014) 'Super depth-map rendering by converting holoscopic viewpoint to perspective projection' 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Budapest, 2-4 July. | en
dc.identifier.doi | 10.1109/3DTV.2014.6874714 | -
dc.identifier.uri | http://hdl.handle.net/10547/335955 | -
dc.description.abstract | The expansion of 3D technology will enable observers to perceive 3D content without any eye-wear devices. Holoscopic 3D imaging technology offers natural 3D visualisation of real 3D scenes that can be viewed by multiple viewers independently of their position. However, the creation of a super depth-map and the reconstruction of a 3D object from a holoscopic 3D image are still in their infancy. The aim of this work is to build a high-quality depth map of a real 3D scene from a holoscopic 3D image by extracting multi-view, high-resolution Viewpoint Images (VPIs) that compensate for the poor features of the native low-resolution VPIs. To achieve this, we propose a reconstruction method based on the perspective formula that converts sets of directional, orthographic, low-resolution VPIs into perspective projection geometry. Following that, we implement an Auto-Feature-Point algorithm that segments the synthesized VPIs into distinctive Feature-Edge (FE) blocks, localizing features and providing an individual feature detector responsible for integrating the 3D information. Detailed experiments demonstrate the reliability and efficiency of the proposed method, which outperforms state-of-the-art methods for depth-map creation. | en
dc.description.sponsorship | The authors gratefully acknowledge the support of the European Commission under the Seventh Framework Programme (FP7) project 3D VIVANT (Live Immerse Video-Audio Interactive Multimedia). | en
dc.language.iso | en | en
dc.publisher | IEEE | en
dc.relation.url | http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6874714 | en
dc.subject | depth-map | en
dc.subject | feature descriptors | en
dc.subject | holoscopic 3D image | en
dc.subject | integral image | en
dc.subject | orthographic projection | en
dc.subject | perspective projection | en
dc.subject | viewpoints image | en
dc.title | Super depth-map rendering by converting holoscopic viewpoint to perspective projection | en
dc.type | Conference papers, meetings and proceedings | en
dc.contributor.department | University of Bedfordshire | en
dc.contributor.department | Brunel University | en
All Items in UOBREP are protected by copyright, with all rights reserved, unless otherwise indicated.