Optimal rate allocation for view synthesis along a continuous viewpoint location in multiview imaging
Abstract
We consider the scenario of view synthesis via depth-image-based rendering in multiview imaging. We formulate a resource allocation problem of jointly assigning an optimal number of bits to compressed texture and depth images such that the maximum distortion of a synthesized view, over a continuum of viewpoints between two encoded reference views, is minimized for a given bit budget. We construct simple yet accurate image models that characterize the pixel values at similar depths as first-order Gaussian auto-regressive processes. Based on these models, we derive an optimization procedure that numerically solves the formulated min-max problem using Lagrange relaxation. Through simulations we show that, for the two-captured-view scenario, our optimization provides a significant gain (up to 2 dB) in the quality of the synthesized views at the same overall bit rate over a heuristic quantization that selects only two quantizers: one for the encoded texture images and the other for the depth images.
Citation
Velisavljevic, V., Cheung, G. and Chakareski, J. (2010) 'Optimal rate allocation for view synthesis along a continuous viewpoint location in multiview imaging', Picture Coding Symposium (PCS), Nagoya, Japan, 8-10 December. Nagoya: IEEE, pp. 482-485.
Type
Conference papers, meetings and proceedings
Language
en
ISBN
9781424471348
DOI
10.1109/PCS.2010.5702542
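The allocation problem described in the abstract can be illustrated with a small sketch. The paper solves the min-max problem with Lagrange relaxation over its Gaussian auto-regressive image models; the brute-force version below only mirrors the objective: pick one texture quantizer and one depth quantizer so that the worst-case synthesized-view distortion over sampled viewpoints is minimized under a total bit budget. All names, numbers, and the additive texture/depth distortion model here are illustrative assumptions, not the paper's method.

```python
import itertools


def allocate_bits(tex_opts, dep_opts, budget):
    """Pick the (texture, depth) quantizer pair that minimizes the
    worst-case synthesized-view distortion under a total rate budget.

    tex_opts / dep_opts: lists of (rate_bits, distortion_profile),
    where distortion_profile gives a distortion value at each sampled
    viewpoint between the two reference views.  The additive
    texture + depth distortion model is a stand-in for illustration.
    Returns (worst_case_distortion, texture_rate, depth_rate) or None
    if no pair fits the budget.
    """
    best = None
    for (rt, dt), (rd, dd) in itertools.product(tex_opts, dep_opts):
        if rt + rd > budget:
            continue  # pair exceeds the bit budget
        # Worst-case distortion over the viewpoint continuum,
        # approximated by the sampled viewpoints.
        worst = max(t + d for t, d in zip(dt, dd))
        if best is None or worst < best[0]:
            best = (worst, rt, rd)
    return best
```

The paper's Lagrangian approach avoids this exhaustive search, but the returned quantity is the same: the allocation whose maximum distortion across intermediate viewpoints is smallest for the given budget.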