Efficient bit allocation for multiview image coding & view synthesis
Abstract
In this paper, we address the problem of efficient bit allocation among the texture and depth maps of multi-view images. We pose the following question: for a chosen (1) coding tool to encode texture and depth maps at the encoder and (2) view synthesis tool to reconstruct uncoded views at the decoder, how best to select captured views for encoding and distribute the available bits among the texture and depth maps of the selected coded views, such that the visual distortion of the reconstructed views, as measured by a chosen metric, is minimized. We show that, under a monotonicity assumption, suboptimal solutions can be efficiently pruned from the feasible space during the parameter search. Our experiments show that optimal selection of coded views and associated quantization levels for texture and depth maps can outperform a heuristic scheme using constant levels for all maps (commonly used in standard implementations) by up to 2.0 dB. Moreover, the complexity of our scheme can be reduced by up to 66% relative to full search without loss of optimality.
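The pruning idea in the abstract can be illustrated with a small sketch. Under the monotonicity assumption, a coarser quantization level yields a lower (or equal) bit rate and a higher (or equal) synthesis distortion, so for each texture quantization level the first depth level that satisfies the bit budget is also the best feasible one, and all coarser depth levels can be skipped. The sketch below is only illustrative and is not the authors' implementation; the rate and distortion functions are hypothetical stand-ins for real codec measurements.

```python
# Illustrative sketch (not the paper's implementation): search over
# quantization levels for the texture and depth maps of one coded view,
# pruning candidates via a monotonicity assumption.

def rate(q_texture, q_depth):
    # Hypothetical rate model, assumed monotone:
    # coarser quantization (larger q) -> fewer bits.
    return 1000.0 / (q_texture + 1) + 600.0 / (q_depth + 1)

def synth_distortion(q_texture, q_depth):
    # Hypothetical distortion model, assumed monotone:
    # coarser quantization -> larger view-synthesis distortion.
    return 2.0 * q_texture + 3.0 * q_depth

def search(budget, q_levels=range(52)):
    best = None  # (distortion, q_texture, q_depth)
    for q_t in q_levels:          # texture QP, finest to coarsest
        for q_d in q_levels:      # depth QP, finest to coarsest
            if rate(q_t, q_d) > budget:
                continue          # over budget; try a coarser depth QP
            d = synth_distortion(q_t, q_d)
            if best is None or d < best[0]:
                best = (d, q_t, q_d)
            # Monotonicity pruning: any coarser q_d can only raise the
            # distortion while the budget is already met, so stop here.
            break
    return best

print(search(budget=120.0))
```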
Citation
Cheung, G. and Velisavljevic, V. (2010) 'Efficient bit allocation for multiview image coding & view synthesis', 17th IEEE International Conference on Image Processing (ICIP), Hong Kong, 26-29 September. Hong Kong: IEEE, pp. 2613-2616.
Type
Conference papers, meetings and proceedings
Language
en
ISBN
9781424479931
DOI
10.1109/ICIP.2010.5651655