Multiscale deep context modeling for lossless point cloud geometry compression

Nguyen DT, Quach M, Valenzise G, Duhamel P (2021)


Publication Type: Conference contribution, Original article

Publication year: 2021

Conference Proceedings Title: 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)

URI: https://ieeexplore.ieee.org/abstract/document/9455990

DOI: 10.1109/ICMEW53276.2021.9455990

Abstract

We propose a practical deep generative approach for lossless point cloud geometry compression, called MSVoxelDNN, and show that it significantly reduces the rate compared to the MPEG G-PCC codec. Our previous work based on autoregressive models (VoxelDNN [1]) has a fast training phase; however, inference is slow because the occupancy probabilities are predicted sequentially, voxel by voxel. In this work, we employ a multiscale architecture which models voxel occupancy in coarse-to-fine order. At each scale, MSVoxelDNN divides voxels into eight conditionally independent groups, thus requiring a single network evaluation per group instead of one per voxel. We evaluate the performance of MSVoxelDNN on a set of point clouds from Microsoft Voxelized Upper Bodies (MVUB) and MPEG, showing that the current method significantly speeds up encoding/decoding compared to the previous VoxelDNN, while achieving an average rate saving of 17.5% over G-PCC. The implementation is available at https://github.com/Weafre/MSVoxelDNN.
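As a rough illustration of the coarse-to-fine grouping idea described in the abstract, the sketch below partitions a dense voxel occupancy grid into eight sub-grids by coordinate parity and forms the coarser scale by pooling occupancy over 2x2x2 blocks. The parity-based grouping and max-pooling here are illustrative assumptions, not the authors' exact scheme; the actual MSVoxelDNN grouping and scale construction are defined in the paper and the linked GitHub repository.

import numpy as np

def downsample_occupancy(voxels):
    # Coarse occupancy (assumed here as max-pooling): a parent voxel is
    # occupied if any of its 2x2x2 children is occupied.
    d, h, w = voxels.shape
    v = voxels.reshape(d // 2, 2, h // 2, 2, w // 2, 2)
    return v.max(axis=(1, 3, 5))

def split_into_eight_groups(voxels):
    # Partition voxels into eight groups by the parity of their (x, y, z)
    # indices (an illustrative choice). Each group could then be predicted
    # with a single network evaluation, conditioned on the coarser scale
    # and on previously decoded groups, rather than voxel by voxel.
    groups = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                groups.append(voxels[dx::2, dy::2, dz::2])
    return groups

# Toy example on a random 8x8x8 occupancy grid.
occ = (np.random.rand(8, 8, 8) > 0.9).astype(np.uint8)
coarse = downsample_occupancy(occ)      # 4x4x4 parent occupancy
groups = split_into_eight_groups(occ)   # eight 4x4x4 sub-grids
print(coarse.shape, [g.shape for g in groups])

The point of the split is that the eight groups at a scale are modeled as conditionally independent given the coarser scale, so the number of network evaluations per scale is constant instead of growing with the number of voxels.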


How to cite

APA:

Nguyen, D.T., Quach, M., Valenzise, G., & Duhamel, P. (2021). Multiscale deep context modeling for lossless point cloud geometry compression. In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW).

MLA:

Nguyen, Dat Thanh, et al. "Multiscale deep context modeling for lossless point cloud geometry compression." Proceedings of the 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2021.
