Recent Advances in Electrical & Electronic Engineering

ISSN (Print): 2352-0965
ISSN (Online): 2352-0973

Research Article

Multi-view 3D Reconstruction based on Context Information Fusion and Full Scale Connection

Author(s): Yunyan Wang, Yuhao Luo* and Chao Xiong

Volume 18, Issue 10, 2025

Published on: 06 January, 2025

Article ID: e23520965330361 Pages: 12

DOI: 10.2174/0123520965330361241007061452

Abstract

Background: Multi-view stereo matching reconstructs a three-dimensional point cloud model from multiple views. Although learning-based methods achieve excellent results compared with traditional methods, existing multi-view stereo matching methods lose low-level details during feature extraction as the number of convolutional layers increases, which degrades the quality of the subsequent reconstruction.

Objective: The objective of this work is to improve the completeness and accuracy of 3D reconstruction and to obtain a 3D point cloud model with richer texture and a more complete structure.

Methods: First, a context-semantic information fusion module is constructed in the FPN feature extraction network, so that feature maps containing rich context information can be obtained through multi-scale dense connections. Subsequently, full-scale skip connections are introduced in the regularization process to capture shallow detail information and deep semantic information across all scales and to represent the texture features of the scene more accurately, enabling reliable depth estimation. A minimal sketch of the underlying fusion idea is given below.
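
The following is a minimal, illustrative sketch of the core idea behind multi-scale context fusion and full-scale connections: feature maps from every scale are resampled to a common resolution, concatenated, and mixed, so that shallow detail and deep semantic context coexist in one map. All module names, channel counts, and layer choices here are assumptions for illustration, not the authors' published CU-MVSNet implementation.

```python
# Illustrative PyTorch sketch of full-scale feature fusion.
# Names and hyperparameters are hypothetical, not from CU-MVSNet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FullScaleFusion(nn.Module):
    """Aggregate feature maps from every scale at one target resolution.

    Shallow (fine) maps contribute detail, deep (coarse) maps contribute
    semantic context; all are resampled to the target size, concatenated,
    and mixed by a 1x1 convolution.
    """

    def __init__(self, in_channels=(8, 16, 32, 64), out_channels=32):
        super().__init__()
        self.mix = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, features, target_size):
        # Resample every pyramid level to the common target resolution.
        resampled = [
            F.interpolate(f, size=target_size, mode="bilinear",
                          align_corners=False)
            for f in features
        ]
        # Dense concatenation across scales, then channel mixing.
        return self.mix(torch.cat(resampled, dim=1))


if __name__ == "__main__":
    # Four FPN-style levels, finest first (hypothetical channel counts).
    feats = [torch.randn(1, c, 128 // 2 ** i, 160 // 2 ** i)
             for i, c in enumerate((8, 16, 32, 64))]
    fused = FullScaleFusion()(feats, target_size=feats[0].shape[-2:])
    print(fused.shape)  # torch.Size([1, 32, 128, 160])
```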

Results: Experimental results on the DTU dataset show that the proposed CU-MVSNet reduces the completeness error by 3.58%, the accuracy error by 3.7%, and the overall error by 3.51% compared with the baseline network. The method also generalizes well on the Tanks and Temples (TnT) dataset.

Conclusion: The proposed CU-MVSNet method improves the completeness and accuracy of 3D reconstruction and obtains a 3D point cloud model with more detailed texture and a more complete structure.

Keywords: Deep learning, three-dimensional reconstruction, multi-view stereo, full scale connection, information fusion, fringe projection profilometry.

Rights & Permissions Print Cite
© 2025 Bentham Science Publishers | Privacy Policy