Simultaneous Localization and Mapping Model for Autonomous Driving Based on Multi-Sensor Fusion
Abstract
With the development of intelligent transportation and autonomous driving technology, achieving accurate and robust simultaneous localization and mapping (SLAM) has become a research focus. To improve the navigation accuracy and environmental perception capability of multi-sensor fusion systems in complex road environments, this study constructs a high-precision SLAM model that integrates inertial measurement units, LiDAR, cameras, and wheel speed sensors. A loop closure detection mechanism and a graph optimization method are also introduced to enhance trajectory consistency and drift correction. Experiments showed that the positioning accuracy of the proposed method increased from 85% to 97%, while the false alarm rate decreased from 22% to 7%. The initial heading deviation was kept within 0.8 degrees, with a root mean square error of 0.53 m and an average processing time of 60 ms per frame. Further simulation showed that, across three typical road scenarios, the model achieved an average positioning accuracy of 93.8%, detection coverage above 95%, trajectory smoothness better than 0.07 m, and cumulative error drift within 0.37 m/km. The results indicate that the model delivers significant improvements in positioning accuracy, mapping consistency, and environmental adaptability, and has the potential for deployment in real autonomous driving systems.
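To make the role of loop closure and graph optimization concrete, the following is a minimal illustrative sketch, not the paper's actual optimizer: a 1-D pose graph in which odometry accumulates drift and a single loop-closure constraint pulls the trajectory back toward consistency. All names, measurements, and weights here are invented for illustration.

```python
# Hypothetical 1-D pose-graph optimization with one loop-closure edge,
# solved by plain gradient descent on a least-squares cost.
# This is an illustrative sketch, not the method described in the paper.

def optimize_pose_graph(odometry, loop_closure, weight=10.0, iters=5000, lr=0.01):
    """odometry[k] is the measured displacement from pose k to pose k+1.
    loop_closure = (i, j, d): measured displacement from pose i to pose j."""
    n = len(odometry) + 1
    # Initial guess: dead-reckon by integrating odometry (drift accumulates).
    x = [0.0]
    for d in odometry:
        x.append(x[-1] + d)
    i, j, d_loop = loop_closure
    for _ in range(iters):
        grad = [0.0] * n
        # Odometry residuals: (x[k+1] - x[k]) - odometry[k]
        for k, d in enumerate(odometry):
            r = (x[k + 1] - x[k]) - d
            grad[k + 1] += 2 * r
            grad[k] -= 2 * r
        # Loop-closure residual, weighted more strongly than odometry.
        r = (x[j] - x[i]) - d_loop
        grad[j] += 2 * weight * r
        grad[i] -= 2 * weight * r
        grad[0] = 0.0  # anchor the first pose to remove gauge freedom
        x = [xk - lr * g for xk, g in zip(x, grad)]
    return x

# Each odometry step over-reads by 0.1 m; the loop closure reports that the
# true end-to-start displacement is 4.0 m, so optimization corrects the drift.
poses = optimize_pose_graph([1.1, 1.1, 1.1, 1.1], (0, 4, 4.0))
```

In practice the same idea is applied to full SE(2)/SE(3) poses and solved with sparse nonlinear least-squares solvers rather than gradient descent, but the structure (odometry edges plus heavily weighted loop-closure edges) is the same.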

