With the advancement of mobile mapping technology, multi-view imagery has become an important source of road observation data. However, traditional matching methods struggle with the geometric distortions and viewpoint variations between such images. To improve the precision and reliability of image matching, this study investigates a matching technique based on deep learning: deep features extracted by a convolutional neural network (CNN) enable accurate matching and 3D spatial positioning of multi-view images. Specifically, the study employs Deep Feature Matching (DFM), which builds on a pre-trained VGG19 model and combines a two-stage matching strategy with the Random Sample Consensus (RANSAC) algorithm to filter out erroneous matching points and ensure reliable results. The experimental data consist of multi-view images in which traffic signs serve as the targets for matching and positioning. The results show that, compared with the traditional SIFT method, DFM achieves a higher success rate and better positioning accuracy across image scenarios involving scale differences, shape distortions, and occlusion; notably, DFM yields significantly more matching points than SIFT under distortion and occlusion. Furthermore, the traffic sign positioning analysis shows a positioning success rate of 70%, with an average error below 0.5 m for successfully located points. These findings highlight the practical potential of DFM for 3D positioning from multi-view imagery in complex scenes and confirm its superior success rate and accuracy.
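For illustration, the following minimal Python sketch shows how such a DFM-style pipeline can be assembled from off-the-shelf components: dense features from a truncated pre-trained VGG19 (torchvision), mutual nearest-neighbor matching as a simplified stand-in for the two-stage strategy, and RANSAC fundamental-matrix filtering via OpenCV. The layer choice (relu4_4), stride, and thresholds are illustrative assumptions, not the exact configuration used in the study.

```python
# Minimal DFM-style sketch: dense VGG19 features, mutual nearest-neighbor
# matching, and RANSAC outlier filtering. Layer choice, stride, and
# thresholds are illustrative assumptions, not the study's configuration.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Truncate pre-trained VGG19 after relu4_4 (features index 26) to obtain a
# dense deep-feature map at 1/8 of the input resolution.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:27].eval()
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def dense_features(img_bgr):
    """Return (h*w, C) L2-normalized deep features and the feature-map size."""
    rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        fmap = vgg(preprocess(rgb).unsqueeze(0))[0]      # (C, h, w)
    c, h, w = fmap.shape
    feats = fmap.permute(1, 2, 0).reshape(-1, c)
    return torch.nn.functional.normalize(feats, dim=1).numpy(), (h, w)

def match_images(img_a, img_b):
    fa, (_, wa) = dense_features(img_a)
    fb, (_, wb) = dense_features(img_b)
    # Stage 1 (simplified): mutual nearest neighbors in deep-feature space.
    sim = fa @ fb.T
    nn_ab = sim.argmax(axis=1)                 # best match in B for each A
    nn_ba = sim.argmax(axis=0)                 # best match in A for each B
    idx_a = np.where(nn_ba[nn_ab] == np.arange(len(fa)))[0]
    idx_b = nn_ab[idx_a]
    # Map feature-map indices back to pixel coordinates (stride 8 at relu4_4).
    stride = 8
    pts_a = np.stack([idx_a % wa, idx_a // wa], axis=1) * stride + stride // 2
    pts_b = np.stack([idx_b % wb, idx_b // wb], axis=1) * stride + stride // 2
    # Stage 2: RANSAC on the fundamental matrix removes correspondences that
    # violate two-view epipolar geometry.
    F, inliers = cv2.findFundamentalMat(
        pts_a.astype(np.float32), pts_b.astype(np.float32),
        cv2.FM_RANSAC, ransacReprojThreshold=3.0, confidence=0.99)
    if F is None:                              # too few tentative matches
        return np.empty((0, 2)), np.empty((0, 2))
    mask = inliers.ravel().astype(bool)
    return pts_a[mask], pts_b[mask]

# Usage (hypothetical file names):
# pts_a, pts_b = match_images(cv2.imread("view1.jpg"), cv2.imread("view2.jpg"))
```

The mutual nearest-neighbor step stands in for DFM's hierarchical coarse-to-fine refinement, and the geometric verification mirrors the RANSAC filtering described above; with known camera poses, a triangulation step (e.g., cv2.triangulatePoints) would then recover the 3D positions of the matched traffic-sign points.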