Traditional path planning algorithms for Unmanned Aerial Vehicles (UAVs) primarily optimize geometric metrics such as path length and energy efficiency. In GPS-denied environments, however, where external positioning is unreliable, the quality of visual localization is paramount for mission success. This study introduces a novel Deep Reinforcement Learning (DRL) framework that co-optimizes the UAV path for both geometric efficiency and visual localization robustness. Specifically, our method integrates the density of matched image feature points, extracted from post-processed aerial imagery, directly into the planning process, steering the generated trajectory through visually rich areas that enhance navigation accuracy. To address the sparse rewards and unstable training that afflict this planning task, we employ an advanced DRL architecture: Noisy Dueling Double DQN with Prioritized Experience Replay (Noisy D3QN with PER). This integration leverages Double DQN to refine value estimation, Dueling DQN to improve generalization, PER to enhance sample efficiency, and Noisy Networks to promote robust and efficient exploration. The proposed framework is implemented in a simulated 2.5D environment with a customized reward function that accounts for both UAV state parameters and terrain features. Experimental results demonstrate that the method generates efficient, visually coherent, and dynamically smooth trajectories. Crucially, after a single training session it can infer paths for multiple independent missions from various starting points, achieving superior computational efficiency compared with traditional geometric planners. These findings highlight the potential of integrating visual features into reinforcement learning-based UAV path planning to significantly enhance visual localization performance in complex environments.
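The three value-estimation mechanisms named above (dueling aggregation, the Double DQN target, and PER sampling priorities) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function names and array-based formulation are assumptions for exposition, not the paper's implementation, which additionally uses Noisy Network layers for exploration.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a),
    # which forces the advantage stream to be zero-mean per state.
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network selects the next action,
    # the target network evaluates it, reducing overestimation bias.
    best = np.argmax(q_online_next, axis=-1)
    q_eval = q_target_next[np.arange(len(best)), best]
    return reward + gamma * (1.0 - done) * q_eval

def per_probabilities(td_errors, alpha=0.6, eps=1e-6):
    # Prioritized Experience Replay: sampling probability
    # proportional to (|TD error| + eps)^alpha.
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()
```

In a training loop, `per_probabilities` would drive sampling from the replay buffer, and `double_dqn_target` would supply the regression target for the dueling Q-network's output.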