This paper presents a landmark-oriented visual navigation system for unmanned vehicles (UMVs). An extensible knowledge-based parser, built on a syntactic approach, recognizes corridor T-junctions and crossroads. In addition, both geometric-perspective and flat-earth models are adopted to estimate the distance from objects to the front of the UMV. With the proposed steering control method, which combines human-like driving skill with feature-line information, the initial position and orientation of the UMV need not be known exactly. A turning procedure is also developed that exploits the geometric relationship between a reference baseline and the orientation of the UMV. Numerous experiments conducted on a real UMV verify the effectiveness of the proposed methods: the vehicle precisely detects the locations of T-junctions and crossroads, moves forward, and makes smooth turns in corridors.
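The flat-earth distance model mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole camera mounted at a known height above a flat floor and tilted downward by a known angle, and all parameter names (`v`, `cy`, `fy`, `cam_height`, `tilt_rad`) are hypothetical.

```python
import math

def flat_earth_distance(v, cy, fy, cam_height, tilt_rad):
    """Estimate the horizontal ground distance to a point imaged at
    pixel row v, under a flat-earth assumption.

    v          -- image row of the object's contact point with the floor
    cy         -- image row of the principal point
    fy         -- focal length in pixels (vertical)
    cam_height -- camera height above the floor, in meters
    tilt_rad   -- downward tilt of the optical axis from horizontal
    """
    # Angle of the viewing ray below the optical axis
    # (image rows grow downward, so v > cy looks further down).
    ray_angle = math.atan((v - cy) / fy)
    # Total depression angle of the ray below horizontal.
    depression = tilt_rad + ray_angle
    if depression <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Similar triangles: height / tan(depression) = ground distance.
    return cam_height / math.tan(depression)
```

For example, with the camera 1 m above the floor and tilted down 45 degrees, a point at the principal-point row lies 1 m ahead, and points imaged lower in the frame are correspondingly closer.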