Sequence-to-sequence (seq2seq) neural network models are able to generate natural-sounding conversational responses for open-domain dialogue systems. However, these models tend to produce safe, universal responses (e.g., "I don't know") regardless of the input, which carry little information and can easily bring a conversation to an end. In this paper, we propose a new Topic-driven Response Generation Model (TRGM) that leverages topic information to generate interesting and informative responses. First, we design a topic generation model based on BERT to learn the topic information of the input. Then, a response generation model uses a gate mechanism and a mixed probability model to integrate the topic knowledge into a seq2seq model. We implement the two components as an end-to-end neural network and train them jointly, treating each component as a subtask. Experimental results on a public dataset demonstrate that our method significantly outperforms state-of-the-art baselines on both automatic evaluation metrics and human judgment.
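As a rough illustration of the gated mixing step described above, the PyTorch sketch below combines a seq2seq vocabulary distribution with a topic-word distribution through a learned scalar gate. The module name `TopicGate` and all dimensions are hypothetical placeholders; the paper's actual gate parameterization and mixed probability model may differ.

```python
# A minimal sketch, assuming the gate is conditioned on the decoder hidden
# state and a topic vector, and that both component distributions are already
# normalized over the vocabulary. Names and shapes are illustrative only.
import torch
import torch.nn as nn


class TopicGate(nn.Module):
    """Mixes the seq2seq vocabulary distribution with a topic-word
    distribution using a scalar gate computed from the decoder state."""

    def __init__(self, hidden_dim: int, topic_dim: int):
        super().__init__()
        # Gate conditioned on the decoder hidden state and the topic vector.
        self.gate = nn.Linear(hidden_dim + topic_dim, 1)

    def forward(self, decoder_state, topic_vec, p_seq2seq, p_topic):
        # decoder_state: (batch, hidden_dim); topic_vec: (batch, topic_dim)
        # p_seq2seq, p_topic: (batch, vocab_size) probability distributions
        g = torch.sigmoid(
            self.gate(torch.cat([decoder_state, topic_vec], dim=-1))
        )
        # Mixed probability: the gate decides, per decoding step, how much
        # weight to give the topic distribution versus the seq2seq one.
        return g * p_seq2seq + (1.0 - g) * p_topic
```

Because the two distributions are convexly combined, the output remains a valid probability distribution over the vocabulary, so standard cross-entropy training applies unchanged.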