With the rapid development of SSDs (Solid State Drives), SSDs have replaced traditional hard drives in many applications. Since SSDs are built from NAND flash memory, their main challenge is that NAND flash memory is highly sensitive to write requests. A large number of write requests triggers garbage collection to reclaim free space because of the “out-of-place update” characteristic of flash memory, and frequent garbage collection reduces both the lifetime of flash memory and overall performance. When SSDs are used for data storage, reducing the amount of data written therefore becomes an important issue. In this paper, we propose a data de-duplication access framework for SSDs. The objective is to eliminate as much duplicate data as possible and reduce space consumption. We combine file-based de-duplication with a static chunking de-duplication scheme to achieve complete data de-duplication, and we exploit application-based locality and file-name locality to identify duplicate data. The experimental results show that the proposed framework can efficiently identify duplicate data and significantly reduce the amount of data written, while incurring reasonable overhead.
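To illustrate the general idea of static chunking de-duplication mentioned above, the following is a minimal sketch (not the paper's actual implementation): incoming data is split into fixed-size chunks, each chunk is fingerprinted with a cryptographic hash, and only chunks whose fingerprints have not been seen before are written. The chunk size, the in-memory fingerprint store, and the function names here are illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size; the actual framework may use a different size

# Illustrative in-memory fingerprint store: fingerprint -> first offset seen
fingerprint_store = {}

def deduplicate(data: bytes):
    """Split data into fixed-size chunks and keep only chunks not seen before."""
    unique_chunks = []
    duplicate_count = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in fingerprint_store:
            duplicate_count += 1              # duplicate chunk: only a reference would be stored
        else:
            fingerprint_store[fp] = offset    # new chunk: record its fingerprint and write it
            unique_chunks.append(chunk)
    return unique_chunks, duplicate_count

# Example: two identical 4 KB chunks of "A" followed by one chunk of "B"
data = b"A" * 8192 + b"B" * 4096
unique, dups = deduplicate(data)
# unique holds 2 distinct chunks; dups == 1, so one chunk write is avoided
```

In a real SSD-level de-duplication framework, the fingerprint store would be kept in persistent metadata and duplicate chunks would be handled by remapping logical addresses to the existing physical copy rather than by counting them as above.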