Most existing exposure fusion methods are sensitive to the positions of objects in the image, so a slight shift of the camera while capturing the source images can produce blurred or ghosted (double) results. To address this problem, we propose SIDWTBEF, a method based on the shift-invariant discrete wavelet transform (SIDWT) that is more robust to slightly shifted source images. In addition, we present a novel way to obtain the chrominance information of the scene, allowing the saturation of the fused image to be adjusted with a single user-controlled parameter. The luminance sequence of the source images is decomposed by the SIDWT into sub-images at a chosen number of scales. In the transform domain, separate fusion rules are applied to combine the high-pass sub-images and the low-pass sub-images. Finally, an enhancement operator is proposed to reduce the inconsistencies introduced by the fusion rules after the inverse SIDWT is applied. Experiments show that SIDWTBEF gives competitive results compared with shift-dependent exposure fusion methods.
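The pipeline summarized above, a shift-invariant decomposition of the luminance sequence, per-band fusion rules, and an inverse transform, can be sketched as follows. This is only an illustrative sketch, not the paper's implementation: it uses an à trous (undecimated) B3-spline decomposition as a stand-in for the SIDWT, a maximum-absolute-value rule for the high-pass sub-images, and averaging for the low-pass residual. All function names, the kernel, and the fusion rules here are our own assumptions.

```python
import numpy as np

def _sep_convolve(img, k):
    """Separable 2D convolution with reflected borders (helper)."""
    pad = len(k) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='reflect'), k, mode='valid'), 1, img)
    out = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='reflect'), k, mode='valid'), 0, out)
    return out

def atrous_decompose(img, levels=3):
    """Shift-invariant (undecimated) 'a trous' wavelet decomposition.

    Returns `levels` high-pass detail planes followed by the low-pass
    residual; summing all planes reconstructs the input exactly.
    """
    base = np.array([1., 4., 6., 4., 1.]) / 16.  # B3-spline kernel
    planes = []
    current = img.astype(float)
    for lvl in range(levels):
        # Dilate the kernel by inserting 2**lvl - 1 zeros between taps
        step = 2 ** lvl
        k = np.zeros(4 * step + 1)
        k[::step] = base
        smooth = _sep_convolve(current, k)
        planes.append(current - smooth)  # high-pass detail at this scale
        current = smooth
    planes.append(current)               # low-pass residual
    return planes

def fuse_luminance(images, levels=3):
    """Fuse a luminance sequence: max-abs rule on the high-pass
    sub-images, mean on the low-pass residual, then invert by summing."""
    decs = [atrous_decompose(im, levels) for im in images]
    fused = []
    for lvl in range(levels):
        stack = np.stack([d[lvl] for d in decs])
        idx = np.abs(stack).argmax(axis=0)           # strongest detail wins
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    fused.append(np.mean([d[-1] for d in decs], axis=0))
    return sum(fused)  # inverse a trous transform: sum of all planes
```

Because no sub-band is downsampled, a one-pixel shift in a source image shifts every plane by the same amount, which is the property that makes the fusion robust to small camera movements.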