

Journal of Information Science and Engineering, Vol. 40, No. 3, pp. 649-659


Relation-Aware Image Captioning with Hybrid-Attention for Explainable Visual Question Answering


YING-JIA LIN, CHING-SHAN TSENG AND HUNG-YU KAO+
Department of Computer Science and Information Engineering
National Cheng Kung University
Tainan, 701 Taiwan
E-mail: hykao@mail.ncku.edu.tw


Recent studies that leverage object detection as a preliminary step for Visual Question Answering (VQA) ignore the relationships between different objects in an image with respect to the textual question. In addition, previous VQA models behave like black-box functions, making it difficult to explain why a model gives a particular answer to a given input. To address these issues, we propose a new model structure that strengthens the representations of individual objects and provides explainability for the VQA task. We construct a relation graph to capture the relative positions between region pairs and then create relation-aware visual features with a relation encoder based on graph attention networks. To make the final VQA predictions explainable, we introduce a multi-task learning framework with an additional explanation generator that helps our model produce reasonable explanations. Simultaneously, the generated explanations are incorporated with the visual features through a novel Hybrid-Attention mechanism to enhance cross-modal understanding. Experiments show that the proposed method performs better on the VQA task than several baselines. In addition, incorporating the explanation generator allows the model to provide reasonable explanations along with its predicted answers.


Keywords: visual question answering, explainable VQA, multi-task learning, graph attention networks, vision-language model
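To make the abstract's architecture more concrete, the following is a minimal, illustrative sketch (not the authors' code) of the two ideas it describes: a graph-attention layer over region features whose attention scores are biased by an embedding of the spatial relation between region pairs, and a cross-attention fusion of visual features with text tokens standing in for the Hybrid-Attention mechanism. All module names, dimensions, and the specific form of the relation bias are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationGATLayer(nn.Module):
    """Refines region features with attention biased by pairwise spatial relations."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learned bias per spatial-relation label (e.g., "left of", "inside");
        # the label set is hypothetical here.
        self.rel_bias = nn.Embedding(num_relations, 1)

    def forward(self, regions, rel_ids):
        # regions: (B, N, dim) region features; rel_ids: (B, N, N) relation labels
        q, k, v = self.q(regions), self.k(regions), self.v(regions)
        scores = q @ k.transpose(-1, -2) / regions.size(-1) ** 0.5   # (B, N, N)
        scores = scores + self.rel_bias(rel_ids).squeeze(-1)         # relation-aware bias
        attn = F.softmax(scores, dim=-1)
        return regions + attn @ v                                     # residual update

class HybridAttentionFusion(nn.Module):
    """Cross-attention from visual features to text tokens (question / explanation)."""
    def __init__(self, dim):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, visual, text):
        fused, _ = self.cross(visual, text, text)
        return visual + fused

In this sketch, relation-aware encoding and text fusion are kept as separate modules that could be stacked before an answer classifier; the paper's actual design may differ in how the generated explanations are injected.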
