A multimodal fusion model with multi-level attention mechanism for depression detection


Computer Science

Depression is a common mental illness that affects the physical and mental health of hundreds of millions of people worldwide, so designing an efficient and robust depression detection model is an urgent research task. To fully extract depression-related features, we systematically analyze audio-visual and text data related to depression and propose a multimodal fusion model with a multi-level attention mechanism (MFM-Att) for depression detection. The method is divided into two stages: the first stage uses two LSTMs and a Bi-LSTM with an attention mechanism to learn multi-view audio features, visual features, and rich text features, respectively. In the second stage, the output features of the three modalities are fed into an attention fusion network (AttFN) to obtain effective depression information, exploiting the diversity and complementarity between modalities. Notably, the multi-level attention mechanism not only extracts valuable intra-modality depressive features but also learns inter-modality correlations, improving the overall performance of the model by reducing the influence of redundant information. The MFM-Att model is evaluated on the DAIC-WOZ dataset and outperforms state-of-the-art models in terms of root mean square error (RMSE).
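The second-stage fusion described above can be illustrated with a minimal sketch: assuming each modality encoder has already produced a fixed-size feature vector, attention weights are computed over the three modalities and used to form a weighted combination. The function and variable names (`attention_fuse`, `w`) are illustrative placeholders, not the paper's actual implementation, and the scoring vector here is random rather than learned.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(audio, visual, text, w):
    # Stack the three modality features: shape (3, d)
    feats = np.stack([audio, visual, text])
    # Score each modality against a scoring vector w (learned in practice)
    scores = feats @ w            # shape (3,)
    alpha = softmax(scores)       # attention weights over modalities, sums to 1
    return alpha @ feats          # attention-weighted fusion, shape (d,)

rng = np.random.default_rng(0)
d = 8  # illustrative feature dimension
fused = attention_fuse(rng.normal(size=d), rng.normal(size=d),
                       rng.normal(size=d), rng.normal(size=d))
print(fused.shape)
```

In a trained model, `w` (or a small scoring network) would be learned jointly with the modality encoders, letting the fusion stage down-weight redundant or noisy modalities per sample.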

Journal Title

Biomedical Signal Processing and Control
