Title:
Explainable depression detection with multi-aspect features using a hybrid deep learning model on social media
Source:
urn:ISSN:1386-145X ; urn:ISSN:1573-1413 ; World Wide Web, 25, 1, 281-304
Publisher Information:
Springer
Publication Year:
2022
Collection:
UNSW Sydney (The University of New South Wales): UNSWorks
Document Type:
Academic journal article in journal/newspaper
File Description:
application/pdf
Language:
unknown
DOI:
10.1007/s11280-021-00992-2
Rights:
open access ; https://purl.org/coar/access_right/c_abf2 ; CC BY ; https://creativecommons.org/licenses/by/4.0/ ; free_to_read ; This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Accession Number:
edsbas.B58E7928
Database:
BASE

Abstract:

The ability to explain why a model produced a given result is an important problem, especially in the medical domain. Model explainability builds trust by providing insight into the model's predictions. However, most existing machine learning methods offer no explainability: in the task of automatic depression prediction, for instance, most models produce predictions that are opaque to humans. In this work, we propose an explainable Multi-aspect Depression detection model with a Hierarchical Attention Network (MDHAN) for the automatic detection of depressed users on social media, together with explanations of its predictions. We consider user posts augmented with additional features from Twitter. Specifically, we encode user posts with attention mechanisms applied at two levels, the tweet level and the word level, calculate the importance of each tweet and each word, and capture semantic sequence features from user timelines (posts). The hierarchical attention model is designed to capture patterns that lead to explainable results. Our experiments show that MDHAN outperforms several popular and robust baseline methods, demonstrating the effectiveness of combining deep learning with multi-aspect features. We also show that our model improves predictive performance when detecting depression in users who post publicly on social media. MDHAN achieves strong performance and provides adequate evidence to explain its predictions.
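As a rough illustration of the two-level attention pooling described in the abstract, the sketch below shows how word vectors might be attention-pooled into a tweet vector, and tweet vectors into a user representation, with the attention weights exposed for explainability. This is a simplified, hypothetical sketch, not the authors' implementation: MDHAN uses learned encoders and attention parameters, whereas here the scoring is a plain dot product against fixed `word_context` and `tweet_context` vectors introduced purely for illustration.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of raw attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, context):
    # score each vector against a context vector (dot product),
    # normalise with softmax, and return the weighted sum plus the weights
    scores = [sum(v_i * c_i for v_i, c_i in zip(v, context)) for v in vectors]
    weights = softmax(scores)
    dim = len(vectors[0])
    pooled = [sum(w * v[d] for w, v in zip(weights, vectors)) for d in range(dim)]
    return pooled, weights

def encode_user(tweets, word_context, tweet_context):
    # word level: pool each tweet's word embeddings into a tweet vector;
    # tweet level: pool the tweet vectors into a single user representation.
    # The returned weights indicate which tweets/words drove the encoding.
    tweet_vecs, word_attn = [], []
    for words in tweets:
        vec, w = attend(words, word_context)
        tweet_vecs.append(vec)
        word_attn.append(w)
    user_vec, tweet_attn = attend(tweet_vecs, tweet_context)
    return user_vec, tweet_attn, word_attn
```

In a trained model the context vectors (and the encoders producing the word embeddings) would be learned end-to-end; the per-word and per-tweet weights are what would be surfaced as the explanation for a prediction.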