Location

Harare, Zimbabwe and Virtual

Start Date

September 12, 2024, 4:55 PM

End Date

September 12, 2024, 5:20 PM

Description

Artificial Intelligence (AI) predictive tools are increasingly used to predict and diagnose mental health disorders from data mined on social media platforms. Despite growing adoption, limited literature explicitly defines the ethical limits of this technology in the context of predictive mental health assessment. This study adopts a qualitative, deductive approach grounded in socio-technical systems theory, evaluated through a systematic literature review of a data set of 40 documents. The results indicate that no clear regulations define how much, and what type of, data social networking companies may collect for mental health assessments; these companies often operate outside the formal ethical governance frameworks typically imposed on medical professionals. Diagnosing mental health disorders without informed consent, violating privacy through data sharing, lack of transparency and accountability, and limited training-data scope emerged as the key ethical limitations. Exploring these ethical issues provides insight into the responsible and effective use of this technology in mental health care.


Ethical Limitations of Using AI to Predict and Diagnose Mental Health Disorders Based on Individuals’ Social Media Activity

