This talk is inspired by the topics and challenges faced by the FAITH research project, which aims to develop an AI-based instrument able to recognize signs of depression in post-cancer patients.
This is done by analyzing daily-life data of the patients, such as nutrition, sleep quality, physical activity, and voice patterns. The goal of the talk is twofold. On one side, we want to highlight a tech/engineering problem: the trend of building tools that replace practitioners while overlooking the impact on their specific contexts.
On the other side, we will describe how we are tackling this problem. When AI is used to recognize depression signals, the risk is ending up with a “mere” diagnostic tool. Let’s assume we manage to develop a perfect algorithm that detects depression: at first sight this might look like the desired achievement, but as we look deeper we face several important challenges and paradoxes.
For example: what happens if the tool is used autonomously by a patient? Would the patient be negatively influenced by a tool suggesting a high risk of depression? How useful would a tool that recognizes depression be compared to a diagnosis made by a psychiatrist?
In the talk, we will analyse these problems to show that their implications are far from trivial. Drawing on the tool we are developing in the FAITH project, we will describe our approach to building an instrument capable of recognizing mental health risks before they become a clinical problem, and of monitoring patients’ status even in situations where direct contact is difficult (as in the case of post-cancer patients, but also in the circumstances introduced by the pandemic).
Furthermore, we will look at common challenges that arise when AI is introduced in critical contexts like mental health:
- transparency of results and AI explainability
- responsibility in case of errors
- autonomy of the practitioners (how comfortable would a practitioner be overriding the AI's results when they believe those results are inaccurate?)