Join MEXA here. Register for the Hackathon here.
Teams will have from 0:00 UTC on December 3 to 24:00 UTC on December 5 to work on the challenge.
Traditional methods of mental health assessment, such as clinical interviews and self-report questionnaires, have well-known limitations. Patients are asked to recall how often and how intensely they experience their symptoms, but that recollection is subjective and, depending on how the questions are constructed, not fully comprehensive. These measures also capture only a snapshot of an individual’s mental health, drawn from a limited set of data collected at a single moment in time. Your challenge is to overcome these limitations and design tools or solutions that use generative AI to revolutionize mental health measurement.
Mental health care and research have long relied on clinical interviews and self-report questionnaires (e.g., the PHQ-9 for depression, the GAD-7 for anxiety) to understand the nature of people’s problems, make appropriate diagnoses, and monitor change over time. However, these methods often provide a limited or inaccurate point-in-time snapshot of an individual's mental health status, which can lead to misdiagnosis and ineffective treatment. Because mental health is a dynamic, fluctuating experience, there is a growing need for continuous, real-time monitoring that provides deeper insight into an individual's mental health journey.
Advances in AI, particularly large language models (LLMs), offer new opportunities to analyze passively collected natural language data, such as texts, emails, or social media interactions, to assess mental health problems. These language-based data streams are rich in emotional, behavioral, and cognitive signals that may be valuable for understanding the mental states of individuals over time. Moreover, combining natural language data with other data streams, such as physiological data from wearable sensors or behavioral data from smartphone usage, has the potential to create even more robust and accurate mental health measurement tools. And these are just a starting point: many other data sources may contain useful information about a person’s mental state.
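As a purely illustrative sketch of what “combining data streams” could mean in practice (the field names, scales, and equal weighting below are assumptions for demonstration, not a validated method), the following Python snippet aligns a language-derived daily score with wearable heart-rate-variability readings and turns them into a single composite time series:

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean, pstdev

@dataclass
class DailyObservation:
    day: date
    language_score: float  # hypothetical LLM-rated negative-affect score from journal text (0..1)
    hrv_ms: float          # heart rate variability from a wearable, in milliseconds

def zscores(values: list[float]) -> list[float]:
    """Standardize a series so signals on different scales can be compared."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def composite_index(observations: list[DailyObservation]) -> list[tuple[date, float]]:
    """Combine language and physiology into one daily index.

    Higher values correspond to more negative affect and lower HRV; the equal
    weights are arbitrary placeholders, not clinically derived.
    """
    lang_z = zscores([o.language_score for o in observations])
    hrv_z = zscores([o.hrv_ms for o in observations])
    # HRV is inverted because lower HRV is often associated with higher stress.
    return [(o.day, 0.5 * lz - 0.5 * hz) for o, lz, hz in zip(observations, lang_z, hrv_z)]

if __name__ == "__main__":
    obs = [
        DailyObservation(date(2024, 12, 1), 0.30, 62.0),
        DailyObservation(date(2024, 12, 2), 0.55, 48.0),
        DailyObservation(date(2024, 12, 3), 0.70, 41.0),
    ]
    for day, score in composite_index(obs):
        print(day, round(score, 2))
```

A real solution would need clinically grounded weighting, validation against established measures, and explicit consent for every data stream, rather than the placeholder arithmetic shown here.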
The key word for this challenge is measurement. This challenge asks participants to explore the role that LLMs, language, and other data sources could play in better measuring the status and progression of mental health problems over time and/or in providing a more nuanced understanding of how people experience their symptoms. Projects should deliver solutions or tools that help health care professionals or people with mental health conditions see mental health as a dynamic, fluctuating state, or that provide more detailed and nuanced information about a given diagnosis, patient, or symptom.
Answers to this challenge prompt might range from research tools to professional solutions or somewhere in between.
Projects should be informed by the perspectives of those with lived experience of relevant mental health problems. Teams should think about where they would need to get input from end users with lived experience to shape their project and test their assumptions about what safe, effective and ethical tools would look like.
You may choose to consider the following areas (or any other areas or points you consider important):
- Data Types and Sources: Identify the types of natural language data (e.g., social media posts, text messages, emails, diary entries) that could be analyzed to assess mental health conditions. What specific linguistic markers or patterns (e.g., sentiment shifts, use of particular words or phrases) can LLMs detect in these data that correlate with mental health symptoms such as anxiety, depression, or stress? Consider how data sources themselves could create bias or risk; are there opportunities to reduce that?
- Combining Multi-Modal Data Streams: Explore how language data could be combined with other types of data, such as passively collected sensor data (e.g., heart rate variability from wearables, activity levels from fitness trackers) or behavioral data (e.g., sleep patterns, smartphone usage). How can multi-modal data integration improve the accuracy and robustness of mental health assessments compared to using language data alone?
- Ethical and Privacy Considerations: Address the ethical challenges of using passively collected data. How can your solution ensure data privacy, consent, and security while providing meaningful insights? How will you prevent the model from reaching inaccurate or harmful conclusions, especially when operating autonomously?
- Evaluating Progress Over Time: Propose how your solution could be used to track changes in mental health conditions over time. How can LLMs detect meaningful trends or patterns in an individual’s language and behaviors that signal improvement, deterioration, or a need for intervention? (A toy sketch of marker scoring and trend flagging appears after this list.)
- Health Care and Real-World Applicability: Consider how this solution could be implemented in real-world settings. How can it assist health care professionals in identifying and monitoring patients at risk of developing mental health problems? Alternatively, how could it be used by individuals as a self-management tool for mental health?
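To make marker detection and progress tracking concrete, here is a minimal sketch in Python. A tiny word-list count stands in for the kind of per-entry marker score an LLM might produce from a journal entry or message, and a simple least-squares slope over weekly scores flags possible deterioration. The word lists, scales, and threshold are invented for illustration only; they are not validated instruments or clinical cutoffs.

```python
from statistics import mean

# Crude stand-in for an LLM-based marker detector: a real project would have an
# LLM (or clinician-validated model) rate each entry; these word lists are illustrative only.
NEGATIVE_AFFECT_WORDS = {"hopeless", "worthless", "exhausted", "anxious", "alone"}
ABSOLUTIST_WORDS = {"always", "never", "nothing", "completely", "totally"}

def marker_score(text: str) -> float:
    """Return a rough 0..1 score: the share of words that hit either word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(w in NEGATIVE_AFFECT_WORDS or w in ABSOLUTIST_WORDS for w in words)
    return hits / len(words)

def linear_slope(scores: list[float]) -> float:
    """Least-squares slope of score vs. week index; positive means scores are rising."""
    xs = list(range(len(scores)))
    x_bar, y_bar = mean(xs), mean(scores)
    den = sum((x - x_bar) ** 2 for x in xs)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    return num / den if den else 0.0

def flag_deterioration(weekly_scores: list[float], threshold: float = 0.05) -> bool:
    """Flag a sustained upward trend (the threshold is an arbitrary placeholder)."""
    return len(weekly_scores) >= 4 and linear_slope(weekly_scores) > threshold

# Hypothetical weekly journal entries from one person.
entries = [
    "Busy week but mostly fine, saw friends on Saturday.",
    "Feeling a bit anxious about work, always behind.",
    "Exhausted and anxious, nothing seems to help.",
    "Completely exhausted, feel worthless and alone most days.",
]
weekly_scores = [marker_score(e) for e in entries]
print([round(s, 2) for s in weekly_scores])
print("deterioration flag:", flag_deterioration(weekly_scores))
```

In a real project the per-entry scoring would come from a carefully prompted and evaluated LLM (or a validated model) rather than a word list, the trend analysis would need to handle noise and missing data, and any flag should prompt human review rather than serve as an automated conclusion.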