Revolutionizing mental health: AI assistance in psychotherapy

Learn how an AI assistant built by Chaos Gears helped Provocare automate repetitive tasks and focus on psychotherapy instead.

The Challenge

In an era where mental health is a paramount concern, access to psychological services shouldn't be a luxury. Leveraging breakthroughs in generative AI, the Polish Provocare Foundation, with the expertise of Chaos Gears, is on a mission to democratize access to mental health support. By integrating the vast knowledge of expert psychologists with cutting-edge technology, therapists can use an AI chatbot to make mental health services more accessible to all. This advanced tool helps them swiftly obtain information about Provocative Therapy, bridging the gap between technology and mental health care.

The mission of the Provocare Foundation is to enhance people's lives, fostering harmony within themselves and with others. Rooted in the principles of Provocative Therapy pioneered by Frank Farrelly, the Foundation's core values guide its efforts.

The Foundation's main goal is to deliver public benefit by supporting holistic societal development, promoting well-being, and advancing mental health through proactive approaches and methodologies.

It's no surprise that the Foundation chose to leverage the potential of generative AI in its innovative approach to therapeutic sessions. However, general-purpose LLMs, such as Claude 3 or GPT-4, are not sufficiently trained in this area and confuse Provocative Therapy with other psychological methods. At the same time, there is not enough data available to fine-tune an LLM for Provocative Therapy.

To address this, the Provocare Foundation, with the help of the Data & AI Team at Chaos Gears, built a custom assistant grounded in a relevant knowledge base, effectively reducing the risk of hallucinations.

The Solution

To overcome these challenges and create a valuable tool for therapists, Chaos Gears proposed a Retrieval-Augmented Generation (RAG) approach. This involved building a chatbot assistant atop a set of curated data sources, ensuring accuracy and relevance.

Chaos Gears designed a solution architecture with a knowledge base in Amazon OpenSearch Service. Data is fed to OpenSearch using AWS Lambda, Amazon S3, and Amazon Textract, supported by Amazon Bedrock (Titan Embeddings v2) and LangChain. The UI was custom-built in React and deployed on Amazon S3 as a static website, with Amazon Cognito handling user authentication and securing sensitive data by limiting access to the absolute minimum. Data retrieval is performed using AWS Lambda with LangChain and Amazon Bedrock (Claude 3). Additionally, Amazon SageMaker Inference hosts an open-source reranking model.
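
To make the ingestion path concrete, here is a minimal sketch of what such a pipeline could look like in Python, assuming a Lambda function triggered by S3 uploads. The index name, environment variable, and single-document flow (no chunking) are illustrative assumptions, not details from the project.

```python
# Hedged sketch of the ingestion Lambda: extract text from a newly uploaded
# S3 document with Textract, embed it with Titan Text Embeddings v2 on
# Amazon Bedrock, and index it in OpenSearch for k-NN search.
# Authentication for the OpenSearch client is omitted for brevity.
import json
import os

import boto3
from opensearchpy import OpenSearch  # pip install opensearch-py

textract = boto3.client("textract")
bedrock = boto3.client("bedrock-runtime")
opensearch = OpenSearch(hosts=[os.environ["OPENSEARCH_ENDPOINT"]])


def embed(text: str) -> list[float]:
    """Call Titan Text Embeddings v2 through the Bedrock runtime API."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]


def handler(event, context):
    # Triggered by an S3 upload event; the synchronous Textract API used
    # here is suitable for single-page documents.
    record = event["Records"][0]["s3"]
    result = textract.detect_document_text(
        Document={"S3Object": {"Bucket": record["bucket"]["name"],
                               "Name": record["object"]["key"]}}
    )
    text = " ".join(
        block["Text"] for block in result["Blocks"]
        if block["BlockType"] == "LINE"
    )

    # Store the text together with its embedding; "provocative-therapy-kb"
    # is a hypothetical index name.
    opensearch.index(
        index="provocative-therapy-kb",
        body={"content": text, "embedding": embed(text)},
    )
```

A production pipeline would additionally chunk long documents before embedding; that step is omitted here for brevity.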

Using exclusively serverless components allowed the Chaos Gears team to focus on features rather than fighting infrastructure. They also employed advanced RAG techniques such as hybrid search, reranking, and small-to-big retrieval, and built testing pipelines based on ragas that measured the accuracy and usefulness of each applied technique, as sketched below.
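
The sketch below shows how the hybrid search and reranking steps of such a retrieval flow could fit together. The index and endpoint names are hypothetical, the reranker is assumed to return one relevance score per passage, and small-to-big retrieval (matching small chunks but passing their larger parent passages to the model) is left out for brevity.

```python
# Hedged sketch of the retrieval flow: hybrid (BM25 + k-NN) search in
# OpenSearch, reranking via a model hosted on SageMaker Inference, and a
# grounded answer from Claude 3 on Amazon Bedrock.
import json
import os

import boto3
from opensearchpy import OpenSearch

opensearch = OpenSearch(hosts=[os.environ["OPENSEARCH_ENDPOINT"]])
sagemaker = boto3.client("sagemaker-runtime")
bedrock = boto3.client("bedrock-runtime")


def hybrid_search(query: str, query_vector: list[float], k: int = 20) -> list[dict]:
    """Retrieve candidates that score well lexically OR semantically."""
    body = {
        "size": k,
        "query": {
            "bool": {
                "should": [
                    {"match": {"content": query}},  # lexical (BM25) branch
                    {"knn": {"embedding": {"vector": query_vector, "k": k}}},  # vector branch
                ]
            }
        },
    }
    hits = opensearch.search(index="provocative-therapy-kb", body=body)["hits"]["hits"]
    return [hit["_source"] for hit in hits]


def rerank(query: str, candidates: list[dict], top_n: int = 4) -> list[dict]:
    """Score each (query, passage) pair with the reranking endpoint, keep the best."""
    response = sagemaker.invoke_endpoint(
        EndpointName="reranker-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"query": query,
                         "texts": [c["content"] for c in candidates]}),
    )
    scores = json.loads(response["Body"].read())  # assumed: one score per passage
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [candidate for _, candidate in ranked[:top_n]]


def answer(query: str, passages: list[dict]) -> str:
    """Ask Claude 3 to answer strictly from the reranked passages."""
    context = "\n\n".join(p["content"] for p in passages)
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}",
            }],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```

Running the candidates through a cross-encoder reranker after a deliberately broad hybrid search is a common way to trade a little latency for noticeably better grounding.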

Moreover, using serverless components eases the burden on the customer's IT department by minimizing maintenance tasks. It also cuts operational costs: traditional models often require provisioning for peak capacity, leading to overpayment for unused resources, whereas serverless functions scale automatically with the workload, so you pay only for the resources you actually consume.

When implementing GenAI, especially in healthcare, ensuring the privacy and security of sensitive information is paramount. Is patient data secure? Do models learn from our data? And crucially, is the chosen solution compliant with applicable regulations? These are the questions that must be addressed to safeguard trust and integrity.

With medical data security as our top priority, we chose Amazon Bedrock without hesitation. It safeguards confidential information through encryption and employs a robust access control layer to restrict access to privileged data. Company data remains under client control, shielded from unauthorized access. Crucially, Amazon Bedrock does not use your data to improve the base models and does not share it with any model providers.

For strictly regulated entities, including the medical sector, it is pivotal that Amazon Bedrock meets common compliance standards. Amazon Bedrock is in scope for ISO, SOC, CSA STAR Level 2, is HIPAA eligible, and customers can use Amazon Bedrock in compliance with GDPR.

The Outcome

The AI assistant developed by Chaos Gears supports therapists using the Provocative Therapy method by providing a reliable knowledge base, drastically reducing the time needed to find relevant information, which is particularly important during therapy sessions. By automating tedious, repetitive tasks, therapists can dedicate more time to their patients—an essential improvement given the persistent shortage of therapists in Poland amidst rising demand for therapeutic support.

In a sensitive area like mental health, it's crucial that therapists trust our solution. That's why we're proud to report that our AI assistant provides accurate sources for over 92% of questions, based on various metrics run on the test dataset.
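
As an illustration of how such measurements can be assembled, here is a hedged sketch using the open-source ragas library. The API follows ragas' documented quickstart and may differ between versions, and the sample rows are invented, not taken from the real test dataset.

```python
# Hedged sketch of a ragas evaluation run: each row pairs a question with
# the assistant's answer, the retrieved contexts, and a reference answer.
# ragas uses an LLM as a judge under the hood; its configuration is omitted.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

eval_data = Dataset.from_dict({
    "question": ["What role does humor play in Provocative Therapy?"],
    "answer": ["Humor is used deliberately to provoke the client ..."],
    "contexts": [["Farrelly emphasized humor as a core therapeutic tool ..."]],
    "ground_truth": ["Humor provokes clients into defending their own worth ..."],
})

# Averaging these per-metric scores over a held-out test set is one way to
# compare retrieval and reranking variants against each other.
result = evaluate(eval_data, metrics=[faithfulness, answer_relevancy, context_precision])
print(result)
```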

Given the success of the production pilot, we plan to roll out the AI assistant to more countries and teams. Provocative Therapy is widely used around the world, so our chatbot can support therapists and their patients globally.

In the medical field, there is growing interest in leveraging generative AI not only for psychotherapy but also for diagnostics. However, applying GenAI in healthcare demands careful consideration. Errors such as misdiagnoses or inappropriate treatment recommendations can have severe consequences, jeopardizing patient safety.

Using low-quality data in GenAI systems can result in hallucinations: instances where the AI produces inaccurate or misleading outputs. Incorporating such data into patient diagnoses or treatment plans can lead to serious repercussions, including incorrect treatment protocols or misdiagnoses.

Thus, guaranteeing GenAI precision demands robust data validation protocols to minimize inaccuracies. Moreover, meticulous monitoring of AI performance is critical to swiftly address any deviations from expected outcomes.

The challenge lies not just in deploying GenAI but in doing so responsibly. As these systems assume increasingly critical roles, from disease diagnosis to patient data management, the stakes are unquestionably high. Hence, our task is to balance the allure of automation with vigilant oversight, ensuring innovations enhance rather than compromise the quality of patient care.
