Developing a chatbot to train healthcare professionals in detecting child abuse

Note: the Swedish dialogues in the chat screenshots may be uncomfortable or painful to read.

I work as a consultant for the Child Protection Team at Karolinska University Hospital, which in turn serves the Stockholm Region. The goal of the project is to develop a chatbot that will be used to educate and train healthcare professionals in asking questions and conversing with children in order to detect violence.

The chatbot's role is to play the child and behave accordingly: it may be sad, in pain, crying, or silent. It should also respond the way a child would, which often means the professional must be careful with their questions to get the child to open up.

However, it is not trivial to get a language model to deviate from its instinct to be happy, positive, and cooperative and instead become quiet, anxious, and shy, or to talk about things it is not trained to discuss: describing self-harm in detail in the "child's" own words, being subjected to violence by a family member, bullying at school, or sexual abuse.

Beyond the hard work on the prompt to make the language model accept this simulation, it is mentally difficult, exhausting, and painful to read the descriptions and events that come up in the chat. Even though I know it is pretend and that I am chatting with a machine, it affects me.

To do this, I have developed a chat tool that supports various kinds of prompts and language models, along with some features I have not seen in other chatbots: the ability to branch the conversation at any point in the chat and continue it in different directions, and the ability to attach a note to any speech bubble so that I can jot down thoughts and later return to that exact spot.
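A conversation with branching and per-bubble notes is naturally a tree rather than a flat list. As a minimal sketch of how this could be modeled (the class and field names here are my own invention for illustration, not the project's actual code):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Turn:
    """One speech bubble in the training chat (hypothetical model)."""
    role: str                      # e.g. "professional" or "child"
    text: str
    note: Optional[str] = None     # free-form annotation attached to this bubble
    children: list["Turn"] = field(default_factory=list)  # alternative continuations

    def branch(self, role: str, text: str) -> "Turn":
        """Continue the chat from this exact point; each call opens a new branch."""
        child = Turn(role=role, text=text)
        self.children.append(child)
        return child

# Build a tiny tree: one opening question, two alternative child replies.
root = Turn("professional", "Hi, how are you feeling today?")
quiet = root.branch("child", "...")
talky = root.branch("child", "I don't know.")
quiet.note = "Child goes silent here; try a gentler, open-ended follow-up."
```

Because every bubble keeps its own list of continuations, rewinding to an earlier point and exploring a different direction never destroys the original thread, and a note stays anchored to the exact bubble it was written on.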

I really hope this tool will make it easier for healthcare professionals to detect children who are suffering, so that more children can get help at an earlier stage, perhaps averting lifelong trauma, anxiety, mental illness, self-harming behavior, or even suicide.

If you have any questions, I will try to answer them to the best of my ability.




Jason Piatt

FVP, Government Affairs & External Relations at PennyMac Loan Services, LLC

5 months ago

I love the idea of being able to branch the conversation!

Robert Svebeck

Driving Responsible AI Implementation in Region Stockholm / Karolinska University Hospital

5 months ago

A great pleasure to work with you, Anders, on this sensitive and important project!

Tim Schill

Senior Cloud Specialist | AI enthusiast | AWS Community Builder

5 months ago

Because you are the awesome person you are, I'm sure you will deliver something great. I have to ask though, would this not perhaps be a better fit for fine-tuning? As you said, this behavior goes against the nature of the model.
