As I mentioned in my last post, I spent Mother’s Day participating in a workshop focused on technology and mental health. Specifically, innovation in the user experience (UX) with technology as it applies to identifying, intervening in, and preventing mental health problems.
Since attending the workshop, one of the take-home experiences I’ve been exploring is the Koko app, which crowdsources the classic counseling technique of reframing a client’s statement. Reframes are a therapeutic, conversational technique: the counselor reflects what a client has said, not by merely “parroting” the original statement, but in a way that helps the client see their situation from a different perspective, often one that is more “open” to change.
For example, a client might say, “I’m completely burned out by my current project at work, which is a massive failure and something that I hate doing.” Therapists are trained to pick up on “absolute” or “extreme” terms, such as completely, massive, and hate. While possibly accurate from a given perspective, such terms are also limiting and oppressive; statements built on them don’t afford many options. A reframe might be, “I can tell you’ve been working really hard and are struggling to make things come together. That’s limiting the satisfaction you get out of your work.” This reframe validates the client’s effort and emotional experience and highlights an area where she is not getting her needs met. A reframe “works” when the client identifies with it and experiences an emotional and cognitive shift. If not, the conversation must continue.
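To make the idea a bit more concrete, a therapist’s ear for absolute language can be loosely approximated in code. This is only an illustrative sketch: the term list and the matching logic are toy examples I made up, not a clinical instrument or anything Koko actually uses.

```python
# Illustrative sketch: flagging "absolute" or "extreme" terms that a
# therapist might pick up on as candidates for reframing.
# ABSOLUTE_TERMS is a made-up example list, not a clinical vocabulary.
ABSOLUTE_TERMS = {"completely", "massive", "hate", "never", "always", "total"}

def flag_absolute_terms(statement: str) -> list:
    """Return the absolute/extreme terms found in a client statement."""
    # Crude tokenization: lowercase, strip basic punctuation, split on spaces.
    words = statement.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if w in ABSOLUTE_TERMS]

client_statement = ("I'm completely burned out by my current project at work "
                    "which is a massive failure and something that I hate doing")
print(flag_absolute_terms(client_statement))  # prints ['completely', 'massive', 'hate']
```

A real system would need far more than keyword matching, of course, but the flagged words are exactly the ones a reframe would aim to soften.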
A tiny fraction of people on earth have access to trained counselors, and even those individuals rarely use them. There are many barriers to engaging in counseling, and until the advent of recent technological advances, we had pretty limited capacity to make services more available. We could basically train more people and attempt to reduce transportation and time barriers. But over several decades, we haven’t gotten very far with that approach. The internet offered a moderate advantage by reducing the barriers of time and travel, but initial applications still relied on expert counselors to engage with clients. “Mechanized” interventions began to emerge, but these were along the lines of self-help books: there was no dynamic tailoring of therapeutic responses to client needs.
Enter crowdsourcing and artificial intelligence. Crowdsourcing is essentially a disruptive use of technology that harnesses the wisdom of the collective rather than relying on individual experts. Perhaps the best known, and one of the first such applications, is Wikipedia. When Wikipedia first started getting attention, most people could not believe that an “open-source” encyclopedia could produce accurate, quality information. Today we know better, and the encyclopedia as I knew it growing up has gone the way of the land-line telephone.
But Wikipedia does not dispense with expert individuals. It is a curated repository of information, and there have been several generations of innovation in how Wikipedia manages content in order to ensure quality and attempt to avoid major controversy.
As Rob Morris explained at the workshop, Koko started out using something similar to the Wikipedia model, insofar as users of Koko could post their troubling thoughts, and the Koko community (anyone interested in signing up) would respond to posts with their attempt at reframing the original post in a way that validated the poster and helped them see their situation from a different perspective. As I mentioned in my previous post, the model was also very similar to the StackOverflow platform, in which individuals post problems they are having with programming, statistics, or a host of other technology-related issues, and community members attempt to help them out. Like Wikipedia, the content is monitored by individuals who have earned the privilege of content management through their judicious participation in the community.
The challenge, of course, is how to build up a community of experts who can curate the content. To start with, Rob and one of his partners were curating responses to user posts. They had to read every response and determine whether it was appropriate. Mostly, I believe they were trying to weed out the sometimes cynical, critical, or judgmental responses people would propose. This very quickly became a major bottleneck, which in turn became the inspiration for harnessing the power of artificial intelligence to scale the platform to support thousands (or hundreds of thousands) of posts. It’s my understanding that they are currently using IBM’s Watson to do two important tasks: 1) curate responses and filter out harmful content, and 2) abstract the user posts into pithy summaries. The first appears to be working well; I haven’t seen a response come through that I felt was harmful or negative. Granted, my observation count is small, and I imagine there aren’t too many of these. The second goal, the abstraction, definitely has room for improvement. Many summaries are spot on, but the algorithm currently reads too much into statements and produces pithy statements that overstep their bounds. However, I don’t see this as a major problem so long as the community focuses on the original post and does not rely on the pithy abstraction.
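The shape of that two-stage workflow, filter first, then abstract, can be sketched in a few lines. To be clear, everything below is a stand-in: the word list, the naive truncation, and the function names are my own illustrations, not IBM Watson’s API or Koko’s actual implementation.

```python
# Toy sketch of a two-stage moderation/abstraction workflow:
# stage 1 filters out harmful responses, stage 2 condenses a post.
# The word list and truncation rule are illustrative stand-ins only.
HARMFUL_WORDS = {"stupid", "worthless", "pathetic"}  # made-up example list

def looks_harmful(response: str) -> bool:
    """Stage 1: flag cynical, critical, or judgmental responses."""
    return any(word in response.lower() for word in HARMFUL_WORDS)

def moderate(responses: list) -> list:
    """Keep only responses that pass the harmfulness filter."""
    return [r for r in responses if not looks_harmful(r)]

def abstract_post(post: str, max_words: int = 8) -> str:
    """Stage 2: reduce a post to a short summary (here, naive truncation)."""
    words = post.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

responses = [
    "You sound exhausted; that effort deserves more reward.",
    "Stop being so pathetic about it.",
]
print(moderate(responses))  # the judgmental second response is filtered out
print(abstract_post("I'm completely burned out by my current project at work"))
```

A real classifier obviously has to do much more than keyword matching, and real summarization is where, as noted above, the current system sometimes oversteps, reading more into a post than the poster wrote.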
But, don’t take my word for it. Download the app and give it a try!