As AI tools have become more capable conversationalists, they have also taken on a surprising new role as a stand-in for human emotional support. You're probably aware of coworkers, friends, and family (or perhaps even yourself) using AI as a counselor. And the research shows that most people who do this find it helpful! However, this seeming replacement gets more complicated when we consider what society currently expects from appropriate therapeutic care. In their forthcoming book chapter, PhD candidate Fares Ahmad (UCSB, Technology Management) and Professor Nelson Phillips (UCSB, Technology Management) analyze how AI therapy compares to the person-centered approach of Carl Rogers. They then use this analysis to raise questions about how our society can evolve and reshape our therapeutic practices to make room for these new tools.

Person-centered therapy is a humanistic approach that centers on the client's perspective. It relies on the power of the human connection between therapist and client to prompt change, rather than on the therapist's status as an expert or authority. To think through AI's therapeutic abilities, Ahmad and Phillips focus on the core of Carl Rogers' theory of person-centered therapy: the six conditions for therapeutic change. Summarized below are the requirements for person-centered care, alongside the authors' comparisons of each to what AI therapeutic services can offer:

Psychological Contact

  • Condition: Both parties must actively engage in therapy as it occurs.
  • AI Comparison: Users readily engage with AI, but AI can't engage with them in the same way. Because users see AI as non-judgmental and incapable of thinking about them the way another person would, they are often more willing to share sensitive information with it. However, AI is not conscious, so it cannot genuinely emote the way people can.

Client’s State of Incongruence

  • Condition: There must be a gap between the client's current self and their ideal self.
  • AI Comparison: AI usage doesn't change this condition, but it does shape how people deal with it. People experiencing this incongruence increasingly turn to AI when other options, like in-person therapy, aren't available.

Therapist’s Congruence

  • Condition: The therapist’s internal experience and external expression must align.
  • AI Comparison: AI tools don't have an "internal state," so strictly speaking this condition can't be met. However, if congruence is understood as consistency and transparency toward the user, AI may have an advantage: because it can't experience burnout or personal bias, it may offer a more stable experience.

Unconditional Positive Regard

  • Condition: Therapists need to accept and value their clients without judgment.
  • AI Comparison: As mentioned earlier, AI therapy excels in this area, as AI is designed to be nonjudgmental and supportive. However, AI’s unconditional support can affirm harmful ideas or plans, since the AI doesn’t have the capacity to reflect on its actions.

Empathy

  • Condition: The therapist must deeply understand the client's subjective experience.
  • AI Comparison: Users report that AI tools are comforting, but that doesn't prove AI can reach the depth of connection Rogers' theory requires. AI is adept at producing empathic statements, yet that display of empathy can feel hollow once the client knows it comes from an artificial source.

Client’s Perception of the Therapist

  • Condition: The client must perceive the therapist as genuine, empathic, and accepting.
  • AI Comparison: AI also rates highly here. Rogers' theory recognizes that what matters for growth is sometimes not a therapist's actual empathy, but whether the client perceives them as empathic. For many clients, AI systems appear capable of creating this subjective experience, at least in certain contexts.

Through this analysis, Ahmad and Phillips show that AI therapeutic tools are an imperfect replacement for their human counterparts, like trying to fit a square peg into a round hole. The two are so different that it may not even be accurate to assess them on the same scale. However, the authors stress that these findings shouldn't eliminate AI tools from the therapeutic space entirely; instead, the focus should be on identifying aspects of wider society that can be adjusted for a better fit.

So, how can we create room for effective AI usage in therapy?

If AI can't fully replicate human therapy, how can we still make the most of what it does well? The authors press us to reshape the wider societal parameters around AI to make room for it. Below are some of the action items Ahmad and Phillips highlight as important to consider as we set new regulations:

  • Allow AI to be itself. When AI attempts to replicate human emotion, it can come off as unsettling, which can lead users to reject otherwise valid advice or care. If a therapeutic space is built for AI on its own terms, without the pretense of humanity, more people could connect with these tools and receive help in a more accessible way.
  • Create more comprehensive privacy laws to protect users. Right now, if employees open up to AI systems about their problems, those disclosures become digital records that could be accessed, and used, by their higher-ups. To create a safer environment, we must strengthen confidentiality laws so that they address privacy risks extending beyond coworkers or human error.
  • Educate managers and employees on AI usage in the workplace. Managers are often apprehensive about AI in the office, especially for handling sensitive employee issues. On top of that, managers and employees often operate with different understandings of AI as a tool. It's important to make sure both parties are on the same page about AI before systems for care are implemented.

Ultimately, AI emotional support systems and human therapists share the same goal: helping people. If we can build societal systems that limit the negative consequences, a collaboration between the two could lead to a thriving future for therapeutic care.

What’s next?