The idea for this piece came to me when I read an article by Amanda Guinzburg about a task she asked ChatGPT to help her with – choosing which of several pieces she had written to send as a sample to a literary agent.  In the article (link below), the ‘conversation’ with ChatGPT is shown as a transcript.  Over the course of the chat, Guinzburg starts to question whether ChatGPT is actually reading her work; initially it says ‘yes, of course!’ but later has to admit it has not – it cannot read full articles from the links she sent (even though that is what it asked her for).  The powerful admission by ChatGPT is that it lied, and is ‘genuinely sorry’.  The author then calls ChatGPT out for claiming sincerity.  It doesn’t really have an answer for that.

AI tools are of course ‘trained’ on a vast range of published material freely available on the web (some of which is the subject of copyright disputes).  So could AI be trained on existing material in order to offer therapy to couples and individuals?  I am sceptical about this.

Psychotherapists are of course bound by absolute confidentiality.  Very little of what is said in a therapy session is written down – typically just a short note to help keep track of the work from session to session.  So aside from a very small number of heavily summarised client cases which form part of published papers, there is virtually no content on the web from actual therapy sessions. 

As a therapist, the most important part of my work comes from my previous clinical experience.  It seems to me that an AI therapist would start with virtually zero actual clinical experience.  And AI tools are not supposed to learn from material collected from users of the service, so there is no experience to be gained along the way either.

Anyone trying out AI tools to help them with a dilemma about their life and relationships will, however, be presented with a plausible response, probably well crafted grammatically.  At first glance it will sound as though the AI has listened, has demonstrated empathy, and is sincere.  But as the Guinzburg piece powerfully demonstrates, it’s fake.

Aside from the lies in the Guinzburg piece, what struck me about the responses from the bot was how ingratiating they were.  AI used for therapy takes the same stance – flattery, reinforcement and endorsement of the ‘client’s’ position, aligning with them and probably aligning against those the client sees as an opponent, rival or source of pain.  While part of being a good therapist is of course validating a client’s feelings, that is not the same as colluding with them, and collusion can be harmful, because it prevents development and change – which, after all, is what most people want to achieve from therapy.

I am not pretending here to present an objective analysis of the role of AI in therapy; after all, therapy is how I make my living.  However, in the interests of balance, I am happy to concede that there are certain types of therapy for certain specific conditions where AI may be helpful.  Cognitive Behavioural Therapy, for example, comprises structured techniques for symptomatic relief of specific anxieties such as phobias or compulsive behaviours.  AI can ‘learn’ from published work on these methods and can make them easily accessible to clients.  However, I remain sceptical about the value of AI in the sort of work that I do: an open exploration of a particular couple’s or individual’s relationship challenges, of the origin story that might lie behind them, and of how people can find a path to effective change or a sense of peace with their life challenges.

Please read the piece I refer to here: https://amandaguinzburg.substack.com/