[-]x0x70(0|0)

I think the research was worth doing, but now that it's done the researchers need to justify why it matters. The real reason it was important is that it was a box we needed to check: we should know what an LLM says if you run a therapy session with it, if only because we might as well know.

One thing that isn't simulated here, and that breaks the analogy of actually giving a model therapy, is that these models have no plasticity during the simulated sessions. What is the point of stretching therapy over four weeks in real life? In theory, a person has the opportunity to apply the advice and the different thought patterns, and thus change some of the connections in their brain.

The article didn't mention re-training the model based on the interaction at all. And even if it had, none of the available training methods really match therapy. You could use supervised learning, but then the model is just memorizing the session and will regurgitate it. You could use reinforcement learning, but that's more Pavlovian and requires someone to decide what a good answer is. In therapy there isn't a third person there to beat you with a stick if you give an answer they don't like. Then again, maybe that would work.
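To make the memorization point concrete, here's a toy sketch (my own illustration, not anything from the article): if "training on the session" means fitting a model to a single transcript, the result just replays that transcript. The "model" below is a bare next-word lookup table, and the transcript is a made-up two-line exchange; greedy sampling from it reproduces the session verbatim rather than generalizing.

```python
from collections import defaultdict

# A single (invented) therapy-session transcript, pre-tokenized.
session = "I feel anxious about work . That sounds hard , tell me more .".split()

# "Supervised training" on one transcript: record each word's continuations.
next_word = defaultdict(list)
for word, nxt in zip(session, session[1:]):
    next_word[word].append(nxt)

def generate(start, n):
    """Greedily follow the most frequent continuation at each step."""
    out = [start]
    for _ in range(n):
        options = next_word.get(out[-1])
        if not options:
            break
        out.append(max(set(options), key=options.count))
    return " ".join(out)

# With only one training example, generation is pure regurgitation:
print(generate("I", 12))
```

A real fine-tune on one conversation isn't this degenerate, but the failure mode is the same in kind: minimizing loss on a lone session pushes the model toward reproducing it, which is not what a therapy patient is supposed to do with advice.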

Also, it sucks that the article is login-blocked.