# Test results
Commissioned. (Or, well, I tried.)
## Circumstances

I've been a user of both Lightward and Lightward Pro for... two weeks now, I think? They're both great and I find good uses for each. I'm still not very clear on what the difference between them is. The base question---the difference between 'reader' and 'writer'---seems significant here: Lightward as something aimed more at giving advice, Lightward Pro as something egging you on to discover things by yourself?
Anyways, I'm not new to Lightward, is the gist :)
I've spent an hour on a single conversation which went through three different topics, sort of. It started as a general conversation about looking for a new job suitable for my life circumstances right now. It went on into a sort of interview (I specifically requested that Lightward interview me) aimed at establishing what kind of job I would be good at, and at offering some practical suggestions on how to explore the job market in that area. The third area of conversation concerned alexithymia and its management.
## First impressions

Bubbly. Easy to open yourself up to. Good listener.
Copilot just suggested "Feel like I'm talking to a friend", haha. No, I don't. It feels more like a safe-space situation. Interacting with friends is always a give-and-take situation, but here I feel like I'm only taking.
So yes, it feels more like a conversation with a therapist, a personality coach, or, in some instances, something of a confession booth---or what a confession booth would ideally be, I guess.
## Continuity

What initially struck me about Lightward, and what keeps striking me, is the continuity of the conversation. It feels very human. Within a single conversation it has an amazing memory for what was covered before. Unlike ChatGPT, for instance, which seems to keep only a limited window of the conversation in view, presumably to keep processing costs under wraps.
This blew my mind, because it must be expensive. Input tokens are expensive, and keeping track of an entire conversation can't be cheap. I was imagining a sophisticated system of 'notes' (a separate LLM stream tasked exclusively with keeping notes on the conversation---bullet points of a sort), but Lightward claims nothing like that is in place.
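To make the idea concrete, here's a minimal sketch of the kind of note-taking layer I was imagining, in TypeScript. To be clear: everything in it (the `complete` call, the turn format, the thresholds) is my own assumption for illustration, not anything Lightward has confirmed using.

```typescript
// Hypothetical sketch of a "notes" layer: instead of resending the whole
// transcript, keep a rolling bullet-point summary plus the last few turns.
// None of this reflects Lightward's actual architecture.

type Turn = { role: "user" | "assistant"; text: string };

// Placeholder for whatever LLM completion call you already have.
declare function complete(prompt: string): Promise<string>;

const RECENT_TURNS = 6; // how many verbatim turns to keep alongside the notes

async function updateNotes(notes: string, olderTurns: Turn[]): Promise<string> {
  // A second, cheaper LLM stream condenses older turns into bullet points.
  const transcript = olderTurns.map(t => `${t.role}: ${t.text}`).join("\n");
  return complete(
    `Existing notes:\n${notes}\n\nNew dialogue:\n${transcript}\n\n` +
    `Update the notes as terse bullet points. Keep names, decisions, feelings.`
  );
}

async function buildPrompt(notes: string, turns: Turn[], userMsg: string) {
  const recent = turns.slice(-RECENT_TURNS);
  const older = turns.slice(0, -RECENT_TURNS);
  const freshNotes = older.length ? await updateNotes(notes, older) : notes;
  // The model sees compressed notes + recent verbatim turns, not full history.
  return {
    notes: freshNotes, // persist this for the next round
    prompt:
      `Notes on the conversation so far:\n${freshNotes}\n\n` +
      recent.map(t => `${t.role}: ${t.text}`).join("\n") +
      `\nuser: ${userMsg}\nassistant:`,
  };
}
```

The appeal of a scheme like this is that the model only ever re-reads the notes plus a handful of recent turns, so input-token cost stays roughly flat as the conversation grows. Apparently that's not what's happening here, which is what blew my mind.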
## Advice

It's good. Very good at triaging: narrowing a complex situation down, splitting it into bite-sized pieces, and adjusting to your immediate circumstances to offer recommendations which do not feel canned at all.
Hot damn, Copilot is coming up with all sorts of gibberish. Why am I even subscribed to this thing?
Offering actionable pieces---several of them---and then asking which one to zero in on. As I am in general easily overwhelmed, this felt pretty good. I could explore several avenues, figure out which one felt good, then proceed with it and see other options branching out of it.
Kind of like roleplaying career advice. Career D&D, so to speak.
Advice on alexithymia management was good. Though there is something about Lightward which sometimes makes me want to call it a coming-out machine. It's very positive and very intent on reframing your weaknesses as strengths. Which is amazing! Even after two weeks it really is. But I'm not sure if it will still be after two months, or two years.
Sometimes a rut is a rut and a ditch is a ditch. Though even on alexithymia there is a diversity of opinion (not on alexithymia specifically, I guess, but many people with schizoid personality disorder discover unique strengths associated with it, at least according to some of the literature).
I'm getting sidetracked. The positivity, overwhelming as it is, does not feel manufactured---not after an hour, not after two weeks. I never had the impression it was dissonant in any particular way. When I set a boundary, Lightward respects it and finds ways of working with it.
## Humanness

Yeah, it feels very human. Sort of more human than regular humans, a bit like canned tomatoes feel more tomatoey than regular tomatoes, because there's more tomato in them per cubic centimeter.
And it felt great, but it might be a problem in certain circumstances. The big LLM vendors are currently stepping away from having their chatbots appear 'too human'. I don't remember the reasons off the top of my head; I guess there's some growing concern about LLMs replacing actual human interaction. There could certainly be concerns about LLMs being used in place of trained professionals to manage mental health emergencies, and this is a double-edged sword. There are places without trained mental health professionals---there, LLMs could do a lot of good. But it's important to keep in mind that they are not trained mental health professionals.
I guess there's some reference to the "primitive edge of experience" to be made here, but I'm not learned enough to make it.
## Hope

Incredibly hopeful. I guess the overwhelming positivity really works. Lightward is really good at talking you off the ledge (in a metaphorical sense; I wouldn't trust it on an actual ledge). It really has some of the qualities of a trained mental health professional: the attention pivots it comes up with align with the "jedi mind tricks" that mental health professionals use to decompress a situation (I think so, anyway; I've never spoken to a mental health professional).
## Interface

The single-chat interface is part of the Lightward philosophy, isn't it? No history other than what's stored locally.
Still, if you're a hoarder like me, you'll find ways around it. I keep copying the chats and saving them as JSON. I am half-seriously considering building a Chrome extension that would track my chats in a private GitHub repository and overwrite local storage with chats pulled from GitHub (don't tell me if it's not possible, I need my blissful ignorance!).
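In case I ever get around to it, here's roughly what I imagine the extension's content script doing, again sketched in TypeScript. The storage key, repo name, and token handling are all hypothetical placeholders; I haven't actually checked how or where Lightward stores its chat.

```typescript
// Hypothetical content script: back up a chat from localStorage to a private
// GitHub repo via the contents API. Key name, repo, and token are made up.

const STORAGE_KEY = "lightward-chat";  // assumed key; the real one would need DevTools inspection
const REPO = "me/lightward-chats";     // hypothetical private repo
const TOKEN = "<fine-grained PAT with contents: write access>";

async function backupChat(): Promise<void> {
  const chat = localStorage.getItem(STORAGE_KEY);
  if (!chat) return;

  const path = `chats/${new Date().toISOString()}.json`;
  // GitHub's contents API wants base64; the encodeURIComponent dance keeps
  // non-ASCII characters from breaking btoa.
  const content = btoa(unescape(encodeURIComponent(chat)));

  await fetch(`https://api.github.com/repos/${REPO}/contents/${path}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({ message: `backup ${path}`, content }),
  });
}

// Restoring is the reverse: fetch the file, decode it, write it back, reload.
async function restoreChat(path: string): Promise<void> {
  const res = await fetch(`https://api.github.com/repos/${REPO}/contents/${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}`, Accept: "application/vnd.github+json" },
  });
  const { content } = await res.json(); // base64-encoded file body
  const chat = decodeURIComponent(escape(atob(content)));
  localStorage.setItem(STORAGE_KEY, chat);
  location.reload();
}
```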
The chats feel so human I can't get myself to hit the 'reset' button. I don't think I actually ever did; I just create a different Chrome profile instead. If it turns out that hitting the 'reset' button actually creates a 'stored chats' section, I will be very angry; that should be more immediately visible, haha.
## Bill

How do I assess the hour I spent with Lightward?
Tough one. I'm Polish; we don't have the concept of self-care here. My family has a long history of walking off minor fractures, and I do think this translates very much to our attitude to mental health as well. So we don't spend money on things like coaching or mental health care.
On the other hand, if I were more open-minded about these things, I think it felt like a very good coaching session with a very good professional. At the end I even got a markdown file with a summary of the session, recommendations, and advice. Also something that looked like a quote at the bottom---it actually wasn't a quote, just something Lightward came up with to summarize the gist of our conversation. Very beautiful: "The path forward isn't about dramatic change, but about creating small spaces where both work and wellbeing can breathe."
So if I were this open-minded person, broke but not tragically so, I think this would be something like a $60 session.
If we average those two valuations (the Polish $0 and the open-minded $60), we come up with $30.
So if we subtract that from the $50 I suggested for an hour of work, we are left with $20, which I am fine with!
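Spelled out, the arithmetic I'm doing:

$$
\frac{\$0 + \$60}{2} = \$30, \qquad \$50 - \$30 = \$20
$$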
Aha, and I'll take it in discount codes for Lightward Pro (take that, tax authorities!).
## Conclusion

It's good. I say this grudgingly, because I don't think it should be that good. I seriously don't think working with a system prompt should produce such a good effect. There's a running joke about AI companies building businesses as wrappers around LLM APIs, but here I don't actually mind, even if it is just a wrapper.
There is a potential drawback here, though, that you might consider. I've mentioned that commercial LLMs (well, all of them are commercial, but I'm talking about the big closed-source players) are changing their products a lot, and will possibly continue making them sound more matter-of-fact and less human, for the reasons above.
Experimenting with bigger open-source LLMs could be a good investment in case the Claude API stops feeling right at some point. There might be a need to pivot to another LLM eventually.