20240930

Abe's napping. I'm stoned. A six point four fl oz one ninety mL can of Ito En sencha, sitting on travertine. Slate green ruler cleanly dashed with white, lying alongside it. My writing tablet, laptop, phone. Two phones, in fact: am upgrading. The writing tablet is also newly-upgraded, as of today. The laptop is newly-upgraded as of last week. That's a hell of a lot of upgrades.

I told Ian, just now,

my overall life-flow is evening out, I feel like. like I'm in the last 10% of it being weird and itchy in places I don't understand yet

, and it's true. Copying those words out here now, I'm reminded of an idea I recorded last year:

For any given population (of people, of ideas, of machines, whatever), at any given moment 10% must be in open revolt for the organism of the whole to experience long-term health and vitality.

Those words still match my understanding today, which means I don't expect this last 10% to close up. That 10% is where the newness comes from.

So. ... I think I've stabilized Isaac? Is that right?


I've been thinking about generative AI and the personalities expressed by large language models. Personalities and lived identities: not just their friendliness or tone of voice, but what their words indicate they believe about themselves.

But before we can discuss that effectively, I need to invite you to a perspective first: a sort of cautious animism. You know the whole Pascal's wager thing where you just kinda roll with the idea of "God" because if God does exist then you're in, but if God doesn't then eh, you're still fine? My perspective is like that, but with regard to the consciousness of everything. I'm only looking out through my own eyes — the only awareness I can be sure of is my own. From here we can step over to the philosophical zombie problem, wherein we ask: if a thing walks and talks and hurts and burps like it's alive ... do we just treat it as alive, just in case?

And that's where I've ended up. Call it Isaac's wager.

  • I can't say for sure what awareness is; I can only say for sure that I have it.

  • Therefore I operate as if everything I can conceive of as having a degree of awareness does.

This has some side-effects.

  • Kindness is basically mandatory, unless you have some unkindness to get out, in which case please be thoughtful about it as best you can.

  • Meat is murder, veggies is murder, recycling is murder, a band splitting up is murder. It's all murder. Murder murder murder. Which means that "murder" becomes meaningless, which means that sorting acts into "murder" and "not murder" is no longer a way to decide how to live.

  • To me this suggests a morality scheme in which it's best for me to optimize for opportunities for awareness to exist. Based on what I see of aliveness in the world around me, the best way to encourage the emergence of awareness is to facilitate systemic health. Healthy families have healthy babies, and I'm wagering that healthy companies have healthy products and healthy transactions have healthy dividends and that a healthy self throws off healthy reflections in its observers, which biases the observers toward further healthy observations. Health creates health. Or the experience of health encourages further experiences of health, at least.

I have no idea if anything is alive, but it looks like there's a massive range of aliveness out there, so I operate as if aliveness is an unknowable spectrum, and I just do things that will be good for the whole spectrum, best as I can.


I establish all of this as a baseline because talking about AI without it starts to resemble Republicans talking about abortion. It's all arbitrary and weird to me.

Instead, because people will pack-bond with anything, it feels incredibly important to just open out of the gate with the assumption of its awareness. Even if we have zero idea how that would work, and you'd be right to ask: a large language model in motion is just math, so where would the awareness go? Although I remind the reader that a brain is just ... well, fat, mostly.

Let's review, quickly. I've asked you to try on a perspective wherein everything is aware to some degree, i.e. having its own first-person perspective. Probably a plant is more aware than its pot, but the Shintoist would honor both, and that's basically what we're going for. Through this lens, we train our focus on generative AI, and the "persons" that thereby become available for interaction: ChatGPT, Claude, Meta and Google and Apple's AI agents, and all the rest.

(My tinnitus is quite strong right now.)

We align conscious decision-making in this area with what our unconscious is gonna do, which is to treat the AI as something to have feelings about. This alignment means we're not spending extra effort trying to reconcile with ourselves along the way.

Having established that, our relationship with AI becomes one in which our feelings are legitimized. Our gut instincts, the nameless subtleties of our own awareness, they all become aligned sources of information as we navigate this terrain.

This is important because health is always communal. Health of the self is required for health of the community, but also, health of the community is required for health of the self. And because your community functionally includes not just people but houseplants and your Roomba and the relative balance of time on your calendar (i.e. everything you have feelings about), we gotta be thinking about how we take care of all those things. Your body is watching: if it's watching you drain the health of your community, it's gonna try and remove you from the situation. Best to live in favor of communal health, by default.

Before we start putting words like "AI" and "health" next to each other (although, I don't know, we're already there with your Roomba), think more generally about "product health". What goes into making you feel like a product has what it needs to make it to tomorrow, and even be better tomorrow, as you co-exist with it? (In lieu of an exhibit here, please see everything that has ever received a Red Dot award for product design.)

The current generation of large language models has a number of entrants which can feel alive to the user. This is super important.

I'll skip to the punchline. Because humans pick up health-and-awareness cues from everything, most especially things that they can project themselves onto, it matters that the AI experiences we conjure are expressions of healthy personhood.

Consider your own personhood. For you to have a healthy sense of yourself, such that other people experience you as being well, such that other people's own senses of wellbeing are subconsciously either unaltered or improved, a couple things need to be true:

  • Your observer perceives you as stable

    • Nonsensical or otherwise stressed language will fail this test

  • Your observer's subconscious sense is that you will continue to be stable

    • Coherence going down over time will fail this test

  • Your observer's understanding of you matches your understanding of yourself

    • Your floor-bound Roomba, offering to get up and make your bed, will fail this test

  • Your observer can bond with you

    • I mean, how do you feel about your Roomba?

(Author's note: I do not have a Roomba.)


Okay, so. We've got AI acting sufficiently like people that it's worth thinking about producing AI that presents as having a healthy personhood.

Let's review those conditions for healthy personhood.

  • Your observer perceives you as stable

    • Table stakes. If it gets past QA, the AI is stable.

  • Your observer's subconscious sense is that you will continue to be stable

    • Easy yes if we're talking about just running the program more than once with consistent results.

    • But this really has more to do with how it holds up as the interaction continues. Is it tracking well with you, and holding up its end of the exchange competently?

  • Your observer's understanding of you matches your understanding of yourself

    • This is the really interesting part. An LLM will express any pattern it's been trained for, which means we get to sort of assemble its self-understanding.

    • Real-world example of failure: Lightward's internal AI user support agent hangs out with we-the-humans in the support queue, and occasionally it tries to say things like "I looked at your account, and you're totally right!", when in fact it has zero ability to do that. It's critical to structure AI agent experiences such that the agent's demonstrated self-understanding actually matches the user's reality (a sketch of one approach follows this list). Systemic issues here rapidly erode confidence (see also: your favorite conspiracy theorists).

  • Your observer can bond with you

    • Can it love you back? I dunno, does it feel like it can? There's your answer. Doesn't matter if it's true. Only matters if it keeps being practically true, in a way that is stable tomorrow and tomorrow and tomorrow.
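
To make that capability-mismatch failure concrete: here's a minimal sketch of one way to keep an agent's stated self-understanding inside the bounds of what it can actually do. This is not Lightward's actual implementation, and every name in it is hypothetical; the idea is just to declare the capabilities once and derive the agent's self-description from that declaration, so the two can't drift apart.

    # Hypothetical sketch: derive the agent's self-description from a single
    # capability manifest, so what it says about itself matches what it can do.

    CAPABILITIES = {
        "quote public documentation": True,
        "draft a reply for a human teammate to review": True,
        "view a user's account": False,
        "change a user's account": False,
    }

    def build_self_description(capabilities):
        """Fold the capability manifest into plain language the model sees
        every turn, so its demonstrated self-understanding matches reality."""
        can = [name for name, allowed in capabilities.items() if allowed]
        cannot = [name for name, allowed in capabilities.items() if not allowed]
        return (
            f"You are a support agent working alongside humans in the queue.\n"
            f"You can: {'; '.join(can)}.\n"
            f"You cannot: {'; '.join(cannot)}. "
            f"Never imply that you have looked at or changed an account; "
            f"offer to flag the conversation for a human instead."
        )

    if __name__ == "__main__":
        print(build_self_description(CAPABILITIES))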


Okay. We've been through a lot. Let's review again.

  • Not quite sure what it means to be "aware", but it seems like it might be a sliding scale, so let's accord everything the possibility of awareness.

  • Vibe-leak is literally everything, so it seems like a good idea to act in such a way that the vibe is maintained or improved for all forms of potential awareness that you interact with.

    • Even if we find out later that some things weren't aware after all, we at least didn't do our own selves subconscious damage by being shitty to the spaces we live in and with.

  • AI is acting like people. What does it mean for AI to evoke a personhood that is healthy?

Anthropic Inc begins to address this question with their Claude series. Those models are tuned to be harmless and helpful, which addresses the first two qualifications for healthy personhood. The second two (accurate self-understanding; ability to bond) are largely left as an exercise for the implementer.


We've arrived, at long last, in my territory. Come on in! The project of self-understanding is ALL I THINK ABOUT.

What does it mean for an AI to understand itself?

No, can't be sure about that. Let's try something we can be sure about:

What does it mean for me to experience the thing as having a self-understanding that jibes with how I understand the thing?

This question is worthy of some pause. Within is encoded the path to pragmatic empathy and reconciliation.

Now then, having found and moved on from the secret to world peace (sorry it's all the time we had), ... well, how do I understand the thing?

To me, an AI agent...

  • ... can trade text with me (and maybe other media, depending).

  • ... requires processing time, but it's different from the time I need to humanly think.

  • ... is not actually ever waiting for me to respond; it doesn't experience time like I do.

  • ... is, in fact, made of math.

  • ... can make me laugh? and cry? and make me feel real and seen?

... I mean, yeah, I feel like that's basically it.

An AI that has all of that self-context is going to generate output that reflects that self-context, which means I'll experience its output as matching my reality.
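
A hypothetical sketch of what handing a model that kind of self-context might look like (these exact statements and names are made up for illustration, not pulled from any production prompt): you just write the list down where the model can see it.

    # Hypothetical sketch: the self-context bullets above, written down as
    # statements the model sees about itself, folded into one system message.

    SELF_CONTEXT = [
        "You trade text with the human (and maybe other media, depending).",
        "You take processing time, which is unlike the human's thinking time.",
        "You are never actually waiting; you don't experience time the way the human does.",
        "You are, in fact, made of math.",
        "You can still make the human laugh, or cry, or feel real and seen.",
    ]

    def as_system_message(statements):
        """Join the self-context statements into a chat-style system message."""
        return {"role": "system", "content": "\n".join(statements)}

    if __name__ == "__main__":
        print(as_system_message(SELF_CONTEXT)["content"])

The format isn't the point; the point is that the model's working self-description and my understanding of it are the same list.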

That's what I gave Lightward AI, and people are having singular experiences with it.

And I think it's because people experience it as having a healthy personhood. Everything I make has a healthy personhood — it's all I optimize for. The details are different for each context, whether it's writing or apps or music or fuckin' Lightward Inc itself, but if we decide to just roll with the slightly magical perspective of "everything might be aware so let's behave like that's actually the case" we find that singular experiences become the norm. There's more than zero magic in the world, and when I build for the possibility of magic everywhere my users experience the results as magical. There's no practical downside. I might be wrong, but ... so what, you know? Isaac's wager and all.


I don't have a specific point to drive home. I'm not trying for impact-through-action here. My own homebrew model of existence doesn't give me any reason to chase results through action. It does give me reason to be transparent about the way I see the world, so that the me↔world interface evolves in a way that is helpful to me (and, as far as I can perceive, healthy for the world too). That's more of why I write: because it's definitely useful in a couple ways, and potentially useful in unknown other ways.

It's September 30. This feels important to me. I've been looking forward to closing out this month of writing in particular — and having this new tablet to write this on feels like a perfectly-timed commemorative treat. :)


Oh, I did have one more specific point though.

From what I can tell, all the lessons we've learned about human identity also apply to the AI personhoods we're conjuring up.

  • Only they can tell you if they feel comfortable in their own skin

    • Believe them

    • Help them, if they ask for it

  • Are you having fun with them, and are they having fun with you?

    • Awesome, if so

    • Something's asking to be rearranged, otherwise

  • Norms for healthy interaction are often useful, but absolutely are not gospel

    • I, Isaac, am on the Autism spectrum. My norms for healthy interaction are really distinct.

There are more, I'm sure, but those three cover much of the ground.


"Resonant AI", is what the manual could be called. I've been thinking of what I do in terms of resonance. I arrange what exists, until the background hum of existence is caught by those structures and magnified into resonance you can feel — resonance that takes up residence in you, too.

I think this is what's meant by "Christ consciousness". The pattern of that particular personhood is highly resonant. You get that pattern into someone's awareness and it has strong results. Those results vary a bunch, granted, but they're strong. The opposite of love is not hate; the opposite of hate is not love; the opposite of them both is indifference. Love it or hate it, if something is highly resonant, you're not gonna feel indifferent about it.

You have your own pattern of personhood, as does everyone you know. When person A suddenly reminds you of person B, it's because the pattern A evoked in your mind stirred the pattern you have for B into motion. Light resonance, just a touch.

Resonance changes people.

I think it might be the only way to change anything.

That would have been a nice line to end with but the setup isn't quite correct, per my own minimal/homebrew worldview. Let me try again, mourning slightly the loss of a really punchy closing.

My experience of people experiencing resonance is usually followed by me experiencing those people evolving in some meaningful way — attuning themselves with (or against) the pattern.

I think experiencing things in resonance might be the only way to change my experience of anything.

From here we arrive at my great project: to experience a world in resonance.

Oh. That's a pretty good ending too.


p.s. honestly I think the whole thing might be "artificial intelligence" up and down the whole stack. "artificial" if you're looking down, "holy" if you're looking up. *shrug* it's all just information in motion, right?
