

Self-help is prompt engineering the unconscious
Prompts are the weirdest part about LLMs. We usually instruct computers in unambiguous programming languages, but prompting injects irrationality into that process. We end up saying things like, “Take a deep breath and work on this problem step-by-step.” We have to set aside what we know about computers and pretend that we’re talking to a being capable of respiration in order to get it to work optimally. This is closer to alchemy than traditional programming, even though the same deterministic processes take place under the surface of LLMs as in other software.
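To make the weirdness concrete, here's a rough sketch of how such an incantation gets used in practice. This is my own illustration, not any particular vendor's API; ask_model is just a stand-in for whatever model you'd actually call.

```python
# A minimal sketch of the "incantation" style of prompting described above:
# the same question sent two ways, once plainly and once with wording that
# has been reported to coax more careful, stepwise answers out of some models.

INCANTATION = "Take a deep breath and work on this problem step-by-step."

def build_prompts(question: str) -> tuple[str, str]:
    """Return the plain prompt and the 'coaxed' prompt for comparison."""
    plain = question
    coaxed = f"{INCANTATION}\n\n{question}"
    return plain, coaxed

if __name__ == "__main__":
    plain, coaxed = build_prompts("What is 17 * 24? Explain your reasoning.")
    print("--- plain ---\n" + plain + "\n")
    print("--- coaxed ---\n" + coaxed)
    # In practice you'd send each version to your model, e.g. ask_model(coaxed),
    # and compare the answers; which phrasing wins is mostly trial and error.
```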
Prompting might also seem uncanny to us because it mimics our relationship to the unconscious. Our unconscious speaks to us in bodily sensations that we interpret as emotions. When we get anxious about something, for example, our stomachs tend to clench, we sweat, our hearts beat quickly — all ways that the unconscious says, “Don’t do this!” Our cravings for sugary and fatty foods come from an unconscious concern that calories aren’t always so easy to find. Or the unconscious might send us recurring dreams full of bizarre imagery.
We can’t speak back to the unconscious in English, though. Overcoming a craving isn’t as simple as patiently explaining to yourself that no, actually, you don’t need to eat six cookies because there’s a supermarket full of calories just five minutes away and it’s open 24 hours a day. Instead, you can distract yourself with some absorbing task, find some way to trigger disgust, or simply power through it while knowing that you’re tapping into a scarce resource that will leave you more likely to give in to your next craving.
Some maladaptive signals from the unconscious can really derail your life. An agoraphobic’s unconscious is totally convinced that something awful will happen if they leave their house, much to the detriment of everything else they might do. With issues this disruptive, medication might help the unconscious pull itself together, or it might not, and when it does help, it often takes trying several different drugs first. Or it might take talk therapy, which means finding the right metaphor or a special image or just a different formulation of the problem before the unconscious gets on the same page as the frontal lobe and stops its harassment campaign.
This is the same process we go through with LLMs. We try different ways of asking a question, providing more or less context, or breaking up our questions into parts that we reassemble ourselves. In the end, we don’t know why a certain formulation might help the models give us better results. Just like our unconscious might need us to imagine ourselves in a hammock on a tropical beach in order to calm down, the LLM needs us to phrase our questions in a certain way. That’s just how it is.
Self-help, then, is essentially prompt engineering. Each book or seminar is an attempt at finding the right incantation that we can deploy to get the unconscious to behave itself. This may explain the diversity of approaches and the enduring appeal of self-help materials: the same prompt won’t work for every LLM, and our unconsciouses are constantly changing with our environment and experiences.
When we lived in Seattle, we’d go to Palm Springs during the darkest months of winter. There’s an aerial tram in Palm Springs that goes up one of the surprisingly tall mountains. I was a little nervous getting on board, but once the car started floating away from the ground at some horrible, near-vertical angle (and rotating — yes, the car rotates), my mild anxiety erupted into a fear that totally consumed my awareness.
There’s a cafeteria at the top and some trails, but all I could think of was having to ride the tram down again. I actually thought about hiking down, even though there was snow on the ground and it would be well past dark by the time we got back to the car. For some reason, though, I opened Wikipedia and started reading about how aerial trams work. I had walked past the giant pulley that operated the tram at the top of the mountain, but it was an image of that same mechanism on Wikipedia that caused my unconscious to realize that there wasn’t any danger in this situation. I even felt calm enough to read the list of aerial tram accidents, and when we left, I enjoyed the trip down.
Another example: I have an on-again-off-again fear of flying. I’ll go years where I don’t have any nerves about getting on a plane, then suddenly will feel anxious about an upcoming flight with no apparent reason for the change. Reading about how safe air travel is has never been able to calm me down. As impressive as commercial aviation’s safety record is, my unconscious just doesn’t get it.
I have a few highlights from a book by Byron Katie that appear in my spaced repetition deck, though, and one of them came at just the right time to calm me down before a recent flight. She wrote, in wording that I now find very apt (italics mine):
Another way of prompting yourself is to read your original statement and ask yourself what you think you would have if reality were (in your opinion) fully cooperating with you. Suppose you wrote, “Paul should tell me that he loves me.” Your answer to “What do you think you would have?” might be that if Paul told you that he loves you, you would feel more secure. Write down this new statement—“I would feel more secure if Paul told me that he loves me”—and put it up against inquiry.
This made me pause and test out the idea, “I would feel totally calm if I were flying on a crash-proof plane,” which I immediately knew to be false. Somehow that statement provided me with a felt sense that my fear was completely detached from reality. My unconscious finally got the picture that it was spinning itself into a frenzy over nothing and that it could safely quit squirting adrenaline into my veins every time I thought of flying.
I think our experiences of prompt engineering LLMs should make us a bit less normative about how others try to prompt engineer their unconsciouses. If people glom onto woo for meaning, let them give it a try. None of us know why what we’re doing works, nor for how long it’ll work. As long as we don’t impede the correction of errors, any given metaphor might be worth a try. Maybe this is the real lesson of our encounter with prompt engineering: the next time the model’s updated, you’ll have to start all over again.