Thoughts on Simulacra
Recently I came across this post by the Zvi explaining Simulacra (an image or representation). He provides the following indirect quote from this person:
1. First, words were used to maintain shared accounting. We described reality intersubjectively in order to build shared maps, the better to navigate our environment. I say that the food source is over there, so that our band can move towards or away from it when situationally appropriate, or so people can make other inferences based on this knowledge.
2. The breakdown of naive intersubjectivity — people start taking the shared map as an object to be manipulated, rather than part of their own subjectivity. For instance, I might say there’s a lion over somewhere where I know there’s food, in order to hoard access to that resource for idiosyncratic advantage. Thus, the map drifts from reality, and we start dissociating from the maps we make.
3. When maps drift far enough from reality, in some cases people aren’t even parsing it as though it had a literal specific objective meaning that grounds out in some verifiable external test outside of social reality. Instead, the map becomes a sort of command language for coordinating actions and feelings. “There’s food over there” is perhaps construed as a bid to move in that direction, and evaluated as though it were that call to action. Any argument for or against the implied call to action is conflated with an argument for or against the proposition literally asserted. This is how arguments become soldiers. Any attempt to simply investigate the literal truth of the proposition is considered at best naive and at worst politically irresponsible.
This is wrong. Do not believe that “First, words were used to maintain shared accounting.”
Children
Some people have this model where children first learn how to use words to represent truth (i.e., the shared map of the world). However, in my experience, from day one kids use words to manipulate their parents. At first, they just babble in a way that sounds like words. But pretty soon they realize that this babbling has an effect on the people around them. Eventually they learn to form actual words…but they aren’t using those words to tell the truth. Kids say whatever their little brains judge most likely to get them what they want, regardless of whether it has any relation to the truth.
Shape Rotators and Wordcels
Again from the Zvi:
Shape rotators concern themselves with reality, wordcels only with symbols.
For an overview of shape rotators and wordcels, look here.
What are these “symbols” you speak of? Are maps not symbols? Shape rotators and wordcels are both concerned with symbols, just different kinds of symbols. Words are symbols for manipulating psychological states and social relations. Maps are symbols for navigating places and manipulating things.
Note the relationship to the first quote. People who like to build maps (shape rotators) naturally think that maps come first, and manipulating the maps comes second. But not everyone is a shape rotator.
Here is some critical theory for you. The division between “Truth” and “Lies” is really about the conflict between shape rotators and wordcels. Shape rotators tell the Truth, and wordcels tell lies. But really the difference is that wordcels naturally evaluate their social situation and use words as tools. For shape rotators, words shouldn’t be social tools (that’s what lies are!); words should be shapes.
There is a long history of demonizing wordcels. Satan seduced Eve with words. Loki is another wordcel…a trickster god that no one can trust.
Building maps is masculine, and manipulating social relationships is feminine. We fear the feminine. We fear wordcels and insist that shape rotators are first, the foundation, pure “reality”.
AI
The Zvi also points to someone who compares deep learning to shape rotation and cryptotech to wordcellery.

One cutting edge is something called GPT-3, which spits out words. But these words aren’t even lies. Unlike a child, GPT-3 doesn’t know what we want to hear, nor does it have any objective to achieve in choosing its words. How did the pinnacle of shape rotation lead to this?
Walid Saba writes generally about the limitations of machine learning, including the difference between “recognition” and “understanding” and how language requires decompression of symbols. Basically, each word carries a huge amount of meaning, much of it ambiguous. In the kind of problem loved by shape rotators, pretty much everything relevant can be included in the model. In a wordcel problem (i.e., choosing the next right word), however, the relevant information is all hidden in the subconscious. Wordcel symbols are so complex that they can’t easily be abstracted into models for shape rotators to play with.
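To make the decompression point concrete, here is a tiny sketch (assuming Python with nltk installed and the WordNet corpus already downloaded) showing how many distinct senses hide behind one ordinary surface word:

```python
# Tiny illustration of "decompression": one surface word compresses many
# distinct senses. Assumes nltk is installed and the WordNet corpus has
# been fetched once via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

senses = wn.synsets("bank")
print(f"'bank' has {len(senses)} WordNet senses")  # well over a dozen
for s in senses[:5]:
    print(s.name(), "->", s.definition())
```

A shape rotator’s model sees one token; the wordcel’s subconscious does all of that disambiguation for free.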
Lawyers
A few years ago I took a test called the LSAT to get into law school. The purpose of the LSAT is to see if you can take complex words, strip them of their meaning, rotate them around until they mean what you want, and then put them into a really simple shape called a syllogism. In other words, it’s word rotation. This is a hard task both for shape rotators (who like their shapes to be made of simple and concrete things) and for wordcels (who don’t like to rotate things).
True words don’t really fit into syllogisms. Conversely, if you put something into a syllogism, it isn’t a word anymore. It’s a node — an abstract point on a logical map that helps shape rotators dip their toe into word world. True words are for manipulating your parents, for avoiding guilt when you get caught with your hand in the cookie jar.
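If you want to see the “node” claim literally, here is the classic syllogism written out as a formal shape (a sketch in Lean; the predicate names are placeholders, not meanings):

```lean
-- The classic syllogism as a bare shape: all men are mortal, Socrates
-- is a man, therefore Socrates is mortal. Nothing in the proof depends
-- on what Man or Mortal actually mean -- the words are just nodes.
example (Person : Type) (Man Mortal : Person → Prop)
    (allMenMortal : ∀ x, Man x → Mortal x)
    (socrates : Person) (isMan : Man socrates) : Mortal socrates :=
  allMenMortal socrates isMan
```

Swap in any predicates you like and the shape still rotates, which is exactly why the words inside it have stopped being words.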
Of course, lawyers know this. They know that words can mean anything…that they can be crammed into any syllogism we want. We lawyers use shape rotation like children use words…to get what we want, not to represent “truth”. That’s why people hate lawyers. Lawyers exist in the uncanny valley between shape rotation and wordcellery.