Artificially Intelligent

Any mimicry distinguishable from the original is insufficiently advanced.

These Are a Few of My Favorite Questions

| 870 words

Much of my progression as a thinking thing can be traced to internalizing questions that force rigor and care in thought and action. These are some that serve me well.


Would this be a mistake?

Sometimes, my bedtime approaches and I feel the urge to delay sleep to watch television. My brain insists that I check whether I have something important to do tomorrow, how good the TV show is, and so on. As I’m preparing to gather the relevant information, a voice in the back of my brain cries out: “would delaying sleep be a mistake?” Yes, obviously. At this moment, the choice has already been made, for I am not the type of person who makes mistakes.


In general, nearly all of my failures aren’t because I failed to execute some high-level technique; they’re because I didn’t follow through on basic things. If I reviewed the actions I took on a given day, most of the loss would come from taking actions I knew to be mistaken. It seems that I’m very good at convincing myself to act poorly: deciding I’m too tired to exercise, overruling my schedule, opening social media, and so on.

But if I can spot my mistakes in retrospect, there shouldn’t be anything stopping me from spotting them in the moment. The problem is that when I start evaluating actions, my brain tries to justify rather than decide. In this cloud of motivated reasoning, the question “would this be a mistake?” cuts through the murk by demanding a clean, crisp answer.

Rarely, the answer is actually “no”, so I can proceed to evaluating the action. Much of the time, however, the answer is “yes” and the decision has already been made.

Related reading by Dan Luu and lc.

What are the consequences?

Once, I encountered one of my friends sleeping on the couch. Wouldn’t it be hilarious, I thought, if I knocked on their head and asked “is anyone awake?” It turns out that people sleeping on couches generally do not enjoy being woken.


When I’m going about my life, sometimes I find myself slipping into various roles. The questions that determine my actions shift away from “is this a good way to get what I want?” and become “is this a clever thing to do?”, “would this sound smart?” or “is this what a productive person would do?” This isn’t always a bad thing. Sometimes, substituting these questions allows me to save computation and make decisions faster.

Sometimes, however, being in these roles makes me forget that actions have consequences. Being clever becomes the end of an action instead of a property the action might have. When I’m in such roles, asking myself what the consequences of an action are is a stark reminder that the real world exists. It snaps me out of acting based on associations with various concepts and into acting based on consequences.

Related reading by Nate Soares.

Why do I think this?

Recently, I was thinking about how the fact that China discovered porcelain meant that they stopped searching for glass, which meant they didn’t have glasses, which meant that their scientists had shorter productive lives, which meant that China fell behind Europe in science. Then, I noticed the story had FOUR CAUSAL LINKS. I asked myself why I believed it, and, as best I can figure, I read this story more than six years ago and it’s been stuck in my brain ever since.


Abstractly, the causal reason for your belief can be independent of the truth of that belief; people can, and often do, believe the right things for the wrong reasons. However, as a matter of mathematical law, believing the right thing for the wrong reasons has to be unlikely: you can’t generate information from nothing.

If minds were properly designed, acquiring explicit beliefs would require some amount of deliberate effort. Sometimes, this is true: sometimes I read a claim and remember to ask myself whether it makes sense and is true. Often, however, I’m slightly distracted or tired and forget to check whether the claims I read make sense; I just store them in my brain. Without my ever having made a conscious decision, these things slowly become things I “believe”.

In fact, given the relative rarity of truth compared to falsehood, the default assumption is that statements are false. If you don’t have good reasons for believing something, it’s extremely unlikely that the belief is true by coincidence. And as a belief becomes more complicated, it becomes exponentially less likely to be true.
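To make the arithmetic concrete, here is a toy calculation; the 80% per-link reliability is a number I picked for illustration, not anything measured. If each causal link in a story holds independently with probability p, a chain of n links is true with probability p^n:

```python
# Toy model: a story is a chain of causal links, each of which
# independently holds with probability p. The whole story is true
# only if every link holds, so its probability decays exponentially.
def chain_truth_probability(p: float, n_links: int) -> float:
    return p ** n_links

# Even generously 80%-reliable links make a four-link story
# (like the porcelain one above) worse than a coin flip.
for n in range(1, 5):
    print(f"{n} link(s): {chain_truth_probability(0.8, n):.2f}")
# 1 link(s): 0.80
# 2 link(s): 0.64
# 3 link(s): 0.51
# 4 link(s): 0.41
```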

Given that I have spent years acquiring floating and unverified claims, my brain is filled with things I “believe” that are pretty likely to be false. If you think of all the beliefs I have as nodes in a graph, with edges representing “partially follows from”, then these floating beliefs would be disconnected from the rest of the graph. The question “why do I think this?” asks a node: “are you connected to the core belief network?” If a node is unintegrated with everything else I know, it gets integrated or, in all probability, discarded.
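As a minimal sketch of that picture (the beliefs, edges, and core set below are all invented for illustration), you can treat beliefs as nodes, “partially follows from” as edges, and flag any belief that a breadth-first search starting from the core never reaches:

```python
from collections import deque

# Toy belief graph: an edge A -> B means "A partially follows from B".
# The specific beliefs here are made up for the example.
edges = {
    "I saw the sun rise today": ["the sun rises daily"],
    "the sun rises daily": [],
    "porcelain delayed Chinese science": [],  # floating: no support
}

def floating_beliefs(edges: dict[str, list[str]], core: set[str]) -> set[str]:
    """Return beliefs not connected (in either direction) to any core belief."""
    # Build an undirected adjacency view so connectivity goes both ways.
    neighbors = {node: set(deps) for node, deps in edges.items()}
    for node, deps in edges.items():
        for dep in deps:
            neighbors.setdefault(dep, set()).add(node)
    # Breadth-first search outward from the core beliefs.
    seen, queue = set(core), deque(core)
    while queue:
        for nxt in neighbors.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(neighbors) - seen

print(floating_beliefs(edges, core={"the sun rises daily"}))
# {'porcelain delayed Chinese science'}
```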

Related reading by Eliezer Yudkowsky.