Artificially Intelligent

Any mimicry distinguishable from the original is insufficiently advanced.

Interpolate Claims (Un)charitably

The principle of charity requires interpreting the speaker’s claims in the strongest possible light. This principle is ambiguous. If I say “money can’t buy happiness,” the truest version is vacuous: “there is no way to convert money directly into happiness.” More nuanced notions of strength are context-dependent; “strongest possible light” references the listener’s background beliefs. Two listeners can hear the same claim, apply the principle of charity, and arrive at different interpretations. For example, one person hears “money can’t buy happiness” and takes it to mean “money has diminishing marginal returns on happiness,” while the other takes it to mean “people above a certain income threshold do not become any happier when given additional money.” Both people might agree with their charitable interpretation of what I said, but one of them disagrees with what I meant.

A better way to interpret a claim is to take all possible ways it could be construed and indicate how much you agree with each interpretation. For instance, if someone claims “apples are heavy,” I would provide a probability distribution over apples’ weight. In general, I would model the situation with free parameters and give a distribution over those parameters. This process is intractable.
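As a toy illustration, the claim “apples are heavy” can be read as a family of interpretations of the form “a typical apple weighs more than t grams,” with agreement for each interpretation computed from an assumed weight model. Everything here is an invented assumption for the sketch, not data: the normal model, the 180 g mean, and the 40 g standard deviation.

```python
import math

# Assumed (made-up) model of apple weight: Normal(MEAN_G, SD_G).
MEAN_G = 180.0  # illustrative mean weight in grams
SD_G = 40.0     # illustrative standard deviation in grams

def agreement(threshold_g: float) -> float:
    """Agreement with "a typical apple weighs more than threshold_g grams",
    i.e. P(weight > threshold_g) under the assumed normal model."""
    z = (threshold_g - MEAN_G) / SD_G
    return 0.5 * math.erfc(z / math.sqrt(2))

# Report agreement across a range of interpretations of "apples are heavy".
for t in (100, 150, 200, 300):
    print(f'"apples weigh more than {t} g": agreement = {agreement(t):.2f}')
```

Reporting the whole agreement curve is the intractable ideal; with several free parameters, you would need a joint distribution over all of them.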

A tractable approximation is picking the “most true” and “most false” interpretations and interpolating between them. I call these the weak and strong versions of the claim. This process amounts to modeling the claim with one free variable and giving a rough distribution. For example, suppose I claim, “humanity is bad at coordination.” When I get called out for not being specific enough, I might say, “The weak version of the claim is that there exists at least one societal Pareto improvement that hasn’t been implemented. The strong version might be that such Pareto improvements are plentiful and easy to spot. I think the weak version is obviously true, and the strong version is mostly true.” The free parameter in this example is the number of untaken societal Pareto improvements, implicitly weighted by their benefits and how difficult they are to spot.
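A rough sketch of the coordination example, with n (the number of untaken societal Pareto improvements) as the single free parameter. The distribution below is invented purely for illustration, as is the cutoff of 100 for the strong version; the point is only that the weak and strong versions become statements about the same parameter.

```python
# Made-up distribution over rough orders of magnitude of n, the number
# of untaken societal Pareto improvements.
dist = {0: 0.01, 1: 0.04, 10: 0.15, 100: 0.30, 1000: 0.50}

def p_at_least(k: int) -> float:
    """Probability that n >= k under the rough distribution above."""
    return sum(p for n, p in dist.items() if n >= k)

# Weak version: at least one untaken Pareto improvement exists.
print(f"weak   (n >= 1):   {p_at_least(1):.2f}")    # obviously true
# Strong version (illustrative cutoff): such improvements are plentiful.
print(f"strong (n >= 100): {p_at_least(100):.2f}")  # mostly true
```

The two printed numbers correspond to “the weak version is obviously true” and “the strong version is mostly true”; the full distribution over n is the rough one-parameter model the interpolation stands in for.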

Finding weak and strong versions of a claim forces you to be specific. Interpolating between two claims means you have an operationalization. Two points make a line, telling you the direction of the claim. If your interpretation is too abstract, interpolating will make this clear. Suppose I said, “The weak version of the claim is that oranges are horrible. The strong version is that they’re great. I disagree with the weak version and partially agree with the strong version.” It is clear that on the ladder of abstraction, I have stepped sideways instead of down.