A Puzzle About Time-Inconsistency

Alexander Douglas
Oct 26, 2017

Time-inconsistency is a central concept in economics, particularly in macroeconomics. The study of it has had real-world effects: Kydland and Prescott’s research on time-inconsistency, according to the information package that went with their Nobel Prize, led to “the reforms of central banks undertaken in many countries as of the early 1990s”.

To be time-inconsistent is to have one preference at one time and then a preference inconsistent with it at a later time. For instance, you might prefer watching television to going swimming now, but if I take you to the pool you might prefer swimming to watching television.

My problem with the concept of time-inconsistency is pretty simple. In any example I can think of, it’s never time alone making the difference to preferences. In the example I just gave, I took you to the pool, that is, your circumstances changed. So why say that you have inconsistent preferences (tv>swim at t1; swim>tv at t2) rather than that you have consistent, circumstance-dependent preferences (tv-if-at-home>swim-if-at-home; swim-if-at-pool>tv-if-at-pool)?

I think economists might have settled on a flawed concept by making the equivocation the great Kiwi logician A.N. Prior pointed out in “The Consequences of Actions”. Actions are simultaneously treated as determinate and indeterminate. Let me try to explain.

There is a fairly straightforward explanation of time-inconsistency on Greg Mankiw’s blog. He gives various examples; I’ll use my own, but you can check that it’s formally similar to the ones Mankiw gives.

Suppose you and I play the following game. First, out of 100 pennies, I choose how many you get to keep if you take the deal. Then you decide whether to take it. If you don’t take the deal, neither of us gets any of the pennies.

Assuming we’re the rational bastards of standard economic theory, my optimal move is to offer you 1 penny and keep the other 99. If you reject the deal, you get nothing. So you might as well take it. Then I’ll get 99 pennies. I know this, and so by backward induction I work out that it’s the optimal offer, for me. What then is your rational move? Obviously it’s to take the penny.
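If it helps to see the backward induction spelled out, here is a minimal sketch in Python, assuming both of us care only about our own penny count (the standard rationality assumption); the function names are my own illustration, not anything standard.

```python
# Minimal sketch of the backward induction, under the assumption that each
# player simply maximises their own penny count. Names are illustrative.

TOTAL = 100

def responder_accepts(offer: int) -> bool:
    """Your move: accept any offer that beats the zero you get by refusing."""
    return offer > 0

def proposer_best_offer() -> int:
    """My move: I check each possible offer, predicting your response to it."""
    best_offer, best_keep = 0, 0
    for offer in range(TOTAL + 1):
        keep = TOTAL - offer if responder_accepts(offer) else 0
        if keep > best_keep:
            best_offer, best_keep = offer, keep
    return best_offer

offer = proposer_best_offer()
print(offer, TOTAL - offer)  # -> 1 99: I offer one penny and keep ninety-nine
```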

We can try to turn this into a problem of time-inconsistency if we imagine you making some commitment at the start of the game. You might say, for instance, that you won’t take fewer than 99 pennies. That seems rational. But then when I ignore your threat and only offer you 1 penny, you take it; it’s still better than nothing. So you’re inconsistent, some economists might say. What you committed to at the start isn’t what you ended up being committed to.

But this seems an odd construal. Why would it be rational for you to make the commitment at the start of the game, if it’s not going to deter me anyway? There is no reason for me to care at all about your commitments before I move. You can commit to anything you like; I know that when you’re presented with a choice between 1 penny and nothing, you’ll take the penny. Thus there seem to be no grounds for saying that any commitment is rational for you at the start of the game. In fact, your commitments are no more relevant to the game than your hair colour. These things just don’t feature in the formal problem at all.

The reason economists frame things this way, I think, is because of the conclusion they want to draw, which is as follows. If you could develop a ‘commitment technology’ (some sort of device for forcing you to hold to your commitment even when it would no longer be rational to do so), this would of course affect how I move, so long as I know about the technology (don’t keep the Doomsday Machine a secret!). Paradoxically, the fact that you would follow through with your threat means that you won’t have to, so long as I know this.

This is, apparently, why we should have independent central banks rather than a democratic monetary policy, though I won’t go through that argument here. Speaking as if there were a time-inconsistency — an inconsistency between what is rational for you to do at one time and what is rational for you to do later — nudges us to think of the solution as something that restores consistency: a ‘commitment technology’ makes your preferences after my move the same as your preferences before my move, by brute force. This is where economists in any case want to get to with the problem.

But still, I insist, it is not so. The commitment technology changes the nature of the game. It gives you the first move. Now you can present me with two options: offer 99 pennies, or get nothing. What then is rational for your later self to do? That isn’t part of the game: your later self’s choice has been taken away by the ‘commitment technology’.

So here is what we have. There are two different games, one with the commitment technology and one without it. In the original game, with no commitment technology, there was no time-inconsistency. I have the first move, and it is rational for you to take my 1-penny offer (or any offer above 0). There isn’t anything it’s rational for you to do before my move: you just don’t have a relevant move before mine (we can both announce our intentions, honestly or dishonestly, but this isn’t part of the game). In the modified game, with the commitment technology, you have the first move. It’s rational for you to commit to rejecting any deal where you get less than 99 pennies. Once I’ve moved, there isn’t anything it’s rational for you to do. That, again, isn’t part of the game; it’s been taken out of the game by way of the commitment technology. So in neither case do we have time-inconsistency.
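Here is a rough sketch of the two games side by side, under the same assumptions as before; how I’ve modelled the commitment device (as a minimum-acceptable-offer you fix before I move) is my own gloss on the argument.

```python
# Sketch of the two distinct games. In each, whoever moves second has their
# behaviour fully predicted by the first mover. Names are illustrative.

TOTAL = 100

def original_game():
    """No commitment device: I move first; you then accept anything above 0."""
    best_offer = max(range(TOTAL + 1),
                     key=lambda o: (TOTAL - o) if o > 0 else 0)
    return best_offer, TOTAL - best_offer        # (your pennies, my pennies)

def commitment_game():
    """With the device: you move first by fixing a minimum you will accept.
    The device then rejects anything below it, so I either comply or get 0."""
    def i_comply(minimum):
        # I comply only if complying still leaves me something.
        return TOTAL - minimum > 0
    best_minimum = max(range(TOTAL + 1),
                       key=lambda m: m if i_comply(m) else 0)
    return best_minimum, TOTAL - best_minimum    # (your pennies, my pennies)

print(original_game())    # -> (1, 99)
print(commitment_game())  # -> (99, 1): demanding all 100 would leave me
                          #    nothing either way, so you stop at 99
```

Writing them as two separate functions is just the point: they are two different games, and neither contains a later move that contradicts an earlier one.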

What if the game were repeated many times? Still I don’t see how we could have time-inconsistency. Your consistent strategy for a repeated original game would be: always take an offer > 0. Your consistent strategy for a repeated modified game would be: always commit to rejecting any offer < 99.
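Written out, those two repeated-game strategies are just the one-round rules applied in every round; nothing about the round number changes what they prescribe. A purely illustrative sketch:

```python
# The repeated strategies: the same rule in every round, so repetition on
# its own introduces no inconsistency. Illustrative only.

def original_strategy(offer: int) -> str:
    return "accept" if offer > 0 else "reject"   # always take an offer > 0

def modified_strategy() -> int:
    return 99    # always commit the device to rejecting any offer < 99

for round_number in (1, 2, 3):
    print(round_number, original_strategy(1), modified_strategy())
```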

You might think we can get time-inconsistency to emerge, however, if we suppose that in the original game, when you’re making your commitment, you don’t know if I’m going to believe you. It’s rational to make a commitment, to see if I’ll believe you. But then it’s rational to break your commitment if I don’t. Is this time-inconsistency? I still don’t think so. If somebody asked you, before I moved, “will you really keep this commitment if he doesn’t respond to the threat?”, your honest answer would be “no”. And it would be the same answer after I’ve moved.

Here is how you might get to the, I think erroneous, idea that there is time-inconsistency in a case like this. We take the original game, but treat it as a three-move game. Move 1: you make a commitment. Move 2: I make an offer. Move 3: you accept/reject. I have already argued that if I am rational, Move 1 is irrelevant. But suppose that at the start of the game, it isn’t: I am not impervious to your threats. You threaten to reject any offer < N — the highest number you can demand and back up with a credible threat. When we get to Move 2, however, I change my mind. I see through your threat, and I make the stingy offer. At Move 3, it’s not rational for you to honour your threat. But at Move 1 it was rational for you to commit to honouring it. So there is real time-inconsistency here… right?

I say not. At Move 1 it was rational for you to commit to this: if I believe your threat, adopt the policy: if I don’t make the right offer, reject it at Move 3. That is, part of your optimal strategy is: I-believe-threat -> (I-make-wrong-offer -> You-reject-offer). Since my believing your threat would make the antecedent of the embedded conditional false, this is a safe strategy that won’t result in you actually rejecting the offer. And at Move 3 this is still a rational policy. It’s just that the first antecedent has proven false. Meanwhile at Move 3, part of your optimal strategy is: I-don’t-believe-threat -> (I-make-wrong-offer -> You-accept-offer). Both antecedents have proven true, so you accept the offer. Yet that was also a rational strategy to have back at Move 1. The two strategies are consistent, and together they make up your optimal strategy through the whole game. So I still don’t think we have time-inconsistency here.
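To see that this is one policy rather than two, here is a sketch of the strategy as a single function of the two things that get settled between Move 1 and Move 3; the names and the specific value of N are illustrative.

```python
# One policy, adopted at Move 1 and unchanged at Move 3; only the truth
# values of its antecedents get settled in between. Illustrative sketch.

N = 99   # the highest demand you could back with a credible threat

def your_policy(threat_believed: bool, offer: int) -> str:
    """The whole strategy, covering both ways the game could go."""
    if threat_believed:
        # I-believe-threat -> (wrong offer -> reject). If I believe you,
        # I never make the wrong offer, so the inner conditional stays idle.
        return "reject" if offer < N else "accept"
    else:
        # I-don't-believe-threat -> (wrong offer -> accept): once the threat
        # has failed to deter, a penny still beats nothing.
        return "accept" if offer > 0 else "reject"

# Move 1: you adopt your_policy in full, whichever way I turn out to move.
# Move 3: I saw through the threat and offered one penny; the same policy,
# with its antecedents now settled, tells you to take it.
print(your_policy(threat_believed=False, offer=1))   # -> accept
```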

What might make it look as if we do is, I think, a confusion about agency. This is what I was getting at by referring to Prior above. It is nonsense to say that at Move 1 I was going to believe your threat and at Move 3 I didn’t believe your threat. It is simply contradictory to say that what happens at t2 is different from what was, at t1, going to happen at t2. On the other hand, we might want to say that the future is open: e.g. that agents have freedom and their choices are indeterminate until made. Game theoretic models are confusing in this regard — they speak of agents making choices while being, by way of rationality assumptions, formally identical to models of non-choice behaviour. But in any case if we treat the future as open in this way, what we must say is that there is no determinate truth at t1 about what is going to happen at t2. It is not yet written. Thus if what happens at t2 is relevant to choosing your strategy at t1, your strategy will have to cover the different possibilities. You won’t end up time-inconsistent unless you are straightforwardly irrational or misinformed.

To get time-inconsistency out of examples like the above, I think economists do this. They think of a choice being made at t1 on the basis of what is going to happen at t2. Then they think of what happens at t2 as being different from what was going to happen at t2 at t1, on account of the free choice of agents. And then they think of a new choice being made, based on what did happen at t2. These choices can then be inconsistent, but only because they occur in a world that is inconsistent. I don’t think there can be any inconsistent worlds like that. But even if there are, I certainly can’t imagine what rational choices would be in them.
