
Information Acquisition under Persuasive
Precedent versus Binding Precedent
(Preliminary and Incomplete)
Ying Chen∗  Hülya Eraslan†
March 25, 2016
Abstract
We analyze a dynamic model of judicial decision making. A court regulates
a set of activities by allowing or banning them. In each period a new case arises
and the appointed judge has to decide whether the case should be allowed or
banned. The judge is uncertain about the correct ruling until she conducts a
costly investigation.
We compare two institutions: persuasive precedent and binding precedent.
Under persuasive precedent, the judge is not required to follow previous rulings but can use the information acquired in an investigation made in a previous period. Under binding precedent, however, the judge must follow previous rulings when they apply. We analyze both a three-period model and an infinite-horizon model. In both models, we find that the judge's incentive to acquire information is stronger under binding precedent than under persuasive precedent in earlier periods, when there are few precedents; but as more precedents are established over time, the incentive to acquire information becomes weaker under binding precedent than under persuasive precedent.
∗Department of Economics, Johns Hopkins University
†Department of Economics, Rice University


1 Introduction

We analyze a dynamic model of judicial decision making. A court regulates a set of activities by allowing or banning them. In each period a new judge is appointed and a new case arises which must be decided by the judge appointed in that period. The judges share the same preferences over whether a case should be allowed or banned, but they are uncertain about the correct ruling until a costly investigation is made. Following Baker and Mezzetti [2012], we assume that the judge appointed in a given period can either investigate the case before making a ruling or summarily decide without investigation.

We compare two institutions: persuasive precedent and binding precedent. Under persuasive precedent, a judge is not required to follow previous rulings but can use the information acquired by the investigations of previous judges. Under binding precedent, however, a judge must follow previous rulings when they apply.

We show that the incentives to acquire information for the appointed judges are stronger in earlier periods, when there are few precedents, under binding precedent than under persuasive precedent; but as more precedents are established over time, the incentive to acquire information for the appointed judge becomes weaker under binding precedent than under persuasive precedent. To see why, note that the cost of making a wrong summary decision is higher under binding precedent than under persuasive precedent, since future judges have to follow precedents when they are binding even if they are erroneous. Hence, a judge who faces few precedents is more inclined to investigate to avoid mistakes under binding precedent. As more precedents are established over time, however, the value of information acquired through investigation becomes lower under binding precedent, since future judges may not be able to use the information to make rulings, and this discourages judges from acquiring information under binding precedent.
We establish these results first in a simple three-period model and then in an infinite-horizon model. In the infinite-horizon model, we show there is a unique Markov perfect equilibrium payoff by showing that the Contraction Mapping Theorem applies. In our model, the uniqueness of Markov perfect equilibrium payoff

whether to permit the activity or ban it.¹,² An investigation allows the judge to learn the value of θ at a fixed cost z. If the case is decided without an investigation, we say the judge made a summary decision.

Let s = ((L, R), x). In what follows, for expositional convenience, we refer to s as the state even though it does not include the information about θ. Let S^p denote the set of possible precedents, i.e. S^p = {(L, R) ∈ [0, 1]² : R > L}, and let S denote the set of possible states, i.e. S = S^p × [0, 1].

Denote the ruling at time t by r_t ∈ {0, 1}, where r_t = 0 if the case is banned and r_t = 1 if the case is permitted. After the judge makes her ruling, the precedent changes to L_{t+1} and R_{t+1}. If x_t was permitted, then L_{t+1} = max{L_t, x_t} and R_{t+1} = R_t; if x_t was banned, then L_{t+1} = L_t and R_{t+1} = min{R_t, x_t}. Formally, the transition of the precedent is captured by the function π : S × {0, 1} → S^p, where π(s, 0) is the vector (L, min{R, x}) and π(s, 1) is the vector (max{L, x}, R).

We consider two institutions: a binding precedent and a persuasive precedent. Under binding precedent, in period t the judge must permit x_t if x_t ≤ L_t and ban it if x_t ≥ R_t. Under persuasive precedent, the judge is free to make any decision; in this case, the role of the precedent is potentially to provide information regarding whether the case is beneficial or not. We assume the violation of a binding precedent is infinitely costly.

The payoff of the judge from the ruling r_t on case x_t in period t is given by

$$u(x_t, \theta, r_t) = \begin{cases} 0 & \text{if } x_t < \theta \text{ and } r_t = 1, \text{ or } x_t \ge \theta \text{ and } r_t = 0, \\ \ell(x_t, \theta) & \text{otherwise,} \end{cases}$$

where ℓ(x_t, θ) is the cost of making a mistake, that is, permitting a case when it is above θ or banning a case when it is below θ. Assume that ℓ(x, θ) = 0 if x = θ and ℓ(x, θ) < 0 if x ≠ θ. Also assume that ℓ(x, θ) is continuous, strictly decreasing in x and strictly increasing in θ if x > θ, and strictly increasing in x and strictly decreasing in θ if x < θ. For example, if ℓ(x, θ) = f(|x − θ|) where f is a continuous and strictly decreasing function with f(0) = 0, then these assumptions are satisfied.

The dynamic payoff of the judge is the sum of her discounted payoffs from the rulings made in each period, net of the investigation cost, appropriately discounted, if she carries out an investigation. The discount factor is denoted by δ.

¹ For expositional simplicity, we assume that the judge investigates the case when indifferent.
² We assume that the judge learns about her preference parameter θ through investigation. Alternatively, we can assume that the judge knows her preferences in terms of the consequences of cases, but does not know the consequence of a particular case unless she investigates. To illustrate, let c(x) denote the consequence of a case x and assume that c(x) = x + γ. The judge would like to permit case x if c(x) is below some threshold c̄ and would like to ban it otherwise. Suppose that the judge knows c̄ and observes x, but γ is unknown until a judge investigates. This alternative model is equivalent to ours.

Persuasive precedent

In the model with persuasive precedent, the payoff-relevant state in any period is the realized case x ∈ [0, 1] and the information about θ. If θ is known at the time when the relevant decisions are made, then it is optimal not to investigate the case for any x ∈ [0, 1], and it is optimal to permit x if x < θ and to ban x if x > θ.

If θ is unknown at the time when the relevant decisions are made, a policy for the judge is a pair of functions σ = (μ, ρ), where μ : [0, 1] → {0, 1} is an investigation policy and ρ : [0, 1] → {0, 1} is an uninformed ruling policy: μ(x) = 1 if and only if an investigation is made when the case is x, and ρ(x) = 1 if and only if case x is permitted. For each policy σ = (μ, ρ), let V(·; σ) be the associated value function; that is, V(x; σ) represents the dynamic payoff of the judge when she is uninformed, faces case x in the current period, and follows the policy σ. In what follows, we suppress the dependence of the dynamic payoffs on σ for notational convenience. The policy σ∗ is optimal if σ∗ and the associated value function V∗ satisfy the following conditions:

(P1) The uninformed ruling policy satisfies ρ∗(x) = 1 if

$$\int_{\underline{\theta}}^{\max\{x,\underline{\theta}\}} \ell(x,\theta)\,dF(\theta) > \int_{\min\{x,\bar{\theta}\}}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta)$$
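As a concrete illustration of the precedent dynamics described above, the transition function π can be sketched in a few lines. This is a hypothetical helper of ours (the name `transition` is not from the paper), assuming the [0, 1] setup in the text:

```python
# A minimal sketch of the precedent transition pi described above. The function
# name `transition` is ours; the update rule follows the text, with cases and
# precedent bounds living in [0, 1].

def transition(L, R, x, r):
    """Return the next precedent (L', R') after ruling r on case x.

    r = 1 (permit): the permissive bound rises to max(L, x).
    r = 0 (ban):    the restrictive bound falls to min(R, x).
    """
    if r == 1:
        return (max(L, x), R)
    return (L, min(R, x))

# Permitting a case inside (L, R) tightens the lower bound:
print(transition(0.2, 0.8, 0.5, 1))   # (0.5, 0.8)
# Banning it instead tightens the upper bound:
print(transition(0.2, 0.8, 0.5, 0))   # (0.2, 0.5)
# A case already covered by precedent leaves it unchanged:
print(transition(0.2, 0.8, 0.1, 1))   # (0.2, 0.8)
```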

In the model with binding precedent, the payoff-relevant state in any period is the precedent pair (L, R) ∈ S^p, the realized case x ∈ [0, 1], and the information about θ. If θ is known at the time when the relevant decisions are made, then it is optimal not to investigate the case for any s, to permit x if x < max{L, θ}, and to ban x if x > min{R, θ}. Let C(L, R) denote the expected dynamic payoff of the judge when the precedent is (L, R), conditional on θ being known when decisions regarding the cases are made, where the expectation is taken over θ before it is revealed and over all future cases x. Formally,

$$C(L,R) = \frac{1}{1-\delta}\left[\int_{\mathcal{L}}\int_{\theta}^{L} \ell(x,\theta)\,dG(x)\,dF(\theta) + \int_{\mathcal{R}}\int_{R}^{\theta} \ell(x,\theta)\,dG(x)\,dF(\theta)\right],$$

where ℒ is the (possibly degenerate) interval [θ̲, max{L, θ̲}] and ℛ is the (possibly degenerate) interval [min{R, θ̄}, θ̄]. Equivalently,

$$C(L,R) = \begin{cases} 0 & \text{if } L \le \underline{\theta} \text{ and } R \ge \bar{\theta}, \\[4pt] \dfrac{1}{1-\delta}\left[\displaystyle\int_{\underline{\theta}}^{L}\int_{\theta}^{L} \ell(x,\theta)\,dG(x)\,dF(\theta) + \int_{R}^{\bar{\theta}}\int_{R}^{\theta} \ell(x,\theta)\,dG(x)\,dF(\theta)\right] & \text{if } L > \underline{\theta} \text{ and } R < \bar{\theta}, \\[4pt] \dfrac{1}{1-\delta}\left[\displaystyle\int_{\underline{\theta}}^{L}\int_{\theta}^{L} \ell(x,\theta)\,dG(x)\,dF(\theta)\right] & \text{if } L > \underline{\theta} \text{ and } R \ge \bar{\theta}, \\[4pt] \dfrac{1}{1-\delta}\left[\displaystyle\int_{R}^{\bar{\theta}}\int_{R}^{\theta} \ell(x,\theta)\,dG(x)\,dF(\theta)\right] & \text{if } L \le \underline{\theta} \text{ and } R < \bar{\theta}. \end{cases}$$
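To make the piecewise formula concrete, here is a small numerical sketch under purely illustrative primitives (θ uniform on [0.3, 0.7], cases uniform on [0, 1], ℓ(x, θ) = −|x − θ|, δ = 0.9; none of these functional forms come from the text), approximating the double integrals by midpoint Riemann sums:

```python
# Numerical sketch of C(L, R). All primitives are illustrative assumptions of
# ours, not the paper's: theta ~ U[0.3, 0.7], x ~ U[0, 1],
# l(x, theta) = -|x - theta|, delta = 0.9.

def C(L, R, lo=0.3, hi=0.7, delta=0.9, n=400):
    # Midpoint Riemann sum of the per-period cost formula, then divide by 1 - delta.
    f_dens = 1.0 / (hi - lo)      # density of F (uniform on [lo, hi])
    g_dens = 1.0                  # density of G (uniform on [0, 1])
    total = 0.0
    dth = (hi - lo) / n
    for i in range(n):
        th = lo + (i + 0.5) * dth
        # cases the judge is forced to permit although they lie above theta
        if th < L:
            dx = (L - th) / n
            for j in range(n):
                x = th + (j + 0.5) * dx
                total += -abs(x - th) * g_dens * dx * f_dens * dth
        # cases the judge is forced to ban although they lie below theta
        if th > R:
            dx = (th - R) / n
            for j in range(n):
                x = R + (j + 0.5) * dx
                total += -abs(x - th) * g_dens * dx * f_dens * dth
    return total / (1.0 - delta)

print(C(0.3, 0.7))      # 0.0: the precedent never binds against theta
print(C(0.5, 0.7) < 0)  # forced permissions are costly -> True
```

Consistent with the monotonicity noted below, the sketch gives a payoff that falls as L rises and rises as R rises.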

To see how we derive C(L, R), note that if θ < L and x ∈ (θ, L], then the judge incurs a cost of ℓ(x, θ) since she has to permit x; similarly, if θ > R and x ∈ [R, θ), then the judge incurs a cost of ℓ(x, θ) since she has to ban x. It follows that the expected per-period payoff of a judge conditional on θ being known is

$$\int_{\mathcal{L}}\int_{\theta}^{L} \ell(x,\theta)\,dG(x)\,dF(\theta) + \int_{\mathcal{R}}\int_{R}^{\theta} \ell(x,\theta)\,dG(x)\,dF(\theta),$$

and her dynamic payoff in the infinite-horizon model is 1/(1 − δ) times the per-period payoff. Note that max{L, θ̲} is increasing in L and min{R, θ̄} is decreasing in R, and therefore C(L, R) is decreasing in L and increasing in R.

If θ is unknown at the time when the decisions regarding the cases are made, a policy for the judge is a pair of functions σ = (μ, ρ), where μ : S → {0, 1} is an investigation policy and ρ : S → {0, 1} is an uninformed ruling policy:

μ(s) = 1 if and only if an investigation is made when the state is s, and ρ(s) = 1 if and only if case x is permitted when the state is s. For each policy σ = (μ, ρ), let V(·; σ) denote the associated value function; that is, V(s; σ) represents the dynamic payoff of the judge when the state is s = ((L, R), x), θ is unknown, and she follows the policy σ. In what follows, we suppress the dependence of V on σ for notational convenience. The policy σ∗ is optimal if σ∗ and the associated value function V∗ satisfy the following conditions:

(B1) Given V∗, for any state s, the uninformed ruling policy satisfies ρ∗(s) = 1 if either x ≤ L, or x ∈ (L, R) and

$$\int_{\underline{\theta}}^{\max\{x,\underline{\theta}\}} \ell(x,\theta)\,dF(\theta) + \delta\int_{0}^{1} V^*(\pi(s,1), x')\,dG(x') > \int_{\min\{x,\bar{\theta}\}}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta\int_{0}^{1} V^*(\pi(s,0), x')\,dG(x'),$$

and ρ∗(s) = 0 if either x ≥ R, or x ∈ (L, R) and

$$\int_{\underline{\theta}}^{\max\{x,\underline{\theta}\}} \ell(x,\theta)\,dF(\theta) + \delta\int_{0}^{1} V^*(\pi(s,1), x')\,dG(x') < \int_{\min\{x,\bar{\theta}\}}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta\int_{0}^{1} V^*(\pi(s,0), x')\,dG(x').$$

(B2) Given V∗ and the uninformed ruling policy ρ∗, for any state s, the investigation policy μ∗ satisfies μ∗(s) = 1 if and only if

$$-z + \mathbf{1}_{\mathcal{L}}(x)\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta) + \mathbf{1}_{\mathcal{R}}(x)\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta C(L,R) \;\ge\; \rho^*(s)\int_{\underline{\theta}}^{\max\{x,\underline{\theta}\}} \ell(x,\theta)\,dF(\theta) + (1-\rho^*(s))\int_{\min\{x,\bar{\theta}\}}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta\int_{0}^{1} V^*(\pi(s,\rho^*(s)), x')\,dG(x'),$$

where 𝟏_A(x) takes the value 1 if x ∈ A and 0 otherwise.

3.1 Persuasive Precedent

Consider the judge in period t. If the judge has investigated in a previous period, then θ is known and judge t permits or bans case x_t according to θ. Suppose the judge has not investigated in a previous period; then her belief about θ is the same as the prior. If the judge investigates in period t, her payoff is −z in period t and 0 in future periods. The following result says that if the judge is uninformed and does not investigate in period t, then there exists a threshold in (θ̲, θ̄) such that she permits x_t if it is below this threshold and bans x_t if it is above this threshold.

Lemma 1. Under persuasive precedent, there exists x̂ ∈ (θ̲, θ̄) such that if the judge has not investigated in a previous period and does not investigate x_t in period t, then she permits x_t if x_t < x̂ and bans x_t if x_t > x̂.
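For instance, the cutoff x̂ equates the expected loss of an uninformed permit with that of an uninformed ban (cf. condition (P1)), so it can be found by bisection. The sketch below uses illustrative primitives of ours, not the paper's: θ uniform on [0, 1] and ℓ(x, θ) = −|x − θ|.

```python
# Bisection sketch for the cutoff x_hat in Lemma 1, under illustrative
# assumptions of ours (not the paper's): theta ~ U[0, 1], l(x, theta) = -|x - theta|.

def loss_permit(x):
    # integral_0^x l(x, theta) dF(theta) = -x^2 / 2 for these primitives
    return -0.5 * x * x

def loss_ban(x):
    # integral_x^1 l(x, theta) dF(theta) = -(1 - x)^2 / 2
    return -0.5 * (1.0 - x) ** 2

lo_x, hi_x = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo_x + hi_x)
    # loss_permit - loss_ban is strictly decreasing in x, so bisect on its sign.
    if loss_permit(mid) > loss_ban(mid):
        lo_x = mid      # permitting is still the cheaper mistake: x_hat lies higher
    else:
        hi_x = mid
x_hat = 0.5 * (lo_x + hi_x)
print(round(x_hat, 6))  # 0.5 for this symmetric specification
```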

Now we analyze the judges' investigation decisions. The following lemma says that when the investigation cost is sufficiently low, the judge investigates with positive probability in each period, the cases that the judge investigates in period t form an interval, and the interval of investigation is larger in an earlier period. Intuitively, for a case that falls in the middle, it is less clear to a judge whether she should permit it or ban it, and the expected cost of making a mistake is higher; hence, the value of investigation for these cases is higher. Since the judge can use the information she acquires in an earlier period for later periods, the value of investigation is higher in an earlier period, resulting in more cases being investigated in an earlier period.

Lemma 2. In the three-period model under persuasive precedent, there exists z∗ > 0 such that if z < z∗, the judge investigates x_t (t = 1, 2, 3) with positive probability in equilibrium. Specifically, there exist x_t^L and x_t^H > x_t^L such that the judge investigates x_t if and only if x_t ∈ [x_t^L, x_t^H]. Moreover, x_1^L < x_2^L < x_3^L and x_3^H < x_2^H < x_1^H.

3.2 Binding precedent

We first show that in equilibrium, in each period t, the cases that judge t investigates also form a (possibly degenerate) interval under binding precedent.

Lemma 3. Under binding precedent, the set of cases that the judge investigates in period t in equilibrium is either empty or convex for any precedent (L_t, R_t); if x_t ∉ (L_t, R_t), then the judge does not investigate x_t in period t.

In the next proposition, we show that the judge investigates more under binding precedent than under persuasive precedent in period 1, but she investigates less under binding precedent than under persuasive precedent in periods 2 and 3.

Proposition 1. The judge investigates less under binding precedent than under persuasive precedent in period 3. Specifically, for any precedent (L_3, R_3), the judge investigates x_3 if and only if x_3 ∈ (L_3, R_3) ∩ [x_3^L, x_3^H]. The judge also investigates less under binding precedent than under persuasive precedent in period 2. Specifically, if [θ̲, θ̄] ⊆ [L_2, R_2], then the set of cases that the judge investigates in period 2 under binding precedent is the same as [x_2^L, x_2^H]; otherwise the set of cases she investigates under binding precedent is a subset of [x_2^L, x_2^H]. The judge investigates more under binding precedent than under persuasive precedent in period 1; that is, given any precedent (L_1, R_1) ⊃ (θ̲, θ̄), the set of cases that the judge investigates under binding precedent contains the set [x_1^L, x_1^H].

The reason that the judge investigates less in period 3 under binding precedent is that investigation has no value if x_3 ≤ L_3 or if x_3 ≥ R_3: the judge must permit any x_3 ≤ L_3 and must ban any x_3 ≥ R_3 regardless of the investigation outcome; moreover, since period 3 is the last period, the information about θ has no value for the future either. For x_3 ∈ (L_3, R_3), the judge faces the same incentives under binding and persuasive precedent and therefore investigates the same set of cases. If the precedent in period 2 satisfies [θ̲, θ̄] ⊆ [L_2, R_2], then investigation avoids mistakes in ruling in the current period as well as the future period even under binding precedent. In this case, the judge faces the same incentives under binding and persuasive precedent and therefore investigates the same set of cases. However, if the precedent in period 2 does not satisfy [θ̲, θ̄] ⊆ [L_2, R_2], then even if x_2 ∈ (L_2, R_2) and the judge investigates, mistakes in ruling can still happen in period 3 under binding precedent if θ ∉ [L_2, R_2], since the judge is bound to follow the precedent. In this case,

Proposition 3. Under persuasive precedent, the optimal investigation policy satisfies θ̲ < a^p ≤ b^p < θ̄.

Let EV∗ = ∫₀¹ V∗(x′) dG(x′). To find the optimal policy and the value function, note that

$$V^*(x) = \begin{cases} \delta EV^* & \text{if } x \le \underline{\theta} \text{ or } x \ge \bar{\theta}, \\ \int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta) + \delta EV^* & \text{if } \underline{\theta} < x < a^p, \\ -z & \text{if } x \in [a^p, b^p], \\ \int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta EV^* & \text{if } b^p < x < \bar{\theta}. \end{cases} \tag{2}$$

Hence, we have

$$EV^* = -z\left[G(b^p) - G(a^p)\right] + \delta EV^*\left[G(a^p) + 1 - G(b^p)\right] + \int_{\underline{\theta}}^{a^p}\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta)\,dG(x) + \int_{b^p}^{\bar{\theta}}\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta)\,dG(x).$$

For any a, b such that θ̲ < a ≤ b < θ̄, let

$$h(a, b) = \int_{\underline{\theta}}^{a}\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta)\,dG(x) + \int_{b}^{\bar{\theta}}\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta)\,dG(x).$$

Then

$$EV^* = \frac{h(a^p, b^p) - z\left[G(b^p) - G(a^p)\right]}{1 - \delta\left[G(a^p) + 1 - G(b^p)\right]}. \tag{3}$$

Since the judge is indifferent between investigating and not investigating when x = a^p, we have

$$-z = \int_{\underline{\theta}}^{a^p} \ell(a^p,\theta)\,dF(\theta) + \delta EV^*. \tag{4}$$

Similarly, since the appointed judge is indifferent between investigating and not investigating when x = b^p, we have

$$-z = \int_{b^p}^{\bar{\theta}} \ell(b^p,\theta)\,dF(\theta) + \delta EV^*. \tag{5}$$

We can solve for EV∗, a^p, and b^p from equations (3), (4), and (5). Plugging these into (2), we can solve for V∗(x).
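The system (3)-(5) generally has no closed form, but once primitives are specified it can be solved by simple fixed-point iteration. The sketch below uses purely illustrative choices of ours that are not taken from the paper: θ and x uniform on [0, 1], ℓ(x, θ) = −|x − θ|, δ = 0.9, z = 0.02.

```python
import math

# Fixed-point sketch of equations (3)-(5). All primitives here are illustrative
# assumptions of ours, not the paper's: theta ~ U[0,1], x ~ U[0,1],
# l(x, theta) = -|x - theta|, delta = 0.9, z = 0.02.

lo, hi, delta, z = 0.0, 1.0, 0.9, 0.02

def loss_permit(x):
    # integral from lo to x of l(x, theta) dF(theta): loss of an uninformed permit
    return -0.5 * (x - lo) ** 2 / (hi - lo)

def loss_ban(x):
    # integral from x to hi of l(x, theta) dF(theta): loss of an uninformed ban
    return -0.5 * (hi - x) ** 2 / (hi - lo)

EV = 0.0
for _ in range(500):
    # Indifference conditions (4)-(5): -z = loss_permit(a) + delta*EV and
    # symmetrically for b; clamp at zero if no interior solution exists yet.
    gap = max(z + delta * EV, 0.0)
    half = math.sqrt(2.0 * gap * (hi - lo))
    a, b = lo + half, hi - half
    # h(a, b) for these primitives, then update EV via equation (3).
    h = -((a - lo) ** 3 + (hi - b) ** 3) / (6.0 * (hi - lo))
    EV = (h - z * (b - a)) / (1.0 - delta * (a + 1.0 - b))

print(round(a, 3), round(b, 3), round(EV, 4))
```

For these parameters the iteration converges quickly to a symmetric investigation interval [a^p, b^p] around 1/2 with −z < EV∗ < 0.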

4.2 Binding precedent

We now consider binding precedent. Let F denote the set of bounded measurable functions on S taking values in ℝ. For f ∈ F, let ||f|| = sup{|f(s)| : s ∈ S}. An operator Q : F → F satisfies the contraction property for || · || if there is a β ∈ (0, 1) such that for f₁, f₂ ∈ F, we have ||Q(f₁) − Q(f₂)|| ≤ β||f₁ − f₂||. For any operator Q that satisfies the contraction property, there is a unique f ∈ F such that Q(f) = f.

Recall that ℒ is the (possibly degenerate) interval [θ̲, max{L, θ̲}] and ℛ is the (possibly degenerate) interval [min{R, θ̄}, θ̄]. Let A(s) denote the judge's dynamic payoff if she investigates in state s = ((L, R), x), not including the investigation cost. Formally,

$$A(s) = \mathbf{1}_{\mathcal{L}}(x)\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta) + \mathbf{1}_{\mathcal{R}}(x)\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta C(L,R).$$

Also, let g_p(s) be the judge's current-period payoff if she permits the case without investigation in state s, and g_b(s) her current-period payoff if she bans the case without investigation in state s. Formally,

$$g_p(s) = \begin{cases} \int_{\underline{\theta}}^{\max\{x,\underline{\theta}\}} \ell(x,\theta)\,dF(\theta) & \text{if } x < R, \\ -\infty & \text{if } x \ge R, \end{cases} \qquad g_b(s) = \begin{cases} \int_{\min\{x,\bar{\theta}\}}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) & \text{if } x > L, \\ -\infty & \text{if } x \le L. \end{cases}$$

For any V ∈ F and (L, R) ∈ S^p, let EV(L, R) = ∫₀¹ V(L, R, x′) dG(x′). Note that for any s ∈ S, μ∗(s) as defined in (B2) satisfies

$$\mu^*(s) \in \arg\max_{\mu \in \{0,1\}} \; \mu\left[-z + A(s)\right] + (1-\mu)\max\left\{g_p(s) + \delta EV^*(\max\{x, L\}, R),\; g_b(s) + \delta EV^*(L, \min\{x, R\})\right\}.$$
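The contraction argument can be seen on a toy Bellman-style operator. This is a generic illustration, not the paper's operator Q: the three-point state space, the rewards, the transitions, and β = 0.8 are arbitrary choices of ours.

```python
# Toy illustration of the contraction property: a Bellman-style operator
# (Qf)(s) = max_a [ r(s, a) + beta * f(t(s, a)) ] on a three-point state space.
# States, rewards, transitions, and beta are arbitrary; only beta < 1 matters.

beta = 0.8
states = [0, 1, 2]
r = {(s, a): float(s == a) for s in states for a in states}   # bounded rewards
t = {(s, a): (s + a) % 3 for s in states for a in states}     # deterministic moves

def Q(f):
    # Apply the operator to a function f represented as a list over states.
    return [max(r[s, a] + beta * f[t[s, a]] for a in states) for s in states]

def sup_dist(f, g):
    # Sup-norm distance ||f - g||.
    return max(abs(u - v) for u, v in zip(f, g))

f1, f2 = [0.0, 0.0, 0.0], [5.0, -3.0, 2.0]
# Q shrinks sup-norm distances by at least the factor beta < 1 ...
assert sup_dist(Q(f1), Q(f2)) <= beta * sup_dist(f1, f2)
# ... so iterating from any two starting guesses converges to the same unique
# fixed point f with Q(f) = f.
for _ in range(200):
    f1, f2 = Q(f1), Q(f2)
print(sup_dist(f1, f2) < 1e-10, sup_dist(Q(f1), f1) < 1e-10)  # True True
```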

(a(L, R), b(L, R)), where a(L, R) ≥ L and b(L, R) ≤ R. We refer to (a(L, R), b(L, R)) as the investigation interval under (L, R). We conjecture that if L < a(L, R) < b(L, R) < R, then under precedent (L, R), the judge is indifferent between investigating or not when x = a(L, R) and when x = b(L, R), which implies that the set of cases that the appointed judge investigates is the closed interval [a(L, R), b(L, R)].

Suppose that given the initial precedent (L_1, R_1), the set of cases that the judge investigates is nonempty (if it is empty, then no investigation will be carried out in any period). Specifically, the judge investigates x_1 if and only if x_1 ∈ [a(L_1, R_1), b(L_1, R_1)]. For notational simplicity, let a_1 = a(L_1, R_1) and b_1 = b(L_1, R_1). If x_1 ∈ [a_1, b_1], judge 1 investigates x_1; in this case, since θ becomes known, no future judge will investigate. If x_1 ∉ [a_1, b_1], then the judge makes a summary ruling without any investigation and changes the precedent to (x_1, R_1) if she permits the case and to (L_1, x_1) if she bans the case. Note that when judge 1 makes a summary ruling, the resulting new precedent satisfies L_2 < a_1 and b_1 < R_2. Monotonicity of μ∗ in Proposition 5 implies that the investigation interval in period 2 satisfies a(L_2, R_2) ≥ a_1 and b(L_2, R_2) ≤ b_1, and therefore we have L_2 < a(L_2, R_2) ≤ b(L_2, R_2) < R_2. An iteration of this argument shows that on any realized equilibrium path, the investigation interval is a strict subset of the precedent in any period and is either closed or empty. Denote a nonempty investigation interval on an equilibrium path by [a(L^e, R^e), b(L^e, R^e)]. By Propositions 5 and 6, we have L^e < a(L^e, R^e) ≤ b(L^e, R^e) < R^e, and given the precedent (L^e, R^e), the judge is indifferent between investigating x or not if x = a(L^e, R^e) or if x = b(L^e, R^e).

On the equilibrium path, the investigation intervals either converge to ∅ or to some nonempty set [â, b̂] such that if the precedent is (L, R) = (â, b̂), then a(L, R) = â and b(L, R) = b̂. More formally, consider a sequence {a_n, b_n, L_n, R_n} such that if {x : μ∗(L_n, R_n, x) = 1} ≠ ∅, then a_n = a(L_n, R_n), b_n = b(L_n, R_n), L_n < L_{n+1} < a(L_n, R_n), and b(L_n, R_n) < R_{n+1} < R_n; and if {x : μ∗(L_n, R_n, x) = 1} = ∅, then a_n = b_n = (L_n + R_n)/2, a_{n+1} = a_n, b_{n+1} = b_n, L_{n+1} = L_n, and R_{n+1} = R_n. Since L_1 < a(L_1, R_1) < b(L_1, R_1) < R_1, a(L, R) is increasing in L and decreasing in R, and b(L, R) is decreasing in L and increasing in R, we can find such a sequence.

Note that a_n is increasing and b_n is decreasing. Since a monotone and bounded sequence converges, we can define â = lim a_n and b̂ = lim b_n. We next show that in period 1, when the precedent is (L_1, R_1), the judge investigates more under binding precedent than under persuasive precedent. But as more precedents are established over time, the judge has less freedom in making her ruling when precedents are binding, and eventually she investigates less than if the precedent is persuasive. Recall that [a^p, b^p] denotes the set of cases that the judge investigates under persuasive precedent.

Proposition 7. We have (â, b̂) ⊂ [a^p, b^p] ⊂ [a(L_1, R_1), b(L_1, R_1)].

Proposition 7 is analogous to Proposition 1 in the three-period model, which says that the judge investigates more under binding precedent than under persuasive precedent in the first period but investigates less under binding precedent in the second and third periods.

$$\int_{\underline{\theta}}^{\hat{x}} \ell(\hat{x},\theta)\,dF(\theta) = \int_{\hat{x}}^{\bar{\theta}} \ell(\hat{x},\theta)\,dF(\theta).$$

Let z∗ = −∫_{θ̲}^{x̂} ℓ(x̂, θ) dF(θ) > 0. If z < z∗, then the judge investigates some cases in period 3. Specifically, suppose z < z∗ and let x_3^L < x̂ and x_3^H > x̂ be such that

$$\int_{\underline{\theta}}^{x_3^L} \ell(x_3^L,\theta)\,dF(\theta) = -z \quad \text{and} \quad \int_{x_3^H}^{\bar{\theta}} \ell(x_3^H,\theta)\,dF(\theta) = -z.$$

If x_3 ∈ [x_3^L, x_3^H], then the judge investigates case x_3 if she is uninformed. Let V_t^p be the expected continuation payoff of the judge in period t if no investigation was carried out in any previous period. Then we have

$$V_3^p = \int_{\underline{\theta}}^{x_3^L}\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta)\,dG(x) + \int_{x_3^H}^{\bar{\theta}}\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta)\,dG(x) - \left[G(x_3^H) - G(x_3^L)\right]z > -z.$$

Now consider period 2, and suppose the judge did not investigate in period 1. If the judge chooses to investigate in period 2, then her payoff in period 2 is −z and her expected payoff in period 3 is 0. If the judge chooses not to investigate in period 2, then by Lemma 1, she permits any case x_2 < x̂ and bans any case x_2 > x̂. Note that if x_2 ≤ θ̲ or if x_2 ≥ θ̄, then her payoff is 0 if she does not investigate, since she makes the correct decision. Consider θ̲ < x_2 < x̂ and suppose the judge does not investigate the case. Since she permits such a case, her expected payoff in period 2 is ∫_{θ̲}^{x_2} ℓ(x_2, θ) dF(θ). Similarly, for x̂ < x_2 < θ̄, if the judge does not investigate the case, she bans it, and her expected payoff in period 2 is ∫_{x_2}^{θ̄} ℓ(x_2, θ) dF(θ).

Now consider the judge's optimal investigation policy in period 2. For x_2 ∉ [θ̲, θ̄], since the judge's expected payoff is δV_3^p > −δz ≥ −z if she does not investigate the case and −z if she investigates, it is optimal for her not to investigate x_2. For θ̲ < x_2 < x̂, if

$$-z \ge \int_{\underline{\theta}}^{x_2} \ell(x_2,\theta)\,dF(\theta) + \delta V_3^p,$$

then it is optimal for the judge to investigate x_2 in period 2. Similarly, for x̂ < x_2 < θ̄, if

$$-z \ge \int_{x_2}^{\bar{\theta}} \ell(x_2,\theta)\,dF(\theta) + \delta V_3^p,$$

then it is optimal for the judge to investigate x_2 in period 2. For z < z∗, since V_3^p < 0, there exist x_2^L ∈ (θ̲, x_3^L) and x_2^H ∈ (x_3^H, θ̄) such that

$$-z = \int_{\underline{\theta}}^{x_2^L} \ell(x_2^L,\theta)\,dF(\theta) + \delta V_3^p = \int_{x_2^H}^{\bar{\theta}} \ell(x_2^H,\theta)\,dF(\theta) + \delta V_3^p.$$

For x_2 ∈ [x_2^L, x_2^H], it is optimal for the judge to investigate x_2. Thus, we have

$$V_2^p = \int_{\underline{\theta}}^{x_2^L}\left[\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta) + \delta V_3^p\right]dG(x) + \int_{x_2^H}^{\bar{\theta}}\left[\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta) + \delta V_3^p\right]dG(x) - \left[G(x_2^H) - G(x_2^L)\right]z.$$

Note that

$$V_3^p = \max_{\{a,b \in [\underline{\theta},\bar{\theta}],\, b > a\}} \int_{\underline{\theta}}^{a}\int_{\underline{\theta}}^{x} \ell(x,\theta)\,dF(\theta)\,dG(x) + \int_{b}^{\bar{\theta}}\int_{x}^{\bar{\theta}} \ell(x,\theta)\,dF(\theta)\,dG(x) - \left[G(b) - G(a)\right]z,$$

and V_3^p < 0. It follows that V_2^p < V_3^p.

Now consider period 1. If −z ≥ δV_2^p, then the judge investigates all cases in period 1. Suppose z < z∗ and z > −δV_2^p; then by a similar argument as in period 2, there exist x_1^L ∈ (θ̲, x_2^L) and x_1^H ∈ (x_2^H, θ̄) such that

$$-z = \int_{\underline{\theta}}^{x_1^L} \ell(x_1^L,\theta)\,dF(\theta) + \delta V_2^p = \int_{x_1^H}^{\bar{\theta}} \ell(x_1^H,\theta)\,dF(\theta) + \delta V_2^p.$$

For x_1 ∈ [x_1^L, x_1^H], it is optimal for the judge to investigate x_1 in period 1.
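The chain of indifference conditions above can be computed explicitly once primitives are fixed. The sketch below uses illustrative choices of ours, not the paper's (θ and x uniform on [0, 1], ℓ(x, θ) = −|x − θ|, δ = 0.9, z = 0.02), and checks the ordering of thresholds stated in Lemma 2:

```python
import math

# Backward induction for the three-period thresholds, under illustrative
# assumptions of ours (not the paper's): theta ~ U[0,1], x ~ U[0,1],
# l(x, theta) = -|x - theta|, delta = 0.9, z = 0.02.

delta, z = 0.9, 0.02

def loss_permit(x):
    # integral_0^x l(x, theta) dF(theta) = -x^2 / 2 for these primitives
    return -0.5 * x * x

z_star = 0.125            # equals -loss_permit(0.5); interior thresholds need z < z*
assert z < z_star

# Period 3: indifference -z = loss_permit(x3L); by symmetry x3H = 1 - x3L.
x3L = math.sqrt(2.0 * z)
x3H = 1.0 - x3L
V3 = 2.0 * (-x3L ** 3 / 6.0) - (x3H - x3L) * z        # exceeds -z

# Period 2: indifference -z = loss_permit(x2L) + delta * V3.
x2L = math.sqrt(2.0 * (z + delta * V3))
x2H = 1.0 - x2L
V2 = 2.0 * (-x2L ** 3 / 6.0 + delta * V3 * x2L) - (x2H - x2L) * z

# Period 1: indifference -z = loss_permit(x1L) + delta * V2.
x1L = math.sqrt(2.0 * (z + delta * V2))
x1H = 1.0 - x1L

# Lemma 2's nesting: the investigation interval shrinks from period 1 to period 3.
print(x1L < x2L < x3L, x3H < x2H < x1H)   # True True
```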

Proof of Lemma 3: Consider period 3 first. Suppose the judge has not investigated in a previous period. Recall that under persuasive precedent, the judge investigates x_3 if and only if x_3 ∈ [x_3^L, x_3^H]. Since under binding precedent investigation has no value if x_3 ≤ L_3 or if x_3 ≥ R_3, the judge investigates x_3 if x_3 ∈ [x_3^L, x_3^H] ∩ (L_3, R_3). Hence, the set of cases that the judge investigates in period 3 is either empty or convex, and the judge does not investigate x_3 if x_3 ∉ (L_3, R_3). Let k(L, R) denote the judge's expected payoff in period t under binding precedent when the precedent is (L, R) in period t, conditional on θ being known, where the