Symmetric Duopoly Model of Entry/Exit

Model Ingredients

Here are the basic ingredients of the model:

  • There are two firms indexed by \(f\in\{0,1\}\)
  • There are \(M\) markets indexed by \(m\)
  • Time is discrete and indexed by \(t\)
  • Each firm makes an entry decision every period. We let \(j\in\{0,1\}\) index this decision, with \(j=1\) indicating entry. Let \(j(f,m,t)\) indicate the choice of firm \(f\) in market \(m\) in period \(t\).
  • We let \(a_{f,m,t}=j(f,m,t-1)\) indicate whether firm \(f\) is active in market \(m\) in period \(t\), which means they entered in the previous period.
  • Let \(x_{m}\) be a market-level observable that shifts the profitability of operations in market \(m\).
  • In addition to the observed states, each firm draws a pair of idiosyncratic payoff shocks each period, \(\epsilon_{f}=[\epsilon_{f0},\epsilon_{f1}]\), that is private information to the firm and is iid over markets, firms, and time periods.
  • Firms make their decisions in each period simultaneously

To simplify notation, suppress dependence of outcomes on the market \(m\) and time period \(t\). Because we are writing a symmetric model, we will also suppress dependence on \(f\). The deterministic component of the payoff to entering is a function of the market primitives (\(x\)), the firm’s activity status (\(a\)), and the other firm’s entry decision \(j^\prime\):

\[ u_{1}(x,a,j^{\prime}) = \phi_{0} + \phi_{1}x - \phi_{2}j^\prime + \phi_{3}(1-a) \]

The payoff to not entering is simply:

\[{u}_{0}(x,a) = \phi_{4}a \]

Before characterizing the solution to the firm’s problem, let’s code up these payoff functions:


# ϕ[1],…,ϕ[5] correspond to φ₀,…,φ₄ in the equations above
u1(x,a,j′,ϕ) = ϕ[1] + ϕ[2]*x - ϕ[3]*j′ + ϕ[4]*(1-a)
u0(a,ϕ) = a * ϕ[5]
u0 (generic function with 1 method)

Solving the firm’s problem

Let \(j^*(x,a,a',\epsilon)\) be the firm’s optimal decision given the state and the idiosyncratic shock. We will focus on symmetric equilibria so this policy function is sufficient to describe the behavior of both firms.

The value to either firm of arriving in a period with state \((x,a,a')\) can be written recursively as:

\[ \begin{split} V(x,a,a') = \mathbb{E}_{\epsilon,\epsilon'}\max\{u_{1}(x,a,j^*(x,a',a,\epsilon'))+\epsilon_{1} + \beta V(x,1,j^*(x,a',a,\epsilon')), \\ u_{0}(x,a) + \epsilon_{0} + \beta V(x,0,j^*(x,a',a,\epsilon'))\} \end{split} \]

Define the optimal choice probability in equilibrium as:

\[ p(x,a,a') = \int_{\epsilon}j^*(x,a,a',\epsilon)dF(\epsilon) \]

With this in hand we can integrate out the other firm’s shocks \(\epsilon'\) to get:

\[ \begin{split} V(x,a,a') = \mathbb{E}_{\epsilon}\max\{\phi_{0}+\phi_{1}x - \phi_{2}p(x,a',a) + \phi_{3}(1-a) +\epsilon_{1} + \beta [p(x,a',a)V(x,1,1) + (1-p(x,a',a))V(x,1,0)], \\ a \phi_{4} + \epsilon_{0} + \beta [p(x,a',a)V(x,0,1) + (1-p(x,a',a))V(x,0,0)]\} \end{split} \]

Define the choice-specific values as:

\[ v_{1}(x,a,a') = \phi_{0}+\phi_{1}x - \phi_{2}p(x,a',a) + \phi_{3}(1-a) + \beta [p(x,a',a)V(x,1,1) + (1-p(x,a',a))V(x,1,0)] \]

and

\[ v_{0}(x,a,a') = a \phi_{4} + \beta [p(x,a',a)V(x,0,1) + (1-p(x,a',a))V(x,0,0)] \]

Assuming that \(\epsilon\) is distributed as a type I extreme value random variable with location parameter 0 and scale parameter 1, we get analytical expressions for the choice probabilities and the expected value of the maximum:

\[ V(x,a,a') = \gamma + \log\left(\exp(v_{0}(x,a,a'))+\exp(v_{1}(x,a,a'))\right)\]

where \(\gamma\) is the Euler-Mascheroni constant and

\[ p(x,a,a') = \frac{\exp(v_{1}(x,a,a'))}{\exp(v_{0}(x,a,a'))+\exp(v_{1}(x,a,a'))} \]
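As a quick sanity check on these closed forms (not part of the model code), we can simulate type I extreme value shocks by inverting the CDF and compare the simulated expected maximum and choice probability to the formulas. The values `v0` and `v1` below are arbitrary placeholders:

```julia
using Statistics

v0, v1 = 0.5, 1.2                      # arbitrary choice-specific values
γ = Base.MathConstants.eulergamma
# type I extreme value draws via the inverse CDF: ε = -log(-log(u))
ε0 = -log.(-log.(rand(1_000_000)))
ε1 = -log.(-log.(rand(1_000_000)))
emax_sim = mean(max.(v0 .+ ε0, v1 .+ ε1))
emax_formula = γ + log(exp(v0) + exp(v1))
p_sim = mean(v1 .+ ε1 .> v0 .+ ε0)
p_formula = exp(v1) / (exp(v0) + exp(v1))
```

With a million draws the simulated quantities should agree with the formulas up to simulation error (on the order of \(10^{-3}\)).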

Before we define equilibrium and think about solving the model, let’s quickly write up the mapping between the other firm’s choice probabilities and the choice values:

# Fixing x, assume V[a+1,a′+1] stores V(x,a,a′)
# The argument p is the current guess of the other firm's entry probability, p(x,a′,a)
function choice_values(x,a,p,V,ϕ,β)
    v0 = u0(a,ϕ) + β * p * V[1,2] + β * (1-p) * V[1,1]
    v1 = u1(x,a,p,ϕ) + β * p * V[2,2] + β * (1-p) * V[2,1]
    return v0,v1
end
choice_values (generic function with 1 method)

In principle we could iterate on this mapping to find, for a fixed \(p\), the firm’s optimal value and policy. But that won’t be an efficient way to solve for the equilibrium.
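To make that concrete, here is a sketch of the inner iteration: holding the other firm’s choice probabilities fixed at some \(p\), repeatedly apply the Bellman mapping via `choice_values` until \(V\) converges. The helper name `solve_V_fixed_p` is just for illustration:

```julia
# Iterate the firm's Bellman mapping holding the opponent's choice
# probabilities fixed at p (a 2x2 array with p[a′+1,a+1] = p(x,a′,a)).
# For fixed p this mapping is a contraction with modulus β.
function solve_V_fixed_p(x, p, ϕ, β; tol = 1e-10, max_iter = 10_000)
    V = zeros(2, 2)
    for _ in 1:max_iter
        Vnew = similar(V)
        for a′ in 1:2, a in 1:2
            v0, v1 = choice_values(x, a - 1, p[a′, a], V, ϕ, β)
            Vnew[a, a′] = log(exp(v0) + exp(v1))
        end
        maximum(abs.(Vnew .- V)) < tol && return Vnew
        V = Vnew
    end
    return V
end
```

Solving for equilibrium would then require an outer loop that updates \(p\) as well, which is why this nested approach is inefficient.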

Equilibrium

The solution concept for this model is Markov Perfect Equilibrium. Fixing the market \(x\), the equilibrium can be characterized as a fixed point in the value function \(V\) and the choice probabilities \(p\). In words, an equilibrium is summarized by a \(V\) and a \(p\) such that:

  1. Given \(p\), \(V\) is a fixed point in the recursive formulation of values; and
  2. \(p\) gives the optimal choice probabilities of each firm given \(V\), when the other firm’s choice probabilities are also \(p\).

How should we solve for this symmetric equilibrium? We could try iterating on \(V\) and \(p\) as follows:

# V[a+1,a′+1] stores the value V(x,a,a′)
# p[a+1,a′+1] stores the choice probability p(x,a,a′)
function iterate_model(V,p,x,ϕ,β)
    Vnew = copy(V)
    pnew = copy(p)
    for a′ in axes(V,2)
        for a in axes(V,1)
            p′ = p[a′,a] #<- the other firm's entry probability, p(x,a′,a)
            v0,v1 = choice_values(x,a-1,p′,V,ϕ,β)
            pnew[a,a′] = exp(v1) / (exp(v0)+exp(v1))
            # the constant γ is dropped here; it shifts V but leaves the
            # choice probabilities unchanged
            Vnew[a,a′] = log(exp(v0)+exp(v1))
        end
    end
    return Vnew,pnew
end

function solve_by_iteration(x,ϕ,β; max_iter = 1000, verbose = false)
    V0 = zeros(2,2)
    p0 = fill(0.1,2,2)
    err = Inf
    iter = 1
    while err>1e-10 && iter<max_iter
        V1,p1 = iterate_model(V0,p0,x,ϕ,β)
        err = maximum(abs.(V1 .- V0))
        if mod(iter,100)==0 && verbose
            println("Iteration $iter, error is $err")
        end
        V0 = V1
        p0 = p1
        iter += 1
    end
    return V0,p0
end

β = 0.95
ϕ = 2 * [1.,0.1,0.5,2.,0.5]
solve_by_iteration(0.,ϕ,β; verbose = true)
Iteration 100, error is 0.04823924738592211
Iteration 200, error is 0.001242310554474102
Iteration 300, error is 5.032709619001707e-5
Iteration 400, error is 2.123540213005981e-6
Iteration 500, error is 8.863101186307176e-8
Iteration 600, error is 3.693557459882868e-9
Iteration 700, error is 1.538467131467769e-10
([69.73147518902888 70.96824731388737; 68.46263413289174 67.89546273371974], [0.9107652821657111 0.990549524651413; 0.052475860075290155 0.27729654446688445])

This seems to work! But notice that it takes a while for the iteration to converge. Also, unlike the single-agent case, there is no guarantee that this iteration is a contraction.

We can also solve this model relatively easily using Newton’s Method and the magic of Automatic Differentiation. To do this, we’ll solve over the pair of choice-specific values \(v_{0}\) and \(v_{1}\) (these encode both values and choice probabilities) and store these values as a vector instead of an array:

using ForwardDiff, LinearAlgebra

# this function returns V as a 2 x 2 array given the vector v of choice-specific values
function calc_V(v)
    idx = LinearIndices((2,2,2))
    [log(exp(v[idx[1,1+a,1+a′]]) + exp(v[idx[2,1+a,1+a′]])) for a in 0:1, a′ in 0:1]
end

# this function returns choice probabilities as a 2x2 array given the vector v
function calc_p(v)
    idx = LinearIndices((2,2,2))
    [1 / (1+exp(v[idx[1,1+a,1+a′]] - v[idx[2,1+a,1+a′]])) for a in 0:1, a′ in 0:1]
end


function iterate_model_v(v,x,ϕ,β)
    idx = LinearIndices((2,2,2)) #<- this is for convenient indexing over v
    vnew = copy(v)
    V = calc_V(v)
    for a′ in axes(idx,3)
        for a in axes(idx,2)
            i0′ = idx[1,a′,a] #<- this locates the position in v for v_{0}(x,a',a)
            i1′ = idx[2,a′,a] #<- this locates the position in v for v_{1}(x,a',a)
            p = 1 / (1 + exp(v[i0′] - v[i1′]))
            v0,v1 = choice_values(x,a-1,p,V,ϕ,β)
            vnew[idx[1,a,a′]] = v0
            vnew[idx[2,a,a′]] = v1
        end
    end
    return vnew
end

F(v,x,ϕ,β) = v .- iterate_model_v(v,x,ϕ,β)
function solve_model_newton(x,ϕ,β;max_iter = 10, verbose = false)
    v = zeros(8)
    dF(v) = ForwardDiff.jacobian(y->F(y,x,ϕ,β),v)
    err = Inf
    iter = 1
    while (err>1e-10) && (iter<max_iter)
        Fv = F(v,x,ϕ,β)
        dFv = dF(v)
        vnew = v - dFv \ Fv #<- solve the linear system rather than forming inv(dFv)
        err = maximum(abs.(Fv))
        if verbose
            println("Iteration $iter, error is $err")
        end
        iter += 1
        v = vnew
    end
    return v
end

solve_model_newton(0.,ϕ,β;verbose = true);
Iteration 1, error is 6.158489821531948
Iteration 2, error is 1.7766463237555712
Iteration 3, error is 0.056247498263360285
Iteration 4, error is 0.00028434628951856666
Iteration 5, error is 3.4473004006940755e-8
Iteration 6, error is 1.4210854715202004e-14

Let’s try timing each solution method to quickly compare:

# run each solver once first so compilation time is excluded from the timings
solve_model_newton(0.,ϕ,β)
solve_by_iteration(0.,ϕ,β)

@time solve_model_newton(0.,ϕ,β)
@time solve_by_iteration(0.,ϕ,β)
  0.000099 seconds (157 allocations: 58.250 KiB)
  0.000251 seconds (2.15 k allocations: 201.031 KiB)
([69.73147518902888 70.96824731388737; 68.46263413289174 67.89546273371974], [0.9107652821657111 0.990549524651413; 0.052475860075290155 0.27729654446688445])

In this case Newton’s method is faster. Let’s double check that both methods return the same answer:

v0 = solve_model_newton(0.,ϕ,β)
V0,p = solve_by_iteration(0.,ϕ,β)
p1 = calc_p(v0)
[p p1]
2×4 Matrix{Float64}:
 0.910765   0.99055   0.910765   0.99055
 0.0524759  0.277297  0.0524759  0.277297

Looks good! We can re-use this code when we get to thinking about estimation later on. To do this we will have to solve the model for different values of \(x_{m}\), but that can be done by using this code and iterating (potentially in parallel) over different values of \(x\).
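For example, here is a sketch of that outer loop. The grid and the helper name `solve_over_grid` are illustrative, not from the notes; with `Threads.@threads` the markets solve in parallel when Julia is started with multiple threads:

```julia
# Solve the equilibrium market-by-market over a grid of x values.
# Each market's fixed point is independent, so the loop parallelizes trivially.
function solve_over_grid(xgrid, ϕ, β)
    sols = Vector{NTuple{2,Matrix{Float64}}}(undef, length(xgrid))
    Threads.@threads for i in eachindex(xgrid)
        sols[i] = solve_by_iteration(xgrid[i], ϕ, β)
    end
    return sols
end

xgrid = range(-1., 1., length = 5)
sols = solve_over_grid(xgrid, ϕ, β)
```

The same pattern works with `solve_model_newton` in the loop body, applying `calc_V` and `calc_p` to each solution afterwards.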

If you play around with the parameters, you will see how convergence times change and that the solution methods are not always stable, especially when equilibrium choice probabilities are very close to one or zero.