

Nonmyopia and incentives in the piecewise linearized MDP procedures with variable step-sizes

Abstract

The paper formulates a piecewise linearized version of the procedure developed by Sato (1983) and analyzes its properties. In so doing, Fujigaki's (1981) private goods economy is extended to involve a public good, and the intertemporal game of Champsaur and Laroque is piecewise localized by dividing the time interval in their game and by using variable step-sizes to formalize the piecewise linearized procedure, called the λMDP Procedure, which possesses desirable properties similar to those of continuous-time procedures. Under the nonmyopia assumption, each player's best reply strategy at any discrete date is to reveal his/her anticipated marginal rate of substitution for the public good at the end of the current time interval of the λMDP Procedure.

JEL Classification: H41.

1 Introduction

This paper formulates a piecewise linearized version of the procedure developed by Sato ([1983]) and then analyzes its properties. In so doing, Fujigaki's ([1981]) private good economy is extended to involve a public good. Also, the intertemporal game of Champsaur and Laroque ([1981]) and ([1982]) is piecewise localized by dividing the time interval in their game and by using variable step-sizes for revising the amount of the public good to formalize the procedure. The resulting procedure possesses desirable features similar to those of a continuous one, i.e., efficiency and incentive compatibility. I employ the idea of modeling agents as having an intermediate time horizon, which differs from the previous results on incentives in either continuous or discrete planning procedures.

The MDP Procedure received a lot of attention in the 1970s and 1980s, especially regarding the problem of incentives in planning procedures with public goods, but there has been very little work on it over the last fifteen years. This paper is a follow-up to the literature on the use of such processes as mechanisms for aggregating the decentralized information needed for guiding and financing a public good.

Initiated by three great pioneers - Malinvaud ([1970-1971]), and Drèze and de la Vallée Poussin ([1971]) - this field of research has made remarkable progress in the last three decades. The analyses of incentives in planning tâtonnement procedures began in the late sixties and were mathematically refined by the characterization theorems of Champsaur and Rochet ([1983]), which generalized the previous results of Fujigaki and Sato ([1981, 1982]) as well as Laffont and Maskin ([1983]). The incentive theory in the planning context reached its acme with Champsaur and Rochet's generic theorems. Most of these procedures can be characterized by the following axioms, the formal definitions of which are given in Section 3:

(i) Feasibility;

(ii) Monotonicity;

(iii) Pareto Efficiency;

(iv) Local Strategy Proofness;

(v) Neutrality.

The procedure to be presented aims at bridging the gap between local and intertemporal games. Our process differs from that of Champsaur, Drèze, and Henry ([1977]) in the sense that the step-sizes for revising the public good are variable at each iteration along the solution paths. Our procedure is also different from that of Green and Schoumaker ([1980]), where global information, viz., a part of each player's indifference curve, needs to be revealed. Only local information, i.e., the marginal rates of substitution (MRSs) of the players, is required to determine the trajectories of our piecewise intertemporal process. It is verified that the best reply strategy for each player at each discrete date is to reveal his/her anticipated true MRS for the public good at the end of the current time interval, which maximizes each player's payoff in the piecewise intertemporal incentive game. Thus, our procedure can achieve 'piecewise intertemporal strategy proofness.'

The remainder of the paper is organized as follows. The next section outlines the general framework. Section 3 reviews the MDP Procedure and renames the LSP MDP and Generalized MDP Processes. Section 4 presents a piecewise linearized version of the Generalized MDP Procedures with variable step-sizes and then examines its properties. Section 4 also explores players' strategic manipulability in the piecewise intertemporal incentive game associated with each time interval of the procedure and presents our theorems. Discussions of myopia and discrete procedures are given in Section 5. The last section provides some final remarks. Proofs of the theorems are given in the Appendix.

2 The model

The simplest model incorporating the essential features of the problem proposed in this paper involves two goods, one public good and one private good, whose quantities are represented by $x$ and $y$, respectively. $y_i$ denotes the amount of the private good allocated to the $i$-th consumer. The economy is supposed to possess $N$ individuals. Each consumer $i \in N = \{1, \ldots, N\}$ is characterized by his/her initial endowment of the private good $\omega_i$ and his/her utility function $u_i: \mathbb{R}_+^2 \to \mathbb{R}$.

The production sector is represented by the transformation function $G: \mathbb{R}_+ \to \mathbb{R}_+$, where $y = G(x)$ signifies the minimal quantity of the private good needed to produce the public good quantity $x$. It is assumed as usual that there is no production of the private good.

The following assumptions and definitions are used throughout this paper.

Assumption 1 For any $i \in N$, $u_i(\cdot, \cdot)$ is strictly quasi-concave and at least twice continuously differentiable.

Assumption 2 For any $i \in N$, $u_{ix}(x, y_i) \equiv \partial u_i(x, y_i)/\partial x \ge 0$, $u_{iy}(x, y_i) \equiv \partial u_i(x, y_i)/\partial y_i > 0$, and $u_i(0, 0) = 0$ for any $(x, y_i)$.

Assumption 3 G(x) is convex and twice continuously differentiable.

Let $\gamma(x) = dG(x)/dx$ denote the marginal rate of transformation, which is assumed to be known to the planning center. The center asks each individual $i$ to report his/her marginal rate of substitution between the public good and the private good used as a numéraire:

\[ \pi_i(x, y_i) = u_{ix}(x, y_i)/u_{iy}(x, y_i). \]

Definition 1 An allocation z is feasible if and only if

\[ z \in Z = \Bigl\{ (x, y_1, \ldots, y_N) \in \mathbb{R}_+^{N+1} \Bigm| \sum_{i \in N} y_i + G(x) = \sum_{i \in N} \omega_i \Bigr\}. \]

Definition 2 An allocation z is individually rational if and only if

\[ u_i(x, y_i) \ge u_i(0, \omega_i), \quad \forall i \in N. \]

Definition 3 A Pareto optimum for this economy is an allocation $z^* \in Z$ such that there exists no feasible allocation $z$ with

\[ u_i(x, y_i) \ge u_i(x^*, y_i^*), \quad \forall i \in N, \qquad u_j(x, y_j) > u_j(x^*, y_j^*), \quad \text{for some } j \in N. \]

These assumptions and definitions altogether give us conditions for Pareto optimality in our economy.

Lemma 1 Under Assumptions 1-3, the necessary and sufficient conditions for an allocation to be Pareto optimal are

\[ \sum_{i \in N} \pi_i \le \gamma \quad \text{and} \quad \Bigl( \sum_{i \in N} \pi_i - \gamma \Bigr) x = 0. \]
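As an illustration of Lemma 1 (my addition, not in the original), take quasi-linear utilities: let $u_i(x, y_i) = a_i \ln x + y_i$ with $a_i > 0$ and $G(x) = x$, so that $\pi_i = a_i/x$ and $\gamma = 1$. At an interior optimum, Lemma 1 requires

\[ \sum_{i \in N} \frac{a_i}{x} = 1 \quad \Longrightarrow \quad x^* = \sum_{i \in N} a_i, \]

so the optimal public good quantity is pinned down by the aggregate taste parameters.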

Furthermore, conventional mathematical notation is used throughout in the same manner as in Sato ([2011]). Hereafter all variables are assumed to be functions of time $t$; however, the argument $t$ is often omitted unless confusion could arise. The analyses in the following sections bypass the possibility of a boundary problem at $x(t) = 0$. This is an innocuous assumption in the single public good case, because $x$ is always increasing. The results below can be applied to the model with many public goods.

3 The class of MDP Procedures

3.1 A brief review of the MDP Procedure and its properties

Let us describe a generic model of our planning procedures for a public good and a private good as:

\[ dx/dt \equiv X(t), \qquad dy_i/dt \equiv Y_i(t), \quad i \in N. \]

The MDP Procedure is the best-known member of the family of quantity-guided procedures, in which the planning center asks individual agents their MRSs between the public good and the private numéraire. The center then revises the allocation according to the discrepancy between the reported MRSs and the MRT. The relevant information exchanged between the center and the periphery is in the form of quantities. Let $\psi(t) = (\psi_1(t), \ldots, \psi_N(t)) \in \mathbb{R}_+^N$ be a vector of MRSs announced at any iteration $t \in [0, \infty)$ of the procedure. Needless to say, $\psi_i$ is not necessarily equal to $\pi_i$; thus, the incentive problem matters.

The MDP Procedure reads:

\[ X(\psi(t)) = \sum_{j \in N} \psi_j(t) - \gamma(t), \qquad Y_i(\psi(t)) = -\psi_i(t) X(\psi(t)) + \delta_i \Bigl\{ \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr\} X(\psi(t)), \quad i \in N. \]

Denote by $\delta_i > 0$, $i \in N$, with $\sum_{i \in N} \delta_i = 1$, a distributional coefficient determined by the planner prior to the beginning of the operation of the procedure. Its role is to share among individuals the 'social surplus,' $\{\sum_{j \in N} \psi_j(t) - \gamma(t)\} X(\psi(t))$, which is always positive except at the equilibrium.
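As a numerical illustration (my addition, not part of the original paper), the following Python sketch simulates the MDP Procedure by Euler discretization; the quasi-linear utilities, parameter values, and step size dt are assumptions made only for this example.

```python
import numpy as np

# Minimal sketch (assumed setup): u_i(x, y_i) = a_i*ln(x) + y_i, G(x) = x,
# so pi_i = a_i/x and gamma = 1. Truthful reports psi_i = pi_i are assumed.
a = np.array([1.0, 2.0, 3.0])        # taste parameters (assumption)
delta = np.array([0.2, 0.3, 0.5])    # surplus shares, sum to one
gamma = 1.0                          # MRT of G(x) = x
x, y = 0.5, np.array([10.0, 10.0, 10.0])
dt = 0.01                            # Euler step (assumption)
for _ in range(5000):
    psi = a / x                      # truthful MRS reports
    X = psi.sum() - gamma            # public good revision speed
    Y = -psi * X + delta * X**2      # private good revision speeds
    x += dt * X
    y += dt * Y
print(x, a.sum())                    # x approaches x* = sum(a): Samuelson condition
print((a / x).sum() - gamma)         # aggregate discrepancy ~ 0
print(y.sum() + x)                   # feasibility: sum(y) + G(x) keeps its initial value
```

Along the simulated path each instantaneous utility change equals $u_{iy} \delta_i X^2 \ge 0$, which previews the monotonicity condition stated in Section 3.2.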

Remark 1 $\delta_i > 0$ was posited by Drèze and de la Vallée Poussin ([1971]), and followed by Roberts ([1979a, 1979b]), whereas $\delta_i \ge 0$ was assumed by Champsaur ([1976]), who advocated a notion of neutrality to be explained below.

A local incentive game associated with each iteration of the process is formally defined as the normal form game $(N, \Psi, U)$; $N$ is the set of players, $\Psi = \times_{j \in N} \Psi_j \subset \mathbb{R}_+^N$ is the Cartesian product of the $\Psi_j$, where $\Psi_j$ is the set of player $j$'s strategies, and $U = (U_1, \ldots, U_N)$ is the $N$-tuple of payoff functions. The time derivative of consumer $i$'s utility is such that

\[ du_i/dt \equiv U_i(\psi(t)) = u_{ix} X(\psi(t)) + u_{iy} Y_i(\psi(t)) = u_{iy} \{ \pi_i X(\psi(t)) + Y_i(\psi(t)) \}, \]

which is the payoff that each individual obtains in the local incentive game along the procedure.

The behavioral hypothesis underlying the above equations is the following myopia assumption: each player determines his/her strategy $\psi_i \in \Psi_i$ in order to maximize his/her instantaneous utility increment $U_i(\psi(t))$.

3.2 Normative conditions for the family of the MDP Procedures

The conditions presented in the Introduction are now stated formally. Let $\psi_{-i} = (\psi_1, \ldots, \psi_{i-1}, \psi_{i+1}, \ldots, \psi_N) \in \Psi_{-i} = \times_{j \in N \setminus \{i\}} \Psi_j \subset \mathbb{R}_+^{N-1}$.

Condition F Feasibility:

\[ \gamma(t) X(\psi(t)) + \sum_{j \in N} Y_j(\psi(t)) = 0, \quad \forall \psi \in \Psi, \forall t \in [0, \infty). \]

Condition M Monotonicity:

\[ U_i(\psi(t)) = u_{iy} \{ \pi_i(t) X(\psi(t)) + Y_i(\psi(t)) \} \ge 0, \quad \forall \psi \in \Psi, \forall i \in N, \forall t \in [0, \infty). \]

Condition PE Pareto Efficiency:

\[ X(\psi(t)) = 0 \quad \Longleftrightarrow \quad \sum_{i \in N} \psi_i(t) = \gamma(t), \quad \forall \psi \in \Psi. \]

Condition LSP Local Strategy Proofness:

\[ \pi_i(t) X(\pi_i(t), \psi_{-i}(t)) + Y_i(\pi_i(t), \psi_{-i}(t)) \ge \pi_i(t) X(\psi(t)) + Y_i(\psi(t)), \quad \forall \psi \in \Psi, \forall \psi_{-i} \in \Psi_{-i}, \forall i \in N, \forall t \in [0, \infty). \]

Condition N Neutrality:

\[ \forall z^* \in P_0, \ \exists \delta \in \Delta: \ z^* = \lim_{t \to \infty} z(t), \]

where $P_0$ is the set of individually rational Pareto optima (IRPO), $\Delta$ is the set of $\delta = (\delta_1, \ldots, \delta_N)$, and $z(\cdot) \in Z$ is a solution of the procedure.

It was Champsaur ([1976]) who advocated the notion of neutrality for the MDP Procedure, and Cornet ([1977b]) generalized it by omitting two restrictive assumptions imposed by Champsaur, i.e., (i) uniqueness of the solution and (ii) concavity of the utility functions. Neutrality depends on the distributional coefficient vector $\delta$. Remember that the role of $\delta$ is to attain any IRPO by redistributing the social surplus generated during the operation of the procedure: the choice of $\delta$ varies the trajectories so as to reach every IRPO. In other words, the planning center can guide the allocation via the choice of $\delta$; however, it cannot predetermine the final allocation to be achieved. This is a very important property for noncooperative games, since the equity considerations among players matter. [Footnote 1]

Remark 2 The conditions except PE must be fulfilled for any $t \in [0, \infty)$. PE is based on the announced values $\psi_i$, $i \in N$, which implies that the Pareto optimum reached is not necessarily equal to the one achieved under truthful revelation of preferences for the public good. Condition LSP signifies that truth-telling is a dominant strategy. Condition N means that for every efficient point $z^* \in Z$ and for any initial point $z_0 \in Z$, there exist $\delta$ and $z(t, \delta)$, a trajectory starting from $z_0$, such that $z^* = z(\infty, \delta)$.

The MDP Procedure enjoys feasibility, monotonicity, stability, neutrality, and incentive properties pertaining to minimax and Nash strategies, as was proved by Drèze and de la Vallée Poussin ([1971]) and Roberts ([1979a, 1979b]). The MDP Procedure as an algorithm evolves in the allocation space and stops when the Samuelson conditions are met, so that the public good quantity is optimal and, simultaneously, the private good is allocated in a Pareto optimal way, i.e., $(x, y_1, \ldots, y_N)$ is Pareto optimal. Malinvaud ([1971, 1972]) designed price-guided and price-quantity-guided planning procedures. Drèze ([1972]) constructed a tâtonnement process under uncertainty.

3.3 The process renamed the LSP MDP Procedure

In our context, as a planner's most important task is to achieve an optimal allocation of the public good, he or she has to collect the relevant information from the periphery so as to meet the conditions presented above. Fortunately, the necessary information is available if the procedure is locally strategy proof. It was already shown by Fujigaki and Sato ([1982]), however, that the locally strategy proof MDP Procedure cannot preserve neutrality, since $\delta_i$, $\forall i \in N$, was concluded to be fixed at $1/N$ in order to accomplish LSP while keeping the other conditions fulfilled; note that $\delta_i = 1/N \neq 0$, since $N$ is greater than two.

Fujigaki and Sato ([1981]) presented the LSP MDP Procedure, which reads:

\[ X(\psi(t)) = \Bigl( \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr) \Bigl| \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr|^{N-2}, \qquad Y_i(\psi(t)) = -\psi_i(t) X(\psi(t)) + \frac{1}{N} \Bigl( \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr) X(\psi(t)), \quad i \in N. \]

Remark 3 We termed our procedure the 'Generalized MDP Procedure' in our 1981 paper. Certainly, the public good decision function was generalized to include that of the MDP Procedure, whereas the distributional vector was fixed at the above specific value. Thus, to be more precise, let me hereafter call the above procedure the 'LSP MDP Procedure.' The genuine Generalized MDP Procedure is presented below.
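To see the dominant-strategy property numerically, here is a minimal sketch (my illustration; the utilities and parameter values are assumed). It evaluates player $i$'s instantaneous payoff in the local game, $(\pi_i - \psi_i)X + (1/N)(\sum_{j} \psi_j - \gamma)X$, over a grid of reports and confirms that the maximizer is the true MRS regardless of what the others announce.

```python
import numpy as np

def X_lsp(s, N):
    # LSP public good decision function: (sum psi - gamma)|sum psi - gamma|^(N-2)
    return s * np.abs(s) ** (N - 2)

def payoff_i(psi_i, pi_i, others, gamma, N):
    # instantaneous payoff (up to u_iy > 0) in the local incentive game
    s = psi_i + others.sum() - gamma
    X = X_lsp(s, N)
    return (pi_i - psi_i) * X + (s / N) * X

N, gamma, pi_i = 3, 1.0, 1.7          # assumed example values
grid = np.linspace(0.0, 4.0, 4001)
for others in [np.array([0.2, 0.4]), np.array([2.0, 3.0])]:
    best = grid[np.argmax([payoff_i(p, pi_i, others, gamma, N) for p in grid])]
    print(best)                        # ~1.7 in both cases: truth is dominant
```

The peak sits at $\psi_i = \pi_i$ because, under this decision function, the payoff derivative with respect to the own report reduces to $(\pi_i - \psi_i) X'$.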

The LSP MDP Procedure for optimally providing the public good has the following properties:

(i) The Procedure monotonically converges to an individually rational Pareto optimum, even if agents do not report their true valuations, i.e., their MRSs for the public good.

(ii) Revealing his/her true MRS is always a dominant strategy for each myopically behaving agent.

(iii) The Procedure generates, in the feasible allocation space, trajectories similar to those of the MDP Procedure with a uniform distribution of the instantaneous surplus generated at each iteration, which leaves the planning authority no influence on the final plan. Hence, the Procedure is nonneutral.

Remark 4 Property (ii) is an important one that cannot be enjoyed by the original MDP Process except when there are only two agents with equal surplus shares, i.e., $\delta_i = 1/2$, $i = 1, 2$. The nonneutrality result in (iii) can be remedied by designing the Generalized MDP Procedure below. See Roberts ([1979a, 1979b]) for these properties.

The theorems are enumerated without proofs, which were given in Fujigaki and Sato ([1981]).

Theorem 1 The LSP MDP Procedure fulfills Conditions F, M, PE, and LSP. However, it cannot satisfy Condition N.

Theorem 2 For the LSP MDP Procedure and for any $z_0 \in Z$, there exists a unique solution $z(\cdot): [0, \infty) \to Z$ such that $\lim_{t \to \infty} z(t)$ exists and is a Pareto optimum.

3.4 The Generalized MDP Procedures

In the local incentive game the planner can come to know the true information of individuals, since the LSP MDP Procedure induces them to reveal it. Its operation does not even require truthfulness of each player to be a Nash equilibrium strategy; it needs only aggregate correct revelation to be a Nash equilibrium, as was verified in Sato ([1983]). It is easily seen from the above discussion that the LSP MDP Procedure is not neutral at all, which means that local strategy proofness impedes the attainment of neutrality. Hence, Sato ([1983]) proposed another version of neutrality, as well as Condition Aggregate Correct Revelation (ACR), which is much weaker than LSP.

In order to present Condition ACR, I need some notation. $\phi_i$ is a Nash equilibrium strategy given by Roberts ([1979a, 1979b]) as

\[ \phi_i = \pi_i - \frac{1 - 2\delta_i}{N - 1} \Bigl( \sum_{j \in N} \psi_j - \gamma \Bigr), \quad i \in N. \]

Let $\pi = (\pi_1, \ldots, \pi_N)$ be a vector of MRSs for the public good and $\Pi$ be its set. The condition can be stated in our context as follows:

Condition ACR Aggregate Correct Revelation:

\[ \sum_{i \in N} \phi_i(\pi(t)) = \sum_{i \in N} \pi_i(t), \quad \forall \pi \in \Pi, \forall t \in [0, \infty). \]

Remark 5 Condition ACR means that the sum of the Nash equilibrium strategies $\phi_i$, $i \in N$, always coincides with the aggregate value of the correct MRSs. Clearly, ACR only claims truthfulness in the aggregate.

I also need the following two conditions. Let $\rho: \mathbb{R}_+^N \to \mathbb{R}_+^N$ be a permutation function and $T_i(\psi)$ be a transfer in private good to agent $i$.

Condition TA Transfer Anonymity:

\[ T_i(\psi) = T_i(\rho(\psi)), \quad \forall \psi \in \Psi, \forall i \in N. \]

Remark 6 Condition TA says that agent $i$'s transfer in private good is invariant under permutations of its arguments, i.e., the order of strategies does not affect the value of $T_i(\psi)$, $\forall i \in N$. Sato ([1983]) proved that $T_i(\psi) = T_i(\sum_{j \in N} \psi_j - \gamma)$, which is an example of such transfer rules.

Condition TN Transfer Neutrality:

\[ \forall z^* \in P_0, \ \exists T \in \Omega: \ z^* = \lim_{t \to \infty} z(t), \]

where $T = (T_1, \ldots, T_N)$ is a vector of transfer functions and $\Omega$ is its set.

Now, I enumerate the properties of the Generalized MDP Procedures just renamed supra. Proofs are already given in Sato ([1983]), so they are omitted here.

Theorem 3 The Generalized MDP Procedures fulfill Conditions ACR, F, M, PE, TA and TN. Conversely, any planning process satisfying these conditions is characterized as:

\[ X(\psi(t)) = \Bigl( \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr) \Bigl| \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr|^{N-2}, \qquad Y_i(\psi(t)) = -\psi_i(t) X(\psi(t)) + T_i \Bigl( \sum_{j \in N} \psi_j(t) - \gamma(t) \Bigr), \quad i \in N. \]

Theorem 4 Revealing preferences truthfully in any Generalized MDP Procedure is a minimax strategy for any $i \in N$. It is the only minimax strategy for any $i \in N$ when $x > 0$.

Theorem 5 $\phi_i = \pi_i$ holds for any $i \in N$ at the equilibrium of the Generalized MDP Procedures.

Theorem 6 Under Assumptions 1-3, for every individually rational Pareto optimum $z^*$, there exist $\delta$ and a trajectory $z(\cdot): [0, \infty) \to Z$ of the differential equation defining the Generalized MDP Procedures such that, $\forall i \in N$, $u_i(z^*) = \lim_{t \to \infty} u_i(x(t), y_i(t))$.

Keeping the same nonlinear public good decision function as derived from Condition LSP, Sato ([1983]) could state the above characterization theorem. In the sequel, the Generalized MDP Procedure with $T_i(\sum_{j \in N} \psi_j - \gamma) = \delta_i (\sum_{j \in N} \psi_j - \gamma) X(\psi)$ is employed. Via the pertinent choice of $T_i(\cdot)$ we obtain the family of the Generalized MDP Procedures, which includes the MDP and the LSP MDP Procedures as special members.
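To make the family structure concrete, a small sketch (my Python rendering, not the paper's notation): writing $s = \sum_{j \in N} \psi_j - \gamma$, each choice of transfer rule $T_i$ instantiates one member, and $\delta_i = 1/N$ recovers the LSP MDP transfers.

```python
# Sketch of the transfer family (illustrative rendering only).
def X_lsp(s: float, N: int) -> float:
    # nonlinear public good decision function derived from Condition LSP
    return s * abs(s) ** (N - 2)

def T_member(s: float, N: int, delta_i: float) -> float:
    # member employed in the sequel: T_i(s) = delta_i * s * X(s)
    return delta_i * s * X_lsp(s, N)

def T_lsp_mdp(s: float, N: int) -> float:
    # LSP MDP Procedure as the special member with delta_i = 1/N
    return T_member(s, N, 1.0 / N)
```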

Remark 7 Green and Laffont ([1979]), Laffont ([1979]), and Champsaur and Rochet ([1983]) gave a systematic study of the family of planning procedures that are asymptotically efficient and locally strategy proof. We now know that the class of LSP procedures is large; it includes the Bowen Procedure, the Generalized Wicksell Procedure, and the LSP MDP Procedure as special members, as classified by Rochet ([1982]) and Sato ([2011]). Sato ([2010]) presented a discrete version of the procedure developed by Green and Laffont, which was the first LSP procedure with pivotal agents.

The next section provides a positive result on neutrality, different from Champsaur and Laroque ([1981, 1982]), and Laroque and Rochet ([1983]) who concluded nonneutrality of the intertemporal MDP Procedures with and without public goods.

4 The piecewise linearized MDP Procedure

4.1 A description of the piecewise linearized MDP Procedure

In the Procedure below, the planner provides an optimal quantity of a public good by revising its quantity at discrete dates $t \in \{\tau_1, \ldots, \tau_s, \tau_{s+1}, \ldots, D\} \equiv T$: the set of discrete dates. The length of the time horizon $D$, which can take the value ∞, is predetermined by the planner. In order to decide in what direction the allocation should be changed, the planner proposes a tentative feasible allocation $z(0) = (\chi(0), \omega_1, \ldots, \omega_N)$ with a tentative step-size of the public good, $\chi(0)$, at the initial time 0, to which each agent is asked to respond by reporting his/her true MRS $\pi_i(z(0))$, $i \in N$, as local privately held information. At each discrete date $\tau_s$ the planner can easily calculate the sum of the announced MRSs to change the allocation at the next date $\tau_{s+1}$. It is supposed that the planner can get the exact value of the MRT. Assume also that the agents have rational expectations on the time interval, although the latter are bounded: they not only have complete knowledge of the planning rules of the procedure defined below, but can also at least predict the allocation to be attained at the beginning of the next interval. Champsaur and Laroque ([1982], p.326) wrote that '[s]uch a situation of limited intertemporal consistency is similar to the discrete procedures.' Champsaur and Laroque ([1981, 1982]) took into consideration the effects of the agents' strategies upon the final allocation. Agents in the private good economy of Fujigaki ([1981]) are assumed to maximize their utility anticipated at the end of each time interval. So I extend his model to involve a public good in order to examine nonmyopic behaviors on the part of strategic players, as in Champsaur and Laroque ([1981]).

To formulate our planning rules, let us equally divide the time horizon $[0, D]$ into $D$ intervals $[\tau_s, \tau_{s+1})$. As the procedure is applied repeatedly to each interval, an allocation at any point of each interval is given, for any $\tau_s \in T$ and for any $t \in [\tau_s, \tau_{s+1})$, by

\[ x(t) = \int_0^{\tau_s} X^\alpha(\tau) \, d\tau + (t - \tau_s) X^\alpha(t), \qquad y_i(t) = \int_0^{\tau_s} Y_i^\alpha(\tau) \, d\tau + (t - \tau_s) Y_i^\alpha(t), \quad i \in N, \]

where $X^\alpha(t)$ and $Y_i^\alpha(t)$ are the average speeds of adjustment over the interval $[\tau_s, \tau_{s+1})$, defined by the Generalized MDP Procedure with the $T_i$ specified above.

Hence, the trajectories are piecewise linear, and the variable step-sizes for each $t \in [\tau_s, \tau_{s+1})$ in our procedure are as follows:

\[ \chi(t) = x(t) - x(\tau_s) = (t - \tau_s) X^\alpha(t), \qquad \upsilon_i(t) = y_i(t) - y_i(\tau_s) = (t - \tau_s) Y_i^\alpha(t), \quad i \in N. \]

For any $\tau_s \in T$ and for any $t \in [\tau_s, \tau_{s+1})$, our piecewise linearized procedure can be defined as:

\[ x(t) = x(\tau_s) + \chi(t), \qquad y_i(t) = y_i(\tau_s) + \upsilon_i(t), \quad i \in N. \]

Note that the planner has to observe only the step-size $\chi(t)$, not each $\upsilon_i(t)$, since the former determines the latter. Let us call this piecewise linearized procedure the λMDP Procedure; it serves as the rule of a piecewise intertemporal incentive game.
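A compact sketch of one interval of the λMDP Procedure may help (my illustration under assumed parameters; reports are frozen at their date-$\tau_s$ values, as in the definition above):

```python
import numpy as np

def interval_update(x_s, y_s, psi, gamma, delta, tau_s, t):
    """One interval of the lambda-MDP Procedure (sketch, t in [tau_s, tau_{s+1})).

    Average speeds are computed from the reports psi announced at tau_s,
    using the Generalized MDP rules with X = s|s|^(N-2) and T_i = delta_i*s*X.
    """
    N = len(psi)
    s = psi.sum() - gamma
    X_a = s * abs(s) ** (N - 2)             # average speed of the public good
    Y_a = -psi * X_a + delta * s * X_a      # average speeds of the private goods
    chi = (t - tau_s) * X_a                 # variable step-size chi(t)
    ups = (t - tau_s) * Y_a                 # variable step-sizes upsilon_i(t)
    return x_s + chi, y_s + ups

# assumed example: three agents, reports held fixed over the interval
x1, y1 = interval_update(x_s=1.0, y_s=np.array([10.0, 10.0, 10.0]),
                         psi=np.array([0.5, 1.0, 1.5]), gamma=1.0,
                         delta=np.array([1/3, 1/3, 1/3]), tau_s=0.0, t=1.0)
print(x1, y1)   # allocation proposed at the end of the interval
```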

4.2 Normative conditions for the λ MDP Procedure

The following new conditions are defined for our λ MDP Procedure.

Condition PIF Piecewise Intertemporal Feasibility:

\[ \sum_{i \in N} Y_i^\alpha(\psi(t)) + \gamma(t) X^\alpha(t) = 0, \quad \forall \psi \in \Psi, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}). \]

Condition PIM Piecewise Intertemporal Monotonicity:

\[ U_i(\psi(t)) = u_{iy} \{ \pi_i(t) X^\alpha(\psi(t)) + Y_i^\alpha(\psi(t)) \} \ge 0, \quad \forall i \in N, \forall \psi \in \Psi, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}). \]

Condition PISP Piecewise Intertemporal Strategy Proofness:

\[ \pi_i(t) X^\alpha(\pi_i(t), \psi_{-i}(t)) + Y_i^\alpha(\pi_i(t), \psi_{-i}(t)) \ge \pi_i(t) X^\alpha(\psi(t)) + Y_i^\alpha(\psi(t)), \quad \forall i \in N, \forall \psi \in \Psi, \forall \psi_{-i} \in \Psi_{-i}, \forall \tau_s \in T, \forall t \in [\tau_s, \tau_{s+1}). \]

Condition PISP may also be called Stepwise Strategy Proofness.

4.3 The λ MDP Procedure as a piecewise intertemporal incentive game form

To examine the incentive properties of the procedure, the assumption of truthful revelation of preferences is dropped. Each player's announcement $\psi_i$ is not necessarily equal to his/her true MRS $\pi_i$; thus, $\pi_i$ has been replaced with $\psi_i$ in the dynamic system of the λMDP Procedure. The nonmyopia assumption is introduced for our procedure, since myopia is a less compelling hypothesis in a discrete-time framework. The procedure and the game are repeated for each interval in our framework.

What I associate with the above process, instead of the intertemporal game used by Champsaur and Laroque ([1981]), is, so to speak, a 'bounded' or 'piecewise' intertemporal game, since I divide the time interval in the model. A piecewise intertemporal game played at the discrete dates of each time interval of the procedure is formally defined as the normal form game $(N, \Psi, V)$. $N$ is the set of players, $\Psi = \times_{i \in N} \Psi_i \subset \mathbb{R}_+^N$ is the Cartesian product of the $\Psi_i$, where $\Psi_i$ is the set of player $i$'s strategies, and $V = (V_1(\tau_{s+1}), \ldots, V_N(\tau_{s+1}))$ is the $N$-tuple of payoff functions at the end of the current time interval $[\tau_s, \tau_{s+1})$ such that $V_i(\tau_{s+1}) = u_i(x(\tau_{s+1}), y_i(\tau_{s+1}))$, $\forall i \in N$.

The maximization problem for any player is as follows: $\forall \tau_{s+1} \in T$ and $\forall t \in [\tau_s, \tau_{s+1})$,

\[ \max \ V_i(\tau_{s+1}) \quad \text{s.t.} \quad x(t) = x(\tau_s) + \chi(t) \ \text{and} \ y_i(t) = y_i(\tau_s) + \upsilon_i(t). \]

Let us give a definition here.

Definition 4 The best reply strategy for each individual $i$ in the piecewise intertemporal game $(N, \Psi, V)$ is the strategy $\psi_i^*(\tau_s) \in \Psi_i$ such that for any $\tau_s \in T$:

\[ V_i(\psi_i^*(\tau_s), \psi_{-i}(\tau_s)) \ge V_i(\psi_i(\tau_s), \psi_{-i}(\tau_s)), \quad \forall \psi_i \in \Psi_i, \forall \psi_{-i} \in \Psi_{-i}. \]

Remark 8 Condition PISP is satisfied if truth-telling coincides with the best reply strategy in the piecewise intertemporal game. The behavioral hypothesis underlying the above inequality is the nonmyopia assumption: each player determines his/her best reply strategy at the beginning of each interval $[\tau_s, \tau_{s+1})$ in order to maximize his/her payoff, $V_i(\tau_{s+1})$, at the beginning of the next interval $[\tau_{s+1}, \tau_{s+2})$.

Nonmyopia Assumption Every player is assumed to behave nonmyopically: viz., when each player determines his/her strategy in a piecewise intertemporal game, he/she maximizes not the time derivative of his/her utility function but the utility increment based on the allocation that he/she foresees obtaining at the end of the current time interval.

Remark 9 This behavioral hypothesis may be justified by considering that the future development of an allocation cannot be predicted exactly. Hence, every player has to make a piecewise decision under uncertainty. Players are rather assumed to forecast at least what will happen at the next discrete date.

Now I examine the properties of the λMDP Procedure just defined. This paper confines itself to PISP, instead of LSP or Strong Local Individual Incentive Compatibility (SLIIC).

Suppose the λMDP Procedure is not at an equilibrium at $\tau_{s+1}$; then the following theorems hold. Proofs are postponed to the Appendix.

The following notation is used for each $i \in N$ and for any $\tau_s, \tau_{s+1} \in T$:

\[ U_i^+(\tau_s) = \lim_{t \to \tau_s^+} \frac{u_i(t) - u_i(\tau_s)}{t - \tau_s} \]

and

\[ U_i^-(\tau_{s+1}) = \lim_{t \to \tau_{s+1}^-} \frac{u_i(\tau_{s+1}) - u_i(t)}{\tau_{s+1} - t}. \]

Theorem 7 For each $i \in N$ and for any $\tau_{s+1} \in T$, $U_i^-(\tau_{s+1}) \ge 0$.

Theorem 8 For each $i \in N$ and for any $\tau_{s+1} \in T$,

\[ U_i^+(\tau_s) > \frac{u_i(\tau_{s+1}) - u_i(\tau_s)}{\tau_{s+1} - \tau_s} > U_i^-(\tau_{s+1}) > 0. \]

Therefore, the average speed of each individual's utility increment is positive over the interval $[\tau_s, \tau_{s+1})$ for any revision date $\tau_s \in T$.

The next theorem states that the utility is monotonically nondecreasing over the interval $[\tau_s, \tau_{s+1})$ for any $\tau_s \in T$.

Theorem 9 For each $i \in N$ and for any $t \in [\tau_s, \tau_{s+1})$, $U_i(t) > 0$.

Theorem 10 For each $i \in N$, $\psi_i^*(\tau_s) = \pi_i(\tau_{s+1})$ is player $i$'s best reply strategy at date $\tau_s$, which maximizes $V_i(\tau_{s+1})$ in the piecewise intertemporal incentive game associated with the λMDP Procedure.

That is to say, truthful revelation for the public good is the best reply strategy in the piecewise intertemporal game, and it is the only best reply strategy when $x > 0$.
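The fixed-point flavor of Theorem 10 can be checked numerically. The sketch below (my addition) assumes quasi-linear utility for player 1 and, purely for concreteness, the uniform share $\delta_i = 1/N$; it grid-searches player 1's report and confirms that the maximizer of $V_1(\tau_{s+1})$ coincides with the MRS evaluated at the end-of-interval allocation that the report itself induces, i.e., $\psi_1^*(\tau_s) = \pi_1(\tau_{s+1})$.

```python
import numpy as np

# Assumed setup: u_1(x, y_1) = a1*ln(x) + y_1, so pi_1 = a1/x; the reports of
# the other players are held fixed over the interval.
a1, gamma, N = 1.0, 1.0, 3
delta = 1.0 / N                        # uniform surplus share (assumption)
x0, y0, h = 1.0, 10.0, 0.5             # state at tau_s and interval length
others = np.array([1.5, 2.0])          # fixed reports of players 2 and 3

def end_of_interval(psi_1):
    # one lambda-MDP interval with X = s|s|^(N-2) and T_1 = delta*s*X
    s = psi_1 + others.sum() - gamma
    X = s * abs(s) ** (N - 2)
    return x0 + h * X, y0 + h * (-psi_1 * X + delta * s * X)

def V1(psi_1):
    x1, y1 = end_of_interval(psi_1)
    return a1 * np.log(x1) + y1        # payoff at the end of the interval

grid = np.linspace(0.01, 3.0, 29901)
best = grid[int(np.argmax([V1(p) for p in grid]))]
x_end, _ = end_of_interval(best)
print(best, a1 / x_end)                # the two values coincide (~0.214)
```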

Remark 10 Theorem 10 means that the best reply strategy at $\tau_s$ for each player is to reveal his/her true MRS for the public good to be provided at date $\tau_{s+1}$, i.e., $\pi_i(\tau_{s+1})$, not $\pi_i(\tau_s)$. For each time interval $[\tau_s, \tau_{s+1})$, the λMDP Procedure is piecewise intertemporally strategy proof in the sense that each player's MRS announced at date $\tau_s$ coincides with the true one corresponding to the allocation anticipated by that player at the end of the current interval $[\tau_s, \tau_{s+1})$. The crucial point is that each player's best reply strategy $\psi_i^*(\tau_s)$ is not $\pi_i(\tau_s)$ but $\pi_i(\tau_{s+1})$. This result comes from the difference between the myopia and nonmyopia assumptions, i.e., the length of the players' time horizon matters.

Remark 11 The myopia assumption is common in local games associated with both continuous and discrete planning procedures such as the MDP and the CDH (Champsaur-Drèze-Henry) Procedures. See Henry ([1979]) and Schoumaker ([1977, 1979]) for details on this point. Also, nontâtonnement procedures are of concern in real economic life. Hence, in view of their obvious practical relevance, our discrete process should ideally have been constructed in a nontâtonnement setting. However, I have confined myself to developing a piecewise linearized process as an approximation. Under the nonmyopia assumption, a sincere revelation of preference for the public good at any discrete date of the λMDP Process is the best reply strategy for each player.

Hence, I am now in a position to present the theorem.

Theorem 11 Under Assumptions 1-3, the λMDP Procedure satisfies PIF, PIM, TN, PE, and PISP.

Remark 12 Our λMDP Procedure can keep neutrality, in contrast with Champsaur and Laroque ([1981])'s result on the nonneutrality of procedures under intertemporal strategic behavior of agents. This possibility stems from Sato ([1983]), who proposed aggregate correct revelation as a condition replaceable with local strategy proofness and constructed a planning procedure which simultaneously satisfies three desiderata: efficiency, neutrality, and aggregate correct revelation.

Let $T_1 = \{\tau_1\}$ be the set of dates for revising the allocation by the center. When $\tau_1$ tends to infinity, the MRSs revealed by the players at date 0 converge to those corresponding to a Pareto optimal allocation, $z(\tau_1)$, achieved via the procedure. Theorem 11, therefore, yields another theorem whose proof is obvious and thus omitted here.

Theorem 12 When $\tau_1$ tends to infinity, any trajectory of the λMDP Process converges towards a Pareto optimal allocation. Furthermore, it is intertemporally strategy proof in the sense of Champsaur and Laroque.

5 Literature on myopia and discreteness

5.1 A discussion on discreteness

Here I present some comments on the discrete procedures. A proper discrete procedure could be constructed via the use of a decreasing pitch proposed by Champsaur, Drèze and Henry ([1977]), but I have attempted a different approach, in which discussions can be extended to a piecewise linearized procedure. The above dynamic system can be generalized to involve many public goods, amounts of which can be simultaneously adjusted at each iteration. This result differs from Champsaur, Drèze, and Henry ([1977]), in which the quantity of only one public good can be revised at each discrete date.

Incidentally, little is known about the speed of convergence of the procedures, particularly when they are formulated in discrete versions, which are the only realistic ones from the standpoint of actual planning practice. The continuous version implies that the players' responses are transmitted continuously to the planning center, with no computation cost or adjustment lag. [Footnote 2] The technical advantages of the differential approach in terms of simplicity of presentation are, however, well known. As Malinvaud ([1970-1971], p.192) rightly pointed out, a continuous formulation removes the difficult question of choosing an adjustment speed. Hence, the continuous version is justified mainly by convenience. Moreover, a continuous formulation might be considered as an approximation to a discrete representation. [Footnote 3]

Casual observation suggests that discrete procedures are more realistic than continuous ones, and that revisions of resource allocation are essentially made in discrete time. But most planning procedures discussed in the literature are formulated in continuous time because of the difficulties involved in using the discrete version. As indicated by Malinvaud ([1967]) and others, this dilemma reflects a traditional technical difficulty: if one selects a pitch large enough to get rapid convergence, one runs the risk of no convergence; if, on the other hand, one chooses a pitch small enough to expect exact convergence, there is a possibility of delay.

Discrete versions of the MDP Procedure have been presented by several authors, and there are different strains of related literature. The first strain - taken by Champsaur, Drèze, and Henry ([1977]) - is characterized by a decreasing adjustment pitch (or step-size) as a parameter, with which they could overcome the dilemma associated with a discrete formulation: the pitch is kept constant as long as it allows progress in efficiency, and is halved as soon as such progress becomes impossible. The above-mentioned dilemma associated with discrete procedures is thereby overcome. [Footnote 4]
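For intuition only, the following toy sketch (my illustration, not the CDH algorithm itself, which operates on allocations rather than on a scalar objective) conveys the decreasing-pitch idea: keep the step while it makes progress, halve it as soon as no step of that size improves the objective.

```python
# Toy sketch of a decreasing-pitch revision rule (illustrative only).
def decreasing_pitch_ascent(f, x, pitch=1.0, tol=1e-8, max_iter=10_000):
    for _ in range(max_iter):
        if pitch < tol:
            break
        # try a step of the current pitch in either direction
        for candidate in (x + pitch, x - pitch):
            if f(candidate) > f(x):
                x = candidate
                break
        else:
            pitch /= 2.0   # no progress possible: halve the pitch
    return x

# assumed example objective: strictly concave, maximized at x = 2
print(decreasing_pitch_ascent(lambda x: -(x - 2.0) ** 2, x=0.0))  # ~2.0
```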

Discussions of incentives in discrete-time MDP Procedures are given in Henry ([1979]), and Schoumaker ([1976, 1977, 1979]). They analyzed players’ strategic behaviors in the discrete MDP Processes, by ruling out the assumption of truthful revelation. The result they achieved is that their procedures still converge to a Pareto optimum even under strategic preference revelation à la Nash.

Approaching the same issue from another angle, Green and Schoumaker ([1980]) presented a discrete MDP Process with a flexible step-size at each iteration, and studied its incentive properties in a game theoretical framework. Their analysis dispensed with the 'strategic indifference' assumption imposed by Henry ([1979]) and Schoumaker ([1979]), i.e., that players choose truth-telling if the resulting outcome would leave them indifferent. Their discrete-time procedure, however, requires reporting global information with respect to the preferences of consumers. More precisely, consumers' marginal willingness-to-pay functions are constrained to be compatible with, and a part of, their utility functions. Essentially, a Nash equilibrium concept is employed. Although their ideas are interesting, the informational burden in their model is much greater than that in other approaches.

Mas-Colell ([1980]) proposed a voluntary financing process, which is a global analog of the MDP Procedure. [Footnote 5] He obtained characterizations of Pareto optimal and core states in terms of valuation functions. The incentive problem was not considered. Chander ([1985]) presented a discrete version of the MDP Procedure and insisted that his system is the most informationally efficient allocation mechanism, though without any consideration of its incentive properties. Otsuki ([1978]) employed the feasible direction method in the theory of discrete planning and applied it to the MDP and the Heal Procedures by devising implementable algorithms. Again, the problem of incentives was not treated in his paper.

Roberts ([1987]) challenged another difficult issue which is not yet fully settled: he attempted to relax both the assumptions of myopia and complete information in the simplest version of an iterative planning framework due to Champsaur, Drèze, and Henry ([1977]). In his procedure the agents are initially imperfectly informed but gradually learn about each other so as to predict the future behaviors of others. He discussed the Bayesian incentive compatibility of his procedure, and gave a numerical example of a condominium as a public good, the entrance of which is redecorated by its members who use the iterative process. [Footnote 6]

Allard et al. ([1980]) proposed definitions of temporary and intertemporal Pareto optimality. In their paper individuals are represented by Roy-consistent expectation functions induced by their learning processes. In order to explain their concepts of expectation functions, they referred to a pure exchange MDP Process, in which the planner asks agents to evaluate present goods and to send him/her their demands. So as to value present goods, they must forecast future quantities. Thus, Allard et al. ([1980]) assumed that the consumers are endowed with expectation functions.

As was criticized by Coughlin and Howe ([1989]), none of the above discrete procedures satisfies a criterion of intertemporal Pareto optimality. According to them, only the process devised by Green and Schoumaker ([1980]) suggested a possible avenue to the criterion of intertemporal Pareto optimality. Thus, I have shown a different version of Green and Schoumaker ([1980])'s discrete process, with variable step-sizes and only local informational requirements.

5.2 A digression and justification of myopia

In the literature on the problem of incentives in planning procedures, myopic strategic behavior prevailed. Many papers imposed this behavioral hypothesis, i.e., myopia, on which the foregoing discussions crucially depended, spawning numerous desirable results in connection with the family of MDP Procedures.

The aim of this paper has been to examine the consequences of dropping the assumption that individuals choose their strategies to maximize an instantaneous change in utility function at each iteration along the procedure. Instead of the myopic behavior, I have assumed that the agents select their announcements concerning their marginal rates of substitution to maximize their utility increment to be obtained at the end of each time interval.

Also verified is that the λMDP Procedure can always keep neutrality, in contrast with Champsaur and Laroque ([1981, 1982]) and Laroque and Rochet ([1983]), who analyzed the properties of the MDP Procedure under the nonmyopic assumption. They treated the case where each individual attempts to forecast the influence of his/her announcement to the planning center over a predetermined time horizon, and optimizes his/her responses accordingly. It is proved that, if the time horizon is long enough, any noncooperative equilibrium of the intertemporal game attains an approximately Pareto optimal allocation. But at such an equilibrium, the influence of the center on the final allocation is negligible, which entails nonneutrality of the procedure. Their attempt was to bridge the gap between the local instantaneous game and the global game, as was pointed out by Hammond ([1979]). Our aim has been, however, to bridge the gap between the local game and the intertemporal game, by constructing a compromise between continuous and discrete procedures, i.e., the piecewise linearized procedure. By letting the length of the discrete periods shrink to zero (noting that $\chi$ and hence $\upsilon_i$, $i \in N$, would also shrink to zero), we would approach the continuous path.

Incidentally, how can we justify the myopia assumption, which is a crucial underpinning of a lot of fruitful results in the theory of incentives, especially in planning procedures for optimally allocating public goods? Indeed, in reality people seem to behave myopically rather than farsightedly. Matthews ([1982], p.638) wrote that "myopia may be regarded as a tractable approximation, a result of 'bounded rationality'."

Laffont ([1985], pp.19-20) justified myopia as follows: the participants in a planning procedure always believe that it is the last step of the procedure or that they will not enter the complexities of strategic behavior for a longer time horizon. In the MDP Procedure, a correct revelation of preferences is a maximin strategy in the global game, as was pointed out by Drèze. As the procedure is monotone in utility functions, the worst that could happen is the termination of the procedure. In other words, the global game reduces to the local game, in which the maximin strategy consists of correctly revealing preferences. Conversely, choosing a myopic strategy reduces to adopting a maximin approach to the global game. It would be logical, however, to adopt a maximin strategy in the local game, too.

Finally, let me introduce two justifications of myopia by Moulin ([1984], pp.131-132). The first one is to consider an isolated player who finds him/herself so small that his/her proper choice of strategies influences the others’ choice in a negligible way. The other, which completes the first, is complete ignorance where no player knows his/her opponents’ utility functions; a player knows that he/she is unable to predict in what direction the change will occur.

The method of Truchon ([1984]) is to examine a nonmyopic incentive game, where each agent’s payoff is a utility at the final allocation. Differently from others, Truchon introduced a ‘threshold’ into his model to analyze agents’ strategic behavior. T. Sato ([1983]) also investigated how the MDP Procedure works when players with individual expectation functions nonmyopically play a sequential game, by letting them forecast what allocation would be proposed over the period when they take a certain path of strategies.

6 Final remarks

The present paper has formulated a piecewise linearized version of the Generalized MDP Procedure and analyzed its properties. In doing so, I have extended Fujigaki's private goods economy to involve a public good, and have localized the intertemporal game à la Champsaur and Laroque ([1981, 1982]) by dividing the time interval and by applying the Generalized MDP Procedure to each interval. In the piecewise intertemporal game associated with any interval generated by our procedure, each player's payoff is the utility increment at the initial point of the next interval. Variable step-sizes are used to formalize the piecewise linearized procedure, which shares desirable properties similar to those of continuous procedures. This process involves partitioning the planning horizon into a specific sequence of time intervals. I have called this process the λMDP Procedure and shown that it can simultaneously achieve efficiency and piecewise intertemporal strategy proofness. That is, it converges to a Pareto optimum, and the best reply strategy of each player at each date $\tau_s$ is to declare his/her anticipated MRS at the end of the current time interval $[\tau_s, \tau_{s+1})$: i.e., $\psi_i^*(\tau_s) = \pi_i(\tau_{s+1})$. The λMDP Procedure can also preserve transfer neutrality.

Recognizing the difficulties concerning the possibility of manipulation of private information by individuals, the literature has verified that this incentive problem can be dealt with by planning procedures that require a continuous revelation of information, provided that agents adopt myopic behavior. If individuals are farsighted, however, the traditional impossibility results occur, i.e., incentive compatibility is incompatible with efficiency, as was pointed out by Champsaur, Laroque and Rochet. This paper has studied an intermediate situation where agents are only asked to declare their anticipated MRSs at discrete dates, at which the direction and speed of adjustment are changed. Consequently, the associated dynamic process, named the λMDP Procedure, is piecewise linear. Individuals are assumed to take the interval between two discrete dates as their time horizon. Their behavior is hence intermediate between myopia and farsightedness. The idea of looking at an intermediate time horizon for agents' manipulations of information is more natural and more realistic than either myopia or farsightedness.

Appendix

Proof of Theorem 7 Our λ MDP Procedure gives

\[ u_i(\tau_{s+1}) = u_i(\alpha_i(\tau_s) + (\tau_{s+1} - \tau_s) A_i), \]

where $\alpha_i(\tau_s) = (x(\tau_s), y_i(\tau_s))$ and $A_i \equiv d\alpha_i/dt = (X^\alpha(\tau_s), Y_i^\alpha(\tau_s))$. Let $\pi_i(\tau_{s+1}) = u_{ix}(\tau_{s+1})/u_{iy}(\tau_{s+1})$. Under truth-telling, we have

\[ U_i^-(\tau_{s+1}) = u_{ix}(\tau_{s+1}) X^\alpha(\tau_s) + u_{iy}(\tau_{s+1}) Y_i^\alpha(\tau_s) = u_{iy}(\tau_{s+1}) \{ \pi_i(\tau_{s+1}) X^\alpha(\tau_s) + Y_i^\alpha(\tau_s) \} = u_{iy}(\tau_{s+1}) \, \delta_i \Bigl\{ \sum_{j \in N} \psi_j(\tau_s) - \gamma(\tau_s) \Bigr\} X^\alpha(\tau_s) \ge 0, \]

since $X^\alpha(\tau_s)$ and $\sum_{j \in N} \psi_j(\tau_s) - \gamma(\tau_s)$ have the same sign. □

Proof of Theorem 8 Because of the strict concavity of utility functions, it follows that for any $\alpha_0, \alpha_1 \in \mathbb{R}_+^2$ and for any real number $\beta \in [0, 1]$,

\[ u_i\{(1 - \beta)\alpha_0 + \beta \alpha_1\} \ge (1 - \beta) u_i(\alpha_0) + \beta u_i(\alpha_1). \tag{1} \]

Since an allocation path is a line segment, if we set $\alpha_0 = \alpha(\tau_s)$ and $\alpha_1 = \alpha(\tau_{s+1})$, then we can associate, via the choice of $\beta$, any allocation given by the λMDP Procedure over the interval $[\tau_s, \tau_{s+1})$.

Denote $\beta = \beta(t) = (t - \tau_s)/(\tau_{s+1} - \tau_s)$ for each $t \in [\tau_s, \tau_{s+1})$. Thus, we have

\[ (1 - \beta)\alpha_0 + \beta \alpha_1 = \alpha_0 + \beta(\tau_{s+1} - \tau_s) A_i = \alpha(t). \]

In light of Eq. (1),

\[ u_i(\alpha(t)) > (1 - \beta(t)) u_i(\alpha_0) + \beta(t) u_i(\alpha_1). \tag{2} \]

Combining (2) with

\[ u_i(\alpha(t)) = u_i(\tau_s) + \int_{\tau_s}^{t} U_i(\tau) \, d\tau \]

and

\[ u_i(\alpha_1) = u_i(\tau_s) + \int_{\tau_s}^{\tau_{s+1}} U_i(\tau) \, d\tau \]

yields the expression

\[ u_i(\tau_s) + \int_{\tau_s}^{t} U_i(\tau) \, d\tau > (1 - \beta(t)) u_i(\tau_s) + \beta(t) \Bigl\{ u_i(\tau_s) + \int_{\tau_s}^{\tau_{s+1}} U_i(\tau) \, d\tau \Bigr\}. \]

Consequently, we have for any $t \in (\tau_s, \tau_{s+1})$

\[ \frac{1 - \beta(t)}{\beta(t)} \int_{\tau_s}^{t} U_i(\tau) \, d\tau > \int_{t}^{\tau_{s+1}} U_i(\tau) \, d\tau. \]

Since $(1 - \beta(t))/\beta(t) = (\tau_{s+1} - t)/(t - \tau_s)$, this inequality gives for any $t \in (\tau_s, \tau_{s+1})$

\[ \frac{1}{t - \tau_s} \int_{\tau_s}^{t} U_i(\tau) \, d\tau > \frac{1}{\tau_{s+1} - t} \int_{t}^{\tau_{s+1}} U_i(\tau) \, d\tau. \tag{3} \]

As $t$ tends to $\tau_{s+1}$, the L.H.S. of Eq. (3) approaches the average speed of utility change over the interval $[\tau_s, \tau_{s+1})$, and the R.H.S. approaches the instantaneous speed of utility increment at $\tau_{s+1}$. By Theorem 7,

\[ \frac{u_i(\tau_{s+1}) - u_i(\tau_s)}{\tau_{s+1} - \tau_s} > U_i^-(\tau_{s+1}) > 0. \]

When $t$ goes to $\tau_s$ in Eq. (3), we get

\[ U_i^+(\tau_s) > \frac{1}{\tau_{s+1} - \tau_s} \int_{\tau_s}^{\tau_{s+1}} U_i(\tau) \, d\tau. \]

These inequalities together give the statement of the theorem. □

Proof of Theorem 9 From Theorem 8 it follows that

\[ \frac{u_i(\tau_{s+1}) - u_i(\tau_s)}{\tau_{s+1} - \tau_s} > 0 \]

and therefore $u_i(\tau_{s+1}) > u_i(\tau_s)$.

Continuity of utility functions, and thus the intermediate value theorem, assures the existence of a certain $t \in (\tau_s, \tau_{s+1})$ such that $u_i(\tau_{s+1}) > u_i(t) > u_i(\tau_s)$.

Denote $I_1 = [\tau_s, t]$ and $I_2 = [t, \tau_{s+1}]$. Applying the argument of Theorem 8 to each interval yields

\[ \frac{1}{\zeta_1 - \tau_s} \int_{\tau_s}^{\zeta_1} U_i(\tau) \, d\tau \ge \frac{1}{t - \zeta_1} \int_{\zeta_1}^{t} U_i(\tau) \, d\tau, \quad \forall \zeta_1 \in I_1, \]

\[ \frac{1}{\zeta_2 - t} \int_{t}^{\zeta_2} U_i(\tau) \, d\tau \ge \frac{1}{\tau_{s+1} - \zeta_2} \int_{\zeta_2}^{\tau_{s+1}} U_i(\tau) \, d\tau, \quad \forall \zeta_2 \in I_2, \]

where $\zeta_1$ and $\zeta_2$ are any real numbers in the respective intervals. From the above inequalities, we obtain

\[ \frac{1}{\zeta_1 - \tau_s} \int_{\tau_s}^{\zeta_1} U_i(\tau) \, d\tau \ge \frac{u_i(t) - u_i(\zeta_1)}{t - \zeta_1}. \]

Letting $\zeta_1$ approach $t$ from below yields

\[ \frac{1}{t - \tau_s} \int_{\tau_s}^{t} U_i(\tau) \, d\tau \ge U_i^-(t). \]

If a similar manipulation is applied by letting $\zeta_2$ tend to $t$ from above, we get

\[ U_i^+(t) \ge \frac{1}{\tau_{s+1} - t} \int_{t}^{\tau_{s+1}} U_i(\tau) \, d\tau. \]

Since the utility functions are of class $C^2$, $U_i^+(t) = U_i^-(t) = U_i(t)$, and the above two inequalities give

\[ \frac{1}{\tau_{s+1} - t} \int_{t}^{\tau_{s+1}} U_i(\tau) \, d\tau \le U_i(t) \le \frac{1}{t - \tau_s} \int_{\tau_s}^{t} U_i(\tau) \, d\tau. \]

It is concluded that $U_i(t) > 0$, since $u_i(\tau_{s+1}) > u_i(t)$ by assumption. Consequently, $u_i(\tau_s) < u_i(t) < u_i(\tau_{s+1})$ reduces to $U_i(t) > 0$, $\forall i \in N$, $\forall t \in [\tau_s, \tau_{s+1})$. It is easily seen that there exists no $t$ such that $u_i(\tau_s) \ge u_i(t)$ or $u_i(t) \ge u_i(\tau_{s+1})$ over the interval $[\tau_s, \tau_{s+1})$. In fact, if there exists $t \in [\tau_s, \tau_{s+1})$ such that $u_i(t) \ge u_i(\tau_{s+1})$, then, for any $\tilde{t} \in [t, \tau_{s+1})$,

\[ u_i(\tilde{t}) \ge u_i(\tau_{s+1}) \]

must hold. This clearly contradicts $U_i^-(\tau_{s+1}) > 0$; hence the desired conclusion is obtained. □

Proof of Theorem 10 Without the assumption of truthful revelation for the public good, which differs from the proof of Theorem 9, we observe

\[ U_i^-(\tau_{s+1}) = u_{iy}(\tau_{s+1}) \Bigl[ \pi_i(\tau_{s+1}) - \psi_i(\tau_s) + \delta_i \Bigl\{ \sum_{j \in N} \psi_j(\tau_s) - \gamma(\tau_s) \Bigr\} X^\alpha(\tau_s) \Bigr] = 0.
\]

At any equilibrium of the λMDP Procedure, the third term in the brackets vanishes, and $u_{iy} > 0$ by assumption, so that we conclude that $\psi_i(\tau_s) = \pi_i(\tau_{s+1}) = \psi_i^*(\tau_s)$ holds for any $i \in N$ and for any $\tau_s \in T$. □

Proof of Theorem 11 Condition PIF is easily checked, since it has already been used to formulate the procedure. To sum up, the features of the process are as follows: the solution $z[t, z(0)]$ defining the procedure is the function which associates a program $z(t)$, as well as step-sizes $\chi(t)$ and $\upsilon_i(t)$, $i \in N$, with every iteration $t$. If an initial program is feasible, then every succeeding one is also feasible. It can be demonstrated under Assumptions 1-3 that the process is stable and always converges monotonically from any initial point to an individually rational Pareto optimum. The proofs of the other conditions follow immediately from the proofs of the theorems supra and the definitions of the Generalized MDP Procedure. □

Notes

  1. For the concepts of neutrality associated with planning procedures, see Cornet ([1977a, 1977b]), Cornet and Lasry ([1976]), Rochet ([1982]), Sato ([1983, 2011]). See also d’Aspremont and Drèze ([1979]) for a version of neutrality which is valid for the generic context.

  2. See Laffont and Saint-Pierre ([1979]) for an exception with an information processing cost.

  3. The essence of the discrete version of the MDP Procedure (CDH Procedure) can be captured in Henry and Zylberberg ([1977]). See, in addition, Tulkens ([1978]), Laffont ([1982]), Mukherji ([1990]) and Salanié ([1998]) for lucid summaries of the MDP Procedure. Because of its feasibility, the MDP Procedure can be seen as a 'nontâtonnement process'; one can therefore truncate it at any time. As for a contribution to the MDP literature, see Von Dem Hagen ([1991]), where a differential game approach is taken. De Trenquale ([1992]) defined a dynamic mechanism, different from the MDP Procedure, that implements with local dominant strategies a Pareto efficient and individually rational allocation in a general two-agent model. Chander ([1993]) verified the incompatibility between the core convergence property and local strategy proofness. Sato ([2007]) designed the Hedonic MDP Procedure for optimizing gaseous attributes which compose the global atmosphere in a new theoretical context.

  4. See Henry and Zylberberg ([1978]) for graphical illustration of how the method of a decreasing pitch successfully works until a Pareto optimum is attained. Although they treated the case with increasing returns to scale, the structure is isomorphic to the model with public goods. Crémer ([1983, 1990]) took another approach to treat increasing returns to scale as well as useful ideas that can be applied for public goods. See Heal ([1986]) for a comprehensive account of the planning theory and the dilemma of choosing a step-size in discrete procedures. See also Henry and Zylberberg ([1977]) for the Heal Procedure.

  5. For another global analog, see also Dubins’ mechanism which is a speed transform of the MDP Procedure explained in Green and Laffont ([1979]).

  6. See Spagat ([1995]) for incisive criticism of iterative planning theory and his re-examination of the standard procedures in the Bayesian learning real-time model.

References

  • Allard M, Bronsard C, Richelle Y: Temporary Pareto optimum theory. J Public Econ 1980, 38: 343–368.

  • Champsaur P: Neutrality of planning procedures in an economy with public goods. Rev Econ Stud 1976, 43: 293–300.

  • Champsaur P, Drèze J, Henry C: Stability theorems with economic applications. Econometrica 1977, 45: 272–294. doi:10.2307/1913309

  • Champsaur P, Laroque G: Le Plan Face aux Comportements Stratégiques des Unités Décentralisées. Ann INSEE 1981, 42: 19–33.

  • Champsaur P, Laroque G: Strategic behavior and decentralized planning procedures. Econometrica 1982, 50: 325–344. doi:10.2307/1912632

  • Champsaur P, Rochet J-C: On planning procedures which are locally strategy proof. J Econ Theory 1983, 30: 353–369. doi:10.1016/0022-0531(83)90112-6

  • Chander P: The design of efficient resource allocation mechanisms. In Microeconomic theory. Edited by: Samuelson L. Kluwer Academic, Boston; 1985:19–58.

  • Chander P: Dynamic procedures and incentives in public good economies. Econometrica 1993, 61: 1341–1354. doi:10.2307/2951645

  • Cornet B: Accessibilités des Optimums de Pareto par des Processus Monotones. C R Acad Sci, Ser A 1977, 282: 641–644.

  • Cornet B: Neutrality of planning procedures. J Math Econ 1977, 11: 141–160.

  • Cornet B, Lasry J-M: Un Théorème de Surjectivité pour une Procédure de Planification. C R Acad Sci, Ser A 1976, 282: 1375–1378.

  • Coughlin B, Howe C: Policies over time and Pareto optimality. Soc Choice Welf 1989, 6: 259–273. doi:10.1007/BF00446984

  • Crémer J: The discrete Heal algorithm with intermediate goods. Rev Econ Stud 1983, 6: 383–391.

  • Crémer J: Two planning procedures for all economies. In Microeconomics, essays in honor of Edmond Malinvaud. Edited by: Champsaur P, Deleau M, Grandmont J-M, Guesnerie R, Henry C, Laffont J-J, Laroque G, Mairesse J, Monfort A, Younès Y. MIT Press, Cambridge; 1990. Chapter 2.

  • d'Aspremont C, Drèze J: On the stability of dynamic processes in economic theory. Econometrica 1979, 47: 733–737. doi:10.2307/1910418

  • De Trenquale P: Dynamic implementation in two-agent economies. Econ Lett 1992, 39: 305–308. doi:10.1016/0165-1765(92)90266-2

  • Drèze J: A tâtonnement process for investment under uncertainty in private ownership economies. In Mathematical methods in investment and finance. Edited by: Szego G, Shell K. North-Holland, Amsterdam; 1972:3–23.

  • Drèze J, de la Vallée Poussin D: A tâtonnement process for public goods. Rev Econ Stud 1971, 38: 133–150. doi:10.2307/2296777

  • Fujigaki Y: Incentives and optimality: simple examples of incentive compatible planning procedures for the private goods economy. J Fac Econ 1981, 12: 67–78.

  • Fujigaki Y, Sato K: Incentives in the generalized MDP procedure for the provision of public goods. Rev Econ Stud 1981, 48: 473–485. doi:10.2307/2297159

  • Fujigaki Y, Sato K: Characterization of SIIC continuous planning procedures for the optimal provision of public goods. Econ Stud Q 1982, 33: 211–226.

  • Green J, Laffont J-J: Incentives in public decision-making. North-Holland, Amsterdam; 1979.

  • Green J, Schoumaker F: Incentives in discrete-time MDP Processes with flexible step-size. Rev Econ Stud 1980, 47: 557–565. doi:10.2307/2297307

  • Hammond P: Symposium on incentive compatibility: introduction. Rev Econ Stud 1979, 47: 181–184.

  • Heal G: Planning. In Handbook of mathematical economics. Edited by: Arrow K, Intriligator M. North-Holland, Amsterdam; 1986. Chapter 27.

  • Henry C: On the free rider problem in the M.D.P. procedure. Rev Econ Stud 1979, 46: 293–303. doi:10.2307/2297052

  • Henry C, Zylberberg A: Procédures de Planification avec Rendements Croissants ou Biens Publics. Cah Sémin Économ 1977, 18: 27–38.

  • Henry C, Zylberberg A: Planning algorithms to deal with increasing returns. Rev Econ Stud 1978, 45: 67–75. doi:10.2307/2297083

  • Laffont J-J: Aggregation and revelation of preferences. North-Holland, Amsterdam; 1979.

  • Laffont J-J: Cours de Théorie Microéconomique: Vol. 1 - Fondements de l'Économie Publique. Economica, Paris; 1982. English edition: Laffont J-J (1985) Fundamentals of public economics (trans: Bonin J, Bonin H). MIT Press, Cambridge.

  • Laffont J-J: Incitations dans les Procédures de Planification. Ann INSEE 1985, 58: 3–37.

  • Laffont J-J, Maskin E: A characterization of strongly locally incentive compatible planning procedures with public goods. Rev Econ Stud 1983, 50: 171–186. doi:10.2307/2296963

  • Laffont J-J, Saint-Pierre P: Planning with externalities. Int Econ Rev 1979, 20: 617–634. doi:10.2307/2526261

  • Laroque G, Rochet J-C: Myopic versus intertemporal manipulation in decentralized planning procedures. Rev Econ Stud 1983, 50: 187–196. doi:10.2307/2296964

  • Malinvaud E: Decentralized procedures of planning. In Activity analysis in the theory of growth and planning. Edited by: Malinvaud E, Bacharach M. Macmillan; 1967. Proceedings of a conference held in Cambridge, UK by the International Economic Association.

  • Malinvaud E: Procedures for the determination of collective consumption. Eur Econ Rev 1970–1971, 2: 187–217. doi:10.1016/0014-2921(70)90012-7

  • Malinvaud E: A planning approach to the public good problem. Swed J Econ 1971, 73: 96–121. doi:10.2307/3439136

  • Malinvaud E: Prices for individual consumption, quantity indicators for collective consumption. Rev Econ Stud 1972, 39: 385–406. doi:10.2307/2296508

  • Mas-Colell A: Efficiency and decentralization in the pure theory of public goods. Q J Econ 1980, 94: 625–641.

  • Matthews S: Local simple games in public choice mechanisms. Int Econ Rev 1982, 23: 623–645. doi:10.2307/2526379

  • Moulin H: Comportement Stratégique et Communication Conflictuelle: Le Cas Non-Coopératif. Rev Écon 1984, 50: 109–125.

  • Mukherji A: Walrasian and non-Walrasian equilibria. Clarendon, Oxford; 1990.

  • Otsuki M: Discrete procedures of economic planning: a unified view from feasible direction methods. Rev Econ Stud 1978, 45: 77–84. doi:10.2307/2297084

  • Roberts J: Incentives in planning procedures for the provision of public goods. Rev Econ Stud 1979a, 46: 283–292. doi:10.2307/2297051

  • Roberts J: Strategic behavior in the MDP procedure. In Aggregation and revelation of preferences. Edited by: Laffont J-J. North-Holland, Amsterdam; 1979b. Chapter 19.

  • Roberts J: Incentives, information, and iterative planning. In Information, incentives, and economic mechanisms: essays in honor of Leonid Hurwicz. Edited by: Groves T, Radner R, Reiter S. Basil Blackwell; 1987.

  • Rochet J-C (1982) Planning procedures in an economy with public goods: a survey article. CEREMADE DP No. 213, Université de Paris.

  • Salanié B: Microéconomie: Les Défaillances du Marché. Economica, Paris; 1998. English edition: Salanié B (2000) Microeconomics of market failures. MIT Press, Cambridge.

  • Sato K: On compatibility between neutrality and aggregate correct revelation for public goods. Econ Stud Q 1983, 34: 97–109.

  • Sato K (2007) Incentives in the Hedonic MDP Procedure for the global atmosphere as a complex of gaseous attributes. Presented at the workshop on the environment, held at l'Université Catholique de Louvain, Louvain-la-Neuve, Belgium, 19 April; also presented at the Regional Science Workshop in Sendai, held at the Graduate School of Information Sciences, Tohoku University, 26 July.

  • Sato K (2010) Incentive compatible discrete Green and Laffont planning procedures with pivotal agents. Presented at the autumn meeting of the Japanese Economic Association, held at Kwansei Gakuin University, 18 September 2010.

  • Sato K (2011) The MDP procedure for public goods. Presented at the Regional Science Workshop in Sendai, held at the Graduate School of Information Sciences, Tohoku University, 17 June 2011.

  • Sato T: On the MDP procedure with non-myopic agents. Econ Stud Q 1983, 34: 110–123.

  • Schoumaker F (1976) Best reply strategies in the Champsaur-Drèze-Henry procedure. DP 262, Center for Mathematical Studies in Economics and Management Science, Northwestern University.

  • Schoumaker F: Révélation des Préférences et Planification: une Approche Stratégique. Rech Écon Louvain 1977, 43: 245–259.

  • Schoumaker F: Strategic behaviour in a discrete-time procedure. In Aggregation and revelation of preferences. Edited by: Laffont J-J. North-Holland, Amsterdam; 1979. Chapter 20.

  • Spagat M: Leaving some stones unturned: a reassessment of iterative planning theory. J Public Econ 1995, 58: 85–105. doi:10.1016/0047-2727(94)01469-5

  • Truchon M: Nonmyopic strategic behaviour in the MDP planning procedure. Econometrica 1984, 52: 1179–1189. doi:10.2307/1910994

  • Tulkens H: Dynamic processes for public goods: a process-oriented survey. J Public Econ 1978, 9: 163–201. doi:10.1016/0047-2727(78)90042-7

  • Von Dem Hagen O: Strategic behaviour in the MDP procedure. Eur Econ Rev 1991, 35: 121–138. doi:10.1016/0014-2921(91)90107-T


Acknowledgements

This is one of a series of papers dedicated to the XXXXth Anniversary of the MDP Procedure. It was at the Brussels Meeting of the Econometric Society in September 1969 that Drèze and de la Vallée Poussin together, and Malinvaud independently, presented their papers on planning tâtonnement processes for guiding and financing the optimal provision of public goods. A preliminary version of this paper was presented at the Far Eastern Meeting of the Econometric Society held at the International Conference Center Kobe, Japan, July 21, 2001. The revised version was presented at the autumn meeting of the Japanese Economic Association held at Hitotsubashi University, October 8, 2001. Some major revisions were made thereafter.

Author information

Correspondence to Kimitoshi Sato.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Sato, K. Nonmyopia and incentives in the piecewise linearized MDP procedures with variable step-sizes. Economic Structures 1, 5 (2012). https://doi.org/10.1186/2193-2409-1-5