Journal of Economic Structures

The Official Journal of the Pan-Pacific Association of Input-Output Studies (PAPAIOS)

Open Access

The MDP Procedure for public goods and local strategy proofness

Journal of Economic Structures (2016) 5:12

DOI: 10.1186/s40008-016-0040-0

Received: 2 February 2015

Accepted: 8 March 2016

Published: 2 April 2016

Abstract

This paper revisits the family of MDP Procedures and analyzes their properties. It also reviews the procedure developed by Sato (Econ Stud Q 34:97–109, 1983), which achieves aggregate correct revelation in the sense that the sum of the Nash equilibrium strategies always coincides with the aggregate value of the correct marginal rates of substitution. The procedure, named the Generalized MDP Procedure, can possess the other desirable properties shared by continuous-time locally strategy proof planning procedures, i.e., feasibility, monotonicity and Pareto efficiency. Under the myopia assumption, each player’s dominant strategy in the local incentive game associated with any iteration of the procedure is proved to reveal his/her marginal rate of substitution for the public good. In connection with the Generalized MDP Procedure, this paper analyzes the structure of locally strategy proof procedures as algorithms and game forms. An alternative characterization theorem of locally strategy proof procedures is given by making use of a new condition, transfer independence. A measure of incentives is proposed, with which the exponent attached to the public good decision function is characterized. A Piecewise Nonlinearized MDP Procedure, which is coalitionally locally strategy proof, is presented. Equivalence between price-guided and quantity-guided procedures is also discussed.

Keywords

Aggregate correct revelation · Coalitional local strategy proofness · Fujigaki–Sato Procedure · Generalized MDP Procedure · Local strategy proofness · Measure of incentives · Piecewise Nonlinearized MDP Procedure · Price–quantity equivalence · Transfer independence

JEL Classification

H41

1 Background

According to Samuelson (1954), public goods are characterized by nonrivalness and nonexcludability. Since the appearance of Samuelson’s seminal paper, the prevalent view had been that the free rider problem was inevitable in the provision of pure public goods: once the good is made available to one person, it is available to all. It was epoch-making that this pessimistic view was shattered by the advent of the Malinvaud–Drèze–de la Vallée Poussin (hereafter, MDP) procedure. Since then, a large literature has accumulated developing many kinds of individually rational and incentive-compatible planning procedures for optimally providing public goods.

Jacques Drèze and Dominique de la Vallée Poussin, and Edmond Malinvaud independently presented a tâtonnement process for guiding and financing an efficient production of public goods at the 1969 meeting of the Econometric Society in Brussels. As Malinvaud noted in his paper, the two approaches closely resembled each other: each attempted a dynamic presentation of Samuelson’s Condition for the optimal provision of public goods. Subsequently, Malinvaud published another article on the subject, proposing a mixed (price–quantity) procedure. Their papers are among the most important contributions to planning theory and to public economics. The resulting process came to be termed the MDP Procedure and spawned numerous papers.1

The theory of incentives in planning procedures with public goods was initiated by these three great pioneers, and this field of research has made remarkable progress over the last four and a half decades. They sowed the seeds for the subsequent developments in the theory of public goods and successfully introduced a game-theoretic approach into the planning theory of public goods. Numerous succeeding contributions devised means of providing incentives to correctly reveal preferences for public goods. The analyses of incentives in tâtonnement procedures began in 1969 and were mathematically refined by the characterization theorems of Champsaur and Rochet (1983), which generalized the previous results of Fujigaki and Sato (1981, 1982), as well as Laffont and Maskin (1983). The incentive theory in the planning context reached its acme in Champsaur and Rochet’s generic theorems. Most of these procedures can be characterized by the conditions whose formal definitions are given in Sect. 3.2, i.e., (1) feasibility, (2) monotonicity, (3) Pareto efficiency, (4) local strategy proofness and (5) neutrality.

The MDP theory was very appealing for its mathematical elegance and its direct application of Samuelson’s Condition, and it received a lot of attention in the 1970s and 1980s, especially on the problem of incentives in planning procedures with public goods; however, there has been very little work on it over the last twenty years, leaving some very difficult problems open. Sato (2012) is a follow-up on the literature on the use of processes as mechanisms for aggregating the decentralized information needed for determining an optimal quantity of public goods.2 This paper tries to add some interesting results on the family of MDP Procedures. In addition to implementation, it is required that the equilibria of the procedures be limit points of a given dynamic adjustment process. This paper also aims at clarifying the structure of locally strategy proof planning procedures, including the MDP Procedure, as algorithms and game forms. Procedures are called locally strategy proof if correct revelation of preferences for the public good is a dominant strategy for any player in the local incentive game associated with each iteration. This property is not possessed by the original MDP Procedure when the number of players exceeds two. The task of the MDP Procedure is to enable the planner or the planning center to determine an optimal amount of public goods. As an algorithm, it can reach any Pareto optimum.

This paper revisits the procedure developed by Sato (1983), who advocated Aggregate Correct Revelation in the sense that the sum of the Nash equilibrium strategies always coincides with the aggregate value of correct preferences for public goods. This made it possible to escape the impossibility theorem among the above five desiderata without requiring dominance. The procedure developed by Sato (1983) can possess the desirable features shared by continuous-time procedures, i.e., efficiency and incentive compatibility. An alternative characterization theorem of locally strategy proof procedures is given by making use of a new condition, transfer independence, which means that the transfer function of the private good serving as a numéraire is independent of the players’ strategies.

The remainder of the paper is organized as follows. The next section outlines the general framework. Section 3 reviews the MDP Procedure and the Fujigaki–Sato Procedure and introduces the genuine Generalized MDP Procedure, which achieves neutrality and aggregate correct revelation. This section explores players’ strategic manipulability in the incentive game associated with each iteration of the procedure and presents the new theorems. Section 4 analyzes the structure of locally strategy proof planning procedures. A Piecewise Nonlinearized MDP Procedure, which is coalitionally locally strategy proof, is presented. Equivalence between price-guided procedures and quantity-guided procedures is discussed in Sect. 5. The last section provides some final remarks.

2 The model

The model involves two goods, one public good and one private good, whose quantities are represented by x and y, respectively. Let \(y_{i}\) denote the amount of the private good allocated to the ith consumer. The economy comprises n individuals. Each consumer \(i\in \mathbf {N}=\{1,\ldots ,n\}\) is characterized by his/her initial endowment of the private good \(\omega _{i}\) and his/her utility function \(u_{i}:\mathbf {R}_{+}^{2}\rightarrow \mathbf {R}\).3 The production sector is represented by the transformation function \(g:\mathbf {R}_{+}\rightarrow \mathbf {R}_{+}\), where \(y=g(x)\) signifies the minimal quantity of the private good needed to produce the public good quantity x. It is assumed as usual that there is no production of the private good. The following assumptions and definitions are used throughout this paper.

Assumption 1

For any \(i\in \mathbf {N},\) \(u_{i}(\cdot ,\cdot )\) is strictly concave and at least twice continuously differentiable.

Assumption 2

For any \(i\in \mathbf {N},\) \(\partial u_{i}(x,y_{i})/\partial x\ge 0,\) \(\partial u_{i}(x,y_{i})/\partial y_{i}\) \(>0 \) and \(\partial u_{i}(x,0)/\partial x=0\) for any x.

Assumption 3

\(\ g(x)\) is convex and twice continuously differentiable.

Let \(\gamma (x)={\text {d}}g(x)/{\text {d}}x\) denote the marginal cost in terms of the private good, which is assumed to be known to the planner or the planning center. The center asks each individual i to report his/her marginal rate of substitution (MRS) between the public good and the private good used as a numéraire in order to determine an optimal quantity of the public good:
$$\begin{aligned} \pi _{i}\left( x,y_{i}\right) =\frac{\partial u_{i}(x,y_{i})/\partial x}{\partial u_{i}(x,y_{i})/\partial y_{i}},\,\forall i\in \mathbf {N}. \end{aligned}$$
(1)
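As a numerical aside (not part of the original analysis), the MRS in Eq. (1) can be approximated by finite differences for any smooth utility function; the utility function and the evaluation point below are hypothetical.

```python
# Illustrative sketch (not from the paper): approximating the MRS of Eq. (1)
# by central finite differences.  The utility u_i and the point (x, y_i)
# are assumed for illustration only.
import math

def mrs(u, x, y, h=1e-6):
    """pi_i = (du_i/dx) / (du_i/dy_i), via central differences."""
    du_dx = (u(x + h, y) - u(x - h, y)) / (2 * h)
    du_dy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return du_dx / du_dy

a_i = 2.0                                                # hypothetical parameter
u_i = lambda x, y: a_i * math.log(1 + x) + math.sqrt(y)  # assumed utility

# Analytic value: (a_i/(1+x)) / (1/(2*sqrt(y))) = 2*a_i*sqrt(y)/(1+x)
print(mrs(u_i, 3.0, 4.0))    # ≈ 2.0
```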

Definition 1

An allocation z is feasible if and only if
$$\begin{aligned} z\in \mathbf {Z}=\left\{ \left( x,y_{1},\ldots ,y_{n}\right) \in \mathbf {R}_{+}^{n+1}|\sum _{i\in \mathbf {N}}y_{i}+g\left( x\right) =\sum _{i\in \mathbf {N}}\omega _{i}\right\} . \end{aligned}$$
(2)

Definition 2

An allocation z is individually rational if and only if
$$\begin{aligned} u_{i}\left( x,y_{i}\right) \ge u_{i}\left( 0,\omega _{i}\right) ,\,\forall i\in \mathbf {N}. \end{aligned}$$
(3)

Definition 3

A Pareto optimum for this economy is an allocation \(z^{*}\in \mathbf {Z}\) such that there exists no feasible allocation z with
$$\begin{aligned} u_{i}\left( x,y_{i}\right)&\ge u_{i}\left( x^{*},y_{i}^{*}\right) ,\,\forall i\in \mathbf {N} \end{aligned}$$
(4)
$$\begin{aligned} u_{j}\left( x,y_{j}\right)&>u_{j}\left( x^{*},y_{j}^{*}\right) ,\,\exists j\in \mathbf {N}. \end{aligned}$$
(5)

These assumptions and definitions altogether give us conditions for Pareto optimality in our economy.

Lemma 1

Under Assumptions 1–3, necessary and sufficient conditions for an allocation to be Pareto optimal are
$$\begin{aligned} \sum _{i\in \mathbf {N}}\pi _{i}\le \gamma \text { and }\left( \sum _{i\in \mathbf {N}}\pi _{i}-\gamma \right) x=0. \end{aligned}$$
The condition \(\sum _{i\in \mathbf {N}}\pi _{i}=\gamma \) for \(x>0\) is called Samuelson’s Condition.4 Conventional mathematical notation is used throughout in the same manner as in Sato (2012). Hereafter, all variables are assumed to be functions of time t; however, the argument t is often omitted. The analyses in the following sections bypass the possibility of a boundary problem at \(x=0\). This is an innocuous assumption in the single public good case, because x is always increasing. The results below cannot be applied to the model with many public goods.
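To make Lemma 1 concrete, here is a minimal numerical sketch under assumed quasilinear preferences \(u_{i}=a_{i}\log (1+x)+y_{i}\), so that \(\pi _{i}=a_{i}/(1+x)\), and a linear cost \(g(x)=\gamma x\); all parameter values are hypothetical.

```python
# Minimal sketch of Lemma 1 under assumed quasilinear preferences:
# u_i = a_i*log(1+x) + y_i  =>  pi_i = a_i/(1+x); linear cost g(x) = gamma*x,
# so the MRT is the constant gamma.  Samuelson's Condition sum_i pi_i = gamma
# then has the closed form x* = sum_i a_i / gamma - 1.

a = [1.0, 2.0, 3.0]          # hypothetical preference parameters
gamma = 2.0                  # constant marginal cost (MRT)

def excess(x):               # sum of MRSs minus the MRT
    return sum(ai / (1 + x) for ai in a) - gamma

# excess is strictly decreasing in x, so bisection applies.
lo, hi = 0.0, 100.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)

print(lo)                    # ≈ sum(a)/gamma - 1 = 2.0
```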

3 The family of MDP Procedures

3.1 Reviewing the MDP Procedure and its properties

The MDP Procedure is the best-known member belonging to the family of the quantity-guided procedures in which the relevant information exchanged between the center and the periphery is in the form of quantity. The planning center asks individuals their MRSs between the public good and the private good as a numéraire. Then the center revises an allocation according to the discrepancy between the sum of the reported MRSs and the MRT.

Besides full implementation, an additional property is required: its equilibria must be approachable via an adjustment process. Suppose a game is played repeatedly in continuous time at any iteration \(t\in [0,\infty )\) of the procedure. Denote \(\psi _{i}(t)\) as player i’s strategy announced at t. Let \(\psi (t)=\left( \psi _{1}(t),\ldots ,\psi _{n}(t)\right) \in \mathbf {R}_{+}^{n}\) be the vector of strategies. Needless to say, \(\psi _{i}(t)\) does not necessarily coincide with the true MRS, \(\pi _{i}\); thus, the incentive problem matters.

The MDP Procedure reads:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=\sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t) \\ \\ \dot{y}=-\gamma (t)\dot{x} \\ \\ \dot{y}_{i}=-\psi _{i}(t)\dot{x}+\delta _{i}\dot{x}^{2},\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(6)
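A mechanical check of system (6): since \(\sum _{i\in \mathbf {N}}\delta _{i}=1\) and \(\dot{x}=\sum _{i}\psi _{i}-\gamma \), summing the \(\dot{y}_{i}\) gives \(\sum _{i}\dot{y}_{i}=-\gamma \dot{x}\), so Condition F below holds identically. A quick numerical sketch (all numbers arbitrary):

```python
# Sanity check of system (6): instantaneous feasibility,
# gamma*xdot + sum_i ydot_i = 0, holds identically because sum_i delta_i = 1.
# The numbers below are arbitrary illustrations.

psi   = [0.8, 1.5, 0.4]      # announced MRSs psi_i(t)
delta = [0.2, 0.5, 0.3]      # distributional coefficients, sum to 1
gamma = 2.0                  # MRT

xdot  = sum(psi) - gamma
ydots = [-p * xdot + d * xdot ** 2 for p, d in zip(psi, delta)]

print(gamma * xdot + sum(ydots))   # ≈ 0
```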

Remark 1

(i)

The term \(-\psi _{i}(t)\dot{x}<0\) is a contribution when \(\dot{x}>0\), and \(-\psi _{i}(t)\dot{x}>0\) is a compensation when \(\dot{x}<0.\) The distributional coefficients \(\delta _{i}>0,\) \(\forall i\in \mathbf {N},\) with \(\sum _{i\in \mathbf {N}}\delta _{i}=1,\) are determined by the planner prior to the beginning of an operation of the procedure. Their role is to share among individuals the “social surplus,” \(\dot{x}^{2}\), generated along the procedure, which is always positive except at the equilibrium.

     
(ii)
Drèze and de la Vallée Poussin (1971) set \(\delta _{i}>0\), which was followed by Roberts (1979a, b), whereas \(\delta _{i}\ge 0\) was assumed by Champsaur (1976), who advocated the notion of neutrality explained below. A local incentive game associated with each iteration of the process is formally defined as the normal form game \((\mathbf {N},\mathbf {\Psi },\mathbf {U})\), where \(\mathbf {\Psi }=\Pi _{i\in \mathbf {N}}\mathbf {\Psi }_{i}\subset \mathbf {R}_{+}^{n}\) is the Cartesian product of the sets \(\mathbf {\Psi }_{i}\) of player i’s strategies, and \(\mathbf {U}=(\dot{u}_{1},\ldots ,\dot{u}_{n})\) is the n-tuple of payoff functions. The time derivative of consumer i’s utility is such that
    $$\begin{aligned} \dot{u}_{i}\left( \psi (t)\right) =\frac{\partial u_{i}}{\partial x}\dot{x}+\frac{\partial u_{i}}{\partial y_{i}}\dot{y}_{i}=\frac{\partial u_{i}}{\partial y_{i}}(\pi _{i}\dot{x}+\dot{y}_{i}) \end{aligned}$$
    (7)
    which is the payoff that each player obtains at iteration t in the local incentive game along the procedure.
     
The behavioral hypothesis underlying the above equation is the following myopia assumption. In order to maximize his/her instantaneous utility increment, \(\dot{u}_{i}(\psi (t))\) as his/her payoff, each player determines his/her dominant strategy, \(\tilde{\psi }_{i}\in \mathbf {\Psi }_{i}.\)

Denote \(\psi _{-i}=\left( \psi _{1},\ldots ,\psi _{i-1},\psi _{i+1},\ldots ,\psi _{n}\right) \in \mathbf {\Psi }_{-i}=\Pi _{j\in \mathbf {N}-\{i\}}\mathbf {\Psi }_{j}\) and introduce the following definition.

Definition 4

A dominant strategy for each player in the local incentive game \((\mathbf {N},\mathbf {\Psi },\mathbf {U})\) is the strategy \(\tilde{\psi }_{i}\in \mathbf {\Psi }_{i}\) such that
$$\begin{aligned} \dot{u}_{i}(\tilde{\psi }_{i},\psi _{-i})\ge \dot{u}_{i}(\psi _{i},\psi _{-i}),\,\forall \psi _{i}\in \mathbf {\Psi }_{i},\,\forall \psi _{-i}\in \mathbf {\Psi }_{-i},\,\forall i\in \mathbf {N}. \end{aligned}$$
(8)

In the Procedure, the planning center aims to provide an optimal quantity of the public good by revising its quantity at each iteration \(t\in [0,\infty )\). In order to decide in what direction an allocation should be changed, the center proposes a tentative feasible amount of the public good, x(0), at the initial time 0, and each agent is asked to report his/her true MRS, \(\pi _{i}(x(t),\omega _{i}),\) \(\forall i\in \mathbf {N},\) \(\forall t\in [0,\infty )\), as locally held private information. The planning center can easily calculate for any t the sum of the announced MRSs to change the allocation at the next iteration \(t+{\text {d}}t\). It is supposed that the center can get an exact value of the MRT.

The continuous-time dynamics is summarized as follows.
  • Step 0: At initial iteration 0, the center proposes a feasible allocation \((x(0),\omega _{1},\ldots ,\omega _{n})\) and asks players to reveal their preference for the public good, \(\pi _{i}(x(0),\omega _{i})\).

  • Step t: At each iteration t, players report their information and the center calculates the discrepancy between the sum of MRSs and the MRT. Unless these two values are equal, the center suggests a new proposal allocation, and players update and reveal their preferences. If Samuelson’s Condition holds at some iteration, the MDP Procedure stops, and an optimal quantity of the public good is determined and supplied.
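The two steps above can be sketched as a simple Euler discretization of the continuous-time dynamics. The quasilinear preferences (giving \(\pi _{i}(x)=a_{i}/(1+x)\)) and the constant MRT are assumptions for illustration, and truthful reporting is imposed at every iteration.

```python
# Euler-discretized sketch of the MDP dynamics under truthful reporting.
# Hypothetical quasilinear preferences give pi_i(x) = a_i/(1+x); the MRT
# gamma is constant (linear cost).  Step size and parameters are arbitrary.

a = [1.0, 2.0, 3.0]
gamma = 2.0
h = 0.01                               # discretization step

x = 0.0                                # Step 0: initial proposal x(0)
for _ in range(20_000):                # Step t: revise until Samuelson holds
    discrepancy = sum(ai / (1 + x) for ai in a) - gamma
    if abs(discrepancy) < 1e-10:       # Samuelson's Condition (approximately)
        break
    x += h * discrepancy               # xdot = sum_i psi_i - gamma

print(x)                               # ≈ sum(a)/gamma - 1 = 2.0
```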

With many public goods \(k\in \mathbf {K}\), the MDP Procedure is defined as:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}_{k}=\left\{ \begin{array}{l} \sum _{i\in \mathbf {N}}\psi _{ik}(t)-\gamma _{k}(t),\quad x_{k}(t)>0, \\ \\ {\text {max}}\left[ 0,\,\sum _{i\in \mathbf {N}}\psi _{ik}(t)-\gamma _{k}(t)\right] ,\quad x_{k}(t)=0\end{array}\right\} \,\begin{array}{l} k=1,\ldots ,m \\ \\ t\ge 0\end{array} \\ \\ \dot{y}=-\sum _{k\in \mathbf {K}}\gamma _{k}(t)\dot{x}_{k} \\ \\ \dot{y}_{i}=-\sum _{k\in \mathbf {K}}\psi _{ik}(t)\dot{x}_{k}+\delta _{i}\sum _{k\in \mathbf {K}}\dot{x}_{k}^{2},\quad\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(9)
where the max operator prevents the quantity of any public good from becoming negative. The remainder of this paper is confined to a single public good.

Remark 2

For the existence of solutions to the equations with the discontinuous right-hand side, see Henry (1972, 1973) and Champsaur et al. (1977) who reproduced Castaing and Valadier (1969) and Attouch and Damlamian (1972).

3.2 Normative conditions for the family of the procedures

The conditions mentioned in the Introduction are now stated formally; they characterize the procedures. All conditions except PE must be fulfilled for any \(t\in [0,\infty ).\) PE is based on the announced values, \(\psi _{i},\forall i\in \mathbf {N},\) which implies that the Pareto optimum reached is not necessarily the one achieved under truthful revelation of preferences for the public good. Condition LSP signifies that truth-telling is a dominant strategy for each player; it is also called Strongly Locally Individually Incentive Compatible (SLIIC).

Let \(\mathbf {P}_{0}\) be the set of individually rational Pareto optima (IRPO), i.e., those Pareto optima that are better than the status quo. Let \(\Delta \) be the set of \(\delta =(\delta _{1},\ldots ,\delta _{n}),\) and \(z\left( \cdot \right) \) a solution along the procedure. Condition N means that for every efficient point \(z^{*}\in \mathbf {Z}\) and for any initial point \(z_{0}\in \mathbf {Z}\), there exist \(\delta \) and \(z(t,\delta )\), a trajectory starting from \(z_{0}\), such that \(z^{*}=z(\infty ,\delta ).\) It was Champsaur (1976) who advocated the notion of neutrality for the MDP Procedure, and Cornet (1983) generalized it by omitting two restrictive assumptions imposed by Champsaur, i.e., (i) uniqueness of the solution and (ii) concavity of the utility functions. Neutrality depends on the distributional coefficient vector \(\delta .\) Remember that the role of \(\delta \) is to attain any IRPO by distributing the social surplus generated during the operation of the procedure: varying \(\delta \) varies the trajectories so as to reach every IRPO. In other words, the planning center can guide an allocation via the choice of \(\delta \); however, it cannot predetermine the final allocation to be achieved. This is a very important property for noncooperative games, since equity considerations among players matter.5

Condition F. Feasibility
$$\begin{aligned} \gamma (t)\dot{x}(\psi (t))+\sum _{i\in \mathbf {N}}\dot{y}_{i}(\psi (t))=0,\,\forall t\in [0,\infty ). \end{aligned}$$
(10)
Condition M. Monotonicity
$$\begin{aligned} \dot{u}_{i}= & {} \frac{\partial u_{i}}{\partial y_{i}}\{(\pi _{i}\dot{x}(\psi (t))+\dot{y}_{i}(\psi (t))\}\ge 0\nonumber \\&\qquad \forall \psi (t)\in \mathbf {\Psi },\,\forall i\in \mathbf {N},\,\forall t\in [0,\infty ). \end{aligned}$$
(11)
Condition PE. Pareto Efficiency
$$\begin{aligned} \dot{x}\left( \psi (t)\right) =0\iff \sum _{i\in \mathbf {N}}\psi _{i}(t)=\gamma (t),\,\forall \psi (t)\in \mathbf {\Psi }. \end{aligned}$$
(12)
Condition \(LSP.\ \ \) Local Strategy Proofness
$$\begin{aligned}&\pi _{i}\dot{x}(\pi _{i}(t),\psi _{-i}(t))+\dot{y}_{i}(\pi _{i}(t),\psi _{-i}(t))\ge \pi _{i}\dot{x}(\psi (t))+\dot{y}_{i}(\psi (t))\nonumber \\&\qquad \forall \psi _{i}\in \Psi ,\,\forall \psi _{-i}\in \mathbf {\Psi }_{-i},\,\forall i\in \mathbf {N},\,\forall t\in [0,\infty ). \end{aligned}$$
(13)
Condition N. Neutrality
$$\begin{aligned} \exists \delta \in \Delta \text { and }\exists z(t,\delta )\in \mathbf {Z},\,\forall z_{0}\in \mathbf {Z}\text { and }\forall z^{*}= \lim _{t\rightarrow \infty }z \left( t,\delta \right) \in \mathbf {P}_{0}. \end{aligned}$$
(14)
The MDP Procedure enjoys feasibility, monotonicity, stability, neutrality and incentive properties pertaining to minimax and Nash equilibrium strategies, as was proved by Drèze and de la Vallée Poussin (1971), Schoumaker (1977), Henry (1979) and Roberts (1979a, b). The MDP Procedure as an algorithm evolves in the allocation space and stops when Samuelson’s Condition is met, so that the public good quantity is optimal and, simultaneously, the private good is allocated in a Pareto optimal way, i.e., \(z^{*}=(x^{*},y_{1}^{*},\ldots ,y_{n}^{*})\) is Pareto optimal.

3.3 The locally strategy proof MDP Procedure

In our context, as the planning center’s most important task is to achieve an optimal allocation of the public good, it has to collect the relevant information from the periphery so as to meet the conditions presented above. Fortunately, the necessary information is available if the procedure is locally strategy proof. It was already shown by Fujigaki and Sato (1981), however, that the incentive compatible n-person MDP Procedure cannot preserve neutrality, since \(\delta _{i},\) \(\forall i\in \mathbf {N},\) must be fixed at 1/n to accomplish LSP while keeping the other conditions fulfilled. This is a sharp contrast between local and global games, since the class of Groves mechanisms is neutral. [See Green and Laffont (1979), pp. 75–76.]

Let \(a\in \mathbf {R}_{++}\) be an arbitrary adjustment speed of the public good. Fujigaki and Sato (1981) presented the Locally Strategy Proof MDP Procedure, which reads:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=a\left\{ sgn\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) \right\} ^{n-2}\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) ^{n-1} \\ \\ \dot{y}_{i}=-\psi _{i}(t)\dot{x}\left( \psi (t)\right) +\dfrac{1}{n}\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) \dot{x}\left( \psi (t)\right) ,\,\forall i\in \mathbf {N}\end{array}\right. \end{aligned}$$
(15)
where
$$\begin{aligned} sgn\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) =\left\{ \begin{array}{cc} +1 &{} \begin{array}{c} \,if\,\sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\ge 0 \\ \end{array} \\ -1 &{} \,if\,\sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)<0.\end{array}\right. \end{aligned}$$
(16)
Equivalently
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=a\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) \left| \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right| ^{n-2} \\ \\ \dot{y}_{i}=-\psi _{i}(t)\dot{x}\left( \psi (t)\right) +\dfrac{1}{n}\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) \dot{x}\left( \psi (t)\right) ,\quad \forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(17)
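Local strategy proofness of (17) can be illustrated numerically: holding the others’ reports fixed, player i’s instantaneous payoff (up to the positive factor \(\partial u_{i}/\partial y_{i}\)) is maximized at the truthful report \(\psi _{i}=\pi _{i}\). All numbers below are arbitrary.

```python
# Numerical illustration of local strategy proofness of (17): player i's
# instantaneous payoff (up to the positive factor du_i/dy_i) is maximized
# at the truthful report psi_i = pi_i, whatever the others announce.
# All parameter values are arbitrary.

n, gamma, a_speed = 4, 3.0, 1.0
pi_i = 1.2                             # player i's true MRS
others = [0.3, 2.1, 0.9]               # the other players' reports

def payoff(psi_i):
    S = psi_i + sum(others) - gamma    # sum of reports minus the MRT
    xdot = a_speed * S * abs(S) ** (n - 2)
    ydot_i = -psi_i * xdot + (1.0 / n) * S * xdot
    return pi_i * xdot + ydot_i        # payoff up to du_i/dy_i > 0

grid = [k / 1000 for k in range(4001)]          # candidate reports in [0, 4]
best = max(grid, key=payoff)
print(best)                            # ≈ pi_i = 1.2
```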

Remark 3

We termed our procedure the “Generalized MDP Procedure” in our 1981 paper. Certainly, the public good adjustment function was generalized to include the MDP Procedure, whereas the distributional vector had to be fixed at the specific value 1/n. Hereafter, the above procedure is called the Fujigaki–Sato Procedure, as named by Laffont and Rochet (1985). The genuine Generalized MDP Procedure is presented below.

The Fujigaki–Sato Procedure for optimally providing the public good has the following properties:
(i)

    The Procedure monotonically converges to an individually rational Pareto optimum, even if agents do not report their true valuation, i.e., MRS for the public good.

     
(ii)

    Revealing his/her true MRS is always a dominant strategy for each myopically behaving agent.

     
(iii)

The Procedure generates trajectories in the feasible allocation space similar to those of the MDP Procedure with a uniform distribution of the instantaneous surplus arising at each iteration, which leaves the planning center no influence on the final plan. Hence, the Procedure is nonneutral.

     

Remark 4

Property (ii) is an important one that cannot be enjoyed by the original MDP Procedure except when there are only two agents with equal surplus shares, i.e., \(\delta _{i}=1/2\), \(i=1,2\). [See Roberts (1979a, b) for these properties.] The nonneutrality result in (iii) can be remedied by designing the Generalized MDP Procedure below.

3.4 Best reply strategy and the Nash equilibrium strategy

In the local incentive game, the planning center is assumed to come to know the true information of individuals, since the Fujigaki–Sato Procedure induces them to reveal it. Its operation does not even require truthfulness of each player to be a Nash equilibrium strategy; it needs only aggregate correct revelation at a Nash equilibrium, as was verified by Sato (1983). It is easily seen from the above discussion that the Fujigaki–Sato Procedure is not neutral at all, which means that local strategy proofness impedes the attainment of neutrality. Hence, Sato (1983) proposed another version of neutrality and Condition Aggregate Correct Revelation (ACR), which is much weaker than LSP. In order to introduce Condition ACR, I need the best reply strategy \(\phi _{i}\) given by
$$\begin{aligned} \phi _{i}=\frac{1}{n(\delta _{i}-1)}\left\{ (1-n)\pi _{i}+(1-n\delta _{i})\left( \sum _{j\ne i}\psi _{j}-\gamma \right) \right\} ,\quad \forall i\in \mathbf {N}. \end{aligned}$$
(18)
Let \(\alpha ^{\prime }=(\alpha _{1},\ldots ,\alpha _{n})\) and \(\alpha _{i}=(1-n\delta _{i})/(n-1),\) then one observes
$$\begin{aligned} \left( \left[ \begin{array}{ccccc} 1 &{} \ldots &{} 0 &{} \ldots &{} 0 \\ \vdots &{} &{} \vdots &{} &{} \vdots \\ 0 &{} \ldots &{} 1 &{} \ldots &{} 0 \\ \vdots &{} &{} \vdots &{} &{} \vdots \\ 0 &{} \ldots &{} 0 &{} \ldots &{} 1\end{array}\right] +\left[ \begin{array}{ccc} \alpha _{1} &{} \ldots &{} \alpha _{1} \\ \vdots &{} &{} \vdots \\ \alpha _{i} &{} \ldots &{} \alpha _{i} \\ \vdots &{} &{} \vdots \\ \alpha _{n} &{} \ldots &{} \alpha _{n}\end{array}\right] \right) \left[ \begin{array}{c} \psi _{1} \\ \vdots \\ \psi _{i} \\ \vdots \\ \psi _{n}\end{array}\right] =\left[ \begin{array}{c} \pi _{1} \\ \vdots \\ \pi _{i} \\ \vdots \\ \pi _{n}\end{array}\right] +\gamma \left[ \begin{array}{c} \alpha _{1} \\ \vdots \\ \alpha _{i} \\ \vdots \\ \alpha _{n}\end{array}\right] . \end{aligned}$$
(19)
Let us solve this system of n linear equations to get the Nash equilibrium vector \(\Phi =\{\phi _{i}\}\). First of all, noting that \(\sum _{i\in \mathbf {N}}\alpha _{i}=0\), the inverse matrix is computed as:
$$\begin{aligned} (I+A)^{-1}=I-A/\left( 1+\sum _{i\in \mathbf {N}}\alpha _{i}\right) =I-A. \end{aligned}$$
(20)
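Equation (20) rests on the fact that the \(\alpha _{i}\) sum to zero, so that \(A^{2}=0\) and hence \((I+A)(I-A)=I\); a quick numpy check with arbitrary \(\delta \):

```python
# Check of Eq. (20): with alpha_i = (1 - n*delta_i)/(n - 1) and
# sum_i delta_i = 1, the alpha_i sum to zero, so the rank-one matrix A
# (row i is alpha_i repeated) satisfies A @ A = 0, hence (I+A)^(-1) = I - A.
# The delta values are arbitrary.
import numpy as np

delta = np.array([0.1, 0.25, 0.25, 0.4])       # sums to 1
n = len(delta)
alpha = (1 - n * delta) / (n - 1)

A = np.tile(alpha.reshape(-1, 1), (1, n))      # A[i, j] = alpha_i
I = np.eye(n)

print(abs(alpha.sum()) < 1e-12)                # True: alphas sum to zero
print(np.allclose((I + A) @ (I - A), I))       # True: (I+A)^(-1) = I - A
```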
The Nash equilibrium vector \(\Phi \) as a function of \(\pi \) reads
$$\begin{aligned} \Phi&= {} (I+A)^{-1}(\pi +\alpha \gamma )=(I-A)(\pi +\alpha \gamma )\nonumber \\&= \pi +\alpha \gamma -\left( \sum _{j\in \mathbf {N}}\pi _{j}+\gamma \sum _{j\in \mathbf {N}}\alpha _{j}\right) \alpha \nonumber \\&= \pi -\left( \sum _{j\in \mathbf {N}}\pi _{j}-\gamma \right) \alpha . \end{aligned}$$
(21)
Hence, the Nash equilibrium strategy for player i is
$$\begin{aligned} \phi _{i}=\pi _{i}-\frac{1-n\delta _{i}}{n-1}\left( \sum _{j\in \mathbf {N}}\pi _{j}-\gamma \right) . \end{aligned}$$
(22)
It is easily seen that
$$\begin{aligned} \phi _{i}=\pi _{i}\quad if\,\delta _{i}=1/n \end{aligned}$$
(23)
which is a requirement of LSP procedures.
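The Nash equilibrium strategies (22) can be checked numerically: they satisfy aggregate correct revelation, \(\sum _{i}\phi _{i}=\sum _{i}\pi _{i}\), for any \(\delta \) with \(\sum _{i}\delta _{i}=1\), and they reduce to truth-telling when \(\delta _{i}=1/n\). The numbers below are arbitrary.

```python
# Numerical check of Eq. (22): the Nash strategies
#   phi_i = pi_i - (1 - n*delta_i)/(n - 1) * (sum_j pi_j - gamma)
# satisfy aggregate correct revelation, sum_i phi_i = sum_i pi_i, and
# collapse to truth-telling when delta_i = 1/n.  All numbers are arbitrary.

pi = [0.5, 1.0, 1.5, 2.0]          # true MRSs
delta = [0.1, 0.2, 0.3, 0.4]       # distributional coefficients, sum to 1
gamma = 3.0
n = len(pi)

S = sum(pi) - gamma                # aggregate discrepancy at truth
phi = [p - (1 - n * d) / (n - 1) * S for p, d in zip(pi, delta)]

print(abs(sum(phi) - sum(pi)) < 1e-9)          # True: ACR holds
phi_uniform = [p - (1 - n / n) / (n - 1) * S for p in pi]
print(phi_uniform == pi)                       # True: truthful if delta_i = 1/n
```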

3.5 Aggregate correct revelation of preferences

Let \(\pi =\left( \pi _{1},\ldots ,\pi _{n}\right) \) be the vector of true MRSs for the public good and \(\mathbf {\Pi }\) be its set. Sato (1983) introduced Condition Aggregate Correct Revelation (ACR), which insists that the sum of the Nash equilibrium strategies, \(\phi _{i},\) \(\forall i\in \mathbf {N}\), always coincide with the sum of the correctly revealed MRSs. Clearly, ACR only claims truthfulness in the aggregate.

Condition ACR. Aggregate Correct Revelation:
$$\begin{aligned} \sum _{i\in \mathbf {N}}\phi _{i}\left( \pi (t)\right) =\sum _{i\in \mathbf {N}}\pi _{i}(t),\,\forall \pi (t)\in \mathbf {\Pi },\,\forall t\in [0,\infty ). \end{aligned}$$
(24)
To prove the main theorem, I also needed the following two conditions. Let \(\rho :\mathbf {R}_{+}^{n}\rightarrow \mathbf {R}_{+}^{n}\) be a permutation function and \(T_{i}\left( \psi (t)\right) \) be the transfer in private good to agent i. Condition Transfer Anonymity says that agent i’s transfer in private good is invariant under permutation of its arguments; i.e., the order of strategies does not affect the value of \(T_{i}\left( \psi (t)\right) ,\) \(\forall i\in \mathbf {N}\). Sato (1983) proved that \(T_{i}\left( \psi (t)\right) =T_{i}\left( \sum _{i\in \mathbf {N}}\psi _{i}(t)-\gamma (t)\right) \), which is an example of such transfer functions. Condition Transfer Neutrality states that any allocation in \(\mathbf {P}_{0}\) is attainable by means of the choice of transfers: the planning center can attain neutrality by choosing \(T_{i}\left( \psi (t)\right) ,\) \(\forall i\in \mathbf {N}\).
Condition TA. Transfer Anonymity
$$\begin{aligned} T_{i}\left( \psi (t)\right) =T_{i}\left( \rho \left( \psi (t)\right) \right) ,\,\forall \psi \in \mathbf {\Psi },\,\forall i\in \mathbf {N},\,\forall t\in [0,\infty ). \end{aligned}$$
(25)
Condition TN. Transfer Neutrality
$$\begin{aligned} \exists T\in \mathbf {\Omega },\,\exists z \left( t,T\right) \in \mathbf {Z},\,\forall z\left( \cdot \right) \in \mathbf {Z},\,\forall z^{T}= \lim _{t\rightarrow \infty }z \left( t,T\right) \end{aligned}$$
(26)
where \(T=\left( T_{1},\ldots ,T_{n}\right) \) is a vector of transfer functions and \(\mathbf {\Omega }\) is its set.

The theorems are now enumerated with their proofs.

Theorem 1

The Generalized MDP Procedures fulfill Conditions ACR, F, M, TA and TN. Conversely, any planning process satisfying these conditions is characterized as:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=a\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \left| \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right| ^{n-2} \\ \\ \dot{y}_{i}=-\psi _{i}\dot{x}+\Gamma _{i}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) ,\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$

Proof

Consider the following process:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}(\psi )=\Gamma (\Theta ) \\ \\ \dot{y}_{i}(\psi )=-\psi _{i}\Gamma (\Theta )+T_{i}(\psi ),\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(27)
where \(\Theta \equiv \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \).
i) Condition F gives
$$\begin{aligned} \sum _{i\in \mathbf {N}}T_{i}(\psi )=\Theta \Gamma (\Theta ). \end{aligned}$$
(28)
Differentiating this with respect to \(\psi _{j}\) yields
$$\begin{aligned} \sum _{i\in \mathbf {N}}T_{ij}(\psi )=\Gamma (\Theta )+\Theta \Gamma _{j}(\Theta ),\,\forall j\in \mathbf {N} \end{aligned}$$
(29)
where \(T_{ij}(\psi )=\partial T_{i}(\psi )/\partial \psi _{j}\) and \(\Gamma _{j}(\Theta )=\partial \Gamma (\Theta )/\partial \psi _{j}.\)
The payoff for each individual i is
$$\begin{aligned} \dot{u}_{i}\left( \psi \right) =\frac{\partial u_{i}}{\partial y_{i}}\{(\pi _{i}-\psi _{i})\Gamma (\Theta )+T_{i}\left( \psi \right) \}. \end{aligned}$$
(30)
Maximizing this with respect to \(\psi _{i}\) gives
$$\begin{aligned} -\Gamma (\Theta )+(\pi _{i}-\psi _{i})\Gamma _{i}(\Theta )+T_{ii}\left( \psi \right) =0. \end{aligned}$$
(31)
The vector \(\psi =(\psi _{1},\ldots ,\psi _{n})\) satisfying the system of equations (31) is a Nash equilibrium, defined as a function of \(\pi . \) Condition ACR requires that \(\sum _{i\in \mathbf {N}}\psi _{i}\left( \pi \right) =\sum _{i\in \mathbf {N}}\pi _{i}\) hold for any \(\pi ,\) so summing (31) over i yields
$$\begin{aligned} \sum _{i\in \mathbf {N}}T_{ii}(\psi )=n\Gamma (\Theta ),\,\forall \psi \in \mathbf {\Psi }, \end{aligned}$$
(32)
while Condition TA implies
$$\begin{aligned} T_{ij}(\psi )=T_{i\rho (j)}(\rho (\psi )). \end{aligned}$$
(33)
Further, due to TA
$$\begin{aligned} \sum _{i\in \mathbf {N}}T_{ii}(\rho ^{-1}(\psi ))=\sum _{i\in \mathbf {N}}T_{i\rho (i)}(\rho \circ \rho ^{-1}(\psi ))=\sum _{i\in \mathbf {N}}T_{i\rho (i)}(\psi ),\quad\forall \rho ^{-1},\,\forall \psi \in \mathbf {\Psi }. \end{aligned}$$
(34)
where \(\rho ^{-1}\) signifies the inverse of \(\rho .\) On using (32) and (34)
$$\begin{aligned} \sum _{i\in \mathbf {N}}T_{i\rho (i)}(\psi )=n\Gamma (\Theta ),\,\forall \rho ,\,\forall \psi \in \mathbf {\Psi }. \end{aligned}$$
(35)
Consider the cyclic permutation \(\rho (i)=i+k,\) \(k=0,1,\ldots ,n-1\) \([\rho (i)=i+k-n\) when \(i+k>n].\) Then, Eq. (35) reads
$$\begin{aligned}&\begin{array}{cc} T_{11}+T_{22}+\cdots +T_{nn}=n\Gamma (\Theta ),&\qquad if\,k=0\end{array} \nonumber \\&\begin{array}{cc} T_{12}+T_{23}+\cdots +T_{n1}=n\Gamma (\Theta ),&\qquad if\,k=1\end{array} \nonumber \\&................................................... \nonumber \\&\begin{array}{cc} T_{1n}+T_{21}+\cdots +T_{n(n-1)}=n\Gamma (\Theta ),&\quad if \,k=n-1.\end{array} \end{aligned}$$
(36)
Summing all these equations, we get
$$\begin{aligned} \sum _{i\in \mathbf {N}}\sum _{j\in \mathbf {N}}T_{ij}(\psi )=n^{2}\Gamma \left( \Theta \right) \end{aligned}$$
(37)
and Eq. (29) implies
$$\begin{aligned} \sum _{i\in \mathbf {N}}\sum _{j\in \mathbf {N}}T_{ij}(\psi )=n\{\Gamma \left( \Theta \right) +\Theta \Gamma _{j}(\Theta )\}. \end{aligned}$$
(38)
Combining (37) and (38) gives
$$\begin{aligned} -\Theta \Gamma _{j}(\Theta )+(n-1)\Gamma (\Theta )=0. \end{aligned}$$
(39)
Rearranging terms and using \(\partial \Theta /\partial \psi _{j}=1\) yields
$$\begin{aligned} \frac{d\Gamma (\Theta )/d\Theta }{\Gamma (\Theta )}=\frac{n-1}{\Theta }. \end{aligned}$$
(40)
Solving this equation for \(\Gamma (\Theta )\), we obtain
$$\begin{aligned} \Gamma (\Theta )=a\Theta ^{n-1},\,a\in \mathbf {R}_{++}. \end{aligned}$$
(41)
Since \(\Gamma (\Theta )\) is sign-preserving from Eq. (28), I finally have the desired conclusion:
$$\begin{aligned} \Gamma (\Theta )=a\Theta |\Theta |^{n-2},\,a\in \mathbf {R}_{++}. \end{aligned}$$
(42)
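As a quick numerical sanity check (illustrative only, not part of the proof), one can verify that the sign-preserving solution \(\Gamma (\Theta )=a\Theta |\Theta |^{n-2}\) indeed satisfies the differential equation (40) away from \(\Theta =0\); the constants \(a=2\) and \(n=4\) below are arbitrary choices:

```python
# Numerical check that Gamma(Theta) = a * Theta * |Theta|**(n-2)
# satisfies Gamma'(Theta) / Gamma(Theta) = (n-1)/Theta for Theta != 0.
def gamma(theta, a=2.0, n=4):
    return a * theta * abs(theta) ** (n - 2)

def check(theta, a=2.0, n=4, h=1e-6):
    # central-difference derivative of Gamma
    d = (gamma(theta + h, a, n) - gamma(theta - h, a, n)) / (2 * h)
    return d / gamma(theta, a, n), (n - 1) / theta

for theta in (0.5, 1.7, -2.3):
    lhs, rhs = check(theta)
    assert abs(lhs - rhs) < 1e-4, (theta, lhs, rhs)
```

The check passes for positive and negative \(\Theta \), confirming that the absolute-value form extends the solution \(a\Theta ^{n-1}\) across the origin while preserving the sign of \(\Theta \).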
ii) In view of (32) and (39), Eqs. (29) and (35) can be rewritten as:
$$\begin{aligned} \sum _{i\in \mathbf {N}}\tau _{ij}=0,\,\forall j\in \mathbf {N} \end{aligned}$$
(43)
and
$$\begin{aligned} \sum _{i\in \mathbf {N}}\tau _{i\rho (i)}=0,\,\forall \rho \end{aligned}$$
(44)
where \(\tau _{ij}=T_{ij}(\psi )-T_{ii}(\psi ).\)
Let me first show that \(\tau _{ij}\) satisfying (43) and (44) are all zero, that is
$$\begin{aligned} \tau _{ij}=0,\,\forall i,j\in \mathbf {N}. \end{aligned}$$
(45)
By definition
$$\begin{aligned} \tau _{ii}=0,\,\forall i\in \mathbf {N} \end{aligned}$$
(46)
and considering a permutation \(\rho \) which, keeping all other elements fixed, interchanges any pair (i, j), I obtain from (44)
$$\begin{aligned} \tau _{ij}+\tau _{ji}=0,\,\forall i,j\in \mathbf {N}. \end{aligned}$$
(47)
Let us prove by induction that (43) and (44) imply (45).
Case I: \(n=2.\) Since
$$\begin{aligned} \tau _{11}=\tau _{22}=0 \end{aligned}$$
(48)
we get from (43)
$$\begin{aligned} \tau _{12}=\tau _{21}=0. \end{aligned}$$
(49)
Case II: \(n=k.\) Assuming that (43) and (44) imply (45) for \(n=k,\) we verify that the implication also holds for \(n=k+1.\)
Denote
$$\begin{aligned} \sigma _{ij}=\tau _{ij}+\frac{1}{k}\tau _{k+1,\,j}. \end{aligned}$$
(50)
By virtue of (43) and (44) for \(n=k+1,\) we have for any permutation \(\rho \) such that \(\rho (k+1)=k+1\)
$$\begin{aligned} \sum _{i\mathbf {=}1}^{k}\sigma _{ij}=\sum _{i=1}^{k+1}\tau _{ij}=0,\,\forall j=1,\ldots ,k+1, \end{aligned}$$
(51)
and
$$\begin{aligned} \sum _{i=1}^{k}\sigma _{i\rho (i)} &= \sum _{i=1}^{k}\tau _{i\rho (i)}+\frac{1}{k}\sum _{i=1}^{k}\tau _{k+1,\,\rho (i)} \\& =-\tau _{k+1,\,\rho (k+1)}-\frac{1}{k}\sum _{i=1}^{k}\tau _{\rho (i),\,k+1}\nonumber \\&= -\tau _{k+1,\,k+1}-\frac{1}{k}\tau _{k+1,\,k+1}=0. \end{aligned}$$
(52)
Hence, by assumption
$$\begin{aligned} \sigma _{ij}=0,\,\forall i,j=1,\ldots ,k. \end{aligned}$$
(53)
Particularly, we have
$$\begin{aligned} \sigma _{jj}=\frac{1}{k}\tau _{k+1,\,j}=0,\,\forall j=1,\ldots ,k, \end{aligned}$$
(54)
and thus
$$\begin{aligned} \tau _{ij}=0,\,\forall i,j=1,\ldots ,k+1. \end{aligned}$$
(55)
In conclusion, Eq. (45) implies the following:
$$\begin{aligned} T_{i1}=T_{i2}=\cdots =T_{in},\,\forall i\in \mathbf {N}, \end{aligned}$$
(56)
which means that \(T_{i}\) depends on \(\psi \) only through \(\sum _{j\in \mathbf {N}}\psi _{j}\), i.e., \(T_{i}\) is constant as long as \(\sum _{j\in \mathbf {N}}\psi _{j}\) is constant. \(\square \)
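The induction argument above can also be corroborated computationally: stacking the linear constraints (43), (44) and \(\tau _{ii}=0\) into a matrix and computing its null space shows that only \(\tau =0\) survives. A minimal sketch (assuming numpy is available; the helper name tau_nullspace_dim is mine):

```python
import itertools
import numpy as np

def tau_nullspace_dim(n):
    rows = []
    idx = lambda i, j: i * n + j          # flatten tau_{ij} into a vector
    for i in range(n):                    # tau_ii = 0 by definition
        r = np.zeros(n * n); r[idx(i, i)] = 1.0; rows.append(r)
    for j in range(n):                    # eq. (43): column sums vanish
        r = np.zeros(n * n)
        for i in range(n):
            r[idx(i, j)] = 1.0
        rows.append(r)
    for perm in itertools.permutations(range(n)):   # eq. (44)
        r = np.zeros(n * n)
        for i in range(n):
            r[idx(i, perm[i])] += 1.0
        rows.append(r)
    A = np.vstack(rows)
    # dimension of the solution space of A tau = 0
    return n * n - np.linalg.matrix_rank(A)

assert tau_nullspace_dim(3) == 0
assert tau_nullspace_dim(4) == 0
```

A null space of dimension zero means that (43) and (44) admit no solution other than \(\tau _{ij}=0\) for all i, j, in accordance with (45).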

Theorem 2

Truthful revelation of preferences in any Generalized MDP Procedure is a minimax strategy for any i \(\in \mathbf {N}.\) It is the only minimax strategy for any i \(\in \mathbf {N},\) when \(x>0.\)

Proof

Differentiating player i’s payoff \(\dot{u}_{i}\) with respect to \(\psi _{j}\) yields
$$\begin{aligned} \frac{\partial \dot{u}_{i}\left( \psi \right) }{\partial \psi _{j}}=\frac{\partial u_{i}}{\partial y_{i}}\{\pi _{i}-\psi _{i}+2\delta _{i}\dot{x}\left( \psi \right) \}=0. \end{aligned}$$
(57)
Hence,
$$\begin{aligned} \psi _{j}=\frac{\psi _{i}-\pi _{i}}{2\delta _{i}}+\gamma -\sum _{k\ne j}\psi _{k}. \end{aligned}$$
(58)
When player j uses this strategy, \(\dot{u}_{i}\) is minimized as follows.
$$\begin{aligned} \dot{u}_{i}\left( \psi \right)&= \frac{\partial u_{i}}{\partial y_{i}}\left\{ \left( \pi _{i}-\psi _{i}\right) \left( \frac{\psi _{i}-\pi _{i}}{2\delta _{i}}\right) +\delta _{i}\left( \frac{\psi _{i}-\pi _{i}}{2\delta _{i}}\right) ^{2}\right\} \nonumber \\&= -\frac{\partial u_{i}}{\partial y_{i}}\left\{ \frac{(\psi _{i}-\pi _{i})^{2}}{4\delta _{i}}\right\} \le 0. \end{aligned}$$
(59)
Maximizing \(\dot{u}_{i}\left( \psi \right) \) requires that \(\psi _{i}=\pi _{i},\) \(\forall i\in \mathbf {N},\) \(\forall t\in [0,\infty ),\) which is a minimax strategy for \(x\ge 0.\) When \(x>0,\) it is the only minimax strategy. \(\square \)
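The minimax computation in the proof can be checked numerically. The sketch below (with illustrative values \(\pi _{i}=5\) and \(\delta _{i}=0.4\) of my own choosing) evaluates player i's worst-case payoff when the opponent plays the minimizing strategy (58): the maximin value is zero, attained only by truthful reporting, matching eq. (59).

```python
# Worst-case payoff to player i in the local incentive game:
# u_dot_i(q) = (pi_i - psi_i) * q + delta_i * q**2, where q = x_dot
# is driven down to the minimizer q = (psi_i - pi_i) / (2 * delta_i)
# by the opponent's strategy (58).
def worst_case(pi_i, psi_i, delta_i):
    q = (psi_i - pi_i) / (2 * delta_i)   # opponent's minimizing choice
    return (pi_i - psi_i) * q + delta_i * q ** 2

pi_i, delta_i = 5.0, 0.4
# Truthful reporting attains the maximin value of zero ...
assert worst_case(pi_i, pi_i, delta_i) == 0.0
# ... while any misreport yields a strictly negative worst case.
for psi_i in (3.0, 4.9, 6.2):
    assert worst_case(pi_i, psi_i, delta_i) < 0.0
# Cross-check against the closed form of eq. (59).
assert abs(worst_case(pi_i, 6.2, delta_i) + (6.2 - pi_i) ** 2 / (4 * delta_i)) < 1e-12
```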

Theorem 3

\(\phi _{i}=\pi _{i}\) holds for any i \(\in \mathbf {N}\) at the equilibrium of any Generalized MDP Procedure.

Proof

Since \(\dot{x}=0\) at the equilibrium of the Procedure, the second term of the following equation disappears.
$$\begin{aligned} \phi _{i}=\pi _{i}-\frac{1-n\delta _{i}}{n-1}\left( \sum _{j\in \mathbf {N}}\pi _{j}-\gamma \right) ,\quad\forall i\in \mathbf {N}. \end{aligned}$$
(60)
Thus, the statement of the Theorem follows. \(\square \)

Theorem 4

There exists a vector of transfers T and a trajectory z \(\left( \cdot \right) :[0,\infty )\rightarrow \mathbf {Z} \) of the Generalized MDP Procedures such that \(u_{i}\left( z^{*}\right) =\lim _{t\rightarrow \infty }u_{i}\left( x\left( t\right) ,y_{i}\left( t\right) \right) ,\forall i\in \mathbf {N},\) for every individually rational Pareto optimum \(z^{*}.\)

Proof

Due to Conditions M and PE, the stationary point of the differential equations of the Fujigaki–Sato Procedure is clearly individually rational Pareto optimal. To prove stability, take the sum of the utility functions:
$$\begin{aligned} L=\sum _{i\in \mathbf {N}}u_{i}(x,y_{i}). \end{aligned}$$
(61)
Because the set of feasible allocations \(\mathbf {Z}\) is compact, Condition F means that the solution path \((x,y_{i})\) is bounded for any \(i\in \mathbf {N}. \) Hence, by continuity of the utility functions, L is also bounded. Furthermore, L is monotonically nondecreasing along the solution path. In fact, since \(T_{i}\ge 0,\) \(\forall i\in \mathbf {N},\) from Condition M
$$\begin{aligned} \dot{L}=\sum _{i\in \mathbf {N}}\dot{u}_{i}(x,y_{i})=\sum _{i\in \mathbf {N}}\left( \frac{\partial u_{i}}{\partial x}\dot{x}+\frac{\partial u_{i}}{\partial y_{i}}\dot{y}_{i}\right) =\sum _{i\in \mathbf {N}}\left( \frac{\partial u_{i}}{\partial y_{i}}\right) T_{i}\ge 0. \end{aligned}$$
(62)
Therefore, L is a suitable Lyapunov function, and thus the procedure fulfilling Conditions F and M is quasi-stable; i.e., any limit point of the trajectory is a stationary point. Owing to strict concavity of the utility functions, we can conclude that the Fujigaki–Sato Procedure monotonically converges to a unique stationary point and that it is stable. \(\square \)
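The stability argument can be illustrated by an Euler discretization of the procedure. The simulation below is a sketch under hypothetical primitives of my own choosing: log-linear utilities \(u_{i}=\theta _{i}\ln x+y_{i}\) (so that \(\pi _{i}=\theta _{i}/x\)), transfers \(T_{i}=\delta _{i}\Theta \dot{x}\) with \(\sum _{i}\delta _{i}=1\), and truthful reporting throughout. The Lyapunov function L rises monotonically and x approaches the Samuelson level \(x^{*}=\sum _{i}\theta _{i}/\gamma \).

```python
import math

# Euler-discretized Generalized MDP Procedure with hypothetical
# log utilities u_i = theta_i * ln(x) + y_i, so pi_i = theta_i / x.
theta = [2.0, 3.0, 5.0]           # hypothetical marginal-utility weights
delta = [1 / 3, 1 / 3, 1 / 3]     # transfer shares, sum to one
gamma, a, n = 2.0, 1.0, 3
x, y = 1.0, [0.0, 0.0, 0.0]
x_star = sum(theta) / gamma       # Samuelson level, here 5.0

def L(x, y):
    return sum(t * math.log(x) for t in theta) + sum(y)

dt, prev_L = 1e-3, L(x, y)
for _ in range(200_000):
    pi = [t / x for t in theta]                    # true MRSs at current x
    Theta = sum(pi) - gamma
    x_dot = a * Theta * abs(Theta) ** (n - 2)
    y = [yi + (-pi_i * x_dot + d * Theta * x_dot) * dt
         for yi, pi_i, d in zip(y, pi, delta)]
    x += x_dot * dt
    cur_L = L(x, y)
    assert cur_L >= prev_L - 1e-9                  # monotonicity, eq. (62)
    prev_L = cur_L

assert abs(x - x_star) < 0.1                       # quasi-stability
```

Note the "turnpike" behavior: the step \(\dot{x}=\Theta |\Theta |^{n-2}\) is large far from the stationary point and slows sharply near it.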

Keeping the same nonlinear public good decision function as derived from Condition LSP, Sato (1983) stated the above characterization theorem. In the sequel, I use the Generalized MDP Procedure with \(T_{i}\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) =\delta _{i}\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \dot{x}\left( \psi \right) .\) By a pertinent choice of \(T_{i}\left( \cdot \right) ,\) the family of Generalized MDP Procedures includes the MDP Procedure and the Fujigaki–Sato Procedure as special members.

Remark 5

Champsaur and Rochet (1983) gave a systematic study on the family of planning procedures that are asymptotically efficient and locally strategy proof. Now we know that the family of the LSP procedures is large enough; Rochet’s (1982) classification includes the Bowen Procedure, the Bowen–Laffont Procedure, the Champsaur–Rochet Procedure, the Fujigaki–Sato Procedure, the Generalized Wicksell Procedure and the Laffont–Maskin Procedure as special members.6

4 The structure of locally strategy proof procedures

4.1 The MDP Procedure versus the Fujigaki–Sato Procedure

The existence of the Fujigaki–Sato Procedure is assured by the integrability and differentiability of the adjustment functions which define the procedure. The MDP Procedure has a linear adjustment function, so its adjustment speed for the public good is constant, whereas the Fujigaki–Sato Procedure has a nonlinear adjustment function of the "turnpike" kind: far from the origin the procedure moves quickly, while its adjustment speed for the public good falls in the neighborhood of the origin. This structural difference between the two procedures creates a sharp contrast in the strength of their incentive compatibility, and it stems from the integrability and differentiability of the adjustment function of the public good.7

Let me show the incentive property of the Fujigaki–Sato Procedure.

Theorem 5

The Fujigaki–Sato Procedure cannot be manipulated by the players' strategic behavior when there are three or more players.

Sketch of the Proof

Let me show that the Fujigaki–Sato Procedure cannot be manipulated by players in the local incentive game associated with the procedure when there are three agents. Under the truthful revelation of preference, as a payoff to player i, the time derivative of utility is represented by
$$\begin{aligned} \dot{u}_{i}=\delta _{i}\left( \sum _{i\in \mathbf {N}}\pi _{i}-\gamma \right) ^{2}\ge 0. \end{aligned}$$
(63)
Let \(\dot{u}_{3}^{r}\) signify the payoff given by underreporting of preference on the part of player 3 with \(\pi _{3}>\psi _{3}.\) Define \(\pi _{3}=\psi _{3}+\varepsilon ,\) \(\varepsilon >0,\) whereas it is assumed that \(\psi _{1}=\pi _{1}\) and \(\psi _{2}=\pi _{2}\). Then we have the payoffs with underreporting and true revelation for the public good.
$$\begin{aligned} \dot{u}_{3}^{r}= & {} \varepsilon \left( \sum _{j\in \mathbf {N}}\pi _{j}-\varepsilon -\gamma \right) +\delta _{3}\left( \sum _{j\in \mathbf {N}}\pi _{j}-\varepsilon -\gamma \right) ^{2}\ge 0 \end{aligned}$$
(64)
and
$$\begin{aligned} \dot{u}_{3}=\delta _{3}\left( \sum _{j\in \mathbf {N}}\pi _{j}-\gamma \right) \dot{x}\ge 0. \end{aligned}$$
(65)
If \(\delta _{3}=1/3,\) then
$$\begin{aligned} \dot{u}_{3}^{r}-\dot{u}_{3}=\varepsilon (1-2\delta _{3})\left( \sum _{j\in \mathbf {N}}\pi _{j}-\gamma \right) -\varepsilon ^{2}<0. \end{aligned}$$
(66)
Thus, player 3 cannot get more payoff by falsifying his/her preference for the public good if \(\delta _{3}=1/3\).

Let me show a numerical example. Specify the quasi-linear utility functions as \(u_{1}=2x+y_{1}\), \(u_{2}=3x+y_{2}\) and \(u_{3}=5x+y_{3}.\) Then, \(\partial u_{i}/\partial y_{i}=1\) for \(i=1,2,3,\) with \(\pi _{1}=\psi _{1}=2\) and \(\pi _{2}=\psi _{2}=3\). Suppose that the public good is produced as \(g(x)=3x\) with \(\gamma =3.\) Suppose further that individual 3 underreports his preference by announcing \(\psi _{3}=1\) instead of his true MRS, \(\pi _{3}=5\).

Assuming \(a=1,\) the Generalized MDP Procedure with three persons reads
$$\begin{aligned} \left\{ \begin{array}{c} \dot{x}=\left( \sum _{j=1}^{3}\psi _{j}-\gamma \right) \left| \sum _{j=1}^{3}\psi _{j}-\gamma \right| \\ \\ \dot{y}_{i}=-\psi _{i}\dot{x}+\dfrac{1}{3}\left( \sum _{j=1}^{3}\psi _{j}-\gamma \right) \dot{x}.\end{array} \right. \end{aligned}$$
(67)
With the above numerical example, this Procedure yields \(\dot{u}_{3}^{r}=45<114.33=\) \(\dot{u}_{3}.\) Similarly, \(\dot{u}_{3}^{\eta }=81<114.33=\) \(\dot{u}_{3},\) where \(\eta \) denotes "overreport," i.e., he/she reports \(\psi _{3}=7\) instead of his true value, 5. Consequently, free-riding individual 3 loses payoff in both cases of underreporting and overreporting. The Fujigaki–Sato Procedure gives the payoff such that
$$\begin{aligned} \dot{u}_{i}=(\pi _{i}-\psi _{i})\dot{x}+\frac{1}{3}\left( \sum _{j=1}^{3}\psi _{j}-\gamma \right) ^{2}\left| \sum _{j=1}^{3}\psi _{j}-\gamma \right| \end{aligned}$$
(68)
where \(\pi _{i}=\psi _{i}\) assures \(\dot{u}_{i}\ge 0,\) \(\forall i=1,2,3;\) thus, the Fujigaki–Sato Procedure is locally strategy proof for three persons. This affirmative result extends to any number of individuals, a property not enjoyed by the original MDP Procedure. \(\square \)
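The numbers in the example can be reproduced directly. A short computation mirroring eq. (67), with n = 3, a = 1 and \(\gamma =3\), confirms \(\dot{u}_{3}^{r}=45\), \(\dot{u}_{3}^{\eta }=81\) and the truthful payoff \(343/3\approx 114.33\):

```python
# Player 3's payoff in the local game of procedure (67):
# players 1 and 2 report truthfully, pi = (2, 3, 5), gamma = 3.
def payoff_3(psi3, pi=(2.0, 3.0, 5.0), gamma=3.0):
    psi = (pi[0], pi[1], psi3)
    Theta = sum(psi) - gamma
    x_dot = Theta * abs(Theta)              # n - 2 = 1, a = 1
    return (pi[2] - psi3) * x_dot + (Theta / 3.0) * x_dot

truthful = payoff_3(5.0)    # 343/3, approximately 114.33
under = payoff_3(1.0)       # underreport
over = payoff_3(7.0)        # overreport
assert abs(truthful - 343 / 3) < 1e-9
assert under == 45.0 and over == 81.0
assert under < truthful and over < truthful
```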

4.2 A characterization theorem with transfer independence

Next, let me prove the following theorem by making use of a new axiom. It is a modified version of the property introduced by Green and Laffont (1977, 1979), requiring that the marginal change in transfer induced by a marginal change in one's own strategy be equal across individuals. This is an important condition connected with equity.

Condition TI. Transfer Independence:
$$\begin{aligned} \frac{\partial T_{i}\left( \psi \right) }{\partial \psi _{i}}=\frac{\partial T_{j}\left( \psi \right) }{\partial \psi _{j}},\quad\forall i,j\in \mathbf {N}. \end{aligned}$$
(69)
Then, the following characterization theorem holds.

Theorem 6

Any planning procedure satisfying Conditions ACR and TI is characterized as:
$$\begin{aligned} \left\{ \begin{array}{c} \dot{x}=a\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \left| \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right| ^{n-2},\quad a\in \mathbf {R}_{++} \\ \\ \dot{y}_{i}=\int \Gamma \left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) d\psi _{i}+H_{i}(\psi _{-i}),\quad \forall i\in \mathbf {N}\end{array}\right. \end{aligned}$$
where \(H_{i}(\psi _{-i})\) is an arbitrary function independent of \(\psi _{i}\).

Proof

Consider the process
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=\Gamma (\Theta ) \\ \\ \dot{y}_{i}=-\psi _{i}\Gamma (\Theta )+\delta _{i}\Theta \Gamma (\Theta ).\end{array}\right. \end{aligned}$$
(70)
Using the decision functions specified above yields the payoff to player i : 
$$\begin{aligned} \dot{u}_{i}=\frac{\partial u_{i}}{\partial y_{i}}\left\{ \pi _{i}\Gamma (\Theta )-\psi _{i}\Gamma (\Theta )+\delta _{i}\Theta \Gamma (\Theta )\right\} . \end{aligned}$$
(71)
Differentiating this equation with respect to \(\psi _{i}\) gives
$$\begin{aligned} \frac{d\dot{u}_{i}}{d\psi _{i}}&=\frac{\partial u_{i}}{\partial y_{i}}\left[ \pi _{i}\frac{d\Gamma (\Theta )}{d\Theta }-\Gamma (\Theta )-\psi _{i}\frac{d\Gamma (\Theta )}{d\Theta }\right. \nonumber \\&\quad\left. +\,\delta _{i}\left\{ \Gamma (\Theta )+\Theta \frac{d\Gamma (\Theta )}{d\Theta }\right\} \right] =0. \end{aligned}$$
(72)
As a reference, if Condition LSP holds, then
$$\begin{aligned} \Gamma (\Theta )\frac{1-\delta _{i}}{\delta _{i}}=\Theta \frac{d\Gamma (\Theta )}{d\Theta },\quad\forall i\in \mathbf {N}. \end{aligned}$$
(73)
This equation holds only if \(\delta _{i}=\delta _{j},\,\forall i,j\in \mathbf {N}.\) Consequently, local strategy proofness of the MDP Procedure with two persons requires \(\delta _{i}=1/2,\) \(\forall i\in \mathbf {N}.\) Hence, the MDP Procedure can possess LSP only in a two-person economy.
Instead, if Condition ACR holds
$$\begin{aligned} \Gamma (\Theta )=\left( \frac{\Theta }{n-1}\right) \frac{d\Gamma (\Theta )}{d\Theta },\,\forall i\in \mathbf {N}. \end{aligned}$$
(74)
Solving for \(\Gamma (\Theta )\) yields
$$\begin{aligned} \Gamma (\Theta )=a\Theta ^{n-1},\,a\in \mathbf {R}_{++}. \end{aligned}$$
(75)
Since \(\Gamma (\Theta )\) is sign-preserving, we finally get
$$\begin{aligned} \Gamma (\Theta )=a\Theta |\Theta |^{n-2},\,a\in \mathbf {R}_{++}. \end{aligned}$$
(76)
Next, let me show with Conditions ACR and TI that
$$\begin{aligned} \dot{y}_{i}(\psi )=\int \Gamma \left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) d\psi _{i}+H_{i}(\psi _{-i}),\,\forall i\in \mathbf {N}. \end{aligned}$$
(77)
The best reply strategy \(\phi _{i}\) for player i is, given \(\psi _{-i}\)
$$\begin{aligned} \phi _{i}=\left\{ \frac{\partial \Gamma (\Theta )}{\partial \psi _{i}}\right\} ^{-1}\left\{ \pi _{i}\frac{\partial \Gamma (\Theta )}{\partial \psi _{i}}-\psi _{i}\frac{\partial \Gamma (\Theta )}{\partial \psi _{i}}-\Gamma (\Theta )+\frac{\partial T_{i}(\psi )}{\partial \psi _{i}}\right\} ,\,\forall i\in \mathbf {N} \end{aligned}$$
(78)
where all the partial derivatives are evaluated at \(\psi _{i}=\pi _{i}.\)
From Condition ACR
$$\begin{aligned} \sum _{i\in \mathbf {N}}\left\{ \frac{\partial \Gamma (\Theta )}{\partial \psi _{i}}\right\} ^{-1}\left\{ -\Gamma (\Theta )+\frac{\partial T_{i}(\psi )}{\partial \psi _{i}}\right\} =0. \end{aligned}$$
(79)
Since \(\Gamma (\Theta )\) is symmetric with respect to \(\psi _{i},\)
$$\begin{aligned} \frac{\partial \Gamma (\Theta )}{\partial \psi _{i}}=\frac{\partial \Gamma (\Theta )}{\partial \psi _{j}}\ne 0. \end{aligned}$$
(80)
Thus,
$$\begin{aligned} \sum _{i\in \mathbf {N}}\frac{\partial T_{i}(\psi )}{\partial \psi _{i}}=n\Gamma (\Theta ) \end{aligned}$$
(81)
or
$$\begin{aligned} \dfrac{1}{n}\sum _{i\in \mathbf {N}}\frac{\partial T_{i}(\psi )}{\partial \psi _{i}}=\Gamma (\Theta ). \end{aligned}$$
(82)
If Condition TI holds, then
$$\begin{aligned} \frac{\partial T_{i}(\psi )}{\partial \psi _{i}}=\Gamma (\Theta ). \end{aligned}$$
(83)
Thus, the desired conclusion follows straightforwardly. \(\square \)
In Theorem 6 the function \(T_{i}(\psi )\) cannot be uniquely determined without Condition TI, and thus,
$$\begin{aligned} \frac{1}{n}\left\{ \frac{\partial T_{i}(\psi )}{\partial \psi _{i}}\right\} =\delta _{i}\Gamma (\Theta ). \end{aligned}$$
(84)

4.3 A measure of incentives

It is shown that the exponent attached to the public good decision function is closely related to the number of players taking part in the LSP procedures and that this fact enables procedures to achieve local strategy proofness.

Theorem 7

The exponent attached to the public good decision function is \(\beta =n-1\), if and only if the Fujigaki–Sato Procedure fulfills LSP.

Proof

Consider the Fujigaki–Sato Procedure:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}\left( \psi \right) =\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \left| \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right| ^{\beta -1} \\ \\ \dot{y}_{i}\left( \psi \right) =-\psi _{i}\dot{x}\left( \psi \right) +\dfrac{1}{n}\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \dot{x}\left( \psi \right) ,\,\forall i\in \mathbf {N}\end{array}\right. \end{aligned}$$
(85)
where \(\beta >1\) is a parameter.
In the local incentive game associated with each iteration of the process, the payoff for each player i is given by
$$\begin{aligned} \dot{u}_{i}\left( \psi \right)&= \frac{\partial u_{i}}{\partial y_{i}}\left\{ \pi _{i}-\psi _{i}+\frac{1}{n}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \right\} \nonumber \\&\quad\times \left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \left| \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right| ^{\beta -1}. \end{aligned}$$
(86)
Differentiating this equation with respect to \(\psi _{i}\) gives
$$\begin{aligned} \frac{\partial \dot{u}_{i}(\psi _{i},\psi _{-i})}{\partial \psi _{i}}=\frac{\partial u_{i}}{\partial y_{i}}\left\{ \beta (\pi _{i}-\psi _{i})+\frac{\beta -n+1}{n}\right\} \left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) ^{\beta }=0. \end{aligned}$$
(87)
Since \(\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) ^{\beta }\ne 0\) holds out of equilibrium, the best reply strategy for player i is
$$\begin{aligned} \psi _{i}=\pi _{i}+\frac{\beta -n+1}{\beta n}. \end{aligned}$$
(88)
Let us show that this procedure satisfies LSP if and only if \(\beta =n-1\). For this purpose, define a measure of incentives:
$$\begin{aligned} \Phi (n)=\sum _{i\in \mathbf {N}}(\psi _{i}-\pi _{i})^{2}. \end{aligned}$$
(89)
Substituting (88) in (89) yields
$$\begin{aligned} \Phi (n)=\left( \frac{\beta -n+1}{\beta n}\right) ^{2}. \end{aligned}$$
(90)
Differentiating this equation with respect to n gives
$$\begin{aligned} \frac{\partial \Phi (n)}{\partial n}=\frac{2(\beta +1)(n-1-\beta )}{\beta ^{2}n^{3}}=0. \end{aligned}$$
(91)
The measure of incentives \(\Phi (n)\) attains its minimum value, zero, at \(\beta =n-1.\) Since \(n>1,\) we know that \(\Phi (n)\rightarrow 0\) as \(\beta \rightarrow n-1\) and that \(\Phi (n)=0\) if and only if \(\beta =n-1.\) Hence, the Fujigaki–Sato Procedure must adopt the unique form of the public good decision function with \(\beta =n-1\) to accomplish LSP. \(\square \)
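The algebra behind eqs. (90) and (91) is easy to check numerically: \(\Phi \) vanishes exactly at \(\beta =n-1\), and the closed-form derivative (91) agrees with a finite-difference derivative when n is treated as a continuous variable (the sample points below are arbitrary):

```python
# Measure of incentives, eq. (90), and its derivative, eq. (91).
def phi(n, beta):
    return ((beta - n + 1) / (beta * n)) ** 2

def dphi_dn_closed(n, beta):
    return 2 * (beta + 1) * (n - 1 - beta) / (beta ** 2 * n ** 3)

for n in (3, 5, 8):
    beta = n - 1
    assert phi(n, beta) == 0.0                         # LSP exponent
    assert phi(n, beta + 0.5) > 0.0 and phi(n, beta - 0.5) > 0.0

# finite-difference cross-check of eq. (91)
n0, beta0, h = 4.7, 2.2, 1e-6
num = (phi(n0 + h, beta0) - phi(n0 - h, beta0)) / (2 * h)
assert abs(num - dphi_dn_closed(n0, beta0)) < 1e-8
```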

4.4 Coalitional local strategy proofness

The problem of preference falsification by colluding individuals was dealt with for static revelation mechanisms, or demand revealing mechanisms, by several authors. For instance, Bennett and Conn (1977) considered an economy with one public good and proved that there is no revelation mechanism which is group incentive compatible: for any revelation mechanism providing public goods, if coalition formation is possible, some group of individuals can gain by misrepresenting their preferences for the public goods. Green and Laffont (1979) also studied the problem of coalitional manipulability in a generic context. They verified, under separability of the utility functions, that truthful revelation is a dominant strategy for each individual in demand revealing mechanisms used to provide public goods. They also showed that any revelation mechanism can be manipulated by coalitions of two or more agents; the payoff from colluding, however, approaches zero as the number of agents becomes infinite, i.e., in a large economy.

The main purpose of this subsection is to examine whether the Fujigaki–Sato Procedure is robust to coalitional manipulation of preferences on the part of the agents. If the structure of coalitions is fixed and known to the planner, misrepresentation can be overcome by treating each coalition as an individual agent and applying the Fujigaki–Sato Procedure to strategies formed by aggregating the preferences of each coalition's members; we then have a Coalitionally Locally Strategy Proof (CLSP) planning procedure.

However, what could happen if the coalition structure is flexible and unknown to the planner? Is it possible to construct a CLSP planning process in this case? The answer is partly negative and partly affirmative. Chakravorti (1995) presented coalition-proof procedures; however, he required the assumption of separable utility functions. He extended the method of Truchon (1984), who examined a nonmyopic incentive game in which each agent's payoff is the utility at the final allocation. Unlike the others, Truchon introduced a "threshold" level of a public good into his model to analyze agents' strategic behaviors. We propose a Piecewise Nonlinearized Procedure which is CLSP.

Retaining the same assumptions as in Sato (2012), we add some new definitions and notation. Let \(\mathbf {C}\subseteq \mathbf {N}\) be a coalition of individual agents. The vector \(\psi _{C}\) denotes the projection of \(\psi \in \mathbf {R}^{n}\) onto the coordinates of coalition C, i.e., the MRSs announced by its members. Let \(\pi _{C}\) be the corresponding vector of true MRSs of the coalition C. We use \((\psi \backslash \psi _{C})\) to signify the components of \(\psi \) other than \(\psi _{i},\) \(i\in \mathbf {C};\) we also use the notation \((\psi _{C},\psi _{N\backslash C})\) and write \(|\cdot |\) for cardinality.

Definition 5

A joint strategy for a coalition C, \(\tilde{\psi }_{C}\in \mathbf {R}^{|C|}\) is called a dominant joint strategy if it fulfills
$$\begin{aligned} \dot{u}_{i}(\tilde{\psi }_{C},\psi _{N\backslash C})\ge \dot{u}_{i}(\psi _{C},\psi _{N\backslash C}),\,\forall i\in \mathbf {C},\,\forall \psi _{C}\in \mathbf {R}^{|C|},\,\forall \psi _{N\backslash C}\in \mathbf {R}^{|N\backslash C|}. \end{aligned}$$

Definition 6

The payoff function of an agent in a coalition is given by
$$\begin{aligned} \dot{u}_{i}(\psi _{C},\psi _{N\backslash C})&= \frac{\partial u_{i}}{\partial x}\dot{x}(\psi _{C},\psi _{N\backslash C})+\frac{\partial u_{i}}{\partial y_{i}}\dot{y}_{i}(\psi _{C},\psi _{N\backslash C}) \nonumber \\&= \frac{\partial u_{i}}{\partial y_{i}}\left\{ \pi _{i}\dot{x}(\psi _{C},\psi _{N\backslash C})+\dot{y}_{i}(\psi _{C},\psi _{N\backslash C})\right\} . \end{aligned}$$
(92)

Thus, we can state the condition related to coalitions.

Condition CLSP: Coalitional Local Strategy Proofness
$$\begin{aligned}&\pi _{i}\dot{x}(\pi _{C}(t),\psi _{N\backslash C}(t))+\dot{y}_{i}(\pi _{C}(t),\psi _{N\backslash C}(t)) \nonumber \\\ge & \pi _{i}\dot{x}(\psi _{C}(t),\psi _{N\backslash C}(t))+\dot{y}_{i}(\psi _{C}(t),\psi _{N\backslash C}(t))\nonumber \\&\quad \forall \psi _{C}(t)\in \mathbf {R}^{|C|},\,\forall \psi _{N\backslash C}(t)\in \mathbf {R}^{|N\backslash C|},\,\forall i\in \mathbf {C},\,\forall t\in [0,\infty ). \end{aligned}$$
(93)
The following theorem shows the nonexistence of CLSP procedures.

Theorem 8

With the nonseparable utility functions, there exists no continuous procedure which fulfills Condition CLSP.

Proof

A CLSP planning procedure is an LSP process. Let us consider the joint payoff \(\dot{u}_{ik}(\psi _{C},\psi _{N\backslash C})\) of the two-member coalition \(\{i,k\}\):
$$\begin{aligned} \dot{u}_{ik}(\psi _{C},\psi _{N\backslash C})=\sum _{\ell =i,k}\frac{\partial u_{\ell }}{\partial y_{\ell }}\left\{ \pi _{\ell }-\psi _{\ell }+\frac{1}{n}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \right\} \dot{x}(\psi _{C},\psi _{N\backslash C}). \end{aligned}$$
Differentiation with respect to \(\psi _{\ell }\) gives
$$\begin{aligned}\frac{\partial \dot{u}_{ik}(\psi _{C},\psi _{N\backslash C})}{\partial \psi _{\ell }}&=\sum _{\ell =i,k}\frac{\partial u_{\ell }}{\partial y_{\ell }}\left\{ \frac{1-n}{n}\dot{x}(\psi _{C},\psi _{N\backslash C})\right. \nonumber \\&\quad \left. +\left( \pi _{\ell }-\psi _{\ell }+\frac{1}{n}\sum _{j\in \mathbf {N}}\psi _{j}-\frac{1}{n}\gamma \right) \frac{\partial \dot{x}(\psi _{C},\psi _{N\backslash C})}{\partial \psi _{\ell }}\right\} . \end{aligned}$$
(94)
At an equilibrium, where \(\dot{x}(\psi _{C},\psi _{N\backslash C})=0,\) the above equation is zero if
$$\begin{aligned} \pi _{i}-\psi _{i}+\frac{1}{n}\sum _{j\in \mathbf {N}}\psi _{j}-\frac{1}{n}\gamma =0 \end{aligned}$$
(95)
and
$$\begin{aligned} \pi _{k}-\psi _{k}+\frac{1}{n}\sum _{j\in \mathbf {N}}\psi _{j}-\frac{1}{n}\gamma =0. \end{aligned}$$
(96)
Combining these two yields
$$\begin{aligned} \pi _{i}-\psi _{i}-\pi _{k}+\psi _{k}=0 \end{aligned}$$
(97)
which does not necessarily imply the requirement of LSP:
$$\begin{aligned} \pi _{i}=\psi _{i}\text { and }\pi _{k}=\psi _{k}. \end{aligned}$$
(98)
Hence, even the two-member coalition \(\{i,k\}\) can manipulate the LSP procedure. \(\square \)

Next, we show that even LSP procedures are vulnerable to a full coalition of the players.

Theorem 9

In the Fujigaki–Sato Procedure, there exists a full coalition of players with a vector of strategies \(\hat{\psi }=\left( \hat{\psi }_{1},\ldots ,\hat{\psi }_{n}\right) \) such that \(\dot{u}_{i}(\hat{\psi })=\infty ,\) \(\forall i\in \mathbf {N}\).

Proof

Consider the Procedure:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=a\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \left| \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right| ^{n-2} \\ \\ \dot{y}_{i}=-\psi _{i}\dot{x}\left( \psi \right) +\frac{1}{n}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \dot{x}\left( \psi \right) ,\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(99)
Let us show that there exists \(\hat{\psi }_{i},\forall i\in \mathbf {N},\) which satisfies the procedure:
$$\begin{aligned} \left\{ \begin{array}{l} \lambda \dot{x}=a\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \left| \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right| ^{n-2} \\ \\ \lambda \dot{y}_{i}=-\psi _{i}\lambda \dot{x}\left( \psi \right) +\frac{1}{n}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \lambda \dot{x}\left( \psi \right) ,\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(100)
where \(\lambda >1\).
The above equations give
$$\begin{aligned} \hat{\psi }_{i}=-\frac{\dot{y}_{i}\left( \hat{\psi }\right) }{\dot{x}\left( \hat{\psi }\right) }+\frac{1}{n}\left( \sum _{j\in \mathbf {N}}\hat{\psi }_{j}-\gamma \right) ,\,\forall i\in \mathbf {N}. \end{aligned}$$
(101)
Thus,
$$\begin{aligned} \sum _{j\in \mathbf {N}}\hat{\psi }_{j}-\gamma =-\sum _{i}\frac{\dot{y}_{i}\left( \hat{\psi }\right) }{\dot{x}\left( \hat{\psi }\right) }-\gamma +\sum _{j\in \mathbf {N}}\hat{\psi }_{j}-\gamma . \end{aligned}$$
(102)
Hence, we obtain
$$\begin{aligned} \sum _{i\in \mathbf {N}}\dot{y}_{i}\left( \hat{\psi }\right) +\gamma \dot{x}\left( \hat{\psi }\right) =0. \end{aligned}$$
(103)
The same is true for the MDP Procedure. \(\square \)
Nevertheless, it is possible to construct a planning procedure which is coalition proof. Let a be a positive constant governing the adjustment speed of the public good, and let \(\chi \) be a positive constant, e.g., \(\chi =1\). Consider the following three cases.
  • (i) \(\ \sum _{j}\psi _{j}-\gamma >a^{-1/(n-2)}\)

  • (ii) \(-a^{-1/(n-2)}\le \sum _{j}\psi _{j}-\gamma \le a^{-1/(n-2)}\)

  • (iii) \(-a^{-1/(n-2)}>\sum _{j}\psi _{j}-\gamma \)

The Piecewise Nonlinearized MDP Procedure reads:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=\left\{ \begin{array}{cc} \chi &{} \,\,if\,(i)\,holds \\ a\left( \sum _{j}\psi _{j}-\gamma \right) \left| \sum _{j}\psi _{j}-\gamma \right| ^{n-2} &{} \,\, if\,(ii)\,holds \\ -\chi &{} \,\,if\,(iii)\,holds\end{array}\right. \\ \\ \dot{y}_{i}=-\psi _{i}\dot{x}\left( \psi \right) +\delta _{i}\left( \sum _{j\in \mathbf {N}}\psi _{j}-\gamma \right) \dot{x}\left( \psi \right) ,\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(104)
With this procedure, we can state the theorem.

Theorem 10

The Piecewise Nonlinearized MDP Procedure satisfies CLSP, F, M, N and PE.

Proof

Conditions F, M, N and PE follow straightforwardly from the construction of the Piecewise Nonlinearized MDP Procedure, which is not differentiable. With this process, nobody can drive his/her utility increment to \(\infty ,\) because the public good decision function is bounded by construction. \(\square \)
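As an illustration (a sketch, not the paper's formal construction), the decision functions (104) can be coded directly for n = 3, a = 1 and \(\chi =1\); the choice a = 1 makes the three branches join continuously at \(|\sum _{j}\psi _{j}-\gamma |=1\). The assertions check the two properties used in the proof: \(|\dot{x}|\le \chi \) (bounded utility increments) and \(\sum _{i}\dot{y}_{i}+\gamma \dot{x}=0\) (Condition F):

```python
# Piecewise Nonlinearized MDP decision functions (104), n = 3 sketch.
def x_dot(psi, gamma, n=3, a=1.0, chi=1.0):
    Theta = sum(psi) - gamma
    bound = a ** (-1.0 / (n - 2))               # branch switch point
    if Theta > bound:
        return chi                              # case (i): capped above
    if Theta < -bound:
        return -chi                             # case (iii): capped below
    return a * Theta * abs(Theta) ** (n - 2)    # case (ii): MDP-like

def y_dots(psi, gamma, n=3, delta=None, **kw):
    delta = delta or [1.0 / n] * n              # shares summing to one
    Theta, xd = sum(psi) - gamma, x_dot(psi, gamma, n, **kw)
    return [-p * xd + d * Theta * xd for p, d in zip(psi, delta)]

for psi in [(2.0, 3.0, 5.0), (1.0, 1.0, 1.2), (0.0, 1.0, 0.5)]:
    gamma = 3.0
    xd, yds = x_dot(psi, gamma), y_dots(psi, gamma)
    assert abs(xd) <= 1.0                       # bounded x_dot
    assert abs(sum(yds) + gamma * xd) < 1e-12   # Condition F: feasibility
```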

5 Price-quantity equivalence in planning procedures

5.1 Price-guided procedures and their normative conditions

This section establishes the equivalence on equity between price-guided and quantity-guided planning procedures, which is an extension of Laffont and Rochet (1985). Their nonlinear pricing scheme and the duality results on the conditions overturned the widely held view as to the superiority of quantity-guided procedures, particularly with public goods. Contrary to the conjecture in Malinvaud (1971) about the falsification of preferences, Laffont and Rochet (1985) proved the equivalence between price and quantity planning procedures. To tackle the equity issue, this paper adopts Kolm's super-equity and transforms it into our dynamic context.

Despite the dramatic growth in the theory of incentives in planning procedures with public goods, one of their vital aspects has been largely neglected: equity and fairness, which are further prerequisites for the processes to be accomplished. It is recognized that planning procedures can go further than one might expect; that is, they can attain some measure of equity and fairness in an economy with or without public goods. The notion of Kolm (1973) may help to fill the gap. Kolm's super-equity implies Foley's equity. Super-fairness, which is formally defined below, yields fruitful equivalence results on price-guided and quantity-guided planning processes.8

For the sake of completeness, let us summarize Laffont and Rochet's framework in this subsection. Laffont and Rochet (1985) established the equivalence theorem between locally strategy proof quantity-guided planning procedures and well-defined nonlinear price-guided planning procedures.

Formally, at each iteration, individual i is confronted with a nonlinear price, \(\xi _{i}(\dot{x},\psi _{-i}),\) and a revenue function, \(R_{i}(\psi _{-i}),\) parametrized by the announcements of the others, \(\psi _{-i}.\) Here \(\xi _{i}(\dot{x},\psi _{-i})\) is interpreted as the price that agent i has to pay for a marginal increase \(\dot{x}\) of the public good, and \(R_{i}(\psi _{-i})-\xi _{i}(\dot{x},\psi _{-i})\) is his/her marginal increase of private good consumption. As usual, one sets \(\xi _{i}(0,\psi _{-i})=0\) for any i, and
$$\begin{aligned} \zeta _{i}(\dot{x},\psi _{-i})=\partial \xi _{i}(\dot{x},\psi _{-i})/\partial \dot{x}. \end{aligned}$$
(105)
Following myopic behavior, agent i tries to solve the problem:
$$\begin{aligned} {\text {Max}}\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} ,\,\forall i\in \mathbf {N}. \end{aligned}$$
(106)
The normative conditions that Laffont and Rochet (1985) posed are as follows. In order to state them, let the demand for the public good be represented by
$$\begin{aligned} D_{i}(\psi _{-i})=Argmax\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} ,\quad\forall i\in \mathbf {N}. \end{aligned}$$
(107)
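To make the Argmax in (107) concrete, the following sketch computes a myopic demand by grid search for an arbitrary marginal price schedule \(\zeta _{i}\). The function names and the quadratic example \(\zeta (\theta )=\theta \) are illustrative assumptions of mine, not taken from the paper; at an interior maximum the first-order condition \(\psi _{i}=\zeta _{i}(D_{i},\psi _{-i})\) holds, so with \(\zeta (\theta )=\theta \) the demand is simply \(D_{i}=\psi _{i}\):

```python
import numpy as np

def price_integral(zeta, xdot, m=200):
    """xi_i(xdot) = integral_0^xdot zeta(theta) d(theta), trapezoidal rule."""
    ts = np.linspace(0.0, xdot, m + 1)
    vals = np.array([zeta(t) for t in ts])
    return float((vals[:-1] + vals[1:]).sum() / 2.0 * (xdot / m))

def myopic_demand(psi_i, zeta, R_i=0.0, grid=None):
    """D_i of Eq. (107): grid-search Argmax of psi_i*xdot - xi_i(xdot) + R_i."""
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 2001)
    payoffs = np.array([psi_i * xd - price_integral(zeta, xd) + R_i
                        for xd in grid])
    return float(grid[payoffs.argmax()])
```

Coherency, introduced just below, then requires these individual demands to coincide across agents.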
The procedure defines a feasible allocation of the public good at each iteration if all agents demand the same variation of the public good. In that case, the system of personalized prices and revenues, \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}},\) is said to be coherent.

Now, the conditions for price-guided procedures are in order.

Condition PF. Feasibility
$$\begin{aligned} \sum _{i\in \mathbf {N}}R_{i}\left( \psi _{-i}\right) +\gamma \dot{x}=\sum _{i\in \mathbf {N}}\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta ,\,\forall \psi \in R_{+}^{\mathbf {N}}. \end{aligned}$$
(108)
Condition PM. Monotonicity
$$\begin{aligned} \forall \psi&\in R_{+}^{\mathbf {N}},\,\forall i\in \mathbf {N} \nonumber \\ \dot{u}_{i}(\psi )&={\text {Max}}\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} \ge 0. \end{aligned}$$
(109)
Condition PPE. Pareto Efficiency
$$\begin{aligned} \dot{x}\left( \psi \right) =R_{1}(\psi _{-1})=\cdots =R_{n}(\psi _{-n})=0\iff \sum _{i\in \mathbf {N}}\psi _{i}=\gamma ,\,\forall \psi \in R_{+}^{\mathbf {N}}. \end{aligned}$$
(110)
Condition \(C.\ \ \) Coherency
$$\begin{aligned} D_{1}(\psi _{-1})=\cdots =D_{n}(\psi _{-n}),\,\forall \psi \in R_{+}^{\mathbf {N}}. \end{aligned}$$
(111)
Condition N. Neutrality
$$\begin{aligned} \forall z_{0}\in \mathbf {Z}\text { and }\forall z^{*}\in \mathbf {P}_{0},\,\exists \delta \in \Delta \text { such that }z(t,\delta )\in \mathbf {Z}\text { and }z^{*}=\lim _{t\rightarrow \infty }z\left( t,\delta \right) . \end{aligned}$$
(112)
Neutrality is the same for the quantity-guided procedures. My point of departure is the following result.

Theorem 11

(Laffont and Rochet (1985)) (i) The mapping \(z=(x,y_{1},\ldots ,y_{n})\) defines a locally strategy proof quantity-guided planning procedure if and only if there exists a coherent system of personalized prices and incomes, \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) such that: \(\forall \psi \in R_{+}^{\mathbf {N}}\)
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=Argmax\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} \\ \\ \dot{y}_{i}=-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i}),\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
In other words, C \(\Leftrightarrow \) LSP.

(ii) Let z be a quantity-guided planning procedure satisfying LSP and \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) its dual formulation in terms of prices.

Then, a) z is feasible if and only if \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) is feasible. In other words: \(PF\Leftrightarrow F\Leftrightarrow \)
$$\begin{aligned} \sum _{i\in \mathbf {N}}\dot{u}_{i}(\psi _{i})=\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \dot{x},\,\forall \psi \in R_{+}^{\mathbf {N}}. \end{aligned}$$
b) z is individually rational if and only if \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) is individually rational. In other words: \(PM\Leftrightarrow M\Leftrightarrow \)
$$\begin{aligned} \dot{u}_{i}(\psi _{i})\ge 0,\,\forall i\in \mathbf {N},\,\forall \psi \in R_{+}^{\mathbf {N}}. \end{aligned}$$
c) z is Pareto efficient if and only if \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) is Pareto efficient. In other words: \(PPE\Leftrightarrow PE\Leftrightarrow \)
$$\begin{aligned} \dot{u}_{1}(\psi )=\cdots =\dot{u}_{n}(\psi )=\dot{x}=0\iff \sum _{i\in \mathbf {N}}\psi _{i}=\gamma . \end{aligned}$$

5.2 Equity and fairness along the procedures

My concern so far has concentrated on the theory of incentives in planning procedures with asymptotic efficiency and local strategy proofness. Let us now dip into the particular topic of equity and fairness. Malinvaud (1972) was the earliest to discuss the central role that the conception of equity plays along planning procedures. In brief, Malinvaud's finding is that the direction of the utility change is the same across individuals; namely, if the utility of one person increases, those of all the others also increase, and vice versa. We can describe Malinvaud's equity in a somewhat different way from the original expression. Let \(\dot{u}_{i}(x,y_{i})=(\partial u_{i}/\partial y_{i})(\pi _{i}\dot{x}+\dot{y}_{i}).\)

Definition 7

An allocation is Malinvaud equitable if
$$\begin{aligned} \dot{u}_{i}(x,y_{i})\dot{u}_{j}(x,y_{j})\ge 0,\,\forall i,j\in \mathbf {N}. \end{aligned}$$
(113)

In their celebrated book (1979a, p. 274), Green and Laffont wrote, “By choosing equal shares of the cost, the procedure can be made equitable in the following sense: if the agents consider the procedure before knowing their own preferences, in the spirit of the Rawlsian approach, no particular agent is favored.” According to this criterion, their procedure can be equitable for agents in the original position à la Rawls, since it can involve equal shares of the cost.

The first concept of equity was proposed by Foley (1967), and Foley's theorem states, under suitable assumptions, that there exists an equitable allocation in a pure exchange economy. Kolm's famous monograph (1971) gave a systematic study of equity and justice in pure exchange economies. Just after the publication of this well-known book, Kolm (1973) proposed the concept of super-equity, which includes that of no-envy equity. Suzumura and Sato (1985) verified that the concept of no-envy equity is neither robust nor appropriate for an economy with public goods. In so doing, they checked the performances of the Lindahl equilibrium, the Zeuthen–Nash bargaining solution, the Kalai–Smorodinsky arbitration scheme and the Perles–Maschler super-additive solution. With their numerical examples, Suzumura and Sato concluded that none of these approaches could achieve no-envy equity for an economy with public goods.

Two concepts of equity are now introduced.

Definition 8

An allocation z is Foley equitable if
$$\begin{aligned} u_{i}(x,y_{i})\ge u_{i}(x,y_{j}),\,\forall i,j\in \mathbf {N}. \end{aligned}$$
(114)

Definition 9

An allocation z is super-equitable if
$$\begin{aligned} u_{i}(x,\,y_{i})\ge u_{i}\left( x,\,\lambda _{i}\sum _{j\in \mathbf {N}}y_{j}\right) ,\,\forall i\in \mathbf {N} \end{aligned}$$
(115)
with \(\sum _{j\in \mathbf {N}}\lambda _{j}=1,\) where \(\{\lambda _{i}\}\) are nonnegative numbers. This definition is independent of the form of the mechanism which implements allocations, and it brings us an interesting result on the relationship between price-guided and quantity-guided planning procedures.

Since the theory developed in this field is basically static, we add a dynamic touch to the theory of equity and fairness of planning procedures. Let me propose two conditions on super-equity, one for price procedures and one for quantity procedures; we then obtain the theorem.

Condition TSEP. Transversal Super-Equity for Price Procedures
$$\begin{aligned} \dot{u}_{i}(\psi (t))&={\text {Max}}\left\{ \psi _{i}(t)\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}(t)\right) d\theta +R_{i}(\psi _{-i}(t))\right\} \nonumber \\&\ge {\text {Max}}\left[ \psi _{i}(t)\dot{x}-\lambda _{i}\sum _{j\in \mathbf {N}}\left\{ \int _{0}^{\dot{x}}\zeta _{j}\left( \theta ,\psi _{-j}(t)\right) d\theta \right. \right. \nonumber \\&\left. \left. \quad-\,R_{j}(\psi _{-j}(t))\right\} \right] \nonumber \\&\qquad \forall \psi \in R_{+}^{\mathbf {N}},\,\forall i\in \mathbf {N},\,\forall t\in [0,\infty ). \end{aligned}$$
(116)
Condition TSEQ. Transversal Super-Equity for Quantity Procedures
$$\begin{aligned} \dot{y}_{i}(t)\ge\, & \lambda _{i}\sum _{j\in \mathbf {N}}\dot{y}_{j}(t) \nonumber \\ & \forall \psi\in R_{+}^{\mathbf {N}},\,\forall i\in \mathbf {N},\,\forall t\in [0,\infty ). \end{aligned}$$
(117)

Theorem 12

Let the mapping \(z=(x,y_{1},\ldots ,y_{n})\) be defined by a quantity-guided planning procedure fulfilling LSP and \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}\) its dual formulation of nonlinear price-guided procedure. Then, the mapping is transversally super-equitable if and only if there exists a coherent system of personalized prices and incomes, \(\{\zeta _{i},R_{i}\}_{i\in \mathbf {N}}.\) In other words, TSEP \(\Leftrightarrow \) TSEQ \(\Leftrightarrow \)
$$\begin{aligned} \dot{u}_{i}(\psi )=\dot{u}_{i}(x(\psi ),y_{i}(\psi ))\ge \dot{u}_{i}\left( x(\psi ),\,\lambda _{i}\sum _{j\in \mathbf {N}}y_{j}(\psi )\right) ,\,\forall i\in \mathbf {N}. \end{aligned}$$

Proof

Let \(\mathbf {X}\) be the set of adjustment functions of the public good. Laffont and Rochet (1985) verified that C becomes C′ such that
$$\begin{aligned} \xi _{i}\left( \dot{x},\psi _{-i}\right) =\psi _{i},\,\exists \dot{x}\in \mathbf {X},\,\forall i\in \mathbf {N},\,\forall \psi _{-i}\in \mathbf {R}_{+}^{n-1} \end{aligned}$$
(118)
which immediately entails the equivalence between TSEP and TSEQ.
$$\begin{aligned} \dot{u}_{i}(\psi )&= \dot{u}_{i}(x(\psi ),y_{i}(\psi )) \nonumber \\&= {\text {Max}}\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} \nonumber \\&\ge {\text {Max}}\left[ \psi _{i}\dot{x}-\lambda _{i}\sum _{j\in \mathbf {N}}\left\{ \int _{0}^{\dot{x}}\zeta _{j}\left( \theta ,\psi _{-j}\right) d\theta -R_{j}(\psi _{-j})\right\} \right] \nonumber \\&= \dot{u}_{i}\left( x(\psi ),\,\lambda _{i}\sum _{j\in \mathbf {N}}y_{j}(\psi )\right) . \end{aligned}$$
(119)
The converse is obvious. \(\square \)

Laffont and Rochet (1985) named the original Generalized MDP Procedure the Fujigaki–Sato Procedure:
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}\left( \psi \right) =a\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \left| \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right| ^{n-2} \\ \\ \dot{y}_{i}\left( \psi \right) =-\psi _{i}\dot{x}\left( \psi \right) +(1/n)\left( \sum _{i\in \mathbf {N}}\psi _{i}-\gamma \right) \dot{x}\left( \psi \right) ,\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(120)
The nonlinear pricing scheme counterpart of this procedure was proposed by Laffont and Rochet (1985) as the coherent system of personalized prices and revenues such that:
$$\begin{aligned} \zeta _{i}\left( \dot{x},\psi _{-i}\right) =\frac{1}{a}\dot{x}(\psi )^{\frac{1}{n-1}}+\gamma -\sum _{j\ne i}\psi _{j} \end{aligned}$$
(121)
or equivalently
$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}=Argmax\left\{ \psi _{i}\dot{x}-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i})\right\} \\ \\ \dot{y}_{i}=-\int _{0}^{\dot{x}}\zeta _{i}\left( \theta ,\psi _{-i}\right) d\theta +R_{i}(\psi _{-i}),\,\forall i\in \mathbf {N}.\end{array}\right. \end{aligned}$$
(122)
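The duality can be checked numerically. Setting \(a=1\) and \(n=3\), and reading \(\dot{x}^{1/(n-1)}\) in (121) as \({\text {sign}}(\dot{x})\left| \dot{x}\right| ^{1/(n-1)}\) so that downward adjustments are covered (both choices are my own simplifying assumptions, as are the function names and numerical values), each agent's myopic best reply in (122) reproduces the Fujigaki–Sato adjustment (120):

```python
import numpy as np

def fs_xdot(psi, gamma, n):
    """Fujigaki-Sato public good adjustment, Eq. (120), with a = 1."""
    s = sum(psi) - gamma
    return s * abs(s) ** (n - 2)

def zeta(theta, psi, i, gamma, n):
    """Personalized marginal price of Eq. (121) with a = 1; the (n-1)-th root
    of a possibly negative argument is read as sign(theta)|theta|^{1/(n-1)}."""
    root = np.sign(theta) * abs(theta) ** (1.0 / (n - 1))
    return root + gamma - (sum(psi) - psi[i])

def best_reply_xdot(psi, i, gamma, n, m=100):
    """Agent i's Argmax in Eq. (122): grid search with trapezoidal integrals."""
    def payoff(xd):
        ts = np.linspace(0.0, xd, m + 1)
        vals = np.array([zeta(t, psi, i, gamma, n) for t in ts])
        return psi[i] * xd - float((vals[:-1] + vals[1:]).sum() / 2.0 * (xd / m))
    grid = np.linspace(-1.0, 1.0, 2001)
    payoffs = np.array([payoff(xd) for xd in grid])
    return float(grid[payoffs.argmax()])
```

For instance, with \(\psi =(0.6,0.5,0.4)\) and \(\gamma =1\), every agent's best reply is approximately \((\sum _{i}\psi _{i}-\gamma )\left| \sum _{i}\psi _{i}-\gamma \right| =0.25\), illustrating coherency of the personalized price system.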
Since the planning center's task is to achieve super-equitable allocations via either quantity-guided or price-guided planning procedures, it has to collect the relevant information from the periphery so as to meet condition TSEP or TSEQ. This information is available if the procedure is coherent or, equivalently, locally strategy proof. The existence of a procedure which simultaneously satisfies super-equity, Pareto optimality and local strategy proofness has thus been verified.

The normative analysis of planning procedures has focused almost exclusively upon their efficiency and incentive aspects. This paper has proposed a concept of transversal super-equity in an economy with a public good, where equity, as well as efficiency and local strategy proofness, is a salient concept. The focus has been upon issues pertaining to the performance characteristics of price-guided and quantity-guided planning procedures. The methodology of this paper is chiefly based on Laffont and Rochet (1985), and I have borrowed from their insightful work in much of my analysis. The concept of transversal super-equity is consistent with the conditions already established for planning procedures. For our purpose, the notion of super-equity has been transformed into our dynamic setting of planning procedures.

6 Final remarks

This paper has revisited the family of MDP Procedures and analyzed their properties. In the local game associated with any iteration of any procedure, each player's payoff is the utility increment at each point of time. Laffont's differential method is used to formalize the procedures with desirable properties. I have shown that the Nonlinearized MDP Procedure, or Fujigaki–Sato Procedure, can simultaneously achieve efficiency and local strategy proofness; that is, it converges to a Pareto optimum, and the best reply strategy of each player at each iteration is to declare his/her true MRS. In turn, the Generalized MDP Procedure can possess aggregate correct revelation, which means the equality between the sum of true MRSs and that of Nash equilibrium strategies.

Recognizing the difficulties concerning the possibility of individuals manipulating private information, the literature has verified that this incentive problem can be treated by planning procedures that require a continuous revelation of information, provided that agents adopt myopic behavior. If individuals are farsighted, however, the traditional impossibility results occur; i.e., incentive compatibility is incompatible with efficiency, as was pointed out by Champsaur, Laroque and Rochet. This paper has studied an instantaneous situation where agents are only asked to reveal their true MRSs at continuous dates, and the direction and speed of adjustment are changed according to the information collected. Consequently, the associated dynamic process, named the Fujigaki–Sato Procedure, turns out to be nonlinear. Individuals are assumed to behave myopically at each date; their behavior is hence characterized by myopia, not farsightedness. The idea of looking at an intermediate time horizon for agents' manipulations of information is more natural and more realistic, but more difficult, than myopia or perfect foresight. [See Roberts (1987) for this point.]

In the literature on the problem of incentives in planning procedures with public goods, myopic strategic behavior has prevailed. Many papers imposed this behavioral hypothesis, i.e., myopia, on which the foregoing discussions crucially depended, spawning numerous desirable results in connection with the family of MDP Procedures. The aim of this paper has been to examine the consequences of the assumption that individuals choose their strategies to maximize an instantaneous change in their utility at each iteration along the procedure, as analyzed by Sato (1983).

It is also verified that the Generalized MDP Procedure can always keep neutrality, in contrast to Champsaur and Laroque (1981, 1982) and Laroque and Rochet (1983), who analyzed the properties of the MDP Procedure under the nonmyopic assumption. They treated the case where each individual attempts to forecast the influence of his/her announcement to the planning center over a predetermined time horizon and optimizes his/her responses accordingly. It is proved that, if the time horizon is long enough, any noncooperative equilibrium of the intertemporal game attains an approximately Pareto optimal allocation. At such an equilibrium, however, the influence of the center on the final allocation is negligible, which entails nonneutrality of the procedure. Their attempt is to bridge the gap between the local instantaneous game and the global game, as was pointed out by Hammond (1979). Sato (2012), however, aimed to bridge the gap between the local game and the intertemporal game by constructing a compromise between continuous and discrete procedures: the piecewise linearized procedure.

Footnotes
1

See Malinvaud (1969, 1970, 1970–1971, 1971, 1972) and Drèze and de la Vallée Poussin (1969, 1971). For an idea of the tâtonnement process, see also Drèze (1972, 1974).

 
2

See Kakhbod et al. (2013) for the most recent research in this field.

 
3

Drèze and de la Vallée Poussin (1971) did not explicitly introduce initial resources, but implicitly incorporated them in the production set.

 
4

See Samuelson (1954) and Mukherji (1990). See also McLure (1968), Milleron (1972) and Laffont (1982, 1985) for diagrams with public goods.

 
5

For the concepts of neutrality associated with planning procedures, see Cornet (1977a, b, c, d) and Cornet (1979), Cornet and Lasry (1976), Roberts (1982) and Sato (1983). See also D’Aspremont and Drèze (1979) for a version of neutrality which is valid for the generic context.

 
6

See Laffont (1985) for the Bowen–Laffont Procedure.

 
7

See Laffont and Maskin (1980) for the integrability of the equations defining dominant strategy mechanisms.

 
8

See also Arrow and Hurwicz (1960) and Tulkens (1978) for the price-guided planning procedures. The concepts below can also hold for an economy with only private goods.

 

Declarations

Acknowledgements

This is a paper dedicated to the XXXXVth Anniversary of the MDP Procedure and the Lth Anniversary of CORE at l’Université Catholique de Louvain. This is a revised version of the paper presented at the Regional Science Workshop in Sendai held at the Graduate School of Information Sciences, Tohoku University, June 17, 2011. The revised version of this paper was presented at the Open Lecture held at International Christian University, October 18, 2013. Its further revised version was presented at the Public Economics Seminar held at Keio University, April 18, 2014 and at the annual meeting of the Japanese Economic Association held at Doshisha University, June 15, 2014. Moreover, it was presented at the Microeconomics Workshop held at the Graduate School of Economics, The University of Tokyo, June 30, 2015. Thanks are due to the participants for their useful comments and helpful suggestions. The author thanks Jacques Drèze, Claude Henry, Jean-Jacques Laffont and Henry Tulkens for their useful discussions and their encouragement on my research of the problems of incentives in planning procedures with public goods.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
The Center for Advancement of Higher Education, Tohoku University

References

  1. Arrow K, Hurwicz L (1960) Decentralization and computation in resource allocation. In: Pfouts R (ed) Essays in economics and econometrics in honor of Harold Hotelling. University of North Carolina Press, Chapel Hill, pp 34–104
  2. Attouch H, Damlamian A (1972) On multivalued evolution equations in Hilbert spaces. Isr J Math 12:373–390
  3. Bennett E, Conn D (1977) The group incentive properties of mechanisms for the provision of public goods. Public Choice 29:95–102
  4. Castaing C, Valadier M (1969) Equations différentielles multivoques dans les espaces localement convexes. Revue Française d'Informatique et de Recherche Opérationnelle 16:3–16
  5. Chakravorti B (1995) Dynamic public goods provision with coalitional manipulation. J Public Econ 56:143–161
  6. Champsaur P (1976) Neutrality of planning procedures in an economy with public goods. Rev Econ Stud 43:293–300
  7. Champsaur P, Laroque G (1981) Le plan face aux comportements stratégiques des unités décentralisées. Annales de l'INSEE 42:19–33
  8. Champsaur P, Laroque G (1982) Strategic behavior and decentralized planning procedures. Econometrica 50:325–344
  9. Champsaur P, Rochet J-C (1983) On planning procedures which are locally strategy proof. J Econ Theory 30:353–369
  10. Champsaur P, Drèze J, Henry C (1977) Stability theorems with economic applications. Econometrica 45:272–294
  11. Cornet B (1977a) An abstract theorem for planning procedures. In: Auslender A (ed) Convex analysis and its application. Lecture notes in economics and mathematical systems, vol 144. Springer, pp 53–59
  12. Cornet B (1977b) Accessibilités des optimums de Pareto par des processus monotones. Comptes Rendus de l'Académie des Sciences, Série A 282:641–644
  13. Cornet B (1977c) On planning procedures defined by multivalued differential equations. In: Systèmes Dynamiques et Modèles Economiques, Colloques Internationaux du C.N.R.S., Paris, No. 259, Chapter 2
  14. Cornet B (1977d) Sur la neutralité d'une procédure de planification. Cahiers du Séminaire d'Économétrie, 19. C.N.R.S., Paris, pp 71–81
  15. Cornet B (1979) Monotone planning procedures and accessibility of Pareto optima. In: Aoki M, Marzollo A (eds) New trends in dynamic system theory and economics. Academic Press, Cambridge, pp 337–349
  16. Cornet B (1983) Neutrality of planning procedures. J Math Econ 11:141–160
  17. Cornet B, Lasry J-M (1976) Un théorème de surjectivité pour une procédure de planification. Comptes Rendus de l'Académie des Sciences, Série A 282:1375–1378
  18. D'Aspremont C, Drèze J (1979) On the stability of dynamic processes in economic theory. Econometrica 47:733–737
  19. Drèze J (1972) A tâtonnement process for investment under uncertainty in private ownership economies. In: Szegö G, Shell K (eds) Mathematical methods in investment and finance. North-Holland, Amsterdam, pp 3–23
  20. Drèze J (1974) Investment under private ownership: optimality, equilibrium and stability. In: Drèze J (ed) Allocation under uncertainty: equilibrium and optimality. Macmillan, New York, pp 129–166
  21. Drèze J, de la Vallée Poussin D (1969) A tâtonnement process for guiding and financing an efficient production of public goods. CORE Discussion Paper No. 6922. Presented at the Brussels meeting of the Econometric Society, September 1969
  22. Drèze J, de la Vallée Poussin D (1971) A tâtonnement process for public goods. Rev Econ Stud 38:133–150
  23. Foley D (1967) Resource allocation and the public sector. Yale Econ Essays 7:43–98
  24. Fujigaki Y, Sato K (1981) Incentives in the generalized MDP procedure for the provision of public goods. Rev Econ Stud 48:473–485
  25. Fujigaki Y, Sato K (1982) Characterization of SIIC continuous planning procedures for the optimal provision of public goods. Econ Stud Q 33:211–226
  26. Green J, Laffont J-J (1977) Révélation des préférences pour les biens publics: caractérisation des mécanismes satisfaisants. In: Malinvaud E (ed) Cahiers du Séminaire d'Économétrie, vol 19. C.N.R.S., pp 83–103
  27. Green J, Laffont J-J (1979a) Incentives in public decision-making, studies in public economics, vol 1. North-Holland, Amsterdam
  28. Hammond P (1979) Symposium on incentive compatibility: introduction. Rev Econ Stud 47:181–184
  29. Henry C (1972) Differential equations with discontinuous right-hand side for planning procedures. J Econ Theory 4:545–551
  30. Henry C (1973) Problèmes d'existence et de stabilité pour des processus dynamiques considérés en économie mathématique. Laboratoire d'Econométrie de l'Ecole Polytechnique, March 1973
  31. Henry C (1979) On the free rider problem in the M.D.P. procedure. Rev Econ Stud 46:293–303
  32. Kakhbod A, Koo J, Teneketzis D (2013) An efficient tâtonnement process for the public good problem: a decentralized subgradient approach, unpublished
  33. Kolm S-C (1971) Justice et Equité. Editions du Centre National de la Recherche Scientifique, CEPREMAP, Paris; English translation (1998) Justice and Equity. MIT Press, Cambridge
  34. Kolm S-C (1973) Super-équité. Kyklos 26:841–843
  35. Laffont J-J (1982) Cours de Théorie Microéconomique, vol 1: Fondements de l'Economie Publique. Economica, Paris; translated by Bonin J and H (1985) Fundamentals of Public Economics. MIT Press, Cambridge
  36. Laffont J-J (1985) Incitations dans les procédures de planification. Annales de l'INSEE 58:3–37
  37. Laffont J-J, Maskin E (1980) A differentiable approach to dominant strategy mechanisms. Econometrica 48:1507–1520
  38. Laffont J-J, Maskin E (1983) A characterization of strongly locally incentive compatible planning procedures with public goods. Rev Econ Stud 50:171–186
  39. Laffont J-J, Rochet J-C (1985) Price-quantity duality in planning procedures. Soc Choice Welfare 2:311–322
  40. Laroque G, Rochet J-C (1983) Myopic versus intertemporal manipulation in decentralized planning procedures. Rev Econ Stud 50:187–196
  41. Malinvaud E (1969) Procédures pour la détermination d'un programme de consommation collective. Paper presented at the Brussels meeting of the Econometric Society, September 1969
  42. Malinvaud E (1970) The theory of planning for individual and collective consumption. INSEE, Paris. Presented at the symposium on the problems of national economy modelling, Novosibirsk, June 1970
  43. Malinvaud E (1970–1971) Procedures for the determination of collective consumption. Eur Econ Rev 2:187–217
  44. Malinvaud E (1971) A planning approach to the public good problem. Swed J Econ 73:96–121
  45. Malinvaud E (1972) Prices for individual consumption, quantity indicators for collective consumption. Rev Econ Stud 39:385–406
  46. McLure C (1968) Welfare maximization: the simple analytics with public goods. Can J Econ 12:21–34
  47. Milleron J-C (1972) Theory of value with public goods: a survey article. J Econ Theory 5:419–477
  48. Mukherji A (1990) Walrasian and non-Walrasian equilibria. Clarendon Press, Oxford
  49. Roberts J (1979a) Incentives in planning procedures for the provision of public goods. Rev Econ Stud 46:283–292
  50. Roberts J (1979b) Strategic behavior in the MDP procedure. In: Laffont J-J (ed) Aggregation and revelation of preferences, studies in public economics, vol 2. North-Holland, Amsterdam, Chapter 19
  51. Roberts J (1987) Incentives, information, and iterative planning. In: Groves T, Radner R, Reiter S (eds) Information, incentives, and economic mechanisms: essays in honor of Leonid Hurwicz. Basil Blackwell, Oxford
  52. Rochet J-C (1982) Planning procedures in an economy with public goods: a survey article. CEREMADE DP No. 213, Université de Paris
  53. Samuelson P (1954) The pure theory of public expenditures. Rev Econ Stat 36:387–389
  54. Sato K (1983) On compatibility between neutrality and aggregate correct revelation for public goods. Econ Stud Q 34:97–109
  55. Sato K (2012) Nonmyopia and incentives in the piecewise linearized MDP procedures with variable step-sizes. J Econ Struct 1:1–22
  56. Schoumaker F (1977) Révélation des préférences et planification: une approche stratégique. Recherches Economiques de Louvain 43:245–259
  57. Suzumura K, Sato K (1985) Equity and efficiency in the public goods economy: some counterexamples. Hitotsubashi J Econ 26:59–82
  58. Truchon M (1984) Nonmyopic strategic behaviour in the MDP planning procedure. Econometrica 52:1179–1189
  59. Tulkens H (1978) Dynamic processes for public goods: a process-oriented survey. J Public Econ 9:163–201; reprinted in Chander P, Drèze J, Lovell C, Mintz J (2006) Public economics, environmental externalities and fiscal competition, essays by Henry Tulkens, chap 1. Springer, New York

Copyright

© Sato. 2016