Macro Study Notes

Xinyu Zhou

Micro Foundations

Utility Function & Optimal Choice

Nowadays, macro models often build on micro foundations, especially in the context of representative agents and optimization problems.

Utility Function

Utility functions are recovered from data.
Properties:
  • Monotonicity: more is better than less.
  • Concavity: diminishing marginal utility; agents prefer averages to extremes.
  • Completeness: Any two bundles can be compared.
  • Transitivity: If A $\succeq$ B and B $\succeq$ C, then A $\succeq$ C.
  • Continuity: Preferences are continuous in consumption bundles.
Revealed Preference: \( \left\{\begin{array}{l} \text{Weak: } x_i\;R\;x_j \Leftrightarrow x_i \succeq x_j\\ \text{Strong: } x_i\;P\;x_j \Leftrightarrow x_i \succ x_j \end{array}\right.\)

Afriat Theorem

GARP: Generalized Axiom of Revealed Preference
  • A data set D satisfies GARP if, whenever $x_i$ is revealed preferred to $x_j$ through a chain $i \rightarrow i_1 \rightarrow i_2 \rightarrow ... \rightarrow i_n \rightarrow j$ of direct revealed preferences, then $x_j$ cannot be strictly directly revealed preferred (P) to $x_i$.
This is the key assumption of the Afriat Theorem.
WARP: Weak Axiom of Revealed Preference
  • If $p_i x_j \leq p_i x_i$ and $x_i \neq x_j$, then $p_j x_i > p_j x_j$.
SARP: Strong Axiom of Revealed Preference
  • WARP extended to chains: if $x_i$ is revealed preferred to $x_j$ through any sequence of observations and $x_i \neq x_j$, then $x_j$ is not directly revealed preferred to $x_i$.
Afriat Theorem
  • A utility function u rationalizes D if each \( x_i \) solves \( \max u(x) \quad \text{s.t.} \quad p_i x \leq p_i x_i \) (i.e., we look for a u whose optimal choice at prices \( p_i \) is \( x_i \)).
  • If D satisfies GARP, then there exists a utility function u rationalizing D that is continuous, strictly increasing, and concave.
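
A minimal sketch of checking GARP on choice data, assuming each observation is a (price vector, chosen bundle) pair; the function name and the toy data are illustrative, not from the notes.

```python
import numpy as np

def satisfies_garp(prices, bundles):
    """Return True if the observations (p_i, x_i) satisfy GARP."""
    n = len(prices)
    cost_own = np.einsum("ij,ij->i", prices, bundles)  # p_i . x_i
    cost_cross = prices @ bundles.T                    # entry (i, j) is p_i . x_j
    R = cost_cross <= cost_own[:, None]                # direct revealed preference
    P = cost_cross < cost_own[:, None]                 # strict direct revealed preference
    # Transitive closure of R via Warshall's algorithm (chains of R).
    Rstar = R.copy()
    for k in range(n):
        Rstar |= Rstar[:, [k]] & Rstar[[k], :]
    # GARP: x_i R* x_j must never coexist with x_j P x_i.
    return not np.any(Rstar & P.T)

prices  = np.array([[1.0, 2.0], [2.0, 1.0]])
bundles = np.array([[2.0, 1.0], [1.0, 2.0]])
print(satisfies_garp(prices, bundles))  # True: this data set is consistent
```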

Expected Utility

Set up
$ EU = \sum_i p_i u(x_i) $, where $p_i$ is the probability of outcome $i$ and $u(x_i)$ is the utility of that outcome.
Lottery: a lottery $L$ is a probability distribution over the outcomes $X$; $\{L_1, L_2, ...\}$ denotes a set of lotteries.
Independence condition: $u(L_1) \geq u(L_2) \;\text{ iff }\; u(\alpha L_1 + (1-\alpha)L_3) \geq u(\alpha L_2 + (1-\alpha)L_3)$ for all $\alpha \in (0,1]$ and any lottery $L_3$.

Expected utility thm.

If preferences over lotteries satisfy the properties above together with independence, they can be represented by a utility function U that is linear in probabilities: \[ U(\alpha L_1 + (1-\alpha)L_2) = \alpha U(L_1) + (1-\alpha)U(L_2) \]
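
A small numeric illustration of this linearity, assuming a Bernoulli utility \(u(x) = \sqrt{x}\) and two illustrative lotteries over the same outcomes:

```python
import numpy as np

u = np.sqrt                        # Bernoulli utility over outcomes (assumed)
outcomes = np.array([1.0, 4.0, 9.0])
L1 = np.array([0.5, 0.3, 0.2])     # two probability distributions over outcomes
L2 = np.array([0.1, 0.6, 0.3])

def EU(L):
    return np.dot(L, u(outcomes))  # EU = sum_i p_i u(x_i)

alpha = 0.4
mix = alpha * L1 + (1 - alpha) * L2
print(EU(mix))                               # U(alpha L1 + (1-alpha) L2)
print(alpha * EU(L1) + (1 - alpha) * EU(L2)) # the same number, by linearity
```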

Representative Consumer

  • Aggregate the preferences of each individual consumer and obtain a representative preference.

1. Same utility but different income levels.

H.D.1 (homogeneous of degree 1):

\[ U(\alpha x) = \alpha U(x) \] can be interpreted as constant returns to scale.

Antonelli's Theorem

\[ X^*(P, \textstyle\sum_i w_i) \equiv \sum_i X(P, w_i), \quad i.e., \quad \max U(X) \quad \text{s.t.} \quad P \cdot X \leq \sum_{i} w_i \] If \( D \) satisfies GARP and \( U \) is H.D.1, then \(\exists\) a representative \( U \).
Of course, if the research focuses on income distribution or heterogeneity among consumers, this representative-consumer model is not suitable.
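
A minimal sketch of this aggregation, assuming all consumers share the H.D.1 Cobb-Douglas utility \(U(x,y) = x^a y^{1-a}\), whose demand is linear in income; parameter values are illustrative:

```python
import numpy as np

a = 0.3                                  # Cobb-Douglas expenditure share
p = np.array([2.0, 5.0])                 # prices
incomes = np.array([10.0, 25.0, 65.0])   # individual incomes w_i

def demand(w):
    # Cobb-Douglas demand: spend share a on good 1 and 1-a on good 2.
    return np.array([a * w / p[0], (1 - a) * w / p[1]])

print(sum(demand(w) for w in incomes))   # sum_i X(P, w_i)
print(demand(incomes.sum()))             # X*(P, sum_i w_i): the same vector
```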

2. Same distribution of total income but different utilities.

Eisenberg's Theorem

\[\max U_i(x_i) \quad \text{s.t.} \quad p \cdot x_i \leq \delta_i \cdot w \quad \text{where } \delta_i \geq 0, \, \sum \delta_i = 1\]
What we want to find: \(\max U(x) \quad \text{s.t.} \quad p \cdot x \leq w\)
If \(D\) satisfies GARP, each \(U_i\) is H.D.1, and the income shares \(\delta_i\) are fixed, then there exists a representative utility that can represent \(D\).
Counterexample:
\[U_1 = x_1, \, U_2 = x_2, \, P = (p_1, p_2)\]
\[x_1 = \frac{p_1^2}{p_1^2 + p_2^2}, \, x_2 = \frac{p_2^2}{p_1^2 + p_2^2}\]
Let \(P = (1, 3) \Rightarrow X = \left(\frac{1}{10}, \frac{9}{10}\right)\) ; Let \(P = (3, 1) \Rightarrow X = \left(\frac{9}{10}, \frac{1}{10}\right)\)
The point is that these observations violate GARP (each aggregate bundle is revealed preferred to the other), so the aggregate demand function cannot be rationalized by any utility function.
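
A quick numeric check of the violation, using the two price-bundle observations above:

```python
import numpy as np

p1, x1 = np.array([1.0, 3.0]), np.array([0.1, 0.9])
p2, x2 = np.array([3.0, 1.0]), np.array([0.9, 0.1])

# Each bundle costs 2.8 at its own prices, while the other bundle would
# cost only 1.2, so each is strictly revealed preferred to the other.
print(p1 @ x2, "<", p1 @ x1)   # 1.2 < 2.8: x1 revealed preferred to x2
print(p2 @ x1, "<", p2 @ x2)   # 1.2 < 2.8: x2 revealed preferred to x1
```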

3. If H.D.1 is violated, then the representative consumer may not exist.

A fundamental optimization problem

1. Unconstrained Optimization

To \(\max h(x)\), suppose that we have found a maximizer \(x^*\).
Then \(h(x^*) \geq h(x^* + \Delta x)\) (by the definition of a maximum): \( \left\{\begin{array}{l} \text{if } \Delta x > 0, \quad \lim_{\Delta x \to 0^+} \frac{h(x^* + \Delta x) - h(x^*)}{\Delta x} \leq 0 \\ \text{if } \Delta x < 0, \quad \lim_{\Delta x \to 0^-} \frac{h(x^* + \Delta x) - h(x^*)}{\Delta x} \geq 0 \end{array}\right. \)
\(\Rightarrow h'(x^*) = 0\)

Taylor expansion:

\[ h(x) \approx h(x^*) + h'(x^*)(x-x^*) + \frac{h''(x^*)}{2}(x-x^*)^2 \] Since \(h'(x^*) = 0\), \[ h(x) \approx h(x^*) + \frac{h''(x^*)}{2} (x-x^*)^2 \]
  • If \(h''(x^*) < 0\), then \( x^* \) is a local maximum;
  • If \(h''(x^*) > 0\), then \( x^* \) is a local minimum;
  • If \(h''(x^*) = 0\), then the test is inconclusive.
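
A small numeric sketch of this second-order test, using the illustrative function \(h(x) = -(x-2)^2 + 3\) and finite-difference derivatives:

```python
h = lambda x: -(x - 2.0) ** 2 + 3.0   # interior maximum at x* = 2
eps = 1e-5
d1 = lambda x: (h(x + eps) - h(x - eps)) / (2 * eps)           # h'(x)
d2 = lambda x: (h(x + eps) - 2 * h(x) + h(x - eps)) / eps**2   # h''(x)

x_star = 2.0
print(d1(x_star))   # ~0: first-order condition holds
print(d2(x_star))   # ~-2 < 0: x* is a local maximum
```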

2. Equality-Constrained Optimization

\( \max h(x,y) \quad \text{s.t.} \quad g(x,y) = c \). If we can solve the constraint for \( y = f(x) \), the problem reduces to the unconstrained \( \max_x h(x, f(x)) \).

However, an explicit \( f(x) \) is not always easy to find.

\( \Rightarrow \) Lagrangian method: \[ \mathcal{L} = h(x,y) + \mu (c - g(x,y)) \]
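
A minimal sketch of the Lagrangian method in code, assuming the illustrative problem \(\max xy\) s.t. \(x + 2y = 4\), solved symbolically:

```python
import sympy as sp

x, y, mu = sp.symbols("x y mu", positive=True)
h = x * y              # objective (illustrative)
g = x + 2 * y          # constraint g(x, y) = c with c = 4
L = h + mu * (4 - g)   # Lagrangian

# First-order conditions: dL/dx = dL/dy = dL/dmu = 0.
foc = [sp.diff(L, v) for v in (x, y, mu)]
print(sp.solve(foc, (x, y, mu), dict=True))
# [{mu: 1, x: 2, y: 1}]: optimum at (2, 1) with multiplier 1
```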

3. Inequality-Constrained Optimization

\( \max h(x) \quad \text{s.t.} \quad x \geq 0 \)
  • If \(h'(\hat{x}) = 0\) for some \(\hat{x} \geq 0\) (by concavity this implies \(h'(0) \geq 0\)), then \(x^* = \hat{x}\).
  • If \(h'(\hat{x}) = 0\) for some \(\hat{x} < 0\): \[ \left\{\begin{array}{l} h'(0) \leq 0: h(x) \text{ is weakly decreasing on } x \geq 0 \text{, so under the constraint } x^* = 0 \text{ is optimal.} \\ h'(0) > 0: \text{impossible, since it would violate concavity.} \end{array}\right. \]

KKT Conditions (Karush-Kuhn-Tucker):

\[ h'(x^*) \leq 0, \quad x^* \geq 0, \quad h'(x^*) \cdot x^* = 0 \]

where \(h'(x^*) \cdot x^* = 0\) is called the complementary slackness condition, i.e., at least one of the two inequalities must hold with equality.
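
A small sketch checking the KKT conditions numerically, with two illustrative concave objectives, one with an interior optimum and one with a corner solution:

```python
def check_kkt(h_prime, x_star, tol=1e-8):
    # KKT for max h(x) s.t. x >= 0: h'(x*) <= 0, x* >= 0, h'(x*) * x* = 0.
    return (h_prime(x_star) <= tol
            and x_star >= -tol
            and abs(h_prime(x_star) * x_star) <= tol)

h1_prime = lambda x: -2 * (x - 1)   # h1(x) = -(x-1)^2: interior optimum
h2_prime = lambda x: -2 * (x + 1)   # h2(x) = -(x+1)^2: corner optimum

print(check_kkt(h1_prime, 1.0))     # True: h'(x*) = 0 with x* = 1 > 0
print(check_kkt(h2_prime, 0.0))     # True: x* = 0 with h'(0) = -2 <= 0
```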

Comparative Statics

1. How does the optimal choice change as parameters change?

\[ \left\{\begin{array}{l} p_x \frac{dx^*}{dB}+p_y \frac{dy^*}{dB}=1\\ \frac{\partial u}{\partial x}\frac{dx^*}{dB} +\frac{\partial u}{\partial y}\frac{dy^*}{dB} =0 \end{array}\right. \]
Writing the system in matrix form:
\[ \begin{pmatrix} p_x & p_y\\[4pt] \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \end{pmatrix} \begin{pmatrix} \dfrac{dx^*}{dB}\\[8pt] \dfrac{dy^*}{dB} \end{pmatrix} = \begin{pmatrix} 1\\ 0 \end{pmatrix}. \]
By Cramer's Rule,
\[ \frac{dx^*}{dB}=\frac{D_x}{D}=\frac{\begin{vmatrix} 1 & p_y\\[4pt] 0 & \dfrac{\partial u}{\partial y} \end{vmatrix} }{D}. \]
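A minimal numeric sketch of this computation, with illustrative values standing in for \(p_x, p_y, \partial u/\partial x, \partial u/\partial y\) at the optimum:

```python
import numpy as np

px, py, ux, uy = 2.0, 3.0, 1.0, 2.0   # illustrative values at the optimum
A = np.array([[px, py], [ux, uy]])
b = np.array([1.0, 0.0])

D  = np.linalg.det(A)
Dx = np.linalg.det(np.column_stack([b, A[:, 1]]))  # replace 1st column by b
Dy = np.linalg.det(np.column_stack([A[:, 0], b]))  # replace 2nd column by b
print(Dx / D, Dy / D)          # dx*/dB = 2, dy*/dB = -1
print(np.linalg.solve(A, b))   # cross-check with a direct solve
```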

2. How does utility change as a parameter changes?

By the Envelope Theorem. With \(\mathcal{L} = u(x,p) + \sum_i \lambda_i g_i(x,p) \): \[ \begin{aligned} \frac{dV}{dp} = \frac{d\mathcal{L}}{dp} &= \frac{\partial u}{\partial x}\cdot \frac{\partial x}{\partial p} + \frac{\partial u}{\partial p} + \sum_i \lambda_i \left( \frac{\partial g_i}{\partial x}\cdot \frac{\partial x}{\partial p} + \frac{\partial g_i}{\partial p} \right) \\ &= \frac{\partial u}{\partial p} + \sum_i \lambda_i \frac{\partial g_i}{\partial p}, \end{aligned} \] where the terms in \(\partial x/\partial p\) drop out because the first-order condition \(\partial u/\partial x + \sum_i \lambda_i\, \partial g_i/\partial x = 0\) holds at the optimum.
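
A small numeric check of the Envelope Theorem in the unconstrained case, assuming the illustrative objective \(u(x,p) = -x^2 + px\):

```python
u = lambda x, p: -x**2 + p * x
x_star = lambda p: p / 2.0       # first-order condition: -2x + p = 0
V = lambda p: u(x_star(p), p)    # value function V(p) = u(x*(p), p)

p, eps = 3.0, 1e-6
dV_dp = (V(p + eps) - V(p - eps)) / (2 * eps)  # numerical dV/dp
du_dp = x_star(p)                              # partial du/dp = x, at x*(p)
print(dV_dp, du_dp)                            # both ~1.5, as the theorem says
```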

Useful Theorems

1. Implicit Function Theorem

Suppose we have \(F(x,p) = 0\); then \(\frac{\partial x}{\partial p} = -\frac{F_p}{F_x}\).
We want to know whether, near some point, we can write \(x\) as a function of \(p\): \(x = g(p)\) such that \(F(g(p),p)=0\).
The Implicit Function Theorem says: if around some solution \((x_0,p_0)\), the function \(F\) is sufficiently smooth and the Jacobian with respect to \(x\) is invertible, then locally this representation is possible.

Pf: Because \[ F(g(p),p)=0, \] differentiating with respect to \(p\) and applying the chain rule gives \[ D_xF(g(p),p)\, Dg(p)+D_pF(g(p),p)=0. \] Rearranging, \[ D_xF(g(p),p)\,Dg(p)=-D_pF(g(p),p), \] and since \(D_xF\) is invertible, multiplying by its inverse yields \[ Dg(p)=-[D_xF(g(p),p)]^{-1}D_pF(g(p),p). \]
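
A quick numeric check, assuming the illustrative relation \(F(x,p) = x^3 + px - 2 = 0\): the IFT slope \(-F_p/F_x\) should match a finite-difference slope of the numerically solved root \(x(p)\).

```python
from scipy.optimize import brentq

F = lambda x, p: x**3 + p * x - 2.0
x_of_p = lambda p: brentq(lambda x: F(x, p), 0.0, 2.0)  # root of F(., p)

p0 = 1.0
x0 = x_of_p(p0)                        # x(1) = 1, since 1 + 1 - 2 = 0
ift_slope = -x0 / (3 * x0**2 + p0)     # -F_p/F_x with F_p = x, F_x = 3x^2 + p
eps = 1e-6
fd_slope = (x_of_p(p0 + eps) - x_of_p(p0 - eps)) / (2 * eps)
print(ift_slope, fd_slope)             # both ~ -0.25
```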

2. Cramer's Rule

For a linear system \(Ax=b\), if \(A\) is invertible, then the solution is given by \[x_i = \frac{\det(A_i)}{\det(A)}\] where \(A_i\) is the matrix obtained by replacing the \(i\)-th column of \(A\) with the vector \(b\).
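
A minimal generic implementation, with an illustrative \(3 \times 3\) system as a usage example:

```python
import numpy as np

def cramer_solve(A, b):
    D = np.linalg.det(A)
    if abs(D) < 1e-12:
        raise ValueError("A is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b               # replace i-th column of A with b
        x[i] = np.linalg.det(Ai) / D
    return x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])
print(cramer_solve(A, b))      # [1. 1. 1.]
print(np.linalg.solve(A, b))   # same answer from the direct solver
```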

3. Envelope Theorem

At the optimum, the first-order conditions hold: \[ \frac{\partial \mathcal{L}\bigl(x^*(p),\lambda^*(p),p\bigr)}{\partial x_\ell} = \frac{\partial h\bigl(x^*(p),p\bigr)}{\partial x_\ell} + \sum_{i=1}^{N}\lambda_i^*(p)\cdot \frac{\partial g^i\bigl(x^*(p),p\bigr)}{\partial x_\ell} =0. \] Define the value function \[ V(p)\equiv h\bigl(x^*(p),p\bigr). \] If \(x^*:P\to \mathbb{R}^L\) and \(\lambda^*:P\to \mathbb{R}^N\) are continuously differentiable, then \(V:P\to \mathbb{R}\) is continuously differentiable and we have, for any \(m\in\{1,\ldots,M\}\): \[ \frac{\partial V(p)}{\partial p_m} = \frac{\partial \mathcal{L}\bigl(x^*(p),\lambda^*(p),p\bigr)}{\partial p_m} = \frac{\partial h\bigl(x^*(p),p\bigr)}{\partial p_m} + \sum_{i=1}^{N}\lambda_i^*(p)\cdot \frac{\partial g^i\bigl(x^*(p),p\bigr)}{\partial p_m}. \]

4. Bordered Hessian

\[ \bar{H} \equiv \begin{bmatrix} 0 & 0 & \cdots & 0 & g_1^1 & g_2^1 & \cdots & g_L^1 \\ 0 & 0 & \cdots & 0 & g_1^2 & g_2^2 & \cdots & g_L^2 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & g_1^N & g_2^N & \cdots & g_L^N \\ g_1^1 & g_1^2 & \cdots & g_1^N & L_{11} & L_{12} & \cdots & L_{1L} \\ g_2^1 & g_2^2 & \cdots & g_2^N & L_{21} & L_{22} & \cdots & L_{2L} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ g_L^1 & g_L^2 & \cdots & g_L^N & L_{L1} & L_{L2} & \cdots & L_{LL} \end{bmatrix} \] where we denote \[ g_j^i \equiv \frac{\partial g^i}{\partial x_j}, \quad \text{and} \quad L_{ij} = \frac{\partial^2 \mathcal{L}}{\partial x_i \partial x_j} = \frac{\partial^2 h}{\partial x_i \partial x_j} + \sum_{k=1}^N \lambda_k \cdot \frac{\partial^2 g^k}{\partial x_i \partial x_j}. \]
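
A small sketch constructing the bordered Hessian for \(N = 1\) constraint and \(L = 2\) variables, reusing the illustrative problem \(\max xy\) s.t. \(x + 2y = 4\) from the Lagrangian example; the constraint is linear, so the \(\lambda\) terms in \(L_{ij}\) vanish. For a maximum with \(L = 2\), \(N = 1\), the determinant should be positive:

```python
import sympy as sp

x, y = sp.symbols("x y")
h = x * y      # objective (illustrative)
g = x + 2 * y  # linear constraint, so L_ij reduces to the Hessian of h

H = sp.Matrix([
    [0,             sp.diff(g, x),     sp.diff(g, y)],
    [sp.diff(g, x), sp.diff(h, x, 2),  sp.diff(h, x, y)],
    [sp.diff(g, y), sp.diff(h, x, y),  sp.diff(h, y, 2)],
])
print(H)        # Matrix([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
print(H.det())  # 4 > 0: second-order condition for a constrained maximum
```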

5. Mean Value Theorem

If \(f\) is continuous on \([a,b]\) and differentiable on \((a,b)\), then there exists a point \(c \in (a,b)\) such that \[ f'(c) = \frac{f(b) - f(a)}{b - a}. \]
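
A quick numeric illustration with the illustrative \(f(x) = x^3\) on \([0,2]\), where the MVT point solves \(3c^2 = (f(2)-f(0))/2 = 4\):

```python
from scipy.optimize import brentq

f = lambda x: x**3
a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)                 # = 4
c = brentq(lambda x: 3 * x**2 - secant_slope, a, b)    # solve f'(c) = 4
print(c, 3 * c**2)   # c ~ 1.1547 lies in (0, 2) and f'(c) = 4
```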

6. Extreme Value Theorem

If \(K\) is a nonempty compact set and \(h\) is continuous on \(K\), then \(h\) is bounded and attains its maximum and minimum on \(K\): there exist \(\bar{x}, \underline{x} \in K\) such that \[ h(\bar{x}) = \sup \{h(x): x \in K\} \quad \text{and} \quad h(\underline{x}) = \inf \{h(x): x \in K\}. \]

7. Brouwer's Fixed Point Theorem

Every continuous function from a compact convex set to itself has at least one fixed point.
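
A small illustration on the compact convex set \([0,1]\), using \(f(x) = \cos x\), which maps \([0,1]\) into itself, so Brouwer guarantees a fixed point. This particular \(f\) happens to be a contraction, so simple iteration also finds the point (Brouwer itself only asserts existence):

```python
import math

f, x = math.cos, 0.5   # cos maps [0, 1] into [cos 1, 1], a subset of [0, 1]
for _ in range(100):   # fixed-point iteration (converges here: |f'| < 1)
    x = f(x)
print(x, f(x))         # x ~ 0.7391 with f(x) = x: a fixed point
```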