Dual Spaces

The other way to construct new spaces is based on the idea that for every vector space $V$ and linear map $\alpha$, there is a vector space $V^\ast$ which mirrors $V$, and a linear map $\alpha^\ast$ on $V^\ast$ which mirrors $\alpha$.

Definition. Let $V$ be a finite dimensional vector space over $\mathbb{F}$. The dual space $V^\ast$ of $V$ is defined to be the vector space $\mathcal{L}(V, \mathbb{F})$. The elements of $V^\ast$ are linear forms, i.e. functions $\theta: V \to \mathbb{F}$ such that $\theta(v_1 + v_2) = \theta(v_1) + \theta(v_2)$ and $\theta(\lambda v) = \lambda \theta(v)$. The addition and scalar multiplication on $V^\ast$ are defined by $(\theta_1 + \theta_2)(v) = \theta_1(v) + \theta_2(v)$ and $(\lambda \theta)(v) = \lambda\theta(v)$.

Proposition. Suppose that $V$ is a finite dimensional vector space over $\mathbb{F}$ with basis $(e_1, …, e_n)$. Then $V^\ast$ has a basis $(\epsilon_1, …, \epsilon_n)$ such that $\epsilon_i(e_j) = \delta_{ij}$.

Proof.

The linear forms $\Set{\epsilon_i}$ exist because a linear map may be defined by assigning arbitrary values in the codomain to the basis vectors. For $(\epsilon_1, …, \epsilon_n)$ to be a basis, we need to show that it is linearly independent and spans $V^\ast$.

Since the $\Set{\epsilon_i}$ are linear maps, they are determined by how they transform the basis vectors $\Set{e_j}$, so the basis vectors are the only ones we need to consider.

Because of that, for linear independence, it suffices to show that $\sum_i \lambda_i \epsilon_i(e_j) = 0$ for all $j$ implies $\lambda_i = 0$ for all $i$. Suppose that $\sum_i \lambda_i \epsilon_i = 0$, where $0$ is the zero function with $0(v) = 0$ for all $v \in V$. Then

\[\begin{align*} \sum_i \lambda_i \epsilon_i(e_j) &= 0(e_j) \\ \sum_i \lambda_i \delta_{ij} &= 0 \\ \lambda_j &= 0 \end{align*}\]

for all $j$, therefore $\Set{\epsilon_i}$ is linearly independent.

For any linear form $\theta \in V^\ast$, suppose that $\theta(e_i) = \lambda_i$ for $i = 1, …, n$. Then $\theta = \sum \lambda_i \epsilon_i$ since

\[\sum_i \lambda_i \epsilon_i(e_j) = \sum_i \lambda_i \delta_{ij} = \lambda_j = \theta(e_j)\]

for all $j$, and therefore $\langle \epsilon_1, …, \epsilon_n \rangle = V^\ast$.

Definition. The basis $(\epsilon_1, …, \epsilon_n)$ is called the dual basis of $V^\ast$ with respect to $(e_1, …, e_n)$.
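As a quick numeric sanity check (an illustration, not part of the notes): if a basis of $\mathbb{R}^2$ is written as the columns of an invertible matrix $B$, then the dual basis functionals are the rows of $B^{-1}$, because $(B^{-1}B)_{ij} = \delta_{ij}$ is exactly the condition $\epsilon_i(e_j) = \delta_{ij}$. The basis below is a made-up example.

```python
from fractions import Fraction as Fr

# Example basis of R^2 as the columns of B: e1 = (1, 0), e2 = (1, 1).
B = [[Fr(1), Fr(1)],
     [Fr(0), Fr(1)]]

# Invert the 2x2 matrix by hand; the rows of B^{-1} are the dual basis,
# since (B^{-1} B)_{ij} = delta_{ij} says eps_i(e_j) = delta_{ij}.
det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
Binv = [[ B[1][1]/det, -B[0][1]/det],
        [-B[1][0]/det,  B[0][0]/det]]

def eps(i, v):
    """Apply the dual basis functional eps_i (row i of B^{-1}) to v."""
    return sum(Binv[i][k]*v[k] for k in range(2))

# Verify eps_i(e_j) = delta_ij.
for i in range(2):
    for j in range(2):
        e_j = [B[0][j], B[1][j]]
        assert eps(i, e_j) == (1 if i == j else 0)
```

Here $\epsilon_1$ comes out as $v \mapsto v_1 - v_2$, illustrating that each dual basis functional depends on the whole basis, not on $e_1$ alone.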

Corollary. If $V$ is finite dimensional, then $\dim V = \dim V^\ast$.

Annihilator

Definition. If $U \subset V$ then the annihilator of $U$ is defined by

\[U^\circ = \Set{\theta \in V^\ast : \forall u \in U, \theta(u) = 0} \subset V^\ast\]

Proposition. Suppose that $V$ is finite dimensional and $U \subset V$ is a subspace. Then $\dim U + \dim U^\circ = \dim V$.

Proof.

Let $(e_1, …, e_k)$ be a basis for $U$, extend it to a basis $(e_1, …, e_n)$ for $V$, and let $(\epsilon_1, …, \epsilon_n)$ be the corresponding dual basis for $V^\ast$.

By the definition $\epsilon_i(e_j) = \delta_{ij}$, we have $\epsilon_{k+1}(e_j) = … = \epsilon_n(e_j) = 0$ for $j = 1, …, k$ and therefore $\epsilon_{k+1}, …, \epsilon_n \in U^\circ$.

For any $\theta \in U^\circ$, write $\theta = \sum_{i=1}^n \lambda_i \epsilon_i$, so that $\theta(e_j) = \lambda_j$. Since $\theta(e_1) = … = \theta(e_k) = 0$, we have $\lambda_1 = … = \lambda_k = 0$. Therefore, $\theta = \sum_{i=k+1}^n \lambda_i \epsilon_i$ and $\langle \epsilon_{k+1}, …, \epsilon_n \rangle = U^\circ$. Hence, $(\epsilon_{k+1}, …, \epsilon_n)$ is a basis of $U^\circ$ and $\dim U^\circ = n - k = \dim V - \dim U$.
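The count $\dim U + \dim U^\circ = \dim V$ can be checked numerically. In the sketch below (a made-up subspace, using the standard basis of $\mathbb{R}^3$ so that a functional is a row vector acting by dot product), $U$ is two dimensional and its annihilator is spanned by a single functional:

```python
# theta(v) for a functional represented as a row vector (standard basis).
def apply_form(theta, v):
    return sum(t*x for t, x in zip(theta, v))

# U = span{(1,0,0), (0,1,1)} has dim U = 2 inside R^3.
U_basis = [(1, 0, 0), (0, 1, 1)]

# A functional (a, b, c) annihilates U iff a = 0 and b + c = 0, so
# U° = span{(0, 1, -1)} and dim U° = 3 - 2 = 1, as the proposition predicts.
theta = (0, 1, -1)
assert all(apply_form(theta, u) == 0 for u in U_basis)
```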

Dual Maps

Definition. Let $V$ and $W$ be vector spaces over $\mathbb{F}$ and suppose that $\alpha: V \to W$ is a linear map. The dual map to $\alpha$ is the map $\alpha^\ast: W^\ast \to V^\ast$ given by $\theta \mapsto \theta\alpha$ for $\theta \in W^\ast$.

Proposition. The dual map $\alpha^\ast: W^\ast \to V^\ast$ is linear.

Proof.

$\theta\alpha: V \to \mathbb{F}$ is the composite of two linear maps so it is linear and $\theta\alpha \in V^\ast$, i.e. $\alpha^\ast$ is well-defined. Also, for $\theta_1, \theta_2 \in W^\ast$, $\lambda, \mu \in \mathbb{F}$ and $v \in V$, we have

\[\alpha^\ast(\lambda \theta_1 + \mu \theta_2)(v) = (\lambda \theta_1 + \mu \theta_2)(\alpha(v)) = \lambda \theta_1(\alpha(v)) + \mu \theta_2(\alpha(v)) = (\lambda \alpha^\ast(\theta_1) + \mu \alpha^\ast(\theta_2))(v)\]

so $\alpha^\ast \in \mathcal{L}(W^\ast, V^\ast)$.

Proposition. Suppose that $V$ and $W$ are finite dimensional with bases $(e_1, …, e_n)$ and $(f_1, …, f_m)$. Let $(\epsilon_1, …, \epsilon_n)$ and $(\eta_1, …, \eta_m)$ be the corresponding dual bases. If $\alpha: V \to W$ is represented by $A$ with respect to $(e_1, …, e_n)$ and $(f_1, …, f_m)$, then $\alpha^\ast: W^\ast \to V^\ast$ is represented by $A^\intercal$ with respect to $(\eta_1, …, \eta_m)$ and $(\epsilon_1, …, \epsilon_n)$.

Proof.

Given $\alpha(e_j) = \sum A_{kj} f_k$, we have

\[(\alpha^\ast(\eta_i))(e_j) = \eta_i(\alpha(e_j)) = \eta_i \left( \sum_k A_{kj} f_k \right) = \sum_k A_{kj} \eta_i(f_k) = \sum_k A_{kj} \delta_{ik} = A_{ij}\]

On the other hand, suppose that $\alpha^\ast$ is represented by $B$, so that $\alpha^\ast(\eta_i) = \sum_k B_{ki} \epsilon_k$. Then

\[(\alpha^\ast(\eta_i))(e_j) = \left( \sum_k B_{ki} \epsilon_k \right) (e_j) = \sum_k B_{ki} \epsilon_k(e_j) = \sum_k B_{ki} \delta_{jk} = B_{ji}\]

Hence, $B_{ji} = A_{ij} = A^\intercal_{ji}$, i.e. $B = A^\intercal$ and $\alpha^\ast(\eta_i) = \sum_k A^\intercal_{ki} \epsilon_k$.
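A small computation confirms the proposition in coordinates (a made-up matrix, with standard bases so the dual bases are the coordinate functionals): the coordinates of $\alpha^\ast(\eta_i)$ in $(\epsilon_1, …, \epsilon_n)$ are its values on $(e_1, …, e_n)$, which come out as row $i$ of $A$, i.e. column $i$ of $A^\intercal$.

```python
# alpha: R^2 -> R^3 with matrix A in the standard bases (example values).
A = [[1, 2],
     [3, 4],
     [5, 6]]

def alpha(v):
    """alpha(e_j) = sum_k A[k][j] f_k, extended linearly."""
    return [sum(A[k][j]*v[j] for j in range(2)) for k in range(3)]

def eta(i, w):
    """Dual basis functional eta_i on R^3: picks out the i-th coordinate."""
    return w[i]

# The coordinates of alpha*(eta_i) in (eps_1, eps_2) are its values on
# e_1, e_2, which equal row i of A -- column i of A^T, as claimed.
for i in range(3):
    coords = [eta(i, alpha([1, 0])), eta(i, alpha([0, 1]))]
    assert coords == A[i]
```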

Corollary. Suppose that $V$ is a finite dimensional vector space over $\mathbb{F}$. If $P$ is the change of basis matrix from $(e_1, …, e_n)$ to $(e_1’, …, e_n’)$ for $V$, then $(P^{-1})^\intercal$ is the change of basis matrix from $(\epsilon_1, …, \epsilon_n)$ to $(\epsilon_1’, …, \epsilon_n’)$ for $V^\ast$.

Proof.

Let $\iota_V: V \to V$ be the identity map, then $\iota_V^\ast: V^\ast \to V^\ast$ with $\iota_V^\ast(\theta) = \theta \iota_V = \theta$ is also an identity map. Therefore, if $P$ is a matrix representation of $\iota_V$ for bases $(e_i)$ to $(e_i’)$, then $P^\intercal$ is that of $\iota_V^\ast$ for bases $(\epsilon_i’)$ to $(\epsilon_i)$. Hence, $(P^{-1})^\intercal$ is that for bases $(\epsilon_i)$ to $(\epsilon_i’)$.
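The corollary can also be verified numerically. In the sketch below (an example $P$, working in $\mathbb{R}^2$ with the standard basis so that $\epsilon_i(v) = v_i$), the functionals obtained from $(\epsilon_1, \epsilon_2)$ via $(P^{-1})^\intercal$ are checked to be dual to the new basis $e_j' = \sum_i P_{ij} e_i$:

```python
from fractions import Fraction as Fr

# Example change of basis matrix P: e'_j = sum_i P[i][j] e_i.
P = [[Fr(1), Fr(1)],
     [Fr(0), Fr(1)]]

det = P[0][0]*P[1][1] - P[0][1]*P[1][0]
Pinv = [[ P[1][1]/det, -P[0][1]/det],
        [-P[1][0]/det,  P[0][0]/det]]
Q = [[Pinv[j][i] for j in range(2)] for i in range(2)]  # Q = (P^{-1})^T

e_new = [[P[0][j], P[1][j]] for j in range(2)]  # columns of P

def eps_new(i, v):
    """eps'_i = sum_k Q[k][i] eps_k, where eps_k(v) = v[k]."""
    return sum(Q[k][i]*v[k] for k in range(2))

# The transformed functionals are dual to the new basis: eps'_i(e'_j) = delta_ij.
for i in range(2):
    for j in range(2):
        assert eps_new(i, e_new[j]) == (1 if i == j else 0)
```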

Proposition. If $\alpha: U \to V$ and $\beta: V \to W$ are linear maps then $(\beta \alpha)^\ast = \alpha^\ast \beta^\ast$.

Proof.

By definition, for $\theta \in W^\ast$, $(\beta \alpha)^\ast(\theta) = \theta(\beta \alpha)$ and $\alpha^\ast\beta^\ast(\theta) = \alpha^\ast(\theta\beta) = (\theta\beta)\alpha = \theta(\beta\alpha)$.

Proposition. Suppose that $\alpha \in \mathcal{L}(V, W)$ with $V, W$ finite dimensional over $\mathbb{F}$. Then

  • $\ker \alpha^\ast = (\text{Im}\, \alpha)^\circ$;

  • $r(\alpha^\ast) = r(\alpha)$;

  • $\text{Im}\,\alpha^\ast = (\ker \alpha)^\circ$.

Proof.

For $\theta \in W^\ast$, $\theta \in \ker \alpha^\ast$ iff $\alpha^\ast(\theta) = 0$ iff $\theta(\alpha(v)) = 0$ for all $v \in V$ iff $\theta(w) = 0$ for all $w \in \text{Im}\, \alpha$ iff $\theta \in (\text{Im}\, \alpha)^\circ$.

We have

\[r(\alpha) + \dim (\text{Im}\,\alpha)^\circ = \dim W = \dim W^\ast = r(\alpha^\ast) + n(\alpha^\ast)\]

and from the above $n(\alpha^\ast) = \dim(\ker \alpha^\ast) = \dim (\text{Im}\,\alpha)^\circ$ so $r(\alpha^\ast) = r(\alpha)$.

Suppose that $\eta \in \text{Im}\, \alpha^\ast$. Then there exists $\theta \in W^\ast$ such that $\alpha^\ast(\theta) = \theta\alpha = \eta$. For all $v \in \ker \alpha$, $\eta(v) = \theta\alpha(v) = \theta(0) = 0$, so $\eta \in (\ker \alpha)^\circ$ and $\text{Im}\,\alpha^\ast \subseteq (\ker \alpha)^\circ$. We have

\[\dim (\ker \alpha)^\circ = \dim V - n(\alpha) = r(\alpha) = r(\alpha^\ast) = \dim(\text{Im}\,\alpha^\ast)\]

so $\text{Im}\,\alpha^\ast = (\ker \alpha)^\circ$.

The last part makes use of a dimension counting argument. The second part gives a deeper reason for why the row rank of a matrix equals its column rank: the row rank of $A$ is the column rank of $A^\intercal$, which represents $\alpha^\ast$, and $r(\alpha^\ast) = r(\alpha)$.
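The row rank = column rank consequence is easy to check by machine. The sketch below (an example matrix; exact arithmetic via `fractions`) computes the row rank of $A$ and of $A^\intercal$ by Gaussian elimination and confirms they agree:

```python
from fractions import Fraction as Fr

def rank(M):
    """Row rank via Gaussian elimination, exact over the rationals."""
    M = [[Fr(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],   # dependent: twice the first row
     [1, 0, 1]]
At = [list(col) for col in zip(*A)]

# r(alpha) = r(alpha*) translates to rank(A) == rank(A^T).
assert rank(A) == rank(At) == 2
```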

Canonical Maps

Definition. Suppose that $V$ is a vector space over $\mathbb{F}$. The canonical map is defined by $\text{ev}: V \to V^{\ast\ast}$ where $\text{ev}(v)(\theta) = \theta(v)$ for each $\theta \in V^\ast$.

The name “canonical” means that the map arises naturally from the definitions of $V$ and $V^{\ast\ast}$, without depending on any choice of basis, and preserves the structure involved.

Proposition. The canonical map is well-defined and linear.

Proof.

For $\theta_1, \theta_2 \in V^\ast$, we have

\[\text{ev}(v)(\lambda \theta_1 + \mu \theta_2) = (\lambda \theta_1 + \mu \theta_2)(v) = \lambda \theta_1(v) + \mu \theta_2(v) = \lambda \text{ev}(v)(\theta_1) + \mu \text{ev}(v)(\theta_2)\]

so $\text{ev}(v) \in \mathcal{L}(V^\ast, \mathbb{F}) = V^{\ast\ast}$.

For $v_1, v_2 \in V$, we have

\[\text{ev}(\lambda v_1 + \mu v_2)(\theta) = \theta(\lambda v_1 + \mu v_2) = \lambda \theta(v_1) + \mu \theta(v_2) = (\lambda \text{ev}(v_1) + \mu \text{ev}(v_2))(\theta)\]

so $\text{ev}$ is linear.

Proposition. Suppose that $V$ is a finite dimensional vector space over $\mathbb{F}$. Then the canonical linear map $\text{ev}$ is an isomorphism.

Proof.

Suppose that $\text{ev}(v) = 0 \in V^{\ast\ast}$. Then $\text{ev}(v)(\theta) = \theta(v) = 0$ for all $\theta \in V^\ast$. Thus,

\[\langle v \rangle^\circ = \Set{\theta \in V^\ast : \forall u \in \langle v \rangle, \theta(u) = 0} = V^\ast\]

which means $\dim \langle v \rangle^\circ = \dim V^\ast = \dim V$, so $\dim \langle v \rangle = 0$ and $v = 0$. Therefore, $\ker \text{ev} = 0$ and $\text{ev}$ is injective. As $\dim V^{\ast\ast} = \dim V^\ast = \dim V$, injectivity of $\text{ev}$ implies that it is an isomorphism.

$\text{ev}$ is generally not an isomorphism if $V$ is not finite dimensional.

Proposition. Suppose that $V$ and $W$ are finite dimensional and $\alpha \in \mathcal{L}(V, W)$. Then $\alpha^{\ast\ast} \circ \text{ev}_V = \text{ev}_W \circ \alpha$.

Proof.

Note that $\alpha^{\ast\ast}: V^{\ast\ast} \to W^{\ast\ast}$ so $\alpha^{\ast\ast} \circ \text{ev}_V$ and $\text{ev}_W \circ \alpha$ are both linear maps $V \to W^{\ast\ast}$. Also, $\text{ev}_V$ on the L.H.S. is $V \to V^{\ast\ast}$ and $\text{ev}_W$ on the R.H.S. is $W \to W^{\ast\ast}$.

For $v \in V$ and $\theta \in W^\ast$, we have

\[\alpha^{\ast\ast}(\text{ev}_V(v))(\theta) = \text{ev}_V(v)(\alpha^\ast \theta) = \text{ev}_V(v)(\theta \alpha) = \theta(\alpha(v)) = \text{ev}_W(\alpha(v))(\theta)\]

as required.

Proposition. Suppose that $V$ is finite dimensional and $U, U_1, U_2$ are subspaces of $V$. Then

  • $U^{\circ \circ} = \text{ev}(U)$;

  • $\text{ev}(U)^\circ = \text{ev}(U^\circ)$;

  • $(U_1 + U_2)^\circ = U_1^\circ \cap U_2^\circ$;

  • $(U_1 \cap U_2)^\circ = U_1^\circ + U_2^\circ$.

Proof.

Note that $U^\circ = \Set{\theta \in V^\ast : \forall u \in U, \theta(u) = 0}$ so $U^{\circ\circ} = \Set{\eta \in V^{\ast\ast} : \forall \theta \in U^\circ, \eta(\theta) = 0}$.

Let $\eta’ = \text{ev}(u)$ for $u \in U$. Then $\forall \theta \in U^\circ$, $\eta’(\theta) = \theta(u) = 0$ so $\eta’ \in U^{\circ\circ}$ and $\text{ev}(U) \subseteq U^{\circ\circ}$. Moreover,

\[\dim \text{ev}(U) = \dim U = \dim V - \dim U^\circ = \dim V^\ast - \dim U^\circ = \dim U^{\circ\circ}\]

so $U^{\circ\circ} = \text{ev}(U)$.

By the above, $\text{ev}(U)^\circ = (U^{\circ\circ})^\circ = (U^\circ)^{\circ\circ} = \text{ev}(U^\circ)$.

Note that

\[(U_1 + U_2)^\circ = \Set{\theta \in V^\ast : \forall u_1 \in U_1, \forall u_2 \in U_2, \theta(u_1 + u_2) = 0}\]

and

\[U_1^\circ \cap U_2^\circ = \Set{\theta \in V^\ast : \forall u_1 \in U_1, \forall u_2 \in U_2, \theta(u_1) = \theta(u_2) = 0}\]

If $\theta(u_1 + u_2) = 0$ for all $u_1, u_2$, then taking $u_2 = 0$ gives $\theta(u_1 + 0) = \theta(u_1) = 0$ for all $u_1 \in U_1$. Similarly, $\theta(u_2) = 0$ for all $u_2 \in U_2$. Therefore, $(U_1 + U_2)^\circ \subseteq U_1^\circ \cap U_2^\circ$. The reverse inclusion $U_1^\circ \cap U_2^\circ \subseteq (U_1 + U_2)^\circ$ follows by linearity of $\theta$.

Finally, consider

\[(U_1^\circ + U_2^\circ)^\circ = U_1^{\circ\circ} \cap U_2^{\circ\circ} = \text{ev}(U_1) \cap \text{ev}(U_2)\]

Hence, since $\text{ev}$ is an isomorphism, we have

\[\text{ev}((U_1 \cap U_2)^\circ) = \text{ev}((U_1 \cap U_2))^\circ = (\text{ev}(U_1) \cap \text{ev}(U_2))^\circ = (U_1^\circ + U_2^\circ)^{\circ\circ} = \text{ev}(U_1^\circ + U_2^\circ)\]

so $(U_1 \cap U_2)^\circ = U_1^\circ + U_2^\circ$.
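Both identities can be seen concretely (a made-up example in $\mathbb{R}^3$ with the standard basis, functionals acting as row vectors by dot product):

```python
# theta(v) as a dot product of a row vector with v (standard basis of R^3).
def pair(theta, v):
    return sum(t*x for t, x in zip(theta, v))

U1 = [(1, 0, 0), (0, 1, 0)]   # the xy-plane
U2 = [(0, 1, 0), (0, 0, 1)]   # the yz-plane

# U1° = span{(0,0,1)} and U2° = span{(1,0,0)}: each spanning functional
# kills the corresponding subspace.
ann1, ann2 = (0, 0, 1), (1, 0, 0)
assert all(pair(ann1, u) == 0 for u in U1)
assert all(pair(ann2, u) == 0 for u in U2)

# U1 + U2 = R^3, so (U1 + U2)° = {0} = U1° ∩ U2° (the two spans meet only at 0).
# U1 ∩ U2 = span{(0,1,0)}, and U1° + U2° = {(a, 0, c)} annihilates it:
assert pair((1, 0, 1), (0, 1, 0)) == 0   # (1,0,1) = ann2 + ann1 ∈ U1° + U2°
```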
