Friday, February 22, 2013

Vectors are Not 1-Forms

So, I recently moved into a new research area.  It's new to my advisor, too.  Actually, generally speaking, it's pretty new, period, first appearing some ten years ago or less.  Anyway, this new field deals fairly heavily with Maxwell's Equations in curved spacetime, so to understand it we need to review differential geometry and general relativity, two fields which are not in the normal purview of my advisor's expertise.  I was asked to prepare a chalk-talk that would introduce the key concepts of differential geometry to them, and another talk to segue into Maxwell's Equations in curved spacetime.

Not that I'm an expert on differential geometry, but I've studied it some on my own and as an undergraduate.

While studying for this, it dawned on me suddenly, like the storm clouds that pile higher and higher until the first bolt of lightning strikes the ground, that vectors and 1-forms are different.

Everything I have ever read in physics equates them.  Or, not really: everything I have ever read in physics fails even to demonstrate that it understands why those two should occupy different semantic domains.

What the heck am I even talking about?

Suppose you have a vector space, $\mathbb{V}$, of dimension $n$.  (If you don't know what a vector space is, maybe watch some videos on Khan Academy?  I started writing an explanation, but it was too long, and Sal is better at it anyway.)  If you have a basis, then you can express vectors in terms of their components with respect to that basis, i.e. for $\mathbf{v}\in\mathbb{V}$, we can write $\mathbf{v}=(v^1,v^2,\ldots,v^n),$ where the superscript does not denote exponentiation; it just labels which component is which.  This is the convention in differential geometry, because each vector will have two kinds of components we need to distinguish.  Yes, it's confusing; sorry, it wasn't my idea.  Let the basis be $\{\mathbf{e}_j\}.$  Then $\mathbf{v}=\sum_j v^j \mathbf{e}_j.$
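A quick toy example (numbers of my own invention): take $\mathbb{R}^2$ with the non-orthogonal basis $\mathbf{e}_1=(1,0)$, $\mathbf{e}_2=(1,1)$.  The vector that reads $(3,2)$ in standard coordinates has components $v^1=1$, $v^2=2$ with respect to this basis, since $$1\cdot(1,0)+2\cdot(1,1)=(3,2).$$  Same arrow, different numbers: components only mean anything relative to a chosen basis.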

Note that the basis is written with a lower index and the components with an upper index.  This matters in the notation, because index type is "conserved", kind of: an index persists throughout a calculation and can only be canceled by the same index in the opposite position.

If we are lucky enough to have an inner product on our space (and in physics we always are), then things get interesting.  Mathematicians like to use $\langle \cdot, \cdot \rangle$ to represent the inner product because mathematicians like silly notation.  Physicists will use $\boldsymbol{\cdot}$ or $\mathbf{g}$ when we need to be abstract, which actually we almost never do, ever.  Physicists normally use component notation and let inner products be implied by the location of indices.  This is usually called the "dot product" when it's the Euclidean (read: "normal") inner product, and the "metric tensor" when we're being fancy.

(Let me add as an aside, that if you are typing an equation in quantum mechanics using LaTeX and you do not observe the difference between $\vert \psi \rangle$ and $|\psi>$... then really, man.  Use "langle" and "rangle"; they're good for you.)

If we have an inner product, then we can describe distances in our space by $$\Vert \mathbf{v} \Vert = \sqrt{\mathbf{g}(\mathbf{v},\mathbf{v})} = \sqrt{\sum_{\alpha,\beta} g_{\alpha\beta}v^\alpha v^\beta},$$ where $g_{\alpha\beta}=\mathbf{g}(\mathbf{e}_\alpha,\mathbf{e}_\beta)$ are the components of the metric tensor.
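A familiar concrete case: in polar coordinates on the plane, the Euclidean metric has components $g_{rr}=1$, $g_{\theta\theta}=r^2$, and $g_{r\theta}=0$, so the formula above reads $$\Vert\mathbf{v}\Vert=\sqrt{(v^r)^2+r^2(v^\theta)^2},$$ which encodes the familiar fact that an angular displacement $d\theta$ sweeps out an arc length $r\,d\theta$.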

Now, let's hop tracks.  It is often useful to speak of scalar functions $\omega:\mathbb{V}\rightarrow\mathbb{R}$ that are linear.  That is, where $$\omega(\alpha\mathbf{v}+\beta\mathbf{u})=\alpha\,\omega(\mathbf{v})+\beta\,\omega(\mathbf{u}).$$  Such a function is called a 1-form.  It is then easy to see that
$$\omega(\mathbf{v})=\sum_\alpha\omega(\mathbf{e}_\alpha)v^\alpha = \sum_\alpha \omega_\alpha v^\alpha,$$
where the $\omega_\alpha$ are called the covariant components of $\omega$.  That $\omega$ has components suggests that $\omega$ should have a basis.  What is this basis?

We normally choose the set of projection functions $\pi^\alpha:\mathbb{V}\rightarrow\mathbb{R}$ defined by $$\pi^\alpha(\mathbf{e}_\beta)= \delta^\alpha_\beta.$$  The space of 1-forms these span is normally denoted $\mathbb{V}^\ast$ and called the "dual space" to $\mathbb{V}$; the set $\{\pi^\alpha\}$ itself is the "dual basis."
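(It's worth one line to check that the $\pi^\alpha$ really do act as a basis for 1-forms.  Write $\omega=\sum_\alpha \omega_\alpha \pi^\alpha$ and evaluate both sides on a basis vector: $$\Big(\sum_\alpha \omega_\alpha \pi^\alpha\Big)(\mathbf{e}_\beta)=\sum_\alpha \omega_\alpha\,\delta^\alpha_\beta=\omega_\beta=\omega(\mathbf{e}_\beta).$$  Two linear functions that agree on every basis vector agree everywhere, so the expansion holds.)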

What is cool is that if we have an inner product, then we can directly relate 1-forms and vectors in our space.  For every vector $\mathbf{w}$, we can form the 1-form $\omega = \mathbf{g}(\mathbf{w},\cdot).$  Likewise, for every 1-form $\omega$, there exists a $\mathbf{w}$ such that $\omega=\mathbf{g}(\mathbf{w},\cdot).$  This correspondence is linear and invertible: an isomorphism.

Note,
$$\omega = \mathbf{g}(\mathbf{w},\cdot) = \sum_\alpha w^\alpha\, \mathbf{g}(\mathbf{e}_\alpha, \cdot).$$
What does this mean?  That 1-forms can also be written with contravariant components, expressed with respect to the basis $\{\mathbf{g}(\mathbf{e}_\alpha,\cdot)\}.$  In a similar vein, for each $\pi^\alpha$ there exists a vector $\mathbf{e}^\alpha$ such that $\pi^\alpha = \mathbf{g}(\mathbf{e}^\alpha, \cdot),$ and now we can write
$$\mathbf{w} = \mathbf{g}^{-1}(\omega) = \sum_\alpha \omega_\alpha\, \mathbf{g}^{-1}\big(\mathbf{g}(\mathbf{e}^\alpha, \cdot)\big) = \sum_\alpha \omega_\alpha\, \mathbf{e}^\alpha,$$
meaning that vectors can be written with covariant components.
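If it helps to see this with actual numbers, here's a minimal numpy sketch of lowering and raising an index.  The metric here is a toy example of my own choosing, not from any text; any symmetric, invertible matrix works.

```python
import numpy as np

# A made-up metric on a 2-D space (symmetric and invertible).
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)

w = np.array([1.0, 2.0])   # contravariant components w^alpha

# "Lowering the index": the covariant components of the *same* vector,
# equivalently the components of the 1-form g(w, .) in the dual basis.
w_lower = g @ w            # w_alpha = sum_beta g_{alpha beta} w^beta

# "Raising the index" with the inverse metric undoes it exactly.
w_back = g_inv @ w_lower
print(np.allclose(w_back, w))   # True: the map is invertible
```

The point is that `w` and `w_lower` are two coordinate descriptions tied together by `g`; nothing about computing `w_lower` has turned the vector into a different kind of object.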

We can write vectors in $\mathbb{V}$ and forms in $\mathbb{V}^\ast$ covariantly or contravariantly.  We can make an association between a vector and a corresponding one-form.  There's clearly a very intimate link.

They are different things.

If you don't think the distinction is necessary, then look at it this way.  Using the inner product, we have formed an isomorphism $\mathbf{g}:\mathbb{V}\rightarrow\mathbb{V}^\ast,$ so that $\mathbb{V}\cong\mathbb{V}^\ast.$  But consider the isomorphism
$$P(\mathbf{v}) = v^1+v^2\, x + v^3\, x^2 + \cdots +v^n\, x^{n-1},$$
which identifies $\mathbb{V}$ with the polynomials of degree at most $n-1$.  There's a vector for each polynomial and a polynomial for each vector, and I can even relate the bases.  Is it okay, then, to insist that polynomials are the same as vectors in $\mathbb{V}$?
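To make the point concrete in code (a toy illustration of my own, using numpy's polynomial class):

```python
import numpy as np
from numpy.polynomial import Polynomial

v = np.array([1.0, 2.0, 3.0])   # a vector in R^3
p = Polynomial(v)               # the polynomial 1 + 2x + 3x^2

# The map v -> p respects addition and scalar multiplication, so it
# is an isomorphism of vector spaces.  But the objects differ in kind:
# you can evaluate p at a point; you can't evaluate v.
print(p(2.0))                   # 17.0
print(type(p) is type(v))       # False
```

Isomorphic, yes.  Identical, no.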

And in just the same way, it is not true that 1-forms are just vectors in $\mathbb{V}$ with their components expressed covariantly.

Whether the components of $\mathbf{v}$ are covariant or contravariant, they are still the components of a vector $\mathbf{v}$.  They do not become a one-form by being written with lower indices.  One-forms don't become vectors by writing them with upper indices.

I think the reason this distinction gets overlooked and ignored is that physicists almost exclusively work in component notation.  In component notation, something like
$\sum_\alpha u_\alpha v^\alpha$ might be $$\upsilon(\mathbf{v})=\sum_\alpha u_\alpha \pi^\alpha(\mathbf{v}) = \sum_\alpha u_\alpha v^\alpha,$$ or it might be $$\mathbf{g}(\mathbf{u}, \mathbf{v}) = \sum_{\alpha,\beta} g_{\alpha\beta}u^\beta v^\alpha = \sum_{\alpha}u_\alpha v^\alpha.$$  In the first we are evaluating a form on a vector; in the second we are taking the inner product of two vectors.  But in either case, if we work only in component notation, then we're not really dealing with objects in abstract vector spaces, but just with these bags of numbers that obey some rules.  Heck, most books on general relativity and differential geometry by physicists define a vector as a thing that does this:
$$v^{\alpha'} = \Lambda^{\alpha'}{}_\beta\, v^\beta,$$
where $\Lambda$ is a coordinate transformation.  And so I guess for most physicists, the distinction is silly and trivial and not worth making?  But when you consider that $\mathbb{V}$ is the space of real physical things floatin' around and $\mathbb{V}^\ast$ isn't, then maybe this is a distinction that needs to be made.  It is certainly important to distinguish between the polynomial $a+bx+cx^2 \in P_2$ and the 3-vector $(a,b,c) \in \mathbb{R}^3$.  Isn't it?
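Just to tie those threads together with actual numbers, here's that transformation law in numpy, with a change of basis I made up.  Both sets of components change, but the pairing $u_\alpha v^\alpha$ doesn't care.

```python
import numpy as np

# A made-up invertible change of basis.
Lam = np.array([[1.0, 1.0, 0.0],
                [0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0]])

v_up = np.array([1.0, 2.0, 3.0])    # contravariant components v^alpha
u_dn = np.array([0.5, -1.0, 2.0])   # covariant components u_alpha

# Contravariant and covariant components transform oppositely:
v_up_new = Lam @ v_up                    # v^{alpha'} = Lam^{alpha'}_beta v^beta
u_dn_new = np.linalg.inv(Lam).T @ u_dn   # u_{alpha'} = (Lam^{-1})^beta_{alpha'} u_beta

# ...which is exactly what keeps u_alpha v^alpha basis-independent:
print(np.isclose(u_dn_new @ v_up_new, u_dn @ v_up))   # True
```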

At any rate, calling $v_\alpha$ a "form" and $v^\alpha$ a "vector" is incorrect.  From the component alone there's no way to know; either index position can belong to either kind of object, depending on which basis you use.  Both spaces have a "standard" basis and a "reciprocal" basis: for $\mathbb{V}$ these are $\mathbf{e}_\alpha$ and $\mathbf{e}^\alpha$, and for $\mathbb{V}^\ast$ these are $\pi^\alpha=\mathbf{g}(\mathbf{e}^\alpha,\cdot)$ and $\pi_\alpha=\mathbf{g}(\mathbf{e}_\alpha, \cdot).$
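(In case it isn't obvious where $\mathbf{e}^\alpha$ comes from: it's built from the standard basis using the inverse metric, $$\mathbf{e}^\alpha=\sum_\beta g^{\alpha\beta}\,\mathbf{e}_\beta, \qquad\text{so that}\qquad \mathbf{g}(\mathbf{e}^\alpha,\mathbf{e}_\beta)=\sum_\gamma g^{\alpha\gamma}g_{\gamma\beta}=\delta^\alpha_\beta,$$ where $g^{\alpha\beta}$ are the components of $\mathbf{g}^{-1}$.  That last equality is exactly the defining property $\pi^\alpha(\mathbf{e}_\beta)=\delta^\alpha_\beta$ from before.)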

I guess that's all I have to say on that.  It was kind of terrifying to realize this after years of learning differently from every text I've ever read.  Even a book like Gravitation (aka the MTW phone book), the most comprehensive text on the subject, runs smack into this error and brushes it aside, defining some new "inner product" between a form and a vector, never explaining what it is or where it came from, just to get away with treating forms as vectors.  It's astonishing, really.

Again, I'm not an expert, but this is a real distinction that math books talk about and physics books don't.

Why do we keep making this error?
