Thursday, November 23, 2023

[xttqmkvq] linear algebra in a field

if one has a vector or matrix with elements drawn from an arbitrary field, one can do a subset of linear algebra.

addition, subtraction, and multiplication of scalars, vectors, and matrices.  (but no general vector-vector multiplication yielding another vector, i.e., a vector space over a field is not automatically an "algebra over a field".)
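a minimal sketch in python of matrix addition and multiplication over the finite field GF(7), using only field operations (the modulus p = 7 and the helper names are mine, purely illustrative):

    # matrix arithmetic over GF(p) with p = 7; only +, -, * and reduction mod p
    p = 7

    def mat_add(A, B):
        return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def mat_mul(A, B):
        # (i,j) entry is the dot product of row i of A with column j of B
        return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
                for row in A]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [0, 1]]
    print(mat_add(A, B))  # [[6, 1], [3, 5]]
    print(mat_mul(A, B))  # [[5, 1], [1, 1]]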

dot product of vectors, but it has much less geometric meaning.  a dot product of zero can still be taken to define orthogonality, but we cannot compute other angles because arc-cosine is not a field operation.  we cannot compute the norm because that requires a square root, also not a field operation.  we can compute the norm squared, but note that field elements generally cannot be ordered, so vectors cannot be compared by length (squared).
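a sketch of how the dot product misbehaves over a finite field: over GF(5) (my choice, for illustration) the nonzero vector (1, 2) has norm squared 1 + 4 = 5 = 0, i.e., it is "orthogonal" to itself:

    # dot product over GF(5); a nonzero vector can be orthogonal to itself
    p = 5

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % p

    v = [1, 2]
    print(dot(v, v))            # 0: "norm squared" vanishes for a nonzero vector
    print(dot([1, 2], [3, 1]))  # 0: (1*3 + 2*1) % 5, so these count as orthogonal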

determinant, trace.
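both are sums of products, so they need only field operations.  a sketch over GF(7), computing the determinant by cofactor expansion along the first row (names and values illustrative):

    # determinant (cofactor expansion) and trace over GF(p); no division needed
    p = 7

    def det(M):
        if len(M) == 1:
            return M[0][0] % p
        total = 0
        for j, a in enumerate(M[0]):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * a * det(minor)
        return total % p

    def trace(M):
        return sum(M[i][i] for i in range(len(M))) % p

    M = [[1, 2], [3, 4]]
    print(det(M))    # (1*4 - 2*3) % 7 = 5
    print(trace(M))  # (1 + 4) % 7 = 5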

matrix transpose.  linear algebra over the complex numbers typically uses the conjugate transpose instead of the plain transpose, but I'm not aware of any generalization of conjugate transpose to arbitrary fields.

row operations on matrices, Gaussian elimination, solving Ax=b, LU decomposition, matrix inversion.  note that over a finite field, a sizable fraction of square matrices is singular, hence not invertible, in contrast to the reals, where a random matrix is almost surely invertible.
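a minimal sketch of solving Ax=b by Gauss-Jordan elimination over GF(p), prime p; the only new ingredient is division, done as multiplication by the modular inverse pow(a, p-2, p) by Fermat's little theorem (the routine is my own illustration, not a library call):

    # solve Ax = b over GF(p), prime p, by Gauss-Jordan elimination
    p = 7

    def solve(A, b):
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
        for col in range(n):
            # pivot: any row with a nonzero entry in this column
            # (raises StopIteration if the matrix is singular)
            piv = next(r for r in range(col, n) if M[r][col] % p != 0)
            M[col], M[piv] = M[piv], M[col]
            inv = pow(M[col][col], p - 2, p)  # field inverse of the pivot
            M[col] = [x * inv % p for x in M[col]]
            for r in range(n):
                if r != col and M[r][col] % p != 0:
                    f = M[r][col]
                    M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
        return [row[n] for row in M]

    print(solve([[2, 1], [1, 3]], [1, 2]))  # [3, 2]: 2*3+2 = 1 and 3+3*2 = 2 (mod 7)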

Gram-Schmidt orthogonalization, but not orthonormalization, because the latter requires square roots.  consequently, no QR decomposition (its Q has orthonormal columns).
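a sketch of Gram-Schmidt over GF(p): the projection coefficient <w,u>/<u,u> needs only field operations.  caveat: the process breaks down if an intermediate vector is self-orthogonal, since <u,u> = 0 then has no inverse, and as noted above that can happen for nonzero vectors:

    # Gram-Schmidt orthogonalization (no normalization) over GF(p)
    p = 7

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % p

    def gram_schmidt(vectors):
        basis = []
        for v in vectors:
            w = v[:]
            for u in basis:
                # subtract the projection of w onto u: coefficient <w,u>/<u,u>;
                # fails if dot(u, u) == 0, possible for nonzero u over GF(p)
                c = dot(w, u) * pow(dot(u, u), p - 2, p) % p
                w = [(wi - c * ui) % p for wi, ui in zip(w, u)]
            basis.append(w)
        return basis

    B = gram_schmidt([[1, 2], [3, 1]])
    print(B, dot(B[0], B[1]))  # [[1, 2], [2, 6]] 0 -- orthogonal, not normalized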

similarly, no Cholesky decomposition (square roots on the diagonal), and no singular value or eigenvalue decomposition (eigenvalues are roots of the characteristic polynomial, which need not exist in the field, and singular values additionally require square roots and ordering).

solving linear least squares is interesting.  forming the normal equations by multiplying both sides by the transpose is possible.  the resulting system, if its matrix is not singular, can be solved by Gaussian elimination.  however, the more numerically stable methods of solving least squares rely on QR, Cholesky, or SVD, which are not available.  is there a method of least squares, numerically stable on the field(ish) of floating point numbers, that uses only field operations?  more broadly, it's unclear what "least" would mean in least squares over an arbitrary field, as, apart from testing equality with zero, field elements cannot be compared or ordered.
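a sketch of just the normal-equations construction over GF(7), reusing the elimination routine sketched earlier; the computed x satisfies A^T(Ax - b) = 0, though nothing is being "minimized" in any meaningful sense:

    # form and solve the normal equations A^T A x = A^T b over GF(p)
    p = 7

    def mat_mul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
                for row in A]

    def solve(A, b):  # Gauss-Jordan elimination, as in the earlier sketch
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for col in range(n):
            piv = next(r for r in range(col, n) if M[r][col] % p != 0)
            M[col], M[piv] = M[piv], M[col]
            inv = pow(M[col][col], p - 2, p)
            M[col] = [x * inv % p for x in M[col]]
            for r in range(n):
                if r != col and M[r][col] % p != 0:
                    f = M[r][col]
                    M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
        return [row[n] for row in M]

    A = [[1, 0], [1, 1], [1, 2]]  # overdetermined: 3 equations, 2 unknowns
    b = [1, 2, 4]
    At = [list(col) for col in zip(*A)]                      # transpose
    AtA = mat_mul(At, A)                                     # 2x2, nonsingular here
    Atb = [row[0] for row in mat_mul(At, [[x] for x in b])]
    print(solve(AtA, Atb))  # [2, 5]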
