given a complex number in rectangular (Cartesian) form x + y*i, we first consider computing one of its square roots in rectangular form p + q*i such that p and q contain only radicals of real values.
here is Haskell code that accomplishes this, implementing "How to find the square root of a complex number" by Stanley Rabinowitz:
{-# LANGUAGE ScopedTypeVariables #-}
-- ScopedTypeVariables lets the let bindings below mention the type variable real

complexsqrt :: forall real. (Ord real, Floating real) => (real,real) -> (real,real);
complexsqrt (x,y) =
  if y == 0 then
    -- real input: the branch cut lies along the negative real axis
    if x < 0 then (0, sqrt (negate x))
    else (sqrt x, 0)
  else let {
    hypotenuse :: real;  -- modulus of the input, sqrt(x^2 + y^2)
    hypotenuse = sqrt (x*x + y*y);
    p :: real;  -- real part of the root; x + hypotenuse is nonnegative
    p = sqrt ((x + hypotenuse) / 2);
    imaginarypart :: real;  -- magnitude of the imaginary part of the root
    imaginarypart = sqrt ((hypotenuse - x) / 2);
  } in if y < 0
       then (p, negate imaginarypart)
       else (p, imaginarypart);
the above code is only to demonstrate that it is theoretically possible to do this with radicals and conditionals. practically, the code has issues with loss of numerical precision and potential overflow in intermediate results.
note that if p + q*i is a square root, then -p - q*i is also a square root. these coincide at 0: zero (and only zero) has exactly one square root. the above code (I think) gives the answer whose complex argument t satisfies -pi/2 < t <= pi/2, the right half of the complex plane, preferring the positive imaginary axis for negative real inputs.
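squaring in rectangular form uses only real arithmetic, so complexsqrt can be sanity checked by squaring its output; a minimal sketch (the function name square is my own):

```haskell
-- square a complex number given in rectangular form:
-- (p + q*i)^2 = p^2 - q^2 + 2*p*q*i
square :: (Num real) => (real, real) -> (real, real)
square (p, q) = (p*p - q*q, 2*p*q)
```

for example, complexsqrt (3, -4) gives (2, -1), and square (2, -1) gives back (3, -4).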
is there a similar algorithm to compute the cube root of a complex number? i strongly suspect the answer is "no", because casus irreducibilis.
let the polar form be r * (cos(t) + i*sin(t)). then, the cube root by de Moivre's theorem is r^(1/3) * (cos(t/3) + i*sin(t/3)). we do not know what cos(t/3) and sin(t/3) are, but we are given cos(t) and sin(t), and we have the triple angle identities:
sin(t) = 3*sin(t/3) - 4*(sin(t/3))^3
cos(t) = 4*(cos(t/3))^3 - 3*cos(t/3)
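the identities can be checked numerically; a small sketch with hypothetical names tripleSin and tripleCos:

```haskell
-- the triple angle identities: tripleSin u equals sin (3*u),
-- and tripleCos u equals cos (3*u)
tripleSin :: Double -> Double
tripleSin u = 3 * sin u - 4 * sin u ^ 3

tripleCos :: Double -> Double
tripleCos u = 4 * cos u ^ 3 - 3 * cos u
```

with u = t/3, tripleSin (t/3) recovers sin(t) and tripleCos (t/3) recovers cos(t).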
we could solve the triple angle identities for cos(t/3) and sin(t/3) using the cubic formula, eliminating the trigonometric functions from the de Moivre solution. however, in the casus irreducibilis the cubic formula requires computing cube roots of complex numbers, so we are (probably) back where we started.
if so, casus irreducibilis is stranger than it might appear at first sight. suppose we have a calculator that can do arithmetic on real numbers, including computing arbitrary roots of positive real numbers. (note that simple arithmetic + - * / on complex numbers can easily be expressed in terms of real arithmetic on their rectangular components.) this machinery is insufficient for computing the rectangular components of the cube root of (most) complex numbers. it seems there is a partition within algebraic numbers: those which can be computed with a finite number of arithmetic operations on reals starting with rationals, and those which cannot. the rectangular components of the cube root of a complex number are in the latter set. (the roots of the general quintic equation and higher are also famously in the latter set.)
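to make the parenthetical concrete: complex multiplication and division in terms of real arithmetic on rectangular components (the names cmul and cdiv are my own):

```haskell
-- complex multiplication on rectangular components only
cmul :: (Num real) => (real, real) -> (real, real) -> (real, real)
cmul (a, b) (c, d) = (a*c - b*d, a*d + b*c)

-- complex division on rectangular components only
cdiv :: (Fractional real) => (real, real) -> (real, real) -> (real, real)
cdiv (a, b) (c, d) =
  let s = c*c + d*d  -- squared modulus of the divisor
  in ((a*c + b*d) / s, (b*c - a*d) / s)
```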
the components are probably not transcendental numbers: one can write (p + q*i)^3 = x + y*i, then separate the real and imaginary components:
x = p^3 - 3*p*q^2
y = 3*p^2*q - q^3
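the expansion can be packaged as a function (the name cube is my own) and checked against repeated complex multiplication:

```haskell
-- cube of p + q*i in rectangular form, from the expansion above:
-- (p + q*i)^3 = (p^3 - 3*p*q^2) + (3*p^2*q - q^3)*i
cube :: (Num real) => (real, real) -> (real, real)
cube (p, q) = (p^3 - 3*p*q^2, 3*p^2*q - q^3)
```

for example, cube (1, 2) gives (-11, -2), i.e., (1 + 2*i)^3 = -11 - 2*i.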
I don't know whether algebraic numbers include numbers which are roots of a system of polynomials: the usual definition speaks of roots of a single polynomial. but I strongly suspect roots of systems are also algebraic, much like how roots of single polynomials with algebraic-number (not necessarily rational) coefficients are algebraic.
what (subjectively) is the simplest additional computational machinery needed to compute the rectangular components of the cube root of a complex number, perhaps to arbitrary precision?
of course, cos, sin, and atan2 suffice, going via polar form, but those functions are (subjectively) not very simple.
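for reference, the polar-form route using those not-so-simple functions; a sketch with the hypothetical name complexcbrt, returning the root whose argument lies in (-pi/3, pi/3]:

```haskell
-- principal cube root via polar form
complexcbrt :: (RealFloat real) => (real, real) -> (real, real)
complexcbrt (x, y) =
  let r = sqrt (x*x + y*y)  -- modulus
      t = atan2 y x         -- argument, in (-pi, pi]
      s = r ** (1/3)        -- real cube root of the nonnegative modulus
  in (s * cos (t/3), s * sin (t/3))
```

for example, complexcbrt (0, 8) gives approximately (sqrt 3, 1).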
the cube root of a positive real number can be iteratively computed using bisection. but bisection does not work on the complex plane: there is too much space. can a root be confined to a rectangle which is iteratively shrunk?
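for comparison, the real-axis bisection just mentioned, iteratively shrinking an interval; a sketch with the hypothetical name cbrtBisect, for nonnegative input only (the iteration cap 200 is an arbitrary choice, more than enough for Double):

```haskell
-- real cube root of a nonnegative real by bisection
cbrtBisect :: Double -> Double
cbrtBisect a = go (0 :: Int) 0 (max 1 a)
  where
    -- invariant: lo^3 <= a <= hi^3
    go n lo hi
      | n >= 200  = mid
      | mid^3 < a = go (n+1) mid hi
      | otherwise = go (n+1) lo mid
      where mid = (lo + hi) / 2
```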
the Newton-Raphson method applied to complex numbers works, but it has the ugly issue of selecting the initial point. Newton's method famously behaves very weirdly for complex numbers (yielding fractals) depending on the initial point. (who first applied Newton's method to complex numbers? I used to think it was Raphson, but I seem to be mistaken. it's not obvious that Newton's method should even work for complex numbers.)
perhaps use polynomial approximations of trigonometric and inverse trigonometric functions only good enough to get a good initial guess for Newton's method. perhaps we win because the 3 cube roots of a number on the complex plane are widely spaced, and the polynomial approximations of sine and cosine only need to be good between -pi and pi, the range of atan2. actually, since the argument of the principal cube root lies in (-pi/3, pi/3], we only need -pi/3 to pi/3.
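a sketch of the Newton iteration for z^3 = w, leaving the initial guess z0 as a parameter since that is the hard part discussed above (the name newtonCbrt and the fixed iteration count 60 are my own choices):

```haskell
import Data.Complex

-- Newton's method for f(z) = z^3 - w:
-- step z = z - f(z)/f'(z) = (2*z + w/z^2) / 3
newtonCbrt :: Complex Double -> Complex Double -> Complex Double
newtonCbrt w z0 = iterate step z0 !! 60
  where step z = (2*z + w / (z*z)) / 3
```

which of the 3 cube roots it converges to depends on z0, via the fractal basins mentioned above.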