
Thursday, September 26, 2013

Tensor Products and Field Extensions (13.2.22)

Dummit and Foote Abstract Algebra, section 13.2, exercise 22:

Let K/F be a finite field extension, and let K_1, K_2 ⊆ K be field extensions of F. Show the F-algebra K_1 ⊗_F K_2 is a field if and only if [K_1 : F][K_2 : F] = [K_1K_2 : F].

Proof: Let A be the set of finite sums of elements of the form k_1k_2 for k_1 ∈ K_1, k_2 ∈ K_2, let φ : K_1 × K_2 → A be the bilinear map defined by φ(k_1,k_2)=k_1k_2, and let Φ : K_1 ⊗_F K_2 → A be the corresponding F-linear transformation. We observe

Φ(k_1 ⊗ k_2)Φ(k_1' ⊗ k_2') = k_1k_1'k_2k_2' = Φ((k_1 ⊗ k_2)(k_1' ⊗ k_2'))

allowing us to show

Φ(\sum_i k_{i1} ⊗ k_{i2})Φ(\sum_j k_{j1}' ⊗ k_{j2}') = \sum_{i,j} Φ(k_{i1} ⊗ k_{i2})Φ(k_{j1}' ⊗ k_{j2}') = \sum_{i,j} Φ((k_{i1} ⊗ k_{i2})(k_{j1}' ⊗ k_{j2}')) = Φ((\sum_i k_{i1} ⊗ k_{i2})(\sum_j k_{j1}' ⊗ k_{j2}'))

so that Φ is an F-algebra homomorphism. Note A = \text{img } Φ. As well, let \{n_i\} be a basis for K_1 over F and \{m_j\} a basis for K_2 over F.

(⇒) Suppose K_1 ⊗_F K_2 is a field. Then Φ is a nonzero homomorphism out of a field, hence injective, and thus an isomorphism onto A = \text{img } Φ. Now A is a field; by definition of the composite we have K_1K_2 ⊆ A, and by construction we observe A ⊆ K_1K_2, so that K_1K_2 ≅ K_1 ⊗_F K_2 as F-algebras, the latter of which has basis \{n_i ⊗ m_j\} of order [K_1 : F][K_2 : F]. Hence [K_1K_2 : F] = [K_1 : F][K_2 : F].

(⇐) Suppose [K_1K_2 : F] = [K_1 : F][K_2 : F]. We still have A ⊆ K_1K_2, and since the [K_1 : F][K_2 : F] elements n_im_j of A are linearly independent over F by Proposition 13.2.21, they form a basis for K_1K_2 and thus again A = K_1K_2. The F-algebra homomorphism Φ above sends the basis \{n_i ⊗ m_j\} to the basis \{n_im_j\} and is thus an isomorphism, and now K_1 ⊗_F K_2 ≅ K_1K_2 is a field.~\square
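For a quick illustration of the failure case (an example of my own, not from the text), take K_1 = K_2 = \mathbb{Q}(\sqrt{2}) inside K = \mathbb{Q}(\sqrt{2}), so [K_1K_2 : \mathbb{Q}] = 2 while [K_1 : \mathbb{Q}][K_2 : \mathbb{Q}] = 4. The tensor product then exhibits zero divisors:

(\sqrt{2} ⊗ 1 - 1 ⊗ \sqrt{2})(\sqrt{2} ⊗ 1 + 1 ⊗ \sqrt{2}) = 2 ⊗ 1 - 1 ⊗ 2 = 2(1 ⊗ 1) - 2(1 ⊗ 1) = 0

with neither factor zero, since \sqrt{2} ⊗ 1 and 1 ⊗ \sqrt{2} are distinct members of the basis \{1 ⊗ 1, \sqrt{2} ⊗ 1, 1 ⊗ \sqrt{2}, \sqrt{2} ⊗ \sqrt{2}\}, so K_1 ⊗_\mathbb{Q} K_2 is not a field.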

Wednesday, September 25, 2013

Applications of Algebraic Extensions (13.2.16-17)

Dummit and Foote Abstract Algebra, section 13.2, exercises 16-17:

16. Let K/F be an algebraic extension and let R be a ring where F ⊆ R ⊆ K. Show R is a field.

17. Let f(x)∈F[x] be irreducible of degree n, and let g(x)∈F[x]. Prove that every irreducible factor of f(g(x)) has degree divisible by n.

Proof: (16) It suffices to show that every nonzero element r∈R has a multiplicative inverse in R. Since r∈K is algebraic over F, we have F(r) = F[r], and hence r^{-1} ∈ F(r) = F[r] ⊆ R.
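To make the containment explicit (a standard computation, included here for completeness): if m(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0 is the minimal polynomial of r over F, then a_0 ≠ 0 by irreducibility (as r ≠ 0), and

0 = r^n + a_{n-1}r^{n-1} + \cdots + a_1r + a_0 \implies r^{-1} = -a_0^{-1}(r^{n-1} + a_{n-1}r^{n-2} + \cdots + a_1) ∈ F[r] ⊆ R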

(17) Let h(x) \mid f(g(x)) be irreducible of degree k, and let α be a root of h(x), so that h(α)=0 and thus f(g(α))=0. Since f(x) is irreducible and has g(α) as a root, g(α) is of degree n over F. Since g(α) ∈ F(α), we thus have

\text{deg } h(x) = k = [F(α) : F] = [F(α) : F(g(α))] \cdot [F(g(α)) : F] = [F(α) : F(g(α))] \cdot n

so that n \mid k.~\square
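As a small sanity check (an example of my own, not from the text), take F = \mathbb{Q}, f(x) = x^2 + x + 1 (irreducible of degree n = 2), and g(x) = x^2. Then

f(g(x)) = x^4 + x^2 + 1 = (x^2 + x + 1)(x^2 - x + 1)

and both irreducible factors indeed have degree divisible by n = 2.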

Saturday, September 7, 2013

Convergence of Matrices (12.3.40-45)

Dummit and Foote Abstract Algebra, section 12.3, exercises 40-45:

40. Letting K be the real or complex field, prove that for A,B∈M_n(K) and α∈K, where ||A|| = \sum_{i,j}|a_{ij}| denotes the entrywise norm:
(a) ||A+B|| ≤ ||A|| + ||B||
(b) ||AB|| ≤ ||A|| \cdot ||B||
(c) ||αA|| = |α| \cdot ||A||

41. Let R be the radius of convergence of the real or complex power series G(x).
(a) Prove that if ||A|| < R then G(A) converges.
(b) Deduce that for all matrices A the following power series converge:

\text{sin}(A)=\sum_{k=0}^∞ (-1)^k \dfrac{A^{2k+1}}{(2k+1)!} \qquad \text{cos}(A)=\sum_{k=0}^∞(-1)^k \dfrac{A^{2k}}{(2k)!} \qquad \text{exp}(A)=\sum_{k=0}^∞\dfrac{A^k}{k!}

42. Let P be a nonsingular n \times n matrix, and denote the variable t by the matrix tI (in light of the theory of differential equations).
(a) Prove PG(At)P^{-1}=G(PAtP^{-1})=G(PAP^{-1}t), so it suffices to consider power series for matrices in canonical form.
(b) Prove that if A is the direct sum of matrices A_1,...,A_m, then G(At) is the direct sum of the matrices G(A_1t),...,G(A_mt).
(c) Show that if Z is the diagonal matrix with entries z_1,...,z_n then G(Zt) is the diagonal matrix with entries G(z_1t),...,G(z_nt).

43. Letting A and B be commuting matrices, show \text{exp}(A+B)=\text{exp}(A)\text{exp}(B).

44. Letting λ ∈ K, show

\text{exp}(λIt+M)=e^{λt}\text{exp}(M)

45. Let N be the r \times r matrix with 1s on the first superdiagonal and zeros elsewhere. Show

\text{exp}(Nt)=\begin{bmatrix}1 & t & \dfrac{t^2}{2!} & \cdots & \cdots & \dfrac{t^{r-1}}{(r-1)!} \\ ~ & 1 & t & \dfrac{t^2}{2!} &~ & \vdots \\ ~ & ~ & \ddots & \ddots & \ddots & \vdots \\ ~ & ~ & ~ & \ddots & t & \dfrac{t^2}{2!} \\ ~ & ~ & ~ & ~ & 1 & t \\ ~ & ~ & ~ & ~ & ~ & 1 \end{bmatrix}

Deduce that if J is the r \times r elementary Jordan matrix with eigenvalue λ then

\text{exp}(Jt)=\begin{bmatrix}e^{λt} & te^{λt} & \dfrac{t^2}{2!}e^{λt} & \cdots & \cdots & \dfrac{t^{r-1}}{(r-1)!}e^{λt} \\ ~ & e^{λt} & te^{λt} & \dfrac{t^2}{2!}e^{λt} &~ & \vdots \\ ~ & ~ & \ddots & \ddots & \ddots & \vdots \\ ~ & ~ & ~ & \ddots & te^{λt} & \dfrac{t^2}{2!}e^{λt} \\ ~ & ~ & ~ & ~ & e^{λt} & te^{λt} \\ ~ & ~ & ~ & ~ & ~ & e^{λt} \end{bmatrix}

Proof: (40)(a) We have

||A+B||=\sum_{i,j}|a_{ij}+b_{ij}| ≤ (\sum_{i,j}|a_{ij}|)+(\sum_{i,j}|b_{ij}|)=||A||+||B||

(b) We have

||AB||=\sum_{i,j}|\sum_{k=1}^n a_{ik}b_{kj}| ≤ \sum_{i,j} \sum_{k=1}^n |a_{ik}| \cdot |b_{kj}| ≤ \sum_{i,j,i',j'} |a_{ij}| \cdot |b_{i'j'}| = (\sum_{i,j}|a_{ij}|)(\sum_{i,j}|b_{ij}|) = ||A|| \cdot ||B||

(c) We have

||αA|| = \sum_{i,j}|αa_{ij}| = |α| \sum_{i,j} |a_{ij}| = |α| \cdot ||A||

(41)(a) (Method due to Project Crazy Project) Choose r with ||A|| < r < R. For the i,j entry a_{(k)ij} of A^k, we note that |a_{(k)ij}| ≤ ||A^k|| ≤ ||A||^k ≤ r^k. Therefore \sum_{k=0}^N |α_k a_{(k)ij}| ≤ \sum_{k=0}^N |α_k| \cdot r^k. By the Cauchy-Hadamard theorem this last power series has the same radius of convergence R and thus converges at r as N approaches infinity. Thus the power series for G(A) in the i,j coordinate converges absolutely, and thus converges.
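As a quick numerical sanity check of (40) (a sketch of my own, not part of the exercises), the three inequalities can be tested on random matrices using the entrywise norm ||A|| = \sum_{i,j}|a_{ij}| defined above:

```python
# Numerical sanity check of (40)(a)-(c) for the entrywise norm ||A|| = sum |a_ij|.
# A sketch of my own, not part of the original exercises.
import numpy as np

def entrywise_norm(A):
    """The norm used above: the sum of absolute values of all entries."""
    return np.abs(A).sum()

rng = np.random.default_rng(0)
for _ in range(1000):
    n = rng.integers(1, 6)
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    alpha = complex(rng.normal(), rng.normal())
    assert entrywise_norm(A + B) <= entrywise_norm(A) + entrywise_norm(B) + 1e-9
    assert entrywise_norm(A @ B) <= entrywise_norm(A) * entrywise_norm(B) + 1e-9
    assert abs(entrywise_norm(alpha * A) - abs(alpha) * entrywise_norm(A)) <= 1e-9
print("all norm inequalities verified on random samples")
```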

(b) Since these functions are known to have radius of convergence R = ∞ over K, by the above they converge for matrices of arbitrary norm, i.e. for all matrices.
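The convergence in (41)(b) can also be watched numerically (again a sketch of my own; scipy.linalg.expm serves only as a reference value):

```python
# Partial sums exp_n(A) = sum_{k=0}^{n} A^k / k! converging to exp(A), as in (41)(b).
# A sketch of my own; scipy.linalg.expm is used only as a reference value.
import numpy as np
from scipy.linalg import expm

def exp_partial(A, n):
    """The n-th partial sum of the exponential power series."""
    S = np.zeros_like(A)
    term = np.eye(A.shape[0])          # A^0 / 0!
    for k in range(n + 1):
        S = S + term                   # add A^k / k!
        term = term @ A / (k + 1)      # advance to A^(k+1) / (k+1)!
    return S

A = np.array([[0.0, 2.0], [-3.0, 1.0]])
for n in (5, 10, 20, 40):
    err = np.abs(exp_partial(A, n) - expm(A)).sum()   # entrywise norm of difference
    print(f"n = {n:2d}: error = {err:.2e}")
```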

(42)(a) Lemma 1: Let x_n → x and y_n → y be convergent sequences of m × m matrices over K. Then x_ny_n → xy. Proof: First note x_n → x (entrywise) if and only if for any ε > 0 we have ||x-x_N|| < ε for sufficiently large N, since the forward direction is evident once the entries all converge within range of ε/m^2, and the converse forces all entries to converge within ε. Thus for sufficiently large N we may write x_N = x + ε_{(N)1} and y_N = y + ε_{(N)2} for matrices ε_{(N)1},ε_{(N)2} with ||ε_{(N)1}||,||ε_{(N)2}|| → 0, and then

||xy-x_Ny_N|| = ||xy-(x+ε_{(N)1})(y+ε_{(N)2})|| = ||xε_{(N)2}+ε_{(N)1}y+ε_{(N)1}ε_{(N)2}|| ≤ ||x|| \cdot ||ε_{(N)2}|| + ||y|| \cdot ||ε_{(N)1}|| + ||ε_{(N)1}|| \cdot ||ε_{(N)2}||

which vanishes to zero.~\square

Now we see

PG(At)P^{-1}=(\text{lim }P)(\text{lim }G_N(At))(\text{lim }P^{-1})=\text{lim }PG_N(At)P^{-1} = \text{lim }G_N(PAtP^{-1}) = G(PAtP^{-1}) = G(PAP^{-1}t)

(b) By lemma 2 of 12.3.38, the summands of the power series may be computed in blocks, so the series for G(At) converges independently in each block to the direct sum of G(A_1t),...,G(A_mt).

(c) This is simply a special case of (b).
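A small numerical illustration of (a) and (c) together (my own sketch): for a diagonalizable A = PZP^{-1}, the series exp(At) may be computed entrywise on the diagonal and conjugated back:

```python
# Illustration (my own sketch) of (42)(a) and (c): for diagonalizable A = P Z P^{-1},
# exp(At) = P exp(Zt) P^{-1}, where exp(Zt) is diagonal with entries e^{z_i t}.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric, hence diagonalizable
z, P = np.linalg.eigh(A)                  # A = P diag(z) P^T
t = 0.7
via_diagonalization = P @ np.diag(np.exp(z * t)) @ P.T
print(np.allclose(via_diagonalization, expm(A * t)))   # True
```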

(43) Lemma 2: \lim_{n → ∞}\dfrac{x^n}{\lfloor n/2 \rfloor !}=0 for any real x. Proof: When n > x^2,

\lim_{n → ∞}\dfrac{x^n}{\lfloor n/2 \rfloor !} ≤ \lim_{n → ∞}x\prod_{k=1}^{\lfloor n/2 \rfloor} \dfrac{x^2}{k} = \lim_{n → ∞}x \prod_{k = 1}^{\lfloor x^2 \rfloor}\dfrac{x^2}{k} \prod_{k = \lceil x^2 \rceil}^{\lfloor n/2 \rfloor}\dfrac{x^2}{k} = x \prod_{k = 1}^{\lfloor x^2 \rfloor}\dfrac{x^2}{k} \lim_{n → ∞} \prod_{k = \lceil x^2 \rceil}^{\lfloor n/2 \rfloor}\dfrac{x^2}{k} = 0~~\square

Lemma 3: Let x_n → x and y_n → y be convergent sequences of m × m matrices over K such that ||x_n-y_n|| → 0. Then x=y. Proof: Assume x \neq y, so ||x-y|| > 0. Choose 0 < ε < ||x-y||/2; let ||x-x_n|| < ||x-y||/2 - ε for n > n_1, let ||y-y_n|| < ||x-y||/2 for n > n_2, let ||x_n-y_n|| < ε for n > n_3, and choose N > \text{max}(n_1,n_2,n_3). We have

||x-y|| = ||(x-x_N)+(x_N-y_N)+(y_N-y)|| ≤ ||x-x_N||+||x_N-y_N||+||y-y_N|| < ||x-y||

a contradiction.~\square

Utilizing lemma 1, we have \lim_{n → ∞}\text{exp}_n(A) \cdot \lim_{n → ∞}\text{exp}_n(B)=\lim_{n → ∞}\text{exp}_n(A)\text{exp}_n(B), so we may compare terms of the two sequences

\text{exp}_n(A)\text{exp}_n(B)=(\sum_{k=0}^n\dfrac{A^k}{k!})(\sum_{k=0}^n\dfrac{B^k}{k!})=\sum_{j=0}^n\sum_{k=0}^n\dfrac{A^jB^k}{j!~k!}=\sum_{j,k ≤ n} \dfrac{A^jB^k}{j!~k!}

and, since A and B commute, by the binomial theorem

\text{exp}_n(A+B)=\sum_{k=0}^n \dfrac{(A+B)^k}{k!} = \sum_{k=0}^n \dfrac{\sum_{j=0}^k \dfrac{k!}{j!(k-j)!}A^jB^{k-j}}{k!} = \sum_{k=0}^n \sum_{j=0}^k \dfrac{A^jB^{k-j}}{j!(k-j)!} = \sum_{j+k ≤ n}\dfrac{A^jB^k}{j!~k!}

We then compare their differences (without loss of generality assume ||A|| ≤ ||B||):

||\text{exp}_n(A+B)-\text{exp}_n(A)\text{exp}_n(B)|| = ||\sum_{j,k ≤ n < j+k}\dfrac{A^jB^k}{j!~k!}|| ≤ \sum_{j,k ≤ n < j+k} \dfrac{||A||^j||B||^k}{j!~k!} ≤ \dfrac{n^2||B||^{2n}}{\lfloor n/2 \rfloor !} = \dfrac{||B||^2 \prod_{k=2}^n\dfrac{k^2||B||^{2k}}{(k-1)^2||B||^{2k-2}}}{\lfloor n/2 \rfloor !} ≤ ||B||^2\dfrac{(4||B||^2)^{n-1}}{\lfloor n/2 \rfloor !}

When 4||B||^2 < 1 we clearly have the above vanishing to zero as n approaches infinity, so assume 4||B||^2 ≥ 1; then

||B||^2\dfrac{(4||B||^2)^{n-1}}{\lfloor n/2 \rfloor !} ≤ ||B||^2\dfrac{(4||B||^2)^n}{\lfloor n/2 \rfloor !}

and by lemma 2, the term still tends toward 0. Thus by lemma 3 we conclude \text{exp}(A+B)=\text{exp}(A)\text{exp}(B).
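Numerically (my own sketch, not part of the exercise), the identity holds for commuting pairs and generally fails otherwise:

```python
# Numerical check (my own sketch) of (43): exp(A+B) = exp(A)exp(B) when AB = BA.
import numpy as np
from scipy.linalg import expm

# Commuting pair: any two polynomials in the same matrix commute.
M = np.array([[1.0, 2.0], [0.5, -1.0]])
A = 2 * M + 3 * np.eye(2)
B = M @ M - M
assert np.allclose(A @ B, B @ A)
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True

# A non-commuting pair generally breaks the identity.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # False
```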

(44) This is clear from the fact \text{exp}(λIt) = e^{λt}I (via the ring isomorphism between K and the scalar matrices KI), the observation that λIt commutes with M, and the preceding exercise.

(45) The first part is clear from the fact that N^k is the matrix with 1s along the k^{th} superdiagonal (observable most easily by induction from the linear transformation form of N), and since N^r = 0, we have \text{exp}(Nt)=\text{exp}_r(Nt), which is the matrix described above. The second part is clear from the observation in (44) with M=Nt coupled with the first part.~\square
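Both claims are easy to confirm numerically (my own sketch; the finite sum below uses N^r = 0):

```python
# Check (my own sketch) of (45): exp(Nt) has t^k/k! on the k-th superdiagonal,
# and exp(Jt) = e^{lambda*t} exp(Nt) for the Jordan block J = lambda*I + N.
import numpy as np
from scipy.linalg import expm
from math import factorial

r, t, lam = 5, 1.3, -0.4
N = np.diag(np.ones(r - 1), k=1)          # 1s on the first superdiagonal

# Since N^r = 0, exp(Nt) equals the finite sum of the first r terms of the series.
expected = sum(np.linalg.matrix_power(N * t, k) / factorial(k) for k in range(r))
print(np.allclose(expm(N * t), expected))                     # True

J = lam * np.eye(r) + N                   # elementary Jordan block
print(np.allclose(expm(J * t), np.exp(lam * t) * expected))   # True
```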