
Saturday, September 7, 2013

Convergence of Matrices (12.3.40-45)

Dummit and Foote Abstract Algebra, section 12.3, exercises 40-45:

40. Letting $K$ be the real or complex field, prove that for $A,B\in M_n(K)$ and $\alpha\in K$:
(a) $\|A+B\|\leq\|A\|+\|B\|$
(b) $\|AB\|\leq\|A\|\,\|B\|$
(c) $\|\alpha A\|=|\alpha|\,\|A\|$

41. Let $R$ be the radius of convergence of the real or complex power series $G(x)$.
(a) Prove that if $\|A\|<R$ then $G(A)$ converges.
(b) Deduce that for all matrices $A$ the following power series converge: $$\sin(A)=\sum_{k=0}^\infty\frac{(-1)^kA^{2k+1}}{(2k+1)!}\qquad\cos(A)=\sum_{k=0}^\infty\frac{(-1)^kA^{2k}}{(2k)!}\qquad\exp(A)=\sum_{k=0}^\infty\frac{A^k}{k!}$$

42. Let $P$ be a nonsingular $n\times n$ matrix, and denote the variable $t$ by the matrix $tI$ (in light of the theory of differential equations).
(a) Prove $PG(At)P^{-1}=G(PAtP^{-1})=G(PAP^{-1}t)$, so it suffices to consider power series for matrices in canonical form.
(b) Prove that if $A$ is the direct sum of matrices $A_1,\ldots,A_m$, then $G(At)$ is the direct sum of the matrices $G(A_1t),\ldots,G(A_mt)$.
(c) Show that if $Z$ is the diagonal matrix with entries $z_1,\ldots,z_n$, then $G(Zt)$ is the diagonal matrix with entries $G(z_1t),\ldots,G(z_nt)$.

43. Letting $A$ and $B$ be commuting matrices, show $\exp(A+B)=\exp(A)\exp(B)$.

44. Letting $\lambda\in K$, show $$\exp(\lambda It+M)=e^{\lambda t}\exp(M)$$

45. Let $N$ be the $r\times r$ matrix with $1$s on the first superdiagonal and zeros elsewhere. Show $$\exp(Nt)=\begin{bmatrix}1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{r-1}}{(r-1)!}\\ & 1 & t & \ddots & \vdots\\ & & \ddots & \ddots & \frac{t^2}{2!}\\ & & & 1 & t\\ & & & & 1\end{bmatrix}$$ Deduce that if $J$ is the $r\times r$ elementary Jordan matrix with eigenvalue $\lambda$ then $$\exp(Jt)=\begin{bmatrix}e^{\lambda t} & te^{\lambda t} & \frac{t^2}{2!}e^{\lambda t} & \cdots & \frac{t^{r-1}}{(r-1)!}e^{\lambda t}\\ & e^{\lambda t} & te^{\lambda t} & \ddots & \vdots\\ & & \ddots & \ddots & \frac{t^2}{2!}e^{\lambda t}\\ & & & e^{\lambda t} & te^{\lambda t}\\ & & & & e^{\lambda t}\end{bmatrix}$$

Proof: (40)(a) We have $$\|A+B\|=\sum_{i,j}|a_{ij}+b_{ij}|\leq\left(\sum_{i,j}|a_{ij}|\right)+\left(\sum_{i,j}|b_{ij}|\right)=\|A\|+\|B\|$$

(b) We have $$\|AB\|=\sum_{i,j}\left|\sum_{k=1}^n a_{ik}b_{kj}\right|\leq\sum_{i,j}\sum_{k=1}^n|a_{ik}|\,|b_{kj}|\leq\sum_{i,j,i',j'}|a_{ij}|\,|b_{i'j'}|=\left(\sum_{i,j}|a_{ij}|\right)\left(\sum_{i,j}|b_{ij}|\right)=\|A\|\,\|B\|$$

(c) We have $$\|\alpha A\|=\sum_{i,j}|\alpha a_{ij}|=|\alpha|\sum_{i,j}|a_{ij}|=|\alpha|\,\|A\|$$

(41)(a) (Method due to Project Crazy Project) For each $i,j$ entry $a^{(k)}_{ij}$ of $A^k$, we note that $|a^{(k)}_{ij}|\leq\|A^k\|\leq\|A\|^k<R^k$. Therefore, we note $\sum_{k=0}^N|\alpha_k a^{(k)}_{ij}|\leq\sum_{k=0}^N|\alpha_k|r^k$, where $r$ has been chosen with $\|A\|<r<R$. By the Cauchy-Hadamard theorem this last power series has the same radius of convergence $R$ and thus converges at $r$ as $N$ approaches infinity. Thus the partial power series for $G(A)$ in the $i,j$ coordinate converges absolutely, and thus converges.
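As a quick numerical sanity check (not part of the original argument), here is a minimal sketch of the three properties in (40) for the norm used above, namely the sum of the absolute values of the entries; the random test matrices and the helper name `entry_norm` are my own choices.

```python
import numpy as np

def entry_norm(A):
    # The norm of exercise 40: sum of the absolute values of the entries.
    return np.sum(np.abs(A))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
alpha = 2 - 3j

# (a) triangle inequality, (b) submultiplicativity, (c) |alpha|-homogeneity
print(entry_norm(A + B) <= entry_norm(A) + entry_norm(B))
print(entry_norm(A @ B) <= entry_norm(A) * entry_norm(B))
print(np.isclose(entry_norm(alpha * A), abs(alpha) * entry_norm(A)))
```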

(b) Since these functions are shown to have radius of convergence $R=\infty$ over $K$, by the above they converge for matrices of arbitrary norm, i.e. for all matrices.
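As an illustration of (41)(b) (again, not part of the proof), the partial sums $\exp_N(A)=\sum_{k=0}^N A^k/k!$ can be watched converging for an arbitrary matrix; the comparison against `scipy.linalg.expm` and the helper name `exp_partial` are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

def exp_partial(A, N):
    # Partial sum exp_N(A) = sum_{k=0}^{N} A^k / k!, built term by term.
    term = np.eye(A.shape[0])
    total = term.copy()
    for k in range(1, N + 1):
        term = term @ A / k
        total = total + term
    return total

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
for N in (5, 10, 20, 40):
    err = np.sum(np.abs(exp_partial(A, N) - expm(A)))  # entrywise-sum norm again
    print(f"N={N:2d}  error = {err:.2e}")
```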

(42)(a) Lemma 1: Let $x_n\to x$ and $y_n\to y$ be convergent sequences of $m\times m$ matrices over $K$. Then $x_ny_n\to xy$. Proof: We see $x_n\to x$ if and only if for any $\varepsilon>0$ we have $\|x-x_N\|<\varepsilon$ for sufficiently large $N$, since the forward direction is evident when the entries all converge within range of $\varepsilon/m^2$, and the converse forces all entries to converge within $\varepsilon$. For sufficiently large $N$ we have, writing $x_N=x+\varepsilon^{(N)}_1$ and $y_N=y+\varepsilon^{(N)}_2$ for matrices with asymptotically small norms $\|\varepsilon^{(N)}_1\|,\|\varepsilon^{(N)}_2\|$, the result $$\|xy-x_Ny_N\|=\|xy-(x+\varepsilon^{(N)}_1)(y+\varepsilon^{(N)}_2)\|=\|x\varepsilon^{(N)}_2+\varepsilon^{(N)}_1y+\varepsilon^{(N)}_1\varepsilon^{(N)}_2\|\leq\|x\|\,\|\varepsilon^{(N)}_2\|+\|y\|\,\|\varepsilon^{(N)}_1\|+\|\varepsilon^{(N)}_1\|\,\|\varepsilon^{(N)}_2\|$$ which vanishes to zero.

Now we see $$PG(At)P^{-1}=(\lim P)(\lim G_N(At))(\lim P^{-1})=\lim PG_N(At)P^{-1}=\lim G_N(PAtP^{-1})=G(PAtP^{-1})=G(PAP^{-1}t)$$ (a quick numerical check of this identity appears after part (c) below).

(b) By considering lemma 2 of 12.3.38, we see that the power series' summands may be computed block by block, so that $G(At)$ converges blockwise to the direct sum of $G(A_1t),\ldots,G(A_mt)$.

(c) This is simply a special case of (b).
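For what it is worth, (42)(a) is easy to check numerically with $G=\exp$ at a fixed value of $t$; this minimal sketch (random $P$ and $A$ chosen by me, with $P$ almost surely nonsingular) just confirms $P\exp(At)P^{-1}=\exp(PAP^{-1}t)$ up to rounding.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))   # almost surely nonsingular
Pinv = np.linalg.inv(P)
t = 0.7

lhs = P @ expm(A * t) @ Pinv      # P exp(At) P^{-1}
rhs = expm(P @ A @ Pinv * t)      # exp(P A P^{-1} t)
print(np.sum(np.abs(lhs - rhs)))  # ~ 0 up to floating-point error
```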

(43) Lemma 2: $\lim_{n\to\infty}\frac{x^n}{\lfloor n/2\rfloor!}=0$ for any real $x$. Proof: We may assume $|x|\geq 1$, as otherwise the claim is immediate. Since $|x|^n\leq|x|\,(x^2)^{\lfloor n/2\rfloor}$, for $n>2x^2$ we may write $$\frac{|x|^n}{\lfloor n/2\rfloor!}\leq|x|\prod_{k=1}^{\lfloor n/2\rfloor}\frac{x^2}{k}=|x|\left(\prod_{k=1}^{\lceil x^2\rceil}\frac{x^2}{k}\right)\prod_{k=\lceil x^2\rceil+1}^{\lfloor n/2\rfloor}\frac{x^2}{k},$$ and the last product tends to $0$ as $n\to\infty$, since each of its factors is at most $\frac{x^2}{\lceil x^2\rceil+1}<1$ and the number of factors grows without bound.

Lemma 3: Let $x_n\to x$ and $y_n\to y$ be convergent sequences of $m\times m$ matrices over $K$ such that $\|x_n-y_n\|\to 0$. Then $x=y$. Proof: Assume $x\neq y$, so $\|x-y\|>0$. For some chosen $0<\varepsilon<\|x-y\|/2$, let $\|x-x_n\|<\|x-y\|/2-\varepsilon$ for $n>n_1$, let $\|y-y_n\|<\|x-y\|/2$ for $n>n_2$, let $\|x_n-y_n\|<\varepsilon$ for $n>n_3$, and choose $N>\max(n_1,n_2,n_3)$. We have $$\|x-y\|=\|(x-x_N)+(x_N-y_N)+(y_N-y)\|\leq\|x-x_N\|+\|x_N-y_N\|+\|y-y_N\|<\|x-y\|,$$ a contradiction.
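A small numeric illustration of lemma 2 (a sanity check only, with $x=5$ chosen arbitrarily): the quantity $x^n/\lfloor n/2\rfloor!$ first grows, but decays toward $0$ once $\lfloor n/2\rfloor$ passes $x^2=25$.

```python
from math import factorial

x = 5.0
for n in (20, 60, 100, 140, 200):
    print(n, x**n / factorial(n // 2))  # eventually tends to 0
```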

Utilizing lemma 1, we have $\lim_N\exp_N(A)\cdot\lim_N\exp_N(B)=\lim_N\exp_N(A)\exp_N(B)$, so we may compare terms of the two sequences $$\exp_n(A)\exp_n(B)=\left(\sum_{k=0}^n\frac{A^k}{k!}\right)\left(\sum_{k=0}^n\frac{B^k}{k!}\right)=\sum_{j=0}^n\sum_{k=0}^n\frac{A^jB^k}{j!\,k!}=\sum_{j,k\leq n}\frac{A^jB^k}{j!\,k!}$$ $$\exp_n(A+B)=\sum_{k=0}^n\frac{(A+B)^k}{k!}=\sum_{k=0}^n\sum_{j=0}^k\frac{k!}{j!(k-j)!}\cdot\frac{A^jB^{k-j}}{k!}=\sum_{k=0}^n\sum_{j=0}^k\frac{A^jB^{k-j}}{j!(k-j)!}=\sum_{j+k\leq n}\frac{A^jB^k}{j!\,k!}$$ (the binomial expansion being valid since $A$ and $B$ commute) and then compare their difference (without loss of generality assume $\|A\|\leq\|B\|$) $$\left\|\exp_n(A+B)-\exp_n(A)\exp_n(B)\right\|=\left\|\sum_{j,k\leq n,\ n<j+k}\frac{A^jB^k}{j!\,k!}\right\|\leq\sum_{j,k\leq n,\ n<j+k}\frac{\|A\|^j\|B\|^k}{j!\,k!}\leq\frac{n^2\|B\|^{2n}}{\lfloor n/2\rfloor!}=\frac{\|B\|^2\prod_{k=2}^n\frac{k^2\|B\|^{2k}}{(k-1)^2\|B\|^{2k-2}}}{\lfloor n/2\rfloor!}\leq\frac{\|B\|^2(4\|B\|^2)^{n-1}}{\lfloor n/2\rfloor!}$$ (there are at most $n^2$ terms in the middle sum, each with $j!\,k!\geq\lfloor n/2\rfloor!$ since $\max(j,k)>n/2$). When $4\|B\|^2<1$ we clearly have the above vanishing to zero as $n$ approaches infinity, so assume $4\|B\|^2\geq 1$: $$\frac{\|B\|^2(4\|B\|^2)^{n-1}}{\lfloor n/2\rfloor!}\leq\frac{\|B\|^2(4\|B\|^2)^n}{\lfloor n/2\rfloor!}$$ and by lemma 2, the term still tends toward $0$. Thus by lemma 3 we conclude $\exp(A+B)=\exp(A)\exp(B)$.
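A numerical check of (43) (mine, not the author's): for commuting matrices, e.g. two polynomials in the same matrix, $\exp(A+B)=\exp(A)\exp(B)$ holds up to rounding, while for a non-commuting pair it generally fails.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4))
A = X @ X                    # A and B are polynomials in X, so they commute
B = 2 * X + np.eye(4)
print(np.sum(np.abs(expm(A + B) - expm(A) @ expm(B))))  # ~ 0

C = rng.standard_normal((4, 4))                          # does not commute with A
print(np.sum(np.abs(expm(A + C) - expm(A) @ expm(C))))   # typically far from 0
```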

(44) This is clear from the fact that $\exp(\lambda t)=e^{\lambda t}$, the ring isomorphism between $K$ and the matrices $KI$ (so that $\exp(\lambda It)=e^{\lambda t}I$), and the preceding exercise, since $\lambda It$ commutes with $M$.

(45) The first part is clear from the fact that $N^k$ is the matrix with $1$s along the $k$th superdiagonal (observable most easily by induction from the linear transformation form of $N$), and since $N^r=0$, we have that $\exp(Nt)=\exp_r(Nt)$ is the matrix described above. The second part is clear from the observation in (44) with $M=Nt$ coupled with the first part.
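Finally, a sketch checking the closed forms of (45) numerically for a specific size $r$, value of $t$ and eigenvalue $\lambda$ (all chosen arbitrarily here): the entry of $\exp(Nt)$ on the $k$th superdiagonal is $t^k/k!$, and $\exp(Jt)=e^{\lambda t}\exp(Nt)$.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

r, t, lam = 5, 0.3, 2.0
N = np.diag(np.ones(r - 1), k=1)   # 1s on the first superdiagonal

# Closed form from (45): entry (i, i+k) of exp(Nt) is t^k / k!.
closed = sum((t**k / factorial(k)) * np.diag(np.ones(r - k), k=k) for k in range(r))
print(np.sum(np.abs(expm(N * t) - closed)))                      # ~ 0

J = lam * np.eye(r) + N            # elementary Jordan block with eigenvalue lam
print(np.sum(np.abs(expm(J * t) - np.exp(lam * t) * closed)))    # ~ 0
```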
