
Friday, August 30, 2013

Matrix Roots (12.3.37-39)

Dummit and Foote Abstract Algebra, section 12.3, exercises 37-39:

37. Let J be a Jordan block of size m with eigenvalue λ∈\mathbb{C}.
(a) If λ \neq 0, prove the Jordan canonical form of J^2 is the Jordan block of size m with eigenvalue λ^2.
(b) If λ = 0, prove the Jordan canonical form of J^2 is two Jordan blocks of sizes \dfrac{m}{2}, \dfrac{m}{2} if m is even and of sizes \dfrac{m-1}{2}, \dfrac{m+1}{2} if m is odd.
38. Determine necessary and sufficient conditions for a matrix over \mathbb{C} to have a square root.
39. Let J be a Jordan block of size m with eigenvalue λ over a field F of characteristic 2. Determine the Jordan canonical form of the matrix J^2. Determine necessary and sufficient conditions for a matrix over F to have a square root.

Proof: We consider a more general case - determining when a matrix over a field F has an n^{th} root for any natural n - first when n \neq 0 in F, and then when n = 0 in F and n is prime in \mathbb{Z}.

(37)(a) Let T be the linear transformation represented by J with respect to some basis e_1,...,e_m. We shall show

J^n=\begin{bmatrix}λ^n{n \choose 0} & λ^{n-1}{n \choose 1} & \cdots & {n \choose n}λ^0 & 0 & \cdots \\ 0 & λ^n{n \choose 0} & λ^{n-1}{n \choose 1} & \ddots & 0 & \cdots \\ 0 & 0 & λ^n{n \choose 0} & \ddots & 0 & \cdots \\ \ddots & & & \cdots & \cdots & \cdots \\ & \ddots & & & \cdots & \cdots \end{bmatrix}

In the language of linear transformations, this is equivalent to showing T^n acts on the basis by

T^n(e_k)=\sum_{j=0}^{k-1}λ^{n-j}{n \choose j}e_{k-j}

This is evident for n=1, so we prove the proposition by induction:

T^n(e_k)=T \circ T^{n-1}(e_k) = T(\sum_{j=0}^{k-1}λ^{n-1-j}{n-1 \choose j}e_{k-j}) =\sum_{j=0}^{k-1}λ^{n-1-j}{n-1 \choose j}T(e_{k-j})

We note at this point that T acts on the basis by T(e_k)=λe_k+e_{k-1} if k > 1 and T(e_1)=λe_1, so we may continue

\sum_{j=0}^{k-1}λ^{n-1-j}{n-1 \choose j}T(e_{k-j})=[\sum_{j=0}^{k-2}{n-1 \choose j}(λ^{n-j}e_{k-j}+λ^{n-j-1}e_{k-j-1})]+{n-1 \choose k-1}λ^{(n-1)-(k-1)+1}e_1

=λ^ne_k+[\sum_{j=1}^{k-2}({n-1 \choose j}λ^{n-j}+{n-1 \choose j-1}λ^{n-1-(j-1)})e_{k-j}]+({n-1 \choose k-1}λ^{n-k+1}+{n-1 \choose k-2}λ^{n-1-(k-2)})e_1

=λ^ne_k+[\sum_{j=1}^{k-2}λ^{n-j}{n \choose j}e_{k-j}]+λ^{n-(k-1)}{n \choose k-1}e_1=\sum_{j=0}^{k-1}λ^{n-j}{n \choose j}e_{k-j}

and so our claim is proven. Now, since J^n-λ^n is an upper triangular matrix with zero diagonal, it is nilpotent, and thus m_{J^n}(x) \mid (x-λ^n)^m. As well, note the nullity of T^n-λ^n on V: the image (T^n-λ^n)V is spanned by the vectors (T^n-λ^n)(e_k)=\sum_{j=1}^{k-1}λ^{n-j}{n \choose j}e_{k-j} for 2 ≤ k ≤ m, each with leading term λ^{n-1}{n \choose 1}e_{k-1}. Since λ \neq 0 and {n \choose 1} = n \neq 0 in F, these are triangular combinations of the linearly independent elements e_1,...,e_{m-1}, so (T^n-λ^n)V is of rank m-1 and T^n-λ^n has nullity 1.
Thus there is exactly one Jordan block with eigenvalue λ^n in the Jordan canonical form of J^n, and by the minimal polynomial this must be the only block; hence the Jordan canonical form of J^n, for J a Jordan block of size m with eigenvalue λ \neq 0, is a Jordan block of size m with eigenvalue λ^n.
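This single-block claim is easy to spot-check computationally; the following sympy sketch (the size m=4, eigenvalue λ=3, and power n=2 are illustrative choices of mine) verifies that J^n is similar to one Jordan block of size m with eigenvalue λ^n:

```python
import sympy as sp

# Illustrative parameters: block size m, nonzero eigenvalue lam, power n
m, lam, n = 4, 3, 2

J = sp.jordan_cell(lam, m)        # Jordan block of size m with eigenvalue lam
P, form = (J**n).jordan_form()    # Jordan canonical form of J^n

# A single Jordan block of size m with eigenvalue lam^n, as (37)(a) predicts
print(form == sp.jordan_cell(lam**n, m))  # True
```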

(b) When λ=0, by the above J^n is the matrix whose (i,j) entry is 1 if j=i+n and 0 otherwise. Thus T^n acts on the basis by T^n(e_k)=0 if k ≤ n and T^n(e_k)=e_{k-n} when k > n. Write m = an+b where 0 ≤ b < n. If m < n, so that a = 0, then J^n=0 and there are m=b blocks of size 1=a+1, matching the pattern below. When a ≥ 1, set r_k = \text{dim}(T^n)^kV; we may see r_k = m-nk when k ≤ a and r_k = 0 when k > a. We count the Jordan blocks by 12.3.30:

r_{a-1}-2r_a+r_{a+1}=m-n(a-1)-2(m-na)=m-na-2(m-na)+n=n-b

r_a-2r_{a+1}+r_{a+2}=m-na=b

Thus there are n-b blocks of size a and b blocks of size a+1 (all with eigenvalue 0, remember). Their combined dimension over F is (n-b)a+b(a+1)=na+b=m, so this describes the Jordan form entirely.
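The predicted block sizes can likewise be spot-checked with sympy; with the illustrative choice m=5, n=2 we have a=2, b=1, so the claim predicts one block of size 2 and one of size 3:

```python
import sympy as sp

# Illustrative parameters: m = 5 = 2*2 + 1, so a = 2, b = 1, predicting
# n - b = 1 block of size a = 2 and b = 1 block of size a + 1 = 3.
m, n = 5, 2
J = sp.jordan_cell(0, m)          # nilpotent Jordan block of size m
P, form = (J**n).jordan_form()
sizes = sorted(block.rows for block in form.get_diag_blocks())
print(sizes)  # [2, 3]
```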

38. We prove that a matrix A over a field F in which n \neq 0 and which contains all the eigenvalues of its matrices (e.g. \mathbb{C}) has an n^{th} root if and only if:

1) Each of its nonzero eigenvalues has an n^{th} root in F, and
2) It is possible to divide the list of exponents of its invariant factors of the form x^k into groups of two kinds:
     i) groups of some size m < n containing only 1s, and
     ii) groups of size n whose elements all differ from each other by 1 or 0.
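As a sanity check on condition 2), a single nilpotent Jordan block of size 2 - exponent list consisting of a lone 2, which fits neither rule when n=2 - should have no square root. A direct symbolic computation (a sympy sketch of mine, not part of the original argument) confirms that no 2 \times 2 matrix squares to it:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
B = sp.Matrix([[a, b], [c, d]])
N = sp.Matrix([[0, 1], [0, 0]])   # single nilpotent Jordan block of size 2

# Solve B**2 = N entrywise over the complex numbers
sols = sp.solve([sp.Eq(lhs, rhs) for lhs, rhs in zip(B**2, N)],
                [a, b, c, d], dict=True)
print(sols)  # [] -- no square root exists
```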

Lemma 1: Let G be a ring. An element g∈G has a multiplicative n^{th} root in G if and only if each of the elements in its similarity class has an n^{th} root. Proof: (⇐) is clear, and (⇒) follows by f^n = g ⇒ (hfh^{-1})^n = hf^nh^{-1}=hgh^{-1}.~\square

Lemma 2: Let A be the block diagonal matrix with blocks A_1,...,A_n and let B be the block diagonal matrix with blocks B_1,...,B_n, where A_i is the same size as B_i. Then AB is the block diagonal matrix with blocks A_1B_1,...,A_nB_n. Proof: Expand AB=(A_1'+...+A_n')(B_1'+...+B_n'), where A_i' is the matrix of the size of A with A_i in its block position and zeros elsewhere, and similarly for B_i'. By viewing these individual blocks as projections onto disjoint subspaces followed by a particular linear transformation, we see A_i'B_j'=0 for i \neq j and A_i'B_i'=(A_iB_i)'. Therefore (A_1'+...+A_n')(B_1'+...+B_n')=(A_1B_1)'+...+(A_nB_n)' is the block diagonal matrix described above.~\square

By the above lemmas and our knowledge of the Jordan canonical forms of powers of Jordan blocks, the logical equivalence now holds. A has an n^{th} root if and only if there is a matrix B for which B^n has the same Jordan canonical form as A, and since (P^{-1}BP)^n=P^{-1}B^nP lies in the same similarity class as B^n, we may take B itself to be in Jordan canonical form. Then B^n is a block matrix whose blocks can be separately conjugated into Jordan canonical form, giving rise to a block matrix P of conjugating blocks that conjugates B^n into its Jordan canonical form, namely the block sum of the Jordan canonical forms of its component blocks. By (37), which describes how these canonical forms arise block by block, this completes the proof.

39. The second requirement is the same as above, but now applies to each class of eigenvalues as well as to 0: when A_λ is the matrix in Jordan canonical form with eigenvalue λ of some fixed size, we have A_λ^n-λ^n=A_0^n, since when n is prime and n=0 in F we have n \mid {n \choose k} for all 0 < k < n.~\square

Monday, August 26, 2013

General Canonical Forms (12.3.25-28)

Dummit and Foote Abstract Algebra, section 12.3, exercises 25-28:

25. Determine the Jordan canonical form of the n \times n matrix over \mathbb{Q} whose entries are all equal to 1.
26. Determine the Jordan canonical form of the n \times n matrix over \mathbb{F}_p whose entries are all equal to 1.
27. Determine the Jordan canonical form of the n \times n matrix over \mathbb{Q} whose diagonal entries are all equal to 0 and other entries are all 1.
28. Determine the Jordan canonical form of the n \times n matrix over \mathbb{F}_p whose diagonal entries are all equal to 0 and other entries are all 1.

Proof: When n=1 all these matrices are already in Jordan canonical form, so assume n > 1.

(25) Let B=A-n and C=AB. We see

c_{ij}=\sum_{k=1}^n a_{ik}b_{kj}=\sum_{k=1}^nb_{kj}=b_{jj}+\sum_{k=1,k \neq j}b_{kj}=1-n+n-1=0

so now m_A(x) \mid x(x-n) and since A,B \neq 0 we have m_A(x)=x(x-n) and A is diagonalizable with ns and 0s.

Assume there are multiple ns down the diagonal of the Jordan form of A, so there are multiple x-n factors in the invariant decomposition. Letting 1_a,1_b∈\mathbb{Q}^n be the \mathbb{Q}[x] generators of these summands, we see x(1_a) and x(1_b) are linearly independent over \mathbb{Q}. But we see

A \begin{bmatrix}f_1 \\ ... \\ f_n \end{bmatrix}=\begin{bmatrix}\sum f_i \\ ... \\ \sum f_i \end{bmatrix}

so that all elements under the image of A are either zero or associate to \begin{bmatrix}1 \\ ... \\ 1 \end{bmatrix}, a contradiction. Therefore the Jordan canonical form of this matrix is the n \times n matrix with a_{1,1}=n and all other entries 0.

(26) Assume p \not \mid n. Then as before we can conclude m_A(x)=x(x-n), that there is only one x-n invariant factor, and the Jordan canonical form is as above. Assume p \mid n. Then x=x-n, and since A \neq 0 we have m_A(x)=x^2. Once again we see that assuming there is more than one invariant factor not dividing x leads to linearly independent elements in the image of A, so we may conclude the invariant factor decomposition is x,...,x,x^2 and the Jordan canonical form is simply the matrix with a_{1,2}=1 and all other entries 0.
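The characteristic-zero computation of (25) can be spot-checked with sympy (an illustrative sketch for size n=4; sympy's jordan_form works over \mathbb{Q}, so the mod-p case of (26) is not checked here):

```python
import sympy as sp

n = 4
A = sp.ones(n, n)                 # the n x n all-ones matrix over Q
P, J = A.jordan_form()

diag = sorted(J[i, i] for i in range(n))
print(J.is_diagonal(), diag)  # True [0, 0, 0, 4]
```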

(27) Letting A be this matrix and A' the matrix of (25), we see A+1=A', and thus A satisfies ((x+1)-n)(x+1)=(x-n+1)(x+1)=0. As before we note there is one associativity class in the image of A'=A+1, so that there is exactly one invariant factor of x-n+1 in the decomposition, and so the Jordan canonical form is the matrix with a_{1,1}=n-1, the other diagonal entries -1, and the other entries 0.

(28) When p \not \mid n we have n \not\equiv 0, so x-n+1 \neq x+1 and the Jordan canonical form is as above. When p \mid n we again notice there is one invariant factor of (x+1)^2, and thus the Jordan canonical form is the matrix with a_{1,2}=1, the diagonal entries all -1, and the other entries 0.~\square
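A similar sympy spot-check for (27) over \mathbb{Q}, with the illustrative size n=5, confirms the matrix is diagonalizable with eigenvalues n-1 and -1:

```python
import sympy as sp

n = 5
A = sp.ones(n, n) - sp.eye(n)     # zeros on the diagonal, ones elsewhere
P, J = A.jordan_form()

diag = sorted(J[i, i] for i in range(n))
print(J.is_diagonal(), diag)  # True [-1, -1, -1, -1, 4]
```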

Sunday, August 25, 2013

Diagonalization of Special Matrices (12.3.21-22)

Dummit and Foote Abstract Algebra, section 12.3, exercises 21-22:

21. Let A be a matrix such that A^2=A. Show that A can be diagonalized with 1s and 0s down the diagonal.

22. Let A be a matrix such that A^3=A. Show that A can be diagonalized over \mathbb{C}. Is this true over any field F?

Proof: (21) Let f_i(x)^{α_i} be the i^{th} elementary divisor in the elementary divisor decomposition of V over F[x], and let 1 be viewed as the F[x] generator of this direct summand. Since x^2(1)=x(1), we have x(x-1)(1)=0. Since f_i(x)^{α_i} is a power of a single prime dividing x(x-1), and x and x-1 are distinct primes (as -1 \neq 0), we must have f_i(x)^{α_i}∈\{x,x-1\}. Thus the Jordan form of A is diagonal with 1s and 0s down the diagonal.

(22) Reasoning as above, we see f_i(x)^{α_i} divides x^3-x=x(x-1)(x+1). When F has characteristic different from 2 these are all distinct prime factors, and thus A is diagonalizable with -1s, 0s, and 1s down the diagonal. When F has characteristic 2 we note x-1=x+1, so x^3-x=x(x-1)^2; thus the matrix with invariant factor (x-1)^2 is not diagonalizable yet is seen to satisfy

\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}^3=\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}~\square
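The characteristic 2 counterexample can be verified directly by doing integer matrix arithmetic and reducing entries mod 2 (a minimal sympy sketch):

```python
import sympy as sp

A = sp.Matrix([[1, 1], [0, 1]])
mod2 = lambda M: M.applyfunc(lambda e: e % 2)   # work over F_2 by reducing mod 2

print(mod2(A**3) == A)                          # True: A^3 = A in characteristic 2
# (A - I)^2 = 0 while A - I != 0, so m_A(x) = (x-1)^2 and A is not diagonalizable
print(mod2((A - sp.eye(2))**2).is_zero_matrix)  # True
```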

Thursday, August 22, 2013

Prime Cyclotomic Polynomials Modulo p (12.2.20)

Dummit and Foote Abstract Algebra, section 12.2, exercise 20:


Let n be prime, and let Φ_n(x) = \dfrac{x^n-1}{x-1} = x^{n-1}+...+1 be the n^{th} cyclotomic polynomial, which is irreducible over \mathbb{Q}. Fix a prime p. This exercise determines when Φ_n(x)∈\mathbb{F}_p[x] is irreducible and thus provides information about the conjugacy classes of matrices of prime order over prime fields.

(a) Let p = n. Show x-1 divides Φ_n(x).
(b) Let p \neq n. Let f be the multiplicative order of p in \mathbb{F}_n^\times. Show that m=f is the smallest integer for which \text{GL}_m(\mathbb{F}_p) contains a matrix A of order n.
(c) Show that no polynomial in \mathbb{F}_p[x] of degree less than f divides Φ_n(x), and that m_A(x) is an irreducible polynomial of degree f dividing Φ_n(x).
(d) In particular, prove Φ_n(x) is irreducible over \mathbb{F}_p if and only if p is a primitive root modulo n, i.e. p generates \mathbb{F}_n.

Proof: (a) Since Φ_n(1)≡0, x-1 divides Φ_n(x).

(b) We have |GL_m(\mathbb{F}_p)|=(p^m-1)...(p^m-p^{m-1}). By Lagrange's and Cauchy's theorems, GL_m(\mathbb{F}_p) has a matrix of order n if and only if its order is divisible by n, so - since n is a prime distinct from p - if and only if n \mid p^m-p^k=p^k(p^{m-k}-1), i.e. n \mid p^{m-k}-1, for some 0≤k<m. Since this first occurs when m=f and k=0, the claim is proven.

(c) Suppose g(x)~|~Φ_n(x), so also g(x)~|~x^n-1; then for B the companion matrix of g(x) we have m_B(x)~|~c_B(x)=g(x)~|~x^n-1. Thus B^n=1, and since x-1 \nmid Φ_n(x) (as Φ_n(1)=n \neq 0 in \mathbb{F}_p), B \neq 1 has order exactly n, so by (b) g(x) is of degree ≥ f. Letting A be as in (b), we see m_A(x)~|~x^n-1=(x-1)Φ_n(x), and by the restriction on polynomials dividing Φ_n(x) and the size of A we have m_A(x) is of degree f. Since m_A(x) is a polynomial of minimal degree dividing Φ_n(x) it must be irreducible, and since A \neq 1 we have m_A(x) \neq x-1, and now m_A(x)~|~Φ_n(x).

(d) The case is clear when p=n. Otherwise, first let p be a primitive root. Then f=n-1, so no nonconstant polynomial of degree smaller than n-1 divides Φ_n(x), and thus Φ_n(x) is irreducible. Conversely, if p is not a primitive root, then f < n-1, and by (c) the irreducible polynomial m_A(x) of degree f < n-1 = \text{deg }Φ_n(x) properly divides Φ_n(x), so Φ_n(x) is reducible.~\square
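The criterion can be illustrated with sympy's mod-p factorization, taking the illustrative case n=5: p=2 has order 4 = n-1 modulo 5 (a primitive root), while p=11 ≡ 1 has order 1:

```python
import sympy as sp

x = sp.symbols('x')
n = 5
phi = sum(x**k for k in range(n))    # Phi_5(x) = x^4 + x^3 + x^2 + x + 1

# 2 is a primitive root mod 5, so Phi_5 stays irreducible mod 2
factors_mod2 = sp.factor_list(phi, modulus=2)[1]
print(len(factors_mod2))   # 1

# 11 has order 1 in F_5^x, so Phi_5 splits into linear factors mod 11
factors_mod11 = sp.factor_list(phi, modulus=11)[1]
print(len(factors_mod11))  # 4
```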

Wednesday, August 21, 2013

Matrix Similarity Classes Over Extension Fields of Q (12.2.13)

Dummit and Foote Abstract Algebra, section 12.2, exercise 13:

Show that there are the same number of similarity classes of 3 \times 3 matrices over \mathbb{Q} for a given characteristic polynomial in \mathbb{Q}[x] as there are when the entries are taken over any extension field F of \mathbb{Q}. Give an example to show this is not true in general for 4 \times 4 matrices.

Proof: Consider all the cases of the decomposition of c_A(x)∈\mathbb{Q}[x]. Let a,b,c be distinct.

(x+a)(x+b)(x+c): The only choice for minimal polynomial is c_A(x), and there is one similarity class over \mathbb{Q} and F.

(x+a)^2(x+b): Whether viewed over \mathbb{Q} or F, there are two similarity classes.

(x+a)^3: As before, there are invariably three similarity classes. These are all a result of x+a being irreducible in both \mathbb{Q}[x] and F[x].

(x^2+ax+b)(x+c): There is only one similarity class over \mathbb{Q}. If x^2+ax+b=(x+v_1)(x+v_2) in F[x] for v_1 \neq v_2, then there is again only one similarity class. And x^2+ax+b=(x+v_1)^2=x^2+2v_1x+v_1^2 is impossible, for then 2v_1 ∈ \mathbb{Q} so v_1 ∈ \mathbb{Q}, even though x^2+ax+b doesn't factor and thus doesn't have zeros in \mathbb{Q}.

x^3+ax^2+bx+c: If this polynomial decomposes in F[x] into three distinct linear factors, or an irreducible quadratic and a linear factor, or doesn't decompose further, then the number of similarity classes remains the same. If it decomposes into (x+v)^3, then comparing coefficients of x^2 we obtain 3v∈\mathbb{Q} so v∈\mathbb{Q}, a contradiction. So suppose x^3+ax^2+bx+c = (x+v_1)^2(x+v_2); we obtain the relations

2v_1+v_2 = a

v_1^2+2v_1v_2 = b

v_1^2v_2 = c

We first observe v_2 = -2v_1 + a to manipulate the second equation into

v_1^2 = \dfrac{2}{3}av_1-\dfrac{1}{3}b

Substituting these both into the third equation yields a rational quadratic expression in v_1, so employing the first observation again and rearranging yields

(6b-2a^2)v_1 = 9c-ab

Since the original polynomial can't have zeros in \mathbb{Q}, we have -v_1 \not∈ \mathbb{Q} so v_1 \not∈ \mathbb{Q}, implying 6b-2a^2=0 and hence also 9c-ab=0, so b = \dfrac{1}{3}a^2 and c = \dfrac{1}{9}ab = \dfrac{1}{27}a^3. Now x^3+ax^2+bx+c=(x+\dfrac{1}{3}a)^3 decomposes in \mathbb{Q}[x], a contradiction.

Observe the polynomial x^4+2x^2+1=(x^2+1)^2 in \mathbb{Q}[x]. There are two lists of invariant factors for matrices with this characteristic polynomial, and thus there are two similarity classes over \mathbb{Q}. In \mathbb{C} this polynomial decomposes to (x-i)^2(x+i)^2, and there are seen to be four lists of invariant factors and thus four similarity classes over \mathbb{C}.~\square

Tuesday, August 13, 2013

Complex Computations and Inequalities (1.13-15)

Walter Rudin Principles of Mathematical Analysis, chapter 1, exercises 13-15:

13. For complex x,y show

||x|-|y||≤|x-y|

14. If z is complex and |z|=1, compute

|1+z|^2+|1-z|^2

15. Under what conditions does equality hold in the Schwarz inequality? I.e.,

|\sum a_j\overline{b_j}|^2 = \sum |a_j|^2 \sum |b_j|^2

for complex a_j,b_j.

Proof: (13) Assume |x|≥|y|. Then we must show |x|-|y| > |x-y| is impossible. Multiply both sides by (positive) |x|+|y| to obtain |x|^2-|y|^2 > |x-y|(|x|+|y|) ≥ |x-y||x+y| = |x^2-y^2| so |x^2| > |x^2-y^2|+|y^2| ≥ |x^2|, a contradiction. The case is parallel when |y|≥|x|.

(14) Write z=a+bi such that a^2+b^2=1. We see |1-z|=|z-1| and now |z+1|^2+|z-1|^2=(a+1)^2+b^2+(a-1)^2+b^2=2(a^2+b^2)+2=4.

(15) Let v_a,v_b∈\mathbb{C}^k be the vectors of the a_j and b_j. Let A = \sum |a_j|^2, B = \sum |b_j|^2, and C = \sum a_j\overline{b_j}. We claim |C|^2=AB if and only if v_b is associate to v_a (i.e. v_b=cv_a for some c∈\mathbb{C}) or at least one of v_a or v_b is zero. (⇒) Assume |C|^2=AB and v_a,v_b \neq 0, so A,B > 0. As we have seen, ∑|Ba_j-Cb_j|^2 = B(AB-|C|^2). When |C|^2=AB we thus have Ba_j=Cb_j, i.e. a_j=\dfrac{C}{B}b_j for all j, so v_a = \dfrac{C}{B}v_b; since v_a \neq 0 we have C \neq 0, and v_b = \dfrac{B}{C}v_a. (⇐) When either one of v_a or v_b is zero the equality clearly holds, so assume v_a,v_b \neq 0 and v_b = cv_a for some complex c. We can thus manipulate

B=\sum |b_j|^2 = \sum |ca_j|^2 = |c|^2\sum |a_j|^2 = |c|^2A

and

\overline{c}A=\overline{c}\sum a_j\overline{a_j} = \sum a_j\overline{b_j} = C

so for all j with a_j \neq 0 (when a_j = 0, also b_j = ca_j = 0) we have

Ba_j = |c|^2Aa_j = \dfrac{|c|^2a_j\overline{a_j}A}{\overline{a_j}} = \dfrac{|ca_j|^2}{\overline{a_j}}A = b_j \dfrac{\overline{b_j}}{\overline{a_j}} A = b_j \overline{c} A = Cb_j

so that ∑ |Ba_j - Cb_j|^2 = B(AB - |C|^2) = 0. Since v_b \neq 0 and thus B > 0, we have |C|^2 = AB.~\square
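A quick numerical illustration of the equality case (the vector v_a and scalar c below are arbitrary illustrative values of mine):

```python
# Proportional vectors attain equality in the Schwarz inequality
va = [1 + 2j, 3 - 1j, 0.5j]
c = 2 - 1j
vb = [c * z for z in va]                      # v_b = c v_a

A = sum(abs(z)**2 for z in va)
B = sum(abs(z)**2 for z in vb)
C = sum(x * y.conjugate() for x, y in zip(va, vb))

print(abs(abs(C)**2 - A * B) < 1e-9)  # True: |C|^2 = AB up to rounding
```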

Saturday, August 10, 2013

Real Logs (1.7)

Walter Rudin Principles of Mathematical Analysis, chapter 1, exercise 7:

Fix b > 1, y > 0 and prove that there is a unique real x such that b^x = y by completing the following outline. Say x = \text{log}_b y.
(a) For natural n, b^n - 1 ≥ n(b-1).
(b) Hence b - 1 ≥ n(b^{1/n}-1).
(c) If t > 1 and n > (b-1)/(t-1) then b^{1/n} < t.
(d) If w is such that b^w < y then b^{w+1/n} < y for sufficiently large n.
(e) If b^w > y then b^{w-1/n} > y for sufficiently large n.
(f) Let A = \{w~|~b^w < y\} and show b^{\text{sup }A} = y.
(g) Show x is unique.

Proof: (a) We show b^n ≥ n(b-1)+1. This is clear when n=1, so by induction

b^n = bb^{n-1} ≥ b((n-1)(b-1)+1) = (n-1)b(b-1)+b

and we must show (n-1)b(b-1)+b ≥ n(b-1)+1. Collecting terms on the left and manipulating, we must thus show (n-1)(b-1)^2 ≥ 0, which is clear as all these terms are nonnegative.
(b) This follows from (a) applied to b^{1/n} in place of b, noting b^{1/n} > 1 since c^n ≤ 1 for c ≤ 1 by induction.
(c) Assume b^{1/n} ≥ t. Then b - 1 < n(t-1) ≤ n(b^{1/n}-1), a contradiction by (b).
(d) We have 1 < \dfrac{y}{b^{w}}. Observe b^{1/n} ≤ \dfrac{b-1}{n}+1 by (b), so we may choose n such that \dfrac{b-1}{n}+1 < \dfrac{y}{b^{w}}: since \dfrac{y}{b^w}-1 > 0, any n > \dfrac{b-1}{\dfrac{y}{b^w}-1} suffices, and then b^{w+1/n} < y.
(e) As before, since we showed b^{1/n} tends toward 1 and \dfrac{b^w}{y} > 1, we can choose n such that b^{1/n} < \dfrac{b^w}{y}, so that b^{w-1/n} > y.
(f) A is nonempty, and by (a) b^w grows without bound, so A is bounded above and x = \text{sup }A exists. Suppose b^x < y; then by (d) we have b^{x+1/n} < y for some n, so x is not an upper bound of A. Suppose b^x > y; then by (e) choose n with b^{x-1/n} > y, so x is not a least upper bound. Therefore b^x = y.
(g) If x' \neq x with b^{x'} = y, then 0 = b^x - b^{x'} = b^{x'}(b^{x-x'}-1), so b^{x-x'}=1 with x-x' \neq 0, a contradiction since b > 1.~\square
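The sup construction of (f) translates directly into a bisection scheme; this Python sketch (the function name log_b is my own) approximates \text{log}_b y and compares against the standard library:

```python
import math

def log_b(b, y, iters=80):
    """Approximate sup{ w : b**w < y } by bisection, mirroring part (f) (b > 1, y > 0)."""
    lo, hi = -1.0, 1.0
    while b**lo >= y:
        lo *= 2                      # push lo down until b**lo < y
    while b**hi <= y:
        hi *= 2                      # push hi up until b**hi > y
    for _ in range(iters):
        mid = (lo + hi) / 2
        if b**mid < y:
            lo = mid                 # mid is in A, so the sup lies above
        else:
            hi = mid                 # mid is an upper bound of A
    return lo

print(abs(log_b(3.0, 7.0) - math.log(7, 3)) < 1e-12)  # True
```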

Friday, August 9, 2013

Real Exponents (1.6)

Walter Rudin Principles of Mathematical Analysis, chapter 1, exercise 6:

Fix b>1.
(a) Let m,n,p,q be integers, n,q > 0 with m/n=p/q. Prove

(b^m)^{1/n}=(b^p)^{1/q}

Hence it makes sense to define b^{m/n}=(b^{m})^{1/n}.
(b) Prove b^{r+s}=b^rb^s for r,s∈\mathbb{Q}.
(c) For x∈\mathbb{R}, define B(x) = \{b^t~|~t∈\mathbb{Q},~t≤x\}. When r∈\mathbb{Q}, prove

b^r=\text{sup }B(r)

Hence it makes sense to define

b^x=\text{sup }B(x)

(d) For x,y∈\mathbb{R}, prove b^{x+y}=b^xb^y.

Proof: (a) Recall (z_1z_2)^{1/n}=z_1^{1/n}z_2^{1/n} for natural n. For natural n_1,n_2 it is also clear that (z^{n_1})^{n_2}=z^{n_1n_2}, and when n_1,n_2 may be negative the equality holds by case analysis together with the definition z^{-n}=(z^n)^{-1}. Therefore, we derive

(b^m)^{1/n}=(b^{1/n})^m=(((b^{1/n})^m)^q)^{1/q}=((b^{1/n})^{mq})^{1/q}=((b^{1/n})^{pn})^{1/q}=(((b^{1/n})^n)^p)^{1/q}=(b^p)^{1/q}

(b) The case is clear for integer r,s. As well, we note z^{1/n_1n_2}=(z^{1/n_1})^{1/n_2}, as ((z^{1/n_1})^{1/n_2})^{n_1n_2}=(((z^{1/n_1})^{1/n_2})^{n_2})^{n_1}=z. Otherwise let r=r_1/r_2 and s=s_1/s_2. We have

b^{r+s}=b^{r_1/r_2+s_1/s_2}=b^{(r_1s_2+r_2s_1)/(r_2s_2)}=(b^{r_1s_2+r_2s_1})^{1/r_2s_2}=(b^{r_1s_2}b^{r_2s_1})^{1/r_2s_2}=(((b^{r_1})^{s_2})^{1/s_2})^{1/r_2}(((b^{s_1})^{r_2})^{1/r_2})^{1/s_2}=b^rb^s

(c) Lemma 1: When a = a_1/a_2 > 0 is rational we have b^a > 1. This implies b^a > b^c for rationals a > c. Proof: Rewrite so that a_2 > 0 and a_1 > 0; by induction b^{a_1} > 1, so it suffices to prove b^{1/a_2} > 1 for b > 1. Here it suffices to prove that c ≤ 1 implies c^n ≤ 1 for natural n, which is clear by induction: c^n = cc^{n-1} ≤ c ≤ 1.

Now, for t≤r we show b^r ≥ b^t ⇔ b^t(b^{r-t}-1) ≥ 0, so since b^t is nonnegative and by the lemma b^{r-t}-1 is nonnegative we have b^r is an upper bound of B(r). As well, if z < b^r then since b^r∈B(r) we have z is not an upper bound of B(x) and thus b^r = \text{sup }B(r).

(d) Lemma 2: Let X⊆\mathbb{R} be bounded above such that when x∈X and y≤x, then y∈X. We see \text{sup }X = \text{sup }(X \setminus \{\text{sup }X\}). Proof: z = \text{sup }X bounds the latter, so assume y < z also bounds. But y doesn't bound X and thus y < x for some x∈X and we may choose some y < x_1 < x ≤ z so x_1 ∈ X \setminus \{\text{sup }X\} yet y < x_1 and z is the smallest bound.

Let B^*(x) = \{v ≤ b^t~|~t∈\mathbb{Q},~t≤x\} so that \text{sup }B^*(x) = \text{sup }B(x) and it serves as an equivalent definition for b^x. Now if b^x∈B^*(x) then by the above lemma we may remove it, as well as b^t for t < x, to define B^{**}(x) with an equivalent supremum. Now we must show that when z < \text{sup }B^{**}(x) then z < b^v for some rational v < x. We already see z < b^v for some rational v ≤ x, so we must construct rational v' < v such that z < b^{v'} < b^v; this may be done by observing lemma 1 and parts (a),(b), and (g) to show b^{1/n} tends to 1 and setting v' = v - 1/n for 1 < b^{1/n} < b^v/z.

Now, suppose b^xb^y < b^{x+y}; then b^xb^y < b^v for some rational v < x+y by the above discussion. Set rational v - y < v_1 < x and v - v_1 < v_2 < y so v_1+v_2 > v and b^xb^y ≥ b^{v_1}b^{v_2} = b^{v_1+v_2} > b^{v}.

Suppose b^{x+y} < b^xb^y. Then (b^x)^{-1}b^{x+y} < b^{v_2} for some rational v_2 < y. Choose rational v_2 < v_3 < y and set v_4 = v_3 - v_2 > 0, and then choose x-v_4 < v_1 < x. We have b^{v_2} > (b^x)^{-1}b^{x+y} ≥ (b^x)^{-1}b^{v_1+v_2+v_4}, implying 1 > (b^x)^{-1}b^{v_1}b^{v_4} and then b^x > b^{v_1+v_4}. But v_1+v_4 > x, and choosing rational x < x' < v_1+v_4 we see b^{x'} is an upper bound of B(x), so b^{v_1+v_4} > b^{x'} ≥ b^x is a contradiction.~\square

Thursday, August 8, 2013

Special Modules over PIDs (12.1.20-22)

Dummit and Foote Abstract Algebra, section 12.1, exercises 20-22:

20. Let R be an integral domain with field of fractions F. Prove \text{dim }F ⊗_R M = \text{rank }M for F ⊗_R M a vector space over F and M any R-module.

21. Let R be a PID. Prove any finitely generated projective R-module is free.

22. Let R be a non-field PID. Prove no finitely generated R-module is injective.

Proof: (20) (\text{dim} ≤ \text{rank}) We see \{f ⊗ m~|~f∈F,m∈M\} spans F ⊗_R M, so there is a basis \{f_i ⊗ m_i\}_{i∈I}, and then \{1⊗m_i\} is also a basis. We claim \{m_i\} is a set of linearly independent elements in M; otherwise we have ∑r_im_i=0 with some r_i \neq 0, implying the nontrivial dependence ∑r_i(1⊗m_i)=0, a contradiction. (\text{rank} ≤ \text{dim}) Let \{m_i\} be a set of linearly independent elements. We claim \{1⊗m_i\} are linearly independent over R: we have ∑r_i(1⊗m_i)=∑1⊗r_im_i=1⊗∑r_im_i; since the m_i are linearly independent, if some r_i \neq 0 then ∑r_im_i \neq 0 and is nontorsion, so by 10.4.8 1⊗∑r_im_i \neq 0 and the 1⊗m_i are linearly independent over R. Consequently, clearing the denominators of any dependence over F leads to an equation of the form \dfrac{1}{d}(∑r_i(1⊗m_i))=0, so that ∑r_i(1⊗m_i)=0, and now r_i=0 for all i and the 1⊗m_i are linearly independent over F.

(21) M is projective if and only if it is a direct summand of a free module, in which case it is a submodule of a free module and hence has no torsion elements. By the fundamental theorem this eliminates any elementary divisors of M and forces M to be free.

(22) M≅R^n⊕R/(a_1)~⊕~...~⊕~R/(a_k) is injective if and only if each of its summands is injective by 10.5.4. By Baer's criterion, M is injective over a PID if and only if rM=M for all nonzero r∈R so necessarily R/(a_i) is not injective and we must only prove R is not injective; indeed, for p∈R a prime we see 1∉pR.~\square

Wednesday, August 7, 2013

Proof of Finitely Generated Module Theorem Over EDs by Matrices (12.1.19)

Dummit and Foote Abstract Algebra, section 12.1, exercise 19:

By the previous two exercises we may perform elementary row and column operations on a given relations matrix by choosing different generators for R^n and \text{ker }φ. If all relation matrices are the zero matrix then \text{ker }φ=0 and M=R^n. Otherwise let a_1 be a nonzero g.c.d. of all the entries in a fixed initial relations matrix for M.

(a) Prove that by elementary row and column operations we may assume a_1 occurs in a relations matrix (a_{ij}) where a_1 \mid a_{ij} for all i,j.
(b) Prove there is a relations matrix of the form (a_{ij}) where a_{1,1} = a_1 | a_{ij} for all i,j and a_{1k}=a_{k1}=0 for all k > 1.
(c) Let a_2 be a g.c.d. of all the entries except the element a_1 in the relations matrix in (b). Prove there is a relations matrix of the form above where additionally a_{2,2} = a_2 and a_{2k} = a_{k2} = 0 for k \neq 2.
(d) Prove there is a relations matrix diagonalized from the top left a_1,a_2,...,a_k for some k ≤ n such that a_1 \mid a_2 \mid ... \mid a_k. Derive the theorem for finitely generated modules over Euclidean Domains.

Proof: (a,b) Let N : R → \mathbb{N} be the norm on R. For a matrix M with entries in R define N(M) to be the minimum norm taken over the nonzero entries over M. For A the initial relations matrix, let A' be a minimal matrix attainable from A by elementary row and column operations with regard to N(A'), and let a_1 be the nonzero entry of A' possessing such minimum norm. Move a_1 to position (1,1) by a row and column interchange. We claim a_1 divides all the entries of A', and as such is the g.c.d. of all the entries of A (since any divisor of all the entries of A still acts as a divisor after an operation, and the operation is reversible). First, we show a_1 \mid a_{1i} and a_1 \mid a_{i1}. By definition of the norm write a_{1i} = qa_1 + r, and subtract q times the first column from the i^{th} column to set r in the (1,i) position; since N(r) < N(a_1) or r = 0, we must have the latter by the minimality of A', and thus a_1 \mid a_{1i}. The case for the first column is parallel.

Now attain a matrix like the one in (b) by subtracting appropriate multiples of the first row and column. We show a_1 \mid a_{ij} for all i,j and we will be done. Add the i^{th} row to the first to set a_{ij} in the (1,j) position while still retaining a_1 in the (1,1) position (as the entry in the (i,1) is zero), and as above prove the remainder of the division is zero. This is matrix B.

(c) Simply iterate the process of (a,b) by focusing on the matrix B_{1,1}. These operations will not disturb the outer part of the matrix because of the zeros.

(d) Continue iterating to obtain the matrix in (d). By considering the new basis for R^n, we have

R^n/\text{ker }φ = (R~⊕~...~⊕~R) / (Ra_1~⊕~...~⊕~Ra_k~[⊕~0~⊕~...~⊕~0]) ≅ R^{n-k}~⊕~R/(a_1)~⊕~...~⊕~R/(a_k)~\square
Though the above proof is nonconstructive, a process for obtaining such a matrix of minimal norm can be described as follows: given A, move a nonzero element of minimal norm to the top left. Use Euclid's algorithm to obtain the smallest possible norm among the elements of the first row, which is then a g.c.d. of the row; clear the other elements of the row to zero and repeat with the first column. Now if a_{1,1} doesn't divide some a_{ij}, add row i to row 1 and use Euclid's algorithm between a_{1,1} and a_{1,j} to obtain an element of smaller norm. Repeat until completion.
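This process can be sketched in code for the Euclidean domain R = \mathbb{Z}; the function below (its name and details are my own, and it returns the diagonal entries only, up to sign) follows the steps just described:

```python
def smith_diagonal(M):
    """Diagonalize an integer matrix by elementary row/column operations,
    following the process above; returns the diagonal d1 | d2 | ... (up to sign)."""
    A = [row[:] for row in M]
    R, C = len(A), len(A[0])
    t = 0
    while t < min(R, C):
        # move a nonzero entry of minimal norm in the trailing block to (t, t)
        entries = [(abs(A[i][j]), i, j) for i in range(t, R)
                   for j in range(t, C) if A[i][j]]
        if not entries:
            break
        _, pi, pj = min(entries)
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        # Euclid: clear row/column t; a nonzero remainder becomes a smaller pivot
        redo = False
        for i in range(t + 1, R):
            q, r = divmod(A[i][t], A[t][t])
            for j in range(t, C):
                A[i][j] -= q * A[t][j]
            redo = redo or r != 0
        for j in range(t + 1, C):
            q, r = divmod(A[t][j], A[t][t])
            for i in range(t, R):
                A[i][j] -= q * A[i][t]
            redo = redo or r != 0
        if redo:
            continue
        # if the pivot misses some a_ij, fold row i into row t and redo Euclid
        bad = [i for i in range(t + 1, R)
               if any(A[i][j] % A[t][t] for j in range(t + 1, C))]
        if bad:
            for j in range(C):
                A[t][j] += A[bad[0]][j]
            continue
        t += 1
    return [A[i][i] for i in range(min(R, C))]

print([abs(d) for d in smith_diagonal([[2, 0], [0, 3]])])  # [1, 6]
```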

Tuesday, August 6, 2013

Irreducible Modules Over PIDs (12.1.14)

Dummit and Foote Abstract Algebra, section 12.1, exercise 14:

Letting M be a torsion module over R a P.I.D., prove M is irreducible if and only if M=Rm for some nonzero element m∈M with prime annihilator (p).

Proof: (⇒) M is clearly generated by a single nonzero element m, since any nonzero m∈M must have Rm = M, lest Rm be a nontrivial submodule. Assume \text{Ann }(m)=(r) for composite, nonzero r, so write r = ab for nonzero nonunits a and b. We show Rbm ⊂ Rm = M is proper: assuming otherwise, we can write xbm = m for some x∈R, and now axbm = am ⇒ am = 0 despite (r) ⊂ (a), a contradiction. Further, Rbm \neq 0, since bm = 0 would give b∈(ab) and force a to be a unit; so Rbm is a nontrivial submodule, a contradiction. (⇐) Take nonzero rm ∈ M; we thus have p \not \mid r and there are elements a,b∈R providing ar+bp = 1. We have arm = (ar+bp)m=m, so that Rrm = M and M has no nontrivial submodules.~\square

Monday, August 5, 2013

Direct Sums and Rank (12.1.3)

Dummit and Foote Abstract Algebra, section 12.1, exercise 3:

Let R be an integral domain and let A and B be R-modules of rank m and n respectively. Show A ⊕ B is an R-module of rank m+n.

Proof: Letting a_1,...,a_m and b_1,...,b_n be the fulfilling linearly independent elements, we see (a_1,0),...,(0,b_n) are linearly independent. Observing the homomorphism (x,y) ↦ (x \text{ mod }⟨a_1,...,a_m⟩, y \text{ mod }⟨b_1,...,b_n⟩) admitted by these elements, we see the kernel is the submodule generated by the latter. By 12.1.2, it suffices to show the quotient is torsion, whence A ⊕ B is of rank m+n. Take \overline{(a,b)}. Also by 12.1.2, the respective images of A and B under this homomorphism are those of A and B under the homomorphisms provided by their own linearly independent elements, and as such there are nonzero r_1,r_2 such that r_1\overline{a}=r_2\overline{b}=0, so that r_1r_2 is nonzero and r_1r_2\overline{(a,b)}=0.~\square

Friday, August 2, 2013

Torsion Elements and Rank (12.1.1)

Dummit and Foote Abstract Algebra, section 12.1, exercise 1:

Let M be a module over the integral domain R.
(a) Suppose x \neq 0 is a torsion element of M. Show x and 0 are "linearly dependent." Conclude the rank of \text{Tor}(M) is 0, so that in particular any torsion R-module has rank 0.
(b) Show \text{rank} M = \text{rank} M/\text{Tor}(M).

Proof: (a) By definition we have rx=0 for some nonzero r, a dependence relation on x (and 0). Therefore \text{Tor}(M) contains no nonempty linearly independent sets and has rank 0.

(b) Let n = \text{rank } M and m = \text{rank } M/\text{Tor}(M). (m ≤ n) Let \overline{x_1},...,\overline{x_m}∈M/\text{Tor}(M) be linearly independent. If m > n then we can arrange for a linear dependence

r_1x_1+...+r_mx_m = 0

which clearly extends to give a linear dependence in the quotient, a contradiction. (n ≤ m) Let x_1,...,x_n∈M be linearly independent. Assume that their images in the quotient are linearly dependent, so that

r_1x_1+...+r_nx_n∈\text{Tor}(M)

with some r_i \neq 0; by definition of \text{Tor}(M) we obtain a nonzero annihilator yielding a contradictory linear dependence.~\square

Thursday, August 1, 2013

Tensor, Symmetric, and Exterior Subalgebras (11.5.14)

Dummit and Foote Abstract Algebra, section 11.5, exercise 14:

Prove that when the R-module M is a direct summand of N by φ, then \mathcal{T}(M), \mathcal{S}(M), and \bigwedge M are R-subalgebras of \mathcal{T}(N), \mathcal{S}(N), and \bigwedge N, respectively.

Proof: For each of the above algebras from M, we have induced algebra homomorphisms

Φ(m_1 ⊗ ... ⊗ m_k)=φ(m_1) ⊗ ... ⊗ φ(m_k)

Φ(m_1 ⊗ ... ⊗ m_k \text{ mod }\mathcal{A}(M)) = φ(m_1) ⊗ ... ⊗ φ(m_k) \text{ mod }\mathcal{A}(N)

Φ(m_1 ∧ ... ∧ m_k) = φ(m_1) ∧ ... ∧ φ(m_k)

As well, we have a natural homomorphism φ' : N → M by collapsing onto the M component, which induces homomorphisms of algebras Φ'. Since we see φ'∘φ=1, likewise we have Φ'∘Φ=1, so that Φ is injective.~\square