
Tuesday, July 30, 2013

Group Generation Through Exterior Algebras (11.5.9)

Dummit and Foote Abstract Algebra, section 11.5, exercise 9:

Let R=\mathbb{Z}G be the group ring over the group G = \{1,σ\}. Letting M = \mathbb{Z} × \mathbb{Z} and defining σ(e_1)=e_1+2e_2, σ(e_2)=-e_2, show M is an R-module and \bigwedge^2 M is a group of order 2 generated by e_1 \wedge e_2.

Proof: Since every r∈R has a unique representation z_1σ+z_2 and every m∈M a unique representation (a,b)=ae_1+be_2, we may define the action of R on M from the action on these bases and extend linearly, obtaining rm = (z_1σ+z_2)(a,b) = (z_1a+z_2a,2z_1a-z_1b+z_2b). Note that σ(σ(a,b)) = σ(a,2a-b) = (a,b), so the action respects σ^2=1 and is associative.
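
The action can be sanity-checked mechanically; here is a minimal Python sketch of my own (the function names are illustrative, not from the text) confirming σ^2 = 1 on M and the formula above:

```python
# m = (a, b) encodes a*e1 + b*e2 in M = Z x Z.

def sigma(m):
    a, b = m
    return (a, 2 * a - b)  # sigma(e1) = e1 + 2e2, sigma(e2) = -e2

def act(z1, z2, m):
    """Action of z1*sigma + z2 on m = (a, b)."""
    a, b = m
    sa, sb = sigma(m)
    return (z1 * sa + z2 * a, z1 * sb + z2 * b)

for m in [(1, 0), (0, 1), (3, -5)]:
    assert sigma(sigma(m)) == m          # sigma has order 2 on M

assert act(1, 0, (1, 0)) == (1, 2)       # sigma(e1) = e1 + 2e2
assert act(2, 3, (1, 1)) == (5, 5)       # (2*sigma + 3)(e1 + e2), since sigma fixes e1 + e2
```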

Observe
(a,b) \wedge (c,d) = ( (a,0)+(0,b) ) \wedge ( (c,0) + (0,d) ) = (ad-bc)(e_1 \wedge e_2)
so that \bigwedge^2 M is generated by e_1 \wedge e_2 as an R-module. As well, we have
σ(e_1 \wedge e_2) = (σe_1) \wedge e_2 = (e_1+2e_2) \wedge e_2 = e_1 \wedge e_2
and
σ(e_1 \wedge e_2) = e_1 \wedge (σe_2) = e_1 \wedge -e_2 = -(e_1 \wedge e_2)
so that 2(e_1 \wedge e_2) = 0 and (z_1σ+z_2)(e_1 \wedge e_2) = (\overline{z_1+z_2})(e_1 \wedge e_2), where the overline denotes reduction modulo 2, implying \bigwedge^2 M is either of order 1 or 2. To prove the latter, we shall construct an R-bilinear alternating form φ on M such that φ(e_1,e_2) \neq 0.

First, define ψ : R → \mathbb{Z}/2\mathbb{Z} by ψ(z_1σ+z_2)=\overline{z_1+z_2} and check that it is a ring homomorphism:
ψ(z_1σ+z_2+z_1'σ+z_2')=\overline{z_1+z_2+z_1'+z_2'}=ψ(z_1σ+z_2)+ψ(z_1'σ+z_2')
ψ((z_1σ+z_2)(z_1'σ+z_2'))=ψ((z_1z_2'+z_2z_1')σ+(z_1z_1'+z_2z_2'))=\overline{z_1z_2'+z_2z_1'+z_1z_1'+z_2z_2'}=\overline{(z_1+z_2)(z_1'+z_2')}=ψ(z_1σ+z_2)ψ(z_1'σ+z_2')
In particular, \mathbb{Z}/2\mathbb{Z} is an R-module through ψ. Now define φ : M × M → \mathbb{Z}/2\mathbb{Z} by φ((a,b),(c,d))=ψ(ad-bc). It alternates and is 1 on (e_1,e_2), so all that remains is to show it is bilinear, being additive in its components,
φ((a,b)+(a',b'),(c,d))=ψ((a+a')d-(b+b')c)=ψ(ad-bc)+ψ(a'd-b'c)=φ((a,b),(c,d))+φ((a',b'),(c,d))
and linear over R:
φ((z_1σ+z_2)(a,b),(c,d))=φ((z_1a+z_2a,2z_1a-z_1b+z_2b),(c,d))=\overline{z_1ad+z_1bc+z_2ad-z_2bc}=\overline{z_1(ad-bc)+z_2(ad-bc)}=ψ(z_1σ+z_2)φ((a,b),(c,d))~\square
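
As a sanity check on the algebra above, a short Python sketch of my own (encodings are assumptions, not part of the exercise) confirms that ψ is multiplicative and that φ alternates and is 1 on (e_1,e_2):

```python
# r = (z1, z2) encodes z1*sigma + z2 in R = ZG; psi reduces z1 + z2 mod 2.

def psi(z1, z2):
    return (z1 + z2) % 2

def mult(r, rp):
    (z1, z2), (w1, w2) = r, rp
    # (z1 s + z2)(w1 s + w2) = (z1 w2 + z2 w1) s + (z1 w1 + z2 w2), using s^2 = 1
    return (z1 * w2 + z2 * w1, z1 * w1 + z2 * w2)

def phi(m, n):
    (a, b), (c, d) = m, n
    return (a * d - b * c) % 2

for r in [(0, 1), (1, 0), (2, 3), (-1, 4)]:
    for rp in [(1, 1), (-1, 2), (5, 0)]:
        assert psi(*mult(r, rp)) == (psi(*r) * psi(*rp)) % 2   # multiplicative

assert phi((3, 4), (3, 4)) == 0   # alternating
assert phi((1, 0), (0, 1)) == 1   # phi(e1, e2) = 1, so e1 ^ e2 is nonzero
```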

Exterior Algebras and Fraction Fields (11.5.8c)

Dummit and Foote Abstract Algebra, section 11.5, exercise 8(c):

(c) Give an example of an integral domain R with fraction field F and an ideal I ⊆ R, considered as an R-module, such that \bigwedge^n I \neq 0 for all n.

Proof: Let R = \mathbb{Z}[x_1,...] and I = (x_1,...). Regard \mathbb{Z} as an R-module by letting each polynomial act through its constant term. It then suffices to find, for each n, an alternating R-multilinear map φ_n : I × ... × I → \mathbb{Z} (n factors) such that φ_n(x_1,...,x_n) = 1. To that end, define φ_n as follows,
φ_n: I × ... × I → \mathbb{Z}
φ_n(∑a_{1,i}x_i,...,∑a_{n,i}x_i)=\text{det }(a_{ij}')_{1≤i,j≤n}
where a'_{ij} is the constant term of a_{ij}. Note that ∑a_{i}x_i=∑b_{i}x_i implies that a_i'=b_i' for all i (compare degree-one terms), so that (a_{ij}')_{1≤i,j≤n} is uniquely determined and φ_n is well defined. This map is multilinear and alternating in the components of I × ... × I just as \text{det} is multilinear and alternating in matrix rows. Here (x_1,...,x_n) yields the identity matrix, and as such φ_n(x_1,...,x_n) = 1.\square
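
A small SymPy sketch (my own illustration; the helper name phi is hypothetical) makes the definition concrete for n = 3:

```python
import sympy as sp

x = sp.symbols('x1:4')  # x1, x2, x3

def phi(*elements):
    """det of the matrix of constant terms of the coefficient polynomials."""
    n = len(elements)
    rows = []
    for f in elements:
        f = sp.expand(f)
        # coefficient of x_i, then its constant term (set every x_j to 0)
        rows.append([f.coeff(x[i]).subs({xj: 0 for xj in x}) for i in range(n)])
    return sp.Matrix(rows).det()

print(phi(x[0], x[1], x[2]))                    # 1: the identity matrix
print(phi(x[0] + x[1], x[0] + x[1], x[2]))      # 0: repeated argument, alternating
```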

Sunday, July 21, 2013

Common Annihilator Through Matrices (11.4.3)

Dummit and Foote Abstract Algebra, section 11.4, exercise 3:

Let R be a commutative ring with 1 and let V be an R-module with x_1,...,x_n∈V. Letting W be the n × 1 column matrix of these elements, assume that for some A∈M_{n × n}(R),
AW=0
Prove (\text{det }A)x_i=0 for i∈\{1,...,n\}.

Proof: This implies a system
α_{11}x_1+α_{12}x_2+...+α_{1n}x_n=0
α_{21}x_1+α_{22}x_2+...+α_{2n}x_n=0
...
α_{n1}x_1+α_{n2}x_2+...+α_{nn}x_n=0
In the fashion of the cofactor expansion of the determinant along the first column, multiply the first row by \text{det }A_{11} and, for k∈\{2,...,n\}, add (-1)^{k+1}\text{det }A_{k1} times the k^{th} row to the first row to obtain
(\text{det }A)x_1+\sum_{j=2}^n (α_{1j}\text{det }A_{11}+\sum_{k=2}^n(-1)^{k+1}α_{kj}\text{det }A_{k1})x_j=0
(\text{det }A)x_1+\sum_{j=2}^n (\sum_{k=1}^n(-1)^{k+1}α_{kj}\text{det }A_{k1})x_j=0~~~~~(*)
For each j≥2, let B_j be the matrix A with the first column replaced by the j^{th} column. Since B_j has two equal columns, we see
0 = \text{det }B_j = \sum_{k=1}^n(-1)^{k+1}β_{k1}\text{det }(B_j)_{k1} = \sum_{k=1}^n(-1)^{k+1}α_{kj}\text{det }A_{k1}
so that (*) collapses to
(\text{det }A)x_1 = 0
By interchanging an arbitrary x_i with x_1 and letting A_i be the matrix A with its first and i^{th} columns interchanged, since this operation negates the determinant, the argument above gives
-(\text{det }A)x_i = 0 = (\text{det }A)x_i~~~\square
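
The manipulation above is precisely the first-column case of the adjugate identity \text{adj}(A)A=(\text{det }A)I, which holds over any commutative ring; a quick symbolic check of that identity with SymPy (a sketch of my own, not part of the exercise):

```python
import sympy as sp

n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i}{j}'))

# adj(A) * A = det(A) * I; applied to A W = 0, left-multiplying by adj(A)
# gives det(A) x_i = 0 in every coordinate of W.
diff = A.adjugate() * A - A.det() * sp.eye(n)
assert diff.applyfunc(sp.expand) == sp.zeros(n)
```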

Wednesday, July 17, 2013

Dual Annihilators (11.3.3)

Dummit and Foote Abstract Algebra, section 11.3, exercise 3:

Let S \subseteq V^* for V some finite dimensional space. Define \text{Ann}(S)=\{v∈V~|~f(v)=0\text{ for all }f∈S\}.

(a) Show \text{Ann}(S) is a subspace of V.
(b) Let W_1,W_2 \subseteq V^* be subspaces. Show \text{Ann}(W_1+W_2)=\text{Ann}(W_1)∩\text{Ann}(W_2) and \text{Ann}(W_1∩W_2)=\text{Ann}(W_1)+\text{Ann}(W_2).
(c) Prove W_1=W_2 ⇔ \text{Ann}(W_1)=\text{Ann}(W_2).
(d) Prove \text{Ann}(S)=\text{Ann}(\text{span }S).
(e) Assume V has for basis \{v_1,...,v_n\}. Prove that if S=\{v_1^*,...,v_k^*\} then \text{Ann}(S) has basis \{v_{k+1},...,v_n\}.
(f) Assume V is finite dimensional. Prove that if W^* \subseteq V^* is a subspace then \text{dim }\text{Ann}(W^*) = \text{dim }V - \text{dim }W^*.

Proof: We shall approach this problem from another angle to obtain the final result first, and then recover the rest as implications. But first, (a):
v_1,v_2∈\text{Ann}(S)⇒f(v_1+αv_2)=f(v_1)+αf(v_2)=0~~~~~∀f∈S
Let S be a finite set of linearly independent linear functionals v_1^*,...,v_k^*, perhaps a basis for an arbitrary subspace of V^*. Define a linear transformation
T : V → F^k
T(v)=(v_1^*(v),...,v_k^*(v))
Under this terminology, we see \text{Ann}(S)=\text{ker }T. Letting A be the k × n matrix of T from a basis e_1,...,e_n of V to the standard basis of F^k, the linearly independent functionals manifest as linearly independent rows, so that A has rank k and the kernel is of dimension n-k=\text{dim }V-\text{dim span }S. This is (f), since (d) is immediately evident. (e) then follows easily: the vectors v_{k+1},...,v_n clearly lie in \text{Ann}(S) and are independent, and the dimension count shows they span.
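
Numerically this is just rank-nullity for the matrix whose rows are the functionals; a brief SymPy sketch (the concrete functionals are an example of my own):

```python
import sympy as sp

# Rows express v1*, v2* in coordinates dual to a basis e1,...,e4 of V.
rows = sp.Matrix([[1, 0, 2, 0],
                  [0, 1, -1, 3]])
n, k = rows.cols, rows.rows

kernel = rows.nullspace()            # Ann(S) = ker T
assert rows.rank() == k              # independent functionals give rank k
assert len(kernel) == n - k          # dim Ann(S) = dim V - dim span S
```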

When S ⊆ V, define the subspace \text{Ann}(S)=\{v^*∈V^*~|~v^*(s)=0~\text{for all }s∈S\} to be the dual notion of the \text{Ann} defined above. A parallel argument by evaluation at the v_i gives \text{dim Ann}(S)=\text{dim }V^*-\text{dim span }S. When S is a subspace of V or of V^* we show \text{Ann}(\text{Ann}(S))=S: the inclusion S \subseteq \text{Ann}(\text{Ann}(S)) clearly holds, and the dimension count above shows \text{dim Ann}(\text{Ann}(S))=\text{dim }S, forcing equality. This now easily gives (c).

We now show \text{Ann}(W_1+W_2)=\text{Ann}(W_1)∩\text{Ann}(W_2) when W_1,W_2 are subspaces of V^* (or of V). (⊇) Let v∈\text{Ann}(W_1)∩\text{Ann}(W_2). We have (w_1+w_2)(v)=w_1(v)+w_2(v)=0 (respectively, v(w_1+w_2)=v(w_1)+v(w_2)=0). (⊆) Let v∈\text{Ann}(W_1+W_2). We have w_1(v)=(w_1+0)(v)=0 and similarly w_2(v)=0 (respectively, v(w_1)=v(w_1+0)=0 and v(w_2)=0). Now, to finish, we have
\text{Ann}(W_1∩W_2)=\text{Ann}(W_1)+\text{Ann}(W_2)
⇔\text{Ann}(\text{Ann}(W_1∩W_2))=\text{Ann}(\text{Ann}(W_1)+\text{Ann}(W_2))
⇔W_1∩W_2=\text{Ann}(\text{Ann}(W_1))∩\text{Ann}(\text{Ann}(W_2))=W_1∩W_2~\square

Monday, July 15, 2013

Dual Endomorphism Ring (11.3.1)

Dummit and Foote Abstract Algebra, section 11.3, exercise 1:

Let V be a vector space of finite dimension n. Prove that the map
\psi : \text{End}(V) \rightarrow \text{End}(V^*)
\psi(\varphi) = \varphi^*
is an isomorphism of vector spaces. Show \psi is not a ring isomorphism when n \geq 2. Exhibit an F-algebra isomorphism from \text{End}(V) to \text{End}(V^*).

Proof: Recall that \text{dim } V = \text{dim } V^*, implying \text{dim End}(V) = \text{dim End}(V^*), so that it suffices to show \psi is a nonsingular linear transformation. To show linearity, we must show
\psi(\varphi_1 + \alpha \varphi_2) = \psi(\varphi_1)+\alpha \psi(\varphi_2)
(\varphi_1 + \alpha \varphi_2)^*(f) = \varphi_1^*(f) + \alpha \varphi_2^*(f)~~~~~\forall f \in V^*
(\varphi_1 + \alpha \varphi_2)^*(f)(v) = \varphi_1^*(f)(v) + \alpha \varphi_2^*(f)(v)~~~~~\forall v \in V
f((\varphi_1 + \alpha \varphi_2)(v)) = f(\varphi_1(v)) + \alpha f(\varphi_2(v))
which follows from the linearity of f. To show nonsingularity, suppose \psi(\varphi)=0, which is to say \varphi^*(f) = 0 for all f \in V^*, which is to say \varphi^*(f)(v) = f(\varphi(v)) = 0 for all f \in V^* and all v \in V. If \varphi were nonzero we would have \varphi(v) = \sum a_ie_i for some v, where some a_k is nonzero. Letting f be the dual basis element that sends e_k to 1 and all other e_j to zero, we obtain a contradiction, for f(\varphi(v)) = f(\sum a_ie_i) = a_k \neq 0.

Suppose n \geq 2 and \psi is a ring isomorphism. Then
\psi(\varphi_1 \circ \varphi_2) = \psi(\varphi_1) \circ \psi(\varphi_2)
\psi(\varphi_1 \circ \varphi_2)(f) = \psi(\varphi_1) \circ \psi(\varphi_2)(f)~~~~~\forall f \in V^*
f \circ (\varphi_1 \circ \varphi_2)= f \circ (\varphi_2 \circ \varphi_1)
Choosing \varphi_1,\varphi_2 as n \times n matrices such that \varphi_2 \circ \varphi_1 = 0 but \varphi_1 \circ \varphi_2 is nonzero, we may choose f sending to 1 a basis element appearing with nonzero coefficient in the image of some v under \varphi_1 \circ \varphi_2, so that the equality is violated and \psi is not a ring isomorphism. Note that this approach does prove \psi is a ring isomorphism when n=1, as 1 \times 1 matrices commute.
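
Concretely, in coordinates \psi sends a matrix to its transpose with respect to the dual basis, and transposition reverses products; a two-by-two sketch (the witnesses are my own choice):

```python
import sympy as sp

A = sp.Matrix([[0, 1], [0, 0]])      # E_12
B = sp.Matrix([[0, 0], [1, 0]])      # E_21

assert (A * B).T == B.T * A.T        # duals compose in reverse order
assert (A * B).T != A.T * B.T        # so psi is not multiplicative
assert A * B != B * A                # A*B = E_11 while B*A = E_22
```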

Let V have basis e_1,...,e_n, so that V^* has basis e_1^*,...,e_n^*, \text{End}(V) has basis e_{ab} (considered as the n \times n matrix with 1 in position a,b and zeros elsewhere), and \text{End}(V^*) has basis e_{ab}^* for 1≤a,b≤n. Define ψ : \text{End}(V) → \text{End}(V^*) by its action on the basis of \text{End}(V) via ψ(e_{ab})=e_{ab}^*. This extends to a linear transformation that sends a basis to a basis and as such is nonsingular. All that remains is to demonstrate multiplicativity,
ψ(φ_1 \circ φ_2)=ψ(φ_1) \circ ψ(φ_2)
which holds since composition, viewed as matrix multiplication with respect to these bases, is carried over unchanged by the transformation.~\square

Saturday, July 13, 2013

Span and Linear Dependence Computations (11.2.27)

Dummit and Foote Abstract Algebra, section 11.2, exercise 27:

Let V be an m-dimensional vector space with basis e_1,...,e_m and let v_1,...,v_n be vectors in V. Let A be the m \times n matrix whose columns express the vectors in this basis and let A' be the reduced row echelon form of A.

(a) Let B be any matrix row equivalent to A. Let w_1,...,w_n be the vectors described by the columns of B. Prove that any linear relation
x_1v_1+...+x_nv_n = 0
implies
x_1w_1+...+x_nw_n = 0
(b) Prove that the vectors given by the pivotal columns of A' are linearly independent and the rest are linearly dependent on these.
(c) Prove v_1,...,v_n are linearly independent if and only if A' has n nonzero rows.
(d) By (c), the vectors v_1,...,v_n are linearly dependent if and only if A' has nonpivotal columns. The solutions to the linear dependence relations among v_1,...,v_n are given by the linear equations defined by A'. Show that the variables x_1,...,x_n corresponding to nonpivotal columns can be prescribed arbitrarily and the remaining variables are then uniquely defined to give the linear dependence relation.
(e) Prove that the subspace W spanned by v_1,...,v_n has dimension r where r is the number of nonzero rows of A' and that a basis for W is given by the original vectors corresponding to the pivotal columns of A'.

Proof: (a) Since a row swap can be replicated, up to a harmless sign, by successive additions of scalar multiples of one row to another, and scaling a row manifestly preserves any relation, it suffices to prove the relations are preserved when a scalar multiple of one row is added to another. Viewing the relation in the coordinates of the original basis, each unmodified row contributes zero to the sum as before, and the scalar-multiplied row being added is likewise already known to sum to zero against the x_i, so its addition to any row does not change the sum.

(b) These pivotal columns must be of the form e_i, so they are clearly independent. Every other column has its nonzero entries only in rows already containing an earlier pivotal element, and is thus generated by the pivotal columns.

(c) Viewing A' as A under a nonsingular matrix multiplication, we can see that v_1,...,v_n are linearly independent if and only if w_1,...,w_n are linearly independent if and only if A' has n nonzero rows.

(d) After prescribing these variables arbitrarily, the partial sum represents a vector in the basis e_1,...,e_m that is zero in every coordinate not reached by a pivotal vector, so that there are unique scalars on the pivotal vectors making the sum zero.

(e) Again viewing A' as A under multiplication by a nonsingular matrix, we see that the original vectors corresponding to the pivotal columns of A' form a basis for W, and there are exactly r such columns.
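
These facts are easy to watch in action; a small SymPy instance of (b) and (e) (the vectors below are my own example):

```python
import sympy as sp

# Columns are v1, v2 = 2*v1, v3 expressed in the basis e1, e2, e3.
A = sp.Matrix([[1, 2, 0],
               [1, 2, 1],
               [0, 0, 1]])

Aprime, pivots = A.rref()
print(pivots)        # (0, 2): v1 and v3 form a basis of W = span(v1, v2, v3)
print(A.rank())      # 2 = r, the number of nonzero rows of A'
```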

Friday, July 12, 2013

Stable Subspaces of Linear Transformations (11.2.9)

Dummit and Foote Abstract Algebra, section 11.2, exercise 9:

Let φ∈\text{End}(V), and let W \subseteq V be a \varphi-stable subspace. Show that \varphi induces linear transformations \varphi_{|W} and \overline{\varphi} on the spaces W and V/W. Show that if \varphi_{|W} and \overline{\varphi} are nonsingular then \varphi is nonsingular. Show the converse holds when V is finite dimensional, but not necessarily when V is infinite dimensional.

Proof: \varphi_{|W} is clearly a linear transformation by the stability of W; define \overline{\varphi}(\overline{v})=\overline{\varphi(v)}. For well-definedness, suppose \overline{v_1}=\overline{v_2}; then v_1-v_2 \in W, so v_1=v_2+w for some w∈W and \overline{\varphi(v_1)}=\overline{\varphi(v_2)+\varphi(w)}=\overline{\varphi(v_2)}, since \varphi(w)∈W by stability.

Assume these two are nonsingular, and suppose φ(v)=0. If v∈W, then since φ_{|W} is nonsingular we have v=0. If v∉W, then \overline{v}≠0, and since \overline{φ} is nonsingular we have \overline{φ(v)}=\overline{φ}(\overline{v})≠0, so a fortiori φ(v)≠0, a contradiction.

For the converse, since φ is nonsingular we naturally have φ_{|W} is nonsingular. Assume V is finite dimensional, so that W is also finite dimensional, and suppose φ(v)∈W in order to show \overline{φ} is nonsingular. Since φ_{|W} is a nonsingular endomorphism of the finite dimensional space W, it is surjective, so we may obtain w∈W such that φ(w)=φ(v), implying v=w by the nonsingularity of φ; in particular v∈W and \overline{v}=0.

Now assume V is the direct sum of countably many copies of \mathbb{R}. Letting φ be the right shift operator and W the subspace of vectors whose first coordinate is zero, we see φ is nonsingular, yet \overline{φ} is manifestly not, since φ(e_1)=e_2∈W gives \overline{φ}(\overline{e_1})=0 while \overline{e_1}≠0.
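
In the finite dimensional case the picture is transparent in matrix form: extending a basis of W to one of V, a W-stable φ is block upper triangular with \text{det }φ = \text{det }φ_{|W} \cdot \text{det }\overline{φ}, so φ is nonsingular exactly when both induced maps are. A sketch of that viewpoint (the example matrix is mine):

```python
import sympy as sp

P = sp.Matrix([[1, 2, 5],
               [3, 4, 6],
               [0, 0, 7]])   # W = span(e1, e2) is P-stable (zero block below)

phi_W = P[:2, :2]            # matrix of phi restricted to W
phi_bar = P[2:, 2:]          # matrix of the induced map on V/W
assert P.det() == phi_W.det() * phi_bar.det()
```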

Monday, July 8, 2013

Bases and Cardinality (11.1.12-14a)

Dummit and Foote Abstract Algebra, section 11.1, exercises 12-14a:

12. If F is a countable field and V is an infinite dimensional vector space with basis \mathcal{B}, prove |\mathcal{B}|=|V|.
13. Prove \mathbb{R}^n≅\mathbb{R} as vector spaces over \mathbb{Q}.
14a. Let \mathcal{A} be a basis for the infinite dimensional vector space V over F. Prove V ≅ \oplus_{a∈\mathcal{A}}F.

Proof: Throughout these exercises we shall use the fact that a countable union of copies of an infinite set S, and a finite direct product of copies of S, have the same cardinality as S.

(12) By the inclusion mapping we clearly see |\mathcal{B}| ≤ |V|. Letting V_i for i∈\mathbb{N} be the set of elements of V whose expression in the basis involves exactly i nonzero terms, we have that V is the countable union of the V_i, so it suffices to show |V_i| ≤ |\mathcal{B}|. We can observe
|\mathcal{B}| = |\mathcal{B} \times \mathcal{B}| \geq |F \times \mathcal{B}| = |(F \times \mathcal{B})_1 \times ... \times (F \times \mathcal{B})_i| \geq |V_i|
so that
|\mathcal{B}| = |\bigsqcup_{i∈\mathbb{N}} \mathcal{B}| \geq |\bigsqcup_{i∈\mathbb{N}} V_i| = |V|
and now |\mathcal{B}|=|V|.

(13) First we prove \mathbb{R}^2≅\mathbb{R}, from which the proposition easily follows by induction. Assume the basis \mathcal{A} of \mathbb{R} over \mathbb{Q} is finite. Then \mathbb{R} is isomorphic to the direct sum of finitely many copies of \mathbb{Q} by the next exercise, which would imply \mathbb{R} is countable, a contradiction. Now, there is a basis \mathcal{B} for \mathbb{R}^2 given by placing \mathcal{A} in each component, whose cardinality is |\mathcal{A} \sqcup \mathcal{A}|, so that |\mathcal{B}|=|\mathcal{A}|. Linearly extend the homomorphism induced by a bijection \mathcal{B}→\mathcal{A} to obtain an isomorphism.

(14a) By mapping the coefficients of a vector's expression in the basis to the corresponding coordinates of the direct sum we obtain the evident isomorphism. The direct sum is clearly a vector space, being closed under componentwise addition and multiplication by scalars of F.

Sunday, July 7, 2013

Basis Calculation (11.1.1)

Dummit and Foote Abstract Algebra, section 11.1, exercise 1:

Let V=\mathbb{R}^n and let (a_1,...,a_n)∈V be fixed. Let W \subseteq V be the set of vectors (x_1,...,x_n) such that x_1a_1+...+x_na_n=0. Prove W is a subspace and find a basis for W.

Proof: If all a_i=0 then clearly W=V and e_1,...,e_n suffice as a basis. Otherwise, let a_m≠0. For (x_1,...,x_n),(y_1,...,y_n)∈W we see
(x_1,...,x_n)-r(y_1,...,y_n)=(x_1-ry_1,...,x_n-ry_n)
as well as
(x_1-ry_1)a_1+...+(x_n-ry_n)a_n=(x_1a_1+...+x_na_n)-r(y_1a_1+...+y_na_n)=0
so that W is a subspace. We claim e_i-(a_i/a_m)e_m for all i∈\{1,...,m-1,m+1,...,n\} is a basis for W. Foremost, all of these are seen to be vectors of W. Linear independence:
b_1(e_1-(a_1/a_m)e_m)+...+b_{m-1}(e_{m-1}-(a_{m-1}/a_m)e_m)+b_{m+1}(e_{m+1}-(a_{m+1}/a_m)e_m)+...+b_n(e_n-(a_n/a_m)e_m)=0
⇒b_1e_1+...+b_{m-1}e_{m-1}+b_{m+1}e_{m+1}+...+b_ne_n-(\sum_{i≠m} b_i a_i/a_m)e_m=0
⇒b_1,...,b_{m-1},b_{m+1},...,b_n=0
This implies W is either of dimension n-1 or n. If it were the latter, then we would have W=V despite e_m∉W, so that necessarily W is of dimension n-1 and now this set must be a basis.~\square
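
The claimed basis can be checked against a computer algebra system's null space routine; a quick sketch with a concrete choice of (a_1,...,a_n) (mine, for illustration):

```python
import sympy as sp

a = sp.Matrix([[2, 3, 0, 5]])        # here a_m = a_1 = 2 is nonzero, n = 4

basis = a.nullspace()                # a basis of W = {x : a . x = 0}
assert len(basis) == 3               # dim W = n - 1
for v in basis:
    assert (a * v)[0] == 0           # every basis vector lies in W
```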

Friday, July 5, 2013

Flat Tensor Products (10.5.23)

Dummit and Foote Abstract Algebra, section 10.5, exercise 23:

When M is a flat right R-module and S is a ring considered as a left R-module by some identity-fixing ring homomorphism R → S, prove that M \otimes_R S is a flat right S-module.

Proof: Let 0 → A → B be an exact sequence of left S-modules with map ψ. Since S is a free right S-module of rank 1, it is flat, and therefore 1 \otimes_S ψ : S \otimes_S A → S \otimes_S B is injective. Moreover, this is an exact sequence of left R-modules, and since M is flat, 1 \otimes_R (1 \otimes_S ψ) : M \otimes_R (S \otimes_S A) → M \otimes_R (S \otimes_S B) is injective. By associativity of the tensor product, this is the homomorphism induced by the functor (M \otimes_R S) \otimes_S \_, which is to say M \otimes_R S is a flat right S-module.~\square

Tuesday, July 2, 2013

Rings Inducing Projective and Injective Modules (10.5.6)

Dummit and Foote Abstract Algebra, section 10.5, exercise 6:

Prove every R-module is projective if and only if every R-module is injective.

Proof: If every R-module is projective, then every short exact sequence 0 → D → M → N → 0 splits, since N is an R-module and thus projective; therefore an arbitrary D is injective. Likewise, if every R-module is injective, then for an arbitrary R-module D the sequence 0 → L → M → D → 0 splits, as L is injective, so that D is projective.~\square

Direct Sums of Special Modules (10.5.3-5)

Dummit and Foote Abstract Algebra, section 10.5, exercises 3-5:

3. Prove Q_1 \oplus Q_2 is a projective R-module if and only if Q_1 and Q_2 are projective R-modules.
4. Prove Q_1 \oplus Q_2 is an injective R-module if and only if Q_1 and Q_2 are injective R-modules.
5. Prove Q_1 \oplus Q_2 is a flat R-module if and only if Q_1 and Q_2 are flat R-modules. Prove \sum A_i is a flat R-module if and only if each A_i is a flat module.

Proof: Lemma: Let \mathcal{F}_i be functors of R-modules. \bigoplus \mathcal{F}_i is exact if and only if every \mathcal{F}_i is exact. Proof: (\Leftarrow) Letting 0 → L → M → N → 0 be exact with maps ψ and φ, since the \mathcal{F}_i are exact functors, (\bigoplus \mathcal{F}_i)(ψ) is injective by observation of components, and likewise (\bigoplus \mathcal{F}_i)(φ) is surjective; moreover, (\bigoplus \mathcal{F}_i)(φ) is zero on and only on elements whose individual coordinates belong to the images of their respective functored ψ homomorphisms, i.e. \text{ker }(\bigoplus \mathcal{F}_i)(φ)=\text{img }(\bigoplus \mathcal{F}_i)(ψ). (\Rightarrow) For some inexact \mathcal{F}_n, we can observe inexactness in \bigoplus \mathcal{F}_i in a natural fashion. For example, if \text{ker }\mathcal{F}_n(φ)≠\text{img }\mathcal{F}_n(ψ), then the kernel of (\bigoplus \mathcal{F}_i)(φ) and the image of (\bigoplus \mathcal{F}_i)(ψ) do not coincide, by adducing the kernel-image inexactness in the coordinate in question.~\square

Proof: (3-5) The M_i are all projective (or injective, or flat; here the index set may be infinite in the flat case) if and only if \text{Hom}_R(M_i,\_) (or \text{Hom}_R(\_,M_i), or M_i \otimes_R \_) are all exact, if and only if, by the lemma, \bigoplus \text{Hom}_R(M_i,\_)≅\text{Hom}_R(\bigoplus M_i,\_) (or \bigoplus \text{Hom}_R(\_,M_i)≅\text{Hom}_R(\_,\bigoplus M_i), or \bigoplus (M_i \otimes_R \_) ≅ (\bigoplus M_i) \otimes_R \_) is exact, if and only if \bigoplus M_i is projective (or injective, or flat).~\square