
Saturday, February 22, 2014

Fracture Hypothesis (Research)


Hypotheses formulated in mid-May, 2013.

Let R be a non-field UFD, and for x∈R∖\{0\} let ω(x)=ω(up_1^{α_1}...p_n^{α_n})=∑_{i=1}^n α_i. For each i∈ℕ, let V_i=\{x∈R~|~ω(x)=i\}∪\{0\}.

Weak Fracture Hypothesis: For any non-field UFD R, there exists i∈ℕ such that V_i is not a group under addition.

Strong Fracture Hypothesis: For any non-field UFD R, for all i∈ℕ^+ necessarily V_i is not a group under addition. Equivalently, V_1 is not a group under addition.
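For R=ℤ the conclusion is visible immediately: 3 and 5 lie in V_1, but their sum 8=2^3 lies in V_3. A quick check (the `omega` helper below is an illustrative trial-division sketch for integers only, not part of the notes):

```python
def omega(x):
    """Number of prime factors of a nonzero integer, counted with multiplicity."""
    x = abs(x)
    count, d = 0, 2
    while d * d <= x:
        while x % d == 0:
            x //= d
            count += 1
        d += 1
    if x > 1:  # leftover prime factor
        count += 1
    return count

# 3 and 5 lie in V_1, but 3 + 5 = 8 = 2^3 lies in V_3,
# so V_1 is not closed under addition in Z.
print(omega(3), omega(5), omega(3 + 5))  # 1 1 3
```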

What would otherwise be the very "strongest" FH (that V_0 is also never an additive group) is false: taking any field F and letting R=F[x], V_0 is the set of constant polynomials, which form a group under addition.

SFH: Assume R violates the SFH, with V_i an additive group for some i∈ℕ^+.

Proposition: The image of ℤ∖\{0\} in R lies within V_0, i.e. the image of z∈ℤ in R is a unit whenever it is nonzero. Proof: For x∈V_i and positive z, we have x+...+x (z times) =zx∈V_i since V_i is a group, so z appends nothing to the prime factorization of x and is therefore a unit. Since also (-1)^2=1, we have -1 is a unit as well, and now all nonzero integers are seen to be units.

Proposition: V_0,V_1,...,V_{i-1} are additive groups. Proof: Assume V_0 is not a group, so that some difference of units u_1-u_2 is nonzero and divisible by a prime. Then u_1p^i,u_2p^i∈V_i for any prime p, yet u_1p^i-u_2p^i=(u_1-u_2)p^i∉V_i, contradicting that V_i is a group. Now assume V_1 is not a group, so that p_1-p_2∉V_1 is nonzero for some primes p_1,p_2. Observe p_1^i-p_2p_1^{i-1}=(p_1-p_2)p_1^{i-1}∉V_i, the same contradiction. The case follows similarly for V_k for any k < i.

Prime Infinitude: There are an infinite number of distinct primes of R. Proof: Assume P is a complete, finite list of primes in R. Observe x=(∏_{p∈P}p)-1. Assume x is a unit; then since V_0 is an additive group, x+1=∏_{p∈P}p is also a unit, a contradiction. So for some q∈P we have q divides x, and by construction q divides ∏_{p∈P}p, so q divides 1, another contradiction.
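This is Euclid's argument; the classical instance in ℤ (illustrative trial-division sketch only): with P=\{2,3,5,7\}, x=∏p-1=209 factors as 11·19, producing primes outside P.

```python
from math import prod

def prime_factors(n):
    # naive trial division; fine for small n
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

P = [2, 3, 5, 7]          # pretend this were a complete list of primes
x = prod(P) - 1           # 209
print(x, prime_factors(x))  # 209 [11, 19] -- primes outside P, as the proof predicts
```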

Proposition: When x∈V_a and y∈V_b for a≠b and x≠0≠y (with V_a,V_b additive groups), we have x+y∉V_a∪V_b. Proof: Assume x+y∈V_a. Then (x+y)-x=y∈V_a, so y∈V_a∩V_b=\{0\} and y=0, a contradiction. The case for V_b holds similarly.

Algebraic Geometry/Rank of V_1 and/or Closedness of V_0: (First method due to Professor Li) The rank of V_1 as a vector space over V_0 is greater than 2. Proof: Assume the rank is 2, so R≅V_0[x,y]/P for some prime ideal P identifying x,y with primes in R. Necessarily P=(f(x,y),p(x)) for some f(x,y)∈V_0[x,y], p(x)∈V_0[x], each either prime or zero in their respective rings. We see p(x)=0 since each prime in R is transcendental over V_0, so simply P=(f(x,y)). Now, V_0 must be infinite since V_1 must be infinite to satisfy the infinitude of primes above, so choose distinct units u,v. Write Q_1=x+u=(α_1x+β_1y)...(α_nx+β_ny) in R, so Q_1-x-u=f(x,y)g(x,y) in V_0[x,y]. The left-hand side is a polynomial in V_0[x,y] with three homogeneous components, so by comparing terms on the right we see the largest homogeneous component m(x,y) of f(x,y) divides Q_1, so without loss assume α_1x+β_1y is a factor of m(x,y). But by taking the same approach to Q_2=x+v, we see Q_2 as a polynomial in V_0[x,y] also has α_1x+β_1y as a factor. This is impossible, as Q_1-Q_2=u-v∈V_0 is nonzero while simultaneously α_1x+β_1y divides Q_1-Q_2 in V_0[x,y], and now too in R.

Theorem: Assume the rank n of V_1 as a vector space over V_0 is finite, and that V_0 is algebraically closed. Then R is not a PID. Proof: By the above, n≥3. Consider the surjective ring homomorphism V_0[x_1,...,x_n]→R induced by mapping the variables to the generating primes of V_1 over V_0. Then by the Nullstellensatz, there is a common zero u∈V_0^n of the polynomials generating the kernel of this homomorphism. Consider the nontrivial homomorphism φ:R→V_0 induced by this zero. Since φ restricts to a linear transformation V_1→V_0 over V_0, and since n≥3, by rank-nullity the kernel of this restriction has rank at least 2, i.e. it contains two nonassociate primes, and hence in R the kernel of the full homomorphism cannot be principal.

WFH: Fix some infinite sequence of primes p_1,p_2,.... For each pair of integers i≤j define ρ_{ij}:V_i→V_j by x↦(∏_{k=i+1}^j p_k)x. These mappings are seen to satisfy the following properties for i≤j≤k: ρ_{ii}=1 and ρ_{jk}∘ρ_{ij}=ρ_{ik}. As well, they are group homomorphisms (and are in addition injective), so this family forms a directed system and admits a direct limit.
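Unwinding the definition, the composition law is just a telescoping of the products:

```latex
\rho_{jk}(\rho_{ij}(x))
  = \Bigl(\prod_{m=j+1}^{k} p_m\Bigr)\Bigl(\prod_{m=i+1}^{j} p_m\Bigr)x
  = \Bigl(\prod_{m=i+1}^{k} p_m\Bigr)x
  = \rho_{ik}(x).
```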

Proposition: For any units u_1≠u_2 and prime p, we have p+u_1 is not within the same fracture as p+u_2. Proof: Neither is within V_0 (since V_0 is a group, p+u∈V_0 would force p∈V_0), so assume p+u_1,p+u_2∈V_i for some positive i. Since V_i is an additive group, we have (p+u_1)-(p+u_2)=u_1-u_2∈V_i; but u_1-u_2∈V_0 is nonzero, so u_1-u_2∉V_i, a contradiction.

Lemma: V_0 is countable. Proof: Fix a prime p and let φ(u) be the index of the fracture containing p+u. By the previous proposition, this mapping into ℕ is injective, so the units are countable and V_0 is countable.

Lemma: V_1 is countable. Proof: Fix a unit u and let φ(p) be the index of the fracture containing u+p (this index is at least 2 by the proposition on sums from distinct fractures). Assume φ(p_1)=φ(p_2), i.e. p_1+u,p_2+u∈V_i. Since V_i is additive, we have (p_1+u)-(p_2+u)∈V_i, while also (p_1+u)-(p_2+u)=p_1-p_2∈V_1. Since V_1∩V_i=\{0\}, we have (p_1+u)-(p_2+u)=0 and p_1=p_2, so the mapping is injective and the primes are countable. Since every element of V_1 is a unit times a prime and V_0 is countable, V_1 is countable.

Lemma: R is countable. Proof: Since R=\bigcup_{n∈\mathbb{N}} V_n, it remains to show that each V_n is countable. We can establish a surjection φ : V_0 \times \prod_{i=1}^n V_1 → V_n given by (u,p_1,p_2,...,p_n) \mapsto up_1p_2...p_n. Since both V_0 and V_1 are countable, and finite direct products of countable sets are countable, we have V_n is countable and now R is countable.

Lemma: There are no nontrivial automorphisms of R. Proof: Let φ be a nontrivial automorphism of R. Since φ is determined by its action on units and primes, either φ fails to fix some prime, or φ fails to fix some unit u; in the latter case, choosing any prime p', either φ(p')≠p', or φ(p')=p' and then φ(up')=φ(u)p'≠up', so the prime up' is not fixed. Hence in either case we may say φ(p)≠p for some prime p.

Now, write 1+p∈V_k; necessarily k > 1, by the proposition that sums from distinct fractures lie outside both. In any UFD, automorphisms send units to units and primes to primes (hence preserve ω), so also φ(1+p)∈V_k. Hence (1+p)-φ(1+p)∈V_k. But also (1+p)-φ(1+p)=p-φ(p)∈V_1, so since V_1∩V_k=\{0\} we must have p=φ(p), a contradiction.

Module Structure: Note that each V_i is a vector space over V_0. We may define a multilinear map of V_0-modules \prod_{i=1}^n V_1 → V_n by (p_1,...,p_n) \mapsto p_1...p_n to induce an appropriate homomorphism of V_0-modules Φ_n : \otimes^n V_1 → V_n.

Exact Sequences: Since V_0 is a field and V_1 a V_0-module, V_1 is injective as a V_0-module, so the injective homomorphism V_1 → V_n given by x \mapsto p_1^{n-1}x splits and we obtain V_n ≅ V_1 \oplus V_n'.

Tensor Algebras: V_1 is a vector space over V_0, and R is a commutative V_0-algebra with a natural inclusion linear transformation V_1 → R, inducing an appropriate extension homomorphism of V_0-algebras Φ: S(V_1) → R. This is nonzero on nonzero 1- and 2-tensor sums; we claim \text{ker }Φ is generated by 3-tensor sums. First note that for any two simple tensors s_1 and s_2, we have Φ(s_1+s_2)=Φ(s_3) for some simple tensor s_3, since every element of R factors as a unit times primes and hence is the image of a simple tensor. Therefore, for any simple tensors s_1 and s_2, let f(s_1,s_2) be a simple tensor such that s_1+s_2+f(s_1,s_2) is in \text{ker }Φ. Now proceed by induction on n for an n-tensor sum ∑_{i=1}^nt_i in \text{ker }Φ: we observe 0=Φ(\sum_{i=1}^nt_i)=Φ(t_1+t_2+\sum_{i=3}^nt_i)=Φ(-f(t_1,t_2)+\sum_{i=3}^nt_i). Since -f(t_1,t_2)+\sum_{i=3}^nt_i is an (n-1)-tensor sum in \text{ker }Φ, by induction it is generated by 3-tensor sums in \text{ker }Φ, and we have \sum_{i=1}^nt_i=-f(t_1,t_2)+\sum_{i=3}^nt_i+(t_1+t_2+f(t_1,t_2)) to complete the claim.

Furthermore, we can partially predict the behavior of the function f. Under a violation of WFH, when s_1 and s_2 are simple tensors in S^n(V_1), f(s_1,s_2) may be taken in S^n(V_1). Since this function completely determines the 3-tensor sums generating the kernel, we may derive: SFH is violated if and only if there is a vector space V over a field F whose symmetric algebra contains an ideal A such that (i) A is generated by 3-tensor sums, (ii) A contains no nonzero 1- or 2-tensor sums, and (iii) for every pair of simple tensors s_1,s_2 there is a simple tensor s_3 such that s_1+s_2+s_3∈A. As well, WFH is violated if and only if there are such V, F, and A satisfying additionally (iv) when s_1,s_2∈S^n(V) then s_3 may be taken in S^n(V).

Rank of V_1: Assume V_1 is generated over V_0 as a vector space by the elements p and q. Then V_0 is algebraically closed. Proof: We first show that the elements p^kq^{n-k} for k∈[0,n] are a basis for V_n over V_0. They generate, because products of the form xp+yq for x,y∈V_0 taken n at a time generate V_n and such products expand into sums of the p^kq^{n-k}. They are linearly independent: assume v_0p^n+v_1qp^{n-1}+...+v_nq^n=0, and further assume v_0≠0; then we have v_0p^n=qr for some r, a contradiction by unique factorization. So v_0=0; now divide the sum by q and apply the independence statement inductively to V_{n-1} to obtain v_i=0 for all i.

Now, consider a monic polynomial f(x)=x^n+a_{n-1}x^{n-1}+...+a_0 in V_0[x], and the element p^n+a_{n-1}qp^{n-1}+a_{n-2}q^2p^{n-2}+...+a_0q^n∈V_n. We must be able to write this according to its prime factorization as (p-α_1q)(p-α_2q)...(p-α_nq) for some α_i∈V_0 (the negative signs making the notational argument simpler; note that we may ignore primes of the form yq for y∈V_0, as the expansion requires the coefficient of p to be nonzero in every factor). We see that α_1,...,α_n behave exactly as would the roots β_1,...,β_n in writing f(x)=(x-β_1)...(x-β_n)=(x-α_1)...(x-α_n), so that f(x) splits over V_0.
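For instance, with n=2 the comparison of coefficients reads (a worked instance of the argument, using that p^2, qp, q^2 are a basis of V_2 over V_0):

```latex
p^2 + a_1 qp + a_0 q^2
  = (p - \alpha_1 q)(p - \alpha_2 q)
  = p^2 - (\alpha_1 + \alpha_2)\, qp + \alpha_1 \alpha_2\, q^2,
```

so α_1+α_2=-a_1 and α_1α_2=a_0, which are exactly the relations satisfied by the roots of f(x)=x^2+a_1x+a_0; hence f(x)=(x-α_1)(x-α_2) splits over V_0.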

Open Problems: When is R generated as an additive group by V_0 and V_1? If R is generated as such, then any sum r_1+r_2 in R can be evaluated with knowledge of the generation of r_1 and r_2, knowledge of addition in V_0 and V_1 (to regroup the sums), and knowledge of sums of the form v_0+v_1. In other words, since Φ∘f above is a homomorphism on V_0 × V_1, we see R is generated by primes and units iff \text{img }Φ∘f=R. In any case, we see when R is prime-unit generated that it is additively isomorphic to V_0 × V_1, and when we have knowledge about the generation of V_2 from V_0 and V_1 we can make V_0 × V_1 into a ring isomorphic to R via (v_0,v_1)(v_0',v_1')=(v_0v_0',v_0'v_1+v_0v_1')+S, where S is the element of V_0 × V_1 corresponding to v_1v_1'. This is to say that the additive structure depends only on V_0 and V_1 in this case, but the multiplicative structure requires knowledge about V_2, i.e. about the map V_1^2→V_0 × V_1. Keep in mind the additive automorphism group of V_0 × V_1 is \text{Aut}(V_0) × \text{Aut}(V_1).

A UFD not generated as an additive group by its primes and units is \mathbb{C}[x]; some UFDs that are so generated are \mathbb{Z}, \mathbb{Z}[x], \mathbb{Q}[x], and \mathbb{F}_p[x].

If V_0,V_1 generate V_2 as an additive group, then R is generated by V_0,V_1. To see this, assume an ungenerated element x with n=\omega(x) minimal, and write x=yq for q prime, so that \omega(y)=n-1 and y is generated. Multiplying y's generating sum by q yields a sum of elements in V_1 and V_2, which by the hypothesis can be represented by a sum of elements in V_0 and V_1, contradicting that x is ungenerated.


Under the violation hypotheses, R is not generated as an additive group by V_0 and V_1, and in fact V_2 isn't either. Generation would provide nonassociate u_1+p_1,u_2+p_2∈V_2, where necessarily u_1,u_2≠0 (V_2 contains nonassociate elements, and any V_0,V_1-generated element regroups into the form u+p with both parts nonzero, since V_2∩V_0=V_2∩V_1=\{0\}). But then we can write u_1+p_1-(u_1/u_2)(u_2+p_2)=p_1-(u_1/u_2)p_2∈V_1∩V_2=\{0\}, so that u_1+p_1=u_1+(u_1/u_2)p_2=(u_1/u_2)(u_2+p_2) is associate to u_2+p_2 by u_2/u_1, a contradiction.

Vandermonde and Discriminants (Graph Theoretic Proof) (14.6.27)

Dummit and Foote Abstract Algebra, section 14.6, exercise 27:

MathJax TeX Test Page Let f(x) be a monic polynomial with roots α_1,...,α_n.
(a) Show that the discriminant D of f(x) is equal to the square of the Vandermonde determinant. \begin{vmatrix} 1&α_1&α_1^2&\cdots&α_1^{n-1} \\ 1&α_2&α_2^2&\cdots&α_2^{n-1} \\ 1&α_3&α_3^2&\cdots&α_3^{n-1} \\ \vdots&\vdots&\vdots&\ddots&\vdots \\ 1&α_n&α_n^2&\cdots&α_n^{n-1} \end{vmatrix} = \prod_{i > j}(α_i-α_j) (b) Taking the Vandermonde matrix above, multiplying on the left by its transpose, and taking the determinant, show that one obtains D = \begin{vmatrix} p_0&p_1&p_2&\cdots&p_{n-1} \\ p_1&p_2&p_3&\cdots&p_n \\ p_2&p_3&p_4&\cdots&p_{n+1} \\ \vdots&\vdots&\vdots&\ddots&\vdots \\ p_{n-1}&p_n&p_{n+1}&\cdots&p_{2n-2} \end{vmatrix} where p_k=∑α_i^k, which can be computed in terms of the coefficients of f(x) using Newton's Formulas from exercise 22. This gives an efficient procedure for calculating the discriminant of a polynomial.

Proof: (a) Expanding \prod_{i > j}(α_i-α_j) into sums, one finds that the summands can be represented by the collection of graphs on n points where, for every pair of points α_i and α_j, a "choice" is made according to whether the element taken from the term (α_i-α_j) is α_i or (negative) α_j. Visually, this may be depicted by a regular n-gon with exactly one arrow between each pair of vertices, with the vertices successively labeled α_1,...,α_n. Conversely, a summand may be recovered from a diagram of this type by counting how many arrows point to the vertex α_i (notated d(i)) and retrieving \pm ∏α_i^{d(i)}, where the sign is negative exactly when the number of arrows pointing from higher-indexed vertices to lower-indexed vertices is odd.

First, a lemma.

Lemma: Given such a diagram on n points, if d(i)=d(j) for any i≠j, then there exist points a,b,c such that a→b→c→a.
Proof: The lemma is vacuously true for n=2. We shall prove it inductively for n: Given a counterexample, assume there does not exist z such that d(z)=n-1. Then every point has an arrow leaving it (a point with no outgoing arrows would receive an arrow from every other point). Starting from any point x_1, one may follow arrows without dead ends until some point repeats, so that the existence of a circuit is guaranteed. Observe a circuit y_1→y_2→...→y_k→y_1 of minimum length k: if k > 3, then either y_1 points to y_3 or y_3 points to y_1. In the former case y_1→y_3→...→y_k→y_1 is a shorter circuit violating minimality, and in the latter case so too is y_1→y_2→y_3→y_1. Hence k=3, contradicting the counterexample's lack of triangles.

Therefore there is z such that d(z)=n-1, i.e. all other points point to α_z. Note that in the equality d(i)=d(j) assumed by a counterexample, necessarily i,j≠z, as there cannot be two points toward which every other point yields an arrow (since one of the two must yield an arrow to the other). As such, examine the diagram on n-1 points given by removing α_z from the diagram along with all the arrows pointing toward it (α_z itself yields no arrows) to obtain a counterexample of the lemma on a smaller diagram, an inductive contradiction. This establishes the lemma.
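The lemma can also be checked exhaustively for small diagrams. The sketch below (not part of the argument above) brute-forces all 64 diagrams on 4 points, confirming that a repeated in-degree always forces a triangle:

```python
from itertools import product, permutations, combinations

n = 4
pairs = list(combinations(range(n), 2))  # each pair gets exactly one arrow

violations = 0
for choice in product([0, 1], repeat=len(pairs)):
    # build the diagram: arrows[v] = set of vertices that v points to
    arrows = {v: set() for v in range(n)}
    for (a, b), c in zip(pairs, choice):
        if c:
            arrows[a].add(b)
        else:
            arrows[b].add(a)
    indeg = [sum(v in arrows[u] for u in range(n)) for v in range(n)]
    has_repeat = len(set(indeg)) < n          # d(i) = d(j) for some i != j
    has_triangle = any(b in arrows[a] and c in arrows[b] and a in arrows[c]
                       for a, b, c in permutations(range(n), 3))
    if has_repeat and not has_triangle:
        violations += 1
print(violations)  # 0
```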

Returning to the view of the diagrams as individual summands, establish a map between positive and negative diagrams of the type in the lemma (d(i)=d(j) for some i≠j) given by reversing the directions of the arrows on the "first" triangle a→b→c→a appearing in the diagram (the triangle in order of labels a_1,b_1,c_1 is before the triangle a_2,b_2,c_2 if the label of a_1 is smaller than the label of a_2; then compare b_1 with b_2, then c_1 with c_2). This map is a bijection as it is its own inverse, and reversing three arrows negates the sign of the summand while preserving each d(i). This ultimately implies the summands of the form ±α_1^{β_1}α_2^{β_2}...α_n^{β_n} where β_i=β_j for some i≠j cancel out in ∏_{i > j}(α_i-α_j), and the only ones remaining are those corresponding to diagrams where one point receives 0 arrows, another 1 arrow, ..., and the last n-1 arrows, of which there are exactly n!; in fact, these are all point-permutations of the graph on n points where point α_i receives i-1 arrows. These permutations and negations correspond precisely to the definition of the determinant of the Vandermonde matrix above.
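As a numeric spot check of (a) (illustrative only; `det` below is a naive Leibniz-expansion helper):

```python
from itertools import permutations

def det(M):
    # Leibniz expansion; fine for small matrices
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the sign of the permutation
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        p = sign
        for i in range(n):
            p *= M[i][perm[i]]
        total += p
    return total

alphas = [1, 2, 4]
V = [[a ** j for j in range(len(alphas))] for a in alphas]  # Vandermonde matrix
lhs = det(V)
rhs = 1
for i in range(len(alphas)):
    for j in range(i):
        rhs *= alphas[i] - alphas[j]
print(lhs, rhs)  # 6 6
```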

(b) If the i,j entry of the Vandermonde matrix V is v_{ij}=α_i^{j-1}, then the i,j entry of V^tV is \sum_{k=1}^n v_{ki}v_{kj}=\sum_{k=1}^n α_k^{(i-1)+(j-1)}=p_{i+j-2}, which is the matrix written above. As described, Newton's Formulas provide an efficient means of inductively calculating the p_i in terms of the elementary symmetric functions appearing in the coefficients of f(x), thus giving an efficient means of calculating the discriminant of f(x).~\square
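The whole procedure can be run end to end on the depressed cubic f(x)=x^3+px+q, whose discriminant is classically -4p^3-27q^2 (sketch only; `newton_power_sums` is an illustrative helper):

```python
def newton_power_sums(coeffs, m):
    """Power sums p_0..p_m of the roots of the monic polynomial
    x^n + c_{n-1}x^{n-1} + ... + c_0, with coeffs = [c_0, ..., c_{n-1}],
    computed via Newton's Formulas."""
    n = len(coeffs)
    # elementary symmetric functions: s_k = (-1)^k * c_{n-k}, and s_k = 0 for k > n
    s = [0] * (m + 1)
    for k in range(1, min(n, m) + 1):
        s[k] = (-1) ** k * coeffs[n - k]
    p = [n] + [0] * m
    for j in range(1, m + 1):
        acc = 0
        for i in range(1, j):
            acc += (-1) ** (i - 1) * s[i] * p[j - i]
        acc += (-1) ** (j - 1) * j * s[j]
        p[j] = acc
    return p

pp, qq = -1, 1                          # f(x) = x^3 - x + 1
ps = newton_power_sums([qq, pp, 0], 4)  # p_0..p_4
M = [[ps[0], ps[1], ps[2]],
     [ps[1], ps[2], ps[3]],
     [ps[2], ps[3], ps[4]]]
# 3x3 determinant by the rule of Sarrus
D = (M[0][0]*M[1][1]*M[2][2] + M[0][1]*M[1][2]*M[2][0] + M[0][2]*M[1][0]*M[2][1]
     - M[0][2]*M[1][1]*M[2][0] - M[0][0]*M[1][2]*M[2][1] - M[0][1]*M[1][0]*M[2][2])
print(D, -4*pp**3 - 27*qq**2)  # -23 -23
```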

Friday, February 21, 2014

Newton's Formula and Applications (14.6.22-26)

Dummit and Foote Abstract Algebra, section 14.6, exercises 22-26:

22. Let f(x) be a monic polynomial with roots α_1,...,α_n. Let s_k be the elementary symmetric function of degree k in the roots and define s_k=0 for k > n. Let p_k=∑α_i^k be the sum of the k^\text{th} powers of the roots for k ≥ 0. Derive Newton's Formulas for all j∈ℕ:
p_1-s_1=0
p_2-s_1p_1+2s_2=0
p_3-s_1p_2+s_2p_1-3s_3=0
...
p_j-s_1p_{j-1}+s_2p_{j-2}-...+(-1)^{j-1}s_{j-1}p_1+(-1)^jjs_j=0
23. (a) If x+y+z=1, x^2+y^2+z^2=2, and x^3+y^3+z^3=3, determine x^4+y^4+z^4.
(b) Prove x,y,z∉ℚ but x^n+y^n+z^n∈ℚ for all n∈ℕ.
24. Prove that an n×n matrix A over a field F of characteristic 0 is nilpotent iff \text{Tr}(A^k)=0 for all 1≤k≤n.
25. Prove than two n×n matrices A and B over a field F of characteristic 0 have the same characteristic polynomial iff \text{Tr}(A^k)=\text{Tr}(B^k) for all 1≤k≤n.
26. When A and B are two n×n matrices over a field F of characteristic 0, show the characteristic polynomials of AB and BA are the same.

Proof: (22) When I_n=\{1,2,...,n\}, let A_i=\{A⊆I_n~|~|A|=i\} so that we may say s_i=\sum_{A∈A_i}\prod_{a∈A}α_a. Now define
q_1=s_{j-1}p_1-js_j
q_i=s_{j-i}p_i-q_{i-1}
By rearranging Newton's j^\text{th} Formula, we see that we must prove p_j=s_1p_{j-1}-(s_2p_{j-2}-(...-(s_{j-1}p_1-js_j)...))=q_{j-1}. To this end, we shall inductively prove q_i=\sum_{A∈A_{j-i}} \sum_{a∈A} α_a^{i+1} \prod_{\substack{b∈A \\ b≠a}} α_b, which will establish the formula when i=j-1. For the base case i=1, q_1=s_{j-1}p_1-js_j is seen to be the sum of all combinations of roots multiplied j-1 at a time with a squaring of one of those roots, which agrees with the inductive pattern stated above.

Now, for the inductive step, we see q_i=s_{j-i}p_i-q_{i-1}. Note s_{j-i}p_i is the sum of all roots multiplied j-i at a time with one root raised to the (i+1)^\text{th} power, plus the sum of all roots multiplied j-i+1=j-(i-1) at a time with one root raised to the i^\text{th}=((i-1)+1)^\text{th} power. In fact, that second sum is inductively q_{i-1}, so that q_i accords with the pattern above and the induction is complete.
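With concrete roots, say 1, 2, 3, the formulas can be verified directly (`s` and `p` below are illustrative helpers computing the elementary symmetric functions and power sums):

```python
from itertools import combinations
from math import prod

roots = [1, 2, 3]
n = len(roots)

def s(k):  # elementary symmetric function of degree k in the roots
    return sum(prod(c) for c in combinations(roots, k)) if k <= n else 0

def p(k):  # power sum of the roots
    return sum(r ** k for r in roots)

# check Newton's j-th formula for j = 1..6
for j in range(1, 7):
    total = p(j)
    for i in range(1, j):
        total += (-1) ** i * s(i) * p(j - i)
    total += (-1) ** j * j * s(j)
    assert total == 0
print("Newton's Formulas hold for j = 1..6")
```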

(23) (a) Letting f(X)=(X-x)(X-y)(X-z), we see p_1=1, p_2=2, and p_3=3. Newton's Formulas provide a sufficient system:
p_1-s_1=0⇒s_1=1
p_2-s_1p_1+2s_2=0⇒s_2=-\dfrac{1}{2}
p_3-s_1p_2+s_2p_1-3s_3=0⇒s_3=\dfrac{1}{6}
Since s_4=0, we may proceed to calculate p_4:
p_4-s_1p_3+s_2p_2-s_3p_1=0⇒p_4=\dfrac{25}{6}
(b) Since we have s_1,s_2, and s_3, we may explicitly write f(X)=X^3-X^2-\dfrac{1}{2}X-\dfrac{1}{6}. Multiplying by 6, one may prove the polynomial 6X^3-6X^2-3X-1 has no rational roots by checking it against the rational root theorem, so x,y,z∉ℚ. Meanwhile, p_n may be deduced for all n∈ℕ by the inductive process shown at the end of (a), which makes all of its calculations in ℚ, hence p_n∈ℚ.
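The computation in (a) can be replayed with exact rational arithmetic:

```python
from fractions import Fraction as F

p1, p2, p3 = F(1), F(2), F(3)
s1 = p1                              # p_1 - s_1 = 0
s2 = (s1 * p1 - p2) / 2              # p_2 - s_1 p_1 + 2 s_2 = 0
s3 = (p3 - s1 * p2 + s2 * p1) / 3    # p_3 - s_1 p_2 + s_2 p_1 - 3 s_3 = 0
p4 = s1 * p3 - s2 * p2 + s3 * p1     # s_4 = 0, so p_4 - s_1 p_3 + s_2 p_2 - s_3 p_1 = 0
print(s1, s2, s3, p4)  # 1 -1/2 1/6 25/6
```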

(24) (⇒) We've seen the negative of the trace of a matrix manifests as the coefficient of x^{n-1} in its characteristic polynomial. Hence, since a nilpotent matrix has a minimal polynomial of the form x^m, the characteristic polynomial must be of the form x^n, so that \text{Tr}(A)=0. Since A^k is also nilpotent for all k, the same argument applies to each A^k. (⇐) This follows from the more general argument in (25): if \text{Tr}(A^k)=0=\text{Tr}(0^k) for 1≤k≤n, then A shares the characteristic polynomial x^n of the zero matrix, so A is nilpotent by Cayley-Hamilton.
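A quick illustration of the forward direction with a strictly upper triangular (hence nilpotent) matrix, using naive pure-Python matrix products:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# strictly upper triangular, so A^3 = 0
A = [[0, 5, -2],
     [0, 0, 7],
     [0, 0, 0]]
P = A
traces = []
for k in range(1, 4):
    traces.append(trace(P))  # Tr(A^k)
    P = matmul(P, A)
print(traces)  # [0, 0, 0]
```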

(25) (⇒) Put A and B in Jordan form over the algebraic closure K of F. Their main diagonals carry the same entries with the same multiplicities, given that their characteristic polynomials are the same and hence have the same roots in the same multiplicities. In general, given an upper triangular matrix C with main diagonal elements c_{ii}=γ_i, it is simple to show by induction that C^k is an upper triangular matrix with main diagonal elements γ_i^k, i.e. the strictly upper triangular elements have no effect on the main diagonal, ultimately implying through the Jordan forms of A and B that \text{Tr}(A^k)=\text{Tr}(B^k). (⇐) Once again given the Jordan forms A' and B' of A and B over K, we may deduce from the Newton's Formula system implied by the diagonal sums entailed by \text{Tr}(A'^k)=\text{Tr}(B'^k) for k∈I_n that the elementary symmetric functions in the roots of the characteristic polynomials of A and B are identical (here characteristic 0, or at least > n, is needed so that the term (-1)^jjs_j doesn't vanish), hence their characteristic polynomials are identical.

(26) For any two matrices A and B with AB=C, we see \text{Tr}(AB)=\sum_{m=1}^n c_{mm}=\sum_{m=1}^n \sum_{k=1}^n a_{mk}b_{km} = \sum_{k=1}^n \sum_{m=1}^n b_{km}a_{mk} = \text{Tr}(BA). Thus for all k we have \text{Tr}((AB)^k)=\text{Tr}(A(BA)^{k-1}B)=\text{Tr}((BA)^{k-1}BA)=\text{Tr}((BA)^k), hence by (25) AB and BA have the same characteristic polynomial.~\square
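For 2×2 matrices the characteristic polynomial is x^2-\text{Tr}(M)x+\det(M), so the claim can be spot-checked directly (illustrative helper functions, 2×2 only):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
AB, BA = matmul(A, B), matmul(B, A)

# same trace and determinant => same characteristic polynomial
print(tr(AB), tr(BA), det(AB), det(BA))  # 5 5 10 10
```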

Monday, February 10, 2014

Noncyclicity of Negative-Discriminant Quartic Fields over Q (14.6.19)

Dummit and Foote Abstract Algebra, section 14.6, exercise 19:

Let f(x) be an irreducible polynomial of degree 4 in ℚ[x] with discriminant D. Let K denote the splitting field of f(x), viewed as a subfield of ℂ.
(a) Prove ℚ(\sqrt{D})⊂K.
(b) Let τ denote complex conjugation and let τ_K denote complex conjugation restricted to K. Prove τ_K is an element of \text{Gal}(K/ℚ) of order 1 or 2 depending on whether K⊆ℝ.
(c) Prove that if D < 0 then K cannot be cyclic of degree 4 over ℚ.
(d) Prove generally that ℚ(\sqrt{D})⊈K for any D < 0 when K is a cyclic quartic field.

Proof: (a) \sqrt{D} is an expression in the roots of f(x) and hence is clearly within K. Since f(x) is irreducible we have [K~:~ℚ]≥4 while [ℚ(\sqrt{D})~:~ℚ]≤2, so the containment is proper.

(b) When K⊆ℝ, τ_K is merely the identity. So assume K⊈ℝ: then τ_K is a nonidentity automorphism of K (K is Galois over ℚ, being a splitting field, so τ restricts to an automorphism of K) satisfying τ_K^2=1, hence of order 2; its fixed field is exactly K∩ℝ, so [K~:~K∩ℝ]=2 with \text{Gal}(K/K∩ℝ)=⟨τ_K⟩.

(c,d) Assume D < 0, so that \sqrt{D}∉ℝ. Then τ_K≠1, and if \text{Gal}(K/ℚ) were cyclic of order 4, τ_K would be its unique element of order 2, the square of either generator. But then the unique quadratic subfield ℚ(\sqrt{D}) would be the fixed field of ⟨τ_K⟩, while τ_K(\sqrt{D})=-\sqrt{D}≠\sqrt{D}, a contradiction. For (d), nothing was needed beyond ℚ(\sqrt{D})⊆K and the fact that D < 0.~\square