Preprint
Article

Tangential and Anisotropic Bases in Quadratic Spaces

Submitted: 12 July 2024
Posted: 15 July 2024

Abstract
This paper presents an explicit method for constructing tangential and anisotropic bases in quadratic spaces. We also prove three theorems that allow us to calculate explicitly vectors that are orthogonal or tangent to given vectors, or that give necessary and sufficient conditions for the existence of such vectors. A classical theorem on quadratic spaces states that every finite-dimensional quadratic space has an orthogonal basis. A question that appears to remain open is under what conditions a finite-dimensional quadratic space has a basis consisting of isotropic vectors. A second problem is whether or not every finite-dimensional quadratic space has a basis consisting of mutually tangent vectors, where two vectors u, v are tangent if (u·v)² = (u·u)(v·v). In this paper, we attack these two problems.
Subject: Computer Science and Mathematics  -   Algebra and Number Theory

MSC:  11E88; 11E04; 15A63

1. Introduction

A detailed and basic study of quadratic forms appears in [4,5]. The anisotropic part of a quadratic form over a field has been studied by P. Koprowski and B. Rothkegel in [3]. On the other hand, algorithms for computing the anisotropic part of a quadratic form are systematically presented in the works of P. Koprowski and A. Czogała (see [1,2]). If V is a quadratic space, that is, a vector space with an inner product “·”, the equation u·v = 0, with u, v ∈ V, is usually interpreted as the orthogonality of the vectors u and v; there are numerous geometric examples that justify this point of view. However, little mention is made of the condition (u·v)² = (u·u)(v·v). In the present work, we propose to convince the reader, with several examples, that this last condition expresses the fact that the vectors u and v are tangent, after identifying vectors with everyday geometric entities. We also prove three theorems that allow us to calculate explicitly vectors that are orthogonal or tangent to given vectors, or that give necessary and sufficient conditions for the existence of such vectors.

2. Preliminary Facts

For the sake of simplicity and the reader's convenience, we begin this section with some basic concepts necessary for understanding this work.
Definition 1.
Let V be a vector space over a field F. An inner product on V is a map φ : V × V → F which satisfies the following properties:
(i)
φ(u_1 + u_2, v) = φ(u_1, v) + φ(u_2, v) for all u_1, u_2, v ∈ V;
(ii)
φ(λu, v) = λφ(u, v) for all λ ∈ F and u, v ∈ V;
(iii)
φ(u, v) = φ(v, u) for all u, v ∈ V.
We usually write u·v instead of φ(u, v). Two vectors u, v ∈ V are called orthogonal if u·v = 0 and tangent if (u·v)² = (u·u)(v·v).
A quadratic space V is a vector space over a field F endowed with an inner product “·”.
We say that V is regular if u·v = 0 for every u ∈ V implies that v = 0.
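To make the two conditions concrete, the following sketch (our illustration, not part of the paper; it assumes numpy and a hand-picked indefinite form G) tests orthogonality and tangency numerically in ℝ²:

```python
import numpy as np

# Inner product u·v = u^T G v for a symmetric matrix G.
# G is hand-picked and indefinite, so isotropic vectors exist.
G = np.array([[1.0, 0.0],
              [0.0, -1.0]])

def dot(u, v):
    return u @ G @ v

def orthogonal(u, v):
    return np.isclose(dot(u, v), 0.0)

def tangent(u, v):
    # (u·v)^2 = (u·u)(v·v)
    return np.isclose(dot(u, v) ** 2, dot(u, u) * dot(v, v))

u = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
iso = np.array([1.0, 1.0])        # isotropic: iso·iso = 1 - 1 = 0

print(orthogonal(u, w))           # True
print(tangent(u, iso))            # False: (u·iso)^2 = 1 but (u·u)(iso·iso) = 0
print(tangent(iso, 2 * iso))      # True: both sides vanish
```

Note that an isotropic vector is tangent to every vector it is orthogonal to, but to no anisotropic vector.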
Definition 2.
Let u ≠ 0 be a vector in the quadratic space V; we call u isotropic if u·u = 0 and anisotropic if u·u ≠ 0.
A subspace S of V is totally isotropic if for every pair u, v ∈ S we have u·v = 0.
Definition 3.
Let V be a quadratic space of finite dimension n. The index of V (ind V) is the maximum dimension of a totally isotropic subspace of V. So, ind V=0 if and only if V is anisotropic.
The Grammian of a basis B = {v_1, v_2, …, v_n} of V is the determinant det(v_i·v_j). We shall write Gram B = det(v_i·v_j).
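As a quick illustration (our own example, assuming numpy; the hyperbolic-plane form G below is hand-picked), the Grammian detects regularity even when the basis vectors themselves are isotropic:

```python
import numpy as np

# A hyperbolic plane: both basis vectors are isotropic, yet the space is regular.
G = np.array([[0.0, 1.0],
              [1.0, 0.0]])
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Gram B = det(v_i · v_j)
gram = np.array([[u @ G @ v for v in basis] for u in basis])
print(round(float(np.linalg.det(gram)), 6))   # -1.0: nonzero, so V is regular
```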
We will have the opportunity to use the following results; the proofs can be consulted in detail in each reference.
Theorem 1
(See [5], 42.1). Every finite-dimensional quadratic space has an orthogonal basis.
Theorem 2
(See [5], 42.2–42.4). The following assertions are equivalent in a finite-dimensional quadratic space V:
  • V is regular;
  • V has an orthogonal anisotropic basis;
  • For every basis B of V, Gram B ≠ 0;
  • There exists a basis B of V such that Gram B ≠ 0.
For each subset A ⊆ V, we define:
A^⊥ = {u ∈ V : a·u = 0 for all a ∈ A}.
Note that A^⊥ is a subspace of V.
Theorem 3
(See [5], 42.6). If U is an arbitrary subspace of V, we have:
dim U + dim U^⊥ = dim V + dim(U ∩ V^⊥).
Corollary 1.
If V is regular, then for every subspace U of V we have:
dim U + dim U^⊥ = dim V and U^⊥⊥ = U.
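Corollary 1 can be checked numerically. The sketch below (our illustration, assuming numpy; the form G and the subspace U are hand-picked) computes dim U^⊥ as the null-space dimension of U·G and verifies the dimension formula even when U is an isotropic line contained in its own U^⊥:

```python
import numpy as np

# dim U + dim U^perp = dim V for a regular form (Corollary 1).
G = np.diag([1.0, 1.0, -1.0])        # regular but indefinite form on R^3
U = np.array([[1.0, 0.0, 1.0]])      # rows span U; here U is an isotropic line

def perp_dim(U, G):
    # u lies in U^perp iff (U @ G) @ u = 0, so dim U^perp = dim V - rank(U @ G).
    return G.shape[0] - np.linalg.matrix_rank(U @ G)

print(U.shape[0] + perp_dim(U, G))   # 3 = dim V, even though U sits inside U^perp
```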

3. Tangency and Orthogonality

In this section, we will study vectors tangent or orthogonal to a fixed family of vectors.
Let V be a vector space over a field F. A vectorial determinant is an expression of the type:
$$v=\begin{vmatrix} v_1 & v_2 & \cdots & v_n\\ a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & \vdots & & \vdots\\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n} \end{vmatrix}$$
where v_1, v_2, …, v_n ∈ V and each a_{ij} ∈ F, for i ∈ {1, 2, …, n−1} and j ∈ {1, 2, …, n}. The expansion of this determinant is always along the first row, and therefore v = λ_1 v_1 + λ_2 v_2 + ⋯ + λ_n v_n, where λ_i is the cofactor of a_{0i} in the matrix:
$$\begin{pmatrix} a_{01} & a_{02} & \cdots & a_{0n}\\ a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & \vdots & & \vdots\\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n} \end{pmatrix}$$
where a_{0i} = 1 for each i = 1, …, n.
Theorem 4.
Let V be a regular quadratic space of finite dimension n over a field F; let B = {w_1, w_2, …, w_n} be a basis of V and let {v_1, v_2, …, v_{n−1}} be a collection of linearly independent vectors of V. Define v ∈ V by means of the vectorial determinant:
$$v=\begin{vmatrix} w_1 & w_2 & \cdots & w_n\\ w_1\cdot v_1 & w_2\cdot v_1 & \cdots & w_n\cdot v_1\\ \vdots & \vdots & & \vdots\\ w_1\cdot v_{n-1} & w_2\cdot v_{n-1} & \cdots & w_n\cdot v_{n-1} \end{vmatrix}$$
Then v ≠ 0, v·v_i = 0 for each i = 1, …, n−1, and v·v = det(w_i·w_j) det(v_i·v_j).
An immediate corollary of this theorem states that the vectors {v_1, v_2, …, v_{n−1}} are orthogonal to some anisotropic vector if and only if det(v_i·v_j) ≠ 0.
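The construction of Theorem 4 is easy to carry out numerically. The following sketch (our illustration, assuming numpy; the form G, the basis W, and the vectors v_1, v_2 are hand-picked for n = 3) expands the vectorial determinant along its first row and verifies both conclusions:

```python
import numpy as np

# Theorem 4 for n = 3: build v orthogonal to v_1, v_2 via the vectorial determinant.
G = np.diag([1.0, 2.0, -1.0])                 # hand-picked regular inner product
dot = lambda a, b: a @ G @ b

W = np.eye(3)                                  # basis w_1, w_2, w_3 (rows)
V1 = np.array([[1.0, 1.0, 0.0],                # linearly independent v_1, v_2
               [0.0, 1.0, 1.0]])

# Rows below the vector row of the determinant: entry (i, j) is w_j · v_i.
A = np.array([[dot(w, v) for w in W] for v in V1])

def cofactor_expand(A, W):
    # v = sum_j lambda_j w_j, with lambda_j the cofactor of position (1, j).
    n = W.shape[0]
    v = np.zeros(n)
    for j in range(n):
        v += (-1.0) ** j * np.linalg.det(np.delete(A, j, axis=1)) * W[j]
    return v

v = cofactor_expand(A, W)
gram = lambda M: np.linalg.det(np.array([[dot(a, b) for b in M] for a in M]))
print([round(float(dot(v, x)), 6) for x in V1])       # [0.0, 0.0]
print(np.isclose(dot(v, v), gram(W) * gram(V1)))      # True
```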
Theorem 5
(See [4,5]). Let V be a regular quadratic space of finite dimension n over a field F. Let B = {v_1, v_2, …, v_n} be a basis of V and let α_1, α_2, …, α_{n−1} ∈ F be arbitrary scalars. Consider the vector:
$$v=\begin{vmatrix} v_1 & v_2 & \cdots & v_n & 0\\ v_1\cdot v_1 & v_1\cdot v_2 & \cdots & v_1\cdot v_n & \alpha_1\\ \vdots & \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & v_{n-1}\cdot v_2 & \cdots & v_{n-1}\cdot v_n & \alpha_{n-1}\\ v_n\cdot v_1 & v_n\cdot v_2 & \cdots & v_n\cdot v_n & \mu \end{vmatrix}$$
where μ is a root of the quadratic polynomial:
$$p(X)=\begin{vmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} & X & 1\\ v_1\cdot v_1 & v_1\cdot v_2 & \cdots & v_1\cdot v_{n-1} & v_1\cdot v_n & \alpha_1\\ \vdots & \vdots & & \vdots & \vdots & \vdots\\ v_{n-1}\cdot v_1 & v_{n-1}\cdot v_2 & \cdots & v_{n-1}\cdot v_{n-1} & v_{n-1}\cdot v_n & \alpha_{n-1}\\ v_n\cdot v_1 & v_n\cdot v_2 & \cdots & v_n\cdot v_{n-1} & v_n\cdot v_n & X \end{vmatrix}$$
Let Δ = det(v_i·v_j) = Gram B. Then v·v_i = (−1)^{n−1} α_i Δ for each i = 1, …, n−1, v·v_n = (−1)^{n−1} μ Δ, and v·v = Δ².
Proof.
$$v\cdot v_i=\begin{vmatrix} v_1\cdot v_i & \cdots & v_n\cdot v_i & 0\\ v_1\cdot v_1 & \cdots & v_1\cdot v_n & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & \cdots & v_{n-1}\cdot v_n & \alpha_{n-1}\\ v_n\cdot v_1 & \cdots & v_n\cdot v_n & \mu \end{vmatrix}=\begin{vmatrix} 0 & \cdots & 0 & -\alpha_i\\ v_1\cdot v_1 & \cdots & v_1\cdot v_n & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_n\cdot v_1 & \cdots & v_n\cdot v_n & \mu \end{vmatrix}=(-1)^{n-1}\alpha_i\det(v_i\cdot v_j)=(-1)^{n-1}\alpha_i\Delta,$$
where the second determinant is obtained by subtracting the (i+1)-th row from the first. In a similar way, we prove that v·v_n = (−1)^{n−1} μ Δ. On the other hand,
$$v\cdot v=\begin{vmatrix} v\cdot v_1 & \cdots & v\cdot v_n & 0\\ v_1\cdot v_1 & \cdots & v_1\cdot v_n & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_n\cdot v_1 & \cdots & v_n\cdot v_n & \mu \end{vmatrix}=(-1)^{n-1}\Delta\begin{vmatrix} \alpha_1 & \cdots & \alpha_{n-1} & \mu & 0\\ v_1\cdot v_1 & \cdots & v_1\cdot v_{n-1} & v_1\cdot v_n & \alpha_1\\ \vdots & & \vdots & \vdots & \vdots\\ v_n\cdot v_1 & \cdots & v_n\cdot v_{n-1} & v_n\cdot v_n & \mu \end{vmatrix}=(-1)^{n-1}\Delta\left[p(\mu)+(-1)^{n-1}\Delta\right]=(-1)^{n-1}\Delta\left[0+(-1)^{n-1}\Delta\right]=\Delta^2.\;\square$$
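For n = 2 and the standard inner product on ℝ², the quantities of Theorem 5 can be written down explicitly. The sketch below (our illustration, assuming numpy, with a hand-picked α_1 = 0.6, for which p(X) reduces to 1 − α_1² − X² and the vectorial determinant expands to v = (−α_1, −μ)) verifies the three identities:

```python
import numpy as np

# Theorem 5 for n = 2 with the standard inner product on R^2 and alpha_1 = 0.6.
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha1 = 0.6
Delta = 1.0                                   # Gram determinant of {v1, v2}

def p(X):
    # The bordered determinant defining p(X); here it reduces to 1 - alpha1^2 - X^2.
    M = np.array([[alpha1, X,   1.0],
                  [1.0,    0.0, alpha1],
                  [0.0,    1.0, X]])
    return np.linalg.det(M)

mu = np.sqrt(1.0 - alpha1 ** 2)               # a root of p, here 0.8
v = np.array([-alpha1, -mu])                  # first-row expansion of the determinant

print(np.isclose(p(mu), 0.0))                 # True
print(np.isclose(v @ v1, -alpha1 * Delta))    # True: v·v_1 = (-1)^(n-1) alpha_1 Delta
print(np.isclose(v @ v, Delta ** 2))          # True: v·v = Delta^2
```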
From the previous theorem, we can obtain the following two corollaries.
Corollary 2.
In case v_n·v_i = 0 for each i = 1, 2, …, n−1 and if D is the non-singular matrix:
$$D=\begin{pmatrix} v_1\cdot v_1 & \cdots & v_1\cdot v_{n-1}\\ \vdots & & \vdots\\ v_{n-1}\cdot v_1 & \cdots & v_{n-1}\cdot v_{n-1} \end{pmatrix}$$
and B = (α_1, …, α_{n−1}), then the roots of p(X) are the numbers:
μ = ±√((1 − B D^{−1} B^T)(v_n·v_n)).
(These numbers are in F if and only if there exists a scalar λ ∈ F such that (1 − B D^{−1} B^T)(v_n·v_n) = λ².)
Proof.
If we expand the determinant giving p(X) along the last row, we obtain:
p(X) = (−1)^{n−1} X² |D| − (v_n·v_n) S,
where:
$$S=\begin{vmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} & 1\\ v_1\cdot v_1 & v_1\cdot v_2 & \cdots & v_1\cdot v_{n-1} & \alpha_1\\ \vdots & \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & v_{n-1}\cdot v_2 & \cdots & v_{n-1}\cdot v_{n-1} & \alpha_{n-1} \end{vmatrix}$$
Since the matrix D is invertible, we can find a vector Λ = (α_1', α_2', …, α_{n−1}') such that ΛD = B. Subtract from the first row of S the linear combination α_1'S_2 + ⋯ + α_{n−1}'S_n, where S_2, …, S_n denote the second, third, …, n-th rows of S. We obtain then:
$$S=\begin{vmatrix} 0 & \cdots & 0 & 1-\Lambda B^T\\ v_1\cdot v_1 & \cdots & v_1\cdot v_{n-1} & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & \cdots & v_{n-1}\cdot v_{n-1} & \alpha_{n-1} \end{vmatrix}=(-1)^{n-1}|D|\,(1-\Lambda B^T).$$
Replacing Λ by B D^{−1}, we finally obtain:
S = (−1)^{n−1} |D| (1 − B D^{−1} B^T).
Therefore:
p(X) = (−1)^{n−1} |D| [X² − (1 − B D^{−1} B^T)(v_n·v_n)]
and the roots of p(X) are the numbers μ = ±√((1 − B D^{−1} B^T)(v_n·v_n)). In this case, the vector v in Theorem 5 is:
$$v=(-1)^{n-1}|D|\,\mu\,v_n-(v_n\cdot v_n)\begin{vmatrix} v_1 & \cdots & v_{n-1} & 0\\ v_1\cdot v_1 & \cdots & v_1\cdot v_{n-1} & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & \cdots & v_{n-1}\cdot v_{n-1} & \alpha_{n-1} \end{vmatrix}.\;\square$$
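Corollary 2 is easy to test numerically. In the sketch below (our illustration, assuming numpy; v_1, v_2, v_3 are taken orthonormal, so D = I, and B is hand-picked), the closed-form μ is checked against the polynomial p of Theorem 5:

```python
import numpy as np

# Corollary 2 for n = 3 with orthonormal v_1, v_2, v_3 (so D = I, v_3·v_3 = 1).
alpha = np.array([0.3, 0.4])                  # B = (alpha_1, alpha_2), hand-picked
D = np.eye(2)
v33 = 1.0

mu = np.sqrt((1.0 - alpha @ np.linalg.inv(D) @ alpha) * v33)

def p(X):
    # p(X) from Theorem 5 in these coordinates.
    M = np.array([[alpha[0], alpha[1], X,   1.0],
                  [1.0,      0.0,      0.0, alpha[0]],
                  [0.0,      1.0,      0.0, alpha[1]],
                  [0.0,      0.0,      1.0, X]])
    return np.linalg.det(M)

print(np.isclose(p(mu), 0.0), np.isclose(p(-mu), 0.0))   # True True
```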
Corollary 3.
If α_i² = v_i·v_i for each i = 1, 2, …, n−1, there exist 2^{n−1} possibilities for the sequence α_1, …, α_{n−1}. If we additionally suppose that v_i·v_i = 1 for i = 1, …, n−1, we conclude that there exist at most 2^{n−1} vectors that are simultaneously tangent to v_1, v_2, …, v_{n−1}. If F = ℝ and we fix B = (α_1, …, α_{n−1}), where α_i² = v_i·v_i for each i = 1, …, n−1, then there exist two, one, or no linearly independent vectors tangent to v_1, …, v_{n−1}, according as 1 − B D^{−1} B^T > 0, B D^{−1} B^T = 1, or 1 − B D^{−1} B^T < 0. These vectors may be written in the form:
$$v=\pm|D|\sqrt{(1-BD^{-1}B^T)(v_n\cdot v_n)}\;v_n-(v_n\cdot v_n)\begin{vmatrix} v_1 & \cdots & v_{n-1} & 0\\ v_1\cdot v_1 & \cdots & v_1\cdot v_{n-1} & \alpha_1\\ \vdots & & \vdots & \vdots\\ v_{n-1}\cdot v_1 & \cdots & v_{n-1}\cdot v_{n-1} & \alpha_{n-1} \end{vmatrix}.$$

4. Tangential and Anisotropic Bases

Let us recall that in a quadratic space V of dimension n, a basis B = {v_1, v_2, …, v_n} is unitary (resp. antiunitary) if v_i·v_i = 1 (resp. v_i·v_i = −1) for each i ∈ {1, 2, …, n}. We call a basis tangential if its vectors are mutually tangent.
Theorem 6.
If F = ℝ, V is regular, and dim V = 2 ind V + 1, then V has a tangential basis which is unitary or antiunitary.
Proof.
We can write V as follows: V = ⟨v_1, v_1'⟩ ⊥ ⟨v_2, v_2'⟩ ⊥ ⋯ ⊥ ⟨v_s, v_s'⟩ ⊥ ⟨v⟩, where ind V = s, v_i·v_i = 0 = v_i'·v_i' and v_i·v_i' = 1 for each i = 1, 2, …, s, and v·v ≠ 0. Since F = ℝ, we can suppose, without loss of generality, that v·v = 1 or v·v = −1.
We define w_i = v_i + v, w_i' = 2v_i' − v/(v·v), w_0 = v. We have then w_i·w_i = v·v = w_i'·w_i'. Also, w_i·w_i' = (v_i + v)·(2v_i' − v/(v·v)) = 2 − 1 = 1 for each i = 1, 2, …, s, while for i ≠ j we have w_i·w_j = w_i'·w_j' = v·v and w_i·w_j' = −1; moreover, w_i·v = (v_i + v)·v = v·v and w_i'·v = (2v_i' − v/(v·v))·v = −1. Since every basis vector has square v·v = ±1 and every product of two of them is ±1, {w_1, w_1', …, w_s, w_s', v} is a tangential basis of V, which is unitary or antiunitary depending on whether v·v = 1 or v·v = −1. □
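The proof of Theorem 6 is constructive, and for s = 1 it can be verified directly. The sketch below (our illustration, assuming numpy; the coordinates for the hyperbolic pair v_1, v_1' and the anisotropic vector v are hand-picked) builds w_1, w_1' and checks that {w_1, w_1', v} is a unitary tangential basis:

```python
import numpy as np

# Theorem 6 with s = 1: coordinates for <v1, v1'> ⊥ <v> with v·v = 1.
G = np.array([[0.0, 1.0, 0.0],      # v1·v1 = 0 = v1'·v1', v1·v1' = 1
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])     # v·v = 1
dot = lambda a, b: a @ G @ b

v1, v1p, v = np.eye(3)
w1 = v1 + v                          # w_1 = v_1 + v
w1p = 2 * v1p - v / dot(v, v)        # w_1' = 2 v_1' - v/(v·v)
basis = [w1, w1p, v]

tangent = lambda a, b: np.isclose(dot(a, b) ** 2, dot(a, a) * dot(b, b))
print(all(tangent(a, b) for a in basis for b in basis))   # True: mutually tangent
print(np.allclose([dot(a, a) for a in basis], 1.0))       # True: the basis is unitary
```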
Before generalizing this theorem, we need the next lemma.
Lemma 1.
Let b ∈ ℝ, b > 1, and let V = ⟨u_1⟩ ⊥ ⋯ ⊥ ⟨u_n⟩ be an anisotropic quadratic space over ℝ. We suppose that u_i·u_i = 1 for each i ∈ {1, …, n} (positive case) or u_i·u_i = −1 for each i ∈ {1, …, n} (negative case). Let u_1', u_2', …, u_{n−1}' be linearly independent vectors in V such that u_i'·u_j' = b for i ≠ j and each u_i'·u_i' = b², in the positive case, or u_i'·u_j' = −b for i ≠ j and u_i'·u_i' = −b², in the negative case. Then there exists a vector u ∈ V such that u_1', …, u_{n−1}', u are linearly independent and u·u_i' = b and u·u = b² in the positive case (resp. u·u_i' = −b and u·u = −b² in the negative case).
Proof.
Let w ∈ V be such that w·u_i' = 0 for each i ∈ {1, 2, …, n−1} and w·w = ±1 (+1 in the positive case and −1 in the negative case). Let:
u_0 = λ Σ_{i=1}^{n−1} u_i', where λ = 1/(n − 2 + b).
We see that:
(n−1)λ = (n−1)/((n−1) + (b−1)) < 1, and then (n−1)²λ² < 1.
Hence, in the positive case, u_0·u_i' = λ(b² + (n−2)b) = λb(b + n − 2) = b for each i, and:
u_0·u_0 = λ²(Σ_{i=1}^{n−1} u_i'·u_i' + 2Σ_{i<j} u_i'·u_j') = λ²((n−1)b² + (n−1)(n−2)b) = λ²(n−1)b(b + n − 2) = λ(n−1)b.
Since λ(n−1) = (n−1)/((b−1) + (n−1)) < 1, we conclude that u_0·u_0 < b < b². Then, if μ = √(b² − u_0·u_0), u = u_0 + μw is the vector we were looking for. In the negative case we have u_0·u_i' = −b for each i ∈ {1, 2, …, n−1} and:
u_0·u_0 = λ²(−(n−1)b² − (n−1)(n−2)b) = −λ²(n−1)b(b + n − 2) = −λ(n−1)b.
Clearly −λ(n−1)b > −b², since λ(n−1) < 1 < b. Therefore, u_0·u_0 > −b². Defining μ = √(b² + u_0·u_0), the vector u = u_0 + μw satisfies u·u = −b² and u·u_i' = −b for each i ∈ {1, 2, …, n−1}. □
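The construction in this proof can be followed step by step. The sketch below (our illustration, assuming numpy; it takes the positive case with n = 3, b = 2, the standard dot product, and hand-picked u_1', u_2', w) reproduces λ, u_0, μ and checks the conclusion:

```python
import numpy as np

# Lemma 1, positive case, n = 3, b = 2, standard dot product on R^3.
b, n = 2.0, 3
u1p = np.array([2.0, 0.0, 0.0])               # u_1'·u_1' = 4 = b^2
u2p = np.array([1.0, np.sqrt(3.0), 0.0])      # u_2'·u_2' = 4, u_1'·u_2' = 2 = b
w = np.array([0.0, 0.0, 1.0])                 # w orthogonal to u_1', u_2'; w·w = 1

lam = 1.0 / (n - 2 + b)                       # lambda = 1/(n - 2 + b)
u0 = lam * (u1p + u2p)
mu = np.sqrt(b ** 2 - u0 @ u0)                # real, since u_0·u_0 < b^2
u = u0 + mu * w

print(np.isclose(u @ u, b ** 2))                          # True: u·u = b^2
print(np.isclose(u @ u1p, b), np.isclose(u @ u2p, b))     # True True: u·u_i' = b
```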
The next theorem is the most important result of this paper:
Theorem 7.
Let V be a regular quadratic space over ℝ such that dim V = n > 2 ind V > 0. Then V has a unitary tangential basis or an antiunitary tangential basis.
Proof.
Let s = ind V. We can find isotropic vectors v_1, v_1', …, v_s, v_s' in V with v_i·v_i' = 1 for each i = 1, 2, …, s, and anisotropic mutually orthogonal vectors u_1, …, u_t such that u_j·u_j = 1 for every j ∈ {1, 2, …, t} (positive case) or such that u_j·u_j = −1 for every j ∈ {1, 2, …, t} (negative case), and such that:
V = ⟨v_1, v_1'⟩ ⊥ ⋯ ⊥ ⟨v_s, v_s'⟩ ⊥ ⟨u_1⟩ ⊥ ⋯ ⊥ ⟨u_t⟩.
Using Theorem 6, we can find a tangential basis
B = {w_1, w_1', w_2, w_2', …, w_s, w_s', u_1}
of the subspace V_0 = ⟨v_1, v_1'⟩ ⊥ ⋯ ⊥ ⟨v_s, v_s'⟩ ⊥ ⟨u_1⟩, which is unitary in the positive case or antiunitary in the negative case.
By Lemma 1 we can find linearly independent vectors u_2', u_3', …, u_t' in ⟨u_2⟩ ⊥ ⋯ ⊥ ⟨u_t⟩ such that u_j'·u_j' = 4 and u_i'·u_j' = 2 for i ≠ j in the positive case, and such that u_j'·u_j' = −4 and u_i'·u_j' = −2 for i ≠ j in the negative case.
In the positive case we define u = −v_s + 2v_s' − u_1 ∈ V_0, so that u·u = −4(v_s·v_s') + u_1·u_1 = −3. If i < s:
w_i·u = (v_i + u_1)·(−v_s + 2v_s' − u_1) = −u_1·u_1 = −1
and
w_i'·u = (2v_i' − u_1)·(−v_s + 2v_s' − u_1) = u_1·u_1 = 1,
where w_i and w_i' are defined as in Theorem 6 (with v = u_1). The next step is to prove that w_1, w_1', …, w_s, w_s', u_1, u + u_2', u + u_3', …, u + u_t' is a tangential basis for V. Take 1 < j ≤ t. If i < s,
w_i·(u + u_j') = w_i·u = −1, w_i'·(u + u_j') = w_i'·u = 1,
and if i = s,
w_s·(u + u_j') = w_s·u = (v_s + u_1)·(−v_s + 2v_s' − u_1) = 2 − 1 = 1,
w_s'·(u + u_j') = w_s'·u = (2v_s' − u_1)·(−v_s + 2v_s' − u_1) = −2 + 1 = −1.
Continuing in the positive case, let us calculate (u + u_i')·(u + u_j'). If i ≠ j, (u + u_i')·(u + u_j') = u·u + u_i'·u_j' = −3 + 2 = −1, and if i = j, (u + u_i')·(u + u_i') = u·u + u_i'·u_i' = −3 + 4 = 1. Since every vector of the family has square 1 and all the products above are ±1, we conclude that w_1, w_1', …, w_s, w_s', u_1, u + u_2', u + u_3', …, u + u_t' is a tangential unitary basis for V.
In the negative case, we apply Lemma 1 again and find linearly independent vectors u_2', u_3', …, u_t' in ⟨u_2⟩ ⊥ ⋯ ⊥ ⟨u_t⟩ such that u_i'·u_j' = −2 for i ≠ j and u_i'·u_i' = −4 for each i ∈ {2, 3, …, t}. The proposed antiunitary tangential basis is w_1, w_1', …, w_s, w_s', u_1, u + u_2', …, u + u_t', where u = v_s + 2v_s' + u_1, so that u·u = 4(v_s·v_s') + u_1·u_1 = 3, and w_i, w_i' are as in Theorem 6 (now w_i' = 2v_i' + u_1, since u_1·u_1 = −1). We have now, if i < s:
w_i·u = (v_i + u_1)·(v_s + 2v_s' + u_1) = u_1·u_1 = −1,
w_i'·u = (2v_i' + u_1)·(v_s + 2v_s' + u_1) = u_1·u_1 = −1,
and if i = s:
w_s·u = (v_s + u_1)·(v_s + 2v_s' + u_1) = 2(v_s·v_s') + u_1·u_1 = 2 − 1 = 1,
w_s'·u = (2v_s' + u_1)·(v_s + 2v_s' + u_1) = 2(v_s'·v_s) + u_1·u_1 = 2 − 1 = 1.
If 1 < j ≤ t and i < s, we have:
w_i·(u + u_j') = w_i·u = −1; w_i'·(u + u_j') = w_i'·u = −1,
and for i = s:
w_s·(u + u_j') = w_s·u = 1; w_s'·(u + u_j') = w_s'·u = 1.
Therefore, [w_i·(u + u_j')]² = (w_i·w_i)[(u + u_j')·(u + u_j')] = 1, since (u + u_j')·(u + u_j') = u·u + u_j'·u_j' = 3 − 4 = −1.
Finally, if 1 < i, j ≤ t:
(u + u_i')·(u + u_j') = u·u + u_i'·u_j' = 3 − 2 = 1 for i ≠ j,
(u + u_i')·(u + u_i') = u·u + u_i'·u_i' = 3 − 4 = −1.
Therefore, w_1, w_1', …, w_s, w_s', u_1, u + u_2', …, u + u_t' is a tangential antiunitary basis for V. □
The following theorem gives necessary and sufficient conditions for a quadratic space to have a tangential basis.
Theorem 8.
Let V = U ⊕ V^⊥, where U is a regular subspace of V. Then V has a tangential basis if and only if U has a tangential basis.
Proof.
Suppose dim U = n and dim V^⊥ = m. Suppose V has a tangential basis {v_1, v_2, …, v_{n+m}}; every v_i can be written in the form v_i = u_i + ū_i, with u_i ∈ U and ū_i ∈ V^⊥. We claim the vectors {u_1, u_2, …, u_{n+m}} generate U. Indeed, if w ∈ U is arbitrary, we have scalars λ_1, λ_2, …, λ_{n+m} such that:
w = λ_1 v_1 + λ_2 v_2 + ⋯ + λ_{n+m} v_{n+m}.
Therefore, w − (λ_1 u_1 + λ_2 u_2 + ⋯ + λ_{n+m} u_{n+m}) = λ_1 ū_1 + λ_2 ū_2 + ⋯ + λ_{n+m} ū_{n+m}. Since w − (λ_1 u_1 + ⋯ + λ_{n+m} u_{n+m}) ∈ U ∩ V^⊥ = {0}, we have w = λ_1 u_1 + ⋯ + λ_{n+m} u_{n+m}, as was to be proved. Therefore, a subfamily of {u_1, u_2, …, u_{n+m}} with n elements is a basis for U, say U = ⟨u_1, u_2, …, u_n⟩. Observe now that, since each ū_i lies in V^⊥:
(u_i·u_j)² = ((u_i + ū_i)·(u_j + ū_j))² = (v_i·v_j)² = (v_i·v_i)(v_j·v_j) = (u_i·u_i)(u_j·u_j).
Therefore, {u_1, …, u_n} is a tangential basis of U.
Suppose now that U has a tangential basis {u_1, …, u_n}. Let {ū_1, …, ū_m} be a basis for V^⊥. Denote w_i = u_1 + ū_i for each i ∈ {1, …, m}. We calculate (u_i·w_j)² and (w_i·w_j)²:
(u_i·w_j)² = (u_i·(u_1 + ū_j))² = (u_i·u_1)² = (u_i·u_i)(u_1·u_1) = (u_i·u_i)(w_j·w_j),
(w_i·w_j)² = ((u_1 + ū_i)·(u_1 + ū_j))² = (u_1·u_1)(u_1·u_1) = (w_i·w_i)(w_j·w_j).
Since V = U ⊕ V^⊥, {u_1, u_2, …, u_n, w_1, …, w_m} is a tangential basis for V. □
We will finish this work with the definition of isometries.
Definition 4.
Two quadratic spaces V_1, V_2 over the same field F are isometric if there exists an isomorphism ψ : V_1 → V_2 such that v·v' = ψ(v)·ψ(v') for each pair of vectors v, v' ∈ V_1. Such a mapping ψ is called an isometry between V_1 and V_2.
The following theorem will be very useful for our purposes; its proof can be consulted in detail in ([5], Theorem 42:16).
Theorem 9.
Let U, V be isometric regular subspaces of a quadratic space W. Then U^⊥ and V^⊥ are isometric too.
The problem of the existence of an isotropic basis of a regular space V over the field of real numbers is solved by the following theorem:
Theorem 10.
If F = ℝ, V is regular, and s = ind V > 0, then V has an isotropic basis.
Proof.
If V is a hyperbolic space and we keep the notation of Theorem 6, it is clear that {v_1, v_1', …, v_s, v_s'} is the desired basis. Suppose now that t = dim V − 2 ind V > 0. Let us consider the extension V* = V ⊥ ⟨u_{t+1}⟩, where u_{t+1}·u_{t+1} = u_i·u_i for each i ∈ {1, 2, …, t}. By Theorem 7, V* has a tangential basis {v_1, v_2, …, v_{2s+t+1}}, where v_i·v_i = 1 for each i ∈ {1, 2, …, 2s+t+1} or v_i·v_i = −1 for each i ∈ {1, 2, …, 2s+t+1}. We define the vectors w_i = v_i ± v_{2s+t+1}, where i ∈ {1, 2, …, 2s+t}, the sign + being taken if v_i·v_{2s+t+1} ≠ v_{2s+t+1}·v_{2s+t+1} and the sign − if v_i·v_{2s+t+1} = v_{2s+t+1}·v_{2s+t+1}. It is clear that every w_i is isotropic and ⟨v_{2s+t+1}⟩^⊥ = ⟨w_1, w_2, …, w_{2s+t}⟩.
Since the subspaces ⟨v_{2s+t+1}⟩ and ⟨u_{t+1}⟩ of V* are isometric, Theorem 9 implies that their orthogonal complements are isometric too. Therefore, V = ⟨u_{t+1}⟩^⊥ is isometric to ⟨v_{2s+t+1}⟩^⊥ = ⟨w_1, …, w_{2s+t}⟩, and V has an isotropic basis. □
Corollary 4.
If V = U ⊕ V^⊥, U is regular, and ind U > 0, then V has an isotropic basis.
Proof.
By Theorem 10, U has an isotropic basis B. If B̄ is any basis of V^⊥, then B ∪ B̄ is an isotropic basis of V, since every vector of V^⊥ is isotropic. □
We will finish the paper with some remarks and two conjectures.
Using induction and Theorem 5, it may easily be proved that every quadratic space V over the field of complex numbers has a tangential basis, provided that V is regular and dim V ≥ 3.
We also observe that if V is a hyperbolic space of index s > 0, say V = ⟨v_1, v_1'⟩ ⊥ ⋯ ⊥ ⟨v_s, v_s'⟩, where v_i·v_i = 0 = v_i'·v_i' and v_i·v_i' = 1 for each i = 1, 2, …, s, then there exists a linear automorphism ϕ : V → V such that ϕ(v_i) = v_i and ϕ(v_i') = −v_i' for each i = 1, 2, …, s.
ϕ satisfies the following property:
  • u·w = −ϕ(u)·ϕ(w) for every pair u, w ∈ V.
Hence, if the hyperbolic space V has a unitary tangential basis B, then ϕ(B) is an antiunitary tangential basis.
We say a quadratic space V is bitangential if V has a unitary tangential basis and also an antiunitary tangential basis.
Therefore, if a hyperbolic space V has a unitary tangential basis, then V is bitangential.
The previous comments motivate the following conjectures:
Problem 1.
No regular quadratic space over ℝ of finite dimension can be bitangential.
A weaker conjecture is:
Problem 2.
No hyperbolic space over ℝ can have a tangential basis.
If this last conjecture is true, we can finally state: if F = ℝ, the only regular quadratic spaces over ℝ that have a tangential basis are those of positive index which are not hyperbolic spaces. (A regular anisotropic space over ℝ is definite, and in a definite space the tangency condition is the equality case of the Cauchy–Schwarz inequality, which forces linear dependence; hence no tangential basis exists there in dimension greater than 1.)

5. Conclusions

The results presented in this article give an explicit method for constructing tangential and anisotropic bases in quadratic spaces. We also proved three theorems that allow us to calculate explicitly vectors that are orthogonal or tangent to given vectors, or that give necessary and sufficient conditions for the existence of such vectors.

Author Contributions

All authors, A.L., P.H., J.M., and A.P., contributed equally to writing this article.

Funding

The last author was supported by Vicerrectoría de Investigación e Innovación de la Universidad Simón Bolivar (sede de Barranquilla).

Data Availability Statement

Not applicable.

Acknowledgments

The last author thanks Dr. Adalberto García-Máynez (R.I.P.) for several fruitful conversations about the solution of the problem in the title.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. P. Koprowski, Algorithms for quadratic forms, J. Symb. Comput. 43 (2008), 140–152.
  2. P. Koprowski and A. Czogała, Computing with quadratic forms over number fields, J. Symb. Comput. 89 (2018), 129–145.
  3. P. Koprowski and B. Rothkegel, The anisotropic part of a quadratic form over a number field, J. Symb. Comput. 115 (2023), 32–52.
  4. T. Y. Lam, Introduction to Quadratic Forms over Fields, Graduate Studies in Mathematics, vol. 67, American Mathematical Society, Providence, RI, 2005.
  5. O. T. O'Meara, Introduction to Quadratic Forms, Springer, Berlin, 1973.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.