Exploring projective equivalences between closures of orbits

Research Article
Open access

Christopher Qiu 1*
  • 1 Bridgewater-Raritan High School    
  • *corresponding author c.qiu.workspace@gmail.com
Published on 17 November 2023 | https://doi.org/10.54254/2753-8818/11/20230382
TNS Vol.11
ISSN (Print): 2753-8818
ISSN (Online): 2753-8826
ISBN (Print): 978-1-83558-133-9
ISBN (Online): 978-1-83558-134-6

Abstract

Motivated by results in the literature that use representations and group actions to produce nice geometric results about algebraic varieties, this article studies projective equivalence relations between closures of orbits for several complex algebraic group actions on \( P(V) \) , where \( V \) is a complex representation of a group \( G \) . In particular, we study the cases where \( (G,V) \) is one of the following: \( (G{L_{n}}(C),{C^{n}}) \) , \( ({O_{n}}(C),{C^{n}}) \) , \( (G{L_{n}}(C),{M_{n}}(C)) \) , and \( ({O_{n}}(C),{M_{n}}(C)) \) . Along the way, we also obtain some interesting geometric results from studying these orbits.

Keywords:

projective equivalence, orbits, representations of algebraic groups.

Qiu, C. (2023). Exploring projective equivalences between closures of orbits. Theoretical and Natural Science, 11, 66-81.

1. Introduction

Motivated by studies of orbits of algebraic group actions [1] and their compactifications [2] with different properties, we propose to explore the following problem.

Problem 1. Let \( G \) be a linear algebraic group defined over the field \( C \) of complex numbers. Let \( V \) be a complex algebraic representation of \( G \) . Then \( G \) acts canonically on \( P(V) \) . Let \( x∈P(V) \) and consider the orbit of \( x \) under the action of \( G \) on \( P(V) \) , which we denote by \( {G_{x}} \) . We examine the following questions.

When is \( {G_{x}} \) projectively equivalent to \( {G_{{x^{ \prime }}}} \) ?

Let \( {Ω_{x}} \) := \( \bar{{G_{x}}} \) be the closure of \( {G_{x}} \) in \( P(V) \) . Under what conditions on \( x,{x^{ \prime }}∈P(V) \) are \( {Ω_{x}} \) and \( {Ω_{{x^{ \prime }}}} \) projectively equivalent?

In this article, we study this problem for the following four cases.

\( G=G{L_{n}}(C) \) is the general linear group over \( C \) , \( V={C^{n}} \) and \( G \) acts on \( V \) as the matrices act on vectors.

\( G={O_{n}}(C) \) is the orthogonal group over \( C \) formed by matrices \( g \) satisfying \( {g^{T}}g=I \) , \( V={C^{n}} \) and \( G \) acts on \( V \) as the matrices act on vectors.

\( G=G{L_{n}}(C) \) , \( V={M_{n}}(C) \) is the space of \( n×n \) matrices with entries in \( C \) and the action of \( G \) on \( V \) is defined by conjugation.

\( G={O_{n}}(C) \) , \( V={M_{n}}(C) \) and the action of \( G \) on \( V \) is defined by conjugation.

We will present in detail our study of these four cases in Section 3. The first and second cases are quite simple.

Proposition 1.1. There is only one orbit for \( G{L_{n}}(C) \) acting on \( P({C^{n}}) \) .

Proposition 1.2. There are two orbits for \( {O_{n}}(C) \) acting on \( P({C^{n}}) \) . One is formed by the points \( x∈P({C^{n}}) \) satisfying \( {x^{T}}∙x≠0 \) and the other is formed by the points \( x∈P({C^{n}}) \) satisfying \( {x^{T}}∙x=0 \) . The two orbits are not isomorphic.

In the third case, we get the following result.

Theorem 1.3. Let \( V={M_{n}}(C) \) be the representation of \( G=G{L_{n}}(C) \) given by conjugation. The notations \( {G_{x}} \) , \( {G_{{x^{ \prime }}}} \) , \( {Ω_{x}} \) and \( {Ω_{{x^{ \prime }}}} \) follow those in Problem 1. Let \( x,{x^{ \prime }}∈P(V) \) be two points and let \( \widetilde{x} \) and \( {\widetilde{x}^{ \prime }} \) be two matrices representing \( x \) and \( {x^{ \prime }} \) , respectively. Let \( {J_{{λ_{1}},{k_{1}}}},…,{J_{{λ_{r}},{k_{r}}}} \) (resp. \( {J_{λ_{1}^{ \prime },k_{1}^{ \prime }}},…,{J_{λ_{s}^{ \prime },k_{s}^{ \prime }}} \) ) be the Jordan blocks in the Jordan normal form of \( \widetilde{x} \) (resp. \( {\widetilde{x}^{ \prime }} \) ). Assume that \( r=s \) and that, up to a reordering, we have \( {k_{i}}=k_{i}^{ \prime } \) for each \( i \) . Suppose there exist \( α,β∈C \) with \( α≠0 \) and \( α≠-nβ \) such that

\( λ_{i}^{ \prime }= α{λ_{i}}+β \sum _{j=1}^{r}{k_{j}}{λ_{j}} \) (1)

for each \( i \) . Then \( {G_{x}} \) and \( {G_{{x^{ \prime }}}} \) are projectively equivalent, and \( {Ω_{x}} \) and \( {Ω_{{x^{ \prime }}}} \) are projectively equivalent.
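To make the eigenvalue transformation (1) concrete, here is a small illustrative sketch in Python (the helper name `transformed_eigenvalues` is ours, not the article's): given the Jordan data \( ({λ_{i}},{k_{i}}) \) and scalars \( α,β \) , it returns the \( λ_{i}^{ \prime } \) .

```python
# Sketch: apply the eigenvalue transformation (1) of Theorem 1.3.
# lams[i] and ks[i] are the eigenvalue and the size of the i-th Jordan block.
def transformed_eigenvalues(lams, ks, alpha, beta):
    n = sum(ks)
    assert alpha != 0 and alpha != -n * beta, "invertibility conditions"
    # The weighted sum k_1*lam_1 + ... + k_r*lam_r is the trace of x.
    weighted_trace = sum(k * lam for k, lam in zip(ks, lams))
    return [alpha * lam + beta * weighted_trace for lam in lams]

# Jordan data: blocks J_{2,1} and J_{5,2}, so n = 3 and the trace is 2 + 2*5 = 12.
print(transformed_eigenvalues([2, 5], [1, 2], 1, 1))  # -> [14, 17]
```

For instance, with \( α=β=1 \) each eigenvalue is shifted by the trace of \( \widetilde{x} \) .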

In the fourth case, we get the following result.

Theorem 1.4. Let \( V={M_{n}}(C) \) be the representation of \( G={O_{n}}(C) \) given by conjugation.

The notations \( {G_{x}} \) , \( {G_{{x^{ \prime }}}} \) , \( {Ω_{x}} \) and \( {Ω_{{x^{ \prime }}}} \) follow those in Problem 1. Let \( x,{x^{ \prime }}∈P(V) \) be two points and let \( \widetilde{x} \) and \( {\widetilde{x}^{ \prime }} \) be two matrices representing \( x \) and \( {x^{ \prime }} \) , respectively. Suppose there exist \( α,β,γ∈C \) satisfying \( α≠±β \) and \( α+β≠-nγ \) such that

\( {\widetilde{x}^{ \prime }}=α\widetilde{x}+β{\widetilde{x}^{T}}+γTr(\widetilde{x}). \) (2)

Then \( {G_{x}} \) and \( {G_{{x^{ \prime }}}} \) are projectively equivalent, and \( { Ω_{x}} \) and \( {Ω_{{x^{ \prime }}}} \) are projectively equivalent.

We prove Theorems 1.3 and 1.4 by explicitly finding a projective transformation that sends one orbit to the other. Representation theory shows that Theorems 1.3 and 1.4 are the best that we can get by using our method. See Lemma 3.7 and Lemma 3.9 for detailed discussions on this topic.

In our study, we also obtain some interesting geometric results using the machinery we develop in the article. They are results of the following form.

Let \( {λ_{1}}, {λ_{2}}∈C \) . We define a projective variety \( {Y_{{λ_{1}}, {λ_{2}}}} \) in \( {P^{8}} \) of points with homogeneous coordinates \( [{y_{11}}:{y_{12}}:{y_{13}}:{y_{21}}:{y_{22}}:{y_{23}}:{y_{31}}:{y_{32}}:{y_{33}}] \) satisfying the following system of equations:

\( \begin{cases} \begin{array}{c} (2{λ_{1}}+{λ_{2}}{)^{3}}({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \\ =λ_{1}^{2}{λ_{2}}({y_{11}}+{y_{22}}+{y_{33}}{)^{3}} \\ (2{λ_{1}}+{λ_{2}}{)^{2}}({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}) \\ =-(2{λ_{1}}{λ_{2}}+λ_{1}^{2})({y_{11}}+{y_{22}}+{y_{33}}{)^{2}} \end{array} \end{cases} \) (3)
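As a sanity check on the defining system, the diagonal matrix \( diag({λ_{1}},{λ_{1}},{λ_{2}}) \) has determinant \( λ_{1}^{2}{λ_{2}} \) and trace \( 2{λ_{1}}+{λ_{2}} \) , so it should satisfy both relations. The following illustrative script (the function name `in_Y` is ours; the second relation is taken homogeneous of degree 2 in the coordinates, matching the degree of the quadratic left-hand side) verifies this for sample values:

```python
def in_Y(y, l1, l2):
    """Check membership of a 3x3 matrix y (list of rows) in Y_{l1,l2}."""
    t = y[0][0] + y[1][1] + y[2][2]                       # trace
    det = (y[0][0]*y[1][1]*y[2][2] + y[0][1]*y[1][2]*y[2][0]
           + y[1][0]*y[2][1]*y[0][2] - y[2][0]*y[0][2]*y[1][1]
           - y[1][0]*y[0][1]*y[2][2] - y[2][1]*y[1][2]*y[0][0])
    # Sum of the principal 2x2 minors, negated as in the system.
    minus_e2 = (y[2][0]*y[0][2] + y[1][0]*y[0][1] + y[2][1]*y[1][2]
                - y[0][0]*y[1][1] - y[0][0]*y[2][2] - y[1][1]*y[2][2])
    eq1 = (2*l1 + l2)**3 * det == l1**2 * l2 * t**3
    eq2 = (2*l1 + l2)**2 * minus_e2 == -(2*l1*l2 + l1**2) * t**2
    return eq1 and eq2

l1, l2 = 3, 7
y = [[l1, 0, 0], [0, l1, 0], [0, 0, l2]]
print(in_Y(y, l1, l2))  # -> True
```

A generic diagonal matrix such as \( diag(1,2,3) \) fails the test, as expected.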

Proposition 1.5. Given two pairs of distinct complex numbers \( ({λ_{1}},{λ_{2}}) \) and \( (λ_{1}^{ \prime },λ_{2}^{ \prime }) \) , suppose that

either \( ({λ_{1}},{λ_{2}}) \) is proportional to \( (λ_{1}^{ \prime },λ_{2}^{ \prime }) \) by a nonzero constant,

or the following conditions hold

\( \begin{cases} \begin{array}{c} {λ_{2}}≠-2{λ_{1}} \\ (2{λ_{1}}+{λ_{2}})λ_{1}^{ \prime }≠{λ_{2}}λ_{2}^{ \prime } \\ 2(2{λ_{1}}+{λ_{2}})λ_{1}^{ \prime }≠(3{λ_{1}}-{λ_{2}})λ_{2}^{ \prime } \end{array} \end{cases} \) (4)

Then the projective varieties \( {Y_{{λ_{1}}, {λ_{2}}}} \) and \( {Y_{λ_{1}^{ \prime },λ_{2}^{ \prime }}} \) are projectively equivalent in \( {P^{8}} \) .

This article is organized as follows. In Section 2, we present classical results in representation theory, general topology and projective geometry that will be used in the subsequent sections. In Section 3, we study Problem 1 and prove Propositions 1.1, 1.2 and Theorems 1.3, 1.4. In Section 4, we explore geometric applications of the machinery we developed and prove results in the form of Proposition 1.5.

2. Basic knowledge

In this section, we present some basic knowledge that is necessary to the subsequent sections of the article.

2.1. Representation theory

The material presented here is well-known to experts and we refer the readers to [3] for more details about representation theory. Let \( k \) be a field.

Definition 2.1. Let \( G \) be a group. A representation of \( G \) is a pair \( (V,ρ) \) where \( V \) is a \( k \) -vector space and \( ρ: G→GL(V) \) is a map satisfying: (i) \( ρ(e)=I{d_{V}} \) ; (ii) \( ρ(gh)=ρ(g)ρ(h) \) for all \( g,h∈G \) .

Definition 2.2. Let \( (V,ρ) \) be a representation of \( G \) . A subrepresentation \( (W,ρ \prime ) \) is a representation of G, with \( W⊂V \) a sub-vector space of \( V \) and \( ρ \prime (g)=ρ(g) \) for any \( g∈G \) .

Definition 2.3. Let \( (W,ρ \prime )⊂(V,ρ) \) be a subrepresentation. The quotient representation \( (V/W,\overline{ρ}) \) is the representation of \( G \) defined by \( \overline{ρ}(g)(\overline{x})=\overline{ρ(g)(x)} \) for any \( \overline{x}∈V/W \) .

Definition 2.4. A subrepresentation \( W \) of representation \( V \) is called trivial if \( W=0 \) or \( W=V \) . A representation \( V \) of \( G \) is called irreducible if it has no nontrivial subrepresentations.

Definition 2.5. A map \( ϕ: W→V \) between representations of a group \( G \) is called a morphism (of representations) if: (i) \( ϕ \) is \( k \) -linear. (ii) \( ϕ(g.w)=g.ϕ(w) \) for all \( g∈G,w∈W \) .

Lemma 2.6 (Schur). (i) Let \( W \) and \( V \) be irreducible representations of a group \( G \) and \( ϕ :W→V \) be a morphism of representations. Then either \( ϕ=0 \) or \( ϕ \) is an isomorphism.

(ii) Let \( k \) be algebraically closed. Let \( V \) be an irreducible finite-dimensional representation of a group \( G \) . Let \( ϕ: V→V \) be an endomorphism of the representation \( V \) . Then there exists \( λ∈k \) for which \( ϕ=λI{d_{V}} \) .

Proof. We begin by proving (i). Let \( w∈kerϕ \) and \( g∈G \) . Since \( ϕ(gw)=gϕ(w)=0 \) , we have \( gw∈kerϕ \) . Let \( v∈Im ϕ \) and write \( v=ϕ({w_{0}}) \) . Then \( gv=gϕ({w_{0}})=ϕ(g{w_{0}})∈Im ϕ \) . Hence \( kerϕ⊂W \) and \( Im ϕ⊂V \) are subrepresentations. Since \( W \) is irreducible, either \( kerϕ=\lbrace 0\rbrace \) or \( kerϕ=W \) . If \( kerϕ=W \) , then \( ϕ=0 \) . If \( kerϕ=\lbrace 0\rbrace \) , then \( ϕ \) is injective and nonzero, so \( Im ϕ≠\lbrace 0\rbrace \) ; since \( V \) is irreducible, \( Im ϕ=V \) , meaning that \( ϕ \) is surjective and thus bijective. It is not hard to verify that \( {ϕ^{-1}}: V→W \) is also a morphism of representations. We conclude that \( ϕ: W→V \) is an isomorphism, as desired.

Next, we prove (ii). By linear algebra, there exists a nonzero vector \( v∈V \) and a scalar \( λ∈k \) such that \( ϕ(v)=λv \) . The linear map \( ϕ-λI{d_{V}} : V→V \) is a morphism of representations. By (i), either \( ϕ-λI{d_{V}}=0 \) or \( ϕ-λI{d_{V}} \) is an isomorphism. However, since there exists nonzero \( v∈V \) for which \( ϕ(v)=λv \) , \( ϕ-λI{d_{V}} \) is not injective. Thus \( ϕ=λI{d_{V}} \) .

2.2. General topology

In this part, we present some topological language from the perspective of metric spaces. One may refer to [4] for a more abstract approach. We include this part to provide a basis for our usage of Euclidean topology in complex projective geometry.

Recall how continuity of maps is defined in differential calculus.

Definition 2.7. A function \( f :R→R \) is continuous at \( {x_{0}}∈R \) if for all \( ϵ \gt 0 \) , there exists \( δ \gt 0 \) such that if \( x∈R \) satisfies \( |x-{x_{0}}| \lt δ \) , then \( |f(x)-f({x_{0}})| \lt ϵ \) .

Let \( x=({x_{1}},…,{x_{n}}),y=({y_{1}},…,{y_{n}})∈{R^{n}} \) . The Euclidean definition of distance is dist \( (x,y) \) = \( \sqrt[]{\sum _{i=1}^{n} {|{x_{i}}-{y_{i}}|^{2}}} \) . Note that we can also use a non-Euclidean definition of "distance", such as \( {dist_{p}}(x,y) \) = \( {(\sum _{i=1}^{n} {|{x_{i}}-{y_{i}}|^{p}})^{\frac{1}{p}}} \) for \( p≥1 \) .
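For instance, the family \( {dist_{p}} \) is easy to compute directly; the short Python sketch below (the function name `dist_p` is ours) recovers the Euclidean distance at \( p=2 \) and the "taxicab" distance at \( p=1 \) :

```python
def dist_p(x, y, p=2):
    # p-metric on R^n; p = 2 recovers the Euclidean distance.
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
print(dist_p(x, y))        # Euclidean distance: 5.0
print(dist_p(x, y, p=1))   # taxicab distance: 7.0
```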

Definition 2.8. A map \( f :{R^{n}}→{R^{m}} \) is called continuous at \( {x_{0}}∈{R^{n}} \) if for all \( ϵ \gt 0 \) , there exists \( δ \gt 0 \) such that if \( x∈{R^{n}} \) satisfies dist \( (x, {x_{0}}) \lt δ, \) then dist \( (f(x),f({x_{0}})) \lt ϵ \) .

Motivated by this, we introduce the concept of metric spaces to talk about continuity in a more general setting.

Definition 2.9. A metric space is a set \( X \) endowed with a function

\( d: X×X→R, \) (5)

called the metric function (or distance function), satisfying the following properties:

\( d(x,y)=0 \) if and only if \( x=y \) ,

\( d(x,y)=d(y,x) \) , and

\( d(x,z)≤d(x,y)+d(y,z) \) .

Definition 2.10. Let \( (X,{d_{X}}) \) and \( (Y,{d_{Y}}) \) be metric spaces. A map \( f:X→Y \) is called continuous at \( {x_{0}}∈X \) if for all \( ϵ \gt 0 \) , there exist \( δ \gt 0 \) such that if \( x∈X \) and \( {d_{X}}(x,{x_{0}}) \lt δ \) , then \( {d_{Y}}(f(x),f({x_{0}})) \lt ϵ \) . Such a map is called continuous if it is continuous at each \( {x_{0}}∈X. \)

Open and closed subsets in metric spaces

Let \( (X,{d_{X}}) \) be a metric space.

Definition 2.11. An open ball centered at \( {x_{0}}∈X \) with radius \( r \gt 0 \) is the set \( B({x_{0}},r):=\lbrace x∈X:{d_{X}}(x,{x_{0}}) \lt r\rbrace \) . A closed ball centered at \( {x_{0}}∈X \) with radius \( r \gt 0 \) is the set \( \overline{B}({x_{0}},r):=\lbrace x∈X:{d_{X}}(x,{x_{0}})≤r\rbrace \) .

Definition 2.12. (i) A subset \( U⊂X \) is called open if for any \( {x_{0}}∈U \) , there exists \( r \gt 0 \) such that \( B({x_{0}},r)⊂U \) .

(ii) Let \( \lbrace {x_{n}}\rbrace _{n=1}^{∞} \) be a sequence of points in \( X \) . The sequence is called convergent in \( X \) if there is a point \( A∈X \) such that for all \( ϵ \gt 0 \) , there exists \( N∈N \) for which \( {d_{X}}({x_{n}},A) \lt ϵ \) for all \( n \gt N \) .

(iii) A subset \( F⊂X \) is closed if for any convergent sequence \( \lbrace {x_{n}}\rbrace _{n=1}^{∞}⊂F \) , the limit lies in \( F \) .

Proposition 2.13. Let \( Y \) be a subset of \( X \) . Then \( Y⊂X \) is open if and only if \( {Y^{c}}⊂X \) is closed.

Proof. Suppose \( Y⊂X \) is open. Let the sequence \( \lbrace {x_{n}}\rbrace _{n=1}^{∞}⊂{Y^{c}} \) converge to \( A∈X \) . That is, for any \( ϵ \gt 0 \) , there exists \( N∈N \) such that \( {d_{X}}({x_{n}},A) \lt ϵ \) for all \( n \gt N \) . Assume for the sake of contradiction that \( A∈Y \) . Since \( Y \) is open, there exists \( r \gt 0 \) such that \( B(A,r)⊂Y \) . Pick \( ϵ=r \) . Then there exists \( N∈N \) such that for all \( n \gt N \) , \( {x_{n}}∈B(A,r)⊂Y \) , contradicting \( {x_{n}}∈{Y^{c}} \) . Hence \( A∈{Y^{c}} \) for any choice of \( \lbrace {x_{n}}\rbrace _{n=1}^{∞} \) and so \( {Y^{c}}⊂X \) is closed.

Suppose \( {Y^{c}}⊂X \) is closed. Let \( y \) be an arbitrary point in \( Y \) . Assume for the sake of contradiction that for each \( r \gt 0 \) , \( B(y,r)∩{Y^{c}}≠∅ \) . Picking \( r=1/n \) , we can construct a sequence \( \lbrace {x_{n}}\rbrace _{n=1}^{∞}⊂{Y^{c}} \) such that \( {x_{n}}∈B(y,1/n)∩{Y^{c}} \) . This sequence converges to \( y∈X \) . Since \( {Y^{c}} \) is closed, \( y∈{Y^{c}} \) , contradicting \( y∈Y \) . Thus there must exist \( r \gt 0 \) so that \( B(y,r)∩{Y^{c}}=∅ \) , that is, \( B(y,r)⊂Y \) . Hence \( Y⊂X \) is open.

Theorem 2.14. Let \( X \) and \( Y \) be metric spaces. Let \( f :X→Y \) be a map. Then the following three statements are equivalent:

\( f \) is continuous.

for any open subset \( V⊂Y \) , the preimage \( {f^{-1}}(V)⊂X \) is open.

for any closed subset \( W⊂Y \) , the preimage \( {f^{-1}}(W)⊂X \) is closed.

Proof. By Proposition 2.13, (ii) and (iii) are equivalent. We will prove that (i) if and only if (ii).

Suppose \( f \) is continuous. Let \( V⊂Y \) be an open subset. Let \( x∈{f^{-1}}(V) \) be an arbitrary point. Since \( f(x)∈V \) and \( V⊂Y \) is open, there exists \( ϵ \gt 0 \) such that \( B(f(x),ϵ)⊂V \) . By the continuity of \( f \) , there exists \( δ \gt 0 \) for which \( f(B(x,δ))⊂B(f(x),ϵ) \) . Then \( B(x,δ)⊂{f^{-1}}(B(f(x),ϵ))⊂{f^{-1}}(V) \) and hence \( {f^{-1}}(V) \) is open.

Suppose for any open subset \( V⊂Y \) , the preimage \( {f^{-1}}(V)⊂X \) is open. Let \( x \) be an arbitrary point in \( X \) . For any \( ϵ \gt 0 \) , consider the open ball \( B(f(x),ϵ)⊂Y \) , which is an open subset of \( Y \) . Then \( {f^{-1}}(B(f(x),ϵ))⊂X \) is open. Since clearly \( x∈{f^{-1}}(B(f(x),ϵ)) \) , there is \( δ \gt 0 \) such that \( B(x,δ)⊂{f^{-1}}(B(f(x),ϵ)) \) . Then for all \( y∈X \) satisfying \( {d_{X}}(y,x) \lt δ \) , we have \( {d_{Y}}(f(y),f(x)) \lt ϵ \) . Hence \( f \) is continuous at \( x \) . Therefore \( f \) is continuous.

As we can see from Theorem 2.14, the notion of continuity can be talked about without mentioning metrics. All we need to be able to talk about continuity is the concept of open subsets (or equivalently the concept of closed subsets). This is the motivation for mathematicians to define topological spaces.

Definition 2.15. A topological space is a set \( X \) endowed with a set \( τ \) of subsets of \( X \) satisfying the following properties:

\( ∅,X∈τ \) ;

Let \( \lbrace {U_{i}}{\rbrace _{i∈I}}⊂τ \) . Then \( ∪_{i∈I} {U_{i}}∈τ \) ;

Let \( U,V∈τ. \) Then \( U∩V∈τ \) .

The set \( τ \) is called a topology of \( X \) and its elements are called open subsets of \( X \) .

Proposition 2.16. Let \( X \) be a metric space. Let \( τ \) be the set of open subsets of \( X \) defined by the metric. Then \( (X,τ) \) is a topological space.

Proof. First, \( ∅,X∈τ \) is clearly true.

Second, let \( \lbrace {U_{i}}{\rbrace _{i∈I}} \) be a family of open subsets of \( X \) . Let \( x∈∪_{i∈I} {U_{i}} \) . So there exists \( {i_{0}}∈I \) such that \( x∈{U_{{i_{0}}}} \) . But since \( {U_{{i_{0}}}}⊂X \) is open, there exists \( r \gt 0 \) such that \( B(x,r)⊂{U_{{i_{0}}}}⊂∪_{i∈I} {U_{i}} \) . Thus, \( ∪_{i∈I} {U_{i}} \) is open, as desired.

Third, let \( U,V \) be open subsets of \( X \) . We must show that \( U∩V \) is open. Let \( x∈U∩V \) . Then there exist \( {r_{U}},{r_{V}} \gt 0 \) such that \( B(x,{r_{U}})⊂U \) and \( B(x,{r_{V}})⊂V \) . Pick \( r=min\lbrace {r_{U}},{r_{V}}\rbrace \gt 0 \) and we have \( B(x,r)⊂U∩V \) . Hence \( U∩V \) is an open subset, as desired.

Hence, we may say that metric spaces induce topological spaces. Motivated by Theorem 2.14, we define the concept of continuity for topological spaces.

Definition 2.17. Let \( (X,{τ_{X}}) \) and \( (Y,{τ_{Y}}) \) be topological spaces. Let \( f :X→Y \) be a map. The map \( f \) is continuous if for any \( V∈{τ_{Y}} \) , \( {f^{-1}}(V)∈{τ_{X}} \) .

Now we introduce two notions that are important in this article.

Definition 2.18. Let \( X \) be a topological space. Let \( Y \) be a subset of \( X \) .

(i) The closure of \( Y \) , denoted by \( \overline{Y} \) , is the smallest closed subset of \( X \) containing \( Y \) .

(ii) The interior of \( Y \) , denoted by \( Y° \) , is the largest open subset of \( X \) contained in \( Y \) .

2.3. Projective geometry

We present here some basic notions in projective geometry, which we will use later. Let k be a field.

Definition 2.19. Let \( V \) be a \( k \) -vector space. Denote \( P(V) \) to be the set of 1-dimensional subspaces in \( V \) . When \( V={k^{n}} \) , we write \( P(V)={P^{n-1}}(k) \) .

Definition 2.20. The natural map \( π:V-\lbrace 0\rbrace →P(V) \) takes \( x \) to \( [x] \) , the 1-dimensional subspace of \( V \) generated by \( x \) .

Definition 2.21. A projective transformation \( ϕ:{P^{n}}(k)→{P^{n}}(k) \) is given by an invertible \( (n+1)×(n+1) \) matrix \( A \) such that \( ϕ([x])=[Ax] \) .

Definition 2.22. Let \( X \) and \( Y \) be subsets of \( {P^{n}}(k) \) . \( X \) and \( Y \) are called projectively equivalent if there exists a projective transformation that takes \( X \) to \( Y \) .

When \( k=C \) , \( V-\lbrace 0\rbrace \) has a Euclidean topology. Using this topology, we can define a topology on \( P(V) \) .

Definition 2.23. Define a subset \( Y⊂P(V) \) to be open if and only if \( {π^{-1}}(Y)⊂V-\lbrace 0\rbrace \) is open. Let \( τ \) be the set of open subsets of \( P(V) \) defined this way.

One checks readily that \( τ \) is a topology on \( P(V) \) .

Proposition 2.24. Let \( f∈C[{x_{0}},…,{x_{n}}] \) be a homogeneous polynomial. Define

\( V(f):=\lbrace [{x_{0}}:{x_{1}}:⋯:{x_{n}}]:f({x_{0}},…,{x_{n}})=0\rbrace ⊂{P^{n}}(C). \) (6)

Then \( V(f) \) is closed with respect to the topology \( τ \) defined in Definition 2.23.

Proof. It suffices to prove that \( {π^{-1}}(V(f)) \) is closed. We define a function \( \overset{~}{f}:{C^{n+1}}-\lbrace 0\rbrace →C \) as follows:

\( \overset{~}{f}:({x_{0}},…,{x_{n}})↦f({x_{0}},…,{x_{n}}). \) (7)

The map \( \overset{~}{f } \) is continuous because it is a polynomial function. Hence \( {π^{-1}}(V(f))=\lbrace ({x_{0}},…,{x_{n}})∈{C^{n+1}}-\lbrace 0\rbrace :\overset{~}{f}({x_{0}},…,{x_{n}})=0\rbrace ={\overset{~}{f}^{-1}}(\lbrace 0\rbrace ) \) is closed in \( {C^{n+1}}-\lbrace 0\rbrace \) .

3. The geometry of orbits

In this section, we study Problem 1 in detail through the following four examples.

\( G=G{L_{n}}(C) \) is the general linear group over \( C \) , \( V={C^{n}} \) and \( G \) acts on \( V \) as matrices act on vectors.

\( G={O_{n}}(C) \) is the orthogonal group over \( C \) formed by matrices \( g \) satisfying \( {g^{⊺}}g=I \) , \( V={C^{n}} \) and \( G \) acts on \( V \) as matrices act on vectors.

\( G=G{L_{n}}(C) \) , \( V={M_{n}}(C) \) is the space of \( n×n \) matrices with entries in \( C \) and the action of \( G \) on \( V \) is defined by conjugation.

\( G={O_{n}}(C) \) , \( V={M_{n}}(C) \) and the action of \( G \) on \( V \) is defined by conjugation.

3.1. The representation of \( G{L_{n}}(C) \) on \( {C^{n}} \)

Lemma 3.1. Let \( v,v \prime ∈{C^{n}} \) be nonzero vectors. There exists \( g∈G{L_{n}}(C) \) for which \( v \prime =gv \) .

Proof. Let \( \lbrace {e_{1}}=v,{e_{2}},…,{e_{n}}\rbrace \) be a basis of \( {C^{n}} \) , and let \( w=(1,0,⋯,0{)^{⊺}} \) . The matrix \( A=(\begin{matrix}{e_{1}} & ⋯ & {e_{n}} \\ \end{matrix}) \) with columns \( {e_{i}} \) is invertible, so \( A∈G{L_{n}}(C) \) . In addition, \( Aw=v \) , so \( w={A^{-1}}v \) . Similarly, we can let \( \lbrace {e_{1}} \prime =v \prime ,{e_{2}} \prime ,…,{e_{n}} \prime \rbrace \) be a basis of \( {C^{n}} \) and \( A \prime =(\begin{matrix}{e_{1}} \prime & ⋯ & {e_{n}} \prime \\ \end{matrix})∈G{L_{n}}(C) \) . We then have \( A \prime w=v \prime \) , so \( A \prime {A^{-1}}v=v \prime \) . Pick \( g=A \prime {A^{-1}}∈G{L_{n}}(C) \) .

Let \( x,x \prime ∈P(V) \) . Since by Lemma 3.1 any nonzero vector can be mapped to any other nonzero vector by a matrix in \( G{L_{n}}(C) \) , we have \( {G_{x}}=P(V)={G_{x \prime }} \) . Consequently, \( {Ω_{x}}={Ω_{x \prime }} \) . Note also that, since projective transformations are defined by linear maps, they are continuous; hence a projective transformation sends limit points to limit points, and if two subsets are projectively equivalent, so are their closures.
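The construction of Lemma 3.1 is explicit enough to compute. A minimal sketch for \( n=2 \) (pure Python; the helper names and the ad hoc basis completion are ours): complete \( v \) and \( v \prime \) to bases, form the column matrices \( A \) and \( A \prime \) , and take \( g=A \prime {A^{-1}} \) .

```python
def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transporter(v, vp):
    # Columns of A form a basis {v, e2}; likewise for A'.  Then g = A' A^{-1}
    # sends v to vp, as in Lemma 3.1.  The basis completion here is ad hoc:
    # it assumes the first coordinate of v and vp is nonzero.
    A  = [[v[0],  0], [v[1],  1]]
    Ap = [[vp[0], 0], [vp[1], 1]]
    return mat_mul(Ap, inv2(A))

g = transporter((1.0, 2.0), (3.0, -1.0))
gv = [g[0][0] * 1.0 + g[0][1] * 2.0, g[1][0] * 1.0 + g[1][1] * 2.0]
print(gv)  # -> [3.0, -1.0]
```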

3.2. The representation of \( {O_{n}}(C) \) on \( {C^{n}} \)

Lemma 3.2. Let \( x∈{C^{n}} \) be a vector such that \( {x^{⊺}}·x≠0 \) . Then there is a basis \( \lbrace {e_{1}}=x,{e_{2}},…,{e_{n}}\rbrace \) such that

\( \begin{cases} \begin{array}{c} e_{i}^{⊺}·{e_{j}}=0"for"i≠j \\ e_{i}^{⊺}·{e_{i}}≠0. \end{array} \end{cases} \) (8)

Proof. Proceed by induction on \( dimV \) . If \( dimV=1 \) , the result is trivial. Assume that the lemma holds for all vector spaces of dimension less than \( dimV \) . First observe that if every vector \( y \) of a subspace satisfied \( y·y=0 \) , then for any \( y,z \) in that subspace,

\( y·z=\frac{1}{2}\lbrace (y+z)·(y+z)-y·y-z·z\rbrace =0, \) (9)

so the bilinear form would vanish identically on that subspace. Now consider \( W=⟨x{⟩^{⊥}}=\lbrace v∈V:v·x=0\rbrace \) . Since \( x·x≠0 \) ,

\( \begin{cases} \begin{array}{c} dimW=dimV-1 \lt dimV \\ W∩⟨x⟩=\lbrace 0\rbrace , \end{array} \end{cases} \) (10)

so \( V=W⊕⟨x⟩ \) and the bilinear form restricted to \( W \) is nondegenerate. By the observation above, \( W \) must contain a vector \( {e_{2}} \) with \( e_{2}^{⊺}·{e_{2}}≠0 \) . Applying the induction hypothesis to \( W \) yields a basis \( \lbrace {e_{2}},…,{e_{n}}\rbrace \) of \( W \) as in the statement, and \( \lbrace {e_{1}}=x,{e_{2}},…,{e_{n}}\rbrace \) is the desired basis of \( V \) .

Proposition 3.3. Denote the orbit of \( x \) under the action \( { O_{n}}(C) \) on \( P({C^{n}}) \) by \( {G_{x}} \) and the closure of \( {G_{x}} \) in \( P(V) \) by \( {Ω_{x}} \) . Let \( x,x \prime ∈P({C^{n}}) \) . If \( {x^{⊺}}·x≠0 \) and \( x{ \prime ^{⊺}}·x \prime ≠0 \) , then \( {G_{x}}={G_{x \prime }} \) .

Proof. Let \( x,x \prime ∈P({C^{n}}) \) . Pick bases \( \lbrace {e_{1}}=x,{e_{2}},…,{e_{n}}\rbrace \) and \( \lbrace {f_{1}}=x \prime ,{f_{2}},…,{f_{n}}\rbrace \) as in Lemma 3.2. Rescaling the representatives of \( x \) and \( x \prime \) , we may assume \( {x^{⊺}}·x=x{ \prime ^{⊺}}·x \prime =λ≠0 \) . Since \( e_{i}^{⊺}·{e_{i}}≠0 \) and \( f_{i}^{⊺}·{f_{i}}≠0 \) , after rescaling \( {e_{i}} \) and \( {f_{i}} \) we may assume that \( e_{i}^{⊺}·{e_{i}}=f_{i}^{⊺}·{f_{i}}=λ \) for all \( i≥2 \) . Take \( g=({f_{1}},{f_{2}},…,{f_{n}})({e_{1}},{e_{2}},…,{e_{n}}{)^{-1}} \) , where \( ({e_{1}},…,{e_{n}}) \) denotes the matrix with columns \( {e_{i}} \) . Note that

\( ({e_{1}},{e_{2}},…,{e_{n}}{)^{⊺}}·({e_{1}},{e_{2}},…,{e_{n}})=({f_{1}},{f_{2}},…,{f_{n}}{)^{⊺}}·({f_{1}},{f_{2}},…,{f_{n}})=λ{I_{n}}. \) (11)

Hence, \( (({e_{1}},{e_{2}},…,{e_{n}}{)^{⊺}}{)^{-1}}·({e_{1}},{e_{2}},…,{e_{n}}{)^{-1}}={λ^{-1}}{I_{n}} \) . Moreover,

\( ({e_{1}},{e_{2}},…,{e_{n}}{)^{⊺}}{g^{⊺}}·g({e_{1}},{e_{2}},…,{e_{n}})=({f_{1}},{f_{2}},…,{f_{n}}{)^{⊺}}({f_{1}},{f_{2}},…,{f_{n}})=λ{I_{n}}. \) (12)

Since

\( {g^{⊺}}g=λ(({e_{1}},{e_{2}},…,{e_{n}}{)^{⊺}}{)^{-1}}·({e_{1}},{e_{2}},…,{e_{n}}{)^{-1}}=λ{λ^{-1}}{I_{n}}={I_{n}} \) , (13)

\( g∈G \) . Thus \( x \prime \) is in the orbit of \( x \) . Therefore \( {G_{x}}={G_{{x^{ \prime }}}} \) and consequently \( {Ω_{x}}={Ω_{x \prime }} \) .

Proposition 3.4. Denote the orbit of \( x \) under the action \( {O_{n}}(C) \) on \( P({C^{n}}) \) by \( {G_{x}} \) and the closure of \( {G_{x}} \) in \( P(V) \) by \( {Ω_{x}} \) . Let \( x,x \prime ∈P({C^{n}}) \) . If \( {x^{⊺}}·x=x{ \prime ^{⊺}}·x \prime =0 \) , then \( {G_{x}}={G_{x \prime }} \) .

Proof. We begin by introducing the following claim.

Claim. Let nonzero \( x∈V \) satisfy \( {x^{⊺}}·x=0 \) , and recall that the symmetric bilinear form on \( V \) taking \( (x,y)↦{x^{⊺}}·y \) is nondegenerate. Then there exists a 2-dimensional subspace \( W⊂V \) containing \( x \) such that \( V=W⊕{W^{⊥}} \) and the form restricted to \( W \) is nondegenerate.

Proof. Since \( {x^{⊺}}·x=0 \) and the form is nondegenerate, we can find \( y∈V \) such that \( {x^{⊺}}·y≠0 \) ; then \( W=⟨x,y⟩ \) is a 2-dimensional subspace of \( V \) . If \( v∈W \) satisfies \( {v^{⊺}}·w=0 \) for all \( w∈W \) , the computation below shows \( v=0 \) , so the bilinear form is nondegenerate on \( W \) . Once \( V=W⊕{W^{⊥}} \) is established, it follows likewise that the form is nondegenerate on \( {W^{⊥}} \) .

Let \( v∈W∩{W^{⊥}} \) . Since \( v∈W \) , express it as \( v=ax+by \) for \( a,b∈C \) . Since \( v∈{W^{⊥}} \) , \( {v^{⊺}}·x={v^{⊺}}·y=0 \) . Hence,

\( \begin{cases} \begin{array}{c} b({y^{⊺}}·x)=0 \\ a({x^{⊺}}·y)+b({y^{⊺}}·y)=0. \end{array} \end{cases} \) (14)

Since \( {x^{⊺}}·y={y^{⊺}}·x≠0 \) , \( a=b=0 \) . Hence, \( v=0 \) . \( W∩{W^{⊥}}=\lbrace 0\rbrace \) and so \( V=W⊕{W^{⊥}} \) . The claim is proven.

By the claim,

\( V={W_{x}}⊕W_{x}^{⊥}={W_{x \prime }}⊕W_{x \prime }^{⊥}. \)

By Lemma 3.1, there is an isomorphism \( W_{x}^{⊥}\overset{≃}{→}W_{x \prime }^{⊥} \) given by a matrix \( g \prime \) preserving the bilinear forms. Since \( {W_{x}} \) and \( {W_{x \prime }} \) are 2-dimensional vector spaces with induced bilinear forms, we can find another form-preserving matrix \( g \prime \prime :{W_{x}}→{W_{x \prime }} \) sending \( x↦x \prime \) . Then \( g=g \prime \prime ⊕g \prime ∈{O_{n}}(C) \) sends \( x↦x \prime \) , and thus there is an orthogonal matrix \( g∈{O_{n}}(C) \) such that \( x \prime =gx \) . It follows that \( {G_{x}}={G_{x \prime }} \) .

Proposition 3.5. The two orbits mentioned in Propositions 3.3 and 3.4 are not isomorphic.

Proof. This is due to dimension reasons. The orbit in Proposition 3.3 has dimension \( n-1 \) whereas the orbit in Proposition 3.4 has dimension \( n-2. \)

As a corollary, all of these propositions are true for the closures as well.

3.3. The representation of \( G{L_{n}}(C) \) on \( {M_{n}}(C) \)

In this part, we prove Theorem 1.3 and discuss how we can use our method to get more general results.

Proposition 3.6. Here we use the assumptions on \( α \) and \( β \) as in Theorem 1.3. The linear map \( ϕ:P{M_{n}}(C)→P{M_{n}}(C) \) of the form \( ϕ:A↦αA+βTr(A) \) is a projective transformation that sends \( {G_{x}} \) to \( {G_{x \prime }} \) .

Proof. We may assume that \( \widetilde{x} \) is in Jordan normal form, since replacing \( \widetilde{x} \) by a conjugate does not change the orbit. Since \( ϕ(x)=αx+βTr(x) \) and, by (1), the diagonal entries of its first Jordan block equal \( λ_{1}^{ \prime } \) , the first \( {k_{1}}×{k_{1}} \) submatrix of \( ϕ(x) \) can be expressed as

\( (\begin{matrix}λ_{1}^{ \prime } & α & & \\ & ⋱ & ⋱ & \\ & & ⋱ & α \\ & & & λ_{1}^{ \prime } \\ \end{matrix}), \) (15)

while the first \( {k_{1}}×{k_{1}} \) submatrix of \( {x^{ \prime }} \) can be expressed as

\( (\begin{matrix}λ_{1}^{ \prime } & 1 & & \\ & ⋱ & ⋱ & \\ & & ⋱ & 1 \\ & & & λ_{1}^{ \prime } \\ \end{matrix}). \) (16)

Let \( A \) be the following matrix:

\( (\begin{matrix}1 & & & & \\ & \frac{1}{α} & & & \\ & & \frac{1}{{α^{2}}} & & \\ & & & ⋱ & \\ & & & & \frac{1}{{α^{{k_{1}}-1}}} \\ \end{matrix}). \) (17)

We can easily verify that \( x \prime \) and \( ϕ(x) \) are conjugate by comparing each pair of corresponding blocks:

\( (\begin{matrix}λ_{1}^{ \prime } & α & & \\ & ⋱ & ⋱ & \\ & & ⋱ & α \\ & & & λ_{1}^{ \prime } \\ \end{matrix})=A(\begin{matrix}λ_{1}^{ \prime } & 1 & & \\ & ⋱ & ⋱ & \\ & & ⋱ & 1 \\ & & & λ_{1}^{ \prime } \\ \end{matrix}){A^{-1}}. \) (18)

Therefore, the map \( ϕ \) sends \( {G_{x}} \) to \( {G_{x \prime }} \) .

Let \( ϕ \prime (A)={α^{-1}}A-\frac{{α^{-1}}β}{α+nβ}Tr(A) \) be another linear map \( P{M_{n}}(C)→P{M_{n}}(C) \) . We can verify that \( ϕ \prime \) is the inverse of \( ϕ \) . Since \( ϕ \) has an inverse, it is a projective transformation, as desired.
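The claimed inverse can be checked numerically. The sketch below (illustrative only, exact rational arithmetic) applies \( ϕ \) and then \( ϕ \prime \) to a sample \( 3×3 \) matrix, interpreting \( Tr(A) \) as \( Tr(A)·{I_{n}} \) as in the text:

```python
from fractions import Fraction as F

n = 3
alpha, beta = F(2), F(1)          # alpha != 0 and alpha != -n*beta

def trace(A):
    return sum(A[i][i] for i in range(n))

def phi(A):
    # phi(A) = alpha*A + beta*Tr(A)*I_n
    t = trace(A)
    return [[alpha * A[i][j] + (beta * t if i == j else 0)
             for j in range(n)] for i in range(n)]

def phi_inv(A):
    # phi'(A) = alpha^{-1}*A - (alpha^{-1}*beta/(alpha+n*beta))*Tr(A)*I_n
    t = trace(A)
    c = beta / (alpha * (alpha + n * beta))
    return [[A[i][j] / alpha - (c * t if i == j else 0)
             for j in range(n)] for i in range(n)]

A = [[F(1), F(2), F(0)], [F(0), F(3), F(1)], [F(5), F(0), F(4)]]
print(phi_inv(phi(A)) == A)  # -> True
```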

Proof of Theorem 1.3. The proposition above proves that \( {G_{x}} \) is projectively equivalent to \( {G_{x \prime }} \) . Since the projective transformation \( ϕ:{P^{n}}(C)→{P^{n}}(C) \) is continuous, it sends limit points to limit points. Hence, the closures \( {Ω_{x}} \) and \( {Ω_{x \prime }} \) are projectively equivalent by the same projective transformation.

Remark. In the proof of Theorem 1.3, we tried to find a linear map

\( ϕ:{M_{n}}(C)→{M_{n}}(C) \) (19)

satisfying

\( ϕ(gA{g^{-1}})=gϕ(A){g^{-1}} \) for each \( A∈{M_{n}}(C) \) and for each \( g∈G{L_{n}}(C) \) ,

\( ϕ(x) \) is conjugate to \( x \prime \) , and

\( ϕ \) is invertible.

Our construction was \( ϕ:A↦αA+βTr(A) \) for \( α≠0 \) and \( α≠-nβ \) . In fact, we can see that if \( ϕ \) satisfies the three conditions listed above, then \( ϕ \) must be of the form \( ϕ:A↦αA+βTr(A) \) for \( α≠0 \) and \( α≠-nβ \) (Lemma 3.7). For \( {G_{x}} \) and \( {G_{x \prime }} \) to be projectively equivalent, the first condition in the list above is more restrictive than necessary. One only needs to ask

\( ∀A∈{M_{n}}(C),∀g∈G{L_{n}}(C),∃λ∈{C^{×}} such that ϕ(gA{g^{-1}})=λgϕ(A){g^{-1}}. \) (20)

It would be interesting to explore whether the looser condition can give us a generalization of Theorem 1.3.

Lemma 3.7. Let \( ϕ:{M_{n}}(C)→{M_{n}}(C) \) be an invertible linear map such that for all \( g∈G{L_{n}}(C) \) and for all \( A∈{M_{n}}(C) \) , \( ϕ(gA{g^{-1}})=gϕ(A){g^{-1}} \) . Then there exist \( {λ_{1}},{λ_{2}}∈C \) for which \( {λ_{1}}≠0 \) , \( n{λ_{2}}≠-{λ_{1}} \) , and \( ϕ(A)={λ_{1}}A+{λ_{2}}Tr(A) \) .

Proof. We view \( {M_{n}}(C) \) as a representation of \( G{L_{n}}(C) \) by

\( g.x:=gx{g^{-1}}. \) (21)

Note that this representation is not irreducible. We can write \( {M_{n}}(C)=M_{n}^{0}(C)⊕C{I_{n}} \) , where \( M_{n}^{0}(C)=\lbrace A∈{M_{n}}(C):Tr(A)=0\rbrace \) . [5, Chapter V] contains a proof of the following classical result.

Result. \( M_{n}^{0}(C) \) and \( C{I_{n}} \) are irreducible representations of \( G{L_{n}}(C) \) .

Let \( ϕ:{M_{n}}(C)→{M_{n}}(C) \) be a linear map satisfying the conditions of the lemma; then \( ϕ \) is a morphism of representations, and we may decompose \( ϕ={ϕ_{00}}+{ϕ_{01}}+{ϕ_{10}}+{ϕ_{11}} \) , where

\( \begin{cases} \begin{array}{c} {ϕ_{00}}:M_{n}^{0}(C)→M_{n}^{0}(C) \\ {ϕ_{01}}:M_{n}^{0}(C)→C{I_{n}} \\ {ϕ_{10}}:C{I_{n}}→M_{n}^{0}(C) \\ {ϕ_{11}}:C{I_{n}}→C{I_{n}}. \end{array} \end{cases} \) (22)

By Schur’s Lemma, \( {ϕ_{00}}=α{Id_{M_{n}^{0}(C)}} \) and \( {ϕ_{11}}=β{Id_{C{I_{n}}}} \) for some \( α,β∈C \) , whereas \( {ϕ_{01}}={ϕ_{10}}=0 \) since the two summands are non-isomorphic. Since \( ϕ \) must be invertible, \( α≠0 \) and \( β≠0 \) , and

\( ϕ=α{Id_{M_{n}^{0}(C)}}+β{Id_{C{I_{n}}}}. \) (23)

Let \( A∈{M_{n}}(C) \) . We can write \( A=(A-\frac{1}{n}Tr(A))+(\frac{1}{n}Tr(A)) \) . Then,

\( ϕ(A)=α(A-\frac{1}{n}Tr(A))+β(\frac{1}{n}Tr(A)) \)

\( =αA+\frac{1}{n}(β-α)Tr(A) \)

\( ={λ_{1}}A+{λ_{2}}Tr(A). \)

with \( {λ_{1}}=α \) and \( {λ_{2}}=\frac{1}{n}(β-α) \) . Then \( {λ_{1}}=α≠0 \) and \( {λ_{1}}+n{λ_{2}}=β≠0 \) , so \( n{λ_{2}}≠-{λ_{1}} \) , and the lemma is proven.
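The equivariance \( ϕ(gA{g^{-1}})=gϕ(A){g^{-1}} \) rests only on the conjugation invariance of the trace, so it can be confirmed on samples. The following quick \( 2×2 \) check (illustrative; exact rational arithmetic, with sample values \( {λ_{1}}=3 \) , \( {λ_{2}}=5 \) ) is not a proof, just a sanity test:

```python
from fractions import Fraction as F

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def phi(A, l1=F(3), l2=F(5)):
    # phi(A) = l1*A + l2*Tr(A)*I, the equivariant map of Lemma 3.7.
    t = A[0][0] + A[1][1]
    return [[l1 * A[i][j] + (l2 * t if i == j else 0)
             for j in range(2)] for i in range(2)]

g = [[F(1), F(2)], [F(1), F(3)]]          # det = 1, so g is invertible
A = [[F(2), F(0)], [F(4), F(7)]]
lhs = phi(mul(mul(g, A), inv(g)))         # phi(g A g^{-1})
rhs = mul(mul(g, phi(A)), inv(g))         # g phi(A) g^{-1}
print(lhs == rhs)  # -> True
```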

3.4. The representation of \( {O_{n}}(C) \) on \( {M_{n}}(C) \)

Proposition 3.8. If there exist \( α,β,γ∈C \) such that \( α≠±β \) , \( α+β≠-nγ \) , and \( y=αx+β{x^{⊺}}+γTr(x) \) , then the linear map \( ϕ:P{M_{n}}(C)→P{M_{n}}(C) \) of the form \( ϕ:A↦αA+β{A^{⊺}}+γTr(A) \) is a projective transformation that sends \( {G_{x}} \) to \( {G_{y}} \) .

Proof. Let us check that \( ϕ(gA{g^{-1}})=gϕ(A){g^{-1}} \) for all orthogonal matrices \( g \) . Notice that \( {g^{⊺}}={g^{-1}} \) for \( g \) an orthogonal matrix. Hence,

\( ϕ(gA{g^{-1}})= α(gA{g^{⊺}})+β(gA{g^{⊺}}{)^{⊺}}+γTr(gA{g^{-1}}) \)

\( =α(gA{g^{⊺}})+β(g{A^{⊺}}{g^{⊺}})+γTr(A) \)

\( =g(αA+β{A^{⊺}}+γTr(A)){g^{⊺}} \)

\( =gϕ(A){g^{-1}}. \)

It follows that \( ϕ \) does indeed send \( {G_{x}} \) to \( {G_{y}} \) .

Now we want to show that \( ϕ \) is invertible. Let \( ϕ \prime :P{M_{n}}(C)→P{M_{n}}(C) \) be the linear map \( ϕ \prime :A↦α \prime A+β \prime {A^{⊺}}+γ \prime Tr(A) \) , where \( α \prime =\frac{α}{{α^{2}}-{β^{2}}} \) , \( β \prime =\frac{-β}{{α^{2}}-{β^{2}}} \) , and \( γ \prime =\frac{-γ}{(α+β)(α+β+nγ)} \) ; these are well defined because \( α≠±β \) and \( α+β≠-nγ \) . A direct computation verifies that \( ϕ \prime \) is the inverse of \( ϕ \) . Since \( ϕ \) has an inverse, it is a projective transformation, as desired.

This proves Theorem 1.4.

Lemma 3.9. Let \( n≥3 \) . Let \( ϕ:{M_{n}}(C)→{M_{n}}(C) \) be a linear map such that for all \( g∈{O_{n}}(C) \) and for all \( A∈{M_{n}}(C) \) , \( ϕ(gA{g^{-1}})=gϕ(A){g^{-1}} \) . Then there exists \( α,β,γ∈C \) satisfying the conditions given in Proposition 3.8 and such that \( ϕ(A)=αA+β{A^{⊺}}+γTr(A) \) .

Proof. The representation \( V={M_{n}}(C) \) of \( G={O_{n}}(C) \) , with the conjugation action, can be expressed as

\( V=Sym°⊕C{I_{n}}⊕Ant={V_{1}}⊕{V_{2}}⊕{V_{3}}, \) (24)

where \( Sym°=\lbrace y∈{M_{n}}(C):{y^{⊺}}=y,Tr(y)=0\rbrace \) and \( Ant=\lbrace y∈{M_{n}}(C):{y^{⊺}}=-y\rbrace \) . The representations \( Sym° \) , \( C{I_{n}} \) , and \( Ant \) are pairwise non-isomorphic irreducible representations of \( G={O_{n}}(C) \) [5, Chapter V]. Let \( ϕ:V→V \) be an equivariant isomorphism, and write \( ϕ={Σ_{i,j}}{ϕ_{ij}} \) , where \( {ϕ_{ij}}:{V_{i}}→{V_{j}} \) .

By Schur’s Lemma, \( ϕ={λ_{1}}{Id_{{V_{1}}}}+{λ_{2}}{Id_{{V_{2}}}}+{λ_{3}}{Id_{{V_{3}}}} \) for nonzero \( {λ_{1}} \) , \( {λ_{2}} \) , and \( {λ_{3}} \) . For any \( x∈{M_{n}}(C) \) ,

\( x=\frac{x+{x^{⊺}}}{2}+\frac{x-{x^{⊺}}}{2}=\underset{∈Sym°}{\underset{\underbrace{ }}{[\frac{x+{x^{⊺}}}{2}-\frac{1}{n}Tr(x)]}}+\underset{∈C{I_{n}}}{\underset{\underbrace{ }}{[\frac{1}{n}Tr(x)]}}+\underset{∈Ant}{\underset{\underbrace{ }}{[\frac{x-{x^{⊺}}}{2}]}}. \)

\( ϕ(x)={λ_{1}}[\frac{x+{x^{⊺}}}{2}-\frac{1}{n}Tr(x)]+{λ_{2}}[\frac{1}{n}Tr(x)]+{λ_{3}}[\frac{x-{x^{⊺}}}{2}]. \)

We may pick \( α=\frac{1}{2}{λ_{1}}+\frac{1}{2}{λ_{3}} \) , \( β=\frac{1}{2}{λ_{1}}-\frac{1}{2}{λ_{3}} \) , \( γ=\frac{1}{n}({λ_{2}}-{λ_{1}}) \) , which satisfy the required conditions: \( α+β={λ_{1}}≠0 \) , \( α-β={λ_{3}}≠0 \) , and \( α+β+nγ={λ_{2}}≠0 \) .
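The equivariance claimed in Proposition 3.8 is also easy to test numerically. The following sketch (ours, not the paper's; it reads \( γTr(A) \) as a scalar multiple of the identity and draws a real orthogonal matrix from a QR factorization) checks it on random data:

```python
import numpy as np

# Illustrative check: phi(A) = a*A + b*A^T + c*Tr(A)*I_n commutes with
# conjugation by orthogonal matrices, for which g^{-1} = g^T.
rng = np.random.default_rng(1)
n = 4
a, b, c = 1.5, 0.7, -0.2  # chosen so that a != ±b and a + b != -n*c

def phi(A):
    return a * A + b * A.T + c * np.trace(A) * np.eye(n)

g, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random real orthogonal matrix
assert np.allclose(g @ g.T, np.eye(n))

A = rng.standard_normal((n, n))
assert np.allclose(phi(g @ A @ g.T), g @ phi(A) @ g.T)
```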

4. Geometric applications

In this section, we exploit Theorem 1.3 to give some algebraic geometry results. To do this, we need to introduce a linear algebra lemma (Lemma 4.1).

4.1. A linear algebra lemma

Let \( x \) be an element in \( P{M_{n}}(C) \) , with

\( x~(\begin{matrix}{J_{{λ_{1}},{k_{1}}}} & & \\ & ⋱ & \\ & & {J_{{λ_{r}},{k_{r}}}} \\ \end{matrix}). \) (25)

Lemma 4.1. Let \( {λ_{1}},…,{λ_{r}} \) be distinct. The closure \( {Ω_{x}} \) of the orbit of x under the conjugation action of the group \( G{L_{n}}(C) \) is the set of classes of matrices \( y∈{M_{n}}(C) \) for which there exists \( {x_{0}}∈{C^{×}} \) satisfying the following condition:

\( det(y-T·{I_{n}})=\prod _{i=1}^{r} ({x_{0}}{λ_{i}}-T{)^{{k_{i}}}}. \) (26)

Proof. Let \( {Θ_{x}} \) be the set of classes of matrices \( y∈{M_{n}}(C) \) for which there exists \( {x_{0}}∈{C^{×}} \) satisfying the following condition:

\( det(y-T·{I_{n}})=\prod _{i=1}^{r} ({x_{0}}{λ_{i}}-T{)^{{k_{i}}}}. \) (27)

To show that \( {Θ_{x}} \) is the closure of the orbit \( {G_{x}} \) of \( x \) , we need to prove that \( {Θ_{x}} \) is closed in \( P{M_{n}}(C) \) and that every element in \( {Θ_{x}} \) can be approximated by a sequence in \( {G_{x}} \) .

Let us first prove the closedness of \( {Θ_{x}} \) . Each side of equation (26) is a polynomial in \( T \) . On the left-hand side, the coefficients of this polynomial are homogeneous polynomials in the entries of the matrix \( y \) , whereas on the right-hand side the coefficients are scalars expressed in terms of \( {x_{0}} \) and the \( {λ_{i}} \) . Comparing the coefficients of the two sides, we get \( n+1 \) equations that \( y \) must satisfy (the leading coefficients agree automatically). Since \( {x_{0}} \) is undetermined, we wish to eliminate this variable. By doing so, we get finitely many homogeneous polynomials that characterize the elements of \( {Θ_{x}} \) . Hence, \( {Θ_{x}} \) is the zero locus in \( P{M_{n}}(C) \) of finitely many homogeneous polynomials. By Proposition 2.24, \( {Θ_{x}} \) is an intersection of closed subsets of \( P{M_{n}}(C) \) and is thus closed.

Next, we show that every element in \( {Θ_{x}} \) can be approximated by a sequence in \( {G_{x}} \) . To this end, we first prove a claim.

Claim. \( {Θ_{x}} \) consists exactly of the classes \( x \prime ∈P{M_{n}}(C) \) such that, regarding \( x \prime \) as a representing matrix, \( x \prime \) is similar to a matrix of the form

\( (\begin{matrix}{x_{0}}{J_{{λ_{1}},{k_{11}}}} & & & & & \\ & {x_{0}}{J_{{λ_{1}},{k_{12}}}} & & & & \\ & & ⋱ & & & \\ & & & {x_{0}}{J_{{λ_{1}},{k_{1{l_{1}}}}}} & & \\ & & & & ⋱ & \\ & & & & & {x_{0}}{J_{{λ_{r}},{k_{r{l_{r}}}}}} \\ \end{matrix}), \) (28)

where \( {x_{0}}∈{C^{×}} \) and \( {k_{i1}}+…+{k_{i{l_{i}}}}={k_{i}} \) for every \( i=1,2,…,r \) .

Proof. Let \( {J_{x \prime }} \) denote the matrix in (28). Suppose first that \( x \prime \) satisfies the condition in the Claim. Since \( x \prime \) and \( {J_{x \prime }} \) are similar, we have \( {J_{x \prime }}=Qx \prime {Q^{-1}} \) for some \( Q∈G{L_{n}}(C) \) . Then

\( det({x^{ \prime }}-T·{I_{n}})=det(Q({x^{ \prime }}-T·{I_{n}}){Q^{-1}}) \)

\( =det({J_{{x^{ \prime }}}}-T·{I_{n}}) \)

\( =\prod _{i=1}^{r} \prod _{j=1}^{{l_{i}}} det({x_{0}}{J_{{λ_{i}},{k_{ij}}}}-T·{I_{{k_{ij}}}}) \)

\( =\prod _{i=1}^{r} ({x_{0}}{λ_{i}}-T{)^{{k_{i}}}}. \)

Thus \( x \prime ∈{Θ_{x}} \) by the definition of \( {Θ_{x}} \) . Now we consider the converse direction. Suppose that \( x \prime ∈{Θ_{x}} \) . Since \( x \prime \) satisfies \( det(x \prime -T·{I_{n}})=\prod _{i=1}^{r} ({x_{0}}{λ_{i}}-T{)^{{k_{i}}}} \) for some \( {x_{0}}∈{C^{×}} \) , the Jordan normal form of \( x \prime \) must be

\( {J_{x \prime }}=(\begin{matrix}{J_{{x_{0}}{λ_{1}},{k_{11}}}} & & & & & \\ & {J_{{x_{0}}{λ_{1}},{k_{12}}}} & & & & \\ & & ⋱ & & & \\ & & & {J_{{x_{0}}{λ_{1}},{k_{1{l_{1}}}}}} & & \\ & & & & ⋱ & \\ & & & & & {J_{{x_{0}}{λ_{r}},{k_{r{l_{r}}}}}} \\ \end{matrix}), \) (29)

However, each block \( {J_{{x_{0}}{λ_{i}},{k_{ij}}}} \) is similar to \( {x_{0}}{J_{{λ_{i}},{k_{ij}}}} \) by Lemma 4.2 below, so \( x \prime \) is similar to a matrix of the form (28).

Lemma 4.2. Let \( {a_{1}},…,{a_{d-1}},{b_{1}},…,{b_{d-1}} \) be nonzero complex numbers. Let \( λ \) be an arbitrary complex number. Then the matrices

\( A=(\begin{matrix}λ & {a_{1}} & & & & \\ & λ & {a_{2}} & & & \\ & & ⋱ & ⋱ & & \\ & & & λ & {a_{i}} & \\ & & & & ⋱ & ⋱ \\ & & & & & λ \\ \end{matrix}) \) (30)

and

\( B=(\begin{matrix}λ & {b_{1}} & & & & \\ & λ & {b_{2}} & & & \\ & & ⋱ & ⋱ & & \\ & & & λ & {b_{i}} & \\ & & & & ⋱ & ⋱ \\ & & & & & λ \\ \end{matrix}) \) (31)

are similar.

Proof. Define the sequence \( \lbrace {α_{k}}{\rbrace _{k=1,…,d}} \) of nonzero complex numbers by \( {α_{1}}=1 \) and \( {α_{k}}=\frac{{a_{1}}…{a_{k-1}}}{{b_{1}}…{b_{k-1}}} \) for \( k \gt 1 \) . Let \( Q=diag({α_{1}},…,{α_{d}}) \) . Since each \( {α_{k}} \) is nonzero, the matrix \( Q \) is invertible, and one checks directly that \( B=QA{Q^{-1}} \) .
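The diagonal conjugation in Lemma 4.2 can be verified numerically; the sketch below (helper names are ours) builds the two bidiagonal matrices and the conjugating matrix \( Q \) explicitly:

```python
import numpy as np

# Illustrative check of Lemma 4.2: Q = diag(alpha_1, ..., alpha_d) with
# alpha_1 = 1 and alpha_k = (a_1...a_{k-1})/(b_1...b_{k-1}) conjugates A into B.
def bidiagonal(lam, sup):
    d = len(sup) + 1
    M = lam * np.eye(d)
    M[np.arange(d - 1), np.arange(1, d)] = sup  # superdiagonal entries
    return M

lam = 2.0
a = [3.0, -1.0, 0.5]
b = [1.0, 2.0, -4.0]
A = bidiagonal(lam, a)
B = bidiagonal(lam, b)

alpha = [1.0]
for ak, bk in zip(a, b):
    alpha.append(alpha[-1] * ak / bk)
Q = np.diag(alpha)
assert np.allclose(B, Q @ A @ np.linalg.inv(Q))
```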

This concludes the proof of the Claim.

Now let us return to the proof of Lemma 4.1. Let \( y∈{Θ_{x}} \) . We want to show that \( y \) can be approximated by a sequence in \( {G_{x}} \) . By the Claim, we may assume that the Jordan normal form of \( y \) , up to scaling, is

\( {J_{y}}=(\begin{matrix}{J_{{λ_{1}},{k_{11}}}} & & & & & \\ & {J_{{λ_{1}},{k_{12}}}} & & & & \\ & & ⋱ & & & \\ & & & {J_{{λ_{1}},{k_{1{l_{1}}}}}} & & \\ & & & & ⋱ & \\ & & & & & {J_{{λ_{r}},{k_{r{l_{r}}}}}} \\ \end{matrix}), \) (32)

The upper-left \( {k_{1}}×{k_{1}} \) submatrix of \( {J_{y}} \) is:

\( A=(\begin{matrix}{J_{{λ_{1}},{k_{11}}}} & & & \\ & {J_{{λ_{1}},{k_{12}}}} & & \\ & & ⋱ & \\ & & & {J_{{λ_{1}},{k_{1{l_{1}}}}}} \\ \end{matrix}), \) (33)

which can be approximated by the sequence \( \lbrace {A_{n}}\rbrace \) defined by

\( {A_{n}}=(\begin{matrix}λ & 1 & & & & & & & \\ & λ & 1 & & & & & & \\ & & ⋱ & ⋱ & & & & & \\ & & & λ & 1 & & & & \\ & & & & ⋱ & ⋱ & & & \\ & & & & & λ & 1/n & & \\ & & & & & & λ & 1 & \\ & & & & & & & ⋱ & ⋱ \\ \end{matrix}) \) (34)

where each \( {A_{n}} \) is similar to \( {J_{{λ_{1}},{k_{1}}}} \) by Lemma 4.2 (an entry \( 1/n \) is placed at each junction between consecutive blocks). For each \( {k_{i}}×{k_{i}} \) submatrix of \( {J_{y}} \) , we may apply the same process, and thus \( {J_{y}} \) can be approximated by a sequence in \( {G_{x}} \) . Since \( y \) is similar to \( {J_{y}} \) , \( y \) can also be approximated by a sequence in \( {G_{x}} \) , as desired. This finishes the proof of Lemma 4.1.
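The degeneration used above can be seen in a small example (ours, with \( d=4 \) and our own helper name): perturbing one superdiagonal zero of a two-block Jordan matrix to a small \( ε≠0 \) produces a matrix similar to a single Jordan block, and the rank of \( M-λ{I_{d}} \) detects the number of blocks.

```python
import numpy as np

# Illustration: the superdiagonal (1, eps, 1) gives one 4x4 Jordan block for
# any eps != 0 (Lemma 4.2), while the limit eps -> 0 has two 2x2 blocks.
def with_superdiagonal(lam, sup):
    d = len(sup) + 1
    M = lam * np.eye(d)
    M[np.arange(d - 1), np.arange(1, d)] = sup
    return M

lam = 2.0
A_eps = with_superdiagonal(lam, [1.0, 1e-3, 1.0])  # the matrix A_n with eps = 1/n
A_lim = with_superdiagonal(lam, [1.0, 0.0, 1.0])   # its limit: two 2x2 Jordan blocks

# rank(M - lam*I) equals d minus the number of Jordan blocks for lam
assert np.linalg.matrix_rank(A_eps - lam * np.eye(4)) == 3  # one block
assert np.linalg.matrix_rank(A_lim - lam * np.eye(4)) == 2  # two blocks
```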

In the following two subsections, we will combine Theorem 1.3 and Lemma 4.1 to get some interesting geometric results. The general idea is as follows. Theorem 1.3 gives us a condition under which \( {Ω_{x}} \) and \( {Ω_{x \prime }} \) are projectively equivalent, while Lemma 4.1 describes how we can write down defining equations of \( {Ω_{x}} \) in \( P{M_{n}}(C) \) when \( x \) satisfies the assumption in Lemma 4.1. Combining these two, one may get some nontrivial results in algebraic geometry. To illustrate how this machinery works, we discuss in the following subsections the cases \( n=2 \) and \( n=3 \) . We give an easy geometric explanation for our results when \( n=2 \) . The results for \( n=3 \) , however, are already nontrivial.

4.2. \( n=2 \)

We would like to write down the equation (26) explicitly. Let us distinguish three cases.

4.2.1. Case 1. In this case, we suppose \( x~(\begin{matrix}λ & 1 \\ & λ \\ \end{matrix}) \) . By Lemma 4.1, \( {Ω_{x}}=\lbrace y∈P{M_{2}}(C):∃{x_{0}} s.t.det(y-T{I_{2}})=({x_{0}}λ-T{)^{2}}\rbrace \) . Let \( y=(\begin{matrix}{y_{1}} & {y_{2}} \\ {y_{3}} & {y_{4}} \\ \end{matrix}) \) . Then for \( y∈{Ω_{x}} \) there must exist \( {x_{0}} \) such that

\( det(\begin{matrix}{y_{1}}-T & {y_{2}} \\ {y_{3}} & {y_{4}}-T \\ \end{matrix})=({x_{0}}λ-T{)^{2}}. \) (35)

Expanding the two sides of the equation as polynomials in \( T \) , we find

\( ({y_{1}}{y_{4}}-{y_{2}}{y_{3}})-({y_{1}}+{y_{4}})T+{T^{2}}=x_{0}^{2}{λ^{2}}-2{x_{0}}λT+{T^{2}}. \) (36)

Comparing the coefficients, we get

\( \begin{cases} \begin{array}{c} {y_{1}}{y_{4}}-{y_{2}}{y_{3}}=x_{0}^{2}{λ^{2}} \\ {y_{1}}+{y_{4}}=2{x_{0}}λ \end{array} \end{cases}. \) (37)

Since \( {x_{0}} \) is an undetermined variable, we eliminate it by combining the two above equations, and we find

\( {y_{1}}{y_{4}}-{y_{2}}{y_{3}}=\frac{1}{4}({y_{1}}+{y_{4}}{)^{2}}. \) (38)

Hence, the defining equation of \( {Ω_{x}} \) for \( x~(\begin{matrix}λ & 1 \\ & λ \\ \end{matrix}) \) is \( {y_{1}}{y_{4}}-{y_{2}}{y_{3}}=\frac{1}{4}({y_{1}}+{y_{4}}{)^{2}} \) .
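As a plausibility check (ours, not the paper's), a random conjugate of the Jordan block does satisfy this equation:

```python
import numpy as np

# Any conjugate y = g J g^{-1} of the 2x2 Jordan block J with eigenvalue lam
# satisfies det(y) = tr(y)^2 / 4, the defining equation of Omega_x.
rng = np.random.default_rng(2)
lam = 3.0
J = np.array([[lam, 1.0], [0.0, lam]])
g = rng.standard_normal((2, 2))
y = g @ J @ np.linalg.inv(g)
assert np.isclose(np.linalg.det(y), np.trace(y) ** 2 / 4)
```

Note that the equation is homogeneous of degree 2, so it is insensitive to the choice of representing matrix, as it must be on \( P{M_{2}}(C) \) .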

4.2.2. Case 2. In this case, we suppose that \( x~(\begin{matrix}{λ_{1}} & \\ & {λ_{2}} \\ \end{matrix}) \) , where \( {λ_{1}}≠{λ_{2}} \) . By Lemma 4.1, \( {Ω_{x}}=\lbrace y∈P{M_{2}}(C):∃{x_{0}} s.t.det(y-T{I_{2}})=({x_{0}}{λ_{1}}-T)({x_{0}}{λ_{2}}-T)\rbrace \) . Again let \( y=(\begin{matrix}{y_{1}} & {y_{2}} \\ {y_{3}} & {y_{4}} \\ \end{matrix}) \) . Then for \( y∈{Ω_{x}} \) there must exist \( {x_{0}} \) such that

\( det(\begin{matrix}{y_{1}}-T & {y_{2}} \\ {y_{3}} & {y_{4}}-T \\ \end{matrix})=({x_{0}}{λ_{1}}-T)({x_{0}}{λ_{2}}-T). \) (39)

Expanding both sides of the equation as polynomials in the variable \( T \) , we find

\( ({y_{1}}{y_{4}}-{y_{2}}{y_{3}})-({y_{1}}+{y_{4}})T+{T^{2}}=x_{0}^{2}{λ_{1}}{λ_{2}}-{x_{0}}({λ_{1}}+{λ_{2}})T+{T^{2}}, \) (40)

which gives us

\( \begin{cases} \begin{array}{c} {y_{1}}{y_{4}}-{y_{2}}{y_{3}}=x_{0}^{2}{λ_{1}}{λ_{2}} \\ {y_{1}}+{y_{4}}={x_{0}}({λ_{1}}+{λ_{2}}) \end{array} \end{cases}, \) (41)

by comparing the coefficients of the polynomials in \( T \) . Similarly, we eliminate the undetermined variable \( {x_{0}} \) by combining these equations and find

\( ({λ_{1}}+{λ_{2}}{)^{2}}({y_{1}}{y_{4}}-{y_{2}}{y_{3}})={λ_{1}}{λ_{2}}({y_{1}}+{y_{4}}{)^{2}}. \) (42)

Hence, the defining equation of \( {Ω_{x}} \) for \( x~(\begin{matrix}{λ_{1}} & \\ & {λ_{2}} \\ \end{matrix}) \) , where \( {λ_{1}}≠{λ_{2}} \) , is \( ({λ_{1}}+{λ_{2}}{)^{2}}({y_{1}}{y_{4}}-{y_{2}}{y_{3}})={λ_{1}}{λ_{2}}({y_{1}}+{y_{4}}{)^{2}} \) . Now we can apply Theorem 1.3 to get the following corollary.
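A quick numerical check of this defining equation (ours, not the paper's; the factor \( 0.7 \) plays the role of the undetermined scalar \( {x_{0}} \)):

```python
import numpy as np

# Conjugates of diag(l1, l2), rescaled by any x0 != 0, satisfy
# (l1 + l2)^2 * det(y) = l1 * l2 * tr(y)^2.
rng = np.random.default_rng(3)
l1, l2 = 1.0, 4.0  # distinct eigenvalues
g = rng.standard_normal((2, 2))
y = 0.7 * (g @ np.diag([l1, l2]) @ np.linalg.inv(g))
assert np.isclose((l1 + l2) ** 2 * np.linalg.det(y), l1 * l2 * np.trace(y) ** 2)
```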

Corollary 4.3. Suppose that \( {λ_{1}}≠±{λ_{2}} \) and \( {λ_{1}} \prime ≠±{λ_{2}} \prime \) . Then the two surfaces defined by \( ({λ_{1}}+{λ_{2}}{)^{2}}({y_{1}}{y_{4}}-{y_{2}}{y_{3}})={λ_{1}}{λ_{2}}({y_{1}}+{y_{4}}{)^{2}} \) and \( ({λ_{1}} \prime +{λ_{2}} \prime {)^{2}}({y_{1}}{y_{4}}-{y_{2}}{y_{3}})={λ_{1}} \prime {λ_{2}} \prime ({y_{1}}+{y_{4}}{)^{2}} \) are projectively equivalent in \( {P^{3}} \) .

Proof. Let \( x~(\begin{matrix}{λ_{1}} & \\ & {λ_{2}} \\ \end{matrix}) \) and \( x \prime ~(\begin{matrix}{λ_{1}} \prime & \\ & {λ_{2}} \prime \\ \end{matrix}) \) with \( {λ_{1}}≠±{λ_{2}} \) and \( {λ_{1}} \prime ≠±{λ_{2}} \prime \) . By Theorem 1.3, we know that if there exist \( α,β \) satisfying \( α≠0 \) and \( α≠-2β \) such that

\( \begin{cases} \begin{array}{c} {λ_{1}} \prime =α{λ_{1}}+β({λ_{1}}+{λ_{2}}) \\ {λ_{2}} \prime =α{λ_{2}}+β({λ_{1}}+{λ_{2}}) \end{array} \end{cases}, \) (43)

then \( {Ω_{x}} \) is projectively equivalent to \( {Ω_{x \prime }} \) . By writing down the condition (43) explicitly, we find that the assumptions \( {λ_{1}}≠±{λ_{2}} \) and \( {λ_{1}} \prime ≠±{λ_{2}} \prime \) guarantee the existence of such \( α \) and \( β \) . Corollary 4.3 follows, as the defining equations for \( {Ω_{x}} \) and \( {Ω_{x \prime }} \) were calculated above.

Remark. In fact, Corollary 4.3 can be proved without the machinery that we developed in the article. It can be viewed as a direct consequence of the following well-known algebraic geometry fact [6]:

Let \( Y \) and \( Y \prime \) be smooth quadric hypersurfaces in \( {P^{n}}(C) \) . Then \( Y \) and \( Y \prime \) are projectively equivalent in \( {P^{n}}(C) \) .

One checks readily that under the assumption of Corollary 4.3, the quadric surfaces in question are smooth. Thus they are projectively equivalent in \( {P^{3}}(C) \) .

4.2.3. Case 3. In this case, we consider \( x~(\begin{matrix}λ & \\ & λ \\ \end{matrix}) \) with \( λ≠0 \) . This case is trivial: \( {Ω_{x}}={G_{x}}=\lbrace (\begin{matrix}1 & 0 \\ 0 & 1 \\ \end{matrix})\rbrace ⊂{P^{3}}(C). \)

4.3. \( n=3 \)

We need to write down the equation (26) explicitly when the size of the matrices is \( n=3 \) .

4.3.1. Case 1. In this case, we suppose that \( x~(\begin{matrix}λ & 1 & \\ & λ & 1 \\ & & λ \\ \end{matrix}) \) . By Lemma 4.1, \( {Ω_{x}}=\lbrace y∈P{M_{3}}(C):∃{x_{0}} s.t.det(y-T{I_{3}})=({x_{0}}λ-T{)^{3}}\rbrace \) . Let \( y=(\begin{matrix}{y_{11}} & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}} & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}} \\ \end{matrix}) \) . Then for \( y∈{Ω_{x}} \) there must exist \( {x_{0}} \) such that

\( det(\begin{matrix}{y_{11}}-T & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}}-T & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}}-T \\ \end{matrix})=({x_{0}}λ-T{)^{3}}. \) (44)

Expanding both sides of the equation as polynomials in \( T \) , we find

\( ({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \)

\( +({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}})T+({y_{11}}+{y_{22}}+{y_{33}}){T^{2}}-{T^{3}} \) (45)

\( =x_{0}^{3}{λ^{3}}-3x_{0}^{2}{λ^{2}}T+3{x_{0}}λ{T^{2}}-{T^{3}}. \)

Comparing the coefficients, we have

\( \begin{cases} \begin{array}{c} {y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}=x_{0}^{3}{λ^{3}} \\ {y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}=-3x_{0}^{2}{λ^{2}} \\ {y_{11}}+{y_{22}}+{y_{33}}=3{x_{0}}λ. \end{array} \end{cases} \) (46)

Eliminating the undetermined variable \( {x_{0}} \) , we get

\( \begin{cases} \begin{array}{c} {y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}=\frac{1}{27}({y_{11}}+{y_{22}}+{y_{33}}{)^{3}} \\ {y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}=-\frac{1}{3}({y_{11}}+{y_{22}}+{y_{33}}{)^{2}}. \end{array} \end{cases} \) (47)

The above system of equations defines \( {Ω_{x}} \) in \( {P^{8}}(C) \) for \( x~(\begin{matrix}λ & 1 & \\ & λ & 1 \\ & & λ \\ \end{matrix}) \) .
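These two equations can be checked numerically on a random conjugate of the Jordan block (an illustrative sketch, ours; we write \( {e_{2}} \) for the sum of the principal \( 2×2 \) minors, which is the negative of the left-hand side of the second equation):

```python
import numpy as np

# A conjugate of the 3x3 Jordan block with eigenvalue lam satisfies
# det(y) = tr(y)^3 / 27 and e2(y) = tr(y)^2 / 3.
rng = np.random.default_rng(4)
lam = 2.0
J = np.array([[lam, 1, 0], [0, lam, 1], [0, 0, lam]], dtype=float)
g = rng.standard_normal((3, 3))
y = g @ J @ np.linalg.inv(g)

tr = np.trace(y)
e2 = (y[0, 0] * y[1, 1] - y[0, 1] * y[1, 0]
      + y[0, 0] * y[2, 2] - y[0, 2] * y[2, 0]
      + y[1, 1] * y[2, 2] - y[1, 2] * y[2, 1])
assert np.isclose(np.linalg.det(y), tr ** 3 / 27)
assert np.isclose(e2, tr ** 2 / 3)
```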

4.3.2. Case 2. In this case, we suppose that \( x~(\begin{matrix}{λ_{1}} & 1 & \\ & {λ_{1}} & \\ & & {λ_{2}} \\ \end{matrix}) \) with \( {λ_{1}}≠{λ_{2}} \) . By Lemma 4.1, \( {Ω_{x}}=\lbrace y∈P{M_{3}}(C):∃{x_{0}} s.t.det(y-T{I_{3}})=({x_{0}}{λ_{1}}-T{)^{2}}({x_{0}}{λ_{2}}-T)\rbrace \) . Let \( y=(\begin{matrix}{y_{11}} & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}} & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}} \\ \end{matrix}) \) . Then for \( y∈{Ω_{x}} \) there must exist \( {x_{0}} \) such that

\( det(\begin{matrix}{y_{11}}-T & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}}-T & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}}-T \\ \end{matrix})=({x_{0}}{λ_{1}}-T{)^{2}}({x_{0}}{λ_{2}}-T). \) (48)

Expanding both sides as polynomials in \( T \) , we find

\( ({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \)

\( +({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}})T+({y_{11}}+{y_{22}}+{y_{33}}){T^{2}}-{T^{3}} \) (49)

\( =x_{0}^{3}λ_{1}^{2}{λ_{2}}-x_{0}^{2}(2{λ_{1}}{λ_{2}}+λ_{1}^{2})T+{x_{0}}(2{λ_{1}}+{λ_{2}}){T^{2}}-{T^{3}}. \)

Comparing the coefficients of both sides, we get the following system of equations.

\( \begin{cases} \begin{array}{c} {y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}=x_{0}^{3}λ_{1}^{2}{λ_{2}} \\ {y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}=-x_{0}^{2}(2{λ_{1}}{λ_{2}}+λ_{1}^{2}) \\ {y_{11}}+{y_{22}}+{y_{33}}={x_{0}}(2{λ_{1}}+{λ_{2}}). \end{array} \end{cases} \) (50)

We eliminate the undetermined variable \( {x_{0}} \) and get the following system of equations.

\( \begin{cases} \begin{array}{c} (2{λ_{1}}+{λ_{2}}{)^{3}}({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \\ =λ_{1}^{2}{λ_{2}}({y_{11}}+{y_{22}}+{y_{33}}{)^{3}} \\ (2{λ_{1}}+{λ_{2}}{)^{2}}({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}) \\ =-(2{λ_{1}}{λ_{2}}+λ_{1}^{2})({y_{11}}+{y_{22}}+{y_{33}}{)^{2}}. \end{array} \end{cases} \) (51)

This is the system of equations that defines \( {Ω_{x}}⊂{P^{8}}(C) \) for \( x~(\begin{matrix}{λ_{1}} & 1 & \\ & {λ_{1}} & \\ & & {λ_{2}} \\ \end{matrix}) \) . Following the notation given in the Introduction, we let \( {Y_{{λ_{1}},{λ_{2}}}} \) denote the projective variety defined by this system of equations. We can now use Theorem 1.3 to prove the following.
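The system (51) can likewise be checked numerically (an illustrative sketch, ours; again \( {e_{2}} \) denotes the sum of the principal \( 2×2 \) minors):

```python
import numpy as np

# A conjugate of block-diag(J_{l1,2}, l2) satisfies both equations of (51).
rng = np.random.default_rng(6)
l1, l2 = 2.0, 5.0
x = np.array([[l1, 1, 0], [0, l1, 0], [0, 0, l2]], dtype=float)
g = rng.standard_normal((3, 3))
y = g @ x @ np.linalg.inv(g)

tr = np.trace(y)
e2 = (y[0, 0] * y[1, 1] - y[0, 1] * y[1, 0]
      + y[0, 0] * y[2, 2] - y[0, 2] * y[2, 0]
      + y[1, 1] * y[2, 2] - y[1, 2] * y[2, 1])
assert np.isclose((2 * l1 + l2) ** 3 * np.linalg.det(y), l1 ** 2 * l2 * tr ** 3)
assert np.isclose((2 * l1 + l2) ** 2 * e2, (2 * l1 * l2 + l1 ** 2) * tr ** 2)
```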

Corollary 4.4. Given two pairs of distinct complex numbers \( ({λ_{1}},{λ_{2}}) \) and \( ({λ_{1}} \prime ,{λ_{2}} \prime ) \) , suppose that

either \( ({λ_{1}},{λ_{2}}) \) is proportional to \( ({λ_{1}} \prime ,{λ_{2}} \prime ) \) up to a nonzero constant,

or the following conditions hold

\( \begin{cases} \begin{array}{c} {λ_{2}}≠-2{λ_{1}} \\ (2{λ_{1}}+{λ_{2}}){λ_{1}} \prime ≠{λ_{2}}{λ_{2}} \prime \\ 2(2{λ_{1}}+{λ_{2}}){λ_{1}} \prime ≠(3{λ_{1}}-{λ_{2}}){λ_{2}} \prime \end{array} \end{cases} \) (52)

Then the projective varieties \( {Y_{{λ_{1}},{λ_{2}}}} \) and \( {Y_{{λ_{1}} \prime ,{λ_{2}} \prime }} \) are projectively equivalent in \( {P^{8}} \) .

Proof. Let \( x~(\begin{matrix}{λ_{1}} & 1 & \\ & {λ_{1}} & \\ & & {λ_{2}} \\ \end{matrix}) \) and \( x \prime ~(\begin{matrix}{λ_{1}} \prime & 1 & \\ & {λ_{1}} \prime & \\ & & {λ_{2}} \prime \\ \end{matrix}) \) , with \( {λ_{1}}≠{λ_{2}} \) and \( {λ_{1}} \prime ≠{λ_{2}} \prime \) . By Theorem 1.3, we know that if there exist \( α,β \) that satisfy \( α≠0 \) and \( α≠-3β \) such that

\( \begin{cases} \begin{array}{c} {λ_{1}} \prime =α{λ_{1}}+β(2{λ_{1}}+{λ_{2}}) \\ {λ_{2}} \prime =α{λ_{2}}+β(2{λ_{1}}+{λ_{2}}) \end{array} \end{cases}, \) (53)

then \( {Ω_{x}} \) is projectively equivalent to \( {Ω_{x \prime }} \) . By writing down the condition (53) explicitly, we find that the three listed conditions allow for the existence of such \( α \) and \( β \) . Corollary 4.4 follows as the defining equations for \( {Ω_{x}} \) and \( {Ω_{x \prime }} \) were calculated above.

4.3.3. Case 3. In this case, we suppose that \( x~(\begin{matrix}{λ_{1}} & & \\ & {λ_{2}} & \\ & & {λ_{3}} \\ \end{matrix}) \) with \( {λ_{1}},{λ_{2}},{λ_{3}} \) distinct. By Lemma 4.1, \( {Ω_{x}}=\lbrace y∈P{M_{3}}(C):∃{x_{0}} s.t.det(y-T{I_{3}})=({x_{0}}{λ_{1}}-T)({x_{0}}{λ_{2}}-T)({x_{0}}{λ_{3}}-T)\rbrace \) . Let \( y=(\begin{matrix}{y_{11}} & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}} & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}} \\ \end{matrix}) \) . Then for \( y∈{Ω_{x}} \) there must exist \( {x_{0}} \) such that

\( det(\begin{matrix}{y_{11}}-T & {y_{12}} & {y_{13}} \\ {y_{21}} & {y_{22}}-T & {y_{23}} \\ {y_{31}} & {y_{32}} & {y_{33}}-T \\ \end{matrix})=({x_{0}}{λ_{1}}-T)({x_{0}}{λ_{2}}-T)({x_{0}}{λ_{3}}-T). \) (54)

Expanding both sides as polynomials in \( T \) , we find

\( ({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \)

\( +({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}})T+({y_{11}}+{y_{22}}+{y_{33}}){T^{2}}-{T^{3}} \) (55)

\( =x_{0}^{3}{λ_{1}}{λ_{2}}{λ_{3}}-x_{0}^{2}({λ_{1}}{λ_{2}}+{λ_{2}}{λ_{3}}+{λ_{3}}{λ_{1}})T+{x_{0}}({λ_{1}}+{λ_{2}}+{λ_{3}}){T^{2}}-{T^{3}}. \)

Comparing the coefficients of both sides, we get the following system of equations.

\( \begin{cases} \begin{array}{c} {y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}=x_{0}^{3}{λ_{1}}{λ_{2}}{λ_{3}} \\ {y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}=-x_{0}^{2}({λ_{1}}{λ_{2}}+{λ_{2}}{λ_{3}}+{λ_{3}}{λ_{1}}) \\ {y_{11}}+{y_{22}}+{y_{33}}={x_{0}}({λ_{1}}+{λ_{2}}+{λ_{3}}). \end{array} \end{cases} \)

We eliminate the undetermined variable \( {x_{0}} \) and get the following system of equations.

\( \begin{cases} \begin{array}{c} ({λ_{1}}+{λ_{2}}+{λ_{3}}{)^{3}}({y_{11}}{y_{22}}{y_{33}}+{y_{12}}{y_{23}}{y_{31}}+{y_{21}}{y_{32}}{y_{13}}-{y_{31}}{y_{13}}{y_{22}}-{y_{21}}{y_{12}}{y_{33}}-{y_{32}}{y_{23}}{y_{11}}) \\ ={λ_{1}}{λ_{2}}{λ_{3}}({y_{11}}+{y_{22}}+{y_{33}}{)^{3}} \\ ({λ_{1}}+{λ_{2}}+{λ_{3}}{)^{2}}({y_{31}}{y_{13}}+{y_{21}}{y_{12}}+{y_{32}}{y_{23}}-{y_{11}}{y_{22}}-{y_{11}}{y_{33}}-{y_{22}}{y_{33}}) \\ =-({λ_{1}}{λ_{2}}+{λ_{2}}{λ_{3}}+{λ_{3}}{λ_{1}})({y_{11}}+{y_{22}}+{y_{33}}{)^{2}}. \end{array} \end{cases} \) (56)

The above system of equations defines \( {Ω_{x}} \) in \( {P^{8}}(C) \) for \( x~(\begin{matrix}{λ_{1}} & & \\ & {λ_{2}} & \\ & & {λ_{3}} \\ \end{matrix}) \) with \( {λ_{1}},{λ_{2}},{λ_{3}} \) distinct. Following similar notation as given in the previous case, we define a projective variety \( {Y_{{λ_{1}},{λ_{2}},{λ_{3}}}} \prime \) in \( {P^{8}} \) of points with homogeneous coordinates \( [{y_{11}}:{y_{12}}:{y_{13}}:{y_{21}}:{y_{22}}:{y_{23}}:{y_{31}}:{y_{32}}:{y_{33}}] \) satisfying the system of equations in (56).
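Before stating the corollary, here is a numerical check of the system (56) (an illustrative sketch, ours; the factor \( 1.3 \) plays the role of \( {x_{0}} \) , and \( {e_{2}} \) is the sum of the principal \( 2×2 \) minors):

```python
import numpy as np

# A rescaled conjugate of diag(l1, l2, l3), with distinct eigenvalues,
# satisfies both equations of (56).
rng = np.random.default_rng(5)
l1, l2, l3 = 1.0, 2.0, 4.0
g = rng.standard_normal((3, 3))
y = 1.3 * (g @ np.diag([l1, l2, l3]) @ np.linalg.inv(g))

tr = np.trace(y)
e2 = (y[0, 0] * y[1, 1] - y[0, 1] * y[1, 0]
      + y[0, 0] * y[2, 2] - y[0, 2] * y[2, 0]
      + y[1, 1] * y[2, 2] - y[1, 2] * y[2, 1])
s1, s2, s3 = l1 + l2 + l3, l1 * l2 + l2 * l3 + l3 * l1, l1 * l2 * l3
assert np.isclose(s1 ** 3 * np.linalg.det(y), s3 * tr ** 3)
assert np.isclose(s1 ** 2 * e2, s2 * tr ** 2)
```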

Corollary 4.5. Given two triples of distinct complex numbers \( ({λ_{1}},{λ_{2}},{λ_{3}}) \) and \( ({λ_{1}} \prime ,{λ_{2}} \prime ,{λ_{3}} \prime ) \) , suppose that

either \( ({λ_{1}},{λ_{2}},{λ_{3}}) \) is proportional to \( ({λ_{1}} \prime ,{λ_{2}} \prime ,{λ_{3}} \prime ) \) up to a nonzero constant,

or the following conditions hold

\( \begin{cases} \begin{array}{c} {λ_{1}} \prime {λ_{2}}-{λ_{1}}{λ_{2}} \prime +{λ_{2}} \prime {λ_{3}}-{λ_{2}}{λ_{3}} \prime +{λ_{3}} \prime {λ_{1}}-{λ_{3}}{λ_{1}} \prime =0 \\ {λ_{1}}+{λ_{2}}+{λ_{3}}≠0 \\ {λ_{1}} \prime +{λ_{2}} \prime +{λ_{3}} \prime ≠0 \\ 2{λ_{1}}≠{λ_{2}}+{λ_{3}} \\ 2{λ_{1}} \prime ≠{λ_{2}} \prime +{λ_{3}} \prime \end{array} \end{cases} \)

Then the projective varieties \( {Y_{{λ_{1}},{λ_{2}},{λ_{3}}}} \prime \) and \( {Y_{{λ_{1}} \prime ,{λ_{2}} \prime ,{λ_{3}} \prime }} \prime \) are projectively equivalent in \( {P^{8}} \) .

Proof. Let \( x~(\begin{matrix}{λ_{1}} & & \\ & {λ_{2}} & \\ & & {λ_{3}} \\ \end{matrix}) \) and \( x \prime ~(\begin{matrix}{λ_{1}} \prime & & \\ & {λ_{2}} \prime & \\ & & {λ_{3}} \prime \\ \end{matrix}) \) , with \( {λ_{1}},{λ_{2}},{λ_{3}} \) distinct (resp. \( {λ_{1}} \prime ,{λ_{2}} \prime ,{λ_{3}} \prime \) ). By Theorem 1.3, we know that if there exist \( α,β \) that satisfy \( α≠0 \) and \( α≠-3β \) such that

\( \begin{cases} \begin{array}{c} {λ_{1}} \prime =α{λ_{1}}+β({λ_{1}}+{λ_{2}}+{λ_{3}}) \\ {λ_{2}} \prime =α{λ_{2}}+β({λ_{1}}+{λ_{2}}+{λ_{3}}) \\ {λ_{3}} \prime =α{λ_{3}}+β({λ_{1}}+{λ_{2}}+{λ_{3}}) \end{array} \end{cases} \) , (57)

then \( {Ω_{x}} \) is projectively equivalent to \( {Ω_{x \prime }} \) . Given the five listed conditions, we find that there exist \( α=\frac{2{λ_{1}} \prime -{λ_{2}} \prime -{λ_{3}} \prime }{2{λ_{1}}-{λ_{2}}-{λ_{3}}} \) and \( β=\frac{{λ_{1}} \prime -α{λ_{1}}}{{λ_{1}}+{λ_{2}}+{λ_{3}}} \) satisfying condition (57). Corollary 4.5 follows as the defining equations for \( {Ω_{x}} \) and \( {Ω_{x \prime }} \) were calculated above.

5. Conclusion

Although the classification of orbits for any algebraic group action on projective spaces seems to be out of reach for the moment, we manage to give efficient criteria to determine when two orbits are projectively equivalent for the four families of explicit examples we presented in the Introduction. We also find interesting geometric applications of our research.


References

[1]. Humphreys, J. E. (1995). Conjugacy Classes in Semisimple Algebraic Groups. American Mathematical Society.

[2]. He, X., Thomsen, J. F. (2006). Closures of Steinberg Fibers in Twisted Wonderful Compactifications. Transformations Groups, 11(3), 427-438.

[3]. Etingof, P., Golberg, O., Hensel, S., Liu, T., Schwendner, A., Vaintrob, D., Yudovina, E. (2011). Introduction to Representation Theory. American Mathematical Society.

[4]. Armstrong, M. A. (1979). Basic Topology. Undergraduate Texts in Mathematics.

[5]. Knapp, A. W. (1996). Lie Groups Beyond an Introduction. Progress in Mathematics. Birkhäuser.

[6]. Reid, M. (1988). Undergraduate Algebraic Geometry. Cambridge University Press.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Mathematical Physics and Computational Simulation

ISBN:978-1-83558-133-9(Print) / 978-1-83558-134-6(Online)
Editor:Roman Bauer
Conference website: https://www.confmpcs.org/
Conference date: 12 August 2023
Series: Theoretical and Natural Science
Volume number: Vol.11
ISSN:2753-8818(Print) / 2753-8826(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
