
Let $\{a_n\}_{n\in \mathbb{Z}}$, $a_n\in \mathbb{R}$, be such that $a_n = O(1/n^2)$ and $a_{-n}=a_n$. The Toeplitz matrix $A_N$ is the $N$-by-$N$ matrix defined by $$A_{N,i,j} = a_{|i-j|}$$ for $1\leq i,j\leq N$. Given our assumptions, it is real symmetric; hence its spectrum is real.

Assume as well that $a_0$ is positive, that every $a_n$ with $n\ne 0$ is negative, and that $\sum_{n\in\mathbb{Z}} a_n = 0$. Then, in particular, $A_N$ is positive definite, and all of its eigenvalues are positive: $$0<\lambda_{N,1}\leq \lambda_{N,2} \leq \dotsc \leq \lambda_{N,N}.$$

For the sequence $a_n$ I am dealing with, I can see numerically that $N \lambda_{N,1}$ converges from above to a constant $c$ as $N\to \infty$. The question is how to prove this, and how to determine the exact value of this constant, or at least show that $N \lambda_{N,1} \leq c + 10^{-5}$ (say) for $N$ sufficiently large.
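To make the numerics concrete, here is a sketch of the kind of experiment I mean, run on a toy sequence (not my actual one) satisfying all the assumptions above: $a_0 = 1$ and $a_n = -3/(\pi^2 n^2)$ for $n\neq 0$, so that $\sum_{n\in\mathbb{Z}} a_n = 0$ since $\sum_{n\geq 1} n^{-2} = \pi^2/6$.

```python
# Toy sequence (not the one in the question): a_0 = 1, a_n = -3/(pi^2 n^2)
# for n != 0.  It satisfies a_n = O(1/n^2), a_{-n} = a_n, and sum_n a_n = 0.
import numpy as np

def a(n):
    return 1.0 if n == 0 else -3.0 / (np.pi**2 * n**2)

def smallest_eigenvalue(N):
    avals = np.array([a(n) for n in range(N)])
    # A[i, j] = a_{|i-j|}: a symmetric Toeplitz matrix
    A = avals[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]
    return np.linalg.eigvalsh(A)[0]  # eigvalsh returns eigenvalues in ascending order

for N in (50, 100, 200, 400):
    lam = smallest_eigenvalue(N)
    print(N, lam, N * lam)  # N * lambda_{N,1} levels off as N grows
```

For this toy sequence the symbol can even be computed in closed form: $f(x) = 1 - (6/\pi^2)\sum_{n\geq 1}\cos(2\pi n x)/n^2 = 6x(1-x)$ for $x\in[0,1]$, using $\sum_{n\geq 1}\cos(2\pi n x)/n^2 = \pi^2(x^2-x+1/6)$.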

I imagine this is very much a known kind of problem. What sort of strategies are known?

  • All I can think of is taking high powers of $a_0 I - A_N$, but it is unclear that I will get anything meaningful - there is nothing miraculous happening as in Schur's proof of Hilbert's inequality, say.
  • Szegő's strong limit theorem seems to get me nowhere - it's a statement on the asymptotic distribution of most eigenvalues, unscaled (i.e., not multiplied by $N$), and does not have sufficient resolution near $0$, so to speak.

UPDATE: Let us add some assumptions. Assume that

  • the function $f(x) = \sum_{n\in \mathbb{Z}} a_n e^{2\pi i n x}$ is continuous,
  • $f(x)$ takes the value $0$ only at $x=0$,
  • $f'(x)$ has a singularity at $x=0$ (notice this implies that $\sum_{n\in \mathbb{Z}} a_n n$ does not converge absolutely), and only there; moreover, $\lim_{x\to 0^+} f'(x) = L$ and $\lim_{x\to 0^-} f'(x) = -L$ for some $L>0$.

What can be said then? Szegő's limit theorem implies that, as $M\to \infty$, the $M$th smallest eigenvalue of $A_{M N}$ tends to $L/(2N)$. This might lead to the guess that the lowest eigenvalue $\lambda_{N,1}$ of $A_N$ should be about $L/(2N)$, but that is not what I am getting for the sequence I am working with; numerically, I get $c/N$, with $c>0$ a great deal smaller than $L/2$.


2 Answers


The $O(1/N)$ behavior is easy to explain (and no, there aren't any hidden log factors). All we need for that is to produce a vector whose norm shrinks by a factor of about $N$ under the application of the matrix. To this end, just take any smooth (on the entire line) bump $\psi$ supported on $(0,1)$ and put $x_n=\psi(n/N)$. Then we can apply the full infinite matrix $A$ to $x$ indexed by $n\in\mathbb Z$ and look at the result on $[1,N]$ only, or we can think of $x$ as a vector with $N$ coordinates ($n\in[1,N]$) and apply $A_N$ to it; the results will be the same.

Let's look at it from the first point of view. Then, due to the symmetry and $0$ sum conditions, we can say that $$ (Ax)_k=\sum_{n>0} a_n(x_{k+n}+x_{k-n}-2x_k)\,. $$ However the difference in parentheses is $O(\min(n^2/N^2,1))$ (both $\psi$ and $\psi''$ are bounded), so combining that with $a_n=O(n^{-2})$, we immediately get the $O(1/N)$ bound for each entry of $Ax$, while the typical entry of $x$ itself on $[1,N]$ is of size $1$, so we get our desired $N$-fold reduction in the $\ell^2$ norm.
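This is easy to check numerically. The sketch below uses the standard bump $\psi(t)=e^{-1/(t(1-t))}$ on $(0,1)$ and a made-up sequence satisfying the hypotheses ($a_0=1$, $a_n=-3/(\pi^2 n^2)$ for $n\ne 0$; any sequence with the stated properties would do):

```python
# Check the bump-vector argument: x_n = psi(n/N) for a smooth bump psi
# supported on (0,1); then ||A_N x|| / ||x|| should be O(1/N).
# Made-up coefficients satisfying the hypotheses: a_0 = 1, a_n = -3/(pi^2 n^2).
import numpy as np

def a(n):
    return 1.0 if n == 0 else -3.0 / (np.pi**2 * n**2)

def psi(t):
    # standard smooth bump supported on (0, 1)
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = (t > 0) & (t < 1)
    out[inside] = np.exp(-1.0 / (t[inside] * (1.0 - t[inside])))
    return out

def norm_ratio(N):
    avals = np.array([a(n) for n in range(N)])
    A = avals[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]
    x = psi(np.arange(1, N + 1) / N)  # x vanishes outside (0, N), so A_N x = (Ax) on [1, N]
    return np.linalg.norm(A @ x) / np.linalg.norm(x)

for N in (100, 200, 400):
    print(N, N * norm_ratio(N))  # roughly constant, i.e. the ratio decays like 1/N
```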

Getting the existence of the limit would require some regularity of the sequence $a$, and finding the exact value of the limit will, probably, require full knowledge of $a$. Since no information to that effect has been provided in the OP, I'll stop here.

  • But how would you go about finding the limit? All I have got is a decreasing succession of upper bounds. Commented Dec 25, 2024 at 3:40
  • @HAHelfgott As I said, I need to know $a$ or, at the very least, some assumptions about its regularity for that. The bare $a_n=O(n^{-2})$ condition is not even sufficient for the existence of the limit, forget about finding the exact value. Provide more information, and then I'll think more about it ;-) Commented Dec 25, 2024 at 3:44
  • @HAHelfgott I have noticed that you had an "UPDATE" in your original post, but I'm not sure if the information you have provided there is sufficient for your task, so the more details, the better. Commented Dec 25, 2024 at 3:55
  • OK, here is what my $a_n$ is. For $n=0$, it is $4 \log 2$. For $n=\pm 1$, it is $(9/2) \log 3 - 8 \log 2$. For $n = \pm 2$, it is $-2 \log(1-1/4)+8 \log(1-1/9)$. For $|n|>2$, it is $(2-n^2) \log(1-1/n^2)+((2+n)^2/2) \log(1-1/(1+n)^2)+((2-n)^2/2) \log(1-1/(1-n)^2)$. Commented Dec 25, 2024 at 11:21
  • I now have numerical values for the lowest eigenvalue for very large $N$ (courtesy of a friend). What I would like is tight lower bounds in the limit, and ideally an expression for the limit of the eigenfunctions. Commented Dec 25, 2024 at 11:26
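A quick numerical sketch with the explicit coefficients from the comments above (just transcribing the formulas; the printed values are an observation, not a proof):

```python
# Transcription of the coefficients given in the comments above.
import numpy as np

def a(n):
    n = abs(n)
    if n == 0:
        return 4 * np.log(2)
    if n == 1:
        return 4.5 * np.log(3) - 8 * np.log(2)
    if n == 2:
        return -2 * np.log(1 - 1 / 4) + 8 * np.log(1 - 1 / 9)
    return ((2 - n**2) * np.log(1 - 1 / n**2)
            + (2 + n) ** 2 / 2 * np.log(1 - 1 / (1 + n) ** 2)
            + (2 - n) ** 2 / 2 * np.log(1 - 1 / (1 - n) ** 2))

def smallest_eigenvalue(N):
    avals = np.array([a(n) for n in range(N)])
    A = avals[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]
    return np.linalg.eigvalsh(A)[0]

for N in (100, 200, 400):
    # per the question's numerics, N * lambda_{N,1} approaches a constant from above
    print(N, N * smallest_eigenvalue(N))
```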

I think you need a slightly better bound on $a_n$ to get such behavior of the smallest eigenvalue. One can see that if $\sum_{n>0} n a_n$ converges, with limit $C$, then $\langle u, A_N u\rangle$ is approximately $-2C$, where $u$ is the all-ones vector. Since $\langle u, u\rangle = N$, this suggests that the smallest eigenvalue is about $-2C/N$.

However, if $a_n$ is of order exactly $1/n^2$, the sum diverges and this argument does not work. It is possible that your numerical experiments do not go to large enough $N$ to detect the extra $\log N$ factors which come from the divergent sum. It is also possible that the smallest eigenvector is far enough from the all-ones vector that this approximation does not really give the smallest eigenvalue.
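The heuristic in the first paragraph is easy to illustrate numerically with a made-up, faster-decaying sequence for which $\sum_{n>0} n a_n$ does converge: take $a_0 = \pi^4/45 = 2\zeta(4)$ and $a_n = -1/n^4$ for $n\ne 0$, so that $C = \sum_{n>0} n a_n = -\zeta(3)$:

```python
# Rayleigh-quotient heuristic: with u the all-ones vector,
# <u, A_N u> / <u, u> is approximately -2C/N, where C = sum_{n>0} n a_n.
# Made-up example with a convergent sum: a_0 = pi^4/45, a_n = -1/n^4,
# so that C = -zeta(3).
import numpy as np

def a(n):
    return np.pi**4 / 45 if n == 0 else -1.0 / n**4

N = 400
avals = np.array([a(n) for n in range(N)])
A = avals[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]

u = np.ones(N)
rayleigh = (u @ A @ u) / (u @ u)      # an upper bound for lambda_{N,1}
lam_min = np.linalg.eigvalsh(A)[0]

zeta3 = sum(1.0 / k**3 for k in range(1, 100000))
print(N * rayleigh, 2 * zeta3)  # both close to 2*zeta(3) = 2.404...
print(lam_min <= rayleigh)      # True: the Rayleigh quotient bounds lambda_min from above
```

Here $\langle u, A_N u\rangle/\langle u, u\rangle \geq \lambda_{N,1}$ by the variational characterization of the smallest eigenvalue, which is why this gives an upper bound.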

  • I forgot to mention that this argument gives an upper bound for the smallest eigenvalue - one needs to work out the exact bound; it seems to be of the form $\lambda_{N,1} \leq -\frac{2}{N} \sum_{n=1}^N n a_n$. Commented Dec 21, 2024 at 15:00
  • All right, assume the sum of $n a_n$ converges. Commented Dec 21, 2024 at 19:35
  • This shows an upper bound in the right scale (constant/N), but no, the optimal eigenvector in the case I’m looking at doesn’t look anything like the constant function; it looks more like (but not identical to) the entropy function (it goes down to $0$ more rapidly than linearly towards the ends). Commented Dec 21, 2024 at 20:47
  • It is unlikely that such a soft argument will give you the eigenvector. I suppose one can try to understand the behavior of the semi-infinite matrix (indexed by $1,2,3,\dotsc$, not the bi-infinite one) to get better information about the shape of the eigenvector near the end. What exactly do you need? If you only care about the minimal eigenvalue, I think that such a soft argument can produce it. Numerically, what is the ratio of $\langle u, Au\rangle/N$ to $\lambda_{N,1}$? My intuition suggests that this should tend to $1$ as $N$ grows, which essentially says that $u$ is close enough to the eigenvector. Commented Dec 21, 2024 at 21:06
  • No, I don’t think it tends to $1$ at all, at least not in my case. Commented Dec 22, 2024 at 2:23
