
I am interested in the exponential sum

$$\sum_{n=1}^X \frac{e(c_1n^2+c_2 n)}{1-e(c_1n)}$$

where $c_2$ is irrational and $e(x)=e^{2\pi i x}$. Without the denominator this would be a Weyl sum, for which many bounds are known.

Is there anything that can be done to relate this sum to a Weyl sum or get some other bounds?

Edit: one bound that would be nice to have is

$$\left|\sum_{n=1}^X \frac{e(c_1n^2+c_2 n)}{1-e(c_1n)}\right|\ll \left|\sum_{n=1}^X e(c_1n^2+c_2 n) \right|\max_{1\leq n\leq X} \frac{1}{|1-e(c_1n)|}.$$

Is a bound like this possible?
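For concreteness, here is a quick numerical sketch comparing the two sides of the requested bound; the choices $c_1=\sqrt2$, $c_2=\sqrt3$, $X=1000$ are purely illustrative.

```python
import cmath
import math

def e(x):
    # e(x) = exp(2*pi*i*x), as in the question
    return cmath.exp(2j * math.pi * x)

def lhs(c1, c2, X):
    # absolute value of the sum in question
    return abs(sum(e(c1 * n**2 + c2 * n) / (1 - e(c1 * n))
                   for n in range(1, X + 1)))

def rhs(c1, c2, X):
    # proposed majorant: |Weyl sum| * max_n 1/|1 - e(c1 n)|
    weyl = abs(sum(e(c1 * n**2 + c2 * n) for n in range(1, X + 1)))
    peak = max(1 / abs(1 - e(c1 * n)) for n in range(1, X + 1))
    return weyl * peak

c1, c2, X = math.sqrt(2), math.sqrt(3), 1000  # illustrative choices
print(lhs(c1, c2, X), rhs(c1, c2, X))
```

Of course, how the two sides compare will depend heavily on the rational approximations to $c_1$.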

  • You write that $c_2$ is irrational, but say nothing about $c_1$. If $c_1$ is a rational number $a/b$ with $0 < b \leq X$ then at least one summand is infinite. In general the sum may be dominated by terms for which $c_1 n$ is close to an integer, which means that the size of the sum may hinge on small rational approximations to $c_1$. Commented May 30, 2023 at 19:01
  • @NoamD.Elkies Thank you, you are correct. Here's a question. If we know that $c_1$ is such that $|1-e(c_1 n)|>\epsilon$ for all $n\in \{1,...,X\}$, then can we get a bound in terms of $\epsilon$? Commented May 30, 2023 at 19:10
  • You can approximate $1/(1- e(c_1 n))$ by a polynomial in $e(c_1 n)$ with an error term which is usually small but large for $c_1$ close to an integer. Each monomial in the polynomial gives a Weyl sum, and you can then bound the error term separately (depending on rational approximations to $c_1$). Commented May 30, 2023 at 19:10
  • @WillSawin Are you suggesting to write something like $\frac{1}{1-e(c_1 n)}=e(c_1 n)+O(e(2 c_1 n))$ for example? How would you handle the error term? I am fine with the first finitely many terms, but how do we control the error in terms of rational approximations to $c_1$? Commented May 30, 2023 at 19:19
  • A much better estimate is $\frac{1}{ 1- e(c_1n )} = \sum_{k=0}^d e( c_1 n k ) \frac{d-k}{d} + O\left( \frac{ 1 - e ( d c_1 n )}{ (1- e(c_1n))^2}\right)$ but, thinking about it more, this seems only good enough to improve the "trivial" bound (which is not so trivial in this case) by a constant factor. Commented May 30, 2023 at 21:03
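For what it's worth, the approximation in the last comment can be made exact: summing the Fejér-type weights gives the identity $\frac{1}{1-z} = \sum_{k=0}^{d} \frac{d-k}{d} z^k + \frac{z(1-z^d)}{d(1-z)^2}$, whose remainder is the stated error term up to an extra factor of $1/d$. A numeric spot-check of this identity (the function name `fejer_side` is mine):

```python
import cmath
import math

def fejer_side(z, d):
    # sum_{k=0}^{d} ((d-k)/d) z^k  plus the explicit remainder z(1-z^d)/(d(1-z)^2)
    main = sum((d - k) / d * z**k for k in range(d + 1))
    rem = z * (1 - z**d) / (d * (1 - z)**2)
    return main + rem

# spot-check 1/(1-z) = main + remainder at a few points on the unit circle
for t in (0.1, 0.37, math.sqrt(2) % 1):
    z = cmath.exp(2j * math.pi * t)
    for d in (3, 10, 50):
        assert abs(1 / (1 - z) - fejer_side(z, d)) < 1e-9
print("identity verified")
```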

1 Answer


Under your assumption that no $n$ from $1$ to $X$ has $|1 -e(c_1n)|<\epsilon$, we have an upper bound for your sum of the form $2 \epsilon^{-1} \log X + O(X)$.

This is the "trivial bound" in that it just comes from estimating $$\sum_{n=1}^X \frac{1}{ |1 - e(c_1 n )|}.$$

By the pigeonhole principle, the number of $n$ with $e(c_1 n)$ at an angle less than $\theta$ from $1$ is at most $2\theta/\epsilon$: otherwise there would be two such values $n \neq n'$ whose angles lie within an interval of length less than $\epsilon$, and then $e(c_1(n-n'))$ would be at an angle less than $\epsilon$ from $1$, so that $|1 - e(c_1 |n-n'|)| < \epsilon$ with $|n-n'| \leq X$, contradicting the assumption.
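A provable cousin of this count, with the angle measured by the distance $\|c_1 n\|$ from $c_1 n$ to the nearest integer and the weaker constant $4\pi\theta/\epsilon + 1$ that follows from $|1-e(t)| \leq 2\pi\|t\|$, can be checked numerically ($c_1 = \sqrt2$ and $X = 1000$ are illustrative choices):

```python
import cmath
import math

def dist_to_int(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

c1, X = math.sqrt(2), 1000  # illustrative choices
eps = min(abs(1 - cmath.exp(2j * math.pi * c1 * n)) for n in range(1, X + 1))

# spacing argument: #{n <= X : ||c1 n|| < theta} <= 4*pi*theta/eps + 1,
# since distinct n are pairwise separated by at least eps/(2*pi)
for theta in (0.001, 0.01, 0.05):
    count = sum(1 for n in range(1, X + 1) if dist_to_int(c1 * n) < theta)
    assert count <= 4 * math.pi * theta / eps + 1
print("pigeonhole count verified, eps =", eps)
```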

If $\theta_1,\dots, \theta_X$ is any set of angles such that, for every $\theta$, the number of them at distance less than $\theta$ from $0$ is at most $2\theta/\epsilon$, then

$$ \sum_{n=1}^{X} \frac{1}{ |1 -e(\theta_n)|} \leq \sum_{n=1}^{X} \frac{1}{ |1 -e ( n \epsilon/2)|}$$

because $|1- e(c_1 n )|$ is an increasing function of the angular difference, and this sum is well-approximated by the integral

$$ 2 \epsilon^{-1} \int_{\epsilon/2}^{ X \epsilon/2} \frac{ d \theta}{ | 1- e(\theta)|}$$

and since $\frac{1}{ |1- e(\theta)|} \leq \frac{1}{ \theta} + O(1)$ for $0 < \theta \leq 1/2$, the integral is bounded by $\log X + O(X \epsilon)$, giving the claim.
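As a numerical sanity check on this bound (again with the illustrative choice $c_1 = \sqrt2$, and a generous constant $10X$ standing in for the $O(X)$ term):

```python
import cmath
import math

def denom_sum(c1, X):
    # sum_{n=1}^X 1/|1 - e(c1 n)|, the quantity being bounded
    return sum(1 / abs(1 - cmath.exp(2j * math.pi * c1 * n))
               for n in range(1, X + 1))

c1, X = math.sqrt(2), 1000  # illustrative choice
eps = min(abs(1 - cmath.exp(2j * math.pi * c1 * n)) for n in range(1, X + 1))
S = denom_sum(c1, X)

# each term is at least 1/2 (since |1 - e(t)| <= 2), and the claimed
# bound 2 eps^{-1} log X + O(X) should dominate; 10*X stands in for O(X)
assert X / 2 <= S <= 2 * math.log(X) / eps + 10 * X
print(f"S = {S:.1f}, eps = {eps:.3g}, bound = {2*math.log(X)/eps + 10*X:.1f}")
```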

Improving this bound by a significant factor seems to require estimates at many different scales, since the integral producing the $\log X$ term has equal contributions from many different scales. I suspect this can be done in such a way that the estimate at each scale is essentially a Weyl sum over a Bohr set, which probably means that cancellation can indeed be found under hypotheses on $c_1,c_2$, but one couldn't hope to save more than the $\log X$ factor as one is unlikely to get a bound for the overall sum greater than the bound for a single term.

  • So is the point to save on the even more trivial bound $X/\epsilon$? You put the $\epsilon$ in the numerator but get an additive log term? Commented May 31, 2023 at 0:34
  • @user479223 No, we multiply this by $\epsilon$. The bound we get is stated at the beginning, and represents a saving of $X/\log X$ over the even more trivial bound. Commented May 31, 2023 at 0:50
  • Thank you. This is helpful. Commented May 31, 2023 at 0:56
  • @user479223 So this indeed gets a bound better than that requested in your question - sort of. One never gets a better bound for the original sum than $\sqrt{X}$, so your requested bound is $\sqrt{X} \epsilon^{-1}$, which this bound easily beats. However, the original sum is probably quite small some of the time (and sometimes even zero) if $c_1$ and $c_2$ are chosen exactly right, so the bound you request isn't literally true. Commented May 31, 2023 at 1:09
  • Thank you so much for your help. Commented May 31, 2023 at 1:16
