\( \def\wtilde{\widetilde} \def\cat{\textrm} \def\proved{} \def\lbrackk{[\![} \def\rbrackk{]\!]} \def\qandq{\quad \text{and} \quad}\newcommand{\cvdots}[1][]{\quad\ \vdots} \newcommand{\mfrac}{\frac} \newcommand{\dsty}{\displaystyle} \newcommand{\ipfrac}[3][]{\tfrac{1}{#3}(#2)} \newcommand{\unfrac}[2]{{#1}/{#2}} \newcommand{\punfrac}[2]{\left({#1}/{#2}\right)} \newcommand{\upnfrac}[2]{{(#1)}/{#2}} \newcommand{\unpfrac}[2]{{#1}/{(#2)}} \newcommand{\upnpfrac}[2]{{(#1)}/{(#2)}} \newcommand{\parbox}[2]{\text{#2}} \newcommand{\ssty}[1]{{\scriptstyle{#1}}} \def\liso{\ \tilde{\longrightarrow}\ } \def\tsum{\sum} \def\mbinom{\binom} \) \( \def\C{\mathbb C} \def\R{\mathbb R} \def\Z{\mathbb Z} \def\Q{\mathbb Q} \def\F{\mathbb F} \def\smash{} \def\phantomplus{} \def\nobreak{} \def\omit{} \def\hidewidth{} \renewcommand{\mathnormal}{} \renewcommand{\qedhere}{} \def\sp{^} \def\sb{_} \def\vrule{|} \def\hrule{} \def\dag{\dagger} \def\llbracket{[\![} \def\rrbracket{]\!]} \def\llangle{\langle\!\langle} \def\rrangle{\rangle\!\rangle} \def\sssize{\scriptsize} \def\mathpalette{} \def\mathclap{} \def\coloneqq{\,:=\,} \def\eqqcolon{\,=:\,} \def\colonequals{\,:=\,} \def\equalscolon{\,=:\,} \def\textup{\mbox} \def\makebox{\mbox} \def\vbox{\mbox} \def\hbox{\mbox} \def\mathbbm{\mathbb} \def\bm{\boldsymbol} \def\/{} \def\rq{'} \def\lq{`} \def\noalign{} \def\iddots{\vdots} \def\varint{\int} \def\l{l} \def\lefteqn{} \def\slash{/} \def\boxslash{\boxminus} \def\ensuremath{} \def\hfil{} \def\hfill{} \def\dasharrow{\dashrightarrow} \def\eqno{\hskip 50pt} \def\curly{\mathcal} \def\EuScript{\mathcal} \def\widebar{\overline} \newcommand{\Eins}{\mathbb{1}} \newcommand{\textcolor}[2]{#2} \newcommand{\textsc}[1]{#1} \newcommand{\textmd}[1]{#1} \newcommand{\emph}{\text} \newcommand{\uppercase}[1]{#1} \newcommand{\Sha}{{III}} \renewcommand{\setlength}[2]{} \newcommand{\raisebox}[2]{#2} \newcommand{\scalebox}[2]{\text{#2}} \newcommand{\stepcounter}[1]{} \newcommand{\vspace}[1]{} 
\newcommand{\displaybreak}[1]{} \newcommand{\textsl}[1]{#1} \newcommand{\prescript}[3]{{}^{#1}_{#2}#3} \def\llparenthesis{(\!\!|} \def\rrparenthesis{|\!\!)} \def\ae{a\!e} \def\nolinebreak{} \def\allowbreak{} \def\relax{} \def\newline{} \def\iffalse{} \def\fi{} \def\func{} \def\limfunc{} \def\mathbold{\mathbf} \def\mathscr{\mathit} \def\bold{\mathbf} \def\dvtx{\,:\,} \def\widecheck{\check} \def\spcheck{^\vee} \def\sphat{^{{}^\wedge}} \def\degree{{}^{\circ}} \def\tr{tr} \def\defeq{\ :=\ } \newcommand\rule[3][]{} \newcommand{\up}[1]{\textsuperscript{#1}} \newcommand{\textsuperscript}[1]{^{#1}} \newcommand{\fracwithdelims}[4]{\left#1\frac{#3}{#4}\right#2} \newcommand{\nicefrac}[2]{\left. #1\right/#2} \newcommand{\sfrac}[2]{\left. #1\right/#2} \newcommand{\discretionary}[3]{#3} \newcommand{\xlongrightarrow}[1]{\xrightarrow{\quad #1\quad}} \def\twoheadlongrightarrow{ \quad \longrightarrow \!\!\!\!\to \quad } \def\xmapsto{\xrightarrow} \def\hooklongrightarrow{\ \quad \hookrightarrow \quad \ } \def\longlonglongrightarrow{\ \quad \quad \quad \longrightarrow \quad \quad \quad \ } \def\rto{ \longrightarrow } \def\tto{ \longleftarrow } \def\rcofib{ \hookrightarrow } \def\L{\unicode{x141}} \def\niplus{\ \unicode{x2A2E}\ } \def\shuffle{\ \unicode{x29E2}\ } \def\fint{{\LARGE \unicode{x2A0F}}} \def\XXint#1#2#3{\vcenter{\hbox{$#2#3$}}\kern-0.4cm} \newcommand{\ve}{\varepsilon} \newcommand{\C}{\mathbb C} \newcommand{\N}{\mathbb N} \newcommand{\R}{\mathbb R} \newcommand{\Z}{\mathbb Z} \newcommand{\Q}{\mathbb Q} \renewcommand{\H}{\mathcal H} \newcommand{\Pabn}{P_n^{(a,b)}} \def\logg{\log_{(2)}} \def\loggg{\log_{(3)}} \newcommand{\mm}[4]{\begin{pmatrix} #1 & #2 \cr #3 & #4 \end{pmatrix}} \newcommand{\ontop}[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \)

Section 1. Introduction

The spacing between zeros of the Riemann zeta-function and the location of zeros of the derivative of the zeta-function are closely related problems which have connections to other topics in number theory.

For example, if the zeta-function had a large number of pairs of zeros that were separated by less than half their average spacing, one would obtain an effective lower bound on the class numbers of imaginary quadratic fields [10, 1]. Also, Speiser proved that the Riemann hypothesis is equivalent to the assertion that the nontrivial zeros of the derivative of the zeta-function, \(\zeta'\), are to the right of the critical line [14]. There is a quantitative version of Speiser's theorem [8] which is the basis for Levinson's method [7]. In Levinson's method there is a loss caused by the zeros of \(\zeta'\) which are close to the critical line, so it would be helpful to understand the horizontal distribution of zeros of \(\zeta'\). The intuition is that the spacing of zeros of the zeta-function should determine the horizontal distribution of zeros of the derivative. Specifically, a pair of closely spaced zeros of \(\zeta(s)\) gives rise to a zero of \(\zeta'(s)\) close to the critical line. Our main result is a partial converse, showing that the existence of sufficiently many zeros of \(\zeta'(s)\) close to the \(\tfrac12\)-line implies the existence of many closely spaced zeros of \(\zeta(s)\). See Theorem 1.1.2.

We assume the Riemann hypothesis and write the zeros of \(\zeta\) as \(\rho_j=\tfrac12+i\gamma_j\) and the zeros of \(\zeta'\) as \(\rho_j'=\beta_j'+i\gamma_j'\), where in both cases we list the zeros by increasing imaginary part. We consider the normalized gaps between zeros of \(\zeta\) and the normalized distance of \(\rho_j'\) to the right of the critical line, given by

\begin{align} \lambda_j=\mathstrut &(\gamma_{j+1}-\gamma_j)\log\gamma_j\\ \lambda_j'=\mathstrut & (\beta_j'-\tfrac12) \log\gamma_j'. \tag{1.1} \end{align}

We are interested in how small the normalized gaps can be, and how small the normalized distance to the critical line can be, so we set

\begin{align}\lambda=\mathstrut &\liminf_{j\to\infty} \lambda_j \tag{1.2}\\ \lambda'=\mathstrut &\liminf_{j\to\infty}\lambda_j' . \tag{1.3} \end{align}

We also consider the cumulative densities of \(\lambda_j\) and \(\lambda_j'\), given by

\begin{align}m(\nu) =\mathstrut & \liminf_{J\to\infty} \frac{1}{J} \, \#\{j\le J : \lambda_j \le \nu\}\\ m'(\nu) =\mathstrut & \liminf_{J\to\infty} \frac{1}{J}\, \#\{j\le J : \lambda_j' \le \nu\}. \tag{1.4} \end{align}
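For concreteness, the normalization in (1.1) can be evaluated on the first few zeros. The following Python sketch (an illustration only, not part of the argument) computes \(\lambda_j\) from the standard published values of the first six ordinates \(\gamma_j\):

```python
import math

# First six ordinates gamma_j of the nontrivial zeros of zeta
# (well-known published values, truncated to six decimal places).
gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062, 37.586178]

# Normalized gaps lambda_j = (gamma_{j+1} - gamma_j) * log(gamma_j), as in (1.1).
lams = [(g2 - g1) * math.log(g1) for g1, g2 in zip(gammas, gammas[1:])]

for j, lam in enumerate(lams, start=1):
    print(f"lambda_{j} = {lam:.3f}")
```

The mean of \(\lambda_j\) tends to \(2\pi\), since the average gap at height \(\gamma\) is \(2\pi/\log(\gamma/2\pi)\); for such small ordinates the normalized gaps are still noticeably larger.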

Soundararajan's [12] Conjecture B states that \(\lambda=0\) if and only if \(\lambda'=0\). This amounts to conjecturing that zeros of \(\zeta'(s)\) close to the \(\tfrac12\)-line can only arise from a pair of closely spaced zeros of \(\zeta(s)\). Zhang [17] showed that (on RH) \(\lambda=0\) implies \(\lambda'=0\). Thus, Soundararajan's conjecture is almost certainly true because \(\lambda=0\) follows from standard conjectures on the zeros of the zeta-function, based on random matrix theory.

However, the second author [6] showed that \(\lambda=0\) and \(\lambda'=0\) are not logically equivalent. Specifically, Ki [6] proved the following theorem.

Note that the theorem implies Zhang's result (that \(\lambda=0\) implies \(\lambda'=0\)), because if \(\lambda=0\) then for some \(j\) the sum in (1.5) will be large because an individual term in the sum is large. But that is not the only way for \(M(\gamma_j)\) to be large. It is possible that there could be an imbalance in the distribution of zeros, such as a very large gap between neighboring zeros, which makes the sum large because many small terms have the same sign.

For example, suppose there were consecutive zeros of the zeta function with a gap of size 1, followed by \(c \log T\) zeros equally spaced (this cannot happen, but we are illustrating a point). Then \(M(\gamma)\) would be \(\gg \log T \log\log T\). That possibility is the reason attempts to prove \(\lambda'=0\) implies \(\lambda=0\) have been unsuccessful. For example, Garaev and Yıldırım [4] required the stronger assumption \(\lambda_J'(\log\log \gamma_J')^2=o(1)\) in order to conclude \(\lambda_J=o(1)\).

The discussion in the previous paragraph shows that, without detailed knowledge of the distribution of zero spacings, one requires \( M(\gamma)\ge C \log T \log\log T\) for every \(C>0\) in order to conclude \(\lambda =0\). It is possible that this could be improved by proving results about the rigidity of the spacing between zeros of the zeta function. Random matrix theory could give a clue about the limits of this approach. This would involve finding the expected maximum of the random matrix analogue of the sum

\begin{equation} \sum_{\frac{1}{\log \gamma_j}< |\gamma_j-\gamma_n|< 1} \frac{1}{\gamma_j-\gamma_n}. \tag{1.6} \end{equation}

Unfortunately, the necessary random matrix calculation may be quite difficult because a lower bound on \(|\gamma_j-\gamma_n|\) requires the exclusion of a varying number of intervening zeros, so the combinatorics of the random matrix calculation may be intricate.
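The size of the sum in the hypothetical scenario above can be checked in a toy computation. The sketch below (our illustration; the zeta zeros cannot actually be spaced this way) places zeros at distance \(n/\log T\) from \(\gamma_j\) for \(n=1,\dots,\lfloor\log T\rfloor\), so every term of the sum has the same sign and the total is \(\log T \cdot H_{\lfloor\log T\rfloor}\sim \log T\log\log T\):

```python
import math

# m plays the role of log T; the zeros are equally spaced at distance 1/m,
# so gamma_j - gamma_n = n/m and the sum is m * H_m ~ m * log m.
def toy_sum(m: int) -> float:
    return sum(1.0 / (n / m) for n in range(1, m + 1))

# Compare with log T * log log T, i.e. m * log m.
ratios = [toy_sum(m) / (m * math.log(m)) for m in (100, 1000, 10000)]
print([round(r, 3) for r in ratios])  # ratios decrease slowly toward 1
```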

In this paper we consider not \(\lambda\) and \(\lambda'\), but the density functions \(m(\nu)\) and \(m'(\nu)\). In the next section we illustrate this with the example described above, and then we state our main result.

Subsection 1.1. Examples with equally spaced zeros

We illustrate Theorem 1.1 with examples which can help build intuition for why \(\lambda'=0\) does not imply \(\lambda=0\).

Our example involves degree \(N\) polynomials with all zeros on the unit circle; in other words, characteristic polynomials of matrices in the unitary group \(U(N)\). In these examples, \(\lambda>0\) but \(\lambda'=0\), where \(\lambda\) and \(\lambda'\) refer respectively to the large \(N\) limits of the normalized gap between zeros and the rescaled distance between zeros of the derivative and the unit circle. These are the random matrix analogues of \(\lambda\) and \(\lambda'\) for the zeta function.

Figure 1.1.1 illustrates the case of 16 zeros equally spaced on the arc \(\{e^{i\theta}\ :\ 0\le \theta\le \pi/2\}\). The plot on the left shows the zeros of the polynomial and its derivative. The figure on the right is the same plot "unrolled": the horizontal axis is the argument, and the vertical axis is the distance from the unit circle, rescaled by a constant factor.

Figure 1.1.1. On the left, the zeros and the zeros of the derivative of a degree 16 polynomial having all zeros in \(\frac14\) of the unit circle. On the right, the image of those zeros under the mapping \(r e^{i \theta} \mapsto (\theta, 2 \pi \cdot 16(1-r))\). Zeros of the function are shown as small squares, and zeros of the derivative as small dots.

Figure 1.1.2 is the analogue of the plot on the right side of Figure 1.1.1, for 101 zeros and 501 zeros. Note that in these examples \(\lambda\sim \pi/2\).

Figure 1.1.2. Unrolled and rescaled zeros of the derivative of a polynomial with zeros equally spaced along the arc \(\{e^{i\theta}\ :\ 0\le \theta\le \pi/2\}\). The polynomial has degree 101 (left) and 501 (right).

In Figure 1.1.2 the vertical scales are stretched by a factor of \(2 \pi N (1-r)\) where \(N=101\) and \(501\), respectively.

Figures 1.1.1 and 1.1.2 illustrate that, with this unrolling and rescaling, the zeros of the derivative approach a circle. We see that even though \(\lambda>0\) we have \(\lambda'=0\), but furthermore, since the zeros lie on a (rescaled) circle, we have \(m'(\nu)\gg \nu^2\) as \(\nu\to 0\). Thus, we can have \(m'(\nu)>0\) for all \(\nu>0\), yet \(m(\nu)=0\) for \(\nu\) sufficiently small.
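The data behind these figures can be regenerated with a short computation. The following Python sketch (our illustration, not part of the proofs) builds the degree 16 polynomial of Figure 1.1.1, finds the zeros of its derivative, and applies the unrolling map:

```python
import numpy as np

# Degree-16 polynomial with zeros equally spaced on the arc
# {e^{i theta} : 0 <= theta <= pi/2}, as in Figure 1.1.1.
N = 16
zeros = np.exp(1j * np.linspace(0.0, np.pi / 2, N))
poly = np.polynomial.polynomial.polyfromroots(zeros)

# The 15 zeros of the derivative; by Gauss-Lucas they lie in the convex
# hull of the zeros, hence strictly inside the unit circle.
dzeros = np.polynomial.polynomial.polyroots(
    np.polynomial.polynomial.polyder(poly))

# Unroll and rescale as in the figure: r e^{i theta} -> (theta, 2 pi N (1-r)).
theta = np.angle(dzeros)
height = 2 * np.pi * N * (1.0 - np.abs(dzeros))
for t, h in sorted(zip(theta, height)):
    print(f"theta = {t:+.3f}  rescaled height = {h:7.3f}")
```

Plotting \((\theta, 2\pi N(1-r))\) for the derivative zeros reproduces the circular shape seen on the right of Figure 1.1.1.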

We believe that the above example is the limit of this behavior, and we make the following conjecture, which we view as a refinement of Soundararajan's conjecture.

Conjecture 1.1.1.

If \(m'(\nu) \gg \nu^\alpha\) for some \(\alpha< 2\), then \(m(\nu)>0\) for all \(\nu>0\).

We intend this as a general conjecture, applying to the Riemann zeta function but also to other cases such as a sequence of polynomials with all zeros on the unit circle.

For applications to lower bounds of class numbers [10, 1] one does not actually need \(m(\nu)>0\) for \(\nu < \pi \); it is sufficient to show that a relatively small number of gaps between zeros of the zeta function are small. Our main result, Theorem 1.1.2, obtains such bounds from estimates on the zeros of the derivative of the zeta function.

Denote \( \log_{(2)}t=\log\log t \).

The conclusion of the theorem is weaker than \(m(\nu)>0\) for \(\nu>0\), but only by a factor of \(\logg T\). Thus, it is more than sufficient to apply the results of Conrey and Iwaniec [1]. In particular, Theorem 1.1.2 shows that it is possible to obtain lower bounds for class numbers of imaginary quadratic fields from knowledge of the density of zeros of the derivative of the Riemann zeta function.

There is an apparent discrepancy between Conjecture 1.1.1 and Theorem 1.1.2 which we wish to clarify. In Theorem 1.1.2 we allow exponential decrease of \(m'(\nu)\) as \(\nu\to 0\). While the conclusion of the theorem is weaker than \(m(\nu)>0\) by a factor of \(\logg T\), it may seem curious that the condition in Conjecture 1.1.1 requires \(m'(\nu)\) to be relatively large as \(\nu\to 0\). Indeed, the examples in Section 1.1 show that the condition in Conjecture 1.1.1 cannot be improved for general functions.

The reason for the apparent inconsistency is that, as described in Section 2.3, our method relies on a bound on the moments of the logarithmic derivative. For the Riemann zeta function one expects

\begin{equation} \int_{T}^{2T} \left| \frac{\zeta'}{\zeta}\left(\frac12 + \frac{1}{\log T} + i t\right) \right|^{2k} dt \ll_k T\log^{2k}T. \tag{1.1.3} \end{equation}

The bound (1.1.3) should follow by the method of Selberg [11], although we give a conditional proof that allows us to explicitly determine the implied constant. Such a bound, for one fixed \(k\), would establish a weaker version of Theorem 1.1.2 requiring \(m'(\nu)\gg \nu^{2k}\). However, more general functions like the polynomials in Section 1.1 do not satisfy a bound analogous to (1.1.3). In fact, they are very large on the unit circle and do not satisfy the analogue of the Lindelöf hypothesis. Conjecture 1.1.1 is intended to cover those more general cases, while stronger statements should be true for the zeta function.

It is interesting to speculate on the precise nature of the function \(m'(\nu)\) for the Riemann zeta function. Dueñez et al. [2] give a detailed analysis of the relationship between small gaps between zeros of the zeta function (and analogously for zeros of the characteristic polynomial of a random unitary matrix) and the zeros of the derivative which arise from the small gaps. For the case of the Riemann zeta function they indicate that the random matrix conjectures for the zeros of the zeta function should imply

\begin{equation}m_\zeta'(\nu) \sim \frac{8}{9\pi} \nu^{\frac{3}{2}}, \tag{1.1.4} \end{equation}

as conjectured by Mezzadri [9]. That calculation is based on a more general result which suggests that if \(m(\nu)\sim \kappa \nu^\beta\) then \(m'(\nu) \sim \kappa' \nu^{\beta/2}\) where

\begin{equation}\kappa'=2\pi \frac{\kappa}{\beta} \left(\frac{2}{\pi}\right)^\beta. \tag{1.1.5} \end{equation}

The factor of \(2\pi\) comes from a different normalization used in [2]: here we work with the cumulative distribution functions \(m\) and \(m'\), while in [2] they use density functions. That derivation assumed that zeros of \(\zeta'\) close to the \(\tfrac12\)-line only arise from closely spaced zeros of the zeta-function. The discussion above shows that, without further knowledge of the zeros, this is not a valid assumption. But, as indicated in our Conjecture 1.1.1, if \(\beta< 4\) then we believe that almost all zeros close to the \(\tfrac12\)-line do arise in such a manner. The random matrix prediction for the neighbor spacing of zeros of the zeta-function has \(\kappa=\pi/6\) and \(\beta=3\), which is covered by Conjecture 1.1.1. So our results support the analysis of Dueñez et al. [2].
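As a quick arithmetic check (our sketch, not part of the argument), substituting the random matrix values \(\kappa=\pi/6\) and \(\beta=3\) into (1.1.5) does reproduce the constant \(8/(9\pi)\) of (1.1.4):

```python
import math

# Substitute the random matrix values kappa = pi/6, beta = 3 into (1.1.5):
# kappa' = 2*pi * (kappa/beta) * (2/pi)^beta, which should equal 8/(9*pi).
kappa, beta = math.pi / 6, 3
kappa_prime = 2 * math.pi * (kappa / beta) * (2 / math.pi) ** beta
print(kappa_prime, 8 / (9 * math.pi))  # the two values agree
```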

The remainder of this paper is devoted to the proof of Theorem 1.1.2.