# Lattice gauge theories

 Peter Weisz and Pushan Majumdar (2012), Scholarpedia, 7(4):8615. doi:10.4249/scholarpedia.8615, revision #137037

Curator: Peter Weisz

Lattice gauge theory is a formulation of quantum field theory with gauge symmetries on a space-time lattice. This formulation is particularly suitable for describing hadronic phenomena. In this article we review the present status of lattice QCD. We outline some of the computational methods, discuss some phenomenological applications and a variety of non-perturbative topics. The list of references is severely incomplete; the ones we have included are textbooks, reviews and a few subjectively selected papers. Kronfeld and Quigg (2010) supply a reasonably comprehensive set of QCD references. We apologize for the fact that we have not covered many important topics, such as QCD at finite density and heavy quark effective theory, adequately, and mention some of them only in the last section "In Brief". These topics should be considered in further Scholarpedia articles.

## Introduction

All presently known elementary particle physics phenomena at low energies are well described by four forces; in descending order of strengths these are the strong, electromagnetic, weak and gravitational forces. One widespread hope is that there is a theory which unifies all of these. However in this article we will concentrate on the strong forces, as they occur in nuclei for example, and neglect all the other forces$$^{~\![1]}$$.

Particles which participate directly in the strong interactions are called hadrons. Well known examples of hadrons are the proton, neutron and the pions. The latter two particles, which decay in Nature via the other interactions, are stable in our idealized scenario. The scattering of stable hadrons is experimentally well measured over a wide range of energies and reveals a rich spectrum of low-lying resonances$$^{~\![2]}$$.

It would be a great human achievement if we really knew the fundamental theory of the strong interactions, which explained all hadronic data and also nuclear physics! Remarkably it now seems that there is a candidate theory which achieves this, called Quantum Chromodynamics (QCD). The theory has an extremely interesting historical development, for which we refer to the Scholarpedia article on Asymptotic freedom.

QCD is a quantum field theory in 4 space-time dimensions. The theory is formally specified by the Lagrangian

$\tag{1} L=\frac14\sum_a F_{\mu\nu}^aF_{\mu\nu}^a+\sum_f\bar{q}_f[i\gamma D+m_f]q_f\,,$

describing the interactions of so-called quarks and gluons. The gluons are bosonic vector fields $$A_\mu^a\,,\,\,a=1,\dots N^2-1$$ belonging to an adjoint representation of $$SU(N)\ .$$ The field strength tensor entering the Lagrangian is given by $$F_{\mu\nu}^a=\partial_\mu A_\nu^a-\partial_\nu A_\mu^a+gf^{abc}A_\mu^b A_\nu^c\ ,$$ where $$f^{abc}$$ are the structure constants of the Lie algebra of $$SU(N)\ .$$ Note that the gluons (unlike photons) have self interactions already at the classical level.

The quark fields $$q_{f,\alpha}^i$$ are fermionic fields; the flavor indices $$f$$ run from $$1$$ to $$N_f\ ;$$ $$\alpha$$ denotes a Dirac spin index and $$i$$ a color index running from 1 to $$N\ .$$ Presently $$N_f=6$$ species of flavor are experimentally known. The covariant derivative appearing above is given by $$(D_\mu q)^i=\partial_\mu q^i+gA_\mu^a R^a_{ij}q^j\ ,$$ where $$R^a$$ is a generator of $$SU(N)$$ in the fundamental representation to which the quarks belong. The physical theory is associated to the case $$N=3,$$ but one is free to theoretically investigate all values of $$N\ge2\,\ .$$ The case $$N=1$$ is special; it is the successful theory of Quantum Electrodynamics (QED), which has rather different physical properties.

Esthetics and symmetries have played a crucial role in the postulation of physical theories. Like QED, QCD is a gauge theory. The classical action is fixed by demanding that it is invariant under infinitesimal local $$SU(N)$$ gauge transformations

$\tag{2} \delta q^i(x)=\epsilon^a(x) R^a_{ij}q^j(x) \quad , \quad\delta A^a_\mu(x)=f^{abc}\epsilon^b(x)A_{\mu}^c(x)+\partial_\mu\epsilon^a(x)\,,$

and so that it has the minimum number of derivatives. The relatively simple classical QCD action has conformal invariance for $$m_f=0\ ,$$ and only one dimensionless coupling.

The hadrons are considered to be colorless bound states of the basic constituents. In a Hamiltonian formalism they are created from the vacuum by application of gauge invariant operators e.g. for the proton $$P_\alpha= \Gamma_{\alpha\beta\gamma\delta} \epsilon^{ijk} u^i_\beta u^j_\gamma d^k_\delta\ ,$$ where $$u,d$$ denote the up and down flavored quarks and $$\Gamma$$ acts on the Dirac indices.

Quantization is conveniently performed using the Euclidean path integral formalism. The physics is then extracted from the properties of correlation functions of gauge invariant operators, e.g. a proton 2-point function is formally given by

$\tag{3} \langle P_\alpha(x)\overline{P}_\beta(0)\rangle=Z^{-1}\int{\rm d} A Dq D{\bar q}\,{\rm e}^{-S}\,P_\alpha(x)\overline{P}_\beta(0)\,,$

where the partition function $$Z$$ is such that $$\langle 1\rangle=1\ .$$ In this formulation the quark fields are Grassmann (anti-commuting), for which the integration is specified by the following simple Berezin rules. For a one-component Grassmann field $$\eta$$ they are given by $$\int{\rm d}\eta\,\eta^n=\delta_{n1}$$ for all $$n\ge0\ .$$

The framework allows derivation of Feynman rules for perturbative computations in $$g\ .$$ Due to the local gauge invariance the trivial field configurations are highly degenerate and the formalism requires gauge fixing and the introduction of Faddeev-Popov ghosts (for covariant gauges) as explained in the Scholarpedia article on Faddeev-Popov ghosts. Furthermore an intermediate regulator must be introduced to tame ultra-violet divergences of the Feynman diagrams. The presence of remaining Becchi-Rouet-Stora symmetries and the associated Slavnov-Taylor identities allows one to prove the renormalizability of QCD to all orders of perturbation theory, i.e. the singular terms in the UV cutoff can be canceled by multiplicatively renormalizing the bare couplings and bare fields. Once this is done the cutoff can be removed, leaving well defined expressions.

QCD is perturbatively asymptotically free; this property allows certain physical high energy observables to be computed in the framework of renormalized perturbation theory. As such QCD unites the old quark model and the parton model describing deep inelastic scattering. So far all such computations are consistent with experiment, however the precision obtained is only at the 1-10% level, far from that reached for QED. Nevertheless computations of various QCD background processes are essential to enable unraveling of new physics at the LHC.

In the process of quantization the conformal invariance (for $$m_f=0$$) is unavoidably broken. As a consequence it is expected that in the limit of zero quark masses all hadrons remain massive except for the pions, e.g. the nucleon mass stays finite in this limit$$^{~\![3]}$$.

The question to be answered is: "is QCD a SUPERB Theory in the sense of Penrose?" To be able to answer this we must be able to reproduce the known low-lying spectrum of hadrons, both the stable particles and the rich structure of resonances, as well as the scattering amplitudes of the stable hadrons. These properties are not computable using perturbation theory, thus in order to achieve the above goal we must specify a non-perturbative definition of the theory by making the measure over field space well defined.

## The framework

In this section we discuss the field theoretic framework within which computations in lattice gauge theory are carried out.

### Lattice regularization

Defining the theory on a space-time lattice, as first proposed by Wilson and Smit (1974), provides a non-perturbative gauge invariant UV regularization of QCD. In this formulation the lattice spacing $$a$$ acts as an (inverse) UV cutoff. With this regularization both low and high energy observables can be (and have been) computed.

Mostly (but not exclusively) regular hypercubic lattices are considered for simplicity. The quark fields are located at lattice points and the dynamical gauge fields on links. To each link joining points $$x$$ to $$x+a\hat{\mu}$$ is associated an $$SU(N)$$ matrix $$U_\mu(x)\ ,$$ where $$\hat{\mu}$$ is the unit vector in the $$\mu$$-direction.

Once a lattice action $$S$$ has been proposed, expectation values of observables $$\mathcal{O}$$ are given by integrals

$\langle\mathcal{O}\rangle=Z^{-1}\int Dq D{\bar q} DU\,{\rm e}^{-S}\mathcal{O}\,.$

Here $$S$$ is a lattice action, which will be discussed below, and the Grassmann integral is as described above. The symbol $$DU$$ denotes the product over Haar measures of the $$SU(N)$$ link variables. Since the integration domains are compact it follows that if the number of lattice points is finite, expectation values are well defined (away from singularities of $$\mathcal{O}$$). Boundary conditions in the continuum limit correspond to specifying terms in the action near the boundary and declaring the dynamical variables to be integrated over. The important point to appreciate is that the measure is now no longer formal as it is in the continuum expression.

An introduction to lattice field theory can be found in the Scholarpedia article on Lattice quantum field theory. There it is also explained why the path integral is defined in Euclidean space. Some physical quantities can be directly measured in Euclidean space e.g. masses, some scattering data, and spectral functions. Others require analytic continuation to Minkowski space. In the lattice formalism one hopes, in the continuum limit, to verify the Osterwalder-Schrader axioms, which are equivalent to the Wightman axioms for the correlation functions in Minkowski space.

### Lattice actions

A lattice QCD action consists of the sum of two terms:

$\tag{4} S=-{\bar q} M(U)q+S_g(U)\,.$

The gauge action $$S_g$$ is chosen to satisfy two criteria. First it must be invariant under local gauge transformations

$\tag{5} U_\mu(x)\to g(x)U_\mu(x)g^{-1}(x+a\hat{\mu})\,,$

where $$g(x)\in SU(N)\ .$$ Secondly it is chosen so that it reproduces the classical action in the formal continuum limit $$a\to0$$$$^{~\![4]}$$. For classical considerations the link variables are associated to continuum phase factors (parallel transporters):

$\tag{6} U_\mu(x)=\mathcal{P}\exp\left\{ag_0\int_0^1{\rm d} t\,A_\mu(x+t a\hat{\mu})\right\}\,,$

where $$\mathcal{P}$$ denotes path ordering. The conditions above still allow an infinite number of choices for the action. We rely on the unproven conjecture of universality. This states that a large class of lattice actions have the same continuum limit. This concept is taken from condensed matter physics where universality classes of theories have the same critical behavior. In these considerations locality of the action plays an important role. A simple choice of gauge action is Wilson's:

$\tag{7} S_{\rm Wilson}(U)=\frac{2}{g_0^2}\sum_p\Re{\rm tr}(1-U(p))\,,$

where the sum goes over all plaquettes (squares with sides $$a$$) $$p\ ,$$ and $$U(p)$$ is the ordered product of the gauge fields associated to the links of the plaquette. Here and in (6) $$g_0$$ is a bare coupling.
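As a concrete illustration, the following minimal Python sketch (not production lattice code) builds a random $$SU(2)$$ gauge configuration on a small two-dimensional periodic lattice and evaluates Wilson's action (7). The lattice size, coupling and the choice $$N=2$$, $$d=2$$ are illustrative assumptions made only to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix, Haar-uniform: a unit quaternion
    a0 + i a.sigma picked uniformly on the 3-sphere."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j*a[3],  a[2] + 1j*a[1]],
                     [-a[2] + 1j*a[1], a[0] - 1j*a[3]]])

L, g0 = 8, 1.0
# U[x, y, mu] is the SU(2) link from site (x, y) in direction mu
U = np.array([[[random_su2() for mu in range(2)]
               for y in range(L)] for x in range(L)])

def plaquette(x, y):
    """Ordered product of links around the unit square at (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L          # periodic boundaries
    return (U[x, y, 0] @ U[xp, y, 1]
            @ U[x, yp, 0].conj().T @ U[x, y, 1].conj().T)

S = (2.0 / g0**2) * sum(np.real(np.trace(np.eye(2) - plaquette(x, y)))
                        for x in range(L) for y in range(L))
print(S)   # non-negative, since Re tr U(p) <= N for U(p) in SU(N)
```

Each plaquette contributes $$\Re{\rm tr}(1-U(p))\ge0\ ,$$ so the action is bounded below by zero, with the minimum on the trivial (pure gauge) configurations.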

The fermion action is chosen invariant under gauge transformations (eq.(5)) and

$\tag{8} q(x)\to g(x)q(x)\,,\,\,\,\,{\bar q}(x)\to{\bar q}(x)g^{-1}(x)\,.$

There are again many choices each with their own advantages and disadvantages. Among the most common formulations used by the international collaborations are Wilson fermions (and improvements thereof), twisted mass fermions, staggered (Kogut-Susskind) fermions and those obeying the Ginsparg-Wilson relation (e.g. overlap). They differ mainly in the way that chiral symmetry is treated. The ($${\rm O}(a)$$ unimproved) Wilson fermion action for example is

$\tag{9} S_F=a^4\sum_{x}{\bar q}(x)(D_{\rm w}+m_0)q(x)$

where the Wilson-Dirac operator is given by

$\tag{10} D_{\rm w}=\frac{1}{2}\left\{\gamma_\mu(\nabla^\star_\mu+\nabla_\mu)-a\nabla^\star_\mu\nabla_\mu\right\}\,,$

with gauge covariant forward difference operator

$\tag{11} \nabla_\mu q(x)={1\over a}\left[U(x,\mu)q(x+a\hat{\mu})-q(x)\right]\,.$

Given a lattice action it is important to have a good understanding of the phase structure. In particular the continuum limit of the lattice theory is obtained at a second order phase transition $$g_0\rightarrow g_{\rm crit}$$ when the correlation lengths $$\to\infty\ .$$ The phase structure may be quite intricate, e.g. for Wilson's fermion action there is also an (Aoki) phase where parity is broken.

It is helpful, but not necessary, for the physical interpretation if the theory has a positive transfer matrix. This is the case for example for the Wilson gauge and fermionic actions described above. To have a quantum mechanics interpretation it suffices to have positivity in the weaker sense of Osterwalder and Seiler.

To compute correlation functions the Grassmann integrals over the quark fields are first done analytically. This is easily done using the rules stated above, because QCD actions are always simply bilinear in the quark fields. The following example illustrates the procedure:

$\tag{12} \begin{align} \langle{\bar q}\Gamma q\rangle &= \frac{1}{Z}\int Dq\,D{\bar q}\,DU\,{\bar q}\,\Gamma q\,{\rm e}^{\,{\bar q}M(U)q-S_g(U)}\\ &= -\frac{1}{Z}\int Dq\,D{\bar q}\,DU\,\frac{\partial}{\partial\eta}\Gamma\frac{\partial}{\partial{\bar \eta}}\,{\rm e}^{\,{\bar q}M(U)q-S_g(U)+{\bar q}\eta+{\bar \eta}q}\Big|_{\eta,{\bar \eta}=0}\\ &= -\int DU\,P(U)\,\frac{\partial}{\partial\eta}\Gamma\frac{\partial}{\partial{\bar \eta}}\,{\rm e}^{\,{\bar \eta}M^{-1}(U)\eta}\Big|_{\eta,{\bar \eta}=0} = -\int DU\,P(U)\,{\rm tr}(\Gamma M^{-1}(U))\,, \end{align}$

where $$P(U)=Z^{-1}{\rm det} M(U)\,{\rm e}^{-S_g(U)}\,\ .$$ The main task now remaining is to perform the (huge) integrals over the gauge fields efficiently.
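The key identity used above, that the bilinear Grassmann integral produces $${\rm det}\,M(U)$$ (with sign conventions fixed by the ordering of the measure), can be checked directly with a tiny symbolic Grassmann algebra. The sketch below is purely illustrative: monomials in the generators are represented as bitmasks, products pick up a $$(-1)$$ per transposition, and $$\det M$$ is extracted as the top-monomial coefficient of the expanded exponential.

```python
import numpy as np
from math import factorial

def gmul(a, b):
    """Multiply Grassmann polynomials given as {bitmask: coeff} dicts.
    Bit k of a mask marks generator theta_k; monomials are kept in
    ascending index order, with a (-1) per transposition on reordering."""
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            if ma & mb:                      # theta_k^2 = 0
                continue
            swaps = 0
            for k in range(mb.bit_length()):
                if (mb >> k) & 1:            # bits of ma above k must be hopped over
                    swaps += bin(ma >> (k + 1)).count("1")
            m = ma | mb
            out[m] = out.get(m, 0.0) + ca * cb * (-1) ** swaps
    return out

def grassmann_det(M):
    """det(M) as the coefficient of etabar_1 eta_1 etabar_2 eta_2 ...
    in exp(etabar M eta); etabar_i -> bit 2i, eta_j -> bit 2j+1."""
    n = M.shape[0]
    X = {}
    for i in range(n):
        for j in range(n):
            mask = (1 << 2 * i) | (1 << (2 * j + 1))
            sign = 1.0 if 2 * i < 2 * j + 1 else -1.0   # reorder to ascending index
            X[mask] = sign * M[i, j]
    term, expX = {0: 1.0}, {0: 1.0}
    for k in range(1, n + 1):                # series truncates: X^(n+1) = 0
        term = gmul(term, X)
        for m, c in term.items():
            expX[m] = expX.get(m, 0.0) + c / factorial(k)
    return expX.get((1 << 2 * n) - 1, 0.0)

M = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, -1.0],
              [1.0, 0.0, 3.0]])
print(grassmann_det(M), np.linalg.det(M))   # the two agree
```

The same bookkeeping, applied to $${\rm e}^{\,{\bar q}Mq+{\bar q}\eta+{\bar \eta}q}\ ,$$ reproduces the propagator $$M^{-1}$$ appearing in eq.(12).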

### Identities

Before discussing the computation of path integrals we point out that useful relations between correlation functions can be deduced without explicit integration. These are obtained by making infinitesimal transformations on the fields in the path integral. For transformations which leave the functional measure invariant the relations take the form

$\tag{13} \langle\delta S\,{\mathcal O}\rangle=\langle\delta{\mathcal O}\rangle\,.$

One class of such identities where the fields are linearly shifted yields the Schwinger-Dyson equations. Another useful class arises when the action is invariant or has simple properties under symmetry transformations. In the classical theory this leads to (partially) conserved currents (Noether's theorem). In the quantized theory one obtains Ward-Takahashi (WI) identities.

To illustrate this consider axial transformations for two flavors ($$N_f=2$$):

$\tag{14} \delta_A q(x)=\omega^a(x)\frac12\tau^a\gamma_5 q(x)\quad ,\quad\delta_A{\bar q}(x)=\omega^a(x){\bar q}(x)\frac12\tau^a\gamma_5\,,$

where $$\tau^a$$ are the Pauli matrices. Working formally in the continuum (with an assumed chiral invariant regularization) the action transforms as

$\tag{15} \delta_AS=\int_\mathcal{R}{\rm d}^4 x\,\omega^a(x)\left[-\partial_\mu \mathcal{A}_\mu^a(x)+2mP^a(x)\right]\,,$

where $$P^a={\bar q}\frac{\tau^a}{2}\gamma_5 q$$ is the pseudoscalar density and we have assumed that $$\omega^a(x)=0$$ for $$x$$ outside a bounded region $$\mathcal{R}\ .$$ For example if the observable $${\mathcal O}={\mathcal O}_{\rm ext}$$ has no support in $$\mathcal{R}$$ then the WI becomes

$\tag{16} \langle\left[-\partial_\mu \mathcal{A}^a_\mu(x)+2mP^a(x)\right]{\mathcal O}_{\rm ext}\rangle=0\,,$

the famous PCAC relation which has many applications.

If $${\mathcal O}={\mathcal O}_{\rm int}{\mathcal O}_{\rm ext}\ ,$$ having support inside and outside $$\mathcal{R}\ ,$$ we obtain

$\tag{17} \int_\mathcal{R}{\rm d}^4x\,\omega^a(x)\langle\left[-\partial_\mu\mathcal{A}^a_\mu(x)+2mP^a(x)\right]\mathcal{O}_{\rm int}\mathcal{O}_{\rm ext}\rangle=\langle\delta_A\mathcal{O}_{\rm int}\mathcal{O}_{\rm ext}\rangle\,,$

which in the limit $$\omega^a(x)\to{\rm constant}$$ inside $$\mathcal{R}\ ,$$ and $$m=0$$ simplifies to

$\tag{18} \omega^a\int_{\partial\mathcal{R}}{\rm d}\sigma_\mu(x)\,\langle\mathcal{A}^a_\mu(x)\mathcal{O}_{\rm int}\mathcal{O}_{\rm ext}\rangle= -\langle\delta_A{\mathcal O}_{\rm int}{\mathcal O}_{\rm ext}\rangle\,,$

where $${\rm d}\sigma_\mu$$ is the outward normal to the boundary of $$\mathcal{R}\ .$$ For the case $${\mathcal O}_{\rm int}=\mathcal{A}^b_\nu(y)$$ and $$\mathcal{R}$$ the region between two fixed time hyperplanes at $$t_2,t_1\ ,$$ (and using $$\delta_A \mathcal{A}_\nu^b(x)=-i\omega^a(x)\epsilon^{abc}\mathcal{V}_\nu^c(x)$$ where $$\mathcal{V}$$ are the vector currents) the WI reads ($$t_2>t_1$$),

$\tag{19} \int{\rm d}^3x\,\langle\left[\mathcal{A}_0^a(\mathbf{x},t_2)-\mathcal{A}_0^a(\mathbf{x},t_1)\right]\mathcal{A}_\nu^b(y)\mathcal{O}_{\rm ext}\rangle =i\epsilon^{abc}\langle\mathcal{V}^c_\nu(y)\mathcal{O}_{\rm ext}\rangle\,.$

The above identity is equivalent to the current algebra relation $$\left[\widehat{\mathcal{A}}^a_0(\mathbf{x},t),\widehat{\mathcal{A}}_0^b (\mathbf{y},t)\right]=i\delta^{(3)}(\mathbf{x-y})\epsilon^{abc}\widehat{\mathcal{V}}_0^c(\mathbf{x},t)$$ in Minkowski space. All current algebra relations can be obtained similarly.

Analogous identities can be rigorously derived for the lattice theory. One way to renormalize currents non-perturbatively, is to enforce the continuum Ward identities (in some cases only up to lattice artifacts).

A symmetry present in the classical theory but violated in the quantum theory is called anomalous. Examples are the breaking of conformal symmetry and breaking of the flavor axial U(1) symmetry in QCD. The reproduction of the assumed correct anomalous Ward identities in the continuum limit is a necessary condition on the class of lattice regularizations.

## Non-perturbative effects

The non-Abelian nature of QCD manifests itself through two very important non-perturbative effects which distinguish it from the physically relevant phase of Abelian QED. These are spontaneous breaking of chiral symmetry and confinement.

### Spontaneous symmetry breaking

When two or more quark masses vanish the action is invariant under chiral transformations. The vacuum may however not be invariant in which case one says that the symmetry is spontaneously broken. The breakdown is signaled by a non-vanishing vacuum expectation value of $${\bar q} q \ :$$

$\tag{20} \Sigma=-\lim_{m\to0}\lim_{V\to\infty}\langle{\bar q} q\rangle\,.$

In addition the spectrum will have massless Nambu-Goldstone bosons. When quark masses are switched on the symmetry is broken also by the Lagrangian and the N-G bosons acquire masses. The squared masses of the bosons tend to zero linearly in the quark masses; the relation is given by the Gell-Mann-Oakes-Renner relation

$\tag{21} m_\pi^2=f_\pi^{-2}(m_u+m_d)\langle \overline{u}u\rangle+{\rm O}\!\left(m_q^2\ln(m_q^2)\right)\,.$

In this scenario small quark masses "explain" the small masses of the physical pions relative to other hadrons.
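A rough numerical check of the GMOR relation (21) is instructive. The input values below are illustrative ballpark numbers, not lattice results: $$f_\pi\approx92\,$$MeV in the chiral normalization for which eq.(21) holds with these inputs, $$m_u+m_d\approx7\,$$MeV at a typical scale, and $$\Sigma\approx(250\,{\rm MeV})^3\ .$$

```python
# Rough numerical check of the GMOR relation; the values below are
# illustrative ballpark numbers (assumptions), not lattice results.
f_pi   = 92.0          # MeV, pion decay constant (chiral normalization)
mq_sum = 7.0           # MeV, m_u + m_d at a typical scale
Sigma  = 250.0**3      # MeV^3, chiral condensate <u-bar u>

m_pi = (mq_sum * Sigma / f_pi**2) ** 0.5
print(m_pi)            # ~114 MeV: the right ballpark for the physical pion
```

The result lands within some 20% of the physical pion mass, consistent with the picture that the pion's lightness is driven by the small quark masses.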

The broken symmetry relations are expressed by the PCAC relations (eq.(16)). Physical consequences of the chiral WIs are obtained by making approximations for the spectral functions (e.g. pole dominance) and using dispersion relations. Alternatively the low energy consequences of the symmetries are summarized by effective chiral Lagrangians first proposed by Weinberg and later brought to perfection by Gasser and Leutwyler. This non-renormalizable effective theory with the pions as basic fields is a good description at low energies. The expansion is in orders of momenta of the fields. In the lowest order only two parameters occur, but at higher orders new effective terms appear, whose coefficients can be fitted to experiment.

So far chiral perturbation theory ($$\chi$$PT) has been used mainly to extrapolate lattice data e.g. for the scattering lengths the pion mass dependence is of the form$$^{~\![5]}$$

$\tag{22} m_\pi \ell_{\pi\pi}^{I=2}=-\frac{m_\pi^2}{8\pi f_\pi^2}\left[1+\frac{3m_\pi^2}{16\pi^2 f_\pi^2}\left(\ln(m_\pi^2/f_\pi^2)+X\right)+\dots\right]$

where $$X$$ is a Gasser-Leutwyler low energy constant. But now that physical pion masses can be simulated the coefficients of the effective chiral Lagrangian can be extracted from lattice computations once the chiral logs have been identified.
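The size of the leading term in eq.(22) at the physical point is easily evaluated. The sketch below assumes the $$f_\pi\approx130\,$$MeV normalization, inferred from the coefficient $$1/(8\pi f_\pi^2)$$ in eq.(22); the numerical inputs are PDG-like ballpark values, not results from this article.

```python
import math

# Leading-order (Weinberg) piece of eq.(22) for the I=2 pipi scattering
# length; f_pi ~ 130 MeV in the normalization assumed for eq.(22).
m_pi, f_pi = 139.6, 130.4          # MeV, illustrative physical-point values
a2 = -m_pi**2 / (8 * math.pi * f_pi**2)
print(a2)   # ~ -0.046, close to the measured value of about -0.044
```

The chiral-log correction in the bracket of eq.(22) shifts this leading estimate only at the few-percent level near the physical point.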

The breaking of chiral symmetry is closely related to the low-lying eigenvalues of the massless Dirac operator, which in the continuum are purely imaginary $$i\lambda_k\ .$$ Let $$\rho$$ be the eigenvalue density

$\tag{23} \rho(\lambda,m)=\frac{1}{V}\sum_k\langle\delta(\lambda-\lambda_k)\rangle\,,$

then since non-zero eigenvalues appear as complex conjugate pairs

$\tag{24} \langle{\bar q} q\rangle\propto\int_0^\infty{\rm d}\lambda\,\rho(\lambda)\left[\frac{1}{\lambda+im}+\frac{1}{-\lambda+im}\right]\,.$

Taking the limits in the correct order one obtains the Banks-Casher relation:

$\tag{25} \lim_{\lambda\to0}\lim_{m\to0}\lim_{V\to\infty}\rho(\lambda,m)=\Sigma/\pi\,.$

In a free theory $$\rho(\lambda)\sim\lambda^3$$ and chiral symmetry is not spontaneously broken.
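The Banks-Casher mechanism can be illustrated with a chiral random-matrix model of the Dirac operator, a standard toy model for its low-lying spectrum; the matrix size, ensemble and normalization below are illustrative choices, not a lattice computation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, samples = 60, 300
lam = []
for _ in range(samples):
    # Chiral random-matrix model: D = [[0, W], [-W^+, 0]] is
    # anti-Hermitian with eigenvalues +/- i s_k, where the s_k are
    # the singular values of the complex Gaussian block W.
    W = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
    lam.extend(np.linalg.svd(W, compute_uv=False))
lam = np.array(lam)

# estimate the spectral density near lambda = 0
eps = 0.2
rho0 = np.mean(lam < eps) / eps
print(rho0)   # clearly non-zero: a condensate a la Banks-Casher, eq.(25)
```

In contrast, sampling a free-theory density $$\rho(\lambda)\sim\lambda^3$$ would give a vanishing estimate as $$\epsilon\to0\ ,$$ i.e. no condensate.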

A relatively new proposal of Giusti and Lüscher is to investigate spectral observables such as the average number of eigenvalues of $$\mathcal{Q}=D^\dagger D+m^2$$ below $$M^2\ :$$

$\tag{26} \nu(M,m)=\langle \mathbb{P}_M\rangle=V\int_{-\Lambda}^{\Lambda}{\rm d}\lambda\,\rho(\lambda,m)\,,\quad\Lambda=\sqrt{M^2-m^2}\,,$

where $$\mathbb{P}_M$$ is the orthogonal projector to the space of eigenvectors of $$\mathcal{Q}$$ below $$M^2\ .$$

The authors give convincing arguments to show that $$\nu_R(M_R,m_R)=\nu(M,m_q)$$ is a renormalized quantity, where $$M_R=Z_P^{-1}M$$ and $$m_R=Z_m m_q$$ are renormalized masses. The conceptual advance is that the chiral condensate can be computed through the computation of the average spectral density using any lattice fermions, even Wilson fermions although these violate chiral symmetry. One has

$\tag{27} \Sigma=\frac{\pi}{2}\frac{\nu(M,m)}{\Lambda V}+\dots,$

where the dots represent corrections which can be computed using $$\chi$$PT.

### Chiral symmetry on the lattice

The topic of chiral symmetry on the lattice is a vast subject which we only touch upon here because it will be covered in detail by the Scholarpedia article on Lattice chiral fermions.

A theorem of Nielsen and Ninomiya states that a lattice Dirac operator $$D(x)$$ cannot have all the following four properties:

1. local and bounded by $$C{\rm e}^{-\gamma|x|}\ ,$$
2. its Fourier transform $$\tilde{D}(p)$$ behaves like a massless continuum Dirac operator for $$p\sim0$$ i.e. $$\sim i\gamma p+{\rm O}(ap^2)\ ,$$
3. $$\tilde{D}(p)$$ is invertible for $$p\ne0$$(no massless doublers),
4. $$\gamma_5 D+D\gamma_5 =0\,.$$

For example the naive discretization $$\tilde{D}(p)=(i/a)\gamma_\mu\sin ap_\mu$$ has $$2^d$$ massless modes (in $$d$$ space-time dimensions), and the Wilson action has no doublers but explicitly violates the chiral symmetry (4).
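The doubler counting can be verified directly in momentum space. The sketch below (with $$a=1$$ chosen for the example) counts the zeros of the naive and Wilson operators at the $$2^4$$ corners of the Brillouin zone; the gamma-matrix algebra only enters through $$\tilde{D}^\dagger\tilde{D}\ ,$$ so no matrices are needed.

```python
import numpy as np
from itertools import product

a = 1.0

def naive(p):
    """|Dtilde(p)| for the naive operator i gamma_mu (1/a) sin(a p_mu);
    Dtilde^+ Dtilde = sum_mu sin^2(a p_mu)/a^2, so matrices drop out."""
    return np.sqrt(sum(np.sin(a * pm) ** 2 for pm in p)) / a

def wilson(p):
    """Same magnitude for the Wilson-Dirac operator, which adds the
    momentum-dependent mass term (1/a) sum_mu (1 - cos(a p_mu))."""
    m = sum(1 - np.cos(a * pm) for pm in p) / a
    return np.sqrt(sum(np.sin(a * pm) ** 2 for pm in p) / a**2 + m**2)

corners = list(product([0.0, np.pi / a], repeat=4))
n_naive  = sum(naive(p)  < 1e-12 for p in corners)
n_wilson = sum(wilson(p) < 1e-12 for p in corners)
print(n_naive, n_wilson)   # 16 massless modes vs only the one at p = 0
```

The Wilson mass term gives each doubler a mass of order $$1/a\ ,$$ removing it from the continuum spectrum, at the price of breaking property (4) explicitly.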

A way out, first introduced by Ginsparg and Wilson, is to relax property (4) and demand$$^{~\![6]}$$

$\tag{28} \gamma_5D+D\gamma_5=\bar{a} D\gamma_5D\,,\qquad\bar{a}=a/(1+s)\,,\,\,|s|<1\,.$

This suggestion was forgotten for some time, since the authors found no explicit solutions. Hasenfratz and Niedermayer (see Niedermayer (1999)) realized that there are in principle an infinite number of implicit solutions, obtained by blocking from the continuum, and around the same time Neuberger found explicit solutions (called overlap operators). Neuberger's Dirac operator connects widely separated lattice points; nevertheless the operator has been shown to be local in the physical sense. Lüscher pointed out that actions satisfying the GW relation possess an exact chiral symmetry. Chiral Ward identities with GW fermions closely resemble those of the continuum.
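For the free theory, Neuberger's construction $$D=\frac{1}{\bar a}\left(1+A(A^\dagger A)^{-1/2}\right)$$ with $$A=D_{\rm w}-1/\bar a$$ can be checked against the GW relation (28) numerically on a tiny lattice. The sketch below is an illustrative simplification: two dimensions, 2-component spinors, free links $$U=1$$ and $$s=0\ .$$

```python
import numpy as np

L, a, s = 4, 1.0, 0.0
abar = a / (1 + s)

# 2D free-field setup: gamma_1 = sigma_x, gamma_2 = sigma_y, gamma_5 = sigma_z
g = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex)]
g5 = np.array([[1, 0], [0, -1]], complex)

def shift(mu):
    """Matrix implementing q(x) -> q(x + a mu-hat), periodic boundaries."""
    T = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            xn = (x + (1 if mu == 0 else 0)) % L
            yn = (y + (1 if mu == 1 else 0)) % L
            T[x * L + y, xn * L + yn] = 1.0
    return T

I = np.eye(L * L)
Dw = np.zeros((2 * L * L, 2 * L * L), complex)
for mu in range(2):
    T = shift(mu)
    Dw += np.kron((T - T.T) / (2 * a), g[mu])              # gamma_mu (nabla + nabla*)/2
    Dw += np.kron((2 * I - T - T.T) / (2 * a), np.eye(2))  # -(a/2) nabla* nabla

# Neuberger's overlap operator: D = (1/abar)(1 + A (A^+A)^(-1/2))
A = Dw - np.eye(2 * L * L) / abar
w, V = np.linalg.eigh(A.conj().T @ A)
inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
D = (np.eye(2 * L * L) + A @ inv_sqrt) / abar

G5 = np.kron(np.eye(L * L), g5)
gw_violation = np.linalg.norm(G5 @ D + D @ G5 - abar * D @ G5 @ D)
print(gw_violation)   # essentially zero (machine precision)
```

The relation holds exactly here because the free Wilson-Dirac operator is $$\gamma_5$$-hermitian, so $$A(A^\dagger A)^{-1/2}$$ is unitary; the interacting case works the same way link by link but is far more expensive, as noted below.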

Simulations using overlap (or GW) fermions are very computer-time consuming, however some simulations with them have been performed. They are especially advantageous in situations where chiral properties are important. For example the JLQCD and TWQCD collaborations have used overlap fermions to measure the chiral condensate with the result $$\Sigma^{\overline{\rm MS}}(0,m_s;2{\rm GeV})=(243(4)(16){\rm MeV})^3\,.$$

### Confinement

Although the observation of jets in high energy scattering of hadrons signals a probable underlying parton structure, isolated quarks and gluons are not observed experimentally. This is known as confinement.

In the absence of dynamical quarks, a necessary condition for confinement due to Wilson is that the minimal energy of a state having a static quark-antiquark pair grows with the distance of separation $$R\ .$$ Most considerations conclude that the growth is linear:

$\tag{29} V(R)=\sigma R+...\,,\,\,\,\,R\to\infty\,.$

Here $$\sigma$$ is called the string tension. The static potential $$V(R)$$ can be computed from the expectation value of a Wilson loop $$W={\rm tr}\,U_{\mathcal{C}}$$ where $$U_{\mathcal{C}}$$ is the ordered product of link variables around a $$T\times R$$ rectangular loop $$\mathcal{C}\ :$$

$\tag{30} V(R)=-\lim_{T\to\infty}\frac{1}{T}\ln\langle W\rangle\,.$

Brandt and Ng proved that the Wilson loop is multiplicatively renormalizable to all orders of perturbation theory. The static potential at short distances is hence computable in perturbation theory (up to an unphysical constant). In leading order it is proportional to $$\alpha_{\overline{\rm MS}}(1/R)$$ and can serve as a definition of a running coupling. In 2010 it was computed to 3 loops in perturbation theory in the MS scheme of dimensional regularization. Summing IR divergent terms leads to terms involving logarithms of the coupling, $$\sim\alpha_{\overline{\rm MS}}^4\ln\alpha_{\overline{\rm MS}}\ ,$$ at this order.

Since the potential at intermediate distances can be measured in lattice simulations with high accuracy it is often used to define scales. A popular choice is Sommer's scale $$r_0$$ defined by $$r_0^2V\,'(r_0)=1.65\ .$$ In phenomenological potential models describing charmonium $$r_0\sim 0.5{\rm fm}\,.$$
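A back-of-the-envelope version of the Sommer-scale definition can be obtained from a Cornell-type parametrization $$V(r)=-A/r+\sigma r\ .$$ The parameter values below are illustrative assumptions, not lattice results: $$A=\pi/12$$ (the Lüscher coefficient in $$d=4$$) and $$\sqrt{\sigma}\approx440\,$$MeV, a commonly quoted ballpark value.

```python
import math

# Sommer scale from a Cornell-type potential V(r) = -A/r + sigma*r.
# The numbers are illustrative assumptions, not lattice results.
hbarc = 197.327                  # MeV fm
A     = math.pi / 12             # dimensionless Coulomb-like coefficient
sigma = (440.0 / hbarc) ** 2     # string tension in fm^-2

# r0^2 V'(r0) = 1.65 with V'(r) = A/r^2 + sigma  =>  A + sigma*r0^2 = 1.65
r0 = math.sqrt((1.65 - A) / sigma)
print(r0)   # ~0.53 fm, consistent with the charmonium-model estimate
```

The condition $$r_0^2V\,'(r_0)=1.65$$ reduces to a one-line solve in this parametrization, which is why $$r_0$$ is such a convenient intermediate-distance scale.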

Confinement refers to long distance behavior which is out of the scope of PT. The string tension is non-zero in the strong coupling expansion and numerical computations indicate that it scales like a physical quantity in the continuum limit$$^{~\![7]}$$.

$$\sigma$$ can also be computed from correlation functions of widely separated Polyakov loops:

$\tag{31} \langle P(x)P(y)\rangle\;\underset{|x-y|\to\infty}{\sim}\;{\rm e}^{-\sigma |x-y|}\,.$

The Polyakov loop $$~P(x)={\rm tr}\prod\limits_{l=0}^{N_t-1}U_4(x,l)~$$ is (the trace of) a Wilson loop wrapping around the compact direction of the space $$\mathbb{R}^3\times S^1$$ passing through points $$(x,t)$$ (all $$t$$). The expectation value $$\langle P\rangle$$ can be interpreted as $${\rm e}^{-\beta F}\,,$$ where $$F$$ is the free energy required to add an isolated static quark. Consider (at a fixed time slice $$t=t_0$$) a transformation $$U_4(x,t_0)\to zU_4(x,t_0)$$ for all $$x$$ with $$z\in{\mathbb Z}_N$$ the center of $$SU(N)\ .$$ The action remains invariant but a Polyakov loop transforms with a factor $$z^{k_R}$$ with $$k_R$$ the $$N$$-ality. If the $${\mathbb Z}_N$$ symmetry is not broken spontaneously then $$k_R\ne0$$ implies $$\langle P\rangle=0\ ,$$ defining the confining phase. At some critical finite temperature $$T_c$$ this order parameter becomes non-zero, indicating a deconfinement phase transition.

A suggestive picture (which should be considered in the same spirit as Bohr's description of the atom) is that of a fluctuating electric flux tube joining the quark-antiquark sources. In this scenario many features of the potential (and higher excitations) at large separations can be described by an effective string model. If the only degrees of freedom are assumed to be the transverse fluctuations then there is a universal prediction for the subleading (Lüscher) term ($$d$$ is the number of space-time dimensions)

$\tag{32} V(R)=\sigma R-\frac{(d-2)\pi}{24R}+...\,,\quad R\to\infty\,.$

Note that the coefficient of the $$1/R$$ term is independent of the gauge group. There is strong support for this from lattice simulations. In $$d=3$$ there are initial results which suggest that the next term in the long distance expansion is also universal. Universality of further terms is still under debate.

Another prediction of the effective string picture is the broadening of the flux tube:

$\tag{33} \lambda^2(R)\equiv\frac{\int{\rm d}^2 x_\perp x_\perp^2\mathcal{E}(x_\perp,R/2)} {\int{\rm d}^2 x_\perp \mathcal{E}(x_\perp,R/2)} \sim\frac{(d-2)}{(2\pi\sigma)}\ln(R/R_0)\,\,{\rm for}\,\,R\to\infty\,,$

where $$\mathcal{E}(x)$$ is the chromo-electric field energy density distribution

$\tag{34} \mathcal{E}(x)\propto \langle q{\bar q}|{\rm tr}{\mathbf E}^2(x)|q{\bar q}\rangle -\langle q{\bar q}|q{\bar q}\rangle\langle\Omega|{\rm tr}{\mathbf E}^2(x)|\Omega\rangle\,.$

Numerical evidence for logarithmic broadening in $$SU(2)$$ YM theory in $$d=3$$ dimensions has recently been obtained by Gliozzi, Pepe and Wiese.

With dynamical quarks the flux tube eventually breaks and the static potential will tend to a constant at large separations.

A continuing effort is made to identify a/the class of gauge configurations responsible for confinement. The semi-classical candidates considered include magnetic monopoles, dyons, and vortices associated with the center; these may be more manifest in fixed gauges, and may be related to one another. Proposed mechanisms for confinement typically require condensation of such gauge configurations in the YM theory ground state and go under names such as "dual superconductivity", "vortex condensation" etc.

Some (as yet unsuccessful) attempts at constructing a rigorous proof use real space RG methods; others investigate the IR behavior of gluon and ghost propagators in fixed gauges from approximate solutions to Schwinger-Dyson equations or numerical simulations.

The main questions are however still unanswered. What is the confinement mechanism and how can it be detected? Is there a direct relation between confinement and chiral symmetry breaking? What does confinement mean in a dense medium?

### Fixed gauges

The physics of gauge theories is completely encoded in expectation values of gauge invariant variables. The simulation of the latter using lattice gauge theory does not require gauge fixing. However sometimes gauge fixing is convenient, e.g. when performing perturbative expansions$$^{~\![8]}$$. Moreover a large group within the lattice community is of the opinion that certain non-perturbative phenomena might manifest themselves more clearly when the gauge is fixed, e.g. the possible identification of dominating configurations responsible for confinement. The physics discovery potential of gauge non-invariant variables is however not manifest.

In this subsection the formulae refer to a continuum formulation; the extension to the lattice regularization is however straightforward. Sometimes axial gauges are employed, e.g. $$A_0=0\ .$$ The non-perturbative implementation of the Coulomb gauge $$\sum_{i=1,\dots,d-1}\partial_i A_i=0$$ or the Landau gauge, $$\sum_{\mu=1,\dots,d}\partial_\mu A_\mu=0$$ in $$d$$ Euclidean dimensions is hampered by the existence of multiple solutions of the gauge condition on the same gauge orbit - the so-called Gribov copies. The problem is not relevant if one restricts to weak coupling, but in finite volume with periodic boundary conditions integration over the entire configuration space renders the partition function zero, so that averages are ill defined - the so-called Neuberger paradox$$^{~\![9]}$$. To overcome this problem it was proposed that the functional integral should be restricted to the (convex) Gribov region $$\Omega$$ containing the trivial configuration $$A_\mu(x)=0$$ where the Faddeev-Popov operator $$M(A)=- D_\mu\partial_\mu$$ (here for the Landau gauge) is positive:

$\tag{35} \Omega=\{A:\partial_\mu A_\mu=0\,\,;M(A)\ge0\}\,.$

There are however Gribov copies within the Gribov region. Ideally one should integrate over the subregion of configuration space $$\Lambda\subset\Omega$$ having Gribov copies only on its boundary $$\partial\Lambda\ .$$ This region is called the fundamental modular domain. Defining the Morse function $$F_A(g)=\parallel\!\!\mbox{}^{g}A\parallel\,,\;\mbox{}^{g}A=g^{-1}Ag+g^{-1}\partial g\,,$$ $$\left ( \parallel A\parallel^2=\int{\rm d}^4x\,{\rm tr} A_\mu^\dagger(x) A_\mu(x)\right ) \ ,$$ considered as a mapping on the gauge group $$\mathcal{G}$$ for fixed $$A\ ,$$ $$\Lambda$$ is defined by $$\Lambda=\{A:\,F_A(1)\le F_A(g)\,\,\forall g\in\mathcal{G}\}\,\ .$$ One often encounters the statement that configurations essential for confinement are located close to the boundary $$\partial\Lambda\ .$$

As we mentioned above, numerical investigations of infrared properties of the gluon and ghost propagators $$D(k)\,,G(k)$$ have been pursued to find signals for confinement. The situation is however still quite confused and as an example we outline the present situation for the Landau gauge. Suppose

$\tag{36} D(k)\sim (k^2)^{-1-\alpha_D}\,,\quad G(k)\sim (k^2)^{-1-\alpha_G}\,,\quad {\rm for}\,\,k\to0\,.$

Initially Mandelstam and others suggested that the gluon propagator should show a strong singularity $$\alpha_D=1\ .$$ However later Zwanziger, assuming that Gribov copies inside $$\Omega$$ do not influence expectation values, argued that $$D(0)=0\,!$$ i.e. $$\alpha_D<-1\ ,$$ and $$\alpha_D+2\alpha_G=0$$ for $$d=4\ .$$ Present numerical investigations indicate however that $$D(0)\ne0$$ (and $$\alpha_G>0$$). Zwanziger offers the following two explanations for the discrepancy between theoretical and numerical investigations. Either the numerical gauge fixing procedure is algorithm dependent and only produces a subregion of $$\Omega\ ,$$ or the generating functional of gluon connected Green functions $$W(J)$$ is non-analytic in the source at $$J=0\ .$$

## Computational techniques

In this section we briefly discuss some of the actual computational techniques used for lattice gauge theory calculations.

### Expansions

As for the case of statistical mechanical models there are various analytic expansions of the path integrals. Firstly there is the strong coupling expansion for large bare coupling $$g_0\ .$$ Usually in this limit the correlation lengths are finite and one can for example compute the masses (in units of the cutoff) of glueballs, the excitations of the pure YM theory, and the string tension. The strong coupling expansions are non-universal, i.e. they depend drastically on the lattice action. Initially it was hoped that ratios of physical quantities become relatively independent of $$g_0$$ in a range already attainable by the expansion. Now the expansion is used more to establish qualitative behaviors and act as quantitative checks for the algorithmic computer programs.

As in the continuum one can perform the standard perturbative expansion in the bare coupling. The Feynman rules are however more complicated. Integrals run over the Brillouin zone $$|k_\mu|\le\pi/a$$ and there are many more vertices than in the continuum. Nowadays the vertices are generated automatically by algebraic computer programs. The renormalization of lattice theories with fermions was first considered by Sharatchandra. An all order proof of renormalizability of a large class of lattice formulations was given by Reisz. Essential ingredients are the (lattice) BRS symmetries and the power counting theorems for lattice Feynman diagrams.

Mean field theory can be used to suggest qualitative structure and renormalization group blockings of various kinds are employed in attempts of rigorous proofs of conjectured properties.

### Algorithms

As mentioned above the problem of computing correlation functions is reduced to performing integrals over compact domains. For lattices with many points the dimension of the integrals is so huge that it is impossible to compute the integrals accurately by quadrature. If $${\rm det} M$$ is real and non-negative then $$P(U)$$ is a probability distribution over $$U$$ and the integral may be accurately estimated by a Monte-Carlo procedure i.e. importance sampling, as has long been used as a computational strategy for condensed matter physics. Here we present a very brief outline of the main algorithms used for lattice gauge theory computations. For a fuller exposition see (Lüscher, 2009).

Representative ensembles $$U_1,\dots,U_s$$ of a probability distribution for which direct sampling is difficult are obtained through an iterative procedure. The simplest and most common algorithm generates the $$n$$-th element in the sequence $$U_n$$ depending only on the previous element $$U_{n-1}\ .$$ Such a sequence is known as a Markov chain. Associated with the Markov chain is the transition probability from $$U_{n-1}=U$$ to $$U_n=U'\ ,$$ which we denote by $$T(U\to U')\ .$$ It has the properties :

1. $$T(U\to U')\ge0$$ for all $$U,U'$$ and $${\textstyle \sum}_{\,U'}\; T(U\to U')=1\,\, \forall \,U\,.$$
2. $${\textstyle \sum}_{\,U}\; P(U)T(U\to U')=P(U')\,\,\forall\,U'\,.$$
3. $$T(U\to U)>0\,\,\forall\,U\,.$$
4. If $$\mathcal{U}$$ is a non-empty proper subset of states then there exist two states $$U\in\mathcal{U}\,,U'\notin\mathcal{U}$$ such that $$T(U\to U')>0\,.$$

Properties 3 and 4 ensure aperiodicity and ergodicity, which guarantee that the generated Markov chains simulate the system in the required way.

If we impose the sufficient (but not necessary) condition

$\tag{37} T(U\to U')P(U)=T(U'\to U)P(U'),$

known as detailed balance, then one can prove that the equilibrium distribution $$P(U)$$ is a fixed point of the Markov process. This means that even if we start from an arbitrary configuration of $$U\ ,$$ after some time we start sampling the distribution $$P(U)\ .$$ The initial part of the sequence called the equilibration time is ignored when computing averages. An estimate of the equilibration time is often obtained by monitoring some physical observable at each step of the Markov process.
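As a concrete illustration, properties 1-3 and the detailed balance condition (37) can be verified numerically for a toy three-state system; the target distribution and proposal probabilities below are invented purely for illustration.

```python
import numpy as np

# Toy 3-state system; the target distribution P and the proposal probabilities
# are invented for illustration. The proposal picks each of the other two states
# with probability 0.4 (otherwise it proposes staying put), and is accepted with
# the Metropolis probability min(1, P(U')/P(U)).
P = np.array([0.5, 0.3, 0.2])
n = len(P)
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            T[i, j] = 0.4 * min(1.0, P[j] / P[i])
    T[i, i] = 1.0 - T[i].sum()          # rejections keep the old state

assert np.allclose(T.sum(axis=1), 1.0)  # property 1: T is a transition matrix
assert (np.diag(T) > 0).all()           # property 3: aperiodicity
assert np.allclose(P @ T, P)            # property 2: P is a fixed point
# detailed balance, Eq. (37): T(U->U') P(U) is symmetric in U, U'
assert np.allclose(T * P[:, None], (T * P[:, None]).T)
```

Since detailed balance holds here, stationarity (property 2) follows automatically by summing (37) over $$U\ .$$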

The Metropolis-Hastings algorithm consists of two steps. First, a proposal $$U'$$ is generated from $$U$$ with probability $$T_0(U\to U')\ .$$ Then comes an accept/reject step, where the proposal is accepted with probability (Gattringer and Lang, 2010)

$\tag{38} P_{\rm acc}(U'|U) = \min \left ( 1,\frac{T_0(U'\to U)P(U')}{T_0(U\to U')P(U)}\right ).$

If the proposal is not accepted, then $$U_n$$ is taken to be the same as $$U_{n-1}\ .$$ For lattice gauge theory algorithms usually $$T_0(U\to U')$$ is chosen to be equal to $$T_0(U'\to U)$$ and then $$P_{\rm acc}$$ reduces to $$P_{\rm acc}(U'|U)=\min \left ( 1, P(U')/P(U)\right )$$ which for lattice QCD becomes

$\tag{39} P_{\rm acc}(U'|U)= \min \left ( 1,\frac{\det M(U')}{\det M(U)}{\rm e}^{-[S_g(U')-S_g(U)]}\right ).$

Note that $$T(U\to U')$$ is given by $$T_0(U\to U')P_{\rm acc}(U'|U)\ .$$
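A minimal sketch of this accept/reject scheme, for a single compact U(1) variable with weight $$P(\theta)\propto\exp(\beta\cos\theta)$$ standing in for a one-plaquette gauge action; the values of $$\beta\ ,$$ the proposal width and the statistics are illustrative choices.

```python
import math, random

# Metropolis sampling of a single U(1) "link" with weight P(theta) ∝ exp(beta cos theta).
# The proposal is symmetric, so Eq. (38) reduces to min(1, P(U')/P(U)).
def metropolis_u1(beta=2.0, eps=1.0, n_steps=200_000, seed=7):
    rng = random.Random(seed)
    theta, n_acc, acc_cos = 0.0, 0, 0.0
    for _ in range(n_steps):
        prop = theta + rng.uniform(-eps, eps)       # symmetric proposal T0
        if rng.random() < math.exp(beta * (math.cos(prop) - math.cos(theta))):
            theta, n_acc = prop, n_acc + 1
        acc_cos += math.cos(theta)                  # rejections recount the old state
    return acc_cos / n_steps, n_acc / n_steps

def exact_mean_cos(beta=2.0, n=20_000):             # <cos> = I1(beta)/I0(beta) by quadrature
    num = den = 0.0
    for k in range(n):
        th = -math.pi + (k + 0.5) * 2.0 * math.pi / n
        w = math.exp(beta * math.cos(th))
        num += math.cos(th) * w
        den += w
    return num / den

mc, acc_rate = metropolis_u1()
assert abs(mc - exact_mean_cos()) < 0.05            # agrees with the exact average
```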

Changing a single link produces a change in $$S_g$$ which can be computed locally, whereas computing the determinant ratio involves the full lattice even if only a single link changes. Since computing the determinant is expensive, an $$O(r^3)$$ process for a matrix of size $$r\times r\ ,$$ early lattice gauge calculations often set $${\rm det} M=1$$ by hand. This is called the quenched approximation. The determinant of $$M$$ encodes all the effects of the quark loops, and simulations which take the determinant ratio into account go under the name of dynamical simulations. For the actual implementation, one exponentiates the determinant using auxiliary complex bosonic fields (say $$\phi$$) known as pseudofermions. The fermionic part of the Boltzmann weight then becomes $${\rm e}^{- \phi^{\dagger} M^{-1}\phi}$$ with an additional integration over $$\phi$$ in the partition function.

Figure 1: Mass scaling cost for various lattice fermion formulations (Berlin wall plot) (Clark, 2006)

The update process will be too slow or prohibitively expensive (in terms of computer resources) if one has to compute $$P_{\rm acc}$$ for an individual link update and do this for all links on the lattice. On the other hand if one changes a significant number of links randomly, $$P_{\rm acc}$$ would go down exponentially with the number of links changed. So what is needed is a judicious choice of the update proposal so that many links are changed while keeping $$P_{\rm acc}$$ reasonable $$(\gtrsim 0.5)\ .$$ For spin systems there are very efficient algorithms known as cluster algorithms which achieve this. Unfortunately for non-Abelian gauge theories no such algorithms are known to date$$^{~\![10]}$$.

The presently favored algorithm is an improved version of the Metropolis-Hastings algorithm known as Hybrid Monte Carlo (HMC).

The hybrid part generates the proposal and then the Metropolis accept/reject step either accepts or rejects the proposed configuration of $$U$$'s. The Hybrid algorithm itself, as the name suggests, has two parts viz. the Langevin step and the molecular dynamics step. In the Langevin step one introduces a momentum vector $$p$$ which is drawn from a Gaussian distribution and is periodically refreshed to ensure ergodicity. The molecular dynamics step involves integrating the Hamiltonian equations of motion where the Hamiltonian is given by

$\tag{40} {\mathcal H}=\frac{1}{2}\,{\rm tr}\,p^2 +S_g+\phi^{\dagger}M^{-1}\phi$

where $$U$$ and $${\rm e}^{ip}$$ are conjugate variables. The pseudofermion fields are updated in conjunction with the momenta. For a step by step implementation see (Lüscher, 2009). Integrating the equations of motion is the most time consuming part in the algorithm and here the different terms of the Hamiltonian contribute differently. Various strategies (integrators) are used for optimizing this step. In summary the HMC algorithm refreshes the momenta periodically to ensure ergodicity, uses the molecular dynamics equations to move quickly through the phase space and accepts or rejects the proposed configuration to correct for the finite time step used in the integration. It is thus an exact algorithm. Lüscher and Schaefer (2011a) have recently shown that the algorithm is non-renormalizable and their numerical investigations suggest that it is in the same universality class as the Langevin algorithm (i.e. auto-correlation times scaling as $$a^{-2}$$ up to logs).
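The momentum refresh, leapfrog integration of the molecular dynamics equations, and the final accept/reject step can be sketched for the simplest possible "action" $$S(q)=q^2/2\ ,$$ a one-variable Gaussian toy model standing in for $$S_g+\phi^\dagger M^{-1}\phi\ ;$$ the step size and trajectory length are illustrative choices.

```python
import math, random

# HMC sketch for a single real variable with action S(q) = q^2/2.
def hmc(n_traj=20_000, n_steps=10, dt=0.2, seed=1):
    rng = random.Random(seed)
    q, n_acc, q2_acc = 0.0, 0, 0.0
    S = lambda x: 0.5 * x * x           # the "action"
    F = lambda x: -x                    # force -dS/dq
    for _ in range(n_traj):
        p = rng.gauss(0.0, 1.0)         # momentum refresh (ensures ergodicity)
        h_old = 0.5 * p * p + S(q)
        x, pi = q, p
        pi += 0.5 * dt * F(x)           # leapfrog: initial half step in p,
        for _ in range(n_steps - 1):
            x += dt * pi                # alternating full steps in q and p,
            pi += dt * F(x)
        x += dt * pi
        pi += 0.5 * dt * F(x)           # final half step in p
        h_new = 0.5 * pi * pi + S(x)
        # accept/reject corrects for the finite integration step size
        if rng.random() < math.exp(h_old - h_new):
            q, n_acc = x, n_acc + 1
        q2_acc += q * q
    return q2_acc / n_traj, n_acc / n_traj

var_q, acc_rate = hmc()
assert abs(var_q - 1.0) < 0.1           # exact answer: <q^2> = 1
assert acc_rate > 0.7                   # leapfrog conserves H well at this step size
```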

Figure 2: (Lüscher, 2009) Time needed on a 64 processor PC cluster to solve the $$O(a)$$-improved Wilson-Dirac equation in $$N_{\rm f}=2$$ QCD. 64$$\times$$32$$^3$$ lattice, $$a=$$0.08 fm ; $$m_{\rm sea}=$$ 26 MeV ; $$m_{\rm val}$$ between 15 and 90 MeV. Relative residue = 10$$^{-10}\ .$$ Algorithms used : (i) Even-odd preconditioning with Bi-Conjugate gradient stabilised inverter, (ii) Schwarz Alternating Procedure for preconditioning with Generalized Conjugate Residual inverter, (iii) Same as (ii) but with Deflation (low mode projection) first.

For a given set of quark masses and couplings, the HMC algorithm scales as $$V^{5/4}$$ where $$V$$ is the lattice volume. This is made possible because unlike $${\rm det} M\ ,$$ the computation of $$M^{-1}\phi$$ is only an $$O(r)$$ process. The scaling with lattice spacing or quark masses is not so clear. It is estimated (Gattringer and Lang, 2010) that the cost of HMC grows as $$L^5\,a^{-7}\,M_{\pi}^{-6}$$ although other more encouraging figures have also been reported.
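Assuming the cost grows as the pion mass decreases, i.e. $${\rm cost}\propto L^5\,a^{-7}\,M_\pi^{-6}$$ as in the "Berlin wall" estimate, one can compare the relative expense of different simulation parameters; the reference point below is arbitrary.

```python
# Rough cost comparator based on the scaling estimate cost ∝ L^5 a^-7 M_pi^-6;
# purely illustrative, the absolute normalization (reference run) is arbitrary.
def relative_cost(L, a, m_pi, L0=3.0, a0=0.1, m0=400.0):
    """Cost of (L [fm], a [fm], m_pi [MeV]) relative to a reference run."""
    return (L / L0) ** 5 * (a0 / a) ** 7 * (m0 / m_pi) ** 6

# Halving the lattice spacing alone already costs a factor 2^7 = 128:
assert abs(relative_cost(3.0, 0.05, 400.0) - 128.0) < 1e-9
```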

At intermediate steps when the momenta and pseudofermion fields are refreshed, tens of millions of random numbers are required for a typical lattice. In a whole simulation which might involve several thousand such refreshments, one needs tens of billions of random numbers. Therefore these simulations require very high quality random number generators which have sufficiently large periods and are free from correlations over such long sequences. Examples of such random number generators are the Ranlux and the Mersenne Twister random number generators.

Sometimes we are interested in the pure gauge theory (or including fermions only in the quenched approximation). For simulating only the pure gauge part there are efficient ways to propose the new configuration so that it is almost always accepted. These algorithms are called heat-bath algorithms.
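The heat-bath idea, drawing each variable directly from its exact conditional distribution so that every proposal is accepted, is most easily sketched for the 2d Ising model; gauge-theory heat baths (e.g. Creutz's for SU(2)) follow the same principle using the link's local staple sum. All parameters below are illustrative; $$\beta=0.3$$ lies in the disordered phase.

```python
import math, random

# Heat-bath update for the 2d Ising model: each spin is redrawn from its exact
# conditional distribution given its neighbors, so there is no accept/reject step.
def ising_heatbath(L=16, beta=0.3, sweeps=400, seed=3):
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    m_acc, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for x in range(L):
            for y in range(L):
                h = (s[(x + 1) % L][y] + s[(x - 1) % L][y] +
                     s[x][(y + 1) % L] + s[x][(y - 1) % L])     # local field
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))  # exact conditional
                s[x][y] = 1 if rng.random() < p_up else -1
        if sweep >= sweeps // 2:            # measure after equilibration
            m_acc += sum(map(sum, s)) / L**2
            n_meas += 1
    return m_acc / n_meas

m = ising_heatbath()
assert abs(m) < 0.2   # disordered phase: magnetization consistent with zero
```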

In HMC, the most time consuming part is the computation of $$M^{-1}\phi\ .$$ The speed and accuracy with which this can be computed depends on $$(\lambda_{\rm max}/\lambda_{\rm min}) \propto (am)^{-1}$$ where $$\lambda_{\rm max}$$ is the largest and $$\lambda_{\rm min}$$ the smallest eigenvalue of $$M$$ and $$m$$ the quark mass. Most algorithmic developments for fermionic algorithms therefore try to reduce this ratio which is also called the condition number of the matrix $$M\ .$$ All such techniques go under the name pre-conditioning. Other advancements include the Domain Decomposition approach to HMC and low mode deflation (Lüscher, 2009) where the lower eigenvalues are projected out and treated separately.
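A minimal conjugate-gradient sketch illustrates the basic solve behind $$M^{-1}\phi$$ (for Wilson fermions one would apply it to $$M^\dagger M$$); the "operator" below is just a 1d lattice Laplacian plus mass term, chosen so that the mass sets $$\lambda_{\rm min}$$ and the solve degrades as $$am\to0\ ,$$ as described above.

```python
import numpy as np

# Conjugate gradient for A x = b with Hermitian positive-definite A; the
# convergence rate is governed by the condition number lambda_max/lambda_min,
# which preconditioning and deflation aim to reduce.
def cg(A, b, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Toy "lattice operator": 1d Laplacian plus mass term (the mass sets lambda_min)
n, am2 = 64, 0.1
A = (2.0 + am2) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```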

Another important line of development has been the advent of the algorithms Polynomial HMC (PHMC) and Rational HMC (RHMC). In both these approaches, non-integral powers of $$M^{-1}$$ are written as polynomial or rational functions of $$M^{-1}\ .$$ These techniques allow the simulation of an odd number of quark flavors while still maintaining the interpretation of $$P(U)$$ as a probability distribution over $$U\ .$$

While all these advancements are welcome, what is still sorely lacking is a physical understanding as to which field configurations dominate the expectation values of the different observables. They are certainly not random and do exhibit a degree of coherence, but we still know little about them.

### Computer facilities

Figure 3: Historical development of computers used in lattice projects; data compiled by A. Ukawa. Number of FLOPS=10$$^n\ .$$ The colors indicate the machine types: Vector (blue), parallel (violet), vector-parallel (green), projects (red).

The present success of numerical studies of lattice QCD is due partially to the exponential growth in computer power (Moore's law). The machine on which Creutz carried out his first numerical lattice experiments in 1979$$^{~\![11]}$$ was much less powerful than modern laptops! Now the leading collaborations in the lattice community have access to petaFLOP machines. The historical development is illustrated in Figure 3. The machines vary from custom made super-computers to special purpose computers and PC clusters. Programs are designed to make optimal use of the machine architecture, size of memory, and communication time between the nodes. A relatively new development is to use the graphics card of a modern PC for computing (Egri, 2007). The Graphics Processing Unit (GPU) brings a peak computing power of one teraFLOP to one's desktop (Clark, 2010). Many lattice collaborations are now using GPUs in varying degrees, mainly for the computation intensive parts of their programs. A single GPU typically delivers the performance of about 100 CPU cores.

## Phenomenological results

### Spectrum

In the Euclidean framework masses are obtained by monitoring the exponential fall-off of 2-point functions

$\tag{41} G(t)=\langle A(t)B(0)\rangle\to_{t\to\infty} c_A c_B{\rm e}^{-Mt}$

where $$M$$ is the lowest mass in the channel coupling to both $$A,B\ .$$ The simulations yield values of the masses in lattice units i.e. numbers $$\mu=Ma\ ,$$ so that only ratios of masses are predicted. To obtain an estimate of the lattice spacing at a given value of the bare coupling (assuming that QCD is the correct physical theory), one chooses a certain massive quantity and equates $$M_{\rm phys} a(g_0)=\mu\ .$$ The value of the lattice spacing at the specific bare coupling here depends on the particular spectral choice, but close to the continuum limit this dependence should be small. As $$g_0\to g_c$$ the lattice spacing goes to zero in physical units.

Figure 4: The light QCD spectrum. Horizontal lines and bands are the experimental values with their decay widths. The BMW collaboration results are shown by solid circles, with error bars representing their combined statistical and systematic error estimates. The $$\pi, K$$ and $$\Xi$$ have no error bars because they are used to set the light and strange quark masses and the overall scale respectively.

The remaining $$N_f$$ bare quark masses are then tuned so that $$N_f$$ ratios of other masses agree with experiment. All other spectral quantities are then predictions of the theory (systematic errors such as lattice artifacts will be discussed in a later section).

In the pure Yang-Mills theory the stable particles are called glueballs. Although this can only be part of the physical theory, in order to estimate the mass of glueballs it is common practice to fix the scale using quantities defined by the static quark potential. One thereby obtains estimates for the mass of the lightest glueball (which has $$J^{PC}=0^{++}$$) of order $$\gtrsim 1\,{\rm GeV}\ .$$ In full QCD glueballs become unstable and mix with other ("quark-containing") states. Experimentally there are as yet no clear candidates for glueballs.

The quantum numbers of the quarks and gluons allow the construction of local observables which in the equivalent operator formalism create all known hadrons from the vacuum. As mentioned in the introduction, apart from the lowest-lying pseudoscalar mesons, the hadrons are expected to remain massive for zero quark masses. Present simulations for $$N_f=2+1$$ dynamical flavors have relatively small lattice spacings $$a\sim 0.2-0.07~{\rm fm}$$ and volumes $$L\ge 2.5~{\rm fm}\ ,$$ and nearly physical pions. Figure 4 presents the results of the Budapest-Marseille-Wuppertal collaboration, showing good agreement with experiment. We stress that similar measurements of the low-lying particle spectrum have been made by many international collaborations. The credit for this enterprise should be attributed to practically the entire lattice community and not just to the group with the latest most powerful computers.

### Scattering data

As first discussed in detail by Maiani and Testa, S-matrix elements cannot be determined directly from infinite volume Euclidean correlation functions except at kinematic thresholds. However scattering data can be extracted by monitoring finite volume effects of spectral quantities. This has long been known in the framework of non-relativistic quantum mechanics and appears in the statistical mechanics literature. It was brought to the attention of the lattice community by Parisi; subsequently Lüscher, in a brilliant series of papers, considered finite volume effects in the framework of relativistic QFT, for stable particles, scattering states and the extraction of resonance parameters.

For example energies $$E_n$$ of two-particle states are measured in a 3-dimensional box of length $$L\ .$$ The center of mass momenta $$p_n$$ are then computed via the relativistic formula $$E_n=2\sqrt{p_n^2+m^2}$$ (assuming small lattice artifacts). Lüscher's formula giving a relation between momenta $$k$$ in the box and the elastic scattering phase shifts $$\delta(k)$$ reads $k\cot\delta(k)=\frac{1}{\pi L} J(kL/2\pi)=1/\ell+\frac12 r k^2+\dots\,\quad ,\quad J(x)=\lim_{\Lambda\to\infty}\left\{ \sum_{j\in\mathbb{Z}^3,|j|<\Lambda}\frac{1}{|j|^2-x^2}-4\pi\Lambda\right\}\,,$ which holds if the lattice volume is much larger than the interaction range. A special case of this relation is for the ground state energy which is expressed only in terms of the scattering length $$\ell$$ up to terms $${\rm O}(1/L^6)\ :$$

$\tag{42} E_0-2m=-\frac{4\pi\ell}{mL^3}\left[1+c_1 \ell/L+c_2 (\ell/L)^2\right]+{\rm O}(1/L^6)\,,$

where $$c_1\,,c_2$$ are geometrical constants$$^{~\![12]}$$. For $$\ell>0$$ there is a bound state with energy $$E=2m-(\gamma^2/m)[1+{\rm O}(1/\gamma L)]\,,$$ where $$\gamma$$ is a solution to $$\gamma+p\cot\delta|_{p^2=-\gamma^2}=0\ .$$ Instead of measuring the energies one can measure the wave functions at large distances. Their spatial parts satisfy the Helmholtz equation $$(\triangle +k^2)\psi=0$$ up to exponentially suppressed terms and the solutions contain information on the phase shifts.
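The regulated sum defining $$J(x)$$ can be evaluated numerically. For $$x\to0$$ (with the $$j=0$$ term removed) it approaches a constant $$S\simeq-8.9136$$ which fixes the geometrical constant $$c_1=S/\pi$$ in (42); the sketch below averages over several cutoffs to damp the shell-count fluctuations of the lattice sum.

```python
import math
from collections import Counter

# Direct evaluation of the regulated sum at x -> 0, j = 0 term excluded:
# S = lim_{Lambda->inf} [ sum_{0<|j|<Lambda} 1/|j|^2 - 4 pi Lambda ] ~ -8.9136.
N = 50
counts = Counter()                       # counts[r2] = number of j with |j|^2 = r2
for jx in range(-N, N + 1):
    for jy in range(-N, N + 1):
        for jz in range(-N, N + 1):
            r2 = jx * jx + jy * jy + jz * jz
            if 0 < r2 <= N * N:
                counts[r2] += 1

def regulated_sum(cutoff):
    s = sum(c / r2 for r2, c in counts.items() if r2 < cutoff * cutoff)
    return s - 4.0 * math.pi * cutoff

# average over half-integer cutoffs between 40 and 50 to smooth fluctuations
S = sum(regulated_sum(lam + 0.5) for lam in range(40, 50)) / 10
assert abs(S - (-8.9136)) < 1.0
```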

Experiments analyzed by the application of dispersion relations expressed through the Roy equations yield $$\pi-\pi$$ scattering lengths $$m_\pi \ell_{\pi\pi}^{I=0}=0.220(5)\,,m_\pi \ell_{\pi\pi}^{I=2}=-0.044(2)\,\ .$$ Lowest order $$\chi$$PT yields $$0.1588, -0.0454\ .$$ Lüscher's methods have been applied to the scattering of pions in lattice simulations (the two lowest-lying states in the $$A_1$$ representation of the cubic group). For the isospin $$I=2$$ channel lattice results are in good agreement with experiment. Measurements in the $$I=0,1$$ channels are also in progress.

Nucleon-nucleon scattering lengths are unnaturally large $$\left ( \ell(\,^3S_1)\sim1/(45{\rm MeV})\,,\ell(\,^1S_0)\sim-1/(8{\rm MeV}) \right )$$ compared to the range of the nucleon-nucleon interaction. It is thought that to reproduce this in QCD may require a fine tuning of the quark masses. To derive physical NN results reliably using lattice technology will only be feasible in the future.

### Running couplings

Practically any renormalizable gauge invariant quantity which has a perturbative expansion starting at lowest order linear in $$g$$ and depending on a single physical mass scale $$Q$$ qualifies as a running coupling $$\bar{g}(Q)\ .$$ In a massless renormalization scheme running couplings obey the renormalization group equation

$\tag{43} Q\frac{\partial\bar{g}}{\partial Q}=\beta(\bar{g})\,.$

For small $$g$$ the $$\beta$$-function has a perturbative expansion of the form

$\tag{44} \beta(g)=-g^3\sum_{k=0}^\infty b_kg^{2k}\,,$

where the coefficients $$b_0,b_1$$ are scheme independent. In particular

$\tag{45} b_0=(4\pi)^{-2}\left[\frac{11}{3}N-\frac{2}{3}N_f\right]\,,$

which is positive for the number of flavors $$N_f<11N/2\ .$$ As a consequence running couplings tend to zero as $$Q\to\infty\ :$$

$\tag{46} \bar{g}^2(Q)\sim\frac{1}{b_0t}\left[1-\frac{b_1}{b_0^2t}\ln(t)+{\rm O}(t^{-2})\right]\,,\quad t=\ln(Q^2/\Lambda^2)\,.$

This property is called asymptotic freedom. Gross, Politzer and Wilczek received the Nobel prize in 2004 for showing this and for stressing its phenomenological importance$$^{~\![13]}$$. The so-called $$\Lambda$$-parameter appearing above is renormalization group invariant but scheme dependent.
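The RG equation (43) with the universal two-loop coefficients can be integrated numerically; the sketch below uses pure $$SU(3)$$ gauge theory ($$N_f=0$$) and an arbitrary starting coupling, and checks that $$1/\bar g^2$$ grows linearly in $$\ln Q$$ with slope $$2b_0\ ,$$ as implied by (46), up to two-loop corrections.

```python
import math

# Two-loop running: integrate Eq. (43) with the beta function of Eq. (44),
# for N = 3, Nf = 0; the starting coupling and the range are illustrative.
N, Nf = 3, 0
b0 = (11 * N / 3 - 2 * Nf / 3) / (4 * math.pi) ** 2
b1 = (34 * N**2 / 3 - (13 * N / 3 - 1 / N) * Nf) / (4 * math.pi) ** 4

def beta_fn(g):
    return -g**3 * (b0 + b1 * g * g)

g0 = 2.0
g, dlnQ, steps = g0, 1e-4, 200_000      # run over 20 e-foldings of the scale Q
for _ in range(steps):
    g += dlnQ * beta_fn(g)              # simple Euler step in ln Q

assert g < g0                           # asymptotic freedom: the coupling decreases
# 1/g^2 grows roughly linearly in ln Q with slope 2 b0 (two-loop terms ~ 10%)
slope = (1 / g**2 - 1 / g0**2) / (steps * dlnQ)
assert abs(slope / (2 * b0) - 1) < 0.15
```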

Figure 5: Schrödinger functional running coupling for $$N_f=4.$$

Although all couplings behave similarly at high energy, in the low energy regime they can be very different. Among the infinite number of non-perturbative definitions are ones based on correlation functions in infinite volume, e.g. using the Adler function which has been computed to high orders in PT. Recently the associated current 2-point functions have also been measured in lattice simulations; the practical difficulty is to control IR and UV effects. To avoid this problem one can define couplings running with the volume. Prominent examples are ones based on the Schrödinger functional, the partition function defined on a cylindrical volume $$L^3\times T$$ with periodic boundary conditions in the spatial directions and Dirichlet boundary conditions in Euclidean time. The Alpha Collaboration has measured couplings based on the latter using finite size scaling techniques over a wide range of energies with lattice artifacts taken into account (see e.g. Figure 5). The running at high energies is consistent with perturbative running and is presently our best evidence that the continuum limit of the lattice theory is asymptotically free. One measurement at low energies is required to relate the box sizes to physical scales. Having established consistency with perturbative running one can solve the RG equation to infinite energies and thereby obtain a good estimate of the $$\Lambda$$-parameter in physical units. A one loop computation then yields the relation to the common $$\Lambda_{\overline{\rm MS}}$$ scale of dimensional regularization. It is important to appreciate that a coupling defined using dimensional regularization is only defined perturbatively, and is a definite function of the running scale only when a fixed order of PT is used in solving the RG equation.

Some lattice determinations of $$\alpha_{\overline{\rm MS}}(M_Z)$$ have been included in present world averages for this quantity. Their quoted errors are so small that they dominate the average$$^{~\![14]}$$. Once lattice determinations improve further, the values can be used to estimate non-perturbative effects in various processes, and deviations from experiment may be attributed to sources of physics (perhaps new) other than QCD.

### Running quark masses

In Nature QCD is only an effective theory, since as mentioned previously at high energies other interactions become important. As far as the idealization to QCD is concerned quark masses are considered as new parameters which must be fitted to experiment. The origin of the irregular phenomenological quark mass pattern is at present not understood in the framework of the Standard Model.

In the framework of perturbation theory running quark masses can be defined at high energies. They obey a RG equation of the form

$\tag{47} Q\frac{\partial\bar{m}_f}{\partial Q}=\tau(\bar{g})\bar{m}_f\,,$

with

$\tag{48} \tau(g)=-g^2\sum_{k=0}^\infty d_kg^{2k}\,,\quad d_0=\frac{3(N^2-1)}{(4\pi)^2N}\,,$

from which a RG invariant scheme independent mass $$M_f$$ can be defined

$\tag{49} \bar{m}_f(Q)=M_f(2b_0\bar{g}^2(Q))^{d_0/2b_0}\left\{1+{\rm O}(\bar{g}^2(Q))\right\}\,.$
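At leading order, (49) implies that the ratio of a running mass at two scales is fixed by the coupling ratio raised to the universal exponent $$d_0/2b_0$$ (which equals $$4/9$$ for $$N=3\,,N_f=3$$); the sketch below uses the one-loop coupling with an illustrative value of $$\Lambda\ .$$

```python
import math

# Leading-order mass running from Eq. (49):
#   m(Q1)/m(Q2) = [gbar^2(Q1)/gbar^2(Q2)]^(d0/2b0).
# Illustrative choices: N = 3 colors, Nf = 3 flavors, assumed Lambda = 0.3 GeV.
N, Nf = 3, 3
b0 = (11 * N / 3 - 2 * Nf / 3) / (4 * math.pi) ** 2
d0 = 3 * (N * N - 1) / ((4 * math.pi) ** 2 * N)
assert abs(d0 / (2 * b0) - 4 / 9) < 1e-12      # universal exponent for N=3, Nf=3

def gbar2(Q, lam=0.3):                         # one-loop coupling, leading term of (46)
    return 1.0 / (b0 * math.log(Q**2 / lam**2))

def mass_ratio(Q1, Q2):                        # m(Q1)/m(Q2) at leading order
    return (gbar2(Q1) / gbar2(Q2)) ** (d0 / (2 * b0))

r = mass_ratio(100.0, 2.0)                     # run from 2 GeV up to 100 GeV
assert 0 < r < 1                               # the running mass decreases with Q
```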

There are attempts to compute these running masses using QCD sum rules applied to 2-point functions of local operators.

In perturbative lattice theory running quark masses can also be defined. Note that in some formulations such as Wilson's where chiral symmetry is broken, the bare quark mass needs an extra additive renormalization $$m_{\rm R}=Z(m_0-m_c)$$ where $$m_c={\rm O} (g_0^2)$$ is the critical mass.

Since quarks are not observed as asymptotic states the question arises how to define quark masses non-perturbatively. One way is to use chiral perturbation theory; within this framework to first order one obtains $$2m_s/(m_u+m_d)=24.2\ .$$

Various non-perturbatively defined quark masses have been measured in lattice simulations. One can employ the PCAC relation e.g. for the $$I=1$$ axial current

$\tag{50} (\bar{m}_u+\bar{m}_s)(\mu)=\frac{Z_A(g_0)\langle\partial_\nu(\bar{u}\gamma_\nu\gamma_5 s)_{\rm latt}J\rangle}{Z_P(g_0,\mu)\langle(\bar{u}\gamma_5 s)_{\rm latt}J\rangle}+\dots$

where the dots stand for terms which (rapidly) tend to zero in the continuum limit, for which this definition of the masses should be independent of the source $$J\ .$$ The renormalization of the axial current can be determined non-perturbatively by imposing the axial Ward identities. On the other hand the renormalization of the pseudoscalar density $$Z_P(g_0,\mu)$$ is scheme and scale dependent, and can be defined by requiring a certain normalization of the pseudoscalar 2-point function.

### Standard model related phenomenology

There are many ongoing lattice projects measuring a wide range of observables of phenomenological interest. Apart from the running couplings and masses referred to above, these include meson decay constants e.g. $$f_\pi, f_K$$ ($$\langle 0|\overline{u}\gamma_\mu \gamma_5 d(x)|\pi^-(p)\rangle=i\sqrt{2}f_\pi p_\mu{\rm e}^{-ipx}$$), form factors (FF) (e.g. electromagnetic FFs and the nucleon axial FF involving the nucleon axial coupling $$g_A$$), moments of structure functions, weak matrix elements e.g. $$B_K\ ,$$ and low energy constants in the effective chiral Lagrangian. Some quantities are extremely important for flavor physics, e.g. in determining the elements of the CKM matrix.

The International Lattice Data Grid, an international project for storing and sharing the (CPU-)costly dynamical configurations, is helpful in this endeavor.

This topic is extensive and cannot be covered here. Regular progress reports can be found in annual lattice conferences. In this connection we recommend the phenomenologically oriented reader to refer to the project of the Flavianet Lattice Averaging Group (FLAG). They attempt to select the currently best lattice values for a particular quantity in a way that is readily accessible to non-experts. In this quest they estimate how well published measurements satisfy certain quality criteria.

We only report here on the progress of computing the ratios of amplitudes $$A_0\,,A_2$$ for the decay of kaons into two pions with total isospin 0,2. The reason for choosing this is that the experimental ratio of $${\mathfrak R} A_0/{\mathfrak R} A_2\simeq 22.2$$ is much larger than naive expectations of $$\sim 1\ ;$$ this is called the $$\triangle I=1/2$$ rule. If lattice computations could reproduce this value it would lend even more support to the conjecture that QCD is the correct theory of strong interactions.

A decade ago $$SU(3)$$ chiral PT was used to relate $${\rm K}\to2\pi$$ to $${\rm K}\to\pi$$ and $${\rm K}\to$$vacuum. This method however needs power subtractions, since 4-quark and 2-quark operators mix, rendering it insufficiently reliable. A more efficient method is to use the Lellouch-Lüscher formula

$\tag{51} |A|^2=8\pi V^2 \frac{m_K E^2}{q^{*2}}\{\delta'(q^*)+\phi^{P\prime}(q^*)\}M^2\,,$

relating the physical $${\rm K}\to2\pi$$ amplitude to the amplitude $$M$$ in a finite volume $$V\ .$$ Here $$E^2=4(m_\pi^2+q^{*2})$$ and $$\delta(q^*)+\phi^P(q^*)=n\pi$$ where $$\delta$$ is s-wave $$\pi\pi$$ phase shift and $$\phi^P$$ a kinematic factor. On the lattice one then computes matrix elements $$\langle {\rm K}|H_W|2\pi\rangle$$ of the effective weak Hamiltonian $$H_W\ ,$$ and $${\rm K},2\pi$$ energies from $${\rm K}-{\rm K}$$ and $$2\pi-2\pi$$ correlators, using twisted periodic boundary conditions.

Using this method (with $$N_f=2+1\,,m_\pi=165{\rm MeV}$$) the RBC-UKQCD collaboration recently obtained $${\mathfrak R} A_2=1.56(7)(25)\times 10^{-8}{\rm GeV}\ ;$$ $${\mathfrak I} A_2=-9.6(4)(24)\times 10^{-13}{\rm GeV}\ ;$$ $${\mathfrak I} A_2/{\mathfrak R} A_2=-6.2(0.3)(1.3)\times 10^{-5}$$ in good agreement with experiment. The ratio $$\epsilon'/\epsilon$$ (=$$16.5(2.6)\times10^{-4}$$ experimentally) gives one relation between $${\mathfrak I} A_0$$ and $${\mathfrak I} A_2\ .$$ Lattice computations of $$\epsilon'/\epsilon$$ and of $$A_0$$ have been presented but are not yet under satisfactory control. The computation of the isospin $$0$$ amplitudes using the LL method is more involved than for $$I=2$$ since many diagrams contribute, including disconnected diagrams. Initial studies (at large $$m_\pi=420{\rm MeV}$$) indicate that the latter give small contributions but have at present relatively large errors.

### Finite temperature

Figure 6: Phases of QCD as functions of the temperature ($$T$$), light ($$m_{u,d}$$) and strange ($$m_s$$) quark masses. At physical values of the light and strange quark masses (indicated in the figure by a red ?), the transition is a crossover and the exact temperature seems to be observable dependent. However at the moment there is a consensus between different groups that it is between 140 and 180 MeV (Kanaya, 2010). The $$m_{u,d}=m_s=\infty$$ limit yields pure $$SU(3)$$ Yang-Mills theory and there the transition is first order. The $$m_{u,d}=m_s=0$$ limit also gives a first order transition. The first order transition persists in a small region around these points and ends in second order critical points. The lines of critical points are shown in green bordering the first order regions. The second order transitions in different mass regimes fall in different universality classes, viz. $${\mathbb Z}_2$$ and $${\rm O}(4)\ .$$ The lines of points belonging to the two universality classes meet at a tricritical point.

The partition function of a system is given by $$Z={\rm tr} \exp (-\beta H)$$ where $$\beta=1/k_BT$$ and $$H$$ is an appropriate Hamiltonian. In units where the Boltzmann constant $$k_B=1\ ,$$ $$\beta$$ is the inverse temperature in units of energy. Going from the Hamiltonian to the action formulation, the partition function can be recast as a functional integral over a finite Euclidean time interval $$\beta$$ with periodic (anti-periodic) boundary conditions for bosonic (fermionic) fields. The action is given by $$S=\int_0^{\beta}{\rm d} t\,\int{\rm d}^3x\,L$$$$~^{\![15]}$$ where $$L$$ is the Lagrangian corresponding to the Hamiltonian $$H\ .$$ The lattice in this case has $$N_t$$ points in the time direction and $$N_s$$ points in each spatial direction. The inverse temperature $$\beta$$ is given by $$aN_t\ .$$ The thermodynamic limit corresponds to $$N_s\to \infty$$ with $$aN_t$$ fixed. In practice $$N_s$$ is always finite and the thermodynamic limit is approximated by keeping $$N_s \gg N_t\ .$$ On the other hand $$N_t\ge N_s$$ approximates the zero temperature field theory. Thermodynamic quantities are obtained by taking derivatives of the logarithm of the partition function as in the continuum.
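The relation $$T=1/(aN_t)$$ converts lattice parameters to physical temperatures via $$\hbar c\simeq197.327\,{\rm MeV\,fm}\ ;$$ the parameter values below are illustrative.

```python
# Temperature of the lattice system: T = 1/(a N_t), converted to MeV using
# hbar*c = 197.327 MeV fm. The lattice spacing and N_t below are illustrative.
HBARC_MEV_FM = 197.327

def temperature_mev(a_fm, n_t):
    return HBARC_MEV_FM / (a_fm * n_t)

# e.g. a = 0.12 fm with N_t = 10 gives T ~ 164 MeV, inside the quoted
# 140-180 MeV crossover window
T = temperature_mev(0.12, 10)
assert 140.0 < T < 180.0
```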

It is conjectured that at high temperature QCD undergoes a drastic change of phase from colorless hadrons to a strongly interacting quark-gluon plasma where the color charges are no longer confined. One speaks of a confinement-deconfinement transition. Ongoing experiments at the Large Hadron Collider (LHC) at CERN as well as the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, USA hope to detect signatures of this transition and of the quark-gluon-plasma phase. The transition varies from strong first order to a mild crossover depending on the masses of the three light quarks. For pure Yang-Mills theories, the expectation value of the Polyakov loop serves as an order parameter for this transition. Once the quarks become dynamical, the Polyakov loop is no longer an order parameter, but it still shows a sharp jump and is often used to locate the transition point. Other useful observables include the various susceptibilities that one can construct.

In Figure 6 we show a conjectured phase diagram of QCD for $$N_f=2+1\ .$$

### Nuclear physics

Computing spectra of nuclei via lattice simulations is a formidable task since the physical volumes required are larger than those presently accessible. Whereas typical hadronic scales are of the order of hundreds of MeV, nuclear forces are residual effects of only a few MeV.

There are various approaches to this problem. Fortunately it seems that the 2-nucleon force is the dominant interaction in low-lying nuclei, although the 3-body interaction is required for a more detailed description. The phenomenological NN potentials have a long range part dominated by pion exchange, as first proposed by Yukawa, a strong spin-orbit force in the $$I=1$$ channel at medium range, and a repulsive core at short distances. They are extracted by fitting to experimental phase shifts and bound state spectra (e.g. the deuteron). In lattice QCD the phase shifts can be computed as discussed in the section on scattering.

The NN potential thus obtained is then applied in other contexts e.g. in the discussion of stellar structure. Many-body techniques enable calculations of ground states of light nuclei with atomic number $$A\lesssim 14\ .$$

A recent attempt by the HAL group to compute NN-potentials directly proceeds via Nambu-Bethe-Salpeter wave functions, i.e. correlation functions of two nucleon operators separated by a distance $$r$$ between the vacuum and two-nucleon states, $\phi_E(r)=\langle 0|N(x-r,0)N(x,0)|6q,E\rangle\,,\quad E=2\sqrt{k^2+m_N^2}\,.$ As discussed in the section on scattering, outside the range of the interaction one extracts the phase shifts from $\phi_E^l(r)\simeq A_l\frac{\sin(kr-l\pi/2+\delta_l(k))}{kr}\,$ for $$r\to\infty\ .$$ The proposal is simply to define a potential by applying the free Schrödinger operator to the amplitude at all distances. The potential thus obtained depends on the energy $$E$$ and on the interpolating nucleon operators. At low energies the energy dependence may be mild, but the short distance behavior depends strongly on the operators. The method has however the advantage that the definition is direct and can be applied to various situations, e.g. a coupled channel analysis of $$\Lambda\Lambda$$ and $$\Sigma N\ .$$

Some direct computations of states with $$\ge3$$ baryons have been initiated. Lüscher's formulae have been extended to the case of $$n\ge3$$ particles: $$\triangle E_n=\frac{4\pi\bar{\ell}}{ML^3}\frac{1}{2} n(n-1)[1-\dots]$$ with $$\ell=\bar{\ell}-(2\pi\bar{\ell}^3 r/L^3)\{1-\bar{\ell}\mathcal{I}/\pi L\}\ ,$$ where $$\mathcal{I}$$ is a geometrical constant. In 2009 PACS-CS initiated feasibility measurements of 4-baryon correlation functions. These will eventually give information on the $$\alpha$$-particle (the $$~^4{\rm He}$$ nucleus), but at present the simulated pion masses ($$\sim 800~{\rm MeV}$$) are much too large to make contact with experiment.

Kaplan and coworkers have proposed to first study simpler models, in particular unitary fermions which are non-relativistic short range systems with an infinite s-wave scattering length i.e. a constant phase shift $$\delta_0=\pi/2\ .$$ This may be a good starting point for physics since nucleons have large scattering lengths. There are also experimental studies of unitary fermions with trapped atoms.

In the unitary regime the interaction is strongly attractive and non-perturbative methods are required. The details of the microscopic interaction are unimportant and the system is adequately described by a 4-Fermi contact operator with 2-component fermions ($$L_{eff}\propto N^\dagger NN^\dagger N\,$$).

The group has developed highly improved lattice methods to explore the ground state energy of a large number of strongly interacting fermions, through the exponential decay of correlation functions in imaginary time. There is a severe signal-to-noise problem since the overlap between the source and the ground state decreases exponentially with the number of particles. However, using improved sources to tackle this problem, the group has obtained results for up to 38 unitary fermions, and $$1\%$$ error-bar measurements for up to 100 fermions seem feasible in the near future. Measurements have been performed of the ground state energy at density $$\rho$$ in the unitary limit, $$E(\rho)=\xi E_{\rm free}(\rho)\ ,$$ where $$\xi$$ is the Bertsch parameter, as well as of the pairing gap $$\triangle\ ,$$ the difference between the chemical potential and the minimum energy required to add one fermion to the unpolarized unitary Fermi gas.

De Forcrand and Fromm have investigated the NN potential in $$N_f=1$$ QCD at strong coupling. In this limit the partition function becomes a weighted sum over configurations of dimers and self-avoiding baryon loops. Interestingly the NN potential thus obtained is again consistent with phenomenological ones! Further, the average energies per nucleon of larger nuclei up to $$~^{12}{\rm C}$$ are found to be qualitatively described by the Weizsäcker phenomenological formula ($$m(A)/A=c+dA^{-1/3}$$).

## Error Analysis

All computations of physical quantities in the continuum and thermodynamic limits via lattice simulations involve the following different sources of errors:

1. Finite lattice spacing effects
2. Finite volume effects
3. Sometimes measurements are made at unphysical quark masses
4. Statistical errors

A huge effort in the lattice community is invested in reducing and controlling the systematic errors (1)-(3). The source (3), if present, is analyzed using $$\chi$$PT. Sources (1), (2) and (4) are briefly discussed below.

### Lattice artifacts

There are various manifestations of finite lattice spacing artifacts; in particular, the continuous rotational symmetry is broken and only a discrete subgroup remains. In the continuum limit the lattice structure should become irrelevant and the full rotational symmetry restored.

Eventually lattice spacings in numerical simulations may become so small that lattice artifacts are numerically negligible. However at present this is not the case, and the question is of both practical and theoretical importance. A ratio of physical masses should remain finite in the continuum limit, and the approach is thought to be power-like (up to logs)

$\tag{52} m_1/m_2\to (m_1/m_2)_{a=0} +{\rm O}(a^p)\,.$

Universality conjectures that wide classes of actions yield the same continuum limit, but the power $$p$$ appearing above depends on the choice of lattice action.
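Such a continuum extrapolation is in practice a simple fit. A minimal Python sketch with invented data (the numbers are illustrative, and we assume leading $$a^2$$ artifacts, i.e. $$p=2$$):

```python
import numpy as np

# Synthetic measurements of a mass ratio at several lattice spacings,
# generated here (for illustration only) with exact O(a^2) artifacts
# on top of a continuum value of 1.20.
a = np.array([0.10, 0.08, 0.06, 0.04])   # lattice spacings in fm
ratio = 1.20 + 0.8 * a ** 2              # invented m1/m2 "data"

# Linear fit ratio(a) = r0 + c*a^2; the intercept r0 is the continuum estimate.
c, r0 = np.polyfit(a ** 2, ratio, 1)
```

In real analyses the data carry statistical errors, so the fit is weighted and the extrapolated value inherits an error; possible logarithmic modifications of the $$a^p$$ behavior are an additional systematic.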

Most of our analytical knowledge of lattice artifacts stems from investigations in the framework of perturbation theory, and of $$1/N$$ expansions in lower-dimensional theories. Based on such studies Symanzik conjectured that leading lattice artifacts for correlation functions of basic fields should be described (Weisz, 2009) by an effective continuum action with a Lagrangian of the form

$\tag{53} L_{\rm eff}=L_{\rm cont}+a^p\sum_i k_i\mathcal{L}_i\,,$

where $$\mathcal{L}_i$$ are local operators of dimension $$4+p$$ having the same symmetries as the lattice action. The coefficients $$k_i\ ,$$ which depend on the lattice action, are only weakly (logarithmically) dependent on $$a\ .$$ For correlation functions of composite operators, extra terms specific to the operators involved are present.

The Symanzik conjecture is of structural nature and it is plausible that it is valid non-perturbatively. All numerical simulations to date support the conjecture, and it is reassuring that after performing the expected extrapolations the continuum results obtained using different actions are consistent with one another. More detailed renormalization group considerations of this scenario indicate that the $$a^p$$ cutoff effects are modified by powers of logarithms. If the conjecture is true it should be possible to design actions with more rapid asymptotic approach to the continuum limit. These are called Symanzik improved actions. In principle there are classes of lattice actions with no artifacts; these are called perfect actions. However they involve an infinite space of couplings which must of course be truncated in practical applications.

### Finite volume effects

In an attempt to directly measure an infinite volume physical quantity it is necessary either to establish that finite volume effects are negligible or, if the volume dependence is understood, to perform the associated extrapolations. Present simulations are done in the range $$L\sim 1.5-2.5~{\rm fm}\ .$$

For some quantities the dependence is exponential e.g. for stable particles in the special case of self interacting bosons the finite volume mass shift is given by ($$m=M(\infty)$$) $\tag{54} M(L)-m=-\frac{3}{8\pi mL}\left[\frac{\lambda^2}{2m}{\rm e}^{-\sqrt{3}mL/2}+\int_{-\infty}^\infty\frac{{\rm d} q_0}{2\pi}{\rm e}^{-L\sqrt{q_0^2 +m^2}}F(iq_0)\right]+{\rm O}\left({\rm e}^{-\sqrt{3}mL/\sqrt{2}}\right)\,,$

where $$F(\nu)$$ is the forward scattering amplitude and $$\lambda^2/2=\lim_{\nu\to\pm m/2}(\nu^2-m^2/4)F(\nu)\ .$$ For other spectra, e.g. the masses of 2-particle states, the dependence is a power law; as discussed in the section on scattering, this dependence is used to extract scattering data.

The large volume regimes are characterized by $$m_\pi L\gg1\ .$$ However the same action describes the physics in all volumes and hence other regimes are also of physical interest. For example in sections couplings and quark masses we have mentioned definitions of couplings and masses running with the volume.

Lattice simulations can be performed in various volume regimes to determine parameters of the effective chiral Lagrangian (Gasser and Leutwyler). The so-called $$p$$-regime is characterized by $$m_\pi L\gg1\ .$$ The other most commonly studied situations are the $$\epsilon$$ and $$\delta$$ regimes. The $$\epsilon$$ regime is characterized by $$m_\pi L_\mu\le1$$ for all $$\mu=1,\ldots,4\ ,$$ in which case $$m\Sigma V\le1\ .$$ In the $$\delta$$ regime one has small extent in the spatial directions, $$m_\pi L_j\le1$$ for $$j=1,2,3$$ and $$L_4\gg L_j\ ,$$ i.e. a cylindrical geometry.

### Statistics

As we have seen in the previous sections, evaluation of correlation functions in lattice gauge theory is carried out using sampling techniques. The measurements therefore have to be analyzed accordingly and it is of utmost importance to have an estimate of the error with which an expectation value (sample mean) has been determined.

Observables whose expectation values are directly calculated in a simulation are called primary observables. If $$X$$ is a primary observable for which we have an uncorrelated sequence of values $$X_1,\ldots, X_n\ ,$$ unbiased estimators for the expectation value of $$X$$ ($${\widehat X}$$) and its variance are given by (Gattringer and Lang, 2010)

$\tag{55} {\widehat X} =\frac{1}{n}\sum_{i=1}^{n}\;X_i \qquad {\rm and} \qquad {\widehat \sigma}^2_X = \frac{1}{n-1}\sum_{i=1}^{n}\;\left( X_i - {\widehat X}\right )^2.$

The error on $${\widehat X}$$ is given by $${\widehat \sigma}_X /\sqrt{n}\ .$$
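The estimators of eq. (55) are straightforward to code. A minimal sketch (function name our own), valid only for uncorrelated measurements:

```python
import math

def naive_estimate(samples):
    """Sample mean and the naive error sigma_X/sqrt(n),
    i.e. the unbiased estimators of eq. (55); assumes the
    samples are statistically uncorrelated."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased variance
    return mean, math.sqrt(var / n)
```

For correlated Monte Carlo data this error is an underestimate, as discussed next.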

In practice the sequence $$X_1,\ldots, X_n$$ is never uncorrelated, and this correlation (autocorrelation) affects the computation of the error on the sample expectation value. An autocorrelation function is defined (Gattringer and Lang, 2010) by $$C_X(X_i,X_{i+t})=\langle X_iX_{i+t}\rangle - \langle X_i\rangle\langle X_{i+t}\rangle \ .$$ For a Markov chain in equilibrium, $$C_X$$ is independent of $$i$$ and the normalized function $$\Gamma_X(t) \equiv C_X(t)/C_X(0)$$ falls exponentially for large $$t$$ with a characteristic time $$\tau_X\ .$$ From $$\Gamma_X$$ we can define the integrated autocorrelation time $$\tau_{X,{\rm int}} =\frac{1}{2} + \sum_{t=1}^N\;\Gamma_X(t)\ .$$ Successive values of $$X$$ separated by several times $$\tau_{X,{\rm int}}$$ can be considered statistically independent. Taking the autocorrelation into account, the error on $${\widehat X}$$ is given by $$\sqrt{2\tau_{X,{\rm int}}{\widehat \sigma}^2_X/n}\ .$$ Determination of $$\tau_{X,{\rm int}}$$ requires Markov chains whose length is several times $$\tau_{X,{\rm int}}\ .$$ Moreover $$\tau_{X,{\rm int}}$$ is not the same for all observables but depends on $$X\ .$$ Usually $$\tau_{X,{\rm int}}$$ is "small" if $$X$$ is a local quantity (e.g. the plaquette) while it is "large" if $$X$$ is extended (e.g. a zero mode of the Dirac operator). A simpler procedure called binning (Gattringer and Lang, 2010) is often adopted if the Markov chains are not long enough for a reliable determination of $$\tau_{X,{\rm int}}\ .$$
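A minimal sketch of estimating $$\tau_{X,{\rm int}}$$ from a Monte Carlo time series (our own illustration; in practice the summation window $$t_{\max}$$ must be chosen with care, e.g. by an automatic windowing procedure):

```python
import numpy as np

def tau_int(x, t_max):
    """Integrated autocorrelation time 1/2 + sum_{t=1}^{t_max} Gamma(t),
    with Gamma(t) the normalized autocorrelation function of the series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xm = x - x.mean()
    c0 = np.mean(xm * xm)
    gammas = [np.mean(xm[: n - t] * xm[t:]) / c0 for t in range(1, t_max + 1)]
    return 0.5 + sum(gammas)

# For uncorrelated data tau_int ~ 1/2, so the error on the mean reduces to
# the naive sigma/sqrt(n); correlated data inflate it by sqrt(2*tau_int).
```

Summing $$\Gamma_X(t)$$ to very large $$t$$ only adds noise, which is why the window (or the binning alternative mentioned above) matters.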

At finite lattice volume, configurations with different numbers of zero modes of the Dirac operator (different topological sectors) have to be sampled for an unbiased measurement. Simulation algorithms often get stuck in a fixed topological sector for a long time, especially as one approaches the continuum limit. Recently Lüscher and Schaefer (2011) have proposed to use open boundary conditions in the time direction to get around this problem, despite the disadvantage that time translation invariance is then lost. With current methods, short runs can give biased results and underestimated errors, as the autocorrelation times would be grossly underestimated. Unfortunately, as mentioned before, HMC is a non-renormalizable algorithm and there is no known analytic way to estimate the autocorrelation time at small lattice spacings from measurements at larger lattice spacings.

Often physical quantities are obtained as "fits" to correlation functions which in turn are functions of the appropriate primary quantities. The error on the fitted quantity is obtained by doing the fit several times and looking at its variance. For a typical hadron correlator the fit itself is done by minimizing the so called $$\chi^2$$ which is given by

$\tag{56} \chi^2 = \sum_{l,l'=l_{\min}}^{l_{\max}} (C(l)-f(l)){\rm Cov}^{-1}(l,l')(C(l')-f(l')).$

Here $$C(l)$$ and $$C(l')$$ are hadron correlators at time slices $$l$$ and $$l'\ ,$$ $$f$$ is the hypothesis function and $${\rm Cov}(l,l')$$ is an element of the covariance matrix. The covariance matrix is estimated (from $$n$$ measurements) as

$\tag{57} {\rm Cov}_n(l,l') = \frac{1}{n-1}\langle (C(l)-\langle C(l)\rangle_n) (C(l')-\langle C(l')\rangle_n)\rangle_n$

In practice we never have such a large sample of correlation functions that we can divide the data into independent sets (for doing the fit several times) while keeping $$n$$ for each set large enough to obtain a stable, reliable estimate of $${\rm Cov}^{-1}(l,l')\ .$$ In such cases one usually resorts to jackknife estimators. The jackknife method is expected to give more reliable errors on fitted quantities than the naive variance. For more details about the jackknife method see (Gattringer and Lang, 2010) and (Lüscher, 2009).
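The delete-one jackknife is easy to sketch in a generic form (our own illustration, not tied to any particular fit; `estimator` stands for whatever analysis maps a set of samples to a number, e.g. a fitted mass):

```python
import numpy as np

def jackknife(samples, estimator):
    """Delete-one jackknife estimate and error for an arbitrary estimator.
    `estimator` maps a 1-d array of samples to a single number."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Estimator evaluated on each sample-deleted subset.
    theta = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((theta - theta.mean()) ** 2))
    return estimator(samples), err

# For the sample mean the jackknife error reproduces the naive sigma/sqrt(n);
# its advantage shows up for nonlinear estimators such as fit parameters.
```

For correlated Markov-chain data one applies the same formula to bin averages rather than individual measurements.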

As we have emphasized in the previous paragraphs, lattice simulation of QCD is quite resource intensive and we usually have the bare minimum data required for analysis. Quite a bit of effort is therefore directed towards variance reduction techniques, i.e. improving the signal-to-noise ratio (SNR) of the data obtained. For pure gauge simulations (where fermions are not part of the action) and local observables, it is now possible to improve the SNR exponentially. This recent major advance goes under the name of the multilevel algorithm (Lüscher and Weisz, 2001). With fermions, one usually averages over multiple random sources to reduce the fluctuations of the hadron correlators. Smearing the gauge fields is another common practice (Gattringer and Lang, 2010). In spite of all these developments, only the correlators of the lightest pseudo-scalar mesons can be determined with relative ease. Even for the lightest baryon (the proton) the SNR is proportional to $$\sqrt{N}{\rm e}^{-(m_P-\frac{3}{2}m_{\pi}) |l-l'|}\ .$$ Since $$(m_P-\frac{3}{2}m_{\pi})$$ is not small, the proton correlator quickly disappears into the statistical noise even for moderate values of $$|l-l'|$$ (Lüscher, 2009). Correlators of other hadrons heavier than the light mesons are similarly affected.
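The quoted exponential degradation can be made quantitative with physical masses (illustrative numbers only: $$m_P\simeq938~{\rm MeV}\ ,$$ $$m_\pi\simeq140~{\rm MeV}$$):

```python
import math

HBARC_GEV_FM = 0.19733  # hbar*c in GeV*fm

def proton_snr_suppression(sep_fm, m_p=0.938, m_pi=0.140):
    """Suppression factor exp(-(m_P - 3/2 m_pi)|l - l'|) of the proton
    correlator's signal-to-noise ratio (masses in GeV, separation in fm)."""
    gap_gev = m_p - 1.5 * m_pi            # ~0.728 GeV at physical masses
    return math.exp(-gap_gev * sep_fm / HBARC_GEV_FM)

# At physical masses the SNR drops by roughly a factor of 40 per fermi
# of Euclidean time separation, which is why baryon correlators are noisy.
```

This makes explicit why exponential (rather than merely polynomial) variance reduction is so valuable for baryons.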

## Topology

In classical continuum Yang-Mills theory smooth Euclidean gauge fields are classified by a topological charge

$\tag{58} Q=\int{\rm d}^4 x\,q(x)\,,$

with topological density

$\tag{59} q(x)=\frac{g^2}{32\pi^2}\epsilon_{\mu\nu\rho\tau}{\rm tr} F_{\mu\nu}(x)F_{\rho\tau}(x)\,.$

Moreover the classical field equations have (self-dual) solutions with finite action and topological charges $$Q=\pm1$$ called (anti-)instantons. There are also multi-instanton solutions.

The possible role of topology in the quantum field theory is not so transparent. An oft-investigated quantity which gives a particular measure of topological effects in QFT is the topological susceptibility:

$\tag{60} \chi_t=\frac{1}{V}\langle Q^2\rangle\,.$

Formally

$\tag{61} \chi_t=\int{\rm d}^4 x\,G(x)\,,$

where $$G(x)$$ is the Euclidean two-point function of the topological density:

$\tag{62} G(x)=\langle q(x)q(0)\rangle\,.$

In QFT $$q(x)$$ is a composite operator and has to be defined with care. Some properties of the (Euclidean) 2-point function (which is assumed to satisfy the Osterwalder-Schrader axioms) are well established. In particular a perturbative analysis of the short distance behavior leads to the expectation:

$\tag{63} G(x)={\rm O}\left(|x|^{-8}\right) \quad {\rm for}\quad x\to0\,,$

up to possible logarithmic factors. Because of this non-integrable singularity it follows that equation (61) is not well defined until $$G(x)$$ is defined as a distribution i.e. a well defined linear functional on the space of test functions finite at the origin $$x=0$$$$^{~\![16]}$$. It is interesting to note that since $$q(x)$$ is odd under time reflections $$G(x)$$ has the property

$\tag{64} G(x)\le0 \,\,\,\,{\rm for}\,\,\,\,x\ne0\,,$

i.e. the distribution at the origin must hence be such that $$\chi_t$$ is positive.

For large $$|x|$$ the 2-point function decays exponentially

$\tag{65} G(x)={\rm O}\left({\rm e}^{-m|x|}\right)\quad{\rm for}\quad|x|\to\infty\,,$

where $$m$$ is the mass of the lightest pseudo-scalar state with the quantum numbers of $$q(x)\ .$$ We expect that $$m>0$$ both in Yang-Mills and in full QCD$$^{~\![17]}$$.

A thorough analysis of the status of topology in QFT requires a non-perturbative regularization, e.g. the lattice. In the latter setting given a physically local Dirac operator $$D$$ satisfying the Ginsparg-Wilson relation (eq. (28)) one can define the topological charge by

$\tag{66} Q=a^4\sum_xq(x)\,,\quad q(x)=-\frac{1}{2}\bar{a}\,{\rm tr}\{\gamma_5D(x,x)\}\,,$

since $$Q=\nu$$ where $$\nu$$ is the index of $$D\ ,$$ i.e. the difference $$n_{+}-n_{-}$$ between the numbers of exact zero modes with positive and negative chirality. Here $$\bar{a}$$ is as defined in eq. (28). The charge density thus defined is a local gauge-invariant expression in the link variables.

In lattice QCD the space of fields is connected and the assignment of a topological charge to every lattice gauge field is consequently not unique e.g. the definition above depends on the particular choice of $$D\ .$$ Close to the continuum limit, the integration measure in the functional integral may, however, be increasingly supported on fields where the charge assignment is unambiguous. The topological sectors would then arise dynamically and any differences in the definition of the charge would be ultimately irrelevant.

Measurements of $$\chi_t r_0^4$$ in pure $$SU(3)$$ Yang-Mills theory by Del Debbio, Giusti and Pica, using the GW definition, indicate that $$\chi_t r_0^4$$ approaches a continuum limit (with corrections of order $$a^2$$), suggesting that the susceptibility is indeed a physical quantity. Setting $$r_0=0.5~{\rm fm}$$ yields $$\chi_t^{1/4}=194.5(2.1)~{\rm MeV}\,.$$ We note that earlier determinations of the topological susceptibility obtained by means of gluonic operators (e.g. by cooling or by subtracting renormalizations) gave results in the same ball-park.

In the mean time Lüscher has presented field theoretical definitions of $$\chi_t\ ,$$ based on the low-lying spectrum of the Wilson Dirac operator, which are singularity free and which do not require renormalization e.g.

$\tag{67} \chi_t=\frac{\langle{\rm tr}({\mathbb P}_M)\rangle}{V}\frac{\langle{\rm tr}(\gamma_5{\mathbb P}_M){\rm tr}(\gamma_5{\mathbb P}_M)\rangle} {\langle{\rm tr}(\gamma_5{\mathbb P}_M\gamma_5{\mathbb P}_M)\rangle}\,.$

Some analytic attempts to go beyond perturbation theory involve the semi-classical approximation, where the functional integral is expanded about instanton solutions. In fact any classical configuration can be approximated arbitrarily well by lattice fields. Thus the sectors are definitely included in the functional integral but not well separated. How the sectors become divided in the continuum limit has recently been explained by Lüscher from considerations of Wilson (gradient) flow.

A famous formula proposed by Witten and Veneziano

$\tag{68} m^2_{\eta'}=\frac{2N_f}{F_\pi^2}\chi_t^{(N_f=0)}\,,\,\,\,\,(N\to\infty)\,,$

relates the mass of the $$\eta'$$ meson to the quenched topological susceptibility $$\chi_t^{(N_f=0)}$$ in the large $$N$$ limit.
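As a rough numerical check of eq. (68) (our own illustration; we assume the convention $$F_\pi\simeq92~{\rm MeV}$$ and insert the quenched value $$\chi_t^{1/4}\simeq194.5~{\rm MeV}$$ quoted above):

```python
# Rough numerical check of the Witten-Veneziano formula, eq. (68).
# Assumptions: F_pi ~ 92 MeV (convention-dependent) and the quenched
# susceptibility chi_t^{1/4} ~ 194.5 MeV quoted earlier in the text.
F_PI = 0.092            # GeV (assumed convention)
CHI_T = 0.1945 ** 4     # GeV^4, quenched topological susceptibility
N_F = 3

m_eta_prime = (2 * N_F * CHI_T / F_PI ** 2) ** 0.5
# ~1.0 GeV, to be compared with the physical eta' mass of 0.958 GeV
```

Given that the formula holds only at leading order in $$1/N\ ,$$ the agreement with the physical $$\eta'$$ mass is remarkably good.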

In QCD with dynamical fermions the topological susceptibility vanishes in the chiral limit: chiral zero modes suppress the sectors of non-zero topological charge. In leading order $$\chi$$PT, $$\chi_t=m\Sigma/N_f\ .$$

The fact that the topological charge $$Q$$ is given in terms of a local density $$q(x)$$ allows us to derive an asymptotic formula for the probability $$P_{\nu}$$ to find a gauge field with topological charge $$Q=\nu$$ at large volumes (for $$N_f=0$$):

$\tag{69} P_{\nu}={{\rm e}^{-{\nu^2\over2\chi_t V}}\over\sqrt{2\pi\chi_t V}} \left\{1-\frac{c_4}{8V\chi_t^2}+{\rm O}\left(V^{-2},\frac{\nu^2}{\chi_t^2}\right)\right\}\,.$

Here $$c_4$$ appears in the expansion of the QCD free energy in the topological $$\theta$$ parameter at small $$\theta\ ,$$ $$E(\theta)=\frac{1}{2}\chi_t\theta^2+\frac{1}{24}c_4\theta^4+\dots\,,$$ and it has been measured in various simulations. Moreover, simulations at fixed topology can also be used to measure infinite volume quantities via finite volume effects.

It is often said that QCD has an extra parameter which corresponds to the addition to the Lagrangian of a term $$\propto\theta q(x)\ .$$ This addition, which retains renormalizability, breaks P and CP symmetries in the same way as complex quark masses for $$N_f\ge3$$ can. Because of the anomalous breaking of axial U(1) symmetry one can transform such terms in the Lagrangian into one another, such that only the combination $$\bar{\theta}=\theta-{\rm arg\,(det\,}m)$$ remains an independent physical parameter. The experimental limit on the neutron electric dipole moment sets an upper bound $$\bar{\theta}<10^{-11}\ .$$ In our opinion one should define QCD as the model without the $$\theta$$ term. If one adds the $$\theta$$-term, one should for phenomenological consistency keep an ultraviolet cutoff and add the further non-renormalizable terms which appear when the non-hadronic fields are "integrated out", since they induce potential effects of the same magnitude.

## In Brief

### Finite density

The physics of the interior of neutron stars requires the understanding of hadronic matter at densities much greater than nuclear.

The study of QCD at finite density $$\rho$$ is one of the most challenging problems at this time since the associated Euclidean action is complex. One is then faced with a complex measure which does not allow a probabilistic interpretation. This is generally referred to as the sign problem. Various approaches have been proposed to tackle this problem, e.g. working at imaginary values of the chemical potential, where the measure is real, and analytically continuing back to real values, or computing the first coefficients of a Taylor expansion in $$\rho\ .$$ These methods work quite well at low densities but are not applicable at high densities. One promising proposal is to use the complex Langevin equation, where the system is replaced by one on a complexified space but with a real measure. A problem of this procedure is that in some cases it converges to wrong results. Aarts, James, Seiler and Stamatescu have given criteria which are necessary for convergence to the correct limit. The authors have also specified a set of criteria which guarantee convergence to the correct limit; however, this set is infinite and hence impractical.

### Heavy quark effective theory

B-physics plays an important role in flavor physics e.g. to give constraints on the CKM matrix. The bottom flavored mesons have heavy masses $$m_B\sim 5{\rm GeV}$$ and hence it is at present difficult to have lattices with small cutoff effects $$am_B\ll1$$ and at the same time small IR effects $$m_\pi L\gg1\ .$$ For this reason the heavy quark effective theory (HQET) was constructed to provide asymptotic expansions of quantities in powers of the inverse heavy quark mass $$1/m$$ with all other scales fixed. Much progress in B-physics has been achieved using HQET in the past years including computations of the B-meson masses and their leptonic decay constants $$f_B, f_{B_s}\ .$$ For a recent review we refer to (Sommer, 2009).

### 1/N Expansion

An expansion in the number of field degrees of freedom at one site has in the past given valuable qualitative information on statistical mechanical models. In Yang-Mills theory in the limit of infinite colors $$N\to\infty\ ,$$ with the 't Hooft coupling $$\lambda=g^2N$$ held fixed, some simplification takes place but the system still remains quite complicated. When regulated in the UV and IR, the series in $$\lambda$$ has a finite radius of convergence. Only planar diagrams contribute, and the theory is like that of weakly self-interacting strings. In this limit QCD retains confinement and spontaneous chiral symmetry breaking, but mesons do not interact (in leading order). Furthermore numerical lattice studies indicate that some spectral observables depend only weakly on $$N\ .$$

The amazing speculation was made by Eguchi and Kawai that in the $$N\to\infty$$ limit many physical infinite volume quantities might be obtained from computations on lattices of any size, including one consisting of a single site! This is called volume reduction. In the single site model a link variable $$U_\mu(x)$$ is simply replaced by a variable $$U_\mu$$ independent of $$x$$ $$\!\!\!^{~\![18]}$$. It was later observed that certain conditions must be satisfied for volume reduction to hold; in particular the $${\mathbb Z}_N^4$$ center symmetry ($$U_\mu\to U_\mu z_\mu\,,\,\,\,z_\mu\in {\mathbb Z}_N$$) of the single site model should not be broken$$^{~\![19]}$$. In fact this symmetry is broken for the pure gauge theory (Eguchi-Kawai) and various variants. Center symmetry requires that traces of open loops $$\langle{\rm tr} U_\mu\rangle, \langle{\rm tr} U_\mu U_\nu\rangle,....$$ must all vanish. This fails for small $$L$$ since at weak coupling the free energy tends to $$\sum_{a,b}\ln\left[\sum_\mu\sin^2((\theta^a_\mu-\theta^b_\mu)/2)\right]$$ where $$U_\mu=V_\mu^\dagger \Lambda_\mu V_\mu\,,\,\,\,\Lambda_\mu={\rm diag}({\rm e}^{i\theta^a_\mu})\ ,$$ implying that the eigenvalues attract for $$d>2\ .$$

Numerical studies by Narayanan and Neuberger have however given evidence for partial volume independence, meaning that results are practically independent of $$L$$ once $$L$$ exceeds some critical value $$L_c\ .$$ Further it has been suggested by Kovtun, Unsal and Yaffe that it may hold in the presence of dynamical fermions in the adjoint representation$$^{~[20]}$$.

### Random matrix theory

Shuryak and Verbaarschot proposed already in 1993 that the eigenvalues of the Dirac operator in the $$\epsilon$$-regime of QCD are distributed in the same way as those of a large random matrix of a certain universality class. The most important prediction of random matrix theory (RMT) is that at vanishing quark masses the eigenvalues scale proportionally to $$(\Sigma V)^{-1}$$ ($$V$$ being the space-time volume), so that the spectrum near the origin becomes dense as the volume increases.

So far the proposition remains an unproven conjecture based on symmetry and universality arguments, however it is supported by $$\chi$$PT where the Leutwyler-Smilga sum rules are reproduced by RMT. Other predictions of RMT include the joint statistical distribution of individual eigenvalues in any topological sector. Comparisons of these predictions with data from lattice QCD show reasonable agreement (albeit not complete matching) in the quenched theory for volumes larger than about $$5{\rm fm}^4\ .$$ Similar comparisons have been made with dynamical fermions. For a recent review we refer to (Damgaard, 2011).

## Outlook

### Conventional Wisdom

There are still many unproven properties of lattice QCD.

1. Is the critical point of pure Yang-Mills theory at $$g_0=0\ ?$$
2. Is the continuum limit of the lattice theory asymptotically free?
3. Does the continuum limit of pure Yang-Mills theory have a mass gap?
4. Are quarks and gluons confined ?
5. Is chiral symmetry spontaneously broken?

The conventional wisdom is that the answers to all these questions are in the affirmative. The numerical evidence is in many cases very compelling, but there are so far no rigorous proofs. Incidentally, the author of a proof of (3) and (4) would be eligible for a US\$1 million prize from the Clay Mathematics Institute! Only if conventional wisdom is correct does QCD qualify to be coined "our most perfect theory" (Kronfeld and Quigg, 2010).

### Present activities

An impression of the present trends in lattice theory is best obtained from reading the reports on the latest International Lattice Workshops.

Work is continuing on improving measurements of the spectrum and low-energy scattering phase shifts, and on computing matrix elements of phenomenological interest. In the next 5 years, measurements with dynamical fermions at physical pion masses, in volumes of size $$(6~{\rm fm})^4$$ and at lattice spacings $$a\sim 0.05~{\rm fm}\ ,$$ will be standard.

A better understanding of the physics at finite temperatures is a further goal, as is the presently unsolved problem of simulating QCD at large density. Extracting nuclear physics from first principles will be an area of activity for some years.

The search for new, more efficient simulation algorithms with shorter autocorrelation times than HMC is highly desirable. So too is a better intuitive (and perhaps also rigorous) understanding of confinement, spontaneous symmetry breaking, and the approach to the continuum limit.

Simulations introducing QED effects in QCD have been initiated; these will give more insight into isospin breaking effects as manifested in mass differences of hadrons within the same (approximate) isospin multiplet.

Extensions of lattice investigations to theories other than QCD, which may be relevant to physics beyond the Standard Model, are also being pursued. For example, some progress is being made in constructing lattice theories with a subgroup of supersymmetries. There is also a large effort devoted to studying QCD with more flavors; it is believed that there is a window around $$N_f\sim10$$ where AF still holds but the theory is conformal.

## References

### Books

• Creutz, M. (1983). Quarks, Gluons and Lattices, Cambridge monographs on mathematical physics. Cambridge Univ. Press, Cambridge, UK.
• DeGrand, T. and Detar, C. E. (2006). Lattice methods for quantum chromodynamics, World Scientific, Singapore.
• Gattringer, C. and Lang, C. B. (2010). Quantum chromodynamics on the lattice, Lecture notes in Physics 788 Springer, Heidelberg, Germany.
• Meyer-Ortmanns, H. and Reisz, T. (2007). Principles of phase structures in particle physics, World Scientific lecture notes in physics. 77 World Scientific, Hackensack, USA.
• Montvay, I. and Münster, G. (1994). Quantum fields on a lattice, Cambridge monographs on mathematical physics. Cambridge Univ. Press, Cambridge, UK.
• Parisi, G. (1988). Statistical Field Theory, Frontiers in Physics 66 Addison-Wesley, Redwood City, USA.
• Rothe, H. J. (2005). Lattice gauge theories: An Introduction, World Scientific Lecture Notes in Physics 74. World Scientific, Singapore.
• Zinn-Justin, J. (2002). Quantum Field Theory and Critical Phenomena, International series of monographs on physics 113. Clarendon Press, Oxford, UK.

### Articles

• Chandrasekharan, S (2008). A new computational approach to lattice quantum field theories, PoS LATTICE2008: 3.
• Clark, M A (2006). The Rational Hybrid Monte Carlo Algorithm, PoS LAT2006: 4.
• Clark, M A et al. (2010). Solving Lattice QCD systems of equations using mixed precision solvers on GPUs, Comput. Phys. Comm. 181: 1571. doi:10.1016/j.cpc.2010.05.002.
• Damgaard, P H (2011). Chiral Random Matrix Theory and Chiral Perturbation Theory, J. Phys. Conf. Ser. 287: 12004. doi:10.1088/1742-6596/287/1/012004.
• Egri, G I et al. (2007). Lattice QCD as a video game, Comput. Phys. Comm. 177: 631. doi:10.1016/j.cpc.2007.06.005.
• Faddeev, L. D. (2009). Faddeev-Popov ghosts, Scholarpedia 4(4): 7389. doi:10.4249/scholarpedia.7389.
• FLAG working group of FLAVIANET, Colangelo, G et al. (2011). Review of lattice results concerning low energy particle physics, Eur. Phys. J. C 71: 1695. doi:10.1140/epjc/s10052-011-1695-1.
• Kanaya, K (2010). Finite Temperature QCD on the Lattice – Status 2010, PoS LATTICE2010: 12.
• Korzec, T and Wolff, U (2010). A worm-inspired algorithm for the simulation of Abelian gauge theories, PoS LATTICE2010: 29.
• Kronfeld, A. S. and Quigg, C. (2010). Resource Letter: Quantum Chromodynamics, Am. J. Phys. 78: 1081-1116. doi:10.1119/1.3454865.
• Lüscher, M (2009). Computational Strategies in Lattice QCD, Les Houches lectures : . arXiv:1002.4232
• Lüscher, M and Schaefer, S (2011a). Non-renormalizability of the HMC algorithm, J. High Energy Physics 04: 104. doi:10.1007/jhep04(2011)104.
• Lüscher, M and Schaefer, S (2011b). Lattice QCD without topology barriers, J. High Energy Physics 1107: 36. doi:10.1007/jhep07(2011)036.
• Lüscher, M and Weisz, P (2001). Locality and exponential error reduction in numerical lattice gauge theory, J. High Energy Physics 9: 10. doi:10.1088/1126-6708/2001/09/010.
• Münster, G. (2011). Lattice quantum field theory, Scholarpedia 5(12): 8613.
• Nakamura, K et al. (2010). Review of Particle Physics, J. Phys. G 37: 075021. doi:10.1088/0954-3899/37/7a/075021.
• Neuberger, H. (). Lattice chiral fermions, Scholarpedia : .
• Niedermayer, F. (1999). Exact chiral symmetry, topological charge and related topics, Nucl. Phys. Proc. Suppl. 73: 105. doi:10.1016/s0920-5632(99)85011-7.
• Slavnov, A. A. (2010). Slavnov-Taylor identities, Scholarpedia 3(10): 7119. doi:10.4249/scholarpedia.7119.
• Sommer, R (2009). Introduction to Non-perturbative Heavy Quark Effective Theory, Les Houches lectures : . arXiv:1008.0710
• Weisz, P (2009). Renormalization and lattice artifacts, Les Houches lectures : . arXiv:1004.3462
• Wilczek, F. (). Asymptotic freedom, Scholarpedia : .

### Footnotes

1. except as external probes
2. see the Particle Data Book, which is updated annually.
3. One speaks of mass generation and dimensional transmutation.
4. (this might not be necessary)
5. similar relations have been derived for many other quantities e.g. for the amplitudes $$A_0,A_2$$
6. $$s$$ is a parameter which is tuned to optimize the physical locality of the operator.
7. The string tension depends on the $$N$$-ality of the representation $$\mathcal{R}$$ of the link variables. Casimir scaling (which is exact in two dimensions) is a good approximation, $$\sigma_\mathcal{R}/\sigma_F = C_\mathcal{R}/C_F\ ,$$ i.e. $$\sigma_k/\sigma_1= k(N-k)/(N-1)\ ,$$ as is the sine formula $$\sin(\pi k/N)/\sin(\pi/N)\ .$$
8. although in finite volume one can attempt to solve Schwinger--Dyson equations for gauge invariant observables perturbatively without gauge fixing.
9. A way out, using quasi-periodic gauge-fixing functions, has been proposed by Testa.
10. Worm algorithms are being explored in cases where cluster algorithms fail. They have been successfully applied to some spin systems (even at finite chemical potential) and to abelian gauge theories.
11. a CDC 7600 with clock speed 40 Megahertz
12. $$c_1=\lim_{\Lambda\to\infty}\left\{\frac{1}{\pi}\sum_{j\in\mathbb{Z}^3,\,j\ne0,\,|j|<\Lambda} \frac{1}{|j|^2}-4\Lambda\right\}=-2.837\quad ,\quad c_2=c_1^2-\frac{1}{\pi^2}\sum_{j\in\mathbb{Z}^3,\,j\ne0}\frac{1}{|j|^4}=6.375\ldots\,.$$
13. the historical development of AF is still under debate
14. One must be aware that the ASQTAD group uses staggered fermions with the so-called 4th-root trick, which unfortunately may introduce systematic errors that are not well controlled.
15. In this setting we are studying equilibrium systems and there is no role of time. The Lagrangian is Euclidean.
16. This requires addition of 3 terms consistent with Euclidean invariance proportional to $$\triangle^r\delta(x)\,,\,\,\,r=0,1,2\,.$$
17. This has the consequence that the Fourier transform $$\tilde{G}(p)$$ is analytic in a neighborhood of the real axis with a cut from $$-\infty$$ to $$-m^2\ .$$
18. e.g. $$S_{\rm Wilson}\to Nb\sum_{\mu<\nu}2{\mathfrak R}{\rm tr}(U_\mu U_\nu U_\mu^\dagger U_\nu^\dagger)\ .$$
19. Volume independence is an example of large $$N$$ orbifold equivalence (Kovtun et al) which holds if the orbifolding symmetries (translation and center symmetries) are unbroken.
20. Fermions in the fundamental representation are relatively unimportant in the $$N\to\infty$$ limit.
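The two approximations to the $$k$$-string tension ratio quoted in footnote 7 are easy to compare numerically. The short sketch below (illustrative, not part of the original text; function names are our own) tabulates Casimir scaling against the sine formula for a couple of values of $$N\ .$$

```python
import math

# Two common approximations to the k-string tension ratio sigma_k/sigma_1
# in SU(N); see footnote 7.

def casimir_ratio(N, k):
    """Casimir scaling: k(N-k)/(N-1), exact in two dimensions."""
    return k * (N - k) / (N - 1)

def sine_ratio(N, k):
    """Sine formula: sin(pi k/N)/sin(pi/N)."""
    return math.sin(math.pi * k / N) / math.sin(math.pi / N)

for N in (4, 6):
    for k in range(1, N // 2 + 1):
        print(f"N={N} k={k}  Casimir={casimir_ratio(N, k):.4f}"
              f"  sine={sine_ratio(N, k):.4f}")
```

For $$N=4,\,k=2$$ the two formulas give $$4/3\approx1.333$$ and $$\sqrt{2}\approx1.414$$ respectively, so string-tension data of a few percent accuracy can in principle distinguish them.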
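The lattice sums defining $$c_1$$ and $$c_2$$ in footnote 12 can be checked by brute-force truncation. The sketch below (illustrative, with a hand-chosen cutoff $$\Lambda=60$$) approaches the quoted values: slowly for $$c_1\ ,$$ and, after adding the analytic tail $$4\pi/\Lambda$$ of the absolutely convergent $$1/|j|^4$$ sum, quite accurately for $$c_2\ .$$

```python
import numpy as np

def lattice_j2(Lam):
    """Array of |j|^2 for all j in Z^3 with 0 < |j| < Lam."""
    r = np.arange(-Lam, Lam + 1)
    j2 = (r[:, None, None] ** 2 + r[None, :, None] ** 2
          + r[None, None, :] ** 2).astype(float)
    return j2[(j2 > 0) & (j2 < Lam ** 2)]

def c1_approx(Lam):
    # c1 = lim_{Lam->oo} [ (1/pi) sum 1/|j|^2 - 4*Lam ]; converges
    # slowly (with number-theoretic fluctuations) to -2.837...
    return np.sum(1.0 / lattice_j2(Lam)) / np.pi - 4.0 * Lam

def c2_approx(Lam):
    # c2 = c1^2 - (1/pi^2) sum 1/|j|^4 = 6.375...; the 1/|j|^4 sum
    # converges absolutely, and 4*pi/Lam estimates the dropped tail.
    # The rounded value c1 = -2.837 is taken from footnote 12.
    s4 = np.sum(1.0 / lattice_j2(Lam) ** 2) + 4.0 * np.pi / Lam
    return (-2.837) ** 2 - s4 / np.pi ** 2

print("c1(Lam=60) ~", c1_approx(60))   # slowly approaches -2.837
print("c2(Lam=60) ~", c2_approx(60))   # close to 6.375
```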