Day 1: Plenary sessions


DIS 2012 kicked off today with a full day of plenary sessions, general talks that everyone in the conference attends. (Well, not everyone attends, but there’s nothing else going on at any rate.) The slides of all the talks presented today are available on the conference website, but here are some of the interesting results.

Results from the Tevatron and LHC

Under the principle of “save the best for last,” I am getting this out of the way first: none of the major experiments have any new results of widespread importance to present. In particular, the Higgs search stands exactly where it was two weeks ago when the Moriond results were presented. This is no surprise because, for one thing, the Higgs boson is an electroweak phenomenon whereas DIS is more about the strong force; also, any major results would be presented at a bigger conference. DIS is a fairly specialized field of study so it doesn’t attract all that many people, in the grand scheme of things.

Of course, that’s not to say there is nothing to report at all. The Tevatron experiments are finishing up analysis of their data and they have found some interesting tidbits, like \(\mathrm{W}\) boson production in conjunction with charm quarks. And the ATLAS and CMS talks contain a pretty detailed overview of the results of the Higgs search that have been made public over the past several weeks.

Electroweak precision measurements

Even though we understand the theory of the electroweak interaction pretty well, there are plenty of interesting measurements to be made, especially regarding the W boson mass. Since the W boson is the main mediator of the weak force, many particles' decay rates depend on its mass, so the more precisely we can measure it, the better we can determine those decay parameters. These parameters go into checking the standard model for internal consistency, and certain extensions of the standard model (like the MSSM) will require a precisely known W mass to check.

With the latest measurement of the W boson mass from Tevatron data, the global uncertainty is reduced by about 30%, and the preliminary new world average stands at \(80385\pm 15\ \si{MeV}\).
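The way independent measurements shrink a world average's uncertainty can be sketched with a standard inverse-variance-weighted combination. The numbers below are made up purely for illustration; they are not the actual Tevatron or world-average inputs.

```python
# Sketch of an inverse-variance-weighted average, the standard way
# independent measurements of the same quantity are combined.
# The input values are hypothetical, NOT real W mass measurements.

def combine(measurements):
    """Combine (value, uncertainty) pairs into a weighted average."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    uncertainty = total ** -0.5  # combined uncertainty is always smaller
    return value, uncertainty

# Two hypothetical W mass measurements in MeV (value, uncertainty):
avg, err = combine([(80399, 23), (80387, 19)])
print(f"{avg:.0f} +/- {err:.0f} MeV")
```

Note that the combined uncertainty is smaller than either input uncertainty, which is why each new precise measurement tightens the world average.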

\(\alpha_s\) status

Arguably the most fundamental parameter in QCD is the strong coupling, \(\alpha_s\). Being fundamental, it can't be predicted theoretically; it has to be measured, and there are continual research efforts underway to determine its value (at any given energy). The world average stands at \(\alpha_s(m_Z) = 0.1184\pm 0.0007\), mostly driven by lattice QCD calculations. That's very precise, but there are definitely some discrepancies to investigate. Other ways of determining the strong coupling include analyses of \(\tau\) lepton decays, angular distributions in \(e^+e^-\) collisions, and the parton distribution functions extracted from hadron scattering (like DIS), and not all of them agree. It's hoped (and expected) that LHC data can soon start contributing to this determination as well, which may help clarify some of the discrepancies.
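The phrase "at any given energy" matters because \(\alpha_s\) runs with the energy scale. A minimal sketch of that running, using the standard one-loop solution of the QCD beta function and the world-average value quoted above as the boundary condition:

```python
import math

# One-loop running of the strong coupling: a minimal sketch, not a
# precision calculation (real determinations use higher loop orders
# and flavor-threshold matching).

ALPHA_S_MZ = 0.1184   # world average at the Z mass, from the text
M_Z = 91.1876         # Z boson mass in GeV

def alpha_s(Q, n_f=5):
    """One-loop alpha_s at scale Q (GeV) with n_f active quark flavors."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)  # one-loop beta coefficient
    return ALPHA_S_MZ / (1 + b0 * ALPHA_S_MZ * math.log(Q**2 / M_Z**2))

# The coupling grows toward low energies and shrinks toward high ones
# (asymptotic freedom):
print(alpha_s(10.0))    # larger than 0.1184
print(alpha_s(1000.0))  # smaller than 0.1184
```

This is why every determination has to be evolved to a common reference scale, conventionally \(m_Z\), before the values can be compared or averaged.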

Theory developments in QCD

On the theoretical side, much of the recent progress in QCD has centered on two fronts: automating NNLO calculations and computing resummations.


In perturbative QCD, all important quantities are expressed as power series in \(\alpha_s\). The individual terms in the series are quite complicated to calculate, so it can save a lot of time if we get computers to do it for us, but unfortunately it can be as hard to define an efficient procedure for a computer to calculate the terms as it is to just do it by hand.
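Schematically, such a series looks like the following, where the coefficients \(c_n\) are process-dependent and increasingly painful to compute; this is just the generic structure, not a formula for any particular observable:

```latex
% Generic perturbative expansion of an observable such as a cross
% section; c_0 is the leading-order (LO) term, c_1 alpha_s the
% next-to-leading-order (NLO) correction, and so on.
\sigma = c_0 + c_1\,\alpha_s + c_2\,\alpha_s^2 + \mathcal{O}(\alpha_s^3)
```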

Right now we’ve been able to program computers to calculate the leading-order term in the series, \(\mathcal{O}(\alpha_s^0)\), and the next-to-leading-order term, \(\mathcal{O}(\alpha_s^1)\). There are a bunch of different programs you can use to do this (MadGraph and MadLoop respectively, among many others). The frontier of this programming is now the NNLO term, \(\mathcal{O}(\alpha_s^2)\). A few selected results are already available, and we’re close enough that a general framework could be implemented within a couple of years.


Alternatively, there are efforts underway to resum the series by organizing the terms in a different way — instead of writing a cross section or decay rate as a series in \(\alpha_s\), make it a series in \(\alpha_s\ln(E_1/E_2)\). You still need to calculate term by term, but with this different organization, more of the contribution is hopefully shifted into the early terms in the series, so if you stop at, say, third order, the pieces you’re missing aren’t as significant.
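The reorganization can be sketched as follows, with \(L = \ln(E_1/E_2)\); again this is just the generic structure. When \(L\) is large, \(\alpha_s L\) can be of order one even though \(\alpha_s\) is small, so truncating the plain \(\alpha_s\) series misses large contributions that the resummed organization captures:

```latex
% Logarithmically enhanced terms, reorganized: fixed-order truncates
% in powers of alpha_s, while resummation sums whole towers in
% (alpha_s L)^n -- the leading-log (LL) tower first, then the
% next-to-leading-log (NLL) tower suppressed by one power of alpha_s.
\sigma \sim \sum_{n,m} c_{nm}\,\alpha_s^n L^m
  = \underbrace{\sum_n c_n^{\mathrm{LL}}\,(\alpha_s L)^n}_{\text{LL}}
  + \alpha_s \underbrace{\sum_n c_n^{\mathrm{NLL}}\,(\alpha_s L)^n}_{\text{NLL}}
  + \cdots
```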